Ram Venkatesh, CTO and co-founder of Sema4.ai, explains why AI can no longer be limited to generating insights. For enterprises to truly benefit, they need AI that can act on those insights reliably, securely, and at scale.


RAM:

As this adoption of data happened over the last decade, the dirty little secret is that the number of people companies were employing to get value from data was also growing exponentially. Isn't automation and RPA supposed to help with all of that? We uncovered that traditional RPA showed people the art of the possible, what you could automate, but they couldn't really do it, because they needed more flexibility than what was possible. So agents have conversations. I think the conversational mode is one of the nice, flexible things; you don't have to build everything in a UI with three levels of dropdowns. But it's also a way for us to create unattended pieces of work.

First off, thanks for having me on the show, I appreciate it. Most recently I came out of the big data world. We were the folks who helped commercialize an open source initiative called Hadoop, you might have heard of that, at Hortonworks and then at Cloudera. That gave us ringside seats to large companies adopting data into every fabric of what they did. When people say data is DNA, that happened in the last ten years. People used to have tiny little silos of transactional data; fast forward, and now they have petabytes, exabytes of data. That's the customer base and exposure we came out of. And along the way, toward the end of that journey, we saw an opportunity to do something truly interesting with AI.

RAM:

This was just after ChatGPT had happened. People were trying to figure out how they were going to leverage it in the context of data. And we saw a problem: as this adoption of data happened over the last decade, the dirty little secret is that the number of people companies were employing to get value from data was also growing exponentially. That didn't make any sense. Nobody has a budget that says, just because you got more data this year, you've got to have more people analyzing it. So we started digging into it and asked: who are all these analysts? They're not all data scientists. What are they doing with the data? What's going on here? And what we unpacked was that for organizations of any size, getting from insights to outcomes is not a straight line. There's a lot of human work in between: looking across multiple systems, prioritizing, figuring out what's really an outcome we care about versus what is noise, and separating that out. These are the important, clever things that people do. And at the end of the day, you've got to do something about that insight. You can examine this problem, and it's a beautiful problem, but an insight sitting on a shelf doesn't make anybody any money; the question is how you get to actually doing something with the data. These are the places where we saw humans, analysts in Excel typically, playing a big role in spite of all the data infrastructure we sold them. That is the reality of what they were dealing with.

RAM:

So we said, okay, isn't automation and RPA supposed to help with all of that? And we uncovered that traditional RPA showed people the art of the possible, what you could automate, but they couldn't really do it, because they needed more flexibility than what was possible. So we asked, what is the underlying unlock with LLMs? What we think is amazing is that language is programmable, which means that knowledge work, which is all human-language centric, is now programmable. This was two years ago, and we needed to convince investors and everybody else why we were not working on chatbots and RAG. We were looking for a word, and we said, okay, agency is a really nice way to think about what it is that people have that lets them go from insights to outcomes. So we said an agent-oriented system is likely going to be very helpful. Fast forward to today, and that's how Sema4.ai came together: it's all about the meaning of work, i.e. semantics, and then doing the work in an agentic way. So that's a little bit about myself. You asked what I went to school in; it's actually materials science. I spent a lot of time in engineering before computer science, and probably 30 years of data along the way. And now we're all in the agentic space, aren't we?

CRAIG:

Yeah. And let me ask: so Sema4, you say it's not just chatbots and RAG. Does it also do chatbots and RAG?

RAM:

So agents have conversations. I think the conversational mode is one of the nice, flexible things; you don't have to build everything in a UI with three levels of dropdowns. So yes, Sema4 is a conversational agent system, but it's also a way for us to create unattended pieces of work. You might have: oh, I got a new email, the email has an attachment, and the attachment is, let's say, an inbound wire transfer request. Our agents can listen to an email inbox, pick up the work, do the work, and if there is nothing more to be done, there's no conversational interaction at all. But if they get stuck, like, oh, I don't know whose account this should be credited to, then our agents can interact with humans. That's where the conversational piece comes in. On the RAG piece: we believe that for the kinds of agents we are building, and want to build for our customers, accuracy is going to be so important. We like to say our agents are really good at following plans and asking questions, and I think LLMs are suspect at answering questions. So instead of RAG, where we give the LLM the 80% answer and hope that it picks out, generates, and augments the right accurate answer, we do that in a semantic layer that's outside of the agent. Then there's always transparency: you know what the question was, and you know what the answer was. And this brings in your entire enterprise's context, which none of these LLMs have been trained on, and you don't want them being trained on that kind of information. That's where our agents have a role to play.
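To make that inbox pattern concrete, here is a minimal sketch of an agent loop that does unattended work and escalates to a human only when it gets stuck. Everything in it (the WireTransferRequest type, ask_human, the in-memory inbox) is invented for illustration; it is not Sema4.ai's actual API.

```python
# Toy version of "listen to an inbox, do the work, escalate when stuck".
# All names here are hypothetical, not Sema4.ai's product API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WireTransferRequest:
    amount: float
    beneficiary_account: Optional[str]  # None means the agent is "stuck"

def ask_human(question: str) -> str:
    # Stand-in for the conversational interaction with a supervisor.
    print(f"[escalation] {question}")
    return "awaiting human reply"

def process(request: WireTransferRequest) -> str:
    """Do the work unattended if possible; otherwise ask a human."""
    if request.beneficiary_account is None:
        # Can't resolve the account: switch to conversational mode.
        return ask_human(f"Whose account should ${request.amount:,.2f} be credited to?")
    return f"credited ${request.amount:,.2f} to {request.beneficiary_account}"

# Simulated inbox: one complete request, one that needs a human.
inbox = [WireTransferRequest(500.0, "ACCT-001"), WireTransferRequest(250.0, None)]
for msg in inbox:
    print(process(msg))
```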

CRAIG:

Okay, so you don't use RAG, but you use the semantic layer. Is that right?

RAM:

Exactly, exactly.

CRAIG:

And how does an enterprise... well, let's back up. It's a platform, a no-code platform, and users are interacting with it through natural language, text I presume. When you ask it to build an agent that will read your emails and respond to certain kinds of emails with certain kinds of answers, are there, in the background, pre-built modules that the system is pulling together, or is it coding up a new agent each time?

RAM:

That's a fascinating set of questions, because there are so many ways to think about agents, and I don't think any of them are wrong; they're just different. For the kinds of work we're talking about here, if you step back a little, macro, there's a tremendous variety of things you could do in an agentic way. What we are focusing on is the actual core business function of that company. Let's say you're a bank doing retail banking: it's how you interact with your customer. Let's say you're a manufacturing company: it's the operational shop-floor activity. Let's say you're a multinational bakery: it's how you do inventory placement and procurement. It's the core business functions. The people who know how to specify this work live in the line of business. They don't live in IT; we like to think they live in tech, but they don't. The actual work specification lives on the business side of the house. So for us, that starts with the person who knows the process sitting down with our system to write what we call a runbook, where they explain what they're going to do. This is much more than a prompt. We don't want to turn your inventory specialist into a prompt engineer; that movie is not going to go so great, in my opinion. Instead, what we want them to focus on is: how would they tell a new person on their team to do the work? What does best practice look like? We want to make sure we capture the intent of the work, not the steps of the work, because then there's some flexibility in the steps that we can come up with later.

RAM:

But the idea is that you define your agent up front for your business process, and somebody vets it, both on the business side and in IT. You asked about pre-built modules and so on. For these agents to work, you need to specify the work, which is in English. The second piece is the enterprise data the agent works with: folks who know the data models, who know what the right repositories are in your company, bring that together. And the third piece is being able to automate and take actions in downstream systems; you can think of this, in today's vernacular, as something like an MCP architecture for actually driving outcomes. So in our world, the business user creates this agent up front through these three mechanisms, and after that, the agent just does the work on a daily basis. A big part of what we're talking about is work that has a spec to it. Let's say you're doing a 401(k) rollover with this agent. Creativity is the last thing you want from it. You want to be able to tell the auditor next year: this is what happened, here's what our runbook was, here's what our SOP was, here are the data elements that were used, this was the customer, and this is how we got here. So we help them build agents that are extremely reliable and accurate at what they do.
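As a sketch, those three mechanisms (an English runbook, the data sources, the downstream actions) could be pictured like this. The AgentSpec class and every field name here are hypothetical illustrations, not Sema4.ai's real schema.

```python
# Illustrative only: an agent as the sum of a plain-English runbook,
# enterprise data connections, and downstream actions. Invented names.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    runbook: str                                   # intent of the work, in English
    data_sources: list[str] = field(default_factory=list)
    actions: list[str] = field(default_factory=list)

rollover_agent = AgentSpec(
    runbook=(
        "When a 401(k) rollover request arrives, verify the customer's "
        "identity, confirm the receiving account, and record every data "
        "element used so an auditor can replay the decision next year."
    ),
    data_sources=["snowflake://customer_accounts", "mongodb://case_history"],
    actions=["initiate_rollover", "notify_customer"],
)
print(rollover_agent.runbook)
```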

CRAIG:

Yeah. And then the question about what's happening under the hood: do you have a coding model that's coding up agents, or do you have modules that have been built and tested that you're putting together?

RAM:

So, yeah, the nuance here is that we rely on the reasoning models for language understanding, whether it's OpenAI or Anthropic's Claude. We stay with the mainstream models for reasoning, because I believe those are the best at what they do. Excuse me one moment, I'm losing my voice.

CRAIG:

Sure.

RAM:

Okay, let's give this another shot. So, yeah, for reasoning we rely on the reasoning models in particular. Then, when it comes to the semantic layer, this is where we help customers build really interesting data models that can span all the different modalities. What I mean by that is, you might have some data sitting in Snowflake and some data sitting in MongoDB. Almost every customer has this: one is an operational store, the other is an analytics store. How do you ask questions across them? In what language do you ask these questions? How do you get answers back? This is the layer where, when you're talking about deep integration, for us it has to do with the meaning of data and the meaning of documents. Those are the two areas we specialize in. And then you can have integrations to existing systems, and most of our customers have customized them; no two customers run SAP in quite the same way. So if you're going to go talk to SAP, at the end of the day there's an MCP server that you have to construct, and we have an SDK and a way for people to define these. You put all three of those together and you get yourself an agent that can reason, is knowledgeable about your data, and can do things on your behalf. That's how you get there.
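For a sense of what "constructing an MCP server" looks like in practice, here is a minimal sketch using the open MCP Python SDK (the `mcp` package). The SAP purchase-order tool is a made-up placeholder; this is not Sema4.ai's SDK or a real SAP integration.

```python
# Minimal MCP server sketch: one stubbed tool an agent could call.
# Assumes `pip install mcp`; the SAP action itself is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sap-actions")

@mcp.tool()
def create_purchase_order(material: str, quantity: int) -> str:
    """Create a purchase order in this company's SAP instance (stubbed)."""
    # In reality this would call the customer's customized SAP APIs.
    return f"PO created for {quantity} x {material}"

if __name__ == "__main__":
    mcp.run()  # exposes the tool over stdio for an agent to call
```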

CRAIG:

Yeah. And the semantic layer, that's pulling together data from different sources and rationalizing it. Does it build an ontology? What does that semantic layer do, and how does it do it?

RAM:

Yeah. So the key here, especially coming from the data world, is that we said: here's what we're not going to do. We're not going to take all of your data and build something else from it, like a knowledge graph or another copy of the data, because typically, where you used to have two data stores, now you have three. People will always tell you the third one's better, but it doesn't matter; now you have three problems to think about from a data management standpoint. Our layer is all about understanding the metadata, not the data itself. That lets us come up with ways to think about entities that should be related to each other, entities that are different from each other, and the rich, complex relationships between them. Here's a customer, this is where they live, this is their shopping cart: these are all semantically addressable things that our layer can train on, metadata only, and then come up with a language in which the LLM can ask natural language questions. So now, when the agent reaches a place in the workflow where a customer is trying to do a return and it needs to understand their purchase history for the last 12 months, we say: we have this question, we know how to reason about that, and it goes and hits Snowflake and comes back with the 20 items they've bought. And we do all the simple things, like access checks, the table-stakes things you need to make sure the agent has access to the right, most current information. We can do all of this without making another copy of the data. That's the trick. Hopefully that gives you a sense of how this layer works.
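A toy illustration of that idea: a metadata-only semantic layer that knows about tables and relationships, never copies row data, and turns a known question shape into SQL plus an access check. Everything here (SemanticLayer, the table names, the question matching) is invented to illustrate the concept, not Sema4.ai's implementation.

```python
# Hypothetical metadata-only semantic layer: stores where data lives,
# not the data itself, and compiles a known question into SQL.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    table: str   # where the data actually lives (e.g. in Snowflake)
    key: str

class SemanticLayer:
    def __init__(self) -> None:
        self.entities: dict[str, Entity] = {}

    def register(self, entity: Entity) -> None:
        # Only metadata is stored; rows stay in the source system.
        self.entities[entity.name] = entity

    def answer(self, question: str, user_roles: set[str]) -> str:
        if "purchase history" in question:
            if "support" not in user_roles:
                raise PermissionError("access check failed")
            e = self.entities["purchase"]
            return (f"SELECT * FROM {e.table} "
                    f"WHERE {e.key} = :customer_id "
                    f"AND order_date >= DATEADD(month, -12, CURRENT_DATE)")
        raise ValueError("question not covered by the semantic model")

layer = SemanticLayer()
layer.register(Entity("purchase", "analytics.purchases", "customer_id"))
print(layer.answer("purchase history for the last 12 months", {"support"}))
```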

CRAIG:

Yeah. And how scalable are these agents? How many agents can you build and run? Are they running on Sema4's platform, or do you export them and host them wherever you might host your own stuff?

RAM:

You're connecting the dots: we need line of sight to the customer's data and to the customer's back-end systems. The customers we're addressing with this solution are large, Fortune 2000, mostly the upper end of that range, and that's where we're concentrated right now. They tend to have a lot of concerns around regulatory risk, exposure, data exfiltration, and controls; they care about the security, risk, and compliance profile of these agents. For all those reasons, our agents run inside the customer's account and environment. In fact, that's one of the reasons we recently announced Team Edition, which is a way to run our agents inside of Snowflake. If you're a Snowflake customer, you can run our agents without even leaving the Snowflake account you're in, let alone the rest of your enterprise infrastructure deployment. That gives them the control and the oversight to make sure that, at all times, they understand what the agents are doing and how they're functioning.

CRAIG:

I see. And the agents that are being built on Sema4, presumably you can build a series of agents that all talk to each other or work together. How large can that system get?

RAM:

Yeah. So there's today, and there's where customers want to go with this. The agent marketplace has exploded in terms of the number of approaches people are trying and the amount of information that's out there. I like to say the market is over-informed, there's a lot of information out there, but under-deployed. That's the reality of where we are: it's early innings for agents in spite of all the hype. And that's somewhat scary, right? How much more information could you have about agents? But think of the number of agents customers have actually deployed today. Where they want to go, though, whatever the scale of the organization, is that they know they will have more agents in the future than they have today. It's like data: we saw the same movie, where companies knew they would have lots more data five years out than they had at the time. We're in that phase. So your question about the architecture is really important, because if you were trying to solve one problem in the enterprise and needed one agent to go solve it, you would operate in a particular way.

RAM:

I think most customers are setting up for a world where they have 20 agents this year; if this goes really well, 150 agents next year, and who knows, probably 2,000 agents the year after. So in the medium term, being able to manage a fleet of agents is going to be, we believe, a very important attribute, and that's what we are designing and building for: something that grows with the scale of adoption. I think that's the best way to approach this, because then you're meeting the customer just a little bit ahead of where they want to be, but you're not over-building. There are people who think there will be a trillion agents, and they're building for that today. I think the market has a little more maturity to go through before we get there.

CRAIG:

Yeah. And so at the beginning, they're doing things like invoice reconciliation or email inbox management, things like that.

RAM:

Yeah, they start with processes they have today. But if you think about what I'm describing, this kind of agent is the IP of your company. If you're in the power generation business and somebody else created an agent that understood how to run the purchasing function for your business, I have bad news for you: you've just been disintermediated. So companies understand that they want to go after the processes that make them who they are, but they don't want to do that on day zero, because those processes also tend to be, for good reasons, really complex; the agents have to talk to lots of different parts of the company. So they start with the building blocks and work their way up in complexity. But that's where they're going: how do we take our core IP and express it with AI so that it's understood? Then we can optimize it, figure out how to merge a couple of functions to do something better, and really understand where the bottlenecks in the process are. We've never had that kind of transparency about core business functions. That's the promise, I think, of where we're going with this agent stuff.

CRAIG:

Yeah. I asked before we started recording: there are suddenly so many agent-building platforms, just in the last three months or so. How does an enterprise decide which one is right for them? That's one question. The other question is, how do you stand out from the pack? Or is the market so massive, and it's such early days, with everyone looking at the space, that there will be a piece of the pie for everybody?

RAM:

Yeah, I think this is an important question for customers, too, as they try to make sense of all this noise. To me there's almost a Maslow's hierarchy of agents, with their needs and wants. The good news is that this is a new way of building business applications; I just think of an agent as the next massive iteration of an application. You wouldn't be surprised to learn there are several million applications deployed today; of course there are, that's just a basic building block. With AI, people have been trying to figure out how to monetize it, how to apply it. And AI is more than knowledge bases, more than search, more than generation, more than prediction; there are so many elements to it. I think agents are the manifestation of an AI application, which is why there are so many of them. Overall, I see three broad approaches in the market today. One is tasks that are inherently developer-automation centric: you have a problem, and your instinct is to fire up your Python editor. If that's your instinct, then you go to LangChain or CrewAI or Haystack or AutoGen or Semantic Kernel, any one of these open source agent frameworks, to go construct the agent.

RAM:

That is typically a developer-centric, developer-led motion. They have prompts, but the prompts are little pieces of English sprinkled through the Python code you're looking at. We are not one of those, but I think it lends itself to a lot of the first layer of automation. Think of it as: I need to download a file, do something with it, and upload it somewhere else. Just the actions, where what you're trying to do is connect pieces of infrastructure together and get to an outcome, but there's no best practice, no continuous learning or refinement expected of this kind of agent. That's where the DIY frameworks have a massive presence. And then there are others who are building agents that will do the work for you. There are customer service agents; if you drive up 101 into the city, you'll see billboard after billboard saying, here's an agent that does something for you out of the box. But the key with all of those is they're saying: your work, my way.

RAM:

And many times I think that's appropriate, if the work is commodity, outsourceable, not differentiated. Why would you spend the time to learn how to do that better? I call those purpose-built agents, and I think there's a massive market for them. We're not one of those either. The way we see this agentic workspace is that it starts with building for the business user. What persona are you building for? What problem are you solving? That's how I think we should think about it. It doesn't matter what model they use or what their integration philosophy is; the customer wants to know, what problem are you solving, and who in my company do I need to bring in to take advantage of your solution? If you think of it that way, the problem we are solving is squarely in the domain of the business user. Our agents are going to, over time, help you describe the work that you do in natural language. We understand the work; that's a very squishy and bold statement that I think will get better and better over time, so that we can get to improving the work. And if you think of systems that can do that, there are very few of them today.

RAM:

But I think that's where we're going with agents in the enterprise. It's also a new and flexible way to connect all the layers in the stack. Everybody who used to sell an API or a product now also has agents, whether at the data layer or the infrastructure layer; every application is going to have agents. I view these as ecosystem agents. Let's say you did something with Salesforce yesterday; today you can do it with Salesforce in an agentic way. Let's say you talked to Zendesk to manage your customers; today you can do that with an agent. But that doesn't change what Salesforce or Zendesk is; it's just a generic way to do it. And so I encourage customers: please don't build agentic capabilities at that layer, because the vendors are going to do that over time. That's literally not your problem; they will solve it, and then you'll catch up to it. So hopefully that gives you a sense of the landscape the way I see it. We are focused on the business user; think of that as the North Star.

CRAIG:

And as an enterprise gets more and more agents into their processes and workflows, there's a management problem. Does Sema4 have an agent management or orchestration layer, a super agent that watches all the other agents?

RAM:

Yeah, the short answer is that there is a single pane of glass where both human operators and agents can supervise and manage agents. It's very interesting: when you deploy an agent into production, every agent has a supervisor, a human, identified by something like an email ID or a group, and that person gets extra rights to see what the agent is doing. So you can come in and say: you did these ten things, you got eight of them right, here's what you got wrong, I'm going to make this change, create a new version of this agent, and watch how that operates. We call this person the process architect. I think it's a new job role: a person who really understands the process, and the agent is the workforce they delegate tasks to, to get their work done. So that's the model we have: a single pane of glass for the basic stuff, every agent has a supervisor, every group of agents has a supervisor, and you go from there.
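A toy sketch of that supervision model: every agent has a human supervisor who reviews completed work items and, when something is wrong, publishes a new version of the agent's runbook. Class and field names are illustrative, not Sema4.ai's product API.

```python
# Hypothetical "single pane of glass" review step: score recent runs,
# bump the runbook version when the supervisor makes a correction.
from dataclasses import dataclass

@dataclass
class RunRecord:
    task: str
    correct: bool

@dataclass
class Agent:
    name: str
    supervisor: str          # an email ID or a group
    runbook_version: int = 1

def review(agent: Agent, runs: list[RunRecord]) -> None:
    wrong = [r for r in runs if not r.correct]
    print(f"{agent.name}: {len(runs) - len(wrong)}/{len(runs)} correct")
    if wrong:
        # The process architect edits the runbook and watches the new version.
        agent.runbook_version += 1
        print(f"runbook bumped to v{agent.runbook_version} by {agent.supervisor}")

returns_agent = Agent("returns-agent", "ops-team@example.com")
review(returns_agent,
       [RunRecord(f"return #{i}", i <= 8) for i in range(1, 11)])
```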

CRAIG:

Yeah. And what did you call that person? A process...

RAM:

A process architect.

CRAIG:

Architect. Yeah. And that's on the business side?

RAM:

Exactly.

CRAIG:

A person with the business knowledge. Correct. Yeah.

RAM:

And that's where most of the learning has been for us. I thought it would be on the technology side, but these folks have never had the chance to make small, simple changes to what they do, ever. Usually it would be, oh, I need to go get a new version of the software from a vendor. Here, how do they make a change to the agent? They go change the runbook. What we're seeing is that 90% of the changes to our agents in production have nothing to do with data, nothing to do with automation. It's how you do the work, because this is what people do: you don't do the same task in the same way twice, regardless of its complexity. There's always some feedback, learning, things you're thinking about and optimizing. We're seeing the same in the runbooks. So now these process architects can make small changes, and sometimes they're intimidated that the change they make is going to be in production, so they are part of a CI/CD pipeline even though they sit on the business side of the house. I think this is where being able to do better things faster comes in; that's the promise when you think about it in an agentic way.
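Since those runbook edits flow through a CI/CD pipeline even though business users author them, a pipeline step might run simple validations like this before a new runbook version reaches production. The checks here are invented for illustration, not Sema4.ai's actual pipeline.

```python
# Hypothetical pre-deploy check a CI/CD step could run on a runbook edit.
def validate_runbook(text: str) -> list[str]:
    problems = []
    if len(text.split()) < 20:
        problems.append("runbook looks too short to capture intent")
    if "escalate" not in text.lower():
        problems.append("no escalation path for when the agent is stuck")
    return problems

draft = ("When a return request arrives, check purchase history, approve "
         "refunds under $100, and escalate anything else to a human "
         "supervisor for review with full context attached.")
print(validate_runbook(draft) or "ok")
```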

CRAIG:

Yeah. And which industries are you most focused on? Everyone seems to be going after financial services and healthcare.

RAM:

Well, it's simple: follow the money. That's where a lot of the money is spent on data analysis. These are also the industries that, going back to my background, were the big investors in data infrastructure: financial services, healthcare, and telco. So they have some of the processes in place, and they understand what agents to build. That's the one thing I tell customers I cannot help them with: if you don't know the agents you need to build, we're going to take a long time to figure that out together. These industries, because of their data maturity, are in a good place to go do that.

CRAIG:

Yeah.

RAM:

Yeah.

CRAIG:

And then, how do you charge for this? Is it subscription, by usage, or by seat?

RAM:

Yeah, it's early enough in the cycle that the honest answer for us is: it depends. We know people are very comfortable with consumption-based models, but in this case they don't know how many zeros they're going to add to the number of agents they create in years two and three. So again, we took a page from our data playbook. In the early days we had a very simple strategy for telling the customer how to think about us from an economic standpoint. We said: Teradata charges you X dollars per gigabyte per year; we'll charge you 1/20 of that price. At that point I didn't have to tell you anything more esoteric about how the software worked, because the customer knows they can store 20 times as much data and not break the bank. I think we're at a similar moment here, where our focus this year is adoption, adoption, adoption. We want to make sure there's no friction barrier to people consuming before they understand how they're actually going to deploy this technology.

RAM:

So typically, if it's a large enterprise, there's an idea of: let's make sure you have some skin in the game. There's a platform fee, and then they can build a set of agents, and that set can be very large based on the size of the organization we're dealing with. But I expect over time this will go to consumption-based pricing. I don't think it will be outcomes-based, because for the kinds of outcomes I'm talking about, it's hard for me on the outside to know what it's worth to somebody on the inside to book a ticket or perform some function. So I think it will be consumption in the medium term. But right now it's about making sure your monetization choices are not an inhibitor to experimentation, because that's what customers really need to do: understand for themselves how they can deploy this technology before they can decide how much they're going to pay for it.

CRAIG:

Yeah. We talked about how you differentiate yourselves, but the other side of that question is, with the proliferation of agent-building platforms, how do you see the market developing? Is the market so big, and we're so early, that there's plenty of runway for all these companies in their various directions or verticals? Or do you see us reaching a point soon where there's going to be a shakeout? That's one question. The other is, as you were saying, companies don't know which platform to choose. But now with MCP and A2A and these different protocols, if they commit to, say, Sema4.ai, and then down the road there's another vendor they want to work with, they're not locked in, right? They can connect to other systems; is it fungible in that way?

RAM:

Oh, there are multiple good questions in there. Look, I still think this is early innings, and I hope we don't converge around any one solution. Of course, coming from Sema4, maybe you'd expect me to say something different, but I don't think so, because I feel our understanding of knowledge work is not where it needs to be at the macro level. People are so creative in how they do their jobs, and it would be hubris for us to think we have this all figured out at this point in time. So I feel it's early innings in a genuine, intellectual-curiosity kind of way. If the market were to converge too soon, I think we'd just lock out good ideas, and that's never a good way to make progress. I also think it's not that there will be hundreds, but the data and analytics space has some clues about how this might work: for a long time there were multiple winners in that space, and then over time, as practices became more straightforward, people converged.

RAM:

And I think over time you can converge, but for the longest time there were probably a dozen analytics vendors. Similarly for the enterprise, there are going to be a number of different ways to do agents, in the short term at least. Bad news for you: the noise is not going to subside. In the next 18 to 24 months we're still going to see a lot of innovative approaches, because the nice thing is that, across the spectrum of different kinds of agents I described, we're not all building agents in the same way. I think that's a good thing, because that's how you find out what works, what doesn't, and which parts need to be combined. So that's my medium-term take: there will continue to be alternative choices in the industry for how we tackle different use cases, and as we get more understanding of which ones are better candidates for an agentic approach, you'll start to see some convergence or alignment.

CRAIG:

Yeah. I also asked you before we started recording whether a sort of cottage industry of consultants is developing, people who are deeply familiar with Sema4 the way others are deeply familiar with Bedrock at Amazon. Because you're asking this process architect, as you call them, in each company, or multiple people, to get trained and understand how to use Sema4, and as intuitive or no-code as it might be, that takes time. The reason I ask is that I was at one of AWS's conferences, before the agent craze, and they were saying there are people going through their training programs, getting familiar with the ecosystem around Bedrock and SageMaker and all of these, and then going out and acting as consultants for corporations that want to implement this stuff, because Amazon doesn't necessarily have the personnel available to do that. You mentioned consulting businesses growing out of existing model producers. Do you see that developing?

RAM:

I think it's early enough that, in some cases, the customer wants to know: what is best practice, where do I start, how do I do what I'm doing? But the other interesting side of the trade is that product companies like ourselves really want to know how customers are building what they're building, because this business user is not an easy persona to solve for. Traditionally we have told them "this is how it's going to work" and hoped we got it right, or we've had consultants living there doing the whole thing for them; you know, that whole BPM saga. I don't think businesses are thrilled about that. They don't want somebody to come in and build something they don't understand. But there is a lot of appetite for a bidirectional value exchange, where we can share with them: this is how you write a runbook. A lot of the questions we get asked are the basics, like, is this a good use case? And it's not always "what's it worth to you"; it's got to be more than that. It's got to be: what are the elements that make up an agentic use case? That's what they're asking us, and that's a product-model question.

RAM:

And these models are evolving so fast that I think you need that channel to and from them. This whole flywheel has gotten so much faster. If we have build-and-ship cycles that are six weeks long, there isn't enough time to say: we have a consulting part of our company, and then we have product management, and then we have engineering, and information is going to bounce around through that chain. It's continuous everything. So the consulting relationships that could really work are the ones where the customer gets value from us and we get value from learning with the customer. But they are probably not looking to hire people; at the end of the day, they need solutions that they can build, extend, and maintain. How do you do that in the context of a product sale? That's the business innovation that has to happen to make this model work.

CRAIG:

Yeah. So what you're saying is... I mean, the whole premise of SaaS is that SaaS companies can scale because they don't have to have someone holding the hand of the user. But this exchange you talk about: how are you learning from the customer without having a Sema4 employee directly engaged with the customer on what they're building? Or is that what you're doing?

RAM:

This is a deep relationship with a few strategic customers. I don't know how this model scales, to be honest with you; as you get to larger numbers of customers, the model also has to scale, given the lifecycle of the industry and where we're at. We work very closely with what we call design partners. For these customers, there's a relationship where we are very transparent about our roadmap, about the things we are building and not building. We'll say: this isn't agentic, there's nothing new here for you to exploit at this point in time, so you're better off not doing this use case right now. They value that. So it's a very high-touch, frequent relationship, with a mix of strategy and execution in it. But it's a good question how you make this model scale. We know certain parts of it have to become self-service, to your point. The reason we picked Snowflake is, look, this is not a complicated thesis.

RAM:

How do we scale? By building better agents faster and getting them deployed in production faster. If you can do those two things, you scale. So we focused on the second half of that: how do you get agents into production faster? And we said, what if you did not have to have IT involved in any step of that value chain? That's where the Snowflake Team Edition comes in: we can leverage the SaaS governance controls they already have in place. They've spent a year and a half proving out that Snowflake is good enough for their security and compliance story, and now they can run their agents there without having to revisit that question. So there are parts which should not be consultative, in my opinion, so that you don't forget the good lessons you learned from the past. And then the other part is the agents you build, how accurate they are, and whether there's collective learning to be had. That's the piece we want to make sure we get right.

CRAIG:

Yeah, yeah. And I see: you're early in this whole agent adoption, and the more you engage with customers, the more you learn about what customers need, and then eventually you can make it more self-serve and scale. Is Sema4 useful for small and medium-sized enterprises, or even individuals, or is it really geared toward very large multinationals?

RAM:

So for us, we started there, with large multinationals, and then a funny thing happened along the way: we realized there's another constituency for whom agents are very appropriate, especially as the agent space has exploded, which is domain experts. These could be SIs, or they could be ISV software vendors, who need to build agents and who know much more about the domain. The one thing I tell the customer is, if you have a purchasing problem, I'm the last guy you want in that room, because I know nothing about that domain. But if there's a company focused on, let's say, purchasing for apparel in the mid-market, and they're going to build an agentic solution, we work with them; we've got the right tools and capabilities. That's how I see us going after other segments: these are already existing customers of theirs, or they know how to appropriately scale to that part of the ecosystem. And in that segment, customers need instant value; those agents have to do something for them out of the box on day zero, which happens through the partner for us.

CRAIG:

Right, right. I see. And so, this is an exciting time. Where do you see this going? How quickly are you growing? In two years, are we going to be living in a very different economy?

RAM:

I think that, at the least, agents will be in our vernacular in two years, for sure. As in, "I use an agent to do X." Today, when you say you used a travel agent, you mean a human. In the future, when you say "a travel agent," it could be a human or an agent. So I think broad consumer adoption of agents will actually happen more transparently, and quicker, in that time period, which is kind of nice. Then we can have personal agents and assistants roaming around and doing transactions on our behalf, or starting to. That's the world I see; if this goes well, we should be in that kind of world two years from now. As for Sema4 itself, look, for this constituency of customers, we're not focused on cherry-picked demos; we're focused on security and accuracy. That trust takes time to build. I don't care how fast your large language model comes out; trust takes a long time to earn and is easy to lose. Because of that, I think enterprises will adopt the technology at a good pace, but I don't think you're going to get to thousands of agents this year, regardless of what people are telling you.

CRAIG:

And the idea that we'll all have personal agents: how far out do you think that is?

RAM:

I think that's very approachable. It'll be a question of how good they are: do you have an intern, or do you actually have a knowledgeable domain expert you would rely on to go do your taxes? That is the question. But I think the model will not change; it'll just be that agents get better, faster, smarter, and cheaper over time. And I think we're well on that journey now.

CRAIG:

Okay. Is there anything I haven't touched on that you want listeners to hear?

RAM:

No, I think this was a pretty broad-brush conversation. I appreciate the questions, and yeah, this was good. Thank you.
