Anurag Dhingra, Senior Vice President and General Manager at Cisco, discusses where AI is actually creating value inside the enterprise. The conversation goes deep into the invisible layer powering modern AI: infrastructure.
312 Audio.mp3: Audio automatically transcribed by Sonix
Anurag:
These days, when someone asks me what's new, I think they expect me to start talking about AI. And of course there's a lot of AI innovation happening. But networking is foundational. And our goal right now is to apply AI to solve real-world problems. And then when we talk about collaboration, it's all about delivering a delightful experience. How do we remove distance between people? So people who are in a room and people who are remote all feel like they're together, the technology kind of fades away, and then you can focus on having a conversation. And then how do you improve productivity by leveraging AI, agentic AI increasingly? So that's what's going on at a very macro level. If you go way back to the early days of networking, there were so many different types of networks, and you had to be on a certain network to talk to people who were also on that network. You couldn't talk across these networks, and that's the problem that Cisco solved. So it was all about interoperability and connecting disparate things.
Craig:
Sure. Yeah, I'm certainly interested in how you view agents in the enterprise and the challenges there. So let's start by having you introduce yourself to listeners, how you got to Cisco, what you do there.
Anurag:
Yeah, so thank you for having me, Craig. I currently run the enterprise networking collaboration business at Cisco as the general manager of both of these technology areas. I've been with Cisco for almost two decades now. I started as a software engineer and then slowly took on sort of leadership roles and progressively increased the scope of what I do at Cisco. And right now, like I said, I'm responsible for the enterprise networking and collaboration business.
Craig:
Okay. Yeah, I wanted to hear about what's new, obviously. You're responsible not only for the networking, but for the collaboration tools. Is that right? And when I think of collaboration tools, yeah, I think of WebEx. Is it broader than WebEx? And if so, in what way? And what's new with enterprise connectivity?
Anurag:
So two parts to it. Enterprise networking, or enterprise connectivity as we like to call it, is really about the infrastructure of how people connect to the network, both inside of an organization but also across org boundaries, including connecting to the internet. So that's the connectivity part of it. And then on top of that are the collaboration tools. WebEx traditionally is known for video conferencing, video meetings. We also do calling, and we build hardware for conference rooms, which enables people to have group meetings in a conference room. We are also in the contact center and customer experience space. So all of that is part of the collaboration portfolio. And these days, when someone asks me what's new, I think they expect me to start talking about AI. And of course, there's a lot of AI innovation happening. But networking is foundational. And our goal right now is to apply AI to solve real-world problems. So when I think about the networking space, it's about simplifying operations for IT organizations. It's about helping them manage their security exposure in a better way. And then when we talk about collaboration, it's all about delivering a delightful experience. How do we remove distance between people, people who are in a room and people who are remote? It all feels like you're together, the technology kind of fades away, and then you can focus on having a conversation. And then how do you improve productivity by leveraging AI, agentic AI increasingly? So that's what's going on at a very macro level.
Craig:
Yeah, and what's interesting about Cisco, you guys started out in the router business, as I recall, building the equipment that connects the internet together. There are the cables or the satellite signals, but it's really Cisco hardware that, at least in the early days, connected everything together. And so in this case, if you're using WebEx, I would guess that one of the advantages of using a Cisco product is that you have not only the application and, as you were saying, the hardware now, but also the connection, the network itself. So there should be better security and more seamless interconnectivity. Is that right?
Anurag:
Yeah, so you were talking about the founding days of the company. And for me, the origin story for why Cisco even exists and how the company started was really around connecting different types of networks together. If you go way back to the early days of networking, there were so many different types of networks, and you had to be on a certain network to talk to people who were also on that network. You couldn't talk across these networks, and that's the problem that Cisco solved. So it was all about interoperability and connecting disparate things. And that's really the DNA of the company. We believe in an open ecosystem and connecting things together. But then as we started to add collaboration applications, it became not just connecting things, but connecting people with each other. And so there's a very nice analogous way of looking at the application business that we are in. Now, our goal is to deliver amazing technology for each of the various products that fit together. So when you're looking at networking, whether that is switching or wireless connectivity or WAN connectivity, either in a campus or in a branch or even in a data center, connecting everything seamlessly is the core of the company. And then on top of that is the security portfolio, which is increasingly more tightly integrated with the network as an enforcement point for security policy. So that layers on nicely. Then there are collaboration applications like WebEx that we were talking about, helping people connect with each other better. And you wrap all of that into amazing observability tools so you get to true digital resilience. The digital enterprise is all about monitoring how everything is working, but also predicting what's going to happen, and then avoiding outages. And so that's the overarching picture. If I look at everything that Cisco does, all of the products, it fits into one of these buckets. And each of these is a first-class, industry-leading product portfolio in itself.
But when you buy more from us, when you deploy more of our architectures, then you start to unlock capabilities that individual products don't give you. So for example, the video conferencing devices that I was talking about earlier, the ones that go into conference rooms, have ThousandEyes observability built into them. So whether you are on a WebEx meeting or a Zoom meeting or a Microsoft Teams meeting, it doesn't matter. If you're using one of those devices in a conference room and you're getting a less than excellent experience, we can actually give you an end-to-end, hop-by-hop view of what's happening on the network, all the way from that conference room to the application. And that's the type of amplification that these technologies, when you stack them up, give our customers.
Craig:
Yeah. And do you see that in the quality of the connectivity, or is it in the various applications attached to the network? Where does that make a difference? Or is it in the security?
Anurag:
So it's actually all of the above, right? So, first of all, these applications that we're talking about, whether those are meeting applications or contact center applications, are built to work on any network, right? We are not delusional that the whole world runs on Cisco networks, although a majority of the world does, but obviously customers have a choice in their technology providers. So these products are built to perform the best on any network. But because we are a networking company, when we're designing WebEx meetings as an example, we intimately understand how to handle network impairment. There's packet loss, there are bandwidth constraints, there is latency between people and the internet. And if you're a global organization, you may have people in places where connectivity is challenging. So, how do you build an application that can manage through all of that? It's the internal secret sauce that we get by learning from and talking to our colleagues in the networking part of the company, for example. But then when you are running on top of the Cisco network, we unlock new capabilities. So I'll give you an example. Going back to that video conferencing in a conference room, that video conferencing device is ultimately wired into some type of network gear, right? It is probably a switch that it plugs into. You might have multiple cameras in the room, you might have multiple microphones in the room. They all plug into something. And if that something is a Cisco switch, you get complete visibility into whether everything is configured properly. When you plug the device into that switch port, the switch recognizes that this is a Cisco device. So it automatically configures it with the right security policy and any other configuration that the IT team might have. So it's really about simplifying the life of the IT team as much as it is delivering an amazing experience to the people who are using the technology as end users.
So combining these technologies and stacking them up delivers additional value that you would not get otherwise.
Craig:
Yeah. And where does AI come into it? You were talking about injecting AI into this.
Anurag:
Right. So again, I'll start at the networking layer. These are very exciting times in the networking layer. When people think about AI and networking together, they're typically thinking of those large data centers you hear about in the news all the time, where billions of dollars are being invested to build large data centers for AI training and AI applications. Obviously, that infrastructure is connected inside of a data center. So there's a networking aspect to that, and Cisco is a premier networking vendor in that space. But the business that I run is more the business that people like you and me touch and feel. We walk into a building, we connect our laptop or mobile phone to Wi-Fi. What is the implication of AI in this space? And so the first application is really targeted at simplifying the deployment and management of networks. And this is again targeted at the IT personas, people who have to manage hundreds, thousands, even tens of thousands of Wi-Fi access points, as an example. How do you optimize them to deliver the best Wi-Fi connectivity across the carpeted area that you're covering? If you're in a factory or a manufacturing environment, how do you make sure there's seamless wireless connectivity to the robots that are assembling things these days? And more and more these robots are mobile, they're moving around, so they can't always be tethered with a wire. So, how do you provide connectivity in that space? And that's where AI is really good at doing things faster and at a scale that would take humans hours. And the other capability that we're seeing with agentic AI is that a lot of this can be autonomous, where the system monitors itself and can optimize and configure itself. And the idea really is that the humans who are managing this infrastructure aren't just chasing problems all day long. They can actually focus on outcomes that help their organizations become more productive or deliver more capabilities or produce more.
And so the goal really is to enable those humans to focus on more complex problems and the systems become somewhat self-healing where possible.
Craig:
Yeah, self-healing is an interesting way to describe it. Can you give us an example? That's interesting, the connectivity with robots, where you can't have the robot talking to the cloud, or maybe you can, depending on the implementation, but you need immediate inference at the edge. How does Cisco manage that?
Anurag:
Yeah. That's a very good question. And you're absolutely right. For many of these applications, where we're talking about robotics or embodied AI, the latency, the round trip from that robot to the cloud and back, can start to impact how well the robot works. And so the model that is powering that robot, if it's sitting somewhere far away, that actually can be a challenge. And so we see more and more in manufacturing that those models are being brought closer to where the application is. And that's really about edge AI. You can almost imagine a manufacturing facility having a mini data center inside of it that is running that infrastructure. So of course, Cisco plays a role both on the compute side and the networking side for where those AI models are run, whether in the cloud or on the edge. But then it's about, if you are connecting to these models through wired or, increasingly, wireless connectivity, how do you deliver completely uninterrupted access to that application? Because when you think about your laptop or mobile phone connecting to the cloud and the Wi-Fi is a little bit flaky, maybe you're not getting the best streaming experience. Maybe there are a few glitches in your video call. But a robot in a factory will basically grind to a halt, which stops the whole assembly line. And there's a very direct, very measurable impact to that manufacturing facility. And so there, we have actually invented a new technology that we call ultra-reliable wireless backhaul. It goes above and beyond what traditional Wi-Fi can do. And so we can deliver amazing connectivity, so that as these robots roam across a factory floor, they get a completely seamless, uninterrupted connection. So there are a lot of hard engineering problems that you have to solve for that type of an application, where even a blip can be very expensive.
Craig:
It didn't come through. So one of the things that you're dealing with is a massive increase in traffic and a greater attack surface for bad actors and even AI-powered threats. So, taking the robots in a factory as an example, how do you address all of that?
Anurag:
So security is actually very relevant, not just in a factory or manufacturing, but for all the different workplaces, for all sorts of organizations. And there are really two types of things that are emerging right now in terms of new exposure. All the traditional threats are still around, but there are a couple of new emerging areas. One is that as AI, specifically agentic AI, proliferates and people start to lean on AI more as a helper application, almost as a team member, as a coworker to delegate tasks to and let AI do that task, this AI agent that's now working on my behalf needs access to pretty much the same information that I would have access to. But obviously, this is not me. And so, how do you think about securing your infrastructure when not just humans, but more and more applications, need access to information? So that's a whole new emerging area. And this will only get more and more complex as more agentic AI makes its way into the enterprise. The other thing that is happening is there have been some phenomenal advances.
Craig:
Are you guys building agents or you're just enabling agents?
Anurag:
We are doing both. So we are building agents ourselves, and these agents can do IT tasks. I was giving you some examples of self-healing systems. So these agents can monitor networks, they can monitor security policies, they can optimize networks, they can take actions. So we're building agents like that. We're also building agents for collaboration, so that if you are in a meeting, AI can take notes for you. And then at the end of that meeting, you can delegate to this agent to go schedule a follow-up meeting on your behalf, or nudge some people, or send out notes and follow-up action items, maybe update a ticketing system with whatever was discussed in the meeting. So we're building these types of agents across our portfolio, across everything that we do. But then as an infrastructure company, as an AI infrastructure company, we are enabling training of AI, inferencing of AI, and scaling AI across the organization. So it's a dual role that we play, both as a builder and also as an enabler.
Craig:
The agents. First of all, there's so much talk about agents. Two questions. One, the agents monitoring and fixing problems in the network: is that new technology? There's a lot of automation, microservices, things that have been around for a long time, and suddenly everyone's calling them agents. And it's hard to know whether this is something really new, and how much generative AI in particular is involved. That's one question. The other is, for Cisco, you're building these systems, testing these systems, and making them available to the enterprise. How much uptake do you see? Because people are still a little nervous about it.
Anurag:
Great questions, both of them. So let me start with the first one, which is: what's new? How come everyone's suddenly talking about agents? So there is definitely new technology here, but you're also right that there are foundational building blocks: automation that has been built in the past, APIs and other tools that we humans use to manage things today. Those are building blocks. The thing that is different is that agents can now use the same tools that were built for humans. And so there are really two things that make agentic AI useful. One is the ability to reason. And the ability to reason is continuing to mature very fast. It can't reason like a human right now, that's not where the state of the art is, but it can reason through simple things. And that reasoning, in the network management context, might be: hey, if you are not getting the best Wi-Fi performance, maybe I need to look at this data point that I know is in the system. And once I've looked at that data point, maybe I go and tweak a setting based on it. This is what a human would have done, by the way. If I'm the IT admin and I got a ticket about somebody not getting Wi-Fi, this is what I would do. I would look somewhere, and then maybe I'd tweak a setting. So now the AI models in these agents can actually reason through that same type of thing. And then the agent uses the tools that I would have used to look up the information, to look at the data, and to change the setting. It would have used the same tools that I would have used. And it can do all of that autonomously. So that part, where the system detects something, reasons through it, and takes steps automatically, is definitely new, but it's invoking the same tools that I would have used as a human. So that's the first part of the question. The second part of the question is really about the uptake and customers' readiness to adopt this tooling. And you're absolutely right.
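The loop Anurag describes, detect, reason, act through the same tools a human admin would use, can be sketched in a few lines. All function and parameter names here are hypothetical stand-ins for illustration, not any real Cisco API:

```python
# Minimal sketch of an agentic remediation loop: the agent reasons over
# telemetry and invokes the same "tools" a human admin would use.
# Every name and threshold here is illustrative only.

def get_wifi_telemetry(ap_id):
    """Stand-in for the telemetry lookup a human admin would run."""
    return {"channel_utilization": 0.92, "tx_power_dbm": 20}

def set_channel(ap_id, channel):
    """Stand-in for the config-change tool a human admin would use."""
    return f"AP {ap_id}: moved to channel {channel}"

def reason(telemetry):
    """Toy 'reasoning' step; a real agent would delegate this to an LLM."""
    if telemetry["channel_utilization"] > 0.8:
        return ("set_channel", {"channel": 36})   # congested: move channels
    return (None, None)

def remediate(ap_id):
    telemetry = get_wifi_telemetry(ap_id)         # 1. look at the data point
    action, args = reason(telemetry)              # 2. reason through it
    if action == "set_channel":
        return set_channel(ap_id, **args)         # 3. tweak the setting
    return f"AP {ap_id}: no action needed"

print(remediate("ap-17"))
```

The point of the sketch is that steps 1 and 3 are the same tools a human would call; only step 2, the reasoning, is delegated to a model.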
There's a trust deficit right now. And a lot of people are uncomfortable ceding control to AI for mission-critical applications. And honestly, this is where human in the loop is part of the answer, where you're leveraging AI to maybe help you become more productive, where maybe AI can figure out what's wrong and reason through what to do, but before it does something, it asks for your explicit permission. And then you maybe go back and forth with it a little bit before you make a change. I'll give you a very specific example from another domain. We have lots and lots of software engineers who build products for us. And that is an area where agents are starting to make a real difference in how products are built, how code is written. And one of the ways our engineers are using these agents is that before the agent writes any code, the engineer engages in a dialogue about planning: tell me how you will solve this problem, tell me what you're about to do, and then let me bless that before you go ahead and write that code. And that type of back and forth is typically what you would do with a junior member of your team, maybe an entry-level coder talking to a senior engineer. And so that's the analogy that resonates with a lot of our customers. And that's how they get started. And then over time, as the confidence builds and the trust matures, they can start to delegate more to the AI.
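The human-in-the-loop pattern he outlines, where the agent proposes and a person blesses the plan before anything changes, reduces to a small approval gate. This is a hedged, hypothetical sketch of the idea, not product code:

```python
# Sketch of a human-in-the-loop gate: the agent may plan freely, but any
# mutating action is blocked until a human explicitly approves the plan.
# Names and the plan structure are assumptions made for illustration.

def propose_plan(issue):
    """The agent's proposal: what it intends to do and why."""
    return {"issue": issue, "steps": ["lookup telemetry", "raise tx power"]}

def execute(plan, approved):
    """Only run the plan once a human has signed off."""
    if not approved:
        return "blocked: awaiting human approval"
    return f"executed: {', '.join(plan['steps'])}"

plan = propose_plan("weak Wi-Fi in building 4")
print(execute(plan, approved=False))   # nothing changes without sign-off
print(execute(plan, approved=True))
```

As trust matures, the gate can be loosened per action type, auto-approving read-only steps while still requiring sign-off for configuration changes.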
Craig:
Yeah. And I have a question about the reasoning, for example. If you're in a factory and there's an agent monitoring connectivity and there's an issue, and the agent needs to reason through a solution, is that sending a prompt back to a foundation model in the cloud? Or are you using smaller language models on site, or even at the edge?
Anurag:
So it's a mix, it's a hybrid situation. And by the way, these agents running in a robot or on a factory floor are agents that our customers are building, right? We are providing the infrastructure and we're coaching them on how to make the best use of the infrastructure. But what we're seeing emerge is a hybrid world where some of the smaller, narrower tasks are run on the edge with smaller models that are closer to the application, and some of the more complex things that require bigger models to reason through are still in the cloud. And you will see this trend. I believe this is going to be the way the world will be: things will get into a very hybrid state, more and more specialized tasks will move to smaller, narrower, purpose-built models running closer to the application, complex things will still go to the cloud, and there's going to be a bit of collaboration between these types of models.
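The hybrid split he predicts, narrow latency-sensitive tasks on small edge models and complex reasoning escalated to the cloud, amounts to a routing decision. A toy version, where the latency budget and the complexity heuristic are both assumptions invented for illustration:

```python
# Toy router for hybrid inference: latency-critical tasks stay on a small
# edge model; heavyweight reasoning escalates to a large cloud model.
# The threshold and complexity scoring are placeholder assumptions.

EDGE_LATENCY_BUDGET_MS = 20   # e.g. a mobile robot's control loop

def task_complexity(task):
    """Placeholder heuristic; a real system might use a classifier."""
    return len(task["required_context_docs"])

def route(task):
    if task["max_latency_ms"] <= EDGE_LATENCY_BUDGET_MS:
        return "edge-small-model"      # round trip to the cloud is too slow
    if task_complexity(task) > 3:
        return "cloud-large-model"     # needs heavyweight reasoning
    return "edge-small-model"          # narrow task, keep it local

robot_step = {"max_latency_ms": 10, "required_context_docs": []}
root_cause = {"max_latency_ms": 5000, "required_context_docs": ["logs"] * 8}
print(route(robot_step))
print(route(root_cause))
```

The "collaboration between models" he mentions would then be the edge model forwarding tasks it routes to the cloud and consuming the answers that come back.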
Craig:
Yeah, you know, it's funny. I just had a conversation about this, and I got a little confused, and the people I was talking to got a little confused. When you talk about smaller models, are these lower-parameter models that have been trained on a specialized data set, or are they distilled from foundation models and fine-tuned on a smaller, specialized data set?
Anurag:
It could be all of the above. So all of those techniques are valid techniques. And there are other techniques too. Maybe the model size is big, but you provide grounding and guardrails so that the data set it works with is narrower, so that it is purpose-built for a specific job. But I can also give you some examples that are not necessarily text-based, and those may be easier for explaining what I mean. So, in this meeting, for example, in this conversation, I'm blurring my background. That is actually a video AI model. It is not a language model; it is a video AI model that is running in the camera that I'm using right now, and it is a small model. All it does is understand the outline of a human and some of the typical things we have around us, and then it can separate the human silhouette from the background. So that's a very purpose-built model. All it does is this one thing. You can't ask this model to write an email, right? It's not built for that. It's very purpose-built. It is good at image and video recognition, and it's small enough that it runs in the camera. And so the compute that it needs is very small. And you can think about similar, smaller, purpose-built models even when it comes to language. And so that's the emerging small language model approach that we're talking about. And when you make it narrow and purpose-focused, that also helps you make sure that it will not go off the rails, so to speak. And it helps increase trust in what it can do. And I think that's a good stepping stone for customers: focus on narrower problems first with AI, and then continue to expand the surface area of what they can delegate.
Craig:
Yeah, and that's interesting. Those could be small language models, as you said, but they may also be convolutional neural networks and other kinds of models. And I've heard people talking, particularly in the finance sector, about models that deal with numbers, not necessarily language. Are those also built on generative pre-trained transformers, or are there different architectures?
Anurag:
There are different approaches. I can give you two examples of things that we are doing at Cisco. So one is we have created what we call a deep network model. And it uses the technique that you were talking about earlier, which is really about taking a larger model and making it narrower, training it with data that is telemetry from networks, along with the knowledge bases that we have created over the last 40 years of our existence about how to build and configure networks. And so we distilled the large model, and then we trained it with this specific networking knowledge. And so it's really good at answering questions about networks. How do I configure a VLAN, a very specific networking question? It will do really great. But ask it to create a slide deck, and it sucks at that. It won't do it, because we distilled it down to a narrow problem. But it's amazing at that networking question, and it will outperform a general model. If you were using any of the other AI tools out there that people use to write documents or ask questions, it will outperform them on networking domain-related things, but it will be extremely poor at general purpose things. So that's one example. Another example comes from our Splunk team; Splunk is an observability and security tool that is part of Cisco. They recently announced a machine data model. And this goes back to the numbers question that you were asking. A lot of these models are trained on text that humans use, right? Reading and writing type of thing. But there's a lot of data that comes out of machines and software that is log data. It is structured differently. It is not English. You have to be a technologist to understand what these things mean. And so we are creating models that understand what these things mean, and they're really good at answering questions based on machine data.
And again, the approach is very similar. You start with a model that is either distilled or a smaller-parameter model, and you train it on this information. And you make these models really good for the domain that you train on.
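The point about machine data is that logs are rigidly structured, not prose; before a model (or a human) can answer questions about them, they are typically parsed into fields. A standard-library sketch, using a made-up syslog-style format purely for illustration:

```python
import re

# Machine data isn't English: a syslog-style line packs fields into a
# fixed layout. Parsing it into a dict is the usual first step before
# any model can reason over it. This log format is invented here.

LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<host>\S+) (?P<proc>\w+)\[(?P<pid>\d+)\]: "
    r"(?P<level>[A-Z]+) (?P<msg>.*)"
)

def parse_log_line(line):
    """Return the line's named fields, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = "2025-01-07T10:42:01Z sw-core-1 ospfd[211]: ERROR neighbor 10.0.0.2 down"
rec = parse_log_line(line)
print(rec["level"], rec["msg"])
```

A domain model trained on such records can learn the field semantics (hosts, processes, severities) that a general text model would treat as opaque tokens.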
Craig:
Yeah. And Cisco is building some of these solutions, building some of these models.
Anurag:
We are. We have three key models that I want to call out. We have a model that is built for the security domain. And that, by the way, is an open-weight model, which means you can go to Hugging Face and download it and use it in your application. So we intentionally made it open. And we launched that earlier this year; I think we made the announcement in April. Then in June, we made an announcement about this deep network model that I was talking about, which is trained for specific networking domain problems. And then back in September, we announced the machine data model for Splunk. So those are three very specific, domain-specific models that we are building. And they are not trillion-parameter models, they are smaller models, but they are very good for domain-specific problems.
Craig:
Yeah. And this discussion that I was having earlier with someone: if they're small models, are they still using the architecture of large language models, even if there's a small number of parameters?
Anurag:
Yes. So the three models that I gave you as examples are using the same type of architecture that these large models are using, but they're built for narrow tasks so they can be smaller. So that's the approach. These things will evolve. Honestly, the state of the art is changing so fast that we'll start to see new approaches emerge. And I'm confident that if we have the same conversation next year, we'll probably be talking about novel techniques that we don't use today.
Craig:
Yeah. And maybe that's one of the advantages of working with a company like Cisco that has an end-to-end solution. It's up to you to keep up with the state of the art, rather than the enterprise trying to keep up with the state of the art and swapping out components and things like that.
Anurag:
So I would say I look at it in two ways. One, we are focused on solving hard problems that we are uniquely skilled to solve. I talked about networking, I talked about security, I talked about observability. These are things that we have been doing as a company for decades. We have some unique data, we have some unique perspective, we have some unique skills. So we'll excel at producing those models. But then the second piece that goes hand in hand with that is the open ecosystem approach that we're taking. So when we're building systems, we are building them in extensible ways so that other people and other vendors can plug in their technology. I'll give you an example of that. We talked about building agents for, let's say, managing networks. And this is something that we are building, powered by our deep network model. But an IT organization might have a workflow that actually starts in a ticketing system, ServiceNow or something along those lines, right? A ticketing system. And that ticketing vendor might have an agent that encapsulates that workflow. And so what we want is for the agents that they specialize in and the agents that we specialize in to collaborate with each other. So then when you are troubleshooting a network problem and you fix something with our agent, it can go and update the ticket in the ticketing system. It's just like humans collaborating with each other across different specializations. And so that's the approach here. I think it is essential that we take an open ecosystem approach. And Cisco, by the way, is a contributor to emerging standards. One of the standards for agent-to-agent communication is A2A. And Cisco is a first-class contributor, alongside many of the other industry leaders.
And so we are really driving the state of the art forward, so that you can build an ecosystem where multiple agents from different companies and different people can collaborate.
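The cross-vendor scenario he describes, a network agent fixing an issue and then updating a ticketing vendor's agent, can be sketched as simple message passing between two agents. This is an illustration of the idea only; the real A2A protocol defines its own wire format, agent cards, and discovery, none of which are reproduced here:

```python
# Conceptual sketch of agent-to-agent collaboration: a network agent
# resolves an issue, then hands a structured status update to a
# ticketing agent from another vendor. Plain Python message passing
# stands in for a real protocol such as A2A.

class TicketingAgent:
    """Hypothetical agent wrapping a ticketing system's workflow."""
    def __init__(self):
        self.tickets = {}

    def handle(self, message):
        tid = message["ticket_id"]
        self.tickets[tid] = message["status"]
        return f"ticket {tid} marked {message['status']}"

class NetworkAgent:
    """Hypothetical agent that remediates network issues."""
    def __init__(self, peer):
        self.peer = peer  # the ticketing agent it collaborates with

    def fix_and_report(self, ticket_id):
        # ... diagnose and remediate the network issue here ...
        return self.peer.handle(
            {"ticket_id": ticket_id, "status": "resolved",
             "note": "AP channel changed; Wi-Fi restored"}
        )

ticketing = TicketingAgent()
network = NetworkAgent(ticketing)
print(network.fix_and_report("INC-4021"))
```

The value of a shared standard is that the two agents above could come from different vendors and still exchange exactly this kind of structured update.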
Craig:
Yeah. How do enterprises work with you generally? Because either they've been a Cisco customer forever and they've grown with your tech, or, if not, they have systems in place that do a lot of these things. Is there one end of the elephant that you start working on, and then over time you get involved deeper and deeper in their systems? Or do people set up a Cisco end-to-end system in parallel and then eventually migrate to it? How does that work?
Anurag:
Yeah, I think, again, it's a vast spectrum of customers, and it's all of the above. But if I were to specifically talk about AI, because that's where a lot of these conversations are starting and taking place right now, there's a long tail of customers here, right? There are some smaller customers who don't have the IT staff or the skill to take on a lot of engineering and development. So they want a completely shrink-wrapped, turnkey solution, and we give that to them so that they can just deploy and benefit. On the other end of the spectrum are large enterprises that actually have software engineers and large IT staffs, and they are building a lot of these agents and tools themselves. And they want a partnership, not just a vendor-customer relationship. And so, more and more, as we are building these agentic systems, and one example is a capability we call AI Canvas, which is the next step in our agentic network management, we are engaging with them as design partners. So, as we are building these systems, we are bringing them into the process early. And we are giving them access to the code and capabilities early in the process, so that they can not just experiment with it but give us feedback on what it does well, where it's falling short, and how we can improve it together. And they benefit because they get early access to technology and they get to influence and shape our roadmaps. And we benefit because we learn from real-world scenarios. So when we bring the product out for general consumption, it is actually ready, and it will solve real-world problems. So it's a very synergistic relationship that we're leaning into as we're building these AI tools going forward.
Craig:
Where do you see this going? As you were saying, the tech is advancing, so you've got to keep up with it. But in terms of your products, where do you see this going? Is it moving increasingly to the edge, so that a lot of these capabilities will be in people's pockets? Or is there some other direction that you're going?
Anurag:
Yeah. So in some ways it's a question of what's the macro trend, and how do we either accelerate or adapt to that macro trend, right? That's how these conversations go when we are thinking about building products. And the macro trends that I see are really twofold. One, the large models, which require a lot of infrastructure to train and a lot of infrastructure to actually operate and do inferencing on, will continue to get better, more sophisticated, and maybe even larger. And they will continue to run in a data center somewhere, just because of the infrastructure requirements. So our role as a company is to enable the acceleration of that. We work very closely with the hyperscalers who are building these data centers, and we work very closely with the model companies themselves in terms of the support we can give them, both from a compute and from a networking point of view. We work hand in hand with them on what they're building and what is needed to accelerate that. So that's one part of it. The second thing is really around AI applications, the applications that normal people use to get their work done. They're not training AI; they're just using an application. As those applications start to proliferate, how do we make sure that the network doesn't become a bottleneck in terms of bandwidth or performance? How do we make sure that security is not compromised as those applications proliferate? And for the applications we are building, how do we make sure they're delivering a delightful experience, that they're intuitive, helpful, and useful? So it's a spectrum that we cover as an infrastructure company and also as an application company.
Craig:
Yeah, that's interesting. With the number of applications that are trying to get through the network, how do you handle it? Do you increase bandwidth, or do you prioritize?
Anurag:
So there's not one single answer for that. But when we think about how we solve for a world where more and more AI applications will proliferate, we think you need to take an end-to-end architecture viewpoint. You can't just look at one part of the puzzle and say, what do I do for this box, what do I do for this Wi-Fi access point, in isolation. It doesn't quite work. You have to take an end-to-end view. So we have an architectural perspective here, and it has three pillars. The first one is: how do we leverage AI to empower the IT teams and simplify their operations? So IT has a helper, a co-pilot in software agents, and that's what we're doing to help them manage the complexity. That's one aspect. Second, I talked about security exposure; it is going to get more complicated. How do we build that into the products we're making, the infrastructure products? And the third one is, as we're building hardware products, Wi-Fi, switching, routing, what does it mean to make those products ready for AI? This is where we're looking at things like extremely high throughput in a dense form factor, delivered in a very energy-efficient way. How do we optimize for latency? How do we generate telemetry so that those AI agents can actually observe it and act on it? And this is where one of the things Cisco has invested in is building custom network silicon. These are chips that go into our switches and move packets around at amazing speed, but they are very programmable, so that as the state of the art evolves, software upgrades can unlock new capabilities without forcing everyone to buy new gear. When you think about infrastructure investments, people are not buying these products for six months. The state of the art is moving so fast that what we are doing today will look very different in six months; AI will have advanced.
But these are expensive infrastructure projects. You're not going to change your network every six months; you're probably making that investment for five, six, seven years. So how do you build flexible infrastructure? That is why these programmable, purpose-built silicon chips, the operating systems running on top of them, and the applications running on top of those are architected to be flexible, highly observable, and programmable. That's the full-stack view that you have to take. So that's what we're doing.
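The separation described here, fixed packet-moving hardware with tunable policy layered on top, can be illustrated with a small sketch. The class and field names below are hypothetical stand-ins, not Cisco APIs: the point is that thresholds the observing agent acts on can be retuned in software without replacing the forwarding hardware.

```python
from dataclasses import dataclass

@dataclass
class PortTelemetry:
    """One telemetry sample exported by a switch port (hypothetical schema)."""
    port: str
    utilization_pct: float   # link utilization over the sample window
    queue_latency_us: float  # time packets spent queued, in microseconds

class CongestionWatcher:
    """Software-defined policy layered over fixed forwarding hardware.

    Thresholds can be retuned later (by a software upgrade, or by an AI
    agent) without touching the gear that actually moves the packets.
    """
    def __init__(self, max_util_pct: float = 80.0, max_latency_us: float = 500.0):
        self.max_util_pct = max_util_pct
        self.max_latency_us = max_latency_us

    def check(self, sample: PortTelemetry) -> list[str]:
        alerts = []
        if sample.utilization_pct > self.max_util_pct:
            alerts.append(f"{sample.port}: utilization {sample.utilization_pct:.0f}%")
        if sample.queue_latency_us > self.max_latency_us:
            alerts.append(f"{sample.port}: queue latency {sample.queue_latency_us:.0f}us")
        return alerts

watcher = CongestionWatcher()
alerts = watcher.check(PortTelemetry("eth1/1", utilization_pct=92.0, queue_latency_us=120.0))
```

Only the utilization threshold trips here, so one alert is raised; a later software update could tighten or loosen both limits on the same hardware.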
Craig:
Yeah. You were saying earlier that you expect foundation models to continue to grow in size, power, and capability. Will they remain the foundation of a lot of these applications? Or do you think this move toward smaller models, specialized agents, and compute at the edge will take some of the load off the foundation models?
Anurag:
Yeah. I think it will definitely be a mix of models, some large, some small. And it is already clear that purpose-built models have a place; they are very successful for specific tasks or jobs, specific verticals, even specific industries. You were talking about the finance industry; there are models for healthcare, there are models for manufacturing, and they definitely have a place. So I think it's going to be a spectrum of models, sizes, and purposes, and that will continue to evolve. I don't believe in a worldview that says there's one giant, gigantic model that can solve all the problems the world has. I think it is going to be much more diverse than that. And so the approach we are taking is that we specialize in certain things, but we're building open ecosystems so that you can leverage the technology that others are building.
Craig:
Yeah. One of the things we haven't talked about is customer care solutions, or customer service. There's an explosion of companies using large language models and text-to-speech or speech-to-text models to improve customer service. Is Cisco big in that space, or do you focus on the connectivity?
Anurag:
We are very big in the contact center industry. And you're absolutely right, that is one of the places where AI is causing true disruption and is already starting to deliver results that are very measurable and very meaningful. We have a product that supports customer service and customer engagement organizations in reaching out to their customers and offering different channels. Voice obviously remains a core way; people still call phone numbers to talk to organizations. But increasingly there are digital channels, whether that's web chat, SMS, iMessage, or WhatsApp. So we have a full product that does that, and increasingly it is enabled by AI. What we're doing there is twofold. One, we have a fully autonomous voice AI agent. When you call in for support, the call is answered by an agent, but it increasingly sounds and feels like you're talking to a human. The idea is not to fool people into believing they're talking to a human when they're not; most of our customers actually start the conversation with the AI introducing itself as AI. But it's a natural conversation. You can speak to it just like you would to a person. You can interrupt, you can pivot the conversation, and the AI models are getting sophisticated enough to handle all of that very quickly and in a natural way. And then, I was talking about agentic AI earlier: these agents don't just hold a conversation with you, they can actually use the tools that human agents would have used to take actions. So they can fulfill the intent, and that is definitely starting to make a difference. But when calls go to a human, because AI can't solve everything, and sometimes you want to talk to a human (if the stakes are high, you probably do), AI can be a companion for the human as well. While they're holding a conversation, AI can look up all of the information.
AI can make suggestions on what the next step should be. So AI is really a tool for humans as well; it's a dual value that we're delivering. And then the people who manage these contact centers, the supervisors and the administrators, can get a lot of insights as well, and they can tune the processes and the agents as part of that. It's an amazing capability. It's making a difference, and it's very real in that industry already.
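The "agents that use tools to fulfill the intent" pattern described above can be sketched roughly as follows. Everything here is invented for illustration, not Cisco's product: a real system would use an LLM for intent detection and real backend APIs as tools, but the shape of the flow, classify, route to a tool, or escalate to a human, is the same.

```python
# Minimal sketch of an agentic contact-center flow: classify the caller's
# intent, then either invoke a tool to fulfill it or escalate to a human.

def lookup_order_status(order_id: str) -> str:
    """Stand-in for a backend tool a human agent would otherwise use."""
    return f"Order {order_id} shipped yesterday."

TOOLS = {"order_status": lookup_order_status}

def classify_intent(utterance: str) -> str:
    """Toy keyword classifier; in practice an LLM would do this."""
    if "order" in utterance.lower():
        return "order_status"
    return "unknown"

def handle_call(utterance: str, order_id: str = "A123") -> str:
    intent = classify_intent(utterance)
    tool = TOOLS.get(intent)
    if tool is None:
        # Unrecognized (or high-stakes) requests go to a person.
        return "Transferring you to a human agent."
    return tool(order_id)

reply = handle_call("Where is my order?")
```

The escalation branch is the "AI can't solve everything" path from the conversation: anything the agent can't confidently map to a tool goes to a human, where the same AI can then switch roles and act as the human agent's companion.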
Craig:
Yeah. And I should know this, but I don't. Are you in partnership with one of the big foundation model builders, or are you model agnostic?
Anurag:
We are model agnostic; that's the way we've architected these systems. Out of the box, we ship these agents with a specific large language model. But we have customers who come to us and say, hey, I'm using a different model for all of my other workflows and applications, can you leverage that model? So we allow them to bring their model into the system, and we still work with them and with the model provider to tune it and make sure it delivers a great experience. But just to build on that, the language model is only part of the puzzle here. The other part is the speech models: the ability not just to hear a human but to understand what they're saying and translate that into something a language model can act on. That's technology we have invested in ourselves. Like the video blurring example I was giving, these are purpose-built media models that work with audio. That's something we've been doing for decades, so we have some unique exposure to and understanding of this, and we have data and learning. So we are investing in building those models ourselves. It's a combination of models that go into an agent that delivers autonomous service.
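That "bring your own model" design maps onto a common adapter pattern: the agent depends only on a narrow interface, and any provider that implements it can be swapped in without touching the rest of the system. A rough sketch under that assumption (the class names are invented, not Cisco's actual architecture):

```python
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """Narrow interface the agent depends on; any provider can implement it."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class DefaultModel(LanguageModel):
    """The model shipped out of the box (stubbed here)."""
    def complete(self, prompt: str) -> str:
        return f"[default] {prompt}"

class CustomerModel(LanguageModel):
    """A customer-supplied model plugged into the same slot."""
    def complete(self, prompt: str) -> str:
        return f"[customer] {prompt}"

class VoiceAgent:
    """Composes a speech front end (elided) with a swappable language model."""
    def __init__(self, llm: LanguageModel):
        self.llm = llm

    def respond(self, transcribed_speech: str) -> str:
        # Speech-to-text happens upstream; the agent only sees text here.
        return self.llm.complete(transcribed_speech)

agent = VoiceAgent(CustomerModel())  # swap in DefaultModel() with no other changes
answer = agent.respond("reset my password")
```

The speech models sit in front of this interface, which is why they can be developed in-house while the language model behind it remains replaceable.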
Craig:
Yeah. So what do you want my audience to take away from this?
Anurag:
I would say a couple of things. One, the opportunities are very exciting. The AI applications we're seeing are real and starting to make a difference, and the state of the art is moving very fast. Sometimes we hear a lot of fear, and a trust deficit, among customers adopting these tools. That goes back to how we explain what we're doing: how the system works, the guardrails and controls we offer, and how we bring humans into the loop. So we have a thoughtful approach to building these products. I also sometimes hear concerns about AI displacing jobs and what this means for society at large. Those are real, hard questions, and I think we still need more work, from industry and from public-private partnerships, to figure out how to make sure the world we're going to live in is a net positive for society. I tend to have a more optimistic viewpoint, because the way I look at this is: you can take the approach that AI is about automation and eliminating things that we used to do, and definitely some of that is happening and will happen. But then there's the other approach, which is: what could you do now that you were not able to do before? That is a little less clear; the answers there are less obvious. But I take the view that if you look at prior technology waves, we've collectively figured out newer opportunities and newer avenues, and I feel like we'll get there with AI as well. So right now, my guidance to my teams is: don't be a bystander. At a minimum, you need to take the plunge and understand the technology, not only so that you can be more effective at doing your job, but also to help shape new opportunities for what we can do with this technology.
So taking a learning-mindset approach in whatever you do, and trying to figure out how you can best use the AI tools available in that segment or that function, is probably a very good idea right now. And on our side, as Cisco, we will continue to be the trusted infrastructure provider for our customers. We will build a lot of technology in the areas where we have special expertise: networking, security, observability, collaboration. But we also have an open-ecosystem mindset, so that it is really a collaborative effort with other industry leaders and with our customers to deliver the outcomes our customers want. That's what I'll leave the audience with. Exciting times ahead. The technology is moving very fast, and there's a lot more to come.
Craig:
Okay, we'll leave it there. And I agree with you. I tell people this all the time, people who are afraid of AI or who hate it without knowing anything about it: educate yourself, because that's the best defense.