
Tyler Xuan Saltsman, CEO of Edgerunner, joins Craig Smith to discuss how AI is transforming military strategy, logistics, and defense. Edgerunner pioneers generative AI for mission planning, autonomous drones, and battlefield intelligence—enhancing security while keeping humans in control.
235 Audio.mp3: Audio automatically transcribed by Sonix
Tyler Xuan Saltsman:
Our logistics agent is used with a Meta model fine-tuned on a bunch of military logistics doctrine, so that model now thinks like a military logistician. And what we do is we build these adapters. They're called LoRAs, which stands for low-rank adaptation of a large language model. That adapter freezes 99% of the parameters of the model, and then the model only speaks to the information pertinent to that adapter. So rather than a generalized chat model, which is what we see today and which is pretty much unusable, it's now specific to a logistician.
Tyler Xuan Saltsman:
Tyler Saltsman here. I'm founder and CEO of Edgerunner. We're building generative AI for the warfighter, and what that means is we're building domain-specific intelligence that's occupation-specific for each warfighter's role, whether it's logistics or condition-based maintenance, whether you're a fighter pilot or you operate a battleship or a submarine. We're building AI that'll augment all of our warfighters to ensure national security. And what also makes us different is our AI is open and auditable, meaning you can understand the training data and the biases that went into the AI, unlike our peers today.
Craig Smith:
That's interesting. And what are the primary models and use cases? I mean, when you say logistics, what do you mean specifically? Or, you know, someone managing a weapons platform?
Tyler Xuan Saltsman:
Yeah. So right now we're in a crawl-walk-run, the crawl phase. We're actually excited to partner up with Meta, so we have clearance to use Meta's Llama models for DoD applications. Our logistics agent is used with a Meta model fine-tuned on a bunch of military logistics doctrine, so that model now thinks like a military logistician. And what we do is we build these adapters. They're called LoRAs, which stands for low-rank adaptation of a large language model. That adapter freezes 99% of the parameters of the model, and then the model only speaks to the information pertinent to that adapter. So rather than a generalized chat model, which is what we see today and which is pretty much unusable, it's now specific to a logistician.
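[Editor's note: to make the LoRA idea concrete for readers, here is a minimal numpy sketch of a low-rank adapter on a single frozen weight matrix. It is illustrative only, not Edgerunner's code; a real LoRA setup (for example, via the `peft` library) applies such adapters to weight matrices inside a transformer. All names and sizes here are invented.]

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4            # rank is much smaller than the layer dims

W = rng.standard_normal((d_out, d_in))   # frozen base weights: never updated
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))              # trainable up-projection, initialized to zero

def forward(x):
    # Effective weight is W + B @ A, but W itself is never modified:
    # only the small A and B matrices would receive gradient updates.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as a no-op:
assert np.allclose(forward(x), W @ x)

frozen = W.size                 # 64 * 64 = 4096 frozen parameters
trainable = A.size + B.size     # 4 * 64 + 64 * 4 = 512 trainable parameters
print(f"frozen: {frozen}, trainable: {trainable}")
```

At full model scale, where the frozen matrices dominate, the trainable fraction drops to roughly the "99% frozen" regime described above.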
Craig Smith:
And this is a fine-tuned model. You're not using a RAG system.
Tyler Xuan Saltsman:
We also use RAG. So I like to think of what we're doing as building a combination of a RAG pipeline, LoRA adapters, and small language models that all work together synergistically. For example, you would use RAG on things that are static and don't change, and you would use a small language model with a LoRA on something that's dynamic, where you need to be creative. So here's a good example.
Tyler Xuan Saltsman:
Let's say you're a fighter pilot, and I need to know everything about my fighter jet: how do I maintain it, how does it perform under certain circumstances? An interactive manual on steroids, if you will, that I can talk to. That would be your RAG pipeline, because RAG (retrieval-augmented generation) is just retrieving all the relevant information about that fighter jet. But now let's say I have a bespoke mission where I'm doing something that's not common. Let's say I have to do an air-to-ground mission and I'm flying low to the ground in an F-22 or an F-16. That would then be a small language model with a LoRA adapter attached to it. So I'm using a combination: RAG when I need to interact with my fighter jet, and then my small language model with my LoRA for the bespoke mission that's specialized.
Craig Smith:
Yeah, talk about how that small language model with LoRA would work. And is that on the edge? I mean, is that in the aircraft?
Tyler Xuan Saltsman:
That's right. So, for example, what's great about LoRA is it acts as fine-tuning on the edge, because we're freezing basically 99% of the entire model and we're only speaking to what's in that adapter. So that pilot would then have an adapter that's specific to the terrain, the demographics, the mission he or she is executing, the target they might need to go take down, or maybe it's a reconnaissance mission. Basically, all the situational awareness you'll need will be in that LoRA, and the small language model from Meta, for example, will then be augmented by it. So we would build that LoRA, we would run that Meta model, we'd also provide the RAG pipeline, and that would all work right on a device on the edge that doesn't need the internet.
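[Editor's note: for readers who want the retrieval side made concrete, here is a toy sketch of the retrieval step in a RAG pipeline, using bag-of-words cosine similarity over a few invented "manual" snippets. A production system would use learned embeddings and a vector store; every document and name here is made up for illustration.]

```python
import math
from collections import Counter

# A tiny stand-in corpus of static reference documents.
docs = {
    "engine_manual": "engine maintenance requires inspecting the turbine blades and oil pressure",
    "radar_manual": "the radar system is calibrated before each flight using the ground station",
    "fuel_manual": "fuel capacity and consumption rates determine the maximum combat radius",
}

def vectorize(text):
    # Bag-of-words term counts as a crude document vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query; the top hits would be
    # stuffed into the language model's context window.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])), reverse=True)
    return ranked[:k]

best = retrieve("how do I check oil pressure during engine maintenance")
print(best)  # the engine manual ranks first
```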
Craig Smith:
And how does the pilot interact with the model?
Tyler Xuan Saltsman:
Right now the pilot would type in the device, but we're in the process of making this voice-enabled, kind of like Jarvis or Iron Man.
Craig Smith:
And what's an example of what a pilot would be asking the model?
Tyler Xuan Saltsman:
To divert and ground the aircraft. That could be something that comes up. Or let's say there's an enemy on the horizon: based on my payload, what I have, do I engage the enemy? What do I do? What kind of evasive maneuvers can I make safely, based on how that fighter jet is configured? Something like that. And again, I don't know much about flying, but I can get into logistics, which is what I know. I used to be a logistician for the Army, and we also prepared a logistician demo of our technology for you.
Craig Smith:
Sure, yeah. Well, let's talk about that, because logistics is a big piece of the military.
Tyler Xuan Saltsman:
That's right. We see in the Russia-Ukraine conflict right now that Ukraine sometimes gets the upper hand because of better logistics. And of course, if you run out of fuel or life support or ammo, you name it, your fighting force is down, so we have to make sure that we're optimizing for that. For example, with logistics, it can help me with: which ammo goes with which ammo on which truck? What happens if this truck goes down? What alternate routes do I have if I get ambushed? What are my friendly units around the area? If I'm hearing a weird noise in this truck, how do I troubleshoot it? What's wrong with it? It can detect all of this and help me on the fly, right away. But I'll pause there.
Tyler Xuan Saltsman:
Colton can bring up the demo. Let's walk through that and show you the actual working prototype. And again, this works locally on device without ever needing the internet, so we're going to show that with Wi-Fi turned off. Of course, this is just a demo, but here you can see a very comprehensive prompt. If you asked ChatGPT this, or any kind of model, it wouldn't know it, because, again, these models are too generalized. What we're able to do is drill into this exact mission ask, the prompt on how we properly plan a logistics mission of this magnitude: how do we mitigate risk? What's the equipment we're going to need? What does the planning phase look like versus the execution phase? You name it. So we can get very granular, and this helps a logistics officer like me better plan and, you know, do my job.
Craig Smith:
And what are the PDFs listed below?
Tyler Xuan Saltsman:
So what we do is we actually cite all the sources that we trained on. So now, if I say, okay, this is an interesting line item, this recommendation the AI gave me, where did that come from? Now you can drill into that, into the doctrine of where it came from.
Craig Smith:
Yeah, this is remarkable. And for audio listeners, we're looking at Edgerunner's logistics agent. How do you say that? Is it an instance of Edgerunner, or a use case of Edgerunner?
Tyler Xuan Saltsman:
That's right and what this is. So we're actually showing you all the different sources we train on, and we train this on thousands of different PDFs related to the Army and logistics, so it's very specific.
Craig Smith:
And it's giving a breakdown, day by day: you know, morning, midday, late afternoon, exactly what has to happen.
Tyler Xuan Saltsman:
Exactly. What this just generated for me in literally 30 seconds could take me hours, or even a couple of days, so this can really augment. And this is based on, like, the Army doctrine way. Of course, things in the battlefield aren't always going to be by the book or by doctrine, but at least now it's giving me a very good baseline of what the mission should look like.
Craig Smith:
And before something like this, or currently, is it all done by hand? Or are there systems that are doing this without generative AI?
Tyler Xuan Saltsman:
So this used to be done completely by hand. But now that we have this baseline, we can then go whiteboard and have an art of the possible, and maybe we want to take out day two, so then we can just quickly modify it via hand-jamming.
Craig Smith:
Right, and what's the uptake at this point? I mean, where are you guys? Give me a little background on the company: when you formed, and where you are.
Tyler Xuan Saltsman:
So we're all pretty much ex-Stability AI.
Tyler Xuan Saltsman:
Stability was known for Stable Diffusion, one of the most popular media models in the world for the text-to-image modality. And from working in lockstep with all these scientists (I was head of supercomputing when I was at Stability), we realized we had a core competency of being able to train and inference models across different types of chip architecture and hardware, but also to make these models much smaller. And when they're smaller, they can live right on the device and not need the internet. Once I realized we had this core competency, it was: let's go build this for the military. And given my background as an ex-Army officer, and also meeting these scientists, it just made a lot of sense for us to work together. And it was challenging at Stability, because we weren't really creating products that added value to customers. It was more just a cool thing, but there was no real value creation.
Tyler Xuan Saltsman:
I think that's a problem with AI today: it's not actually solving anyone's problem, and it's not really doing anything useful other than making cool things. So now we're solving for that by actually building products that our customers, the DoD, can use. We're only about eight months old. We raised five and a half million. Madrona is the biggest VC on our cap table; they're an amazing partner. Matt McIlwain has been an awesome supporter of us, as well as Jon Turow, and then, of course, we have some angels. So, yeah, we have a full team of 18, and we're starting to ramp up to scale early next year.
Craig Smith:
You know, I worked on the National Security Commission on AI for the two years that it ran, and there was a lot of talk about integrating AI. I mean, that was the focus of the commission, really: how to accelerate that integration. I mean, how does this work? So you guys, you know, have a company and a product and relationships with the DoD, but the acquisition process is so complex. How do you manage that? How do you navigate that?
Tyler Xuan Saltsman:
You know, we're going strictly to the channel. Right now we're on Carahsoft; they're one of the biggest channel distribution partners of the DoD, and we're an approved vendor there. We're on NASA SEWP as well as the ITES-SW2 contract vehicles that the Army uses, and the DoD uses NASA SEWP. So we're good to go: we're on those vehicles, and when you need us, you can procure us directly through them.
Craig Smith:
And then, of course, this will expand. Go ahead. And then is it up to the individual unit commander, or who makes the decision about whether to onboard Edgerunner?
Tyler Xuan Saltsman:
Yeah. So that'll come from way up, at like the J level, even at the Pentagon. Typically it'll come down from the top. If there's some sort of AI mandate or policy, then what we're going after is called broad agency announcements: there'll be broad agency announcements for AI to do XYZ, and then we'll bid on it.
Tyler Xuan Saltsman:
But what helps us is, for us, we qualify as a small business, we're veteran-owned, and I have a PTSD disability rating. So we're a strategic partner for these bigger partners that are required to work with smaller guys like us. And, of course, we have core competencies of building AI that's personalized and completely air-gapped, which actually segues into our computer vision technology. So not only are we building agents and assistants on device, but let's show you our computer vision technology.
Tyler Xuan Saltsman:
And what we're solving for is: right now in Ukraine, the life expectancy when you're spotted by a Russian drone is only seven minutes until you find cover and concealment. So how do we neutralize these Russian drones? Well, we're building drones that can see, rather than requiring a pilot, because when you require a pilot, it's easy to jam these drones and disconnect the pilot from the drone, rendering the drone useless. But now we can put vision on swarms of consumer-grade drones, which are very cheap, that then carry a payload, kamikaze suicide-drone style. So now we'll show you what that computer vision technology looks like, and it works today. That's the problem we're solving for: it's easy to jam these drones, but now that they can see, like what you're seeing now, the drones are autonomous, and they can fly right into that tank with a payload and neutralize it.
Craig Smith:
And how do you ensure that it's not going to fly into a Ukrainian, you know, piece of Ukrainian armor?
Tyler Xuan Saltsman:
No, that's a great question. What we're doing is we're building the neural network just like a human: if a human can discern the difference between a Ukrainian tank and a Russian tank, AI can as well. You do that by training the model on hundreds of thousands of images. Now, to your point, if they look identical, then you're right, we can't. So if Russia now rolled out tanks that are identical to Ukrainian tanks, that would be problematic; we'd have to find another solution. But right now, it's very easy to discern what a Russian tank looks like versus an ally tank.
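[Editor's note: to illustrate the training idea in miniature, here is a toy classifier trained by gradient descent on synthetic two-dimensional "features" standing in for image embeddings, with invented "ally" and "enemy" classes. A real system would train a deep network on hundreds of thousands of labeled images, as described above; nothing here reflects Edgerunner's actual models or data.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, well-separated feature clusters standing in for image embeddings.
n = 200
ally  = rng.normal(loc=[-2.0, -2.0], scale=0.7, size=(n, 2))
enemy = rng.normal(loc=[ 2.0,  2.0], scale=0.7, size=(n, 2))
X = np.vstack([ally, enemy])
y = np.array([0] * n + [1] * n)  # 0 = ally, 1 = enemy

# Logistic regression fit by plain gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "enemy"
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on the weights
    b -= lr * np.mean(p - y)                 # gradient step on the bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2%}")
```

The point mirrors the conversation: if the classes are visually separable (distinct clusters), a trained model discriminates them easily; if the classes overlapped completely, no amount of training would help.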
Craig Smith:
Yeah, okay. And so again, for audio listeners that don't want to go over to YouTube to see the video: we're looking at footage from a drone, or from a series of drones, hitting Russian armor. Is this training data? You don't have these drones fielded yet, do you?
Tyler Xuan Saltsman:
Correct. The drones aren't deployed in real-life combat zones yet. Right now we're testing, and this is an overlay of our technology on footage from drones that can't see; we're showcasing what it looks like when they can see. So this is more of the art of the possible, but the technology does work, and we're in the process of bringing it to life with our partners. We're working with Service Capital, as well as a drone company called RapidFlight. Their core competency is that they print drones via a 3D printer, and these are one-way drones that are cheap to make. They can make them quickly and, of course, they can carry a payload.
Craig Smith:
And where are those drones made?
Tyler Xuan Saltsman:
They're made here, with our partner RapidFlight.
Craig Smith:
I see, are you familiar with what Eric Schmidt's doing with WhiteStork?
Tyler Xuan Saltsman:
I think it's called White Stork. I know Eric Schmidt's got some really cool projects going on that I'd love to partner with him on. I'm not familiar with that exact project, but I know that Eric has a keen interest in what we're working on as well. So if you happen to know him, please give him a shout.
Craig Smith:
Yeah, well, I'm hoping to have him on the podcast soon, so I'll mention it to him. But he's very interested in drone technology, just in how it's changing the nature of warfare. I mean, it really has. What do you think? All the analysis I've read says that, you know, armor is pretty much obsolete at this point.
Tyler Xuan Saltsman:
That's right. I mean, as drones become ubiquitous, it's definitely going to change how we fight wars. Now we're going to have drone-on-drone combat. Also, what will be interesting is this: large language models still aren't intelligent yet, and I think largely it's because they don't understand the physical world. Well, how do we understand the physical world? We put VLMs, vision-language models, on drones, and the drones scan the battlefield and describe everything they're seeing. Now we can take that new synthetic data and reinforce the neural network of the large language model, which then makes the on-device agents better, and we get another great leap forward in AI. Now, I don't think this will create AGI, but I think it'll make generalized AI much better.
Craig Smith:
Yeah, and you'll forgive me, tyler, I don't remember whether we spoke about this. Were we talking about coordinating swarms last time we spoke?
Tyler Xuan Saltsman:
A little bit, and I think the crux of the conversation was that humans will always be in the loop. We'll never want AI to be the shot caller, if you will. I think AI should be a great expert assistant that's advising you objectively and fairly, right, but never actually making the decision, kind of like Jarvis in Iron Man. If you watch the Marvel movies, Tony Stark's always ignoring Jarvis, even though Jarvis is probably right. But he still has the ability to ignore Jarvis, because I think there's still the human gut element that we need to trust. But of course AI is going to make us better at what we do.
Craig Smith:
Yeah. Although, how do you explain a human in the loop with a drone that is, you know, vision-enabled, a kamikaze drone that's navigating its way on its own?
Tyler Xuan Saltsman:
I think what we do is train drones to see hand-and-arm signals, so I can wave to my drone to come here, or stop, or go, things like that. We call this being context-aware. Normally a drone can just see a human; it's not recognizing the signals I'm making with my hands. But now that the drone's context-aware, I can effectively control it. You know, kind of like with soldiers: if I can't talk to my soldier but we need to go attack, I can make certain hand-and-arm signals and we can engage without talking.
Craig Smith:
But yeah, so in the video that we just saw, a drone identifies a target and flies into it. Is a human watching what the drone is seeing, who can abort the attack if it seems like it's making a mistake?
Tyler Xuan Saltsman:
Of course a human could. Also, we can take this a step further. Let's say, rather than trying to blow up the tank entirely and killing the tank operator, we just want to hit the track of the tank and disable it. We can do things like that as well, just like with moving trucks on convoys: why don't we just disable the truck? Now the convoys are sitting ducks. We don't have to kill the enemy; we can just neutralize their supply chain, and we can get smarter and control the collateral damage and the chaos.
Craig Smith:
Oh, that's interesting. So that's part of what you're doing with the vision system, being able to identify components or areas on a vehicle.
Tyler Xuan Saltsman:
That's right, and I think a lot of this, too, is the economics of warfare. If you look at the invention of the Barrett .50 cal, it's a single-man-operated .50-caliber rifle, but it was really intended as an anti-materiel rifle, to take down radars and grounded aircraft. It's a $10 bullet, and you can disable a multimillion-dollar aircraft. So a lot of warfare, too, is going to be, you know, economics-based. These consumer-grade drones are very cheap, but they can also take out very expensive assets. And then, of course, that'll help you win the fight as you bankrupt Russia.
Craig Smith:
Yeah, how does this? How is this going to affect air warfare or surface warfare on the ocean?
Tyler Xuan Saltsman:
You know, I think eventually human pilots will be a thing of the past. I think AI will become much better at flying and much better at dogfighting, and I think humans again will be in the loop, sort of orchestrating and quarterbacking. But the days of the Top Gun-style dogfights, I think those will come to an end.
Craig Smith:
Yeah, and how quickly do you see this moving? Because we certainly saw, I mean, I've probably said this to you and to others on the podcast, that at the beginning of the Russian invasion there was a 40-mile line of armor headed toward Kyiv. And had there been enough drones, they would have taken out that whole line. And that was just, what, two years ago? And now drones are ubiquitous on the battlefield. So it's moving very quickly. How quickly do you think? And another thing: Mark Milley, I heard him at a conference, and he said, you know, the aircraft carrier that they're building today is going to be obsolete by the time it's in the water, because you're not going to use aircraft carriers; everything is going to be unmanned vehicles and that sort of thing. So, yeah, how quickly do you see this moving into actual battlefield situations?
Tyler Xuan Saltsman:
I think the entire transformation process will take less than a decade. In 10 years we'll have drones... I mean, we already have drones today that can approach supersonic speeds. But to your point about completely unmanned everything, yeah, I'd say we're less than a decade out. Just like the Jarvis experience in Iron Man: that technology is nearly here, and we're building it. That's also less than 10 years out.
Tyler Xuan Saltsman:
Now, when people are saying AGI will take over the world in 2028, I think they're insane, and some people we know have said that. I think AGI is at least 20, if not 30, years away. So it's a long ways away. But I think AI will become very sophisticated, where it'll resemble the reasoning of AGI, even if it's not quite there. Because, again, humans will always be in the loop. And so, again, our philosophy is: rather than one big mega model controlling everything, it's going to be swarms of agents, and these different agents have different core competencies, just like humans do. But the agents will always work for us, and that's the point of it, just like Jarvis always works for Tony Stark. He never goes rogue; he sticks with Tony. And that's sort of how we envision AI: it sticks with you.
Craig Smith:
Hang on a second, there was a question I was going to ask. Yeah: do you follow what China's doing at all? Because certainly they've been very active, particularly in the drone space.
Tyler Xuan Saltsman:
Yeah. You know, I think what's sort of a sobering moment is us losing drone manufacturing to China with DJI. We lost that, and we can't lose the AI race to China, and China is very formidable. The Alibaba models out of China, the Qwen series, are phenomenal. So are they ahead of us? You could argue they are, but just the fact that it's even an argument is concerning: we should be way ahead of China, and we're not. So, yeah, I think we need to put our foot on the gas, we need to take this seriously, and we need to really ensure that folks that are building AI uphold national security and protect Western principles and values. Because, again, how do we make AI think like an American? How do we make it culturally aware? If you take a model and then translate it to Hebrew, that doesn't mean it's thinking like an Israeli; it's just a translation. So as AI gets more advanced, it'll learn our culture, and it's important that we build AI that magnifies our culture and protects it.
Craig Smith:
Yeah, I mean, I have to ask: on this computer-vision-enabled drone, could it be deployed with facial recognition technology?
Tyler Xuan Saltsman:
It could be. But then there's the argument that it's unethical to capture people's faces and store them. But yes, the technology is there, and you could do it, absolutely. I know China's doing that; I know with GDPR they're trying not to do that. But if you want to track a human, you wouldn't even need to track their face. You can actually analyze their gait, the way they walk. I walk differently than you do, so my gait can be a unique identifier, even though you're not looking at my face. So things like that are how you get creative and have a workaround.
Craig Smith:
Yeah, I just remember the, you know, the hunt for Osama bin Laden. If there had been face-recognition-enabled drones, that whole thing might have ended much earlier. You mentioned at the very beginning the seven seconds that a warfighter has from the time that he's identified by a drone, or when a drone locks onto the human. Yes, seven minutes.
Tyler Xuan Saltsman:
Oh, seven minutes, yeah, I'm sorry: from when it sees you to when you need to get cover and concealment. And it used to be much longer. I mean, maybe it's seven, eight, I don't know, 10 minutes max, but the window's very short. The point is, it used to be hours, and now that drones are ubiquitous and getting more dangerous, it's shrunk down to around seven minutes.
Craig Smith:
Yeah, is there anything on the defense side that you guys have looked at that soldiers on the ground can use when they're identified by a drone?
Tyler Xuan Saltsman:
Let's say you are in the field and you have, call it, a FOB that has 3D printing. We can quickly assemble the drones and zip them out there. I would imagine fleets and swarms of these cheaper drones so that, if I am seen, we can just deploy them right away in my AO, and then, if a drone's coming at me, these suicide drones will fly into it. So that, really, to me, is the solution. I know that there are other companies that are using AI to actually operate a machine gun, and the machine gun will then shoot the drones down. But the problem with that is, with ballistics, your MOA, your minute of angle, is off. So if I'm shooting something a thousand meters out and it's a 5.56 round, there's a variance in where that bullet will actually land, even though I'm right on target, especially with windage and air density, you name it. So it's still not very effective, because you'd want a shotgun to shoot down a drone, not a machine gun.
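[Editor's note: the minute-of-angle point can be made concrete with a small calculation. One MOA is 1/60 of a degree, so even a perfectly aimed rifle with 1 MOA of angular dispersion spreads roughly 0.3 meters at 1,000 meters. The numbers below are a generic geometry illustration, not ballistics data from the conversation.]

```python
import math

def moa_spread(distance_m, moa):
    """Linear spread at a given distance for an angular error in minutes of angle.

    Uses the small-angle approximation (tan(theta) ~= theta), which is
    accurate for angles this small.
    """
    radians = moa * (1.0 / 60.0) * (math.pi / 180.0)  # MOA -> degrees -> radians
    return distance_m * radians

for d in (100, 500, 1000):
    print(f"{d:>5} m: 1 MOA is about {moa_spread(d, 1):.2f} m of spread")
```

The spread grows linearly with range, which is why the same angular error that is negligible at 100 meters becomes a miss against a small, moving drone at 1,000 meters.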
Tyler Xuan Saltsman:
I think the best solution will be cheaper suicide drones that'll lock on and get right to it. That'll save money, and it won't give away my position; there are lots of advantages to doing it this way. And I think we need more people with actual military experience in defense tech. A lot of these folks are amazing technologists and they're great capitalists, but they don't think like a warfighter. You can't really build products that will augment the warfighter if you don't really know how they think.
Craig Smith:
Yeah, is there any of this technology being developed in Ukraine, where you have warfighters that are getting a lot of experience?
Tyler Xuan Saltsman:
I would imagine so. You brought up Eric Schmidt; I believe he's got some stuff over there, although I can't comment on it directly because I don't know, but I'd be willing to bet there is. And again, if anyone listening to this knows, please reach out, because we'd love to help aid the fight in Ukraine with our technology.
Craig Smith:
Yeah, okay, is there anything I haven't asked that that you want to talk about?
Tyler Xuan Saltsman:
Yeah, there's one last thing, too: weapons detection. We have a very good weapons detection model that detects weapons with 95% accuracy. We can enforce security at bases, at FOBs, and this is a technology I think also needs to become ubiquitous for schools. So we're going to show you a video, viewer discretion advised. This is real footage, but you can see the accuracy and the frame-rate detection with which we can track the pistols. I also think this is the future of how we secure our schools, banks, military bases, you name it.
Craig Smith:
And what we're watching are two people engaged in a gunfight, in surveillance video. But it'll detect... because the problem has been, over and over again, that someone has a phone in their hand and somebody thinks it's a weapon. It'll differentiate?
Tyler Xuan Saltsman:
That's right, yep.
Craig Smith:
Are there any examples in the video of phones being identified as phones in a situation where other people have guns?
Tyler Xuan Saltsman:
We can. We can actually build computer vision that'll delineate between a phone and a gun; we didn't for this case. But what I think is interesting is that on the body of the person that was neutralized, there's still a weapon, and the camera's picking it up, and so it's important to secure the area, remove all the weapons, things like that. And again, we can put this technology on drones for reconnaissance, we can put it in low-power cameras. The vision element needs to augment us and give us situational awareness on how to do our jobs better. And so I think the bigger vision here is: how do we connect the physical world, AI, and computer vision with the, call it, on-device personalized AI agents? How do we bridge that gap? I think that's the future of AI.
Craig Smith:
Yeah, and this weapon detection system? Has that been deployed, or is that a research project?
Tyler Xuan Saltsman:
Not yet; it's research, but it is ready to deploy. And again, we're ready to provide this to schools at no cost, because it's the right thing to do, and I never want to profit off of a horrible tragedy. We will be selling this to our military partners, but for schools, absolutely, we're happy to give it away.
Craig Smith:
Well, this has been fascinating. Is there anything I've missed?
Tyler Xuan Saltsman:
I would just, you know, lastly, leave the viewers with the dangers of AI and its biases, and the dangers of tech elites trying to censor AI with their own virtue signaling that isn't indicative of our culture or the warfighters'. And again, there is no right answer, but I think the most important thing is transparency. OpenAI will never show you their training data. Anthropic will never show you their training data. They won't even show you the weights of their models or the code base, so forget the training data. And even with open models today, no one's really showing you what they trained on. If you can't understand the training data, you can't understand the biases. So when hallucinations happen, which they can, you can't really go back and understand that calculation if you don't know the training data. It really all comes back to that, and I think that's the biggest problem with AI today: garbage in, garbage out; bias in, bias out.
Tyler Xuan Saltsman:
We saw Google literally whitewashing or changing history and removing white people, which is borderline evil, to change history. It's problematic, and we can't be doing that. And it's fine if you want to do that and that's what your customer wants, but then show the training data: how did you do it? What did you use to create that? And that's, call it, the part that I just can't stand.
Craig Smith:
Yeah, and your training data, you were saying, is open?
Tyler Xuan Saltsman:
That's right. So we're building, we're in, call it, phase two: we're constructing a military data set that thinks like a military warrior. So imagine, like, noble Greek mythology, with military tactics, with leadership books, lots of curriculum that a young military officer would have to read, combined with lots of academic papers, just for the volume; there are lots of large, open data sets on Hugging Face. And then we take it a step further by making it culturally aware. So it's a military data set trained off of Air Force, Navy, Army, Marines, all of that public domain. Rather than just scraping all of the internet, we're scraping public domain that's akin to the military and combining it with an academic data set that we can commercially use, which is right on Hugging Face.
Tyler Xuan Saltsman:
Then you fuse that together, and then you create your small language models, and then you create your LoRA adapters, and then you have your RAG pipeline. And then what we do is we create something called function calling, which is the start of agents, the start of a model doing something for you. So imagine if you say: all right, summarize this email, send the email out for me, and then schedule a calendar invite with the stakeholder. AI will now do that via agentic workflows, and I can actually tell it to, like a human conversation, via natural language. That's the future of AI, right on the horizon; we have the technology ready today. So we're excited to showcase this at CES in Vegas with our partner Intel, and hopefully more. And thank you for the time, and I appreciate the platform to talk about our vision and why we're aligned with American principles.
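[Editor's note: a minimal sketch of the function-calling pattern described above: the model emits a structured JSON "call", and a dispatcher routes it to a registered tool. The tool names, JSON shape, and behaviors here are invented for illustration; real function-calling APIs and Edgerunner's implementation will differ.]

```python
import json

# Registry of callable tools. Each tool is an ordinary function the
# model is allowed to invoke; the bodies are trivial stand-ins.
def summarize_email(body: str) -> str:
    return body.split(".")[0] + "."   # stand-in summary: the first sentence

def schedule_meeting(person: str, topic: str) -> str:
    return f"Calendar invite sent to {person} about {topic}."

TOOLS = {"summarize_email": summarize_email, "schedule_meeting": schedule_meeting}

def dispatch(model_output: str) -> str:
    # Parse the model's structured "call" and route it to the named tool.
    call = json.loads(model_output)
    fn = TOOLS[call["function"]]
    return fn(**call["arguments"])

# A model fine-tuned for function calling would emit something like:
model_output = json.dumps({
    "function": "schedule_meeting",
    "arguments": {"person": "the stakeholder", "topic": "Q3 logistics plan"},
})
print(dispatch(model_output))
```

The key design point is that the language model only produces structured text; the dispatcher, under the application's control, decides what actually executes, which is one way of keeping a human-defined boundary around what the agent can do.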