Andy Kurtzig, founder and CEO of Pearl (formerly JustAnswer), offers a crucial insight: his platform addresses the three most significant challenges facing AI today: hallucinations, liability, and monetization.

 
 
 


Andy:

Pearl will give you an AI answer, and then an actual doctor will show up, read the conversation, read the answer, and give it a trust score from one to five.

Andy:

We focus primarily on text and professional services: medical, legal, veterinary, accounting, tech support, home improvement, these kinds of areas. That focus yields a very different approach to how we handle things, and that approach results in a significant improvement in quality for the user. We've got Pearl.com as part of a collection of businesses, and across those businesses we have about 12,000 experts at the beck and call of our Pearl.com users, there to jump in and verify these answers. We've figured out how to monetize it, and that is through real live human expert professionals; the human expert needs to be in the loop. People understand that a doctor costs money, a lawyer costs money, an accountant costs money, a fraction of that when you do it through AI and online, but people are used to paying money for it, and that creates a business model that improves the quality, reduces the risk, and creates monetization opportunities.

Craig:

Building multi-agent software is hard. Agent-to-agent and agent-to-tool communication is still the Wild West. How do you achieve accuracy and consistency in non-deterministic agentic apps? That's where AGNTCY, A-G-N-T-C-Y, comes in. AGNTCY is an open-source collective building the internet of agents. And what's the internet of agents? It's a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. Build with other engineers who care about high-quality multi-agent software. Visit agntcy.org and add your support. That's A-G-N-T-C-Y dot O-R-G. Visit them today to support high-quality multi-agent software. Go ahead then.

Andy:

Well, it's nice to be here, and happy to give my background. So my name is Andy Kurtzig. I'm the founder and CEO of Pearl, Pearl.com, like a pearl of wisdom, and what we do is we take solo professionals and turn them into AI superheroes, using essentially agents to do the sales and the marketing and the customer service and the account management and payments and all that for them, so they can focus on what they love, the actual human judgment and expertise part, and not the other stuff. And I've been doing this, sort of blending AI and humans together, for roughly 20 years now, and I'm really proud of how far we've come.

Craig:

Yeah, when did you found Pearl?

Andy:

So Pearl is essentially a subsidiary of JustAnswer, which I founded 20 years ago. Oh, okay, great. Yeah, it's a collection of brands, JustAnswer, Pearl. I've got a couple of other businesses as well.

Craig:

Yeah, so Pearl, as I understand it, is an AI platform that combines LLMs with a network of human experts to verify responses. It must be tough in this environment to compete with ChatGPT or Perplexity. Can you tell me how you compare to those guys and how you manage that?

Andy:

Of course. So, first of all, we're very focused on the professional services sector. We're vertical; they're horizontal, doing everything from coding to video and images and everything in between. We focus primarily on text and professional services: medical, legal, veterinary, accounting, tech support, home improvement, these kinds of areas. That focus yields a very different approach to how we handle things, and that approach results in a significant improvement in quality for the user, for the consumer. So, specifically, we're 22% more accurate than ChatGPT and Google Gemini in professional services.

Craig:

And professional services. It runs the gamut from legal to healthcare. How broad is that vertical?

Andy:

Yeah, so if you take all the subcategories, even within medical you've got everything from oncology to dermatology to endocrinology, and you put them all together, it's roughly 700 different categories that we focus on, and they all have some very similar characteristics. Number one is that quality really matters, and humans are a key element of delivering that quality. Obviously, in medical or legal, for example, making a mistake is extremely unfortunate and costly, and not a good place for lots of hallucinations.

Craig:

Sure, and how does that work? You have this network of human experts, and someone asks an LLM a question. Just describe how it works.

Andy:

Yeah. So it's fairly straightforward. You go to Pearl.com, you type in your question. It works kind of like ChatGPT or Gemini or Perplexity at the beginning.

Andy:

You type your question, and then it'll go back and forth with you, a little more like some of the deep research tools out there do, to qualify and actually get rid of the need for fancy prompting, because we find that's a big source of hallucinations: the prompt that the average person puts in isn't great, and so then the output isn't great. So we take care of all that. You just put in, in your own language, what you're trying to do, I have a rash on my foot or whatever it is, and we will go back and forth with you to clarify the question, just like a doctor in a live environment would, and then we actually give you the AI answer from Pearl.com. That's just the beginning. That's what's 22% more accurate and, more importantly than the 22% more accurate, it's 41% less wrong. When you're dealing with medical, legal, accounting, these kinds of categories, being wrong is what you're really trying to avoid, because that's where the catastrophes happen.

Andy:

So that's the first step: a higher-quality AI LLM answer. But then the additional twist here is that we will have an actual human expert verify the accuracy of that AI response, of Pearl's own response. So in that case, with the rash on your foot, Pearl will give you an AI answer, and then an actual doctor will show up, read the conversation, read the answer, and give it a trust score from one to five, so that you then know: okay, can I trust this thing or not? And if you can, great, have a nice day. And if you can't, well, then you've got to keep digging.
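The flow Andy describes, an AI draft first and a human trust score second, can be sketched roughly as follows. Everything here, the names, the callback shapes, the one-to-five range check, is an illustrative assumption, not Pearl's actual implementation:

```python
# A minimal sketch of the answer-then-verify flow: an LLM drafts an answer,
# then a human expert assigns a 1-5 trust score. All names and shapes here
# are illustrative assumptions, not Pearl's actual implementation.
from dataclasses import dataclass

@dataclass
class VerifiedAnswer:
    question: str
    ai_answer: str
    trust_score: int  # 1 (don't trust) .. 5 (fully trust), set by a human expert

def answer_with_verification(question, draft_answer, expert_review):
    """Step 1: draft an AI answer. Step 2: have a human expert score it."""
    ai_answer = draft_answer(question)          # stand-in for an LLM call
    score = expert_review(question, ai_answer)  # human in the loop
    if not 1 <= score <= 5:
        raise ValueError("trust score must be between 1 and 5")
    return VerifiedAnswer(question, ai_answer, score)

# Stubbed usage: a canned draft, and a reviewer who rates it a 4 out of 5.
result = answer_with_verification(
    "I have a rash on my foot",
    draft_answer=lambda q: "Likely contact dermatitis; see a doctor if it spreads.",
    expert_review=lambda q, a: 4,
)
print(result.trust_score)  # → 4
```

The key design point from the interview is that the score comes from a separate human channel, not from the model grading itself.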

Craig:

Yeah, you know, there are a lot of these systems now that have an LLM verifier that looks at the answer from another LLM and scores it. Is any of this automated, and if not, how do you do it in real time? I mean, do you have experts in each particular field on call 24-7? Or, yeah, how do you manage that?

Andy:

We do. We've got Pearl.com as part of a collection of businesses, and across those businesses we have about 12,000 experts that are at the beck and call of our Pearl.com users. They're there to jump in and verify these answers, read them, and give the trust scores. And then, if you want to as a user, you can choose to pay and actually end up on a phone call or a live chat with the doctor, about whatever you want and for as long as you want. So that's kind of step three of the experience, and that's where the money comes in as well; that's where the monetization happens. The AI answer is free, the trust score from the human expert is free, but then if you want to talk with them, you can pay a monthly fee and talk to them all you want.

Craig:

And then you pay a monthly fee; it's a flat fee, not per call. Yeah, that's right. Okay, and so who are the users in this? Are they casual users that want one answer for one thing, or is it other medical professionals, or other lawyers, for example, or paralegals? I mean, is it more a B2C or a B2B product?

Andy:

It's interesting. We launched it as a B2C product, and we're finding a lot of power usage out of the B2B group. So, like you said, lawyers or doctors or mechanics or accountants that just need help. I'll give you a funny story. In the mechanic category, we're seeing a lot of mechanics that might be in a small town, like the middle of Alaska or something, and they're the only mechanic in town. You can imagine they might know how to deal with a Ford or a Chevy, but then somebody brings them a Volvo or a BMW or a Saab or whatever it is.

Andy:

They don't know how to repair those cars, but they're the only mechanic in town, and so they turn around and start using us to be able to service those cars as well.

Craig:

Well, that's fascinating. And how quick is the response? I mean, if I'm a mechanic among your experts, is it through my smartphone that I get an alert? I look at it, and how much time does it generally take to score the LLM response?

Andy:

It usually takes a few minutes. It's not very long yeah.

Craig:

Wow, yeah. And then you pay the expert for their time and take a commission, in effect. Is that right?

Andy:

Yeah, well, so for the verifications we pay the expert for their time and then give it to the users for free. So we're eating that cost in the hopes that some customers will choose to then become subscribers.

Craig:

Yeah, and what is the subscription fee?

Andy:

It's about 30 bucks a month.

Craig:

I see, yeah. So similar to any of the LLM subscriptions. Yeah, yeah.

Andy:

Except that includes actually talking live, on the phone or on chat, with a human doctor or a lawyer.

Craig:

Yeah, that's fascinating. And on this back and forth: you ask it a question, and Pearl asks these nuanced follow-up questions to better understand your intent. Have you refined, I mean fine-tuned, the models you're using with answers collected from these experts?

Andy:

Of course. So a key part of what makes it 22% more accurate than ChatGPT and Google Gemini is the RAG over all of our data. We've got about 30 million past conversations between doctors and their customers, and lawyers and their customers, et cetera, in our database that we can then use to increase the quality of the answers.
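The retrieval step Andy alludes to, grounding a new question in similar past expert conversations, might look something like this toy sketch. The word-overlap scoring, the two-item corpus, and the prompt format are all simplifications invented for illustration; Pearl's actual RAG system is not public:

```python
# Toy sketch of retrieval-augmented generation over past expert conversations,
# in the spirit of what Andy describes. The scoring (naive word overlap), the
# corpus, and the prompt format are invented for illustration only.
import re

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(corpus, query, k=1):
    """Rank past conversations by word overlap with the query; keep top k."""
    ranked = sorted(corpus, key=lambda doc: len(tokens(query) & tokens(doc)), reverse=True)
    return ranked[:k]

def build_prompt(corpus, question, k=1):
    """Prepend the most relevant past conversation(s) as grounding context."""
    context = "\n".join(retrieve(corpus, question, k))
    return f"Context from past expert conversations:\n{context}\n\nQuestion: {question}"

past = [
    "Patient asked about a foot rash; doctor advised hydrocortisone cream.",
    "Client asked about forming an LLC; lawyer outlined state filing steps.",
]
prompt = build_prompt(past, "What should I do about a rash on my foot?")
print("hydrocortisone" in prompt)  # → True
```

A production system would use embeddings and a vector index rather than word overlap, but the shape, retrieve then prepend, is the same.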

Craig:

Yeah, and that fine-tuning. What's the base model, or do you use multiple LLMs?

Andy:

We use multiple LLMs underneath, and we've built our own eval system as well, because we've been doing this for so long, that evaluates the various different models for various different purposes. So every time you use Pearl, at least two different LLMs are being used behind the scenes. It's all invisible to you as the user.
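A routing layer like the eval-driven model selection Andy describes could, in spirit, look like this. The model names, categories, and scores are made up; the point is only that per-category eval results decide which LLMs get used:

```python
# Illustrative sketch of eval-driven model routing: per-category eval scores
# decide which LLMs handle a request, so at least two models can be combined
# behind the scenes. Model names and scores are invented.
EVAL_SCORES = {
    ("medical", "model_a"): 0.91,
    ("medical", "model_b"): 0.84,
    ("legal", "model_a"): 0.78,
    ("legal", "model_b"): 0.88,
}

def pick_models(category, n=2):
    """Top-n models for a category, best eval score first."""
    candidates = [m for (c, m) in EVAL_SCORES if c == category]
    candidates.sort(key=lambda m: EVAL_SCORES[(category, m)], reverse=True)
    return candidates[:n]

print(pick_models("legal"))  # → ['model_b', 'model_a']
```

With n=2, every request touches at least two models, which matches the "at least two different LLMs behind the scenes" claim.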

Craig:

Yeah, and so, you know, if you're dealing with healthcare or legal, there's a potential liability issue. How do you handle that?

Andy:

That is an excellent question. So let's start with how the LLMs handle that. How does ChatGPT handle that? How do Google Gemini, Perplexity, et cetera, handle it? The short answer is they don't, they can't, and they're liable. We've got a bunch of data on that, everything from Supreme Court justices to law journals to consumers and their intentions. We did a survey recently and asked consumers: if an LLM was wrong, what would you do about it? 40% said they would consider legal action against AI companies, and 50% of Americans believe AI platforms are legally responsible for the answers they provide.

Andy:

So that's a giant flaw in the foundational models' game plans. There are actually three big flaws; that's one of the three. We can chat about the other two if you like, but we'll focus on risk for now. It's a huge flaw, and they're starting to see the pain of it, right? You've seen the lawsuits, the kid that killed himself because AI told him to, and all these things. I mean, it's just tragic, and the family of that kid, and the families of some others that have been given hallucinations, quote unquote, are now suing Google and some of these other companies, and I think they're going to win. They don't have an argument against it.

Andy:

Now, with all that said, there is this thing called Section 230. It's an act of Congress that has protected Google and Facebook and many of these other companies from that exact kind of lawsuit in the past. But there's a key way that Section 230 works: it works for platforms. A platform or a marketplace is where somebody might come, like even on Google: you go to Google, you get a billion search results, you click on one of them, and you end up on some other third-party site, and that third-party site is the one that's giving you the answer. Google can say, Section 230, we're just the platform, you can't sue us, and that's worked like a charm for them for years. Until now, when they're not doing that. They're not sending you to some other site; most of the time they're not even attributing any other sites. They're just telling you, you're going to be fine, don't worry about the cancer, or whatever it is you're asking about, and they're liable for that.

Craig:

And when you say they're not going to a third party, it's because they're letting their LLM answer.

Andy:

That's right, that's right. And they often don't even know the source. All that data just kind of gets sucked in, and then they give you the answer, and they don't even know if it's true or not. They also don't know where they got it, so they don't even attribute it. Not that that would solve it, but they're not even close to solving it. And that's the piece where Pearl actually steps in and can really help, with the human expert verifications and conversations, which turns us back into a platform again, which gets Section 230 protection again, and that solves the big problem.

Craig:

Agent-to-agent and agent-to-tool communication is still the Wild West. How do you achieve accuracy and consistency in non-deterministic agentic apps? That's where AGNTCY, A-G-N-T-C-Y, comes in. AGNTCY is an open-source collective building the internet of agents. And what's the internet of agents? It's a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. Build with other engineers who care about high-quality multi-agent software. Visit agntcy.org and add your support. That's A-G-N-T-C-Y dot O-R-G. Visit them today to support high-quality multi-agent software. I see, and the liability, if there is a wrong answer, does that extend to the experts, or are they protected by 230?

Andy:

No. So the experts are the ones giving the answer, and so they're the ones held liable in this case. But they've got their own insurance and their own ways of dealing with that; they know how to handle medical malpractice or whatever it is. That's something they're used to and have protections for.

Craig:

Yeah, and have you tested? How uniform are the answers across experts? That's something I've always wondered.

Andy:

Yeah, you know it's interesting. You kind of, in some ways you want uniformity, but in other ways you don't right. So when you've got a medical condition, you don't want everybody telling you sorry, you're out of luck, have a nice life. You want a bunch of different angles and a bunch of different perspectives and points of view, and sometimes the first one says, sorry, you're out of luck, have a nice life. The next one says, oh, there's this new thing over here. Have you heard of that? You might want to try this. I don't know if it'll work. And so some of the differences are actually useful, especially when you're dealing with tricky diagnoses and things like that.

Craig:

Well, that's interesting. So if I have a medical question, I go on Pearl, I ask it, I get an LLM answer. Do I have to prompt it for the trust score?

Andy:

Nope, it'll prompt you, and sometimes it'll even just automatically do it for you, depending on the situation.

Craig:

Right, and then I get the trust score. Do I have to ask to talk to the expert then?

Andy:

Yeah, if you actually want to have a phone call or a live chat with the expert, you have to ask for that, and you have to sign up for the subscription. Sure, yeah.

Craig:

But can I then talk to two experts, get a second opinion if I don't like the first one?

Andy:

Yeah, yeah, with the subscription you can absolutely do that, and that's one of the beauties of this: you can get multiple opinions really easily. Part of the reason a lot of people don't get second or third or fourth opinions is because it's so hard. You've got to make an appointment for three weeks from Tuesday, and you've got to drive down there, and you show up and they're late, and whatever else happens, you know. Now you can just bam, bam, bam, get three different opinions very quickly.

Craig:

Yeah, actually that sounds amazing. I mean, did you consider charging per session or per question? Because a lot of times people only want one answer; they don't necessarily want a subscription. Or can you cancel the subscription, sign up for a month and cancel immediately, and then you've spent $30 to get an expert opinion, which is extremely cheap? How does that work?

Andy:

Yes. So, first of all, it's very easy to cancel. You can just sign up, get your one or two or three opinions, or whatever you need; you've got a whole month to do it, and then you can cancel and only pay 30 bucks for all that. But what we find is people really like the ability to ask for a second and a third opinion, and a lot of these things end up being chronic issues, so you want to be able to ask over time as well. And this is a way we can price it lower. Obviously, if you just showed up at a doctor's office without insurance and said, hey, I want to pay you, they're going to charge you hundreds of dollars for that, plus the drive time and everything else you have to deal with. We're charging $30, and you can get multiple opinions that way.

Craig:

Yeah, yeah, okay. I'm sorry, I'll cut this out; I just had a message pop up that knocked my train of thought askew. No problem. I'm reading some of these notes. Yeah, I'm seeing this note about why Pearl is the first truly monetizable AI search platform. Can you go through that?

Andy:

Yeah, so let me frame this in context and then I'll answer that question specifically. Sure. I believe there are three fundamental problems with the foundational models out there, the ChatGPTs and the Google Geminis and Perplexities and such. And we've already talked about one, which is risk. They're liable for huge amounts of risk. I mean, they're going to end up with billions and billions of dollars' worth of lawsuits from everybody that they hallucinated to and caused damage to. And it's not a big deal when you're writing a poem for the family dinner party, maybe, but when you're dealing with medical, legal, these kinds of categories, it is a big deal, and they're not taking that seriously enough. So they're giving out wrong answers, and they're hurting people, and they're hurting pets, and they're hurting people's legal situations, et cetera. So risk is a big one. The second issue of three is quality, the underlying quality, right? We're seeing hallucinations like crazy, and in fact, with these new versions of models that just came out, hallucinations actually went up, not down. This is a fundamental part of how these foundational models work: they hallucinate. That is just embedded in how they structurally work. And so those hallucinations, again, are a big problem. Everybody's got lots of ideas, but nobody has a clear path on how to get past the hallucinations; today they just seem like a part of these systems.

Andy:

And then the third, of course, is monetization. We're seeing in the news, left and right, that not only these AI companies but many companies in and around the space are losing money hand over fist. They're spending this much on GPUs from NVIDIA, and they're getting this much in revenue, if they're getting any revenue at all. NVIDIA is making a lot of money off this whole thing, but the foundational models and big players are all losing lots of money on AI today. So those are the three big fundamental problems with the foundational models: quality, risk, and monetization. You asked specifically about monetization, so let's talk about that.

Andy:

They don't have a good monetization method yet. Charging 20 bucks a month kind of works for early adopters like you and me, and small businesses and stuff, but that's not the model that's going to end up winning. Something more like what Google does, with a free model, is what's going to win to reach the masses of consumers. And we're seeing the free models out there; that's lovely, but they haven't figured out how to monetize them yet. So where are the ads going to go? And what are you going to do when you ask what's the best tennis shoe to buy and the answer is Nike, but the advertiser is Adidas? Right, what are you going to do? They don't know what they're going to do with that.

Andy:

And it's not like Google, where you can just say, well, we kind of suck anyway, here's a billion links and a billion ads, and you figure it out.

Andy:

They're trying to tell you an answer and when you start getting into the answer business, it's tough to both give a great quality answer and put ads in there.

Andy:

You can imagine asking for the score of the NBA game, and one model decides to say, well, before I tell you the score of the NBA game, let me tell you about the all-new Toyota Tundra. Nobody would go to that model; they'd go to the next model down the list. So there are a lot of question marks still about how to monetize these things. And we figured out how to monetize it, and that is through real live human expert professionals; the human expert needs to be in the loop. People understand that a doctor costs money, a lawyer costs money, an accountant costs money. It's a fraction of that when you do it through AI and online, but people are used to paying money for it, and that creates a business model that improves the quality, reduces the risk, and creates monetization opportunities.

Craig:

Yeah. Before I go on to what's going to happen to these LLM-based search models, or engines, or services, whatever: say I ask Pearl about a bump on my nose that may be malignant.

Craig:

It gives me an answer, I get a trust score, but I want to talk to an expert. This is all within my subscription, I imagine. But once I have that expert on the phone, he answers the question, and I say, yeah, but you know, I have ringing in my ears, what should I do about that? I mean, you can imagine that the conversation with the expert goes on for an hour, as suddenly this person has an expert on the line and can ask all the questions they want to ask. How do you prevent that?

Andy:

We don't prevent that. We love that. At the end of the day, our mission is to help people, and we're happy to help people. Not only do we help people in that kind of situation with that doctor, but we have all kinds of specialists. I mentioned we have 12,000 different experts, so sometimes the person that's going to talk to you about the bump on your nose is a different person than the one who's going to talk to you about the ringing in your ears, and we're happy to connect you with the ear doctor, or the ENT, to talk about the latter and the dermatologist, or whatever it is, to talk about the former.

Craig:

And that's all within the monthly subscription.

Andy:

Yep.

Craig:

Yeah, yeah.

Andy:

It's amazing.

Craig:

That is remarkable, and so what's going to happen to AI search?

Andy:

Well, I think they're going to have to evolve. I think we all have to get used to the fact that this is about as good as the quality is going to get. There will be lots of optimizations, but the hallucinations are endemic to the way these things work, and we're not going to get to 99% accuracy like you would need, and you'd need even more than that for healthcare and legal and things like that. Our estimates in professional services are that they hallucinate about 37% of the time. So you have to recognize it's going to hallucinate, and that means there are certain jobs it can do and certain jobs it can't, and we just need to get good at recognizing which kinds of jobs to use it for and which not. That's on the quality side. I will say we should expect that prices will continue to come down, because the competition, these LLMs, are becoming commodities. Between all the open-source models and DeepSeek and everybody else coming into the market, the cost to users will continue to fall.

Craig:

Yeah, are there cases where you think AI can be trusted more than others?

Andy:

Sure, anything that I can evaluate properly: the kind of logo I want for my business, these kinds of things I'm an expert on, you know. If it gets it wrong, I don't like that letter, I don't like that email, I'm going to change this, this, this, and this. I don't like that logo, try again, try again, try again.

Andy:

Right, if I'm a good judge, if I'm an expert myself in that topic, and we're all experts at many things, then it's fine, because we can judge it; we know where to use it and where not to use it. The problem is when we're not experts, when we use this for medical questions or legal questions or veterinary questions, and AI tells us something, and by the way, it tells us with 100% confidence every time: oh no, don't worry about that cancer, you're going to be fine, just eat some rocks and it'll go away. That's what these things are doing out there. But in the absence of that knowledge, I'm not an oncologist, I wouldn't know. So I get this information and I trust it. That's where you've got to be careful, that's where the big mistakes happen, and that's where you need a human in the loop, a real live human expert.

Craig:

Are there cases where the human expert says, look, this issue is too complicated, or it really requires a face-to-face meeting with a professional?

Andy:

Of course, of course. Yeah, I mean, you can imagine all kinds of scenarios, right? A surgery is required on your pet; I can talk to you about what that surgery might look like, but I'm not doing surgery online. I can't go represent you in court on Tuesday, for example. These kinds of things, you need to go face to face.

Craig:

Yeah, so do you think this is the model that others will eventually adopt? And what about corporate executives that are pouring money into developing their own enterprise LLMs?

Andy:

Let's see. So, in terms of whether this is a model that others will adopt: yes, we're big believers in that, and in fact we've built an API. We've got a product we call internally "experts as a service." You can go to pearl.com/api and stream our experts right into your application through our API, just like you would stream OpenAI into your application. You can stream real-life human experts, doctors, lawyers, et cetera, right into your application or for your use case. So we're big supporters of that, and we do think this is the future.
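An "experts as a service" call like the one Andy describes might be assembled along these lines. The field names, the optional category pin, and the auth header are all hypothetical, invented for illustration; the real pearl.com/api documentation would define the actual contract:

```python
# Hypothetical request builder for an "experts as a service" API like the one
# Andy describes at pearl.com/api. Field names, the category option, and the
# auth header are invented for illustration; the real API docs define the
# actual contract.
import json

def build_expert_request(question, category=None, api_key="YOUR_KEY"):
    """Assemble headers and a JSON body asking for an expert-backed answer."""
    payload = {"question": question}
    if category is not None:  # caller may pin a category, e.g. "medical"
        payload["category"] = category
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return headers, json.dumps(payload)

headers, body = build_expert_request("Is this contract clause enforceable?", category="legal")
print(json.loads(body)["category"])  # → legal
```

As Andy notes later in the conversation, a caller could also omit the category and let the service route the question by AI instead of pinning a specialty.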

Andy:

The combination of the best of what humanity brings with the best of what AI brings gives you the best possible solutions. That's where I think the world's going. It solves the quality problem, it solves the risk problem, and it solves the monetization problem; it's the hat trick of the fundamental problems with foundational models. In terms of enterprise, I think everybody's doing it a different way. Some are doing it better than others; most are doing it poorly today. But you've got to be doing it. If you're not at least playing with it, you're going to get left behind. So I think that's what's causing people to play with it and try to do things with it, and I think we're learning a lot very quickly about how to make these things useful.

Craig:

Yeah, the API is fascinating. So if I built a chatbot to talk about history, you know, a GPS-based chatbot that would tell you the history of wherever you are, do you have the experts on call to back up answers? It's something I'm personally interested in. I love history.

Andy:

Yeah, so we'd have to look through the expert categories to see what kinds of experts you need. We've got, you know, 700 different categories, but the big buckets are medical, legal, veterinary, accounting, tech support, home improvement, appraisals. We've got a lot of categories, so we'd have to go through them and see if any match up with what you need.

Craig:

Yeah. Well, actually that raises another question: how does the LLM know which expert to route the question to?

Andy:

There are multiple different ways. In the API, are you asking, or on Pearl.com? Well, on Pearl.com, to begin with. Yeah. On Pearl.com we use AI, of course, to route it to the right one. And in the API, we can do it either way: we can use AI to route it to the right one, or you just tell us, right? I'm a medical site and I only want doctors; it's just going to go to doctors, and you can pick the specialty or not when you send in the API call.

Craig:

Yeah, and so this API, I mean, that's really interesting. A hospital, on its portal, could have a white-labeled question chatbot.

Andy:

Exactly.

Craig:

That routes through the API to your experts.

Andy:

Exactly, and there are multiple use cases for a hospital. There's that sort of B2C consumer application: hey, if you're really worried about this and you want to talk to a doctor about it at 3 a.m., we've got one ready for you right now. So that's one application. They can do that either with the AI combined with the human, like we do with Pearl, or straight to the human doctor, if they want to do it that way.

Andy:

Other use cases we're finding in the healthcare environment come down to quality being so critical in healthcare. If you're a hospital and you're even experimenting with your own LLM, quality and trust are almost always the first questions being asked. How do we ensure we're using LLMs in a trustworthy, high-quality fashion? That's where they can stream in live doctors to verify things, even for internal uses. So even if you're a nurse practitioner who wants to use this, or a doctor who wants to use this, how can we raise the quality by the time it even gets to that doctor, so that it's more trustworthy than just a hallucinating LLM?

Craig:

Yeah, and you were saying that you're a platform. If I'm a hospital and I want this functionality, but I don't want to leave it to your experts, can I use the API and have it route to my own experts?

Andy:

We haven't done that before. Usually people would have their own system for that, so if your plan is to route to your own experts, you may or may not need that part of the platform. You can certainly use Pearl for the AI part, and then do your own thing with your own experts at that point. That's easy to do. But yeah, I guess it's doable in that way.
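The split Andy sketches, using the AI for the draft answer and then verifying with your own in-house experts, could look something like the pipeline below. This is a hedged sketch: the `ReviewQueue` class and its methods are invented for illustration; only the 1-to-5 trust score mirrors what Andy describes Pearl's experts assigning.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Answer:
    question: str
    draft: str                         # the AI-generated answer
    trust_score: Optional[int] = None  # 1-5, set by a human expert

@dataclass
class ReviewQueue:
    """Hypothetical in-house verification queue: AI drafts go in,
    your own experts read them and assign a 1-5 trust score."""
    pending: List[Answer] = field(default_factory=list)

    def submit(self, question: str, ai_answer: Callable[[str], str]) -> Answer:
        # ai_answer stands in for a call out to the AI service.
        ans = Answer(question=question, draft=ai_answer(question))
        self.pending.append(ans)
        return ans

    def verify(self, ans: Answer, score: int) -> Answer:
        # A human expert has read the draft and scored it.
        if not 1 <= score <= 5:
            raise ValueError("trust score must be 1-5")
        ans.trust_score = score
        self.pending.remove(ans)
        return ans
```

The design point is simply that the AI call and the expert review are separable stages, which is what makes the "bring your own experts" variant plausible.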

Craig:

Yeah, I'm just thinking that there may be organizations that don't want to spend the money when you've already created the system. And then, can you talk about the research you've done into AI accountability and trust? Can you talk about some of the findings?

Andy:

Yeah. So we asked whether consumers would pay for AI if it meant better, more accurate answers, and 42% said yes, they would. Another thing we learned was that nearly 47% of Americans would trust AI more if humans validated its answers, which is obviously part of what led us down this path. And then we already talked about some of the data on the legal liability side: Americans would hold these LLMs liable, and 40% say they would consider legal action against AI companies if they were given wrong answers.

Craig:

Yeah, and how are you getting this out there? I mean, it seems the market would be endless depending on your distribution, and you have JustAnswer, which has a large user base. Are you primarily looking at integrating with other platforms and services, as I said, with hospitals or maybe law firms, or are you primarily marketing the direct service through Pearl.com?

Andy:

That's where you come in, Craig. So we're doing both. Obviously, we have the API so that people can use it in whatever creative ways they can imagine, and then we're also marketing Pearl.com directly so that we can be on the cutting edge and learn how consumers are using it, what they want and don't want. We try to eat our own dog food, so that when people are using our API, for example, they're getting the best of the best and the latest thinking on this stuff.

Craig:

And how many users does JustAnswer have?

Andy:

In terms of traffic, we were just named SimilarWeb's fastest-growing site in the world last year, number one; ChatGPT was number nine, by the way. We've got about 60 million visitors a month.

Craig:

Wow. This is fascinating. I mean, I'm definitely going to use it. I don't know if I'd want a subscription running for a year, but it's inexpensive enough that I could dip in and out when I have questions. And I love the idea that you could connect it to your own site by API. How do you charge there? Is it usage-based, or again just a monthly subscription?

Andy:

That one's typically usage-based. It depends on the size and scale, and those are custom prices at the moment. But yeah, it's typically done on a per-usage, per-question basis.

Craig:

Right. If someone's using the API and can't find a category to fit their vertical, can they talk to you about developing an expert cohort for that vertical? As you grow, I imagine you're broadening the categories.

Andy:

Yeah, but we do have 700 categories, so we've got a lot to start with, and it is a lot of work to get one set up. It's not a matter of "oh, wouldn't it be great if you had this" and then the next day we've got it. You've got to build a liquid network. In order to have the response times that people demand, you know, a few minutes kind of thing, you've got to have a lot of professionals on the other end. And in order to have enough professionals, you've got to have a lot of consumers too, to build up the network effects and build up the demand. So it is a big endeavor to add a new category, and we do so thoughtfully and carefully. But we've done that over the last 20 years; we've built up about 700 categories.

Craig:

Yeah, you know, I'm just thinking. I spent my career at the New York Times, and the Times owned About.com for a long time. Who bought About.com? Do you remember? It wasn't you guys, huh?

Andy:

It wasn't us. I remember the deal very well, but who did buy them?

Craig:

Yeah, well, that's okay. I was just curious whether it was you guys. But in these API integrations, do people generally pass it along to users as a loss leader, just as a service, or are they adding a margin when they charge so it covers their cost to you?

Andy:

I've seen both. The obvious example is where they do the latter: they charge a premium and keep the difference, and that's their business model, which is a perfectly reasonable way to do it. The other thing we've seen that's creative and clever is with one of the giant secondary marketplaces for car parts; you can guess the name of the company. We have a deal with them where they make it free for their consumers, to help them select the correct car parts for their vehicle. The problem they had was that a lot of people were coming to this giant site thinking they knew the part they needed to fix their car, and many times it was the wrong part, so they'd return it.

Andy:

That was a big cost. The other problem was that a lot of people would just be uncertain. They'd get into the conversion funnel, they'd want to buy the part, but then: "I'm just not sure enough that this is the right one. Screw it, I'm going to go down to Pep Boys or AutoZone, or whatever the local shop is, and buy it there." So they brought us in to have mechanics do that task for their consumers. Here's the problem: which part do I need? The Ford mechanic can tell you, "oh, it's this part for your particular car." Now you can buy with confidence. So their conversion rates went up, meaning they made more money by selling more parts, and their returns went down, so they saved money on the other side too. It ended up being a very good ROI.
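The return on investment Andy describes comes from two directions at once: extra orders from higher conversion, plus fewer costly returns. A back-of-the-envelope way to frame it (every number and parameter name below is made up purely for illustration; Andy gives no figures):

```python
def monthly_impact(visitors: int, base_conversion: float, conversion_lift: float,
                   avg_order: float, base_return_rate: float,
                   new_return_rate: float, cost_per_return: float) -> float:
    """Estimate the monthly gain from expert-verified part selection.

    Combines the two effects from the conversation: more completed
    purchases, and fewer wrong-part returns. Illustrative only.
    """
    base_orders = visitors * base_conversion
    new_orders = visitors * (base_conversion + conversion_lift)
    extra_revenue = (new_orders - base_orders) * avg_order
    returns_saved = (base_orders * base_return_rate
                     - new_orders * new_return_rate) * cost_per_return
    return extra_revenue + returns_saved
```

With made-up inputs such as a million visitors, a 2% baseline conversion lifted to 2.5%, an $80 average order, and returns falling from 15% to 5% at $20 per return, the gain lands around $435,000 a month. The numbers are invented; the point is only that both terms are positive, which is why the ROI story works from both sides.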

Craig:

Yeah, I'm really fascinated by this. On the usage pricing for the API: would it be possible to build a site that answers questions using the API and charges less than the $30?

Andy:

It would be possible, it would be.

Craig:

Yeah. Is that something you guys would have a problem with, or encourage, or...?

Andy:

It depends on the partner. The reason that might work is if a partner had a big brand, for example; then they might be able to pull it off because they're paying less in customer acquisition costs and things like that. So that might work for them.

Craig:

Okay. Is there anything I haven't covered that you want to say?

Andy:

I don't think so. Just, you know, I appreciate the work that you're doing, and I appreciate our time today.

Craig:

Yeah, I'm thinking of ways I could integrate this into my site or into my podcast, because it's an amazing idea that I really haven't heard anywhere before. So I would think this will be quite successful. Like many things, it depends so much on exposure and distribution.

Andy:

I just appreciate your time, Craig. This has been fun. Okay, great.



 
 
 

 Eye On AI features a podcast with senior researchers and entrepreneurs in the deep learning space. We also offer a weekly newsletter tracking deep-learning academic papers.


Sign up for our weekly newsletter.

 
 
