Vall Herard, CEO of Saifr.ai, explores how AI is transforming compliance in financial services. Vall explains how Saifr integrates into Microsoft Word, Outlook, and Adobe, reducing compliance risks in marketing, emails, and AI chatbots.

 
 
 


Vall Herard:
Because large language models are making their way into every aspect of what we do. The idea of using a generative AI model to generate content where someone can just copy and paste that and send it out, for example. The idea behind Saifr is adding a layer on top of any large language model to ensure that the content is closer to being compliant once it's generated, and also, in instances where we detect that it's non-compliant, to ensure that we can provide suggested language to help make it compliant. We think that by adding that safety layer, that guardrail, that will lead to wider adoption, because in regulated industries, that's one of the barriers.

Craig:
What does the future hold for business? Ask nine experts and get ten answers. Bull market, bear market, rates rising or falling, inflation going up or down. Can somebody please invent a crystal ball? Until then, over 40,000 enterprises have future-proofed their business with NetSuite by Oracle, the number one cloud ERP, bringing accounting, financial management, inventory, and HR into one fluid platform. With one unified business management suite, there's one source of truth, giving you the visibility and control you need to make quick decisions with real-time insights and forecasting. You're peering into the future with actionable data. If I were a larger organization, this is the product I'd use. Whether your company is earning millions or even hundreds of millions, NetSuite helps you respond to immediate challenges and seize your biggest opportunities. Speaking of opportunities, download the CFO's Guide to AI and Machine Learning at netsuite.com/EYEONAI. That's netsuite.com/EYEONAI, all run together, to get the CFO's Guide to AI and Machine Learning. The guide is free to you at netsuite.com/EYEONAI. Okay, so why don't you start by introducing yourself to listeners and how you got to Saifr, and then we'll talk about what Saifr does and its partnership with Microsoft.

Vall Herard:
As it says there, my name is Vall Herard, and I'm the CEO of Saifr.ai. As far as my background is concerned, I studied mathematical economics and engineering. I actually had a professor of economics at Syracuse University who told me that there were these things called derivatives on Wall Street, and that if you could solve a linear set of equations, it might be worth looking into. And so I ended up taking a job at the Bank of New York as an analyst, and then I worked at various places on Wall Street, went to NYU, studied financial engineering, or mathematical finance as it was called back then, and continued to work on the Street. I eventually took a detour because I wanted to work as a consultant, and so I built risk management systems from a quantitative standpoint. One of the capabilities that we built was acquired by a UK-based company, and that's how I ended up in the technology space, if you will, or financial technology, or what people call fintech.

Vall Herard:
And eventually I made my way back, working at places like UBS, and then I went back into fintech and worked at a couple of places that were building analytics for pricing complex derivative products. But my career had always been on the quantitative side. I remember one of my first jobs at the Bank of New York was building a model to try to predict the revenue for the Unit Investment Trust department. At the time we were not calling it AI, but it was essentially building AI models using regression analysis. After doing that for a while, I ended up working at various other companies. One of the companies we ended up selling to Moody's Analytics; it was an Edinburgh, Scotland-based company that was working in the insurance space, solving some fairly complex problems. And I ended up taking a role at Fidelity Labs. Fidelity Labs is the innovation incubator for Fidelity Investments, and that's where Saifr was born.

Craig:
Okay. Well, tell us what Saifr does, how generative AI can help AI be more compliant, and about safety and trust.

Vall Herard:
Yes. Saifr's mission is to make AI safer, and by that, what we mean is the following. In financial services, the notion of safety comes up through regulations in a number of different ways. One of the ways it shows up, for example, is in marketing communications: a company cannot mislead a consumer or an investor into buying a product. It shows up in other industries too, pharma for example, where products cannot be misleading and you cannot make exaggerated claims about products. Those same kinds of rules exist in finance as well. And people use AI to help generate content, whether it's marketing copy or emails, so there's a need for content, when it's being created, to be compliant. Those rules are intended, by the way, to keep consumers safe. So when we talk about safety within the context of Saifr, what we are trying to do is ensure that when AI creates content, or when a human is writing content and using AI to edit what's been created, we can make that content as close to being compliant as possible.

Vall Herard:
Again, the reason I say as close as possible is that these are models, mathematical models. They are going to make errors. They are going to make mistakes. And so there is a need for human review, if you will. But if you can get a generative model to create content that is closer to being compliant, you take a lot of friction out of the process. Typically, what that process looks like is someone, whether it's a subject matter expert, let's say a portfolio manager, writing content; then they'll work with a copy editor, for example; then that gets sent to a compliance team that reviews the content, and there's a lot of back and forth in that process. So the idea behind Saifr was to help in the creation of the content, to help make it more compliant, so that by the time it gets to the compliance team there's less for them to review. Consequently, you take friction out of the process and you're able to create content much faster.

Craig:
Yeah. You've referred to Saifr as a grammar check for regulatory compliance. Is it only for marketing materials, or is it for other materials as well?

Vall Herard:
Yeah. Any content in financial services needs to abide by those rules. Anything that a consumer or an investor can potentially read, see, or even hear needs to be compliant. So we've built models where, if someone is using an influencer, let's say, to post a TikTok video, you can download that video and we can go through it, detect instances where there is something potentially non-compliant in that video, and alert you to where it is. And we can also generate suggested language that is compliant, to say, hey, if you were to say this in this way, then it's closer to what would be considered compliant based on US financial rules, whether it's the SEC marketing rule or FINRA Rule 2210.

Craig:
Can you give us an example of language that someone might write thinking it's okay, and that Saifr would catch as potentially non-compliant?

Vall Herard:
Yeah. If someone were to say, for example, "I guarantee that if you invest in this product, you're going to generate X return," that is something that is potentially non-compliant, because investment products involve risk. And so we can suggest how you would rewrite that to make it compliant. Another example: we have an image detection capability, because in certain contexts an image may be non-compliant if it gives the illusion of outsized returns. Let's say you're writing content about a certificate of deposit, or any product for that matter that has a low return, and you're using an image of a $200 million yacht. Within that context, that image may not be compliant, because it could mislead someone into thinking that by investing in this product they're going to generate outsized returns.
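
To make that concrete, here is a minimal sketch of this kind of flagging, with a suggested rewrite attached to each hit. The patterns, rationale text, and suggested wording are illustrative assumptions, not Saifr's actual rules or interface:

```python
import re

# Illustrative patterns only; real compliance models are far richer than a regex list.
FLAGGED_PATTERNS = {
    r"\bguarantee[ds]?\b.*?\breturns?\b":
        "Promissory language about returns; investment products involve risk.",
}

# Hypothetical compliant wording, offered whenever a pattern matches.
SUGGESTED_REWRITE = (
    "The product seeks to achieve its stated objective; "
    "returns are not guaranteed and loss of principal is possible."
)

def check_compliance(text):
    """Return a list of potential compliance issues found in the text."""
    issues = []
    for pattern, rationale in FLAGGED_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            issues.append({
                "span": match.group(0),
                "rationale": rationale,
                "suggestion": SUGGESTED_REWRITE,
            })
    return issues

print(check_compliance("I guarantee that if you invest, you will generate a 12% return."))
```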

Craig:
Yeah, that's interesting. So how closely is that sort of thing policed? Is the SEC the regulatory body that polices that?

Vall Herard:
Yes, the SEC and FINRA, for example, are regulatory bodies here, and the CFPB as well. If you think about what these regulators' function is, it's to have a fair market, to make sure that companies are providing transparency in terms of the risks involved in their products. Those are some of the regulatory bodies that have an interest in these kinds of regulations and in ensuring that the investing public is protected.

Craig:
Why is compliance a barrier to AI adoption in regulated industries? And in what way does Saifr lower that barrier?

Vall Herard:
Yes. Large language models are making their way into every aspect of what we do, if you will: the idea of using a generative AI model to generate content where someone can just copy and paste that and send it out, for example. The idea behind Saifr is adding a layer on top of any large language model, whether it's GPT, whether it's Mistral, whether it's Llama, to ensure that the content is closer to being compliant once it's generated, and also, in instances where we detect that it's non-compliant, to ensure that we can provide suggested language to help make it compliant. We think that by adding that safety layer, that guardrail, that will lead to wider adoption, because in regulated industries that's one of the barriers to wider adoption as far as content creation is concerned. And this is where, going back to something you said earlier, we use the term grammar check for compliance. Think about the way someone writing in a word processor, let's say Microsoft Word, has a grammar check. Imagine someone who is not a compliance expert but is writing material that is supposed to be compliant. The ability to help them generate content that is compliant adds a lot of efficiency to the tools the knowledge worker is using.
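
A minimal sketch of that guardrail pattern: wrap any text generator with a compliance check and a rewrite step. The `generate`, `score_compliance`, and `suggest_rewrite` callables below are hypothetical stand-ins for a real LLM call and a real compliance model, not Saifr's API:

```python
from typing import Callable

def guarded_generate(
    prompt: str,
    generate: Callable[[str], str],            # any LLM: GPT, Mistral, Llama, ...
    score_compliance: Callable[[str], float],  # hypothetical scorer: 1.0 = fully compliant
    suggest_rewrite: Callable[[str], str],     # hypothetical rewriter for flagged text
    threshold: float = 0.8,
) -> str:
    """Generate content, then pass it through a compliance guardrail before release."""
    draft = generate(prompt)
    if score_compliance(draft) >= threshold:
        return draft                   # already close to compliant
    return suggest_rewrite(draft)      # swap flagged language for suggested wording

# Toy demo with stand-in components:
print(guarded_generate(
    "Write a fund blurb",
    generate=lambda p: "We guarantee 12% returns.",
    score_compliance=lambda t: 0.0 if "guarantee" in t else 1.0,
    suggest_rewrite=lambda t: t.replace("guarantee", "seek"),
))  # -> "We seek 12% returns."
```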

Craig:
Yeah. And I imagine that in a very large global organization, a bank, for example, there is marketing material or public-facing material being produced at all different levels of the organization, in all different geographies, so it's otherwise difficult to track compliance. Is that right?

Vall Herard:
Yes, and not only that. Even in instances where someone is sending an email, where it's not public facing, where it's institution to institution, there is a need for that communication to be compliant as well. For example, you cannot send an email that contains certain material non-public information that may lead to insider trading; that's just one example. Organizations have an obligation to monitor some of those communications to ensure that they are compliant, and we've built capabilities to help with that process as well. So it's not only the public-facing piece; internal and company-to-company email communications also have to be compliant. Through some of the tools we've built, we can use AI to detect an instance where there may be something that falls outside the regulatory rules and alert the person. They're not intended to stop you, but we can alert you and say, hey, these are some of the potential issues with that communication.

Craig:
Yeah. Are companies using this not only for content going forward, but also to review content that they already have?

Vall Herard:
Yes. Hopefully, content that already exists went through a human review process before it was approved for distribution. But the capability can be run on archived material as well, because there is SEC Rule 17a-4 for compliance, under which public-facing documents have to be archived over a certain period of time. Let's say that three years ago you made a claim in public-facing documents, and three years after that there's a potential loss because it was misleading. You need the ability to go back and say, this is what was said, and it was in fact non-compliant with the rules; that's why Rule 17a-4 requires WORM storage. So in instances where infractions happened in the past, where an investor or the public was misled, you can go back and take corrective action, and you can run it on that archive as well. But one of the more interesting use cases we've seen is that a lot of companies are building RAG systems to help answer customer questions, for example, to make that process more efficient. In those communication channels, let's say you build a smart agent, a smart bot, to help answer customer inquiries; the answers that are generated need to be compliant as well. So having Saifr as a layer that can sit on top of that, see what answers were generated, and provide some context or a risk score, if you will, in terms of the level of regulatory risk that may exist in that answer, is one of the more novel use cases where we see the models being adopted.
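
A rough sketch of the archive use case: batch-score retained material (the kind kept under Rule 17a-4) and surface anything above a risk threshold. The `risk_score` callable stands in for a real risk model, and the documents and scores are invented for illustration:

```python
def review_archive(documents, risk_score, threshold=0.5):
    """Yield (doc_id, score) for archived items whose regulatory risk exceeds the threshold."""
    for doc_id, text in documents.items():
        score = risk_score(text)  # hypothetical model call returning a 0..1 risk score
        if score > threshold:
            yield doc_id, score

archive = {
    "2022-q3-brochure": "Guaranteed returns with zero risk!",
    "2023-faq": "Past performance does not guarantee future results.",
}
toy_scorer = lambda text: 0.9 if "Guaranteed" in text else 0.1
print(list(review_archive(archive, toy_scorer)))  # [('2022-q3-brochure', 0.9)]
```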

Craig:
And is it a plug-in? You mentioned emails, for example. Would it, like a spell check, track what you're typing and pop up a message? Or is it something where I would write an email and then copy and paste it into Saifr to check its compliance? How does it work?

Vall Herard:
We want to be where the user is, meaning that the least number of steps you need to take to check it is where we want to be. So we've built capabilities that plug directly into Outlook, that plug into Microsoft Office types of applications. But we've also made the models available so that if you have your own workflow, if you are, for example, a software company building services for financial services companies, you can leverage the models and put them in your workflow. We want to be as flexible as possible. So if you're creating content, let's say in Adobe, in a PDF, and that's where you want to check that what you're creating is closer to being compliant, you can plug it into that framework as well. And if you have a marketing workflow solution and you want to plug it into that workflow system, you can do that too.
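
For the bring-your-own-workflow case, the integration might look something like this sketch: post the text to a compliance-check service and act on the findings. The URL, payload, and response fields here are placeholders, not a documented endpoint:

```python
import requests

def check_text(text, api_key):
    """Send text to a compliance-check service and return its findings."""
    resp = requests.post(
        "https://api.example.com/v1/compliance/check",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"risk": 0.7, "flags": [...], "suggestions": [...]}
    return resp.json()
```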

Craig:
Yeah. And you announced a strategic partnership with Microsoft. Can you tell us about that?

Vall Herard:
Yes. We've been speaking with Microsoft for the better part of a year, and we were very happy that the partnership was announced in Chicago at Microsoft Ignite last week. That partnership is taking a set of our models and making them available in the Microsoft Azure AI model catalog, next to models from OpenAI, for example, from Mistral, from Meta, so that clients can go to the catalog, use the models, and embed the models into frameworks that they already have. If you've already built a workflow at your company, there's no need for you to replace it; you can take those models and embed them directly into that workflow through the Azure model catalog. And if you're a software company, you can do the same as well. That's the first part of what we've worked on with Microsoft. There are some additional models that we will be making available through the catalog in 2025 as well.
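
Once a catalog model is deployed as an online endpoint, calling it from your own code typically reduces to an authenticated HTTP request, roughly as sketched below. The scoring URL and payload schema are assumptions; the actual schema depends on the specific deployment:

```python
import requests

def score_with_catalog_model(text, endpoint_url, key):
    """Call a model deployed from the Azure AI model catalog as an online endpoint."""
    resp = requests.post(
        endpoint_url,  # your deployment's scoring URL from the Azure portal
        headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"},
        json={"input_data": {"text": [text]}},  # assumed payload shape; check your deployment's schema
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```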

Craig:
Are there any metrics about how often Saifr will flag something as non-compliant?

Vall Herard:
Yes. We reduce about 70 to 80% of the friction. One of the more important metrics that we use is what I call the human coverage test, meaning that before we release a model, we do a lot of work around human validation. Most of our models are built through human reinforcement learning, so this is a big part of what we do. Before we release a model, we will take content, have the model score the content, and then give it to a human subject matter expert in compliance, without telling them what the model came up with, and have them score that content as well. Then one of our analysts will essentially compare and contrast the two to make sure that there's enough coverage between them before we release the model. In those scenarios, we typically look for 80 to 90% coverage, or agreement, between the model and the subject matter expert before we will make a model available. But above and beyond that, all of the rules apply equally to everyone; it's not as if FINRA is making rules specific to, let's say, company X versus company Y. Although the rules are uniform, in the interpretations that companies apply to the rules there's a little bit of room, because a company might decide to be more aggressive from a risk perspective, to be a little riskier, versus another company. So the ability to calibrate the models, to adapt the models to the risk appetite of the company, is an important consideration. We allow companies, once they start using the application, to do that over time, so the models can reflect the risk appetite at company X versus company Y.
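
The core of that human coverage test is easy to express: score the same content with the model and with a human subject matter expert, then measure agreement. This toy version (labels invented) shows the 80 to 90% release bar as a plain percent-agreement calculation:

```python
def coverage(model_labels, human_labels):
    """Percent agreement between model labels and a subject matter expert's labels."""
    assert len(model_labels) == len(human_labels), "both must score the same content"
    agree = sum(m == h for m, h in zip(model_labels, human_labels))
    return agree / len(model_labels)

model = ["flag", "ok", "flag", "ok", "flag"]
human = ["flag", "ok", "ok",   "ok", "flag"]
print(f"{coverage(model, human):.0%}")  # 80%, the low end of the release bar described above
```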

Craig:
Yeah. And on the Azure catalog, if you pull in the Saifr model or models, how does that integrate, for example, with Microsoft Word?

Vall Herard:
Right now we have a Microsoft Word plugin that you can essentially download and use. Part of the conversations with the Microsoft team going forward is to look at that integration a little more closely and see if there are efficiencies we can get from a tighter integration. But right now it's done through a plugin, where someone can use the M365 plugin and essentially have the capabilities available in all of the Microsoft 365 applications.

Craig:
Can you give some practical insights about balancing innovation with regulatory requirements? You were saying some companies want to be more aggressive, others more conservative.

Vall Herard:
Yes. From our perspective, we think that safety as represented in regulations is what we are trying to solve, and that extends to new adaptations of regulations. For example, the SEC did an update to the marketing rule for registered investment advisors, and we needed to adapt the models to take that into consideration. These rules are not static; they do change over time. Consequently, the interpretations that a company will apply to an existing rule, or to an update of a rule, are something that we need to be able to reflect in the models as well. So part of the service is really to keep abreast of what's happening on the regulatory side, to make sure that the risks the regulators are calling out to keep investors and consumers safe are reflected in the Saifr models.

Craig:
Yeah. And does that require continually fine-tuning the model, or do you use a RAG component here?

Vall Herard:
It requires fine-tuning the models, and so we do a lot of work in reinforcement learning. On my team I have a function called compliance and legal engineering that works very closely with the data science team. These are regulatory experts; in some instances they are former regulators or staff attorneys at the SEC or FINRA, who work with the team to make sure that we understand the nuances of the language of new or updated regulations. Sometimes it's an interpretation issued by a regulatory body about an existing rule. That lets us fine-tune the models and adapt them over time to those changing regulatory requirements.

Craig:
Yeah. And how is that done? Do you then swap in a new model when you have one?

Vall Herard:
Yes, that's the intent. There are the baseline models that get updated once regulatory requirements change. We are essentially updating the baseline models for clients, and also ensuring that whatever localized adaptation takes into account the risk appetite of the organization is accommodated in the new model as well.
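
The calibration idea, a shared baseline plus a per-client risk appetite, can be sketched as client-specific thresholds applied over the baseline model's risk scores. The client names and numbers below are illustrative assumptions:

```python
BASELINE_THRESHOLD = 0.5  # default flagging threshold for the baseline model

CLIENT_RISK_APPETITE = {
    "company_x": 0.7,  # more aggressive: tolerates higher risk scores before flagging
    "company_y": 0.3,  # more conservative: flags earlier
}

def should_flag(risk_score, client):
    """Apply a client-specific risk-appetite threshold over the baseline score."""
    threshold = CLIENT_RISK_APPETITE.get(client, BASELINE_THRESHOLD)
    return risk_score > threshold

print(should_flag(0.6, "company_x"))  # False: within this client's risk appetite
print(should_flag(0.6, "company_y"))  # True: exceeds the conservative threshold
```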

Craig:
Yeah. And beyond marketing material, this would apply also to annual reports and prospectuses and things like that?

Vall Herard:
Yes. Anything that a financial services company puts out needs to be compliant with regulations, so this covers any content that is being produced, in some instances press releases, for example, which are not marketing collateral. Going back to something I mentioned earlier: if an investor can read it, see it, or hear it, it has to be compliant. So the use cases across what a company would use this for are fairly broad. From our perspective, we think that anyone who's writing anything in financial services can use this application.

Craig:
Yeah. You mentioned earlier anything that people can read, see, or hear. Would this apply also to podcasts, say if a financial services company has a podcast?

Vall Herard:
Yes. This is an area where we are seeing increasing usage, because from a communications compliance review perspective it is very time consuming for companies. The way that process works right now is that if they are following a script, someone will have to write those scripts, submit them to the compliance department for review, and then actually tape the podcast exactly as what was reviewed. The other way of doing it, which certainly fits more of a free-form podcast, is to have the compliance officer listening in, and if someone says something that is not compliant, they'll stop and make the corrections. Or else, if you actually have the podcast and it's out there, a compliance officer will come in and listen to it after the fact, fairly close after the fact: listen, stop, listen, stop, listen, stop. As you might imagine, over a 45-minute podcast that takes a while. With Saifr, you can essentially upload that into our application, and then we will transcribe it.

Vall Herard:
We have a language model that is purpose-built for financial services so that we can understand the context of financial jargon, because you might get someone on the podcast using financial jargon that some of the open-source large language models will mis-transcribe, which might lead to you not detecting a potential risk. One of the things we've seen, for example, is a podcast where one of the large language transcription models out there mis-transcribed the term "mutual fund" into something else entirely. Consequently, when you go and run the risk detection on it, if you are missing that context, you could potentially miss the fact that mutual funds require fee disclosures. By uploading that 45-minute podcast into Saifr, we can do the transcription very quickly, within a matter of minutes, and highlight that at 2:45, on frame X, this was said, which is potentially non-compliant. And so that review process happens much, much faster.
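
The audio review flow reduces to transcribe-with-timestamps, then check each segment, as in this sketch. `transcribe` and `check_compliance` are hypothetical stand-ins for the domain-tuned transcription and risk-detection models described above:

```python
def review_podcast(audio_path, transcribe, check_compliance):
    """Transcribe audio with timestamps, then run a compliance check on each segment."""
    flags = []
    for segment in transcribe(audio_path):  # expected: [{"start": "2:45", "text": "..."}, ...]
        for issue in check_compliance(segment["text"]):
            flags.append({"timestamp": segment["start"], **issue})
    return flags

# Toy demo with stand-in components:
fake_transcribe = lambda path: [{"start": "2:45", "text": "This mutual fund guarantees 10% a year."}]
fake_check = lambda text: [{"issue": "promissory language"}] if "guarantees" in text else []
print(review_podcast("episode.mp3", fake_transcribe, fake_check))
# [{'timestamp': '2:45', 'issue': 'promissory language'}]
```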

Craig:
Yeah. You're focused on financial services, and certainly financial services is a massive market, but it seems that your technology would apply to other regulated industries, health care, for example. Do you service any other industries beyond finance?

Vall Herard:
Right now our focus is squarely on financial services. We have a roadmap in financial services that we're executing on. But as you quite rightly point out, the underlying principles that you have in financial services, not to exaggerate language, not to be misleading, are applicable across other industry segments as well. So we certainly have a perspective on utilizing the existing models we have to help solve similar problems in other industry sectors. Although the immediate roadmap is focused on financial services, we do think there is a case to be made to expand beyond financial services into other industries as well.

Craig:
Yeah. And how do you charge for this? Is it a seat license, or a pay-as-you-go model?

Vall Herard:
It is a seat license up until a certain point, at which point it becomes an enterprise license.

Craig:
Yeah. Okay. And with unlimited use? You're not...

Vall Herard:
We don't limit use. We believe people should know what their costs are, and they should essentially be able to create content. So our model right now doesn't consist of, oh, you can only create X amount of content.

Craig:
Yeah. This is interesting, this idea of how generative AI can help companies be more compliant, because companies right now are starting to implement AI tools, and content creation is one of the first areas they deploy them in. And there is a concern that there's a lot of subpar content being generated by these generative models. Can you talk a little bit about how that affects safety and trust?

Vall Herard:
Yes. From our perspective, when you think about a large language model, it's really a couple of things. First of all, it's really a compression problem that you are trying to solve: essentially, you are taking real-world distributions and some of the underlying knowledge that's embedded in those distributions, and you are compressing it so that you can make a prediction. And in making any sort of prediction, models are subject to hallucination, subject to errors. So what we are trying to do is focus on an area that people are already familiar with, in the following sense: people are already familiar with using a grammar check, familiar with using a spell check. What we've done is essentially focus the output of these generative AI models, through some adaptations that we've done, so that what gets output is compliant. Because if you don't do that, you may actually end up making companies less efficient, in the following sense: as you deploy these tools without these guardrails, the generative tools allow you to create more content, but at the same time you are not increasing the number of compliance officers to a level where you can review it all efficiently.

Vall Herard:
So if, on the one hand, the velocity with which you are creating content is going up because you have these tools, but you don't have the review process scaling up as well, then you create an imbalance where the risk for the organization actually goes up and you become less efficient. This is why we think this kind of capability, where we can serve as a safety layer on top of these large language models, insofar as regulatory compliance is concerned, is a worthy problem to solve. I think the adoption we are seeing for Saifr proves this out, and it is one of the reasons why, after about a year or so of speaking with the Microsoft team, we've done the partnership with Microsoft. There's a realization that if you can help make the content more compliant, you actually will increase adoption, and you'll make it a little bit easier for the compliance officer, certainly at financial services companies, to approve the use of these tools.

Craig:
So you're integrated, for example, into Microsoft Office. Now, the way that I use gen AI, ChatGPT or Perplexity, is I'll ask some questions, and if I'm going to use that content, I copy it and paste it into a Word document, for example. Is it at that point, when you paste it into a Word document, that Saifr would run its screening, and you then have to click a button for Saifr to review the text? Or does it actually review the output of the chatbot, the LLM, directly?

Vall Herard:
We can accommodate both use cases. The reason you want to accommodate both is that even in the instance where you copy and paste it into Word, you are going to make some edits. Maybe you are a trained compliance officer, but if you're not, when you do your edits you might introduce language structures that are not compliant, so we want to be able to do that check as well. And then there's the use case I mentioned earlier, where a generative AI is helping to generate answers to inbound inquiries. There the process is seamless: it is essentially helping the large language model that is generating the answers to those inquiries to be closer to compliant, without any human intervention.

Craig:
I see. So in that case, it sits...

Vall Herard:
It sits as a wrapper around the large language model. Right?

Craig:
Right. And these days people are using gen AI chatbots in customer service, for example. It would be a wrapper around whatever model they're using to generate those answers. And does it adjust the language before it's printed on the chatbot screen?

Vall Herard:
Yeah, it's adjusted before it gets printed, so that when you are actually seeing it, what you're seeing is closer to being compliant with industry regulations. I think this gets at the safety considerations I talked about earlier: the ability to give these large language models a way in which they can create content that actually follows industry rules, in this case financial services industry rules.

Craig:
Yeah. And you said that uptake is healthy. How is Saifr marketing this, beyond appearing on my podcast?

Vall Herard:
Yeah. Beyond appearing on your podcast, we've stood up a sales organization over the last year and a half or so, and we are out at trade shows speaking with industry professionals. I think this year, between Europe and North America, I've spoken at 13 different conferences. As a startup, you want to start small, but the sorts of partnerships we've entered into with Microsoft, and with others that we are speaking with right now, give us the ability to scale on a global basis. Those are the primary ways in which we are marketing the product right now. And to your point, we are growing at about the rate we would expect, given some of the projections that we have. So we are very happy with where we are, and we are ready to scale the business, as the partnership with Microsoft attests.

Craig:
Yeah. And you mentioned globally. Do you support languages other than English?

Vall Herard:
Yes. We had a client that had a requirement for us to support six different languages, so we support six different languages and are looking to expand beyond that. Spanish, for example: that requirement was actually based in the US, because a lot of financial services companies advertise in the Spanish-speaking market as well as in English in the US. That was one of the requirements we had from one of our prospects.

Craig:
Yeah. And certainly the European Union has a bunch of languages.

Vall Herard:
Well, yeah. French, for example, was one of the requirements, as well as Spanish and German, so those are capabilities that we've built. Chinese was also a capability for this one client. But certainly we are looking to expand beyond that, because we think this capability, or this need, if you will, is global in nature. If you look across financial services at large, on a global basis there are over 100,000 companies that can benefit from this kind of capability, and that's the market we are focused on right now.

Craig:
Yeah, and that's a big market. Okay, I'm running out of questions. Is there anything I haven't touched on that you want listeners to hear?

Vall Herard:
Yes. From our perspective, this idea of adding a safety layer on top of generative AI will lead to greater adoption, and we think that greater adoption, given the penetration we've already seen with generative AI, is a worthwhile pursuit. One of the things that has stopped adoption of these tools in regulated industries, in some instances, is precisely this regulatory need for content to be compliant. So the efficiencies that we deliver, and the fact that we address the safety concerns that industry participants have raised with us, are something we're very proud of. I would certainly love to speak with you more about it if there's ever a chance to go into some of the in-depth math behind what we are doing. But I think we are addressing a big problem for a very large market, and we think that by addressing this problem we're going to help increase adoption of AI and generative tools in general.

Craig:
This is for regulated industries, but there's so much fear about generative AI hallucinations. There was a famous case with Air Canada: a chatbot guaranteed a refund to a customer, and when he filed for the refund, it turned out the chatbot was wrong. That ended up having to go to a tribunal, and there was a certain amount of reputational damage to Air Canada. How applicable is this solution there? Not that you're looking in those places to expand the market, but do you expect to see this kind of thing in non-regulated markets, just as a sort of check on top of what generative AI produces?

Vall Herard:
Yes. We think that, given the underlying tenets of the rules we have on the books for financial services, for example the idea that you cannot make an unsubstantiated claim, that applies broadly, in our opinion, whether there's a rule on the books or not. Or the idea that you should not make a claim that is an exaggeration: that applies as an overall rule if you are running a good business, from our perspective, whether or not it's embedded in the rules. It so happens that in financial services it is embedded in the rules; you cannot make an exaggerated claim. So although we are marketing the product to financial services institutions, where you have the rules on the books, there isn't anything stopping any other company from running the models on content they are generating, because we think some of those basic principles are just good business practice. From that perspective, we're not going out and making the claim that other industries should use it, but certainly, if another industry segment wanted to use those rules, we think it would be good business practice for them to do so.

Craig:
I see. That's interesting.

Vall Herard:
The key message is adding a safety layer. If you think about it, one of the number one issues in generative AI right now is safety. For all stakeholders, whether it's consumers, whether it's regulators, whether it's the companies building language models, the notion of safety is top of mind. And going back to something you said earlier about the EU: if you look at the EU AI Act, one of its underlying principles is that AI needs to be compliant with all existing rules. Saifr is a testament to that: we can take a model that doesn't comply with financial services regulatory rules and make it compliant with those rules.

Craig:
What does the future hold for business? Ask nine experts and get ten answers. Bull market, bear market, rates rising or falling, inflation going up or down. Can somebody please invent a crystal ball? Until then, over 40,000 enterprises have future-proofed their business with NetSuite by Oracle, the number one cloud ERP, bringing accounting, financial management, inventory, and HR into one fluid platform. With one unified business management suite, there's one source of truth, giving you the visibility and control you need to make quick decisions with real-time insights and forecasting. You're peering into the future with actionable data. If I were a larger organization, this is the product I'd use. Whether your company is earning millions or even hundreds of millions, NetSuite helps you respond to immediate challenges and seize your biggest opportunities. Speaking of opportunities, download the CFO's Guide to AI and Machine Learning at netsuite.com/EYEONAI. That's netsuite.com/EYEONAI, all run together, to get the CFO's Guide to AI and Machine Learning. The guide is free to you at netsuite.com/EYEONAI.
