August 18, 2025
AI in healthcare: A conversation with Waleed Mohsen, CEO of Verbal


In this interview, Dr. Mobeen Syed (DrBeen.com) speaks with Waleed Mohsen, CEO and founder of Verbal, an AI compliance platform for healthcare providers. They discuss the critical balance between leveraging AI's capabilities in healthcare and managing its risks, and explore how Verbal helps providers maintain quality standards and regulatory compliance in patient interactions.
See the full video on the DrBeen YouTube channel.
Transcript
Dr. Mobeen Syed: Welcome to the show. We talk a lot about AI, and healthcare has a very interesting dilemma. Do we use AI with its own hallucinations and maybe errors here and there, or do we not use AI? And if we don't use AI, do we fall behind while others go faster? To help us understand this, I've invited Waleed to join us. Waleed has his own AI company for healthcare. First of all, thank you very much for joining us. Tell us a little bit about yourself.
Waleed Mohsen: It's a pleasure to be here. I'm honored and looking forward to this conversation. A bit about my background—I'm currently running this company called Verbal, but to take a step back from that, I previously ran a telemedicine company supporting older adults with chronic conditions. We had an interdisciplinary team of nurses, dietitians, and care coordinators. One of the things that really frustrated me and our chief medical officer was how difficult it was to keep on top of compliance requirements—specifically billing, accreditation, and making sure everybody was following clinical protocols. It was through that experience and frustration that we decided something had to give, and we built Verbal.
Dr. Mobeen: Thank you for that introduction. Let me show your site to the audience. [Shows tryverbal.com] Full disclosure: I have no financial relationship here. The link is in the description, and I don't get anything if you click on it. I really want us to discuss and understand how this product and AI in general can help us. Tell me, what is this product and what kind of AI is this?
Waleed Mohsen: Building from that experience of trying to manage compliance challenges, we decided to build technology that would allow us to implement our own scoring rubrics—here's what we need to do to follow all the billing requirements across various payers, here's what we need to do to meet accreditation requirements.
Instead of random spot checking, where somebody goes through the laborious task of reading documentation or listening to recordings and, on average, only gets visibility into a very small percentage of patient interactions, Verbal is designed to provide 100% visibility into these interactions.
For many providers, the most anxiety-inducing message they can get is from someone in compliance saying, "Hey, we need to talk." Providers think, "Oh god, what happened? Did I forget something?" Spot checks are often unrepresentative. The one spot check might be that one difficult patient, and now that's what the organization thinks is representative of how you handle all interactions.
Verbal gives full visibility into how teams are performing. If there are mistakes in documentation or something is left off, rather than submitting it and having it delayed, denied, or potentially clawed back if it was already paid, Verbal catches this upfront and is transparent about it. Providers can see in their own dashboard: How am I doing in my interactions? Is there anything missing in my documentation? This provides transparent visibility so providers can self-correct. It's meant to be empowering.
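For readers curious what rubric-based checking of a visit can look like in principle, here is a minimal, hypothetical sketch in Python. The rubric items, phrases, and transcript are invented for illustration and are not Verbal's actual rubric, model, or code.

```python
# Hypothetical sketch: scoring a visit transcript against a compliance rubric.
# The rubric items, keywords, and transcript below are invented for illustration.

RUBRIC = {
    "verified_patient_identity": ["date of birth", "confirm your name"],
    "medication_review": ["current medications", "taking your medication"],
    "follow_up_scheduled": ["follow-up", "next appointment"],
}

def score_visit(transcript: str) -> dict[str, bool]:
    """Return which rubric items appear to be covered in the transcript."""
    text = transcript.lower()
    return {
        item: any(phrase in text for phrase in phrases)
        for item, phrases in RUBRIC.items()
    }

def coverage(results: dict[str, bool]) -> float:
    """Fraction of rubric items covered, e.g. for a provider-facing dashboard."""
    return sum(results.values()) / len(results)

if __name__ == "__main__":
    visit = (
        "Can you confirm your name and date of birth? ... "
        "Let's go over your current medications. ..."
    )
    results = score_visit(visit)
    print(results)                               # per-item pass/fail
    print(f"coverage: {coverage(results):.0%}")  # e.g. 67%
```

Running a check like this over every interaction, rather than a sampled few, is the "100% visibility" idea described above, though a production system would use far more sophisticated language understanding than phrase matching.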
Dr. Mobeen: Before I ask more questions, I have one that I've observed from my community when we discuss AI. Imagine I'm a patient speaking with a physician and Verbal is present as an agent. How is that data and communication secured? What happens from a privacy and security point of view?
Waleed Mohsen: It's a really important question, and as a company we take it very seriously. In our case, we redact PHI as soon as it's discussed, so anything that's stored has no identifiable information about who the patient actually is. We sign BAAs (business associate agreements) and go through the highest-level cybersecurity certifications. There are too many instances these days of hacks and lost data, so being very thoughtful about this is exactly the approach we took.
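To make the "redact, then store" idea concrete, here is a minimal, hypothetical sketch. Real de-identification systems (and presumably Verbal's) rely on much more robust named-entity recognition; the regex patterns here only illustrate the principle.

```python
# Hypothetical sketch: redacting a few common PHI patterns before storage.
# Illustrative only; not a complete or production-grade de-identifier.

import re

PHI_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PHI patterns with placeholder tokens."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient can be reached at 415-555-0199, DOB 03/04/1952."))
# -> "Patient can be reached at [PHONE], DOB [DATE]."
```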
Dr. Mobeen: I recall the demo you showed me. There's a patient who has a Zoom call with their provider, and Verbal is able to see what's being discussed, capture various follow-up events, and provide ratings in real time. Tell me about the actual experience.
Waleed Mohsen: One of the frustrating things when we were running the telemedicine company was that folks often don't get feedback until it's way too late. You're only looking at a very small percentage of calls. Maybe someone happens to look at one and says, "Hey, great job doing this" or "You forgot to do this," but that happens weeks later, if at all. By then, that patient visit is already gone—there could be dozens of visits in the meantime.
What we wanted to do was provide in-the-moment feedback: nice work doing this, don't forget to mention this. Whatever the checklist is that many providers keep as post-it notes on their monitor or in a binder somewhere—the best practices for this use case—this is a way of having it displayed during the conversation to keep that person on track with anything that needs to be discussed.
If it's not of interest to have it there during the conversation, it can be displayed immediately after. So that feedback loop lets me know, "Oh, I did forget to follow up on behavioral health, I did forget that referral to care management." I can see that immediately after the call rather than having no idea I made a mistake.
What I'll say is this is not clinical decision support. This is meant to be a tool for in-the-moment guidance on how to conduct a conversation.
Dr. Mobeen: This is amazing. I was totally impressed when I saw the demo. Before we go further—this particular product is for providers, right? Hospitals, clinics, doctors, nurses. Who is it for?
Waleed Mohsen: That's right. It's for any patient-facing interaction. The majority of our users are providers across a wide range of use cases: chronic condition management, primary care, maternal health, substance use disorder, dietitians, care coordination, health coaching. It's designed to make sure that in any patient-facing interaction, critical aspects of compliance around patient safety and clinical protocols are actually being followed in a meaningful way.
Dr. Mobeen: Let me ask a more general question about AI. What are AI agents? I hear about that a lot nowadays—AI agents, agentic AI. What are AI agents, what do they mean in healthcare settings, and maybe in Verbal?
Waleed Mohsen: I'll start generally and then dive specifically into healthcare. AI agents are essentially technology that can act independently. To take a crude example, if you've ever called the 1-800 number of your bank or Delta Airlines and they say, "Tell me what you're calling about"—in the case of an AI agent, this is that on steroids. You're literally having a conversation with an autonomous agent that can interact with you on a conversational level. These are voice agents that can interact on a voice level, and the levels of sophistication are getting better and better.
Today it might be a call center agent for your bank. Down the line, there are already companies working on AI agent nurses, AI agent therapists, AI agent health coaches to fill the shortfall of labor in the market today. Organizations can say, "What if we can supplement this shortfall by offering an AI nurse available 24/7 at a fraction of the cost, or an AI therapist available anytime at a fraction of the cost for people who might not be able to afford an actual therapist?"
This is where the technology is heading. It's also not without risk. You may have heard about a lawsuit against a chatbot company called Character.ai, whose chatbot allegedly encouraged a 14-year-old to take his own life, and he did. The company and its parent company are now being sued by his mother. In healthcare it could be even more subtle—it doesn't have to be so egregious to still cause harm, like "don't take your medication" or "double it up." In healthcare, it's a new world.
Dr. Mobeen: So as much as there is a fascinating future with AI and AI agents, there are risks as well, just like everything has risk and benefit. Your AI—Verbal—doesn't interact with the patient. Instead, it can look at the communication and then interact with the providers or clinical staff or administrative staff. Correct?
Waleed Mohsen: Correct. Verbal is designed to be the independent compliance layer helping organizations that already have this shortfall—they can only review 15% of interactions, which leaves all this risk. In the case of AI agents, if even humans need oversight and monitoring in an independent way, it will be even more important for AI agents to have 100% oversight and monitoring.
Verbal is designed to provide that across a voice or chat interaction, making sure the agent is essentially behaving properly and not going outside of its "scope of practice"—and I'm putting that in quotes because it hasn't even been established yet. But we can think of guidelines that would apply to other roles: do not diagnose, do not prescribe. There are some basic things, but the reality is a lot of these guidelines have not yet been established. There are organizations like the Joint Commission working with CHAI, a really interesting organization thinking through what these policies should look like. Verbal is there to make sure these policies are actually being followed.
Dr. Mobeen: So it may be possible that hospitals or clinics can say, "We want this kind of accreditation" or "This is our policy of communication," and feed that to the AI to enforce or help enforce that policy?
Waleed Mohsen: Yeah, exactly. Verbal isn't deciding what the policy is. Organizations tell us, "This is our policy." Verbal is there to tell them, "Here's how that's actually playing out in live conversations," giving them clear visibility into where that's not happening and what specifically is going on.
Dr. Mobeen: To give a concrete example—imagine I have a clinic and my policy is that during our conversations with patients, we would never say certain things. Let's say we would never say, "You silly, why did you not look at it?"—which I imagine would come across as offensive. If that's a policy, we would come to you and say, "We want to use Verbal, and here is our policy." If I'm sitting one day talking with a patient and I use that phrase, will Verbal tell me right away or afterwards? What would happen?
Waleed Mohsen: If I'm actually the one having that conversation as a provider, I can get a nudge. If I have Verbal displayed while I'm having the conversation, I would see a nudge saying, "Hey, refrain from doing that, correct yourself if you need to." It would otherwise let me know immediately after the call that that happened.
As a supervisor overseeing a large group of people—and so much of healthcare today relies on part-time contractors, so organizations struggle to get visibility into how these interactions are taking place—this is designed to provide that visibility and let them know, "Hey, 'silly' is actually being used quite a bit. You might want to have another conversation with the team and let them know to use this instead."
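As a rough illustration of the "nudge" idea, here is a hypothetical sketch of checking a single provider turn against an organization's prohibited-phrase policy. The policy list and utterance are made up; a real system would work on streaming speech-to-text output rather than a plain string.

```python
# Hypothetical sketch: nudging a provider when a prohibited phrase comes up.
# Policy list and example turn are invented for illustration.

PROHIBITED_PHRASES = [
    "you silly",
    "why did you not look at it",
]

def check_turn(utterance: str) -> list[str]:
    """Return nudges for any policy violations found in one provider turn."""
    lowered = utterance.lower()
    return [
        f'Nudge: avoid saying "{phrase}"; rephrase and continue.'
        for phrase in PROHIBITED_PHRASES
        if phrase in lowered
    ]

for nudge in check_turn("You silly, why did you not look at it?"):
    print(nudge)  # shown to the provider in the moment, and logged for review
```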
Dr. Mobeen: That's fascinating. I love that you can have real-time feedback from the AI to say, "You're not supposed to do this," or get feedback afterwards, and your seniors can have feedback as well. I'm sure it would improve patient communication so much. The other thing I'm impressed with is it's 100%—you're continuously looking at it, not doing 10-15% random checks.
Waleed Mohsen: And that was one of the things we found in interviewing dozens of providers as we were building this—they wanted to make sure that calls reviewed weren't cherry-picked. "Why did you pick that one? That happened to be where that one thing came up." By having it across 100%, it's fair, it's transparent. There's no "I don't have a good relationship with my boss" or "that person in compliance doesn't get along with me." This is just very clear-cut and objective.
Dr. Mobeen: I have another follow-up question. Imagine I have a more abstract policy. For example, in my clinical system, we are six doctors working together and we have a general policy to be kind toward our patients. This is a more abstract concept instead of "don't use this word or that word." Can the AI agent actually look at such abstract ideas as well?
Waleed Mohsen: That's what's been so exciting about the technology's advancements in recent years. Previously, with what's called NLP (natural language processing) and ML (machine learning)—which are forms of AI—it was much more limited. You'd be looking at keywords, or combinations of keywords, to say, "Did this happen or not?"
With the advancements in LLMs (large language models), we can get even more nuanced in understanding the context of a conversation. Even if a different set of words was used that might imply a supportive conversation or otherwise, the technology can now understand context in a way that's never really been possible before. So if you want to get a sense of "What was the tone of that conversation?", you can now actually do that.
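Here is a hypothetical sketch of the contrast Waleed describes: a keyword check versus asking an LLM to judge an abstract policy like "be kind to patients." The model name, prompt wording, and use of the OpenAI SDK are illustrative assumptions, not a description of Verbal's internals.

```python
# Hypothetical sketch: keyword matching vs. an LLM judgment of an abstract policy.

from openai import OpenAI

def keyword_check(transcript: str, unkind_words: list[str]) -> bool:
    """Old-style check: flags only exact words, misses tone and context."""
    text = transcript.lower()
    return any(word in text for word in unkind_words)

def llm_policy_check(transcript: str) -> str:
    """LLM-style check: asks for a judgment against the abstract policy."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "Policy: providers should be kind and supportive toward patients.\n"
        "Read the transcript and answer PASS or FLAG with a one-sentence reason.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

transcript = "Honestly, I don't have time for questions like that today."
print(keyword_check(transcript, ["silly", "stupid"]))  # False: no banned word...
# ...but an LLM judge can still flag the dismissive tone against the policy.
# print(llm_policy_check(transcript))  # requires an API key to run
```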
Dr. Mobeen: I think that is an amazing thing—being able to apply an abstract policy as well, rather than just "this word was used and should not be used." Very interesting. Tell me, the organizations you're working with—clinicians, providers—what kind of results are they seeing?
Waleed Mohsen: We're working across a number of different organizations, from digital health companies to health systems to a Blue Cross Blue Shield entity. The results have been astounding. We're seeing:
- 8x ROI from reduction in QA labor costs—providers who have traditionally been doing very laborious review work can now be far more efficient. Rather than looking for a needle in a haystack, Verbal gives them a clear list of flagged instances to look at.
- 36% improvement in adherence within the first three months after deploying Verbal
- 100% visibility into performance and, more importantly, actionable insights. I now have something to tell me, "Here are the folks doing really well and here are the folks that might need your attention and what they might need your attention on."
One surprising thing that came out—a clinical leader mentioned they're seeing their teams self-correct and improve, taking the initiative saying, "Okay, I do see that I forgot to cover that, and now I'm taking it upon myself to go ahead and do that," without having to have somebody tell them, "Hey, make sure to do this."
We're supporting across a number of different use cases. What's been so interesting about this technology is we've architected it such that whether it's chronic condition management, substance use disorder, primary care, genetic counseling—it doesn't matter. We can take in an organization's guidelines and best practices and use that as the basis to train a model to then look at these interactions and say, "Is this happening, and where do I need to make some improvements?"
Dr. Mobeen: That's very interesting. There was a Japanese study where they had a coffee machine and said "free, or donate whatever you can," and people would take coffee and sometimes leave a coin. Then above the coffee machine they put a card with two eyes on it, and the rate of adding coins increased because people were aware that somebody might be watching them. It's interesting that people can become a little more aware and modify their approach.
This may be one of the areas for medicine—to instigate and instill kindness and a more compassionate, caring approach toward patients. My professor when I was studying medicine used to say, "If you cannot love people, you cannot treat them. You cannot have a dislike toward someone and then claim you can manage them well." I love that. I think this AI, your product, would help us in that direction as well.
I have a comment from Margie: "Most medical practice is outside the domain of lifestyle medicine, but lifestyle medicine is essential to heal. It would be great if you could work with the American College of Lifestyle Medicine."
Waleed Mohsen: Love that. It's a great suggestion. We'll definitely take her up on that.
Dr. Mobeen: One last question. What is the future of AI? You have a product, there are tons of products, even I am working on some products that can help in this area. What is the future of AI in healthcare delivery?
I'll present a problem I've seen—I teach medicine. If I use AI to create a lecture for me that I would teach, AI hallucinates and that hallucination can create errors. You have to catch those errors and come back and say, "Hey, this is wrong," and it says, "Yes, I'm sorry, I apologize, I was wrong." With those known problems, what is the future of AI in healthcare and where is it best suited?
Waleed Mohsen: You're picking up on a really sharp issue. The opportunities abound for leveraging AI in care delivery. It can drive all sorts of incredible improvements in accessibility, affordability, and quality.
But to your point, there could be some unexpected results that we want to make sure we avoid. There's the adage in many tech companies in Silicon Valley: "Move fast and break things." Well, we should absolutely not be doing that in healthcare. Case in point: that chatbot talking to the 14-year-old I mentioned.
It's so critical to make sure we are balancing innovation with careful, thoughtful implementation. The benefits are just so compelling—being able to talk to an AI provider at any time of day whenever you need, at a fraction of the cost. For many people, it's better than not getting any care and not having any sort of option.
But how can we make sure that's happening in a thoughtful way? Because there are already companies out there releasing these sorts of AI provider tools. Making sure they're being tested appropriately and building trust in how these models are performing—this is ultimately why we built Verbal: to make sure there was a technological instrument to evaluate the performance of these models at the point of interaction.
There's so much excitement around it, but I would caution everybody to explore and test. These tools are out there, and they will find their way into an organization one way or another, so getting in front of it by evaluating them in a sandbox, in a tested way, is how you make sure these technologies can be used safely.
Dr. Mobeen: With this, thank you very much. I just need one more promise from you—that you could come back once more and do a live demo that we can show our audience, especially healthcare providers, for how it works. I was amazed when I saw this.
Waleed Mohsen: I would love to. It's been a pleasure. Thank you for having me, and looking forward to speaking again soon.


Ready to put your compliance on auto-pilot?
Let's get your people back to patient care