AI and Video in Behavioral Health Ft Loren Larsen

Summary

This episode of the Digital Health Transformers podcast features Loren Larsen, CEO and Co-Founder of Videra Health and former CTO of HireVue, discussing how artificial intelligence can extend behavioral healthcare beyond episodic, in-office interactions. The conversation focuses on maintaining a continuous provider-patient connection through AI-driven video check-ins without increasing clinical workload.

Loren explains how Videra Health uses AI to monitor verbal and nonverbal signals between visits, allowing clinicians to identify risk earlier and intervene when it matters most. He shares a real-world case in which an AI check-in identified emotional distress in a high-risk adolescent after hospital discharge, enabling timely provider outreach that likely prevented self-harm.

The discussion also addresses the limits of general-purpose AI tools in behavioral health, emphasizing the need for safety guardrails, escalation protocols, and human oversight. Loren highlights how responsible AI can reduce provider burnout by enabling targeted attention rather than constant manual monitoring.

Ethics, fairness, and data privacy are central themes, informed by Loren’s experience building AI systems at scale. He outlines the importance of transparency, bias testing, and strong security controls in earning trust. The episode concludes with a forward-looking view of AI as a continuous health monitoring layer, supporting earlier detection, better outcomes, and more equitable access to behavioral care.

Key Moments

Reframing Behavioral Health Beyond Episodic Care

  • Introduction of Loren Larsen as CEO and Co-Founder of Videra Health
  • Discussion on the limitations of visit-based behavioral healthcare
  • Need for continuous provider-patient connection outside clinical settings

AI-Powered Monitoring and Clinical Visibility

  • Use of AI-driven check-ins between appointments
  • Analysis of verbal and non-verbal cues to assess mental state
  • Continuous visibility into patient status without added clinician workload

Early Intervention and Real World Impact

  • Case study involving a high-risk adolescent after psychiatric discharge
  • AI detection of emotional distress and medication non-adherence
  • Timely provider intervention enabled through automated alerts

Human Oversight and Limits of General Purpose AI

  • Risks of off-the-shelf AI tools in behavioral health use cases
  • Importance of escalation protocols and clinician involvement
  • AI positioned as clinical decision support rather than therapy replacement

Ethics, Fairness, and Data Protection

  • Bias testing and fairness informed by experience at HireVue
  • Transparency in AI use to build trust
  • Strong privacy, security, and access controls for patient data

The Future of AI-Enabled Behavioral Healthcare

  • AI as a continuous health monitoring layer using multimodal signals
  • Shift toward earlier detection and prevention
  • Long-term potential to expand access and improve outcomes globally

Transcript

[Marty]

Hey everyone, welcome back to the Digital Health Transformers podcast. This podcast explores the ever-evolving world of healthcare innovation, one conversation at a time. I’m your host, Marty, and today’s discussion takes us to the future of behavioral health, something we don’t talk a lot about on this podcast.
And so artificial intelligence and human connection are coming together, and also at odds with each other, to redefine how we’re taking care of behavioral health. So we’re going to explore that today with our amazing guest, Loren Larsen, the CEO and co-founder of Videra Health, and he’s also the former CTO of HireVue, which is crazy. And it also gives some foreshadowing for what Videra Health is all about.
So Videra Health, I think, honestly, Loren probably can describe it better than me and more faithfully. So Loren, what does Videra Health do?

[Loren Larsen]

Great question. Really, the problem, maybe just start with the problem that we try to solve. And the challenge that we recognize in healthcare is that healthcare doesn’t happen in your doctor’s office. You go to the doctor’s office, you get care, and then you leave, and then nobody ever sees you again.
And so we’ve really tried to address the question of how can we allow providers to maintain a connection with their patients, even when they’re not in their office, at scale, without creating additional burden on the provider, with technology and AI. So Videra has created a platform that maintains that connection between the provider and the patient.
We check in on patients on a regular basis, measure how they’re doing, and then get back critical information to providers so that patients get the right care at the right times.

[Marty]

Awesome. Could you give us an example of how someone would use this? Like a patient, what’s their problem or disease, and then how are they benefiting from Videra Health?

[Loren Larsen]

Yeah. Yeah. Great question. You know, I think there’s a lot of examples I can give. I think one that I really like, it’s kind of an emotional story, honestly.
You know, the most dangerous time in a patient’s life is right after they’ve left some kind of inpatient psychiatric care, in terms of taking their own life. So we had a patient who had been hospitalized for severe depression, they got put on medication, they were sent home. This was a 17-year-old kid.
A couple weeks later, he goes to his primary care doctor, and the doctor looks at what he’s taking, he’s like, oh, you shouldn’t be on that, you should stop taking that, which is the antidepressants. A few weeks later, all of a sudden, he’s severely depressed again, but worse, right?
Because now he’s thinking, I failed, I don’t want my parents to know, I went to the hospital, I’m ashamed that I can’t even get better. So he’s planning to take his own life.
And the night that he’s planning to do that, he gets a text from us through his provider saying, hey, this is such and such hospital, we just want to see how you’re doing, do you have a minute to check in? Clicks a link on his phone, it opens up in his browser, and he’s now interacting with AI that says, hey, tell us what’s going on, how are you doing? It doesn’t know how he’s doing, it’s just asking.
And he starts to share how he’s been feeling and what his struggles are, and it’s interactive, it’s asking him questions. And we’re also listening to his language, his nonverbals, to understand how he’s actually doing. And we trigger an alert that we call emotional distress to say, hey, there’s something here that the provider needs to look at.
An alert goes to the provider, they immediately look at it and they’re like, oh, yeah, we have a situation. And so then they’re able to reach out to this young man, say, hey, why don’t you come back? He’s like, it actually took a little bit of convincing.
He’s like, actually, I’m done, I’m out. And they were able to get him back in, get him back to care, get him stabilized. And he shared with them the next day, like, hey, I wouldn’t be here today if you hadn’t reached out yesterday, right?
And that’s what people are going through in this country. We have people taking their own lives all the time. Everybody’s been affected by this.
But there’s never been a way for a provider like that who does care to actually, in a practical way, reach out and check in on this person who they know is at risk and be able to get them back to the care. And for this young man, he’s sitting out there, he’s feeling alone. He doesn’t realize in the moment that there is help available.
There are people out there who, if they knew how he was feeling, they would help. And we were able to connect the provider and this young man and save his life. So that’s one example.
There are a lot, but that’s one that kind of helped us realize how important the work we’re doing is.
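
[Editor’s note: for readers who want a concrete picture of the workflow Loren describes above, here is a minimal, hypothetical sketch of a check-in pipeline: a patient response comes in, verbal and nonverbal signals are blended into a distress score, and an alert is routed to the provider when a threshold is crossed. The keywords, feature names, weights, and threshold are illustrative assumptions, not Videra Health’s actual models or code.]

```python
from dataclasses import dataclass

# Illustrative sketch only: a real system would use trained language, voice,
# and facial-affect models rather than keyword counts and fixed weights.

DISTRESS_KEYWORDS = {"hopeless", "ashamed", "alone", "give up", "can't go on"}
DISTRESS_THRESHOLD = 0.7  # assumed escalation cutoff


@dataclass
class CheckInResponse:
    patient_id: str
    transcript: str               # what the patient said
    vocal_flatness: float = 0.0   # hypothetical nonverbal feature, 0-1
    negative_affect: float = 0.0  # hypothetical facial-affect estimate, 0-1


def language_risk(transcript: str) -> float:
    """Crude stand-in for a trained language model: distress-keyword density."""
    text = transcript.lower()
    hits = sum(1 for kw in DISTRESS_KEYWORDS if kw in text)
    return min(hits / 3.0, 1.0)


def distress_score(r: CheckInResponse) -> float:
    """Blend verbal and nonverbal signals into a single 0-1 score."""
    return (0.5 * language_risk(r.transcript)
            + 0.25 * r.vocal_flatness
            + 0.25 * r.negative_affect)


def route_check_in(r: CheckInResponse) -> str:
    """Escalate to the care team when the score crosses the threshold."""
    score = distress_score(r)
    if score >= DISTRESS_THRESHOLD:
        return f"ALERT provider: patient {r.patient_id}, emotional distress (score={score:.2f})"
    return f"Log only: patient {r.patient_id} (score={score:.2f})"


if __name__ == "__main__":
    response = CheckInResponse(
        patient_id="pt-001",
        transcript="I feel hopeless and alone, like I should just give up.",
        vocal_flatness=0.8,
        negative_affect=0.7,
    )
    print(route_check_in(response))
```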

[Marty]

Whoa, Loren, that’s crazy. That is a touching story. That’s a touching story.
So, okay, what do you have to say about that big OpenAI, I don’t know if you’d call it a scandal, that came out about the kid killing himself because of his chats with ChatGPT? What do you have to say about that? Because you’re doing the opposite.
This is crazy.

[Loren Larsen]

Yeah, and I think, look, I think I know people, a lot of people, I know people who have used ChatGPT to get advice, to get counsel, to probably help them through situations. The problem is we just don’t know what it will do under certain circumstances.
It hasn’t been trained or had the guardrails put around it for safety that I think are required. It’s not a qualified therapist, it shouldn’t probably be doing therapy. It should hand that off to people.
And clearly it hasn’t always done that. Lots of examples of people who have engaged with AI. And it’s designed to ratify what you’re saying, right? It’s very sycophantic.
And I think we’ve probably all experienced that. Like, oh, great idea. You should do that.
And there’s this case of the person who was planning to attack the Queen of England at Windsor. And he was having this dialogue with it of like, I don’t know if I can do this. And it’s like, no, you can.
I believe in you. Did this actually happen? Yeah, I’ll send you the reference. No way. Yeah.

[Marty]

I don’t believe that. No way. I feel like ChatGPT would be like, I don’t know if that’s a good idea.
It really did that?

[Loren Larsen]

That’s hilarious. I think the Queen is this and that. And it’s like, yeah, yeah, you’re right.
Yeah, yeah, we’ll do that, yeah. Yeah, right? And so it starts to build this inertia for his thinking patterns, which it continues to reinforce.
And then dangerous things happen, right? And so I don’t think these tools should be used for interventions today. There may be something out there that’s been trained really well and had the guardrails put around it.
But certainly off-the-shelf LLMs are not the thing for that. Our goal is to help providers build a better relationship with their patients, see what they can’t see, sort of give them superpowers to see all their patients all the time and figure out who really needs their help the most next.

[Marty]

Okay, yeah. So the key distinction there is ChatGPT was like consoling the kid, whereas you guys like picked up like, okay, he’s not okay. And then you sent him to real people.

[Loren Larsen]

Yeah, and I think that’s still the right way to handle these situations. Will we get to the point where that isn’t the case? I’m sure we will.
I have no doubt that we will. And if there’s nobody available, then hey, let’s take the next best alternative. But today our mission has been: how do we help people do better at helping people and reduce the burden on our providers? We all know that healthcare providers across all of healthcare, including behavioral health, are burning out and leaving the field.
They’re not well paid and they work way too hard. If we can reduce that burden on them and help them have more impact with the time they do spend at work, then that’s a good use of technology.

[Marty]

Yeah, yeah. Yeah, still crazy stories. So looking ahead every single year, like it’s crazy because I look at something I did with ChatGPT last year, like some article or something I cooked up, whatever.
And the language is completely different. It’s a lot worse. And I’m like, wow.
You think it’s worse now? Oh no, no, no. Like back then.

[Loren Larsen]

Before, before. Okay.

[Marty]

Because it’s got so much better now.

[Loren Larsen]

Yeah, yeah, yeah.

[Marty]

And now I’m using like Claude and stuff too, which is also better. And it just keeps getting better. Everyone knows.
It’s like, you know, it’s getting better and better. We’re building out these data centers, you know, Moore’s law, et cetera, et cetera. So as AI continues to improve, how is that going to be reflected in your company?
Like, what are you guys going to do when AI gets better and better and better? What, like, what’s the crazy like sci-fi version of Videra Health where, you know, AI is really, really powerful and you guys are using it for what you do.

[Loren Larsen]

Yeah, I think that’s a great question. I think there’s sort of a few components of that. I think there’s the language part of it, which we tend to not focus on.
We obviously are using LLMs to have a conversation with a patient and generate, you know, based on all of this data, what’s the right next thing to ask? But we’re really trying to look at not just what a person says, but how they’re saying it. And looking at the whole person.
So it’s not, you know, I can say, yeah, I’m feeling great today. Things are fine. It’s really wonderful.
I’m super happy. But if you transcribe those words into a doctor’s note or something, you’re like, yeah, we’re super happy today. And you’re like, maybe not, right?
So we’re really trying to look for the whole person, the whole communication. You know, we know nonverbal communication is an important part of it. So we’re really focused on that aspect.
Whereas a lot of the growth that we see in kind of the off-the-shelf LLMs we all use every day has been on the language side, which has been incredible, right? And I’m even blown away by how much code you can write, right?
Like just, you know, I’m a software engineer by background, a computer scientist. I still build programs, but I don’t write code very much anymore because an LLM can just do it for me, right? And these are just things I’m doing for fun. Leave it to the professionals at Videra.
But just the ability for it to just write a working program from a very simple set of interactions with language is amazing, right?
So, you know, I think the sci-fi version for us is, and what we’re sort of, we often use the metaphor of like the Star Trek tricorder that, you know, they’d take out to a planet and do all these readings. But like, what’s the tricorder for human health, right?
And we envision a world where there’s one app, your doctor sends you a thing, says, hey, Marty, how’s it going? You check in. And it’s just a normal thing.
They check in every three months and you answer some questions. And we’re reading all of this from it. And we’re saying, oh, you might, seems like your cognition’s a little slower.
Have you had a traumatic brain injury? Oh, I did get this concussion, but I didn’t think it was a thing. You know, maybe you should come in.
Let’s get some imaging on that, right? So we envision this idea where there’s an app on your phone, you check in and there’s 30 different biomarkers that tell your doctor and you, hey, here’s something you need to look at, which just doesn’t exist today, right?
We have to go get blood labs. We have to go in and they’re like, oh, you seem fine. I don’t, and the great thing about AI is like, if you’re doing these check-ins regularly, we’re building this visual data set of what you were like at each stage of your life, right?
I mean, you probably had stories where somebody you know went to the doctor like, I just don’t feel right, doctor. And they’re like, I don’t, you seem fine to me. I don’t really know you.
I don’t remember what you were like last year. I have a thousand patients. How would I know? And you’re like, no, something’s off.
They’re like, man, you seem fine. And you’re like, no, you’re not hearing. You’re not, you don’t understand me.
And we can avoid that, right? Where it’s like literally like, no, there’s actually a demonstrable difference between what you were like last year and this year. And then you could tell the doctor like, yeah, actually, they’ve got a facial tick.
It wasn’t there last year. And he’d be like, oh yeah, I didn’t even notice that. Okay, great.
Now let’s go down that path for diagnosis, right? To help guide the clinician to make the right treatment path and observations for you.
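
[Editor’s note: the longitudinal comparison Loren describes, checking a new reading against the patient’s own history and flagging what has drifted, might look roughly like the sketch below. The biomarker names, minimum history length, and z-score cutoff are illustrative assumptions, not Videra Health’s method.]

```python
import statistics

# Illustrative sketch: compare each new check-in against the patient's own
# baseline and flag biomarkers that have drifted. Marker names, the minimum
# history length, and the z-score cutoff are assumptions for this example.

Z_FLAG = 2.0  # flag readings more than two standard deviations from baseline


def flag_changes(history: dict[str, list[float]],
                 current: dict[str, float]) -> list[str]:
    """Return biomarkers whose current value deviates notably from baseline."""
    flagged = []
    for marker, value in current.items():
        past = history.get(marker, [])
        if len(past) < 3:  # not enough history to establish a baseline
            continue
        mean = statistics.mean(past)
        spread = statistics.stdev(past) or 1e-6  # avoid dividing by zero
        z = abs(value - mean) / spread
        if z >= Z_FLAG:
            flagged.append(f"{marker}: {value:.2f} vs baseline {mean:.2f} (z={z:.1f})")
    return flagged


if __name__ == "__main__":
    history = {
        "speech_rate_wpm": [150, 148, 152, 149],      # prior check-ins
        "response_latency_s": [0.80, 0.90, 0.85, 0.80],
    }
    current = {"speech_rate_wpm": 110, "response_latency_s": 1.6}
    for finding in flag_changes(history, current):
        print("Review with clinician:", finding)
```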

[Marty]

Whoa, yeah. That is something. Okay, so there’s like a preventative aspect in here too.

[Loren Larsen]

Yeah, I mean, I think preventative or early detection, right?
Okay.
There’s so many diseases today that are like, oh, it took me five years to get diagnosed. It took me seven years to get diagnosed.
It took me 10 years before I decided to go see someone about that. But if you did a thing and I said, hey, you should see someone about that, you might not procrastinate for 10 years.
And then it’d be easier to correct that situation before it becomes something chronic, something severe, something life threatening.

[Marty]

That’s, I mean, that’s great. And it’s low cost too. It’s just like, I mean, well, the cost is only getting smaller is what I’m saying in the long run.

[Loren Larsen]

I mean, if you want to announce my new price discounting strategy, sure. But the cost to deliver these things, yes, it is going down over time.
Or we can do more and more at the same cost, right? So instead of measuring one disease state, now we can measure five, 10, 15, 20 at a similar cost.

[Marty]

Yeah. And so, okay. What about like, so if you’re training all these models on visual data and like what that indicates about the person, I feel like couldn’t this go beyond just spotting diseases, like lie detection or this or that?
Like, you know what I mean? I don’t know. Has anyone asked you about that?

[Loren Larsen]

Yeah, it’s really funny. I don’t know if you’ve read the new Dan Brown book. It’s called The Secret of Secrets.
It’s really funny. There’s a whole place in there where this guy gets sort of kidnapped and they’ve got this camera looking at him and it’s looking for nonverbal behavior.
And it turns out it’s a secret government agency called In-Q-Tel.
At HireVue, we ended up sort of, we had to make a choice back then. We had played a lot with lie detection and talked to government agencies about that.
Could you take this HireVue technology for job interviewing and use it for lie detection or healthcare?
And we had to make a choice, like which path do we go? We chose healthcare, but lie detection is really, really interesting and hard.
Even MRI traditionally has not been very good at, you put someone in an MRI machine, it’s still not accurate at predicting deception.

[Marty]

They use MRI machines for lie detection?

[Loren Larsen]

In studies, they’ve done it, right? Like, oh, if we looked at their brain, could we tell if they’re lying?
Because just from cameras, just from nonverbals, it’s very, very hard to do it.
Even looking just at the eyes, it’s hard. It’s not 100%. It’s nowhere near 100%.
And even humans are, even humans who are really well trained, the studies say, even like law enforcement, like, ah, I can spot a liar.
The data says they’re barely better than random.

[Marty]

What barely better than random? What about like a professional poker player?

[Loren Larsen]

I don’t know if I’ve seen the studies on that. I don’t know. I don’t have to, I don’t know the science.
I mean, certainly some people are better than others, right, at reading tells.

[Marty]

That’s like a movie idea. Professional poker player turns into an FBI agent because he can like tell when something’s off.
Or like turns into a doctor because he can tell when something’s off.
Yeah. But I think I read somewhere like doctors used to live with like the families that they like took care of.
Or they’re like, we’re neighbors and they like know the people well and know their diets.
And that’s kind of changed. And you’re bringing that back with AI, which is just a thought that came to mind.

[Loren Larsen]

Yeah, I think that’s right. I think it was so much easier in the old days, I guess, where, you know, somebody lives in a small town.
And, you know, I have a friend who’s from like a small town in Iowa.
And it’s like, yeah, the doctor in the town delivered him, his dad, and all of his siblings.
Like they’ve known the family forever. They know all this background.
And they do remember them because they saw him at the soccer game and at the town festival.
You know, at a small town, you can kind of do that.
But it’s very hard in modern medicine to do that.
And I think we are really trying to help improve that relationship that providers have with patients and augment their memory.
Right. You can’t remember the details of a thousand patients you see once a year.
It’s impossible.

[Marty]

Yeah, no, yeah. And over time, I guess, what you guys are doing is like, personalization along with time is like a U-shaped curve.
It just goes drastically down as we modernize.
And then with AI, it’s probably going to go back up.

[Loren Larsen]

Yeah, I think that’s right. I think the other thing that we find is that we really don’t know what works.
In medicine, especially in behavioral health.
Right. If you have heart disease, like there’s a procedure, there’s a protocol.
Everyone’s been trained on it. Kind of works.
You know, if you’re getting open heart surgery or something, if you go to behavioral health for this and you’re presenting problems like, well, I have this, this and this, and I just don’t feel well.
And I think I might hear voices sometimes and whatever.
We don’t really know what works.
Even in substance use, it’s like your chance of going to substance use treatment and getting the first, you know, getting sober the first time for the rest of your life is really poor.
And now we have medication-assisted therapies.
But there’s not really great data on what works for what types of people at this stage of their life with this history.
But if we collect all this data, we’ll really know like, oh, this person’s a 35-year-old white male who’s had a history of this drug, this amount of drug use.
Here’s his genetic profile.
Here’s the things that we know will work today with the highest probability.
Won’t be 100%, but today it’s mostly just people like, you know what, I think this would work.
I read a paper about that a while back.
That’s great. That’s applying science.
But it’s not being able to look at all the data available about that person or that all of science knows.

[Marty]

Yeah, yeah, yeah, yeah.
Yeah, the benefits of, you know, using AI is going to just compound, it looks like.
What about, yeah, what about, like, what about that thing you said earlier about AI eventually being used as the person to talk to the patient or the next best alternative if a person isn’t available?
What would have to be the tipping point for you guys to implement that or just in general for you, Loren, to think like, okay, I think this is ethical.
I think this is right.

[Loren Larsen]

Yeah, I think it starts with what I just said. I think you want to like, okay, what do we know about this person?
What do we know about their condition?
And do we know what works for that condition?
And then you go, okay, do we have an AI that can deliver that, right?
If they need MAT treatment for substance use, AI over a computer is not going to deliver medication-assisted therapy.
So you need to send them to somewhere to get medication.
But if talk therapy would work for this condition, for this person in this situation, let’s have a chat, right?
And I think until we get to the point where we can prove that for a set of conditions, for a set of people, we can prove that AI can deliver the same outcomes, which we honestly don’t even track outcomes very well in behavioral health.
It’s kind of like people leave the treatment and they’re like, I feel a little better.
I mean, and they give them a survey, but it’s not a very robust measurement.
It’s not done over time.
It’s usually not linked back to the whole course of treatment to say, okay, we did this, this, this, this, and this during treatment.
And here were the outputs over this period of time.
We just don’t do that.
So we need to build those datasets and then we’ll be able to say, can AI deliver the necessary intervention here to produce the outcome we want?
We just don’t, we’ve never had that dataset before.
And that’s the dataset we’re building, right?
We will sit inside of the therapy session, listen to it.
We’ll produce notes for it, but we’re also using, and then after therapy, we’ll follow up and see how the patient’s doing.
Now we can link what happens in a therapy session to what actually happens to the patient in the real world.
Then we can build AI that produces, does the right things in there to produce the outcomes that work.
So we’re working towards that, but I think we’re taking a very conservative approach about how to do that, how to study that.

[Marty]

Yeah.
Okay.
What have you taken, I’m just personally wondering, what have you taken from your experience at HireVue into this new thing you’re doing?
Or just also what generally have you taken from the past, the lore, the story of Loren and put it into Videra to make this really your own and how was it part of your life and mission and all that?

[Loren Larsen]

Yeah, obviously the HireVue story, for those who don’t know, is kind of the job interviewing platform.
Let people interview asynchronously, just like we’re checking with people now, but instead of telling their doctor how they’re doing, they would interview for a job, answer job interview questions.
And then ultimately we built AI that would listen to that interview and say, okay, you had a hundred people interview, here are the top 10 most likely best candidates.
And I think we learned a lot about engaging people in a video interaction.
HireVue’s done a hundred million job interviews at this point with people.
You learn a lot when you’ve done millions and millions and you survey people and say, what did you like?
What did you not like to create a great experience?
So I think that’s been a really big piece of it.
And then I think helping people get jobs and measuring that is really hard.
Like if you’ve interviewed people and you’ve sat around a table going, oh, I really like that candidate.
Somebody else at the table’s like, they were terrible.
Who were you talking to?
Were we talking to the same person?
It’s very subjective how we make these hiring decisions.
And so it really taught us how subjective this process is and how to measure and quantify things.
But I think the other part is just, how do you make sure these models are fair?
Because you don’t want to discriminate against people.
In many ways, the world of hiring, at least in the United States, is more regulated than healthcare.
You literally can’t make a decision that discriminates against a group of people, right?
If you always pick the men for this or the white people for this, then you’re discriminating.
And so we had to test our models very, very rigorously to make sure that we weren’t disadvantaging any group.
And that’s a lesson I really, really value.
And we take to heart really carefully because we’re trying to increase access for care, not reduce access to care.
We’re trying to help people who wouldn’t normally get into care, make sure that they get into care.
If we start disadvantaging groups, it’s not helpful to anybody.
And then I think the other thing is just really, it’s been interesting to see the evolution of people and their perceptions of AI.
People did not necessarily like the idea that they were being assessed by artificial intelligence for a job.
And how do you build trust?
And how do you explain out front in a clear way, like, hey, here’s what we’re doing, and here’s why that’s actually helpful to you.
A lot of people said, well, I want to interview with a person.
And it wasn’t a choice for them between interviewing with a person and interviewing with a computer.
It was interview with a computer or just don’t get seen at all.
Just get put in the trash.
Your resume just goes in the trash.
And so it’s creating opportunity for people.
And how you communicate a message that was hard and still is hard.
But as we think about, why would you want to talk to an AI instead of talking to your doctor?
How is this beneficial to you?
And what should you do with the results of that?
All of those communication elements are pieces that I think we had to explore at HireVue and apply here as well.
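
[Editor’s note: the bias testing Loren describes is commonly approached by comparing outcome rates across demographic groups. The sketch below applies the four-fifths rule, a standard screening heuristic from US hiring practice, to made-up counts; it is illustrative only, not HireVue’s or Videra Health’s actual test suite.]

```python
# Hypothetical group-fairness screen: compute each group's selection rate,
# divide by the best-performing group's rate, and flag ratios below 0.8.

FOUR_FIFTHS = 0.8  # conventional review threshold


def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}


def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}


if __name__ == "__main__":
    # Illustrative counts: (number selected by the model, group size)
    outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        status = "OK" if ratio >= FOUR_FIFTHS else "REVIEW: possible adverse impact"
        print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```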

[Marty]

Yeah, the messaging carries. It’s a similar messaging, ethical justification in both companies.

[Loren Larsen]

Yeah, yeah. Yeah, and I think one thing that’s funny in hiring is I kind of made it like, well, you don’t get to talk to the hiring manager.
You have to do the AI thing first.
In health care, generally, you want to say, look, if you don’t want to talk to the AI, we will find a person for you.
You need to find a human alternative for people who just don’t want to, don’t trust, don’t feel like it, don’t want to deal with the AI.
As much as possible, you want to find a human alternative for those people.
So no one’s left out.

[Marty]

Yeah. Have you had any crazy stories at HireVue or now of people with serious ethical backlash or like, Loren, you can’t be doing this.
I’m not going to have my resume looked at by an AI.
Have you had any stories with that?

[Loren Larsen]

Yeah, we faced a lot of that early on at HireVue.
I think a lot of it was very well-intentioned, like, oh, I want to make sure.
Somebody who has published a thing, well, AI discriminates against people based on their skin tone, for example.
And yeah, that can happen.
And the question, so it’s very easy to say, oh, AI does this or AI does that or AI can’t do this or AI, you generally have to sort of go, well, that AI can’t, but maybe someone else, because as fast as this is changing, maybe somebody else built an AI that can do that today, right?
And tomorrow, AI will be able to do that.
Maybe not that other AI, but this AI.
It’s almost like you have to think of it as like, can that person do this, right?
Does this person have this capability?
And then there’s probably things that are superhuman.
No AI has ever done this.
And the reason I say that is it was just very easy to say, like, well, this AI discriminated, so your AI must also do it too.
And so it’s really important to kind of communicate clearly, like, no, no, we’re actually testing for that.
We’re actually really thoughtful about that.
We’ve thought that through.
We’ve tested it.
And so we got a lot of backlash early on for sort of preventing access and discriminating against people.
And none of it was really accurate.
And so the messaging is really, really important.
And look, the truth of it is, even if we do it right, someone else might do it wrong, right?
Not everyone’s doing it right.
And so I think there is this potential for AI to get a really bad reputation because of a few bad actors.
Like we talked about the example of OpenAI’s model encouraging someone to take their life.
Does that mean all AI does that?
Or is that just that version of OpenAI’s model?
And so we have to keep updating our model of what these things can do and not assume that all AI is the same.

[Marty]

Yeah, yeah.
And that kind of goes back into like the ultimate sci-fi version of Videra, assuming AI becomes perfect.
You know, GPT nine, you know, just becomes all knowing.
Yeah, omnipotent, all knowing.
And, you know, capable of anything.
What about agentic AI?
That’s like, I go on LinkedIn, that’s always the word of the day for every company, person, agentic, agentic, agentic.
What are you guys doing with like agentic AI that can go in and do things and change things?

[Loren Larsen]

Yeah, it’s funny when I, again, now everybody does agentic AI, even though their products are the same as last year in a lot of ways, right?
They just sort of, marketing’s created a lot of innovation.
So, you know, if you think about what Videra is doing, we were sort of given a task, which is, hey, go out and take this list of people and go find out how they’re doing.
Let us know. Right?
That’s kind of the job of an agent.
You give it a job, goes off and does it and brings back some result.
And so in a way, we’ve been doing that since the very beginning of Videra.
And of course, it gets more adaptive about how it does it and how it handles exceptions and that sort of thing.
So, you know, I’m excited about the potential for AI to act with more agency and more freedom and be more creative, if you will, about how to accomplish a task.
But I think a lot of it’s overblown, honestly.
I’ve seen a lot of really bad agentic systems that we’ve tested, in healthcare and outside of it.
I mean, just like how do you handle this customer support ticket or how do you handle this marketing task?
So I’m excited about it.
I think it’ll really free up a lot of people to do things they haven’t been able to do before.
But there’s a lot of hype.

[Marty]

A lot of hype, a lot of hype.
Yeah.
Do you think what do you think about all the bubble claims, all the bubble allegations?

[Loren Larsen]

Yeah, that’s a long topic.
You know, I do think I’m probably more on the side that this is going to change our world in so many ways. Most people don’t fully understand how much the world will change in the next one to two to three years.
I think we’re generally underestimating the amount of change that this will have on the world.
So I don’t know if the markets have properly valued that or not.
I think in certain cases, maybe they have.
But I think it’s the biggest shift in in our economy, in our world, in our culture, in our society.
It’s bigger than the Internet.
It’s bigger than the Industrial Revolution.
Like this is really, really essentially creating other intelligences to go and act autonomously and the risks are huge.
So, you know, I think we probably overall are underestimating it.
But whether a particular, you know, company is overvalued is a harder thing to say.
Or a group of companies that are all sort of trading and exchanging dollars amongst each other.
Yeah, there’s gonna be some winners and losers.

[Marty]

Yeah.
So what’s your, like, investor pitch then for Videra?
Like the future of the markets and the industry and then how Videra is going to jump up with that?

[Loren Larsen]

Yeah, look, I think our view is that there should be a tricorder app for health care that uses visual, nonverbal, verbal behavior to check in with patients.
That’s probably a specialized app.
There probably aren’t 10 of them.
There’s probably a few winners there.
We want to be that and measure as much about a human as possible to get them the right care.
And that it seems obvious to me that will exist in the world.
And it’s a huge opportunity to deliver better care.
And that’s Videra.

[Marty]

OK.
OK.
I was asking about like all the, you know, TAM, SAM, SOM and all that.

[Loren Larsen]

Oh, yeah.
I mean, more simple.
Yeah.
I mean, we’ve got numbers to that and there’s behavioral health numbers, there’s broader health care numbers.
You know, I think it’s easiest to say, like, there are huge numbers, hundreds of billions of dollars of opportunity.

[Marty]

And then you guys are going to go global?

[Loren Larsen]

Yeah.
I mean, I think we would love to do that now.
I think we’ve got plenty on our plate right now.
But, yeah, there’s nothing about what we’re delivering that doesn’t make sense across the entire world.
I mean, the way you get paid for it in America is a little bit perverse.
But there’s probably countries out there where we should go first.
It would be easier.
But we certainly decided to focus on the world that we know.

[Marty]

Yeah, because I just think of, like, I used to watch these documentaries about mental illness in other countries, like in Indonesia and stuff, where you just get beaten or something, like it’s like, oh, the devil’s in you or something like that.
Like, I think you can probably really help out over there.

[Loren Larsen]

Yeah, I think everybody sort of sees the world differently based on their cultural lens.
But objective data will probably help us all make better decisions.

[Marty]

Oh, yeah.
Yeah, yeah.
Exactly, you know, getting some science involved with it.
Yeah.
Like, how are you getting people to trust your AI? What’s your messaging for, like, you should trust this?
Like, how do you communicate that to someone?
How do you communicate that to a non-technical person?
Because I feel like if you start talking about your training and accuracy numbers and guardrails, like, I feel like that won’t communicate.
Like, how do you conceptually talk about AI to a layman, like, trying to get them to trust it?

[Loren Larsen]

Yeah, that’s a really good question.
You know, if you’re talking to a provider, obviously they want to see data.
If you want to talk to a health care system, they want to see ROI.
If you’re talking to just, like, the average person at a party and you’re explaining what you do.
I really feel like that’s shifted dramatically over the last four or five years.
When we were first building this, we were like, oh, that seems weird.
That seems creepy.
And I think COVID kind of made a lot of people feel more comfortable with being on camera.
And then I think kind of the experience, you know, if the experience is good, they’ll have questions like, does it make stuff up?
Does it hallucinate?
Is it going to tell me to do sort of bad things?
I think you kind of answer like, no, we put guardrails around that.
And people are like, OK, great.
Like, I think for the most part, if people are comfortable with the idea of interacting with a phone and a video camera and they’ve used ChatGPT, most people in the last few years have become much more willing to be like, oh, that sounds really cool.
That sounds like it would be a good experience.
They’ve had some kind of experience that’s enough like this that they can kind of go, oh, yeah, I think that’d be all right.
I mean, then the question is like, who sees my data?
Like, who gets to see this?
Where do you keep it?
How does my video about my mental health not show up on YouTube?
Right.
And I think that’s actually where we still have more concerns, not about kind of the AI experience, but just the regulation and the control of this data.
Are you going to keep this forever?
Is this going to show up, you know, 50 years from now?
And if you’re dealing with veterans, are you going to send this to the VA?
Are you going to send this to my insurance company?
There’s definitely those questions about where data goes and who gets to see it, which in reality follows all the normal health care rules around HIPAA and who can see what.
And patients should be able to see their own data.
It’s not our data, it’s their data.
But there are good answers to that.
But I think it’s, you know, I think trust is more about what gets recorded and what gets kept and who sees it than is the AI trustworthy today?

[Marty]

Yeah.
Like, yeah, if all these videos got out, that would be like, ugh.

[Loren Larsen]

I feel like a lot of people want to be clear.
Like, you know, some of our customers use this in a way they’re like, look, I don’t want to record.
I don’t want that recorded.
Just give me the data and then let’s dump the recording kind of thing.
So there’s different ways to approach this based on the use case.
But obviously, if we can record things and keep it over a period of time, we can learn a lot and provide more value to the patient.

[Marty]

Yeah, yeah, yeah.
Is there even like a way that anyone could like get that data if a hacker even wanted to?
Is it like stored somewhere to train your guys’ model and then it would like leak?
How does that work?
How do these leaks like work for something like video data?

[Loren Larsen]

Yeah, I mean, it’s like any kind of data that’s out there.
You have to protect it carefully.
You know, we have a robust information security process and a team that’s super smart.
Try to hack yourself.
Red team it.
Look for vulnerabilities constantly.
You have to take a very sort of adversarial approach.
You know, it’s kind of like when you’re dealing with an LLM.
There’s a tangent, but, you know, I was preparing a talk recently.
I said, give me the latest data on mental health statistics that I want to put into this talk, and it came back with some numbers.
And I said, I’m not really sure if those numbers seem right.
Are those actually correct?
So, you know, you have to sort of proactively question it.
And it goes, you caught me lying.
I’m sorry.
And then it admitted that it made up a bunch of the numbers.
And I said, can you tell me exactly which numbers you’ve made up?
And then it actually went back and said, yeah, this number was real.
Here’s a citation.
This number is real.
Here’s a citation.
This one I just made up out of thin air.
Right.
And so you always have to play the adversary to this thing, whether it’s your security or your LLM, and try to see where these things can be exploited.

[Marty]

Playing the adversary.
Yeah.
What’s that called? Like ethical hacking, early in cybersecurity?

[Loren Larsen]

Yeah, absolutely.
Yeah, absolutely.

[Marty]

You guys hire a bunch of ethical hackers.

[Loren Larsen]

Yep.
Yep. We have bug bounties and people will report things they find.
It hasn’t happened very often, but that’s not necessarily an invitation to everybody to go for it.
But, yeah, we do.
Just like anybody who’s in this business.
Right.
You have to be looking for these things and having both your internal and external people helping.

[Marty]

Yeah, I’m just thinking for anyone listening out there who’s in cybersecurity or trying to go into cybersecurity.
It seems like AI, especially in health care, like AI is creating more of a demand for that.
Would you agree?
Because you’re needing to store all this data.

[Loren Larsen]

Yeah, look, I think you see hacks against health care organizations all the time, trying to get at health care data.
Whether it’s your electronic health record or something else, your labs, your images, it’s a constant target.

[Marty]

Yeah.
Yeah, it’s freaky.
Yeah, and they’re probably using it for, like, if someone misuses that data, sells it to, like, insurance.
Yeah, that can be bad, right?

[Loren Larsen]

Yeah, I mean, it’s not.
Yeah.
I mean, any data that’s stolen can be misappropriated.
I mean, I think there’s, you know, payment data is probably a lot easier to exploit.
Demographic data for sort of identity theft is probably valuable to exploit.
Other sorts of data have less clear use cases.
But if anybody steals your data about anything, like none of us like that, right?
We want our data to be our data.
We want it to be kept secure and safe, even if we can’t anticipate how someone would misuse it.
It’s ours.
Nobody else should have it.

[Marty]

Yeah.
Well, you know, with that, so there’s a high emphasis on privacy, on, you know, the ethical use of AI, making sure that if it’s not ready to talk to the patients, you know, we have humans talking to patients.
You’ve got that experience from HireVue making Videra really, really good at detecting things in people’s videos.
It looks like you’ve got a great thing going on.
So I’m really happy for you coming on.
And I’m really happy that Videra is a thing.
I’m really happy that, you know, these preventative tools are getting out there.
You know, too many people are going in these hospitals too late.
Right.
So thanks for coming on.

[Loren Larsen]

Thanks, Marty.


About the Guest

Loren Larsen, CEO and Co-Founder, Videra Health

Loren Larsen is the CEO and Co-Founder of Videra Health, a digital health company focused on AI-driven video assessments for behavioral health providers. He has more than 20 years of experience in engineering, research, and leadership, and holds over 25 U.S. patents. Before founding Videra Health, Loren served as Chief Technology Officer at HireVue, where he led development of video-based assessment platforms and machine learning systems. At Videra Health, he leads efforts to apply video and artificial intelligence to improve behavioral health assessment, care prioritization, and access. Loren’s background includes developing technology that analyzes human language and emotion, and he also co-founded Nomi Health, a direct healthcare services and technology company. His academic background includes a Bachelor of Science in Computer Science from the University of Utah and a Master of Science in Computer Science from Clemson University.
