Dr. Houman Hemmati: Why Clinical Trials Fail

In this episode, Ram Yalamanchili sits down with Dr. Houman Hemmati to unpack the real-world inefficiencies plaguing clinical trials and how AI can finally make a difference. But not just any AI. We're talking about AI teammates, tools that work quietly in the background to automate documentation, reduce site burden, and improve execution without disrupting care. Dr. Hemmati shares his firsthand experience as an investigator and biotech leader, breaking down why most clinical trials fail in the execution phase, not the science. This conversation pulls no punches and offers practical, tested ways AI can actually help trial sites work smarter—not harder.

Transcript

46 min

Ram Yalamanchili: Hi, Dr. Hemmati. How are you?

Dr. Houman Hemmati: I'm very good. Thanks so much, Ram, for having me.

Ram Yalamanchili: It's always great to see you and talk to you, so I'm really looking forward to this. As we get into this, one of the first things I'd like to start with is: tell us your story. How did you end up where you are, and why?

Dr. Houman Hemmati: My background's a little interesting. I'm an MD, but I also got a PhD, and I always envisioned a career in which science plays a major role. Traditionally, that involves being in an academic setting, practicing academic medicine with a research lab or clinical research on the side.

And that's not what the world wanted for me. I trained in ophthalmology, and afterwards I got a fellowship to train in a subspecialty, cornea and refractive surgery. It came along with a postdoc at MIT with Bob Langer, who does chemical engineering and specifically works on extended-release drug delivery.

And so my career took a big shift when I went there and learned how to actually make drugs, and make drugs last longer, specifically in the eye. Shortly after, I practiced in the academic setting for a year, and then I was recruited by Allergan, as it was known back then, to be involved in clinical development in ophthalmology.

I spent a couple of years there, and once the company was sold, I went somewhere else and eventually ended up on my own as a startup entrepreneur and co-founder, as well as a part-time chief medical officer for a variety of different companies, all in the ophthalmology space.

So I spend my life now working on early- to late-stage trials, as well as preclinical development, of drugs intended to improve how we take care of different ophthalmic conditions.

Ram Yalamanchili: Yeah, just from looking at your profile and what you've done, you've worked on many programs and have obviously built deep expertise in this area.

One of the things I'm really looking forward to is understanding your learnings in this space and what you wish to change. I think we're also at an interesting time, where there are many opportunities for change, especially from a technology perspective.

There are new innovations which are, I think, wonderful in terms of actually implementing change and getting us there. That's what I wanted to unpack today as we go through this conversation. So the first thing I'll start with: I'm very curious about the pain points. What does your day-to-day look like when you think about developing a drug? From a sponsor's perspective, what are the types of pain points you usually see and spend a lot of time on, either you personally or your team?

Dr. Houman Hemmati: I wish we had about four hours to do this, because I could spend that much time focused just on pain points.

There are a lot, right? There are regulatory pain points, CMC pain points, fundraising pain points, clinical and clinical trial pain points. Each of them could be a several-hour podcast. With respect to clinical development, which is the part I spend a lot of my life focused on, in addition to the other aspects, it's several things.

It's, one, getting a trial operationalized, which means selecting sites and making sure they truly understand the protocol so that they're not making mistakes. It also means getting the study launched and translating the protocol into an actual clinical trial, which is just like taking the plans from an architect, handing them to a builder, and making sure they build it without the architect's involvement.

Sometimes the builder does the wrong things, right? There are a lot of things that aren't in the architectural plans, and a lot is up to interpretation. Sometimes the architect designs a building that can't really be built, or be built well, and sometimes collaboration between the architect and the builder is needed.

I see that same thing in clinical trials. Then a lot of it is running the trial smoothly: making sure it's enrolling properly, and making sure that adverse events and other problems, like enrollment criteria that are not perfectly tuned for that particular trial, are caught early so that amendments can be made.

It's staying on top of things, understanding what kind of data is coming in, whether there are any problems with the overall study design that may lead to poor or uninterpretable data later on, and whether there's anything that would cause the trial to have to be stopped early. And then finally, it's interpreting the trial once the data's there and really making good sense of it.

All of these things pose challenges, and for all of us in the trial world, one of the biggest issues is that clinical trials are trapped in the 1980s and 1990s when it comes to how we do things. We are literally using paper and pen for a lot of things. We are literally depending on phone calls and faxes to transmit information.

There are delays caused by those, there are mistakes caused by those, and that lack of real-time, high-quality visibility into what's going on, and of the ability to react to information in real time, is causing problems for everybody. So I think there are great opportunities to fix a lot of our pain points by modernizing how we do things.

But unfortunately, until recently, everything has been trapped in decades-old technology and decades-old thinking. Right now, a lot of us are looking at how things are done and realizing it's time to switch how we do things, to modernize, and to create tools that would actually allow us to leverage modern technology to resolve a lot of these issues.

Ram Yalamanchili: Yeah, some of the things you're mentioning really resonate with me, because of my own past experience as a biotech founder in the oncology space, working on a large clinical trial. It's very interesting that the types of problems you're mentioning show up in a different therapeutic area as well.

I think it speaks volumes about the types of problems we all face as an industry, across the board. So, unpacking a few things: what, if anything, has worked to improve some of the problems you're talking about? And I'm talking about over your past many years of work in the space.

Visibility is one of the things you just mentioned, real-time visibility. I could imagine a world where that was non-existent; maybe today some amount of capability exists somewhere. But I'm just curious: have you seen some improvement in visibility, or in your interactions with, let's call it your project management or your CROs, to get real-time information on what's happening with the trial? You mentioned the ability to speed up trials, so what has your experience been over your ten-plus years in the space, and what have you seen happen so far?

Dr. Houman Hemmati: Yeah. One thing that has been nice, though it comes at a bit of a cost, is having sponsor-level access to the electronic data capture system, to be able to log in and, in a blinded or masked manner, see what's coming in.

So you're able to look at patient-level, site-level data. But that comes at a bit of a price, because a lot of that information hasn't been reviewed, so the quality may not be great. A lot of the data hasn't yet been entered into the system by the site; they are still, by and large, keeping paper-and-pen records, and then it takes someone manually transferring that into the system.

And even after they've done that, there may be issues of data quality, right? If it hasn't been monitored, if someone hasn't done source document verification, for example, if you haven't validated what's in there, it may be erroneous information, or it may be delayed, or both. So that level of sponsor access may come at that kind of price: you may make mistaken assumptions based on what you see. On top of that, the systems are cumbersome, and they're not necessarily built for sponsor-level access, especially for people who are not in the weeds on a day-to-day basis.

People in my role in clinical development, who are more focused on designing trials and then handing them over to clinical operations, don't necessarily have all the knowledge and tools to make the most of that data. We want to be able to extract what we need without spending hours and hours, and potentially making mistakes, in the process of finding the information we need.

So there are a lot of problems associated with that. It has helped; it has been nice to have that access, and some trials get you more access than others. But without the high quality, without the timeliness of the data in the system, it can actually backfire in some regards.

Ram Yalamanchili: But you're also saying that while you have visibility through the EDC, the data's only as good as all these other steps that need to happen, which are still very manual, right?

Dr. Houman Hemmati: Mm-hmm.

Ram Yalamanchili: Yes. If somebody had written it down on paper somewhere and that paper was never entered into the system, there's nothing you can do, even if there's an EDC, right?

Dr. Houman Hemmati: That is correct.

Ram Yalamanchili: So maybe the problem then comes back to: where is the site? Who is the site? How can I empower the site to do better? If that is truly your bottleneck, then you're probably shifting the problem from data capture to site workflow, in some ways.

Dr. Houman Hemmati: That's right.

Ram Yalamanchili: Got it. Everyone has had great experiences, and not-so-great experiences, working with the industry and within these problems. Have you seen practices or opportunities where you just said: hey, this is great, this worked amazingly well? Maybe it's working with a certain CRO partner, whatever it might be, but at the same time, maybe it doesn't scale. So I'm curious: have you had cases where something worked great, and you wished it were able to scale really rapidly, but it wasn't?

Dr. Houman Hemmati: You know, generally all of it is problematic in that regard. The issue is that anytime I want to be really sure about what I'm looking at, whether I personally have access to it or I'm asking someone else, it still requires me to get into a phone call or a meeting, or at least a series of emails, with someone who really knows what's going on, to ask them: what is really happening here?

And I think that's the part that can't be scaled: in order to be sure that what you are seeing is real, is complete, and has been verified, validated, and is thorough, the only way is to have a meeting with someone, or at least to participate in a long series of emails.

Regardless, that direct one-on-one interface is required with whoever is the keeper of all that information, and sometimes that is not so easy, right? Those people usually have other jobs, whether they're working on other trials or, even if they're dedicated to your trial, they have actual operational duties. And sometimes communicating with a sponsor and providing that information is last on their list of priorities, for obvious reasons.

Ram Yalamanchili: Right. So then it becomes almost an attention problem, right? Attention from the right person, which, if you could scale it, would solve this. But clearly that is not the easiest resource to scale. Great people are in limited supply, and we only get a certain amount of attention from them.

Dr. Houman Hemmati: Well, there are budgetary issues as well, right? You can pay for it; you can certainly ask a CRO to dedicate multiple full-time people to a trial simply for those reasons.

But the problem is that that is exceptionally expensive. Clinical trials right now are so expensive that sponsors are literally asking CROs, and I've just done this recently: can you please limit the number of people who are in our weekly meetings? We don't see person X and person Z contributing very much. They have said nothing in the last several weeks of attending meetings, and we've gotten a big bill for it.

That's literally what sponsors have to do; I've had to do it. It's part of that lean-trial mentality. And when you do that, imagine having to add more people just to have that extra visibility and extra access. It doesn't make financial sense. So I think cutting that out, being able to automate that process or make it quicker and simpler without lots of additional full-time people involved, is essential.

Ram Yalamanchili: Right. And switching to this: obviously, in some ways cost is tied to capacity, because you're hiring resources. And if you're trying to be more efficient, you might choose a path where you're reducing the capacity of what you could do with the resources you've got. That's basically what you're saying: give me exactly this number of people on my project, or keep it to a certain number of hours.

How does that affect your decision making? I'm curious: are there opportunities where you say, well, this trial is going this route, but I'm very curious about something else and we need to dig into it, and then, if answering that additional question comes at a massive cost, you have to weigh whether to do it or not? Does this happen on a regular basis, from your perspective?

Dr. Houman Hemmati: It has. In terms of interpreting trial data, for example, that's something I've found essential. A trial I was recently involved in appeared at first glance not to have worked. But then we dug deep into it, a process that came at a great cost: we were billed $25,000, and it took four weeks for their team to take the data and analyze it the way we wanted. That's something I could have done on my own, in a matter of seconds or minutes, had I had access to the full database of validated information. It rescued the trial: it showed us that there actually was an amazing effect, something we didn't anticipate when we initially designed our analysis plan.

Similarly, I know of another trial, one I wasn't involved in, where the data again came out as a failure, a blockbuster failure in a big Phase 3. However, once they went back and looked at two different, very well-defined subgroups, they found that it actually worked really well in one of them, something that hadn't been expected. Guess what? That company was just sold for a couple of billion dollars based on that analysis of the data, which came only after they thought the trial had failed. There are many such stories.

But that analysis itself took several months and cost many hundreds of thousands of dollars, because they needed an army of people to look at the data and figure out how to make sense of what they had expected to be a success, but which didn't turn out that way in the top-line data. Many things like this could go much faster, and actually go better, because if you have software that can think better than one, two, or even ten people, and can look at all the different possibilities rather than just the few that initially come to mind, we can end up extracting far more information out of trials: those that have succeeded, those that have failed, and those that are somewhere in the middle.

There are infinite possibilities, but unless we have that immediate analytic ability, we may make the wrong assumptions about our trial data, whether it's in real time with masked data coming in, or at the very end, when you have top-line data and you're trying to make immediate decisions on where to take your program, or attempting to raise funding to continue it. These things are very time-sensitive, and you don't always have the ability, or the luxury, of spending more money or more time figuring things out.

Ram Yalamanchili: I see. There's a lot of interesting material to unpack there. I've got a couple of things.

First, there seems to be this notion that: I'm going to run the trial to a certain extent, then do an analysis, get some insight, and keep moving. But even doing that, you somehow found out only at the very end that it had failed, and then you had to do some kind of subgroup analysis, and then you found out, well, this actually worked in this particular case, right?

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: Why hadn't that happened earlier? What contributed to that, or what limitations exist today, such that there is no course-correction mechanism where you could, at a much higher frequency, figure out: could I have done something different? Could I have made an amendment to the protocol to be able to do this? Are those the types of things you see, a lack of capability in the current infrastructure? Can you talk a bit more about that?

Dr. Houman Hemmati: Yeah. It comes back to having real-time access to data, right? One issue, of course, is that in a trial that is randomized and double-blinded or double-masked, you can't have access to unmasked data. So you're limited in what you can do, and you really shouldn't, as a sponsor, have access to that data, because it gives you the opportunity to change the course of how things are done in a way that is unfair and creates invalid trial data.

At the same time, however, there are opportunities to look at what's coming in, even masked, and say: you know what, we're seeing that there are no responders among subjects who meet a certain set of criteria, regardless of whether they may or may not be on active treatment. They may be on placebo or active, it doesn't matter; no one in that group is responding. Whereas there are others where we see half of them responding, or a third, or something better. Guess what? Maybe we shouldn't have enrolled anyone who has a certain set of criteria, whatever it may be for that particular disease indication. That allows a sponsor to get ahead of it and perhaps tweak things to avoid futile treatment of people in the trial for whom this may not work.

That benefits potential subjects in the trial, who were otherwise being exposed only to risk with no potential benefit. And it benefits the sponsor by getting things to move much more quickly, increasing the probability of success and the probability of an approval down the line. So it's a win-win, I think, for everybody. But until you have the data and are able to look at it, you may not know.

The same thing, by the way, applies to adverse events. As someone who has done medical monitoring and has been responsible for signing off on adverse events in clinical trials, and for monitoring for things that are serious, that may even be unexpected, but at the very least are severe enough to warrant further investigation: if you don't have that complete, real-time picture, there may be delays. Sometimes it's up to the site to proactively remember to notify the CRO, and then up to the CRO to proactively remember to notify the sponsor. These delays can cause problems for the subjects in the trial, not to mention for the drug itself, and you need to know about them.

As someone who's done medical monitoring, I want immediate visibility into those issues without having to wait for that human interface. You don't want that delay. I've seen those delays occur, and in many indications it's not that big of a deal, but in some indications it's a life-or-death issue, and you do need to know those sorts of things as soon as they occur.

Ram Yalamanchili: That makes sense. You've mentioned delays, and time is another thread: everything you've unpacked clearly shows this urgency to access data. I get that. There's the cost equation, but there's also a time equation, right? You want to move at a certain pace. And I hear very often that recruitment is the biggest challenge for trials to get where they need to go. It also comes back to how fast your sites are activated, how good your sites are, the whole works. It's a multivariate problem.

So my question for you is: how do you figure out how many sites you can activate, and what is the ideal number? Because in a theoretical situation, if you had infinite scale and an infinite number of sites that could be added, maybe the recruitment problem would be solved. Is that a good way to think about it? How do you think about it?

Dr. Houman Hemmati: Yeah, recruitment comes down to a few things. There's site number: how many do you have? There's site quality: you can have infinite sites, but if they're not good sites, or if they don't understand the protocol, or if they're deprioritizing it, or if there's a problem somewhere else in that equation, then it's not going to help you.

And then there are the enrollment criteria. Even the very best sites cannot enroll an unenrollable trial, where you have inclusion or exclusion criteria, or both, that are just not reasonable. In some indications, newer indications especially, you're in uncharted territory, and it's not until you attempt to enroll the trial that you find out that maybe your enrollment criteria were a little too narrow and restrictive, in such a way that you're just not getting any subjects.

I've seen and experienced that numerous times in trials where we went into uncharted territory and waited and waited and waited. I'm in the middle of one trial right now, so I can't talk about the details, but there was one enrollment criterion that caused enrollment to be incredibly slow, and it's for a rare disease, which made it far worse:

we were getting one subject in every several months. It wasn't until we did a manual review of everything that we finally figured out what was causing all these screen fails. The moment we fixed that, with a small amendment that didn't impact the trial in any way in terms of safety or efficacy assessments (it was just a guess that we got wrong), enrollment suddenly picked up much more quickly. It allowed the company to move forward without having to do a massive raise or abandon the program, which could otherwise have happened.

I've had many other examples of this as well, where real-time access is helpful, but delays in that real-time access caused delays in the ability to detect those issues, sometimes to the point where they could have been devastating to the company or the program.

Ram Yalamanchili: Interesting. By the way, I've heard this particular scenario from many, many folks, especially in the clinical development world.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: And would you say this comes back to the earlier discussion of not having access to the data, or at least to the people to do the analysis, or of not being able to ask the questions you wanted to ask as soon as possible? Is that related to why you waited as long as you did before the amendment was made? And then it gave you a pretty quick activation, a step-function change, when this particular change was made. That's fascinating.

Dr. Houman Hemmati: Yeah. And look, in a lot of trials, by the way, the sponsor doesn't have visibility into things that are happening until that weekly meeting, right? There are a lot of things that happen between weekly meetings, and you don't necessarily want to wait five, six, or seven days to know about something. Sometimes things aren't even available to the CRO until a week later, so now you've got a two- or three-week delay to finding out important information. That's unacceptable in many cases.

And remember also: there are limited people, and those limited people have many things on their lists, and limited bandwidth and limited priorities because of that. Obviously their priority is going to be executing the trial, not necessarily everything else the sponsor is worried about. So I think it's really not about adding people or replacing people. It's about taking the people we already have and allowing them to work faster, do more with the time they have, and meet all of the priorities at the very same time. And I think that's possible by shifting how we approach technology, and how we look at conducting trials to begin with.

Ram Yalamanchili: Yeah. And I realized something interesting from what you're saying. There's this notion that we're capturing data in the EDC, which it obviously does, but there's also so much it is not capturing. That is knowledge being built up in your team, in whoever's working on the trial, and the only way to query that knowledge is to ask them, in the midst of the many other priorities they already have.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: So you've got this contextualized data: part of it is in the EDC, and the rest of it is in the ether, let's say, in the team's knowledge.

One is potentially easy to query; the other is not, because it's generalized context. It's a very interesting way to think about this problem, because I see a lot of parallels in how I see this evolving, including in what we're working on and what we've built, and in how our customers are using some of the tooling and capabilities we have.

So let me switch to the next point. I come from the view that, as you've said, there is going to be a shift in how we prioritize things, what we do and what we don't do. That's just natural from a productivity standpoint and from the evolution of using technology in everybody's workflow and life cycle.

And in this case, I think for the very first time, perhaps in history, what we've got is AI that is incredibly smart. We have AIs with great capabilities not only around logical problem solving but also reasoning, task planning, and some concepts of memory: short-term memory and long-term memory. There are various capabilities now coming in that extend beyond just asking a question and getting an answer back.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: That's where ChatGPT was: some of these information-retrieval-style problems are being solved with ChatGPT, right? And I think all of us have probably seen it: we want to draft a nice email, we ask ChatGPT, and it gives us a draft back. That doesn't remove the work of me actually answering 500 emails; it just solves a part of that problem.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: And I think the capabilities of AI to do great reasoning and great task planning will get us to that next level of unlock, where you can say: hey, go do this work for me, and I will basically provide oversight.

So we are coming into this domain of what we at Tilda call AI teammates. These are not people; these are AIs that have access to certain tools. In our case, they have access to communication tools, phones, and email. They have access to a computer where they can work in a browser.

They have access to a mouse and a keyboard. So they have tools on which we can now teach skills. And then, you know, they can be part of your team. You train them, you, you sort of get them to a place of competence where they're able to do certain workflows. I find it interesting the way you are describing how, um, you know, there's siloed knowledge.

Mm-hmm. Some is in an EDC, some is somewhere in the, in the, in the team. And now I feel like, you know, is there a world where your AI teammates can, you know, absorb that knowledge, which previously was not being captured elsewhere? And then you can ask the question that way, right? Or you can ask them to do certain things with it.
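
The siloed-knowledge situation Ram describes, where some facts live in the EDC and some only in emails or meetings, can be pictured as pooling everything into one queryable index. This is a minimal sketch, not any real Tilda or EDC API; the `KnowledgeIndex` class and sample records are hypothetical:

```python
# Hypothetical sketch: pool siloed trial knowledge (EDC rows, emails, meeting
# notes) into one keyword-searchable index so a single question spans all of it.
from collections import defaultdict

class KnowledgeIndex:
    def __init__(self):
        self._index = defaultdict(list)  # keyword -> list of (source, text)

    def add(self, source: str, text: str):
        # index each distinct word of the note under its originating source
        for word in set(text.lower().split()):
            self._index[word].append((source, text))

    def ask(self, keyword: str):
        # return every note, from any silo, mentioning the keyword
        return self._index.get(keyword.lower(), [])

kb = KnowledgeIndex()
kb.add("EDC", "subject 014 systolic bp out of range")
kb.add("email", "site 22 asked about bp re-measurement window")
hits = kb.ask("bp")  # one question now reaches both silos
```

A real AI teammate would use far richer retrieval than keyword matching, but the design point is the same: once the scattered context is captured in one place, "ask the question that way" becomes possible.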

Uh, how, how do you, like, if, if this were the case and this were how we were progressing, um, uh, at least this is my view, maybe what I should say is: do you see that happening? Do, do you see this sort of a thing as too much science fiction? Or do you feel like we are here, or at least we're seeing glimpses of it, and this will, this will very quickly evolve?

Where, where do you stand on that?

Dr. Houman Hemmati: Yeah. I, I, I think we're, we're not quite yet there a hundred percent, obviously, because we don't see this employed in, in every trial, let alone, uh, even, you know, a major subset of trials. But we are now technologically at that place where I think we are at, at a major inflection point, where we can now actually have AI tools that have visibility into everything.

Even AI tools that attend, you know, zoom meetings, for example. Right? And, and are absorbing everything that's coming in, you know, in real time and in the totality of it, let alone having access to the database. Having access to so many other things that may not have made it in the database, but are relevant as well as other pieces of information that are coming through, you know, various sources of communications.

All of that said, I think it's quite valuable, and it gives us an opportunity to do things and see things that we have not been able to do before, with a level of timing, cost, and scale that we've never envisioned before. And I think what that's gonna allow us to do, it'll unlock opportunities to run trials far more, uh, effectively, efficiently, quickly, safely, and with a higher probability of success.

And ultimately, those are the things that any clinical trial needs, right? There is no reason why the operational aspects of a, of a very simple 30-patient clinical trial, one of which I completed a year or so ago, should feel like a phase three trial. It just shouldn't. And there's no reason why, you know, conducting a phase three trial, no matter how complex it is, right?

But especially in an easy field, like an eye drop trial that I do in ophthalmology, should feel like you're launching a rocket to the moon. I mean, these are things that should be pretty basic, but we've, you know, uh, added, added too much complexity to it because of the processes that are required, you know, uh, to maintain quality standards in clinical trials.

But I think by introducing AI, by introducing automation, and by really simply modernizing how things are done, uh, we have the opportunity to really reduce the cost and the time. And, and that benefits not just the sponsor, it benefits ultimately patients. It gets, uh, drugs to market quicker, uh, cheaper.

Um, and, and ultimately, that's what people want. Everybody.

Ram Yalamanchili: Yeah, no, this is where the exciting, uh, you know, effects of implementing AI into this process come in, right? One other area which I've, uh, I, I've thought through and sort of worked on, um, or at least saw through my experience at Lent, my previous company, uh, and, and, and at Tilda with, uh, with our current customers and,

Dr. Houman Hemmati: mm-hmm.

Ram Yalamanchili: See, I think change management is a scary word for many, and one of the advantages of sort of working with an AI teammate is you don't necessarily need to change a lot of your process. You can sort of say: this process involves a certain amount of overhead, I don't want to do it, I'm just gonna give it to my AI teammate to do.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: So an example of this would be, we work with, uh, many sites now, and there there is an expectation from all their sponsors to first capture source. Maybe it's on paper, maybe it's on some system. Then move that data into the EDC later. Yes. So you're essentially doing double data entry, right? You're entering it into your source first, and then you're entering it into the EDC.

And from a site's perspective, they've got this problem where, okay, that means if I have 20 trials running actively, I've got potentially 20 different systems I need to work with. And, uh, you know, I have 20 different EDC logins at least, right? So,

Dr. Houman Hemmati: yeah, true.

Ram Yalamanchili: And there are many EDCs. It's true, one trial could be on, uh, platform A and another company could be on platform B.

So there's multi, uh, you know, there's a diversity of systems here, and what we've come in and said is, well, you know, you should really just standardize on what you do well, which is doing source entry, which is anyway, what you do. Every clinic has the capability to enter this data into an EMR or some kind of paper chart or some other system.

So yes. Right. So you do that. But the after part of, like, once you do that, moving it into another system, managing some form of your queries, that's something which an AI teammate at this point is fairly capable of and can do in a really, like, uh, you know, high-quality, consistent, reliable way. And, uh, you know, the argument here is, why do you wanna spend half your working time on doing data entry into the EDC, when you could basically focus on patient care, which is a net, net benefit for everybody, right? The whole ecosystem wins in this situation, but we've never changed the process itself. We didn't change-manage anything here, because, uh, we just kind of handed that work to somebody else. And this somebody else just happens to be a very powerful, intelligent AI. So, so that's the sort of thing, right?
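
The double-data-entry hand-off described above (source captured first, then re-entered into the EDC) is essentially a field-mapping step plus query generation. A minimal sketch, assuming hypothetical field names and no real EDC API; `map_record` and the `VSORRES_*` identifiers are illustrative only:

```python
# Hypothetical sketch of the source-to-EDC step an AI teammate could take over:
# source records (EMR or transcribed paper) are mapped onto the EDC's expected
# fields, and gaps are raised as data queries instead of being entered silently.

SOURCE_TO_EDC = {"sbp": "VSORRES_SYSBP", "dbp": "VSORRES_DIABP"}  # illustrative map

def map_record(source: dict) -> tuple[dict, list[str]]:
    """Translate one source record into EDC fields, collecting data queries."""
    edc_row, queries = {}, []
    for src_field, edc_field in SOURCE_TO_EDC.items():
        value = source.get(src_field)
        if value is None:
            queries.append(f"Missing {src_field} in source")  # query, don't guess
        else:
            edc_row[edc_field] = value
    return edc_row, queries

row, queries = map_record({"sbp": 122})  # dbp absent -> one query raised
```

The point of the design is the one Ram makes: the site's workflow (capturing source) is untouched; only the repetitive transfer step is delegated.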

Dr. Houman Hemmati: That is correct.

Ram Yalamanchili: A similar sort of experience with, uh, some of our sponsors, and several opportunities lately, which we're seeing, is: I've got a phase three trial, I need to enable 700 sites in eight months, uh, from FPI, and, uh, and, you know, it's across X number of countries. How do I do it? I need to staff up temporarily to do all of this work and then, you know, kind of manage them out if I don't have a large project of that size.

And to me, that sort of thing is really exciting, because these are processes where you're not asking to change-manage. Like, don't change your process. If you, if you really just want to email every single site and, uh, you know, follow up on documents in that manner, that's not a problem at all. You can basically have an AI teammate, which will scale immediately to do this sort of work, rather than sort of, uh, doing every single part of this work with a person.

Right. And what I find is also that sometimes it's not even about, like, um, building the team. It's really hard to build these teams, because there's not enough capacity out there to be able to enable these large trials on the kind of timeline you're looking at. And I think you have resource competition.

That's why running a phase one feels like a phase three: because you're competing with the same type of resource constraints that everybody else is.

Dr. Houman Hemmati: Yes. So,

Ram Yalamanchili: so, you know, it's, it's sort of like a persistent problem, I think, which all comes from, to me anyway. The lack of really great supply of individual talent in this industry, and we're not growing that fast enough, and that sort of has a knock on effect, in my opinion.

Anyway, on the, on the, um, number of shots we have on goal, right? Um, the number of biotech companies being formed, the number of opportunities to bring new drugs into, into clinical development, and, this sort of thing. And then, of course, the amount of capital available, because if, if you had a certain amount of capital, and each of your trials costs a certain amount of money, you divide that.

That's the number of shots you have, in terms of how many opportunities you've got. And so I, I sort of see this, you know, interesting opportunity where, for the first time, we're saying: if you don't wanna modernize your process, or rather, if you don't wanna think about it that way, don't do that. But you can have an AI teammate come in and sort of work within your framework, uh, in this way.
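
The capital math Ram is gesturing at is a simple division: fixed capital over per-trial cost gives the number of shots on goal, so cheaper trials directly buy more attempts. A toy illustration with made-up numbers:

```python
# Toy "shots on goal" arithmetic: with fixed capital, the number of trials you
# can fund is capital floor-divided by cost per trial, so halving per-trial
# cost doubles the number of attempts the same capital can buy.

def shots_on_goal(capital: float, cost_per_trial: float) -> int:
    return int(capital // cost_per_trial)

before = shots_on_goal(100_000_000, 25_000_000)   # $100M at $25M/trial -> 4
after = shots_on_goal(100_000_000, 12_500_000)    # same capital, half cost -> 8
```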

Right. Which is sort of an interesting, um, uh, you know, paradigm I'm looking at. Do you, do you sort of see that? Um, I mean, I guess I would ask you: how does that, how does that sort of fit into your framework? Does, does that make sense? Do you have any concerns, anything where you feel this, this is, like, um, you know, just fundamentally not viable?

Dr. Houman Hemmati: Uh, in some ways, I guess, I'll, I'll tell you, I, I think it's gonna require, uh, people to become comfortable with it. Uh, and, and, and different people are comfortable to a different degree. I'm a person who already likes to take, you know, de-identified data sometimes and, and, and just do a quick analysis using.

You know, Grok or ChatGPT, just to give myself some, some confidence about things before I get an official, an, an analysis, um, you know, with a limited amount of information, uh, just to see how it does it. And, and, and I've always been very pleasantly surprised. And so I'm comfortable with the technology, only because I've played around with it, even outside of the formal setting within trials. Right. Um, I know, on the other hand, there may be people who are hesitant only because of a, of a sense of mistrust, not knowing what's going on in the black box. Right. And what it's gonna take, um, is for there to be real-world examples, first, of where this sort of approach, and having, uh, you know, the virtual teammates or the AI teammates, succeeds, uh, and benefits trials without causing problems.

And then, second, it's gonna require many people, including people who are a little skeptical or, or hesitant, to be, you know, uh, engaging with it themselves, perhaps even in a way that's redundant, uh, right? To have it alongside, uh, additional, you know, traditional resources for trials, but not instead of, uh, to see, okay, how is this helping me?

And if I didn't have all these additional people, uh, how would that have helped, or how does that, uh, AI tool allow the people I do have to focus on what they really need to be focused on, instead of constantly answering, you know, my queries or, or, or questions about things? Ultimately, it will happen, because once we see real-world examples of trials becoming far quicker and far more efficient, naturally, sponsors will, will, uh, want to go in that direction.

CROs will want to go in that direction, and frankly, the sites will want to take that approach, because it's gonna make their lives way easier. Right? And as a site, you are not caring about the mechanics of the trial. Sites want to take care of patients, plain and simple. And, and the easier it can be for them to take their existing clinical framework and bring patients in for clinical trials,

Uh, without changing how their clinic is run, the better. And so, for them, removing any of those heavy-lifting steps, or making them far easier to do, uh, the better. And so, I think, again, this poses a win-win for everyone, and it's really just a matter of rolling it out, uh, in a way that makes people comfortable,

Um, and, and, and demonstrates, uh, both, you know, the benefits and the lower level of risk.

Ram Yalamanchili: Yeah, understandable. I think the comfort and trust, I think, are gonna come with more data and, frankly, just being able to prove that these are actually competent and able to do the work. But, um, I think, uh, that, that just comes with time.

I think there's gonna be innovators, there's gonna be visionaries who are able to see it and say, okay, I have some value here, and let's, let's kind of jump in and do something with it. Um, and, uh, sort of, I think, uh, what you are talking about is, uh, the industry will essentially evolve because there is essentially a need for it.

Um, and we have all these challenges we've spoken about. Um, so my last question for today is, you know, independently, thinking about all, you know, your journey so far: do you feel like the next five years are gonna be sort of similar, or do you feel like there's something rumbling, in this case? Like, and, and there might be a, a, a change for the better, or, or a change? I don't know how you look at it. I guess I'm asking you, what, what would you think?

Dr. Houman Hemmati: Uh, I, I, I, I think what, what we're probably gonna see in, in the world of trials is what we're seeing even in the government right now, which is a focus on eliminating any kind of waste, uh, but to do so thoughtfully, obviously, uh, and to, to make things more efficient so we can focus our limited resources on doing what we're supposed to do very, very well and very quickly and efficiently, right?

In a way that benefits everyone. So my vision for what's gonna happen, uh, with clinical trials, especially with the involvement of AI, is that we're gonna see a lot more self-service, uh, a lot fewer delays, and a lot more confidence in, in the quality of the trial and the quality of the data that are coming out, and the ability to execute and get the results that we're hoping to see, and that the results we see are actually reflective of the performance of, of the product, of the drug. And we're headed there, uh, for sure.

Right now, you know, we're at this point in the evolution of clinical trial conduct where we've maximized existing traditional technologies, and it's time to now take a shift to new technologies. And that happened when we shifted from pure paper charts to involving electronic databases and doing simple things like Excel spreadsheets, and eventually to more advanced systems. But now we're at that inflection point that moves far beyond that, to a new level where we're able to actually make sense of the information in real time, rather than having to simply rely on people, uh, to interpret it or act on it.

Ram Yalamanchili: That makes sense. Uh, one quick follow-up on that is what's your view from a regulatory perspective? Have you, have you had an opportunity to think about that or, or speak to anybody on that?

Dr. Houman Hemmati: Yeah, you know, I, I think from a regulatory perspective, uh, we're again in uncharted territory, and the FDA, especially the, the modern FDA, that's under very new leadership as of yesterday with, with, uh, Dr. Marty Makary in charge as, as FDA commissioner, has an opportunity to really, uh, modernize how, how trials are allowed to be run. And I think what they're gonna probably recognize is that this benefits the, the FDA as well. The, the more, uh, quality data they can get, the better. And even if they have access to some of those tools and are allowed to utilize them themselves to dig into the trials, the better.

Imagine being the FDA and having the ability to, uh, you know, do trial audits in a way that doesn't force them to send an army of people to each site that they wanna audit, but rather allows them to do it remotely. Or, even if they are in person, send fewer people, and do it far more robustly, with greater scalability. That's gonna benefit everyone, because now the, you know, the agency will be able to discover things that they may have missed in the past, while also employing far fewer resources and making it a lot less cumbersome on the sites.

Right now, if you're a site and you have the FDA coming, you know, you have to drop everything for, for a matter of days, sometimes, or longer, uh, just to simply accommodate the agency for that review. That, that can all go away if we change how things are done, uh, from, from the regulatory side. Of course, there are gonna be regulatory approvals required for some of these things, especially when it comes to, you know, sponsor access.

Uh, and also the ability to use AI-driven analytics, uh, for FDA submissions, right? Right now, the FDA wants to be able to take the raw data, right? And do their own analysis to validate that the, the sponsor's analysis was done correctly, uh, without any fudging. And if the FDA similarly may have access to these tools, um, either traditional tools still, or the AI tools, or both, and they can do side-by-side comparisons, ultimately, I think the FDA will be, um, comfortable as well.

But I think that's gonna be a process and it creates opportunities to work in partnership with that. Understood.

Ram Yalamanchili: Yeah, no, it's a fascinating, uh, I, I think it'll be a fascinating next several years. Uh, we're, we're, we're truly in a very interesting time. Um, well, Dr. Hemmati, it's been a pleasure. Uh, appreciate you spending the time, uh, discussing this.

I had a lot of fun talking to you about this. Um, so, uh, thank you.

Dr. Houman Hemmati: Yeah, Ram, thank you for having me. Always happy to join you again.

Ram Yalamanchili: Alright, thank you. Bye.


Ram Yalamanchili: Hi, Dr. Ti. How are you?

Dr. Houman Hemmati: I'm very good. Thanks so much, Ron for having me.

Ram Yalamanchili: Yeah, it's always, always great to see you and, uh, and talk to you. So I'm really looking forward to this. Um, so, uh, you know, as, as we get into this right, one of the first things I'd like to start with is, uh, tell us, tell us your story, right?

What, how, how did you end up where. Where you are and, um, and why? Um,

Dr. Houman Hemmati: yeah, my, my background's a little interesting. I, you know, I'm an md but I also got a PhD and, uh, I always envisioned a career in which science plays a major role. Traditionally, that involves being in the academic setting and practicing academic medicine, having a research lab or clinical research, uh, on the side.

And that's not what the world wanted. Uh, for me, you know, I trained in ophthalmology and afterwards when I got a fellowship. Uh, to, to train, you know, in a subspecialty in cornea and refractive surgery. It came along with a postdoc, uh, at MIT with Bob Langer, who does, uh, chemical engineering and specifically works on extended release drug delivery.

And so, uh, my career took a big shift when I went there and learned how to actually make drugs and make drugs last longer, uh, specifically in the eye. Um, and shortly after I. Practiced in the academic setting for a year. I was recruited, uh, by Allergan as it was known back then, uh, to be, um, involved in clinical development and ophthalmology.

And so I spent a couple of years. Uh, there, and once, uh, the company was sold, I went off to somewhere else and eventually ended up, uh, on my own as a startup, uh, entrepreneur, co-founder, as well as, uh, chief medical officer, part-time for a variety of different companies, uh, all in the ophthalmology space.

And so I spend my life now working on early to late stage, uh, trials, as well as even preclinical development, uh, of drugs intended to improve how we take care of different ophthalmic conditions.

Ram Yalamanchili: Yeah, and, and, uh, you know, just, just from looking at, uh, your profile and, and what you've done, uh. You've worked on many programs, uh, have obviously gotten deep expertise in this area.

Uh, and one of the things which I'm really looking forward to is just understanding what are your learnings in this place and what, what do you wish to change as well? And I think we're also in an interesting time where there's many opportunities for change and, uh, especially from a technology perspective.

There's, there's new, new innovations, which are, I think. Uh, wonderful in terms of like actually implementing and, and, and getting us there. So I, I think that's sort of what I wanted to unpack today as far as like how we converse about it and, and, and go through this. So the first step I, I'll probably, uh, start with is I'm very curious about the pain points and sort of like, what, what are your, what does your day to day look like when you, when you think about, uh.

You know, developing a drug. Like what, what, from a sponsor's perspective, what are the type of pain points you usually see? And, uh, and spend more a lot of time, either

Dr. Houman Hemmati: you personally or from your team. I, I wish we had about four hours to do this, because I can just spend that much time focused on, on pain points.

There are a lot, right? There are regulatory pain points. There's CMC pain points, there's fundraising, uh, there's clinical and clinical trial, uh, pain points, all of these things. Uh, each of each of them can be a several hour podcast. With respect to the clinical development, which is the part that I spend a lot of my life focused on, in addition to the other aspects, it's several things.

It's one, getting a trial operationalized, which means selecting sites, operationalizing it, making sure they truly understand the protocol, uh, so that they're not making mistakes. It means also getting the study launched and translating. The protocol into an actual clinical trial, which is just like taking the information from an architect, handing it to a builder and making sure they build it, uh, without the architect's involvement.

Sometimes the builder does the ro the the wrong things right? There are a lot of things that aren't in the architectural plans, and a lot of it is up to interpretation. And sometimes the architect designs a building that can't really be built, uh, or be built well, and sometimes. That collaboration between the architect and the build builder is needed.

That same thing I think I see in clinical trials. And then a lot of it is also in terms of running that trial smoothly, making sure it's enrolling properly, making sure that adverse events and other problems like enrollment issues or, or in, uh, you know, enrollment criteria that are not perfectly tuned for that particular trial are caught early so that amendments could be made.

And staying on top of it, understanding what kind of data is coming in. If there are any problems, uh, with the study design overall, that may lead to poor or uninterpretable data later on. Um, and whether there's anything that causes the, the trial to have to be stopped, uh, early. And then finally is, is interpreting the trial once the data's there and really making good sense of it.

All of these things, uh, pose challenges and, and for all of us in the trial world, one of the biggest issues is that clinical trials now are trapped in the 1980s and 1990s. When it comes to how we do things, we are literally using paper and pen for a lot of things. We are literally, depending on phone calls and faxes in order to transmit information.

Uh, there are delays caused by those, there are mistakes caused by those, and that lack of, uh, realtime high quality visibility into what's going on, uh, to be able to react to, to, to information in real time. Is causing problems for everybody. And so I think there are great opportunities to fix a lot of our pain points by modernizing, uh, how we do things.

But unfortunately, until recently, I think everything has been trapped in decades old technology and decades old thinking. Um, and, and right now I think a lot of us are looking at how things are done and realizing. It's time to also switch how we do things, to modernize it and, and create tools that would actually allow us to leverage modern technology to help, uh, resolve a lot of these issues.

Ram Yalamanchili: Yeah, and some of the things you were mentioning, I think I really resonate with because, uh, just my own past experience being a biotech founder, um, and, uh, being in the oncology space, working on a large clinical trial, um. It's very interesting that the type of problems you're mentioning kind of resonates in a different therapeutic area as well.

Uh, I think it's, it kind of speaks volumes to the type of, uh, problems we all face as an industry, right. Uh, across the board. Yes. Um, so unpacking a few things, what. What, if any, has, has worked to improve some of the problems you're talking about? And I'm talking about over the past many years of your work in the space.

Uh, I think visibility is one of the things you just mentioned. Real time visibility. Right. Um, now I could imagine a world where that was non-existent. Maybe today there is some, some amount of capability somewhere, uh, which, which may exist or could have gotten to a certain extent. But I'm just curious like.

Have you seen some amount of improvement in visibility or your interaction with, uh, let's call it your project management or your CROs to get realtime, uh, information on what's happening with the trial? Um, you, you mentioned the ability to like speeden up the trials, so what has, what's been your like experience in the past, like.

You know, 10 plus years of you being in the space and sort of what have you seen happen so far?

Dr. Houman Hemmati: Yeah. You know, one, one thing that has been nice, but it also comes at, at a bit of a cost is having, you know, sponsor level access to the electronic, uh, you know, data capture system to be able to. Log in and in a blinded or masked manner, uh, be able to see what's coming in.

So be able to look at patient level, site level data. Um, but that also comes at a bit of a price because a lot of that information hasn't been reviewed. So the quality may not be great. A lot of the data hasn't been entered manually. Into the system by the site. They are still by and large doing paper and pen records.

And then it takes someone manually transferring that, um, into the system. And even after they've done that, there may be issues of data quality, right? If it hasn't been monitored, if someone hasn't done source document verification, for example, uh, if you haven't validated what's in there, that may be erroneous information or it may be delayed.

Both. And so that level of sponsor access may, may come again at that kind of price. So you may make mistaken assumptions based on what you see. And on top of that, the, the systems are cumbersome and they're not necessarily built for sponsor level, uh, access, especially for people who are not in the weeds on the day-to-day basis.

So people who are in my role, uh, you know, in in clinical development who are more designing trials. Um, and then handing them over to clinical operations don't have necessarily all the knowledge and the tools to be able to make the most of that data. We want to be able to extract what we need without spending hours and hours and potentially making mistakes, uh, in, in the process of finding what, what information we, we need.

And so there are a lot of problems, um, associated with that. It has helped, it has been nice to have that access. Some trials get you more access than others. Uh, but regardless, that has been beneficial. But without having the high quality, without having the, the, the timeliness of the data in the system, that that can actually backfire in in some regards.

Ram Yalamanchili: But you also are saying that you have visibility through the EDC. But the data's only as good as all these other steps which need to happen, which are still very manual,

Dr. Houman Hemmati: right? Mm-hmm.

Ram Yalamanchili: Yes. If somebody had taken it on a paper somewhere and that paper was never entered into the system, there's nothing you can do, even if there's an EDC there, right?

That is correct. So, so maybe then that the problem then kind of goes back to, okay, where is the site? Who is the site? How can I empower the site to be, uh, able to do a better, um, you know, if you, if that is your, truly your bottleneck, then you probably are shifting the problem from. A, uh, a data capture to like a site workflow, right.

In some ways.

Dr. Houman Hemmati: That's right.

Ram Yalamanchili: Got it. And what sort of, um, you know, I think everyone's gotten great experiences and, and sort of like not so great experiences working with, um, the, the industry and, and sort of working with within these problems. Um, have you seen practices or opportunities where you just said, Hey, this is great, this worked amazingly well.

Uh, maybe it's, uh, you know, working with a certain CRO partner, whatever it might be, but at the same time, maybe that doesn't scale. So I'm just curious, like, have you had problems where something worked great but you wish it were, it were able to scale really rapidly, but it wasn't, uh, in, in, in this kind of scenario?

Dr. Houman Hemmati: Um, you know, generally all of it, I think is, is, uh, is problematic in that regard. I mean, the, the issue is. That anytime. I want to really be sure that what I'm looking at, even if I personally have access to it or or whether I don't, and I'm asking someone else, it still requires me to get into a phone call or a meeting, or at least a series of emails with someone else who really knows what's going on to ask them what is really happening here?

And I think that's the problem that can't be scaled, is the fact that in order to be sure that what you are seeing is real. Is complete, has been verified, validated, and and, and is and is. Thorough. The only way to do that is to have a meeting with someone, or at least to participate in a long series of, of emails.

Regardless, there is that direct one-on-one interface required, uh, with whoever is the keeper of, of all that information. And, and sometimes that is, is not so easy, right? Tho those people usually have other jobs, uh, whether they're working on other trials or even if they're dedicated to your trial, they have actual operational duties.

And sometimes being able to communicate with a sponsor and provide that information is the last on their list of priorities for obvious reasons.

Ram Yalamanchili: Right. So then it becomes almost like an attention situation, right? Uh, attention from the right person who, who hopefully if you can scale that, then, then yes, correct.

But, but clearly that is not the easiest resource to scale. Like, like, you know, great people are in, in limited quantity, and we've got. We've got a certain amount of attention from, well,

Dr. Houman Hemmati: there, there are budgetary issues as well, right? You, you can pay for it, you can, you can certainly ask a CRO to dedicate multiple full-time people to a trial simply for, for those reasons.

But the problem is that that is exceptionally expensive, right? Clinical trials right now are, you know, are so expensive that sponsors literally are asking CROs, and I've just done that recently. Can you please limit the number of people who are in our weekly meetings? We don't see person X and person Z contributing very much.

They have said nothing in the last several weeks of attending meetings and we've gotten a big bill for it. That's literally what sponsors have to do. I've had to do it. It's, it's part of that lean, uh, trial mentality. And, and when you do that, uh, imagine having to add more people just to have that extra visibility and extra access that doesn't make financial sense.

And so I think cutting that out. Being able to, to, to automate that process or make it quicker and simpler without having lots of additional full-time people involved is essential. Right, right.

Ram Yalamanchili: And just switching to this, right: obviously, in some ways cost is tied to capacity, because you're hiring resources.

And if you're trying to be more efficient, you might at this point choose a path where you're reducing the capacity of what you could do with the resources you've got. That's basically what you're saying: give me exactly this number of people on my project, or keep it to a certain number of hours.

How does that affect your decision making? I'm just curious: are there opportunities where you say, well, this trial is going this route, and you know what, I'm very curious about something else, and we need to dig down into it. But then, if answering that additional question is a massive cost, are you then weighing whether you should do it or not?

Does this happen on a regular basis from your perspective?

Dr. Houman Hemmati: Uh, it has, you know, in terms of interpreting trial data, for example. That's something I found essential. A trial that I was recently involved in appeared at first glance to not have worked. But once we dug deep into it, a process that came at a great cost, we were billed, you know, $25,000, and it took four weeks for their team to take the data and analyze it in the way that we wanted to. Something that I could have done on my own, had I had access to the full database of validated info, in a matter of seconds or minutes, right? That rescued the trial, and it showed us that there actually was an amazing effect.

Something that we didn't anticipate when we initially designed our analysis plan. Similarly, I know of another trial that I wasn't involved in where the data again came out as a failure, a blockbuster failure in a big phase three. However, once they went back and looked at two different subgroups, they found: hey, this actually worked really well in one of those subgroups, very well-defined subgroups, something that hadn't been expected. Guess what? That company was just sold for a couple billion dollars based on that analysis of the data, which came out after they thought it had failed. Many such things.

But that analysis itself took several months and cost many hundreds of thousands of dollars, because they needed an army of people to look at it and figure out how they could make sense of what they expected to be a success that didn't turn out that way once they got their top-line data. Many things like this could go much faster, and actually could go better, because if you have software that can think better than one, two, or even ten people, and can look at all the different possibilities rather than just those few that initially come to mind, we can end up extracting way more information out of trials. Those that have succeeded, those that have failed, or those that are somewhere in the middle.

There are infinite possibilities, but unless we have that immediate analytic ability, we may make the wrong assumptions about our trial data. Whether it's in real time, with masked data coming in, or at the very end, when you have top-line data coming in and you're trying to make immediate decisions on where you take your program from there, or attempting to attain funding to continue your program.

These are things that are very time sensitive, and you don't always have the ability or the luxury of spending more money on it, or the luxury of spending a lot of time figuring things out.

Ram Yalamanchili: I see. A lot of interesting things there to unpack. I think I've got a couple of things here.

First, there seems to be this notion that, you know, I'm gonna run the trial to a certain extent, then I do an analysis and I get some insight, and then we keep moving. But even when you do that, you somehow found out at the very end that it failed. And then you had to do some kind of subgroup analysis, and then you found out, well, this actually worked in this particular case, right?

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: Why hasn't that happened earlier? What contributed, or what sort of limitations exist today, where there is no course-correction mechanism that lets you figure out, at a much higher frequency: could I have done something different? Could I have made an amendment in the protocol to be able to do this, right?

Are those the types of things which you see, like a lack of capability in the current infrastructure? Can you talk a bit more about that?

Dr. Houman Hemmati: Yeah. And it comes back to having real-time access to data, right? One issue, of course, is that in a trial that is randomized and double-blinded or double-masked, you can't have access to unmasked data, right? So you're limited in what you can do based on that. And you really shouldn't, as a sponsor, have access to that data, because it gives you the opportunity to change the course of how things are done in a way that is unfair, right? And creates invalid trial data.

However, at the same time, there are opportunities to look at what's coming in, even if it is masked, and say: you know what, we're seeing that there are no responders, regardless of whether they may or may not be on active. They may be on placebo or active, doesn't matter. No one who meets a certain group of criteria is responding.

Whereas there are others where we see half of them responding, or a third responding, or something better. Guess what? Maybe we shouldn't have enrolled anyone who has a certain set of criteria, whatever it may be for that particular disease indication. That allows a sponsor to get ahead of it and perhaps tweak things in order to avoid futile treatment of people in the trial for whom this may not work.

That benefits potential subjects in the trial, who were otherwise being exposed to only risk with no potential benefit. And it benefits a sponsor by getting things to move much more quickly, increasing the probability of success, increasing the probability of an approval down the line.

And so it's a win-win, I think, for everybody. But until you have the data and are able to look at it, you may not know. The same thing, by the way, applies to adverse events, right? Say you have a complete, real-time picture. As someone who's done medical monitoring and has been responsible for signing off on the adverse events in clinical trials, I'm actually monitoring for things that are serious, that may even be unexpected, but at the very least are severe enough to warrant further investigation. If you don't have that real-time access, there may be delays. Sometimes it's up to the site to proactively remember to notify the CRO, and then for the CRO to proactively remember to notify the sponsor. These delays can actually cause problems for the subjects in the trial, right? Not to mention the drug itself. You need to know about that.

As someone who's done medical monitoring, I want to have immediate visibility into those issues without having to wait for that human interface. You don't want that delay, and I've seen those delays occur. In many indications it's not that big of a deal, but in some indications it's a life-or-death issue, and you do need to know those sorts of things as soon as they occur.

Ram Yalamanchili: Uh, I know you've mentioned delays, and time is another part of it. Everything you've unpacked clearly shows this urgency to access data; I get that. Then there's the cost equation, but there's also a time equation, right?

I think you wanna move at a certain pace. And I hear very often that recruitment is the biggest challenge for trials to go where they need to go. And, you know, I think it also comes back to how fast your sites are activated, how great your sites are, the whole works, right?

It's a multivariate problem here. So my question for you is: how do you figure out how many sites you can activate, and what is the ideal number? Because in a theoretical situation, if you had infinite scale and an infinite number of sites which could be added, maybe the recruitment problem could be solved.

Is that a good way to think about it? How do you think about it?

Dr. Houman Hemmati: Yeah, recruitment comes down to two things, right? There is site number: how many do you have? And there's site quality. You can have infinite sites, but if they're not good sites, or if they don't understand the protocol, or if they're deprioritizing it, or if there's a problem somewhere else in that equation, then it's not gonna help you.

And then there's also the enrollment criteria, right? Even the very best sites cannot enroll an unenrollable trial, where you have inclusion or exclusion criteria, or both, that are just not reasonable. And in some indications, newer indications especially, you're in uncharted territory, and it's not until you attempt to enroll the trial that you find out that maybe your enrollment criteria were a little too narrow and restrictive, in such a way that you're just not getting any subjects.

I have seen that, I've experienced that, numerous times in trials where we went into uncharted territory and waited and waited and waited. I'm in the middle of one trial right now, so I can't talk about the details of it, but there was one enrollment criterion that caused enrollment to be so incredibly slow, and it's for a rare disease, which made it far worse: we were getting one subject in every several months. It wasn't until we did a manual review of everything that we were able to finally figure out what was causing all these screen fails. The moment we went and fixed that, did a small amendment that didn't impact the trial in any way in terms of safety or efficacy assessments, it was just a guess that we got wrong, all of a sudden enrollment picked up very much more quickly. And it allowed the company to move forward without having to do a massive raise or abandon the program, which could otherwise have happened.

I've had many other examples of this as well, where that real-time access is helpful, but delays in that real-time access caused delays in the ability to detect those issues, sometimes to the point where they could have been devastating to the company, to the program, or otherwise.

Ram Yalamanchili: Interesting. By the way, this particular scenario is one I've heard from many, many folks, especially in the clinical development world, right?

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: Uh, and would you say this comes back to the earlier discussion of not having access to the data, or at least the people to do the analysis, or being able to ask the questions you wanted to ask as soon as possible? Is this related to that? Is that why you waited as long as you did before the amendment had to be made?

And then it gave you a pretty quick activation, right? So it was like a step-function change when this particular change was made, correct? It was fascinating.

Dr. Houman Hemmati: Yeah. I mean, look, in a lot of trials, by the way, the sponsor doesn't have visibility into things that are happening until that weekly meeting, right?

And so there are a lot of things that happen between weekly meetings, and you don't necessarily wanna wait five, six, or seven days to know about something. And sometimes things aren't even available to the CRO until a week later. So now you've got a two- or three-week delay to finding out important information.

That's unacceptable in many cases. And remember also, there are limited people, and those limited people have many things on their list. They have limited bandwidth because of that, and limited priorities. And obviously their priority is going to be executing the trial; it's not necessarily everything else the sponsor is worried about. So I think it's really not about adding people or replacing people. It's about taking the people we already have and allowing them to work faster, do more things with the time that they have, and allow all of the priorities to be met at the very same time.

And I think it's possible by shifting how we approach technology, and how we really look at conducting trials to begin with. Yeah.

Ram Yalamanchili: And I think I also realized something interesting from what you're saying. There is this notion that we are capturing data in the EDC, which it obviously does capture, but there's also so much it is not capturing.

And that is knowledge which is being built up in your team, whoever's working on the trial. Yes. And the only way to query that knowledge is to ask them, in the midst of the many other priorities they already have.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: So you've got this contextualized data: part of it is in the EDC, and the rest of it is in the ether, let's say, in the team's knowledge.

And one is potentially easier to query; the other is not, because that's generalized context, right? It's a very interesting way to think about this problem, because I see a lot of parallels in how I see this evolving, including what we're working on and what we've built, and how our customers are using some of the tooling and capabilities we have.

So let me switch to the next point here, right? I come from the view that, as you've said, there is going to be a shift in how we prioritize things, what we do and what we don't do. That's just natural from a productivity standpoint and the evolution of using technology in everybody's workflow and lifecycle.

And in this case, I think for the very first time, perhaps in history, what we've got is AI which is incredibly smart. We have AIs with great capabilities, not only around logical problem solving, but also reasoning now, task planning, some concepts of memory: short-term memory, long-term memory. So there are various capabilities now coming in which extend beyond just asking a question and getting an answer back.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: Whereas ChatGPT, or some of these information-retrieval-style problems, are being solved with ChatGPT, right? And I think all of us have probably seen it: when we wanna draft a nice email, we can ask ChatGPT, and it gives that back to us. That doesn't necessarily remove the work of me actually answering 500 emails. It's just solving a part of that problem for me.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: And I think the capabilities of AI being able to do great reasoning and great task planning will get us to that next level of unlock, where you can say: hey, go do this work for me, and I will basically provide oversight.

Right? So we are coming into this domain of what we at Tilda call AI teammates. These are not people; these are AIs who have access to certain tools. So in our case, the AI teammates have access to communication tools, phones, emails. They have access to a computer where they can work in a browser.

They have access to a mouse and a keyboard. So they have tools on which we can now teach skills. And then they can be part of your team: you train them, you get them to a place of competence where they're able to do certain workflows. I find it interesting the way you are describing how there's siloed knowledge.

Some is in the EDC, some is in the team. And now I feel like, you know, is there a world where your AI teammates can absorb that knowledge, which previously wasn't being captured elsewhere? And then you can ask questions that way, right? Or you can ask them to do certain things with it.

If this were the case, and this were how we were progressing, at least in my view: do you see that happening? Do you see this sort of thing as too much science fiction, or do you feel like we are here, or at least we're seeing glimpses of it, and this will very quickly evolve?

Where do you stand on that?

Dr. Houman Hemmati: Yeah. I think we're not quite there yet a hundred percent, obviously, because we don't see this employed in every trial, let alone even a major subset of trials. But we are now technologically at that place, at a major inflection point, where we can actually have AI tools that have visibility into everything.

Even AI tools that attend, you know, Zoom meetings, for example, right? And are absorbing everything that's coming in, in real time and in its totality; let alone having access to the database, having access to so many other things that may not have made it into the database but are relevant, as well as other pieces of information that are coming through various sources of communication.

All of that, I think, is quite valuable, and it gives us an opportunity to do things and see things that we have not been able to do before, with a level of timing, cost, and scale that we've never envisioned before. And I think it'll unlock opportunities to run trials far more effectively, efficiently, quickly, safely, and with a higher probability of success.

And ultimately, those are the things that any clinical trial needs, right? There is no reason why the operational aspects of a very simple 30-patient clinical trial, one of which I completed a year or so ago, should feel like a phase three trial. It just shouldn't. And there's no reason why conducting a phase three trial, no matter how complex it is, but especially in an easy field, like an eye-drop trial that I do in ophthalmology, should feel like you're launching a rocket to the moon. I mean, these are things that should be pretty basic. But we've added too much complexity to it, because of the processes that are required to maintain quality standards in clinical trials.

But I think by introducing AI, by introducing automation, and by really simply modernizing how things are done, we have the opportunity to really reduce the cost and the time. And that benefits not just the sponsor; it benefits, ultimately, patients. It gets drugs to market quicker, cheaper.

And ultimately, that's what everybody wants.

Ram Yalamanchili: Yeah, no, this is where the exciting effects of implementing AI into this process come in, right? One other area which I've thought through and worked on, or at least saw through my experience at Lent, my previous company, and at Tilda with our current customers,

Dr. Houman Hemmati: Mm-hmm.

Ram Yalamanchili: See, I think change management is a scary word for many. And one of the advantages of working with an AI teammate is that you don't necessarily need to change a lot of your process. You can sort of say: this process involves a certain amount of overhead, I don't want to do it, I'm just gonna give it to my AI teammate to do.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: So an example of this would be: we work with many sites now, and there is an expectation from all their sponsors to first capture source. Maybe it's on paper, maybe it's in some system. Then you move that data into the EDC later. Yes. So you're essentially doing double data entry, right? You're entering it into your source first, and then you're entering it into the EDC.

And from a site's perspective, they've got this problem where, okay, that means if I have 20 trials running actively, I've got potentially 20 different systems I need to work with. And I have at least 20 different EDC logins, right?

Dr. Houman Hemmati: Yeah, true.

Ram Yalamanchili: And there are many EDCs. One trial could be on platform A, and another company could be on platform B.

So there's a diversity of systems here. And what we've come in and said is: well, you should really just standardize on what you do well, which is source entry, which is what you do anyway. Every clinic has the capability to enter this data into an EMR, or some kind of paper chart, or some other system.

So yes, right, you do that. But the part after that, once you do that: moving it into another system, managing some form of your queries. That's something which an AI teammate at this point is fairly capable of, and can do in a really high-quality, consistent, reliable way. And the argument here is: why do you wanna spend half your working time doing data entry into the EDC, when you could focus on patient care, which is a net-net benefit for everybody, right? The whole ecosystem wins in this situation, but we've never changed the process itself. We didn't change-manage anything here, because we just kind of handed that work to somebody else. And this somebody else just happens to be a very powerful, intelligent AI. So that's one example, right?

Dr. Houman Hemmati: That is correct.

Ram Yalamanchili: A similar sort of experience with some of our sponsors: several opportunities we're seeing lately are of the form, I've got a phase three trial, I need to enable 700 sites in eight months from FPI, across X number of countries. How do I do it? I need to staff up temporarily to do all of this work, and then, you know, kind of manage them out if I don't have another large project of that size.

And to me that sort of thing is really exciting, because these are processes where you're not being asked to change-manage. Don't change your process: if you really just want to email every single site and follow up on documents in that manner, that's not a problem at all. You can basically have an AI teammate which will scale immediately to do this sort of work, rather than doing every single part of this work with a person.

Right. And what I find is also that sometimes it's not even about building the team. It's really hard to build these teams, because there's not enough capacity out there to enable these large trials on the kind of timeline you're looking at. And I think you have resource competition.

That's why running a phase one feels like a phase three: you're competing with the same type of resource constraints as everybody else.

Dr. Houman Hemmati: Yes.

Ram Yalamanchili: so, you know, it's, it's sort of like a persistent problem, I think, which all comes from, to me anyway. The lack of really great supply of individual talent in this industry, and we're not growing that fast enough, and that sort of has a knock on effect, in my opinion.

Anyway, on the, on the, um, number of goals we have on the shot, right? Um, number of formation of biotech companies from number of opportunities to bring new drugs into, into clinical development and. This sort of thing, all kind of, and then of course the amount of capital available because if, if you had a certain amount of capital, each of your trial costs a certain amount of money, you divide that.

That's the number of, and you have in terms of how many opportunities you've got. And so I, I sort of see this, you know, interesting opportunity where. For the first time, we're saying, don't, if you don't wanna modernize your process, or rather, if you don't wanna think about it that way, don't do that. But you can have an AI teammate come in and sort of work within your framework, uh, in this way.

Right. Which is sort of an interesting, um, uh, you know, paradigm I'm looking at. Do you, do you sort of see that, um, I mean, I guess like I would ask you like, how does that, how does that sort of fit into your framework? Does, does that. Makes sense. Do you have any concerns, anything where you feel this, this is like, um, you know, just fundamentally not

Dr. Houman Hemmati: viable?

Uh, in some ways, I guess I'll, I'll tell you, I, I think it's gonna require, uh, people to become comfortable with it. Uh, and, and, and different people are comfortable to a different degree. I'm a person who already likes to take, you know, de-identified data sometimes and, and, and just to do a quick analysis using.

You know, gr or chache bt of things just to give myself some, some confidence about things before I get an official an, an analysis, um, you know, with a limited amount of information, uh, just to see how does it do it. And, and, and I've always been very pleasantly surprised. And so I'm comfortable with the technology only because I played around with it, even outside of the formal setting within.

Within trials. Right. Um, I know on the other hand, there may be people who are hesitant only because of a, of a sense of mistrust, not knowing what's going on in the black box. Right. And what it's gonna take, take, um, is for there to be real world examples. First of where this sort of approach and having, uh, you know, the virtual teammates or the AI teammates succeeding, uh, and benefiting trials without causing problems.

And then second, it's gonna require many people, including people who are a little skeptical or, or hesitant to be, you know, uh, engaging with it themselves, perhaps in a way that's redundant even, uh, right. To have it alongside, uh, additional, you know, traditional resources for trials, but not instead of, uh, to see, okay, how is this helping me?

And if I didn't have all these additional people, uh, how would that have helped or how does that, uh, AI tool. Allow the people I do have to focus on what they really need to be focused on instead of constantly answering, you know, my queries or, or, or questions about things. Ultimately, it will happen because once we see real world examples of trials becoming far quicker and far more efficient, naturally sponsors will, will, uh, want to go in that direction.

CROs will want to go in that direction and frankly, the site. We'll want to take that approach because it's gonna make their lives way easier. Right. And as a site, you are not caring about the mechanics of the trial sites want to take care of patients plain and simple and, and the easier it can be for them to take their existing clinical framework and bring patients in for clinical trials.

Uh, without changing how their clinic is run, the better. And so for them removing any of those heavy lifting steps or making it far easier for them to do, uh, the better. And so the, I think again, this poses a win-win for everyone, and it's just a really a matter of rolling it out, uh, in a way that makes people comfortable.

Um, and, and is, and, and, and demonstrates, uh, both, you know, the benefits and the lower level of risk.

Ram Yalamanchili: Yeah, understandable. I think the comfort and trust are gonna come with more data and, frankly, just being able to prove that these AI teammates are actually competent and able to do the work. But I think that just comes with time.

I think there are gonna be innovators, there are gonna be visionaries, who are able to see it and say: okay, I have some value here, let's jump in and do something with it. And I think what you're talking about is that the industry will essentially evolve, because there is essentially a need for it.

And we have all these challenges we've spoken about. So my last question for today is: thinking about your journey so far, do you feel like the next five years are gonna be sort of similar, or do you feel like there's something rumbling? Like there might be a change for the better, or just a change? I don't know how you look at it. I guess I'm asking what you think.

Dr. Houman Hemmati: I think what we're probably gonna see in the world of trials is what we're seeing even in the government right now, which is a focus on eliminating any kind of waste, but doing so thoughtfully, obviously, and making things more efficient, so we can focus our limited resources on doing what we're supposed to do very well, very quickly, and efficiently, right?

In a way that benefits everyone. So my vision for what's gonna happen with clinical trials, especially with the involvement of AI, is that we're gonna see a lot more self-service, a lot fewer delays, and a lot more confidence in the quality of the trial, in the quality of the data coming out, and in the ability to execute and get the results that we're hoping to see; and that the results we see are actually reflective of the performance of the product, of the drug. And we're headed there, for sure. Right now, we're at the point in the evolution of clinical trial conduct where we've maximized existing traditional technologies, and it's time to shift to new technologies.

And that happened when we shifted from pure paper charts to electronic databases, doing simple things like Excel spreadsheets, and eventually to more advanced systems. But now we're at that inflection point that moves far beyond that, to a new level where we're able to actually make sense of the information in real time, rather than simply relying on people to interpret it or act on it.

Ram Yalamanchili: That makes sense. Uh, one quick follow-up on that is what's your view from a regulatory perspective? Have you, have you had an opportunity to think about that or, or speak to anybody on that?

Dr. Houman Hemmati: Yeah, you know, I think from a regulatory perspective we're again in uncharted territory, and the FDA, especially the modern FDA that's under very new leadership as of yesterday, with Dr. Marty Makary in charge as FDA commissioner, has an opportunity to really modernize how trials are allowed to be run. And I think what they're probably gonna recognize is that this benefits the FDA as well: the more quality data they can get, the better. And if they have access to some of those tools, and are allowed to utilize them themselves to dig into the trials, all the better.

Imagine being the FDA and having the ability to do trial audits in a way that doesn't force you to send an army of people to each site you wanna audit, but rather allows you to do it remotely. Or, even if you are in person, to send fewer people, and to do it far more robustly, with greater scalability. That's gonna benefit everyone, because now the agency will be able to discover things that they may have missed in the past, while also employing far fewer resources and making it a lot less cumbersome on the sites.

Right now, if you're a site and you have the FDA coming, you have to drop everything for a matter of days, sometimes longer, simply to accommodate the agency for that review. That can all go away if we change how things are done. From the regulatory side, of course, there are gonna be regulatory approvals required for some of these things, especially when it comes to sponsor access.

And also the ability to use AI-driven analytics for FDA submissions, right? Right now the FDA wants to be able to take the raw data and do their own analysis, to validate that the sponsor's analysis was done correctly, without any fudging. If the FDA similarly has access to these tools, either traditional tools, or the AI tools, or both, and they can do side-by-side comparisons, ultimately I think the FDA will be comfortable as well.

But I think that's gonna be a process, and it creates opportunities to work in partnership on that.

Ram Yalamanchili: Yeah, it'll be a fascinating next several years. We're truly in a very interesting time. Well, Dr. Hemmati, it's been a pleasure. I appreciate you spending the time discussing this.

I had a lot of fun talking to you about this. So, thank you.

Dr. Houman Hemmati: Yeah, Ram, thank you for having me. Always happy to join you again.

Ram Yalamanchili: Alright, thank you. Bye.


Stay current on our AI teammates. Sign up now.

©2025 Tilda Research. All rights reserved.
