Dr. Houman Hemmati: Why Clinical Trials Fail

In this episode, Ram Yalamanchili sits down with Dr. Houman Hemmati to unpack the real-world inefficiencies plaguing clinical trials and how AI can finally make a difference. But not just any AI. We're talking about AI teammates, tools that work quietly in the background to automate documentation, reduce site burden, and improve execution without disrupting care. Dr. Hemmati shares his firsthand experience as an investigator and biotech leader, breaking down why most clinical trials fail in the execution phase, not the science. This conversation pulls no punches and offers practical, tested ways AI can actually help trial sites work smarter—not harder.

Transcript

46 min

Ram Yalamanchili (00:03.534)

Hi, Dr. Hemmati, how are you? Very nice. Yeah, it's always great to see you and talk to you. So I'm really looking forward to this. So as we get into this, one of the first things I'd like to start with is: tell us your story. How did you end up where you are, and why?

Houman David Hemmati (00:05.257)

I'm very good. Thanks so much for having me.

Houman David Hemmati (00:28.162)

Yeah, my background's a little interesting. I'm an MD, but I also got a PhD. And I always envisioned a career in which science plays a major role. Traditionally, that involves being in the academic setting and practicing academic medicine, having a research lab or clinical research on the side. And that's not what the world wanted for me. I trained in ophthalmology. And afterwards, when I got a fellowship to train in a subspecialty in cornea and refractive surgery, it came along with a postdoc.

at MIT with Bob Langer, who does chemical engineering and specifically works on extended release drug delivery. And so my career took a big shift when I went there and learned how to actually make drugs and make drugs last longer, specifically in the eye. And shortly after I practiced in the academic setting for a year, I was recruited by Allergan, as it was known back then, to be involved in clinical development and ophthalmology. And so I spent a couple of years

there and once the company was sold I went off to somewhere else and eventually ended up on my own as a startup entrepreneur, co-founder, as well as chief medical officer part-time for a variety of different companies all in the ophthalmology space. And so I spend my life now working on early to late stage trials as well as even pre-clinical development of drugs intended to improve how we take care of different ophthalmic conditions.

Ram Yalamanchili (01:55.82)

Yeah. And, you know, just from looking at your profile and what you've done, you've worked on many programs and have obviously gotten deep expertise in this area. And one of the things which I'm really looking forward to is just understanding what your learnings are in this space and what you wish to change as well. And I think we're also in an interesting time where there are many opportunities for change, especially from a technology perspective. There are new innovations that are, I think,

wonderful in terms of actually implementing and getting us there. So I think that's sort of what I wanted to unpack today as far as how we converse about it and go through this. So the first step I'll probably start with is: I'm very curious about the pain points and sort of what your day to day looks like when you think about

developing a drug. From a sponsor's perspective, what are the types of pain points you usually see and spend a lot of time on, either you personally or your team?

Houman David Hemmati (03:01.797)

I wish we had about four hours to do this, because I could spend that much time focused just on pain points. There are a lot, right? There are regulatory pain points, there are CMC pain points, there's fundraising, there are clinical and clinical trial pain points. All of these things, each of them, can be a several-hour podcast. With respect to clinical development, which is the part that I spend a lot of my life focused on, in addition to the other aspects, it's several things. It's one, getting a trial

operationalized, which means selecting sites, operationalizing it, making sure they truly understand the protocol so they're not making mistakes. It means also getting the study launched and translating the protocol into an actual clinical trial, which is just like taking the information from an architect, handing it to a builder and making sure they build it. Without the architect's involvement, sometimes the builder does the wrong things, right? There are a lot of things that aren't in the architectural plans.

And a lot of it is up to interpretation. And sometimes the architect designs a building that can't really be built or be built well. And sometimes that collaboration between the architect and the builder is needed. That same thing I think I see in clinical trials. And then a lot of it is also in terms of running that trial smoothly, making sure it's enrolling properly, making sure that adverse events and other problems like enrollment issues or enrollment criteria that are not perfectly tuned.

for that particular trial are caught early so that amendments can be made and staying on top of it, understanding what kind of data is coming in, if there are any problems with the study design overall that may lead to poor or uninterpretable data later on, and whether there's anything that causes the trial to have to be stopped early. And then finally is interpreting the trial once the data is there and really making good sense of it. All of these things pose challenges and for

All of us in the trial world, one of the biggest issues is that clinical trials now are trapped in the 1980s and 1990s when it comes to how we do things. We are literally using paper and pen for a lot of things. We are literally depending on phone calls and faxes in order to transmit information. There are delays caused by those. There are mistakes caused by those and that lack of real time, high quality visibility into what's going on

Houman David Hemmati (05:27.36)

to be able to react to information in real time is causing problems for everybody. And so I think there are great opportunities to fix a lot of our pain points by modernizing how we do things. Unfortunately, until recently, I think everything has been trapped in decades-old technology and decades-old thinking. And right now, I think a lot of us are looking at how things are done and realizing it's time to also switch how we do things, to modernize it

and create tools that would actually allow us to leverage modern technology to help resolve a lot of these issues.

Ram Yalamanchili (06:02.774)

Yeah, and some of the things you're mentioning I really resonate with, because of just my own past experience being a biotech founder and being in the oncology space, working on a large clinical trial. It's very interesting that the type of problems you're mentioning resonates in a different therapeutic area as well. I think it kind of speaks volumes about the type of problems we all face as an industry, right across the board. So let me ask a few things.

Houman David Hemmati (06:27.776)

Yes.

Ram Yalamanchili (06:32.618)

What, if anything, has worked to improve some of the problems you're talking about? And I'm talking about over the past many years of your work in the space. I think visibility is one of the things you just mentioned, real-time visibility, right? Now, I could imagine a world where that was non-existent. Maybe today there is some amount of capability somewhere which may exist or has gotten to a certain extent. But I'm just curious, have you seen some amount of improvement in

visibility, or in your interaction with, let's call it your project management or your CROs, to get real-time information on what's happening with the trial? You mentioned the ability to speed up the trials. So what's been your experience in the past 10-plus years of you being in the space, and what have you seen happen so far?

Houman David Hemmati (07:25.779)

Yeah, you know, one thing that has been nice, but it also comes at a bit of a cost is having sponsor level access to the electronic data capture system to be able to log in and in a blinded or masked manner, be able to see what's coming in. So be able to look at patient level site level data. But that also comes at a bit of a price because a lot of that information.

hasn't been reviewed, so the quality may not be great. A lot of the data hasn't been entered manually into the system by the site. They are still, by and large, doing paper and pen records, and then it takes someone manually transferring that into the system. And even after they've done that, there may be issues of data quality, right? If it hasn't been monitored, if someone hasn't done source document verification, for example, if you haven't validated what's in there,

that may be erroneous information or it may be delayed or both. And so that level of sponsor access may come again at that kind of price. So you may make mistaken assumptions based on what you see. And on top of that, these systems are cumbersome and they're not necessarily built for sponsor level access, especially for people who are not in the weeds on the day-to-day basis. So people who are in my role in clinical development who are more designing trials,

and then handing them over to clinical operations don't have necessarily all the knowledge and the tools to be able to make the most of that data. We want to be able to extract what we need without spending hours and hours and potentially making mistakes in the process of finding what information we need. And so there are a lot of problems associated with that. It has helped. It has been nice to have that access. Some trials get you more access than others, but regardless, that has been beneficial.

But without having the high quality, without having the timeliness of the data in the system, that can actually backfire in some regards.

Ram Yalamanchili (09:31.182)

But you also are saying that you have visibility through the EDC, but the data is only as good as all these other steps which need to happen, which are still very manual, right? If somebody had taken it down on paper somewhere and that paper was never entered into the system, there's nothing you can do even if there's an EDC there, right? So maybe the problem then kind of goes back to: where's the site? Who's the site? How can I empower the site to be able to do a better job?

Houman David Hemmati (09:40.297)

Mm-hmm. Yes.

Houman David Hemmati (09:49.053)

That is correct.

Ram Yalamanchili (09:59.838)

If that is truly your bottleneck, then you probably are shifting the problem from data capture to site workflow in some ways. So what sort of... I think everyone's had great experiences and not-so-great experiences working with the industry and working within these problems. Have you seen...

Houman David Hemmati (10:09.001)

That's right.

Ram Yalamanchili (10:26.914)

practices or opportunities where you just said, hey, this is great, this worked amazingly well? Maybe it's working with a certain CRO partner or whatever it might be. But at the same time, maybe that doesn't scale. So I'm just curious: have you had cases where something worked great, but you wished it were able to scale really rapidly, and it wasn't, in this kind of scenario?

Houman David Hemmati (10:50.536)

Um, you know, generally all of it, I think, is problematic in that regard. I mean, the issue is that anytime I want to really be sure about what I'm looking at, whether I personally have access to it or whether I don't and I'm asking someone else, it still requires me to get into a phone call or meeting, or at least a series of emails, with someone else who really knows what's going on, to ask them:

What is really happening here? And I think that's the problem that can't be scaled: the fact that the only way to be sure that what you are seeing is real, is complete, and has been verified and validated is to have a meeting with someone, or at least to participate in a long series of emails. Regardless, there is that direct one-on-one interface required

with whoever is the keeper of all other information. And sometimes that is not so easy, right? Those people usually have other jobs. Whether they're working on other trials or even if they're dedicated to your trial, they have actual operational duties. And sometimes being able to communicate with a sponsor and provide that information is the last on their list of priorities for obvious reasons.

Ram Yalamanchili (12:09.568)

Right, so then it becomes almost like an attention situation, right? Attention from the right person, which, if you can scale that, then yes, you get what you want. But clearly that is not the easiest resource to scale. Great people are in limited quantity, and we only get a certain amount of attention from them.

Houman David Hemmati (12:18.184)

Correct.

Houman David Hemmati (12:27.997)

Well, there are budgetary issues as well, right? You can pay for it. You can certainly ask the CRO to dedicate multiple full-time people to a trial simply for those reasons. But the problem is that that is exceptionally expensive, right? Clinical trials right now are so expensive that sponsors literally are asking CROs, and I've just done that recently: can you please limit the number of people who are in our weekly meetings? We don't see person X and person Z

contributing very much. They have said nothing in the past several weeks of attending meetings, and we've got a big bill for it. That's literally what sponsors have to do; I've had to do it. It's part of that lean trial mentality. And when you do that, imagine having to add more people just to have that extra visibility and extra access. That doesn't make financial sense. And so I think cutting that out, but being able to automate that process or make it quicker

and simpler without having lots of additional full-time people involved is essential.

Ram Yalamanchili (13:30.446)

Right, right. And just switching to this: obviously, you know, in some ways, cost is tied to capacity, because you're hiring resources. And if you're trying to be more efficient, you might at this point choose a path where you're reducing the capacity of what you could do with the resources you've got. Basically, that's kind of what you're saying: give me fewer people on my project, or keep it to a certain number of hours. How does that affect

Houman David Hemmati (13:41.276)

Mm-hmm.

Houman David Hemmati (13:52.602)

Exactly.

Ram Yalamanchili (14:00.502)

your decision making? I mean, I'm just curious: are there opportunities where you say, well, this trial is going this route, and you know what, I'm very curious about something else, we need to dig down into it. But then, if answering that additional question is a massive cost, are you then making that decision on whether you should do it or not? Does this happen on a regular basis, from your perspective?

Houman David Hemmati (14:25.926)

It has. You know, in terms of interpreting trial data, for example, that's been something that I found essential. A trial that I was recently involved in appeared at first glance to not have worked. But once we dug deep into it, a process that came at a great cost, we were billed $25,000 and it took four weeks

for their team to take the data and analyze it in the way that we wanted to, something that I could have done on my own had I had access to the full database of validated info in a matter of seconds or minutes, right? That rescued the trial and it showed us that there actually was an amazing effect, something that we didn't anticipate when we initially designed our analysis plan. Similarly, I know of another trial that I wasn't involved in where the data again came out as a failure.

a blockbuster failure in a big phase three. However, once they went back and looked at two different subgroups, they found: hey, this actually worked really well in one of those subgroups, a very well-defined subgroup, something that hadn't been expected. Guess what? That company was just sold for a couple billion dollars based on that analysis of the data, which came out after they thought it had failed. Many such things. But that analysis itself took

several months and cost many hundreds of thousands of dollars, because they needed an army of people to look at it and figure out how they could make sense of what they expected to be a success that didn't turn out that way once they got their top-line data. Many things like this I think could go much faster, could go much quicker, and actually could go better, because if you have software that can think better than one, two or even 10 people, it can look

at all different possibilities rather than just those few that initially come to mind, we can end up extracting way more information out of trials. Those that have succeeded, those that have failed potentially, or those that are somewhere in the middle. There are infinite possibilities, but unless we have that immediate analytic ability, we may make the wrong assumptions about our trial data, whether it's in real time with masked data that's coming in or

Houman David Hemmati (16:46.957)

At the very end, when you have top line data coming in, you're trying to make immediate decisions on where you take your program from there or attempt to attain funding to continue your program. These are things that are very time sensitive and you don't have the ability or the luxury of spending more money on it sometimes or the ability or luxury to spend a lot of time on figuring things out.

Ram Yalamanchili (17:11.982)

I see. A lot of interesting things there to unpack. I think I've got a couple of things here. First, there seems to be this notion that I'm going to run the trial to a certain extent, and then I do an analysis, and I get some insight, and then we keep moving. But even when you do that, you somehow found out at the very end that it failed. And then you had to do some kind of a subgroup analysis, and then you found out, well, this will actually work in this particular case.

Houman David Hemmati (17:41.742)

Yes.

Ram Yalamanchili (17:42.146)

Why hasn't that happened earlier? What contributed, or what sort of limitations exist today, such that there is no course correction mechanism where you could, at a much higher frequency, figure out: could I have done something different? Could I have made an amendment in the protocol to be able to do this? Are those the type of things which you see from a lack of capability in the current infrastructure? Maybe talk more about that.

Houman David Hemmati (18:10.373)

Yeah, and it comes back to having real-time access to data, right? One issue, of course, is that in a trial that is randomized and double-blinded or double-masked, you can't have access to unmasked data, right? So you're limited in what you can do based on that. And you really shouldn't, as a sponsor, have access to that data, because it allows you the opportunity

to change the course of how things are done in a way that is unfair, right, and creates invalid trial data. However, at the same time, there are opportunities to look at what's coming in, even if it is masked, and say: you know what, we're seeing that there are no responders, regardless of whether they may or may not be on active, right? They may be placebo or active. As a matter of fact, no one's responding who has a certain group of, you know, criteria that they meet.

Whereas there are others where, you know, we see half of them responding, or a third responding, or something better. Guess what? Maybe we shouldn't have enrolled anyone who has a certain, you know, set of criteria, whatever it may be for that particular disease indication. That allows a sponsor to get ahead of it and perhaps tweak things in order to avoid futile treatment for people in the trial for whom this may not work. That benefits potential subjects in the trial, you know, who were

otherwise being exposed to only risk with no potential benefit. And it benefits a sponsor by getting things to move much more quickly, increasing the probability of success, increasing the probability of an approval down the line. And so it's a win-win, I think for everybody, but until you have the data and are able to look at it, you may not know. Same thing, by the way, applies to adverse events as well, right? If you have a complete picture and real-time picture as someone who's done medical monitoring and has been responsible for signing off on the

adverse events in clinical trials and actually monitoring for things that are serious, that may even be unexpected, but at the very least be severe enough to warrant further investigation. If you don't have that real-time access, there may be delays. Sometimes this is up to the site to proactively remember to notify the CRO, then for the CRO to proactively remember to notify the sponsor. These delays can actually cause problems

Houman David Hemmati (20:33.902)

for the subjects in the trial, right? Not forgetting the drug itself; you need to know about that. As someone who's done medical monitoring, I want to have immediate visibility into those issues without having to wait for that human interface. And again, you don't want that delay. And I've seen those delays occur. And in many indications, it's not that big of a deal. But in some indications, it's a life or death issue. And you do need to know those sorts of things as soon as they occur.

Ram Yalamanchili (21:00.248)

Makes sense. And you mentioned delays, and time is another one. In everything you unpacked, clearly there is this urgency to access data; I get that. Then there's this cost equation, but there's also the time equation, right? I think you want to move at a certain pace. And I hear very often, you know, that recruitment is the biggest challenge for trials to sort of go where they need to go. And

Houman David Hemmati (21:15.149)

Yes.

Ram Yalamanchili (21:27.406)

I think it also comes back to how fast your sites are activated, how great your sites are, the whole works, right? So it's a multivariate problem here. So my question for you is, how do you figure out how many sites you can activate and what is the ideal number? Because in a theoretical situation, if you had infinite scale and infinite number of sites which can be added, maybe the recruitment problem can be solved. Is that a good way to think about it? How do you think about it?

Houman David Hemmati (21:55.297)

Yeah, recruitment comes down to a few things, right? There is site number: how many do you have? There is site quality: you can have infinite sites, but if they're not good sites, or if they don't understand the protocol, or if they're deprioritizing it, or if there's a problem somewhere else in that equation, then it's not going to help you. And then there's also the enrollment criteria, right? Even the very best sites cannot enroll an unenrollable trial

where you have inclusion or exclusion criteria or both that are just not reasonable. And there are some indications, newer indications especially, right? Where you're in uncharted territory and it's not until you attempt to enroll the trial that you find out that maybe your enrollment criteria were a little bit too narrow and restrictive in such a way that you're just not getting any subjects. I have seen that. I've experienced that numerous times in trials where we went into uncharted territory and

You know, and waited and waited and waited. I'm in the middle of one trial, so I can't talk about the details of it, but I've been in the middle of one trial where there was one enrollment criterion that caused enrollment to be so incredibly slow, and it's for a rare disease, so that made it far worse: we were getting one subject in every several months. It wasn't until we did that manual review of everything that we were able to finally figure out what

was causing all these screen fails. And the moment we went and fixed that, did a small amendment that didn't impact the trial in any way in terms of safety or efficacy assessments (it was just a guess that we got wrong), all of a sudden enrollment picked up much more quickly. And it allowed the company to move forward without having to do a massive raise or abandon the program, which could have otherwise happened. I've had many other examples of this as well, where that real-time access

is helpful, but delays in that real-time access caused delays in the ability to detect those issues, sometimes to the point where they could have been devastating to the company, to the program, or otherwise.

Ram Yalamanchili (24:03.936)

Interesting. By the way, this particular scenario I've heard many, many folks tell me, especially in the clinical development world, right? And would you say this comes back to the earlier discussion of not having access to the data or at least the people to do the analysis or being able to ask the questions which you wanted to ask as soon as possible? Is this sort of related to that? Is that why you waited as long as you waited before the amendment had to be made? And it sounds like that sort of gave you a...

Houman David Hemmati (24:10.476)

Yes.

Houman David Hemmati (24:29.867)

Yes.

Ram Yalamanchili (24:33.336)

pretty quick activation, right? So it was like a step function change when this particular change was made, which is fascinating.

Houman David Hemmati (24:38.005)

Correct. Yeah. I mean, look, in a lot of trials, by the way, the sponsor doesn't have visibility into things that are happening until that weekly meeting, right? And so there are a lot of things that happen between weekly meetings, and you don't want to necessarily wait five, six or seven days to know about something. And sometimes things aren't even available to the CRO until a week later. And so now you've got a two-week or three-week delay to finding out important information.

That's unacceptable in many cases. And remember also, there are limited people, and those limited people have many things on their list, and they have limited bandwidth because of that, and limited priorities. And sometimes their priority, obviously, is going to be executing the trial. It's not necessarily everything else that the sponsor is worried about. So I think it's really not necessarily about adding people or replacing people. It's about taking the people who we already have and allowing them

to really work faster, do more things with the time that they have, and allow all of the priorities to be met at the very same time. And I think it's possible by shifting how we approach technology and how we really look at conducting trials to begin with.

Ram Yalamanchili (25:51.586)

Yeah. And I think I also realized something interesting from what you're saying. There is this notion that we're capturing data in the EDC, which obviously it does, but there's also so much it is not capturing, and that is knowledge which is being built up in your team, whoever's working on the trial. And the only way to query that knowledge is to ask them, in the midst of many other things and priorities they already have. So you've got this contextualized data, which is

Houman David Hemmati (26:10.518)

Yes.

Houman David Hemmati (26:17.568)

Yes.

Ram Yalamanchili (26:21.506)

part of it in the EDC and the rest of it in the ether, let's say, in the team's knowledge. And one is potentially easier to query, the other is not, because that's like generalized context, right? And so it's a very interesting way to think about this problem, because I see a lot of parallels in sort of how...

Houman David Hemmati (26:32.683)

Yes.

Ram Yalamanchili (26:44.246)

I see this evolving, including what we're working on and what we've built, and how our customers are using some of the tooling and the capabilities we have. So let me switch to the next point here, right? I come from the view that, as you've said, there is going to be a sort of shift in how we prioritize things and what we do and what we don't do, right?

Houman David Hemmati (27:10.238)

Mm-hmm.

Ram Yalamanchili (27:11.17)

That's just natural from a productivity standpoint and the evolution of using technology in everybody's workflow and life cycle. And in this case, I think for the very first time perhaps in history, what we've got is AI which is incredibly smart. We have AIs that have great capabilities, not only around logical problem solving, but also reasoning now, task planning,

some concepts of memorization, short-term memory, long-term memory. So there are various things which are now coming in, and capabilities which extend beyond just asking a question and getting an answer back, which is where ChatGPT was, or some of these information retrieval style problems that are being solved with ChatGPT, right? And I think all of us have probably seen it: we want to draft a nice email, we ask ChatGPT, and it gives that back.

Houman David Hemmati (27:50.655)

Yes.

Ram Yalamanchili (28:05.624)

That doesn't necessarily remove the work of me actually answering 500 emails. It's just solving a part of that problem. And I think the capabilities of AI being able to do great reasoning and great task planning will get us into that next level of unlock, where you can say, hey, go do this work for me, and I will basically provide oversight, right? So we are coming into this domain of what we call AI teammates. So these are...

Houman David Hemmati (28:12.149)

Yes.

Ram Yalamanchili (28:34.922)

not people; these are AIs who have access to certain tools. So in our case, we have access to communication tools, phones, emails. The AI teammate has access to these tools. They have access to a computer where they can work in a browser. They have access to a mouse and a keyboard. So they have tools on which we can now teach skills. And then they can be part of your team. You train them. You sort of get them to a place of competence where they're able to do certain workflows.

I find it interesting the way you're describing how, you know, there's siloed knowledge: some of it is in the EDC, some is in the team. And now I feel like, you know, is there a world where your AI teammates can absorb that knowledge which was previously not being captured anywhere, and then you can ask the question that way, right? Or you can ask them to do certain things about it. So if this were the case, and this were how we were

Houman David Hemmati (29:10.335)

Mm-hmm.

Ram Yalamanchili (29:32.408)

progressing, at least this is my view. Maybe what I should say is: do you see that happening? Do you see this sort of thing as too much science fiction, or do you feel like we are here, or at least we're seeing glimpses of it, and this is very quickly evolving? Where do you stand?

Houman David Hemmati (29:48.68)

Yeah, I think we're not quite there yet 100%, obviously, because we don't see this employed in every trial, let alone even, you know, a major subset of trials. But we are now technologically at that place where I think we are at a major inflection point, where we can now actually have AI tools that have visibility into everything, even AI tools that attend, you know, Zoom meetings, for example, right, and are absorbing everything that's coming in.

you know, in real time and in the totality of it, let alone having access to the database and having access to so many other things that may not have made it in the database, but are relevant as well as other pieces of information that are coming through, you know, various sources of communications. All of that said, I think it's quite valuable and it gives us an opportunity to do things and see things that we have not been able to do before with a level of timing, cost and scale.

that we've never envisioned before. And I think what that's gonna allow us to do, it'll unlock opportunities to run trials far more effectively, efficiently, quickly, safely, and with a higher probability of success. And ultimately, those are the things that any clinical trial needs, right? There is no reason why the operational aspects of a very simple 30 patient clinical trial, one of which I completed a year or so ago, should...

feel like a phase three trial. It just shouldn't. And there's no reason why, you know, conducting a phase three trial, no matter how complex it is, right, but especially in an easy field like the eye drop trials that I do in ophthalmology, should feel like you're launching a rocket to the moon. I mean, these are things that should be pretty basic, but we've, you know, added too much complexity to it because of the processes that are required,

you know, to maintain quality standards in clinical trials. But I think by introducing AI, by introducing automation, and by really simply modernizing how things are done, we have the opportunity to really reduce the cost, reduce the time, and that benefits not just the sponsor, it ultimately benefits patients. It gets drugs to market quicker, cheaper, and ultimately that's what everybody wants.

Ram Yalamanchili (32:06.626)

Yeah, no, this is where the exciting effect of implementing AI into this process is, right? One other area which I've thought through and sort of worked on, or at least saw through my experience at Lexand, my previous company, and at Tilda with our current customers and opportunities. See, I think change management is a scary word for many. And one of the advantages of sort of...

Houman David Hemmati (32:25.725)

Mm-hmm.

Ram Yalamanchili (32:34.444)

working with an AI teammate is you don't necessarily need to change a lot of your process. You can sort of say, this process involves a certain amount of overhead. I don't want to do it. I'm just going to give it to my AI teammate to do it. An example of this would be, we work with many sites now and there is an expectation from all their sponsors to first capture source. Maybe it's on paper, maybe it's on some system and then move that data into the EDC layer.

Houman David Hemmati (32:47.186)

Yes.

Ram Yalamanchili (33:04.246)

So you're essentially doing double data entry, right? You're entering it into your source first, and then you're entering it into the EDC. And from a site's perspective, they've got this problem where, okay, that means if I have 20 trials running actively, I've got potentially 20 different systems I need to work with. And, you know, I have 20 different EDC logins at least, right? So there are many EDCs. One might be on platform A and another company could be on platform B. So there's, you know, a diversity of systems here. And

Houman David Hemmati (33:04.327)

Yes.

Houman David Hemmati (33:23.291)

Yeah, it's true.

Ram Yalamanchili (33:33.866)

what we've come in and said is, well, you should really just standardize on what you do well, which is doing source entry, which is anyway what you do. Every clinic has the capability to enter this data into an EMR or some kind of paper chart or some other system. So you do that. But the after part of, like, once you do that, moving into another system, managing some form of your queries, that's something which an AI teammate at this point is fairly capable of, and can do in a really, like...

Houman David Hemmati (33:47.548)

Yes.

Ram Yalamanchili (34:01.742)

a high quality, consistent, reliable way. So the argument here is, why do you want to spend half your working time doing data entry into the EDC when you could basically focus on patient care, which is a net-net benefit for everybody, right? The whole ecosystem wins in this situation, but we've never changed the process itself. We didn't change manage anything here, because we just kind of handed that work to somebody else, and this somebody else just happens to be a very powerful, intelligent

So that sort of thing, right? And a similar sort of experience with some of our sponsors and several opportunities lately, which we're seeing, is: I've got a phase three trial. I need to enable 700 sites in eight months from FPI. You know, it's across X number of countries. How do I do it? I need to staff temporarily to do all of this work and then, you know, kind of manage them out if I don't have a large project of that sort.

Houman David Hemmati (34:33.469)

That is correct.

Ram Yalamanchili (35:01.546)

And to me, that sort of thing is really exciting, because these are processes where you're not being asked to change manage. Like, don't change your process. If you really just want to email every single site and follow up on documents in that manner, that's not a problem at all. You can basically have an AI teammate which will scale immediately to do this sort of work, rather than sort of doing every single part of this work with a person, right? And what I find is also that sometimes

it's not even about, like, building the team. It's really hard to build these teams, because there's not enough capacity out there to be able to enable these large trials on the kind of timeline you're looking at. And I think you have resource competition. That's why running a phase one feels like a phase three, because you're competing with the same type of resource constraints which everybody else is. And so, you know, it's sort of like a persistent problem, I think, which all comes from, to me anyway, the lack of a really great supply of

Houman David Hemmati (35:51.707)

Yes.

Ram Yalamanchili (36:00.918)

individual talent in this industry, and we're not growing that fast enough. And that sort of has a knock-on effect, in my opinion anyway, on the number of shots we have on goal, right? The number of biotech companies being formed, the number of opportunities to bring new drugs into clinical development, and this sort of thing all kind of... and then of course the amount of capital available, because if you had a certain amount of capital, each of your trials costs a certain amount of money, and you divide that, that's the N you have in terms of how many opportunities.

And so I sort of see this interesting opportunity where for the first time we're saying, if you don't want to modernize your process or rather if you don't want to think about it that way, don't do that. But you can have an AI teammate come in and sort of work within your framework in this way, right? Which is sort of an interesting paradigm I'm looking at. Do you sort of see that? I mean, I guess like...

I would ask you, how does that fit into your framework? Does that make sense? Do you have any concerns? Anything where you feel this is fundamentally not viable in some ways, I guess.

Houman David Hemmati (37:14.693)

I'll tell you, I think it's gonna require people to become comfortable with it. And different people are comfortable to a different degree. I'm a person who already likes to, you know, take de-identified data sometimes and just do a quick analysis using, you know, Grok or ChatGPT, just to give myself some confidence about things before I get an official analysis,

you know, with a limited amount of information, just to see how does it do it? And I've always been very pleasantly surprised. And so I'm comfortable with the technology only because I played around with it even outside of the formal setting within trials, right? I know on the other hand, there may be people who are hesitant only because of a sense of mistrust, not knowing what's going on in the black box, right? And what it's gonna take is for there to be real world examples.

First, where this sort of approach, having the virtual teammates or the AI teammates, succeeds and benefits trials without causing problems. And then second, it's going to require many people, including people who are a little skeptical or hesitant, to be engaging with it themselves, perhaps in a way that's redundant even, right? To have it alongside additional traditional resources for trials, but not instead of.

To see, okay, how is this helping me? And if I didn't have all these additional people, how would that have helped? Or how does that AI tool allow the people I do have to focus on what they really need to be focused on instead of constantly answering, you know, my queries or questions about things. Ultimately it will happen because once we see real world examples of trials becoming far quicker and far more efficient, naturally sponsors will

will want to go in that direction. CROs will want to go in that direction. And frankly, the sites will want to take that approach, because it's going to make their lives way easier, right? And as a site, you don't care about the mechanics of the trial. Sites want to take care of patients, plain and simple. And the easier it can be for them to take their existing clinical framework and bring patients in for clinical trials without changing how their clinic is run, the better. And so for them,

Houman David Hemmati (39:41.122)

removing any of those heavy-lifting steps, or making them far easier to do, the better. And so I think, again, this poses a win-win for everyone. And it's just really a matter of rolling it out in a way that makes people comfortable and demonstrates both the benefits and the lower level of risk.

Ram Yalamanchili (40:03.596)

Yeah, understandable. I think the comfort and trust are going to come with more data and, frankly, just being able to prove that these are actually competent and able to do the work. But I think that just comes with time. I think there's going to be innovators, there's going to be visionaries who are able to see it and say, okay, I have some value here, let's kind of jump in and do something with it. And sort of, I think, what you're talking about is the industry will

Houman David Hemmati (40:20.271)

Yes.

Ram Yalamanchili (40:33.12)

essentially evolve because there is essentially a need for it, and we have all these challenges we've spoken about. So my last question for today is, you know, independently thinking about all your journeys so far, do you feel like the next five years are going to be sort of similar, or do you feel like there's something rumbling in this case? Like, there might be a change for the better or a change, I don't know, however you look at it. I guess I'm asking, what do you think?

Houman David Hemmati (41:02.841)

I think what we're probably going to see in the world of trials is what we're seeing even in the government right now, which is a focus on eliminating any kind of waste, but to do so thoughtfully, obviously, and to make things more efficient so we can focus our limited resources on doing what we're supposed to do very, very well and very quickly and efficiently, right? In a way that benefits everyone. So my vision for what's going to happen

with clinical trials, especially with the involvement of AI, is that we're going to see a lot more self-service, a lot fewer delays, and a lot more confidence in the quality of the trial and the quality of the data that are coming out, and the ability to execute and get the results that we're hoping to see, and that the results that we see are actually reflective of the performance of the product, of the drug. And we're headed there, for sure. Right now, we're at this point in the evolution

of clinical trial conduct that we've maximized existing traditional technologies. And it's time to now take a shift to new technologies. And that happened when we shifted from pure paper charts to involving electronic databases and doing simple things like Excel spreadsheets and eventually to more advanced systems. But now we're at that inflection point that moves far beyond that to a new level where we're able to actually make sense.

of the information in real time rather than having to just simply rely on people to interpret it or act on it.

Ram Yalamanchili (42:35.554)

Makes sense. One quick follow-up on that is, what's your view from a regulatory perspective? Have you had an opportunity to think about that or speak to anybody on that?

Houman David Hemmati (42:44.106)

Yeah, you know, I think from a regulatory perspective, we're again in uncharted territory, and the FDA, especially the modern FDA that's under very new leadership as of yesterday with Dr. Marty Makary in charge as FDA commissioner, has an opportunity to really modernize how trials are allowed to be run. And I think what they're going to probably

recognize is that this benefits the FDA as well. The more quality data they can get, and even if they have access to some of those tools and are allowed to utilize it themselves to dig into the trials, the better. Imagine being the FDA and having the ability to do trial audits in a way that doesn't force them to send an army of people to each site that they want to audit, but rather allows them to do it remotely, or even if they are in person, send fewer people.

and to do it far more robustly with greater scalability, that's going to benefit everyone, because now the agency would be able to discover things that they may have missed in the past, while also employing far fewer resources and making it a lot less cumbersome on the sites. Right now, if you're a site and you have the FDA coming, you have to drop everything for a matter of days sometimes, or longer, just to simply accommodate the agency for that review.

That can all go away if we change how things are done from the regulatory side. Of course, there are going to be regulatory approvals required for some of these things, especially when it comes to sponsor access and also the ability to use AI-driven analytics for FDA submissions. Right now, the FDA wants to be able to take the raw data and do their own analyses to validate

that the sponsor's analysis was done correctly without any fudging. And if the FDA similarly has access to these tools, either the traditional tools still, or the AI tools, or both, and they can do side-by-side comparisons, ultimately I think the FDA will be comfortable as well. But I think that's gonna be a process, and it creates opportunities to work in partnership with them.

Ram Yalamanchili (45:01.494)

Understood, yeah. No, it's a fascinating... I think it'll be a fascinating next several years. We're truly in a very interesting time. Well, Dr. Hemmati, it's been a pleasure. I appreciate you spending the time discussing this. I had a lot of fun talking to you about this. So thank you.

Houman David Hemmati (45:19.732)

Yeah, Ram, thank you for having me. Always happy to join you again.


Ram Yalamanchili (00:03.534)

Hi, Dr. Hamati, how are you? Very nice. Yeah, it's always great to see you and talk to you. So I'm really looking forward to this. So as we get into this, one of the first things I'd like to start with is tell us your story. How did you end up where you are and why?

Houman David Hemmati (00:05.257)

I'm very good. Thanks so much for having me.

Houman David Hemmati (00:28.162)

Yeah, my background's a little interesting. I'm an MD, but I also got a PhD. And I always envisioned a career in which science plays a major role. Traditionally, that involves being in the academic setting and practicing academic medicine, having a research lab or clinical research on the side. And that's not what the world wanted for me. I trained in ophthalmology. And afterwards, when I got a fellowship to train in a subspecialty in cornea and refractive surgery, it came along with a postdoc.

at MIT with Bob Langer, who does chemical engineering and specifically works on extended release drug delivery. And so my career took a big shift when I went there and learned how to actually make drugs and make drugs last longer, specifically in the eye. And shortly after I practiced in the academic setting for a year, I was recruited by Allergan, as it was known back then, to be involved in clinical development and ophthalmology. And so I spent a couple of years

there and once the company was sold I went off to somewhere else and eventually ended up on my own as a startup entrepreneur, co-founder, as well as chief medical officer part-time for a variety of different companies all in the ophthalmology space. And so I spend my life now working on early to late stage trials as well as even pre-clinical development of drugs intended to improve how we take care of different ophthalmic conditions.

Ram Yalamanchili (01:55.82)

Yeah. And, you know, just from looking at your profile and what you've done, you've worked on many programs, have obviously gotten deep expertise in this area. And one of the things which I'm really looking forward to is just understanding what are your learnings in this place and what do you wish to change as well? And I think we're also in an interesting time where there's many opportunities for change. especially from a technology perspective, there's new innovations, are, think,

wonderful in terms of actually implementing and getting us there. So I think that's sort of what I wanted to unpack today as far as how we converse about it and go through this. So the first step I'll probably start with is I'm very curious about the pain points and sort of like what does your day to day look like when you think about

you know, developing a drug, what, what, from a sponsor's perspective, what are the type of pain points you usually see and, and spend more, a lot of time either you personally or from your team.

Houman David Hemmati (03:01.797)

I wish we had about four hours to do this because I can just spend that much time focused on pain points. There are a lot, right? There are regulatory pain points, there's CMC pain points, there's fundraising, there's clinical and clinical trial pain points. All of these things, each of them can be a several hour podcast. With respect to the clinical development, which is the part that I spend a lot of my life focused on, in addition to the other aspects, it's several things. It's one, getting a trial.

operationalized, which means selecting sites, operationalizing it, making sure they truly understand the protocol so they're not making mistakes. It means also getting the study launched and translating the protocol into an actual clinical trial, which is just like taking the information from an architect, handing it to a builder and making sure they build it. Without the architect's involvement, sometimes the builder does the wrong things, right? There are a lot of things that aren't in the architectural plans.

And a lot of it is up to interpretation. And sometimes the architect designs a building that can't really be built or be built well. And sometimes that collaboration between the architect and the builder is needed. That same thing I think I see in clinical trials. And then a lot of it is also in terms of running that trial smoothly, making sure it's enrolling properly, making sure that adverse events and other problems like enrollment issues or enrollment criteria that are not perfectly tuned.

for that particular trial are caught early so that amendments can be made and staying on top of it, understanding what kind of data is coming in, if there are any problems with the study design overall that may lead to poor or uninterpretable data later on, and whether there's anything that causes the trial to have to be stopped early. And then finally is interpreting the trial once the data is there and really making good sense of it. All of these things pose challenges and for

All of us in the trial world, one of the biggest issues is that clinical trials now are trapped in the 1980s and 1990s when it comes to how we do things. We are literally using paper and pen for a lot of things. We are literally depending on phone calls and faxes in order to transmit information. There are delays caused by those. There are mistakes caused by those and that lack of real time, high quality visibility into what's going on

Houman David Hemmati (05:27.36)

to be able to react to information in real time is causing problems for everybody. And so I think there are great opportunities to fix a lot of our pain points by modernizing, how we do things. unfortunately, until recently, I think everything has been trapped in decades old technology and decades old thinking. and, and right now, I think a lot of us are looking at how things are done and realizing it's time to also switch how we do things to modernize it.

and create tools that would actually allow us to leverage modern technology to help resolve a lot of these issues.

Ram Yalamanchili (06:02.774)

Yeah, and some of the things you're mentioning, I think I really resonate with because just my own past experience being a biotech founder and being in the oncology space, working on a large clinical trial, it's very interesting that the type of problems you're mentioning kind of resonates in a different therapeutic area as well. I think it's it kind of speaks volumes to the type of problems we all face as an industry, right across the board. So I'm talking a few things.

Houman David Hemmati (06:27.776)

Yes.

Ram Yalamanchili (06:32.618)

What, any, has worked to improve some of the problems you're talking about? And I'm talking about over the past many years of your work in the space. think visibility is one of the things you just mentioned, real-time visibility, right? Now, I could imagine a world where that was non-existent. Maybe today there is some amount of capability somewhere which may exist or could have gotten to a certain extent. But I'm just curious, have you seen some amount of improvement in

visibility or your interaction with, let's call it your project management or your CROs to get real-time information on what's happening with the trial. You mentioned the ability to like speed up the trials. So what's been your like experience in the past like, you know, 10 plus years of you being in the space and sort of what have you seen happen so far?

Houman David Hemmati (07:25.779)

Yeah, you know, one thing that has been nice, but it also comes at a bit of a cost is having sponsor level access to the electronic data capture system to be able to log in and in a blinded or masked manner, be able to see what's coming in. So be able to look at patient level site level data. But that also comes at a bit of a price because a lot of that information.

hasn't been reviewed, so the quality may not be great. A lot of the data hasn't been entered manually into the system by the site. They are still, by and large, doing paper and pen records, and then it takes someone manually transferring that into the system. And even after they've done that, there may be issues of data quality, right? If it hasn't been monitored, if someone hasn't done source document verification, for example, if you haven't validated what's in there,

that may be erroneous information or it may be delayed or both. And so that level of sponsor access may come again at that kind of price. So you may make mistaken assumptions based on what you see. And on top of that, these systems are cumbersome and they're not necessarily built for sponsor level access, especially for people who are not in the weeds on the day-to-day basis. So people who are in my role in clinical development who are more designing trials,

and then handing them over to clinical operations don't have necessarily all the knowledge and the tools to be able to make the most of that data. We want to be able to extract what we need without spending hours and hours and potentially making mistakes in the process of finding what information we need. And so there are a lot of problems associated with that. It has helped. It has been nice to have that access. Some trials get you more access than others, but regardless, that has been beneficial.

But without having the high quality, without having the timeliness of the data in the system, that can actually backfire in some regards.

Ram Yalamanchili (09:31.182)

But you also are saying that you have visibility through the EDC, but the data is only as good as all these other steps which need to happen, which are still very manual, right? If somebody had taken it on a paper somewhere and that paper was never entered into the system, there's nothing you can do even if there's an EDC there, right? So maybe then the problem then kind of goes back to, where's the site? Who's the site? How can I empower the site to be able to do a better?

Houman David Hemmati (09:40.297)

Mm-hmm. Yes.

Houman David Hemmati (09:49.053)

That is correct.

Ram Yalamanchili (09:59.838)

If that is truly your bottleneck, then you probably are shifting the problem from a data capture to a site workflow in some ways. it. what sort of... I think everyone's gotten great experiences and not so great experiences working with the industry and working within these problems. Have you seen...

Houman David Hemmati (10:09.001)

That's right.

Ram Yalamanchili (10:26.914)

practices or opportunities where you just said, hey, this is great. This worked amazingly well. Maybe it's working with a certain CRO partner or whatever it might be. But at the same time, maybe that doesn't scale. So I'm just curious, like, have you had problems where something worked great, but you wish it were able to scale really rapidly, but it wasn't in this kind of scenario?

Houman David Hemmati (10:50.536)

Um, you know, generally all of it, think is, uh, is problematic in that regard. I mean, the issue is that anytime I want to really be sure that what I'm looking at, even if I personally have access to it or whether I don't, and I'm asking someone else, it still requires me to get into a phone call or meeting, or at least a series of emails with someone else who really knows what's going on to ask them.

What is really happening here? And I think that's the problem that can't be scaled is the fact that in order to be sure that what you are seeing is real is complete has been verified, validated, is, is. The only way to do that is to have a meeting with someone or at least to participate in a long series of emails. Regardless, there is that direct one-on-one interface required.

with whoever is the keeper of all other information. And sometimes that is not so easy, right? Those people usually have other jobs. Whether they're working on other trials or even if they're dedicated to your trial, they have actual operational duties. And sometimes being able to communicate with a sponsor and provide that information is the last on their list of priorities for obvious reasons.

Ram Yalamanchili (12:09.568)

Right, so then it becomes almost like an attention situation, Attention from the right person who hopefully if you can scale that then yes, you get what you want. But clearly that is not the easiest resource to scale. Great people are in limited quantity and we've got certain amount of attention from.

Houman David Hemmati (12:18.184)

Correct.

Houman David Hemmati (12:27.997)

Well, there are budgetary issues as well, right? You can pay for it. You can certainly ask the CRO to dedicate multiple full-time people to a trial simply for those reasons. But the problem is that that is exceptionally expensive, right? Clinical trials right now are so expensive that sponsors literally are asking CROs, and I've just done that recently. Can you please limit the number of people who are in our weekly meetings? We don't see person X and person Z

contributing very much they have said nothing in the past several weeks of attending meetings and we've got a big bill for it that's literally what sponsors have to do i've had to do it it's it's part of that lean trial mentality and and when you do that imagine having to add more people just to have that extra visibility in extra access that doesn't make financial sense and so i think cutting that out but being able to to to automate that process or make it quicker

and simpler without having lots of additional full-time people involved is essential.

Ram Yalamanchili (13:30.446)

Right, right. And just switching to this, right, obviously, you know, in some ways, cost is tied to the capacity because you're hiring resources. And if you're trying to be more efficient, you might at this point choose a path where you're reducing the capacity of what you could do with the resources you've got. Basically, that's kind of what you're saying. Give me less people on my project or keep it to a certain number of hours. How does that affect

Houman David Hemmati (13:41.276)

Mm-hmm.

Houman David Hemmati (13:52.602)

Exactly.

Ram Yalamanchili (14:00.502)

like your decision making. mean, I'm just curious, like, are there opportunities where you say, well, this trial is going this route and you know what, I'm very curious about something else. we need to dig down into it. But then if answering that additional question is a massive cost, then are you then making that decision on should I do it or not? Does this happen on a regular basis from your perspective?

Houman David Hemmati (14:25.926)

It has, know, in terms of interpreting trial data, for example, that's been, you know, something that I found essential, you know, a trial that I was recently involved in appeared at first glance to not have worked. But once we dug deep into it, a process that came at a great cost, we were billed, you know, $25,000 and it took four weeks.

for their team to take the data and analyze it in the way that we wanted to, something that I could have done on my own had I had access to the full database of validated info in a matter of seconds or minutes, right? That rescued the trial and it showed us that there actually was an amazing effect, something that we didn't anticipate when we initially designed our analysis plan. Similarly, I know of another trial that I wasn't involved in where the data again came out as a failure.

a blockbuster failure and a big phase three. However, once they went back and looked at two different subgroups, they found, Hey, this actually worked really well in one of those subgroups, a very well defined subgroup, something that hadn't been expected. Guess what? That company was just sold for a couple billion dollars based on that analysis of the data that came out after they thought it had failed. many such things, but that analysis itself took.

several months and cost many hundreds of thousands of dollars because they needed an army of people to look at it and figure out how they can make sense of what they expected to be a success and didn't turn out that way once they got their top line data. Many things like this that I think could go much faster, could go much quicker and actually could go better because if you have software that can think better than one, two or even 10 people can look.

at all different possibilities rather than just those few that initially come to mind, we can end up extracting way more information out of trials. Those that have succeeded, those that have failed potentially, or those that are somewhere in the middle. There are infinite possibilities, but unless we have that immediate analytic ability, we may make the wrong assumptions about our trial data, whether it's in real time with masked data that's coming in or

Houman David Hemmati (16:46.957)

At the very end, when you have top-line data coming in, you're trying to make immediate decisions on where you take your program from there, or attempt to attain funding to continue your program. These are things that are very time-sensitive, and you don't always have the luxury of spending more money or a lot more time figuring things out.
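
(For illustration: the kind of subgroup tabulation Dr. Hemmati describes, a top-line "failure" hiding a strong effect in one predefined subgroup, can be sketched in a few lines of Python. The data, column names, and effect sizes here are invented for the sketch, not taken from any trial he mentions.)

```python
import pandas as pd

# Hypothetical trial results: the overall response rate looks unimpressive,
# but one well-defined subgroup responds strongly on the active arm.
df = pd.DataFrame({
    "arm":       ["active"] * 6 + ["placebo"] * 6,
    "subgroup":  ["A", "A", "A", "B", "B", "B"] * 2,
    "responded": [1, 1, 1, 0, 0, 0,   # active arm: every subgroup-A patient responds
                  0, 1, 0, 0, 0, 0],  # placebo arm: background response only
})

# Top-line view: response rate per arm hides the signal.
overall = df.groupby("arm")["responded"].mean()

# Subgroup view: response rate per (arm, subgroup) cell surfaces it.
by_subgroup = df.groupby(["arm", "subgroup"])["responded"].mean()

print(overall)       # active 0.5 vs placebo ~0.17: looks like a weak drug
print(by_subgroup)   # active/A = 1.0: the subgroup effect
```

With validated data in hand, this is a seconds-long query; the point of the anecdote is that without direct database access, the same tabulation took four weeks and $25,000 of someone else's time.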

Ram Yalamanchili (17:11.982)

I see. A lot of interesting things there to unpack. I think I've got a couple of things here. First, there seems to be this notion that I'm going to run the trial to a certain extent, and then I do an analysis, and I get some insight, and then we keep moving. But even when you do that, you somehow found out at the very end that it failed. And then you had to do some kind of a subgroup analysis, and then you found out, well, this will actually work in this particular case.

Houman David Hemmati (17:41.742)

Yes.

Ram Yalamanchili (17:42.146)

Why hasn't that happened earlier? What contributed, or what sort of limitations exist today, where there is no course-correction mechanism where you could, at a much higher frequency, figure out: could I have done something different? Could I have made an amendment in the protocol to be able to do this? Are those the type of things which you see, like a lack of capability in the current infrastructure? Maybe talk more about that.

Houman David Hemmati (18:10.373)

Yeah, and it comes back to having real-time access to data, right? One issue, of course, is that in a trial that is randomized and double-blinded or double-masked, you can't have access to unmasked data, right? So you're limited in what you can do based on that. And you really shouldn't, as a sponsor, have access to that data, because it allows you the opportunity

to change the course of how things are done in a way that is unfair, right, and creates invalid trial data. However, at the same time, there are opportunities to look at what's coming in, even if it is masked, and say: you know what, we're seeing that there are no responders, regardless of whether they may or may not be on active, right? They may be placebo or active. As a matter of fact, no one's responding who meets a certain set of criteria,

whereas there are others where, you know, we see half of them responding, or a third responding, or something better. Guess what? Maybe we shouldn't have enrolled anyone who has a certain set of criteria, whatever it may be for that particular disease indication. That allows a sponsor to get ahead of it and perhaps tweak things in order to avoid futile treatment of people in the trial for whom this may not work. That benefits potential subjects in the trial, who were

otherwise being exposed to only risk with no potential benefit. And it benefits a sponsor by getting things to move much more quickly, increasing the probability of success, increasing the probability of an approval down the line. And so it's a win-win, I think, for everybody, but until you have the data and are able to look at it, you may not know. The same thing, by the way, applies to adverse events as well, right? If you have a complete, real-time picture, as someone who's done medical monitoring and has been responsible for signing off on the

adverse events in clinical trials and actually monitoring for things that are serious, that may even be unexpected, but at the very least be severe enough to warrant further investigation. If you don't have that real-time access, there may be delays. Sometimes this is up to the site to proactively remember to notify the CRO, then for the CRO to proactively remember to notify the sponsor. These delays can actually cause problems

Houman David Hemmati (20:33.902)

for the subjects in the trial, right? Not forgetting about even the drug itself, you need to know about that. As someone who's done medical monitoring, I want to have immediate visibility into those issues without having to wait for that human interface. And again, you don't want that delay. And I've seen those delays occur. And in many indications, it's not that big of a deal. But in some indications, it's a life-or-death issue. And you do need to know those sorts of things as soon as they occur.

Ram Yalamanchili (21:00.248)

Makes sense. I know you mentioned delays, and time is another word; everything you unpacked, I think, clearly shows there is this urgency to access data. I get that. Then there's this cost equation, but then there's also the time equation, right? I think you want to move at a certain pace. And I hear very often, you know, that recruitment is the biggest challenge for trials to sort of go where they need to go. And

Houman David Hemmati (21:15.149)

Yes.

Ram Yalamanchili (21:27.406)

I think it also comes back to how fast your sites are activated, how great your sites are, the whole works, right? So it's a multivariate problem here. So my question for you is, how do you figure out how many sites you can activate and what is the ideal number? Because in a theoretical situation, if you had infinite scale and infinite number of sites which can be added, maybe the recruitment problem can be solved. Is that a good way to think about it? How do you think about it?

Houman David Hemmati (21:55.297)

Yeah, recruitment comes down to a few things, right? There's site number: how many do you have? There's site quality: you can have infinite sites, but if they're not good sites, or if they don't understand the protocol, or if they're deprioritizing it, or if there's a problem somewhere else in that equation, then it's not going to help you. And then there's also the enrollment criteria, right? Even the very best sites cannot enroll an unenrollable trial,

where you have inclusion or exclusion criteria or both that are just not reasonable. And there are some indications, newer indications especially, right? Where you're in uncharted territory and it's not until you attempt to enroll the trial that you find out that maybe your enrollment criteria were a little bit too narrow and restrictive in such a way that you're just not getting any subjects. I have seen that. I've experienced that numerous times in trials where we went into uncharted territory and

you know, and waited and waited and waited. I'm in the middle of one trial, so I can't talk about the details of it, but there was one enrollment criterion that caused enrollment to be so incredibly slow, and it's for a rare disease, so that made it far worse: we were getting one subject in every several months. It wasn't until we did a manual review of everything that we were able to finally figure out what

was causing all these screen fails. The moment we went and fixed that, did a small amendment that didn't impact the trial in any way in terms of safety or efficacy assessments, it was just a guess that we got wrong, all of a sudden enrollment picked up much more quickly, and it allowed the company to move forward without having to do a massive raise or abandon the program, which could have otherwise happened. I've had many other examples of this as well, where that real-time access

is helpful, but delays in that real-time access caused delays in the ability to detect those issues, sometimes to the point where they could have been devastating to the company, to the program, or otherwise.
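
(For illustration: surfacing the one criterion driving screen fails, as in the rare-disease trial Dr. Hemmati describes, is a simple tally once the screen-fail reasons are queryable. The criterion codes and counts below are invented for the sketch.)

```python
from collections import Counter

# Hypothetical screen-fail log: each record lists the criteria a
# candidate failed. Criterion IDs here are made up for illustration.
screen_fails = [
    ["IC-04"],            # failed only the overly narrow criterion
    ["IC-04", "EX-02"],
    ["IC-04"],
    ["EX-07"],
    ["IC-04"],
]

# Tally how often each criterion caused a screen fail.
counts = Counter(c for record in screen_fails for c in record)

# The dominant offender is the amendment candidate.
worst, n = counts.most_common(1)[0]
print(f"{worst} caused {n} of {len(screen_fails)} screen fails")
```

The manual review that eventually found the problem did the same thing by hand, months later; with real-time access, the dominant criterion shows up as soon as the first handful of screen fails is logged.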

Ram Yalamanchili (24:03.936)

Interesting. By the way, this particular scenario I've heard many, many folks tell me, especially in the clinical development world, right? And would you say this comes back to the earlier discussion of not having access to the data or at least the people to do the analysis or being able to ask the questions which you wanted to ask as soon as possible? Is this sort of related to that? Is that why you waited as long as you waited before the amendment had to be made? And it sounds like that sort of gave you a...

Houman David Hemmati (24:10.476)

Yes.

Houman David Hemmati (24:29.867)

Yes.

Ram Yalamanchili (24:33.336)

pretty quick activation, right? So it was like a step function change when this particular change was made, which is fascinating.

Houman David Hemmati (24:38.005)

Correct. Yeah. I mean, look, in a lot of trials, by the way, the sponsor doesn't have visibility into things that are happening until that weekly meeting, right? And so there are a lot of things that happen between weekly meetings, and you don't necessarily want to wait five, six or seven days to know about something. And sometimes things aren't even available to the CRO until a week later. And so now you've got a two-week or three-week delay to finding out important information.

That's unacceptable in many cases. And remember also, there are limited people, and those limited people have many things on their list, and they have limited bandwidth because of that, and limited priorities. And obviously their priority is going to be executing the trial; it's not necessarily everything else that the sponsor is worried about. So I think it's really not about adding people or replacing people. It's about taking the people who we already have and allowing them

to really work faster, do more things with the time that they have and allow all of the priorities to be met at the very same time. And I think it's possible with shifting how we approach technology and how we really look at conducting trials to begin with.

Ram Yalamanchili (25:51.586)

Yeah. And I think I also realized something interesting from what you're saying. There is this notion that we're capturing data in the EDC, which it obviously does, but there's also so much it's not capturing, and that is knowledge which is being built up in your team, whoever's working on the trial. And the only way to query that knowledge is to ask them, in the midst of many other things and priorities they already have. So you've got this contextualized data, which is

Houman David Hemmati (26:10.518)

Yes.

Houman David Hemmati (26:17.568)

Yes.

Ram Yalamanchili (26:21.506)

partly in the EDC, and the rest of it is in the ether, so to say, in the team knowledge. And one is potentially easier to query, the other is not, because that's like generalized context, right? And so it's a very interesting way to think about this problem, because I see a lot of parallels in sort of how...

Houman David Hemmati (26:32.683)

Yes.

Ram Yalamanchili (26:44.246)

I see this evolving, including what we're working on and what we've built, and how our customers are using some of the tooling and the capabilities we have. So let me switch to the next point here, right? I come from the view that, as you've said, there is going to be a sort of shift in how we prioritize things and what we do and what we don't do, right?

Houman David Hemmati (27:10.238)

Mm-hmm.

Ram Yalamanchili (27:11.17)

That's just natural from a productivity standpoint and the evolution of using technology in everybody's workflow and life cycle. And in this case, I think for the very first time perhaps in history, what we've got is AI which is incredibly smart. We have AIs that have great capabilities, not only around logical problem solving, but also reasoning now, task planning,

some concepts of memorization, short-term memory, long-term memory. So there are various things which are now coming in, and capability which extends beyond just asking a question and getting an answer back, where ChatGPT was, or some of these information-retrieval-style problems that are being solved with ChatGPT, right? And I think all of us have probably seen it: where we want to draft a nice email, you can ask ChatGPT, and it gives that back to you.

Houman David Hemmati (27:50.655)

Yes.

Ram Yalamanchili (28:05.624)

That doesn't necessarily remove the work of actually me answering 500 emails. It's just solving a part of that problem. And I think the capabilities of AI being able to do great reasoning and great task planning will get us into that next level of unlock, where you can say, hey, go do this work for me, and I will basically provide oversight, right? So we are coming into this domain of what we call AI teammates. So these are

Houman David Hemmati (28:12.149)

Yes.

Ram Yalamanchili (28:34.922)

not people; these are AIs who have access to certain tools. So in our case, we have access to communication tools, phones, emails. The AI teammate has access to these tools. They have access to a computer where they can work on a browser. They have access to a mouse and a keyboard. So they have tools on which we can now teach skills. And then they can be part of your team. You train them. You sort of get them to a place of competence where they're able to do certain workflows.

I find it interesting the way you're describing how, you know, there's siloed knowledge: some of it is in the EDC, some is in the team. And now I feel like, you know, is there a world where your AI teammates can absorb that knowledge, which previously was not being captured elsewhere? And then you can ask the question that way, right? Or you can ask them to do certain things about it. Like, if this were the case and this were how we were

Houman David Hemmati (29:10.335)

Mm-hmm.

Ram Yalamanchili (29:32.408)

progressing, at least this is my view. Maybe what I should say is: do you see that happening? Do you see this sort of thing as too much science fiction, or do you feel like we are here, or at least we're seeing glimpses of it, and this is very quickly evolving? Where do you stand?

Houman David Hemmati (29:48.68)

Yeah, I think we're not quite there yet, 100%, obviously, because we don't see this employed in every trial, let alone even a major subset of trials. But we are now technologically at that place where I think we are at a major inflection point, where we can actually have AI tools that have visibility into everything, even AI tools that attend Zoom meetings, for example, right, and are absorbing everything that's coming in

in real time and in its totality, let alone having access to the database and to so many other things that may not have made it into the database but are relevant, as well as other pieces of information that are coming through various sources of communication. All of that said, I think it's quite valuable, and it gives us an opportunity to do things and see things that we have not been able to do before, with a level of timing, cost and scale

that we've never envisioned before. And I think what that's gonna allow us to do, it'll unlock opportunities to run trials far more effectively, efficiently, quickly, safely, and with a higher probability of success. And ultimately, those are the things that any clinical trial needs, right? There is no reason why the operational aspects of a very simple 30-patient clinical trial, one of which I completed a year or so ago, should

feel like a phase three trial. It just shouldn't. And there's no reason why conducting a phase three trial, no matter how complex it is, but especially in an easy field like an eye-drop trial that I do in ophthalmology, should feel like you're launching a rocket to the moon. I mean, these are things that should be pretty basic, but we've added too much complexity to it because of the processes that are required

to maintain quality standards in clinical trials. But I think by introducing AI, by introducing automation, and by really simply modernizing how things are done, we have the opportunity to really reduce the cost and the time, and that benefits not just the sponsor; it benefits ultimately patients. It gets drugs to market quicker and cheaper, and ultimately that's what everybody wants.

Ram Yalamanchili (32:06.626)

Yeah, no, this is where the exciting effects of implementing AI into this process are, right? One other area which I've thought through and sort of worked on, or at least saw through my experience at Lexand, my previous company, and at Tilda with our current customers and opportunities: see, I think change management is a scary word for many. And one of the advantages of sort of...

Houman David Hemmati (32:25.725)

Mm-hmm.

Ram Yalamanchili (32:34.444)

working with an AI teammate is you don't necessarily need to change a lot of your process. You can sort of say: this process involves a certain amount of overhead, I don't want to do it, I'm just going to give it to my AI teammate. An example of this would be: we work with many sites now, and there is an expectation from all their sponsors to first capture source. Maybe it's on paper, maybe it's on some system, and then move that data into the EDC layer.

Houman David Hemmati (32:47.186)

Yes.

Ram Yalamanchili (33:04.246)

So you're essentially doing double data entry, right? You're entering it into your source first, and then you're entering it into the EDC. And from a site's perspective, they've got this problem where, okay, that means if I have 20 trials running actively, I've got potentially 20 different systems I need to work with. And, you know, I have 20 different EDC logins at least, right? There are many EDCs: one trial might be on platform A and another company could be on platform B. So there's a diversity of systems here. And

Houman David Hemmati (33:04.327)

Yes.

Houman David Hemmati (33:23.291)

Yeah, it's true.

Ram Yalamanchili (33:33.866)

what we've come in and said is: well, you should really just standardize on what you do well, which is source entry, which is anyway what you do. Every clinic has the capability to enter this data into an EMR or some kind of paper chart or some other system. So you do that. But the after part, once you do that, moving it into another system, managing some form of your queries, that's something which an AI teammate at this point is fairly capable of, and can do it in

Houman David Hemmati (33:47.548)

Yes.

Ram Yalamanchili (34:01.742)

a high-quality, consistent, reliable way. So the argument here is: why do you want to spend half your working time doing data entry into the EDC, when you could basically focus on patient care, which is a net-net benefit for everybody, right? The whole ecosystem wins in this situation, but we've never changed the process itself. We didn't change-manage anything here, because we just handed that work to somebody else, and this somebody else just happens to be a very powerful, intelligent AI.

So that sort of thing, right? And a similar sort of experience with some of our sponsors, and several opportunities lately which we're seeing: I've got a phase three trial, I need to enable 700 sites in eight months from FPI, you know, across X number of countries. How do I do it? I need to staff up temporarily to do all of this work and then, you know, kind of manage them out if I don't have a large project of that sort.

Houman David Hemmati (34:33.469)

That is correct.

Ram Yalamanchili (35:01.546)

And to me, that sort of thing is really exciting, because these are processes where you're not asking to change-manage. Don't change your process: if you really just want to email every single site and follow up on documents in that manner, that's not a problem at all. You can basically have an AI teammate which will scale immediately to do this sort of work, rather than doing every single part of this work with a person, right? And what I find is also that sometimes

it's not even about building the team. It's really hard to build these teams, because there's not enough capacity out there to enable these large trials on the kind of timeline you're looking at. And I think you have resource competition. That's why running a phase one feels like a phase three: because you're competing with the same type of resource constraints as everybody else. And so, you know, it's sort of a persistent problem, I think, which all comes down, to me anyway, to the lack of a really great supply of

Houman David Hemmati (35:51.707)

Yes.

Ram Yalamanchili (36:00.918)

individual talent in this industry, and we're not growing that fast enough. And that sort of has a knock-on effect, in my opinion anyway, on the number of shots we have on goal, right? The number of biotech companies formed, the number of opportunities to bring new drugs into clinical development. And then of course the amount of capital available: because if you had a certain amount of capital, and each of your trials costs a certain amount of money, you divide that, and that's the N you have in terms of how many opportunities.

And so I sort of see this interesting opportunity where for the first time we're saying, if you don't want to modernize your process or rather if you don't want to think about it that way, don't do that. But you can have an AI teammate come in and sort of work within your framework in this way, right? Which is sort of an interesting paradigm I'm looking at. Do you sort of see that? I mean, I guess like...

I would ask you, how does that fit into your framework? Does that make sense? Do you have any concerns? Anything where you feel this is fundamentally not viable in some ways, I guess.

Houman David Hemmati (37:14.693)

I'll tell you, I think it's gonna require people to become comfortable with it. And different people are comfortable to a different degree. I'm a person who already likes to, you know, take de-identified data sometimes and just do a quick analysis using, you know, Grok or ChatGPT, just to give myself some confidence about things before I get an official analysis

with a limited amount of information, just to see: how does it do? And I've always been very pleasantly surprised. And so I'm comfortable with the technology, only because I've played around with it even outside of the formal setting within trials, right? I know, on the other hand, there may be people who are hesitant only because of a sense of mistrust, not knowing what's going on in the black box, right? And what it's gonna take is for there to be real-world examples.

First, of where this sort of approach, having the virtual teammates or the AI teammates, succeeds and benefits trials without causing problems. And then second, it's going to require many people, including people who are a little skeptical or hesitant, to be engaging with it themselves, perhaps in a way that's redundant even, right? To have it alongside additional traditional resources for trials, but not instead of,

To see, okay, how is this helping me? And if I didn't have all these additional people, how would that have helped? Or how does that AI tool allow the people I do have to focus on what they really need to be focused on instead of constantly answering, you know, my queries or questions about things. Ultimately it will happen because once we see real world examples of trials becoming far quicker and far more efficient, naturally sponsors will

want to go in that direction. CROs will want to go in that direction. And frankly, the sites will want to take that approach, because it's going to make their lives way easier, right? As a site, you don't care about the mechanics of the trial. Sites want to take care of patients, plain and simple. And the easier it can be for them to take their existing clinical framework and bring patients in for clinical trials without changing how their clinic is run, the better. And so for them,

Houman David Hemmati (39:41.122)

removing any of those heavy-lifting steps, or making them far easier to do, the better. And so I think, again, this poses a win-win for everyone. And it's really just a matter of rolling it out in a way that makes people comfortable and demonstrates both the benefits and the lower level of risk.

Ram Yalamanchili (40:03.596)

Yeah, understandable. I think the comfort and trust are going to come with more data and, frankly, just being able to prove that these are actually competent and able to do the work. But I think that just comes with time. I think there are going to be innovators, there are going to be visionaries who are able to see it and say, okay, I have some value here, let's jump in and do something with it. And sort of, I think, what you're talking about is the industry will

Houman David Hemmati (40:20.271)

Yes.

Ram Yalamanchili (40:33.12)

essentially evolve, because there is essentially a need for it, and we have all these challenges we've spoken about. So my last question for today is: you know, thinking about all your journeys so far, do you feel like the next five years are going to be sort of similar, or do you feel like there's something rumbling, and there might be a change for the better, or, I don't know, however you look at it? I guess I'm asking: what do you think?

Houman David Hemmati (41:02.841)

I think what we're probably going to see in the world of trials is what we're seeing even in the government right now, which is a focus on eliminating any kind of waste, but to do so thoughtfully, obviously, and to make things more efficient so we can focus our limited resources on doing what we're supposed to do very, very well and very quickly and efficiently, right? In a way that benefits everyone. So my vision for what's going to happen

with clinical trials, especially with the involvement of AI, is that we're going to see a lot more self-service, a lot fewer delays, and a lot more confidence in the quality of the trial and the quality of the data that are coming out, and in the ability to execute and get the results that we're hoping to see, and that the results we see are actually reflective of the performance of the product, of the drug. And we're headed there, for sure. Right now, we're at this point in the evolution

of clinical trial conduct that we've maximized existing traditional technologies. And it's time to now take a shift to new technologies. And that happened when we shifted from pure paper charts to involving electronic databases and doing simple things like Excel spreadsheets and eventually to more advanced systems. But now we're at that inflection point that moves far beyond that to a new level where we're able to actually make sense.

of the information in real time rather than having to just simply rely on people to interpret it or act on it.

Ram Yalamanchili (42:35.554)

Makes sense. One quick follow-up on that: what's your view from a regulatory perspective? Have you had an opportunity to think about that, or to speak to anybody on that?

Houman David Hemmati (42:44.106)

Yeah, you know, I think from a regulatory perspective, we're again in uncharted territory, and the FDA, especially the modern FDA that's under very new leadership as of yesterday, with Dr. Marty Makary in charge as FDA commissioner, has an opportunity to really modernize how trials are allowed to be run. And I think what they're going to probably

recognize is that this benefits the FDA as well. The more quality data they can get, and even if they have access to some of those tools and are allowed to utilize it themselves to dig into the trials, the better. Imagine being the FDA and having the ability to do trial audits in a way that doesn't force them to send an army of people to each site that they want to audit, but rather allows them to do it remotely, or even if they are in person, send fewer people.

and to do it far more robustly, with greater scalability. That's going to benefit everyone, because now the agency would be able to discover things that they may have missed in the past, while also employing far fewer resources and making it a lot less cumbersome on the sites. Right now, if you're a site and you have the FDA coming, you have to drop everything, sometimes for a matter of days or longer, just to accommodate the agency for that review.

That can all go away if we change how things are done from the regulatory side. Of course, there are going to be regulatory approvals required for some of these things, especially when it comes to sponsor access, and also the ability to use AI-driven analytics for FDA submissions. Right now, the FDA wants to be able to take the raw data and do their own analyses to validate

that the sponsor's analysis was done correctly without any fudging. And if the FDA similarly may have access to these tools, either traditional tools still or the AI tools or both, and they can do side-by-side comparisons, ultimately, I think the FDA will be comfortable as well. But I think that's gonna be a process and it creates opportunities to work in partnership with them.

Ram Yalamanchili (45:01.494)

Understood, yeah. It's fascinating; I think it'll be a fascinating next several years. We're truly in a very interesting time. Well, Dr. Hemmati, it's been a pleasure. I appreciate you spending the time discussing this. I had a lot of fun talking to you about this. So thank you.

Houman David Hemmati (45:19.732)

Yeah, Ram, thank you for having me. Always happy to join you again.


Stay current on our AI teammates. Sign up now.

© 2025, Tilda Research. All rights reserved.