Andrew Pucker: AI Teammates Modernize Clinical Research

In this episode, Tilda CEO Ram Yalamanchili sits down with Dr. Andrew Pucker, VP of Clinical Development at Mintra Health, to explore how modern, AI-driven platforms are transforming the execution of clinical trials. Dr. Pucker, an optometrist and myopia researcher with over 15 years of experience, shares how Mintra and Tilda have partnered to streamline endpoint collection, reduce site burden, and improve data quality through integrated systems. They highlight challenges with traditional methods—manual timing, paper workflows, siloed systems—and explain how guided modules within TildaSense are improving standardization and reducing protocol deviations across multi-site trials. “We’ve been doing tear breakup time the same way since the 1970s,” says Pucker. “Now, we’re using real-time digital validation, training embedded in the same tools used for data collection, and automation to catch errors before they happen.” This is a must-watch for sponsors, CRO leaders, and clinical development teams looking to design faster, smarter, more reliable trials. Watch the full conversation now and see what the future of AI-powered research looks like in practice.

Transcript

41 min

Andrew Pucker, OD, PhD (01:55.118)

Hi, Ram. Thanks for having me. For everyone who doesn't know me, I'm Andrew Pucker. I'm currently the vice president of clinical development at Mintra Health. Really, my job is to be a research coach. So we bring clients in, we help them develop their protocols. And then as we go along, we try to make their study as efficient as possible. And that's kind of how I've come to work with Tilda.

Ram Yalamanchili (02:20.811)

Great. And I'll also add that Mintra is a really great set of individuals, very tech forward, I would say, from a CRO perspective. Maybe to begin with, one of my first questions will be, what makes you and your team so, I would say, forward thinking on your adoption cycle, and why is it important?

Andrew Pucker, OD, PhD (02:45.566)

I think the people at Mintra really thrive because we've all been in the field for a long time. So for example, I am an optometrist, I have a PhD in myopia development, and I've been doing clinical research for roughly 15 years. And that is kind of the standard at Mintra. So we've seen a lot and we know what we can do to make things better. So whenever we get into a protocol, we try to brainstorm the best way to make things efficient, effective, error-free.

And with that, technology is often the answer. So that's the way I approach it, and I think the way everyone at Mintra approaches a study.

Ram Yalamanchili (03:24.49)

Got it. All right. And in terms of what we've been working on lately, I would like to cover a few things around that topic, specifically how you're approaching endpoint development and some of the tooling related to collecting these endpoints. So maybe from an audience perspective, could you tell us a little bit about what your motivations are in that aspect and some of the things which we've been working on.

Andrew Pucker, OD, PhD (03:55.886)

Sure. One thing I've been interested in for at least a decade now is developing endpoints. In particular, things like corneal fluorescein staining, which is a measure of corneal irritation that's often an endpoint in dry eye studies, or tear breakup time. That's another endpoint in dry eye studies that essentially is determining how stable a person's tears are. If they're less stable, they're going to break up quicker, and then they're going to be more likely to have dry eye.

The issue with these tests is that they're really variable from the doctor's perspective. Typically, as the doctor, you look at the patient's eye and you're asked to make a judgment. Is this, for example, a grade one, two, or three for staining? Is it a breakup of five seconds or six seconds for the tear breakup time? And with that, you need to bring in some standardization, because every doctor is going to be a little different.

Andrew Pucker, OD, PhD (04:53.388)

Right. So we want to train our doctors to do it exactly the same. It's not so big a deal in clinical practice when it's just your everyday patient. But when you're doing, say, a 30-site study, every doctor is doing it a little differently in their regular day-to-day practice. We need to train them to do it the study way, and with that, they hopefully bring down the variability. If you can lower your variability, you're going to be more likely to get a significant difference between your groups,

and more likely to win on your endpoints in the end and get your product approved. So everything we can do in that area to make it more consistent, make it easier for the doctor, less likely to make transposition errors, those sorts of things, I think can go a long way. And with Tilda, we've really run with that. What we've

done as a partnership between Mintra and Tilda is build these training programs around those specific outcomes. And we're doing others, but I'm just using that as an example. What we're doing is a training within TildaSense, and what that does is have the doctor, or the study coordinator, take an online training before they get into the study. This training will effectively walk them through

the whole procedure step by step, and then it'll do a knowledge check. And if they don't do it quite right, then we can see that, because everything is transparent in the TildaSense system. So we can see, did they get these questions right or wrong? Are they proficient? If they're not proficient, we can go and do a tune-up training with them. But once they successfully complete that, then they can go and do their trial. They can see the subjects.

And we can still have a pulse on how they're doing. Specifically, we've built in a real-time measure of how they're doing the tests. So with one of the tests, tear breakup time as an example, they need to do things at very specific times. They apply a corneal fluorescein stain, which is something that just mixes in with the tears. We start a timer, and then they're not allowed to start grading until one minute has passed.

Andrew Pucker, OD, PhD (07:09.792)

So the machine will say, hey, it's been a minute. You can start grading. And then they have up to five minutes total to complete this test. So it'll say, hey, you're within the window or not. And it can, in real time, track the tear breakup time by just having the doctor, or their assistant potentially, turn the stopwatch on and off. And it's collecting the data within the system.
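
To make those rules concrete for readers, here is a minimal sketch of a guided timing window of the kind described above, assuming the one-minute and five-minute bounds; the class name, constants, and messages are illustrative assumptions, not Tilda's actual API.

```python
from time import monotonic

# Minimal sketch of the timing rules described above: grading may not begin
# until one minute after the fluorescein stain is applied, and the whole
# test must be completed within five minutes. Names are illustrative only.
GRADING_OPENS_S = 60    # grading window opens at 1 minute
GRADING_CLOSES_S = 300  # test must be complete by 5 minutes

class TearBreakupTimer:
    def __init__(self) -> None:
        self.stain_applied_at: float | None = None

    def apply_stain(self) -> None:
        """Start the clock when the stain is instilled."""
        self.stain_applied_at = monotonic()

    def grading_status(self) -> str:
        """Tell the examiner, in real time, whether grading is allowed."""
        if self.stain_applied_at is None:
            return "DEVIATION: timer was never started"
        elapsed = monotonic() - self.stain_applied_at
        if elapsed < GRADING_OPENS_S:
            return f"WAIT: grading opens in {GRADING_OPENS_S - elapsed:.0f}s"
        if elapsed <= GRADING_CLOSES_S:
            return "OK: within the grading window"
        return "DEVIATION: the 5-minute window has passed"
```

The point of the sketch is that the check runs while the assessment happens, rather than being reconstructed from paper afterwards.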

Ram Yalamanchili (07:10.029)

So, we're using the same method that we did for our creative work. And as we say, as we have defined it, we're to be using the same method. So, we're going be using the same that done. And we're be using the same method that we've And we're be the same method we've already And we're going going same method we've done. And to we're going to method we've And same the the same And the same

Andrew Pucker, OD, PhD (07:37.518)

Typically, as the doctor, you may be at the lamp with your cell phone, using that as a timer, and you can see how that might lead to errors, right? They may have transposition errors, et cetera, when putting that number into the system. But the system is collecting the data directly in this instance, so it's less likely to have all those transposition errors, and just less likely for them to get it wrong.

Those are just some examples. I think you could do this with a whole bunch of different outcomes. And kind of the cool thing with working with Tilda is it's very efficient and cost-effective. Some of these products could probably be hundreds of thousands of dollars if you go to other vendors, but this is just part of their normal offering. So I love that too.

Ram Yalamanchili (08:06.846)

So, you know, what we've kind of described, or at least the way I look at it, is it's a very guided way of

walking sites through the process of endpoint collection. And I also noticed that in the past, you've looked at other systems where they're disjointed, right? You have training, which is separated out from the actual collection. So you have two different ways of training and collecting, and that can lead to some inconsistency as well. But the way we've developed this specific set of endpoint modules, it's basically:

this is how you collect it, and you use the same exact module to train as well. And then the trainer is essentially assessing you based on how you're using the module, in terms of how accurate you are in the process. What I'd like to know is, what was your past experience like in similar use cases? What is being done currently in the industry, and what are the types of challenges you've faced in the past, where this sort of an integrated system or workflow does not exist?

Andrew Pucker, OD, PhD (09:35.342)

I think the most common way to do this sort of training, and I'll use corneal fluorescein staining as an example, is that you'll have this investigator meeting before a study starts. So all the investigators, if you can get them to attend, hopefully they're not on vacation. Hopefully they all attend; often they don't. They will have to sit through this four-hour session, which is important, but it can get long. And as we know, our attention can wander within that four hours.

At some point in that meeting, the expert goes over that specific method. It could be staining, like I said, or something totally different, but you just give them a PowerPoint lecture like you would in any academic classroom, and then you hope that they do it right. And maybe there's some sort of multiple choice test or something afterwards. But that's kind of more your typical academic lecture learning situation, whereas with the TildaSense system,

Andrew Pucker, OD, PhD (10:36.46)

you have them in the system. It's the same system that you're using for everything else in the study. And you can know if they're actually doing it, because you can watch the amount of time they're taking on it. You can see if they got the questions right or wrong, all in real time. I think it just makes it more modernized, and it's better feedback for the sponsors. You can really know if these people are paying attention or not, whereas it's pretty hard to know if you're on a video call.

If they're walking their dog, who knows what they could be doing.

Ram Yalamanchili (11:08.602)

Yeah, so that's essentially talking about the training process. But what about when the assessment is actually happening, when you have a subject in front of you, when a patient is in front of you, right?

What have you seen in the past? I'd like to specifically understand some of the challenges which might have been presented, and the implications of having that type of an inconsistency in the past. So yeah, anything you can add to shed more light on that?

I guess my question is, in the past, given the lack of an integrated system or guided workflows, what we have done is essentially trained and then given worksheets, and essentially expected the site staff to do this on a consistent basis. And also the study has 30 different sites, and maybe 60 different people who are probably doing the assessments,

Ram Yalamanchili (12:15.772)

two each at a site, possibly. And you're dealing with this sort of variability, and you're hoping that everybody's following the protocol. But I'm curious: is it basically people just writing down, at a certain time, this is what I've done, and at another time, this is what I've done, and you're just making sure that those times all add up when you monitor? Is that how it's done? How was this done before we existed, and before you existed, in terms of developing this kind of capability?

Andrew Pucker, OD, PhD (12:43.4)

I think that historically, what everyone has done is essentially their typical clinical workflow. They're seeing a patient and they're probably writing down the data on paper. So with that, there's very little visibility from the CRO or the sponsor side on how they're really doing this. We can kind of look at their data to make sure it makes sense on the back end. With paper, you know, they put it on paper, then they put it in their

data capture system, and then we can see if it makes sense, but there's really no gatekeeper checking whether they're doing the times right. Right? So with the TildaSense system, we can know that they turned the timer on, yes. And then did they start their grading at the specific time? And did they have all their data in the system before that timer went off? So with that, I think the quality

has much better tracking. We know what they're doing, and ideally when they're doing it.
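
As a rough illustration of those gatekeeper checks, here is a sketch that audits the timestamps a guided module could capture, flagging deviations that paper source can't surface. The record fields are assumptions, and the one-minute and five-minute thresholds carry over from the tear breakup time example above; none of this is the actual TildaSense schema.

```python
from dataclasses import dataclass

@dataclass
class TbtRecord:
    """Timestamps (seconds after stain application) a guided module might log."""
    timer_started: bool
    grading_started_s: float
    data_entered_s: float

def audit(record: TbtRecord) -> list[str]:
    """Flag the timing deviations described above."""
    issues = []
    if not record.timer_started:
        issues.append("timer never started")
    if record.grading_started_s < 60:
        issues.append("grading began before the 1-minute mark")
    if record.data_entered_s > 300:
        issues.append("data entered after the 5-minute window closed")
    return issues

# Example: grading started too early, but data was entered in time.
print(audit(TbtRecord(timer_started=True, grading_started_s=45, data_entered_s=240)))
# ['grading began before the 1-minute mark']
```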

Ram Yalamanchili (13:48.533)

Right. So it's essentially like a real-time monitoring process here, in other words, right? And that also leads to better consistency, because it's real-time. You get a

direct alert immediately if something was not correctly or validly measured. And that leads to better quality. That's essentially how you're thinking about it. Have you seen studies where this has become a problem? Is this something which you do see, or have seen in the field, where eventually it had an adverse effect on the trial itself?

Andrew Pucker, OD, PhD (14:22.54)

I think from a workload perspective, I see issues commonly. So it's not uncommon for some sites, I don't want to say all sites, but some sites might have hundreds of protocol deviations that need to be mitigated before you do your database lock, that sort of thing. So it leads to a lot of stress on the whole study team, trying to resolve these transposition errors or those sorts of things. I think that poor training in general is a

reason why sponsors don't get the data they think they're going to get. So if there's a lot of variability in your measure, that's much more likely to result in a failure to achieve your endpoints. So having this additional rigor, I think, is going to make it more likely that everyone succeeds too.

Ram Yalamanchili (15:17.178)

Right. And in terms of having access to this kind of data, high-frequency, high-fidelity data, how does that help either a sponsor or a CRO in terms of day-to-day trial execution, or even overall program success, right? Are there ways where you can see this helping them, and what can it do?

Andrew Pucker, OD, PhD (15:41.496)

I think if you start with this sort of thing from the beginning of your program development, so your first-in-human study, you can use the data collected along the way to optimize future studies. So we might learn that it's hard for the doctors to get a specific test done in a given amount of time. With that information, we can take that to our phase two or phase three study

and better design our protocols so that they are efficient, effective, and error-free. So I think that is really interesting from just a protocol development standpoint, and probably a success factor.

Ram Yalamanchili (16:28.457)

And that's something which is a unique advantage in this model, because otherwise you're not going to get this kind of information unless you really process every single,

I don't know, every single data point collected on paper for timing, and do a whole bunch of analytical work on top of it, before you get this information, right? And in this sort of case, you have much higher frequency. You're probably seeing data which would otherwise be hard to see, and that leads to better protocol design as you work through the next phases of the trial. Is that fair?

Andrew Pucker, OD, PhD (17:03.116)

Yeah, and it's kind of standard to not have your data entered into an EDC for a day or two or three. This is instant entry into your EDC. So it's just faster, and you have way more visibility into what's actually happening.

Ram Yalamanchili (17:14.52)

Hmm.

Ram Yalamanchili (17:21.219)

That's right. So that's an important aspect we should cover, because what I've also seen is sponsors and CROs typically give sites an EDC setup, which is where the data capture happens. And there are probably five to ten other systems which they're also giving them, right? So there are training systems, there are ePROs, there are IRTs. The EDC and

training might not talk to each other; the IRT might talk to the EDC, hopefully. So there are these siloed approaches to building technology and bringing it into a study. And our view has been that the lack of integration is what's causing quite a bit of burden on the site at the moment, and leads to several inefficiencies, and even affects whether the trial is a success, in terms of time and cost, better design work, or better insights into the next study.

So one of the principles we've taken at Tilda is producing one platform which can integrate all the way from a CTMS, EDC, TMF, source, ePRO, your trainings, your IRT. So it's really an end-to-end platform which has the capability to manage a trial in its entirety, and on top of that, building automation or AI capabilities which are deeply integrated across the stack. So this provided

the type of capabilities, I think, when we first started talking about how do we build certain endpoints into guided workflows on our platform. How can we build it so that the site is directly entering the data, but then part of it can flow directly into your EDC? I think having that kind of an architectural advantage in our platform really helped us figure out how we can start to build these kinds of endpoints, and generally start to design much better trials for the various indications where

these sorts of endpoints are being used, right? But particularly, one thing I wanted to touch on is this particular assessment. If you're talking about the fluorescein staining and tear breakup time, that's a very time-based assessment. You have to do certain assessments within a certain specified amount of time, and if you don't, you can really meaningfully change the outcome of the endpoint in this particular study. And...

Ram Yalamanchili (19:46.412)

That was very fascinating to me. The ranges are not very wide. We're talking about minutes, to the second, sometimes. And I found it really interesting that we essentially, as an industry, did not have a really strong set of governance or real-time validation capabilities around how you're collecting this data. A lot of it is just dependent on whatever you write down on paper;

you go with whatever is said there, right? And I think there's something to be said in terms of the overall industry, where this sort of platform can add a lot of value in terms of designing better study protocols, better execution of these protocols, and maybe ultimately finding the right place for the drugs we're developing, right? So really trying to get them to be optimal in terms of coming into the market.

Anything else you want to add? I mean, I'm curious about how you see the next several years to come from Mintra's perspective, because what I've noticed is you're very active in developing and creating these sorts of endpoints, and really standing out in terms of executing trials in a way which I've not seen many others do. But I am curious, what do you think is the impact of these sorts of

endpoints and integrated platforms on how you see Mintra and the future of trial operations? I think ophthalmology in this case, but maybe other therapeutic areas as well.

Andrew Pucker, OD, PhD (21:23.662)

Sure. A couple of comments about what you said first, though. With tear breakup time, we've been doing it exactly the same way since probably the late seventies. So it's nice to bring some new technology, innovation, and rigor to that. So I love that we're doing that. I also love that everything is integrated. I hate having 11 passwords that all change at different times, and then I don't know what password is what. So from the site

and CRO standpoint, I think it's great that you can just sign into one platform and do everything. And for our sponsors, it's probably just logistically easier. You don't need to get quotes from five different vendors; it's just one vendor that does everything. So I think it makes it simpler and cheaper, because everything's integrated and, you know, it cuts out some redundancies. Going forward, I think we can keep optimizing these sorts of systems with our partnership, of course. So with

the electronic data capture system, I think we can make it even smarter. One thing that we haven't done yet, but I'd like to do hopefully in the near future, is that if you have a really rigorous set of inclusion criteria for picking, for example, your study eye, or whether you qualify or not, you could have the system just automatically tell you on the spot, as the doctor. You put in your values, and maybe they're captured in real time because you were doing the timer and all the right things,

Andrew Pucker, OD, PhD (22:49.994)

and then you get to this page that takes all the data together for you and says this patient qualifies or not. So I think that could be really, really interesting.

Overall, I think we can keep optimizing it so that things are just cheaper and faster and easier. And, you know, we're never going to have a study get done in a day, but maybe we can maximize the system so much that it allows the sites to see twice as many patients as they would have otherwise. So that's another potential thing; I think you're already kind of doing that on the Tilda side. I think you're making sites more efficient, but I think we can keep running with that.
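
As a sketch of that on-the-spot qualification idea: the system gathers the captured values and reports eligibility immediately. The criteria, thresholds, and field names below are invented for illustration and do not come from any real protocol.

```python
# Hypothetical inclusion criteria; thresholds and field names are invented.
INCLUSION = {
    "tear_breakup_time_s": lambda v: v <= 5.0,   # unstable tear film
    "corneal_staining_grade": lambda v: v >= 2,  # at least grade 2 staining
    "age_years": lambda v: 18 <= v <= 75,
}

def screen(values: dict) -> tuple[bool, list[str]]:
    """Return (qualifies, list of failed or missing criteria)."""
    failures = [name for name, passes in INCLUSION.items()
                if name not in values or not passes(values[name])]
    return (len(failures) == 0, failures)

# Values that might be captured in real time during the visit.
qualifies, failures = screen(
    {"tear_breakup_time_s": 4.2, "corneal_staining_grade": 3, "age_years": 52}
)
print("Qualifies" if qualifies else f"Screen fail: {failures}")  # Qualifies
```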

From a training standpoint, it'd be nice to have some sort of training for every outcome that's important, and I think your platform allows for that. You know, you've got a big team of engineers who, if you give them an idea, they run with it and come back with a product in a week or two, which is just amazing. So we started with some of these dry eye things, but we can expand this to anything that we want. And then,

Ram Yalamanchili (23:54.774)

What are your thoughts, from a sponsor perspective, on the prior experience you had versus some of the recent experience you're seeing with these types of technologies, right? How has that changed the relationship between

Andrew Pucker, OD, PhD (24:02.702)

you know, as a field, we can grow and use this together.

Ram Yalamanchili (24:24.723)

you as a CRO and the sponsor and the value the sponsors are gaining.

Andrew Pucker, OD, PhD (24:31.49)

I think it's been really hard to just integrate everything. There are just so many vendors that you need to have on a successful study; you know, it could be five or more vendors. And if you can cut that down to two instead of five, I think that is a huge, huge win right there. I'm sure there are other things that are gonna

come down the road, but that alone, I think, is a huge, huge win.

Ram Yalamanchili (25:05.407)

Yeah, and that's coming back to this idea of how do you build better-integrated systems which are easier to configure, modify, and deploy. And I think to a certain extent, it's the fact that there's so much automation capability we've baked into our platform. I think nowadays it would be very unusual for you not to ask, how come this product does not have some kind of AI- or automation-based

way to create a workflow, or set up a platform, or some kind of integration, or even answer certain questions, right? So I think that's sort of a trend I do believe is going to be happening across the board, not just with us, but with other technology platforms which are looking into modernizing this sort of workflow. So another aspect of what you touched on, which I think is really interesting, is

I noticed, from a CRO or a sponsor perspective, study startup is an interesting timeframe. There's a lot of work which is done in order to bring up the study and bring up each site and really get ready for a first patient, first visit to happen, from the time you've executed your CTA, let's say, right? And I also noticed that there's quite a bit of time spent on this exact problem of getting systems ready, or getting

everything ready, given the fact that you're constantly also seeing changes in your requirements. So can you maybe talk a little bit more about that part of it? We've gone through some iterations where you say, well, this is what we want, and, you know, the system should be able to configure quickly. And if we didn't have the right tooling and the right ways to do it... I mean, in our case, it's largely an AI-driven, agent-driven approach, but

I'm just curious, have you seen situations where the impact is pretty visible? Maybe it's longer timelines or things like that. I'd like to understand how you've seen that happen in the past.

Andrew Pucker, OD, PhD (27:15.182)

Yeah, I do think that if you don't get on board with AI, you're going to be left in the past. I think that's clear. And with any study, the lead-up to going live with subjects is very, very stressful. A lot of that comes from, one, you have to prepare roughly 10 different major documents. And if you could have

Andrew Pucker, OD, PhD (27:42.602)

an AI system where you feed in the protocol, and it shoots information to all of these 10 different documents and gets it 80% of the way there, that would just save weeks of time, I think. So that's something I think AI could help us with. I think there's going to have to be a human touch on it, at least for many more years, but AI can get it really close, and that will just save valuable hours for

Andrew Pucker, OD, PhD (28:10.04)

the really skilled people, which honestly are in low supply. There are some really smart people out there, but the clinical research world is very, very specialized, and there aren't enough people to do that work. So if we can make their jobs easier, so that maybe they can do two projects and not feel stressed out, instead of doing one project and feeling stressed out, I think that could be a game changer.

There are a lot of documents to manage. I don't even think I know all the documents that are on a study, because we have really skilled clinical leads and project managers who are helping me. I'm more on the medical side of things, but they have to file hundreds of documents. And if you can automate that, that also will make everyone less stressed. You're less likely to lose them, and overall you'll be in better regulatory

Ram Yalamanchili (28:42.819)

Yeah.

Andrew Pucker, OD, PhD (29:04.046)

compliance. And that's all happening at the same time. It could be within like a one-month window, especially if you have a sponsor who wants to make a public announcement that they're going to see their first subject before ARVO or another important meeting. So we're often given timelines that are maybe a little unreasonable, but we go above and beyond to meet them. And if you have AI helping you, I think that is going to make things even easier going forward.

Ram Yalamanchili (29:08.907)

Yeah.

Ram Yalamanchili (29:24.571)

Thank you.

Ram Yalamanchili (29:32.454)

Yeah, and I think, case in point, that's exactly what our regulatory teammates do, right? It's helping you prepare documentation, get it signed off, doing document QC on top of it, and essentially driving the process forward at scale. So you're not bottlenecked from a people-time perspective, but you're still very much involved in the process and the oversight side of things. But

what I've also noticed, or at least would like to touch on, is: do you see something similar in setting up the various systems and change management at the vendor level? How often do you see that happen before the study starts, where requirements are changing, so you have to go back and forth with different vendors, and it takes a certain amount of time to propagate all these requirements into all the systems or all the vendors you're working with, right? How big of a pain point is that?

And have you seen issues arise of that sort?

Andrew Pucker, OD, PhD (30:34.478)

Pretty much every study I've ever been on has had a last-minute change. Maybe we notice that the vendor is doing X, Y, Z. Maybe it's something related to a reading center, as an example, and they feel like they need to optimize the protocol a little bit last minute, because they may not get the protocol until you're further along, right? You kind of are building the study as you go. And then

that actually could spring a change in several other documents. So you might have to change the protocol, and then a manual of procedures, and then maybe a pharmacy manual. Having one change could really throw a wrench in things, so you have to quality-control check all these other documents. And if you had a system that could do most of that for you, that would be amazing.

Ram Yalamanchili (31:20.528)

Yeah, yeah. And what about data management sorts of things? Do you generally see changes which then propagate into your EDC, ePROs, and, you know, edit checks and things like that as well?

Andrew Pucker, OD, PhD (31:33.792)

A lot of protocol changes will then translate into needing to update your source documents and your EDC. So that can cause, you know, a post-production change in your EDC, which again is just another downstream effect of changing the protocol, and it takes a lot of time. It could take days for the team to put in the change, and then have other people double-check it and get sponsor sign-off. So if you can

automate that as well, I think it's really going to be a game changer.
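
To illustrate that downstream fan-out, here is a small sketch of change propagation across study artifacts; the dependency graph is a made-up example, not a real study's document set.

```python
# Made-up dependency graph: each artifact lists what it is derived from.
DEPENDS_ON = {
    "manual_of_procedures": ["protocol"],
    "pharmacy_manual": ["protocol"],
    "source_documents": ["protocol"],
    "edc_edit_checks": ["protocol", "source_documents"],
}

def impacted(changed: str) -> set[str]:
    """Everything that transitively depends on the changed artifact."""
    out: set[str] = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        for artifact, deps in DEPENDS_ON.items():
            if current in deps and artifact not in out:
                out.add(artifact)
                frontier.append(artifact)
    return out

# One protocol amendment flags every downstream document needing QC.
print(sorted(impacted("protocol")))
# ['edc_edit_checks', 'manual_of_procedures', 'pharmacy_manual', 'source_documents']
```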

Ram Yalamanchili (32:08.655)

Right, right. Yeah, and I think that's kind of the fascinating thing. I've certainly seen timelines be thrown around which, from the looks of it, seem very aggressive. But when I start to go into the nitty-gritty detail, I think what's said is very different from what's actually being done. And that gap is wide, I feel, and not appreciated enough, even from a sponsor's perspective, right? Like, I think

you frequently hear what you'd like to hear from service providers and vendors, but in reality, how do I actually get there is a very different question. And it's a complicated problem to solve without modern technologies and automation and AI, and so on and so forth, right? But I think in some ways, having an integrated platform, and having really strong automation built into it, are good ways to go about solving this type of issue.

So...

Andrew Pucker, OD, PhD (33:10.136)

I think the timelines are important, and it'd be nice to sometimes delay things, but you often can't, because maybe the sponsor's timeline is tied to funding. If they don't meet this mark, maybe they're not gonna get the money to do the next thing. So yeah, doing everything we can to make the timelines work is important, and every tool we can get to make that more likely to happen, I think, is valuable.

Ram Yalamanchili (33:35.791)

Yeah, yeah. Another point I am curious about is how does it work with

Ram Yalamanchili (33:47.643)

Sorry, I lost my place. Let me just look at the notes here.

So we covered...

Ram Yalamanchili (34:16.561)

Okay, actually, I think I've covered pretty much everything we had in our notes. Anything else you want to talk about that you think is interesting?

Andrew Pucker, OD, PhD (34:27.202)

I think that's pretty thorough for now. Maybe we can do this again in a year and see how things have changed.

Ram Yalamanchili (34:33.774)

Yeah, yeah, that's fair. One thing we can do is I can ask you about what the reaction has been when you talk to sponsors about what we're doing and how we're doing it, right? And what's the excitement, what are the questions they're asking? Should we do that? Does that make sense? Okay, all right, let's do that. So yeah, and I think...

Andrew Pucker, OD, PhD (34:57.304)

Sure, yeah.

Ram Yalamanchili (35:05.7)

Another question I'm curious about is, you know, we've spoken about a lot of very interesting and fascinating aspects of trial execution, which clearly Mintra is pioneering, and, you know, we're happy to be part of that journey. I'm curious, how do sponsors receive it? I'm sure you spend quite a bit of time with founders and sponsors who are looking to work with you, but I'm just curious,

what is the reaction when you go about talking about all these capabilities you're bringing through Mintra?

Andrew Pucker, OD, PhD (35:42.862)

Sure. So one of my jobs as VP of clinical development is to help with sales. That's one of my big jobs. I do go and meet with CEOs of companies, leads of clinical development, et cetera. And what we do at Mintra is we try to use our partnership with Tilda as a differentiator. I often bring up AI, efficiency, and cost savings, and it really piques people's interest. We know that AI is a buzzword now, but we

kind of bring that to their world. We say we're a CRO who partners with this company called Tilda, who has this amazing AI technology that can help make things faster and more efficient. And that kind of opens the door, and we have a discussion, and then maybe we do a little demo. I think it's been really well received. I think people are still a little skeptical, to be honest, but they want to learn more. And once you show them what

we're doing, they get even more excited. So I think as our partnership grows, and as AI becomes more accepted, it's just going to be an even bigger thing going forward.

Ram Yalamanchili (36:55.272)

Yeah, and I will say we are seeing something very similar from other partners and sponsors we work with. I think the understanding of what exactly AI is, and how it can be helpful and used, I feel is not ubiquitous, right? Everyone's got slightly different versions of what that means. And I think being able to show a platform which is actually working, and here's what happens with your protocol: you put it in,

the system starts to come alive, and there are teammates, or AI teammates I should say, going and doing certain parts of the workflow. You've got a single system which is fully integrated end to end. And I think that's where the aha moment really starts to come in. It's like, wow, okay, this is actually real. This is actually something I can use. And I think you kind of have to wrap your head around this new mode of implementing your clinical development program, probably.

But I think from what I've seen with Mintra, with the work you guys are doing, I think you're really thinking about it from a first-principles perspective, saying, we are building the modern AI-driven, or technology-driven, CRO. How would we do it without falling into the classic, because we've done it a certain way for the last X number of years, we will continue to do it this way? Which is interesting.

And I'm also pretty fascinated that you've already had a good amount of success in terms of working with several sponsors and trials. So, you know, I wish you all the best, and it's been amazing working with you guys. So thanks, Andrew, and thanks for your time on this podcast.

Andrew Pucker, OD, PhD (38:45.516)

Yeah, this has been great. I'm a scientist first, so I wouldn't try to push anything I didn't believe in. So I think this works, and I really do value our partnership, and I look forward to...


Andrew Pucker, OD, PhD (01:55.118)

Hi, Ram. Thanks for having me. I'm Andrew Pucker for everyone. I'm currently the vice president of clinical event development at Mintra Health. Really, my job is to be a research coach. So we bring clients in, we help them develop their protocols. And then as we go along, we try to make their study as efficient as possible. And that's kind how I've come to work with Tilda.

Ram Yalamanchili (02:20.811)

Great. And I'll also add that Mintra is a really great set of individuals, very tech forward, I would say, from a CRO perspective. Maybe to begin with, one of my first questions will be, what makes you and your team so, I would say, forward thinking on your adoption cycle, and why is it important?

Andrew Pucker, OD, PhD (02:45.566)

I think the people at Mintra really thrive because we've all been in the field for a long time. So for example, I am an optometrist, I have a PhD in myopia development, and I've been doing clinical research for roughly 15 years. And that is kind of the standard at Mintra. So we've seen a lot and we know what we can do to make things better. So whenever we get into a protocol, we try to brainstorm the best way to make things efficient, effective, error-free.

And with that, technology is often the answer. So that's the way I approach it. And I think everyone at MNTRA approaches a study.

Ram Yalamanchili (03:17.417)

Thank

Ram Yalamanchili (03:24.49)

Got it. All right. And in terms of what we've been working on lately, I would like to cover a few things around that topic, specifically how you're approaching endpoint development and some of the tooling related to collecting these endpoints. So maybe from an audience perspective, could you tell us a little bit about what your motivations are in that aspect and some of the

which we've been working on.

Andrew Pucker, OD, PhD (03:55.886)

Sure. One thing I've been interested in for at least a decade now is developing endpoints. In particular, things like corneal fluorescein staining, which is a measure of corneal irritation that's often an endpoint in dry eye studies, or tear breakup time. That's another endpoint in dry eye studies that essentially is determining how stable a person's tears are. If they're less stable, they're going to break up quicker, and then they're going to be more likely to have dry eye.

The issue with these tests, are they really variable from the doctor's perspective? Typically as the doctor, you look at the patient's eye and you're asked to make a judgment. Is this a, for example, a grade one, two or three for staining or is it a breakup of five seconds or six seconds for the tear breakup time? And with that, you need to bring in some standardization. So every doctor is going to be a little different.

Ram Yalamanchili (04:42.016)

for

Andrew Pucker, OD, PhD (04:53.388)

Right. So we want to train our doctors to do it exactly the same. It's not so big a deal in clinical practice when it's just your everyday patient. But when you're doing, say a 30 site study, every doctor is doing a little different in their regular day practice. need to train them to do it the study way. And with that, they hopefully bring down the variability. If you can lower your variability, you're going to be more likely to get a significant difference between your groups.

and more likely to win on your endpoints in the end and get your product approved. So everything we can do in that area to make it more consistent, make it easier for the doctor, less likely to make transposition errors, those sort of things I think can go a long way. And with TILDA, we've really run with that. what we've...

done as a partnership between Mintra and Tilda is build these training programs around those specific outcomes. And we're doing others, but I'm just using that as an example. But what we're doing is we're doing a training within Tilda Sense. And what that is going to have the doctor do before they get into the study or the study coordinator before they get into the studies to take an online training. This training will effectively walk them through

the whole procedure step by step, and then it'll do a knowledge check. And if they don't do it quite right, then we can see that because everything is transparent in the TildeSense system. So we can see, did they get these questions right or wrong? Are they proficient? If they're not proficient, we can go and do a tune up training with them. But once they successfully complete that, then they can go and do their trial. They can see the subjects.

And still we can have a pulse on how they're doing specifically. We've built in a real time measure of how they're doing the tests. So with one of the tests that your breakup time is an example, they need to do tests at very specific times. they apply a corneal fluorescein stain. is something that just mixes in with the tears. We start a timer and then they're not allowed to start grading until one minute.

Andrew Pucker, OD, PhD (07:09.792)

So the machine will say, hey, it's been a minute. You can start grading. And then they have up to five minutes total to complete this test. So it'll say, hey, you're within the window or not. And it can in real time track the amount of time for your breakup time by just having the doctor or their assistant potentially turn the stopwatch on and off. And it's collecting the data within the system. So

Ram Yalamanchili (07:10.029)

So, we're using the same method that we did for our creative work. And as we say, as we have defined it, we're to be using the same method. So, we're going be using the same that done. And we're be using the same method that we've And we're be the same method we've already And we're going going same method we've done. And to we're going to method we've And same the the same And the same

Andrew Pucker, OD, PhD (07:37.518)

Typically, you as the doctor you may be at the lamp with your cell phone using that as a timer and you can see how that might lead to errors, right? They may have transposition errors, etc. by putting that number into the system, but the system is collecting the data directly in this instance so that there is less likely for you know, all those transmission errors and then just less likely for them to get it wrong. So

Those are just some examples. think you could do this with a whole bunch of different outcomes. And kind of the cool thing with working with Tilda is it's very efficient, cost-effective. Like some of these products could be probably hundreds of thousands of dollars if you go to other vendors, but this is just part of their normal offering. So I love that too.

Ram Yalamanchili (08:06.846)

So, you know, what we've kind of described, or at least the way I look at it is it's a very guided way of

walking through sites in the process of endpoint collection. And I also noticed that in the past, you've looked at other systems where they're disjointed, right? You have training, which is separated out from the actual collection. And you have two different ways of sort of training and collecting, and that can lead to some inconsistency as well. But since the way we've developed this specific set of endpoint modules, it's basically...

This is how you collect it and you use the same exact module to train as well. And then the trainer is essentially assessing you based on how you're using the module in terms of how accurate you are in the process. What I'd like to know is, what was your past experience like in similar use cases? What is being done currently in the industry and what are the type of challenges you've faced in the past where this sort of an integrated system or workflow does not exist?

Andrew Pucker, OD, PhD (09:35.342)

I think the most common way to do this sort of training, so I'll use corneal florescine staining as an example, is that you'll have this investigator meeting before a study starts. So all the investigators, if you can get them to attend, hopefully they're not on vacation. Hopefully they all attend, often they don't. They will have to sit through this, four hour session, which is important, but it can get long. And as we know, our attention can wander within that four hours.

you will get at some point in that meeting where the expert goes over that specific method. could be standing, like I said, or something totally different, but you just give them a PowerPoint lecture like you would in any, academic classroom, and then you hope that they do it right. And maybe there's some sort of multiple choice test or something afterwards, but that's kind of more your typical academic kind of lecture learning situation, whereas with the TildeSense system,

Ram Yalamanchili (10:23.696)

So, going to be talking about the management of the site. That's the one thing that's going come up in the next couple of days. And I'm to be talking about of the site. And the management of the site is going be as the management of the site. And I'm to talking about the management of site. I'm to the of

Andrew Pucker, OD, PhD (10:36.46)

You have them in the system. It's the same system that you're using for everything else for the study. And you can know if they're actually doing it, because you can watch the amount of time they're taking it. You can see if they got the questions right or wrong all in real time. I think it's just make it more modernized, and it's better feedback for the sponsors. You can really know if these people are paying attention or not, whereas it's pretty hard to know if you're on a video call.

If they're like walking their dog, who knows what they could be doing.

Ram Yalamanchili (11:08.602)

Yeah, so that's part of the, that's essentially talking about the training process, but what about when the assessment is actually happening when you have a subject in front of you, when a patient is in front of you, right?

What's like, what have you seen in the past? And I'd like to specifically understand some of the challenges which might have been presented and what are the implications of having that type of a inconstancy in the past, that type of an inconsistency in the past. So yeah, anything you can add more light on that or anything you can say more.

Guess my question is, in the past, given the lack of an integrated system or guided workflows, what we have done is essentially trained and then given worksheets and essentially expect the site staff to do this in a consistent basis. And also the study has 30 different sites and maybe 60 different people who are probably doing the assessments.

Ram Yalamanchili (12:15.772)

to each at a site possibly. And you're dealing with this sort of a variability and you're hoping that everybody's following the protocol. But I'm curious, is it basically people just writing down at a certain time, this is what I've done and another time, this is what I've done and you're just making sure that those times all add up when you monitor? Is that how it's done or how how is this done before we existed and before you existed in terms of developing this kind of a capability?

Andrew Pucker, OD, PhD (12:43.4)

I think that historically what everyone has done is they're essentially their typical clinical workflow. They're seeing a patient and they're probably writing down the data on paper. So with that, there's very little visibility from the CRO or the sponsor side on how they're really doing this. Like we can kind of look at their data to make sure it makes sense on the backend with paper, know, they put it on paper, they put it then in their

data capture system, and then we can see if it makes sense, but there's really no gatekeeper checker on, they doing the times right? Right? So with the TildeSense system, we can know that they turn the timer on, yes. And then did they hopefully start their grading at a specific time? And then were they, did they have all their data in the system before that timer went off? So with that, I think the quality,

has much better tracking. We know what they're doing and ideally when they're doing it.

Ram Yalamanchili (13:48.533)

Right. So it's essentially like a real-time monitoring process here, in other words, right? And that also leads to better consistency because it's real-time. You get a...

direct alert immediately if something was not correct or validly measured. And that leads to better quality. That's essentially how you're thinking about it. Have you seen studies where this has become a problem? Is this something which you do see or have seen in the field where, you know, eventually it was, it had an adverse event or adverse effect on the trial itself?

Andrew Pucker, OD, PhD (14:22.54)

I think from a workload perspective, I see issues commonly. So it's not uncommon for some sites, I don't want to say all sites, but some sites might have hundreds of protocol deviations that need to be mitigated before you do your database lock, that sort of thing. So it leads to a lot of stress on the whole study team trying to resolve these like transposition errors or those sorts of things. I think that poor training in general is a

reason why sponsors don't get the data they think they're going to get. So if there's a lot of variability in your measure, that's much more likely to result in a failure of achieving your endpoints. So having this additional rigor, I think is going to make it more likely that everyone succeeds too.

Ram Yalamanchili (15:17.178)

Right. And in terms of having access to this kind of a data, like high frequency, high amount of fidelity data, how does that help either a sponsor or a CRO in terms of day-to-day trial execution or even overall as a program success, right? Are there ways where you can think of this helping them and what it can do?

Andrew Pucker, OD, PhD (15:41.496)

think if you start with this sort of thing from the beginning of your program development, so your first in-human study, you can use the data collected along the way to optimize future studies. So we might learn that it's hard for the doctors to get a specific test done in a given amount of time. So with that information, we can take that to our phase two or phase three study.

and better design our protocols so that they are efficient, effective, and error-free. So I think that is really interesting from just a protocol development standpoint and probably, you know, success factor.

Ram Yalamanchili (16:28.457)

And that's something which is a unique advantage in this model because otherwise you're not going to get this kind of information unless you really process every single

I don't know, every single data point collected on a paper for timing and doing a whole bunch of analytical work on top of it before you get this information, right? And in this sort of a case, you have much higher frequency. You're probably seeing data which would otherwise be hard to see, and that leads to a better protocol design as you work through the next phases of the trial. Is that fair?

Andrew Pucker, OD, PhD (17:03.116)

Yeah, and it's kind of standard to not have to enter your data into an EDC for a day or two or three. This is instant enter into your EDC. So it's just faster and you have way more visibility in what's actually happening.

Ram Yalamanchili (17:14.52)

Hmm.

Ram Yalamanchili (17:21.219)

That's right. So that's an important aspect of what we should cover because what I've also seen is sponsors and CROs typically give sites essentially an EDC setup, which is where the data capture happens. And there's probably five to 10 other systems which they're also giving them, right? So there's training systems, there's ePros, there's IRTs, EDC and...

training might not talk to each other, IRT might talk to ADC hopefully. So there's these siloed approaches of building technology and bringing it into a study. And our approach has been the integration of the lack of is what's causing quite a bit of burden on the site at the moment and leads to several inefficiencies and even the trial being a success, both from time cost or better design work or better insights into the next study.

So one of the principles which we've taken at Tilda is just producing one platform which can integrate all the way from a CTMS, EDC, TMF, source, E-Pro, your trainings, your IRT. So it's really like an end-to-end platform which has the capability to manage a trial in entirety. And on top of that building automation or AI capabilities which are deeply integrated across the stack. So this provided

the type of capabilities I think when we first started talking about how do we build certain endpoints into guided workforce on our platform? How can we build it so that the site is directly entering the data, but then part of it can flow directly into your EDC? I think having that kind of a architectural advantage in our platform really helped us figure out how we can start to build these kinds of endpoints and generally start to design much better trials for the various indications where

these sort of endpoints are being used, right? But particularly one thing I wanted to touch on is this particular assessment, like I know if you're talking about the fluorosine staining and tear break up time, that's a very time-based assessment. It's almost like you have to do certain assessments within a certain specified amount of time. And if you don't, you can really meaningfully change the outcome of the endpoint in this particular study. And...

Ram Yalamanchili (19:46.412)

That was very fascinating to me. the ranges are not very wide. These were talking about minutes to the second sometimes. And I found it really interesting that we essentially, as an industry, did not have a really strong set of governance or real-time validation capabilities around how you're collecting this data. And a lot of it is just dependent on whatever you write down on a paper.

you go with whatever is said there, And I think there's something to be said in terms of like the overall industry where this sort of a platform can add a lot of value in terms of just designing better study protocols, better execution of these protocols, and maybe ultimately finding the right place for the drugs we're developing, right? So really like trying to get them to be optimal in terms of coming into the market.

Anything else you want to like kind of add? mean, I'm curious about how you see the next several years to come from Mintra's perspective, because what I've noticed is you're very active in developing and creating these sort of endpoints and really standing out in terms of executing trials in a way which I've not seen many others sort of do. But I am curious, like, what do you think is the impact of these sort of...

endpoints and integrated platforms in how you see Mintra and the future of trial operations. I think ophthalmology in this case, but maybe other third-party areas as well.

Andrew Pucker, OD, PhD (21:23.662)

Sure. A couple of comments about what you said first though. With like tear breakup time, we've been doing it exactly the same since probably the late seventies. So it's nice to bring some new technology, innovation rigor to that. So I love that we're doing that. I also love that everything is integrated. I hate having 11 passwords and then they all change at different times. And then I don't know what passwords what. So from the site.

and CRO standpoint, I think it's great that you can just sign into one platform and do everything. And for our sponsors, it's probably just logistically easier. You don't need to get quotes from five different vendors; it's just one vendor that does everything. So I think it makes things simpler and cheaper, because everything's integrated and, you know, it cuts out some redundancies. Going forward, I think we can keep optimizing these sorts of systems with our partnership, of course. So with

the electronic data capture system, I think we can make it even smarter. One thing that we haven't done yet, but that I'd like to do hopefully in the near future: if you have a really rigorous set of inclusion criteria for picking, for example, your study eye, or deciding whether a patient qualifies or not, you could have the system just automatically tell you on the spot as the doctor. You put in your values, and maybe they're captured in real time because you were running the timer and doing all the right things,

Andrew Pucker, OD, PhD (22:49.994)

and then you get to this page that pulls all the data together for you and says whether this patient qualifies or not. So I think that could be really, really interesting.
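
As a sketch of the qualification logic Dr. Pucker describes, here's an illustrative eligibility evaluator. The criteria, thresholds, and field names are made up for the example and do not reflect any actual Mintra or Tilda protocol.

```python
# Made-up inclusion criteria; real thresholds would come from the protocol.
CRITERIA = {
    "age": lambda v: v >= 18,
    "tear_breakup_time_s": lambda v: v <= 5.0,  # short TBUT, e.g. for a dry eye study
    "corneal_staining_grade": lambda v: v >= 2,
}

def evaluate_eligibility(visit_data: dict) -> tuple[bool, list[str]]:
    """Return (qualifies, reasons) from values captured during the visit."""
    reasons = []
    for field, rule in CRITERIA.items():
        if field not in visit_data:
            reasons.append(f"{field}: not collected")
        elif not rule(visit_data[field]):
            reasons.append(f"{field}: value {visit_data[field]} out of range")
    return (not reasons, reasons)

qualifies, reasons = evaluate_eligibility(
    {"age": 42, "tear_breakup_time_s": 3.8, "corneal_staining_grade": 1}
)
print("Qualifies" if qualifies else f"Does not qualify: {reasons}")
```

Because the values are captured in real time during the guided workflow, a summary page like the one he imagines can simply re-run these rules and show the result on the spot.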

Overall, I think we can keep optimizing so that things are just cheaper and faster and easier. And, you know, we're never going to have a study get done in a day, but maybe we can optimize the system so much that sites can see twice as many patients as they otherwise would have. So that's another potential thing, and I think you're already kind of doing it on the Tilda side. I think you're making sites more efficient, but I think we can keep running with that.

From a training standpoint, it'd be nice to have some sort of training for every outcome that's important, and I think your platform allows for that. You know, you've got a big team of engineers who, if you give them an idea, run with it and come back with a product in a week or two, which is just amazing. So we started with some of these dry eye things, but we can expand this to anything that we want. And then...

Ram Yalamanchili (23:54.774)

What are your thoughts, from a sponsor perspective, on the prior experience you had versus some of the recent experience you're seeing with these types of technologies, right? How has that changed the relationship between

Andrew Pucker, OD, PhD (24:02.702)

you know, as a field, we can grow and use this together.

Ram Yalamanchili (24:24.723)

you as a CRO and the sponsor and the value the sponsors are gaining.

Andrew Pucker, OD, PhD (24:31.49)

I think it's been really hard to just integrate everything. There are just so many vendors that you need on a successful study; you know, it could be five or more. And if you can cut that down to two instead of five, I think that is a huge, huge win right there. I'm sure there are other things that are gonna

come down the road, but that alone, I think, is a huge, huge win.

Ram Yalamanchili (25:05.407)

Yeah, and that comes back to this idea of how you build better integrated systems which are easier to configure, modify, and deploy. And to a certain extent, it's the fact that there's so much automation capability we've baked into our platform. I think nowadays it would be very unusual not to ask, how come this product does not have some kind of AI- or automation-based

way to create a workflow, or set up a platform, or handle some kind of integration, or even answer certain questions, right? So that's a trend I do believe is going to happen across the board, not just with us, but with other technology platforms looking to modernize this sort of workflow. Another aspect of what you touched on, which I think is really interesting, is

that, from a CRO or a sponsor perspective, study startup is an interesting timeframe. There's a lot of work done to bring up the study and each site and really get ready for a first patient, first visit to happen from the time you've executed your CTA, let's say, right? And I also noticed that quite a bit of time is spent on this exact problem of getting systems ready, or getting

everything ready while you're also constantly seeing changes in your requirements. So can you maybe talk a little more about that part of it? We've gone through some iterations where you say, well, this is what we want, and, you know, the system should be configurable quickly. In our case, it's largely an AI-driven, agent-driven approach, but if you didn't have the right tooling and the right ways to do it,

I'm just curious, have you seen situations where the impact is pretty visible? Maybe it's longer timelines or things like that. I'd like to understand how you've seen that happen in the past.

Andrew Pucker, OD, PhD (27:15.182)

Yeah, I do think that if you don't get on board with AI, you're going to be left in the past. I think that's clear. And with any study, the lead-up to going live with subjects is very, very stressful. A lot of that comes from the fact that, one, you have to prepare roughly 10 different major documents. And if you could have

Ram Yalamanchili (27:21.075)

Ha

Andrew Pucker, OD, PhD (27:42.602)

an AI system where you feed in the protocol and it pushes information out to all 10 of those documents and gets them 80% of the way there, that would save weeks of time, I think. So that's something AI could help us with. I think there's going to have to be a human touch on it, at least for many more years, but AI can get it really close, and that will save valuable hours

Andrew Pucker, OD, PhD (28:10.04)

for the really skilled people, who honestly are in low supply. There are some really smart people out there, but the clinical research world is very, very specialized, and there are not enough people to do that work. So if we can make their jobs easier, so that maybe they can do two projects and not feel stressed out instead of doing one project and feeling stressed out, I think that could be a game changer.

There are a lot of documents to manage. I don't even think I know all the documents that are on a study, because we have really skilled clinical leads and project managers helping me; I'm more on the medical side of things. But they have to file hundreds of documents, and if you can automate that, that also makes everyone less stressed. You're less likely to lose them, and overall you'll be in better regulatory

Ram Yalamanchili (28:42.819)

Yeah.

Andrew Pucker, OD, PhD (29:04.046)

compliance. And that's all happening at the same time; it could be within a one-month window, especially if you have a sponsor who wants to make a public announcement that they're going to see their first subject before ARVO or another important meeting. We're often given timelines that are maybe a little unreasonable, but we go above and beyond to meet them. And if you have AI helping you, I think that is going to make things even easier going forward.
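
A rough illustration of the "feed in the protocol, get the documents 80% of the way there" idea: the core pattern is extracting structured fields from the protocol once and fanning them out into each document template. The field names and templates below are toy examples, not any real system's schema.

```python
from string import Template

# Fields extracted once from the protocol (by an AI or a human reviewer).
protocol_fields = {
    "study_title": "Example Dry Eye Study",
    "visit_count": "5",
    "primary_endpoint": "tear breakup time at Day 84",
}

# Each downstream document draws on the same extracted fields, so one
# extraction feeds many drafts, and a change propagates consistently.
TEMPLATES = {
    "informed_consent": Template(
        "You are invited to join $study_title, which involves $visit_count visits."),
    "source_worksheet": Template(
        "$study_title -- primary endpoint: $primary_endpoint."),
}

drafts = {name: t.substitute(protocol_fields) for name, t in TEMPLATES.items()}
for name, text in drafts.items():
    print(f"[{name}] {text}")  # drafts would still go to a human for review
```

As Pucker notes, the human touch stays in the loop: the automation produces drafts, and the skilled (and scarce) clinical staff review rather than retype.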

Ram Yalamanchili (29:08.907)

Yeah.

Ram Yalamanchili (29:32.454)

Yeah, and case in point, I think that's exactly what our regulatory teammates do, right? They help you prepare documentation, get it signed off, do document QC on top of it, and essentially drive the process forward at scale. So you're not bottlenecked from a people-time perspective, but you're still very much involved in the process and the oversight side of things. But...

What I've also noticed, or at least would like to touch on, is: do you see something similar in setting up the various systems and change management at the vendor level? How often do you see requirements changing before the study starts, so that you have to go back and forth with different vendors, and it takes a certain amount of time to propagate all these requirements to all the systems or all the vendors you're working with, right? How big of a pain point is that?

And have you seen issues of that sort arise?

Andrew Pucker, OD, PhD (30:34.478)

Pretty much every study I've ever been on has had a last-minute change. Maybe we noticed that the vendor is doing X, Y, Z. Maybe it's something related to a reading center, as an example, and they feel like they need to optimize the protocol a little bit at the last minute, because they may not get the protocol until you're further along, right? You're kind of building the study as you go. And then

that one change could actually trigger changes in several other documents. So you might have to change the protocol, and then a manual of procedures, and then maybe a pharmacy manual. Having one change could really throw a wrench in things, because you have to quality-control check all these other documents. And if you had a system that could do most of that for you, that would be amazing.

Ram Yalamanchili (31:20.528)

Yeah, yeah. And what about data management? Do you generally see changes which then propagate into your EDC, ePRO, and, you know, edit checks and things like that as well?

Andrew Pucker, OD, PhD (31:33.792)

A lot of protocol changes will then translate into needing to update your source documents and your EDC. That can cause, you know, a post-production change in your EDC, which again is just another downstream effect of changing the protocol, and it takes a lot of time. It could take days for the team to put in the change, have other people double-check it, and get sponsor sign-off. So if you can

automate that as well, I think it's really going to be a game changer.
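
One way to picture the downstream effect Dr. Pucker describes is as a dependency graph: change the protocol, and a system can walk the graph to list every artifact that now needs re-review or a post-production EDC change. The artifact names and dependency map below are made up for the sketch.

```python
from collections import deque

# Made-up dependency map: editing the key forces review of its values.
DEPENDS_ON = {
    "protocol": ["manual_of_procedures", "pharmacy_manual", "source_documents"],
    "source_documents": ["edc_forms"],
    "edc_forms": ["edit_checks"],
}

def impacted_artifacts(changed: str) -> list[str]:
    """Breadth-first walk: everything downstream of the changed artifact."""
    seen, queue, order = set(), deque([changed]), []
    while queue:
        for dep in DEPENDS_ON.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                order.append(dep)
                queue.append(dep)
    return order

print(impacted_artifacts("protocol"))
# ['manual_of_procedures', 'pharmacy_manual', 'source_documents',
#  'edc_forms', 'edit_checks']
```

The automation doesn't remove the days of double-checking and sponsor sign-off by itself, but it makes the blast radius of a change explicit instead of leaving it to memory.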

Ram Yalamanchili (32:08.655)

Right, right. Yeah, and I think that's kind of the fascinating thing. I've certainly seen timelines thrown around which, from the looks of it, seem very aggressive. But when I started to get into the nitty-gritty detail, what's said is very different from what's actually being done. And that gap is wide, I feel, and not appreciated enough, even from a sponsor's perspective, right? Like, I think

you frequently hear what you'd like to hear from service providers and vendors, but in reality, how do I actually get there is a very different question. And it's a complicated problem to solve without modern technologies and automation and AI and so on, right? But I think having an integrated platform with really strong automation built into it is a good way to go about solving this type of issue.

So.

Andrew Pucker, OD, PhD (33:10.136)

I think the timelines are important, and it'd be nice to sometimes delay things, but you often can't, because maybe the sponsor's timeline is tied to funding. If they don't meet this mark, maybe they're not gonna get the money to do the next thing. So yeah, doing everything we can to make the timelines work is important, and every tool we can get to make that more likely to happen is valuable.

Ram Yalamanchili (33:35.791)

Yeah, yeah. Another point I am curious about is how it works with...

Ram Yalamanchili (33:47.643)

Sorry, I lost my line there. Let me just look at my notes here.

So we covered.

Ram Yalamanchili (34:16.561)

Okay, actually, I think I've covered pretty much everything we had in our notes. Anything else you want to talk about that you think is interesting?

Andrew Pucker, OD, PhD (34:27.202)

I think that's pretty thorough for now. Maybe we'll do this again in a year and see how things have changed.

Ram Yalamanchili (34:33.774)

Yeah, yeah, that's fair. One thing we can do is I can ask you about what the reaction has been when you talk to sponsors about what we're doing and how we're doing it, right? What's the excitement? What questions are they asking? Should we do that? Does that make sense? Okay, all right, let's do that. So yeah, and I think...

Andrew Pucker, OD, PhD (34:57.304)

Sure, yeah.

Ram Yalamanchili (35:05.7)

Another question I'm curious about: we've spoken about a lot of very interesting and fascinating aspects of trial execution, which Mintra is clearly pioneering, and, you know, we're happy to be part of that journey. I'm curious, how do sponsors receive it? I'm sure you spend quite a bit of time with founders and sponsors who are looking to work with you, but I'm just curious:

What is the reaction when you talk about all these capabilities you're bringing through Mintra?

Andrew Pucker, OD, PhD (35:42.862)

Sure. So one of my jobs as VP of clinical development is to help with sales; that's one of my big jobs. I go and meet with CEOs of companies, leads of clinical development, et cetera. And what we do at Mintra is use our partnership with Tilda as a differentiator. I often bring up AI, efficiency, and cost savings, and it really piques people's interest. We know that AI is a buzzword now, but we

kind of bring that to their world. We say we're a CRO who partners with this company called Tilda, which has this amazing AI technology that can help make things faster and more efficient. That kind of opens the door; we have a discussion, and then maybe we do a little demo. And I think it's been really well received. People are still a little skeptical, to be honest, but they want to learn more. And once you show them

what we're doing, they get even more excited. So I think as our partnership grows and as AI becomes more accepted, it's just going to be an even bigger thing going forward.

Ram Yalamanchili (36:55.272)

Yeah, and I will say we are seeing something very similar from other partners and sponsors we work with. The understanding of what exactly AI is and how it can be helpful and used, I feel, is not ubiquitous, right? Everyone's got slightly different versions of what that means. And I think it comes down to being able to show a platform which is actually working: here's what happens with your protocol. You put it in,

the system starts to come alive, and there are teammates, AI teammates, I should say, going and doing certain parts of the workflow. You've got a single system which is fully integrated end to end. And I think that's where the aha moment really starts to come in. It's like, wow, okay, this is actually real. This is something I can use. And, you know, you kind of have to wrap your head around this new mode of implementing your clinical development program.

But from what I've seen with Mintra, the work you guys are doing, I think you're really approaching it from a first-principles perspective, saying: we are building the modern AI-driven, or technology-driven, CRO. How would we do it without falling into the classic trap of, because we've done it a certain way for the last X number of years, we will continue to do it this way? Which is interesting.

And I'm also pretty fascinated that you've already had a good amount of success in working with several sponsors and trials. So, you know, I wish you all the best, and it's been amazing working with you guys. Thanks, Andrew, and thanks for your time on this podcast.

Andrew Pucker, OD, PhD (38:45.516)

Yeah, this was great. I'm a scientist first, so I wouldn't try to push anything I didn't believe in. So I think this works, and I really do value our partnership, and I look forward to...


Stay current on our AI teammates. Sign up now.

© 2025, Tilda Research. All rights reserved.
