Transcript
Welcome, everyone, and thank you for joining us for today's webinar, Harnessing the Potential of AI in Modern Healthcare, hosted by Dell Technologies and World Wide Technology. My name is Gerhard Golden, I work in the Global Industries marketing team at Dell Technologies, and I will moderate today's webinar. Our panelists will explore the transformative impact of AI in the healthcare industry. The discussion will revolve around the innovative ways AI is revolutionizing healthcare delivery, improving patient outcomes, and driving operational efficiency, which is becoming more important every day. It will also highlight the pivotal role played by technology companies like Dell Technologies and World Wide Technology in spearheading the integration of AI in healthcare. These two companies continue to drive advancements that promise to reshape the industry. The insights from this event will highlight not just the progress made to date, but also a promising future in which AI-driven healthcare solutions lead to improved patient outcomes and more efficient healthcare systems. From a logistical perspective, please note that today's webinar is being recorded and will be available on demand after the live session; we will send links to the recording to all attendees. Now, without further ado, I will turn it over to the speakers to introduce themselves, starting with Doctor Quinonez, Chief Healthcare Advisor at World Wide Technology. Doctor Quinonez, please take it away.

Thank you, Gerhard. Again, my name is Eric Quinonez. I am one of the chief healthcare advisors here at WWT. I have a physician background, and I have been in clinical informatics and, I would say, digital and data transformation in healthcare for well over 20 years in various formats: as a user of technologies delivering care to patients, at very large healthcare organizations and institutions as a physician executive, and, last but not least, in the consulting space at large consultancies. I have been at WWT for about three years, and I am based in Los Angeles.

Fantastic, thank you so much. Steve, do you want to go next?

Thank you, Gerhard, happy to jump in. I'm Stephen Lazer, the global healthcare and life sciences CTO at Dell Technologies. I come to the role with about 25 years of healthcare experience and a little more than 30 years in technology, and I really have the luxury and pleasure of being a global resource and interacting with healthcare around the world. I also spend quite a bit of time working with industry and with social innovation, making sure that we have some outreach on the philanthropic side of things, such as a role with the Global Crisis Coordination Center, and working with several incubators to help the startups we find in healthcare and life sciences really take hold. And with that, I'll turn it back over to you, Gerhard.

Thanks, Steve, that's great. And Michael?

Hello, everyone. My name is Michael Giannopoulos. I'm the Dell Healthcare Global CSO and CTO for the Americas, and I also serve as the Dell Federal Healthcare Director.
I have the distinct pleasure of working with individuals like Steve Lazer and with partners such as Doctor Eric Quinonez in helping propel healthcare from today to tomorrow. Our responsibility as stewards of the technology that makes modern healthcare happen is to ensure that every life that we change, every system, every change that we make is a meaningful change, so that we find ourselves in a better place every day moving forward with artificial intelligence. We are at a seminal moment in healthcare delivery, and this is what we are here to discuss. We hope that you find this discussion meaningful and useful. Thank you.

Perfect, thank you so much to all three of you. I must say, selfishly, it's very exciting to have so many thought leaders in the same webinar. So without any further ado, let's get talking about the issue at hand. I think all of us can agree that AI has been around for a while, also in the healthcare industry, but I think we can also agree that the time has arrived to take the next step in integrating AI into the clinical environment. With this as a backdrop, for all three of you: how do you see advanced analytics methodologies being effectively integrated into the clinical setting to provide the benefits promised without adding, and this is the key, friction and barriers for clinicians and patients? Doctor Quinonez, let's start with you, and then the rest of the panel can chime in.

Happy to, thank you. That's a big topic. AI has been around a long time; it really became a field of study in 1956 at one of my alma maters, Dartmouth, and you can fast-forward all the way to today. Interestingly, it wasn't until November of last year, almost a year ago, that my daughter was home and made a remarkable discovery. She said, "Dad, have you heard of this ChatGPT thing?" And I said, "Kind of, I haven't really played with it yet." She said, "Well, I asked it to create a meal plan for me that was high protein and met all the other variables, and it did it within seconds," and that completely blew my mind. So then I started thinking about how this could actually be used in healthcare, when you think of the burdens we have today for clinicians and the burnout rates, which are due to a lot of reasons, but one that particularly jumps out to me is documentation. If we were able to use technologies like this to help streamline documentation, to ambiently hear the conversation happening between patient and clinician, to create and curate that note and have it teed up for review, along with the clinical plan of what we're going to do for that patient and the orders all teed up, that would be remarkable. So you start to think about that, and when I pause and give it a lot of thought, I think advanced analytics and AI technologies will be integrated into primary, secondary, tertiary, and quaternary care
settings, really influencing clinical decision-making in so many ways. These technologies will be used for patient risk stratification, predictive modeling for disease progression, clinical decision support applications, and population health management. Those are big, broad categories, but I can definitely see it being used, as in the example I gave earlier. Again, these things should be brought into the environment in a way that doesn't cause friction for any of the end users, clinicians, patients, and so on. It should work almost automatically in the background, where it really does streamline the workflow for both the patient and the clinician. To frame it in terms of data, technology, and people and process, I think thinking of it that way is very important. I also think we need to bring in a fourth element: we have a moral responsibility to ensure that principles of equity are included in the implementation of AI in healthcare. This is really to prevent data, algorithmic, and analytic biases that could potentially widen inequities in healthcare by race, ethnicity, gender, or other sociodemographic factors. Those are really key things to think about.

Yeah, that's a great point, a really good point. Michael, let's jump to you first this time. Anything you want to add to what Doctor Quinonez was just saying?

It's difficult to speak after Doctor Quinonez, but here's my point of view. Regardless of whether it's myself or somebody else I know on a table somewhere receiving care, I want Doctor Quinonez to be the person, the entity, providing that care, not necessarily a simulacrum. However, Doctor Quinonez is at a disadvantage at this point, and that disadvantage is that he has way too many Michaels to properly care for. So the augmentation of capabilities that artificial intelligence promises to provide for Doctor Quinonez or any other physician, clinician, advanced practice clinician, or nurse is staggering, because it means the individuals charged with providing the best possible care to their patients, and keep in mind our North Star is always our patients and patient safety, are able to actually provide that care to a larger number and variety of patients, and, as we progress forward, even remotely to patients who might not necessarily be in front of them. Artificial intelligence today, in its nascent state, is allowing us to greenfield what healthcare delivery might look like, can possibly look like, over the next five, ten, fifteen years and onwards. It's a generational transformation, because we are, as a species, growing in numbers, and the pipeline for physicians and NPs and RNs isn't big enough to accommodate that growth. We need these technologies built in exactly the way Doctor Quinonez mentioned, in an ethical way, in a verifiable way, in an augmented way, to provide that extra layer of hands and control so that we make the best decision possible for the person that matters the most in that particular instance: the person on the table.
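To make the risk stratification and predictive modeling use cases Doctor Quinonez describes above a bit more concrete, here is a minimal, hypothetical sketch of how such a score might be produced and surfaced as decision support. It is not the panelists' implementation or any vendor's product: the features, the synthetic outcome label, and the top-decile review threshold are all invented for illustration.

```python
# Illustrative sketch only: a toy risk-stratification model trained on
# synthetic data. Feature names and thresholds are hypothetical; this is
# not a clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical features: age, HbA1c, systolic BP, prior admissions.
X = np.column_stack([
    rng.normal(60, 12, n),    # age (years)
    rng.normal(6.0, 1.2, n),  # HbA1c (%)
    rng.normal(130, 18, n),   # systolic blood pressure (mmHg)
    rng.poisson(0.4, n),      # admissions in the prior year
])

# Synthetic "deterioration within 12 months" label, loosely tied to the features.
logit = (0.03 * (X[:, 0] - 60) + 0.6 * (X[:, 1] - 6.0)
         + 0.02 * (X[:, 2] - 130) + 0.5 * X[:, 3] - 1.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # per-patient risk scores
print("AUROC:", round(roc_auc_score(y_test, risk), 3))

# Stratify the panel: surface the highest-risk patients for clinician review,
# as decision support rather than an autonomous decision.
top_decile = np.argsort(risk)[-len(risk) // 10:]
print("Patients flagged for review:", top_decile)
```

The point of the sketch is the workflow shape: the model scores a panel and flags a subset for clinician review rather than acting on its own, which is the low-friction, decision-support posture the panelists describe.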
Yeah, that's excellent. Thanks, Michael, that's really a great perspective as well. Steve, did you want to add anything to that?

Well, they're both a hard act to follow, but let's see what we can do and go a little bit further. We would be remiss in approaching this discussion without having a conversation about ethics, bias, and trust, and I want to make sure those are always considered when we think about what we're going to do with AI, and that we plan for them as we start thinking about ways we can utilize artificial intelligence as well as advanced or predictive analytics capabilities. When we talk about the field of AI, everyone is thinking about generative AI, because it's the current buzzword of the world today. But Doctor Q, you are spot on in mentioning that predictive analytics capabilities have been around since the early to mid fifties, and we still see them in use today, especially as we start to look at things like population health. Now, as we get into more advanced utilization of artificial intelligence capabilities, it's not just prediction; it's also starting to look at what's coming next. What are we going to do next? How are we going to be able to see what's been done with other use cases that are very similar? How do we take some of those day-to-day operational issues that are out there, minimize the staff involvement in them, and utilize our staff to the best of their capabilities so that they can see more patients and treat more patients? The challenge of the staffing and resource shortage is here; it's not coming, it's here, and it's only going to get worse as the population continues to age. As we start to see ways that we can take those routine tasks and automate them, we really need to start thinking about how we can make this faster, better, and safer as we utilize those technologies and capabilities to treat those patients.

Yeah, that's excellent. Thank you, Steve, that was great. Listening to the three of you speak, I think interoperability will probably remain aspirational in the near future, because AI is a large piece of the puzzle, but I'm not sure that elusive goal has been achieved just yet. But Steve, your comments touched on my next question, and that is: with this rapid advancement and adoption of AI, and particularly large language models like the ChatGPT Doctor Quinonez mentioned with his daughter, what are some of the ethical challenges and considerations clinicians need to be aware of when utilizing these AI-driven solutions? And I don't want to put you on the spot three times in a row, Doctor Quinonez. So Steve, seeing that you broached the subject in your previous answer, why don't you start us off on this one?

Thank you, Gerhard, I'd be happy to. For all the wonder of AI capabilities, whether it's generative AI, retrieval-augmented AI, or however we look at artificial intelligence capabilities, we have a responsibility.
And that responsibility is to use this ethically, first of all, making sure we're making good choices and good approaches. You're going to hear about, and I believe Michael is going to spend a couple of minutes talking about, the recent executive order that was published just yesterday, and how we can be not only responsible but also ethical with AI utilization. Looking at it from the perspective of, and Doctor Q, this will be straight into your category of, "do no harm," we can follow that process and continue to utilize it, but at the same time we talk about health equity and making sure that the information we're utilizing does not create automatic bias. On information sharing: a lot of the barriers can be broken down using some of the tools we have for data management with artificial intelligence, but in sharing that data we need to make sure the data itself is not biased. Those two little pieces of skin attached to the side of my head are actually filters. They're not just ears, even though we call them ears; they filter what I hear, and they provide some form of inherent bias whether I choose to have it or not. That's not up to me; it's up to my experiences in life, how I was raised, where I was raised, the population I was exposed to, the things we've been exposed to. So there is inherent bias in everything that we do. The trick with artificial intelligence is how we minimize that. We're never going to completely eliminate it; it's not possible. So how do we minimize it? How do we make sure that whatever we're developing, whatever algorithms are created, utilize information that is not just 40-to-60-year-old white males to diagnose and define the treatment methodology, as most of our clinical trial data points do, but that represents the actual population? It's going to take some time and effort to develop that, but we need to be very, very conscious of this when we start utilizing those capabilities and start to steer our models so that they are population based. And what's correct for a population in the US may not be correct for a population in Asia, and may not be correct for a population in the Middle East. That becomes part of the challenge here: how do we make sure that regionalization also takes place as we look at those capabilities? With that, I'm going to turn it over to Doctor Q for a few additional comments. And you're on mute.

I am on mute, okay. Steve, you really hit a lot of the key concerns that we have when it comes to being proactive in thinking about the ethical implications of AI and advanced analytics in healthcare, and how we bring that into the care of patients and into life sciences as well. "First, do no harm" is the number one thing that I and my colleagues think about, because it was an oath that we took, and we took it very seriously. To be true to that, whenever we're caring for a patient, we're obviously looking for the right data.
So not just the data for or regarding the patient, but also the most recent evidence-based medicine, to really help in terms of the treatment pathway we're going to choose for that patient. That really helps guide us to first do no harm. Now, when considering a technology whose data may not be complete, or where we don't know where the data is coming from, there is the potential for unintended consequences. So there are a lot of things to weigh when we're thinking about ethical considerations. To switch it up and think about other things that matter: data privacy and security are really important. On the clinical decision-making side, as I was mentioning, we should definitely not rely on AI as the quintessential answer. It needs to be a guide, if you will; it's not a replacement for clinicians. AI will not replace clinicians, but clinicians who don't use AI will be replaced. Something to think about. We also talked about biases and fairness a few times: what are the algorithms based on? Working from a black box is not going to work for us; we have to understand what these models are trained on and what the science is behind them before we actually apply them. In the same category, I would include cultural sensitivities. It's not just the science; it's the humanistic component of care, so think of the cultural, racial, or socioeconomic factors, the social determinants of health, and lifestyle choices; all of that has to be considered. And here's one that I think is important: accountability, and also liability. If errors happen because of the use of AI, who's responsible? Is the AI responsible? Is it the clinician? Is it both? These are things to consider and think about. And obviously regulatory compliance is important: is it in line with the rules and regulations we currently have, such as HIPAA and other policies? I would also mention economic and technical barriers, so accessibility: are there marginalized populations that don't have access to these technologies and are just left out? These are all things to think about when we're using advanced analytic tools.
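Both Steve's point about training data that over-represents one demographic and Doctor Quinonez's warning about black-box models suggest a very practical first step: audit who is actually in the training cohort before a model is built or deployed. The sketch below is a minimal, hypothetical illustration of such a representativeness check; the age bands, reference shares, synthetic cohort, and 20% tolerance are all assumptions made up for this example, not anything described by the panelists.

```python
# Illustrative sketch only: a simple representativeness check comparing a
# synthetic training cohort's demographics against a reference population.
# Group names, proportions, and the 20% tolerance are hypothetical.
import pandas as pd

# Hypothetical reference population shares (e.g., from census or registry data).
reference = {"age_18_39": 0.35, "age_40_64": 0.40, "age_65_plus": 0.25}

# Synthetic training cohort, deliberately skewed toward 40-64 year olds.
cohort = pd.DataFrame({
    "age_band": ["age_40_64"] * 600 + ["age_18_39"] * 250 + ["age_65_plus"] * 150
})

observed = cohort["age_band"].value_counts(normalize=True)

for group, expected in reference.items():
    share = observed.get(group, 0.0)
    ratio = share / expected
    status = ("OK" if 0.8 <= ratio <= 1.2
              else "UNDER-REPRESENTED" if ratio < 0.8
              else "OVER-REPRESENTED")
    print(f"{group}: cohort {share:.0%} vs reference {expected:.0%} -> {status}")
```

A real audit would cover many more attributes (race, ethnicity, sex, geography, payer mix) against a defensible reference population, but the mechanics stay the same: compare cohort shares to reference shares and flag the gaps before training.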
Excellent, thanks again for that. That's a fantastic overview, especially from the clinician's point of view. Michael, is there anything else you wanted to add to that?

I'm not sure there is anything humanly possible to add to that, but to everything that Doctor Quinonez and Steve have said, I will say this: we do not have a single automobile on the road today that has not gone through extensive crash testing, extensive testing of how the automobile reacts during that crash, how the seat belts retract, what the braking mechanisms are, and so on, because lives are always at stake when you discuss automobiles. The same level of maturity needs to be applied to this particular marketplace, which is currently an immature marketplace, so that we can be confident in our ability to deliver the results that we need to deliver. Anything absent that level of scrutiny makes it, from my perspective, a nonstarter. I'll take this opportunity to talk about the executive order that came out yesterday from the White House. The executive order is probably the most that the US will do; don't necessarily expect much to come out of Congress on this front in the near future. But the executive order does specifically call out safety, it calls out security, it calls out every talking point that Doctor Q and Mr. Lazer have brought up in the past five or seven minutes. The fact that we are taking this seriously at the presidential level, and potentially at the leadership level across the globe, at least gets the eyes that need to be looking this way actually focusing this way. There is no redo in this. If we do not get it right from the beginning, then we will forever be playing whack-a-mole with AI companies, systems, and technologies that are operating in their own black boxes. Nobody wants to be in Schrödinger's box; nobody wants to be that cat. So we need to ensure that it's done correctly from the beginning, and we are starting to understand what that means.

Yeah, that's a really good point, and I'm glad you brought that up. So at least to my ears, it sounds like there are ways to avoid Minority Report becoming a reality. It's good to see that regulators and legislators are starting to pay attention, and that they are getting involved before we progress too far down the road with technologies like AI. That's encouraging. Now, shifting the focus a little bit from the opportunities and challenges of implementing AI today, let's talk a little more about the future of AI in healthcare. Considering the current trajectory of advanced analytics, what do all of you see as the projected trends and potential breakthroughs in the next decade or so in healthcare? And this time, Doctor Quinonez, if you don't mind, I'm going to start with you again and then give each of the panelists an opportunity to look into their own crystal balls.

If only I had one, Gerhard. You know, it's funny, though: there are things that many of us, my colleagues who are technologists and also physicians, have been, I would say, screaming from the mountaintops about for years, and people thought we were crazy. But now things are starting to come to fruition.
So, I don't know, maybe we're on to something. But before I give you my answer, I want to give you some points to think about, in terms of the current state and what the future state may hold. Consider the current state. In 1950, medical knowledge doubled every 50 years; today it doubles in less than 73 days. Take a minute for that to marinate. For a clinician to stay on top of their game, they would have to read over 20 hours a day for their specialty; that gives them four hours to see patients and sleep. Again, let that marinate. Healthcare generates about 30% of the world's data today; in the next two years it will be, I think, around 36%, and greater than about 85% of that is unstructured data. Again, let that marinate. And the average healthcare system produces, I think, about 50 PB a year. On top of that, we have a real crisis right now, and I don't mean to be a Debbie Downer, these are just facts: our resources are depleted. We will never have enough clinicians again; we just won't. We can't train them fast enough; it takes a lot of money, resources, and time to train them. We're about 200,000 nurses in deficit every year, and we would have to hire and train 200,000 nurses every year for the foreseeable future to keep up with demand. For physicians, by 2030 I think we're going to be about 140,000 physicians in deficit. And we will have what they call the silver tsunami by 2030, meaning that every baby boomer will be 65 and over. So you have a population that needs greater care because of chronic disease, and fewer people to care for them. Think about all those things I just said.

So in the near future, what if we had the longitudinal clinical data, the discrete and non-discrete text data, the imaging and pathology data, the social determinants of health data, the genomics, the biomes and microbiomes, the IoT and wearable data, and we were piping that clinical and lifestyle data in, providing real-time predictive surveillance to course-correct any deviations from our personalized optimal diet, exercise, sleep, environmental exposures, and stressors, and to evaluate our preventive maintenance as well, bringing in labs, vitals, and the home diagnostic tools and IoT devices we use, to ensure that we are as close to homeostasis as possible? I would consider that something like a copilot, an assistant that every patient and every clinician would have. On the clinical side, for the care team, what does that mean? They would have this kind of constant surveillance of their patient panels, of their population, to be able to identify patients preemptively who are going in the wrong direction.
So how do we course-correct those patients, whether they're chronic disease management patients they're caring for, patients who have just been discharged, or patients with, say, an early diagnosis of prediabetes? How do we keep them from progressing through that pre-diabetic stage, maybe even help them come out of it, and prevent them from going into a diabetic state? So it's about having these tools to help assist those who are caring for patients, again not replacing physicians, but a copilot advising them on their patient panels, whether on the ambulatory side or the acute care side, to survey the patients they're caring for and care for them proactively, because the data is there and it's letting the right people know, at the right time, about the right patient, what needs to happen.

Yeah, that would be a future that I would enjoy being in, so thank you for that perspective. Michael and Steve, do you want to add anything to that? Anything you see coming down the pike that's getting you excited?

I'll go first, Steve, if that's okay with you. I agree with everything you just said. One of the things that I believe is going to make a monstrous difference in our capability to achieve results, personalized results, rapid results, for individuals in care or not in care, everybody basically, is the advent and maturation, the level of maturity we have coming, of digital twins: biological digital twins. Whether that's used on a population-level data set or on a personalized-level data set, I think that particular technology is the apex, the ultimate validation of what these technologies in total can provide, with the ability to actually look at a patient, visualize that patient across multiple care vectors or multiple life-path vectors, determine what each yields, and then have a virtuous feedback loop back to the clinician and back to the patient that says, "Listen, you should not eat those ten boxes of chips right now, because it impacts you in this particular way four years down the line." The digital twin phenomenon that emerged and started off in manufacturing a while ago is now a real phenomenon within the health sciences, and it will become an even more important one moving forward with personalized medicine and personalized care. That, to me, is extremely exciting. And since you made a Minority Report reference, if you want to go down movie lane, the idea of having a Star Trek pattern buffer for the transporter, built around virtual twinning, is incredible to me, and I'm looking forward to the time when we can actually say we did some of that.

That's great, Michael. Thanks for that reference; that made me smile.
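Michael's description of a biological digital twin, projecting a patient along multiple life-path vectors and feeding the comparison back to the clinician and patient, can be illustrated with a deliberately tiny toy model. The sketch below is purely hypothetical: the linear HbA1c drift, the coefficients, the noise level, and the 6.5% threshold are invented for illustration and bear no relation to a clinically validated twin.

```python
# Illustrative sketch only: a toy "digital twin" forward simulation comparing
# two hypothetical life-path vectors for one synthetic patient. The model,
# coefficients, and thresholds are invented for illustration.
import numpy as np

def simulate_hba1c(start: float, monthly_drift: float, months: int = 48) -> np.ndarray:
    """Project HbA1c forward with a simple linear drift plus small noise."""
    rng = np.random.default_rng(0)
    steps = monthly_drift + rng.normal(0.0, 0.02, months)
    return np.clip(start + np.cumsum(steps), 4.0, 14.0)

baseline = 5.9  # synthetic starting HbA1c (%)

# Two hypothetical scenarios fed back to the patient and clinician.
current_habits = simulate_hba1c(baseline, monthly_drift=+0.02)
with_changes = simulate_hba1c(baseline, monthly_drift=-0.01)

print(f"Projected HbA1c in 4 years, current habits: {current_habits[-1]:.1f}%")
print(f"Projected HbA1c in 4 years, with changes:   {with_changes[-1]:.1f}%")
if current_habits[-1] >= 6.5 > with_changes[-1]:
    print("Feedback: the current trajectory crosses the diabetes threshold; the alternative does not.")
```

A real digital twin would rest on mechanistic or learned physiological models and far richer data; the sketch only shows the shape of the feedback loop Michael describes: simulate alternative trajectories, compare them, and return an actionable message.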
So, Steve, anything you want to add to that? Steve, you're on mute.

We've covered quite a bit here, but with everything that's been said, I think that as we look at how all of this can impact us from a future-state perspective, it's about taking away the toil and the burdensome activities that are in place today in healthcare and life sciences; being smarter and more accurate in what we do; being able to bring more information to bear to make decisions about how we treat and take care of that patient; and giving the patient the ability to learn more about what impacts them, really driving that from a wellness perspective. Doctor Q, you're absolutely spot on about the future state there. As you look at that quote-unquote patient panel, that panel is something that will be with us every day of our lives and will become a part of our lives; I expect we will eventually start to develop relationships with the avatars that drive those panels, and they will become part of that. I think the other half of this that we haven't touched on is the behavioral health aspect, and with that, the ability to utilize these technologies and capabilities to really and truly impact behavioral health around the globe, for those suffering from depression, from loneliness, from the factors that drive them into poor behaviors, and to really start to change that behavior by reducing those impacts from the external world, giving them outlets, giving them positive reinforcement, giving them the education they need, providing information for them, and being able to guide people a little bit better. Even though we may still be somewhat isolated, some people became so isolated throughout Covid that there is no coming back from it for them, and with that, how do we bring them back into society? How do we give them, I'll say, someone or something they can interact with and feel safe and comfortable with, and at the same time utilize those technologies to provide good? There are so many opportunities here. I shudder at the thought, and I'm thrilled that I have the opportunity to be a part of it, but it comes with amazing responsibility, and it is daunting, I will say, every day when I look at the technologies and capabilities we can provide, to ensure that we truly do no harm and that we really do improve the lives of people around the globe.

Spot on, and a nice way to close it, Steve, I appreciate it. That was a great summary of the potential future impacts of AI in healthcare, and of course it's hard to tell how history will reflect on this particular inflection point in the adoption of AI. So, sticking with our movie theme, Michael, please humor me as I go a little Interstellar for my final question. Knowing that it's a purely hypothetical exercise, imagine that you can travel ahead, let's say five years into the future, and then look back at this moment in time. What do you think we will say the healthcare industry got right in the implementation of AI, and what do you think it might have gotten wrong? So let me put someone else in the hot seat for a change. Michael, what are your reservations about the implementation of AI in healthcare, if any?
Taking that approach where we go through the looking glass and look back at what is now history, which is the current moment in time, what do you think is the one thing that we might have gotten wrong or might have gotten right?

Okay, so you used Interstellar, and I didn't know you were going to, so I'm going to throw this right back at you: listen to Murphy. Listen to Murphy; it's good advice. That means listen to the docs, listen to the doctors. We can go into Interstellar later, but do not develop the technology for the sake of developing the technology. Develop the technology because Doctor Q and my PCP and your PCP and Steve's PCP and every single other PCP and specialist said, "This is what we need to do," and then take that as our mission statement. If someone had listened to Murphy in Interstellar, it would not have been a great movie; it would have been very short, a simple story. And as I said earlier, I don't think there's any coming back from making enormous mistakes right now. I don't think we get that second chance, and retrospectively looking at the what-ifs is going to be a colossal waste of energy and time that we, as a society, do not have. So if I were five years from now, looking back, I would keep my same position. I would say: talk to the doc. Talk to the doc, figure out what the doc is telling you. Help be the bridge, as Doctor Quinonez is, a technologist as well as a physician. Most physicians are not technologists, and because most physicians, NPs, MDs, RNs, NAs, and everybody else are not technologists, the needs they express operationally on a day-to-day basis are important. If you are in a position of authority, if you are in a position to impact change, do the Gemba walks across your hospitals in the morning, do the rounds with your physicians. Listen; use the filters that Steve spoke about. Listen to what's happening, and more importantly, listen to what's not happening, and feed that back to the clinicians and ask, "What is it that we need to do?" Do not develop technology for technology's sake; it is the wrong path. Develop technology to solve and resolve an issue. That's my advice.

Doctor Quinonez, do you agree with Michael?

Wholeheartedly. We've seen it so many times in healthcare IT and in technology in general. We look at things that are always being invented, coming up from startups and so on; they're great ideas, they were birthed in a lab, but they sometimes fail to actually go to the end user to really understand what problems that user is trying to solve. It may be a great idea, but it's not really solving the problem, or if it is solving the problem, it's adding too much friction, or more friction. Michael is 100% right about surveying the clinicians, and when I say clinicians, that's everybody who takes care of a patient, everyone involved in patient care, whether it's bedside or ambulatory: really understand what their pain points are. And then also the patients, really understanding their pain points. What is their patient journey like?
Is it fragmented? Is it fractured? Are they confused? Do they even know what the next step is in their healthcare journey? If, God forbid, they get a devastating diagnosis of cancer, what do they do next? They have so many questions, and a lot of the time they don't have answers. So they have to be considered as well. My fear for the next five years is really two things. One: what happened yesterday with the Biden administration is a good step. The EU did this back in April of 2021, when they started a governance-type initiative, so we're late to the game, but we're in the game, and I think it's important. Over the next five years I hope it actually helps structure, or prevent, some of the things we talked about earlier, but I also hope it doesn't overshoot and prevent innovation, because that can happen as well. It's a delicate dance. There obviously has to be governance, but with room to actually use and grow the technology without stifling it; it just has to be very responsible. So my first fear is that we may overshoot and prevent innovation. The other fear I have is that these technologies are going to be amazing, and I just don't want to see rural health and safety-net hospitals and healthcare systems like that left out of the conversation. They have to be considered as well. They may not have the budgets to adopt and bring these tools into their workflows, but there have to be ways to do that. So that's another fear I have.

Those are two excellent points, because I think that's going to determine the success in the long run, and we've always seen that when you exclude certain groups and certain institutions, it's never good in the long run. So, Steve, anything else you wanted to add to close us out? You asked about the future, but looking back from it at the current state, right?

Yeah, I understand: traveling into the future and looking back at the current state. The question that I run through my head is: have we been irresponsible in how we've approached this at this point? Have we unleashed tools that right now have the potential to do significant harm, without any control, without any regulation, without any thought for what will happen with that? And how do we bring that back under control in an appropriate way without stifling innovation, without stopping the growth, without stopping the development of what we're seeing? I think we are on the advent of many, many wonderful things. At the same time, we are also on the advent of enabling those who choose to do harm to do great harm. It becomes a very, very delicate balance. I agree that the executive order brought forward earlier this week is very appropriate. Some say it's late to the table, some say it's, I'll say, appropriate at the table; it depends on your perspective.
I think the capabilities that we see growing and the tool sets that we see being built need to be handled responsibly, and that, to me, is the biggest challenge of this whole revolution we are seeing: how to handle it responsibly and ethically, and to do it in a manner in which we do no harm.

I don't think we can end on a better line than that, so thank you so much, Steve, that's really great. And thank you very much, Doctor Quinonez, Michael, and Stephen; what an incredible discussion. I, at least, learned a lot about the potential of AI in modern healthcare and the importance of making sure that we, frankly, get it right, and I agree that there are no do-overs here; we've got one shot at this. So this brings our webinar to a close. Many thanks to Dell Technologies and WWT, or World Wide Technology, for making this webinar a reality. The long-standing partnership between Dell Technologies and World Wide Technology in healthcare combines Dell's expertise in innovative technology solutions with World Wide Technology's extensive knowledge in deploying advanced systems, creating a powerful synergy that drives the development and implementation of cutting-edge, AI-driven healthcare solutions. The collaboration also facilitates the integration of state-of-the-art infrastructure, software, and consulting services, fostering a comprehensive approach to revolutionizing healthcare through technology, which in turn enhances patient care, operational efficiency, and medical advancements. Please note that this webinar was recorded and will be posted here on the Dell Technologies Global Industries BrightTALK webinar channel. The channel is continually being updated with webinars, roundtables, and other presentations, so please check back as often as you can. You can also go to dell.com/healthcare and wwt.com for more information on the companies and solutions you heard about today. And last but not least, thank you to all of our listeners; we appreciate you being here and look forward to continuing the conversation with you.