Business | Pexels by fauxels
Whether it’s accelerating research in the lab or augmenting physician decision-making in the clinic, artificial intelligence (AI) has seemingly limitless potential to transform healthcare.
With the recent launch of the Department of Biomedical Informatics at the University of Colorado School of Medicine, the CU Anschutz Medical Campus is at the forefront of integrated computational technology and AI. The department trains the next generation of biological and clinical informaticists who support CU Anschutz’s initiatives in research, clinical care and other areas of informatics.
Casey Greene, PhD, is the department’s founding chair, a professor of biomedical informatics and a national expert on computational biology and artificial intelligence. In this episode of CU Anschutz 360, Greene discusses ethical issues around AI, the rise of biobanks and personalized medicine, using technology to improve patient care, a general skepticism about the effectiveness of AI in medical care, and the peculiar, AI-related connection between chihuahuas and blueberry muffins.
The buzz around ChatGPT
He also addresses the buzz around ChatGPT and large language models. “I think we’re going to see a lot of new uses of these models,” Greene said. “You could fine-tune it to work in some settings more effectively, whether that setting is healthcare, thinking about electronic notes; all of these elements provide an opportunity for tuning.”
Co-hosting the discussion are Thomas Flaig, MD, CU Anschutz vice chancellor for research, and Chris Casey, director of digital storytelling in the Office of Communications.
Podcast transcript:
Chris Casey: Casey, it's great to have you with us here today. This is a vast topic we'll be discussing today, AI in healthcare. So, you've been described as a computational biologist. Can you describe what that means and what your focal areas of research are?
Casey Greene: Yeah. So, in our lab we use the tools of computer science to study biology. This has applications in many different areas. So we work on cancer, we work on understanding infectious diseases. But at the end of the day, it comes back to using the tools of computer science to study biology.
Thomas Flaig: When you're talking to a general audience, someone you bump into walking down the street, how would you encapsulate what is meant by artificial intelligence, broadly, in how you define it?
Casey Greene: A couple of years ago I would've given you a much more technically precise definition. But I think it's entered the popular conversation so much that, at this point, I would think about it as algorithms that respond in different ways based on the data that they're being fed. If you're a computer scientist, you're going to find that to be an overly broad definition, but because of the rapid uptake of these types of methods, I think for me it's the definition that works at the moment.
Thomas Flaig: Intrinsic in your answer was the algorithm. It's a mathematical, computational approach to understanding data and datasets and their differences, right?
Casey Greene: Yeah. And it's interesting. When I was being trained, when we were thinking about algorithms, we were thinking about algorithms that were largely human-defined, a rule set, or, if X, then Y. Everything still ends up encoded in the same way, but now instead of having so many human-defined rules, often we set these things to learn from data, and then make a prediction or choose an action based on the input data.
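To make that distinction concrete, here is a minimal Python sketch (the lab values, labels and threshold are invented for illustration): the first function encodes a human-written "if X, then Y" rule, while the second derives its decision boundary from training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Human-defined rule: the threshold is written by a person ("if X, then Y")
def rule_based_flag(lab_value, threshold=4.0):
    return lab_value > threshold

# Learned rule: the decision boundary comes from (toy, made-up) training data
X = np.array([[2.1], [3.8], [5.2], [6.0], [1.9], [4.7]])  # lab values
y = np.array([0, 0, 1, 1, 0, 1])                          # observed outcomes
model = LogisticRegression().fit(X, y)

print(rule_based_flag(4.5))        # driven by the hand-written rule
print(model.predict([[4.5]])[0])   # driven by whatever the data showed
```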
Thomas Flaig: Maybe I can sneak in one other high-level question here as we kick things off though. So, you have a patient sitting in a clinic. I'm able to do clinic, I do it every week, it's an integral part of my professional life, and I think about the patient point of view. If they're in clinic talking to their provider right now, is it right for me to think about AI as something in the future that's going to be part of their care, something that's going on maybe right now and emerging, or is it something that's actually been going on for a while and we just really haven't acknowledged it or called it that?
Casey Greene: Yeah, I think it's going to be a bit of a mix of all three. This data-driven decision-making that includes more computation in the loop, I think that has happened for a while. In terms of approaches to identify potential factors to look into that could lead to a clinical trial, that's been a widespread use of these AI and machine learning methods for quite a few years.
On the other hand, we're also seeing a pretty significant transformation in what these systems are capable of. If you imagine someone sitting with their provider now, I think hopefully the conversations of the future are different. Hopefully they're better, hopefully they're more personal. Because things that currently require a human to do, hopefully we can automate some of those tasks and let providers focus on what they're really good at, connecting with patients and understanding what's not going to be in the data that gets entered after that encounter.
Chris Casey: Getting to a specific scenario, perhaps in the clinics, and an area where AI can intersect with patient care, I'll use an example everybody's familiar with, say, radiology. You go in, you get an X-ray, you've broken your leg. Supposedly, AI could help lead to a faster diagnosis. Do you have any examples of how, say, in a radiological setting, AI or these technological advances could help a clinician see something in an X-ray that maybe they didn't see before, and thus a better outcome for the patient?
Casey Greene: I think it's important to recognize that these systems examine this type of data differently than a human does. So if you're using human reasoning as the analogy for how AI in radiology works, that's an incorrect analogy, and I don't think it helps to think about things in that way. This is good and bad. On the positive side, it means that these algorithms can key in on things that a physician, a radiologist, might not notice. Sometimes what we would say is the error modes can be orthogonal, which essentially means they make different mistakes. Which is actually valuable, because then if there's a disagreement, you can put more time and attention into resolving it.
With AI in radiology, my hope or expectation would be that as these systems get deployed, providers can focus on the most challenging cases. Because one of the places where these methods do struggle is the rare example. If something occurs extremely infrequently in the data the algorithm has been trained on, the algorithm will be less likely to succeed there. So, we often say these algorithms require a lot of training data; they require a lot of examples. Humans, on the other hand, are very good at extrapolation. Being able to have humans contribute what they excel at, while also being supported by tools that can make the routine diagnosis easier, I think is a real area of potential.
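A rough sketch of what "orthogonal error modes" buys you in practice (the reads below are hypothetical, not from any deployed system): when two readers with different failure patterns disagree, those are the cases that deserve extra time and attention.

```python
# Hypothetical reads: 1 = finding suspected, 0 = no finding
model_read     = [1, 0, 1, 1, 0, 0]
clinician_read = [1, 0, 0, 1, 0, 1]

# Cases where the two error modes diverge get routed for closer review
needs_review = [i for i, (m, c) in enumerate(zip(model_read, clinician_read))
                if m != c]
print(needs_review)  # [2, 5]: disagreement flags the hard cases
```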
Thomas Flaig: So, in medicine there are mistakes made by providers. Do we feel different about a mistake made by a human being than by artificial intelligence? Do we just fundamentally feel differently? What do you think about that concept?
Casey Greene: I don't know. This comes up on the topic of self-driving cars. There's a belief or expectation that the performance needs to be superhuman for us to feel OK about it. And really, this is a topic that folks in the Center for Bioethics and the Humanities here, like Matt DeCamp and others, should certainly weigh in on. To me, that does feel right. The performance should be superhuman; it should result in a lower overall rate of errors and better overall care. I think that's really important. I do think that building systems that don't replace a human, but that think very consciously about how to augment human decision-making, is a better place to be.
Thomas Flaig: Yeah, it's really interesting. And the self-driving car is one of the things I was thinking about. You could imagine an analogy to reading radiology scans and these other things. But if you reduced the number of accidents with self-driving cars by 50%, you'd still have that accident that occurs with the artificial intelligence, and I think we would feel differently about it, although I'm not going to ... You don't have to cover that further. I think you made a good point here, too. So, again, think of this as a provider in clinic: there are some more routine things, decisions, analyses. For example, a very simple artificial intelligence: if the lab is abnormal, you get a little mark next to it that triggers me as a provider to look at it. So you can imagine this technology driving that, augmenting versus taking over certain decisions. I suppose that's one of these balance things again.
Casey Greene: Yeah, I think the best outcomes for this type of use in healthcare are going to be where, let's imagine, right now a lab value is out of range. But what you might want to know more is, “Wow, for this individual, this value is particularly unusual.” Maybe it's in range, but for that individual, it's out of their normal range. So that would be something you might key in on, even if it's in the normal range for the population. Or vice versa, something that's out of range for someone but where you expect that. Can we get you the right information at the right time and avoid extraneous information, so that you as a provider can really focus on your patient, on listening to your patient, and on getting the things from those conversations that can't be obtained any other way?
Thomas Flaig: And it's interesting, because I still remember, as a trainee, as a medical student, coming to the hospital very early to go and get the radiographic scans, to actually get the films, as we used to call them, and screw around with them. Now, it's all automated. I remember I actually wrote clinic notes, which were difficult to read compared to where we are now. So this is just building on that, and I'm not that old, of course. So maybe to your point, imagine a lab is abnormal. What do I do when that happens? I plot out the last 20 labs and I say, “Oh, this isn't abnormal for this individual. It's actually improved from where it was.” But that's something I have to do, deciding which one I'm going to spend the time to click through and look at. So artificial intelligence could actually flag a different column as abnormal for this patient, or something like that.
Casey Greene: Yeah, exactly.
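A toy sketch of the "abnormal for this patient" idea Greene and Flaig describe (the values, cutoff and z-score approach are illustrative assumptions, not CCPM's or UCHealth's actual method): flag a result that departs from the patient's own history, even when it sits inside the population's normal range.

```python
import numpy as np

def flag_for_patient(history, new_value, z_cutoff=2.0):
    """Flag a result that is unusual for this patient specifically."""
    mean, sd = np.mean(history), np.std(history, ddof=1)
    return abs((new_value - mean) / sd) > z_cutoff

# Last 20 results for one patient, all inside the population's normal range
history = [13.8, 14.1, 13.9, 14.0, 14.2] * 4
print(flag_for_patient(history, 12.4))  # True: normal for the population,
                                        # unusual for this individual
```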
Chris Casey: I was curious to read a recent Pew Research Center survey on AI in healthcare and how comfortable or uncomfortable Americans were with integrating AI into their doctor's care. That survey showed that 60% of Americans would be uncomfortable with a healthcare provider who relied on AI to diagnose their disease or recommend a treatment. Only 38% felt that using AI to diagnose a disease or recommend a treatment would lead to better outcomes. So people seem hesitant about embracing this. I'm curious what you think of those statistics and how you think that could be gradually improved.
Casey Greene: In the technology field, there can be an emphasis on move fast and break things. And when you're dealing with healthcare, move fast and break things is not an appropriate strategy. Let's go back to self-driving cars. When we look at the rollout of self-driving cars, there were quite a few mistakes made. In fact, there continue to be examples where cars make mistakes that humans wouldn't make. That demonstrates what happens when a technology gets ahead of the conversation about ethical deployment. I think it's really important that as we're talking here, we're not talking about an algorithm making a decision. What we're talking about is an algorithm providing the support for a provider to make that decision. I suspect that understanding the nuances of that is really important, and it's also why groups here, like the Center for Bioethics and the Humanities, are so important, so we can understand how we need to think about responsible deployment.
Thomas Flaig: I think part of this question, too, is how a person being polled would actually define and understand artificial intelligence. Because going back to one of our earlier comments, it's probably integrated into some things that are happening already, maybe below the level of awareness.
Casey Greene: Yeah, I mean, if you asked me if I wanted to be seen in an AI clinic where I just showed up and some computer told me what the outcome was, I'd say absolutely not. Right? I want that human connection. You want to meet your provider, you want that connection, because I think care isn't just about the diagnosis. It's about the experience that you have. And if you have a better experience, you'll likely have a better outcome. It's not just a diagnosis.
Thomas Flaig: As you look across sectors and think about research and development in artificial intelligence, a sector like automotive might be on the front end of this. Healthcare wouldn't be on the front end, I wouldn't think. I don't know if you agree with that, and if so, why?
Casey Greene: I would agree with that. I don't think healthcare would be on the front end. We have a pretty extensive regulatory process around healthcare, which is appropriate. I mean, it touches people's lives in so many different ways, and so it's appropriate to take things cautiously, to think about the ethics involved. And like I said, it's not a move-fast-and-break-things type of environment. I'm also not entirely sure that cars are a move-fast-and-break-things type of environment either. If you think about the places where this type of technology has really taken off, those are often things like advertising, which is probably a place where a different regulatory environment exists. I wouldn't put healthcare at the forefront, but that's on the implementation side.
On the research side, take the studies we briefly mentioned earlier around how AI can outperform radiologists in some settings, although there are some caveats with those that we can come back to. In that type of research setting, I think we should do research as rapidly and as ethically as we can, because we need to build the toolkit. But then deciding where to deploy that toolkit, to me that's where it becomes an ethics question and where I think we should be more cautious.
Chris Casey: And turning the conversation a little more directly onto our campus, can you cite some examples, Casey, where the CU Anschutz Medical Campus might be a bit ahead of the game in terms of responding to healthcare problems – where AI or computational biology can be integrated? And what advantages do we have as a campus in that area?
Casey Greene: One of the areas I think we should recognize is one that crosses the bounds of genetics and informatics and uses them to guide care. We have something called the Colorado Center for Personalized Medicine (CCPM) here. It was started seven or eight years ago, with really remarkable foresight. The goal of this center is to use genetics to improve care delivery. And so this is an example of trying to put the right information in the right person's hands at the right time to make the correct decision. The way this is structured at the moment is there's a biobank study. Individuals who are seen at UCHealth can decide whether or not they'd like to consent to participate. And if they consent, the next time they have a blood draw, a separate part of that can get sent to our biobank.
At the biobank it can be genotyped. And then based on that, results can be returned to the electronic health record, where, if a provider is going to prescribe, for instance, a medication that might work differently for that individual, the provider can receive an alert to suggest, “Oh, you might want to select a different medication, or you might need to adjust the dosing to achieve the desired effect.” And so this is a case where there was a lot of planning and forethought into how to do this ethically. It went through all the appropriate institutional approvals. There was a lot of thought about deployment and how this can be used to improve care, and it's really this concerted pipeline of people and informatics technologies that makes it possible.
Thomas Flaig: And one of the fascinating things about that effort is really its integration within the electronic health record. What you're describing works with a routine blood draw, after patients consent to and understand the process. And for the provider ... Providers are already fairly overtaxed, I think, on multiple different fronts: physicians, nurses, the whole healthcare team, really. Do you see ways in which this can actually unburden healthcare workers and help support the workforce in that regard?
Casey Greene: Yeah, I mean, acutely. If someone is prescribed a medication when they would personally benefit more from a different dose or a different medication, getting them the right medication at the right time is a way to help them. They'll have more successful outcomes, and it also reduces the load on providers, because otherwise you have to deal with understanding why a medication's not working. Maybe then you have to, post hoc, send out for a genetic test to understand why it's not working. Knowing that upfront prevents a lot of unnecessary challenges for the patient, which is what I think of as improving their care journey, but it also prevents a lot of unnecessary work for their care team.
Thomas Flaig: And even beyond this example, I don't know if you think there are ways, in general, that AI could help. Because I think a lot of us are concerned now about the healthcare workforce. Can this be applied, through the EMR or more broadly, to help support what's become a very complex environment in which to work?
Casey Greene: I don't want to go too far out of my lane here, but some of the work that's happened, like Project Joy, a UCHealth effort focused on the burdens that nurses face and how to alleviate them, is using these types of tools to improve not just the outcome for patients but also the environment for the care team. I think those two things are interlinked. If you want to deliver the best possible care, you really need people who are not overtaxed by things that can be simplified.
Chris Casey: And it's interesting, we're talking about healthcare professionals being overtaxed. I think of an example being an emergency room setting, where every decision has to be made very urgently and very quickly, and those folks are obviously operating off whatever information's available in the electronic health record. Are there ways, in that setting when time is of the essence, that an ER doc can benefit from some sort of algorithm working through the electronic health record that helps them reach a more accurate diagnosis?
Casey Greene: Yeah, again, I don't want to get too far out of my lane and talk about things that are really innovations of others, but the Virtual Health Center I think is a really amazing resource that UCHealth has constructed. And what this lets them do ... There's a really great example of work that CT Lin and others have done looking at how to use sepsis models, models that might predict a patient's rapid deterioration. One of the challenges with these models is that they can be noisy. What I mean by that is, you don't want to miss someone's rapid deterioration, but if you push every alert to the care team on the front lines, it's going to create a bunch of additional information, only some of which needs to be acted on. And so, one of the really amazing things that UCHealth put together is this Virtual Health Center.
Providers at the Virtual Health Center can receive these alerts, integrate them, and then let the team who's interacting with the patient know when there's something they need to act on. One of the challenges with these algorithms is that you have to tune how much you're willing to miss. How many poor outcomes are you willing to miss to reduce the noise of over-alerting? And thinking about the complete picture: can we get a more sensitive, accurate algorithm if we have this additional human layer involved? To me, that balance is brilliant, right? Because you can get the benefits of the technology much earlier than someone could somewhere else. That investment is going to pay off a lot earlier, and you don't overburden your healthcare workers.
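A simplified sketch of the tuning trade-off Greene describes (synthetic risk scores, not a real sepsis model): lowering the alert threshold catches more deteriorations but raises the volume of alerts someone has to triage.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 1000)                       # synthetic risk scores
deteriorated = rng.uniform(0, 1, 1000) < scores * 0.2  # outcome tracks risk

for threshold in (0.3, 0.5, 0.7):
    alerts = scores >= threshold
    sensitivity = alerts[deteriorated].mean()  # share of true events caught
    alert_rate = alerts.mean()                 # share of patients alerted on
    print(f"threshold={threshold}: catch {sensitivity:.0%} of events, "
          f"alert on {alert_rate:.0%} of patients")
```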
Thomas Flaig: Shifting gears just a little bit, thinking about research informing healthcare and so forth. We think of randomized clinical trials as the way we discover new drugs, improve drugs, and understand new techniques and diagnoses. The strength of those is that they're so regimented, with specific protocols around them. The downside is that they're still regimented, with so many programs around them. And also: who gets into the trials, and are the patients being studied representative of the broader population? This was highlighted, I think, through COVID and some of the work that was done there. How can artificial intelligence help us with existing data and future data to make sure it's representative across gender, racial and other ethnic groups?
Casey Greene: Yeah, I mean, I think this is a really important challenge. It's a challenge in the field of artificial intelligence as well, because if your training data are non-representative, your predictive models will not work the same way for everyone. This is an area where the National Institutes of Health has put a lot of interest. A couple of years ago now, it started issuing calls for a new program called Bridge to AI, which was designed to generate, essentially, the healthcare data for the next generation of these algorithms. And they put a lot of thought into fairness, how representation would be handled, and how we could really build data that reflect the diversity of our country. So I think this is a really exciting program. It's now underway. In fact, there's a coordinating center associated with it, and two of its leads are faculty in our Department of Biomedical Informatics.
So we have a lot of experience with understanding this program as it's getting underway. And I think that type of effort is going to be really important to make sure that we do have benchmark, representative data. Then I think the key is going to be: can we use these types of algorithms to understand what's missing in the common data that tend to be used? There's another faculty member in the Department of Biomedical Informatics who's done recent work using machine learning and artificial intelligence methods to annotate publicly shared genomic samples, providing structured information about what tissue they come from and other features, so that we can understand what's missing. What are we blind to? Because if we don't know what we're missing, we're going to struggle to build algorithms that can work.
Thomas Flaig: And I'm really glad you mentioned the new department and your leadership of it. I think it's a really exciting development for our campus. It's the first new department of the School of Medicine, I think, in many, many years. Do you want to say a few things about the department, how it's grown up, and maybe how unique it might be in the academic sphere overall?
Casey Greene: Yeah, so I think we're the first new department (at the CU School of Medicine) since Emergency Medicine. We are a department of researchers, physicians and educators focused on the development and deployment of ethical technologies that advance research and care. That means we have leaders in the coordinating center for the NIH's Bridge to AI effort, Moni Munoz-Torres and Anne Thessen, who think about how you put data and communities together in structured ways to solve these kinds of key problems.
And then the other faculty member I was talking about, Arjun Krishnan, recently joined us. His work has been on using machine learning and artificial intelligence to fill in what's essentially missing metadata, data about the data: information we need in order to understand where our gaps are, but that no one ever took the time to write down. He's developing automated methods that go back to the data themselves and help write that down. These are good examples of how we're finding people who are building the types of technologies, and the types of communities, that are going to be necessary to solve essentially the hardest problems of AI in medicine.
Thomas Flaig: Are there many departments like this across the country or not?
Casey Greene: If you look around, we were not the first institution to create a Department of Biomedical Informatics, and I'll be shocked if we're the last; we might be the most recent. As data become more and more central to research and care, these departments are growing, and even where there aren't departments, the investment is growing nationally. On the other hand, I think our department and our campus are unique in some ways. There were a couple of things that really attracted me to CU and made me think we can build something here that is different from what exists elsewhere. The first is that in academia there can be a focus on academic work products, the peer-reviewed papers. These things are important, but they're a means to an end. The end is improved research, new discoveries, new therapies, or improved care and better care pathways that can reduce burden and lead to better care. Those are tangible, real-world outcomes.
When I looked, and this is going to get really esoteric, into what the promotion and tenure guidelines were at CU, there was a recognition here that I did not see elsewhere around the importance of producing meaningful positive change in the world. What that meant is that when we were creating a new department, we could write promotion and tenure guidelines for the department, wholly consistent with the guidelines of the school, the campus and the university system, that recognized real-world products. Software that's used throughout the world to deliver improved care can count. That matters; that's actual impact.
So we were able to focus how we think about faculty, researcher and teacher contributions on what matters. And I feel like we weren't beholden to a 200-year-old system of evaluation. And to me, yeah, you could say there have been departments of biomedical informatics over the last 10 years, and there are going to be departments of biomedical informatics over the next 10 years. But I think we have the potential here to have one that is unique, because there's an openness here to making a difference in the world that not everywhere has.
Thomas Flaig: Yeah, that's really refreshing to hear. Thanks.
Chris Casey: Casey, this will not come as news to you, but ChatGPT has been getting a lot of attention in the news, and this is another area of machine learning and artificial intelligence. How can we better understand the development of ChatGPT, in light of the broader topic we're discussing here with healthcare?
Casey Greene: ChatGPT is a couple of things put together. One is large language models. Let's just say you're willing to spend an inordinately large amount of computer time to understand routine text. It's essentially training these types of models on a representation of language, but it's extremely complex to calculate; we'll put it that way. Those have been advancing quite rapidly over the last decade; really, over the last five or so years, there has been a lot of progress. Under the hood, ChatGPT is a large language model, but it's combined with a couple of key things: a set of prompts that make it useful and software that makes it really accessible. This goes back to what matters, what's real-world impact. The real-world impact of a large language model is up for debate, but when you pair that with effective software and this starting point for people to really access it and use it, all of a sudden the impact is magnified many, many fold.
And so you go from this sort of esoteric academic topic to something that's useful. I mean, it was remarkable watching this rollout over a week. This capability had existed, not quite at that level, but pretty close. And in a week you go from people not really having any clue what it is to, all of a sudden, this is going to transform the world. It felt like a switch had been flipped. And I just think it's important to recognize that switch wasn't just one piece; it was these pieces put together. I think that's why focusing on real-world impact is so important, because then you've set people up to think not just how can I advance the research, but how can I advance the research in a way that has a tangible impact?
Thomas Flaig: And one thing I haven't understood about this: is this akin to a really early version of this technology, or is this a later-stage model where we won't see a lot of development in the next few years?
Casey Greene: This is not V1 in terms of large language models; this is a relatively advanced model. On the other hand, I think what we're going to see is a lot of new uses of these models, and those uses are going to guide how we approach these types of problems in a way that's likely to feed back and improve the models. So, you can take a generic large language model and use it as ChatGPT is being used, but you could also tune it. You could fine-tune it to work in some settings more effectively, whether that setting is healthcare, thinking about electronic notes; all of these elements provide an opportunity for tuning. So I think what you're going to see is ... I mean, you'll continue to see development of large language models. I think there are still advances to be had there, but you're also going to see a lot more emphasis on how they're deployed and used.
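As a hedged sketch of the fine-tuning Greene describes, using the Hugging Face transformers library (GPT-2 stands in for any generic large language model, and clinical_notes.txt is a hypothetical file of de-identified note text, one note per line):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token    # GPT-2 ships with no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical corpus of de-identified clinical note text
notes = load_dataset("text", data_files="clinical_notes.txt")
tokenized = notes.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="notes-tuned-model"),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the generic model adapts toward the clinical-note domain
```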
Chris Casey: Shifting gears a little, when I think back a couple of decades, probably the biggest scientific advancement out of the ’90s was Dolly the sheep, right? Genetics and cloning were the huge accomplishment of, say, the ’90s. And I'm wondering, as we get into this third decade of the 21st century, do you envision, Casey, a Dolly-the-sheep-scale scientific breakthrough coming through biomedical informatics somewhere? Or where do you think this decade's definitive scientific breakthrough will land?
Casey Greene: This decade, it'd be pretty tough to argue against things like RNA vaccines as a major advance. So maybe biochemistry. I'll push back a little bit on Dolly the sheep being the major breakthrough when it comes to research. More broadly than that, I'll agree with the field of genetics as kind of a key. The return on investment for a lot of that research in genetics is happening now. The work that CCPM is doing is a place where genetics is guiding care at scale, and that takes a long time. So I do think, not just this decade but the preceding one, a lot of work has happened that is laying the groundwork for those types of informatics technologies to be transformative in the future.
I actually think it's often very hard to point at one specific advance as the moment. If you wanted to draw an analogy with ChatGPT and Dolly, Dolly took this sort of work that had been happening and put it in the public consciousness in a way that I don't think it was before. And I do think ChatGPT has done that.
Thomas Flaig: We've been talking today about a broad topic in a positive way, and we can see the benefits that come from this; I think those are very clear. As we think about this, are there any cautionary tales or cautions that you've seen in the wind as an expert in this area?
Casey Greene: There's a classic example now ... and when I say classic, I mean it's less than 10 years old. But there's a classic quip that noted that AIs have a hard time distinguishing chihuahuas and blueberry muffins. If you look at the error modes of these algorithms, this is sort of key. In some ways they're very similar to human visual perception, but they work differently. If you look at a chihuahua and you look at a blueberry muffin ... I realize this is audio only, so I can't show a slide. But there are pictures of chihuahuas and there are pictures of blueberry muffins where three blueberries sit where two eyes and a nose would be, and they're not that easy to distinguish, even for a human. But a human does think about this differently.
And if I tell you I need you to tell me if this is a chihuahua or a blueberry muffin, you're going to look at those extra defining features. These algorithms work differently. So, you're asking whether there are examples of how this could be destructive to deploy in practice. In the chihuahua-blueberry muffin example, the implications are pretty different if you're just deciding on your breakfast. I think ... Sorry.
Chris Casey: That's a true statement.
Casey Greene: That's right, very true statement, especially if you have a small dog at home. One of the things that has come out with folks working on radiology images especially: there was a set of folks in computer science who moved into this space and wrote a bunch of papers about how AI was going to replace radiologists. Another individual who works in the field looked at that and said, well, this is a little bit unusual. How is this working? And he started looking at why those AI systems were making the predictions they were making. In many cases, those predictions were driven by things that should have been irrelevant to the image. One of the predictors of a poor outcome for a patient, and this was predicting pneumonia, was the word “portable” showing up. So the system learned to look for “portable” as an indicator of pneumonia, but it's also an indicator of someone who can't go to the scanner.
The chihuahua-blueberry muffin example is a bit flippant, but we need to understand why these algorithms work, why they don't work, and what the failure modes are. If we were just going to deploy them as-is, we would have to be willing to accept those types of errors, and I think we should be, and would be, less accepting of those types of errors. Maybe this goes back to your Pew poll from earlier. That could be the type of system folks are imagining, and I can understand why 60% of Americans would be uncomfortable with that. I'd be uncomfortable with that.
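A toy reconstruction of the "portable" shortcut (the report snippets and labels below are invented, not the actual study data): a simple bag-of-words classifier "predicts pneumonia" almost entirely from an acquisition keyword that has nothing to do with the lungs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reports = ["portable chest film, bedside",   # sicker patients: bedside scans
           "portable AP view, ICU",
           "standard PA chest radiograph",   # ambulatory patients
           "routine outpatient chest film"]
pneumonia = [1, 1, 0, 0]  # label is confounded with how the image was taken

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(reports), pneumonia)

# Inspect what actually drives the prediction: 'portable' dominates
for word, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```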
Chris Casey: Well, I think we've covered a lot of ground here with this discussion. We've gone through algorithms, artificial intelligence, computational biology, biomedical informatics. It's exciting that our campus here at the CU Anschutz Medical Campus sits at the forefront of a lot of this, with your leadership, Casey, and the new Department of Biomedical Informatics. So thank you very much for your time. This has been a very enjoyable discussion, and we appreciate it.
Thomas Flaig: Great discussion.
Casey Greene: Thanks. It's been wonderful.