ChatGPT in Medicine

  • VIEWofTwo - Column

One topic - two opinions

Some see the AI application ChatGPT as the breakthrough to a better world; others see it as the downfall of our cultural achievements. But what do the clever minds at VISUS think – people who must, of course, engage seriously with ChatGPT in order to steer their own software development in the right direction early on?

Dr. Daniel Geue - VISUS

Division Manager for Research & Development

ChatGPT is undoubtedly one of the most fascinating pieces of technology to emerge in recent years. At the same time, it is one of the most questionable – at least when it comes to its use in medicine. The reason for my disquiet is that ChatGPT packages errors and misinformation in smooth-sounding, appealing words.

Medicine is based on scientific knowledge. And as of today, it is not entirely clear how scientific findings can be incorporated into the results ChatGPT generates – or whether that is even possible at all. Some data indicate that only two percent of validated scientific knowledge goes into ChatGPT's responses. That number sounds plausible, considering that even small variations in the questions asked lead to completely different results.

In light of this, I find it alarming that some health (and health IT) experts are calling for technology like ChatGPT to be integrated more extensively into the medical field, and for development in this area to be prioritized so as not to fall behind internationally. Let's be honest: we have been struggling to get electronic health records up and running – and now we are supposed to focus more of our efforts on ChatGPT? By virtue of my job, I am a big fan of algorithms that make things easier for medical staff. But I am skeptical about using ChatGPT – or similar solutions based on the same principle – for medical diagnostics.

Martin Klingelberg - VISUS

Division Manager for Product Management

Do I trust something like ChatGPT to determine a diagnosis from the symptoms I type in? Absolutely! After all, the world's most famous AI bot is capable of passing medical state exams – as long as you ask it the right questions in the right way. Do I also trust algorithm-based diagnostics? Absolutely not! That is because ChatGPT does not disclose its sources and may draw wrong conclusions from the information available; it may hallucinate and invent false facts.

Despite all the challenges that generative text machines will bring, and all the positive changes they may make to our everyday lives, one fact remains: there will never be a medical field without medical practitioners. Because unlike ChatGPT, my family doctor can see, feel, measure, ask, and examine. Above all, my doctor can show me empathy, listen to my concerns, and take those concerns seriously.

And yet I welcome the opportunities that generative text machines (will) offer us, and I am optimistic that we will find ways to use them wisely and for the benefit of everyone – in our particular case, for patients. I don't yet know what these new AI solutions will mean for our customers' day-to-day activities, nor whether and how they could find their way into our products. But my curiosity drives me to explore these questions. Because I have a lot of faith in technology. And because I am confident that we will also use artificial intelligence, well, intelligently.