Data usage in medicine: smarter with AI
Technical insight into the development of software solutions that generate or use AI-based results is one thing. Having the procedural knowledge to evaluate and influence the quality of the content is quite another. Improving the quality of treatment in healthcare often requires more than "just" adequate technology. It needs solutions that generate reliable and transparent information from the available data. To achieve this, the substantive links between data and their quality must be identified and exploited.
A provider that has always embodied this symbiosis of technical and domain expertise is KMS Vertrieb und Services GmbH in Unterhaching. The company, part of the CGM Group, has been active for more than 20 years in data mining and data warehouse solutions for healthcare facilities and is very familiar with linking medical data for the purpose of knowledge generation. "The most important issue for us at the moment is, of course, the use of AI, whether that means optimizing billing processes or supporting medical professionals in their everyday work with AI-generated information," explains Nils Wittig, CEO at KMS.
Minimize screen time, maximize knowledge
In essence, the body of data that has steadily grown in healthcare facilities over the decades is consolidated, structured, and enriched with reliable outside knowledge such as information from package inserts, scientific data from studies, or generally available expert knowledge. “We do not create fundamentally new information with AI. Instead we link existing information with new knowledge and provide it where it is needed. One example is the diagnosis of rare diseases for which even experienced medical practitioners lack the appropriate expertise. Another is in nursing to rule out medication errors. In principle, the physician or the nurse could research the symptoms of a rare disease or the interactions of a medication themselves. But the time needed for this is often not available or could be used for something more useful,” according to Nils Wittig.
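The kind of enrichment described above can be pictured as a cross-check of existing patient data against outside knowledge. The following is a minimal, hypothetical sketch: the knowledge table, field names, and the specific drug pairing are illustrative assumptions, not KMS's actual data model.

```python
# Illustrative sketch: linking an existing medication list with outside
# knowledge (e.g. interaction information from package inserts).
# The INTERACTIONS table is a made-up placeholder for such a knowledge base.

INTERACTIONS = {
    # (drug_a, drug_b): note -- assumed external knowledge
    ("warfarin", "ibuprofen"): "increased bleeding risk",
}

def flag_interactions(prescribed):
    """Cross-check a patient's medication list against known interactions."""
    warnings = []
    meds = [m.lower() for m in prescribed]
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            # look the pair up in both orders
            note = INTERACTIONS.get((a, b)) or INTERACTIONS.get((b, a))
            if note:
                warnings.append(f"{a} + {b}: {note}")
    return warnings
```

A nurse-facing system built this way surfaces the warning at the point of care, so nobody has to research the interaction manually.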
Of course, data quality plays an important role in this context. Ensuring that the body of data fed into an algorithm is sufficiently reliable requires a rigorous data-quality test, one that automatically detects anomalies and helps keep the good data and sort out the poor data. According to Nils Wittig, "We have to be aware, however, that even with AI there is still no 100% certainty. The same applies to employees: humans also make mistakes. Healthcare professionals who use AI-generated knowledge have to know this. They have to learn how to handle such knowledge and identify where potential weaknesses may lie. And they have to do this with a full awareness that medical decisions cannot be delegated to AI but can instead be supported by it."
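An automated quality gate of the kind described can be sketched as a set of plausibility rules that split records into good and poor. The field names and rules below are assumptions chosen for illustration, not the checks KMS actually runs.

```python
# Hypothetical data-quality gate: flag anomalies per record, then keep the
# clean records and sort out the flawed ones together with the reasons.

def check_record(record):
    """Return a list of anomaly descriptions for one patient record."""
    issues = []
    if not record.get("patient_id"):
        issues.append("missing patient_id")
    age = record.get("age")
    if age is None or not (0 <= age <= 120):
        issues.append("implausible age")
    adm, dis = record.get("admission"), record.get("discharge")
    if adm is not None and dis is not None and dis < adm:
        issues.append("discharge before admission")
    return issues

def split_by_quality(records):
    """Keep anomaly-free records; sort out the rest with their reasons."""
    good, poor = [], []
    for rec in records:
        issues = check_record(rec)
        if issues:
            poor.append((rec, issues))
        else:
            good.append(rec)
    return good, poor
```

Only the records that pass the gate would then be fed into downstream algorithms; the sorted-out records, with their recorded reasons, can be corrected at the source.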
Increased efficiency means maintaining the quality
Healthcare facilities are now aware of how important support from AI is and will be, including as part of the decision-making process; the shortage of skilled professionals is making itself felt here. "A few years ago, increased efficiency in healthcare facilities meant job cuts and declines in the number of treatments. This has completely turned around. Today those responsible in hospitals and practices know that increased efficiency through growing digitalization is the foundation for maintaining the level of treatment provided. Accordingly, they are open to the idea of making the best possible use of existing data," explains the data mining expert.
With a view to using AI, he believes that in addition to data quality there is another factor that plays a critical role: to boost users' confidence in the results derived from software, the program must be able to admit its ignorance. Nils Wittig describes this as follows: "One of the known weaknesses of AI is its tendency to hallucinate. With an eye to increasing confidence in AI-based solutions, it is a key task for us to control this phenomenon. This is a fine line to tread, because we also don't want algorithms that can only make linear connections. Over the last few years, we have therefore concentrated on ensuring that links are created where they are actually present. At the same time, our task is to encourage AI to say 'I can't say anything here' rather than give a wrong answer. This is also difficult for many humans, I might point out."
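The abstention behavior in the quote can be sketched as a simple confidence gate: the system returns an answer only when its best candidate clears a threshold, and otherwise says so. The scoring scheme and the threshold value here are assumptions for demonstration, not a description of KMS's product.

```python
# Minimal sketch of "admitting ignorance": answer only above a confidence
# threshold, otherwise abstain explicitly instead of risking a wrong answer.

ABSTAIN = "I can't say anything here."

def answer_or_abstain(candidates, threshold=0.8):
    """candidates: list of (answer, confidence) pairs from some model.

    Returns the highest-confidence answer if it clears the threshold;
    otherwise returns an explicit abstention. The 0.8 default is an
    arbitrary illustrative choice.
    """
    if not candidates:
        return ABSTAIN
    best_answer, best_score = max(candidates, key=lambda c: c[1])
    return best_answer if best_score >= threshold else ABSTAIN
```

The design choice is that an explicit abstention is a valid, trust-building output: the clinician knows to fall back on their own research rather than being misled.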