AI in Everyday Health Care – Opportunities and Responsibility
More and more patients are using artificial intelligence (AI), including in a health care context. Medical terms, findings, or information from a doctor’s visit can be looked up and explained quickly. For AI to provide meaningful support, one aspect is crucial: the way questions are asked. This is exactly where prompting comes into play.
What Is a Prompt?
A prompt is the specific input, that is, the question or instruction you direct to an AI. In the medical field especially, an AI’s answer can only be as helpful as the question it is given.
AI does not understand content the way a human being does. It evaluates language based on statistical patterns and probabilities. Unclear or very general questions therefore inevitably lead to unspecific answers. Precisely and objectively formulated prompts, in contrast, increase the likelihood of receiving a helpful and understandable explanation: asking, for example, “Please explain the term ‘arterial hypertension’ in plain language” will usually yield a more useful answer than “What does my report mean?”
Why Clear Questions Are Important
AI systems respond to the information entered. Although they can ask further questions in a dialog, they cannot reliably recognize which details are medically decisive or which information may be missing. Precisely formulated prompts therefore increase the likelihood of receiving understandable and easy-to-interpret answers. However, these should always be understood as general information, not as a personal recommendation or basis for decision-making.
In addition, AI systems may also provide incorrect or outdated information without indicating this. They do not independently verify content for medical accuracy or individual relevance. This makes it all the more important to critically assess AI responses and not view them in isolation.
Meaningful Use in a Health Care Context
When used correctly, prompting can offer real added value for patients. Many medical documents are linguistically complex and difficult to understand, especially medical reports, findings reports, or package leaflets. Here, AI can provide support by explaining technical terms or describing general relationships in plain language. For many people, this is what makes medical information accessible in the first place.
Prompting can also be useful for preparing for conversations with doctors or pharmacists. Those who already understand the basic terms can ask more targeted questions, address uncertainties, and take a more active part in the conversation. In this sense, AI can help strengthen one’s own health literacy and make it easier to find one’s way through the often complex health care system.
It is important to keep in mind that general information cannot automatically be applied to one’s personal health situation. Individual symptoms, pre-existing conditions, or concomitant medications cannot be reliably captured or assessed by AI.
Clear Limits of AI in the Medical Field
As helpful as AI can be in explaining medical content, its limits are just as clear – and these depend on the context in which it is used.
In clinical practice, AI-supported systems are already in use as support tools, for example in the evaluation of X-ray, CT, or MRI images. Such applications can flag abnormalities, automate measurements, or analyze large amounts of data. However, they remain assistance systems: the final interpretation of findings, the assessment, and the diagnosis are carried out by qualified medical professionals, who bear the responsibility.
This must be distinguished from the use of freely accessible AI systems by patients in everyday life. These systems are not approved medical devices, do not perform a physical examination, and do not know the complete medical history or individual risk factors. Even if an AI can ask follow-up questions in a dialog, it does not do so on the basis of medical judgment. It cannot reliably recognize which information is clinically decisive, and it cannot take a structured, responsible medical history the way doctors do.
Questions about the cause of specific symptoms or about an appropriate treatment should therefore not be clarified with a freely accessible AI. The answers may be incomplete, misleading, or incorrect and, in the worst case, lead to false conclusions. In addition, AI cannot clinically weigh uncertainties and bears no responsibility for its statements; in sensitive health matters, this can cause unnecessary worry or, conversely, a false sense of security.
Medical decisions belong in the hands of qualified medical professionals, who make an individual assessment and take responsibility for it.
Do Not Lose Sight of Data Protection
Another central point is the protection of personal health data. Health information is among the most sensitive types of personal data. Many AI assistants are external services for which it is not always clear to users whether and how entered data are stored, processed, or reused.
Patients should therefore be very deliberate about the information they provide. General, anonymized questions about medical terms or active ingredients carry comparatively little risk, as long as they do not allow any inference about one’s identity. Specific diagnoses, names, medication plans, or detailed courses of illness, on the other hand, should not be entered into freely accessible AI systems. Data protection is therefore an active component of responsible health literacy.
A Complement, Not a Replacement
AI can help to better understand medical information and prepare for conversations with medical professionals. However, it does not replace medical or pharmaceutical advice. Anyone who uses AI consciously, critically, and with attention to data protection can use it as a supportive tool, but should always keep its limits in mind.
Digital support is particularly helpful when it is used safely and in a structured way. The mediteo app can support you in keeping track of your medications and being well prepared for conversations with doctors or pharmacists – as a meaningful addition in everyday life, not as a replacement for personal advice.