Medical Information in the Era of AI

The advent of generative AI could be a game-changing development for medicine. Healthcare professionals and patients alike are grappling with an overwhelming volume of data, and the ability of modern AI models to summarize text and generate high-level abstractions holds immense potential to address this information overload. In this article, we explore how generative AI can alleviate these burdens, enhance patient interaction, and augment decision-making in the medical domain.

Extraction of Information

One of the most exciting capabilities of generative large language models, such as the GPT family underlying ChatGPT, is an astonishing ability to summarize text. Though summarization may seem like one of the most obvious uses of generative AI, modern models bring unexpected strengths such as contextual understanding. Given a complete text, ChatGPT can interpret it holistically, drawing connections not only between words, sentences, and paragraphs, but also between concepts and themes. The same model can perform these tasks in multiple languages, and when fine-tuned (specifically trained) on data from a certain field, such as medical histories, it produces summaries that are concise, informative, and of high quality.

Information overload has become a very real issue in the medical field. Access is virtually unlimited, as the internet provides immediate, worldwide distribution of research, case studies, and patient information. This is without doubt beneficial overall, but it creates a simple problem of volume. The National Institutes of Health estimated that approximately 1.8 million articles were published per year as of 2014, a figure that has no doubt grown in the last decade. Truly important findings are inevitably lost on medical professionals who simply cannot read fast enough to keep up with the latest information. In another vein, as patient histories become digitized, the amount of information available for each patient will only grow as it becomes easier and easier to add notes, medications, and visit details. Here lies a perhaps more pronounced issue: patient histories are not something that can be skipped. They are a crucial part of a healthcare provider's job, and one that will demand more and more time as we stride into the digital age.

Though this may seem somewhat apocalyptic, AI offers an answer. Generative AI's ability to summarize seems to be the perfect response to the exponential increase in information available to medical professionals. Fine-tuning a model on a particular niche of research, such as medicine, has already been shown to produce effective and informative summaries from vast quantities of data. Combined with context from patient histories, this presents an opportunity to build a chatbot that serves as a medical-information repository. In the near future, the crucial task of patient-specific research and history review could be as simple as issuing a summarization prompt to a chatbot such as ChatGPT and receiving a summary of the most relevant information for care. In theory, such a system shifts the burden of research and patient-history upkeep from the provider onto an algorithm, leaving doctors free to spend their time on hands-on patient care or on following up on research they know is relevant.
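As a concrete illustration, here is a minimal sketch of such a summarization step using the OpenAI Python client. The model name, the system prompt wording, and the abbreviated patient notes are all assumptions for illustration, not a production design.

```python
# Minimal sketch: summarizing a patient history with a chat model.
# Assumptions: the OpenAI Python client (openai>=1.0), an OPENAI_API_KEY in
# the environment, and an illustrative model name and note format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a clinical summarization assistant. Given raw patient notes, "
    "produce a concise summary of the information most relevant to ongoing "
    "care: active problems, current medications, and recent changes."
)

# Hypothetical, abbreviated history for illustration only.
patient_notes = """
2023-04-02: Pt presents w/ elevated BP (152/95). Started lisinopril 10mg.
2023-06-18: Follow-up. BP 138/88. Reports occasional dizziness.
2023-09-30: Labs: A1c 6.4 (borderline). Counseled on diet and exercise.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this history:\n{patient_notes}"},
    ],
)
print(response.choices[0].message.content)
```

The same pattern would apply to batches of literature abstracts; only the system prompt and the input text change.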

Patient Interaction

Exposure to extensive research and patient information may be difficult for healthcare professionals to handle, but there is another issue at hand. With the internet reaching every corner of the globe, patients have the same access as healthcare professionals to endless research, but with even less direction or filtering. We thus run into a very similar problem: an overwhelming stream of information and few resources to combat misinformation online. Reading unconfirmed or unreviewed articles can create insecurity around the decisions of healthcare providers and add tension to the doctor-patient relationship. Once again, generative AI may provide an answer. With access to modern research and the capability to contextualize information, generative AI offers a natural liaison to the medical field. A well-designed prompt can present information along with disclosures about disputed research, and can deliver it in an easily understood format with simple language. Medical jargon is not only difficult to understand but can often seem harsh and uncaring; generative AI could be a way to soften and package important information to best suit a patient's personal situation.

For example, consider a prompt designed to gently inform a patient of the implications of a cancer diagnosis. A frantic scouring of the internet will likely leave the patient with a fragmented and incomplete view of their situation. Having the option to turn to a chatbot that can both comfort and inform a patient about important considerations is a unique opportunity of the modern day. Further, when a patient leaves an appointment intending to do their own research, a doctor could deploy a prompted chatbot to relay more information and answer any questions that arise. One may object that this approach distances a doctor from their patients, but consistent doctor-patient interaction is a luxury for many. When personal follow-up is not an option, the right prompt in a chatbot can extend a valuable line of information. Indeed, does it not seem more helpful to supply a patient with a pre-prompted chatbot to answer their questions than an email bluntly describing a diagnosis?
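A sketch of what such a pre-prompted patient chatbot might look like appears below. The system prompt, model name, and answer_patient helper are hypothetical; a real deployment would be reviewed by the care team and constrained far more carefully.

```python
# Sketch of a "pre-prompted" patient chatbot. The prompt wording, model name,
# and helper function are illustrative assumptions, not a vetted deployment.
from openai import OpenAI

client = OpenAI()

PATIENT_SYSTEM_PROMPT = (
    "You are a supportive assistant helping a patient understand a recent "
    "diagnosis. Use plain, gentle language and avoid medical jargon. "
    "Note when information is disputed or uncertain in the literature, and "
    "always encourage the patient to confirm anything important with their "
    "own care team. Never offer a new diagnosis or change a treatment plan."
)

def answer_patient(question: str, history: list[dict]) -> str:
    """Answer one patient question, carrying prior turns for context."""
    messages = [{"role": "system", "content": PATIENT_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=messages,
    )
    return reply.choices[0].message.content
```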

There are, of course, considerations to be made around such an approach. AI alignment is one of the greatest issues facing the development of generative AI today. At its simplest, it is the problem of ensuring that the "goal" of a generative AI algorithm is well defined and aligns with the designer's goals, as well as humanity's. In this scenario, if a doctor deploys a prompted chatbot to a patient and the chatbot produces incorrect information, who is to blame? With the large language models underlying today's chatbots, it is effectively impossible to anticipate every possible output, and minimizing misinformation from the models themselves remains one of the greatest challenges facing AI scientists today.

Related tool: Pre-prompted Patient Interaction Chat

Enhanced Decision-Making

Just as a chatbot may be a helpful means of communication with a patient, one could be helpful to doctors themselves. The decisions made by healthcare professionals every day are undoubtedly crucial. From small factors to immediate, life-altering health considerations, doctors make choices every day that impact the lives of millions. Indeed, one of the pillars of modern Western society is our faith in doctors; this trust has been built into an institution that has prolonged life and cured illnesses that threatened humanity as a whole. However, the burden of effective and correct practice can be difficult to carry.

Medicine today is a distinctly collaborative field. Doctors constantly consult with one another and make decisions with the aid of research as well as their peers. Given this collaborative approach, why could generative AI not also serve as an avenue for consultation? A chatbot could be prompted to act as another peer, with access to a remarkable amount of research and data to inform its views. Approached as simply another mode of reference, generative AI could be a valuable resource.

It is still, and likely will be for decades, an unsettling thought to leave our fate to the mysteries of a computer algorithm, but the context-driven summarization abilities of generative AI cannot be ignored for the aid they can offer healthcare workers. We again face the issue of alignment, but as long as medical professionals are well informed about the stochastic, nondeterministic nature of machine learning models, AI can be used as a resource in even the most crucial decisions. And although generative AI does have an element of randomness, from another angle it can provide a sort of stability. Decision making is largely about logic, reasoning, and input information, but it is also undeniably emotional. The decision a doctor makes for a patient is not necessarily the same decision they would make for a friend or loved one; when treating personally connected patients, decision making may become a mostly emotional task. Here, generative AI offers the possibility of decision aid without any emotional involvement.

The human touch and emotional support can be provided by healthcare professionals themselves; when cold logic and distilled information are what is needed, generative AI can supply them without emotional involvement.
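As a sketch, a consulting prompt along these lines might look like the following. The prompt wording, model name, and case vignette are assumptions for illustration, not a validated clinical tool.

```python
# Sketch of "AI as a consulting peer": the prompt asks the model to reason
# like a colleague offering a differential, never a decision-maker.
from openai import OpenAI

client = OpenAI()

CONSULT_PROMPT = (
    "Act as a consulting physician colleague. Given a case description, "
    "suggest a differential diagnosis, note which findings support or argue "
    "against each possibility, and list follow-up tests that would help "
    "discriminate between them. State your uncertainty explicitly; the "
    "treating physician makes all final decisions."
)

# Hypothetical case vignette for illustration only.
case = ("58-year-old male, 3 days of productive cough, fever 38.6 C, "
        "crackles over the right lower lobe.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system", "content": CONSULT_PROMPT},
        {"role": "user", "content": case},
    ],
)
print(response.choices[0].message.content)
```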

Related tool: Prompt for Diagnostic Aid

