Doctor Chat Bot | EOS Sciences

“Hi ChatGPT. I have a runny nose and a sore throat. What’s my diagnosis?” “I’m sorry, but I’m not a doctor and I can’t make medical diagnoses. I advise you to contact a medical professional, such as a GP, to discuss your symptoms and get a proper diagnosis.”

ChatGPT, the chatty AI bot, answers a lot of questions, but when it comes to medical advice, it is programmed to be reserved. That is a good thing, because a diagnosis that does not come from a doctor carries many risks. Yet researchers in the medical world are busy building exactly such applications. Will computer systems soon replace our doctors? And is medical advice from artificial intelligence actually safe?

Systems eager to learn

Artificial intelligence systems have emerged in different corners of the medical landscape in recent years. Unlike classic computer programs, such as the word processor Word, we do not program artificial intelligence (AI) by telling it what to do line by line. Instead, the AI itself learns how to solve a complex problem. During the training phase, the AI sees a large number of examples in which it must learn to recognize patterns in order to solve the task as well as possible. For example, there are AI systems that have been trained on thousands of brain scans of patients known to have a tumor. When such an AI sees a new brain scan after training, it can predict whether the patient has a tumor. To do so, it uses the patterns it learned from all the previous examples.
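The train-then-predict loop described above can be sketched in a few lines of Python. Everything here is illustrative: the “scans” are reduced to two made-up numbers (say, lesion size and brightness), and the classifier is a minimal nearest-neighbour rule rather than the deep networks used in real medical imaging.

```python
# Toy supervised learning: learn from labelled examples, predict on new ones.
# Each "scan" is reduced to two invented features: (lesion_size, brightness).

training_scans = [
    ((8.0, 0.9), "tumor"),     # large, bright lesion
    ((7.5, 0.8), "tumor"),
    ((1.0, 0.2), "healthy"),   # small, dim region
    ((0.5, 0.1), "healthy"),
]

def predict(new_scan):
    """1-nearest-neighbour: copy the label of the most similar training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_scans, key=lambda ex: distance(ex[0], new_scan))
    return nearest[1]

print(predict((7.0, 0.85)))  # resembles the tumor examples  -> "tumor"
print(predict((0.8, 0.15)))  # resembles the healthy examples -> "healthy"
```

The point is the shape of the process, not the algorithm: the system is never told *what* a tumor looks like, only shown labelled examples, and its prediction on a new case comes entirely from similarity to what it saw during training.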

Another type of artificial intelligence is being trained to extract new medical knowledge from patient records and doctors’ notes. This AI can detect unknown side effects of medications, or learn how effective new treatments are. There are also systems that attempt to automate the so-called “clinical reasoning process” for the physician. This is the process that every doctor performs, consciously or unconsciously, when a patient comes in for a consultation with a long list of symptoms, a medical history and risk factors. The combination of all this information, together with medical knowledge and experience, leads the doctor to a diagnosis and a proposed treatment. In the AI version of this process, a ChatGPT-like system receives the same patient information and produces a medical recommendation based on similar examples it saw during training.


Teeth whitening agent against cancer

Why should we use these AI systems, and are they reliable? Proponents argue that doctors are human, and humans are known to make mistakes. By replacing them with artificial intelligence, with its much greater “brain” power, we eliminate human lapses of attention.

On the other hand, of course, we are putting our lives in the hands of something whose decisions we don’t actually understand. AI can pick up patterns in data that we humans know to be misleading. Suppose the AI learns from a large number of patient files that yellow teeth are common in smokers, who are also more likely to develop lung cancer. If we then asked the AI for an action that could reduce the risk of cancer, it might suggest whitening your teeth. A doctor knows that yellow teeth do not cause cancer; like the cancer itself, they are a consequence of smoking. Whitening your teeth will not reduce your cancer risk, but quitting smoking will.
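The trap described here is what statisticians call a confounder, and it can be made concrete with a small simulation. All the numbers below are invented purely for illustration: smoking drives both yellow teeth and cancer risk, yet a naive pattern-finder that only sees teeth and cancer will “discover” a strong link between the two.

```python
import random

random.seed(42)

# Hypothetical population: smoking causes BOTH yellow teeth and cancer.
# Yellow teeth themselves have no effect on cancer whatsoever.
people = []
for _ in range(10_000):
    smoker = random.random() < 0.3
    yellow_teeth = random.random() < (0.8 if smoker else 0.1)
    cancer = random.random() < (0.2 if smoker else 0.02)
    people.append((yellow_teeth, cancer))

def cancer_rate(group):
    return sum(c for _, c in group) / len(group)

yellow = [p for p in people if p[0]]
white = [p for p in people if not p[0]]

# A naive AI sees: people with yellow teeth get cancer far more often...
print(f"cancer rate, yellow teeth: {cancer_rate(yellow):.1%}")
print(f"cancer rate, white teeth:  {cancer_rate(white):.1%}")
# ...and might wrongly conclude that whitening teeth prevents cancer.
```

Running this, the yellow-teeth group shows a cancer rate several times higher than the white-teeth group, even though the simulation never lets teeth influence cancer at all. The correlation is real; the causal conclusion is wrong.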

For this reason, it is not a good idea to release fully autonomous AI systems into the world of medicine. To ensure their reliability, humans must lend a helping hand.

Strong together

There are different ways to bring people and machines closer together. One of them is interactivity. An AI system should, for example, not just answer yes or no to whether it detects a tumor on a brain scan; it should provide more information. It can circle the area where it found the tumor, for instance, or indicate as a percentage how confident it is in its prediction. Final responsibility then lies with the physician using the system, who uses this additional information to judge how much weight the prediction deserves.
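What such a richer output could look like is sketched below. The field names and values are entirely invented; the point is that the doctor receives a label, a confidence score, and the image region the call was based on, rather than a bare yes or no.

```python
# Sketch of a richer AI output: not just "tumor: yes/no", but extra context
# the doctor can weigh. All fields and values here are made up.

def analyse_scan(scan_id):
    return {
        "scan_id": scan_id,
        "prediction": "tumor",
        "confidence": 0.87,            # "87% confident in this prediction"
        "region": (120, 45, 160, 80),  # bounding box circling the finding
    }

result = analyse_scan("patient-0042")
print(f"{result['prediction']} ({result['confidence']:.0%}) "
      f"in region {result['region']}")
```

A low confidence score or an oddly placed region is exactly the kind of signal that prompts a doctor to double-check rather than accept the prediction blindly.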


Only a doctor has enough background knowledge to place medical AI predictions in context. The intention is emphatically not to open these systems up to the general public. A medical chatbot where anyone can get advice online, for example, is a bad idea. What is possible is a chatbot for doctors. The doctor enters the patient’s symptoms, and the chatbot suggests possible diagnoses. Using the chat functionality, clinicians can work through their own clinical reasoning process and test it against the chatbot’s, without outsourcing it completely.

Future doctor

Instead of only involving the doctor after the system is built, important knowledge can also be integrated into the AI during development. Return to the example of the relationship between smoking, lung cancer, and yellow teeth. To ensure that the AI does not learn false cause-and-effect relationships, we can impose constraints that it must follow, for example the rule “yellow teeth do not cause cancer.” The AI is then once again tasked with learning patterns automatically from the data, but within these constraints. This is how you build a system with the best of both worlds.

Will our doctors soon be replaced by artificial intelligence? Things won’t move that fast, but doctors will likely receive ever more support from artificial intelligence. If you visit a doctor in ten years, they will probably be working with AI in various forms, from image recognition to chatbots. Human input and physician expertise, however, remain crucial to ensuring patient safety.

Megan Vasquez

