When it comes to applications of machine learning, people most often talk about medicine. This is not surprising: it is a huge industry that generates phenomenal amounts of data and revenue, and one where technological advances can improve or save millions of lives. Hardly a week goes by without a study suggesting that algorithms will soon be better than experts at identifying pneumonia or Alzheimer's disease, or diseases of complex organs from the eye to the heart. All of this is on the way, but...
Overcrowded hospitals and overworked medical staff plague public health systems and drive up spending on private ones. And here, again, algorithms offer a tempting solution. How many of your visits to a doctor are actually necessary? Could those visits be replaced by a smart chatbot, equipped with portable diagnostic tests built on the latest advances in biotechnology? Unnecessary visits could be cut, and patients could be diagnosed and referred to specialists more quickly, without waiting for an initial consultation.
As is usually the case with artificial intelligence algorithms, the goal is not to replace doctors, but to give them tools that take over the mundane or repetitive parts of the job. With an AI that can examine thousands of scans per minute, the "boring routine" stays with the machines, and doctors can focus on the parts of the work that require a more complex, subtle, experience-based judgment about the best treatment and the patient's needs.
- 1 High stakes
- 2 Too many threads to untangle
- 3 Evaluation of algorithms
- 4 Achieving a balance
And yet, as with all AI algorithms, there are risks attached to their use, even for tasks considered routine. The problem of black-box algorithms making inexplicable decisions is serious enough when you are trying to understand why an automated recruiting chatbot was unimpressed by your interview. In health care, where decisions can mean life or death, the consequences of an algorithmic failure can be fatal.
Neural networks are great at ingesting large amounts of training data and establishing connections, absorbing the underlying laws or logic of a system into hidden layers of linear algebra, whether that is detecting skin cancer from photographs or learning to write pseudo-Shakespearean prose. However, they are terrible at explaining the underlying logic of the relationships they discover: there is nothing more there than strings of numbers, the statistical "weights" between the layers. And they cannot distinguish correlation from causation.
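The "strings of numbers" point can be made concrete with a toy sketch (a generic two-layer network in NumPy, nothing medical about it): the model learns the XOR rule perfectly, but inspecting its trained weights afterwards tells you nothing readable about the rule it found.

```python
import numpy as np

# Minimal sketch: a tiny neural network learns XOR, yet its learned
# "knowledge" is just a grid of opaque numbers.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer parameters
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted probability
    dp = p - y                         # gradient of cross-entropy w.r.t. logits
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)    # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * g                   # plain gradient descent

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())   # the network has learned XOR...
print(W1.round(2))     # ...but these weights explain nothing by themselves
```

The network gets every case right, yet staring at `W1` gives no human-readable account of *why*, which is exactly the black-box problem the article describes.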
This poses interesting dilemmas for medical workers. The dream of big data in medicine is to feed a neural network "huge amounts of health data," find complex, implicit relationships, and make individual assessments for patients. But what if such an algorithm turns out to be unreasonably effective at diagnosing a condition or prescribing a treatment, while you have no scientific understanding of how that relationship actually works?
Too many threads to untangle
The statistical models underlying such neural networks often assume that the variables are independent of each other, but in a complex, interacting system like the human body, this is not always the case.
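A back-of-the-envelope illustration of what goes wrong (the probabilities here are invented): if two "symptoms" are really one underlying signal, a model that treats them as independent evidence, naive-Bayes style, double-counts that signal and becomes overconfident.

```python
# Hypothetical numbers: two perfectly correlated symptoms of one condition.
p_disease = 0.1        # prior probability of the condition
p_sym_given_d = 0.8    # P(symptom | disease)
p_sym_given_h = 0.2    # P(symptom | healthy)

def posterior(n_symptoms):
    """Posterior if we treat n perfectly correlated symptoms as independent."""
    lr = (p_sym_given_d / p_sym_given_h) ** n_symptoms  # likelihood ratio
    odds = (p_disease / (1 - p_disease)) * lr
    return odds / (1 + odds)

correct = posterior(1)  # two correlated symptoms carry one signal's evidence
naive = posterior(2)    # the independence assumption counts it twice
print(round(correct, 3), round(naive, 3))  # → 0.308 0.64
```

The naive model roughly doubles the estimated probability of disease without any new information, purely because it assumed the body's signals were independent when they were not.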
In a sense, this is a familiar situation in the medical sciences: there are many phenomena and relationships that have been observed for decades but are still poorly understood at the biological level. Paracetamol is one of the most popular painkillers, yet its mechanism of action is still actively debated. Medical practitioners tend to use whatever instrument works best, regardless of whether it rests on a deep scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might paraphrase this as "Shut up and heal!".
Of course, there is debate in the field about whether this approach risks losing sight of a deeper understanding that would ultimately prove more fruitful, for example in the search for new drugs.
Beyond the philosophical squabbles, there are practical problems: if you do not understand how a black-box medical algorithm works, how do you approach clinical trials and regulation?
Transparency may be required about how the algorithm functions: the data it examines, the threshold values on which it bases its conclusions or advice. But this can conflict with the profit motives and the appetite for secrecy of medical startups.
One solution might be to ban algorithms that cannot explain themselves or that do not rest on well-understood medical science. But that would prevent people from reaping the benefits of the useful work such algorithms can do.
New health care algorithms will never be able to do what physicists did with quantum mechanics if they are never deployed in the field. And many algorithms improve precisely by working in the field. So how do we choose the most promising approaches?
Creating a standardized system of clinical testing and evaluation that applies equally to algorithms that work differently or use different input data will be a difficult task. Clinical trials with small samples, for example for algorithms that try to personalize treatment for individual patients, will also be difficult. With small samples and a weak scientific understanding of what is happening, it will be impossible to determine whether an algorithm has succeeded or failed: it may perform well on average and still fail on the cases in front of you.
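The small-sample problem is easy to simulate (all numbers here are made up): even an algorithm with a genuine 80% success rate will look dramatically better or worse than it really is in a large share of 20-patient trials.

```python
import numpy as np

# Simulate many small "clinical trials" of an algorithm whose true
# success rate is 80%, with only 20 patients per trial.
rng = np.random.default_rng(42)
true_rate, n_patients, n_trials = 0.8, 20, 10_000

observed = rng.binomial(n_patients, true_rate, size=n_trials) / n_patients
misleading = np.mean((observed <= 0.7) | (observed >= 0.9))

print(observed.min(), observed.max())  # individual trials swing widely
print(misleading)                      # share of trials off by >=10 points
```

In this simulation, a substantial fraction of 20-patient trials makes the same algorithm look either no better than 70% accurate or as good as 90%, which is exactly why small samples cannot settle whether a personalized algorithm works.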
Add ongoing training to this mix and the picture becomes even more complicated. "More importantly, the ideal black-box algorithm is plastic and constantly updated, so the traditional model of clinical trials does not fit: it relies on a static product that can be given a stable assessment."
We will have to adapt the entire system of medical and clinical trials.
Achieving a balance
In many ways, the story of AI in health care mirrors the history of artificial intelligence itself. It is no coincidence that IBM tried to transform the health care sector with its Watson artificial intelligence.
A balance will have to be found. We will have to find a way to process big data, harness the awesome power of neural networks, and automate reasoning, while staying aware of the shortcomings and biases of this approach to problem-solving.
At the same time, we should welcome these technologies, because they can be a useful complement to the skills, knowledge, and deeper understanding that people provide. Like a neural network, our industries should keep learning, expanding this collaboration in the future.
Do you agree? Let's discuss in our chat in Telegram.