
Why scientists should not rely on AI for scientific discoveries, at least for now

We live in a golden age of scientific data, surrounded by huge stores of genetic information, medical images and astronomical observations. Modern machine learning algorithms allow artificial intelligence to comb through these data quickly and very thoroughly, often opening the door to potentially new scientific discoveries. However, we should not blindly trust scientific conclusions drawn by AI, says Genevera Allen, a researcher at Rice University. At least not at the current level of development of this technology. According to the scientist, the problem is that modern AI systems lack the ability to critically evaluate the results of their own work.

According to Allen, some decisions made by AI systems that rely on machine learning, that is, systems that learn by working through many similar tasks rather than by simply being given new rules and instructions to follow, can be trusted. More precisely, AI can be entrusted with problems in areas where the final result can easily be checked and analyzed by a human. Counting the number of craters on the Moon or predicting repeated aftershocks after an earthquake are examples of such problems.

However, the accuracy and reliability of more complex algorithms, those used to analyze very large data sets in search of previously unknown factors or relationships between various features, are "much harder to test," Allen notes. The impossibility of verifying the results produced by such algorithms can therefore lead to erroneous scientific conclusions.

Take precision medicine, for example, where specialists analyze patient metadata to develop effective treatments, trying to find groups of people with similar genetic characteristics. Some AI programs designed to "sift" genetic data do prove effective, successfully identifying groups of patients with a similar predisposition to, say, breast cancer. However, they turn out to be completely ineffective at identifying other types of cancer, such as colorectal cancer. Each algorithm analyzes the data in its own way, so when the results are combined, the classifications of the patient sample often conflict. This, in turn, forces scientists to wonder which AI they should ultimately trust.

These contradictions arise because data analysis algorithms are designed to follow the instructions built into them, instructions that leave no room for indecision or uncertainty, Allen explains.

"If you task a clustering algorithm with finding groups in its database, it will carry out the task and report that it has found several groups matching the given parameters. Tell it to find three groups, and it will find three. Ask it to find four, and it will find four," Allen comments.

"In fact, the real effectivenessSuch an AI will be demonstrated when the program can respond something like this: “I really believe that this group of patients fits the required classification, however, in the case of these people whose data I also checked and compared, I’m not quite sure” .

Scientists do not like uncertainty. However, traditional methods for estimating measurement uncertainty are designed for cases where the data being analyzed were specifically collected to evaluate a particular hypothesis. Data mining AI programs do not work that way at all: they are not driven by any guiding idea and simply analyze data sets collected without any specific goal in mind. That is why many AI researchers, including Allen herself, are now developing new protocols that will allow the next generation of AI systems to evaluate the accuracy and reproducibility of their discoveries.

The researcher explains that one of the new methods of deep analysis will be based on the concept of resampling. Say an AI system is believed to have made an important discovery, for example, identifying groups of patients that are clinically important for a study; that discovery should then show up in other databases as well. But creating new, large data sets just to verify the correctness of the AI's grouping is very expensive for scientists. Therefore, according to Allen, one can instead use an approach in which "an existing data set is taken and its information is randomly shuffled in a way that simulates a completely new database." And if, time after time, the AI is able to pick out the characteristics that support the desired classification, "in that case you can assume that you have a genuinely real discovery on your hands," Allen adds.
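As an illustration of the general idea, and not of the specific protocol Allen's group is developing, here is a small Python sketch of a resampling-style stability check: the data are repeatedly resampled, clustered again, and the resulting groupings compared. Only groupings that survive this reshuffling would be treated as candidate discoveries. The function name stability_score and all of its parameters are made up for this example.

```python
# Sketch of a resampling-based stability check for clustering results.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def stability_score(X, k, n_rounds=20, seed=0):
    """Cluster bootstrap resamples of X and measure how consistent the groupings are."""
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    scores = []
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=len(X), replace=True)  # simulate a "new" database
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
        # Compare how the two models label the same full data set.
        scores.append(adjusted_rand_score(base.predict(X), km.predict(X)))
    return float(np.mean(scores))  # near 1.0 = stable groups, near 0.0 = unstable

# On pure noise the score should be low, hinting that the "groups" are artefacts.
X_noise = np.random.default_rng(1).normal(size=(300, 5))
print("stability on noise:", stability_score(X_noise, k=3))
```

A score that stays high across many rounds of reshuffling is the kind of evidence the article describes: the same grouping keeps reappearing, so it is less likely to be an accident of one particular data set.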
