A person listens to music not only with the ears but also with the eyes, watching the rapid movement of a pianist's fingers or the smooth glide of a violinist's bow. We often intuitively follow musicians' movements to tell apart the sounds made by different instruments.
An artificial intelligence system created in a laboratory at the Massachusetts Institute of Technology can isolate the sound produced by an instrument based on the performer's movements. Trained by matching human movements to the musical parts of individual instruments, the system helps the listener single out one flute or violin among several similar instruments playing in an orchestra.
According to the developers, the specially trained neural network could be used in sound mixing to raise the volume of a particular musical instrument. In the future the technology could also be applied to video conferencing: a user would be able to emphasize the voice of a specific participant and suppress extraneous noise.
The new system builds on technology from an existing feature that let listeners raise the volume of specific instruments while watching a concert. The new version goes further, analyzing the musician's movements and the position of the body. The artificial intelligence was trained on synchronized audio and video recordings.
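The article gives no implementation details, but the core idea — using a motion signal to pick out one instrument from a sound mixture — can be illustrated with a toy sketch. Everything below (the synthetic tones, the noisy "motion feature", the correlation-based selection) is an illustrative assumption for this example, not a description of MIT's actual model, which is a trained neural network rather than a hand-written rule.

```python
import numpy as np

# Toy illustration (NOT the MIT system): two "instruments" are sine tones
# whose loudness varies over time; a motion signal tracks one player's bowing.
rng = np.random.default_rng(0)
sr, dur = 8000, 2.0
t = np.arange(int(sr * dur)) / sr

# Amplitude envelopes: instrument A swells slowly, instrument B pulses faster.
env_a = 0.5 * (1 + np.sin(2 * np.pi * 0.5 * t))
env_b = 0.5 * (1 + np.sin(2 * np.pi * 3.0 * t))
src_a = env_a * np.sin(2 * np.pi * 440 * t)   # 440 Hz "violin"
src_b = env_b * np.sin(2 * np.pi * 660 * t)   # 660 Hz "flute"
mixture = src_a + src_b

# Hypothetical motion feature extracted from video: it follows player A's
# bowing intensity (here, A's loudness envelope) plus measurement noise.
motion = env_a + 0.1 * rng.standard_normal(len(t))

# Crude spectrogram: magnitude of short-window FFTs of the mixture.
win = 400
frames = mixture[: len(t) // win * win].reshape(-1, win)
spec = np.abs(np.fft.rfft(frames, axis=1))
freqs = np.fft.rfftfreq(win, 1 / sr)

# Correlate each frequency bin's energy over time with the motion signal
# (averaged per frame). The bin that co-varies with the motion should be
# the instrument the tracked player is producing.
motion_frames = motion[: len(t) // win * win].reshape(-1, win).mean(axis=1)
corr = np.array([np.corrcoef(spec[:, k], motion_frames)[0, 1]
                 for k in range(spec.shape[1])])
corr = np.nan_to_num(corr)

best_bin = int(np.argmax(corr))
print(f"frequency best matching the motion: {freqs[best_bin]:.0f} Hz")
```

In this contrived setup the correlation picks out the 440 Hz tone, because only that instrument's loudness moves in step with the tracked player. The real system learns the audio-to-motion association end to end from synchronized recordings instead of relying on a hand-picked correlation.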