In a remarkable leap, researchers have equipped AI with the ability to associate visual and auditory inputs without supervision, learning the link between the two senses from raw data alone, with no human correction or manually annotated dataset.
This step forward, developed by researchers at leading AI labs, opens the door to AI systems that perceive and understand the world in a more holistic, human-like way.
Historically, AI systems for vision and sound have been trained independently, with no learned relationship between the modalities. Recent progress in self-supervised and cross-modal learning has allowed models to learn correspondences between audio and vision directly from unlabeled video data.
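To illustrate the general idea (not the specific method behind this work), a minimal self-supervised setup might pair video frames with their co-occurring audio and train two small encoders so that matching pairs land close together in a shared embedding space. The sketch below uses a contrastive (InfoNCE-style) objective; every module name, shape, and hyperparameter is an illustrative assumption, and random tensors stand in for real video data.

```python
# Minimal sketch of self-supervised audio-visual correspondence learning.
# Assumptions: frames are small RGB images, audio arrives as log-mel
# spectrograms; all shapes and names are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):                       # x: (B, 3, H, W)
        return F.normalize(self.net(x), dim=-1)

class AudioEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):                       # x: (B, 1, mels, time)
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(img_emb, aud_emb, temperature=0.07):
    """InfoNCE-style loss: the i-th frame's true match is the i-th audio clip."""
    logits = img_emb @ aud_emb.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))
    # Symmetric loss: image-to-audio and audio-to-image retrieval directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy training step on random tensors standing in for frames + spectrograms.
img_enc, aud_enc = ImageEncoder(), AudioEncoder()
opt = torch.optim.Adam(
    list(img_enc.parameters()) + list(aud_enc.parameters()), lr=1e-3)

frames = torch.randn(16, 3, 64, 64)         # 16 video frames
spectrograms = torch.randn(16, 1, 64, 96)   # their co-occurring audio clips

loss = contrastive_loss(img_enc(frames), aud_enc(spectrograms))
opt.zero_grad()
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.3f}")
```

The only training signal here is co-occurrence: frames and audio extracted from the same moment of video count as a positive pair, and everything else in the batch serves as negatives, so no human labels are needed.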
One major advance is the emergence of AI models that can synthesize audio signals from visual inputs and vice versa. In effect, an AI can "imagine" the sound that corresponds to an image, or picture the scene that produced a sound, without human voicing or human labels.
For example, a model might learn, through countless examples, that the sight of a closing door co-occurs with the sound of a door banging shut, simply by watching large volumes of unlabeled video.
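Once such a shared embedding space exists, "imagining" the sound of an image can be approximated by cross-modal retrieval: embed the image and look up the closest audio clips from an unlabeled bank of soundtracks. The snippet below simply continues the illustrative setup above (a full generative system would go further and synthesize a waveform).

```python
# Hypothetical cross-modal retrieval, reusing img_enc, aud_enc, and
# spectrograms from the sketch above as a stand-in audio bank.
with torch.no_grad():
    audio_bank = aud_enc(spectrograms)          # embeddings of stored clips
    query = img_enc(torch.randn(1, 3, 64, 64))  # embedding of a new image
    scores = (query @ audio_bank.t()).squeeze(0)
    top3 = scores.topk(3).indices
    print("closest-sounding clips:", top3.tolist())
```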
The impact of this development is enormous, spanning multiple sectors. For instance, AI that can correlate visual and auditory information could make self-driving cars safer by recognizing sirens or approaching vehicles and interpreting them in the context of visual traffic data.
In healthcare, AI that analyzes both medical images and the accompanying audio could help reach more accurate diagnoses and surface subtle anomalies that a single modality might miss.
Moreover, multimodal content curation could transform fields such as journalism and filmmaking by automatically assembling material through intelligent video and audio retrieval.
Scientists behind the development stress that although progress has been made, hurdles still exist. A critical concern is how robustly these models generalize across the wide range of situations they would face in practice.
The ethical implications of AI making decisions based on multimodal data also demand careful attention, so that transparency and accountability are maintained.
Future work in the research community is expected to incorporate more advanced neural network architectures and larger datasets to further refine these autonomously learning systems. The long-term goal is AI that can process and understand the world much as people do, across the sights, sounds, and sensations of the natural environment, potentially leading to more intelligent and intuitive technologies.
This new frontier is expected to enable a multitude of applications that were previously thought infeasible, taking us closer to the vision of "general" or "strong" AI that can perceive and understand the complexities of the real world.