How Self-Supervised Learning Aids in Understanding the Brain

A pair of studies conducted by researchers at the K. Lisa Yang Integrative Computational Neuroscience Center at MIT suggests that the brain may develop an intuitive understanding of the physical world through a process similar to self-supervised learning. Self-supervised learning is a type of machine learning that allows models to learn about visual scenes based solely on similarities and differences, without the need for labels or other information.

In their research, the scientists trained neural networks using this particular type of self-supervised learning and discovered that the resulting models generated activity patterns that resembled those found in the brains of animals performing the same tasks as the models. This implies that the models are capable of learning representations of the physical world that enable them to make accurate predictions about what will happen within that world.

The findings suggest that the mammalian brain may be using a similar strategy to develop an understanding of the physical world. The researchers believe that these models, initially designed to build better robots, can also serve as a framework for gaining a deeper understanding of the brain. Although the researchers cannot confirm that the models reflect the entire brain, their results across various brain areas and scales support a unifying principle.

The studies, one led by Aran Nayebi and the other by Ila Fiete, director of the ICoN Center, will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in December 2023.

In recent years, computer vision models have increasingly relied on contrastive self-supervised learning, a technique that lets models learn to group objects by their similarities and differences without relying on external labels. This approach has proved powerful because it can exploit very large unlabeled datasets, particularly videos, and it underpins much of modern artificial intelligence (AI).
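As a rough illustration of the contrastive objective described above, the sketch below implements a SimCLR-style NT-Xent (InfoNCE) loss in PyTorch: two augmented views of the same image are pulled together in embedding space while all other pairs are pushed apart. The batch size, embedding dimension, and temperature are illustrative assumptions, not details of the models used in these studies.

```python
# Minimal sketch of a contrastive self-supervised objective (SimCLR-style
# NT-Xent / InfoNCE loss). All sizes here are toy values for illustration.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: embeddings of two augmented views of the same images, shape (N, D)."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = z @ z.T / temperature                          # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # ignore self-similarity
    # The positive pair for sample i is its other augmented view: i <-> i + N.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors standing in for encoder outputs of two views.
view1, view2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(view1, view2).item())
```

In practice the two views come from random augmentations (crops, color jitter) of the same image, so the only training signal is the similarity structure itself, with no labels involved.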

In the studies, the researchers aimed to determine if self-supervised models of other cognitive functions would exhibit similarities to the mammalian brain. They trained self-supervised models to predict the future state of their environment using hundreds of thousands of naturalistic videos depicting everyday scenarios.
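To make the "predict the future state of the environment" idea concrete, here is a minimal sketch of that style of training: a small frame encoder feeds a recurrent predictor that is trained to forecast the embedding of the next video frame. The architecture, input sizes, and loss are illustrative assumptions and not the models actually trained in the studies.

```python
# Minimal sketch of future-state prediction as a self-supervised objective:
# encode each video frame, then train a recurrent model to predict the
# embedding of the next frame from the frames seen so far.
import torch
import torch.nn as nn

class FuturePredictor(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Tiny CNN encoder mapping a 3x64x64 frame to an embedding vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Recurrent dynamics model that rolls the latent state forward in time.
        self.dynamics = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1)).view(b, t, -1)  # (b, t, d)
        pred, _ = self.dynamics(z[:, :-1])                     # predict z[t+1] from z[<=t]
        return pred, z[:, 1:]                                  # predictions vs. targets

model = FuturePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
clip = torch.randn(4, 8, 3, 64, 64)                   # toy stand-in for a batch of video clips
pred, target = model(clip)
loss = nn.functional.mse_loss(pred, target.detach())  # error in predicting future embeddings
loss.backward()
optimizer.step()
```

The key point is that the training signal comes entirely from the videos themselves: the "label" for each time step is simply what the world looks like a moment later.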

The researchers then tested the model’s generalization abilities by applying it to a task called “Mental-Pong.” Similar to the popular video game Pong, Mental-Pong requires the player to estimate the trajectory of a ball that disappears before hitting the paddle. The model successfully tracked the hidden ball’s trajectory with accuracy matching that of neurons in the mammalian brain. The neural activation patterns observed within the model closely resembled those seen in the dorsomedial frontal cortex of animals’ brains as they played the game.
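For readers curious how "closely resembled" is typically quantified, one common approach is to fit a cross-validated linear mapping from model unit activations to recorded neural responses and score the correlation on held-out data. The sketch below uses random placeholder arrays; the studies' exact recordings and alignment metrics may differ.

```python
# Minimal sketch of one common model-brain alignment measure: a cross-validated
# ridge regression from model activations to neural firing rates, scored by the
# per-neuron correlation between predicted and held-out responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 512))   # (time points, model units) - placeholder
neural_rates = rng.standard_normal((200, 64))  # (time points, recorded neurons) - placeholder

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(model_acts):
    reg = Ridge(alpha=1.0).fit(model_acts[train_idx], neural_rates[train_idx])
    pred = reg.predict(model_acts[test_idx])
    # Correlation between predicted and actual held-out responses, per neuron.
    r = [np.corrcoef(pred[:, i], neural_rates[test_idx, i])[0, 1]
         for i in range(neural_rates.shape[1])]
    scores.append(np.mean(r))

print(f"mean cross-validated alignment: {np.mean(scores):.3f}")
```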

The researchers note that no other class of computational model has achieved such close alignment with biological data, suggesting that self-supervised learning may be a key mechanism for the brain’s intuitive understanding of the physical world.

Sources:

MIT College of Computing. “Toward an understanding of the brain.” (MIT News, 2023).

