Brain decoder

A study published today in the journal Nature Neuroscience introduces a non-invasive semantic decoder based on functional magnetic resonance imaging (fMRI) and technology similar to that powering ChatGPT.

The decoder can convert a person's brain activity, recorded while they listen to a story or silently imagine telling one, into a continuous stream of text. The method, created by a team from the University of Texas at Austin, may one day enable people who cannot physically speak to communicate using only their thoughts.

Given novel brain recordings, the decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. It also works with recordings from multiple brain regions. Together, these results demonstrate the viability of non-invasive language brain–computer interfaces.
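The core idea can be sketched as a guided search: a language model proposes plausible word continuations, an encoding model predicts the brain response each candidate sequence would evoke, and the candidates whose predictions best match the actual recording are kept. The toy sketch below illustrates this beam-search loop; `propose_continuations` and `encoding_model` are hypothetical stand-ins for the study's GPT and fMRI encoding model, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["i", "saw", "the", "dog", "ran", "fast", "home"]

def propose_continuations(sequence, k=3):
    # Stand-in for a language model: suggest k candidate next words.
    return rng.choice(VOCAB, size=k, replace=False).tolist()

def encoding_model(sequence, n_voxels=12):
    # Stand-in for an fMRI encoding model: deterministically map a
    # word sequence to a predicted "voxel response" vector.
    vec = np.zeros(n_voxels)
    for w in sequence:
        h = sum(ord(c) for c in w) % 97 + 1
        vec += np.sin(np.arange(1, n_voxels + 1) * h / 10.0)
    return vec

def decode(recorded, n_steps=4, beam_width=2):
    """Beam search: keep the word sequences whose predicted brain
    response correlates best with the recorded response."""
    beams = [([], -np.inf)]
    for _ in range(n_steps):
        candidates = []
        for seq, _ in beams:
            for w in propose_continuations(seq):
                new_seq = seq + [w]
                pred = encoding_model(new_seq)
                score = np.corrcoef(pred, recorded)[0, 1]
                candidates.append((new_seq, score))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

# Pretend this is a recorded response to a hidden 4-word stimulus.
recorded = encoding_model(["the", "dog", "ran", "fast"])
best_seq, best_score = decode(recorded)
print(best_seq, round(float(best_score), 3))
```

Note that, as in the paper's framing, the output is selected for matching meaning-related brain responses, not for word-for-word transcription.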

The computational task was performed by a Generative Pre-trained Transformer (GPT), a 12-layer neural network that uses multi-head self-attention to combine the representation of each word in a sequence with representations of the preceding words. GPT was originally trained on a large corpus of books to predict the probability distribution over the next word in a sequence. The authors fine-tuned GPT on a corpus comprising Reddit comments (over 200 million words in total) and 240 autobiographical stories from The Moth Radio Hour and Modern Love. The model was fine-tuned for 50 epochs with a maximum context length of 100 tokens.
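The causal self-attention mechanism mentioned above can be illustrated in a few lines of NumPy. This is a single-head toy version (real GPT adds learned query/key/value projections, multiple heads, and stacked layers); the key point is the causal mask, which lets each position mix in information only from itself and earlier positions:

```python
import numpy as np

def causal_self_attention(X):
    """Single-head self-attention with a causal mask: position i may
    attend only to positions <= i, so each word's representation is a
    weighted mix of itself and the words before it."""
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)                     # pairwise similarities
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -np.inf                          # block attention to the future
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over the past
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 "words", 8-dimensional embeddings
out = causal_self_attention(X)
print(out.shape)                   # (5, 8)
```

Because of the mask, the first word can attend only to itself, so its output equals its input; later words blend progressively more context. Stacking such layers and predicting the next word from each position is what lets GPT assign probabilities over continuations.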


Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023).

Tang, J., LeBel, A., Jain, S. & Huth, A. G. Semantic reconstruction of continuous language from non-invasive brain recordings. bioRxiv 2022.09.29.509744 (preprint).

