Posts

Showing posts from May, 2023

Uncovering Resilience to Alzheimer's Disease

A recent publication reports the second known case of exceptional resilience to autosomal dominant Alzheimer's disease (ADAD). This male patient, a carrier of the PSEN1-E280A mutation, remained cognitively intact until the age of 67 despite a very high amyloid plaque burden. Interestingly, the patient did not carry the protective APOE3 Christchurch variant; instead, he harbored a rare variant in the RELN gene (H3447R), termed COLBOS. This gain-of-function variant showed an enhanced ability to activate its protein target Dab1, leading to reduced phosphorylation of human Tau. These findings point to a role for RELN signaling in resilience to dementia and highlight the importance of genetic variants in protection against ADAD.

The apolipoprotein E (APOE) gene, specifically the APOE ε4 allele, is the most well-established genetic risk factor for late-onset Alzheimer's disease: inheriting one or two copies of the APOE ε4 allele increases the risk of developing the disease.

Brain decoder

A study published today in the journal Nature introduces a non-invasive semantic decoder based on functional magnetic resonance imaging (fMRI) and technology similar to that powering ChatGPT. The decoder can convert a person's brain activity, recorded while they listen to or silently imagine telling a story, into a continuous stream of text. The method, created by a team at the University of Texas at Austin, may enable people to communicate with their minds.

Given novel brain recordings, the decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. It also works with multiple brain regions, demonstrating the viability of non-invasive language brain–computer interfaces.

The computational task was performed by a Generative Pre-trained Transformer (GPT), a 12-layer neural network that uses multi-head self-attention to combine representations.
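For readers curious what "multi-head self-attention" means mechanically, here is a minimal NumPy sketch of one such layer. This is an illustration of the general technique only, not the study's actual decoder: the weight matrices are random stand-ins for learned parameters, and the dimensions are made up for the example.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """One multi-head self-attention layer over x of shape (seq_len, d_model).

    Projection weights are random placeholders for illustration; a trained
    model would use learned parameters instead.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    # Random query/key/value/output projections stand in for learned weights.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                      for _ in range(4))

    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Split each projection into heads: (num_heads, seq_len, d_head).
    def split(m):
        return m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)

    # Scaled dot-product attention, computed independently per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    heads = weights @ v                              # (num_heads, seq_len, d_head)

    # Concatenate the heads and mix them with the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo
```

Each head attends over the whole sequence with its own learned projections, which is how the network combines representations from different positions in parallel.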