Brain Speech Neuroscience

Summary: Researchers have made a breakthrough in understanding how the human brain forms words before we speak them aloud. Using Neuropixels probes, they mapped how neurons represent speech sounds and assemble them into language.

This research not only sheds light on the complex cognitive steps involved in speech production, but also opens up possibilities for treating speech and language disorders. The technology could also lead to synthetic speech, which would benefit people with neurological conditions.

Key facts:

  1. The research uses advanced Neuropixels probes to record neural activity in the brain, revealing how we plan and produce words.
  2. Researchers identified separate groups of neurons dedicated to speaking and listening, pointing to distinct neural substrates for language production and comprehension.
  3. The findings could lead to the development of treatments for speech and language disorders and to brain-machine interfaces for artificial speech.

Source: Harvard

Using advanced brain-recording technology, a new study led by researchers at Harvard-affiliated Massachusetts General Hospital shows how neurons in the human brain allow people to think about the words they want to say and then produce them aloud in speech.

The findings provide a detailed map of how speech sounds are represented in the brain as consonants and vowels before they are spoken, and how they are integrated during language production.

The work, published in Nature, may lead to improvements in the understanding and treatment of speech and language disorders.

“Speaking often seems simple, but our brains perform many complex cognitive steps in natural speech – coming up with the words we want to say, planning the articulatory movements, and producing our vocalizations,” said senior author Ziv Williams, an associate professor in neurosurgery at MGH and Harvard Medical School.

“Our brains perform these tasks incredibly fast – about three words per second in natural speech – with remarkably few errors. Yet exactly how we achieve this feat has remained a mystery.”

Using a technology called Neuropixels probes to record the activity of individual neurons in the prefrontal cortex, Williams and colleagues identified the cells involved in producing language. They also discovered that the brain contains separate groups of neurons dedicated to speaking and to listening.

“The use of Neuropixels probes in humans was first pioneered at MGH,” Williams said. “These probes are remarkable – they are smaller than the width of a human hair, yet with hundreds of channels they can simultaneously record the activity of dozens or even hundreds of neurons.”

Williams worked on the recording techniques with Sidney Cash, a professor of neurology at MGH and Harvard Medical School, who helped lead the study.

The research shows how neurons represent some of the basic elements involved in building spoken words – from simple speech sounds called phonemes to their assembly into complex strings like syllables.

For example, to form the word dog, a speaker needs the phoneme “da,” produced by touching the tongue to the hard palate behind the teeth. By recording individual neurons, the researchers found that certain neurons became active before this phoneme was spoken aloud. Other neurons reflected more complex aspects of word construction, such as the assembly of phonemes into syllables.

With their technology, the researchers demonstrated that it is possible to reliably determine the speech sounds individuals will produce before they speak. In other words, scientists can predict which combinations of consonants and vowels will occur before the words are actually spoken. This capability could eventually benefit patients through neural prostheses or brain-machine interfaces that produce synthetic speech.
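The idea of predicting upcoming speech sounds from neural activity can be illustrated with a toy sketch. This is not the authors' method: the firing-rate data, phoneme labels, and nearest-centroid decoder below are hypothetical stand-ins, using synthetic data purely to show what "decoding a phoneme before it is spoken" means in practice.

```python
import numpy as np

# Toy sketch (not the study's actual analysis): decode an intended
# phoneme from single-neuron firing rates with a nearest-centroid
# classifier. All data here are synthetic.
rng = np.random.default_rng(0)

phonemes = ["d", "o", "g"]        # hypothetical phoneme labels
n_neurons, n_trials = 50, 30      # assumed recording dimensions

# Simulate firing-rate vectors: each phoneme gets its own mean pattern.
means = rng.normal(0, 1, (len(phonemes), n_neurons))
X = np.vstack([m + rng.normal(0, 0.5, (n_trials, n_neurons)) for m in means])
y = np.repeat(np.arange(len(phonemes)), n_trials)

# "Train": average the firing rates per phoneme to get class centroids.
centroids = np.stack([X[y == k].mean(axis=0) for k in range(len(phonemes))])

# "Decode": assign each trial to the nearest centroid.
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above chance (~0.33)
```

In a real decoder the centroids would be fit on held-out training trials and the firing rates taken from a window before sound onset, which is what makes the prediction "before the words are actually spoken."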

“Disruptions in the speech and language networks are observed in a variety of neurological disorders — stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” said Arjun Khanna, a postdoctoral fellow in the Williams lab and co-author of the study.

“Our hope is that a better understanding of the neural circuitry underlying speech and language ability will pave the way for the development of treatments for these disorders.”

The researchers hope to expand their work by studying the more complex linguistic processes involved in how people choose the words they intend to say, and how the brain assembles words into sentences that convey a person’s thoughts and feelings to others.

About this language and speech research news

Author: MGH Communications
Source: Harvard
Contact: MGH Communications – Harvard
Image: Image credited to Neuroscience News.

Original research: Open access.
“Single-neuron components of speech production in humans” by Ziv Williams et al. Nature


Single-neuron components of speech production in humans

People can produce various combinations of articulatory movements to create meaningful speech. This ability to orchestrate specific phonetic sequences, and their syllabification and inflection, over sub-second timescales allows us to produce thousands of distinct word sounds and is a core component of language. However, the basic cellular units and constructs by which we plan and produce words during speech remain poorly understood.

Here, using acute ultrahigh-density Neuropixels recordings capable of sampling across the cortical column in humans, we discover neurons in the language-dominant prefrontal cortex that encode detailed information about the phonetic arrangement and composition of planned words during natural speech.

These neurons represented the specific order and structure of articulatory events before utterance and reflected the segmentation of phonetic sequences into distinct syllables. They also accurately predicted the phonetic, syllabic, and morphological components of upcoming words and exhibited a temporally ordered dynamic.

Collectively, we show how these mixtures of cells are broadly organized along the cortical column and how their activity patterns transition from articulation planning to production. We also show how these cells reliably tracked consonant and vowel sounds during perception and how they distinguished processes specifically related to speaking from those related to listening.

Together, these findings reveal a remarkably structured organization and cellular process in the human prefrontal cortex capable of encoding phonetic representations and supporting speech production.