Scientists Say They’ve Figured Out a Way to Read Thoughts Using an MRI Machine


Psychology and neuroscience professor Jack Gallant displays videos and brain images used in his research. Video produced by Roxanne Makasdjian, Media Relations.

BERKELEY —
Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.

As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

The approximate reconstruction (right) of a movie clip (left) is achieved through brain imaging and computer simulation

“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”

Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases.

It may also lay the groundwork for brain-machine interfaces so that people with cerebral palsy or paralysis, for example, can guide computers with their minds.

However, researchers point out that the technology is decades from allowing users to read others’ thoughts and intentions, as portrayed in such sci-fi classics as “Brainstorm,” in which scientists recorded a person’s sensations so that others could experience them.

Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to predict with overwhelming accuracy which picture the subject was looking at.

In their latest experiment, researchers say they have solved a much more difficult problem by actually decoding brain signals generated by moving pictures.

“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”

Mind-reading through brain imaging technology is a common sci-fi theme

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.”

“We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.
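The idea of a per-voxel model can be sketched as a regularized linear regression from movie features to voxel responses. This is a toy illustration only; the feature set, data sizes and regression details here are stand-ins, not the study’s actual code:

```python
import numpy as np

# Toy sketch: one linear model per voxel mapping per-second movie features
# (standing in for shape/motion information) to that voxel's response.
rng = np.random.default_rng(0)
n_seconds, n_features, n_voxels = 600, 50, 4          # illustrative sizes
X = rng.standard_normal((n_seconds, n_features))       # movie features per second
true_W = rng.standard_normal((n_features, n_voxels))
Y = X @ true_W + 0.1 * rng.standard_normal((n_seconds, n_voxels))  # voxel responses

lam = 1.0  # ridge penalty keeps the fit stable when features are correlated
# Closed-form ridge solution; each column of W is one voxel's model
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

Y_pred = X @ W
print(np.corrcoef(Y[:, 0], Y_pred[:, 0])[0, 1])  # near 1 on this toy data
```

Once fitted, such a model can be run in the other direction for decoding: given any new movie clip’s features, it predicts the brain activity that clip should evoke.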

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.

Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.

Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
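The ranking-and-merging step can be illustrated with a small sketch: score every candidate clip by how well its predicted brain activity matches the measured activity, then average the top 100. All names and sizes here are hypothetical, and each “clip” is stood in for by a plain activity vector rather than video frames:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_candidates, top_k = 100, 1000, 100
measured = rng.standard_normal(n_voxels)               # activity evoked by the test clip
# Predicted activity for each candidate YouTube clip (random stand-ins)
predicted = rng.standard_normal((n_candidates, n_voxels))
predicted[0] = measured + 0.05 * rng.standard_normal(n_voxels)  # plant a near-match

# Score each candidate by correlation with the measured activity
scores = np.array([np.corrcoef(measured, p)[0, 1] for p in predicted])
best = np.argsort(scores)[::-1][:top_k]                # indices of the top 100 clips

# Merge the top candidates; in the study the clips' frames are averaged,
# here we just average their stand-in vectors
reconstruction = predicted[best].mean(axis=0)
print(best[0])  # the planted near-match should rank first
```

Averaging many plausible candidates is why the published reconstructions look blurry but remain continuous: no single YouTube clip matches the original, but their consensus does.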

Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.

“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.

Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life.

“We need to know how the brain works in naturalistic conditions,” he said. “For that, we need to first understand how the brain works while we are watching movies.”

Other coauthors of the study are Thomas Naselaris with UC Berkeley’s Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley’s Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.

RELATED INFORMATION

  • Gallant Lab website, with more details about the study

Source: https://news.berkeley.edu/2011/09/22/brain-movies/