
Researchers Reconstruct Observed Videos from Resulting Brain Activity

Thanks to research by Professor Jack Gallant and a crack team of researchers at UC Berkeley, we are now one step closer to using our brains to record and store actual visual media. When we see things (or think about things, for that matter), our brains naturally activate in very specific ways depending on what we’re seeing or imagining. Gallant’s most recent paper in Current Biology outlines the results of an experiment that tried to decode brain activity and convert it back into video. It’s astounding how well it worked.


The experiment went a little something like this. Subjects were placed in an fMRI scanner and watched a series of movie trailers. While they watched, the scanner tracked the blood flow to certain parts of the brain and a computer divided those parts of the brain into voxels (volumetric pixels). The first round of trailer-viewing gave the computer a chance to learn how those sections of the brain should be mapped: its output was compared against the actual trailers to match the voxel activity with the footage that created it, a calibration round of sorts.
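To make that calibration idea concrete, here is a minimal sketch of the general approach (not the Gallant lab’s actual code): treat each voxel’s response as something you can predict from features of the footage, and fit that mapping on the first round of trailers. The feature extraction, ridge regression, array shapes, and variable names below are all illustrative assumptions.

```python
import numpy as np

# Illustrative shapes only (assumptions, not the study's real dimensions):
# stimulus_features: (n_timepoints, n_features) -- features of the trailer footage over time
# voxel_responses:   (n_timepoints, n_voxels)   -- fMRI (blood-flow) signal per voxel over time
rng = np.random.default_rng(0)
stimulus_features = rng.standard_normal((500, 64))
voxel_responses = rng.standard_normal((500, 2000))

# "Calibration": fit a simple linear encoding model per voxel that predicts its
# response from the stimulus features (ridge regression via the normal equations).
ridge = 1.0
X, Y = stimulus_features, voxel_responses
weights = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)  # (n_features, n_voxels)

def predict_voxel_pattern(clip_features):
    """Given a clip's features, predict the voxel pattern it should evoke."""
    return clip_features @ weights
```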

The second run is where things get interesting. The subjects watched another stream of trailers while the scanner recorded the blood flow and the computer parsed it into voxels, but this time, the results weren’t matched against the original footage. Instead, the computer used its parsed voxel data and 18 million seconds (that’s about 208 days) of YouTube footage to reconstruct the clips it thought the subject had been seeing.

For each scene, the computer selected the 100 clips it thought would be most accurate (most likely to recreate the observed blood flow and voxel patterns) and merged them together, creating composite clips that should, in theory, resemble the footage the subject actually saw. Remarkably, they did.
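Here is a sketch of that selection-and-merge step, continuing the hypothetical encoding model above: score every clip in the library by how well its predicted voxel pattern matches the observed one, keep the best 100, and average their frames into a composite. The correlation-based scoring and all names here are assumptions for illustration, not the paper’s exact method.

```python
def reconstruct(observed_pattern, library_features, library_frames, top_k=100):
    """Pick the top_k library clips whose predicted voxel patterns best match
    the observed pattern, then average their frames into one composite clip."""
    predicted = predict_voxel_pattern(library_features)  # (n_clips, n_voxels)
    # Correlation between each clip's predicted pattern and the observed pattern.
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed_pattern - observed_pattern.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o) + 1e-9)
    best = np.argsort(scores)[::-1][:top_k]
    # library_frames: (n_clips, n_frames, height, width); the composite is just the mean.
    return library_frames[best].mean(axis=0)
```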

Granted, the actual results matched the trailers largely in terms of shape. Little to no detail made it all the way through the merging process: words were mostly rendered as blobs, and more complex images (and images one would expect to be rarer in the YouTube video bank), like an elephant, came out as little more than moving blobs.

Still, considering how complex the brain is, this is a huge first step. If the video bank were to get larger and more easily searchable (or maybe able to render clips on the spot) and the brain-voxel reading technology could get more precise, this technology could totally be used to replay memories (as they are remembered, anyway) and maybe even dreams. Granted, the specificity needed to make a completely watchable clip is still a way off, but the first big jump has been made. It’s mostly iteration and fine-tuning from here on out.

Read more about the study here.

(via Gizmodo)
