Neural Decoding of Visual Imagery During Sleep


This paper describes an experiment that decoded the content of dream-like visual imagery during sleep onset from fMRI activity.

Quote

Machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.
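A minimal sketch of the kind of multi-voxel pattern decoding the quote describes, assuming a linear classifier (the paper reports using linear support vector machines) and synthetic data standing in for real fMRI patterns and WordNet-derived synset labels:

```python
# Hypothetical illustration: decode a visual-content category ("synset")
# from multi-voxel fMRI patterns with a linear SVM. Data here are
# synthetic placeholders, not from the study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500                 # awakenings x voxels in a visual ROI
X = rng.normal(size=(n_trials, n_voxels))     # activity patterns before each awakening
y = rng.integers(0, 2, size=n_trials)         # binary synset label from the verbal report
X[y == 1, :50] += 0.5                         # inject a weak class-dependent signal

# Pairwise (two-class) decoding with cross-validation, scored as accuracy;
# chance level is 0.5 for a balanced binary problem.
clf = LinearSVC(max_iter=5000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```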

The experiment focused on the visual imagery experienced during the hypnagogic (sleep-onset) period, which shares many features with REM-stage dreaming.

Subjects were woken whenever an electroencephalogram (EEG) signature of sleep onset was detected and were asked to give a verbal report describing their visual experience just before awakening.

Decoding was strong for meta-categories (human, vehicle, etc.) but weaker for the finer, non-meta categories, although still above chance.

Different brain regions showed better decoding performance for different image categories:

Quote

The FFA showed better performance with human synsets, whereas the PPA showed better performance with scene synsets.
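A hedged sketch of the region-of-interest comparison the quote points at: run the same decoder separately on voxels from each region (here hypothetical FFA and PPA selections) and compare cross-validated accuracies. Names and data are illustrative, not the study's actual analysis code.

```python
# Illustrative ROI-wise comparison: decode "human" vs. "scene" reports
# from each region's voxels and compare cross-validated accuracy.
# ROI voxel selections and signals are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 200
y = rng.integers(0, 2, size=n_trials)        # 0 = scene report, 1 = human report

rois = {
    "FFA": rng.normal(size=(n_trials, 300)),  # face-selective region
    "PPA": rng.normal(size=(n_trials, 300)),  # place-selective region
}
rois["FFA"][y == 1, :30] += 0.6               # fake human-related signal in FFA
rois["PPA"][y == 0, :30] += 0.6               # fake scene-related signal in PPA

for name, X in rois.items():
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} accuracy (chance = 0.50)")
```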

META

Status:: #wiki/references/zotero/article
Related:: Dreams

Link:: https://www.science.org/doi/10.1126/science.1234330
ZoteroLink:: @horikawaNeuralDecodingVisual2013
Author:: T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani
Year:: 2013

Priority::

Consumed:: true
Reconsume::

Rating:: 9
Favorite::