
Natural sounds can be reconstructed from human neuroimaging data using deep neural network representation

Fig 8

Sound reconstruction with attention.

(A) Reconstructed spectrograms under selective auditory attention tasks (ROI: AC; DNN layer: Conv5; for reconstructed sounds, see https://www.youtube.com/watch?v=1ZHCoiyqPb4). The top panel displays the spectrograms of the two superimposed sounds presented during the task, in which subjects were instructed to attend to one specific sound. The bottom panel shows the spectrograms reconstructed for different subjects (S4, S5). (B) Evaluation of attentional bias. An identification analysis assessed the attentional bias by comparing the correlation of the reconstructed features with the features of the attended versus the unattended stimulus. (C) Identification accuracy of the attended sound. Each bar represents the proportion of the 48 identification trials in which the attended sound was correctly identified. Because each trial yields a binary outcome (attended vs. unattended), conventional error bars are not shown; instead, the dashed line indicates the significance level (p < 0.05) based on a binomial test (N = 48). For the pilot study S1 (N = 32) and for cases where F0 and HNR calculations were unsuccessful, a higher significance threshold was required (not depicted here). The data underlying this figure are provided in S2 Data.
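The identification analysis in (B) and the chance-level threshold in (C) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: a trial is scored correct when the reconstructed feature vector correlates more strongly with the attended stimulus's features than with the unattended one's, and the dashed significance line corresponds to the smallest number of correct trials whose one-sided binomial tail probability (chance p = 0.5) falls below 0.05. All function names are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def trial_is_correct(reconstructed, attended, unattended):
    """One identification trial: correct if the reconstruction
    correlates more strongly with the attended stimulus."""
    return pearson_r(reconstructed, attended) > pearson_r(reconstructed, unattended)

def binomial_threshold(n_trials, alpha=0.05, p_chance=0.5):
    """Smallest number of correct trials k such that the one-sided
    binomial tail P(X >= k) under chance falls below alpha."""
    for k in range(n_trials + 1):
        tail = sum(
            math.comb(n_trials, i) * p_chance**i * (1 - p_chance) ** (n_trials - i)
            for i in range(k, n_trials + 1)
        )
        if tail < alpha:
            return k
    return n_trials + 1  # significance unattainable at this n
```

Dividing `binomial_threshold(48)` by 48 gives the accuracy level marked by the dashed line; with fewer trials (e.g., N = 32 for S1), the threshold proportion is higher, which is why those cases required a stricter cutoff.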


doi: https://doi.org/10.1371/journal.pbio.3003293.g008