Abstract
Magnetoencephalography (MEG) is a neuroimaging technique that accurately captures the rapid (sub-millisecond) activity of neuronal populations. Interpretation of functional data from MEG relies upon registration to the participant’s anatomical MRI. The key remaining step is to transform the participant’s MRI into the MEG head coordinate space. Although both automated and manual approaches to co-registration are available, the relative accuracy of the two approaches has not been systematically evaluated. The goal of the present study was to compare the accuracy of manual and automated co-registration. Resting MEG and T1-weighted MRI data were collected from 90 participants. Automated and manual co-registration were performed on the same subjects, and the inter-method reliability of the two methods was assessed using the intra-class correlation. Median co-registration error for both methods was within acceptable limits. Inter-method reliability was in the “fair” range for co-registration error, and the “good” to “excellent” range for translation and rotation. These results suggest that the output of the automated co-registration procedure is comparable to that achieved using manual co-registration.
Citation: Houck JM, Claus ED (2020) A comparison of automated and manual co-registration for magnetoencephalography. PLoS ONE 15(4): e0232100. https://doi.org/10.1371/journal.pone.0232100
Editor: Blake Johnson, Australian Research Council Centre of Excellence in Cognition and its Disorders, AUSTRALIA
Received: August 22, 2019; Accepted: April 7, 2020; Published: April 29, 2020
Copyright: © 2020 Houck, Claus. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The raw source data cannot be shared publicly because the study relies upon head shape and T1-weighted structural MRI data. These are generally considered identifiable data, and cannot be shared according to the guidance of The Institutional Review Board of the University of New Mexico. The error data used in the computation of the intra-class correlations (ICCs) as well as de-faced MRIs are available at https://doi.org/10.35092/yhjc.11991546 but these are not the raw study data used in this manuscript. The Methods provide enough detail for interested researchers to replicate the analyses in a similar population.
Funding: JMH: K01AA021431, National Institute on Alcohol Abuse and Alcoholism (https://www.niaaa.nih.gov). EDC: R01AA023665, National Institute on Alcohol Abuse and Alcoholism (https://www.niaaa.nih.gov). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Magnetoencephalography (MEG) is a neuroimaging technique that accurately captures the rapid (sub-millisecond) activity of neuronal populations. Indeed, MEG can only detect signal from the synchronous firing of neuronal populations in a cortical patch of approximately 10 mm² or larger [1], making it essentially a network-detection technique. Due to the relative scarcity of reimbursable MEG-based clinical procedures, MEG was historically available at only a limited number of cutting-edge research and clinical institutions [2]. However, interest in MEG has grown as the technique’s potential has been revealed over the past four decades, with increased recognition of MEG as a means of directly evaluating neuronal networks and their relevance to a range of disorders as well as to typical cognitive and affective processes.
As is the case for other functional neuroimaging approaches, interpretation of functional data from MEG relies upon registration to an anatomical or template MRI [3]. Because the data for the two modalities are collected on different scanners and therefore in different coordinate spaces [4], the procedure for MRI-MEG co-registration is somewhat involved. Typically, during preparation for an MEG scan, three to five head position coils are affixed to the participant’s scalp, and a 3D digitizing pen is then used to digitize the coil locations, the anatomical landmarks (typically the nasion and preauricular points), and a detailed headshape comprising approximately 150 points (Fig 1). The headshape points are collected primarily from the brow, bridge of the nose, and skull, avoiding the lower jaw and cartilaginous or fatty tissue that would be expected to shift when the participant moves or might be compressed by the head coil during the participant’s MRI scan. Because this preparation process can be somewhat labor-intensive and subject to variability in technician skill, alternatives such as use of a bite bar [5], 3-D camera [6], or 3-D laser scanner [7] have also been explored but are not widely used.
Subject preparation and co-registration procedures are important because they influence the quality of electromagnetic source localization [8, 9]. Research using MEG has consistently shown that poor co-registration can lead to poor source localization [10, 11]. When source localization is performed using beamforming, co-registration error greater than approximately 2 mm may yield unacceptably large errors in both source localization [12] and source extent [13]. The same 2 mm threshold appears to apply to source localization using minimum-norm estimates [14], suggesting this threshold as a heuristic for co-registration quality.
During the MEG scan session, the head position coils are energized at known frequencies, which permits the precise measurement of their locations relative to the MEG sensor array. Because at the conclusion of an MEG scan the relative locations of the sensors, coils, anatomical landmarks, and headshape points are known, transforming the MEG data to the participant’s head coordinate space is relatively simple. The key remaining step is to transform the participant’s MRI into the MEG head coordinate space. This transformation is the focus of the MEG-MRI co-registration process.
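For illustration, the head coordinate frame itself is fully determined by the digitized anatomical landmarks. The following is a minimal sketch, assuming the standard Neuromag-style convention (x axis from the left to the right preauricular point, y axis through the nasion); it is illustrative and not the code of any particular toolkit.

```python
import numpy as np

def head_coordinate_frame(nasion, lpa, rpa):
    """Build a Neuromag-style head coordinate frame from digitized landmarks.

    Returns a 4x4 affine mapping digitizer coordinates into head coordinates:
    x runs from LPA to RPA, y passes through the nasion orthogonal to x, and
    z completes the right-handed frame (pointing upward).
    """
    x = rpa - lpa
    x /= np.linalg.norm(x)
    # Origin: the projection of the nasion onto the LPA-RPA line
    origin = lpa + np.dot(nasion - lpa, x) * x
    y = nasion - origin
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    rot = np.vstack([x, y, z])       # rows are the head-frame basis vectors
    trans = np.eye(4)
    trans[:3, :3] = rot
    trans[:3, 3] = -rot @ origin     # shift to the origin, then rotate
    return trans
```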
The manual co-registration process itself is straightforward. A high-resolution 3-D head surface based on the skin-air boundary can be extracted from a T1-weighted MRI using readily-available analysis toolkits such as Freesurfer [15]. Incorporation of this surface into the co-registration process has been shown to improve the quality of the co-registration [10] and is the standard in MNE-python and its predecessor, MNE [16]. In the absence of significant MRI artifacts, the participant’s distinguishing features, including the face and the anatomical landmarks collected during MEG preparation, are clearly visible on this surface. The anatomical landmarks and MEG headshape can be used to co-register the MRI head surface (and therefore the MRI data) to the participant’s head coordinate space. Typically this involves manually identifying the anatomical landmarks on the MRI head surface, using these values to perform an initial transformation, and then applying an iterative closest points algorithm [ICP: 17] to refine the transformation until the distance between the MEG headshape and the MRI head surface has been minimized (Fig 2). This can be accomplished using template MRIs, but MEG data can be localized with higher confidence when the individual participant’s own structural MRI is used for co-registration. Numerous toolkits are available to assist the analyst with co-registration, including but not limited to MNE-python [18], SPM [19], Fieldtrip [20], BrainStorm [21], and NUTMEG [22].
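The refinement step can be sketched compactly. Below is a minimal, illustrative rigid-body ICP in Python (NumPy/SciPy); it is not the implementation used by any of the toolkits above, it assumes the headshape points have already received an initial landmark-based alignment, and it omits the outlier handling and convergence checks a production implementation would include.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(head_pts, surf_pts, n_iter=20):
    """Iteratively match MEG headshape points to their nearest MRI
    head-surface points and re-solve the rigid alignment (Besl & McKay, 1992).

    head_pts: (n, 3) headshape points after an initial landmark-based fit.
    surf_pts: (m, 3) vertices of the MRI head surface.
    """
    surf_pts = np.asarray(surf_pts, dtype=float)
    pts = np.asarray(head_pts, dtype=float).copy()
    tree = cKDTree(surf_pts)
    for _ in range(n_iter):
        _, idx = tree.query(pts)              # nearest-neighbor correspondences
        matched = surf_pts[idx]
        mu_p, mu_m = pts.mean(axis=0), matched.mean(axis=0)
        # Kabsch/SVD solution for the optimal rotation
        u, _, vt = np.linalg.svd((pts - mu_p).T @ (matched - mu_m))
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # guard against reflections
        pts = (pts - mu_p) @ r.T + mu_m           # apply rotation + translation
    return pts
```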
Importantly, although both automated and manual approaches to co-registration are available, the consistency of the two approaches has not been systematically evaluated. One common approach to evaluating the consistency of two methods is to compute their inter-method reliability. Inter-method reliability, computationally identical to inter-rater reliability, is a means of assessing the chance-corrected agreement between two different methods. It can be computed as the ratio of the variance of interest to the total variance; that is, the intra-class correlation (ICC) [23]. The goal of the present study was to evaluate the inter-method reliability of manual and automated co-registration using the MNE-python toolbox [18].
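For the specific ICC model applied in this study (Shrout and Fleiss’s model (3,1); see Method), the computation reduces to mean squares from a two-way ANOVA. A minimal NumPy sketch, with illustrative variable names, is:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, consistency, single measures
    (Shrout & Fleiss, 1979). `scores` is an (n_subjects, k_methods) array."""
    data = np.asarray(scores, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_meth = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_meth
    bms = ss_subj / (n - 1)              # between-subjects mean square
    ems = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)
```

Applied here, each row would hold one participant’s co-registration error from the manual and the automated method (k = 2).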
Method
As part of an ongoing study, resting MEG and T1-weighted MRI data were collected from 90 participants (mean age = 35.20 years (SD = 10.04), 42.2% female, 50% Hispanic). MRI data were collected on a Siemens 3T Trio Tim system (Siemens Healthcare, Erlangen, Germany) using a 32-channel head coil. Paper tape was placed across each participant’s forehead to reduce motion. Structural images were collected with magnetization-prepared 180° radiofrequency pulses and rapid gradient-echo sequence (MPRAGE; TE = 1.64, 3.5, 5.36, 7.22, and 9.08 ms; TR = 2.53 s; FA = 7°; number of excitations = 1; slice thickness = 1 mm; FOV = 256 mm; resolution = 256×256). Standard preprocessing was conducted using the Freesurfer image analysis suite [15], which is documented and freely available for download online (http://surfer.nmr.mgh.harvard.edu/). However, generation of the head surface file used in co-registration relies only upon the existence of a T1 image [24], not on any specific preprocessing package.
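As an illustration of that point, the head surface can be generated from a raw T1 with only the initial FreeSurfer processing stage followed by the mkheadsurf script cited above [24]; the subject ID and file name below are placeholders, and the flags reflect current FreeSurfer documentation rather than the exact commands used in this study.

```python
import subprocess

subject = "sub-01"  # placeholder subject ID

# Import the T1 and run only the first processing stage (through intensity
# normalization and skull stripping), which produces mri/T1.mgz.
subprocess.run(["recon-all", "-s", subject, "-i", "T1.nii.gz",
                "-autorecon1"], check=True)

# mkheadsurf generates the skin-air head surface (surf/lh.seghead)
# used for co-registration.
subprocess.run(["mkheadsurf", "-subjid", subject], check=True)
```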
MEG data were collected in a magnetically and electrically shielded room (VAC Series Ak3B, Vacuumschmelze GmbH) using an Elekta Neuromag whole-cortex 306-channel MEG array (Elekta Oy, Helsinki, Finland). Before positioning the participant in the MEG, four coils were affixed to the participant’s head—two on the forehead and one behind each ear. Additional positioning data were collected using a head position device (Polhemus Fastrak). Between 83 and 229 headshape points were collected for each subject (median = 143, IQR 122–157). Participants were instructed to keep their eyes open and focused on a fixation cross back-projected onto a screen during the scan. MEG data were sampled at a rate of 1000 Hz, with a bandpass filter of 0.10 to 330 Hz. Head position was monitored continuously throughout the MEG session. Five minutes of raw single-trial data were collected and stored. Data from two MEG measurement sessions were examined.
An experienced technician had previously manually co-registered each MEG scan to its corresponding MRI using MNE [16] following the general steps described in the Introduction. Automated co-registration in MNE-python follows the same general sequence described for manual co-registration. The MNE toolboxes include standard landmark coordinates (nasion, preauricular points) defined on the MNI305 head [25]. The automated co-registration is performed by 1) transforming these coordinates from the MNI305 head to each participant’s MRI coordinate space, 2) performing an initial fit to the MRI head surface using only these landmarks, 3) applying several initial iterations of the iterative closest points (ICP) algorithm, 4) eliminating outlier headshape points (i.e., those > 5 mm away from the head surface), and 5) applying the ICP algorithm again. The final affine transformation was then saved and the co-registration errors (i.e., the median distance between each MEG headshape point and the nearest point on the MRI head surface) preserved. Errors for manual co-registrations were obtained by applying the affine transformations from the manual co-registration to the MEG headshape and computing the distance between each MEG headshape point and the nearest point on the MRI head surface. Visual inspection was used to assure the quality of the fit.

The inter-method reliability of co-registration error for manual and automated co-registration was compared using the intraclass correlation [ICC model (3,1); 23]. To evaluate the relationship between participant preparation procedures and co-registration error, we computed the correlation between co-registration error terms and the number of headshape points collected during participant preparation. To assess the comparability of the head transformation matrices produced by each method, we converted each affine transformation matrix to mm of translation in the x, y, and z directions, and degrees of rotation around the x, y, and z axes (i.e., pitch, roll, and yaw), and computed the inter-method reliability for each parameter. Finally, to evaluate whether the automated technique could be applied to anonymized, “de-faced” data, we re-ran the automated co-registration procedure using MRIs that had been de-faced with PyDeface [26].
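For readers who wish to reproduce the automated pipeline described above, the sketch below uses the Coregistration API available in recent MNE-python releases; file names and iteration counts are placeholders rather than the study’s exact settings, and the final lines show one way (via SciPy) to decompose the resulting affine into the translation and rotation parameters analyzed here.

```python
import numpy as np
import mne
from mne.coreg import Coregistration
from scipy.spatial.transform import Rotation

info = mne.io.read_info("sub-01_rest_raw.fif")      # placeholder MEG file
subjects_dir = "/path/to/subjects"                   # placeholder FreeSurfer dir

# Steps 1-2: map template-defined landmarks to this MRI and fit them
coreg = Coregistration(info, subject="sub-01",
                       subjects_dir=subjects_dir, fiducials="estimated")
coreg.fit_fiducials()
coreg.fit_icp(n_iterations=6)                        # step 3: initial ICP
coreg.omit_head_shape_points(distance=5.0 / 1000)    # step 4: drop points > 5 mm
coreg.fit_icp(n_iterations=20)                       # step 5: final ICP

mne.write_trans("sub-01-trans.fif", coreg.trans)     # save the affine transform
errors_mm = coreg.compute_dig_mri_distances() * 1000 # per-point distances in mm
print(f"median co-registration error: {np.median(errors_mm):.2f} mm")

# Decompose the 4x4 affine into translation (mm) and rotation (degrees)
t = coreg.trans["trans"]
translation_mm = t[:3, 3] * 1000
pitch, roll, yaw = Rotation.from_matrix(t[:3, :3]).as_euler("xyz", degrees=True)
```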
The methods described above generate a 3D rendering of each participant’s head and face from their structural MRI. While initial studies of such renderings observed relatively low accuracy when matching renderings to facial photographs [27, 28], more recent work has indicated that facial recognition software can accurately match renderings to facial photographs with high confidence when sufficient high-quality facial photographs are available [29]. This is a point of ethical concern, as any dataset containing images comparable to facial photographs could result in a loss of privacy to affected research participants if data are shared. This would also lead to a violation of research regulations, such as the HIPAA Privacy Rule in the U.S. [30]. For studies such as ours that are protected by a Certificate of Confidentiality under the U.S. 21st Century Cures Act [31], identifying information cannot be shared without participant consent, by force of law. The standard practice is to remove these direct identifiers from datasets [32]. Large U.S. federally-funded studies such as the Human Connectome Project share only de-faced structural MRIs, and for MEG data, also remove the participant’s head shape [33]. As participants in the present study did not consent to data sharing, we have adopted this approach and share only de-identified data, including de-faced structural MRIs and coded co-registration error data for participants, at http://dx.doi.org/10.35092/yhjc.11991546.
Results
Median co-registration error for manual co-registration was 1.37 mm (IQR 1.17–1.63), and for automated co-registration, 1.58 mm (IQR 1.23–2.05) (Fig 3). The mean difference in co-registration error between manual and automated co-registration was approximately 0.313 mm (SD 0.555 mm). Co-registration error between the two methods was correlated at r = 0.541 (p < .001), which corresponds to a Cohen’s d of 1.29, a “large” effect size [34]. The association between co-registration error and the number of headshape points was not significant for either the manual (r = 0.143, p = .058) or the automated (r = 0.025, p = .745) co-registration procedure. The inter-method reliability for co-registration error between the two co-registration approaches was ICC = 0.472, which is in the “fair” range [35]. After excluding automated co-registration results with unacceptably high error (i.e., > 2.0 mm), inter-method reliability improved only slightly to ICC = 0.491, also in the “fair” range [35]. Inter-method reliability of all translation and rotation parameters was in the “good” to “excellent” range (i.e., all ICC > 0.74; see Table 1).
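For reference, the reported effect size follows from the standard conversion of a correlation to Cohen’s d:

$$d = \frac{2r}{\sqrt{1 - r^{2}}} = \frac{2 \times 0.541}{\sqrt{1 - 0.541^{2}}} \approx 1.29$$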
Median co-registration error for automated co-registration with de-faced MRIs was 2.01 mm (IQR 1.71–2.32). The mean difference in co-registration error between manual co-registration with original MRIs and automated co-registration with de-faced MRIs was approximately 0.619 mm (SD 0.577 mm). The inter-method reliability for manual co-registration with original MRIs and automated co-registration with de-faced MRIs was ICC = 0.045, which is in the “poor” range [35].
Discussion
In our data, both manual and automated co-registration yielded generally acceptable results. The co-registration error obtained for both processes in the present study is also consistent with that of other studies. For instance, a study using bite bars to reduce motion found a mean co-registration error of 1.16 mm [5], while a study using a 3D scanner found mean error of 2.2 mm [7], and one using a 3D camera (Kinect) observed a mean error of 1.62 mm [6]. Despite the ready availability of co-registration error metrics, reporting of these metrics in MEG studies has not yet become standard practice [36, 37].
The inter-method reliability results, in the “fair” range for co-registration error and the “good” to “excellent” range for translation and rotation parameters, suggest that the outputs of the manual and automated co-registration processes applied in this study are similar. That is, despite the extensive training and time requirements of manual co-registration, the results of the manual and automated co-registration procedures were in agreement, both for co-registration error and for the translations and rotations applied to align the MEG headshape points and the MRI head surface. However, based on the results of the present study, automated co-registration using de-faced MRIs should be viewed with some caution.
It is worth noting that the MRI scans included in the present study appear to have been relatively artifact-free. Data from participants with common sources of susceptibility artifact such as braces, permanent retainers, other dental work, and certain hair products [38] would likely result in distortions of the head surface generated from the T1-weighted MRI, requiring greater attention and the potential for manual intervention during co-registration.
Conclusion
Until devices capable of collecting simultaneous MRI and MEG data become commercially available [39], co-registration will remain a limiting factor in the localization accuracy of MEG data [10–14]. Because reporting of co-registration error is not yet required by best-practice guidelines for MEG [36], adoption has been slow. The implementation of procedures to estimate co-registration error in analysis packages such as MNE-python [18] may help to accelerate this. Our results suggest that in many cases a simple automated process performed using freely-available, open-source software can co-register MEG and MRI data with results similar to those achieved by manual co-registration, avoiding the time and training requirements of manual procedures.
Ethical approval and informed consent statement
All study protocols were approved by the University of New Mexico Institutional Review Board (http://irb.unm.edu). All procedures were carried out in accordance with the relevant guidelines and regulations. Documented informed consent was obtained from all participants. Documented consent for the publication of the photograph presented in Fig 1 and the rendering presented in Fig 2 was obtained from the individual pictured, who was not a participant in this research study.
References
- 1. Lü Z-L, Williamson SJ. Spatial extent of coherent sensory-evoked cortical activity. Exp Brain Res. 1991;84(2):411–6. pmid:2065748
- 2. Bagić AI. Disparities in clinical magnetoencephalography practice in the United States: a survey-based appraisal. J Clin Neurophysiol Off Publ Am Electroencephalogr Soc. 2011 Aug;28(4):341–7.
- 3. Hämäläinen M. Anatomical correlates for magnetoencephalography: integration with magnetic resonance images. Clin Phys Physiol Meas. 1991 Jan;12(A):29–32.
- 4. CoordinateSystems—FreeSurfer Wiki [Internet]. [cited 2019 Dec 18]. Available from: https://surfer.nmr.mgh.harvard.edu/fswiki/CoordinateSystems
- 5. Adjamian P, Barnes GR, Hillebrand A, Holliday IE, Singh KD, Furlong PL, et al. Co-registration of magnetoencephalography with magnetic resonance imaging using bite-bar-based fiducials and surface-matching. Clin Neurophysiol. 2004 Mar;115(3):691–8. pmid:15036065
- 6. Vema Krishna Murthy S, MacLellan M, Beyea S, Bardouille T. Faster and improved 3-D head digitization in MEG using Kinect. Front Neurosci [Internet]. 2014 Oct 28 [cited 2015 Aug 21];8. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4211394/
- 7. Bardouille T, Krishnamurthy SV, Ghosh Hajra S, D’Arcy RCN. Improved Localization Accuracy in Magnetic Source Imaging Using a 3-D Laser Scanner. IEEE Trans Biomed Eng. 2012 Dec;59(12):3491–7. pmid:23033325
- 8. Dalal SS, Rampp S, Willomitzer F, Ettl S. Consequences of EEG electrode position error on ultimate beamformer source reconstruction performance. Front Neurosci [Internet]. 2014 [cited 2019 Oct 9];8. Available from: https://www.frontiersin.org/articles/10.3389/fnins.2014.00042/full
- 9. Acar ZA, Makeig S. Effects of Forward Model Errors on EEG Source Localization. Brain Topogr. 2013 Jan 26;26(3):378–96. pmid:23355112
- 10. Whalen C, Maclin EL, Fabiani M, Gratton G. Validation of a method for coregistering scalp recording locations with 3D structural MR images. Hum Brain Mapp. 2008;29(11):1288–301. pmid:17894391
- 11. Troebinger L, López JD, Lutti A, Bradbury D, Bestmann S, Barnes G. High precision anatomy for MEG. NeuroImage. 2014 Feb 1;86:583–91. pmid:23911673
- 12. Hillebrand A, Barnes GR. The use of anatomical constraints with MEG beamformers. NeuroImage. 2003 Dec;20(4):2302–13. pmid:14683731
- 13. Hillebrand A, Barnes GR. Practical constraints on estimation of source extent with MEG beamformers. NeuroImage. 2011 Feb 14;54(4):2732–40. pmid:20969964
- 14. Chang W-T, Ahlfors SP, Lin F-H. Sparse current source estimation for MEG using loose orientation constraints. Hum Brain Mapp. 2013;34(9):2190–201. pmid:22438263
- 15. Fischl B. FreeSurfer. NeuroImage. 2012;62(2):774–81. pmid:22248573
- 16. Hämäläinen M. MNE [Internet]. 2001. Available from: http://www.nmr.mgh.harvard.edu/martinos/userInfo/data/MNE_register/index.php
- 17. Besl PJ, McKay HD. A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell. 1992 Feb;14(2):239–56.
- 18. Gramfort A, Luessi M, Larson E, Engemann D, Strohmeier D, Brodbeck C, et al. MNE software for processing MEG and EEG data. NeuroImage. 2014;86:446–60. pmid:24161808
- 19. Friston K, Ashburner J, Kiebel S, Nichols T, Penny W, editors. Statistical Parametric Mapping: The Analysis of Functional Brain Images [Internet]. London: Academic Press; 2007. Available from: http://www.sciencedirect.com/science/article/pii/B9780123725608500000
- 20. Oostenveld R, Fries P, Maris E, Schoffelen J-M. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput Intell Neurosci. 2011;2011:1–9.
- 21. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Comput Intell Neurosci. 2011;2011:1–13.
- 22. Dalal S, Zumer J, Agrawal V, Hild K, Sekihara K, Nagarajan S. NUTMEG: A Neuromagnetic Source Reconstruction Toolbox. Neurol Clin Neurophysiol NCN. 2004 Nov 30;2004:52. pmid:16012626
- 23. Shrout P, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86:420–8. pmid:18839484
- 24. Greve DN, Kaufman Z. Freesurfer mkheadsurf [Internet]. FreeSurfer; 2016 [cited 2019 Nov 21]. Available from: https://github.com/freesurfer/freesurfer/blob/stable6/scripts/mkheadsurf
- 25. Evans AC, Collins DL, Mills SR, Brown ED, Kelly RL, Peters TM. 3D statistical neuroanatomical models from 305 MRI volumes. In: 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference. 1993. p. 1813–7 vol. 3.
- 26. PyDeface [Internet]. poldracklab; 2019 [cited 2019 Dec 15]. Available from: https://github.com/poldracklab/pydeface
- 27. Prior FW, Brunsden B, Hildebolt C, Nolan TS, Pringle M, Vaishnavi SN, et al. Facial Recognition From Volume-Rendered Magnetic Resonance Imaging Data. IEEE Trans Inf Technol Biomed. 2009 Jan;13(1):5–9. pmid:19129018
- 28. Mazura JC, Juluru K, Chen JJ, Morgan TA, John M, Siegel EL. Facial Recognition Software Success Rates for the Identification of 3D Surface Reconstructed Facial Images: Implications for Patient Privacy and Security. J Digit Imaging. 2012 Jun 1;25(3):347–51. pmid:22065158
- 29. Schwarz CG, Kremers WK, Therneau TM, Sharp RR, Gunter JL, Vemuri P, et al. Identification of Anonymous MRI Research Participants with Face-Recognition Software. N Engl J Med. 2019 Oct 24;381(17):1684–6. pmid:31644852
- 30. U.S. Department of Health & Human Services, Office for Civil Rights. Summary of the HIPAA Privacy Rule [Internet]. Available from: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html
- 31. An act to accelerate the discovery, development, and delivery of 21st century cures, and for other purposes. Public Law 114–255, Dec 13, 2016. Available from: https://www.govinfo.gov/app/details/PLAW-114publ255
- 32. Hrynaszkiewicz I, Norton ML, Vickers AJ, Altman DG. Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. BMJ. 2010 Jan 29; 340. Available from: https://www.bmj.com/content/340/bmj.c181
- 33. Van Essen DC, Smith SM, Barch DM, Behrens TEJ, Yacoub E, Ugurbil K. The WU-Minn Human Connectome Project: An overview. NeuroImage. 2013 Oct 15;80:62–79. pmid:23684880
- 34. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Routledge Academic; 1988.
- 35. Cicchetti DV, Sparrow SA. Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. Am J Ment Defic. 1981;86:127–37. pmid:7315877
- 36. Gross J, Baillet S, Barnes GR, Henson RN, Hillebrand A, Jensen O, et al. Good practice for conducting and reporting MEG research. NeuroImage. 2013 Jan 15;65:349–63. pmid:23046981
- 37. Keil A, Debener S, Gratton G, Junghöfer M, Kappenman ES, Luck SJ, et al. Committee report: Publication guidelines and recommendations for studies using electroencephalography and magnetoencephalography. Psychophysiology. 2014 Jan 1;51(1):1–21. pmid:24147581
- 38. Chenji S, Wilman AH, Mah D, Seres P, Genge A, Kalra S. Hair product artifact in magnetic resonance imaging. Magn Reson Imaging. 2017 Jan 1;35:1–3. pmid:27590880
- 39. Vesanen PT, Nieminen JO, Zevenhoven KCJ, Dabek J, Parkkonen LT, Zhdanov AV, et al. Hybrid ultra-low-field MRI and magnetoencephalography system based on a commercial whole-head neuromagnetometer. Magn Reson Med. 2013;69(6):1795–804. pmid:22807201