
Upper limb movements can be decoded from the time-domain of low-frequency EEG

Abstract

How neural correlates of movements are represented in the human brain is of ongoing interest and has been researched with invasive and non-invasive methods. In this study, we analyzed the encoding of single upper limb movements in the time-domain of low-frequency electroencephalography (EEG) signals. Fifteen healthy subjects executed and imagined six different sustained upper limb movements. We classified these six movements and a rest class and obtained significant average classification accuracies of 55% (movement vs movement) and 87% (movement vs rest) for executed movements, and 27% and 73%, respectively, for imagined movements. Furthermore, we analyzed the classifier patterns in the source space and located the brain areas conveying discriminative movement information. The classifier patterns indicate that mainly premotor areas, primary motor cortex, somatosensory cortex and posterior parietal cortex convey discriminative movement information. The decoding of single upper limb movements is especially interesting in the context of a more natural non-invasive control of, e.g., a motor neuroprosthesis or a robotic arm in highly motor-disabled persons.

Introduction

Understanding how the human brain encodes movements is essential for the development of an intuitive and natural control of a motor neuroprosthesis or a robotic arm. Neuroprostheses based on functional electrical stimulation (FES) [1] can already be used to restore movement function in persons with spinal cord injury (SCI) [2]. These neuroprostheses often rely on a shoulder joystick as a control signal, and end users with SCI need to learn to control movements, such as grasping, with contralateral shoulder movements. However, this control would feel more natural to the end user if the movement intention were decoded with a brain-computer interface (BCI) and subsequently translated into a control signal for a neuroprosthesis or robotic arm. It has been shown with tetraplegic human subjects that invasive BCIs allow the control of a robotic arm with up to 10 degrees of freedom (DoF) [3–6]. Invasive BCIs have a better signal-to-noise ratio than non-invasive BCIs, but require extensive surgery, and their suitability for long-term use is still unclear due to the neural tissue response. Non-invasive BCIs based on electroencephalography (EEG) signals, on the other hand, do not require surgery and are easier to set up. They often rely on power modulations of sensorimotor rhythms (SMR) accompanying movement imagination (MI) (see also event-related (de)synchronization [7]), but other brain rhythms can also be exploited [8,9]. These power modulations can act as the control signals for a neuroprosthesis [2,10–12]. Using an SMR-based BCI, our group has already shown the restoration of the lateral grasp of a tetraplegic (C4/5 ASIA A) user with MI of both feet [12]. In a later study, we demonstrated the switching between different lateral grasp phases in a person with SCI (C5 ASIA A) with an SMR-based BCI and the Freehand system [10]. Recently, Rohm et al. and Kreilinger et al. [11,13] restored not only hand but also elbow functions of a tetraplegic end user (a review can be found in Rupp et al. [2]). However, SMR-based BCIs can usually only detect spatially well-separated patterns in the EEG as elicited by, for example, right hand MI vs left hand MI, although recent research suggests more spatially specific detections [14,15]. Furthermore, SMR-based BCIs usually require repetitive MI of movements. This often requires BCI users to learn unnatural MI commands, such as using repetitive left hand MI to control right hand functions [16]. For a more natural control, however, the imagined movement should be as close as possible to the actual neuroprosthesis movement. In this context, continuous decoding of movement trajectories from the time-domain of the EEG has been investigated. Bradberry et al. showed in an offline study the decoding of 3D hand velocities [17]; later, our group showed the decoding of 3D positions in a continuous movement task [18] and the decoding of imagined movement trajectories [19]. Furthermore, Agashe et al. decoded hand joint angular velocities [20], and hand movement directions have also been decoded non-invasively [21]. The current state of the art allows decoding of movement trajectories and directions from EEG; however, the low correlation with the real or intended movement prevents reliable and accurate control.

Another possibility to make neuroprosthesis or robotic arm control more natural is to decode additional information about the type or quality of an imagined movement, which has been done in the time-domain as well as in the frequency-domain of the EEG. Gu et al. found that the speed of imagined wrist movements is encoded in the time-domain in motor-related cortical potentials (MRCPs) [22–24], and Yuan et al. found such a relationship in the mu and beta rhythm with executed/imagined hand movements [25]. Jochumsen et al. [26] decoded movement force and speed from MRCPs during executed and imagined grasping movements in healthy persons, and attempted movements in stroke patients. MIs related to the same limb have also been classified based on EEG power modulations in the frequency-domain: Edelman et al. [15] classified repetitive imagined hand flexion/extension and forearm supination/pronation, and Yong and Menon showed the classification of repetitive imagined grasp and elbow movements [14]. Based on these findings, the natural control experience can be enhanced if, for example, an imagined repetitive supination of the arm is used to control the supination of a robotic arm. Furthermore, detecting different MIs related to the same limb increases the number of control possibilities compared to classical SMR-based BCIs, which often only detect left/right hand and foot MI. However, repetitive MIs are also not optimal, since one usually does not execute repetitive hand/arm movements when manipulating objects. Of special interest are therefore sustained MIs, such as a single supination. Vučković and Sepulveda showed the classification of sustained wrist extension/flexion and forearm pronation/supination MIs from the frequency-domain of the EEG in the delta and gamma bands [27,28]. Gu et al. classified imagined wrist extension and wrist rotation based on power modulations in the mu and beta bands and the rebound rate of MRCPs, but did not find any statistical difference in the rebound rate of MRCPs [23].

In this work we hypothesize that executed and imagined sustained movements of the same limb can be decoded from low-frequency time-domain signals (< 3 Hz). We applied a multiclass classification comprising six movement classes: elbow flexion/extension, forearm supination/pronation, and hand open/close. Additionally, these movements were classified against a rest class. We measured 15 healthy subjects in two separate movement execution (ME) and MI sessions. To the best of our knowledge, such a high number of different sustained movements of the same limb has not been studied before using low-frequency time-domain EEG signals. Furthermore, we show for the first time for EEG-based movement decoding the classifier patterns [29] in the source space, which allows the estimation of the brain regions exploited by the classifier. Generally, the purpose of this work is to gain a better understanding of whether and how single sustained upper limb movements are encoded in the time-domain of low-frequency EEG signals.

Methods

Subjects

We recruited 15 healthy subjects aged between 22 and 40 years with a mean age of 27 years (standard deviation 5 years). Nine subjects were female, and all the subjects except s1 were right-handed. The subjects received payment for their participation. Written informed consent was obtained from all subjects, and the study was conducted in accordance with the protocol approved by the ethics committee of the Medical University of Graz (approval number 28–108 ex 15/16).

Paradigm

Subjects sat on a chair and their right arm was fully supported by an exoskeleton with anti-gravity support (Hocoma, Switzerland) to avoid muscle fatigue, see Fig 1A (the individual in this figure has given written informed consent, as outlined in the PLOS consent form, to publish these case details).

Fig 1. Experimental setup and movements.

a: Subjects sat in a chair and executed/imagined movements according to cues presented on a computer screen in front of them. b: Subjects executed/imagined: elbow flexion, elbow extension, forearm supination, forearm pronation, hand close, and hand open.

https://doi.org/10.1371/journal.pone.0182578.g001

We measured each subject in two sessions on two different days, separated by no more than one week. The subjects performed ME in the first session and MI in the second session. The subjects performed six movement types which were the same in both sessions and comprised elbow flexion/extension, forearm supination/pronation and hand open/close; all with the right upper limb (see Fig 1B). All movements started at a neutral position: the hand half open, the lower arm extended to 120 degrees and in a neutral rotation, i.e. thumb on the inner side. In addition to the movement classes, a rest class was recorded in which subjects were instructed to avoid any movement and to stay in the starting position. In the ME session, we instructed subjects to execute sustained movements. In the MI session, we asked subjects to perform kinesthetic MI [30] of the movements done in the ME session (subjects performed one ME run immediately before the MI session to support kinesthetic MI).

The paradigm was trial-based and cues were displayed on a computer screen in front of the subjects; Fig 2 shows the trial sequence. At second 0, a beep sounded and a cross appeared on the computer screen (subjects were instructed to fixate their gaze on the cross). Afterwards, at second 2, a cue was presented on the computer screen, indicating the required task (one of the six movements or rest) to the subjects. At the end of the trial, subjects moved back to the starting position. In every session, we recorded 10 runs with 42 trials per run. We presented 6 movement classes and a rest class, and recorded 60 trials per class in a session.

Fig 2. Trial sequence.

At second 0, a cross appeared together with a beep sound; at second 2, the cue was presented and subjects executed/imagined a sustained movement or avoided any movement, respectively. After the trial, a break with a random duration of 2 s to 3 s followed.

https://doi.org/10.1371/journal.pone.0182578.g002

Recording

The EEG was measured from 61 channels covering frontal, central, parietal and temporal areas using active electrodes and four 16-channel amplifiers (g.tec medical engineering GmbH, Austria). The reference was placed on the right mastoid, the ground on AFz. We used an 8th-order Chebyshev bandpass filter from 0.01 Hz to 200 Hz and sampled at 512 Hz. Power line interference was suppressed with a notch filter at 50 Hz. In addition, we measured the arm joint angles of the exoskeleton using customized software and the finger positions with a 5DT Data Glove (5DT, USA) to determine movement onsets. Prior to each session, we measured the electrode positions with a CMS 20 EP system (Zebris Medical GmbH, Germany). The individual electrode positions were used for source imaging.

Movement onset detection

To detect movement onsets in ME sessions we used sensor data from the exoskeleton and the data glove. The elbow and wrist sensors (exoskeleton) were used to detect elbow flexion/extension and forearm pronation/supination onsets, respectively. For hand opening/closing onsets we performed a principal component analysis on the data glove sensor data and used only the first principal component for further processing. A movement was detected when the absolute difference between the sensor data and the preceding time average (from -1 s to -0.5 s) crossed a threshold. Thresholds were chosen per sensor to ensure timely detection of movement onsets and to minimize false positive detections (typically, movements were detected no more than 80 ms later than a human expert would detect them when visually inspecting the sensor data). To account for systematic detection time differences between the classes (e.g. due to different sensor thresholds and different inertias of limb parts), we time-shifted the mean value of the detection times of each class toward the mean value of all classes. Thus, on average, the movement onsets (with respect to the cue) were the same for all movement classes. For the classes without overt movements (i.e., the rest class and the MI classes), we assumed a virtual movement onset. This virtual movement onset was individually calculated for each subject as the average movement onset of the movement classes. In this manner, all classes remained comparable. A minimal sketch of the threshold-crossing detection is shown below.
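
The following sketch illustrates the threshold-crossing onset detection described above; the function name, the per-sensor threshold value, and the assumption that at least one second of pre-cue data is available are ours, not taken from the authors' implementation.

```python
import numpy as np

def detect_onset(sensor, fs, threshold, cue_idx):
    """Return the index of the first sample after the cue at which the sensor
    deviates from its preceding baseline average by more than `threshold`.

    sensor    : 1-D array of sensor samples for one trial
    fs        : sampling rate in Hz
    threshold : sensor-specific threshold (hypothetical value, tuned per sensor)
    cue_idx   : sample index of the cue; assumes >= 1 s of data before it
    """
    half_win = int(0.5 * fs)  # baseline window spans -1 s to -0.5 s
    for t in range(cue_idx, len(sensor)):
        baseline = sensor[t - 2 * half_win:t - half_win].mean()
        if abs(sensor[t] - baseline) > threshold:
            return t          # first suprathreshold sample taken as onset
    return None               # no onset detected in this trial
```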

Preprocessing

We used EEGLAB [31] to detect and remove noisy channels (1.4 channels per subject on average) based on the joint probability of each channel. We downsampled the data to 256 Hz to save computation time. Thereafter, we marked artefacts by band-pass filtering the data (0.3 Hz to 70 Hz, 4th-order zero-phase Butterworth filter) and using EEGLAB to find (1) values above/below thresholds of 200 μV and -200 μV, respectively, (2) trials with abnormal joint probabilities, and (3) trials with abnormal kurtosis. Methods (2) and (3) used a threshold of 5 times the standard deviation of their respective statistic to detect artefact-contaminated trials. The artefact-contaminated trials were only marked for removal at this stage, not yet removed. Afterwards, we filtered the original (unfiltered) 256 Hz EEG data with a zero-phase 4th-order Butterworth filter between 0.3 Hz and 3 Hz and re-referenced the data to a common average reference. Subsequently, we discarded the trials previously marked as artefact contaminated.
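
A minimal sketch of the final filtering and re-referencing steps, assuming the data are already downsampled to 256 Hz and held in a [channels x samples] array (the EEGLAB-based artefact marking is not reproduced here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_and_rereference(eeg, fs=256.0):
    """Zero-phase band-pass filtering and common average referencing.

    eeg : [channels x samples] array of EEG data
    """
    # 4th-order Butterworth band-pass, 0.3-3 Hz; filtfilt applies it
    # forward and backward, making the overall filter zero-phase
    b, a = butter(4, [0.3, 3.0], btype='bandpass', fs=fs)
    eeg = filtfilt(b, a, eeg, axis=1)
    # common average reference: subtract the instantaneous mean over channels
    return eeg - eeg.mean(axis=0, keepdims=True)
```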

Classification

The preprocessed signals were classified with a shrinkage-regularized linear discriminant analysis (sLDA) classifier [32,33], which was embedded in the discriminative spatial pattern (DSP) framework [29] described in the next section.

We conducted two types of classifications: first, we classified all 6 movement classes against each other. Second, we aggregated all movement classes into one class and classified it against the rest class. We refer to these classification types as mov-vs-mov and mov-vs-rest, respectively. In the mov-vs-rest classification we randomly removed trials from the aggregated movement class to ensure equal trial numbers in both classes. As mov-vs-mov was a multiclass classification comprising 6 classes, we applied a 1-vs-1 classification strategy yielding 15 binary classifiers. To validate the classification we employed a 10x10-fold cross-validation.

We employed two classification approaches using EEG data from: (1) single time points and (2) time windows of different lengths (0.2–1 s). Single time point classification gives a higher time resolution of the accuracy course and is more suitable for analyzing the information distribution over time. Furthermore, the corresponding classifier patterns can be readily obtained with the DSP method described in the next section. The time window based classification, on the other hand, is expected to increase the classification accuracy. Because each approach has its benefits, we analyzed both in this work and refer to them as “single time point” and “time window” based classifications.
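
For illustration, the mov-vs-mov classification pipeline could be sketched as follows; the random placeholder data, the use of scikit-learn, and the Ledoit-Wolf shrinkage ('auto') standing in for the regularization of [32,33] are our assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.multiclass import OneVsOneClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# placeholder data: 6 classes x 60 trials, 61 channel amplitudes at one
# time point (for time windows, features would be channels x time lags)
X = np.random.randn(360, 61)
y = np.repeat(np.arange(6), 60)

# shrinkage-regularized LDA (sLDA)
slda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')

# 1-vs-1 strategy: 15 binary classifiers for the 6 movement classes
clf = OneVsOneClassifier(slda)

# 10x10-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
print("mov-vs-mov accuracy:", cross_val_score(clf, X, y, cv=cv).mean())
```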

Classifier patterns

We calculated the classifier patterns based on the discriminative spatial pattern (DSP) method [29]. This method allows the calculation of an (s)LDA classifier and the corresponding patterns simultaneously. An LDA can be formulated as an optimization problem of Fisher's criterion and subsequently as a generalized eigenvalue problem. When this generalized eigenvalue problem is solved for the eigenvector corresponding to the largest eigenvalue, one obtains the LDA weight vector. DSP also solves this generalized eigenvalue problem for the remaining eigenvectors, yielding a weight matrix. This weight matrix can then be inverted to obtain the pattern.

Let x(t) be a vector of the EEG channels at time t with dimension [channels x 1], w_t the computed LDA weight vector at time t with dimension [channels x 1], and the scalar y(t) the projection of the original EEG channels to the LDA space. Then the LDA can be formulated as

y(t) = w_t^T x(t),    (1)

and w_t corresponds to the eigenvector with the largest eigenvalue. With DSP we get a weight matrix W_t instead, whose first column (when sorted by eigenvalue) corresponds to the LDA solution, and y(t) becomes a [channels x 1] vector:

y(t) = W_t^T x(t).    (2)

This weight matrix can be inverted to obtain the pattern a_t corresponding to the LDA weights:

A_t = (W_t^T)^-1,    (3)

a_t = first column of A_t.    (4)

In fact, we obtained an sLDA weight vector, because we calculated the within-class scatter matrix (a factor in Fisher's criterion) using shrinkage regularization. We calculated the patterns for every time step in the time window from -0.4 s to 0.4 s relative to the movement onset (indicated by the subscript t).

In general terms, a pattern explains how a source, e.g. a specific brain area or independent component, is projected onto the channels. Notably, “source” can refer to two different concepts: first, the sources constituting a classifier (manifesting as a pattern) in the channel space (i.e. the scalp potential distribution); second, the brain sources found with source imaging methods, i.e. voxels. This section refers solely to patterns, and the next section shows how source imaging was applied to transform these patterns to the source (voxel) space. Each element in a pattern vector shows with what impact a source is projected onto the associated channel. It is important to bear in mind that a pattern itself does not have any physical representation, i.e. it has no physical unit. However, a common physical unit is a necessity when averaging and interpreting patterns. If we multiply (scale) a source with its pattern, we get the projection to the channel space in the same physical unit as the source, e.g. if the source corresponds to Volt, the resulting scaled pattern corresponds to Volt too. In the case of LDA, however, we do not have a single source but two classes in the channel space projected into a one-dimensional LDA space. Thus, we are interested in the distance between the two classes in the LDA space. In our scaling approach we use the distance between the two class means in the LDA space as a scaling factor for a_t. Let μ_0,t and μ_1,t be vectors with dimension [channels x 1] representing the class means of the two classes in the channel space; then the scaled pattern can be calculated as:

a_t,scaled = a_t (w_t^T μ_1,t − w_t^T μ_0,t).    (5)

With this scaling we get a pattern which has the same physical unit as the original channel space. The pattern shows the differences of the class means in the original space as exploited by the LDA classifier. We then transformed this pattern from the channel space into the source space using standardized low-resolution brain electromagnetic tomography (sLORETA) [34], see the next section for more details.
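
A minimal numerical sketch of the DSP-based pattern computation of Eqs (1)–(5) for a single time point; the simple convex shrinkage of the within-class scatter is a stand-in for the estimator actually used:

```python
import numpy as np
from scipy.linalg import eigh, inv

def dsp_scaled_pattern(X0, X1, shrink=0.1):
    """Compute the scaled LDA pattern for one time point via DSP.

    X0, X1 : [trials x channels] EEG amplitudes of the two classes
    shrink : shrinkage amount (placeholder for the estimator in the paper)
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    d = (mu1 - mu0)[:, None]
    Sb = d @ d.T                                     # between-class scatter
    Sw = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)   # within-class scatter
    n = Sw.shape[0]
    Sw = (1 - shrink) * Sw + shrink * (np.trace(Sw) / n) * np.eye(n)
    # generalized eigenvalue problem of Fisher's criterion; eigh returns
    # eigenvalues in ascending order, so reverse to sort descending
    _, W = eigh(Sb, Sw)
    W = W[:, ::-1]                  # first column = (s)LDA weight vector
    A = inv(W.T)                    # pattern matrix, Eq (3)
    a = A[:, 0]                     # pattern of the LDA weights, Eq (4)
    return a * (W[:, 0] @ (mu1 - mu0))  # scale by class distance, Eq (5)
```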

As we applied a 1-vs-1 classification strategy, we obtained several binary classifiers and therefore also several patterns (e.g. a supination vs pronation pattern). To obtain the final classifier patterns we grouped the patterns according to the two classification types: movement vs movement patterns (mov-vs-mov) and movement vs rest patterns (mov-vs-rest). Patterns belonging to a group were averaged using their absolute values. We took the absolute values because a pattern expresses the difference between two classes, and its sign depends on the order of the classes and should therefore not be considered. Finally, we averaged the patterns over non-overlapping 100 ms time segments located between -0.4 s and 0.4 s relative to the movement onset, yielding 9 patterns per classification type for each session and subject. Additionally, we time-averaged over the whole -0.4 s to 0.4 s period. Fig 3 summarizes the procedure.

Fig 3. Calculation of a mov-vs-mov pattern.

Patterns are calculated from each 1-vs-1 classifier, subsequently scaled and transformed into the source space; their absolute values are then taken and averaged over patterns. Finally, the patterns are averaged over non-overlapping time segments. The same processing pipeline applies to the mov-vs-rest patterns.

https://doi.org/10.1371/journal.pone.0182578.g003

Source space

EEG source imaging methods allow inferring the underlying sources in the brain from the EEG (i.e. the scalp potential distribution). The EEG signals are attributed to the “channel space”, whereas the inferred brain sources are attributed to the “source space” and are often estimated as (normalized) current densities [35].

We transformed the LDA patterns (obtained from single EEG time points) from the channel space into the source space to increase the spatial resolution of the obtained patterns. For this purpose, we used the software Brainstorm [36]. A desirable property of scaled LDA patterns compared to LDA weights is that they correspond to measured scalp potentials and can be subjected to source imaging methods similarly to EEG channels. Boundary element head models were calculated based on the subjects' individual electrode positions and the ICBM152 template head model (ICBM152 is a head model based on a non-linear average of 152 subjects). We estimated the full noise covariance matrices based on the EEG data from the period 0.5 to 2 s after trial start and applied shrinkage regularization [37]. Finally, we computed 15002 brain sources, i.e. voxels, with sLORETA [34] (the dipole orientations were unconstrained).

Classifier pattern statistics

Group-level statistics were computed by nonparametric permutation testing [38,39] of the classifier patterns in the source space. The statistical testing was done separately for each ME/MI and mov-vs-mov/mov-vs-rest pattern. Besides the actual classifier patterns, we calculated random classifier patterns by shuffling the class labels once for each subject. As a test statistic, we used the difference between the actual classifier patterns and the random classifier patterns averaged over all subjects. We obtained the permutation distribution of the test statistic by enumerating all 2^15 = 32768 actual/random classifier pattern combinations. For that, we used the maximum over voxels in each enumeration step to account for multiple comparisons (in the case of 100 ms time segments, we used the maximum over the whole -0.4 s to 0.4 s period). We then established a threshold corresponding to α = 0.05. All voxels with a test statistic exceeding the threshold were considered significant.
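
A minimal sketch of this max-statistic permutation test, assuming the actual and random patterns are stored as [subjects x voxels] arrays (15 x 15002 in this study); the full 2^15 enumeration is written out directly:

```python
import numpy as np
from itertools import product

def permutation_threshold(actual, random_, alpha=0.05):
    """Max-statistic permutation test on actual-vs-random pattern differences.

    actual, random_ : [subjects x voxels] classifier patterns
    """
    diff = actual - random_          # per-subject differences
    n_subjects = diff.shape[0]
    max_stats = []
    # enumerate all 2^n assignments; a sign of -1 swaps the roles of the
    # actual and the random pattern for that subject
    for signs in product([1.0, -1.0], repeat=n_subjects):
        perm = diff * np.asarray(signs)[:, None]
        max_stats.append(perm.mean(axis=0).max())  # max over voxels -> FWE control
    threshold = np.quantile(max_stats, 1.0 - alpha)
    significant = diff.mean(axis=0) > threshold
    return threshold, significant
```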

Results

Classification accuracies

Single time point classification. The ME classification accuracies are shown in Fig 4A (mov-vs-mov) and Fig 4B (mov-vs-rest). The mov-vs-mov average classification accuracy over all subjects reached a maximum of 42% (9% standard deviation) at 0.13 s after movement onset, and the mov-vs-rest average classification accuracy reached a maximum of 81% (7% standard deviation) at movement onset (0.0 s). Accuracies were calculated from -2 s to 2 s relative to the movement onset with a time resolution of 1/16 s. Classification accuracies are statistically significant above 24% (mov-vs-mov) and 65% (mov-vs-rest) for a single subject, and above 18% (mov-vs-mov) and 54% (mov-vs-rest) for the average (α = 0.05, adjusted Wald interval [40,41], Bonferroni corrected for the length of the analyzed time window). We calculated the significance levels based on the average number of trials available after artefact removal. In mov-vs-mov and mov-vs-rest all subjects reached a significant classification accuracy; see Table 1 for the individual maximum classification accuracies. The mov-vs-mov averaged classification accuracy becomes significant at -0.94 s and stays significant until the end of the analyzed time window (2 s); the mov-vs-rest averaged classification accuracy is significant between -1.0 s and 1.69 s, see Fig 4A and 4B.
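
For reference, the significance level from the adjusted Wald interval could be computed as sketched below; the two-sided interval and the trial counts in the usage comment are our assumptions (the paper derives them from the average number of trials after artefact removal):

```python
from scipy.stats import norm

def wald_significance_level(p_chance, n_trials, alpha=0.05, n_tests=1):
    """Upper limit of the adjusted Wald confidence interval [40,41] around
    chance level, Bonferroni corrected for n_tests tested time points;
    accuracies above this level are considered significant."""
    z = norm.ppf(1.0 - (alpha / n_tests) / 2.0)    # two-sided interval assumed
    hits = p_chance * n_trials                     # expected hits at chance
    p_adj = (hits + z**2 / 2.0) / (n_trials + z**2)
    half_width = z * (p_adj * (1.0 - p_adj) / (n_trials + z**2)) ** 0.5
    return p_adj + half_width

# hypothetical usage for the 6-class case with hypothetical trial numbers:
# wald_significance_level(1/6, 6 * 55, alpha=0.05, n_tests=65)
```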

Fig 4. ME classification results for the single time point classification.

a: mov-vs-mov classification accuracies of all 15 subjects and the average (thick black line). Time point 0 s corresponds to the movement onset. b: mov-vs-rest classification accuracies. The horizontal solid line in a and b is the chance level; the horizontal dashed line is the significance level for the average. c: mov-vs-mov confusion matrix (occurrences sum to 100%) with classes elbow flexion (Fle), elbow extension (Ext), forearm supination (Sup), forearm pronation (Pro), hand close (Clo), and hand open (Opn). d: mov-vs-rest confusion matrix. Confusion matrices were calculated at the time point with the highest average classification accuracy (mov-vs-mov: 0.13 s; mov-vs-rest: 0.0 s).

https://doi.org/10.1371/journal.pone.0182578.g004

Table 1. Maximum ME classification accuracies for the single time point classification.

https://doi.org/10.1371/journal.pone.0182578.t001

Confusion matrices are shown in Fig 4C (mov-vs-mov) and Fig 4D (mov-vs-rest). They correspond to the time points at which the average classification accuracies reached a maximum. The confusion matrices show relative numbers, i.e. the occurrences sum up to 100%. If a movement was wrongly predicted, it was often predicted as a movement involving the same joint, see Fig 4C. In other words, movements involving different joints (e.g. open vs pronation) are better distinguishable than movements involving the same joint (e.g. open vs close).

Fig 5 shows the MI classification accuracies. The mov-vs-mov average classification accuracy over all subjects reached a maximum of 23% (3% standard deviation) at -0.13 s; the mov-vs-rest average classification accuracy reached a maximum of 68% (8% standard deviation) at 0.06 s. Accuracies are significant above 24% (mov-vs-mov) and 65% (mov-vs-rest) for a single subject, and above 18% (mov-vs-mov) and 54% (mov-vs-rest) for the average (α = 0.05, adjusted Wald interval, Bonferroni corrected for the length of the analyzed time window). Ten subjects reached a significant classification accuracy in mov-vs-mov and 15 subjects in mov-vs-rest, see Table 2. The mov-vs-mov average classification accuracy is significant between -0.56 s and 0.81 s; the mov-vs-rest average classification accuracy is significant between -0.69 s and 0.81 s, see Fig 5A and 5B.

Fig 5. MI classification results for the single time point classification.

a: mov-vs-mov classification accuracies of all 15 subjects and the average (thick black line). Time point 0 s corresponds to the movement onset. b: mov-vs-rest classification accuracies. The horizontal solid line in a and b is the chance level; the horizontal dashed line is the significance level for the average. c: mov-vs-mov confusion matrix (occurrences sum to 100%). d: mov-vs-rest confusion matrix. Confusion matrices were calculated at the time point with the highest average classification accuracy (mov-vs-mov: -0.13 s; mov-vs-rest: 0.06 s).

https://doi.org/10.1371/journal.pone.0182578.g005

Table 2. Maximum MI classification accuracies for the single time point classification.

https://doi.org/10.1371/journal.pone.0182578.t002

The averaged maximum mov-vs-mov accuracies are 1.8 times higher for ME than for MI, and the averaged maximum mov-vs-rest accuracies are 1.2 times higher for ME than for MI (cf. Table 1 and Table 2). The ME and MI accuracies are significantly different for mov-vs-mov and mov-vs-rest (p < 5×10^-4, two-sided Wilcoxon signed-rank test).

MI confusion matrices are shown in Fig 5C (mov-vs-mov) and Fig 5D (mov-vs-rest). They show qualitatively similar patterns as in ME, i.e. MIs involving different joints are better discriminable than MIs involving the same joint.

Time window classification. Besides classifying single time points, we also classified time windows of the EEG. The analyzed time windows ranged from 200 ms to 1 s, and features were taken at 100 ms intervals within these time windows (see Table 3). Fig 6 shows the subjects' averaged ME/MI classification accuracies for the different window lengths as well as the single time point classification (relative to the movement onset) for comparison. The maximum averaged classification accuracies, the respective time points and the standard deviations are listed in Table 4 (ME) and Table 5 (MI), respectively. Accuracies are significant above 18% (ME/MI mov-vs-mov) and 54% (ME/MI mov-vs-rest) (α = 0.05, adjusted Wald interval, Bonferroni corrected for the length of the analyzed time window).

Fig 6. Classification accuracies for different window lengths.

Time point 0 s corresponds to the movement onset. The horizontal solid lines are the chance level; the horizontal dashed lines are the significance levels. a: subject averaged ME mov-vs-mov classification accuracies. b: subject averaged ME mov-vs-rest classification accuracies. c: subject averaged MI mov-vs-mov classification accuracies. d: subject averaged MI mov-vs-rest classification accuracies.

https://doi.org/10.1371/journal.pone.0182578.g006

Table 4. ME classification accuracies for different window lengths.

https://doi.org/10.1371/journal.pone.0182578.t004

Table 5. MI classification accuracies for different window lengths.

https://doi.org/10.1371/journal.pone.0182578.t005

A one-way repeated measures ANOVA was conducted to compare the effect of the window length on the classification accuracy (at the time point of maximum average classification accuracy). There was a statistically significant effect of window length for ME mov-vs-mov [F(5,70) = 59.2, p_GG = 7.0e-11], ME mov-vs-rest [F(5,70) = 7.1, p_GG = 0.002], MI mov-vs-mov [F(5,70) = 21.6, p = 5.0e-13], and MI mov-vs-rest [F(5,70) = 3.5, p_GG = 0.02]. Mauchly's test indicated that the sphericity assumption had been violated for ME mov-vs-mov, ME mov-vs-rest and MI mov-vs-rest (p < 0.05), and a Greenhouse-Geisser correction was applied in these cases (indicated by p_GG). Post hoc tests with Dunn & Šidák's method were performed between groups, and the results are shown in Fig 7.

Fig 7. Post hoc tests with Dunn & Šidák's method between window lengths.

A star indicates a statistically significant difference (p < 0.05). a: ME mov-vs-mov. b: ME mov-vs-rest. c: MI mov-vs-mov. d: MI mov-vs-rest.

https://doi.org/10.1371/journal.pone.0182578.g007

Motor-related cortical potentials

The grand-average MRCPs for all movements and the rest condition are shown in Fig 8 (ME) and Fig 9 (MI). MRCPs are aligned to the movement onset for ME and to the virtual movement onset for MI. We show the grand-average MRCPs for channels FCz, C3, Cz, and C4, Laplace filtered to increase the spatial resolution; the preprocessing was otherwise the same as for the classification. Laplace filtering was done by subtracting the mean voltage of the four surrounding orthogonal electrodes from the center electrode [42]. Generally, ME MRCPs are more pronounced than MI MRCPs (especially on Cz), and the rest condition shows smaller but otherwise similarly shaped responses as the movements. The MRCPs show the largest response on Cz (ME) and on FCz (MI), respectively.
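
A minimal sketch of the Laplace filtering step; the neighbour assignment for Cz is an assumption about the montage, not taken from the paper:

```python
import numpy as np

# assumed mapping of a center electrode to its four orthogonal neighbours
NEIGHBOURS = {'Cz': ['FCz', 'CPz', 'C1', 'C2']}

def laplace_filter(eeg, ch_names, center):
    """Subtract the mean of the four surrounding orthogonal electrodes
    from the center electrode [42].

    eeg      : [channels x samples] array
    ch_names : list of channel names matching the rows of `eeg`
    center   : name of the center electrode, e.g. 'Cz'
    """
    idx = {name: i for i, name in enumerate(ch_names)}
    surround = eeg[[idx[n] for n in NEIGHBOURS[center]]].mean(axis=0)
    return eeg[idx[center]] - surround
```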

Fig 8. Grand-average MRCPs and respective standard errors during ME.

https://doi.org/10.1371/journal.pone.0182578.g008

Fig 9. Grand-average MRCPs and respective standard errors during MI.

https://doi.org/10.1371/journal.pone.0182578.g009

Fig 10 shows the ME MRCPs averaged over all subjects with respect to their joint movements. MRCPs on Cz for forearm supination/pronation and elbow flexion/extension are more pronounced than for hand close/open. Elbow and forearm supination/pronation movements have similar MRCPs prior to movement onset and show differences in the latency of their negative peak (around 50 ms and 300 ms, respectively). Differences in the MRCPs of movements belonging to the same joint are also observable (see S1 Fig). The negative peak at Cz in hand opening is 0.3 μV larger than in hand closing. Almost no differences in latency or amplitude can be found between forearm pronation and supination. Elbow flexion leads to earlier (around 60 ms) and weaker (about 0.3 μV) MRCPs at Cz than elbow extension. Such a detailed comparison of the MI MRCPs between conditions is not reasonable, since the real imagined movement onset cannot be determined.

Fig 10. Grand-average ME MRCPs grouped with respect to their joint movements and respective standard errors.

Shown are the averages of elbow extension/flexion, forearm supination/pronation and hand opening/closing.

https://doi.org/10.1371/journal.pone.0182578.g010

Classifier patterns

We calculated 9 classifier patterns per subject, per classification type (mov-vs-mov and mov-vs-rest), and per movement condition (ME, MI), ranging from -0.4 s to 0.4 s relative to movement onset. Additionally, we calculated classifier patterns averaged over this time period. We subjected these patterns to statistical analysis, as described in the Methods section, and show them in Fig 11. The figure shows the group averages of the differences between classifier patterns and random classifier patterns (i.e. reference patterns); only significant voxels are colored.

Fig 11. Classifier patterns.

Shown are patterns between -0.4 s and 0.4 s relative to movement onset (a-d) and averaged over this time period (e-h). a and e: mov-vs-mov patterns during ME. b and f: mov-vs-rest patterns during ME. c and g: mov-vs-mov patterns during MI. d and h: mov-vs-rest patterns during MI. Only significant voxels are colored. Blue corresponds to zero, red to the maximum value.

https://doi.org/10.1371/journal.pone.0182578.g011

Immediately before movement onset (around -100 ms), the ME mov-vs-mov patterns (see Fig 11A) are prominent on premotor areas (PM). Subsequently (0–100 ms), patterns intensify on the contralateral primary motor cortex (M1), the contralateral somatosensory cortex (S1) and the posterior parietal cortex (PPC). After 300 ms, patterns remain on M1 and S1. Patterns are briefly observable on an ipsilateral temporal area (100 ms). In the ME mov-vs-rest condition (see Fig 11B), patterns appear at movement onset (0 ms) contralaterally on PM, M1, S1 and PPC. The pattern on PM vanishes 100 ms after movement onset, and the remaining patterns vanish almost entirely 200 ms after movement onset. S1 Video and S2 Video show the progression of the mov-vs-mov and mov-vs-rest patterns. The mov-vs-mov MI patterns are below the significance threshold (see Fig 11C). The mov-vs-rest MI patterns arise on central motor cortex areas at movement onset (see Fig 11D).

The time-averaged ME patterns of mov-vs-mov and mov-vs-rest are similar and are located on PM, M1, S1 and PPC (see Fig 11E). The time-averaged MI mov-vs-mov patterns are faintly located on central areas (see Fig 11G), whereas the mov-vs-rest patterns have a more distinct representation on M1 and S1.

Discussion

We show in this work for the first time the successful classification of six different movements of the right arm from low-frequency time-domain EEG. Significant classification accuracies were reached during movement execution as well as during movement imagination. This demonstrates that single and non-repetitive movements of the same limb can be decoded from time-domain EEG signals and differentiated from each other. However, while the ME classification accuracies are promising, the MI classification accuracies are rather low. This may be because the ME EEG signals were time-locked to the actual movement onset, whereas the MI EEG signals were time-locked to a virtual movement onset (which corresponded to the average ME onset of each subject). Thus, the ME onset was more accurate, and exact time-locking is important for classifying in the time-domain as the underlying signals change over time. One could overcome this issue by defining the virtual movement onset relative to occurring MRCPs [43] instead of as a fixed time delay. Another explanation may be that ME produces more pronounced brain patterns than MI in the time-domain. This is indicated by studies analyzing MRCPs [43,44]. Interestingly, Sugata et al. did not find such a dissimilarity in classification accuracy between ME and MI in a magnetoencephalography (MEG) study using comparable features in grasping, pinching and elbow flexion [45]. Wang et al. also obtained more comparable classification accuracies between ME and MI in an MEG-based study employing a target decoding paradigm [46]. Besides that, attempted movements may produce more pronounced brain patterns than MI and therefore yield higher classification accuracies. They may cause a stronger activation of the motor system, as indicated by Blokland et al., where classification accuracies in tetraplegic individuals were higher with attempted movements than with MI using spectral features [47]. Furthermore, extensive user training could improve the expression of distinct brain patterns. User training can be highly beneficial in SMR-based BCIs [12,48]; however, it is still unclear if this is also true for time-domain signals in the context of movement decoding. Moreover, the obtained confusion matrices indicate that movements involving different joints (i.e. different muscle groups) are more discriminable than movements involving the same joint. Consequently, for future applications it would be necessary to select the subset of classes which works best for BCI users but still allows a natural control. Furthermore, a hierarchical classifier concept may be beneficial: one meta classifier classifies movements of different joints (e.g. hand movement vs elbow movement), and subjacent classifiers classify movements of the same joint (e.g. hand open vs hand close).

A simple approach to improve the classification accuracy is to use more temporal information when classifying the EEG. Therefore, we also classified time windows instead of single time points of the EEG, and analyzed the effect of the time window length. The results indicate that a time window of 0.6 s length is sufficient to reach the maximum possible classification accuracy (w.r.t. the methods used in this paper); longer time windows do not improve the classification performance and increase the computational load. Furthermore, ME classification profits more from a time window based approach than MI in the case of mov-vs-mov. The improvement in classification performance can be due to the temporal spread of the discriminative information of the underlying signals (i.e. MRCPs), which is better captured with a time window based classification. Another reason may be that a time window based classification allows fine-tuning of the employed 0.3–3 Hz bandpass filter. An LDA classifier which uses data from more than one time point is essentially a finite impulse response (FIR) filter with trainable filter coefficients, and can shrink or enlarge the 0.3–3 Hz passband to maximize the extracted discriminative information (see the sketch below).
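
To make the FIR-filter analogy concrete, time window features can be built by stacking time-lagged samples per channel, so that the LDA weights across lags act as per-channel filter coefficients. A minimal sketch under assumed indexing conventions:

```python
import numpy as np

def window_features(eeg, onset_idx, fs=256.0, win=0.6, step=0.1):
    """Stack EEG samples taken at 100 ms intervals within a `win`-second
    window ending at `onset_idx` into one feature vector; an LDA trained
    on these features weights each channel at each lag, i.e. it acts as
    a trainable FIR filter per channel.

    eeg : [channels x samples] array of one trial; assumes onset_idx is
          large enough that all lagged indices fall inside the trial
    """
    lags = np.arange(0.0, win + 1e-9, step)    # 0, 0.1, ..., 0.6 s
    idx = (onset_idx - lags * fs).astype(int)  # sample indices of the lags
    return eeg[:, idx].ravel()                 # [channels * n_lags] features
```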

Earlier, we pointed out some possibilities to boost the MI accuracy. However, a study conducted by Lacourse et al. [49] casts doubt on whether the MI accuracy in healthy subjects is a good predictor of the performance in SCI subjects. They found that MRCPs during attempted and imagined hand movements in tetraplegic subjects are more similar than in an able-bodied control group (there with executed and imagined movements). Furthermore, they found that MRCPs differ between tetraplegic and able-bodied subjects. This challenges the usefulness of MI in able-bodied subjects for predicting the classification performance in SCI subjects. Nevertheless, our results show the general applicability in able-bodied subjects and point out the need for further research in SCI subjects with attempted movements.

Our work adds to the work of Vučković and Sepulveda, who have shown that wrist extension/flexion and forearm pronation/supination can be decoded from the frequency-domain of the EEG [27,28] (especially from the delta band). Here, we show that the time-domain also contains movement information related to individual joint movements. This is in line with previous research which shows that low-frequency time-domain EEG signals contain information about movement trajectories, speed and force [17–19,22,23,26]. Electrocorticography (ECoG) studies support this and indicate that low-frequency time-domain signals contain movement-related information [50–54]. Interestingly, the frequency bands used in classical SMR-based BCIs, i.e. the mu and beta bands, contain less information about movement kinematics and muscle activity than low-frequency bands and the high-gamma band [55–57]. Mu and beta bands are more suitable for detecting a movement intention than the details of the movement. However, our group recently found that these frequency bands can be separated into two types of large-scale networks, where one network type is modulated by the movement phase of rhythmic finger movements [9].

Reliably detecting the movement intention is of utmost importance for a neuroprosthesis control to avoid unexpected and potentially dangerous movements. In accordance with [26,58], we successfully classified between movements and a rest class based on low-frequency time-domain EEG. The classification of movement vs rest may be further improved by combining time-domain signals with power modulations in the mu and beta bands [59].

MRCPs can be retrieved with similar signal processing methods as low-frequency time-domain signals. They show a typical negative peak around movement onset, as in our results [24]. Hence, our classification approach is based on MRCPs. Such MRCP-like signals are also observable in both the ME and MI rest classes, i.e. without any movement intention. It has been reported that voluntary muscle relaxation causes potential changes similar to those of muscle contraction [24]. This may be an explanation, at least for ME, if the subjects were already preparing for some movement before the cue appeared on the screen, and then relaxed after the rest class cue was presented. This can be an issue for an asynchronous BCI trained with a cue-based paradigm. An asynchronous BCI must be trained on a rest class which truly corresponds to a relaxation phase, and this requires a careful design of the training paradigm.

A novelty in the context of EEG-based movement decoding from a single limb is the analysis of the classifier patterns. These patterns show for ME that mainly M1, S1, PM, and PPC contain movement-related information which can be decoded from low-frequency time-domain EEG signals. This is consistent with the general understanding that PM and PPC are involved in movement planning, while M1 is active during the execution of the movement, and S1 receives proprioceptive feedback which is eventually integrated with other sensory input in the PPC [60–62]. The ME mov-vs-mov patterns also show a slight and temporary involvement of a non-motor-related ipsilateral temporal area. However, this lateral pattern cannot be attributed to movement artefacts, as the mov-vs-rest classifier would be more susceptible to movement artefacts but does not show a similarly pronounced lateral pattern. This lateral pattern may be a consequence of the use of a template head model and an incomplete electrode coverage of temporal sites. Another observation is that the mov-vs-mov and mov-vs-rest patterns cover similar areas. Thus, general (mov-vs-rest) and detailed (mov-vs-mov) movement information can be decoded from the same brain areas. One can also observe that MI produces less pronounced patterns than ME, which is consistent with the lower classification accuracy for MI than for ME. The MI patterns are also more centrally located.

We calculated classifier patterns instead of analyzing the weights of the LDA classifier because the EEG channels are highly correlated at lower frequencies [19], which causes a problem known as multicollinearity [63] and complicates the interpretation of the weights [64]. Classifier patterns have already been used as a tool to spatially analyze brain processes [65]. They can be used to find the EEG amplitude differences between experimental conditions exploited by the classifier.

The following limitations of our study can be identified. First, the preprocessing filters and classification time windows were non-causal to avoid time shifts in the obtained results due to signal processing. However, for an online application, causal filters and time windows must be implemented. Second, the movement onsets obtained via external sensors are not as timely as movement onsets obtained via electromyography. Due to the inertia of the body parts, muscle activity is usually detected before overt movements. Third, we used template head models instead of individual head models generated from magnetic resonance imaging scans for source imaging, which can increase the location error of the sources and in turn decrease the sensitivity of the obtained patterns.

Future studies need to confirm whether details of imagined or attempted movements can also be decoded from individuals with SCI, and whether the classifier performance is sufficient to control a neuroprosthesis or a robotic arm. Specifically, it has to be determined whether the classification accuracies yielded by attempted movements in individuals with SCI correspond more closely to the ME or the MI accuracies reported in this work. The classifier patterns show that PM, M1 and S1 encode information about the details of the movement on the macroscale, and especially these areas have direct connections to the spinal cord [62,66]. These direct connections are impaired in SCI users, however, and this could have an influence on the information encoded in the MRCPs [49]. Further studies also need to analyze the influence of object interactions on the movement information encoded in low-frequency time-domain EEG signals.

Conclusion

We have demonstrated the successful decoding of single executed and imagined upper limb movements based on low-frequency time-domain EEG signals. These movements can be the basis for new mental control strategies aimed at a more natural neuroprosthesis or robotic arm control. Furthermore, we show that the patterns underlying the classification emerge from motor-related brain areas.

Supporting information

S1 Fig. MRCPs for movements belonging to the same joints.

Shown is the average over subjects.

https://doi.org/10.1371/journal.pone.0182578.s001

(TIF)

S1 Video. Progression of the ME mov-vs-mov patterns.

Patterns were calculated for single time points (i.e. not averaged over time) from -1 s to 1 s relative to movement onset. Statistical analysis was not performed.

https://doi.org/10.1371/journal.pone.0182578.s002

(AVI)

S2 Video. Progression of the ME mov-vs-rest patterns.

Patterns were calculated for single time points (i.e. not averaged over time) from -1 s to 1 s relative to movement onset. Statistical analysis was not performed.

https://doi.org/10.1371/journal.pone.0182578.s003

(AVI)

Author Contributions

  1. Conceptualization: PO AS GRMP.
  2. Data curation: GRMP.
  3. Formal analysis: PO.
  4. Funding acquisition: GRMP.
  5. Investigation: PO AS.
  6. Methodology: PO AS JP GRMP.
  7. Project administration: GRMP.
  8. Resources: GRMP.
  9. Software: PO.
  10. Supervision: GRMP.
  11. Visualization: PO.
  12. Writing – original draft: PO.
  13. Writing – review & editing: PO AS JP GRMP.

References

  1. Rupp R, Gerner HJ. Neuroprosthetics of the upper extremity—clinical application in spinal cord injury and challenges for the future. Acta Neurochir Suppl. 2007;97: 419–426.
  2. Rupp R, Rohm M, Schneiders M, Kreilinger A, Müller-Putz GR. Functional Rehabilitation of the Paralyzed Upper Extremity After Spinal Cord Injury by Noninvasive Hybrid Neuroprostheses. Proc IEEE. 2015;103: 954–968.
  3. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442: 164–171. pmid:16838014
  4. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485: 372–375. pmid:22596161
  5. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, et al. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet. 2013;381: 557–564. pmid:23253623
  6. Wodlinger B, Downey JE, Tyler-Kabara EC, Schwartz AB, Boninger ML, Collinger JL. Ten-dimensional anthropomorphic arm control in a human brain−machine interface: difficulties, solutions, and limitations. J Neural Eng. 2014;12: 016011. pmid:25514320
  7. Pfurtscheller G, da Silva FHL. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999;110: 1842–1857. pmid:10576479
  8. Wagner J, Makeig S, Gola M, Neuper C, Müller-Putz G. Distinct β Band Oscillatory Networks Subserving Motor and Cognitive Control during Gait Adaptation. J Neurosci. 2016;36: 2212–2226. pmid:26888931
  9. Seeber M, Scherer R, Müller-Putz GR. EEG oscillations are modulated in different behavior-related networks during rhythmic finger movements. J Neurosci. in press.
  10. Müller-Putz GR, Scherer R, Pfurtscheller G, Rupp R. EEG-based neuroprosthesis control: a step towards clinical practice. Neurosci Lett. 2005;382: 169–174.
  11. Kreilinger A, Rohm M, Kaiser V, Leeb R, Rupp R, Müller-Putz GR. Neuroprosthesis Control via a Noninvasive Hybrid Brain-Computer Interface. IEEE Intell Syst. 2013;28: 40–43.
  12. Pfurtscheller G, Müller GR, Pfurtscheller J, Gerner HJ, Rupp R. “Thought”-control of functional electrical stimulation to restore hand grasp in a patient with tetraplegia. Neurosci Lett. 2003;351: 33–36. pmid:14550907
  13. Rohm M, Schneiders M, Müller C, Kreilinger A, Kaiser V, Müller-Putz GR, et al. Hybrid brain-computer interfaces and hybrid neuroprostheses for restoration of upper limb functions in individuals with high-level spinal cord injury. Artif Intell Med. 2013;59: 133–142. pmid:24064256
  14. Yong X, Menon C. EEG classification of different imaginary movements within the same limb. PLoS One. 2015;10: e0121896. pmid:25830611
  15. Edelman BJ, Baxter B, He B. EEG Source Imaging Enhances the Decoding of Complex Right-Hand Motor Imagery Tasks. IEEE Trans Biomed Eng. 2016;63: 4–14. pmid:26276986
  16. Pfurtscheller G, Guger C, Müller G, Krausz G, Neuper C. Brain oscillations control hand orthosis in a tetraplegic. Neurosci Lett. 2000;292: 211–214. pmid:11018314
  17. Bradberry TJ, Gentili RJ, Contreras-Vidal JL. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J Neurosci. 2010;30: 3432–3437. pmid:20203202
  18. Ofner P, Müller-Putz GR. Decoding of velocities and positions of 3D arm movement from EEG. Conf Proc IEEE Eng Med Biol Soc. 2012;2012: 6406–6409. pmid:23367395
  19. Ofner P, Müller-Putz GR. Using a noninvasive decoding method to classify rhythmic movement imaginations of the arm in two planes. IEEE Trans Biomed Eng. 2015;62: 972–981. pmid:25494495
  20. Agashe HA, Paek AY, Zhang Y, Contreras-Vidal JL. Global cortical activity predicts shape of hand during grasping. Front Neurosci. 2015;9: 121. pmid:25914616
  21. Waldert S, Preissl H, Demandt E, Braun C, Birbaumer N, Aertsen A, et al. Hand movement direction decoded from MEG and EEG. J Neurosci. 2008;28: 1000–1008. pmid:18216207
  22. Gu Y, Farina D, Murguialday AR, Dremstrup K, Montoya P, Birbaumer N. Offline Identification of Imagined Speed of Wrist Movements in Paralyzed ALS Patients from Single-Trial EEG. Front Neurosci. 2009;3: 62. pmid:20582286
  23. Gu Y, Dremstrup K, Farina D. Single-trial discrimination of type and speed of wrist movements from EEG recordings. Clin Neurophysiol. 2009;120: 1596–1600. pmid:19535289
  24. Shibasaki H, Hallett M. What is the Bereitschaftspotential? Clin Neurophysiol. 2006;117: 2341–2356. pmid:16876476
  25. Yuan H, Perdoni C, He B. Relationship between speed and EEG activity during imagined and executed hand movements. J Neural Eng. 2010;7: 026001.
  26. Jochumsen M, Niazi IK, Taylor D, Farina D, Dremstrup K. Detecting and classifying movement-related cortical potentials associated with hand movements in healthy subjects and stroke patients from single-electrode, single-trial EEG. J Neural Eng. 2015;12: 056013. pmid:26305233
  27. Vučković A, Sepulveda F. Delta band contribution in cue based single trial classification of real and imaginary wrist movements. Med Biol Eng Comput. 2008;46: 529–539. pmid:18418635
  28. Vučković A, Sepulveda F. A two-stage four-class BCI based on imaginary movements of the left and the right wrist. Med Eng Phys. 2012;34: 964–971. pmid:22119365
  29. Liao X, Yao D, Wu D, Li C. Combining Spatial Filters for the Classification of Single-Trial EEG in a Finger Movement Task. IEEE Trans Biomed Eng. 2007;54: 821–831. pmid:17518278
  30. Neuper C, Scherer R, Reiner M, Pfurtscheller G. Imagery of motor actions: differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Brain Res Cogn Brain Res. 2005;25: 668–677. pmid:16236487
  31. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004;134: 9–21. pmid:15102499
  32. Peck R, Van Ness J. The use of shrinkage estimators in linear discriminant analysis. IEEE Trans Pattern Anal Mach Intell. 1982;4: 530–537. pmid:21869073
  33. Blankertz B, Lemm S, Treder M, Haufe S, Müller K-R. Single-trial analysis and classification of ERP components—A tutorial. Neuroimage. 2011;56: 814–825. pmid:20600976
  34. Pascual-Marqui RD. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find Exp Clin Pharmacol. 2002;24 Suppl D: 5–12.
  35. Michel CM, Murray MM, Lantz G, Gonzalez S, Spinelli L, de Peralta RG. EEG source imaging. Clin Neurophysiol. 2004;115: 2195–2222. pmid:15351361
  36. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: a user-friendly application for MEG/EEG analysis. Comput Intell Neurosci. 2011;2011: 879716. pmid:21584256
  37. Schäfer J, Strimmer K. A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics. Stat Appl Genet Mol Biol. 2005;4. pmid:16646851
  38. Nichols TE, Holmes AP. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum Brain Mapp. 2002;15: 1–25. pmid:11747097
  39. Maris E, Oostenveld R. Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods. 2007;164: 177–190. pmid:17517438
  40. Billinger M, Daly I, Kaiser V, Jin J, Allison BZ, et al. Is It Significant? Guidelines for Reporting BCI Performance. Biological and Medical Physics, Biomedical Engineering. 2012. pp. 333–354.
  41. Müller-Putz GR, Scherer R, Brunner C, Leeb R, Pfurtscheller G. Better than random? A closer look on BCI results. Int J Bioelectromagn. 2008;10: 52–55.
  42. Hjorth B. An on-line transformation of EEG scalp potentials into orthogonal source derivations. Electroencephalogr Clin Neurophysiol. 1975;39: 526–530. pmid:52448
  43. Niazi IK, Jiang N, Tiberghien O, Nielsen JF, Dremstrup K, Farina D. Detection of movement intention from single-trial movement-related cortical potentials. J Neural Eng. 2011;8: 066009. pmid:22027549
  44. do Nascimento OF, Nielsen KD, Voigt M. Movement-related parameters modulate cortical activity during imaginary isometric plantar-flexions. Exp Brain Res. 2006;171: 78–90. pmid:16320044
  45. Sugata H, Hirata M, Yanagisawa T, Matsushita K, Yorifuji S, Yoshimine T. Common neural correlates of real and imagined movements contributing to the performance of brain–machine interfaces. Sci Rep. 2016;6: 24663. pmid:27090735
  46. Wang W, Sudre GP, Xu Y, Kass RE, Collinger JL, Degenhart AD, et al. Decoding and cortical source localization for intended movement direction with MEG. J Neurophysiol. 2010;104: 2451–2461. pmid:20739599
  47. Blokland Y, Vlek R, Karaman B, Özin F, Thijssen D, Eijsvogels T, et al. Detection of event-related desynchronization during attempted and imagined movements in tetraplegics for brain switch control. Conf Proc IEEE Eng Med Biol Soc. 2012;2012: 3967–3969. pmid:23366796
  48. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. PNAS. 2004;101: 17849–17854. pmid:15585584
  49. Lacourse MG, Cohen MJ, Lawrence KE, Romero DH. Cortical potentials during imagined movements in individuals with chronic spinal cord injuries. Behav Brain Res. 1999;104: 73–88. pmid:11125744
  50. Hotson G, Fifer MS, Acharya S, Benz HL, Anderson WS, Thakor NV, et al. Coarse electrocorticographic decoding of ipsilateral reach in patients with brain lesions. PLoS One. 2014;9: e115236. pmid:25545500
  51. Schalk G, Kubánek J, Miller KJ, Anderson NR, Leuthardt EC, Ojemann JG, et al. Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. J Neural Eng. 2007;4: 264–275. pmid:17873429
  52. Pistohl T, Ball T, Schulze-Bonhage A, Aertsen A, Mehring C. Prediction of arm movement trajectories from ECoG-recordings in humans. J Neurosci Methods. 2008;167: 105–114. pmid:18022247
  53. Hammer J, Fischer J, Ruescher J, Schulze-Bonhage A, Aertsen A, Ball T. The role of ECoG magnitude and phase in decoding position, velocity, and acceleration during continuous motor behavior. Front Neurosci. 2013;7: 200. pmid:24198757
  54. Acharya S, Fifer MS, Benz HL, Crone NE, Thakor NV. Electrocorticographic amplitude predicts finger positions during slow grasping motions of the hand. J Neural Eng. 2010;7: 046002. pmid:20489239
  55. Bundy DT, Pahwa M, Szrama N, Leuthardt EC. Decoding three-dimensional reaching movements using electrocorticographic signals in humans. J Neural Eng. 2016;13: 026021. pmid:26902372
  56. Ball T, Schulze-Bonhage A, Aertsen A, Mehring C. Differential representation of arm movement direction in relation to cortical anatomy and function. J Neural Eng. 2009;6: 016006. pmid:19155551
  57. Shin D, Watanabe H, Kambara H, Nambu A, Isa T, Nishimura Y, et al. Prediction of muscle activities from electrocorticograms in primary motor cortex of primates. PLoS One. 2012;7: e47992. pmid:23110153
  58. López-Larraz E, Montesano L, Gil-Agudo Á, Minguez J. Continuous decoding of movement intention of upper limb self-initiated analytic movements from pre-movement EEG correlates. J Neuroeng Rehabil. 2014;11: 153. pmid:25398273
  59. Ibáñez J, Serrano JI, del Castillo MD, Monge-Pereira E, Molina-Rueda F, Alguacil-Diego I, et al. Detection of the onset of upper-limb movements based on the combined analysis of changes in the sensorimotor rhythms and slow cortical potentials. J Neural Eng. 2014;11: 056009. pmid:25082789
  60. Andersen RA, Snyder LH, Bradley DC, Xing J. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci. 1997;20: 303–330. pmid:9056716
  61. Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, et al. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science. 2015;348: 906–910. pmid:25999506
  62. Kandel E. Principles of Neural Science, Fifth Edition. McGraw Hill Professional; 2013.
  63. Farrar DE, Glauber RR. Multicollinearity in Regression Analysis: The Problem Revisited. Rev Econ Stat. 1967;49: 92.
  64. Haufe S, Meinecke F, Görgen K, Dähne S, Haynes J-D, Blankertz B, et al. On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage. 2014;87: 96–110. pmid:24239590
  65. Ofner P, Müller-Putz GR. Movement target decoding from EEG and the corresponding discriminative sources: A preliminary study. Conf Proc IEEE Eng Med Biol Soc. 2015;2015: 1468–1471. pmid:26736547
  66. Dum RP, Strick PL. The origin of corticospinal projections from the premotor areas in the frontal lobe. J Neurosci. 1991;11: 667–689. pmid:1705965