
EEG Classification of Different Imaginary Movements within the Same Limb

  • Xinyi Yong,

    Affiliation School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada

  • Carlo Menon

    cmenon@sfu.ca

    Affiliation School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada


Abstract

The task of discriminating the motor imagery of different movements within the same limb using electroencephalography (EEG) signals is challenging because these imaginary movements have close spatial representations on the motor cortex area. There is, however, a pressing need to succeed in this task: the ability to classify different same-limb imaginary movements could increase the number of control dimensions of a brain-computer interface (BCI). In this paper, we propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. In addition, the differences between simple motor imagery and goal-oriented motor imagery, in terms of their topographical distributions and classification accuracies, are also investigated. To the best of our knowledge, neither problem has been explored in the literature. Based on the EEG data recorded from 12 able-bodied individuals, we have demonstrated that same-limb motor imagery classification is possible. For the binary classification of imaginary grasp and (goal-oriented) elbow movements, the average accuracy achieved is 66.9%. For the 3-class problem of discriminating rest against imaginary grasp and elbow movements, the average classification accuracy achieved is 60.7%, which is greater than the random classification accuracy of 33.3%. Our results also show that goal-oriented imaginary elbow movements lead to better classification performance than simple imaginary elbow movements. The proposed BCI system could potentially be used to control a robotic rehabilitation system that assists stroke patients in performing task-specific exercises.

Introduction

A brain-computer interface (BCI) system translates human brain activity into commands that can operate a device, such as a computer [1]. Existing BCI systems have many applications. For example, a BCI allows a user to spell with a virtual keyboard [2, 3], to control an orthosis [4] or a functional electrical stimulator (FES) [5], and to navigate the World Wide Web [6], with different degrees of success. In the early stages of BCI research, most BCI applications aimed to help people with limited mobility, including those with amyotrophic lateral sclerosis and spinal cord injury [7]. Recently, there has also been emerging interest in BCI applications targeting stroke survivors. More specifically, investigations have been performed to evaluate the possibility of using BCIs for post-stroke rehabilitation to restore upper and lower limb functions [8, 9].

It is not straightforward to apply existing BCI systems to control devices such as a robotic exoskeleton. The main reason is that these systems have low-dimensional control, i.e., they can only recognize a limited number of mental tasks as unique control commands. Motor imagery tasks such as left hand, right hand, and foot motor imagery are among the most frequently used in a BCI system [10]. Wolpaw and McFarland have shown that their participants were able to move a cursor with two-dimensional control (i.e., horizontal and vertical) on a computer screen after several sessions of training [11]. In this study, each dimension of cursor movement was controlled by the mu (8–12 Hz) or beta (18–26 Hz) rhythm, which was associated with left or right hand motor imagery. This strategy was then extended to three-dimensional cursor control (i.e., horizontal, vertical, and depth) in which the BCI was based on the changes in mu and/or beta rhythm during foot, left, and right hand motor imagery [12]. Scherer et al. have proposed a virtual keyboard controlled by a three-class BCI that discriminated the motor imagery of left hand, right hand, and foot [2]. Some studies employed intelligent control strategies to achieve multi-dimensional BCI control. For example, four-class BCIs have been developed that allowed users to fly a virtual helicopter [13] and a robotic quadcopter in three-dimensional space [14]. The users would imagine moving/resting both hands to fly the helicopter forward/backward and imagine moving the left/right hand to rotate the helicopter left/right. Doud et al. extended the work in [13] and introduced a six-class BCI; the third control dimension of raising and lowering the helicopter was achieved by imagining moving the tongue and the feet respectively [15]. They demonstrated the ability of users to control the flight of a virtual helicopter with three-dimensional control whose strength can be independently adjusted according to user preference.

While the classification of left hand, right hand, foot, and tongue motor imagery has been rather successful, detecting the intention or discriminating the motor imagery of different movements within the same limb remains challenging. This is because these motor tasks activate regions that have very close representations on the motor cortex area of the brain [16, 17]. To date, not many studies have addressed this problem. A summary of the studies that classify the motor imagery or the execution of different upper-extremity movements within the same limb is provided in Table 1.

Liao et al. [18] have investigated the binary classification of the following ten different pairs of executed finger movements using 128-channel EEG signals: thumb vs index; thumb vs middle; thumb vs ring; thumb vs little; index vs middle; index vs ring; index vs little; middle vs ring; middle vs little; and ring vs little finger. The average accuracy achieved in this study is 77.1% when power spectral changes are used as features and a support vector machine is used as the classifier.

Three of the studies in Table 1 look into the decoding of different wrist movements. The classification of four different imaginary wrist movements, namely wrist flexion, extension, pronation, and supination, has been demonstrated in [19]. Unfortunately, the accuracies achieved are not satisfactory (approximately 35%). Vuckovic et al. [20] and Ghani et al. [21] also look into discriminating two different wrist movements using EEG signals. Their binary classification tasks include six combinations of different wrist movements: extension vs flexion; extension vs supination; extension vs pronation; flexion vs supination; flexion vs pronation; and supination vs pronation. The accuracies achieved in these studies are reasonably high (in the range of 60 to 80%). Vuckovic et al. [20] show that the best results were obtained when imaginary wrist extension was one of the classes selected for classification. Ghani et al. [21], on the other hand, do not demonstrate any consistency in terms of the best classifiable type of movement.

Next, Deng et al. [22] and Zhou et al. [23] attempt to classify the intention of executing shoulder abduction and elbow flexion, which is used in a BCI system to overcome the abnormal coupling that exists between shoulder abduction and elbow flexion following stroke. In these papers, the intention is defined as the time window from 1800 to 60 ms prior to the onset of a voluntary shoulder or elbow torque. These two studies have demonstrated promising results, with accuracies above 70% for stroke patients and 80% for healthy volunteers. Finally, Chakraborti et al. [24] propose to use a multi-class BCI to control the motion and orientation of a robot. For each of the left and right hands, the execution of shoulder, elbow, and finger movements is classified using only 2-channel EEG signals. The classification of these same-limb movements results in surprisingly high classification accuracies (in the range of 56% to 93%).

In this paper, the research effort is focused on the classification of upper-limb movements within the same limb. We propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. Motor imagery of grasp and elbow movements was chosen due to its potential use in controlling the robotic arm [25] developed in our lab and an FES. This rehabilitation system is designed to help stroke patients perform task-specific rehabilitation exercises and eventually improve their upper-extremity functions. The three classification tasks employed in this study are different from those listed in Table 1. Even though Chakraborti et al. [24] also look into the classification of elbow and finger movements, their work focuses on real movements, and resting states are not considered in their study. In contrast, both imaginary movements and a rest state are included in our classification problem. In the present study, we also investigate the differences between simple motor imagery and goal-oriented motor imagery in terms of their topographical distributions and classification accuracies.

To the best of our knowledge, the classification combination employed in this study as well as the difference between simple and goal-oriented motor imagery have not been explored in the BCI literature. In addition, all BCIs designed for stroke rehabilitation classify only two classes (mostly rest vs motor imagery, or left vs right motor imagery), as shown in Table 2. A 3-class BCI system for stroke rehabilitation has some advantages over these state-of-the-art 2-class BCIs. First, it has an additional dimension to operate a robotic system when performing task-specific exercises. For example, the user can imagine elbow movements to move the robotic device close to a cup, and then imagine grasp movements to activate the FES, which in turn closes the user's fingers to grab the cup. Such control is more intuitive than that derived from a BCI system that identifies the motor imagery of different limbs (i.e., left/right hand and foot). The second advantage of the 3-class BCI system is that the users can perform mental practice on two different joint movements using the same device. Studies have shown that a rehabilitation program that includes mental practice can help improve the use and function of the affected arm of a stroke patient [26, 27].

Table 2. BCI studies in stroke rehabilitation, focusing on upper-extremity rehabilitation in which EEG was used as the modality to measure brain activity.

https://doi.org/10.1371/journal.pone.0121896.t002

In the following section, the experimental procedures as well as the feature extraction and machine learning algorithms are described. Results are then presented, followed by the discussion and conclusion.

Experimental Procedure

EEG Recording

All of the methods within this study were in compliance with the Declaration of Helsinki and were approved by the Simon Fraser University (SFU) Office of Research Ethics (#2012s0527). We recruited twelve able-bodied individuals for this study. Participants gave written consent before participating in the experiment. Each individual was seated comfortably in front of a computer monitor. The computer provided a simple Graphical User Interface (GUI) that displayed commands or cues to the participant.

A 32-channel EGI Geodesic sensor net was applied on the participant’s head [48]. The locations of all the electrodes are shown in Fig. 1. The labeled electrodes were those we employed for our BCI system. The remaining unlabeled electrodes were not considered in this study because they were very close to sources of muscle activity or artifacts. All electrodes were referenced to the vertex (Cz position in Fig. 1). The EEG signals were amplified and sampled at 1000 Hz using a Geodesic Net Amps 400 series amplifier [49]. Throughout the experiment, the electrode impedance was maintained below 50 kΩ.

Fig 1. The EEG electrode positions employed in this study.

The labeled electrodes were used in our BCI system. The remaining unlabeled electrodes were not considered in this study. All electrodes were referenced to the vertex (Cz position).

https://doi.org/10.1371/journal.pone.0121896.g001

Experimental Procedures

The experiment for each participant lasted approximately 1.5 hours and consisted of four sessions, each lasting 12 minutes. The participant was asked to perform different repetitive tasks according to the visual cues displayed on the computer monitor. Four different visual cues (see Fig. 2) were presented in a random order to the participant. They are listed as follows:

  1. Rest (REST): rest and relax [Fig. 2(a)]
  2. Motor imagery of grasp (MI-GRASP): imagine opening and closing all the fingers to grab an object [Fig. 2(b)]
  3. Motor imagery of elbow flexion and extension (MI-ELBOW): imagine moving the forearm up and down [Fig. 2(c)]
  4. Goal-directed motor imagery of elbow flexion and extension (MI-ELBOW-GOAL): imagine reaching out for the glass of water displayed and bringing it back [Fig. 2(d)]
There is a clear distinction between MI-ELBOW and MI-ELBOW-GOAL. MI-ELBOW involves only simple repetitive elbow flexion and extension. MI-ELBOW-GOAL on the other hand is a goal-oriented action, i.e., a visible goal (a glass of water) is present. MI-ELBOW-GOAL was included in this study to investigate the effect of goals or targets on EEG activity and consequently on the classification accuracy of the multi-class BCI system proposed.

Fig 2. Visual cues presented during the experiments.

(a) REST: rest and relax; (b) MI-GRASP: imagine opening and closing the fingers; (c) MI-ELBOW: imagine moving the forearm up and down; (d) MI-ELBOW-GOAL: imagine reaching out for the glass of water displayed on the computer monitor and bringing it back.

https://doi.org/10.1371/journal.pone.0121896.g002

Each session consisted of 20 trials of each task. Each trial lasted from 8 to 10 s (see Fig. 3). Each visual cue was randomly selected and displayed on the screen for 3 s, indicating which task to perform. The participant was asked to perform each designated task for 3 s, followed by 5 to 7 s of rest. Throughout the experiment, the participant could take a break whenever needed.

Fig 3. Experimental paradigm.

At 0 s, a visual cue is randomly selected and presented. After 3 s, a blank screen appears for 5–7 s before another visual cue is presented. During this period of time, the participant is requested to rest.

https://doi.org/10.1371/journal.pone.0121896.g003

Feature Extraction and Classification

The EEG data collected from each experiment contained a mixture of four different mental states: REST, MI-GRASP, MI-ELBOW, and MI-ELBOW-GOAL. In this paper, we first looked into the binary classification of the following combinations:

  1. REST vs MI-GRASP
  2. REST vs MI-ELBOW
  3. REST vs MI-ELBOW-GOAL
  4. MI-GRASP vs MI-ELBOW
  5. MI-GRASP vs MI-ELBOW-GOAL
Next, the classification of the following three classes were performed:
  1. REST vs MI-GRASP vs MI-ELBOW
  2. REST vs MI-GRASP vs MI-ELBOW-GOAL

The EEG data were processed by a signal processing unit that performs signal preprocessing, feature extraction, and classification operations. The relevant features were extracted and translated into control signals that could be employed to control one or more devices. In a three-class classification problem, the output of the classifier took one of the three discrete states ‘0’, ‘1’, or ‘2’ and was not a continuous function. The logical states ‘1’ and ‘2’ indicated the user’s intention to activate a device (e.g., a robotic arm or an FES). The logical state ‘0’, on the other hand, implied that the user did not intend to activate the system.

In this study, an open-source MATLAB toolbox, BCILAB, was utilized to process the EEG data [50]. In the following subsections, details about the data preprocessing, feature extraction, and classification algorithms are given.

Data Preprocessing

The EEG data were downsampled to 250 Hz and then band-pass filtered to the 6–35 Hz frequency band. This frequency band encompasses the mu and beta rhythms, which have been reported to desynchronize during motor imagery [51]. The band power changes of the mu and beta rhythms have been successfully used in BCI systems to classify EEG signals related to motor imagery [52–54]. Also, band-pass filtering the data minimized ocular artifacts, which are concentrated in the low-frequency components of the EEG.
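As an illustration, the preprocessing step described above can be sketched as follows. Python/SciPy is used here as a stand-in for the BCILAB implementation, and the filter design (4th-order Butterworth) is an assumption; the paper specifies only the sampling rates and the frequency band.

```python
# Sketch of the preprocessing pipeline: downsample 1000 Hz -> 250 Hz,
# then band-pass to 6-35 Hz (mu + beta). Filter type/order are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS_RAW, FS_TARGET = 1000, 250   # Hz, from the recording setup
BAND = (6.0, 35.0)              # Hz, band used in the paper

def preprocess(eeg):
    """eeg: (n_channels, n_samples) raw EEG sampled at 1000 Hz."""
    # Downsample by a factor of 4 (decimate applies anti-alias filtering)
    eeg = decimate(eeg, FS_RAW // FS_TARGET, axis=-1, zero_phase=True)
    # Zero-phase band-pass filter over the 6-35 Hz band
    b, a = butter(4, [f / (FS_TARGET / 2) for f in BAND], btype="bandpass")
    return filtfilt(b, a, eeg, axis=-1)
```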

Feature Extraction

EEG epochs from 1 to 3 s after a visual cue were segmented. Then, features were extracted from each segment. The following feature extraction methods, which are widely used in BCI research, were employed:

  1. Common Spatial Patterns (CSP) [53]
  2. Filter-Bank Common Spatial Patterns (FBCSP) [55]
  3. Logarithmic Band Power (BP) [10]
The frequency window used and the feature dimension for each method are presented in Table 3.

Table 3. Frequency window, time segment, and feature dimension for each feature extraction method.

https://doi.org/10.1371/journal.pone.0121896.t003

CSP has been widely used in BCI research to extract features from EEG signals. This algorithm can effectively extract discriminatory information from two classes of EEG signals [53]. The algorithm finds the directions onto which the EEG signals should be projected so that the differences between the two classes of EEG signals are maximized (i.e., the variance of one class is maximized while, at the same time, the variance of the other class is minimized) [52]. These directions are provided by a weight matrix whose rows give the weights of the EEG channels.

Here, the formulation of the CSP algorithm for a 2-class problem is described. This same formulation of the 2-class CSP algorithm was also used when classifying the three classes of EEG signals in this study, as only binary classifiers were trained. More specifically, for a 3-class problem, three different binary classifiers were trained and a voting scheme was employed to determine the class label. More details about the voting scheme are provided in the next subsection.

Given two classes of EEG signals, Class 1 and Class 2, the CSP algorithm finds a spatial filter such that the signals can be projected into a 1-dimensional space where one class of signals is maximally scattered and the other is minimally scattered. High variance of the signals indicates strong rhythms whereas low variance indicates attenuated rhythms [52]. Let S = {S_1, S_2, …, S_M}, where S_i ∈ ℝ^(Nc×N) denotes the filtered i-th trial EEG signal, M the number of EEG trials, Nc the number of EEG channels, and N the number of samples in the signal. The optimization problem is expressed as:

w* = argmax_w ( Σ_{i ∈ 𝒞1} wᵀ S_i S_iᵀ w ) / ( Σ_{i ∉ 𝒞1} wᵀ S_i S_iᵀ w )   (1)

where 𝒞1 represents all Class 1 EEG trials and w ∈ ℝ^(Nc) is the unknown weight vector of the spatial filter. In this study, the CSP features selected for classification were the log-variance of the EEG signals projected using six different spatial filters. These spatial filters were a) the three most important spatial filters that explain the largest variance of Class 1 and the smallest variance of Class 2 and b) the three most important spatial filters that explain the largest variance of Class 2 and the smallest variance of Class 1.
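A minimal sketch of this CSP formulation follows, using Python/NumPy as an illustrative stand-in for the BCILAB implementation. It solves the equivalent generalized eigenvalue problem on the trial-averaged covariance matrices and keeps the three filters from each extreme (six in total, as in the paper); the trace normalization of the covariances is a common convention, assumed here.

```python
# CSP sketch: eigenvectors of c1 w = lambda (c1 + c2) w give the spatial
# filters; extreme eigenvalues correspond to maximal variance ratios.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials1, trials2, n_pairs=3):
    """trials1, trials2: lists of (n_channels, n_samples) band-passed trials."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    c1, c2 = avg_cov(trials1), avg_cov(trials2)
    # Generalized eigendecomposition (eigenvalues returned in ascending order)
    vals, vecs = eigh(c1, c1 + c2)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # 3 per class extreme
    return vecs[:, picks].T                           # (2*n_pairs, n_channels)

def csp_features(trial, W):
    """Log-variance of the spatially filtered trial (the CSP features)."""
    z = W @ trial
    return np.log(np.var(z, axis=1))
```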

FBCSP is an extension of the CSP algorithm [55]. First, a filter bank is used to bandpass filter the EEG signals. Then, for each filtered EEG band, spatial filters are found using the CSP algorithm discussed earlier. In this study, three filtered EEG bands were generated: 7–15 Hz, 15–25 Hz, and 25–30 Hz. The FBCSP features selected for classification were the log-variance of each of the filtered EEG band projected using six different spatial filters.

The third method, logarithmic band power (BP), is simpler. The features used for classification were the log-variance of the band-pass filtered EEG signals from every channel.
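Read this way, BP feature extraction reduces to one log-variance per channel; a sketch under that reading (variance of a band-passed signal is proportional to its band power):

```python
# Logarithmic band-power (BP) features: one log-variance per channel.
import numpy as np

def bp_features(trial):
    """trial: (n_channels, n_samples) band-passed EEG segment.
    Returns one log band-power feature per channel."""
    return np.log(np.var(trial, axis=1))
```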

Classification

Three classifiers were used to classify the three-class data:

  1. Linear Discriminant Analysis (LDA) [56]
  2. Logistic Regression (LR) using a fast Bayesian method [57]
  3. Support Vector Machine (SVM) with a radial basis function (RBF) kernel [58]
For the SVM with an RBF kernel (K(x, y) = e^(−γ‖x−y‖²)), two parameters were optimized: a) the kernel parameter γ and b) the penalty weight c, which acts as a regularization parameter that controls the misclassification rate on the training data. The optimal parameters were obtained from a grid search with c ranging from 2^−5 to 2^15 and γ ranging from 2^−15 to 2^3 [59].
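The grid search above can be sketched as follows, here with scikit-learn (an assumption: the study used BCILAB in MATLAB). The exponent step of 2 follows the common LIBSVM recipe and is also an assumption; the paper gives only the ranges.

```python
# RBF-SVM hyperparameter grid search over C = 2^-5..2^15, gamma = 2^-15..2^3
# (exponent step of 2 assumed), selected by cross-validated accuracy.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": 2.0 ** np.arange(-5, 16, 2),      # 2^-5 .. 2^15
    "gamma": 2.0 ** np.arange(-15, 4, 2),  # 2^-15 .. 2^3
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
```

Calling `search.fit(X, y)` then exposes the best pair via `search.best_params_`.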

To apply these three machine learning algorithms to a multi-class problem, a one-vs-one voting strategy was employed. In the one-vs-one voting scheme, K(K−1)/2 binary classifiers are trained for a K-way multi-class problem. During testing, all the binary classifiers are applied to an unseen sample and the class that receives the highest number of votes wins [58]. In our case, K = 3 and, for each 3-class classification problem, 3 binary classifiers were set up.
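A hand-rolled sketch of this one-vs-one voting scheme for K = 3 is shown below; LDA is used as the example base classifier, and the scikit-learn API is an illustrative assumption.

```python
# One-vs-one voting: train K(K-1)/2 = 3 pairwise binary classifiers and
# assign each test sample to the class with the most pairwise votes.
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ovo_fit_predict(X_train, y_train, X_test, classes=(0, 1, 2)):
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for a, b in combinations(range(len(classes)), 2):
        # Train one binary classifier on the two selected classes only
        mask = np.isin(y_train, [classes[a], classes[b]])
        clf = LinearDiscriminantAnalysis().fit(X_train[mask], y_train[mask])
        for i, p in enumerate(clf.predict(X_test)):
            votes[i, classes.index(p)] += 1
    # Class with the highest number of votes wins
    return np.array([classes[i] for i in votes.argmax(axis=1)])
```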

Next, to evaluate the performance of the 3-class BCI system, the 10 × 10 cross-validation method was employed [58]. The data set was randomized and divided into ten folds. Nine of the folds were used to set up the classifier and the remaining fold was used to test the classifier. This procedure was repeated ten times. Then, the average cross-validation classification accuracy was computed and used as a performance metric.
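The 10 × 10 protocol (ten repetitions of randomized 10-fold cross-validation, averaging all 100 fold accuracies) can be sketched as follows; stratified splitting and the LDA base classifier are assumptions made for the sake of a runnable example.

```python
# 10x10 cross-validation: 10 repeats of randomized 10-fold CV,
# reporting the mean accuracy over all 100 folds.
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cv_accuracy(X, y, seed=0):
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=seed)
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
    return scores.mean()   # average over the 100 train/test splits
```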

In this study, nine combinations of different feature extraction and classification algorithms listed above were used to discriminate the different classes of EEG signals. For each participant, the highest cross-validation accuracy obtained from one of the nine algorithms mentioned earlier was reported.

Results

ERD/ERS Analysis

It is known that motor imagery, preparation for movement, or movement itself is usually accompanied by a decrease in the mu and beta rhythms over the sensorimotor cortex area, especially the contra-lateral region [51, 60]. This decrease is also known as event-related desynchronization (ERD). A recent EEG and fMRI study suggested that the degree of this decrease might be quantitatively associated with an increase in neuronal activity [60]. Besides ERD, an increase in the beta rhythm also occurs after a motor imagery task or a movement is completed. This increase is known as event-related synchronization (ERS). In this section, the average time courses for the ERD and ERS obtained from the contra-lateral C3 location of all participants are presented.

Fig. 4(a) shows the ERD time course for the mu rhythm (8–11 Hz) at the C3 location. Visual cues were prompted on the computer screen from time 0 s to 3 s. The ERD time course was obtained by averaging the power changes of the mu rhythm across all trials and all participants. As shown in the figure, the power of the mu rhythm is attenuated approximately 0.7 s after the onset of MI-GRASP, MI-ELBOW, and MI-ELBOW-GOAL. Also, MI-ELBOW-GOAL produces greater ERD than MI-ELBOW. As it takes time for the participants to see the cue, decide which task to perform, and then react, the attenuation of the mu rhythm does not start at the onset of the motor imagery tasks. About 1 s after the participants stop imagining, the mu power recovers to its resting baseline level. The mu rhythm for MI-ELBOW-GOAL takes an additional 400 ms to recover to the baseline level.
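A time course like this is typically computed as band power relative to a pre-cue baseline, averaged over trials. The sketch below follows that standard recipe; the exact power estimator (squared band-passed signal) and baseline window are assumptions, as the paper does not state them.

```python
# ERD/ERS time course sketch: band-pass, square for instantaneous power,
# average over trials, then express as percent change from a pre-cue baseline.
import numpy as np
from scipy.signal import butter, filtfilt

def erd_time_course(trials, band=(8, 11), fs=250, baseline_samples=250):
    """trials: (n_trials, n_samples) single-channel epochs that start
    1 s before cue onset; the first second serves as the baseline."""
    b, a = butter(4, [f / (fs / 2) for f in band], btype="bandpass")
    power = filtfilt(b, a, trials, axis=-1) ** 2   # instantaneous power
    avg = power.mean(axis=0)                       # average across trials
    ref = avg[:baseline_samples].mean()            # pre-cue reference power
    return (avg - ref) / ref * 100.0               # percent change (ERD < 0)
```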

Fig 4. The average time course for ERD and ERS.

(a) ERD time course for the mu rhythm (8–11 Hz) at the C3 location; (b) ERD/ERS time course for the beta (14–18 Hz) rhythm at the C3 location.

https://doi.org/10.1371/journal.pone.0121896.g004

Next, Fig. 4(b) shows the ERD/ERS time course for the beta (14–18 Hz) rhythm at the C3 location. The time course was obtained by averaging the power changes of the beta rhythm across all trials and all participants. As shown in the figure, the beta rhythm also displays an attenuation in its power after the onset of MI-GRASP, MI-ELBOW, and MI-ELBOW-GOAL. In addition, ERS or a rebound in the beta rhythm is observed after the participants have completed the motor imagery tasks of MI-GRASP and MI-ELBOW. However, no ERS is observed in the case of MI-ELBOW-GOAL.

To illustrate the topographical distribution on the scalp of the difference between rest and imaginary grasp movements, the R2 values for frequency bands ranging from 8 to 24 Hz at each electrode location were computed for all participants. R2 measures the difference between two classes, i.e., the proportion of the single-trial variance that is due to the task [1]. The topographical map of one of the participants (P06), which demonstrates a prominent scalp difference between rest and imaginary grasp movements, is shown in Fig. 5. In this figure, large R2 values are observed at electrode locations near the contra-lateral motor cortex area. Such prominent differences occur as a result of the ERD of the mu and beta rhythms when the MI tasks are executed.
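In BCI work, this R2 measure is commonly computed as the squared point-biserial correlation between a single-trial feature and the binary class label; a sketch under that reading (the specific feature fed in, e.g. log band power per electrode and band, is an assumption):

```python
# r^2 sketch: squared correlation between a per-trial feature and the
# binary class label, i.e. the fraction of variance explained by the task.
import numpy as np

def r_squared(feat1, feat2):
    """feat1, feat2: 1-D arrays of one feature (e.g. band power at one
    electrode/band) for the trials of the two classes."""
    x = np.concatenate([feat1, feat2])
    y = np.concatenate([np.zeros(len(feat1)), np.ones(len(feat2))])
    return np.corrcoef(x, y)[0, 1] ** 2
```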

Fig 5. R2 values for REST vs MI-GRASP for P06.

R2 measures the difference between two classes. In this figure, the R2 values for frequency bands ranging from 8 to 24 Hz at each electrode location are shown for participant P06. Large R2 values are observed at electrode locations near the contra-lateral motor cortex area.

https://doi.org/10.1371/journal.pone.0121896.g005

Frequency and Topographical Analysis for Different MIs

We are also interested in the topographical distribution on the scalp for different motor imagery tasks, as measured by R2 values. Unfortunately, the topographical difference is subject-specific and no consistent patterns can be observed. Examples are taken from participants P06 and P07 to reflect this difference, and their topographical maps are presented in Fig. 6 and Fig. 7 respectively. For the case of MI-GRASP vs MI-ELBOW, a larger difference is observed over the contra-lateral motor cortex in participant P06 but over the ipsi-lateral motor cortex in participant P07. When comparing MI-ELBOW against MI-ELBOW-GOAL, larger R2 values are observed over the ipsi-lateral motor cortex in participant P06 but in the Pz region for participant P07. Finally, for MI-GRASP vs MI-ELBOW-GOAL, the difference in terms of the R2 values is the greatest over the contra-lateral motor cortex and the visual cortex (for the frequency range of 14–18 Hz) in participant P06. Participant P07, on the other hand, has the greatest difference in the Pz region and over the ipsi-lateral motor cortex.

2-Class Classification Results

Table 4 shows the classification accuracy achieved for the binary classification of REST against each type of motor imagery (i.e., MI-GRASP, MI-ELBOW, or MI-ELBOW-GOAL) using EEG signals. For each participant, the reported classification accuracy is the highest accuracy obtained from the different combinations of feature extraction and classification algorithms described in the previous section. The results obtained are consistent with those achieved in the literature [8, 9]. The best results are achieved for the binary classification of REST vs MI-GRASP (80.5%).

To compare the results of the three different binary classifiers, one-way ANOVA is used since the data are normally distributed as assessed by the Shapiro-Wilk test (p > 0.05) [61]. Besides, the variances of the data are homogeneous according to Levene’s test for variance homogeneity (p > 0.05). The analysis shows that the differences in mean BCI performance among the binary combinations are not statistically significant (p > 0.05).

Table 5 presents the classification accuracy achieved for two different binary classifiers: MI-GRASP vs MI-ELBOW and MI-GRASP vs MI-ELBOW-GOAL. The data are normally distributed as assessed by the Shapiro-Wilk test (p > 0.05). Thus, the paired t-test [61] is used to test the statistical significance of the results. The analysis shows that the mean BCI performances for the two MI combinations are significantly different at a significance level of 0.05.

Table 5 also shows that higher classification accuracies are achieved for the combination that involves MI-ELBOW-GOAL, except for participants P01 and P03, whose classification rates for both cases are about the same. Large accuracy gains when using the goal-oriented strategy are observed in participants P02, P04, P07, and P09, where the increment ranges from 10.1% to 16.3%.

3-Class Classification Results

Table 6 shows the performance achieved by the BCI when classifying three classes of mental tasks, i.e., REST vs MI-GRASP vs (MI-ELBOW or MI-ELBOW-GOAL), using EEG signals. The data are normally distributed as assessed by the Shapiro-Wilk test (p > 0.05). The paired t-test is used to compare the accuracies for the non-goal-oriented and goal-oriented 3-class classification problems. The analysis shows that the mean BCI performances for the two combinations are significantly different (p < 0.05).

As shown in the table, higher classification accuracies are achieved for the combination that involves MI-ELBOW-GOAL, except for participants P06 and P10, whose classification rates for both cases are about the same. Large accuracy gains when using the goal-oriented strategy are observed in participants P02, P07, and P09, where the increment ranges from 10.1% to 13.6%.

Discussion

In this paper, we first look into the binary classification of different imaginary movements, namely MI-GRASP, MI-ELBOW, and MI-ELBOW-GOAL. Then, the possibility of designing a 3-class BCI that discriminates rest, imaginary grasp movements, and imaginary elbow movements is investigated. This paper also investigates whether goal-oriented motor imagery outperforms non-goal-oriented motor imagery when classifying the task against other imaginary tasks and/or rest. In the following subsections, more details about our claims and results are provided.

Multi-Class Classification

The main aim of the present study is to investigate the possibility of detecting the motor imagery (MI) of different joint movements within the same limb, as well as detecting MI from a rest state. To the best of our knowledge, this is one of the first studies to distinguish both imaginary grasp movements and imaginary elbow movements from resting states using EEG signals. In addition, we investigate whether goal-oriented motor imagery of a reaching task using the elbow (a functional movement) produces EEG features that are more prominent than those of non-goal-oriented motor imagery of elbow flexion and extension.

Table 4 shows that the binary classifiers achieve average classification accuracies of 80.5%, 75.1%, and 76.6% for REST vs MI-GRASP, REST vs MI-ELBOW, and REST vs MI-ELBOW-GOAL respectively. These results are consistent with the performances reported in the literature [8, 9]. The classification of different MIs from the same limb is more challenging. The best classification pair is MI-GRASP vs MI-ELBOW-GOAL, with a classification accuracy of 66.9% (Table 5). This performance is significantly better than that of the binary classification pair MI-GRASP vs MI-ELBOW.

The binary classification problem is then extended to a multi-class classification problem. For this 3-class classification problem, an average accuracy of 60.7% is achieved and all the participants have accuracies well above the random classification level of 33.3%. As expected, the classification accuracies are lower than those achieved by the binary classifiers. The deterioration in accuracy is caused by the difficulty in discriminating MI-GRASP from MI-ELBOW or MI-ELBOW-GOAL. It is challenging to discriminate the motor imagery of different movements within the same limb because these motor tasks activate regions that have very close representations on the motor cortex area of the brain [16, 17]. As the electrodes placed around the motor cortex area were sparse, we could expect better performance with a denser electrode montage over the scalp. Other approaches that can potentially improve the classification performance of the BCI system include more BCI training and the use of online feedback. In addition, a hybrid BCI that combines EMG and EEG could also potentially improve the efficiency and practicality of the system.

Simple vs Goal-Oriented Motor Imagery

Based on the ERD/ERS analysis as well as the frequency and topographical analysis using R2 values, we found differences between MI-GRASP and MI-ELBOW or MI-ELBOW-GOAL, especially over the motor cortex area of the brain. Such differences are not consistent across all participants. For example, for participant P07, the R2 values for MI-ELBOW vs MI-ELBOW-GOAL are prominent in the posterior parietal cortex area, consistent with [62], in which an imagined goal-directed reaching task was shown to activate areas that are posterior and medial in the parietal cortex. Moreover, evidence also shows that the parietal cortex is involved in movement planning [63, 64]. In participant P07, visual areas were also activated, probably because the participant performed visual imagery during the MI-ELBOW-GOAL tasks. The activations were stronger in the left hemisphere, as the participant is right-handed. For participant P06, no significant difference in the R2 values for MI-ELBOW vs MI-ELBOW-GOAL is observed. Hence, there was no difference in classification performance between the simple and the goal-oriented imaginary elbow movements for this participant.

For both binary classification and 3-class classification, as shown in Table 5 and Table 6 respectively, higher classification accuracies are achieved for the combinations that involve MI-ELBOW-GOAL, except for two participants. From these tables, large accuracy gains when using the goal-oriented strategy are observed in participants P02, P07, and P09. For the binary classification of MI-GRASP and MI-ELBOW-GOAL, the gain is 16.3% for participant P02. The goal-oriented version of the motor imagery, MI-ELBOW-GOAL, leads to a significantly higher accuracy, probably because a goal-oriented action activates more regions of the brain. It could also be because the participants were able to focus better when performing a functional task.

Comparing Different Feature Extraction and Classification Methods

Nine combinations of different feature extraction and classification methods were used to discriminate the different classes of EEG signals in this study. The reported accuracy for each participant is the highest cross-validation accuracy obtained from one of the nine algorithms. We are interested in which feature extraction and classification methods lead to high performance. Thus, for each of the feature extraction methods, the percentage of cases in which it outperforms the other feature extraction methods is computed and shown in Fig. 8 (a). Fig. 8 (b), on the other hand, shows the percentage of cases in which each of the classification methods outperforms the other classification methods.
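To illustrate one of these feature extraction methods, the following is a minimal sketch of a standard CSP formulation (a generalized eigenvalue decomposition of the two class-mean covariance matrices); it is not the authors' implementation, and the channel and trial counts are arbitrary:

```python
# Sketch of common spatial patterns (CSP): find spatial filters that
# maximize the variance ratio between two classes of EEG trials.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples). Returns 2*n_pairs filters."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; extreme
    # eigenvalues give the most discriminative filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

rng = np.random.default_rng(1)
a = rng.standard_normal((20, 8, 500))          # synthetic 8-channel trials
b = rng.standard_normal((20, 8, 500)) * np.linspace(0.5, 1.5, 8)[None, :, None]
W = csp_filters(a, b)
print(W.shape)   # 4 spatial filters over 8 channels
```

The log-variance of each spatially filtered trial then serves as the feature vector passed to the classifier; FBCSP repeats this procedure in multiple frequency bands.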

Fig 8. The performance of the feature extraction and classification methods.

(a) The percentage of cases where a feature extraction method outperforms others; (b) The percentage of cases where a classification method outperforms others.

https://doi.org/10.1371/journal.pone.0121896.g008

The feature extraction method with the highest percentage is the logarithmic band-power (BP) method (41.7%), followed by FBCSP (39.3%) and CSP (19.0%). Among the classification methods, SVM with an RBF kernel is the best classifier, outperforming the other classifiers 46.4% of the time, compared to LDA (44.1%) and logistic regression (9.5%). Of all nine combinations of algorithms, FBCSP with SVM and BP with LDA perform the best: each yields the highest cross-validation accuracy 20.2% of the time. The combination of CSP and SVM, on the other hand, has a percentage of 16.7%.
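The logarithmic band-power feature can be sketched as follows: band-pass filter each channel and take the logarithm of the signal variance over the trial. The sampling rate and the mu band used below are assumptions for illustration, not necessarily the settings used in this study:

```python
# Sketch of a logarithmic band-power feature: one log-variance value per
# channel after band-pass filtering. fs and the 8-12 Hz band are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

def log_band_power(trial, fs=250.0, band=(8.0, 12.0)):
    """trial: (n_channels, n_samples) single-trial EEG."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trial, axis=1)      # zero-phase filtering
    return np.log(np.var(filtered, axis=1))       # one feature per channel

rng = np.random.default_rng(0)
trial = rng.standard_normal((2, 1000))            # synthetic 2-channel trial
features = log_band_power(trial)                  # e.g. features at C3 and C4
print(features)
```

With two channels this yields exactly the kind of two-dimensional feature vector used for the decision-boundary visualizations below.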

An understanding of the properties of the features is important when choosing a classifier. Even though a logarithmic transform was applied to all the features in this study, in most cases the features do not have a multivariate normal distribution, as assessed by Mardia's multivariate normality test (p < 0.05). Despite the violation of the normality assumption, LDA appears to be quite robust: 44.1% of the time, LDA outperforms LR and SVM, which make no assumptions about the distribution of the data. Fig. 9 and Fig. 10 compare the decision boundaries of the three classification algorithms for the two-class problems REST vs MI-GRASP and MI-GRASP vs MI-ELBOW respectively. For visualization purposes, these decision boundaries are derived using only two features: the logarithmic band power at C3 and C4, computed from the EEG data collected from P03. These features are not normally distributed according to Mardia's test. For REST vs MI-GRASP (Fig. 9), the decision boundary of SVM is almost linear; SVM and both linear classifiers (i.e., LDA and LR) produce a high accuracy of approximately 81.0%. For MI-GRASP vs MI-ELBOW (Fig. 10), the classification problem becomes challenging as the data overlap more in the feature space. Thus, the classification accuracy achieved by LDA and LR is low, i.e., approximately 53.0%. The decision boundary of SVM is non-linear, resulting in a higher accuracy of 55.0%. As only two features were employed, the classification accuracies achieved in these two examples are lower than those presented in Table 4 and Table 5.
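This behaviour can be reproduced qualitatively on synthetic two-dimensional features: with heavily overlapping classes, an RBF-kernel SVM can carve a non-linear boundary while LDA and LR remain linear. The sketch below uses simulated Gaussian data, not the band-power features of P03:

```python
# Sketch: cross-validated comparison of LDA, LR, and RBF-SVM on two
# overlapping 2-D classes (synthetic stand-in for MI-GRASP vs MI-ELBOW).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),     # class 0
               rng.normal(0.8, 1.0, (100, 2))])    # class 1, shifted mean
y = np.repeat([0, 1], 100)

accs = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("LR", LogisticRegression()),
                  ("SVM-RBF", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    accs[name] = cross_val_score(clf, X, y, cv=10).mean()
print(accs)
```

On such data all three classifiers land well above chance but below ceiling; whether the RBF kernel helps depends on how non-linear the true class boundary is, which mirrors the difference between Fig. 9 and Fig. 10.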

Fig 9. The decision boundaries of LDA, LR, and SVM when classifying REST against MI-GRASP.

The red and black circles represent samples from REST and MI-GRASP respectively.

https://doi.org/10.1371/journal.pone.0121896.g009

Fig 10. The decision boundaries of LDA, LR, and SVM when classifying MI-ELBOW against MI-GRASP.

The red and black circles represent samples from MI-GRASP and MI-ELBOW respectively.

https://doi.org/10.1371/journal.pone.0121896.g010

The scatter plots in Fig. 11 compare the performance of the three feature extraction methods; the comparison between the three classification methods is illustrated in Fig. 12. The red line in each scatter plot marks the condition where the two algorithms under consideration achieve the same accuracy. Points that deviate from the red line are instances in which one algorithm outperforms the other; the larger the deviation, the greater the performance difference between the two algorithms. As shown in Fig. 11, the differences between the feature extraction methods are pronounced. The largest differences are a) 20.2% between CSP and BP; b) 17.4% between FBCSP and BP; and c) 12.9% between FBCSP and CSP. The performance differences between the classification methods are smaller (see Fig. 12): the largest difference observed between LDA and LR is 7.3%, between SVM and LR is 9.4%, and between SVM and LDA is 7.5%. As the performance difference between algorithms can be large, it is important to use appropriate feature extraction and classification algorithms to optimize BCI performance. The choice of features, however, affects the BCI performance more than the choice of classification algorithm in this study.

Fig 11. Scatter plots of the performances of different feature extraction algorithms.

https://doi.org/10.1371/journal.pone.0121896.g011

Fig 12. Scatter plots of the performances of different classification algorithms.

https://doi.org/10.1371/journal.pone.0121896.g012

Potential Applications

The results obtained from this study are promising. The proposed BCI can increase the number of degrees of freedom of the robotic system designed in our lab for stroke rehabilitation or for assistive purposes. For example, stroke patients can imagine elbow movements to extend the robotic arm when reaching out for a target object (e.g., a cup), and then imagine grasp movements to activate the functional electrical stimulation (FES) and close their fingers to grab the object. For rehabilitation purposes, the same strategy could be used to perform task-specific exercises (e.g., picking up a bean bag and placing it on one of four other locations on the table). Task-specific training refers to a therapy in which patients practice goal-oriented motor tasks they would use in daily living, such as drinking [65]. Studies have shown that task-specific training after stroke results in better functional outcomes [66]. In addition, task-specific training has been shown to produce longer-lasting cortical reorganization than traditional stroke rehabilitation [65, 67]. In another study, Boyd et al. investigated whether target-specific or non-specific use of the hemiparetic arm would result in functional reorganization of the contralesional motor cortex after stroke [68]; they reported that task-specific training plays an important role in producing cortical plasticity [68].

Conclusions and Future Work

In summary, we have demonstrated in the present study that same-limb motor imagery classification is possible. For the binary classification of imaginary grasp and elbow movements, the average accuracy achieved is 66.9%. The accuracy achieved when classifying three classes of EEG signals (i.e., rest, imaginary grasp, and imaginary elbow movements) is 60.7%, which is significantly higher than the random classification accuracy of 33.3%. Our results also show that goal-oriented motor imagery leads to higher classification performance.

In our future work, the proposed three-class BCI system will be integrated with an exoskeleton robotic arm and an FES device to help stroke patients perform task-specific exercises during rehabilitation, and the efficacy of the system will be evaluated. It would also be interesting to investigate the performance gain achieved when a hybrid system that combines the BCI with EMG is used to operate the rehabilitation system. The proposed system aims to promote engagement among stroke patients during rehabilitation. More specifically, the system encourages stroke patients to perform mental rehearsal of a movement (i.e., engage in motor imagery) and, at the same time, attempt to generate muscle movements that match their intention to move. The robotic exoskeleton would then provide feedback and assist the patients in performing the desired movements. We believe that such a system can potentially lead to better functional outcomes.

Acknowledgments

This study was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institutes of Health Research (CIHR), and the Michael Smith Foundation for Health Research (MSFHR).

Author Contributions

Conceived and designed the experiments: XY CM. Performed the experiments: XY. Analyzed the data: XY. Contributed reagents/materials/analysis tools: XY CM. Wrote the paper: XY CM.

References

  1. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interface for communication and control. Clin Neurophysiol. 2002;113:767–791. pmid:12048038
  2. Scherer R, Müller GR, Neuper C, Graimann B, Pfurtscheller G. An asynchronously controlled EEG-based virtual keyboard: improvement of the spelling rate. IEEE Trans Biomed Eng. 2004;51(6):979–1307.
  3. Donchin E, Spencer KM, Wijesinghe R. The mental prosthesis: assessing the speed of a P300-based brain computer interface. IEEE Trans Rehabil Eng. 2000;8(2):174–179. pmid:10896179
  4. Pfurtscheller G, Guger C, Müller G, Krausz G, Neuper C. Brain oscillations control hand orthosis in a tetraplegic. Neuroscience Letters. 2000;292:211–214. pmid:11018314
  5. Middendorf M, McMillan G, Calhoun G, Jones KS. Brain-computer interfaces based on the steady-state visual evoked response. IEEE Trans Rehabil Eng. 2000;8(2):211–214. pmid:10896190
  6. Birbaumer N, Hinterberger T, Karim AA, Kubler A, Neumann N, Veit R. Brain-computer communication using self-control of slow cortical potentials (SCP). In: Proceedings of the 2nd International BCI Workshop and Training Course. Graz, Austria; 2004. p. 1–4.
  7. Vaughan TM, Wolpaw JR, Donchin E. EEG-based communication: prospects and problems. IEEE Trans Rehabil Eng. 1996;4(4):425–430. pmid:8973969
  8. Silvoni S, Ramos-Murguialday A, Cavinato M, Volpato C, Cisotto G, Turolla A, et al. Brain-computer interface in stroke: a review of progress. Clinical EEG and Neuroscience. 2011;42(4):242–252.
  9. Ang KK, Guan C. Brain-computer interface in stroke rehabilitation. Journal of Computing Science and Engineering. 2013;7(2):139–146.
  10. Pfurtscheller G, Neuper C. Motor imagery and direct brain-computer communication. Proceedings of the IEEE. 2001;89(7):1123–1134.
  11. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a non-invasive brain-computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America. 2004;101(51):17849–17854.
  12. McFarland DJ, Sarnacki WA, Wolpaw JR. Electroencephalographic (EEG) control of three-dimensional movement. Journal of Neural Engineering. 2010;7(3):1–21.
  13. Royer AS, Doud AJ, Rose ML, He B. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;18(6):581–589. pmid:20876032
  14. LaFleur K, Cassady K, Doud A, Shades K, Rogin E, He B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. Journal of Neural Engineering. 2013;10(4):1–15.
  15. Doud AJ, Lucas JP, Pisansky MT, He B. Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLOS ONE. 2011;6(10):1–10.
  16. Sanes JN, Donoghue JP, Thangaraj V, Edelman RR, Warach S. Shared neural substrates controlling hand movements in human motor cortex. Science. 1995;268(5218):1775–1777. pmid:7792606
  17. Plow EB, Arora P, Pline MA, Binenstock MT, Carey JR. Within-limb somatotopy in primary motor cortex revealed using fMRI. Cortex. 2010;46(3):310–321. pmid:19446804
  18. Liao K, Xiao R, Gonzalez J, Ding L. Decoding individual finger movements from one hand using human EEG signals. PLOS ONE. 2014;9(1):1–12.
  19. Navarro I, Sepulveda F, Hubais B. A comparison of time, frequency and ICA based features and five classifiers for wrist movement classification in EEG signals. In: IEEE EMBS. Shanghai, China; 2005. p. 2118–2115.
  20. Vuckovic A, Sepulveda F. Delta band contribution in cue based single trial classification of real and imaginary wrist movements. Medical & Biological Engineering & Computing. 2008;46(6):529–539. pmid:18418635
  21. Ghani F, Sultan H, Anwar D, Farooq O, Khan YU. Classification of wrist movements using EEG signals. Journal of Next Generation Information Technology (JNIT). 2013;4(8):29–39.
  22. Deng J, Yao J, Dewald JPA. Classification of the intention to generate a shoulder versus elbow torque by means of a time-frequency synthesized spatial patterns BCI algorithm. Journal of Neural Engineering. 2005;2:131–138. pmid:16317237
  23. Zhou J, Yao J, Deng J, Dewald JPA. EEG-based classification for elbow versus shoulder torque intentions involving stroke patients. Computers in Biology and Medicine. 2009;39(5):443–452. pmid:19380125
  24. Chakraborti T, Sengupta A, Banerjee D, Konar A, Anwesha SB, Janarthanan R. Implementation of EEG based control of remote robotic systems. In: International Conference on Recent Trends in Information Systems. Kolkata, India; 2011. p. 203–208.
  25. Looned R, Webb J, Xiao ZG, Menon C. Assisting drinking with an affordable BCI-controlled wearable robot and electrical stimulation: a preliminary investigation. Journal of NeuroEngineering and Rehabilitation. 2014;11(51):1–13.
  26. Page SJ, Levine P, Leonard A. Mental practice in chronic stroke: results of a randomized, placebo-controlled trial. Stroke. 2007;38(4):1293–1297. pmid:17332444
  27. Ietswaart M, Johnston M, Dijkerman HC, Joice S, Scott CL, MacWalter RS, et al. Mental practice with motor imagery in stroke recovery: randomized controlled trial of efficacy. Brain. 2011;134(5):1373–1386. pmid:21515905
  28. Prasad G, Herman P, Coyle D, McDonough S, Crosbie J. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study. Journal of NeuroEngineering and Rehabilitation. 2010;7(60):1–17.
  29. Ortner R, Irimia DC, Scharinger J, Guger C. A motor-imagery based brain-computer interface for stroke rehabilitation. Annual Review of Cybertherapy and Telemedicine. 2012;181:319–323.
  30. Kaiser V, Daly I, Pichiorri F, Mattia D, Müller-Putz GR, Neuper C. Relationship between electrical brain responses to motor imagery and motor impairment in stroke. Stroke. 2012;43(10):2735–2740. pmid:22895995
  31. Daly JJ, Cheng R, Rogers J, Litinas K, Hrovat K, Dohring M. Feasibility of a new application of noninvasive brain-computer interface (BCI): a case study of training for recovery of volitional motor control after stroke. Journal of Neurologic Physical Therapy. 2009;33(4):203–211. pmid:20208465
  32. Tam WK, Tong KY, Meng F, Gao S. A minimal set of electrodes for motor imagery BCI to control an assistive device in chronic stroke subjects: a multi-session study. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2011;19(6):617–627. pmid:21984520
  33. Meng F, Tong KY, Chan ST, Wong WW, Lui KH, Tang KW, et al. BCI-FES training system design and implementation for rehabilitation of stroke patients. In: IJCNN. Hong Kong; 2008. p. 4103–4106.
  34. Young BM, Nigogosyan Z, Nair VA, Walton LM, Song J, Tyler ME, et al. Case report: post-stroke interventional BCI rehabilitation in an individual with preexisting sensorineural disability. Frontiers in Neuroengineering. 2014;7(18):1–12.
  35. Tan HG, Kong KH, Shee CY, Wang CC, Guan C, Ang WT. Post-acute stroke patients use brain-computer interface to activate electrical stimulation. In: EMBS 2010. Buenos Aires, Argentina; 2010. p. 4234–4237.
  36. Ang KK, Guan C, Chua KSG, Ang BT, Kuah C, Wang C, et al. A clinical study of motor imagery-based brain-computer interface for upper limb robotic rehabilitation. In: EMBS 2009. Minnesota, USA; 2009. p. 5981–5984.
  37. Ang KK, Guan C, Chua KSG, Ang BT, Kuah C, Wang C, et al. Clinical study of neurorehabilitation in stroke using EEG-based motor imagery brain-computer interface with robotic feedback. In: EMBS 2010. Buenos Aires, Argentina; 2010. p. 5549–5552.
  38. Ang KK, Guan C, Chua KSG, Ang BT, Kuah C, Wang C, et al. A large clinical study on the ability of stroke patients to use an EEG-based motor imagery brain-computer interface. Clinical EEG and Neuroscience. 2011;42(4):253–258. pmid:22208123
  39. Ang KK, Guan C, Phua KS, Wang C, Zhou L, Tang KY, et al. Brain-computer interface-based robotic end effector system for wrist and hand rehabilitation: results of a three-armed randomized controlled trial for chronic stroke. Frontiers in Neuroengineering. 2014;7(30):1–9.
  40. Gomez-Rodriguez M, Peters J, Hill J, Schölkopf B, Gharabaghi A, Grosse-Wentrup M. Closing the sensorimotor loop: haptic feedback facilitates decoding of arm movement imagery. In: IEEE International Conference on Systems, Man and Cybernetics. Istanbul, Turkey; 2010. p. 121–126.
  41. Gomez-Rodriguez M, Grosse-Wentrup M, Hill J, Gharabaghi A, Schölkopf B, Peters J. Towards brain-robot interfaces in stroke rehabilitation. In: IEEE International Conference on Rehabilitation Robotics. Zurich, Switzerland; 2011. p. 1–6.
  42. Buch E, Weber C, Cohen LG, Braun C, Dimyan MA, Ard T, et al. Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke. 2008;39(3):910–917. pmid:18258825
  43. Broetz D, Braun C, Weber C, Soekadar SR, Caria A, Birbaumer N. Combination of brain-computer interface training and goal-directed physical therapy in chronic stroke: a case report. Neurorehabilitation and Neural Repair. 2010;24(7):674–679. pmid:20519741
  44. Shindo K, Kawashima K, Ushiba J, Ota N, Ito M, Ota T, et al. Effects of neurofeedback training with an electroencephalogram-based brain-computer interface for hand paralysis in patients with chronic stroke: a preliminary case series study. Journal of Rehabilitation Medicine. 2011;43(10):951–957. pmid:21947184
  45. Ramos-Murguialday A, Broetz D, Rea M, Läer L, Yilmaz Ö, Brasil FL, et al. Brain-machine interface in chronic stroke rehabilitation: a controlled study. Annals of Neurology. 2013;74(1):100–108. pmid:23494615
  46. Frisoli A, Loconsole C, Leonardis D, Bannò F, Barsotti M, Chisari C, et al. A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews. 2012;42(6):1169–1179.
  47. Cincotti F, Pichiorri F, Aricò P, Aloise F, Leotta F, de Vico Fallani F, et al. EEG-based brain-computer interface to support post-stroke motor rehabilitation of the upper limb. In: IEEE EMBS. San Diego, USA; 2012. p. 4112–4115.
  48. Electrical Geodesics, Inc. Geodesic Sensor Net Technical Manual; 2007.
  49. Electrical Geodesics, Inc. Net Amps 400 Series Amplifiers. http://www.egi.com.
  50. Delorme A, Mullen T, Kothe C, Acar ZA, Bigdely-Shamlo N, Vankov A, et al. EEGLAB, SIFT, NFT, BCILAB, and ERICA: new tools for advanced EEG processing. Computational Intelligence and Neuroscience. 2011;2011:1–12.
  51. Pfurtscheller G, Lopes da Silva FH. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999;110:1842–1857. pmid:10576479
  52. Dornhege G, Blankertz B, Krauledat M, Losch F, Curio G, Müller KR. Optimizing spatio-temporal filters for improving brain-computer interfacing. In: Platt J, editor. Advances in Neural Information Processing Systems (NIPS 05). vol. 18. Vancouver, Canada; 2005. p. 315–322.
  53. Ramoser H, Müller-Gerking J, Pfurtscheller G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans Rehabil Eng. 2000;8(4):441–447. pmid:11204034
  54. Wang Y, Gao S, Gao X. Common spatial pattern method for channel selection in motor imagery based brain-computer interface. In: 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS 2005); 2005. p. 5392–5395.
  55. Ang KK, Chin CY, Zhang H, Guan C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In: IEEE International Joint Conference on Neural Networks. Hong Kong; 2008. p. 2390–2397.
  56. Lachenbruch PA. Discriminant Analysis. New York: Hafner Press; 1975.
  57. Jaakkola TS, Jordan MI. Bayesian parameter estimation via variational methods. Statistics and Computing. 2000;10:25–37.
  58. Bishop CM. Pattern Recognition and Machine Learning. Springer; 2006.
  59. Hsu CW, Chang CC, Lin CJ. A practical guide to support vector classification; 2010.
  60. Yuan H, Liu T, Szarkowski R, Rios C, Ashe J, He B. Negative covariation between task-related responses in alpha/beta-band activity and BOLD in human sensorimotor cortex: an EEG and fMRI study of motor imagery and movements. Neuroimage. 2010;49(3):1–21.
  61. Hollander M, Wolfe DA, Chicken E. Nonparametric Statistical Methods. 3rd ed. Wiley; 2013.
  62. Filimon F, Nelson JD, Hagler DJ, Sereno MI. Human cortical representations for reaching: mirror neurons for execution, observation, and imagery. Neuroimage. 2007;37(4):1315–1328. pmid:17689268
  63. Andersen RA, Buneo CA. Intentional maps in posterior parietal cortex. Annu Rev Neurosci. 2002;25:189–220. pmid:12052908
  64. Scherberger H, Jarvis MR, Andersen RA. Cortical local field potential encodes movement intentions in the posterior parietal cortex. Neuron. 2005;46(2):347–354. pmid:15848811
  65. Hubbard IJ, Parsons MW, Neilson C, Carey LM. Task-specific training: evidence for and translation to clinical practice. Occupational Therapy International. 2009;16(3–4):175–189. pmid:19504501
  66. Rensink M, Schuurmans M, Lindeman E, Hafsteinsdottir T. Task-oriented training in rehabilitation after stroke: systematic review. Journal of Advanced Nursing. 2008;65(4):737–754.
  67. Classen J, Liepert J, Wise SP, Hallett M, Cohen LG. Rapid plasticity of human cortical movement representation induced by practice. Journal of Neurophysiology. 1998;79(2):1117–1123. pmid:9463469
  68. Boyd LA, Vidoni ED, Wessel BD. Motor learning after stroke: is skill acquisition a prerequisite for contralesional neuroplastic change? Neuroscience Letters. 2010;482(1):21–25. pmid:20609381