The authors have declared that no competing interests exist.
Conceived and designed the experiments: JPSC SN CV. Performed the experiments: JPSC HMPC APR JMF FA AML CV EH SN. Analyzed the data: JPSC HMPC EH. Wrote the paper: JPSC HMPC APR JMF CV EH SN.
Epilepsy is a common neurological disorder which affects 0.5–1% of the world population. Its diagnosis relies both on electroencephalogram (EEG) findings and on the characteristic seizure-induced body movements, called seizure semiology. Thus, synchronous EEG and 2D video recording systems (known as video-EEG) are the most accurate tools for epilepsy diagnosis. Despite the establishment of several quantitative methods for EEG analysis, seizure semiology is still analyzed by visual inspection, based on epileptologists’ subjective interpretation of the movements of interest (MOIs) that occur during recorded seizures. In this contribution, we present NeuroKinect, a low-cost, easy-to-set-up and easy-to-operate solution for a novel 3Dvideo-EEG system. It is based on an RGB-D sensor (Microsoft Kinect camera) and performs 24/7 monitoring of an Epilepsy Monitoring Unit (EMU) bed. It does not require the attachment of any reflectors or sensors to the patient’s body and has a very low maintenance load. To evaluate its performance and usability, we mounted a state-of-the-art 6-camera motion-capture system and our low-cost solution over the same EMU bed. A comparative study of seizure-simulated MOIs showed an average correlation of 84.2% between the resulting 3D motion trajectories. We then used our system in the routine of an EMU and collected 9 different seizures, in which we performed 3D kinematic analysis of 42 MOIs arising from the temporal (TLE) (n = 19) and extratemporal (ETE) brain regions (n = 23). The obtained results showed that movement displacement and movement extent discriminated both seizure MOI groups at statistically significant levels (mean = 0.15 m vs. 0.44 m,
Epilepsy is a common neurological disorder which affects 0.5–1% of the world population. The main symptoms of epilepsy are epileptic seizures, which typically occur unexpectedly. In between seizures, most individuals with epilepsy function normally. Video-EEG monitoring is the most accurate tool for the diagnosis and differential diagnosis of epilepsy. The seizure semiology and its relation to changes in the electroencephalogram (EEG) are the cornerstone of this diagnostic method [
Although quantitative methods for EEG analysis have been established for many years, seizure semiology is still analyzed in most Epilepsy Monitoring Units (EMUs) by visual inspection, based on epileptologists’ subjective interpretation of the movements of interest (MOIs) that occur during recorded seizures.
Our group initiated attempts to quantify seizure movements in the last century with a 2D infrared (IR) marker-based approach [
The limitations of the 2D approaches were obvious: the difficulty of tracking the erratic MOIs, frequent marker occlusions, and instability of the attached reflectors or sensors due to violent seizure-associated movements. In addition, the lack of precision when MOIs are not performed parallel to the image plane distorts the analysis. These limitations called for a 3Dvideo-EEG system.
An evolution in this direction, based on the integration of a high-speed (200Hz) 6 infrared motion tracking camera system (Vicon plc., Oxford, UK) with a commercial 64/128 channels video-EEG system (XLTEK, London, ON, Canada [
Non-camera approaches to motion quantification in epileptic seizures have been presented, relying on wearable inertial sensors such as gyroscopes, accelerometers or magnetometers. These approaches were used for seizure detection based on filtering [
Newly available RGB-D cameras (color and depth streams) such as the Microsoft Kinect [
In this contribution, we present NeuroKinect, a low-cost, easy-to-set-up and easy-to-operate solution for a novel 3Dvideo-EEG system based on an RGB-D sensor (Microsoft Kinect) for 24/7 monitoring of an EMU bed. It does not require the attachment of any reflectors or sensors to the patient’s body and has a very low maintenance load, associated only with data management that can be largely automated. A comparative study of seizure-simulated MOIs, using the 6-camera Vicon system and the NeuroKinect simultaneously, is presented. Furthermore, we introduce the system’s 3D tracking algorithm, which uses the RGB and depth data, and the corresponding kinematic analysis of 42 real seizure MOIs, 19 from temporal (TLE) and 23 from extratemporal (ETE) brain regions. We show that several quantified 3D parameters obtained with the developed system can discriminate TLE from ETE seizure MOIs considering a
To the best of our knowledge, no similar low cost system is currently available with these characteristics and potential.
The architecture of our 3Dvideo-EEG system is depicted in
Notice that the KiT time-code is inserted into the video-EEG system for complete synchronization. The system does not interfere with the EMU routine.
The RGB-D camera (a Kinect v1 sensor, Microsoft Corp., Redmond, WA, USA) provides synchronized 640 × 480 8-bit color (RGB) and depth images at 30 frames-per-second (fps), and connects to the workstation PC through a USB port [
The KiT application manages the acquisition sessions performed with the system. Users can perform several actions, such as calibration, starting/stopping data acquisition, and inserting a label associated with a given instant (e.g. the beginning of a seizure), among other workflow controls. Furthermore, a ring-buffer solution for data acquisition was implemented to enable the 24/7 continuous recording required for a routine EMU within a hospital environment.
Based on this ring-buffer solution, while KiT continuously acquires all data to this buffer, the user can mark the occurrence of Events (e.g. seizures) and within a “buffer size duration” (currently 72h but dependent on the buffer disk size), transfer the marked events to a database hosted by the workstation PC through the high-speed data connection using the KiMA tool (
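The ring-buffer recording scheme described above can be illustrated with a minimal sketch. The class `FrameRingBuffer`, its methods and its parameters are our illustrative names and assumptions, not the actual KiT implementation:

```python
from collections import deque

class FrameRingBuffer:
    """Fixed-capacity buffer: the oldest frames are dropped as new ones
    arrive, so disk usage stays bounded while recording 24/7."""

    def __init__(self, capacity_frames):
        self.frames = deque(maxlen=capacity_frames)  # (timestamp, rgb, depth)
        self.events = []                             # marked (timestamp, label)

    def push(self, timestamp, rgb, depth):
        self.frames.append((timestamp, rgb, depth))

    def mark_event(self, timestamp, label):
        """Called when a technician marks e.g. a seizure onset."""
        self.events.append((timestamp, label))

    def export_event(self, timestamp, pre_s=60, post_s=120):
        """Return the frames surrounding a marked event, for transfer
        to the database before the buffer overwrites them."""
        return [f for f in self.frames
                if timestamp - pre_s <= f[0] <= timestamp + post_s]

# 30 fps for 72 h would need 30 * 3600 * 72 frames of capacity; a tiny toy:
buf = FrameRingBuffer(capacity_frames=5)
for t in range(8):                      # frames 0..2 are overwritten
    buf.push(t, rgb=None, depth=None)
buf.mark_event(6, "seizure onset")
clip = buf.export_event(6, pre_s=2, post_s=1)   # frames 4..7 survive
```

The deque with `maxlen` gives the overwrite-oldest behavior for free; a disk-backed implementation would replace the deque with a circular file store.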
Synchronized multiple EEG channels with the KiT software (Fig A). Optimized field of view of the Kinect installed in the epilepsy room, shown in KiMA’s user interface (Fig B). The typical distance between the sensor and the bed center ranges from 1.9 to 2.1 meters.
A preliminary comparative performance study between our low-cost system and an expensive multi-camera optical system was performed. NeuroKinect was installed over the same EMU bed where we have been using the Vicon system reported elsewhere [
Following this preliminary study, we proceeded to embed NeuroKinect into the routine of the University of Munich EMU and established a data acquisition protocol with the technicians and epileptologists. The designed experimental protocol consists of several steps. Firstly, the scene background is saved before the monitoring of a new patient is initiated; this image can later be used for background-subtraction purposes. Then the normal routine is carried out: placing the patient on the EMU bed, connecting the EEG electrodes, etc.
The system then runs 24/7, storing data in the ring buffer and waiting for a seizure to occur. As part of the EMU routine, seizure onsets are marked by the technicians in the video-EEG system and can be retrieved in NeuroKinect. The EEG seizure pattern and its evolution are classified according to a system published earlier [
The preliminary study has shown that the built-in Kinect 3D body-joint position tracking algorithm [
The dataset used in this study is composed of 42 seizure MOIs found in different parts of the patients’ body, 19 from TLE and 23 from ETE seizures distributed as depicted in
| Sz Type | #Sz | #MOI | Head | Left hand | Right hand | Left foot | #Frames | Duration (min) | Data size (GB) |
|---|---|---|---|---|---|---|---|---|---|
| TLE | 4 | 19 | 6 | 6 | 6 | 1 | 23,520 | 7.2 | 13.3 |
| ETE | 5 | 23 | 8 | 8 | 6 | 1 | 47,876 | 13.4 | 24.8 |
| Total | 9 | 42 | 14 | 14 | 12 | 2 | 71,396 | 20.6 | 38.1 |
These MOIs comprise a total of 71,396 frames (RGB and depth), covering over 20 minutes of ictal activity and totaling almost 40 GB of data to analyze.
Since we could not use the Microsoft Kinect built-in algorithm to track the 3D positions of the body joints, we developed a novel user-interactive algorithm to process the NeuroKinect data. This method combines the optical flow (OF) concept [
The initial step of the tracking process consists in calculating the optical flow velocities over the RGB frames using the Horn-Schunck OF method [
The yellow and green colors are associated with the depth of the centroid in two consecutive frames. In frame n, the user selects an ellipsoid mask over the aimed ROI (Fig A). The resulting velocity vector is estimated (based on the OF pixel velocity vectors, in blue) and is used to calculate the next centroid (Fig B). In frame n + 1, the new centroid is calculated. If the depth of the estimated
Let
Once the ellipsoid mask is defined, the median over the highest velocity vectors is calculated and then used to shift the centroid accordingly. The median was chosen after performing the Jarque-Bera test [
Let
So, if the shifted mask centroid is within a certain depth range (
If the calculated centroid is rejected, the algorithm searches for the nearest neighbor inside the ellipsoid ROI that fulfills criteria
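The centroid update and depth-gating steps described above can be sketched as follows. The top-fraction of velocity vectors, the depth tolerance `max_depth_jump` and the fallback behavior are illustrative assumptions, not the paper's exact criteria:

```python
import numpy as np

def update_centroid(centroid, flow_u, flow_v, depth, mask,
                    top_frac=0.2, max_depth_jump=0.15):
    """Shift the ROI centroid by the median of the strongest optical-flow
    vectors inside the ellipsoid mask; accept the new position only if its
    depth is consistent with the current one, otherwise fall back to the
    nearest in-mask pixel whose depth fulfills the criterion."""
    ys, xs = np.nonzero(mask)
    mag = np.hypot(flow_u[ys, xs], flow_v[ys, xs])
    k = max(1, int(top_frac * mag.size))
    top = np.argsort(mag)[-k:]                      # strongest vectors
    # median rather than mean: the in-mask velocity distribution is
    # typically non-Gaussian (cf. the Jarque-Bera test mentioned above)
    du = np.median(flow_u[ys[top], xs[top]])
    dv = np.median(flow_v[ys[top], xs[top]])
    cy = int(round(centroid[0] + dv))
    cx = int(round(centroid[1] + du))

    d0 = depth[int(centroid[0]), int(centroid[1])]
    if abs(depth[cy, cx] - d0) <= max_depth_jump:   # depth-consistent: accept
        return cy, cx
    # rejected: nearest in-mask neighbor with a consistent depth
    ok = np.abs(depth[ys, xs] - d0) <= max_depth_jump
    if not ok.any():
        return int(centroid[0]), int(centroid[1])   # keep previous position
    dist = (ys[ok] - cy) ** 2 + (xs[ok] - cx) ** 2
    j = np.argmin(dist)
    return int(ys[ok][j]), int(xs[ok][j])

# toy check: uniform rightward flow over a flat depth map
depth = np.ones((20, 20))
flow_u = np.ones((20, 20))
flow_v = np.zeros((20, 20))
mask = np.zeros((20, 20), bool)
mask[8:13, 8:13] = True
new_c = update_centroid((10, 10), flow_u, flow_v, depth, mask)
```

With consistent depth, the centroid simply follows the median flow; introducing a depth spike at the target pixel triggers the nearest-neighbor fallback instead.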
The process is repeated until all frames are analyzed. A schematic representation of this process is depicted in
Finally, a Savitzky-Golay FIR filter [
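Such trajectory smoothing can be reproduced with SciPy's Savitzky-Golay filter; the window length and polynomial order below are illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 30.0                                # Kinect v1 frame rate (fps)
t = np.arange(0, 4, 1 / fs)
# synthetic noisy 3D trajectory (N x 3: x, y, z in meters)
rng = np.random.default_rng(0)
traj = np.column_stack([np.sin(2 * np.pi * 0.5 * t),
                        0.2 * t,
                        np.cos(2 * np.pi * 0.5 * t)])
noisy = traj + rng.normal(0, 0.01, traj.shape)

# smooth each coordinate; window_length must be odd and > polyorder
smooth = savgol_filter(noisy, window_length=11, polyorder=3, axis=0)
```

The polynomial fit preserves the low-frequency movement while attenuating sensor noise, which matters because the velocity, acceleration and jerk derived next amplify high-frequency noise.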
Once movement tracking is completed, 3D quantification is performed by feeding the MOI tracking time series (
For all MOIs we computed a set of 56 + 23 quantified metrics and studied their significance level in discriminating the two types of seizure MOIs of the dataset. We start by computing the maximum, median, mean, minimum and standard deviation (std) of velocity, acceleration and jerk (second derivative of velocity) [
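The per-MOI kinematic statistics can be computed from the smoothed 3D trajectory with finite differences; this is a sketch under our own naming, with covered distance (CD) and movement displacement (MD) included:

```python
import numpy as np

def kinematic_stats(traj, fs=30.0):
    """traj: (N, 3) array of 3D positions in meters; fs: sampling rate (Hz).
    Returns max/median/mean/min/std of the speed, acceleration and jerk
    magnitudes, plus covered distance (CD) and movement displacement (MD)."""
    dt = 1.0 / fs
    vel = np.gradient(traj, dt, axis=0)               # m/s
    acc = np.gradient(vel, dt, axis=0)                # m/s^2
    jerk = np.gradient(acc, dt, axis=0)               # m/s^3
    stats = {}
    for name, sig in (("velocity", vel), ("acceleration", acc), ("jerk", jerk)):
        mag = np.linalg.norm(sig, axis=1)
        stats[name] = {"max": mag.max(), "median": np.median(mag),
                       "mean": mag.mean(), "min": mag.min(), "std": mag.std()}
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    stats["CD"] = steps.sum()                         # path length actually traveled
    stats["MD"] = np.linalg.norm(traj[-1] - traj[0])  # straight-line displacement
    return stats

# toy check: uniform motion along x at 0.5 m/s
t = np.arange(0, 1, 1 / 30)
line = np.column_stack([0.5 * t, np.zeros_like(t), np.zeros_like(t)])
s = kinematic_stats(line)
```

For the straight line, CD and MD coincide and acceleration and jerk vanish; for a curved or oscillating MOI, CD exceeds MD, which is exactly what separates confined automatisms from progressive movements.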
To these 17 measures, we added a new parameter, 3D movement extent (ME), which generalizes to 3D a measure we have been using in our previous 2D studies [
The solid line inside each bounding box represents movement extent, calculated as the maximum volume traveled by the MOI and limited by the three-dimensional maxima 1, 2, 3, 4, 5 and 6. The dotted lines represent the medial planes, used as reference to calculate the movement laterality in each direction (R: right, L: left, F: front, B: back). In this example, a MOI with mostly right, down and front laterality is represented.
Laterality is the maximum distance of a MOI trajectory in relation to the medial planes (YZ, XZ and XY): left/right, down/up and back/front, respectively. A criterion was also applied to assess whether there was a clear tendency of the MOI to travel in a certain direction. In such cases, the distance between the MOI position and the medial plane is calculated (
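Under the above definitions, movement extent reduces to the volume of the trajectory's axis-aligned bounding box, and laterality to the maximum excursion from each medial plane. A sketch, in which placing the medial planes through the trajectory's first sample is our assumption:

```python
import numpy as np

def movement_extent(traj):
    """ME: volume (m^3) of the bounding box spanned by the 3D trajectory,
    delimited by the per-axis maxima and minima."""
    span = traj.max(axis=0) - traj.min(axis=0)
    return float(np.prod(span))

def laterality(traj):
    """Maximum excursion (m) of the trajectory from the medial planes
    through its first sample: left/right (x), down/up (y), back/front (z)."""
    rel = traj - traj[0]
    out = {}
    for axis, (neg, pos) in zip(range(3), [("left", "right"),
                                           ("down", "up"),
                                           ("back", "front")]):
        out[neg] = float(-rel[:, axis].min())  # excursion in negative direction
        out[pos] = float(rel[:, axis].max())   # excursion in positive direction
    return out

# toy check: a short trajectory moving right, up and to the front
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.1, 0.2, 0.0],
                [0.1, 0.2, 0.05]])
me = movement_extent(pts)
lat = laterality(pts)
```

Aggregating `movement_extent` over all MOIs of one seizure gives the seizure-level SME parameter introduced below; the per-direction excursions feed the laterality comparison.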
Finally, we introduce several new seizure quantification parameters: Seizure Movement Extent (SME), Seizure Covered Distance (SCD), Seizure Movement Displacement (SMD), and Seizure Aggregate Velocity/Acceleration/Jerk (SAV, SAA, SAJ), which correspond to 18 seizure quantification parameters. SME is the aggregation of the MEs of all MOIs involved in a given seizure. For example, if a seizure presents hand automatisms and head rotations, this parameter aggregates the ME of both hands’ MOIs and the head MOI, resulting in the overall volume covered by that seizure’s MOIs, as defined in
Given the dataset characteristics (
All 56 + 23 quantitative seizure parameters for the two seizure pattern groups (temporal vs. extratemporal) were compared using the Wilcoxon signed-rank test [
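Such a group comparison can be sketched with SciPy. Note one assumption: since the TLE and ETE MOI groups are independent and of unequal size (19 vs. 23), the rank-sum (Mann-Whitney) variant of the Wilcoxon test is used here, and the samples below are synthetic placeholders loosely mimicking the reported MD means, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# synthetic movement-displacement samples (m); means/stds echo the MD column
md_tle = rng.normal(0.15, 0.11, 19).clip(min=0.0)   # 19 TLE MOIs
md_ete = rng.normal(0.44, 0.29, 23).clip(min=0.0)   # 23 ETE MOIs

stat, p = mannwhitneyu(md_tle, md_ete, alternative="two-sided")
significant = p < 0.05
```

A nonparametric test is the natural choice here because the kinematic parameters are not normally distributed across MOIs.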
MOIs were acquired at 200 Hz by the Vicon system and at 20 Hz by KiT. To perform a comparison, the Vicon motion signals were down-sampled to 20 Hz. For all simulated MOIs, the signals were low-pass filtered and then compared visually and through a 3D correlation analysis using a custom-made Matlab program. The overall average correlation was 84.2% considering all 3 axes. Details of this study can be found in
The average correlation for all 3 axes was 84.2%.
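The comparison step (down-sampling 200 Hz to 20 Hz, then per-axis correlation) can be sketched as follows; the anti-aliased decimation and Pearson correlation are standard stand-ins for the custom Matlab program, not its actual code:

```python
import numpy as np
from scipy.signal import decimate

def compare_trajectories(vicon_200hz, kinect_20hz):
    """vicon_200hz: (N, 3) trajectory at 200 Hz; kinect_20hz: (M, 3) at 20 Hz.
    Down-samples the Vicon signal by 10 (with anti-aliasing) and returns the
    per-axis Pearson correlations plus their average."""
    vicon_ds = decimate(vicon_200hz, 10, axis=0, zero_phase=True)
    n = min(len(vicon_ds), len(kinect_20hz))
    corrs = [np.corrcoef(vicon_ds[:n, k], kinect_20hz[:n, k])[0, 1]
             for k in range(3)]
    return corrs, float(np.mean(corrs))

# toy check: the same slow sinusoidal motion seen by both systems
t200 = np.arange(0, 5, 1 / 200)
truth = np.column_stack([np.sin(t200), np.cos(t200), 0.1 * t200])
rng = np.random.default_rng(1)
kin = truth[::10] + rng.normal(0, 0.01, truth[::10].shape)  # 20 Hz + noise
corrs, avg = compare_trajectories(truth, kin)
```

The zero-phase decimation avoids introducing a delay between the two signals, which would otherwise bias the correlation downward.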
The results of the performed statistical analysis are summarized from Tables
| Sz Type | #MOIs | Velocity median (m s⁻¹) | Velocity std (m s⁻¹) | Acceleration mean (m s⁻²) | Acceleration std (m s⁻²) | Jerk mean (m s⁻³) | Jerk std (m s⁻³) | MD (m) | CD (m) | ME (m³) | MOI duration (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TLE | 19 | 0.086 ± 0.064 | 0.58 ± 0.41 | 12 ± 8.4 | 38 ± 33 | 345 ± 287 | 1204 ± 1151 | 0.15 ± 0.11 | 6.3 ± 4.2 | 0.068 ± 0.11 | 23 ± 6.6 |
| ETE | 23 | 0.068 ± 0.063 | 0.46 ± 0.29 | 8.6 ± 4.6 | 23 ± 17 | 172 ± 127 | 574 ± 506 | 0.44 ± 0.29 | 7.6 ± 2.9 | 0.14 ± 0.16 | 35 ± 10 |
| p | n.a. | n.s. | n.s. | n.s. | n.s. | <0.05 | n.s. | <0.001 | n.s. | <0.05 | <0.01 |
[Note.] Values are given as mean ± std. n.a.: not applicable. n.s.: not significant.
| Sz Type | #MOIs (L–R) | Laterality X: Left (cm) | Laterality X: Right (cm) | #MOIs (D–U) | Laterality Y: Down (cm) | Laterality Y: Up (cm) | #MOIs (B–F) | Laterality Z: Back (cm) | Laterality Z: Front (cm) |
|---|---|---|---|---|---|---|---|---|---|
| TLE | 14 (6–8) | 1.5 ± 0.8 | 1.0 ± 0.89 | 13 (8–5) | 0.35 ± 0.4 | 0.53 ± 0.075 | 14 (8–6) | 5.4 ± 3.9 | 10 ± 7.9 |
| ETE | 19 (7–12) | 6.1 ± 8.6 | 0.88 ± 0.87 | 18 (12–6) | 1.1 ± 0.14 | 1.8 ± 0.25 | 19 (12–7) | 2.2 ± 1.0 | 3.2 ± 2.3 |
| p | n.a. | n.s. | <0.05 | n.a. | <0.001 | n.s. | n.a. | n.s. | <0.001 |

[Note.] Values are given as mean ± std. n.a.: not applicable. n.s.: not significant.
| Sz Type | #Sz | SAV max (m s⁻¹) | SAA max (m s⁻²) | SAJ max (m s⁻³) | SME (m³) | SMD (m) | SCD (m) | Sz duration (s) |
|---|---|---|---|---|---|---|---|---|
| TLE | 4 | 0.332 ± 0.217 | 26.3 ± 23.9 | 940.9 ± 960.2 | 0.32 ± 0.25 | 0.69 ± 0.57 | 30.1 ± 19.2 | 35 ± 3 |
| ETE | 5 | 0.261 ± 0.257 | 15.8 ± 15.2 | 509.1 ± 501.5 | 0.55 ± 0.79 | 1.9 ± 2.1 | 32.3 ± 21.0 | 55 ± 30 |
| p | n.a. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. |

[Note.] Values are given as mean ± std. n.a.: not applicable. n.s.: not significant.
Additionally, the analysis of the movement extent separated TLE MOIs from ETE MOIs, as it was greater in the extratemporal than in the temporal seizures (mean = 0.14 m³ vs. 0.068 m³,
In terms of movement laterality, highly significant differences were found in the movement characteristics, as can be seen in
Based on the performed classification, we then evaluated whether there were any significant differences between TLE and ETE movements in the same direction. It can be seen (
We also evaluated whether there was any significant asymmetry between ictal movements of TLE and ETE seizures. For that purpose, three 2×2 contingency tables were designed (using the information available in
Finally, considering the seizure quantification parameters, presented in
Comparing our semi-automatic approach with a state-of-the-art 2D motion analysis software, MaxTRAQ [
This feasibility study shows that our low-cost 3D movement analysis system is easily usable in the clinical routine of a standard EMU setting and is capable of tracking and identifying sets of MOIs that allow the discrimination between seizures arising from temporal and extratemporal brain areas. This system allows us to overcome major limitations of our previous 2D marker-based system, which used a monochrome camera and infrared markers attached to the patient’s body [
The direct comparison to a high-cost multicamera system [
The parallel use of an RGB-D camera with a conventional video-EEG recording allows full synchronization of the signals. This is particularly important for the evaluation of ictal EEG localization with regard to seizure-related movements. It is well known that the spread of epileptic activity over the brain leads to changes in the characteristic seizure-associated movements [
Analysis of epileptic seizure semiology relies essentially on qualitative criteria, which makes it prone to inter-observer discrepancy [
Our 3D analysis shows that TLE seizure-related movements are faster and more confined in space than ETE movements. On the other hand, the ETE patterns last longer and cover a larger three-dimensional volume; the ETE movement occurs at lower velocities and is progressive in space. For the purpose of this paper, we focused on the statistical analysis of the above-mentioned movement parameters. However, other approaches, such as pattern recognition and machine learning, would also be of interest. MOI pattern analysis did not reach statistical significance in discriminating TLE and ETE seizures in this study, probably due to the low number of seizures included in this pilot feasibility study.
Kinect for Windows Developer Program Preview [
The Kinect v2 sensor was recently integrated into the 3Dvideo-EEG system. From the acquisitions carried out so far, Kinect v2 seems to provide better information than the Kinect v1 sensor used in the present study, as can be seen in
In the primary and secondary sources, the new color and body-index streams, as well as the 3D joint estimations of the human body, are presented.
In this contribution, we introduce a novel 3Dvideo-EEG system, based on a low-cost markerless motion-capture system that has been integrated into a routine EMU setting, allowing 24/7 patient monitoring. The system has been in operation for approximately one year and is now producing valuable data every day. To our knowledge, we are gathering the only 3Dvideo-EEG database in the world used in a routine clinical epileptology department. The present paper reports the first version of this system and the first dataset extracted from our multimedia clinical database, covering the methods behind the acquisition and processing of seizure semiology. A new version of the system, now equipped with the Kinect v2 sensor, is already producing data. It provides more accurate information than the previous sensor, and we are also able to relate the quantified motion information reported here with EEG events occurring in the course of the seizure. Automation methods are not covered in this paper, but we intend to address them in coming publications. Seizure-type classification, psychogenic vs. epileptic seizure separation, and automated seizure detection are on the horizon of the next challenges we are already tackling.
We also believe that the low-cost nature of the system gives it the potential to be spread and deployed in the routine of multiple epilepsy units around the world. We are open to requests from other EMUs so that our development can be used for the benefit of the largest possible number of patients.
To the best of the authors’ knowledge, the results here reported constitute the first usage of a Kinect sensor (both v1 and v2) in a real hospital environment, in the context of seizure semiology analysis.
The authors wish to thank the subjects that participated in the study. A special thanks to Microsoft for providing the pre-release version of Kinect v2, and to Eduardo Dias for contributions in the KiT application. This work is partially funded by the ERDF (European Regional Development Fund) through the COMPETE Programme (operational programme for competitiveness) and by Portuguese Funds through the FCT Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within the projects Incentivo EEI/UI0127/2014 and UID/CEC/00127/2013, MovEpil3D (PTDC/SAUBEB/72305/2006), MDAS (PTDC/NEU-SCC/0767/2012) and UID/EEA/50014/2013 granted to INESC TEC. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.