
Machine learning model to study the rugby head impact in a laboratory setting

  • Danyon Stitt,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Validation, Visualization, Writing – original draft

    Affiliations Department of Mechanical Engineering, University of Canterbury, Christchurch, Canterbury, New Zealand, Sports Health and Rehabilitation Research Center (SHARRC), University of Canterbury, Christchurch, Canterbury, New Zealand

  • Natalia Kabaliuk ,

    Roles Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing

    natalia.kabaliuk@canterbury.ac.nz

    Affiliations Department of Mechanical Engineering, University of Canterbury, Christchurch, Canterbury, New Zealand, Sports Health and Rehabilitation Research Center (SHARRC), University of Canterbury, Christchurch, Canterbury, New Zealand

  • Nicole Spriggs ,

    Contributed equally to this work with: Nicole Spriggs, Stefan Henley

    Roles Data curation, Methodology, Writing – review & editing

    Affiliations Department of Tourism, Sport, and Society, Lincoln University, Lincoln, Canterbury, New Zealand, Sports Health and Rehabilitation Research Center (SHARRC), University of Canterbury, Christchurch, Canterbury, New Zealand

  • Stefan Henley ,

    Contributed equally to this work with: Nicole Spriggs, Stefan Henley

    Roles Data curation, Methodology, Writing – review & editing

    Affiliations Faculty of Health, University of Canterbury, Christchurch, Canterbury, New Zealand, Sports Health and Rehabilitation Research Center (SHARRC), University of Canterbury, Christchurch, Canterbury, New Zealand

  • Keith Alexander,

    Roles Supervision, Writing – review & editing

    Affiliation Department of Mechanical Engineering, University of Canterbury, Christchurch, Canterbury, New Zealand

  • Nick Draper

    Roles Funding acquisition, Project administration, Supervision, Writing – review & editing

    Affiliations Faculty of Health, University of Canterbury, Christchurch, Canterbury, New Zealand, Sports Health and Rehabilitation Research Center (SHARRC), University of Canterbury, Christchurch, Canterbury, New Zealand

Abstract

The incidence of head impacts in rugby has been a growing concern for player safety. While rugby headgear shows potential to mitigate head impact intensity during laboratory simulations, evaluating its on-field effectiveness is challenging. Current rugby-specific laboratory testing methods may not represent on-field conditions. This study aimed to create a machine-learning model capable of matching head impacts recorded via wearable sensors to the nearest match in a pre-existing library of laboratory-simulated head impacts for further investigation. Separate random forest models were trained and optimised on a training dataset of laboratory head impact data to predict the impact location, impact surface angle, neck inclusion, and drop height of a given laboratory head impact. The models achieved hold-out test set accuracies of 0.996, 1.0, 0.998, and 0.96 for the impact location, neck inclusion, impact surface angle, and drop height respectively. When applied to a male and female youth rugby head impact dataset, most impacts were classified as being to the side or rear of the head, with very few at the front of the head. Nearly 80% were more similar to laboratory impacts that included the neck and an impact surface angled at 30 or 45°, with just under 20% aligning with impacts onto a flat impact surface, and most were classified as low drop height impacts (7.5–30cm). Further analysis of the time-series kinematics and spatial brain strain resulting from impact is required to align the laboratory head impact testing with the on-field conditions.

Introduction

Head impact exposure within sports has been linked to adverse mental and physical health outcomes, even without a clinically diagnosed traumatic brain injury (TBI) [1–3]. Concussion is especially prevalent in rugby [4–6], where players experience an average of 14–52 significant head impacts per game [7–9]. Rates of mTBI in rugby vary between cohort, country, and level of play, with studies reporting between 0.4 and 46 mTBIs per 1000 player hours [10–16], making it one of the most common injuries in the sport [4–6, 17]. In terms of protective equipment, rugby players show mixed opinions and beliefs, especially regarding the rugby-specific soft-shelled headgear. Studies investigating the rates of headgear usage report 2–27% of players wearing rugby headgear, with most studies reporting rates between 2–15% [18–23]. The studies reporting the highest rates of headgear usage were specific to American and Canadian rugby players [21, 23]. Studies that also investigated players' attitudes towards the protective performance of rugby headgear found that up to 62% of players believed headgear would protect from concussive injury [21–23], while the one study reporting coaches' attitudes found that 33% of them held the same belief [23]. Many of the studies report a disconnect between awareness of, and attitudes towards, concussion and player behaviour, with many players ignoring return-to-play guidelines, deliberately influencing concussion tests, or simply disregarding the long-term effects of concussive injuries [24–27].

This divide is not unfounded. The scientific community lacks a clear consensus on whether soft-shelled rugby headgear could reduce the risk of concussive injury. Nearly all laboratory investigations find that the presence of rugby headgear significantly reduces the peak linear and rotational head impact accelerations associated with concussive injury [28–33]. Unfortunately, the results on the field are unclear. Most studies of the efficacy of headgear in reducing concussive injury risk during gameplay have found no significant difference in injury rates between those wearing and those not wearing headgear [34–38], with only one study finding a lower concussive injury rate in the headgear-wearing cohort [39]. As discussed in a review article on soft-shelled rugby headgear [40], variability in the research design and concussion definition used within these articles complicates any direct comparison. The same review reported that many of these studies did not consider confounding variables, such as the psychological effects of headgear use and the specific risk-taking behaviour of those wearing headgear versus those without. A study of rugby players' tendency to engage in aggressive play found that those who believed headgear prevented concussion were, on average, 4 times more likely to play more aggressively than those who believed headgear did not prevent concussions [21].

Until recently, strict limits on the development of protective headgear were imposed by World Rugby [41]. The requirements for headgear design have been relaxed in World Rugby's Law 4 trial assessment [42], bringing forth a new generation of headgear designs such as the Npro and Gamebreaker soft-shelled headgear. Specifically, the limit on material density (previously ≤45kg/m³) and the prohibition of sandwich construction were lifted to promote innovations in headgear that may reduce head impact intensity in rugby. Despite this change, the standard methods used to simulate laboratory rugby head impacts remain unchanged. The most recent World Rugby standard requires headgear, fitted to a steel headform conforming to EN960 [43], to be dropped onto a steel impact surface from heights of 15–60cm, with no clear criteria for interpreting the resulting impact kinematics. Most laboratory studies aiming to recreate rugby-specific head impacts for assessing the impact attenuation of headgear, however, use a drop test method more closely aligned with the NOCSAE standard for evaluating American football helmets [28, 29, 31, 32, 44]. Although the NOCSAE standards cover a range of head impact testing methods, such as pneumatic ram and pendulum impacts, the drop testing method is the most commonly used for rugby headgear testing. The NOCSAE drop test standard requires a NOCSAE-specific headform to be dropped onto a modular elastomer programmer (MEP) pad at a range of impact velocities corresponding to free-fall drop heights of about 15–90cm [44]. However, many laboratory headgear studies use the Hybrid III (HIII) headform instead of the NOCSAE headform.

mTBIs are believed to arise from a rapid change in motion of the head, specifically a change in rotational motion, causing excess strain on, and subsequent damage to, the axons in the brain. Such damage is commonly related to kinematic measures of head motion such as the peak linear and rotational accelerations (PLA and PRA) and the peak rotational velocity (PRV), as these are easily and non-invasively measured with accelerometers and instrumented mouthguards. Studies reporting the differences in head impact kinematics across a range of drop test conditions have found seemingly minor changes to have a significant effect [45–48]. Increased head mass has been reported to decrease the PLA for the same impact velocity during drop testing [45], while headform shape was found to significantly affect the PRA and PRV during matched oblique drop tests, with the NOCSAE headform producing 20–30% lower values than the HIII headform [46]. Oeur and Hoshizaki found that the peak linear and rotational accelerations decreased as the compliance of the impact surface increased for matched drop test impacts [47]. A study by Stitt et al. found substantial differences in the peak accelerations, the ratios of the peak kinematics, and the kinematic time-series shape between variations of the drop test method [48]. Similar to Oeur and Hoshizaki, Stitt et al. found that increasing the impact surface compliance reduced the PLA and PRA of an impact and increased the duration of the linear acceleration peak for matched impacts. Introducing a HIII neckform was found to increase the duration of the peak rotational velocity and alter the shape of the rotational acceleration temporal profile, but had little effect on the peak acceleration and velocity values. Altering the angle of the impact surface was found to have minimal effect on the peak kinematic values or the shapes of the traces.

Unfortunately, there is limited information regarding the impact mitigation of rugby headgear across different impact conditions. There has, however, been a significant amount of research investigating the impact attenuation of ice hockey headgear using methods that may better reflect on-field conditions than standard drop tests. Clark and colleagues measured the kinematic and brain strain attenuation of ice hockey helmets for different impact events in ice hockey [49, 50], finding that, relative to unhelmeted conditions, the helmets' effect on impact duration, peak accelerations, and peak maximal principal strain diminished as the impact surface became more compliant. In both studies, the authors reported this to arise from the materials in the ice hockey helmets being stiffer than those of the impact surface. As a result, the helmet materials do not compress enough to absorb the impact energy, thereby minimizing impact attenuation. A study by de Grau and colleagues [51] also found that the reductions in peak kinematics and brain strain through the use of an ice hockey helmet were higher during low compliance impacts compared to high compliance impact conditions. A further study by Haid and colleagues performed drop tests onto surfaces with varying compliance, comparing the impact attenuation of ice hockey helmets [52]. As impact surface compliance increased, the difference between helmeted and unhelmeted impacts decreased until a fitted helmet made no measurable difference to the peak accelerations.

Despite this work, there has been no direct comparison of the head impact conditions and kinematics that exist during rugby gameplay and training to those simulated in the laboratory. A method for simulating rugby-specific head impacts in the laboratory that aims to match the resulting kinematics that exist on the field as closely as possible would benefit the understanding of head impact biomechanics, head and brain injury biomechanics, and the development of protective headgear. Therefore, this study aimed to create a machine-learning model capable of matching head impacts recorded via wearable sensors to the nearest match in a pre-existing library of laboratory-simulated head impacts for further investigation.

Materials and methods

Data collection

All laboratory impacts were carried out on a twin wire guided drop test rig using an HIII headform instrumented with four triaxial accelerometers (Analog Devices ADXL377, 20,000Hz, range: ±200g, sensitivity: 6.5 mV/g) arranged into a nine accelerometer package (NAP) [53] with three redundant sensing axes. This allowed linear and rotational accelerations, and rotational velocity, to be measured and calculated. Accelerometers were positioned at the center of mass of the headform, and then orthogonally in the x, y, and z directions separated by between 52 and 75mm. Three variations of the drop test method were included based on previous work by Stitt et al. and Draper et al. The first used an HIII head and a 1-inch MEP pad impact surface, with no neck involved [28, 48]. The second and third variations were taken from the same authors' study of rugby headgear [29]. Using the same HIII head and neck, and 1-inch MEP pad impact surface, drop tests were carried out with the impact surface angled at 0°, 30°, and 45° relative to the test rig base. Impacts onto the flat impact surface were carried out across four impact locations: forehead, front boss, side, and rear boss (labelled rear-rear boss), as shown in Fig 1. Impacts onto the 30° and 45° impact surfaces also included a fifth impact location labelled side-rear boss, shown in Fig 2. Impacts both with and without headgear were included in the dataset, resulting in 1806 individual impacts spread across all drop test conditions.

Fig 1. Laboratory impact locations onto a 0°MEP pad impact surface.

From top left to bottom right: Forehead, Front boss, Side, Rear-rear boss. Image from [29], used under (CCAL) CC BY 4.0, copied from the original.

https://doi.org/10.1371/journal.pone.0305986.g001

Fig 2. Laboratory impact locations onto a 30° and 45°MEP pad impact surface.

From top left to bottom: Forehead, Front boss, Side, Rear-rear boss, Side-rear boss. Image from [29], used under (CCAL) CC BY 4.0, copied from the original.

https://doi.org/10.1371/journal.pone.0305986.g002

Field head impact data were recorded from two club rugby union teams, one male and one female, both from Christchurch, New Zealand, over the 2022 and 2023 seasons of play for the males and over the 2022 season for the females. Player recruitment began in March 2022 and ended in April 2023. Ethics approval was obtained from the Human Ethics Committee of the University of Canterbury, reference: HEC 2021/26. Written informed consent was received from all players and their parents prior to the study. Players were followed across their club and school games and training. The age range of the players was 14–16 years old for the males and 13–17 years old for the females. Head impact kinematics were measured using the HitIQ nexus A9 mouthguards, previously validated under laboratory conditions by Stitt et al. [54]. The threshold for recording data was 8g. Retrospective video verification identified positive direct and indirect head acceleration events. The field head impact dataset comprised 440 male and 239 female video-verified, direct head impacts. A direct head impact was defined as a head acceleration event arising from a direct impact between an external body and a player's head. Field head impact locations were also approximated during the video verification process, with only impacts to the forehead, front boss, side, and rear regions of the head considered for this study to match the laboratory dataset.

Data processing and feature extraction

Laboratory accelerometer data were recorded as raw voltages by a LabVIEW system using a National Instruments cDAQ 9171 as the interface between the sensors and LabVIEW. Raw voltages were converted into accelerations using Python 3.8. All laboratory kinematic data were filtered using a Butterworth low-pass filter with a cutoff frequency of 300Hz. The order and cutoff frequency of the low-pass filter were chosen to match those of HitIQ's in-house post-processing of the mouthguard head acceleration data. A set of 50 single-value kinematic features was extracted from each head impact and used to develop the kinematic feature set for the laboratory and field datasets. The feature set included: peak resultant accelerations and velocities, the change in x, y, z accelerations and velocities, peak x, y, z accelerations and velocities, ratios of peak kinematics, injury metrics, ratios of injury metrics, and the durations of resultant kinematic peaks. The change in directional kinematics was calculated as the maximum value minus the minimum value, and the injury metrics included were the HIC and RIC [55]. The duration of peak kinematics was calculated at 30% of the peak value, as this was found to be the most reliable threshold when used on field data where the rotational kinematics did not always return to zero post-peak. These features were chosen as they are easily interpreted in terms of a physical mechanism. A summary of these features can be found in S1 Table.
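The filtering and peak-duration steps above can be sketched as follows. This is a minimal illustration rather than the authors' code: the 300Hz cutoff and the 30% threshold come from the text, but the filter order and the helper names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_300hz(signal, fs, order=4):
    """Zero-phase Butterworth low-pass filter. The 300Hz cutoff matches
    the paper; the filter order here is an assumption."""
    b, a = butter(order, 300.0 / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)

def peak_duration(t, resultant, frac=0.30):
    """Duration of a resultant kinematic peak, measured at 30% of the
    peak value: time between the first and last threshold crossings."""
    threshold = frac * np.max(resultant)
    above = np.flatnonzero(resultant >= threshold)
    return t[above[-1]] - t[above[0]]
```

Measuring the duration at a 30% threshold, rather than waiting for a return to zero, keeps the feature well defined for field traces whose rotational velocity settles at a non-zero baseline after the peak.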

To describe the conditions required to carry out an impact on the drop test rig, only four parameters need to be known: the impact location, the impact surface angle, whether or not a neck is included, and the drop height. The impact surface characteristics would also need to be known; however, as there was only one impact surface involved in the laboratory dataset, this analysis was not carried out. Based on a previous investigation of laboratory impact kinematics under different conditions [48], it was assumed that different features would best predict the different drop test parameters. Separate models were therefore built to predict each of the four parameters associated with drop tests.

Classifier selection

Laboratory data were first split into training and testing sets of 70% and 30% of the entire dataset, respectively, using the stratified train-test split function from the sklearn library. Stratification was based on the target class (e.g. the specific location for the impact locations). A k-nearest neighbours model, a logistic regression model, and a random forest model were assessed for predicting each target parameter. These classification algorithms were chosen due to their interpretability, ease of implementation in Python using the sklearn library, and use within the literature [56–58]. Each algorithm was assessed using the grid search function in sklearn, with the hyperparameters tuned at the same time. Within this search, the best feature selection method was chosen between the f-classification and the mutual information scores. Other feature selection methods, such as permutation feature importance, were not considered due to the presence of correlated features within the feature set. The performance of the algorithms was assessed using the average accuracy from 10-fold cross-validation of the training data. Other algorithms, such as support vector machines and gradient boosting trees, were excluded as they were much more computationally intensive to run, and preliminary testing showed little to no improvement over the algorithms selected for testing, which were also much faster learners. The hyperparameters tuned were the number of features (from 1 to 20); the number of neighbours for the KNN (from 5 to 15); the penalty for the logistic regression (l2 or none) and the inverse regularisation strength; and the number of estimators (from 50 to 200) and maximum depth (from 1 to 10) for the random forest model. All other hyperparameters were left at their defaults within the sklearn library. This process was carried out with separate classifiers for each drop test parameter.
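A minimal sketch of this per-parameter search, using synthetic stand-in data rather than the study's kinematic features. The pipeline layout and the reduced hyperparameter grid are assumptions; the components (stratified 70/30 split, feature selection by f-classification or mutual information score, and a 10-fold cross-validated grid search over a random forest) follow the text.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the 50-feature kinematic dataset.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Stratified 70/30 train-test split, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

pipe = Pipeline([("select", SelectKBest()),
                 ("rf", RandomForestClassifier(random_state=0))])

# Feature-selection score and hyperparameters tuned jointly; this grid
# is a small subset of the ranges reported in the text.
search = GridSearchCV(
    pipe,
    param_grid={"select__score_func": [f_classif, mutual_info_classif],
                "select__k": [5, 10, 20],
                "rf__n_estimators": [50, 100],
                "rf__max_depth": [3, 10]},
    cv=10, scoring="accuracy")
search.fit(X_tr, y_tr)

# The best configuration is then assessed once on the hold-out test set.
test_accuracy = search.score(X_te, y_te)
```

Running the feature selector inside the pipeline ensures features are re-selected within each cross-validation fold, avoiding leakage from the held-out fold into the selection step.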

Model assessment and field condition prediction

Each model built using the optimal hyperparameters was assessed on the hold-out test set to ensure the best cross-validation accuracy did not come from overfitting to the training data. Accuracy was used to evaluate the different models instead of receiver operating characteristic curves and the corresponding AUROC, as most of the classification problems were multiclass rather than binary. Finally, the four models were used together, with the prediction of each drop-test parameter combined for each impact in the hold-out test set to create the full set of impact parameters associated with each drop test impact. The accuracy of this total prediction was found by taking the percentage of fully correct predictions out of the total number of predictions made. The final model was then used to predict the drop-test parameters for all field impacts. The impact location predictions were compared to the video-approximated impact locations using a confusion-matrix-style analysis. This was mostly an early exploratory analysis to identify the most common impact parameters and the differences, if any, between the males and females. The distribution of each predicted condition within each drop test parameter was subsequently found for the field head impact dataset.
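The "fully correct" accuracy described above can be computed as follows; this is a sketch with hypothetical prediction arrays, not the study's code.

```python
import numpy as np

def fully_correct_accuracy(predictions, truths):
    """Fraction of impacts for which every drop-test parameter
    (location, neck status, surface angle, drop height) is predicted
    correctly; one wrong parameter makes the whole impact wrong."""
    n = len(next(iter(truths.values())))
    all_correct = np.ones(n, dtype=bool)
    for param, true_vals in truths.items():
        all_correct &= np.asarray(predictions[param]) == np.asarray(true_vals)
    return float(all_correct.mean())

# Hypothetical example: four impacts, one with a wrong impact location.
pred = {"location": ["side", "forehead", "rear", "side"],
        "neck":     [True, True, False, True],
        "angle":    [30, 0, 45, 30],
        "height":   [15, 30, 45, 60]}
true = {"location": ["side", "forehead", "rear", "front boss"],
        "neck":     [True, True, False, True],
        "angle":    [30, 0, 45, 30],
        "height":   [15, 30, 45, 60]}

fully_correct_accuracy(pred, true)  # → 0.75
```

This metric is stricter than the per-parameter accuracies: an impact with three of four parameters correct still counts as a wrong total prediction.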

Results

The random forest models gave the highest accuracy across most drop-test parameters, followed by the k-nearest neighbours models and then the logistic regression models. It should be noted that all models gave the same cross-validation accuracy when predicting the neck status; however, for consistency with the other parameters, the random forest model was also chosen here. The cross-validation and test set accuracies, along with the optimal hyperparameters found during the grid search, are shown for the random forest models in Table 1. When predicting the impact location, the random forest model had an average cross-validation accuracy of 0.998, followed by the KNN model with 0.992 and the logistic regression model with 0.983. All models resulted in the same average cross-validation accuracy of 0.998 for the neck status. The impact surface angle was predicted with an average cross-validation accuracy of 0.995 by the random forest model, followed by the KNN model at 0.994 and the logistic regression at 0.888. Finally, the drop height was predicted with an average accuracy of 0.993 by the random forest model, 0.991 by the KNN model, and 0.990 by the logistic regression model. The best feature selection method across all drop test parameters for the random forest model was the mutual information classification method. A detailed classification report showing the class-wise precision, recall, and F1-score can be found in S2–S5 Tables.

Table 1. Mean (SD) cross-validation accuracy, the test set accuracy, and the optimal hyperparameters used for each random forest model.

https://doi.org/10.1371/journal.pone.0305986.t001

The impact location required 10 features with 90 estimators and a maximum tree depth of 7 nodes. The features that made up the model were the x and y peak linear acceleration and rotational velocity, the relative directional peak linear and rotational accelerations and velocities, along with the vector direction of the peak kinematics (Table 2). The neck status model only required a single feature, 50 estimators, and a single decision node. The single feature required for prediction was the duration of the rotational velocity peak at 30% of the peak value. The impact surface angle model required 7 features, with 200 estimators, and a max tree depth of 10 nodes. The features involved were the directional peak rotational velocities and the relative directional linear and rotational peak kinematics. Drop height was best predicted with the resultant peak linear acceleration and rotational velocity, followed by the change in linear velocity, the HIC, and the RIC. This model required 6 features with 120 estimators and a max tree depth of 9 nodes.

When all four models were combined to predict the full set of drop test parameters for each impact in the hold-out test set, the test set accuracy was 0.983. The incorrect predictions are compared to the actual impact parameters in Table 3. In most cases, only one of the drop test parameters was incorrectly classified, the exception being a forehead impact onto the 30° MEP pad for which both the impact location and the impact surface angle were misclassified. The algorithm correctly predicted all but three of the impact location conditions, incorrectly classifying two forehead impacts as front boss impacts and one front boss impact as a forehead impact. Four of the impacts had the impact surface angle incorrectly classified. These misclassifications were between 0° and 30° or between 30° and 45°; no impacts onto the 0° MEP pad were classified as impacts onto the 45° MEP pad, or vice versa. There were three incorrect predictions by the drop height model: two were misclassified as the adjacent drop height, and a third 30cm impact was classified as a 15cm impact. There were no incorrect predictions of the neck status within the hold-out test set.

Table 3. Incorrect predictions of the entire set of drop test parameters for each impact in the hold out test set.

The incorrect prediction, and the correct counterpart, is highlighted in bold font.

https://doi.org/10.1371/journal.pone.0305986.t003

Field data predictions

The predicted impact location of each field head impact, compared to the approximated impact location from video verification, is shown in Fig 3. While the predicted impact location agreed with the video-approximated impact location in some cases, there were a large number of discrepancies. Specifically, male head impacts labelled as forehead impacts were commonly classified as front boss or side-rear boss impacts. Male head impacts labelled as front boss impacts were most commonly classified as an impact to the side or rear of the head. Similar results were seen in the female head impact location predictions. It was assumed that an impact to the front boss or the rear boss could easily have been misidentified as an impact to the side of the head, and vice versa, during video verification; discrepancies such as these, therefore, were not believed to be an issue. Male and female impacts to the side of the head were commonly predicted as such, although many were classified as side-rear boss impacts. A similar observation was made with impacts labelled as rear or rear boss impacts.

Fig 3. Predicted impact locations of male (right) and female (left) head impact data compared to the impact location identified during the video verification process.

https://doi.org/10.1371/journal.pone.0305986.g003

Figs 4 and 5 show the fraction of each predicted drop test parameter within the field data as predicted by the random forest models. The most common impact location was the side-rear boss for males, followed by the side, rear-rear boss, front boss, and forehead. For females, the most commonly predicted impact location was the side of the head, followed by the side-rear boss, rear-rear boss, front boss, and forehead. Forehead impacts were by far the least frequent, and minimal differences were seen between males and females. Both male and female field data were predicted to require inclusion of the neck in around 80% of cases, with the remaining impacts being more similar to those carried out with no neck involved (Fig 4). In both male and female groups, the most commonly predicted impact surface angle was 30°, followed by 0° and finally 45°, comprising around 45%, 35%, and 20% of each group respectively. Drop heights, again, showed a strong similarity between males and females, with around 33% of male and 40% of female head impacts predicted as 7.5cm drops, followed by 15cm, 30cm, 45cm, and 60cm. Only a few impacts were predicted as 22.5cm drops.

Fig 4. Fraction of each Impact location (right) and neck status (left) prediction classes within the field data as predicted by the random forest models.

The blue shows males, red shows females.

https://doi.org/10.1371/journal.pone.0305986.g004

Fig 5. Fraction of each impact surface angle (right) and drop height (left) prediction classes within the field data as predicted by the random forest models.

The blue shows males, red shows females.

https://doi.org/10.1371/journal.pone.0305986.g005

Discussion

Each random forest model performed exceedingly well during cross-validation and hold-out testing on the laboratory data. As assumed, each drop test parameter was predicted by distinctly different kinematic features, justifying the choice of individual models for each drop test parameter. The impact location prediction model used the x and y direction peak linear acceleration and rotational velocity, as well as the relative directional peak kinematics, both linear and rotational. The vector directions of the linear and rotational acceleration were also included; however, these features resulted in only marginal improvements in the model's accuracy. These kinematics are influenced by the direction of the impact, which appeared to be strongly indicative of the impact location. During evaluation on the hold-out test set, the impact location model incorrectly identified two forehead impacts as front boss impacts and one front boss impact as a forehead impact. These, however, were relatively minor misclassifications, as the front boss and forehead impact locations are adjacent. A more serious misclassification would be predicting a forehead impact as a rear boss impact; no such prediction error was observed. The impact surface angle classifier, while also using the peak and directional kinematics, favoured the rotational velocity and acceleration kinematics. This was interesting, as a previous investigation of the effect of impact surface angle on drop test head impact kinematics found almost no difference in the resultant peak rotational velocity between impacts onto a flat surface and one angled at 45° [48]. It should be noted that, of all the classifiers, this model required the highest maximum tree depth and number of estimators, suggesting there may be less distinction between impacts onto the flat surface and those onto an angled surface.

The model classifying whether a neckform was used in the drop test was the most accurate classifier created in this study. This may have been due to the binary classification problem or due to a significant effect of the neck on the drop test kinematics. The model only required one feature and a single decision node to achieve an accuracy of 0.998 during cross-validation and a hold-out test set accuracy of 1.0. This feature was the duration (in ms) of the resultant rotational velocity peak. Again, this makes sense as the same comparison of drop test parameters on the resulting head impact kinematics found the main effect of including a neck was an increase in the duration of the rotational velocity peak while leaving the other kinematics relatively unchanged, including peak resultant accelerations and velocities [48].

While the drop height could theoretically be estimated using only the change in resultant linear velocity, the predictive model incorporated additional, less intuitive kinematics that likely increased prediction accuracy relative to the linear velocity alone. The model required six features centred around the peak resultant kinematics, the change in linear velocity, and the injury criteria. Similar to the impact location classifier, the drop height classifier incorrectly identified the drop heights of three hold-out impacts as 15, 15, and 60cm; the true drop heights were close to those predicted, at 30, 7.5, and 45cm respectively. It is important to note that drop height is an ordinal variable, so both regression and classification could be used. Prediction performance might improve with a regression algorithm, or with an ordinal logistic regression suited to ordinal targets; neither was investigated, however, given the high accuracy of the classification approach.

Discrepancies between the predicted and video-verified impact locations were observed when examining the predicted labels of the field head impact data. Such differences could easily be due to mistakes during video verification. Many recordings of the impacts were partially obstructed by other players; thus, a confident description of the impact location may not have been attainable for every field head impact. Despite this, many of the predicted impact locations matched the video verification. Common discrepancies included predicting an impact labelled as a rear impact via video verification as a rear-rear boss, side-rear boss, or side impact, and vice versa. In both male and female datasets, video-labelled front boss impacts were most commonly predicted to be side-rear boss, side, or rear-rear boss impacts. There may be a difference in injury potential between impacts to the front, rear, and side of the head, which is important for interpreting the severity of the impact location model’s misclassifications. For example, misclassifying a forehead impact as a front boss impact would not be as detrimental as misclassifying it as a side-rear boss impact. Additionally, if these models are to be used to investigate the impact mitigation of rugby headgear, it is important to note that headgear impact mitigation differs significantly between impacts to the front, side, and rear of the head [28, 29]. This should also be considered when evaluating the severity of the model misclassifications. The same reasoning applies to the drop height, where incorrectly classifying a 15cm impact as a 30cm impact would not be as serious as classifying the same impact as a 45 or 60cm impact.

Both male and female head impact data sets shared similar proportions of each predicted condition within each of the drop test parameters. For the impact location, the side and rear of the head were the most commonly predicted locations, while the front boss and forehead locations were predicted far less often. This could be due to the way rugby players are taught to tackle, or the specific biomechanics of tackling, where players lead with the shoulder, leaving the side of the head open to impact. Most (around 80%) of the field impacts were predicted to be more similar to laboratory impacts with the neckform than without. This likely means that, in general, rugby head impacts exhibit longer-duration rotational velocity peaks than those in the laboratory. This could arise from the relatively unconstrained nature of the human head and neck during gameplay compared to the HIII head and neck in the laboratory. During and following a head impact, a player’s head, neck, and body are free to move and rotate. This may produce rotational velocity peaks of longer duration than in the laboratory, where the head and neck are fixed to a drop frame that is in turn guided by tensioned steel wires. This setup lacks the freedom to move and rotate following an impact that likely exists in a real-life scenario.

The most commonly predicted impact surface angle for the male and female field head impact data was 30°, followed by the 0° and then the 45° impact surface angles. The proportions of each within the male and female datasets were nearly identical. Physically, this implies that most of the recorded head impacts were glancing blows to the head, rather than impacts where the line of action of the force was near parallel to the direction of head travel. This is consistent with the video verification process, where most impacts were seen to be glancing blows to the head from either another player’s body or the ground, and rarely the direct head-on collisions that would be simulated in the laboratory by an impact onto a flat surface.

The most commonly predicted drop height was 7.5cm for both males and females, followed by the 15cm and 30cm drop heights. This indicates that most of the impacts experienced by the study cohort occurred at lower impact velocities and, by extension, severities. While few of these impacts would have a change in linear velocity exactly matching that of the predicted drop height, these results do indicate the relative proportions of drop heights that reflect on-field conditions. For recreating specific impacts in the laboratory, it may be more useful to compute the change in linear velocity directly from the mouthguard kinematic data and calculate the drop height associated with this velocity change.
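The drop height equivalent to a measured velocity change follows directly from free-fall kinematics, h = Δv²/2g. A minimal helper, ignoring air resistance and rebound, might look like:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_height_cm(delta_v_ms: float) -> float:
    """Equivalent free-fall drop height (cm) for a velocity change in m/s."""
    return (delta_v_ms ** 2) / (2.0 * G) * 100.0

def impact_velocity_ms(height_cm: float) -> float:
    """Impact velocity (m/s) reached from a given drop height in cm."""
    return math.sqrt(2.0 * G * height_cm / 100.0)
```

For example, a 30cm drop corresponds to an impact velocity of roughly 2.4 m/s, so a mouthguard-measured velocity change of that magnitude would map back to a 30cm drop.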

The classification models developed in this study provide a solid basis for exploring the relationship of laboratory-simulated head impacts to those measured during youth rugby union. There are, however, some significant limitations. Firstly, this analysis does not directly compare the time series traces. Instead, the models only predict the closest matching impact within our library of head impact simulations based on single-value kinematics. Such a comparison would be necessary to fully understand the relationship between rugby head impact conditions and the laboratory simulations of said impacts. Additionally, these classifiers do not take into account any brain strain data which may be one of the most important features to preserve between the field and the laboratory. Secondly, the predictive classifiers are only able to make predictions based on what has been generated for the library of laboratory head impacts. Many of the field impacts may be better reflected by other impact surfaces or head sizes or even by a different method of testing such as a pneumatic ram or pendulum impacts. Finally, the field dataset should be extended. This would allow the field head impact dataset to capture a broader range of on-field head impact conditions, allowing the predictive models developed here to be assessed under a greater range of head impact conditions.
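One hypothetical way to perform the direct time-series comparison identified above as missing would be a simple distance metric between resampled, peak-aligned traces. This is an illustrative sketch under those assumptions, not the method used in this study:

```python
import numpy as np

def trace_rmse(field: np.ndarray, lab: np.ndarray, n_points: int = 200) -> float:
    """RMSE between two kinematic traces after resampling to a common
    length and shifting each so its peak sits at the same sample index."""
    def prep(trace):
        # Resample to a common number of points via linear interpolation
        resampled = np.interp(np.linspace(0, 1, n_points),
                              np.linspace(0, 1, len(trace)), trace)
        # Circularly shift so the peak lands at the trace midpoint
        return np.roll(resampled, n_points // 2 - int(np.argmax(resampled)))
    f, l = prep(field), prep(lab)
    return float(np.sqrt(np.mean((f - l) ** 2)))
```

A lower score would indicate a laboratory trace whose full time history, not just its single-value kinematics, resembles the field impact; a more complete treatment might also compare brain strain time histories, as noted above.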

Conclusion

This study provides a method of matching impacts recorded during youth rugby to those in a pre-existing library of laboratory head impacts. The approach can be applied to any sport and any head impact simulation method of interest, and may aid in understanding the role of protective equipment during specific head impacts. However, the library of drop test simulations used in this study did not include a variety of impact surfaces, whereas rugby head impacts likely span a range of impact surface stiffnesses. Extending the library of laboratory head impact conditions would therefore be necessary to correctly predict, and subsequently recreate, a given field head impact.

Supporting information

S1 Table. Features, along with their explanation, extracted from each impact.

https://doi.org/10.1371/journal.pone.0305986.s001

(JPG)

S2 Table. Classification report for the random forest impact location classifier.

https://doi.org/10.1371/journal.pone.0305986.s002

(JPG)

S3 Table. Classification report for the random forest neck status classifier.

https://doi.org/10.1371/journal.pone.0305986.s003

(JPG)

S4 Table. Classification report for the random forest impact surface angle classifier.

https://doi.org/10.1371/journal.pone.0305986.s004

(JPG)

S5 Table. Classification report for the random forest drop height classifier.

https://doi.org/10.1371/journal.pone.0305986.s005

(JPG)

Acknowledgments

We thank Nicholas Ward from the University of Canterbury for his aid and helpful discussions in understanding data science techniques and methods.

References

  1. McAllister TW, Ford JC, Flashman LA, Maerlender A, Greenwald RM, Beckwith JG, et al. Effect of head impacts on diffusivity measures in a cohort of collegiate contact sport athletes. Neurology. 2014;82(1):63–69. pmid:24336143
  2. Talavage TM, Nauman EA, Breedlove EL, Dye AE, Morigaki KE, Leverenz LJ, et al. Functionally-detected cognitive impairment in high school football players without clinically-diagnosed concussion. Journal of Neurotrauma. 2014;. pmid:20883154
  3. Breedlove EL, Robinson M, Talavage TM, Morigaki KE, Yoruk U, O’Keefe K, et al. Biomechanical correlates of symptomatic and asymptomatic neurophysiological impairment in high school football. Journal of Biomechanics. 2012;45(7):1265–1272. pmid:22381736
  4. Gardner AJ, Iverson GL, Williams WH, Baker S, Stanwell P. A systematic review and meta-analysis of concussion in rugby union. Sports Med. 2014;44(12):1717–31. pmid:25138311
  5. Hendricks S, Jordaan E, Lambert M. Attitude and behaviour of junior rugby union players towards tackling during training and match play. Safety Science. 2012;50(2):266–284.
  6. Marshall SW, Spencer RJ. Concussion in Rugby: The Hidden Epidemic. J Athl Train. 2001;36(3):334–338. pmid:12937506
  7. King D, Hume P, Gissane C, Cummins C, Clark T. Measurement of Head Impacts in a Senior Amateur Rugby League Team with an Instrumented Patch: Exploratory Analysis. ARC Journal of Research in Sports Medicine. 2017;2(1):9–20.
  8. King DA, Hume PA, Gissane C, Kieser DC, Clark TN. Head impact exposure from match participation in women’s rugby league over one season of domestic competition. J Sci Med Sport. 2018;21(2):139–146. pmid:29122475
  9. King D, Hume PA, Brughelli M, Gissane C. Instrumented mouthguard acceleration analyses for head impacts in amateur rugby union players over a season of matches. Am J Sports Med. 2015;43(3):614–24. pmid:25535096
  10. Fuller CW, Taylor A, Raftery M. 2016 Rio Olympics: an epidemiological study of the men’s and women’s Rugby-7s tournaments. Br J Sports Med. 2017;51(17):1272–1278. pmid:28137789
  11. Hecimovich MD, King D. Prevalence of head injury and medically diagnosed concussion in junior-level community-based Australian Rules Football. J Paediatr Child Health. 2017;53(3):246–251. pmid:27862527
  12. King D, Hume P, Cummins C, Pearce A, Clark T, Foskett A, et al. Match and Training Injuries in Women’s Rugby Union: A Systematic Review of Published Studies. Sports Med. 2019;49(10):1559–1574. pmid:31292854
  13. Ma R, Lopez JV, Weinstein MG, Chen JL, Black CM, Gupta AT, et al. Injury Profile of American Women’s Rugby-7s. Med Sci Sports Exerc. 2016;48(10):1957–66. pmid:27232243
  14. Moore IS, Ranson C, Mathema P. Injury Risk in International Rugby Union: Three-Year Injury Surveillance of the Welsh National Team. Orthop J Sports Med. 2015;3(7):2325967115596194. pmid:26674339
  15. Schick DM, Molloy MG, Wiley JP. Injuries during the 2006 Women’s Rugby World Cup. Br J Sports Med. 2008;42(6):447–51. pmid:18424486
  16. Silver D, Brown N, Gissane C. Reported concussion incidence in youth community Rugby Union and parental assessment of post head injury cognitive recovery using the King-Devick test. J Neurol Sci. 2018;388:40–46. pmid:29627029
  17. King D, Hume PA, Hardaker N, Cummins C, Clark T, Pearce AJ, et al. Female rugby union injuries in New Zealand: A review of five years (2013-2017) of Accident Compensation Corporation moderate to severe claims and costs. J Sci Med Sport. 2019;22(5):532–537. pmid:30477931
  18. Marshall SW, Waller AE, Loomis DP, Feehan M, Chalmers DJ, Bird YN, et al. Use of protective equipment in a cohort of rugby players. Medicine & Science in Sports & Exercise. 2001;33(12):2131–2138.
  19. Finch CF, McIntosh AS, McCrory P, Zazryn T. A pilot study of the attitudes of Australian Rules footballers towards protective headgear. Journal of Science and Medicine in Sport. 2003;6(4):505–511. pmid:14723399
  20. Braham R, Finch CF, McIntosh A, McCrory P. Community football players’ attitudes towards protective equipment—a pre-season measure. British Journal of Sports Medicine. 2004;38(4):426–430. pmid:15273177
  21. Menger R, Menger A, Nanda A. Rugby headgear and concussion prevention: misconceptions could increase aggressive play. Neurosurgical Focus. 2016;40(4):E12. pmid:27032915
  22. Barnes A, Rumbold JL, Olusoga P. Attitudes towards protective headgear in UK rugby union players. BMJ Open Sport Exerc Med. 2017;3(1):e000255. pmid:29081983
  23. Pettersen JA. Does rugby headgear prevent concussion? Attitudes of Canadian players and coaches. Br J Sports Med. 2002;36(1):19–22. pmid:11867487
  24. Kearney PE, See J. Misunderstandings of concussion within a youth rugby population. Journal of Science and Medicine in Sport. 2017;20(11):981–985. pmid:28476439
  25. O’Connell E, Molloy M. Concussion in rugby: knowledge and attitudes of players. Irish Journal of Medical Science (1971-). 2016;185:521–528. pmid:26026952
  26. van Vuuren H, Welman K, Kraak W. Concussion knowledge and attitudes amongst community club rugby stakeholders. International Journal of Sports Science & Coaching. 2020;15(3):297–305.
  27. Delahunty SE, Delahunt E, Condon B, Toomey D, Blake C. Prevalence of and attitudes about concussion in Irish schools’ rugby union players. Journal of School Health. 2015;85(1):17–26. pmid:25440449
  28. Draper N, Kabaliuk N, Stitt D, Alexander K. Potential of Soft-Shelled Rugby Headgear to Reduce Linear Impact Accelerations. Journal of Healthcare Engineering. 2021;2021:5567625. pmid:33981403
  29. Stitt D, Kabaliuk N, Alexander K, Draper N. Potential of Soft-Shell Rugby Headgear to Mitigate Linear and Rotational Peak Accelerations. Annals of Biomedical Engineering. 2022;. pmid:35059915
  30. Ganly M, McMahon JM. New generation of headgear for rugby: impact reduction of linear and rotational forces by a viscoelastic material-based rugby head guard. BMJ Open Sport Exerc Med. 2018;4(1):e000464. pmid:30622730
  31. Knouse CL, Gould TE, Caswell SV, Deivert RG. Efficacy of Rugby Headgear in Attenuating Repetitive Linear Impact Forces. J Athl Train. 2003;38(4):330–335. pmid:14737216
  32. McIntosh A, McCrory P, Finch CF. Performance enhanced headgear: a scientific approach to the development of protective headgear. Br J Sports Med. 2004;38(1):46–9. pmid:14751945
  33. Frizzell ERA, Arnold GP, Wang W, Abboud RJ, Drew TS. Comparison of branded rugby headguards on their effectiveness in reducing impact on the head. BMJ Open Sport Exerc Med. 2018;4(1):e000361. pmid:30498572
  34. Hollis SJ, Stevenson MR, McIntosh AS, Shores EA, Collins MW, Taylor CB. Incidence, risk, and protective factors of mild traumatic brain injury in a cohort of Australian nonprofessional male rugby players. The American Journal of Sports Medicine. 2009;37(12):2328–2333. pmid:19789332
  35. Marshall SW, Loomis DP, Waller AE, Chalmers DJ, Bird YN, Quarrie KL, et al. Evaluation of protective equipment for prevention of injuries in rugby union. International Journal of Epidemiology. 2005;34(1):113–118. pmid:15561749
  36. McIntosh AS, McCrory P. Effectiveness of headgear in a pilot study of under 15 rugby union football. British Journal of Sports Medicine. 2001;35(3):167–169. pmid:11375874
  37. McIntosh A, McCrory P, Finch C, Best J, Chalmers D, Wolfe R. Does padded headgear prevent head injury in rugby union football? Medicine & Science in Sports & Exercise. 2009;41(2):306. pmid:19127196
  38. Stokes KA, Cross M, Williams S, McKay C, Hagel BE, West SW, et al. Padded Headgear does not Reduce the Incidence of Match Concussions in Professional Men’s Rugby Union: A Case-control Study of 417 Cases. International Journal of Sports Medicine. 2021;. pmid:33607666
  39. Kemp SP, Hudson Z, Brooks JH, Fuller CW. The epidemiology of head injuries in English professional rugby union. Clinical Journal of Sport Medicine. 2008;18(3):227–234. pmid:18469563
  40. Henley S, Andrews K, Kabaliuk N, Draper N. Soft-shell headgear in rugby union: a systematic review of published studies. Sport Sciences for Health. 2023; p. 1–18.
  41. Available from: https://playerwelfare.worldrugby.org/headgear.
  42. Available from: https://playerwelfare.worldrugby.org/content/getfile.php?h=3a2d78ccb4e008771c7c5c75dd31e86d&p=pdfs/reg-22/Law-4-Headgear-Trial-Explanation_EN.pdf.
  43. Headforms for use in the testing of protective helmets; 2006.
  44. NOCSAE; 2020. Available from: https://nocsae.org/standard/standard-linear-impactor-test-method-and-equipment-used-in-evaluating-the-performance-characteristics-of-protective-headgear-and-faceguards-2/.
  45. Gimbel GM, Hoshizaki TB. Compressive properties of helmet materials subjected to dynamic impact loading of various energies. European Journal of Sport Science. 2008;8(6):341–349.
  46. Bland ML, McNally C, Rowson S. Headform and neck effects on dynamic response in bicycle helmet oblique impact testing. In: Proceedings of the IRCOBI Conference. Athens, Greece; 2018. p. 413–423.
  47. Oeur RA, Hoshizaki TB. The effect of impact compliance, velocity, and location in predicting brain trauma for falls in sport. In: Proceedings of the IRCOBI 2016 Conference, International Research Council on Biomechanics of Injury, Malaga, Spain; 2016. p. 14–16.
  48. Stitt D, Kabaliuk N, Alexander K, Draper N. Drop test kinematics using varied impact surfaces and head/neck configurations for rugby headgear testing. Annals of Biomedical Engineering. 2022; p. 1–15. pmid:36002780
  49. Clark JM, Post A, Hoshizaki TB, Gilchrist MD. Protective capacity of ice hockey helmets against different impact events. Annals of Biomedical Engineering. 2016;44:3693–3704. pmid:27384941
  50. Clark JM, Taylor K, Post A, Hoshizaki TB, Gilchrist MD. Comparison of ice hockey goaltender helmets for concussion type impacts. Annals of Biomedical Engineering. 2018;46:986–1000. pmid:29600424
  51. de Grau S, Post A, Meehan A, Champoux L, Hoshizaki TB, Gilchrist MD. Protective capacity of ice hockey helmets at different levels of striking compliance. Sports Engineering. 2020;23:1–10.
  52. Haid D, Duncan O, Hart J, Foster L. Free-fall drop test with interchangeable surfaces to recreate concussive ice hockey head impacts. Sports Engineering. 2023;26(1):1–11.
  53. Padgaonkar AJ, Krieger KW, King AI. Measurement of Angular Acceleration of a Rigid Body Using Linear Accelerometers. Journal of Applied Mechanics. 1975;42(3):552–556.
  54. Stitt D, Draper N, Alexander K, Kabaliuk N. Laboratory Validation of Instrumented Mouthguard for Use in Sport. Sensors. 2021;21(18):6028. pmid:34577235
  55. Kimpara H, Iwamoto M. Mild traumatic brain injury predictors based on angular accelerations during impacts. Ann Biomed Eng. 2012;40(1):114–26. pmid:21994065
  56. Greenwald RM, Gwin JT, Chu JJ, Crisco JJ. Head impact severity measures for evaluating mild traumatic brain injury risk exposure. Neurosurgery. 2008;62(4):789–98; discussion 798. pmid:18496184
  57. Hernandez F, Wu LC, Yip MC, Laksari K, Hoffman AR, Lopez JR, et al. Six degree-of-freedom measurements of human mild traumatic brain injury. Annals of Biomedical Engineering. 2015;43(8):1918–1934. pmid:25533767
  58. Cai Y, Wu S, Zhao W, Li Z, Wu Z, Ji S. Concussion classification via deep learning using whole-brain white matter fiber strains. PLoS ONE. 2018;13(5):e0197992. pmid:29795640