Abstract
Background
Velocity-Based Training (VBT) is an emerging method in resistance training for objectively prescribing and monitoring training intensity and neuromuscular function. Given its growing popularity, assessing the validity and reliability of VBT devices is critical for strength and conditioning coaches.
Objective
The primary purpose of this review was twofold: (1) to identify and address methodological gaps in current assessments of VBT device validity and reliability, and (2) to propose and apply a novel, multi-layered, criterion-based framework—developed in collaboration with statisticians and domain experts—for evaluating these devices.
Methods
A systematic search was conducted in PubMed, Scopus, and SPORTDiscus following PRISMA guidelines, focusing on original research studies published before February 2024 that assessed VBT device validity or reliability. Out of 568 studies identified, 75 met the inclusion criteria.
Results
Among the included studies, 66 investigated device validity and 56 examined reliability, with some studies addressing both aspects. Notably, only 5 of the 66 validity studies met all of the proposed criteria, while just 16 of the 56 reliability investigations satisfied the required statistical thresholds defined by our framework. These findings highlight significant methodological variability and underscore the need for more standardized evaluation practices.
Conclusions
This review systematically evaluated the validity and reliability of various VBT devices and introduced a robust, multi-layered framework for their assessment. By integrating statistician-led and domain expert-led criteria, the framework offers a standardized approach that enhances the precision of device evaluation. Promising tools identified include the GymAware LPT, Perch Motion Capture Single Camera System, Flex optical device, and VmaxPro. Future research should build upon and refine this methodology to further standardize study designs, improve data reporting, and ultimately support more informed decision-making in sports technology and training practice.
Citation: Wannouch YJ, Leahey SR, Ramírez-Campillo R, Banyard HG, Dorrell HS, Stern S, et al. (2025) A systematic review using a multi-layered criteria framework for assessing the validity and reliability of velocity monitoring devices in resistance training. PLoS One 20(9): e0324606. https://doi.org/10.1371/journal.pone.0324606
Editor: Daniel J. Glassbrook, Aston University, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
Received: February 4, 2025; Accepted: April 28, 2025; Published: September 8, 2025
Copyright: © 2025 Wannouch et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Velocity monitoring devices
Technological advancements over the past decade have fueled the surge in Velocity-Based Training (VBT), making sophisticated velocity-monitoring tools accessible to Strength and Conditioning (S&C) coaches across various settings. The efficacy of VBT is contingent upon the validity and reliability of velocity-monitoring devices [1,2]. Validity in this context refers to the device’s ability to accurately measure what it is intended to measure, often benchmarked against a “gold-standard” criterion from existing literature [1]; in the case of barbell velocity for VBT, the gold standard is three-dimensional motion capture. Reliability denotes the device’s ability to produce consistent results over repeated measures [1]. Researchers previously assessing the validity or reliability of technological devices have used different statistical approaches to determine acceptable validity and reliability, such as a high correlation (r > 0.70), a low coefficient of variation (CV < 10%), and a small effect size (ES < 0.60) for validity [1,3,4], and a high intra-class correlation (ICC ≥ 0.90), a low CV (< 10%), and a standardized mean bias < 0.60 for reliability [4,5]. Both intra- and inter-device reliability are crucial for meaningful progress tracking, particularly when the same device is used consistently [1,6]. Furthermore, it is essential to differentiate between biological variations—like an athlete’s physical condition and readiness—and the technological inconsistencies of the device [4]. The current landscape of sport science literature reveals a lack of evidence-based standardized measures for assessing these parameters [7–15].
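For illustration only, the commonly cited thresholds above can be expressed as a simple rule set. This is a minimal sketch written by the editors, not an implementation from any cited study; the function names are our own, and the thresholds are exactly those quoted from [1,3–5].

```python
# Illustrative sketch: the traditional statistical thresholds for validity
# and reliability of a velocity-monitoring device, expressed as rules.
# Thresholds are those quoted in the text [1,3-5]; function names are ours.

def meets_traditional_validity(r: float, cv_pct: float, effect_size: float) -> bool:
    """High correlation, low coefficient of variation, small effect size."""
    return r > 0.70 and cv_pct < 10.0 and abs(effect_size) < 0.60

def meets_traditional_reliability(icc: float, cv_pct: float, std_mean_bias: float) -> bool:
    """High intra-class correlation, low CV, small standardized mean bias."""
    return icc >= 0.90 and cv_pct < 10.0 and abs(std_mean_bias) < 0.60
```

As argued in the following sections, satisfying such fixed cut-offs in isolation does not guarantee practically meaningful validity or reliability.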
Philosophical and empirical considerations of validity & reliability in sport science
While statistical validity and reliability serve as essential tools for empirically addressing epistemological questions of knowledge, the operationalization of these constructs has historically been somewhat arbitrary [16–20]. This lack of standardization is not unique to sport science; for example, the once-coveted “P-value” has come under scrutiny, leading some academic journals to reject it altogether due to its alleged misuse and the growing awareness of its limitations in accurately reflecting the strength of evidence in scientific studies [21–23]. Others have opted to use P-values alongside other statistical measures such as effect sizes with confidence intervals to enhance the certainty of their inferences [24–27]. Similarly, theories of validity and reliability are not immune to critical examination [28–30]. In the context of this systematic review, validity should be benchmarked against a recognized ‘gold standard,’ such as three-dimensional motion capture for velocity-based metrics, providing a semblance of objectivity [1]. However, subjectivity arises not in the gold standards, which serve as benchmarks for objective truth, but in the methods used to assess how closely these standards are approximated [28]. The thresholds for determining validity and reliability have varied over time, often based on arbitrary criteria [1,15,30,31]. Thus, traditionally used approaches to the assessment of validity and reliability, specifically in this context of technological devices, often lack a comprehensive framework that balances statistical rigor with domain expertise for improved context. Additionally, there is a widespread practice of employing correlation coefficients as a primary tool for assessing validity [1,3,4]. While correlations can indicate the presence of an association between variables, or, in this context, between different measurement devices, this statistical relationship, when used alone, offers a superficial understanding at best [32].
This poses a fundamental limitation, as a high correlation between devices may suggest a strong association, yet it provides no critical information on accuracy and precision. For instance, two devices could consistently yield measurements that are highly correlated but systematically biased, wherein one device consistently overestimates or underestimates values compared to a gold standard [33]. Similarly, proportional bias, where the discrepancy between measurements varies across the range of values, remains undetected when correlation measures are used alone; thus, it is advisable to employ a direct measure of error for a more accurate assessment. The sole use of effect sizes to assess validity is also inherently flawed. Effect sizes can indicate how substantial the differences are between groups or conditions, providing a quantitative measure of the impact of an intervention or, in this context, the comparative performance of different devices [34]. However, neither of these approaches directly addresses the question of whether a device accurately measures what it intends to measure. Furthermore, the use of any criterion device not considered a gold standard in the assessment of validity simply investigates the level of agreement or association between those devices, and not true validity.
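The point that correlation says nothing about systematic bias can be demonstrated with a few lines of code. The data below are hypothetical: a "device" that reads exactly 0.10 m/s above the gold standard correlates perfectly with it, yet every reading is wrong by the same amount.

```python
# Sketch with hypothetical data: two measurement series that correlate
# perfectly yet disagree systematically -- correlation alone misses the bias.
gold   = [0.30, 0.45, 0.60, 0.75, 0.90]      # gold-standard mean velocities (m/s)
device = [v + 0.10 for v in gold]            # device overestimates by a fixed 0.10 m/s

n = len(gold)
mx, my = sum(gold) / n, sum(device) / n
cov = sum((x - mx) * (y - my) for x, y in zip(gold, device))
sx = sum((x - mx) ** 2 for x in gold) ** 0.5
sy = sum((y - my) ** 2 for y in device) ** 0.5
r = cov / (sx * sy)                          # Pearson correlation coefficient

mean_bias = sum(y - x for x, y in zip(gold, device)) / n

print(round(r, 3), round(mean_bias, 3))      # r = 1.0 despite a 0.10 m/s systematic bias
```

A Bland-Altman analysis, or any direct measure of error, exposes this bias immediately; the correlation coefficient cannot.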
It seems that in the landscape of sport science research, a methodological shortcut is too often taken: adopting validity or reliability criteria from previous studies that have assessed tests or devices, without critical evaluation of their contextual appropriateness. This approach does not consider whether those specific criteria effectively increase practitioner confidence in making practical decisions. Essentially, the simple replication of broad statistical benchmarks for validity and reliability does not provide the necessary assurance that these tools or tests will perform as expected in new practical scenarios or environments. This methodological oversight highlights the need for a more thoughtful examination of how to determine and apply criteria for assessing the validity and reliability of research tools, ensuring that these measures are genuinely informative and applicable to the decisions practitioners must make. Kyprianou, Lolli [35] recently highlighted the importance of consulting domain experts in validity assessments to define practical equivalence margins when comparing devices or tests to gold standards. The authors of that study proposed this approach as a novel method for validity assessment. The authors of the present review concur with this perspective and act on these suggestions, recognizing that strictly adhering to standardized statistical thresholds can pose a preventable limitation in practical assessments aimed at achieving an acceptable level of certainty [35]. To the authors’ knowledge, this review proposes the first framework that uses multi-layered criteria and domain expert consultation to assess validity and reliability within this specific context.
While there are existing frameworks that evaluate the methodological quality of studies [36], this proposed framework is unique in its specific focus on assessing validity and reliability at the methodological level (i.e., the use of a gold standard criterion or differentiation between technological and biological reliability), the statistical level (i.e., the statistician-recommended testing measures), and the domain-specific interpretation of said measures (i.e., the domain expert input on the relative statistical thresholds required for increased certainty). Due to its comprehensive coverage of multiple dimensions, the authors contend that the proposed framework enhances certainty (see Fig 1) compared to previously established approaches—such as those utilizing arbitrary statistical thresholds [1,3–5]—for assessing validity and reliability in this context. While the concept of certainty can be quantified using various objective measures (e.g., percentages, degrees, or levels), it ultimately exists on a continuum between absolute certainty and complete uncertainty. Although this framework does not claim to provide ultimate certainty, it represents an improvement over what is currently applied. Fig 1 helps visually demonstrate how different degrees of certainty might be situated along this continuum. Ultimately, although we do not assign an exact numerical increase in certainty before or after applying this framework, its enhanced methodological rigor offers a more robust assessment of validity and reliability, logically enabling practitioners to develop a greater sense of certainty in the evidence they apply.
The primary purpose of this review was twofold: first, to identify and address methodological gaps within the assessment of validity and reliability of velocity monitoring devices used in velocity-based training; second, to propose and transparently apply a multi-layered, criterion-based framework—developed in collaboration with statisticians and domain experts—for assessing RT variables measured via VBT technological devices. By systematically evaluating the latest research using this comprehensive framework, we aim to assist S&C coaches in identifying the most valid and reliable devices for their specific needs, thereby enabling more accurate and effective use of VBT in both practice and research.
Methods
Search strategy
A search strategy was implemented following PRISMA guidelines for systematic reviews [37]. The academic databases PubMed, Scopus, and SPORTDiscus were systematically searched in February 2024 to identify peer-reviewed original research studies on the validity and reliability of technological devices used to quantify velocity, displacement, and additional RT variables. These additional variables include metrics such as time spent at isokinetic velocity, time to reach isokinetic velocity, total work, exercise recognition, repetition count, 1RM prediction, and full waveform velocity, which are listed in Table S6 in S1 File. Furthermore, the original search was limited to studies published in English. To verify that this language restriction did not introduce bias, we applied the same search criteria to non-English studies within the same date range across all databases searched. This approach identified eight non-English studies (six in Chinese and two in Spanish) whose titles and abstracts were available in English; these were subsequently evaluated for eligibility. No non-English studies met the inclusion criteria for the review, thereby ensuring that the language restriction did not bias the review’s results. The search was guided by the PICO strategy [38], and utilized pre-determined search terms, keywords, and Boolean operators (AND/OR). The search results were extracted and imported into a reference manager (EndNote 20.4.1, Clarivate Analytics, Philadelphia, PA, USA) and analyzed for relevance to the scope of this systematic review. The specific search strategies and predetermined search terms for each database are presented in the electronic supplementary material (Table S1 in S1 File).
Searching other resources
In addition to the electronic searches, the reference lists of the included full-text articles were screened, and publications that met the inclusion criteria were included in the review.
Study selection & data extraction
All articles were screened using pre-determined eligibility criteria, which included the requirement for the studies to be original research investigations, published prior to February 2024, and focused on the validity or reliability of VBT devices. The screening process was conducted independently by two reviewers (YW, RC) to minimize any potential biases, and any conflicts were resolved through discussion or by consulting a third reviewer (SL). The data extraction process was conducted by YW and SL using a systematic review software package, (Covidence systematic review software, Veritas Health Innovation, Melbourne, Australia), which eliminated duplicates and allowed for the extraction of relevant information from the included studies.
Additionally, this review intentionally excluded any studies or data that aimed to validate force or power-related metrics, even if they involved comparisons to a gold-standard force plate. This method of assessment has major flaws and limitations when contrasted with a VBT device. The fundamental difference lies in the approach to measurement, as force plates directly measure force and time, while VBT devices such as LPTs measure displacement and time, with the latter permitting calculation of both mean and peak velocity of the barbell. This distinction is critical to understand as the velocity of the barbell has been shown to be substantially different from the velocity of the athlete’s center of mass and the system center of mass (the center of mass of the combined mass of the athlete and barbell) during the back squat [39,40], jump squat [40] and power clean [40,41]. As such, it is not feasible to validate the power output from a VBT device against that calculated using force plates, as only the velocity of the system can be calculated from the resulting force-time data. Additionally, attempting to validate forces predicted from barbell velocity against forces directly measured on force plates is inherently flawed, as the velocity of the barbell, center of mass, and system center of mass differ [39–42], with greater differences where the displacement of the barbell is substantially greater than that of the center of mass (e.g., power clean) [40,41]. As such, the relative force required to accelerate the system center of mass and the barbell, given their different displacements and velocities, would differ substantially. It is therefore recommended that VBT devices be used to determine displacement and velocity, but not force or power, and thus the decision was made to exclude such studies and data from this review.
Quality assessment tool
To assess the quality of each study included in this review, a modified version of the Downs and Black checklist, adapted to better suit the nature of the included studies [36], was applied using Covidence software (Veritas Health Innovation, Melbourne, Australia). This specific instrument was deliberately selected over more conventional tools (e.g., the Cochrane Risk of Bias Tool), which are primarily intended for randomized controlled trials. This choice was made due to the inherently context-specific nature of validity and reliability assessments, particularly given the diversity of devices, metrics, and observational designs encountered in this review. Traditional trial-based assessment tools typically prioritize sources of heterogeneity less relevant to our research aims. Conversely, the modified Downs and Black checklist permits context-sensitive and meaningful comparisons across diverse study designs, thereby facilitating a more precise evaluation of methodological quality within this domain [36]. This checklist was validated for reporting the quality of observational study designs and has been previously used in sport science systematic reviews [1,43]. However, not all checklist criteria were applicable to the studies included in this review. Nine of the 27 criteria were used to assess the included studies. The modified Downs and Black questions can be found in Table S2 in S1 File. The reporting quality was scored on a scale using 0 (unable to determine or no) or 1 (yes). A total score of 9 indicated the highest reporting quality, with scores above 6 considered “good”, scores of 4–6 considered “moderate”, and scores below 4 considered “poor”.
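The scoring bands described above can be sketched as a small function. This is an editorial illustration of the banding logic only (nine items scored 0 or 1; totals above 6 "good", 4–6 "moderate", below 4 "poor"); the function name is ours, not part of the checklist.

```python
# Sketch of the quality-score banding used with the 9-item modified
# Downs and Black checklist: each item is scored 0 (no/unclear) or 1 (yes).
def quality_band(item_scores: list[int]) -> str:
    """Return the reporting-quality band for a list of nine 0/1 item scores."""
    assert len(item_scores) == 9 and all(s in (0, 1) for s in item_scores)
    total = sum(item_scores)
    if total > 6:
        return "good"
    elif total >= 4:
        return "moderate"
    return "poor"
```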
Technological efficacy – validity and reliability criteria
Following extensive discussions with domain experts—practitioners and researchers who have published extensively on the validity, reliability, and application methods of VBT devices—and statistical experts, the authors established stringent validity criteria (Table 1) and reliability criteria (Table 2) for assessing technological devices. Additionally, to ensure that the threshold values used to categorize device validity and reliability were both statistically robust and practically relevant, we individually consulted three domain experts. Each expert was asked to provide their expert perspective on the statistical values they deemed acceptable for establishing validity and reliability in VBT devices across different practical scenarios. Their individual responses were then collated, and through an iterative discussion process between the experts and the authors, a consensus was reached. This consensus informed the specific threshold values presented in Figs 2 and 3 and aimed to provide a more comprehensive and objective approach for evaluating the technological efficacy of VBT devices.
Table 1 outlines the criteria for studies focused on validity. This 4-item checklist includes:
- Verification of the use of a gold standard criterion, such as a multi-camera 3D motion capture system.
- Evaluation of the appropriateness of the statistical methods used for assessing device validity, based on a multi-layered decision-making process that incorporates both statistical measures and domain expert validation (Fig 2). Selecting at least three statistical measures (such as measurement error, coefficient of variation, and mean difference) is a required criterion to advance to the next item on the checklist. This approach ensures greater contextual certainty regarding the true nature of statistical validity given its complexity.
- Examination of the original study’s interpretation of statistical outputs to determine whether it claimed the device was valid.
- A final assessment to confirm that the device and study meet the required thresholds set by the domain experts outlined in Fig 2, thereby validating the device’s intended measurements.
Studies meeting the criteria for items 1, 2, and 4 are deemed to have met the validity criteria for this review. While item 3 does not directly impact the final assessment, it is included to acknowledge the complexities and variations in interpreting statistical data, as well as the diverse perspectives that different researchers and practitioners may bring to this interpretation. The statisticians consulted recommended selecting at least three different statistical measures (Fig 2) to account for the various dimensions of validity when compared to a gold standard [44]. These dimensions include: (1) Measurement concordance with a gold standard, which can be assessed using metrics such as the Pearson correlation coefficient and coefficient of variation, which evaluate the relative strength and consistency of the relationship between the device’s measurements and the gold standard criterion [17,28,44]. (2) Accuracy of the measures relative to the gold standard, which can be assessed using metrics such as standard error of the estimate, typical error, and root mean square error, which respectively quantify the predictive accuracy, degree of absolute error, or deviation of the device’s measurements from the true values provided by the gold standard [17,28,44]. (3) Assessment of bias, which can be assessed using metrics such as limits of agreement, mean difference, and graphical representation via Bland-Altman plots, which identify and quantify any systematic bias or differences between the device’s measurements and the gold standard [17,28,44]. Thus, by selecting a minimum of three statistical measures from the list provided in Fig 2, at least two dimensions of validity will be covered. This approach ensures sufficient certainty in making inferences about the validity of the device measurements.
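The three dimensions above can each be illustrated with one representative statistic. The paired data below are hypothetical, chosen only to show the computations: Pearson r for concordance, root mean square error for accuracy, and mean difference with 95% limits of agreement for bias.

```python
import math

# Sketch with hypothetical paired data: one statistic from each validity
# dimension -- concordance (Pearson r), accuracy (RMSE), and bias
# (mean difference with 95% limits of agreement).
gold   = [0.32, 0.48, 0.55, 0.70, 0.84, 0.95]   # gold-standard values (m/s)
device = [0.35, 0.46, 0.58, 0.73, 0.82, 0.99]   # device under test (m/s)

n = len(gold)
mx, my = sum(gold) / n, sum(device) / n
r = (sum((x - mx) * (y - my) for x, y in zip(gold, device))
     / math.sqrt(sum((x - mx) ** 2 for x in gold)
                 * sum((y - my) ** 2 for y in device)))     # concordance

rmse = math.sqrt(sum((y - x) ** 2 for x, y in zip(gold, device)) / n)  # accuracy

diffs = [y - x for x, y in zip(gold, device)]
mean_diff = sum(diffs) / n                                  # systematic bias
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # 95% limits of agreement
```

Reporting at least one statistic from two or more of these dimensions, rather than a correlation alone, is what the framework's item 2 requires.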
Table 2 outlines the criteria for studies assessing reliability, also organized as a 4-item checklist:
- Assessment of the study’s ability to differentiate between technological and biological reliability. This is considered a bonus criterion due to the inherent challenges in isolating biological variation from reliability values. However, its inclusion is crucial as such variations can potentially inflate error rates.
- Evaluation of the statistical methods used for assessing device reliability, with specific requirements outlined in Fig 3. The selection of at least three statistical measures (e.g., intraclass correlation coefficient, coefficient of variation, and limits of agreement), as indicated in Fig 3, is a required criterion to advance to the next item on the checklist. This approach ensures greater contextual clarity regarding the reliability of the devices.
- Examination of the original study’s interpretation of statistical outputs to determine whether it claimed the device was reliable.
- A final assessment to confirm that the device and study meet the required thresholds set by the domain experts for reliability.
Given the resource-intensive nature of investigating true technological reliability—often requiring a specially calibrated rig programmed to travel at predefined velocities—many studies may lack the necessary resources or funding to undertake this task. Items 2, 3, and 4 follow the same principles as those outlined for the validity criteria. Similar to validity, the justification behind the criterion to select no fewer than three different statistical measures outlined in Fig 3 was recommended by statisticians to account for the different dimensions of measurement reliability when assessing intra- or inter-device reliability [44]. These dimensions include: (1) Consistency of measures, which can be assessed using metrics such as the intraclass correlation coefficient and coefficient of variation, which evaluate the relative strength and consistency of the relationship between repeated measurements [17,28,44]. (2) Accuracy of repeated measures, which can be assessed using metrics such as typical error and relative typical error, which quantify the degree of absolute error or deviation in repeated measurements [17,28,44]. (3) Assessment of bias or systematic variability, which can be assessed using metrics such as mean difference, limits of agreement, and Bland-Altman plots, which identify and quantify any systematic bias or variability within the measurements [17,28,44]. Therefore, by choosing at least three statistical measures from the list provided in Fig 3, at least two dimensions of reliability will be covered. This approach will provide adequate assurance when drawing conclusions on the reliability of the measurements.
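As with validity, the reliability dimensions can be illustrated with a short computation. The test-retest data below are hypothetical; the typical error is derived from the standard deviation of the difference scores divided by the square root of two, a standard test-retest formulation, and is also expressed as a coefficient of variation.

```python
import math

# Sketch with hypothetical test-retest data: typical error and CV for
# intra-device reliability (typical error = SD of difference scores / sqrt(2)).
trial1 = [0.62, 0.48, 0.71, 0.55, 0.66]   # device readings, session 1 (m/s)
trial2 = [0.60, 0.50, 0.69, 0.57, 0.65]   # same device, session 2 (m/s)

n = len(trial1)
diffs = [b - a for a, b in zip(trial1, trial2)]
mean_diff = sum(diffs) / n                              # systematic change between trials
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
typical_error = sd_diff / math.sqrt(2)                  # absolute typical error

grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
cv_pct = 100 * typical_error / grand_mean               # typical error as a CV (%)
```

Together with a consistency metric such as the ICC and a bias metric such as the limits of agreement, these cover the minimum of two reliability dimensions required by the framework.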
Results
Identification of studies
The systematic search retrieved 568 studies (Fig 4). After removing duplicates, the titles, and abstracts of the remaining 476 studies were screened for eligibility with 67 studies included from the original search and an additional 8 studies included from the references list of the included studies, totaling 75 studies. Table 3 provides a summary of technological efficacy studies identified in this review.
Reporting quality
The reporting quality of the included studies was generally high (mean ± standard deviation: 7.93 ± 0.98) (Table 4). Of the 75 studies, 25 reported a perfect score of 9, while one study achieved the lowest score of 5. However, 11 studies failed to provide full details of the technological device in accordance with item 3 of the checklist, and 6 studies did not report actual/relevant statistics according to item 10. The most commonly unmet item was item 18, which relates to the use of contextually appropriate statistical analyses and was not met by 46 studies.
Study characteristics
In this review, a total of 75 studies were included. Among these, 66 studies investigated the validity of a VBT device, and 56 studies assessed reliability. Notably, 47 studies examined both validity and reliability, while 19 studies focused exclusively on validity, and 9 studies addressed only reliability. Across all validity studies, a total of 40 different devices were investigated (Tables S7–S11 in S1 File), resulting in 105 validity investigations (Table S3 in S1 File). Of the 66 validity studies, only 24 used a gold standard criterion device to assess validity. After applying the framework proposed in this review, we found that only five studies [4,45–48] out of the 66 included met all the validity criteria (Fig 2), and thus, only these specific studies could classify the devices as valid based on our assessment. Additionally, 56 studies were identified that investigated the reliability of a VBT device, with a total of 33 different device types investigated (Tables S12–S16 in S1 File). The total number of reliability investigations per device type was 94 (Table S4 in S1 File). Of the 56 studies that investigated the reliability of a VBT device, only two studies [4,47] met the bonus criterion (Item 1) in the reliability assessment (Table 2), referring to the reporting of both biological and technological reliability. A total of 33 studies met Item 2 of the reliability criteria, referring to the use of contextually appropriate statistics (Fig 3). Of these 33 studies, 16 were able to meet Item 4 of the criteria, referring to meeting the required statistical thresholds (Fig 3), and were therefore classified as reliable for their respective devices according to the assessment framework proposed in this review. Table 5 outlines these 16 studies and also details whether each device had any supporting evidence of validity from other studies.
Across the included studies, a total of 358 measures were recorded. The most commonly used VBT exercises were the F/W back squat (n = 94, 26.3%), S/M bench press (n = 57, 15.9%), F/W bench press (n = 44, 12.3%), and S/M back squat (n = 36, 10.1%) (Table S5 in S1 File). The variables of mean velocity (n = 191, 53.3%) and peak velocity (n = 96, 26.8%) were the most frequently investigated in studies (Table S6 in S1 File). The relative loads were expressed as a percentage of 1RM measured by direct assessment, while absolute loads were expressed in kilograms, velocities were expressed in meters per second, and displacement was measured both in meters and centimeters.
Discussion
This review highlights the considerable diversity among the included studies. First, we evaluated a wide spectrum of VBT devices—with 40 distinct devices assessed in validity studies and 33 in reliability studies—demonstrating the rapid evolution of technology in this field. Second, the studies involved a varied array of exercise modalities, with common examples including the free-weight back squat and the Smith machine bench press. Finally, although a range of outcome metrics were employed, mean and peak velocity emerged as the most frequently measured parameters. This synthesis reveals the heterogeneous nature of the evidence and emphasizes the importance of our comprehensive, multi-layered assessment framework.
Of the 66 studies investigating the validity of a VBT device, 24 used a gold standard criterion device to assess criterion validity. Only five studies [4,45–48] met the validity criteria proposed in this review. Specifically, Appleby, Banyard [45] assessed the validity of the GymAware LPT device to measure vertical barbell displacement. Weakley, Munteanu [47] assessed the validity of the Perch 3D motion camera device, and Weakley, Chalkley [4] assessed the validity of the Flex optical device. More recently, Lu, Zhang [46] assessed the validity of a novel full-waveform resistance training monitoring device (FWRTD) that is based on a linear position transducer. Olaya-Cuartero, Villalón-Gasch [48] was the only study identified in this review to successfully validate an IMU/accelerometer device, the VmaxPro, according to our proposed criteria. These five studies were the only ones able to meet all the statistical and domain expert validity assessment guidelines (Table 1 and Fig 2). Among the remaining 19 studies that used a gold standard device, 15 failed to use the statistics recommended by the statistical experts, while four studies [74,83,92,93] used the appropriate statistics but failed to meet the validity thresholds set by the VBT domain experts (Fig 2).
As only five out of 66 studies met the validity criteria proposed in this systematic review, the quality of validity assessments in sport science studies needs more attention. It is crucial to acknowledge that validity assessments can be subjective and influenced by individual biases and perspectives [115]. Therefore, to ensure the assessments are universal and consistent, they should be carried out to the highest possible standard. When validity assessments are not robust, heterogeneity in interpretation can lead to inconsistent findings and make it challenging to draw clear conclusions from research [116]. More broadly, this can even result in ineffective interventions or treatments and impede progress in understanding athletic performance mechanisms. To address this issue, clear guidelines and standards should be established for validity assessments across sport science. Greater statistical transparency could be achieved by applying a range of contextually appropriate statistics and presenting the full set of results [117]. These results can then be accompanied with specific recommendations for their interpretation. This could potentially help improve the overall quality of sport science research and enhance our contextual understanding of the statistical data presented in studies.
In summary, this assessment of technological validity studies found that the GymAware LPT device can be a valid tool to measure vertical barbell displacement for the F/W back squat across a load range of 70–90% 1RM. Although the GymAware LPT was investigated for validity ten times, the limitation of this device’s reported utility within this review is methodological: three of the ten investigations lacked a comparison to a gold-standard device and thus automatically failed the first layer of the proposed criteria, five failed to meet the recommended statistical standards, and one employed appropriate statistics but failed to meet the proposed thresholds, leaving only the study by Appleby, Banyard [45] to meet all the proposed criteria. Future studies with appropriate methodological quality should be conducted to provide greater certainty on the broader validity of the GymAware device. The Perch motion capture single camera system was shown to be a valid tool for measuring mean and peak velocity of the F/W back squat and F/W bench press across all loads ranging from 20–100% 1RM. The Flex optical device was also shown to be a valid tool for measuring mean velocity for the F/W back squat and F/W bench press across all loads ranging from 20–90% 1RM. The VmaxPro IMU/accelerometer device can be a valid tool to measure mean velocity and displacement for the F/W back squat at 75–95% 1RM. Additionally, the novel Full-Waveform Resistance Training Monitoring System (FRTMS) was valid for the S/M back squat across loads ranging from 30–90% 1RM for mean velocity, eccentric mean velocity, and the full-waveform velocity metric proposed in the study [46].
To determine the true reliability of a technological device, it is critical to isolate intra-device and inter-device technological variability from the biological variability introduced by human involvement [47]. Intra-device reliability informs S&C coaches on how consistent a single device is in measuring the same parameter across repeated trials. Inter-device reliability is important because it informs S&C coaches on how consistent different units of the same model are in measuring the same parameter, for example in a multiple-device setup in team settings. Thus, for an accurate assessment of technological reliability, both intra- and inter-device technological variation need to be assessed on a calibrated mechanical rig with pre-determined speeds. As this type of setup and study design can be expensive to resource, it is often not investigated or discussed. Researchers aiming to investigate the true reliability of a technological device should be aware of this limitation and aim to minimise biological variability and influence through highly stable testing conditions.
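The rig-based logic described above can be sketched as follows. The device readings and the fixed rig speed are hypothetical; the typical error of each unit is expressed as a coefficient of variation (CV%), a statistic commonly used for this purpose.

```python
import numpy as np

def typical_error_cv(trials):
    """Within-device typical error as a CV (%), from repeated
    measurements of the same pre-set rig velocity."""
    trials = np.asarray(trials, dtype=float)
    return trials.std(ddof=1) / trials.mean() * 100

# Hypothetical repeated reads of a rig moving at a fixed 0.60 m/s:
# with no human in the loop, the spread is purely technological.
unit_a = [0.598, 0.602, 0.601, 0.599, 0.600]  # intra-device spread, unit A
unit_b = [0.611, 0.613, 0.612, 0.610, 0.614]  # a second unit of the same model
print(f"unit A CV = {typical_error_cv(unit_a):.2f}%")
print(f"unit B CV = {typical_error_cv(unit_b):.2f}%")
# Inter-device agreement: compare each unit's mean against the rig speed.
print(f"unit means: {np.mean(unit_a):.3f}, {np.mean(unit_b):.3f} m/s")
```

In this toy example both units are internally consistent (small intra-device CV), yet unit B reads systematically high against the 0.60 m/s rig speed — exactly the kind of inter-device offset that biological noise would mask in a human-only protocol.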
In this review, out of the 56 reliability investigations identified, only two [4,47] met the bonus criteria concerning the differentiation between biological and technological reliability. This highlights the inherent challenges in isolating biological factors when assessing technological reliability, an issue that warrants further attention in future research. A total of 31 studies met our criteria for the application of contextually appropriate statistical methods, and 16 of these also met the required statistical thresholds for reliability (Table 5). This underscores the rigor of our multi-layered criterion and its utility in identifying VBT devices that do not merely meet arbitrary standard reliability metrics but exceed them. Given these complexities and challenges, device manufacturers have a pivotal role to play. They should conduct highly controlled investigations aimed at reporting the technological error inherent in their devices. Such information is invaluable for S&C coaches in making informed decisions and determining meaningful changes during their VBT applications. Therefore, this review highlights the need for a multi-dimensional evaluation framework that considers both technological and biological factors when assessing the reliability of VBT devices. While we advocate for increased investment in rigorous research methods and recommend collaboration between academic researchers and device manufacturers to generate reliable data that can guide S&C coaches, we also recognize the practical challenges that may limit such partnerships. The fast-paced nature of the industry, bureaucratic hurdles such as paperwork and institutional review board processes, and resource constraints—including time, budget, and product launch timelines—can impede collaboration. Despite these feasibility concerns, exploring innovative strategies to overcome these barriers is essential for fostering effective academia-industry partnerships.
The multi-layered criterion proposed in this review is a methodological and philosophical enhancement for evaluating the validity and reliability of velocity-monitoring devices in VBT. Developed through a consultative process involving statisticians and domain experts, this criterion framework offers a more critical approach to technological assessment. Statisticians provided a robust set of statistical methods for assessing validity and reliability, while domain experts contextualized these methods by setting specific thresholds based on their experience and expertise. This dual consultation ensures that the framework is both statistically rigorous and practically relevant, addressing the often-arbitrary nature of statistical thresholds in validity and reliability assessments. Designed for adaptability, the criterion-based framework is open to further refinement through ongoing consultation with experts, aligning it with the scientific principle of falsifiability. It offers a more robust and accurate method for identifying the most valid and reliable devices, thereby providing an improvement over existing practices for informed decision-making by S&C coaches and sport scientists. While the criterion is tailored to the specific context of VBT devices, its multi-layered framework could be adapted and applied to other contexts, addressing the inherent complexities and subjectivities in scientific inquiry. It is important to note that, while our criterion improves upon existing methods, it does not claim ultimate certainty; rather, it represents a significant step toward a more accurate and reliable method of assessment.
Although the current framework does not explicitly address sample size, the authors acknowledge that sample size is crucial in reliability assessments [118]. However, in the context of the reliability studies reported in this review—focusing on within-subject reliability of VBT devices—smaller sample sizes are common due to practical constraints in sport science, including limited participant availability, time, and resources. Because within-subject reliability emphasizes repeated measures within the same individuals, a robust methodology can still yield meaningful data even with fewer participants. While Bland and Altman recommend a minimum of 50 participants to obtain precise population-level Limits of Agreement [118], this threshold is less applicable to repeated-measures reliability at the individual level. Similarly, prescribing a strict cutoff—such as the 62 participants required for an ICC ≥ 0.95 at α = 0.05 and 80% power [119]—could exclude studies that otherwise meet the proposed statistical and methodological standards. As shown in Table 5, the sample sizes for studies deemed to have adequate reliability evidence based on the applied framework ranged from n = 9 to n = 31, meaning that even the largest sample did not reach the minimum recommended thresholds for population-specific estimations. All included studies’ sample sizes are reported in the relevant data tables.
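To illustrate why larger samples are needed for precise population-level Limits of Agreement, the sketch below uses Bland and Altman's approximate standard error of a limit, SE ≈ SD·√(3/n) [118], with a normal-approximation multiplier in place of the exact t-value. The SD of differences is hypothetical; the point is only how the confidence interval around each limit narrows as n grows from the smallest (n = 9) to the largest (n = 31) sample in Table 5 and on to the recommended n = 50.

```python
import math

def loa_ci_halfwidth(sd_diff, n, z=1.96):
    """Approximate 95% CI half-width around one Bland-Altman limit
    of agreement, using SE(LoA) ~= sd_diff * sqrt(3 / n)."""
    return z * sd_diff * math.sqrt(3 / n)

sd = 0.05  # hypothetical SD of device-criterion differences (m/s)
for n in (9, 31, 50):
    print(f"n = {n:2d}: LoA 95% CI half-width = "
          f"{loa_ci_halfwidth(sd, n):.3f} m/s")
```

With n = 9 the uncertainty around each limit is more than twice that at n = 50, which is why small-sample LoA should be interpreted as within-subject evidence rather than precise population estimates.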
Future research should carefully balance practical constraints with the statistical rigor necessary to ensure valid and reliable outcomes. The methodological approach and multi-layered framework proposed in this review have broad applicability for evaluating the validity and reliability of technological devices used in sports and resistance training. Researchers can readily adapt this framework to establish standardized protocols, ensuring consistent and comprehensive evaluation across diverse technologies. Through replication and ongoing refinement, this approach will foster more uniform standards within sport science, ultimately improving comparability among studies and facilitating informed decision-making by practitioners.
Importantly, merely replicating arbitrary statistical thresholds for simplicity does not enhance certainty regarding validity and reliability. In contrast, the proposed framework integrates both statistician-driven and domain expert-driven criteria, thus mitigating the arbitrary nature of previously established thresholds and addressing variability across studies. While this multi-layered framework constitutes a significant methodological advancement, it relies on established statistical measures and expert-defined criteria. As sports technology continues to evolve, novel statistical analyses capable of capturing additional aspects of error, bias, or reliability beyond current methodologies are likely to emerge. Therefore, future iterations of this framework should incorporate advanced analytical techniques to ensure ongoing robustness and comprehensiveness. By maintaining adaptability, this framework aims to preserve its relevance and strengthen confidence in the validity and reliability assessments of evolving sports technologies.
Conclusions
This review systematically assessed the available literature to evaluate the validity and reliability of velocity-based training devices by applying our proposed comprehensive framework. Developed in collaboration with statisticians and domain experts, the framework was applied to all 75 studies identified through our systematic search. After application, only five studies met the validity criteria and 16 met the reliability criteria. Despite the limited number of studies meeting these criteria, we extracted and assessed data from all relevant studies to provide a thorough and transparent demonstration of how the framework operates across the entire landscape of validity and reliability studies (Tables S7–S16 in S1 File). Excluding studies that did not meet the highest standards would have severely limited the scope of the review and potentially introduced bias into our conclusions. By incorporating all studies, we ensure that our analysis captures a broad spectrum of the literature, allowing a more nuanced understanding of the framework’s utility and limitations, and enabling conclusions and practical applications for S&C coaches that are based on a comprehensive analysis of the available evidence. This inclusive approach also demonstrates how the framework functions across varying levels of methodological quality, thereby enhancing the reliability and applicability of our findings.
The main conclusions of this review are that the GymAware LPT device is a valid and reliable tool to measure vertical barbell displacement for the F/W back squat across a load range of 70–90% 1RM, although the majority of investigations of this device failed to meet the methodological validity criteria. The Perch motion capture single camera system can be a valid and reliable tool for measuring mean and peak velocity of the F/W back squat and F/W bench press across all loads ranging from 20–100% 1RM. The Flex optical device can also be a valid and reliable tool for measuring mean velocity for the F/W back squat and F/W bench press across all loads ranging from 20–90% 1RM. The Flex and Perch VBT devices showed the most robust validity and reliability evidence and were the only devices assessed for true technological reliability on a mechanically calibrated rig setup. The VmaxPro was the only IMU/accelerometer device identified as a valid and reliable tool to measure mean velocity and displacement for the F/W back squat at 75–95% 1RM. The novel Full-Waveform Resistance Training Monitoring System (FRTMS) was valid for the S/M back squat across loads ranging from 30–90% 1RM for mean velocity, eccentric mean velocity, and the full-waveform velocity metric. This review emphasizes the need to establish standardized guidelines and consistent statistical practices for future validity and reliability assessments in sport science, along with clear recommendations for interpreting results within their specific contexts. Future investigations should apply a gold-standard criterion in the form of 3D motion capture across a broader range of exercises and loading conditions, and should differentiate between biological and technological reliability for greater device precision.
Practical applications
Device Selection: S&C coaches could consider using the GymAware LPT device for vertical barbell displacement for the F/W back squat within a 70–90% 1RM load range. For a broader range of measures and exercises, the Perch motion capture single camera system and the Flex optical device are also recommended. The VmaxPro IMU device is a valid and reliable tool to measure mean velocity and displacement for the F/W back squat at 75–95% 1RM. Additionally, practitioners should avoid using VBT devices to measure force- and power-related metrics due to the inherent flaws in the methods of measurement that prevent validation against a gold-standard force plate.
Load Range: When using the Perch and Flex devices, coaches can confidently measure mean velocity for the F/W back squat and F/W bench press across all loads ranging from 20–100% 1RM (Perch) and 20–90% 1RM (Flex). For the VmaxPro device, loads ranging from 75–95% 1RM were valid and reliable.
Technological Reliability: Given that the Flex and Perch devices were the only ones assessed for true technological reliability, these should be prioritized when true technological reliability is a critical factor.
Contextual Interpretation: Coaches and researchers should be cautious when generalizing findings and should consider the specific context in which the device will be used. VBT devices should be used to evaluate barbell displacement and velocity, but not to approximate force or power, as this would only permit estimations of the force and power applied to the barbell and not to the entire system.
Supporting information
S1 File. Supplemental Digital Content Tables S1–S17.
https://doi.org/10.1371/journal.pone.0324606.s001
(DOCX)
S2 File. PRISMA Checklist PLOSONE- Systematic Review of VBT Devices.
https://doi.org/10.1371/journal.pone.0324606.s002
(DOCX)
References
- 1. Weakley J, Morrison M, García-Ramos A, Johnston R, James L, Cole MH. The validity and reliability of commercially available resistance training monitoring devices: a systematic review. Sports Med. 2021;51(3):443–502. pmid:33475985
- 2. Weakley J, Mann B, Banyard H, McLaren S, Scott T, Garcia-Ramos A. Velocity-based training: from theory to application. Strength Cond J. 2021;43(2):31–49.
- 3. Banyard HG, Nosaka K, Sato K, Haff GG. Validity of various methods for determining velocity, force, and power in the back squat. Int J Sports Physiol Perform. 2017;12(9):1170–6.
- 4. Weakley J, Chalkley D, Johnston R, García-Ramos A, Townshend A, Dorrell H, et al. Criterion validity, and interunit and between-day reliability of the FLEX for measuring barbell velocity during commonly used resistance training exercises. J Strength Cond Res. 2020;34(6):1519–24. pmid:32459410
- 5. Orange ST, Metcalfe JW, Marshall P, Vince RV, Madden LA, Liefeith A. Test-retest reliability of a commercial linear position transducer (GymAware PowerTool) to measure velocity and power in the back squat and bench press. J Strength Cond Res. 2020;34(3):728–37. pmid:29952868
- 6. Banyard HG, Nosaka K, Vernon AD, Haff GG. The Reliability of Individualized Load-Velocity Profiles. Int J Sports Physiol Perform. 2018;13(6):763–9. pmid:29140148
- 7. Bernards JR, Sato K, Haff GG, Bazyler CD. Current research and statistical practices in sport science and a need for change. Sports (Basel). 2017;5(4):87. pmid:29910447
- 8. Atkinson G, Nevill AM. Method agreement and measurement error in the physiology of exercise. In: Sport and exercise physiology testing guidelines. Vol 1. London: Routledge; 2007. p. 41–48.
- 9. Impellizzeri FM, Marcora SM. Test validation in sport physiology: lessons learned from clinimetrics. Int J Sports Physiol Perform. 2009;4(2):269–77. pmid:19567929
- 10. Lorenzetti S, Lamparter T, Lüthy F. Validity and reliability of simple measurement device to assess the velocity of the barbell during squats. BMC Res Notes. 2017;10(1):707. pmid:29212552
- 11. Orange ST, Metcalfe JW, Liefeith A, Marshall P, Madden LA, Fewster CR, et al. Validity and reliability of a wearable inertial sensor to measure velocity and power in the back squat and bench press. J Strength Cond Res. 2019;33(9):2398–408. pmid:29742745
- 12. Atkinson G, Nevill AM. Selected issues in the design and analysis of sport performance research. J Sports Sci. 2001;19(10):811–27. pmid:11561675
- 13. Batterham AM. Bias in Bland-Altman but not regression validity analyses. Sportscience. 2004;8:42–7.
- 14. Halperin I, Vigotsky AD, Foster C, Pyne DB. Strengthening the practice of exercise and sport-science research. Int J Sports Physiol Perform. 2018;13(2):127–34. pmid:28787228
- 15. Muyor JM, Granero-Gil P, Pino-Ortega J. Reliability and validity of a new accelerometer (Wimu®) system for measuring velocity during resistance exercises. Proc Inst Mech Eng Part P J Sports Eng Technol. 2017;232(3):218–24.
- 16. de Boeck P, Elosua P. Reliability and validity: history, notions, methods, and discussion. 2016.
- 17. Peeters MJ, Harpe SE. Updating conceptions of validity and reliability. Res Social Adm Pharm. 2020;16(8):1127–30. pmid:31806566
- 18. Ahire SL, Devaraj S. An empirical comparison of statistical construct validation approaches. IEEE Trans Eng Manage. 2001;48(3):319–29.
- 19. Yoccoz NG. Use, overuse, and misuse of significance tests in evolutionary biology and ecology. Bull Ecol Soc Am. 1991;72(2):106–11.
- 20. Sürücü L, Maslakci A. Validity and reliability in quantitative research. Bus Manag Stud Int J. 2020;8(3):2694–726.
- 21. Vidgen B, Yasseri T. P-values: misunderstood and misused. Front Phys. 2016;4:6.
- 22. Ranstam J. Why the P-value culture is bad and confidence intervals a better alternative. Osteoarthritis Cartilage. 2012;20(8):805–8. pmid:22503814
- 23. Trafimow D, Marks M. Editorial. Basic Appl Soc Psychol. 2015;37(1):1–2.
- 24. Halsey LG. The reign of the p-value is over: what alternative analyses could we employ to fill the power vacuum? Biol Lett. 2019;15(5):20190174. pmid:31113309
- 25. Cumming G. Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspect Psychol Sci. 2008;3(4):286–300. pmid:26158948
- 26. Blume J, Peipert JF. What your statistician never told you about p-values. J Am Assoc Gynecol Laparosc. 2003;10(4):439–44.
- 27. Yaddanapudi LN. The American Statistical Association statement on P-values explained. J Anaesthesiol Clin Pharmacol. 2016;32(4):421–3. pmid:28096569
- 28. Higgins PA, Straub AJ. Understanding the error of our ways: mapping the concepts of validity and reliability. Nurs Outlook. 2006;54(1):23–9. pmid:16487776
- 29. Thanasegaran G. Reliability and validity issues in research. Integration Dissemination. 2009;4.
- 30. Atkinson G, Nevill AM. Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine. Sports Med. 1998;26(4):217–38. pmid:9820922
- 31. Hopkins WG. Measures of reliability in sports medicine and science. Sports Med. 2000;30(1):1–15. pmid:10907753
- 32. Puth MT, Neuhäuser M, Ruxton GD. Effective use of Pearson’s product–moment correlation coefficient. Anim Behav. 2014;93:183–9.
- 33. Johnson R. Assessment of bias with emphasis on method comparison. Clin Biochem Rev. 2008;29 Suppl 1(Suppl 1):S37-42. pmid:18852855
- 34. Caldwell A, Vigotsky AD. A case against default effect sizes in sport and exercise science. PeerJ. 2020;8:e10314. pmid:33194448
- 35. Kyprianou E, Lolli L, Haddad HA, Di Salvo V, Varley MC, Mendez Villanueva A. A novel approach to assessing validity in sports performance research: integrating expert practitioner opinion into the statistical analysis. Sci Med Footb. 2019;3(4):333–8.
- 36. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84. pmid:9764259
- 37. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9, W64. pmid:19622511
- 38. Yensen J. PICO search strategies. Online J Nurs Inform. 2013;17(3).
- 39. Lake JP, Lauder MA, Smith NA. Barbell kinematics should not be used to estimate power output applied to the Barbell-and-body system center of mass during lower-body resistance exercise. J Strength Cond Res. 2012;26(5):1302–7. pmid:22516904
- 40. McBride JM, Haines TL, Kirby TJ. Effect of loading on peak power of the bar, body, and system during power cleans, squats, and jump squats. J Sports Sci. 2011;29(11):1215–21. pmid:21777152
- 41. Haines T, McBride JM, Skinner J, Woodall M, Larkin TR, Kirby TJ, et al. Effect of load on bar, body and system power output in the power clean. J Strength Cond Res. 2010;24:1.
- 42. McBride JM, Blow D, Kirby TJ, Haines TL, Dayne AM, Triplett NT. Relationship between maximal squat strength and five, ten, and forty yard sprint times. J Strength Cond Res. 2009;23(6):1633–6. pmid:19675504
- 43. Johnston RD, Black GM, Harrison PW, Murray NB, Austin DJ. Applied sport science of Australian football: a systematic review. Sports Med. 2018;48(7):1673–94. pmid:29633084
- 44. Walther BA, Moore JL. The concepts of bias, precision and accuracy, and their use in testing the performance of species richness estimators, with a literature review of estimator performance. Ecography. 2005;28(6):815–29.
- 45. Appleby BB, Banyard H, Cormack SJ, Newton RU. Validity and reliability of methods to determine barbell displacement in heavy back squats: implications for velocity-based training. J Strength Cond Res. 2020;34(11):3118–23. pmid:33105362
- 46. Lu C, Zhang K, Cui Y, Tian Y, Wang S, Cao J, et al. Development and evaluation of a full-waveform resistance training Monitoring system based on a linear position transducer. Sensors (Basel). 2023;23(5):2435. pmid:36904637
- 47. Weakley J, Munteanu G, Cowley N, Johnston R, Morrison M, Gardiner C. The criterion validity and between-day reliability of the Perch for measuring barbell velocity during commonly used resistance training exercises. J Strength Cond Res. 2022.
- 48. Olaya-Cuartero J, Villalón-Gasch L, Penichet-Tomás A, Jimenez-Olmedo JM. Validity and reliability of the VmaxPro IMU for back squat exercise in multipower machine. J Phys Educ Sport. 2022.
- 49. Askow AT, Stone JD, Arndts DJ, King AC, Goto S, Hannon JP, et al. Validity and reliability of a commercially-available velocity and power testing device. Sports (Basel). 2018;6(4):170. pmid:30544687
- 50. Dorrell HF, Moore JM, Smith MF, Gee TI. Validity and reliability of a linear positional transducer across commonly practised resistance training exercises. J Sports Sci. 2019;37(1):67–73. pmid:29851551
- 51. Fritschi R, Seiler J, Gross M. Validity and effects of placement of velocity-based training devices. Sports (Basel). 2021;9(9):123. pmid:34564328
- 52. Janicijevic D, García-Ramos A, Lamas-Cepero JL, García-Pinillos F, Marcos-Blanco A, Rojas FJ, et al. Comparison of the two most commonly used gold-standard velocity monitoring devices (GymAware and T-Force) to assess lifting velocity during the free-weight barbell back squat exercise. Proc Inst Mech Eng Part P J Sports Eng Technol. 2021;237(3):205–12.
- 53. Menrad T, Edelmann-Nusser J. Validation of velocity measuring devices in velocity based strength training. Int J Comput Sci Sport. 2021;20(1):106–18.
- 54. Mitter B, Hölbling D, Bauer P, Stöckl M, Baca A, Tschan H. Concurrent validity of field-based diagnostic technology monitoring movement velocity in powerlifting exercises. J Strength Cond Res. 2021;35(8):2170–8. pmid:30946263
- 55. Thompson SW, Rogerson D, Dorrell HF, Ruddock A, Barnes A. The reliability and validity of current technologies for measuring barbell velocity in the free-weight back squat and power clean. Sports (Basel). 2020;8(7):94. pmid:32629842
- 56. Beckham GK, Layne DK, Kim SB, Martin EA, Perez BG, Adams KJ. Reliability and criterion validity of the assess2perform bar sensei. Sports (Basel). 2019;7(11):230. pmid:31703335
- 57. Jovanovic M, Jukic I. Within-unit reliability and between-units agreement of the commercially available linear position transducer and barbell-mounted inertial sensor to measure movement velocity. J Strength Cond Res. 2022.
- 58. Oleksy Ł, Kuchciak M, Bril G, Mika A, Przydział M, Pazdan-Śliż I, et al. Intra-rater and test–retest reliability of barbell force, velocity, and power during the landmine punch throw test assessed by the GymAware linear transducer system. Appl Sci. 2023;13(19):10875.
- 59. Suchomel TJ, Techmanski BS, Kissick CR, Comfort P. Reliability, validity, and comparison of barbell velocity measurement devices during the Jump Shrug and Hang High Pull. J Funct Morphol Kinesiol. 2023;8(1):35. pmid:36976132
- 60. Garnacho-Castaño MV, López-Lastra S, Maté-Muñoz JL. Reliability and validity assessment of a linear position transducer. J Sports Sci Med. 2015;14(1):128–36. pmid:25729300
- 61. McGrath G, Flanagan E, O’Donovan P, Collins D, Kenny I. Velocity-based training: validity of monitoring devices to assess mean concentric velocity in the bench press exercise. J Aust Strength Cond. 2018;26.
- 62. Chéry C, Ruf L. Reliability of the load-velocity relationship and validity of the PUSH to measure velocity in the deadlift. J Strength Cond Res. 2019;33(9):2370–80. pmid:31460987
- 63. Goldsmith JA, Trepeck C, Halle JL, Mendez KM, Klemp A, Cooke DM, et al. Validity of the Open Barbell and Tendo Weightlifting Analyzer Systems Versus the Optotrak Certus 3D Motion-Capture System for Barbell Velocity. Int J Sports Physiol Perform. 2019;14(4):540–3. pmid:30300064
- 64. Stock MS, Beck TW, DeFreitas JM, Dillon MA. Test-retest reliability of barbell velocity during the free-weight bench-press exercise. J Strength Cond Res. 2011;25(1):171–7. pmid:21157383
- 65. Martinopoulou K, Tsoukos A, Donti O, Katsikas C, Terzis G, Bogdanis GC. Comparison of movement velocity and force-velocity parameters using a free video analysis software and a linear position transducer during unilateral and bilateral ballistic leg press. Biomed Human Kinet. 2021;14(1):25–32.
- 66. Pérez-Castilla A, Piepoli A, Delgado-García G, Garrido-Blanca G, García-Ramos A. Reliability and concurrent validity of seven commercially available devices for the assessment of movement velocity at different intensities during the bench press. J Strength Cond Res. 2019;33(5):1258–65. pmid:31034462
- 67. García-Ramos A, Pérez-Castilla A, Martín F. Reliability and concurrent validity of the Velowin optoelectronic system to measure movement velocity during the free-weight back squat. Int J Sports Sci Coach. 2018;13(5):737–42.
- 68. Courel-Ibáñez J, Martínez-Cava A, Morán-Navarro R, Escribano-Peñas P, Chavarren-Cabrero J, González-Badillo JJ, et al. Reproducibility and repeatability of five different technologies for bar velocity measurement in resistance training. Ann Biomed Eng. 2019;47(7):1523–38. pmid:30980292
- 69. García-Pinillos F, Latorre-Román PA, Valdivieso-Ruano F, Balsalobre-Fernández C, Párraga-Montilla JA. Validity and reliability of the WIMU® system to measure barbell velocity during the half-squat exercise. Proc Inst Mech Eng Part P J Sports Eng Technol. 2019;233(3):408–15.
- 70. Martínez-Cava A, Hernández-Belmonte A, Courel-Ibáñez J, Morán-Navarro R, González-Badillo JJ, Pallarés JG. Reliability of technologies to measure the barbell velocity: implications for monitoring resistance training. PLoS One. 2020;15(6):e0232465. pmid:32520952
- 71. Muniz-Pardos B, Lozano-Berges G, Marin-Puyalto J, Gonzalez-Agüero A, Vicente-Rodriguez G, Casajus JA, et al. Validity and reliability of an optoelectronic system to measure movement velocity during bench press and half squat in a Smith machine. Proc Inst Mech Eng Part P J Sports Eng Technol. 2019;234(1):88–97.
- 72. Peña García-Orea G, Belando-Pedreño N, Merino-Barrero JA, Jiménez-Ruiz A, Heredia-Elvar JR. Validation of an opto-electronic instrument for the measurement of weighted countermovement jump execution velocity. Sports Biomech. 2021;20(2):150–64. pmid:30427269
- 73. Pérez-Castilla A, Miras-Moreno S, García-Vega AJ, García-Ramos A. The ADR Encoder is a reliable and valid device to measure barbell mean velocity during the Smith machine bench press exercise. Proc Inst Mech Eng Part P J Sports Eng Technol. 2021;238(1):102–7.
- 74. Feuerbacher JF, Jacobs MW, Dragutinovic B, Goldmann J-P, Cheng S, Schumann M. Validity and test-retest reliability of the Vmaxpro sensor for evaluation of movement velocity in the deep squat. J Strength Cond Res. 2022.
- 75. Gomez-Piriz PT, Sanchez ET, Manrique DC, Gonzalez EP. Reliability and comparability of the accelerometer and the linear position measuring device in resistance training. J Strength Cond Res. 2013;27(6):1664–70. pmid:22847523
- 76. Lopez-Torres O, Fernandez-Elias VE, Li J, Gomez-Ruano MA, Guadalupe-Grau A. Validity and reliability of a new low-cost linear position transducer to measure mean propulsive velocity: the ADR device. Proc Inst Mech Eng Part P J Sports Eng Technol. 2022.
- 77. Bardella P, Carrasquilla García I, Pozzo M, Tous-Fajardo J, Saez de Villareal E, Suarez-Arrones L. Optimal sampling frequency in recording of resistance training exercises. Sports Biomech. 2017;16(1):102–14. pmid:27414395
- 78. Balsalobre-Fernández C, Marchante D, Baz-Valle E, Alonso-Molero I, Jiménez SL, Muñóz-López M. Analysis of wearable and smartphone-based technologies for the measurement of barbell velocity in different resistance training exercises. Front Physiol. 2017;8:649. pmid:28894425
- 79. Boehringer S, Whyte DG. Validity and test-retest reliability of the 1080 quantum system for bench press exercise. J Strength Cond Res. 2019;33(12):3242–51. pmid:31136548
- 80. Fernandes JFT, Lamb KL, Clark CCT, Moran J, Drury B, Garcia-Ramos A, et al. Comparison of the FitroDyne and GymAware rotary encoders for quantifying peak and mean velocity during traditional multijointed exercises. J Strength Cond Res. 2021;35(6):1760–5. pmid:30399117
- 81. Gonzalez AM, Mangine GT, Spitz RW, Ghigiarelli JJ, Sell KM. Agreement between the Open Barbell and Tendo linear position transducers for monitoring barbell velocity during resistance exercise. Sports (Basel). 2019;7(5):125. pmid:31126039
- 82. van den Tillaar R, Ball N. Validity and reliability of kinematics measured with PUSH band vs. linear encoder in bench press and push-ups. Sports (Basel). 2019;7(9):207. pmid:31509960
- 83. Callaghan DE, Guy JH, Elsworthy N, Kean C. Validity of the PUSH band 2.0 and Speed4lifts to measure velocity during upper and lower body free-weight resistance exercises. J Sports Sci. 2022;40(9):968–75. pmid:35188434
- 84. Held S, Rappelt L, Deutsch J-P, Donath L. Valid and reliable barbell velocity estimation using an inertial measurement unit. Int J Environ Res Public Health. 2021;18(17):9170. pmid:34501761
- 85. Rodriguez-Perea Á, Jerez-Mayorga D, García-Ramos A, Martínez-García D, Chirosa Ríos LJ. Reliability and concurrent validity of a functional electromechanical dynamometer device for the assessment of movement velocity. Proc Inst Mech Eng Part P J Sports Eng Technol. 2021;235(3):176–81.
- 86. Qu HR, Qian DX, Xu SS, Shen YF. Validity and test-retest reliability of a resistance training device for Smith machine back squat exercise. iScience. 2024;27(1).
- 87. Moreno-Villanueva A, Rico-González M, Pérez-Caballero CE, Rodríguez-Valero G, Pino-Ortega J. Level of agreement and reliability of ADR encoder to monitor mean propulsive velocity during the bench press exercise. Proc Inst Mech Eng Part P J Sports Eng Technol. 2022.
- 88. Sato K, Beckham GK, Carroll K, Bazyler C, Sha Z, Haff GG. Validity of wireless device measuring velocity of resistance exercises. J Trainology. 2015;4(1):15–8.
- 89. Balsalobre-Fernández C, Kuzdub M, Poveda-Ortiz P, Campo-Vecino JD. Validity and reliability of the PUSH wearable device to measure movement velocity during the back squat exercise. J Strength Cond Res. 2016;30(7):1968–74. pmid:26670993
- 90. Patil S, Rajendraprasad S, Velagapudi M, Aurit S, Andukuri V, Alla V. Readmissions among people living with HIV admitted for hypertensive emergency. South Med J. 2022;115(7):429–34. pmid:35777749
- 91. Lake J, Augustus S, Austin K, Comfort P, McMahon J, Mundy P, et al. The reliability and validity of the bar-mounted PUSH BandTM 2.0 during bench press with moderate and heavy loads. J Sports Sci. 2019;37(23):2685–90. pmid:31418312
- 92. Orser K, Agar-Newman DJ, Tsai M-C, Klimstra M. The validity of the Push Band 2.0 to determine speed and power during progressively loaded squat jumps. Sports Biomech. 2024;23(1):109–17. pmid:33118478
- 93. Dragutinovic B, Jacobs MW, Feuerbacher JF, Goldmann J-P, Cheng S, Schumann M. Evaluation of the Vmaxpro sensor for assessing movement velocity and load-velocity variables: accuracy and implications for practical use. Biol Sport. 2024;41(1):41–51. pmid:38188099
- 94. Pino-Ortega J, Bastida-Castillo A, Hernández-Belmonte A, Gómez-Carmona CD. Validity of an inertial device for measuring linear and angular velocity in a leg extension exercise. Proc Inst Mech Eng Part P J Sports Eng Technol. 2019;234(1):30–6.
- 95. Ferro A, Floría P, Villacieros J, Muñoz-López A. Maximum velocity during loaded countermovement jumps obtained with an accelerometer, linear encoder and force platform: a comparison of technologies. J Biomech. 2019;95:109281. pmid:31471113
- 96. Mateo PG. Measurement of a squat movement velocity: comparison between a Rehagait accelerometer and the high-speed video recording method called MyLift. J Phys Educ Sport. 2020;20(3):1343–1353.
- 97. Abbott JC, Wagle JP, Sato K, Painter K, Light TJ, Stone MH. Validation of inertial sensor to measure barbell kinematics across a spectrum of loading conditions. Sports (Basel). 2020;8(7):93. pmid:32610449
- 98. Merrigan JJ, Martin JR. Is the OUTPUT sports unit reliable and valid when estimating back squat and bench press concentric velocity? J Strength Cond Res. 2021.
- 99. Pelaez Barrajon J, San Juan AF. Validity and reliability of a smartphone accelerometer for measuring lift velocity in bench-press exercises. Sustainability. 2020;12(6):2312.
- 100. Oberhofer K, Erni R, Sayers M, Huber D, Lüthy F, Lorenzetti S. Validation of a smartwatch-based workout analysis application in exercise recognition, repetition count and prediction of 1RM in the strength training-specific setting. Sports (Basel). 2021;9(9):118. pmid:34564323
- 101. Balsalobre-Fernández C, Marchante D, Muñoz-López M, Jiménez SL. Validity and reliability of a novel iPhone app for the measurement of barbell velocity and 1RM on the bench-press exercise. J Sports Sci. 2018;36(1):64–70. pmid:28097928
- 102. Cetin O, Isik O. Validity and reliability of the My Lift app in determining 1RM for deadlift and back squat exercises. Eur J Hum Mov. 2021;46:28–36.
- 103. Balsalobre-Fernández C, Xu J, Jarvis P, Thompson S, Tannion K, Bishop C. Validity of a smartphone app using artificial intelligence for the real-time measurement of barbell velocity in the bench press exercise. J Strength Cond Res. 2023;37(12):e640–5. pmid:38015739
- 104. De Sá EC, Medeiros AR, Ferreira AS, Ramos AG, Janicijevic D, Boullosa D. Validity of the iLOAD® app for resistance training monitoring. PeerJ. 2019;7:e7372.
- 105. Pérez-Castilla A, Boullosa D, García-Ramos A. Reliability and validity of the iLOAD application for monitoring the mean set velocity during the back squat and bench press exercises performed against different loads. J Strength Cond Res. 2021;35(Suppl 1):S57–65. pmid:33021586
- 106. Pérez-Castilla A, Boullosa D, García-Ramos A. Sensitivity of the iLOAD® application for monitoring changes in barbell velocity following power- and strength-oriented resistance training programs. Int J Sports Physiol Perform. 2021;16(7):1056–60. pmid:33662923
- 107. Kasovic J, Martin B, Carzoli JP, Zourdos MC, Fahs CA. Agreement between the iron path app and a linear position transducer for measuring average concentric velocity and range of motion of barbell exercises. J Strength Cond Res. 2021;35(Suppl 1):S95–101. pmid:33666594
- 108. Sánchez-Pay A, Courel-Ibáñez J, Martínez-Cava A, Conesa-Ros E, Morán-Navarro R, Pallarés JG. Is the high-speed camera-based method a plausible option for bar velocity assessment during resistance training? Measurement. 2019;137:355–61.
- 109. Jimenez-Olmedo JM, Penichet-Tomás A, Villalón-Gasch L, Pueo B. Validity and reliability of smartphone high-speed camera and Kinovea for velocity-based training measurement. J Hum Sport Exerc. 2020;16(4).
- 110. Sañudo B, Rueda D, Pozo-Cruz BD, de Hoyo M, Carrasco L. Validation of a video analysis software package for quantifying movement velocity in resistance exercises. J Strength Cond Res. 2016;30(10):2934–41. pmid:24918300
- 111. Pueo B, Lopez JJ, Mossi JM, Colomer A, Jimenez-Olmedo JM. Video-based system for automatic measurement of barbell velocity in back squat. Sensors (Basel). 2021;21(3):925. pmid:33573170
- 112. Tomasevicz CL, Hasenkamp RM, Ridenour DT, Bach CW. Validity and reliability assessment of 3-D camera-based capture barbell velocity tracking device. J Sci Med Sport. 2020;23(1):7–14. pmid:31421988
- 113. Laza-Cagigas R, Goss-Sampson M, Larumbe-Zabala E, Termkolli L, Naclerio F. Validity and reliability of a novel optoelectronic device to measure movement velocity, force and power during the back squat exercise. J Sports Sci. 2019;37(7):795–802. pmid:30306839
- 114. Peña García-Orea G, Belando-Pedreño N, Merino-Barrero JA, Heredia-Elvar JR. Validation of an opto-electronic instrument for the measurement of execution velocity in squat exercise. Sports Biomech. 2021;20(6):706–19. pmid:31124753
- 115. Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–25. pmid:20679844
- 116. Cooper H, Hedges LV, Valentine JC. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2019.
- 117. Kahn MG, Brown JS, Chun AT, Davidson BN, Meeker D, Ryan PB, et al. Transparent reporting of data quality in distributed data networks. EGEMS (Wash DC). 2015;3(1):1052. pmid:25992385
- 118. Atkinson G, Nevill A. Typical error versus limits of agreement. Sports Med. 2000;30(5):375–81. pmid:11103850
- 119. Arifin WN. Sample size calculator. 2024. Available from: http://example.com. Accessed 2024 December 5.