
Clinically interpretable electrovectorcardiographic machine learning criteria for the detection of echocardiographic left ventricular hypertrophy

  • Fernando De la Garza-Salazar,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    fernandodelagarza@gmail.com

    ‡ This affiliation was not involved in the design, execution, or funding of this research.

    Affiliations Independent Researcher, Monterrey, México, and Tecnológico de Monterrey, Escuela de Medicina, Avenida Ignacio Morones Prieto, Sertoma, Monterrey, Nuevo León, México

  • Brian Egenriether

    Roles Validation, Software

    Affiliation Independent Researcher, Charleston, South Carolina, United States of America

Abstract

Echocardiographic left ventricular hypertrophy (Echo-LVH) is frequently underdetected by traditional electrocardiogram (ECG) criteria due to limited sensitivity. We investigated whether integrating ECG with vectorcardiography (VCG) using a clinically interpretable machine learning algorithm (C5.0) could improve diagnostic performance. We analyzed ECG and VCG data from 664 patients, 42.8% of whom had Echo-LVH. The study introduced three new criteria—Marcos VCG, Marcos VCG-ECG, and Marcos VCG-ECGsp—named in honor of the software used for VCG synthesis, and compared their diagnostic performance against 23 established ECG criteria, including Cornell voltage, Peguero-Lo Presti, and Sokolow-Lyon. Marcos VCG-ECGsp, optimized for higher specificity, was included to evaluate trade-offs in performance. Validation was performed using train/test split and 10-fold cross-validation. Marcos VCG-ECG achieved higher AUC than Cornell voltage in both training (0.81 vs. 0.68, p < 0.0001) and testing (0.78 vs. 0.69, p = 0.04). The new criteria also showed superior sensitivity compared to Peguero-Lo Presti, the most sensitive traditional criterion (73.1%, 62.4%, 55.9% vs. 30.1%, p < 0.0001). While specificity was lower than Cornell (81.1% vs. 96.4%, p = 0.017), it remained acceptable, reflecting a clinically relevant trade-off favoring detection over false positives. In conclusion, integrating ECG with VCG through machine learning enhances Echo-LVH detection, delivering superior sensitivity while preserving specificity. The proposed criteria are clinically interpretable, highlight the novelty of combining two electrical spectra, and hold potential to impact routine diagnostic practice.

Introduction

Echocardiographic left ventricular hypertrophy (Echo-LVH) significantly predicts cardiovascular morbidity and mortality [1]. Electrocardiographic detection of LVH dates back to Einthoven’s initial descriptions in 1906 [2] and was further refined by criteria such as Cornell voltage (1985) [3] and Peguero-Lo Presti (2017) [4]. Despite decades of development [5], conventional ECG criteria still demonstrate poor sensitivity for Echo-LVH [6].

Vectorcardiography (VCG) [7] expanded electrocardiography by enabling three-dimensional representations of cardiac electrical activity [8,9]. Although early adoption was limited by hardware constraints [10], VCG synthesis via matrix multiplication became feasible with digital ECG systems in the 1980s [11–13]. Techniques such as the Inverse Dower matrix [11], Kors regression [12], and least-square VCG estimations (QLSV, PLSV) [13,14] have since facilitated advanced wave-specific analysis.

We recently developed Marcos, a VCG analysis software capable of synthesizing VCG using four matrix methods, quantifying global and intra-wave metrics (P, QRS, and T loops), and characterizing electrical loop morphology [15]. VCG metrics have shown diagnostic value in a variety of cardiovascular conditions, including ischemic heart disease, ventricular arrhythmias, cardiac resynchronization therapy, and myocardial infarction [16–24]. However, several VCG-derived metrics, particularly intra-waveform and loop morphology features, remain underexplored in the context of Echo-LVH detection.

Machine learning (ML) offers promising advances in electrodiagnostics [25,26]. Diverse algorithms, such as SVM, Random Forest, GLMNet, XGBoost, AdaBoost, and various neural networks, have achieved sensitivities ranging from 29% to 96.6% [26]. However, most models incorporate non-electrical data and operate as black-box systems, which limits their interpretability and application in purely electrical diagnostics [26].

The C5.0 algorithm, a transparent, white-box ML model evolved from decision trees, allows interpretable rule-based classification [27]. Previously, we applied C5.0 to 31 manually extracted ECG features, producing a model with 71.4% accuracy, 79.6% sensitivity, and 53% specificity [28]. An automated version using 458 ECG parameters achieved 70.5% accuracy, 74.3% sensitivity, and 68.7% specificity using only three ECG predictors [29].

We hypothesize that integrating VCG-derived parameters into an interpretable C5.0 machine learning model would improve the diagnostic accuracy and sensitivity for detecting Echo-LVH compared to conventional ECG criteria, while maintaining acceptable levels of specificity. The model was designed to rely exclusively on electrical signals (ECG and VCG) without the inclusion of demographic or clinical variables.

This study presents the development and validation of the Marcos VCG, a model based solely on vectorcardiographic features; Marcos VCG-ECG, which integrates both vectorcardiographic and electrocardiographic variables; and Marcos VCG-ECGsp, optimized for higher specificity. Using the C5.0 algorithm and an extensive pool of quantitative ECG/VCG parameters, these models were developed with the goal of improving upon conventional ECG criteria in the detection of Echo-LVH, while prioritizing interpretability and consistent diagnostic performance.

Methods

Study design and population

This retrospective, single-center study aimed to develop and internally validate a machine learning–based diagnostic prediction model for Echo-LVH. The analysis included adult patients who underwent both transthoracic echocardiography and 12‑lead ECG within seven days at the Cardiology Department of a tertiary hospital in Monterrey, Mexico, between 1 January 2016 and 31 August 2019. Ethical approval was obtained (CMHAE‑001‑19) with a waiver of individual consent, and the study adheres to the Declaration of Helsinki as well as STARD 2015 guidelines for diagnostic studies [30] and international guidelines for the development of machine learning models [31].

From 7 567 consecutive examinations, exclusions for age < 18 years, incomplete imaging, or predefined rhythm/structural abnormalities (full list in S1 Text) left 664 analysable patients. Clinical demographics and comorbidities (age, sex, BMI, BSA, hypertension, diabetes, etc.) were abstracted from electronic records; BSA was calculated with the Mosteller formula [32,33].
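The Mosteller formula referenced above is simple enough to reproduce directly. A minimal sketch (Python is used here purely for illustration; the study's analyses were performed in R):

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula:
    BSA = sqrt(height[cm] * weight[kg] / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

# Example: a 170 cm, 80 kg patient
print(round(bsa_mosteller(170, 80), 2))  # 1.94
```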

Imaging and signal acquisition

Transthoracic echocardiographic studies were acquired on Philips EPIQ7/IE33 scanners using 2‑D–guided M‑mode in accordance with ASE/EACVI standards [33]. End‑diastolic left ventricular internal diameter (LVID), interventricular septal thickness (IVST) and left ventricular posterior wall thickness (LVPWT) were measured, and left‑ventricular mass (LVM), mass index (LVMI) and relative wall thickness (RWT) were calculated using guideline formulas. LVH was defined as LVMI > 115 g/m² in men or > 95 g/m² in women [26,33]. Three cardiologists performed the readings with excellent agreement (κ = 0.91). Segmental hypokinesia or akinesia on echo was noted as echo‑detected ischaemic heart disease (IHD).
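The guideline calculations above can be sketched as follows. The cube (Devereux) formula and the sex-specific LVMI cut-offs come from the ASE/EACVI recommendations cited in the text; the measurements in the example are hypothetical, and Python is used only for illustration (the study's pipeline was in R):

```python
def lv_mass(ivst_cm: float, lvid_cm: float, lvpwt_cm: float) -> float:
    """ASE cube (Devereux) formula, in grams:
    LVM = 0.8 * 1.04 * [(IVST + LVID + LVPWT)^3 - LVID^3] + 0.6"""
    return 0.8 * 1.04 * ((ivst_cm + lvid_cm + lvpwt_cm) ** 3 - lvid_cm ** 3) + 0.6

def has_echo_lvh(lvmi: float, male: bool) -> bool:
    """Study definition: LVMI > 115 g/m^2 (men) or > 95 g/m^2 (women)."""
    return lvmi > (115 if male else 95)

# Hypothetical measurements: IVST 1.2 cm, LVID 5.0 cm, LVPWT 1.2 cm, BSA 1.9 m^2
lvm = lv_mass(1.2, 5.0, 1.2)   # ~233.7 g
lvmi = lvm / 1.9               # ~123 g/m^2
rwt = 2 * 1.2 / 5.0            # relative wall thickness = 0.48
print(round(lvmi), has_echo_lvh(lvmi, male=True))  # 123 True
```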

Standard 12‑lead ECGs were recorded on a Philips PageWriter TC50 (10 mm/mV, 25 mm/s). The built‑in DXL‑16 algorithm automatically extracted 458 quantitative parameters per tracing; the full list is provided in S1 Table.

For comparison with established electrocardiographic markers of LVH, 23 contemporary ECG criteria—including Cornell voltage, Peguero‑Lo Presti and Sokolow‑Lyon—were computed. Voltage‑duration‑product variants of each criterion were also generated to allow sensitivity analyses.

Digital 12‑lead ECGs were transformed into orthogonal X-, Y- and Z-loops using the Marcos software [15], which applies four validated transformation matrices. From each segmented P, QRS and T loop, 3 360 quantitative features were extracted, including isochronal velocity, angle and magnitude metrics [18–21,23,24,34–38]. Full details on the transformation, segmentation and feature extraction pipeline have been reported previously [15] and are available in Supplementary S2 Text.

Feature extraction and data preprocessing

Angular variables were expressed in radians; missing values were imputed (mean for symmetric, median for skewed distributions), and all continuous predictors were z‑score scaled to minimise unit‑driven bias. Models were trained on the standardised data, but decision‑tree cut‑offs are reported in original units for clinical interpretability.
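The preprocessing described above (mean or median imputation depending on skew, followed by z-score standardisation) can be sketched with Python's standard library; this is an illustration only, as the original pipeline was implemented in R:

```python
from statistics import mean, median, stdev

def impute(values, skewed=False):
    """Fill missing entries (None) with the median for skewed
    distributions, or the mean for symmetric ones."""
    observed = [v for v in values if v is not None]
    fill = median(observed) if skewed else mean(observed)
    return [fill if v is None else v for v in values]

def zscore(values):
    """Standardise to zero mean and unit (sample) variance."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# A right-skewed toy column: median imputation is more robust here
col = impute([1.0, 2.0, None, 4.0, 100.0], skewed=True)
print(col)  # [1.0, 2.0, 3.0, 4.0, 100.0]
print(round(zscore(col)[0], 2))
```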

Records were randomly allocated 70% to a training set (n = 460) and 30% to a test set (n = 204); age, sex, comorbidities and Echo‑LVH distribution were comparable between sets (all p > 0.05; Table 1).

Table 1. Demographic and echocardiographic characteristics of the study population.

https://doi.org/10.1371/journal.pone.0334829.t001

High‑dimensional feature spaces were pruned with Lasso regression applied separately to VCG and ECG variables [39,40], retaining the subset that maximised cross‑validated AUC in the training data.
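Lasso's pruning behaviour comes from the soft-thresholding operator at the core of its coordinate-descent solvers, which shrinks weak coefficients exactly to zero. A minimal illustration (Python sketch; the feature names are taken from the paper but the coefficient values are invented, and the study itself used R):

```python
def soft_threshold(coef: float, lam: float) -> float:
    """Lasso proximal step: shrink toward zero, and clip to exactly
    zero when |coef| <= lam -- this is what drops weak predictors."""
    if coef > lam:
        return coef - lam
    if coef < -lam:
        return coef + lam
    return 0.0

# Invented coefficients for two real feature names and two noise features
raw = {"QRSMagP1": 0.9, "TVelQ5": -0.6, "noise_a": 0.05, "noise_b": -0.1}
kept = {k: soft_threshold(v, 0.2) for k, v in raw.items()}
selected = [k for k, v in kept.items() if v != 0.0]
print(selected)  # ['QRSMagP1', 'TVelQ5']
```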

Model development and evaluation

Feature‑reduced datasets were modelled with the white‑box C5.0 decision‑tree algorithm [41]. Separate C5.0 models were trained using: (i) VCG predictors alone, (ii) combined VCG + ECG predictors, and (iii) a variant tuned for higher specificity via cost‑matrix weighting. Hyperparameters (minCases, winnowing, cost matrix) were optimised by grid search within the training set; trees were pruned automatically to prevent over‑fitting [42]. A fixed probability threshold of 0.50 defined Echo‑LVH presence, consistent with previous work [28,29]. Final models were selected based on training‑set AUC and rule simplicity and subsequently assessed on the independent test set.
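The appeal of C5.0 here is that each split is a readable cut-off on a single feature. A toy sketch of how such a split is chosen, via greedy entropy minimisation (the principle behind C4.5/C5.0); this is an illustration, not the C5.0 implementation actually used, which was R's C50 package, and the voltages and labels below are invented:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs)

def best_split(values, labels):
    """Threshold on one feature minimising the weighted entropy
    of the two resulting leaves (a single greedy tree split)."""
    best = (float("inf"), None)
    for t in sorted(set(values))[:-1]:
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        best = min(best, (w, t))
    return best[1]

# Hypothetical Cornell-voltage-like values (mV) with Echo-LVH labels
volts = [1.1, 1.3, 1.6, 2.1, 2.4, 2.9]
lvh   = [0,   0,   0,   1,   1,   1]
print(best_split(volts, lvh))  # 1.6 -> rule reads "LVH if voltage > 1.6"
```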

Statistical analysis

Continuous and categorical variables were summarized as mean ± SD and n (%), respectively. Normality was assessed using the Kolmogorov–Smirnov test. Group differences between training and test sets were evaluated using Student’s t-test for continuous variables and χ² or Fisher’s exact test for categorical variables, as appropriate; continuous variables were log-transformed when normality assumptions were not met.

Model performance for Echo‑LVH detection was evaluated using AUC (pROC package) with DeLong comparisons [43,44]. Accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1‑score were computed from confusion matrices [45,46]. Traditional ECG criteria were compared using McNemar’s test with Bonferroni correction [45].
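The reported metrics all derive from a 2×2 confusion matrix. For reference, a sketch in Python (the counts below are invented for illustration, not the study's actual test-set matrix):

```python
def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                   # sensitivity (recall)
    spec = tn / (tn + fp)                   # specificity
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)   # accuracy
    f1 = 2 * ppv * sens / (ppv + sens)      # harmonic mean of PPV and sensitivity
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv, "acc": acc, "f1": f1}

# Hypothetical counts: 100 diseased, 100 healthy
m = diagnostics(tp=70, fp=20, fn=30, tn=80)
print({k: round(v, 3) for k, v in m.items()})
```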

Internal validation used 10‑fold cross‑validation within the training set [47]. Subgroup analyses considered sex, age > 60 years, comorbidities (e.g., hypertension, obesity), Echo‑LVH geometry, and severity grades.

Correct classification rates (CCR) were reported for each left ventricle geometry and LVH severity category. All analyses were conducted in RStudio (C50, ggplot2) [27,48], with p < 0.05 considered significant.

Sample size determination and data integrity

An a priori power analysis (R pwr package) indicated that ≥ 178 Echo‑LVH cases and 178 controls would provide 80% power (α = 0.05) to detect a 10‑percentage‑point improvement in sensitivity over conventional ECG criteria (30% to 40%) [49].
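The sample-size figure can be reproduced with the standard two-proportion calculation based on Cohen's arcsine effect size h, the approach used by R's pwr.2p.test; a Python re-derivation for illustration, with the normal quantiles hard-coded:

```python
import math

def n_per_group(p1: float, p2: float) -> int:
    """Per-group n to compare two proportions at alpha = 0.05
    (two-sided) and 80% power, via Cohen's arcsine effect size h."""
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    z_alpha = 1.959964  # standard normal quantile for two-sided alpha = 0.05
    z_beta = 0.841621   # standard normal quantile for power = 0.80
    return math.ceil(((z_alpha + z_beta) / h) ** 2)

print(n_per_group(0.30, 0.40))  # 178, matching the a priori estimate
```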

Results

Baseline characteristics

A total of 664 patients were analyzed, with 42.8% (n = 284) presenting Echo-LVH. Table 1 compares demographic and echocardiographic data between the training (n = 460) and testing (n = 204) sets. Both cohorts had similar Echo-LVH prevalence, LV geometries, and comorbidity distribution, supporting sample comparability (Table 1).

Model development and performance

Lasso regression identified 33 VCG and 26 ECG variables as relevant (S4 Table). The final Marcos decision tree models (C5.0) selected 11 of these as core predictors (Fig 2 and Table 2).

Table 2. Description of electrocardiographic and vectorcardiographic features relevant to Echo-LVH detection.

https://doi.org/10.1371/journal.pone.0334829.t002

Fig 1. Methodological Flowchart for Enhanced Echo-LVH Detection.

Fig 1 shows the methodological approach for detecting Echo-LVH. (A) Upper panel describes the patient cohort selection and extraction of VCG and ECG parameters. (B) Bottom panel outlines the use of statistical and machine learning techniques, including Lasso regression and the C5.0 algorithm, to develop the Marcos VCG and Marcos VCG-ECG criteria. Abbreviations: C5.0: Decision Tree Machine Learning Algorithm, ECG: Electrocardiography, Echo-LVH: Echocardiographic Left Ventricular Hypertrophy, LASSO: Least Absolute Shrinkage and Selection Operator, VCG: Vectorcardiography.

https://doi.org/10.1371/journal.pone.0334829.g001

The Marcos VCG model, based on five vectorcardiographic features, includes atrial and ventricular depolarization and repolarization metrics. It achieved an internal validation accuracy of 76.1% (95% CI: 61.3%–87.3%), with sensitivity of 59.6%, specificity of 87.8%, PPV of 78.2%, and NPV of 75.5%. These results were consistent in both the training (accuracy 76.1%, F1-score 0.674) and testing sets (72.1%, F1-score 0.645) (Table 3).

Table 3. Performance metrics of the Marcos models in training and testing sets.

https://doi.org/10.1371/journal.pone.0334829.t003

Similarly, the Marcos VCG-ECG model, combining six ECG and VCG variables, showed improved diagnostic balance by jointly capturing depolarization and repolarization patterns. It yielded an accuracy of 76.7% (95% CI: 61.2–87.9%) in internal validation, with high sensitivity (77.4%) and NPV (83.3%). Its performance remained robust in training (F1-score 0.734) and testing (F1-score 0.731) cohorts (Table 3).

The simplified Marcos VCG-ECGsp model demonstrated an internal accuracy of 75.4% (95% CI: 60.6–86.8%), sensitivity of 63.3%, and specificity of 84% (S5 Table). While slightly reducing sensitivity (63.3% and 63.4%) in favor of specificity (84% and 81.1%), its performance remained consistent across training (75.4%) and testing (72.6%) sets (Table 3). This model may be particularly useful when the clinical priority is reducing false positives.

Head‑to‑head comparison with ECG criteria

The Marcos criteria outperformed 23 classical ECG models in diagnostic accuracy, sensitivity, and F1-score (Fig 3, Table 4). The Marcos VCG-ECG criteria achieved higher AUC than the Cornell voltage criterion in both training (0.81 vs. 0.68, p < 0.0001) and testing (0.78 vs. 0.69, p = 0.04). Compared to our approach, classic ECG models showed 11.8%–27.2% lower accuracy, 43%–73.1% lower sensitivity, and F1-score reductions of 0.298–0.71 (S7 Table).

Table 4. Diagnostic performance of Marcos models versus classic ECG criteria in test set.

https://doi.org/10.1371/journal.pone.0334829.t004

Fig 2. Decision tree algorithms for Echo-LVH detection using Marcos VCG and ECG criteria.

C5.0 decision trees for (A) Marcos VCG, (B) Marcos VCG‑ECG and (C) Marcos VCG‑ECGsp. Bar height at each leaf indicates the number of patients classified as Echo‑LVH (dark) versus non‑LVH (light). Abbreviations: ANG_TERM_NUM: Magnitude of the ventricular depolarization terminal vector (mV), ECG 21: Cornell voltage criteria (mV), GAV_K: Geometric area vector of ventricular depolarization (None), PMagP2: Magnitude of auricular depolarization near the onset (2nd part of the loop) (μV), POrbFrqD5: Orbital frequency of auricular depolarization in the middle (5th part of the loop) (ms⁻¹), QRSMagP1: Ventricular depolarization magnitude at the onset (first part of the loop) (μV), QRSMagPf9: Ventricular depolarization magnitude at the offset (9th part of the loop) (μV), RangeAngR: Rotational angle range of ventricular depolarization (°), T AREA DI: T wave area in DI (Ashman Units), TVelQ5: Ventricular repolarization velocity in the middle of the loop (μV/ms), Vel Min Kf T: Minimum velocity of ventricular repolarization (μV/ms).

https://doi.org/10.1371/journal.pone.0334829.g002

Fig 3. Receiver Operating Characteristic (ROC) curves comparison.

ROC curves for Marcos VCG‑ECG versus the best conventional criterion (Cornell voltage) in the training cohort (A, n = 460) and the independent test cohort (B, n = 204). Marcos VCG‑ECG achieved a significantly higher AUC in both sets (DeLong p < 0.0001 and p = 0.04). Abbreviations: AUC: Area Under the Curve, ROC: Receiver Operating Characteristic, ECG: Electrocardiography, Echo-LVH: Echocardiographic Left Ventricular Hypertrophy, VCG: Vectorcardiography.

https://doi.org/10.1371/journal.pone.0334829.g003

The VCG-ECG model had the highest sensitivity and F1-score overall, significantly outperforming the VCG-only, VCG-ECGsp, Cornell voltage, and Peguero-Lo Presti criteria (all p < 0.05 for sensitivity) (Fig 4). Cornell voltage was more specific than VCG-ECG and VCG-ECGsp, but not VCG (p = 0.19). Peguero showed no significant specificity advantage over any Marcos model.

Fig 4. Sensitivity and specificity comparisons across the proposed criteria and the most sensitive ECG criteria.

Comparative bar plots of sensitivity (A) and specificity (B) for Marcos models versus the two most sensitive ECG criteria. P values were obtained with the McNemar test and adjusted using Bonferroni correction for multiple comparisons; adjusted values were truncated to 1.0 when exceeding that threshold.

https://doi.org/10.1371/journal.pone.0334829.g004

In summary, the Marcos criteria substantially improve sensitivity while maintaining specificity within acceptable ranges.

Subgroup analyses

The proposed models maintained consistent performance across key subgroups (Table 5). The VCG-only criteria demonstrated superior accuracy in females and patients with IHD on echo, populations where classic ECG criteria are known to have limitations. Although Cornell voltage and Peguero-Lo Presti showed high specificity (>90%), their sensitivity remained low across all subpopulations (15.6%–42.9%).

Table 5. Subanalysis of diagnostic performance of Marcos criteria in different populations in test set.

https://doi.org/10.1371/journal.pone.0334829.t005

In geometric classification (Table 6), Cornell voltage and Peguero-Lo Presti achieved higher CCR for normal geometry (+24.4% and +14.6%) and concentric remodeling (+15.7% each), reflecting their conservative profile. Despite this, Marcos VCG maintained high CCRs: 82.9% (normal geometry), 87.1% (concentric remodeling), and 85.6% (normal LVMI).

Table 6. Correct classification rate of best criteria in test set.

https://doi.org/10.1371/journal.pone.0334829.t006

The VCG-ECG model achieved the highest CCRs for pathologic patterns—concentric LVH (72.7%), eccentric LVH (75%), and all grades of severity: 77.1% (mild), 63.6% (moderate), and 63.9% (severe). In contrast, Cornell voltage and Peguero-Lo Presti underperformed with lower CCRs in these categories (Table 6).

Discussion

Main findings and diagnostic innovation

In this study, we developed three clinically interpretable C5.0 machine learning models that integrate VCG and ECG data without relying on demographic or clinical variables to enhance the detection of Echo-LVH (Fig 1). Remarkably, in specific groups such as patients without hypertension or those under 60 years old, the proposed criteria achieved accuracies exceeding 80% (Table 5 and S8 Table). Additionally, by performing a detailed analysis of P, QRS, and T loops, including intraloop dynamics, our approach offers a complementary diagnostic perspective that extends beyond current ECG or VCG criteria. These findings support our hypothesis that VCG and ECG integration into an interpretable model enhances sensitivity and diagnostic accuracy, while maintaining clinically acceptable specificity. Importantly, all predictions were generated through fully automated ECG and VCG interpretation, illustrating the potential of automated systems to support clinical decision-making [50,51].

Advancing VCG interpretation

Early studies highlighted VCG’s ability to analyze complex QRS morphology, ST segments, T wave configurations, and QRS-T angles, demonstrating its diagnostic utility for Echo-LVH [52]. Subsequent research revealed significant changes in QRS loop forces and durations in patients with LVH, furthering our understanding of its pathophysiology [53]. Unlike previous studies that primarily focused on descriptive loop analysis, such as loop-specific configurations like loop folding, our research employs an automated, quantitative approach to VCG analysis [54]. Notably, a study showed that normal VCG variables were highly specific for normal LVM [55]. This aligns with our findings, where the VCG model demonstrated the highest specificity among the proposed criteria (Table 3) [55].

A description of the Marcos criteria

The ventricular depolarization activation gradient, or GAV, consistently used in the VCG model (100% usage), indicates distortions like concavities previously identified in Echo-LVH cases [23]. Our findings show an increase in the minimum velocity of ventricular repolarization (Vel Min Kf T) in patients with Echo-LVH, aligning with research indicating that hypertrophied myocardium can exhibit a similar behavior during depolarization [56].

The Marcos VCG-ECG version exhibits improved accuracy (75.5%) over the VCG model (72.1%), with well-balanced sensitivity (73.1%) and specificity (77.5%) (McNemar test p-value: 0.51) (Figs 2 and 4, Table 3). The VCG-ECGsp criteria, streamlined for simplicity, offer enhanced specificity over the Marcos VCG-ECG model, albeit with a modest significant reduction in sensitivity (Fig 2 and Table 3). Integrating VCG and ECG metrics, particularly the Cornell voltage criterion, renowned for its efficacy in predicting major adverse cardiovascular events [57], illustrates a combined diagnostic approach that capitalizes on the complementary features of both signal types. Additionally, the inclusion of variables that explore the ventricular repolarization area (low area under T wave in lead DI), an early marker of repolarization alteration in patients with hypertension, underscores the potential in detecting hypertensive heart disease-related changes [58].

Comparative value of VCG versus ECG criteria

Our analysis reveals that, compared to Peguero-Lo Presti and Cornell voltage, the VCG-ECG model exhibits higher accuracy (75.5%) and sensitivity (73.1%), although with slightly reduced specificity (Table 4 and Fig 4). Notably, the VCG-only version demonstrated significantly higher sensitivity than both Cornell and Peguero–Lo Presti criteria, while maintaining comparable specificity, suggesting it may serve as a useful complement to traditional ECG criteria in certain clinical contexts (Fig 4).

The first study of Cornell voltage criteria resulted in a sensitivity for Echo-LVH of 41%, specificity of 90%, and accuracy of 68% [3], while Peguero-Lo Presti’s first publication reported a sensitivity of 62% with a specificity of 90% [4]. However, subsequent data have shown highly variable results. For instance, a meta-analysis of over 13,000 patients revealed that the Peguero-Lo Presti and Cornell voltage criteria achieved accuracies and sensitivities of approximately 69% and 52% and 67% and 92%, respectively, aligning with our observations [59]. Other investigations have shown similar performance trends in older adult populations with Peguero-Lo Presti’s and the Sokolow-Lyon voltage index [60]. Another study noted Peguero-Lo Presti’s enhanced sensitivity at 55%, despite a specificity decrease to 72% [61]. A separate meta-analysis reported Peguero-Lo Presti had the highest pooled sensitivity (43%) followed by Cornell voltage criteria (26.1%) and Sokolow-Lyon (22%) [62].

Research across diverse regions, including China and Brazil, supports these patterns, reporting comparable AUCs (0.6 and 0.69), accuracies (33.8% and 56.3%), sensitivities (15–31.9% and 6–41%), and specificities (>90% and >78.5%) across various ECG-based criteria [63,64]. Our sensitivity results align with prior reports ranging from 17.5% to 29.9% across various cohorts [6568].

Recently, it has been acknowledged that the superiority of Peguero-Lo Presti for diagnosing Echo-LVH or predicting major cardiovascular outcomes over other ECG strategies “does not appear to be sufficiently proven” [69]. It is worth noting that a multivariable approach may offer advantages for Echo-LVH detection due to its capacity to integrate diverse electrical features, acknowledging the heterogeneity of electrocardiographic alterations in this condition (Fig 2) [70].

Detection performance by Echo-LVH geometry

Moreover, our study demonstrates more consistent detection of both concentric and eccentric variants of Echo-LVH than standard ECG-based methods (Table 6). Previous studies have shown that Cornell voltage sensitivity for these geometries is 29.3% for concentric and 14.9% for eccentric with specificity at 96.5% for both [63]. Another analysis found that the accuracy of ECG in identifying concentric Echo-LVH ranged from 44.5% to 58.4%, with sensitivities as low as 5% to 33.9% and showed similarly limited performance in eccentric cases [64].

Additionally, another study in patients with essential hypertension using linear regression showed that RaVL + SD voltage in male subjects followed an increasing trend across the spectrum from normal geometry to concentric remodeling, concentric hypertrophy, and finally eccentric hypertrophy. However, traditional ECG metrics exhibited reduced performance compared to the VCG-ECG approach, which showed higher sensitivity for detecting both concentric (+50.6% and +42.8%) and eccentric (+56.2%) patterns (Table 6) [71].

Performance across diverse subgroups

Our results demonstrate improved performance in subgroups where ECG-based detection is often limited, particularly in normotensive, younger, or obese individuals. Age, sex, obesity, hypertension, and myocardial ischemia can affect the performance of ECG criteria for Echo-LVH. A study on patients with hypertension reported lower sensitivities (10%−17.5%) compared to our findings [72]. In line with this, we observed that Cornell voltage and Peguero-Lo Presti showed reduced accuracy in hypertensive individuals (51.7% and 58.4%) versus non-hypertensive counterparts [72]. In contrast, the Marcos VCG-ECG criteria showed a higher accuracy in both groups: 71.2% and 82.5%, respectively (Table 5 and S8 Table).

Compared with normal-weight individuals, obese and overweight patients had lower Sokolow-Lyon voltage and a reduced prevalence of ECG-detected LVH using this criterion (31.4% versus 16.2% versus 10.9%, p < 0.001) [73]. Another report found that Cornell voltage sensitivity significantly declined in patients with BMI > 30 kg/m² relative to those with a BMI ≤ 25 kg/m² [74]. As a result, standard ECG benchmarks often fail to achieve sufficient sensitivity (0–20%) in obese populations, even after BMI correction [75].

In patients with IHD on echo, loss of ventricular mass may result in voltage reduction, further diminishing ECG sensitivity [76]. In our study, IHD on echo adversely impacted the performance of all evaluated criteria except the VCG model, which maintained a high accuracy (78.6%) (Table 5).

Comparison with other machine learning models

Several machine learning models have been proposed for the electrocardiographic detection of Echo-LVH, with varying degrees of performance depending on population characteristics, input variables, and model complexity (S9 Table) [7787]. These approaches typically combine clinical, laboratory, and ECG parameters, and frequently utilize black-box algorithms such as random forests, convolutional neural networks, or ensemble methods. While some achieved high specificity or F1-scores, few provided a balanced combination of accuracy, sensitivity, and interpretability (S8 Table). Notably, only one study incorporated VCG and ECG for model construction, achieving a high F1-score but lacking detailed reporting of predictors [87]. In contrast, the Marcos criteria rely solely on ECG and VCG signals and require fewer inputs (S8 Table).

Study limitations and methodological considerations

The retrospective and single-center nature of our study, alongside a modest cohort size, underscores inherent constraints, suggesting the preliminary nature of our results. Although internal validation was robust, no external validation was performed, and the applicability of the criteria to other populations remains to be established. In addition, the Marcos software used for VCG synthesis, while previously validated, is not yet widely available, which may limit immediate reproducibility.

The primary intention of this study is not to declare the proposed models as universally superior to ECG benchmarks but to introduce and evaluate a novel methodological framework: combining VCG synthesis, electrovectorcardiographic quantification, and interpretable machine learning for optimizing Echo-LVH detection. This work serves as a proof-of-concept exploration in a cohort of Mexican patients diagnosed with Echo-LVH.

Finally, patients with conditions potentially affecting ECG/VCG interpretation (e.g., bundle branch block, paced rhythms) were excluded, limiting generalizability in such scenarios.

Overall, the results are consistent with our hypothesis: integrating VCG with ECG using a white-box machine learning model like C5.0 improved diagnostic accuracy and showed a notable gain in sensitivity compared to traditional ECG criteria. Although specificity did not exceed that of the most specific ECG criterion (Cornell voltage), it remained within clinically acceptable ranges across all proposed models. These findings support the exploratory implementation of electrovectorcardiographic decision trees in diagnostic workflows for Echo-LVH.

Future directions and clinical translation

Building on these findings, future research will leverage publicly available datasets to externally validate optimized Marcos-based models across diverse populations and cardiovascular conditions. In addition, the predictive potential of the Marcos software might be explored beyond Echo-LVH by developing models to forecast major adverse cardiovascular events (MACE) and other cardiac conditions, including arrhythmic and structural heart diseases. Finally, further development of the software to incorporate additional electrophysiological spectra may help expand its diagnostic scope and potential translational utility.

Conclusion

In conclusion, the detection of Echo-LVH through the integration of electrovectorcardiography and the ML C5.0 algorithm achieved higher sensitivity and overall accuracy than traditional ECG criteria, while maintaining clinically acceptable specificity. Among all models, the VCG-ECG model demonstrated the best overall performance. Additionally, the VCG model showed significantly higher sensitivity than both the Cornell voltage and Peguero–Lo Presti criteria, while preserving comparable specificity—highlighting its potential as a streamlined alternative in select clinical scenarios. Given its interpretability and diagnostic accuracy, the Marcos VCG-ECG criteria may be further explored for integration into ECG/VCG software to support LVH screening in clinical practice.

Supporting information

S1 Table. List of electrocardiogram (ECG) parameters analyzed by the Philips DXL-16 algorithm.

https://doi.org/10.1371/journal.pone.0334829.s001

(DOCX)

S2 Table. Diagnostic criteria and corresponding ECG cut-values by various authors.

https://doi.org/10.1371/journal.pone.0334829.s002

(DOCX)

S3 Table. Parameters of vectorcardiography (VCG) and their clinical descriptions.

https://doi.org/10.1371/journal.pone.0334829.s003

(DOCX)

S4 Table. Low-prevalence comorbidities and conduction disorders in the training and test cohorts.

https://doi.org/10.1371/journal.pone.0334829.s004

(DOCX)

S5 Table. Dimensionality reduction of VCG and ECG parameters with Lasso regression.

https://doi.org/10.1371/journal.pone.0334829.s005

(DOCX)

S6 Table. Ten-fold cross validation for the proposed Marcos VCG, VCG-ECG, and VCG-ECGsp criteria.

https://doi.org/10.1371/journal.pone.0334829.s006

(DOCX)

S7 Table. Diagnostic performance of Marcos models vs all classic ECG criteria in test set.

https://doi.org/10.1371/journal.pone.0334829.s007

(DOCX)

S8 Table. Subanalysis of diagnostic performance of Marcos criteria in different populations in test set (complement).

https://doi.org/10.1371/journal.pone.0334829.s008

(DOCX)

S9 Table. Comparative performance and design characteristics of published machine learning models for detecting Echo-LVH based on ECG and/or VCG features.

https://doi.org/10.1371/journal.pone.0334829.s009

(DOCX)

S1 Dataset. Anonymized dataset used to reproduce C5.0 models.

https://doi.org/10.1371/journal.pone.0334829.s010

(CSV)

S1 File. R script for training and testing C5.0 models (Marcos criteria).

https://doi.org/10.1371/journal.pone.0334829.s011

(R)

S1 Text. Participant Selection and Cohort Characteristics (Condensed).

https://doi.org/10.1371/journal.pone.0334829.s012

(DOCX)

S2 Text. VCG Data Synthesis and Software Validation.

https://doi.org/10.1371/journal.pone.0334829.s013

(DOCX)

Acknowledgments

We would like to thank Diana Lorena Lankenau-Vela and Laura Celia Salazar-Salazar for their invaluable assistance with manual tasks that significantly contributed to the completion of this work.

  87. 87. Kataoka Y, Tomoike H. Spatial Feature Extraction of Vectorcardiography via Minimum Volume Ellipsoid Enclosure in Classifying Left Ventricular Hypertrophy. Annu Int Conf IEEE Eng Med Biol Soc. 2021;2021:625–8. pmid:34891371