Opportunistic screening for atrial fibrillation by clinical pharmacists in UK general practice during the influenza vaccination season: A cross-sectional feasibility study

Background
The growing prevalence of atrial fibrillation (AF) in the ageing population, and its associated life-changing health and resource implications, has created a need to improve its early detection. Primary care is an ideal place to screen for AF; however, this is limited by shortages in general practitioner (GP) resources. Recent increases in the number of clinical pharmacists within primary care make them ideally placed to conduct AF screening. This study aimed to determine the feasibility of GP practice–based clinical pharmacists screening the over-65s for AF, using digital technology and pulse palpation, during the influenza vaccination season.

Methods and findings
Screening was conducted over two influenza vaccination seasons, 2017–2018 and 2018–2019, in four GP practices in Kent, United Kingdom. Pharmacists were trained by a cardiologist to pulse palpate and to record and interpret a single-lead ECG (SLECG). Eligible persons aged ≥65 years (y) attending an influenza vaccination clinic were offered a free heart rhythm check. In total, 604 participants were screened (median age 73 y; 42.7% male). Overall prevalence of AF was 4.3%. All participants with AF qualified for anticoagulation and were more likely to be male (57.7%), to be older, to have an increased body mass index (BMI), and to have a CHA2DS2-VASc (Congestive heart failure, Hypertension, Age ≥ 75 years, Diabetes, previous Stroke, Vascular disease, Age 65–74 years, Sex category) score ≥ 3. The sensitivity and specificity of clinical pharmacists diagnosing AF using pulse palpation were 76.9% (95% confidence interval [CI] 56.4–91.0) and 92.2% (95% CI 89.7–94.3), respectively. These rose to 88.5% (95% CI 69.9–97.6) and 97.2% (95% CI 95.5–98.4) with an SLECG. At follow-up, four participants (0.7%) were diagnosed with new AF and three (0.5%) were initiated on anticoagulation. Screening with SLECG also helped identify new non-AF cardiovascular diagnoses, such as left ventricular hypertrophy, in 28 participants (4.6%). The screening strategy was cost-effective in 71.8% of the estimates for SLECG and 64.3% for pulse palpation. Feedback from participants (422/604) was generally positive. Key limitations of the study were that the intervention did not reach individuals who did not attend the practice for an influenza vaccination, and that UK ethnic minority groups were under-represented in the study cohort.

Conclusions
This study demonstrates that AF screening performed by GP practice–based pharmacists was feasible, economically viable, and positively endorsed by participants. Furthermore, diagnosis of AF by the clinical pharmacist using an SLECG was more sensitive and more specific than the use of pulse palpation alone. Future research should explore the key barriers preventing the adoption of national screening programmes.

STARD 2015

AIM
STARD stands for "Standards for Reporting Diagnostic accuracy studies". This list of items was developed to contribute to the completeness and transparency of reporting of diagnostic accuracy studies. Authors can use the list to write informative study reports. Editors and peer reviewers can use it to evaluate whether the necessary information has been included in manuscripts submitted for publication.

EXPLANATION
A diagnostic accuracy study evaluates the ability of one or more medical tests to correctly classify study participants as having a target condition. This can be a disease, a disease stage, response or benefit from therapy, or an event or condition in the future. A medical test can be an imaging procedure, a laboratory test, elements from history and physical examination, a combination of these, or any other method for collecting information about the current health status of a patient.
The test whose accuracy is evaluated is called the index test. A study can evaluate the accuracy of one or more index tests.
Evaluating the ability of a medical test to correctly classify patients is typically done by comparing the distribution of the index test results with those of the reference standard. The reference standard is the best available method for establishing the presence or absence of the target condition. An accuracy study can rely on one or more reference standards.
If test results are categorized as either positive or negative, the cross tabulation of the index test results against those of the reference standard can be used to estimate the sensitivity of the index test (the proportion of participants with the target condition who have a positive index test), and its specificity (the proportion without the target condition who have a negative index test). From this cross tabulation (sometimes referred to as the contingency or "2x2" table), several other accuracy statistics can be estimated, such as the positive and negative predictive values of the test. Confidence intervals around estimates of accuracy can then be calculated to quantify the statistical precision of the measurements.
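The accuracy statistics described above can be sketched in a few lines of Python. The 2x2 counts below are hypothetical, chosen to be consistent with the study's reported pulse-palpation figures (76.9% sensitivity corresponds to 20 of 26 AF cases detected; 92.2% specificity to 533 of 578 non-AF participants correctly classified); the Wilson score interval used here is one common choice, and the paper may have used a different interval method, so the bounds are illustrative only.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical 2x2 table: index test result (rows) vs reference standard (columns).
tp, fn = 20, 6    # condition present: index test positive / negative
fp, tn = 45, 533  # condition absent:  index test positive / negative

sensitivity = tp / (tp + fn)  # proportion with the condition who test positive
specificity = tn / (tn + fp)  # proportion without the condition who test negative
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sensitivity:.1%} (95% Wilson CI {lo:.1%} to {hi:.1%})")
print(f"specificity {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```

Swapping in the SLECG counts would reproduce the same calculation for the second index test; only the four cell counts change.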
If the index test results can take more than two values, categorization of test results as positive or negative requires a test positivity cut-off. When multiple such cut-offs can be defined, authors can report a receiver operating characteristic (ROC) curve, which graphically represents the combination of sensitivity and specificity for each possible test positivity cut-off. The area under the ROC curve summarises the overall diagnostic accuracy of the index test in a single numerical value.
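As a minimal sketch of this idea, the snippet below sweeps every possible cut-off over a small synthetic set of continuous index-test scores, collects one (false-positive rate, true-positive rate) point per cut-off, and integrates the resulting ROC curve with the trapezoidal rule. The scores and labels are invented for illustration; they are not data from the study.

```python
# Hypothetical continuous index-test scores and true condition status (1 = present).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]

pos = sum(labels)
neg = len(labels) - pos

# Each distinct cut-off yields one (FPR, TPR) point on the ROC curve.
points = []
for cut in sorted(set(scores), reverse=True):
    tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
    points.append((fp / neg, tp / pos))
points = [(0.0, 0.0)] + points  # anchor the curve at the origin

# Trapezoidal rule over consecutive points gives the area under the curve.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.2f}")  # prints "AUC = 0.75" for this synthetic data
```

An AUC of 0.5 corresponds to a test no better than chance, while 1.0 corresponds to perfect discrimination between participants with and without the target condition.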
The intended use of a medical test can be diagnosis, screening, staging, monitoring, surveillance, prediction or prognosis. The clinical role of a test explains its position relative to existing tests in the clinical pathway. A replacement test, for example, replaces an existing test. A triage test is used before an existing test; an add-on test is used after an existing test.
Besides diagnostic accuracy, several other outcomes and statistics may be relevant in the evaluation of medical tests. Medical tests can also be used to classify patients for purposes other than diagnosis, such as staging or prognosis. The STARD list was not explicitly developed for these other outcomes, statistics, and study types, although most STARD items would still apply.

DEVELOPMENT
This STARD list was released in 2015. The 30 items were identified by an international expert group of methodologists, researchers, and editors. The guiding principle in the development of STARD was to select items that, when reported, would help readers to judge the potential for bias in the study, to appraise the applicability of the study findings and the validity of conclusions and recommendations. The list represents an update of the first version, which was published in 2003.
More information can be found at http://www.equator-network.org/reporting-guidelines/stard.