Reliability and validation of an attitude scale regarding responsible conduct in research

Background Several studies reveal a problematic prevalence of research misbehaviors. There are several potential causes of research misconduct but ensuring that scientists hold attitudes that reflect norms of acceptable behaviors is fundamental. Aim Our aim was to evaluate the psychometric properties (factor structure and reliability) of an “attitude” scale that we adopted from a questionnaire we previously used to investigate the prevalence of research misbehaviors in the Middle East. Methods We used data from participants (n = 254) who were involved in our prior questionnaire study to determine the validity of an attitude scale that we adapted from this previous study. We performed exploratory factor analysis (EFA) to determine the factor structure of the attitude scale followed by measures of convergent and concurrent validity. We assessed reliability by computing the Cronbach’s alphas of each construct of the attitude scale. Results EFA indicated that the attitude scale consists of two factors (constructs). Convergent validity was demonstrated by significant correlations of item-item and item-total. Correlation analysis revealed that the attitude constructs were significantly correlated with the Research Misbehavior Severity Score, thereby demonstrating concurrent validity. Cronbach’s alphas were greater than 0.75 for both constructs. Conclusion We demonstrated a valid and reliable 20-item attitude scale with two factors related to “acceptability of practices in responsible conduct in research” and “general attitudes regarding scientific misconduct”. The use of a validated attitude scale can help assess the effectiveness of educational programs that focus on participants acquiring attitudes that are instrumental in responsible conduct in research.



Development of the item pool of the "attitude" scale
From our previous study (7), we developed the item pool of the attitude scale section of the questionnaire from a review of the existing literature and previous questionnaires (deductive approach). These published resources provided an initial framework for the item pool, which was expanded after discussions among the research team members. We next assessed content validity (CV) with an expert panel of five investigators with knowledge and expertise on RCR. We asked the experts to individually review and rate the items' relevancy on a 4-point Likert scale (not relevant, somewhat relevant, quite relevant, very relevant). We deleted items if two or more experts assessed them as "not relevant". We conceptually hypothesized that 21 attitude questions from our previous questionnaire consisted of two constructs or factors. One construct represented "attitudes toward the acceptability of RCR practices", which included 16 items divided into the following sub-constructs: a) circumventing research ethics regulations (3 items); b) data fabrication and falsification (4 items); c) plagiarism (3 items); d) authorship (3 items); and e) conflict of interest (3 items). Representative items from Table 1 include:

Data fabrication and falsification: Selecting only those data that support your hypothesis
Plag_1: Publishing results that belong to someone else
Plag_2: Using someone else's words or ideas without giving proper credit
Plag_3: Submitting a manuscript to a journal that you already published in another journal
Authorship_1: Giving authorship credit to someone who has not contributed substantively to a manuscript
Authorship_2: Denying authorship credit to someone who has contributed substantively to a manuscript
Authorship_3: Allowing your name to be put on papers to which you have made no reasonable contribution
COI_1: Awareness of a conflict of interest (e.g., you have a financial interest with a drug company, and you are conducting a study for them) without disclosing it to either the ethics committee or a journal
COI_2: Compromising the rigor of a study's design or methodology in response to pressure from a commercial or not-for-profit funding source
COI_3: Inappropriately altering or suppressing research results in response to pressure from a commercial or not-for-profit funding source

The other postulated construct represented "general attitudes toward scientific misconduct" and consisted of five items. Table 1 shows the description of the item pool of each of these attitude constructs.

Data set for testing the psychometric properties
To test the validity and reliability of the "attitude" constructs, we used the data set from our previous study, which was conducted between February 2015 and September 2015. We had distributed the questionnaire to a convenience sample of academics by a) sending a web link on SurveyMonkey via a recruitment email, and b) distributing it by hand to investigators at Cairo University. All questionnaires were returned anonymously. The survey was in English. We recruited participants from several universities in the Middle East located in Egypt, Lebanon, and Bahrain. Our target population included: 1) academic faculty; 2) individuals with master's and PhD degrees and postdoctoral students; and 3) senior undergraduate students and individuals working in research positions (e.g., research assistants and technicians).
The questionnaire consisted of the following sections: 1) demographic data, including place of graduate school attended, previous research ethics training, and previous experience in conducting research; 2) respondents' self-report of the frequency of their research misconduct ("Never," "Once or twice," or "Three or more"); 3) "attitudes toward the acceptability of RCR practices"; and 4) "general attitudes toward scientific misconduct." Responses regarding the "acceptability of RCR practices" were measured with a five-point Likert scale ranging from "very acceptable" to "definitely unacceptable," assigned values of "1" to "5," respectively. For the 16 items, a total score was calculated by simple addition and ranged from 16 to 80.
Responses regarding the "general attitudes toward scientific misconduct" were measured with a five-point Likert scale ranging from "strongly agree" to "strongly disagree," assigned values of "1" to "5," respectively. We reverse-scored several questions that were worded opposite to the other questions. For the 5 items, a total score was calculated by simple addition and ranged from 5 to 25. We also calculated a "total attitude" score by simple addition of the scores of "attitudes toward the acceptability of RCR practices" and "general attitudes toward scientific misconduct". Higher attitude scores represent closer alignment with accepted norms of responsible conduct in research.
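The scoring rules above can be sketched minimally as follows, assuming responses are already coded 1-5. Which of the five general-attitude items are reverse-scored is not listed here, so the `reverse_idx` parameter is a placeholder for those column indices:

```python
import numpy as np

def score_attitudes(rcr_items, general_items, reverse_idx=()):
    """Score the two attitude constructs from 1-5 Likert responses.

    rcr_items: (n_respondents, 16) array, 1 = "very acceptable" ... 5 = "definitely unacceptable"
    general_items: (n_respondents, 5) array, 1 = "strongly agree" ... 5 = "strongly disagree"
    reverse_idx: column indices in general_items worded opposite to the rest;
                 a 1-5 response r is reverse-scored as 6 - r.
    """
    rcr = np.asarray(rcr_items, dtype=float)
    gen = np.asarray(general_items, dtype=float).copy()
    for j in reverse_idx:
        gen[:, j] = 6 - gen[:, j]
    rcr_score = rcr.sum(axis=1)    # ranges 16-80
    gen_score = gen.sum(axis=1)    # ranges 5-25
    total = rcr_score + gen_score  # combined "total attitude" score
    return rcr_score, gen_score, total
```

A respondent answering "definitely unacceptable" (5) to all 16 RCR items would score the maximum of 80 on that construct.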
Regarding prevalence of misconduct, participants were asked to self-report how often they had committed each type of misconduct by choosing "never", "once or twice", or "three or more times". Our data showed that the latter response category exhibited small cell frequencies, in the range of 3-5% of the total responses. To ensure meaningful categories with sufficient data for analysis, we transformed the respondents' self-reports of 16 different research misbehaviors into dichotomous responses: "never" and "one or more times" [23]. The specific misbehaviors are listed in Table 2 of our original publication [11]. We calculated a "Research Misconduct Severity Scale" (RMSS) similar to the method used by previous investigators [11,15]. To construct the RMSS, each misconduct item was assigned a value of "0" if respondents did not self-report the misconduct and a value of "1" if they self-reported the misconduct at least once in the last three years. To compute the RMSS, items related to fabrication and falsification and to plagiarism were each given a weight of 3 (7 items); items related to "circumventing research ethics regulations" and "conflict of interest" were each given a weight of 2 (6 items); and items regarding "authorship" were given a weight of 1 (3 items) [11]. The total RMSS score (16 items) ranged from 0 to 36 points. Higher numbers represent greater severity of research misconduct.
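The RMSS computation amounts to a weighted sum of 16 binary indicators. A sketch follows; the item ordering assumed in `RMSS_WEIGHTS` is illustrative only (the actual order follows Table 2 of the original publication):

```python
import numpy as np

# Illustrative ordering: 7 fabrication/falsification + plagiarism items (weight 3),
# 6 circumventing-ethics + conflict-of-interest items (weight 2),
# 3 authorship items (weight 1).  Maximum score: 7*3 + 6*2 + 3*1 = 36.
RMSS_WEIGHTS = np.array([3] * 7 + [2] * 6 + [1] * 3)

def rmss(self_reports):
    """Research Misconduct Severity Scale (0-36).

    self_reports: (n_respondents, 16) binary array; 1 = misconduct self-reported
    at least once in the last three years, 0 = "never".
    """
    reports = np.asarray(self_reports)
    assert reports.shape[1] == RMSS_WEIGHTS.size
    return reports @ RMSS_WEIGHTS
```

A respondent self-reporting every misbehavior scores the maximum of 36; one reporting none scores 0, matching the stated 0-36 range.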

Psychometric evaluation of the attitude scale
We assessed the psychometric properties of our "attitude" scale by investigating its construct validity and its reliability.
Construct validity. Construct validity represents the extent to which an instrument assesses a construct of concern. Construct validity can be demonstrated by evidence of content validity, face validity, structural or factorial validity, as well as divergent, convergent, and concurrent validity. If these measures of construct validity are deficient, it will be difficult to interpret results from the questionnaire, and inferences cannot be made regarding predictors of a behavior domain.
Factorial validity. Exploratory factor analysis (EFA) identifies the structure/dimensionality of observed data to reveal the underlying constructs that give rise to observed phenomena. To determine the factor structure of the attitude scale, we used an EFA to identify the underlying factors/constructs of our set of 21 attitude items. A "factor" represents a collection of items that have similar patterns of responses and together form a construct. The resulting factor structure would help confirm our a priori assumptions about the relationships among the items in each of our hypothesized constructs. EFA evaluates construct validity via two functions: it identifies the factor structure and the number of factors or constructs that underlie a set of variables (i.e., the questionnaire items), and it determines whether the factors are uncorrelated with each other [24]. Before performing the EFA, we assessed factorability with both the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity. The KMO index is a measure of sampling adequacy. KMO statistics range from 0 to 1, with values closer to 1 denoting greater adequacy of the factor analysis (KMO ≥ 0.6 low adequacy, KMO ≥ 0.7 medium adequacy, KMO ≥ 0.8 high adequacy, KMO ≥ 0.9 very high adequacy). Bartlett's test of sphericity determines whether the correlation matrix differs from an identity matrix (i.e., whether the variables are correlated at all); a significant p-value for this test (e.g., < 0.05) indicates that factor analysis can be used [25]. To perform the EFA, we used principal axis factoring with Promax oblique rotation, which leads to the calculation of the factor loadings for each question item [26].
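The study ran these factorability checks in SPSS; as a sketch, both statistics can be computed directly from the correlation matrix using their standard formulas (KMO from the anti-image partial correlations, Bartlett's chi-square from the determinant of the correlation matrix):

```python
import numpy as np
from scipy import stats

def kmo_and_bartlett(X):
    """Factorability checks on an items matrix X (n_respondents x n_items).

    Returns (kmo, chi_square, p_value).  KMO closer to 1 indicates greater
    sampling adequacy; a significant Bartlett p-value (< 0.05) indicates the
    correlation matrix is not an identity matrix, supporting factor analysis.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial (anti-image) correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d
    off = ~np.eye(p, dtype=bool)
    kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())
    # Bartlett's test of sphericity: H0 is that R is an identity matrix.
    chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi_sq, df)
    return kmo, chi_sq, p_value
```

On items driven by a common factor, the KMO lands near 1 and Bartlett's test is highly significant, as the paper reports for this data set (KMO = 0.944, p < 0.001).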
We then determined the number of factors to retain (i.e., how many factors account for most of the variance of the original observed variables) based on three procedures: the eigenvalue (> 1) criterion, parallel analysis [27], and a scree plot.
An eigenvalue measures the amount of variation in the total sample accounted for by each factor and equals the sum of the squared factor loadings for that factor; divided by the number of variables, it gives the proportion of variance explained. Factors with an eigenvalue > 1 are considered significant. In a scree plot, the eigenvalues are plotted against the factors, and the number of factors to retain is determined by the data points above the point of inflexion in the scree plot [28].
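The retention rules above can be sketched as follows. This combines Horn's parallel analysis (retain a factor only if its sample eigenvalue exceeds the mean eigenvalue of same-sized random normal data) with the Kaiser eigenvalue > 1 criterion; the scree plot inspects the same eigenvalues visually:

```python
import numpy as np

def n_factors_parallel(X, n_iter=100, seed=0):
    """Number of factors to retain by parallel analysis plus the Kaiser rule.

    X: (n_respondents, n_items) data matrix.  Eigenvalues come from the item
    correlation matrix; the random baseline averages eigenvalues over n_iter
    simulated data sets of the same shape.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rng = np.random.default_rng(seed)
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        Z = rng.normal(size=(n, p))
        rand_eig += np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    rand_eig /= n_iter
    # Retain factors beating both the random baseline and the Kaiser criterion.
    return int(np.sum((eig > rand_eig) & (eig > 1.0)))
```

On data generated from two uncorrelated factors, both procedures agree on retaining two factors, mirroring the two-factor decision reported in the Results.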
The identification of a group of questionnaire items that belongs to a "factor" is achieved through a process of "factor loading", which shows the degree to which a question item loads or correlates with the factor [29]. There are rules to determine whether an item "loads" in a meaningful way on a factor [24]. The process of exploratory factor analysis results in the smallest and most compatible number of underlying factors from a larger set of initial variables on a questionnaire.
Question items with high factor loadings (a cut-off value of 0.40) are associated with a distinct factor [30]. Items with factor loadings below 0.40 are considered inadequate, as they contribute < 10% of the variation of the latent construct measured. Hence, it is often recommended to retain items that have factor loadings of 0.40 and above. Items should also not cross-load on more than one factor. To summarize, items that cross-load or that do not load uniquely on an individual factor are deleted, which reduces the number of questionnaire items for that construct.
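The item-retention rule just described reduces to a simple scan of the loading matrix: keep an item only when it loads at or above the cut-off on exactly one factor. A minimal sketch:

```python
import numpy as np

def assign_items(loadings, cutoff=0.40):
    """Assign each item to a factor per the 0.40 cut-off rule.

    loadings: (n_items, n_factors) pattern matrix from the EFA.
    Returns one entry per item: the factor index if the item loads >= cutoff
    on exactly one factor, or None (flagged for deletion) if it loads on no
    factor or cross-loads on several.
    """
    L = np.abs(np.asarray(loadings, dtype=float))
    out = []
    for row in L:
        hits = np.flatnonzero(row >= cutoff)
        out.append(int(hits[0]) if hits.size == 1 else None)
    return out
```

For example, an item loading 0.163 on both factors (like the one deleted in the Results) is flagged as None, while an item loading 0.8 on factor 1 alone is assigned to factor 1.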
Divergent validity. We next calculated the correlations between the factors (inter-factor correlation matrix) to determine divergent validity. A correlation coefficient between any two factors that is statistically significant and less than 0.70 confirms that each factor represents an entity distinct from the other factors [31]. This procedure confirms divergent validity: measures of constructs that theoretically should be related to each other are shown to be related, and measures of constructs that theoretically should not be related are shown not to be related (that is, one should be able to discriminate between dissimilar constructs). Convergent validity. We assessed convergent validity by determining the inter-item and item-to-total correlations, which examine the relationships between individual items in a construct.
Inter-item correlation examines the extent to which items on a scale assess the same content. Item-to-total correlation examines the extent to which each item in a factor is correlated with the total score calculated from all items in the factor. An item with a very low item-to-total correlation is likely not measuring the same construct as the other items in the factor and may be deleted [24,32].
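Item-to-total correlations can be sketched as below. Whether the study used the corrected variant (removing the item from the total before correlating, which avoids inflating the coefficient) is not stated, so the `corrected` flag covers both conventions:

```python
import numpy as np

def item_total_correlations(X, corrected=True):
    """Correlation of each item with the construct total score.

    X: (n_respondents, n_items) array of one construct's items.
    With corrected=True the item itself is excluded from the total before
    correlating (the "corrected item-total correlation").
    """
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    r = []
    for j in range(X.shape[1]):
        t = total - X[:, j] if corrected else total
        r.append(np.corrcoef(X[:, j], t)[0, 1])
    return np.array(r)
```

Items driven by the same underlying construct yield uniformly positive, sizeable coefficients; a near-zero coefficient flags an item for possible deletion.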
Demonstration of convergent validity provides further evidence of construct validity. Concurrent validity. We also assessed concurrent validity as an indicator of construct validity. Concurrent validity represents the extent to which one measurement is backed up by a related measurement obtained at about the same point in time. We sought to demonstrate concurrent validity by calculating the correlation of each of the attitude scales ("acceptability of RCR practices", "general attitudes toward research misconduct", and the combined attitude scale) with the RMSS score [33].
Reliability analysis. To assess reliability, we calculated Cronbach's alphas for each construct of the attitude scale: "attitudes to acceptability of RCR practices" and "general attitudes regarding scientific misconduct". As a rule of thumb, a Cronbach's alpha of .70 to .80 is considered respectable for a scale for research use and an alpha more than .80 is considered very good [34].
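Cronbach's alpha follows a single closed-form expression, sketched here: alpha = k/(k-1) x (1 - sum of item variances / variance of the total score) for a construct with k items:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for one construct.

    X: (n_respondents, n_items) array of that construct's item responses.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly parallel items give alpha = 1, and highly inter-correlated items give values above the .80 "very good" threshold cited from [34]; weakly related items pull alpha toward 0.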
Predictors of attitudes. We used multiple linear regression analysis to assess the ability of different independent variables (demographics and data regarding previous ethics training and research experience) to discriminate between individuals regarding their attitudes toward research misconduct. We built three models to identify the predictors of "attitudes toward the acceptability of RCR practices", "general attitudes toward research misconduct", and the "combined attitude score".
We performed all statistical analyses using SPSS version 21. All variables with p < 0.05 were considered significant predictors.
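The regression models were fit in SPSS; a minimal ordinary-least-squares sketch of one such model is below. The predictor names are illustrative (dummy-coded degree level, prior ethics training, research experience), and the significance tests reported in the paper would additionally require standard errors:

```python
import numpy as np

def fit_attitude_model(X, y):
    """Ordinary least squares for one attitude outcome.

    X: (n, k) design matrix of dummy-coded predictors (e.g. degree level,
    prior ethics training, research experience -- names are illustrative).
    y: (n,) attitude score.  Returns (intercept, coefficient array).
    """
    design = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(design, np.asarray(y, dtype=float), rcond=None)
    return beta[0], beta[1:]
```

Fitting the same model to each of the three attitude outcomes reproduces the structure of the three models described above, one outcome per model.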
Ethics. Ethics approval was obtained from the respective research ethics committees in Bahrain, Lebanon, and Egypt to perform the original survey study. We obtained ethics approval to perform secondary analysis of the original data set from the University of Maryland, USA (HP-00094812).

Characteristics of the participants
We obtained completed surveys from 278 respondents, of whom 212 were from universities in Egypt, 33 attended the Royal College of Surgeons in Ireland in Bahrain, and 33 were from Ain Wazein Hospital in Lebanon.
For our analysis investigating the construct validity and reliability of our "attitude" scale, we used the data from the participants (n = 254) who completed the questionnaire beyond the "attitudes" questions. Ages ranged from 18 to 73 years, with a mean of 36 years (SD ± 12 years). Table 2 shows the baseline characteristics of our sample. More than 60% of participants were female (62.1%); the majority were of Egyptian nationality (72%). Almost one half (47.6%) were academic faculty. One fourth (25.2%) had earned a master's degree (MSc/MPH), while 44.9% had an MD/PhD degree. There were 7.5% who attended faculties in North America, 10.6% in the EU/UK, and 74.8% in the Middle East or North Africa. More than half (57.5%) of the respondents indicated they had received ethics training, and 82.3% reported previous experience in research.

Table 3 shows the results of the participants' responses regarding the "acceptability of RCR practices" and the "general attitudes toward scientific misconduct" constructs. For the former construct, the percentages of acceptability of the different items (very acceptable and acceptable) ranged from 4.3% for the "publishing results that belong to someone else" item to 9.0% for the "selecting only those data that support your hypothesis" item. For the "general attitudes toward misconduct" construct, most of the study participants (85.6%) strongly agreed or agreed that "investigators should report instances of research misconduct", and 35.8% strongly agreed or agreed that "the responsibility for misconduct lies with the principal investigator only". Table 4 shows the descriptive statistics of each "attitude" item of the questionnaire and the extent of acceptability (very acceptable and acceptable) and agreement (strongly agree and agree). For the attitudes regarding "acceptability of RCR practices", the means ranged from 4.07 to 4.49 and the standard deviations from 0.86 to 1.11.
The means of the items of "general attitudes toward scientific misconduct" ranged from 1.78 to 3.24 and the standard deviations from 0.77 to 1.94.

Descriptive statistics of participants' responses
For the "attitudes toward the acceptability of RCR practices" construct, the percentages of (very unacceptable or unacceptable) ranged from 93.7% for "denying authorship credit to someone who has contributed substantively to a manuscript" to 81.1% for "giving authorship credit to someone who has not contributed substantively to a manuscript". For the "general attitudes toward scientific misconduct" construct, more than two-thirds of the study participants either strongly agreed or agreed that "I'm concerned about the amount of misconduct that occurs", "investigators should report instances of research misconduct", and "investigators should declare conflicts of interest to the appropriate officials"; almost two-thirds either strongly agreed or agreed that "I should monitor my trainees' work to ensure that they are developing into responsible researchers". Slightly more than one-third (35.8%) either strongly agreed or agreed that "the responsibility for misconduct lies with the principal investigator only".

Construct validity
Exploratory factor analysis (EFA). We first determined the factorability of the attitude scale. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.944, which is above the recommended value of 0.60, and Bartlett's test of sphericity was highly significant (p < 0.001). These results indicate that the data are suitable for factor analysis.
To decide how many factors to retain from the EFA, we identified two factors with an eigenvalue > 1. We confirmed this number of factors by parallel analysis and by the scree plot shown in Fig 1. In the scree plot, the eigenvalues are on the y-axis and the factors on the x-axis. The "elbow" of the graph, where the eigenvalues level off, is indicated by a horizontal line parallel to the x-axis. The number of factors to the left of this point (or above the line) indicates that two factors should be retained. This analysis confirms that the two-factor solution was the best for the EFA and supports our hypothesis regarding the number of constructs within the entire attitude scale. Subsequently, we performed the EFA with the two-factor model. Using principal axis factoring with Promax oblique rotation, we calculated the factor loadings of the 21 items of the questionnaire. Table 5 shows the results of the EFA. We included items with loadings greater than or equal to 0.4 in the final EFA model. We deleted the item "The responsibility for misconduct lies with the principal investigator only", as it loaded with a value of only 0.163. The final EFA included 20 items across the two factors, which together explained 71.242% of the cumulative variance.
The inter-factor correlation between the two factors determined from the EFA was 0.263, which confirms divergent validity between the two factors.
Convergent validity. Table 1(a) and 1(b) in S1 File show that the inter-item correlations of both constructs, "attitudes toward acceptability of RCR practices" and "general attitudes toward research misconduct", were significant (p < 0.001). These results demonstrate that the items in each factor are well related to each other and hence suitable for measuring the same construct. Table 2 in S2 File shows that the item-total correlations of "attitudes toward acceptability of RCR practices" and "general attitudes toward research misconduct" were also significant (p < 0.001). This signifies that every item of each factor is consistent (or correlates well) with the overall scale, which is additional evidence that all items in each factor represent a valid construct.
Concurrent validity. We examined the correlations between the total scores of each of the attitude constructs (individually and when combined) and the RMSS score. In each case, the "attitude" construct was significantly inversely correlated with the RMSS score: the more closely respondents' attitudes aligned with accepted norms of scientific conduct, the lower the RMSS score. Because this result is expected, it further shows that the attitude scales are valid instruments for measuring attitudes.
Predictors of attitudes. Table 6 shows the predictors of the three "attitude" scales. Holding a graduate degree (BA/BSc or MSc/MPH/other) was significantly associated with higher scores on the "attitudes toward the acceptability of RCR practices", "general attitudes toward research misconduct", and combined attitude scales compared to not having graduated (p < 0.05). Holding an MD/PhD degree was significantly associated with higher scores only on the "general attitudes toward research misconduct" scale compared to not having graduated (p < 0.05).
Prior training in research ethics was significantly associated with higher scores on the "Attitude to the acceptability of RCR practices" (p = 0.04) and to the "combined attitude scale" (p = 0.03).

Reliability analysis
The Cronbach's alpha value was 0.975 for the 16 items in the "attitudes toward the acceptability of RCR practices" scale and 0.754 for the four items in the "general attitudes toward research misconduct" scale. These values demonstrate very good and respectable reliability, respectively.

Discussion
We were able to determine the psychometric properties of an attitude scale that we adapted from a questionnaire we used in a previous study. Our factor analysis showed that the item pool of attitudes can be divided into two factors, each indicative of a different construct related to scientific research misconduct. One factor can serve as a valid and reliable measure of attitudes toward the acceptability of practices in responsible conduct in research, while the other can serve as a valid and reliable measure of general attitudes toward research misconduct. The significance of having a validated attitude instrument relies on the work of social scientists who demonstrated the significance of attitudes toward behavior when they theorized that individuals' intentions to engage in certain behaviors are the best predictor of those behaviors. Ajzen's and Fishbein's theory posits that two components can predict intentions, which in turn predict behaviors [35].

Table 5. Factor loadings of the different items of the attitude scales: "Attitudes toward the acceptability of RCR practices" and "general attitude toward scientific misconduct".

Table 6. Predictors of the attitude scales ("attitudes toward the acceptability of RCR practices", "general attitudes toward research misconduct", and the "combined attitude scale").

Subjective norms represent the perceived social pressures to engage or not to engage in a behavior, thereby giving importance to the ethical climate of the organization or region to which individuals are exposed. Hence, the overall theory of behavior consists of three determinants: attitudes, subjective norms, and intentions. Gorsuch and Ortberg showed that the inclusion of a component of moral obligation added significantly to those of attitudes and subjective norms in predicting behaviors [36].
In our study, we demonstrated correlations between participants' attitudes (both attitude constructs, individually and when combined) and their self-reported misconduct as measured by the RMSS. Based on the above-mentioned theory linking attitudes and behaviors, this result is expected and provides further evidence of the construct validity of our attitude scales. To be sure, it is not possible from the correlation analysis alone to determine the direction of any causal link between attitudes and behaviors; attitudes might influence behavior, or behavior might change attitudes. Furthermore, even assuming from the correlation analysis that there is a direct influence of attitudes on misbehaviors, holding "correct" attitudes, while necessary, is an insufficient predictor of behavior, as subjective norms are also important.
These results have implications for developing teaching strategies that aim to instill the appropriate attitudes as well as to discuss the proper norms regarding responsible behaviors in research. Traditionally, RCR education has emphasized learning outcomes that mainly reflect Bloom's cognitive and psychomotor domains [37]. Investigators have demonstrated statistically significant but modest outcomes in both domains [38,39]. Conversely, Bloom's affective domain (representing characteristics such as "interests, attitudes, appreciations, values, and biases" [40]) is more congruent with the behavioral model espoused by Ajzen and Fishbein [35] and Gorsuch and Ortberg [36], as attitudes serve as a precondition "for someone to consider applying their learned knowledge or skills" [18].
As such, more attention should be given to attitudes as an important outcome measure for RCR education. However, studies investigating the effects of RCR training on attitudes have demonstrated mixed results [16,41,42]. For example, one study investigating the outcomes of an RCR course showed that the impact on knowledge was more significant than the changes in skills or attitudes [16]. McGee and colleagues performed in-depth interviews to study the effects of a course in RCR on the attitudes of doctoral and postdoctoral students. The impact of the course on attitudes was greater for students with limited prior knowledge of RCR compared to students who held prior experiences or existing knowledge that conflicted with what was taught [43]. Admittedly, achieving a change in attitudes from RCR training programs can be variable among individuals, may depend on teachers' skills [44], and may involve instruction that extends beyond just a few courses.
While attitudes are reflective of personal integrity, individuals' perception of the integrity of their research environment as conveyed through knowledge of existing norms of behavior can also be instrumental in shaping proper research behaviors. Several studies have investigated such a relationship. For example, Hoffman and Holm surveyed postdoctoral researchers regarding their knowledge, attitudes and actions related to research misconduct as well as their perceptions of the integrity of the research environment [45]. These investigators demonstrated a "connection between attitudes and environmental integrity factors" [45]. In another study, Mumford and colleagues assessed the relationship between "ethical decision-making to climate and environmental experiences" in first-year doctoral students [46]. Aspects of the climate included "procedural justice, distributive justice, social context, individual caring, law and code, trust, freedom, and lack of conflict". Environmental experiences included mentoring occurrences, production pressures, professional leadership, poor coping, lack of rewards, and poor career direction [46]. These investigators found that environmental experiences when compared with climate dimensions were better predictors of research integrity as determined by an "ethical decision-making measure" [46]. Overall, these studies suggest that research misbehaviors stem from personal integrity as well as influences from the environment in which individuals are situated [18].
We found positive results regarding correlations between prior ethics education and the attitude constructs. Holm and Hoffman demonstrated that previous ethics education was associated with a lower RMSS [22], and Adeleye and Adebamowo found that self-assessment of one's knowledge of research ethics as inadequate was associated with at least one type of research misconduct [47]. Other studies investigating the potential effects of ethics education on research misconduct have yielded mixed results [17,48-50]. Whether ethics training can be supportive of behaviors that reflect societal norms may depend on course design and length, pedagogy, the focus of the educational objectives (i.e., knowledge, skills, or attitudes), as well as the supporting environment.

Limitations
There are several limitations to our study. First, after performing the exploratory factor analysis and reliability analysis, the general attitude factor consisted of only four items. Future efforts should expand this set of items, for example by using the Delphi method, which relies on a panel of experts. Second, as we used data obtained from individuals in the Arab Region, our attitude constructs may not be generalizable to other regions. Third, our data set was not large enough to add a confirmatory factor analysis. Finally, our data set was from 2015, and since then there has been an increased focus on RCR, resulting in additional training efforts and conferences. Despite such efforts, however, the ability to measure attitudes with a validated scale maintains its importance.

Conclusions
Our study shows that the attitude scale adapted from our previous questionnaire study is a statistically valid and reliable tool for investigating constructs related to attitudes toward the acceptability of RCR practices as well as general attitudes regarding research misconduct. In developing educational programs in RCR, as well as in survey research focused on research misconduct, it is important to be able to measure participants' attitudes toward specific types of misconduct as well as their general attitudes toward misconduct. Results from such endeavors can help promote advances in the field of responsible conduct in research.
Supporting information
S1 File. Table 1