Abstract
Cognitive styles are commonly studied constructs in cognitive psychology, and the theory of field dependence-independence was one of the most influential among them. Yet in the past, its measurement suffered from significant shortcomings in validity and reliability. The theory of analytic and holistic cognitive styles attempted to extend this theory and overcome its shortcomings. Unfortunately, the psychometric properties of its measurement methods were never properly verified. Furthermore, new statistical approaches, such as the analysis of reaction times, have been overlooked by current research. The aim of this pre-registered study was to verify the psychometric properties (i.e., factor structure, split-half reliability, test-retest reliability, discriminant validity with intelligence and personality, and divergent, concurrent and predictive validity) of several methods routinely applied in the field. We developed or adapted six methods based on self-report questionnaires, rod-and-frame principles, embedded figures, and hierarchical figures. The analysis was conducted on 392 Czech participants across two data collection waves. The results indicate that methods based on the rod-and-frame principle may be unreliable, as they failed to demonstrate the absence of an association with intelligence. The use of embedded and hierarchical figures is recommended. The self-report questionnaire used in this study showed an unsatisfactory factor structure and cannot be recommended without further validation on independent samples. The findings also did not correspond with the original two-dimensional theory.
Citation: Lacko D, Prošek T, Čeněk J, Helísková M, Ugwitz P, Svoboda V, et al. (2023) Analytic and holistic cognitive style as a set of independent manifests: Evidence from a validation study of six measurement instruments. PLoS ONE 18(6): e0287057. https://doi.org/10.1371/journal.pone.0287057
Editor: Danka Purić, University of Belgrade Faculty of Philosophy: Univerzitet u Beogradu Filozofski Fakultet, SERBIA
Received: November 24, 2022; Accepted: May 28, 2023; Published: June 13, 2023
Copyright: © 2023 Lacko et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data, materials and R syntaxes are available at https://osf.io/7ezax/.
Funding: This publication was supported by Masaryk University (MUNI/A/1323/2020: “Validation of the methods for analytic/holistic cognitive style”) and by the Czech Science Foundation (GC19-09265J: “The Influence of Socio-Cultural Factors and Writing Systems on the Perception and Cognition of Complex Visual Stimuli”). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The term cognitive style refers to stable attitudes, preferences, or habitual strategies which determine an individual’s mode of perceiving, remembering, thinking, learning, and problem-solving [1, 2] and allow adaptation to the external world developing through interaction with the surrounding environment [3, 4]. It also represents the missing link between cognition and personality [5, 6]. The main characteristics of the cognitive style are therefore the relative stability of the construct and the absence of associations with cognitive abilities and personality [1, 2].
During the second half of the last century, dozens of models of cognitive style were created [for a review, see 1, 3, 4, 6, 7]. The majority of these works agree that one of the fundamental and superordinate orthogonal dimensions is the wholistic-analytic dimension we investigate in this article. According to these models, persons with a stronger preference for a wholistic cognitive style tend to process information as an integrated whole, whereas persons with a stronger preference for an analytic cognitive style process it in discrete parts of that whole. In prior research, these two types of cognitive styles were usually perceived as two polar ends of one continuum (i.e., a unidimensional structure).
Today, this construct plays a key role in cross-cultural research [e.g., 8–10], but it is also applied in research on consumer behaviour [11], marketing [12], creativity [13], brain responses [14], risk perception [15] and information sciences [16], and even in the study of donation decisions during the COVID-19 pandemic [17].
Old controversy
The most representative and frequently applied theory from the wholistic-analytic cognitive style family is the field independence-dependence cognitive style (FDI), based on Witkin’s theory of psychological differentiation [2] and cognitive restructuring [18].
Throughout the theory’s history, two generations of instruments for measuring cognitive style have been introduced: maximum performance tests (1st generation) and self-report questionnaires (2nd generation). According to some critics, the first generation of methods assessed cognitive ability rather than cognitive style since unsuitably high associations of cognitive style with general intelligence, spatial ability, working memory, attention, and academic achievement have been found in previous studies [e.g., 19–24].
Concerning second-generation methods, some authors found high associations with personality traits, which calls their validity into question [e.g., 20, 25, 26]. Self-report methods are also often unrelated to performance-based measures of cognitive style, an observation which challenges their convergent validity [e.g., 20, 27, 28].
Another crucial aspect of the validity of cognitive styles is stability of the construct. Cognitive styles may shape themselves throughout a person’s life [29], be affected by various socio-cultural factors [30, 31] and be generally considered dynamic [32] and task dependent [4]. Nevertheless, they should remain relatively stable in the short to medium-term. The contemporary body of knowledge, however, remains inconclusive since some studies have found that cognitive styles are stable [e.g., 33, 34] and others have found they may change significantly, for instance after specific training [e.g., 35, 36].
Finally, some past studies criticized the generally poor or unknown psychometric properties of instruments which measure cognitive style [e.g., 37–39] and highlighted the absence of combinations of mixed-methods applied in cognitive style assessments [40].
Current perspective
Strongly inspired by the cross-cultural differences between “modern” and “traditional” cultures found in Witkin’s FDI [see 31, 41] and between Western and Eastern countries at a similar level of technological development [42], as well as by Sloman’s distinction between two cognitive systems of reasoning [43], Nisbett and colleagues proposed the new and currently predominant theory of analytic and holistic cognitive style (AH) twenty years ago [for a review, see 44–46], which was intended to overcome the issues of prior research. (The term wholistic is used more broadly and refers to holistic cognition in general, whereas holistic is a term used purely in Nisbett’s cross-cultural theory. In this article, we keep this distinction.) Throughout its history, multiple methods for measuring AH have been introduced, ranging from self-report inventories and performance-based measures based on older wholistic-analytic cognitive style methods, through new computer-based instruments, to modern technologies such as virtual reality or eye-tracking. In contrast to the older methods, these methods usually incorporate specific tasks for both analytic and holistic cognition and thus demonstrate the desirable shift from a unidimensional to a two-dimensional structure. We identified four main clusters of methods which attempt to eliminate the measurement shortcomings summarized in the previous section.
Performance-based measures based on the wholistic-analytic family
These methods stem from the early experiments of Witkin and Asch on space orientation and the FDI theory. Based on this previous work, they formulated the Rod and Frame Test (RFT) [18]. In the RFT, participants are asked to set a rod, embedded in a square, to the subjective vertical position, regardless of the surrounding frame. The method is partially based on Wertheimer’s [47] tilted-mirror experiment and is derived from the little-known Gestalt principle called the frame of reference. An improved version of the RFT is the Framed-Line Test (FLT) [48]. Compared to the RFT, where the ideal solution strategy is always field independence, the FLT takes into account both the absolute and relative frames of reference in two individual subtests. Despite its 19-year history, the psychometric properties of the FLT are unknown. Only two studies have verified its reliability via internal consistency, which was moreover inadequate (αs < .70) [cf. 49, 50].
The FDI is also commonly assessed via the Embedded Figures Test (EFT) [18]. These methods are based on Gottschaldt’s embedded figures [51], which are complex figures composed of simple figures. Participants are instructed to spot a simple form within a more complex figure. The psychological principle behind these figures lies in the Gestalt principles of figure–ground organization, especially the laws of proximity, similarity, good continuation, closure and mirror symmetry [47]. An improved version of the EFT is the Cognitive Style Analysis (CSA) [7], which contains verbal-imagery and wholistic-analytic cognitive style dimensions. Similarly to the FLT, the CSA moves beyond the reductionist unidimensional approach of the EFT and incorporates two subtests of AH cognitive style measurement. The CSA (unlike the EFT) also does not correlate with intelligence [52], personality [53] or academic achievement [54], although some serious issues with its test-retest reliability have been revealed [e.g., 55–58]. Hence, the Extended Cognitive Style Analysis–Wholistic/Analytic (E-CSA-W/A) was proposed by Peterson and colleagues [57, 59]. The E-CSA-W/A showed sufficient split-half reliability, parallel-forms reliability and test-retest reliability [57, 60] and a lack of association with mathematical performance [61], intelligence and personality [59].
Performance-based measures based on global-local family
The following group of methods is based on the global and local processing of Navon’s hierarchical figures [62, 63], i.e., a large figure (the global level) composed of small figures (the local level). These figures were created with respect to the Gestalt principles of grouping, especially proximity and continuity [47]. Compared to the embedded figures used in the E-CSA-WA, hierarchical figures are self-contained and not nested in a surrounding context. These methods were not originally proposed for AH measurement; in fact, probably the first use of Navon figures in the context of cognitive styles can be traced to 2006 [64]. From the perspective of analytic and holistic cognitive styles, local processing corresponds to the analytic cognitive style and global processing to the holistic cognitive style [64]. Today, different versions of Navon figures are frequently used to estimate a person’s global and local processing.
These modifications differ at two levels: 1) the type of figure and 2) the aim of the task. The type of figure can be either verbal (e.g., numbers, logograms, Latin script) or non-verbal (e.g., geometric shapes, abstract drawings, specific objects such as faces). Both verbal and non-verbal types can be mutually combined (e.g., a large letter composed of smaller geometric shapes). Global and local features can also be combined; each figure may be congruent (local features are the same as global features) or incongruent (local features differ from global features) [62]. The distinction according to the aim of the task is less complicated, since the task’s aim might be to find the correct answer (i.e., the Navon Search Task) or to select the answer which is as similar as possible to the original figure (the Navon Similarity Matching Task) [65].
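To make the structure of such hierarchical stimuli concrete, the following illustrative sketch (not part of the study materials; the coarse bitmap font and the chosen letters are our own assumptions) renders a text-based incongruent Navon figure, i.e., a large global letter built from small local letters:

```python
# Illustrative sketch: a text-based Navon hierarchical figure.
# An incongruent figure uses different global and local letters.

# A coarse 5x5 bitmap font for a few capital letters (hypothetical subset).
FONT = {
    "H": ["X...X", "X...X", "XXXXX", "X...X", "X...X"],
    "E": ["XXXXX", "X....", "XXXX.", "X....", "XXXXX"],
    "T": ["XXXXX", "..X..", "..X..", "..X..", "..X.."],
}

def navon(global_letter: str, local_letter: str) -> str:
    """Return a Navon figure: the global letter drawn with local letters."""
    rows = FONT[global_letter]
    return "\n".join(
        "".join(local_letter if cell == "X" else " " for cell in row)
        for row in rows
    )

print(navon("H", "E"))  # incongruent: a global H made of local E's
```

In a search task a participant would report whether a target letter appears at the global or local level; in a similarity-matching task, which of two such figures resembles a sample more.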
Evidence of the psychometric properties of Navon figures is rather mixed. Some studies suggested that these methods have satisfactory test-retest reliability [66, 67] and split-half reliability [68, 69], while others directly questioned their validity and reliability [70, 71]. Hierarchical figures are also not associated with general intelligence [72].
Self-report questionnaires
Despite self-report questionnaires being relatively common in the assessment of other cognitive styles, their use specifically for AH measurement remains relatively scarce. Nevertheless, a few inventories have been created exclusively for the measurement of AH as defined above. The best examples are the two-dimensional Holism Scale (HS) [73] and the four-dimensional Analysis-Holism Scale (AHS) [74, 75]. The AHS, the most often used questionnaire in AH research, showed discriminant validity with individualism/collectivism and independent/interdependent self-construal scales and concurrent validity in terms of weak associations with the categorization task [76]. However, its factor structure was not ideal and the internal reliability of some subscales was not entirely satisfactory; therefore, two brief versions of the AHS (AHS-12 and AHS-4) that overcome these issues were proposed [77]. The recently published four-dimensional Holistic Cognition Scale (HCS) [78] also demonstrated a satisfactory factor structure and addressed these well-known shortcomings of the AHS.
Other methods
Methods other than those mentioned above have also been used within the AH paradigm. Quite often, these studies applied complex visual stimuli of artificially created or natural visual scenes as stimulus material [e.g., 79, 80]. Another frequently used set of tasks is based on the categorization (triad) task [76] and the change blindness task [81]. Na and colleagues [50] described six other tasks that are not as commonly employed as the methods described above. Generally, these tasks do not represent formally standardized methods; scholars usually use their own ad-hoc stimuli and interpret the validity of the tasks according to the patterns of results expected in certain cultures. Their psychometric properties are therefore unknown. The only exception is the research by Na and colleagues [50], who found that the internal consistency of these tasks varies significantly (αs ranged from .24 to .96) and that the test-retest reliability of four of these methods is moderate at best (rs ranged from .47 to .70).
New challenges
Uncertain or unknown psychometric properties.
Despite the relatively large body of literature which describes various AH measurement methods, evidence of the psychometric properties of most of them remains unknown or ambiguous. Studies which applied self-report questionnaires and the FLT did not report any evidence of the construct's stability, and Navon-based methods yielded unclear results. Even studies which provided some evidence of the validity or reliability of measures can be disputed (e.g., the E-CSA-WA instrument), mainly for statistical and methodological reasons (inappropriate analyses, sample size or composition).
The most serious problem, however, is the lack of concurrent validity in AH methods. As with FDI, AH measurement shows inconsistent associations between methods. Specifically, the FLT was only weakly associated with the change blindness (r = .19) and causal attribution (r = .22) tasks, and no associations were detected with the other nine AH measurement methods [50, 82]. Navon hierarchical figures were also barely associated with Gottschaldt embedded figures [e.g., 64, 70, 72, 83], except for one study, which found a strong association [cf. 84].
Some studies reported low or no correlations between various modifications of Navon figures [66], thereby also challenging the convergent validity of this method. A lack of association was also observed between self-report questionnaires and performance-based measures of AH [e.g., 74, 85]. Hence, scholars in the field appear to be using methods which might not even be related to each other but are interpreting their findings as differences under the same attributes (i.e., analytic and holistic cognitive styles).
The final important factor to consider concerns self-report questionnaires. Despite the validation studies of the AHS and HCS using desirable structural equation modeling techniques to verify their factor structures, only a few studies have provided any evidence of cross-cultural comparability (as far as we know, only [77]), such as scalar measurement invariance [see 86].
Distribution of reaction times and their relationship to cognitive processes.
Reaction times (RTs) have held a prominent position in psychological research of analytic and holistic cognitive styles. To analyse RTs, however, scientists have typically used the statistical techniques they were most familiar with, such as analysis of variance on the sample mean [87, 88], although this has been found unsuitable in many of its applications [89] because RTs are usually not independently and identically distributed (i.i.d.) as a result of trial-by-trial sequential effects. More importantly, in the majority of cases, RTs are not normally distributed (Gaussian) but rise rapidly on the left and have a long positive tail on the right [88, 89]. This feature of RTs may have produced misleading or potentially contradictory results [90].
One of the major challenges of current AH research is therefore the incorporation of suitable methods for the statistical analysis of RTs. These methods should reflect not only the specific RT distribution but also its true intrinsic relationship to the cognitive construct of AH per se. Among the most commonly applied approaches are distribution analysis, joint models and process models [for a review, see 91, 92], which we apply in this article. Since not a single article has yet used appropriate estimates of RTs in AH measurement, it is possible that previous AH studies did not compare real differences in AH cognitive traits, but rather differences in psychomotor tempo, stimulus encoding, response carefulness [93] or working speed [94].
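As a minimal sketch of the distribution-analysis idea (illustrative Python only; the study's own models were fitted in R, and the simulated parameter values below are our assumptions), one can fit an ex-Gaussian distribution to RTs and interpret its parameters instead of the raw sample mean:

```python
# Fit an ex-Gaussian (exponentially modified Gaussian) to right-skewed RTs.
# mu/sigma describe the Gaussian bulk; tau describes the long right tail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated RTs in seconds: Gaussian component plus exponential tail.
mu, sigma, tau = 0.45, 0.05, 0.20
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Maximum-likelihood fit; scipy's exponnorm uses shape K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale

print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={tau_hat:.3f}")
# The recovered parameters should be close to (0.45, 0.05, 0.20), whereas
# the sample mean (about mu + tau) conflates the bulk and the tail.
```

Comparing groups on `loc` and `tau_hat` separately can reveal whether a difference reflects a shift of the whole distribution or only of its tail, which a comparison of raw means cannot distinguish.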
The issue of dimensionality: Futility of derived indices.
Since Nisbett and colleagues [44] introduced AH as two “systems of thought”, it has been conceptualized, and especially measured, as a two-dimensional construct [see also 42, 74, 76]. According to their original formulation, AH represents two separate and perhaps even qualitatively different cognitive processes that might be mutually independent and that, at the same time, might coexist simultaneously. For instance, current neuroscience evidence suggests these two processes are reflected in differences in brain activity [e.g., 14, 95, 96]. Therefore, some individuals might show a tendency to reason or perceive analytically and others holistically (naturally, some might show high or low preferences for both styles).
However, researchers often ignore this original conceptual postulation of AH theory and reduce the amount of information in their data by treating AH as a unidimensional structure. This might be understandable in self-report questionnaires due to the item wording [74], but not in performance-based methods, which almost always offer two separate analytic and holistic subtests. Despite that, scholars commonly calculate derived indices, such as the difference or ratio between the mean or median RTs of both subtests, and use them as a single unidimensional indicator of cognitive style without any justification. For example, the manual of the E-CSA-WA suggests calculating the main index as a ratio of median reaction times between the analytic and holistic subtests [97], and Navon-based methods often use the difference between subtests as an indicator of the global precedence score [e.g., 98]. Even in the FLT, some scholars create derived indices [e.g., 99]. Such an approach, however, cannot be regarded as appropriate: it is not only rather reductionist but in some cases clearly misleading and unreliable [see 100, 101]. Because of the proposed two-dimensional nature of the AH construct, scholars need not use any derived indices and can instead analyse scores from the multiple subtests separately.
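A toy numerical example (entirely hypothetical values, not study data) shows how a derived ratio index can collapse very different response patterns onto the same single score:

```python
# Two hypothetical participants with very different absolute performance on
# the analytic and holistic subtests receive an identical ratio index,
# so the derived score hides a threefold difference in median RTs.
median_rts = {
    "participant_A": {"analytic": 0.8, "holistic": 1.0},   # fast on both
    "participant_B": {"analytic": 2.4, "holistic": 3.0},   # slow on both
}

for pid, scores in median_rts.items():
    ratio = scores["analytic"] / scores["holistic"]
    print(pid, round(ratio, 3))  # both ratios round to 0.8
```

Keeping the two subtest scores as separate variables preserves exactly the information (overall speed, per-subtest performance) that the single index throws away.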
In summary, it seems that the “old controversy” might have survived to this day, even after a face-lift in the form of analytic and holistic cognitive style theory. Many new challenges have also been raised recently. Hence, the aim in the present article is to verify the psychometric properties of methods which measure AH. To do so, we implemented several important steps: 1) application of relatively recently created methods of measuring AH to overcome the issues of the old controversy (e.g., holistic style is not perceived as inferior to analytic style); 2) the use of multiple and different methods; 3) analysis of discriminant validity with personality and intelligence; 4) verification of the stability of the construct; 5) collection of a large sample of the general population (i.e., non-student); 6) no derived indices; 7) application of advanced statistical procedures for RT estimation.
Methods
Analytic plan
The hypotheses, statistical analyses and data cleaning procedures were pre-registered before data collection (see https://osf.io/w483c). The hypotheses were pre-registered as a validation process composed of five phases (Fig 1). In the first phase, specific aspects of validity and reliability (e.g., factor structure, internal consistency, split-half reliability) are verified. This phase was designated Phase 0 since it is not relevant to all the methods. In Phase 1, all methods are tested for their stability in time (test-retest reliability). In Phase 2, the discriminant validity of the methods is verified. In Phase 3, the concurrent and convergent validity of the methods is assessed. Finally, in Phase 4, predictive validity is estimated. Throughout the analysis, methods which clearly demonstrate insufficient quality do not progress to the next phase. The deviations from a pre-registered analytic plan are described in Appendix 1 in S1 File. The data, methods and R source codes are available online (see https://osf.io/7ezax/). The study has been approved by the Masaryk University Ethical Board (EKV-2020-118).
Measures
Methods for measurement of AH.
Extended Cognitive Style Analysis–Wholistic/Analytic (E-CSA-WA; see Fig 2) [57, 59]. The test contains 80 items (40 analytic and 40 holistic). In the holistic subtest, participants are presented with two complex figures and their goal is to identify whether these figures are identical. In the analytic subtest, participants are exposed to one simple and one complex figure, and their goal is to determine whether the complex figure contains the simple figure. Participants respond by pressing one of two keyboard buttons.
Absolute-relative test (ART). The ART is a computer-based adaptation of the pen-and-paper FLT [48; line lengths were adapted from 102]. Participants are exposed to the original stimulus (a square with a vertical line). Their goal is to draw either a line of exactly the same absolute length as the original, regardless of the size of the square (analytic task), or a line whose length relative to the side of the square corresponds to that proportion in the original stimulus (holistic task). The original stimulus is presented for 5 s, followed by a mask presented for 100 ms. The test contains 12 items (6 for the analytic and 6 for the holistic subtest, see Fig 3).
Compound Figure Test 1 (CFT1) [9, 103]. CFT1 is a verbal (numbers) and incongruent form of the Navon Search task (Fig 4). Participants have to choose the correct answer from four options and indicate it with a mouse-click. The method is composed of 32 figures (16 for local identification and 16 for global identification). Fixation crosses are presented before each trial (for 500 ms) and the figure remains visible until the participant’s response.
Compound Figure Test 2 (CFT2). CFT2 is an extended version of CFT1 (80 items instead of 32) with several modifications: 1) the local features are larger, decreasing the advantage of the global features [104]; 2) the items are block-randomized (participants do not know whether they will identify a local or global feature in the next stimulus); 3) the presentation time is shorter: the original stimulus disappears after 100, 150, 200 or 250 ms (the time is the same within each block of 20 items; see Fig 5). All other settings are the same as in CFT1. This measure was developed on the basis of previous research [68, 98].
Compound Figure Test 3 (CFT3). CFT3 is a non-verbal (geometric shapes) and incongruent form of the Navon Similarity Matching Task, developed on the basis of previous research [65, 105, 106]. The participant does not choose the correct answer, but rather a preferable answer from two options. CFT3 contains 20 items, each with one sample stimulus and two options (the first shares its global feature with the original stimulus, the second shares its local features; see Fig 6). The participant is instructed to choose the option which, in his/her opinion, is more similar to the sample stimulus.
Analysis-Holism Scale (AHS) [74, 75]. AHS contains four subscales (locus of attention, causal theory, perception of change and attitude toward contradictions) and twenty-four 7-point Likert items (1 = strongly disagree, 7 = strongly agree) with six items per subscale.
Methods for discriminant validity
Big-Five Inventory 2 (BFI-2). The BFI-2 [107] is an updated version of BFI [108] and measures a theoretically expected five-factor model of personality (with 15 facet subscales): extraversion, agreeableness, conscientiousness, negative emotionality and open-mindedness. The BFI-2 is composed of 60 items with 5-point Likert scales (1 = disagree strongly, 5 = agree strongly). We administered the adapted and validated Czech version by Hřebíčková and colleagues [109].
International Cognitive Ability Resource (ICAR) [110]. To estimate general intelligence, we used the ICAR matrix reasoning subtest (11 items, similar in principle to Raven’s progressive matrices; see Fig 7), the three-dimensional rotation subtest (24 items; see Fig 8), and a computer-generated number series subtest (11 number series, randomly selected for each item model). The ICAR has generally shown satisfactory psychometric properties [111, 112].
Besides these methods, we also asked participants about their age, gender, education, marital status, number of siblings, and socio-economic status. The latter was measured via a 4-category self-report scale (poor, lower mid, mid, upper mid).
Procedure
Method development procedure.
Great care was given to the translation process in order to reduce potential method bias caused by shifts of meaning and the resulting conceptual inequivalence [113, 114]. Even though the complete elimination of method bias caused by translation might be impossible, there are methods that have proven effective at least in its reduction [115]. In the first step, the test methods were translated from English to the target language using a back-translation procedure by two independent translators. In the second step, the English original and the back-translation were compared by both translators. Any differences between translations were discussed and evaluated for potential shifts in meaning. This step was supervised by the authors of the test methods. We then performed two quantitative pilot studies (N1 = 32, N2 = 21) and qualitative cognitive interviews (N = 7). On the basis of these three pilot studies, we clarified the instructions to reduce any potential misunderstandings. All performance-based methods contained practice trials with feedback.
Testing procedure.
We initially created a set of 48 different strata based on the combination of gender, education level and age which proportionally corresponded to their representation in the general Czech population [116]. These target subgroups of the general population were then addressed in relevant social network groups and on thematic websites using the snowball method, resulting in a relatively balanced pool of 600 volunteers. Forty randomly selected participants were each given a reward of CZK 2,000 (approx. €80).
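As an illustration, 48 strata arise naturally as the Cartesian product of the three quota variables; the specific category levels below (2 genders × 4 education levels × 6 age bands) are our assumption, not the study's exact coding:

```python
# Illustrative construction of 48 quota strata as a Cartesian product of
# gender x education level x age band (hypothetical category levels).
from itertools import product

genders = ["male", "female"]
education = ["primary", "vocational", "secondary", "tertiary"]
age_bands = ["18-24", "25-34", "35-44", "45-54", "55-64", "65+"]

strata = list(product(genders, education, age_bands))
print(len(strata))  # 2 * 4 * 6 = 48 strata
```

Each stratum would then be assigned a recruitment quota proportional to its share in the census population.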
All participants were tested online in two data collection waves. The first wave was conducted in early May 2021; the second was collected from mid-June to mid-August 2021. During the first wave, participants completed all methods focused on cognitive style assessment, three subtests of the ICAR and a demographic questionnaire; during the second wave, they completed all methods for cognitive style and the BFI-2 (in the mentioned order). The typical administration time of the entire test battery was approximately 90 minutes for the first wave and 45 minutes for the second wave. All participants gave informed consent online.
Due to the ongoing COVID-19 pandemic in the Czech Republic, the testing was conducted online (i.e., without unified experimental conditions, such as computer peripherals). All methods were adapted for online testing in the Hypothesis software, which allows reliable capture of the response times of all performance-based methods [117].
Participants
Out of the pool of 600 volunteers, we collected data from 392 participants in total (380 in the first wave, 217 in the second wave). Twelve participants took part only in the second wave, which is why the total number of participants is higher than the number in the first wave. Eight additional participants were removed because of their high number of invalid answers (see Appendix 2 in S1 File); 384 participants were included in the analysis (see Table 1). Since not all participants answered all methods (some omitted them, probably due to the online experimental setting, and many were active only during the first wave), only 116 participants completed all methods in both waves. However, each method was answered by at least 196 participants. More importantly, the methods for assessing cognitive style were completed by 233–354 participants in the first wave of data collection, which is crucial since these data were used for most of the following analyses. An a priori power analysis suggested that 250 participants should be sufficient for reliable estimation of the crucial statistical procedures (see the pre-registration for more details, https://osf.io/w483c). We therefore consider this sample size sufficient.
Data analysis
Before the main analyses, data cleaning was performed precisely as pre-registered (see Appendix 2 in S1 File). Since a large number of participants had to be removed from the ICAR number series subtest due to technical issues in the SQL database (44% of participants had half or more of the values missing), we omitted this method from further analysis.
The RTs from CFT1, CFT2 and the E-CSA-WA were modelled using a Bayesian 4-parameter shifted Wald distribution process model with drift, alpha, theta and drift variability parameters [118]. The Gibbs sampling method with three Markov chain Monte Carlo (MCMC) chains and 10,000 iterations was used. We set up the (rather non-informative) prior distributions based on the quantitative pilot studies (for the prior distributions, see https://osf.io/7ezax/).
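For readers unfamiliar with the shifted Wald distribution, the following sketch (an illustrative maximum-likelihood version in Python, not the Bayesian Gibbs sampler used in the study; all parameter values and starting points are assumptions) shows its density and how the boundary (alpha), drift (gamma) and shift (theta) parameters can be recovered from simulated RTs:

```python
# Shifted Wald density: f(t) = a / sqrt(2*pi*(t-th)^3)
#                              * exp(-(a - g*(t-th))^2 / (2*(t-th))),
# i.e. an inverse-Gaussian first-passage time shifted by theta.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)

# True parameters: boundary alpha, drift gamma, shift theta.
alpha, gamma, theta = 1.5, 3.0, 0.3
mu_ig, lam = alpha / gamma, alpha**2   # equivalent inverse-Gaussian form
rts = theta + stats.invgauss.rvs(mu_ig / lam, scale=lam, size=3000,
                                 random_state=rng)

def nll(params):
    """Negative log-likelihood of the shifted Wald."""
    a, g, th = params
    t = rts - th
    if a <= 0 or g <= 0 or np.any(t <= 0):
        return np.inf
    return -np.sum(np.log(a) - 0.5 * np.log(2 * np.pi * t**3)
                   - (a - g * t) ** 2 / (2 * t))

fit = optimize.minimize(nll, x0=[1.0, 2.0, 0.1], method="Nelder-Mead")
print(fit.x)  # estimates should be near (1.5, 3.0, 0.3)
```

In a process-model reading, alpha reflects response caution, gamma the rate of evidence accumulation, and theta non-decision time, which is precisely the decomposition that separates cognitive-style effects from psychomotor tempo.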
We also tried alternative procedures for modelling RTs (the ex-Gaussian distribution, lognormal response time item response theory models, Q-diffusion item response theory models and the shifted Wald distribution process model); however, they did not fit the data well (see Appendix 3 in S1 File). The findings of the alternative procedures are reported in Appendix 4 in S1 File.
The CFT3 was modelled via the hierarchical Linear Ballistic Accumulator (LBA) [119], which models five parameters (a drift rate for each accumulator, threshold, starting point, starting point variability and non-decision time), with priors for the group-level means of starting point, non-decision time and threshold, and for the group-level drift rates [120]. A Hamiltonian Monte Carlo algorithm with 4 chains and 4,000 iterations (500 burn-in) was used. Having no prior information about the parameter distributions, we used the prior distributions reported by Annis and colleagues [120].
ICAR subtests were estimated with Rasch models. The analysis of split-half reliability and the correlation analyses were bootstrapped with 10,000 iterations. Since the BFI-II had already been validated in Czech and yielded satisfactory internal consistency in our sample (extraversion ω = .872, agreeableness ω = .869, conscientiousness ω = .907, negative emotionality ω = .914, open-mindedness ω = .881), we used subscale scores calculated as arithmetic means in the analyses.
The stability of the constructs was verified via intraclass correlation coefficients (ICC) with two-way mixed effects and absolute agreement. Discriminant validity was verified via the heterotrait-monotrait ratio of correlations (HTMT) and two one-sided t-tests (TOST) [121]. In accordance with the pre-registration, we set the lower and upper equivalence bounds for TOST to -.25 and .25 based on the smallest effect size of interest (SESOI), as equivalents of practically null (absent) effect sizes. Concurrent and divergent validity were estimated within a multi-trait multi-method matrix (MTMM) and a set of Spearman correlations. The predictive validity of the methods was verified with receiver operating characteristic (ROC) curves on a subsample of contrast groups (participants with high vs. low socio-economic status).
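To illustrate the TOST logic with the ±.25 bounds: a correlation is declared practically null only when both one-sided tests against the bounds reject, which is equivalent to the 90% confidence interval for r lying entirely inside (-.25, .25). A minimal Python sketch using Fisher's z approximation (illustrative only; the study used the TOSTER R package):

```python
import numpy as np
from scipy.stats import norm

def tost_correlation(r, n, low=-0.25, high=0.25, alpha=0.05):
    """Declare a correlation practically null if both one-sided tests
    against the equivalence bounds reject (Fisher z approximation)."""
    se = 1.0 / np.sqrt(n - 3)
    z = np.arctanh(r)
    p_lower = 1 - norm.cdf((z - np.arctanh(low)) / se)  # H1: r > low
    p_upper = norm.cdf((z - np.arctanh(high)) / se)     # H1: r < high
    return bool(max(p_lower, p_upper) < alpha)

equivalent = tost_correlation(0.05, 250)  # well inside the +/- .25 bounds
not_equiv = tost_correlation(0.30, 250)   # outside the bounds
```

Note the asymmetry with ordinary significance testing: a small, non-significant correlation is not automatically "equivalent to zero"; equivalence must be demonstrated against explicit bounds.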
In the structural equation modelling (SEM), we used maximum likelihood estimation with robust (Huber-White) standard errors because the data showed multivariate non-normality [122] according to Henze-Zirkler’s test (HZ = 1.007, p < .001) as well as univariate non-normality of the indicators according to Anderson–Darling tests (ps < .001), and because of the character of the response scale (seven categories) [123]. For the evaluation of configural model fit, we applied the criteria proposed by Hu and Bentler [124]. In SEM, missing values were handled with full information maximum likelihood (FIML); in all other analyses, the pairwise approach was used. All analyses were performed in R (v4.1.1) [125] with the packages lavaan [126], semTools [127], TOSTER [121], psych [128], eRm [129], and irr [130].
Results
Phase 0: Specific evidence of psychometric qualities
In the first step, the factor structure of AHS was verified with confirmatory factor analysis. Even though several alternative structures were proposed (including the brief versions AHS-12 and AHS-4 [77], exploratory structural equation modeling, and models obtained via an exploratory, data-driven approach), none of them achieved the pre-registered criteria of model evaluation (RMSEA < .08, SRMR < .08, CFI > .90, TLI > .90; see Table 2). The internal consistency and average variance extracted (AVE) of all scales were also highly insufficient (locus of attention ω = .159, α = .315, AVE = .110; causal theory ω = .254, α = .314, AVE = .129; perception of change ω = .234, α = .238, AVE = .116; attitude toward contradictions ω = .550, α = .495, AVE = .249). Since the AHS demonstrated significant shortcomings in its factor structure and internal consistency and no model adjustment helped overcome these issues, we omitted the AHS from further analysis (for results of AHS in all phases, see the additional online supplementary materials at https://osf.io/7ezax/).
In the next step, we estimated empirical reliability from the latent trait estimates and their corresponding standard errors (calculated from the diffusion IRT model), i.e., the empirical reliability of maximum a posteriori estimates, which was sufficient (reliability ≥ .70) for both subscales of CFT1 (local = .958, global = .983), CFT2 (local = .974, global = .954) and E-CSA-WA (analytic = .945, holistic = .942).
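For readers unfamiliar with the index: one common formula for empirical reliability treats the variance of the trait estimates as observed variance and the mean squared standard error as the error variance. This sketch shows that general formula, not necessarily the exact estimator used in the study; the function name is hypothetical.

```python
import numpy as np

def empirical_reliability(theta_hat, se):
    """Observed variance of the trait estimates over observed-plus-error
    variance, using the mean squared SE as the error-variance estimate."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    se = np.asarray(se, dtype=float)
    obs_var = theta_hat.var(ddof=1)
    return obs_var / (obs_var + np.mean(se**2))

# three estimates with unit variance and a uniform SE of .2 -> 1 / 1.04
rel = empirical_reliability([-1.0, 0.0, 1.0], [0.2, 0.2, 0.2])
```

Intuitively, reliability approaches 1 as the standard errors shrink relative to the spread of the estimates, which is why the precisely estimated drift-based scores above reach values well over .90.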
Split-half reliability was estimated only for ART; it was calculated on random halves using Guttman’s λ2. The reliability was exactly at the pre-registered threshold for sufficient evidence of reliability (analytic = .50, holistic = .50).
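Guttman’s λ2 can be computed directly from the item covariance matrix. The following Python sketch is illustrative only (the study used the psych R package); the simulated four-item demo is hypothetical.

```python
import numpy as np

def guttman_lambda2(items):
    """Guttman's lambda-2 from an (n_persons, n_items) score matrix."""
    C = np.cov(items, rowvar=False)
    k = C.shape[0]
    total_var = C.sum()                # variance of the sum score
    off = C - np.diag(np.diag(C))      # off-diagonal covariances
    c_sum = off.sum()
    c_sq = (off**2).sum()
    return (c_sum + np.sqrt(k / (k - 1) * c_sq)) / total_var

# demo: four parallel items (loading 1, noise sd .8); theoretical lambda-2 ~ .86
rng = np.random.default_rng(0)
true_score = rng.normal(size=(500, 1))
items = true_score + rng.normal(scale=0.8, size=(500, 4))
l2 = guttman_lambda2(items)
```

λ2 is never smaller than Cronbach’s alpha, which makes it a slightly less conservative lower bound on reliability and a common choice for split-half estimation.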
Phase 1: Stability of the construct
We found that ART indicated moderate test-retest reliability for the holistic subtest but insufficient reliability for the analytic subtest. CFT3 indicated good reliability. CFT1, CFT2 and E-CSA-WA indicated good reliability in median raw RTs and moderate reliability estimated according to the drift parameter from the Bayesian 4-parameter shifted Wald process model (except for the holistic subtest of E-CSA-WA, which, similarly to ART, showed an ICC slightly below the pre-registered .50; see Tables 3 and 4).
Phase 2: Discriminant validity with intelligence and personality
Concerning the discriminant validity of E-CSA-WA and CFT2 with personality traits, none of the HTMT values was above .90. Hence, both methods showed satisfactory discriminant validity (see Table 5).
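The HTMT index compares the average correlation between items of different constructs with the geometric mean of the average within-construct correlations; values near 1 indicate the two constructs are empirically indistinguishable. A schematic Python sketch, assuming two item sets in a shared correlation matrix (the toy matrix below is hypothetical):

```python
import numpy as np

def htmt(R, idx1, idx2):
    """Heterotrait-monotrait ratio from an item correlation matrix R;
    idx1 and idx2 hold the item indices of the two constructs."""
    # heterotrait-heteromethod: correlations across the two constructs
    hetero = np.mean([R[i, j] for i in idx1 for j in idx2])
    # monotrait-heteromethod: correlations among items of one construct
    def mono(idx):
        return np.mean([R[i, j] for a, i in enumerate(idx) for j in idx[a + 1:]])
    return hetero / np.sqrt(mono(idx1) * mono(idx2))

# toy matrix: within-construct r = .6, cross-construct r = .3 -> HTMT = .5
R = np.array([[1.0, 0.6, 0.3, 0.3],
              [0.6, 1.0, 0.3, 0.3],
              [0.3, 0.3, 1.0, 0.6],
              [0.3, 0.3, 0.6, 1.0]])
h = htmt(R, [0, 1], [2, 3])
```

The .90 cut-off used in the paper is the conventional liberal threshold; an HTMT of .5, as in this toy example, would indicate clearly separable constructs.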
The discriminant validity of ART, CFT1 and CFT3 with personality traits was verified using TOST. All correlations with personality were lower than the pre-registered .20 (see Table 6). The only exception was the drift parameter of the CFT3 holistic subtest, which correlated with conscientiousness, but its correlation only negligibly exceeded the pre-registered threshold (r = .205). We can therefore conclude that all methods indicated satisfactory discriminant validity with personality traits.
The discriminant validity of all methods with intelligence was verified using TOST. CFT1 indicated weak negative correlations with matrix reasoning for both raw RTs and drift parameters (quicker participants in both subscales were more successful in matrix reasoning). These associations, however, were contained within the equivalence range, and therefore practical significance was not established. Associations between CFT2, CFT3 and E-CSA-WA were generally much lower and therefore also practically insignificant for both RTs and drift parameters (see Table 7). Nevertheless, CFT2 indicated practically significant and potentially problematic associations with the rotation subtest according to the alternative RT estimations (i.e., the theta parameters of the diffusion IRT models and lognormal RT IRT models; see Appendix 4, Table S3D in S1 File). Finally, the holistic subtest of ART indicated a statistically and practically significant negative association with the rotation subtest of ICAR. Participants who were more accurate in drawing relative lines in ART were also more successful in rotation. Since the discriminant criterion with intelligence is crucial, and ART also revealed some issues with stability, we omitted this method from the next phase of the validation process (for results of ART in all phases, see the additional online supplementary materials at https://osf.io/7ezax/).
Phase 3: Concurrent and divergent validity
The results of the MTMM suggest that the remaining four methods measure entirely different traits (i.e., the associations between related analytic/holistic subtests from various methods are not satisfactory). Similarly low associations were found for the same-method different-trait values (i.e., the association between the analytic and holistic subtests within a single measure) and for the different-method different-trait values (i.e., associations between analytic/holistic subtests from different methods which should not be related; see Table 8).
To analyse specific associations between methods in more detail, we performed additional correlation analyses (see Table 9). It is evident that CFT3 was not associated with any other measure, since all its correlations were lower than .12. Although the associations between CFT1, CFT2 and E-CSA-WA at the RT level were higher than the pre-registered threshold of .30, these results were not replicated for the drift parameters. The lack of association between the drift parameters of the various methods indicates that CFT3 and CFT1 represent different aspects of AH. Even though CFT2 and E-CSA-WA showed weak associations between their drift parameters, these cannot be interpreted as satisfactory concurrent validity, and therefore they most likely also measure different facets of AH.
Even though the MTMM revealed very small same-method different-trait values, we also report the associations between the analytic and holistic subtests within each method separately because they provide more evidence about the dimensionality of the construct (see Table 10). CFT1, CFT2 and E-CSA-WA showed very strong associations between the two subtests for RTs, and the subtests of CFT2 and E-CSA-WA remained highly correlated even for the drift parameters. On the other hand, CFT3 showed negative associations between its subtests (which lowered the same-method different-trait value, since the subtests of the other three methods were positively associated). This suggests that CFT1 might effectively distinguish between the analytic and holistic dimensions, whereas the assumption of two dimensions might be violated for CFT2 and E-CSA-WA. As for CFT3, the negative association means that participants who score higher on the analytic subtest score lower on the holistic subtest (and vice versa), which suggests a unidimensional structure of AH.
Phase 4: Predictive validity
The final phase of the validation process attempted to verify the predictive validity of the methods. According to some evidence, social class should affect AH similarly to other cultural influences [131]. Persons from lower social classes should be more holistic and less analytic than persons from higher social classes. For this purpose, we split socio-economic status into two extremes: poor and lower-mid socio-economic status (N = 57) on one side and upper-mid socio-economic status (N = 36) on the other. We constructed ROC curves and calculated the area under the curve (AUC) to assess the ability of the methods of interest to discriminate between high and low socio-economic status.
Only the CFT1 local subtest met the recommended criterion of acceptable discrimination (i.e., AUC > .70) at the RT level in the expected direction. However, the remaining AUC values for both the raw RT level and the drift parameters were much smaller and thus hardly interpretable, especially since some of the means of the drift parameters and RTs were opposite to the expected results (see Table 11).
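The AUC reported here has a simple probabilistic reading: it equals the probability that a randomly chosen participant from one group scores higher than a randomly chosen participant from the other (the Mann-Whitney formulation), with .5 meaning chance-level discrimination. A small illustrative sketch with hypothetical scores:

```python
import numpy as np

def auc_mann_whitney(scores_a, scores_b):
    """AUC = probability that a random case from group A outscores a
    random case from group B, counting ties as one half."""
    a = np.asarray(scores_a, dtype=float)[:, None]
    b = np.asarray(scores_b, dtype=float)[None, :]
    return (a > b).mean() + 0.5 * (a == b).mean()

# perfectly separated groups give AUC = 1.0; identical groups give .5
auc_hi = auc_mann_whitney([3, 4, 5], [0, 1, 2])
auc_mid = auc_mann_whitney([1, 2], [1, 2])
```

On this scale, the .70 criterion means the instrument correctly orders a random high/low socio-economic-status pair at least 70% of the time.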
Discussion
Psychometric properties of developed AH instruments
This article presented the psychometric properties of six proposed methods for measuring AH. The FLT appeared to be a very promising successor to Witkin’s rod-and-frame test. However, our computer-based adaptation demonstrated problems with the stability of the construct over time and an undesirable correlation with general intelligence, namely with its spatial ability subtest. Since no previous study has provided sufficient evidence for the validity of this method, in the context of this study we cannot recommend it for further use. It appears that the rod-and-frame principle might be an interesting indicator of spatial ability, but its informative value regarding analytic and holistic cognitive styles remains ambiguous.
The E-CSA-WA, in contrast, indicated moderate test-retest reliability, an absence of association with personality and intelligence, and very weak concurrent validity with CFT1 and CFT2. Together with previous evidence of validity and reliability [57, 59] and a sufficient number of items per subscale for reliable RT estimation, we may, with certain reservations, consider it a valid method and recommend it (and the embedded-figures principle behind the instrument) for further use in assessing an individual’s cognitive style. However, the moderate correlation between its two subtests might suggest a certain dependence between the analytic and holistic modes of processing information, a limitation which should be studied further.
Regarding the Navon figures, three modifications of this test were used (CFT1, CFT2 and CFT3). All relevant psychometric properties of CFT1 were found to be satisfactory. Even though this instrument indicated a weak association with intelligence, it was not practically significant. The two subtests were only weakly associated with each other, and hence we can recommend CFT1 for further use. However, since CFT1 contains a small number of items (although for traditional RT analysis this might be considered satisfactory [105, 132]), the estimation of the drift parameters in the shifted Wald model might be unreliable, and adding more items per subtest is desirable. To reflect this in future research, at least fifty items per subtest are recommended for a reliable estimation of drift parameters within the shifted Wald distribution [133].
CFT2 also satisfied almost all validity and reliability criteria (with the exception of the high correlation between its subtests, similarly to E-CSA-WA). CFT2, however, yielded some inconsistencies that require investigation before the instrument is used as an indicator of AH. The accuracy of CFT2 showed higher variability in responses and was generally lower than in the other instruments. Relatedly, the alternative RT estimations that take difficulty into account, for example lognormal response time item response theory models and Q-diffusion item response theory models, also indicated a slight improvement in model fit (and therefore more reliable estimation). Even though we still cannot consider these estimations reliable, they indicate that CFT2 has a practically significant association with intelligence (the rotation subtest of ICAR, see Table S3D in Appendix 4 in S1 File). Is it therefore possible that when the difficulty of the tasks increases, solving them automatically overlaps with cognitive ability? If so, AH measurement must rely on simpler tasks to preserve its discriminant validity with cognitive abilities such as intelligence. We believe that the principle behind CFT2 needs further examination to verify whether the incorporation of more challenging tasks generates associations with general intelligence.
CFT3 overall possessed good psychometric properties. Its stability over time was very high, probably because it is based on a different principle than the previous two Navon hierarchical-figure tests (i.e., a similarity matching task). It was not associated with intelligence or with most personality traits. Even though it was slightly associated with conscientiousness (more holistic people were more conscientious), the association was very low and did not jeopardize its discriminant validity. Furthermore, with respect to the observed systematic cultural differences in the Big Five personality traits [134], this finding appears logical. After a few modifications, even this instrument can be considered for further use. The crucial modification should be an increase in the number of tasks, since the LBA generally needs more items to be reliable [119].
Our findings indicate that the self-report AHS failed to pass the first criterion of a valid factor structure. Since repeated verification of factor structure on different samples from multiple cultural groups is considered necessary when establishing the validity of questionnaires measuring cross-cultural constructs [86], its validity might be compromised. It is, of course, possible that our findings are specific to Czech samples, but since prior evidence of factor structure and cross-cultural measurement invariance is scarce in the literature, we cannot recommend using the AHS self-report questionnaire in research in its current form. Future validation research on various cultural samples is needed.
Common shortcomings of AH instruments and future research
Although in the previous section we recommended three methods and the principles behind them for reliable AH measurement in the future (CFT1, CFT3, E-CSA-WA) and one method for deeper inspection of the link between difficulty in AH tasks and general intelligence (CFT2), we also identified some of their key limitations. These issues do not necessarily jeopardize the validity or reliability of the methods, because they may simply stem from the insufficiently substantiated theoretical background of AH research. They might also point to directions for future research in the AH field.
The first limitation relates to the stability of the AH construct. CFT1, CFT2 and E-CSA-WA only narrowly exceeded the pre-registered threshold of ICC > .50 for the drift parameters estimated within the Bayesian 4-parameter shifted Wald process models (the holistic subtest of the E-CSA-WA was slightly below this threshold). Good stability was shown only by CFT3, which was based not on the Navon search task but on the Navon similarity matching task. These results were not surprising, as a considerable number of questions have recently been raised about the stability of the AH concept. For example, Zhang [32] argued that cognitive styles are an inherently dynamic phenomenon, and Kozhevnikov and colleagues [4] emphasized the more task-dependent character of cognitive styles. Many situational factors are also thought to potentially affect scores in perceptual tasks. For instance, RTs for Navon figures may be affected by the participant’s current mood (positive moods are more likely to elicit a global level of processing, whereas negative moods lead to a local level of processing [e.g., 135, 136]). Our results thus supported the theoretical models suggesting a dynamic change in the AH level rather than the traditional view of a cognitive style as an entirely stable trait. Further research manipulating the length of the test-retest interval and examining the factors influencing the change in the level of AH (such as the effects of training or emotional state) can enrich the current knowledge about the stability of analytic and holistic cognition.
The second limitation relates to the dimensionality of AH. The correlation analysis revealed that the methods are not effective in distinguishing between their analytic and holistic subtests (apart from CFT3). It is possible that E-CSA-WA and CFT2 both measure one-dimensional constructs with two slightly different tasks (or that both dimensions underlie a single second-order factor). What seems probable is that analytic and holistic styles do not represent orthogonal dimensions but are at least to some extent associated with each other. Another explanation might lie in the “meta-style” called flexibility-rigidity [137]. It is possible that this style highly saturates the scores in E-CSA-WA and CFT2, so that many participants with a highly flexible style obtain high scores on both subtests, whereas those with high rigidity obtain low scores on both. It is a question whether this finding should be considered a limitation or an immanent feature of AH measurement at the individual level [cf. 82, 138]. Further research could attempt to distinguish the analytic and holistic subtests from each other more satisfactorily. This distinction could be pursued with eye-tracking research measuring dwell time spent on background and dominant objects. Based on our findings, analytic components might emphasize a focus on detail (the simple figures to be identified in complex figures should differ only in small details), and holistic parts might incorporate even more complex and embedded backgrounds for the figures. However, from the current theoretical approaches in combination with our empirical findings, it is impossible to decide whether the analytic and holistic subtests should be (positively) correlated, as this association was beyond the scope of previous research (except as a consequence of the usage of derived indices) and must be replicated.
The third limitation concerns divergent validity. The MTMM showed that the methods do not effectively distinguish between subtests. Concerning concurrent validity, a deeper inspection using correlation analyses revealed that CFT1 and CFT2 were associated only weakly (rs < .30) with E-CSA-WA and that CFT2 and CFT1 did not correlate at all. CFT3 was also unrelated to any other method. These results agree with previous research which revealed only weak associations between various AH instruments and between modified versions of an instrument [e.g., 50, 64, 66, 70, 72, 82, 83]. These findings strongly contradict a two-dimensional AH theoretical model, which we suggest should be revised with respect to our present findings. This is also in line with other research which has already found a two-dimensional model of AH unsuitable and simplistic [e.g., 9, 82, 103, 139]. Our results can also explain why some studies showed contradictory or ambiguous results in the flagship of AH research, East-West cross-cultural comparisons [e.g., 9, 102, 103, 132, 140–142].
Future research can be inspired, for instance, by complex multilevel hierarchical models, which were proposed to deal with the multitude of cognitive style models but not specifically with the AH dimension. For example, Kozhevnikov and colleagues [4] described four main clusters of cognitive style, namely context dependence/independence, rule-based vs. intuitive processing, integration vs. compartmentalization, and internal vs. external locus of processing, which can manifest at four hierarchically sorted levels: perception, concept formation, higher-order cognitive processing and metacognitive processing. It is thus possible that AH methods measure to some extent independent facets of the AH construct, or even entirely independent constructs which manifest similarly in cross-cultural comparisons but are not related at the individual level.
Finally, the fourth limitation is in the lack of predictive validity. None of the methods were capable of detecting the differences between participants of low and high socioeconomic status, and some of the statistically significant differences even indicated opposite directions (i.e., participants with higher socio-economic status showed higher levels of both holistic and analytic cognitive style). One group being higher in both subtests than the other is one of the possible outcomes of group comparisons and does not necessarily mean that instruments measure ability rather than style or trait. For example, Lee and colleagues [143] compared holistic and analytic thinkers (based on categorization/triad task) and found that holistic thinkers were quicker in both local and global subtests of hierarchical figures. The true reason behind these findings, however, is most likely the uncertain dimensionality of the construct and should be considered a topic for future research.
Study limitations
The presented study has several limitations. First, we were not able to obtain the pre-registered number of participants (N = 500). Even though the a priori power analysis suggested that 250 observations should be sufficient for the planned reaction-time modelling techniques, the final sample size was relatively small, especially for the analyses based on structural equation modelling, and could decrease statistical power. Second, we observed a relatively high number of missing values in the dataset (most participants omitted some methods), most likely caused by the online administration. We also had to remove the ICAR number series subtest as a result of technical issues during data collection. Third, since AH is mainly a cross-cultural theory, its predictive validity lies in cross-cultural comparison rather than in comparisons within a single cultural group. Using socio-economic status as the main criterion of predictive validity is therefore another limitation of this study. Hence, we must conclude that the predictive validity of the proposed instruments remains unknown, and further robust cross-cultural validation of the instruments is desirable.
Acknowledgments
We would like to thank the experimental humanities laboratory at Masaryk University (HUMELab) for providing us with the Hypothesis software, Dr Elizabeth R. Peterson for providing us with a licence and materials for E-CSA-WA, Dr Dylan Molenaar for helping us solve an issue with a function in the diffIRT R package, and Bc. Bianka Masariková, Bc. Kamila Vlčková and Mgr. Nicol Dostálová for data collection during pilot testing.
References
- 1. Messick S. Personality consistencies in cognition and creativity. In: Messick S, editor. Individuality in learning. San Francisco: Jossey-Bass; 1976. pp. 4–23.
- 2. Witkin HA, Moore CA, Goodenough D, Cox PW. Field-Dependent and Field-Independent Cognitive Styles and Their Educational Implications. Rev Educ Res. 1977 Mar;47(1):1–64.
- 3. Kozhevnikov M. Cognitive styles in the context of modern psychology: Toward an integrated framework of cognitive style. Psychol Bull. 2007 May;133(3):464–81. pmid:17469987
- 4. Kozhevnikov M, Evans C, Kosslyn SM. Cognitive Style as Environmentally Sensitive Individual Differences in Cognition. Psychol Sci Public Interest. 2014 May;15(1):3–33.
- 5. Riding RJ, Rayner S. Cognitive Styles and Learning Strategies: Understanding Style Differences in Learning and Behavior. London: David Fulton Publishers; 1998.
- 6. Sternberg RJ, Grigorenko EL. Are cognitive styles still in style?. Am Psychol. 1997 Jul;52(7):700–12.
- 7. Riding RJ, Cheema I. Cognitive Styles—an overview and integration. Educ Psychol. 1991 Jan;11(3–4):193–215.
- 8. Anakwah N, Horselenberg R, Hope L, Amankwah‐Poku M, Koppen PJ. Cross‐cultural differences in eyewitness memory reports. Appl Cognit Psychol. 2020 Mar;34(2):504–15.
- 9. Čeněk J, Tsai J, Šašinka Č. Cultural variations in global and local attention and eye-movement patterns during the perception of complex visual scenes: Comparison of Czech and Taiwanese university students. PLoS ONE. 2020 Nov 16;15(11):e0242501. pmid:33196671
- 10. Lawrence RK, Edwards M, Chan GW, Cox JA, Goodhew SC. Does cultural background predict the spatial distribution of attention?. Cult Brain. 2020 Dec;8(2):137–65.
- 11. Beekman TL, Seo H. Analytic versus holistic: Cognitive styles can influence consumer response and behavior toward foods. J Sens Stud. 2022 Apr;37(2).
- 12. Park H, Kim S, Lee J. Native advertising in mobile applications: Thinking styles and congruency as moderators. J Mark Commun. 2020 Aug 17;26(6):575–95.
- 13. Chen B. Enhance creative performance via exposure to examples: The role of cognitive thinking style. Pers Individ Differ. 2020 Feb;154:109663.
- 14. Hsieh S, Yu Y, Chen E, Yang C, Wang C. ERP correlates of a flanker task with varying levels of analytic-holistic cognitive style. Pers Individ Differ. 2020 Jan;153:109673.
- 15. Spaccatini F, Pancani L, Richetin J, Riva P, Sacchi S. Individual cognitive style affects flood‐risk perception and mitigation intentions. J Appl Soc Psychol. 2021 Mar;51(3):208–18.
- 16. Baughan A, Oliveira N, August T, Yamashita N, Reinecke K. Do cross-cultural differences in visual attention patterns affect search efficiency on websites? Conf Hum. 2021 May; 326:1–12.
- 17. Zhou X, Requero B, Gonçalves D, Santos D. Every penny counts: The effect of holistic-analytic thinking style on donation decisions in the times of Covid-19. Pers Individ Differ. 2021 Jun;175:110713.
- 18. Witkin HA, Goodenough DR. Field dependence and interpersonal behavior. ETS Res Rep Ser. 1976 Jun;1976(1):i–78.
- 19. Boccia M, Piccardi L, Di Marco M, Pizzamiglio L, Guariglia C. Does field independence predict visuo-spatial abilities underpinning human navigation? Behavioural evidence. Exp Brain Res. 2016 Oct;234(10):2799–807.
- 20. Cuneo F, Antonietti J, Mohr C. Unkept promises of cognitive styles: A new look at old measurements. PLoS ONE. 2018 Aug 28;13(8):e0203115. pmid:30153302
- 21. Guisande M, Páramo M, Tinajero C, Almeida L. Field dependence-independence (FDI) cognitive style: An analysis of attentional functioning. Psicothema. 2007;19(4):572–577. pmid:17959109
- 22. Miyake A, Witzki AH, Emerson MJ. Field dependence–independence from a working memory perspective: A dual-task investigation of the Hidden Figures Test. Memory. 2001 Jul;9(4–6):445–57. pmid:11594363
- 23. Rémy L, Gilles P. Relationship between field dependence-independence and the g factor. Rev Eur Psychol Appl. 2014 Mar;64(2):77–82.
- 24. Tinajero C, Páramo MF. Field dependence-independence and academic achievement: a re-examination of their relationship. Br J Educ Psychol. 1997 Jun;67(2):199–212.
- 25. Furnham A. Personality and learning style: A study of three instruments. Pers Individ Differ. 1992 Apr;13(4):429–38.
- 26. von Wittich D, Antonakis J. The KAI cognitive style inventory: Was it personality all along?. Pers Individ Differ. 2011 May;50(7):1044–9.
- 27. Bergman H, Engelbrektson K. An examination of factor structure of Rod-and-frame Test and Embedded-figures Test. Percept Mot Skills. 1973 Dec;37(3):939–947. pmid:4764529
- 28. Zhang L. Field-dependence/independence: cognitive style or perceptual ability?––validating against thinking styles and academic achievement. Pers Individ Differ. 2004 Oct;37(6):1295–311.
- 29. Goodenough DR, Witkin HA. Origins of field-dependent and field-independent cognitive styles. ETS Res Bull Ser. 1977 Jun;1977(1):i–80.
- 30. Allinson CW, Hayes J. The Cognitive Style Index: A Measure of Intuition-Analysis For Organizational Research. J Management Studies. 1996 Jan;33(1):119–35.
- 31. Witkin HA, Berry JW. Psychological differentiation in cross-cultural perspective. ETS Res Bull Ser. 1975 Jun;1975(1):i–100.
- 32. Zhang L. The malleability of intellectual styles. New York: Cambridge University Press; 2013.
- 33. Lis DJ, Powers JE. Reliability and Validity of the Group Embedded Figures Test for a Grade School Sample. Percept Mot Skills. 1979 Apr;48(2):660–2. pmid:461067
- 34. Kepner MD, Neimark ED. Test–retest reliability and differential patterns of score change on the Group Embedded Figures Test. J Pers Soc Psychol. 1984 Jun;46(6):1405–13. pmid:6737219
- 35. Goldstein AG, Chance JE. Effects of practice on sex-related differences in performance on Embedded Figures. Psychon Sci. 1965 Jan;3(1–12):361–2.
- 36. Ludwig I, Lachnit H. Effects of practice and transfer in the detection of embedded figures. Psychol Res. 2004 Aug;68(4). pmid:12937981
- 37. Álvarez-Montero FJ, Leyva-Cruz MG, Moreno-Alcaraz F. Learning Styles Inventories: an update of Coffield, Moseley, Hall, & Ecclestone’s Reliability and Validity Matrix. EJREP. 2018 Dec 9;16(46):597–629.
- 38. Coffield F, Moseley D, Hall E, Ecclestone K. Learning Styles and Pedagogy in Post-16 Learning. London: Learning Skills Research Centre; 2004.
- 39. Curry L. A critique of the research on learning styles. Educ Leadersh. 1990;48:50–52.
- 40. Bendall RCA, Galpin A, Marrow LP, Cassidy S. Cognitive Style: Time to Experiment. Front Psychol. 2016 Nov 15;7. pmid:27895616
- 41. Witkin HA. Socialization, Culture and Ecology in the Development of Group and Sex Differences in Cognitive Style. Hum Dev. 1979;22(5):358–72.
- 42. Nisbett R. The geography of thought: How Asians and Westerners think differently… and why. New York: The Free Press; 2003.
- 43. Sloman SA. The empirical case for two systems of reasoning. Psychol Bull. 1996;119(1):3–22.
- 44. Nisbett RE, Peng K, Choi I, Norenzayan A. Culture and systems of thought: Holistic versus analytic cognition. Psychol Rev. 2001;108(2):291–310. pmid:11381831
- 45. Nisbett RE, Masuda T. Culture and point of view. Proc Natl Acad Sci USA. 2003 Sep 16;100(19):11163–70. pmid:12960375
- 46. Nisbett RE, Miyamoto Y. The influence of culture: holistic versus analytic perception. Trends Cogn Sci. 2005 Oct;9(10):467–73. pmid:16129648
- 47. Wertheimer M. Untersuchungen zur Lehre von der Gestalt. Psychol Forsch. 1922;1(1):47–58.
- 48. Kitayama S, Duffy S, Kawamura T, Larsen JT. Perceiving an Object and Its Context in Different Cultures. Psychol Sci. 2003 May;14(3):201–6.
- 49. Kitayama S, Park H, Sevincer AT, Karasawa M, Uskul AK. A cultural task analysis of implicit independence: Comparing North America, Western Europe, and East Asia. J Pers Soc Psychol. 2009;97(2):236–55. pmid:19634973
- 50. Na J, Grossmann I, Varnum MEW, Karasawa M, Cho Y, Kitayama S, et al. Culture and personality revisited: Behavioral profiles and within‐person stability in interdependent (vs. independent) social orientation and holistic (vs. analytic) cognitive style. J Pers. 2020 Oct;88(5):908–24. pmid:31869444
- 51. Gottschaldt K. Über den Einfluß der Erfahrung auf die Wahrnehmung von Figuren. Psychol Forsch. 1926 Dec;8(1):261–317.
- 52. Riding RJ, Pearson F. The Relationship between Cognitive Style and Intelligence. Educ Psychol. 1994 Jan;14(4):413–25.
- 53. Riding RJ, Wigley S. The relationship between cognitive style and personality in further education students. Pers Individ Differ. 1997 Sep;23(3):379–89.
- 54. Peterson ER, Meissel K. The effect of Cognitive Style Analysis (CSA) test on achievement: A meta-analytic review. Learn Individ Differ. 2015 Feb;38:115–22.
- 55. Cook DA. Scores From Riding’s Cognitive Styles Analysis Have Poor Test–Retest Reliability. Teach Learn Med. 2008 Jul 14;20(3):225–9. pmid:18615296
- 56. Parkinson A, Mullally A, Redmond J. Test–retest reliability of Riding’s cognitive styles analysis test. Pers Individ Differ. 2004 Oct;37(6):1273–8.
- 57. Peterson ER, Deary IJ, Austin EJ. The reliability of Riding’s Cognitive Style Analysis test. Pers Individ Differ. 2003 Apr;34(5):881–91.
- 58. Rezaei AR, Katz L. Evaluation of the reliability and validity of the cognitive styles analysis. Pers Individ Differ. 2004 Apr;36(6):1317–27.
- 59. Peterson ER, Deary IJ, Austin EJ. Are intelligence and personality related to verbal-imagery and wholistic-analytic cognitive styles?. Pers Individ Differ. 2005 Jul;39(1):201–13.
- 60. Aslan H, Aslan A, Dinc D, Yunluel D. Testing the Reliability of CSA Test on a Sample of Turkish Population. Int J Sci Technol Res. 2018;4(9):27–31.
- 61. Pitta-Pantazi D, Christou C. Cognitive styles, task presentation mode and mathematical performance. Res Math Educ. 2009 Sep;11(2):131–48.
- 62. Navon D. Forest before trees: The precedence of global features in visual perception. Cogn Psychol. 1977 Jul;9(3):353–83.
- 63. Navon D. The forest revisited: More on global precedence. Psychol Res. 1981 Jul;43(1):1–32.
- 64. Peterson ER, Deary IJ. Examining wholistic–analytic style using preferences in early information processing. Pers Individ Differ. 2006 Jul;41(1):3–14.
- 65. Caparos S, Fortier-St-Pierre S, Gosselin J, Blanchette I, Brisson B. The tree to the left, the forest to the right: Political attitude and perceptual bias. Cognition. 2015 Jan;134:155–64. pmid:25460388
- 66. Dale G, Arnell KM. Investigating the stability of and relationships among global/local processing measures. Atten Percept Psychophys. 2013 Apr;75(3):394–406. pmid:23354593
- 67. Dale G, Arnell KM. Lost in the Forest, Stuck in the Trees: Dispositional Global/Local Bias Is Resistant to Exposure to High and Low Spatial Frequencies. PLoS ONE. 2014 Jul 3;9(7):e98625. pmid:24992321
- 68. Gerlach C, Poirel N. Navon’s classical paradigm concerning local and global processing relates systematically to visual object classification performance. Sci Rep. 2018 Dec;8(1). pmid:29321634
- 69. Gerlach C, Starrfelt R. Global precedence effects account for individual differences in both face and object recognition performance. Psychon Bull Rev. 2018 Aug;25(4):1365–72. pmid:29560562
- 70. Chamberlain R, Van der Hallen R, Huygelier H, Van de Cruys S, Wagemans J. Local-global processing bias is not a unitary individual difference in visual processing. Vis Res. 2017 Dec;141:247–57. pmid:28427891
- 71. Hedge C, Powell G, Sumner P. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behav Res. 2018 Jun;50(3):1166–86.
- 72. Milne E, Szczerbinski M. Global and local perceptual style, field-independence, and central coherence: An attempt at concept validation. Adv Cogn Psychol. 2009 Jan 1;5:1–26. pmid:20523847
- 73. Choi I, Dalal R, Kim-Prieto C, Park H. Culture and judgement of causal relevance. J Pers Soc Psychol. 2003;84(1):46–59.
- 74. Choi I, Koo M, Choi J. Individual differences in analytic versus holistic thinking. Pers Soc Psychol Bull. 2007;33(5):691–705. pmid:17440200
- 75. Koo M, Choi J, Choi I. Analytic versus holistic cognition: Constructs and measurement. In: Spencer-Rodgers J, Peng K, editors. The psychological and cultural foundations of East Asian cognition: Contradiction, change, and holism. Oxford University Press; 2018: pp. 105–134.
- 76. Norenzayan A, Smith EE, Kim BJ, Nisbett RE. Cultural preferences for formal versus intuitive reasoning. Cogn Sci. 2002 Sep;26(5):653–84.
- 77. Martín-Fernández M, Requero B, Zhou X, Gonçalves D, Santos D. Refinement of the Analysis-Holism Scale: A cross-cultural adaptation and validation of two shortened measures of analytic versus holistic thinking in Spain and the United States. Pers Individ Differ. 2022;186(111322):111322.
- 78. Lux AA, Grover SL, Teo STT. Development and Validation of the Holistic Cognition Scale. Front Psychol. 2021 Sep 30;12. pmid:34658981
- 79. Chua HF, Boland JE, Nisbett RE. Cultural variation in eye movements during scene perception. Proc Natl Acad Sci USA. 2005 Aug 30;102(35):12629–33. pmid:16116075
- 80. Masuda T, Nisbett RE. Attending holistically versus analytically: comparing the context sensitivity of Japanese and Americans. J Pers Soc Psychol. 2001;81(5):922–934. pmid:11708567
- 81. Masuda T, Nisbett RE. Culture and Change Blindness. Cogn Sci. 2006 Mar 4;30(2):381–99. pmid:21702819
- 82. Na J, Grossmann I, Varnum MEW, Kitayama S, Gonzalez R, Nisbett RE. Cultural differences are not always reducible to individual differences. Proc Natl Acad Sci USA. 2010 Apr 6;107(14):6192–7. pmid:20308553
- 83. Huygelier H, Van der Hallen R, Wagemans J, de-Wit L, Chamberlain R. The Leuven Embedded Figures Test (L-EFT): measuring perception, intelligence or executive function?. PeerJ. 2018 Mar 26;6:e4524. pmid:29607257
- 84. Poirel N, Pineau A, Jobard G, Mellet E. Seeing the Forest Before the Trees Depends on Individual Field-Dependency Characteristics. Exp Psychol. 2008 Jan;55(5):328–33. pmid:25116300
- 85. Sadler-Smith E, Spicer DP, Tsang F. Validity of the Cognitive Style Index: Replication and Extension. Br J Management. 2000 Jun;11(2):175–81.
- 86. Lacko D, Čeněk J, Točík J, Avsec A, Đorđević V, Genc A, et al. The Necessity of Testing Measurement Invariance in Cross-Cultural Research: Potential Bias in Cross-Cultural Comparisons With Individualism–Collectivism Self-Report Scales. Cross Cult Res. 2022 Apr;56(2–3):228–67.
- 87. Balota DA, Yap MJ. Moving Beyond the Mean in Studies of Mental Chronometry. Curr Dir Psychol Sci. 2011 Jun;20(3):160–6.
- 88. Van Zandt T. Analysis of Response Time Distributions. In: Pashler E, Wixted J, editors. Stevens’ handbook of experimental psychology: Methodology in experimental psychology. John Wiley & Sons Inc; 2002: pp. 461–516.
- 89. Whelan R. Effective Analysis of Reaction Time Data. Psychol Rec. 2008 Jul;58(3):475–82.
- 90. Lo S, Andrews S. To transform or not to transform: using generalized linear mixed models to analyse reaction time data. Front Psychol. 2015 Aug 7;6.
- 91. De Boeck P, Jeon M. An Overview of Models for Response Times and Processes in Cognitive Tests. Front Psychol. 2019 Feb 6;10. pmid:30787891
- 92. Kyllonen P, Zu J. Use of Response Time for Measuring Cognitive Ability. J Intell. 2016 Nov 1;4(4):14.
- 93. Molenaar D, Tuerlinckx F, van der Maas HLJ. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT. J. Stat. Soft. 2015;66(4):1–34.
- 94. Fox JP, Klotzke K, Simsek AS. LNIRT: An R Package for Joint Modeling of Response Accuracy and Times. arXiv:2106.10144 [preprint]; 2021. Available from: https://arxiv.org/abs/2106.10144
- 95. Apanovich V, Bezdenezhnykh BN, Sams M, Jääskeläinen IP, Alexandrov Y. Event-related potentials during individual, cooperative, and competitive task performance differ in subjects with analytic vs. holistic thinking. Int J Psychophysiol. 2018 Jan;123:136–142. pmid:28986326
- 96. Apanovich V, Aramyan E, Dol’nikova M, Aleksandrov Y. Differences in brain support for solving analytical and holistic problems. Psikholog Zh. 2021;42(2):45–60.
- 97. Peterson ER. Verbal Imagery Cognitive Styles Test & Extended Cognitive Style Analysis-Wholistic Analytic Test: Administration Guide. University of Edinburgh; 2005.
- 98. McKone E, Aimola Davies A, Fernando D, Aalders R, Leung H, Wickramariyaratne T, et al. Asia has the global advantage: Race and visual attention. Vis Res. 2010 Jul;50(16):1540–9. pmid:20488198
- 99. Istomin KV, Panáková J, Heady P. Culture, Perception, and Artistic Visualization: A Comparative Study of Children’s Drawings in Three Siberian Cultural Groups. Cogn Sci. 2014 Jan;38(1):76–100. pmid:23800235
- 100. Gerlach C, Krumborg JR. Same, same—but different: On the use of Navon derived measures of global/local processing in studies of face processing. Acta Psychol. 2014 Nov;153:28–38.
- 101. Draheim C, Mashburn CA, Martin JD, Engle RW. Reaction time in differential and developmental research: A review and commentary on the problems and alternatives. Psychol Bull. 2019;145(5):508–535. pmid:30896187
- 102. Hakim N, Simons DJ, Zhao H, Wan X. Do Easterners and Westerners Differ in Visual Cognition? A Preregistered Examination of Three Visual Cognition Tasks. Soc Psychol Personal Sci. 2017 Mar;8(2):142–52.
- 103. Lacko D, Šašinka Č, Stachoň Z, Lu W, Čeněk J. Cross-Cultural Differences in Cognitive Style, Individualism/Collectivism and Map Reading between Central European and East Asian University Students. Stud Psychol. 2020 Mar 4;62(1).
- 104. Ahmed L, de Fockert JW. Working memory load can both improve and impair selective attention: Evidence from the Navon paradigm. Atten Percept Psychophys. 2012 Oct;74(7):1397–405. pmid:22872549
- 105. Davidoff J, Fonteneau E, Fagot J. Local and global processing: Observations from a remote culture. Cognition. 2008 Sep;108(3):702–9. pmid:18662813
- 106. Oishi S, Jaswal VK, Lillard AS, Mizokawa A, Hitokoto H, Tsutsui Y. Cultural variations in global versus local processing: A developmental perspective. Dev Psychol. 2014;50(12):2654–65. pmid:25365123
- 107. Soto CJ, John OP. The next Big Five Inventory (BFI-2): Developing and assessing a hierarchical model with 15 facets to enhance bandwidth, fidelity, and predictive power. J Pers Soc Psychol. 2017 Jul;113(1):117–43. pmid:27055049
- 108. John O, Naumann L, Soto C. Paradigm Shift to the Integrative Big-Five Trait Taxonomy: History, Measurement, and Conceptual Issues. In: John O, Robins R, Pervin L, editors. Handbook of personality: Theory and research. New York: Guilford Press; 2008: pp. 114–158.
- 109. Hřebíčková M, Jelínek M, Květon P, Benkovič A, Botek M, Sudzina F, et al. Big Five Inventory 2 (BFI-2): Hierarchický model s 15 subškálami. Cesk Psychol. 2020;64(4):437–460.
- 110. Condon DM, Revelle W. The international cognitive ability resource: Development and initial validation of a public-domain measure. Intelligence. 2014 Mar;43:52–64.
- 111. Loe B, Sun L, Simonfy F, Doebler P. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models. J Intell. 2018 Apr 2;6(2):20. pmid:31162447
- 112. Young SR, Keith TZ, Bond MA. Age and sex invariance of the International Cognitive Ability Resource (ICAR). Intelligence. 2019 Nov;77:101399.
- 113. Douglas SP, Craig CS. Collaborative and iterative translation: An alternative approach to back translation. J Int Mark. 2007; 15(1):30–43.
- 114. Van De Vijver F, Leung K. Equivalence and bias: A review of concepts, models, and data analytic procedures. In: Matsumoto D, Van De Vijver F, editors. Cross-cultural research methods in psychology. Cambridge University Press; 2011: pp. 17–45.
- 115. Van De Vijver F, Hambleton RK. Translating tests. Eur Psychol. 1996;1:89–99.
- 116. Czech Statistical Office. Population structure by sex, age and educational attainment. 2019. Available from: https://www.czso.cz/documents/10180/120583268/300002200102.pdf/ef2fb63c-7a0f-424f-b5f2-e5360ab32d57?version=1.1
- 117. Šašinka Č, Morong K, Stachoň Z. The Hypothesis Platform: An Online Tool for Experimental Research into Work with Maps and Behavior in Electronic Environments. IJGI. 2017 Dec 20;6(12):407.
- 118. Steingroever H, Wabersich D, Wagenmakers E. Modeling across-trial variability in the Wald drift rate parameter. Behav Res. 2021 Jun;53(3):1060–76. pmid:32948979
- 119. Brown SD, Heathcote A. The simplest complete model of choice response time: Linear ballistic accumulation. Cogn Psychol. 2008 Nov;57(3):153–78. pmid:18243170
- 120. Annis J, Miller BJ, Palmeri TJ. Bayesian inference with Stan: A tutorial on adding custom distributions. Behav Res. 2017 Jun;49(3):863–86.
- 121. Lakens D. Equivalence Tests. Soc Psychol Personal Sci. 2017 May;8(4):355–62.
- 122. Finney SJ, DiStefano C. Non-normal and categorical data in structural equation modeling. In: Hancock GR, Mueller RO, editors. Structural equation modeling: A second course. Greenwich, Connecticut: Information Age Publishing; 2006: pp. 269–314.
- 123. Rhemtulla M, Brosseau-Liard PE, Savalei V. When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychol Methods. 2012;17(3):354–373. pmid:22799625
- 124. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.
- 125. R Core Team. R: A Language and Environment for Statistical Computing [Internet]. Vienna, Austria; 2021. Available from: https://www.R-project.org/
- 126. Rosseel Y. lavaan: An R Package for Structural Equation Modeling. J. Stat. Soft. 2012 May;48(2):1–36.
- 127. Jorgensen T, Pornprasertmanit S, Schoemann A, Rosseel Y. semTools: Useful tools for structural equation modeling. R package version 0.5–4. 2021. Available from: https://cran.r-project.org/package=semTools.
- 128. Revelle W. psych: Procedures for Psychological, Psychometric, and Personality Research. R package version 2.1.3. 2021. Available from: https://cran.r-project.org/package=psych.
- 129. Mair P, Hatzinger R. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. J Stat Soft. 2007;20(9).
- 130. Gamer M, Lemon J, Singh I. irr: Various Coefficients of Interrater Reliability and Agreement. R package version 0.84.1. 2019. Available from: https://cran.r-project.org/web/packages/irr/index.html.
- 131. Grossmann I, Varnum MEW. Social Class, Culture, and Cognition. Soc Psychol Personal Sci. 2011 Jan;2(1):81–9.
- 132. von Mühlenen A, Bellaera L, Singh A, Srinivasan N. The effect of sadness on global-local processing. Atten Percept Psychophys. 2018 Jul;80(5):1072–82. pmid:29729000
- 133. Anders R, Alario F, Van Maanen L. The shifted Wald distribution for response time data analysis. Psychol Methods. 2016 Sep;21(3):309–27. pmid:26867155
- 134. McCrae RR, Terracciano A. Personality profiles of cultures: Aggregate personality traits. J Pers Soc Psychol. 2005 Sep;89(3):407–25. pmid:16248722
- 135. de Fockert JW, Cooper A. Higher levels of depression are associated with reduced global bias in visual processing. Cogn Emot. 2014 Apr 3;28(3):541–9. pmid:24067089
- 136. Ji L, Yap S, Best MW, McGeorge K. Global Processing Makes People Happier Than Local Processing. Front Psychol. 2019 Mar 26;10. pmid:30984079
- 137. Niaz M. Mobility-Fixity Dimension in Witkin’s Theory of Field-Dependence/Independence and its Implications for Problem Solving in Science. Percept Mot Skills. 1987 Dec;65(3):755–64.
- 138. Varnum ME, Grossmann I, Kitayama S, Nisbett RE. The Origin of Cultural Differences in Cognition: Evidence for the Social Orientation Hypothesis. Curr Dir Psychol Sci. 2010;19(1):9–13. pmid:20234850
- 139. Wong VC, Wyer RS, Wyer NA, Adaval R. Dimensions of holistic thinking: Implications for nonsocial information processing across cultures. J Exp Psychol Gen. 2021 Dec;150(12):2636–58. pmid:34152788
- 140. Evans K, Rotello CM, Li X, Rayner K. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: Does a cultural difference truly exist?. Q J Exp Psychol. 2009 Feb;62(2):276–85. pmid:18785074
- 141. Rayner K, Li X, Williams CC, Cave KR, Well AD. Eye movements during information processing tasks: Individual differences and cultural effects. Vis Res. 2007 Sep;47(21):2714–26. pmid:17614113
- 142. Stachoň Z, Šašinka Č, Čeněk J, Štěrba Z, Angsuesser S, Fabrikant SI, et al. Cross-cultural differences in figure–ground perception of cartographic stimuli. Cartogr Geogr Inf Sci. 2019 Jan 2;46(1):82–94.
- 143. Lee LY, Talhelm T, Zhang X, Hu B, Lv X. Holistic thinkers process divided-attention tasks faster: from the global/local perspective. Curr Psychol. 2021 May 26.