Effect of Age on Variability in the Production of Text-Based Global Inferences

  • Lynne J. Williams, 
  • Joseph P. Dunlop, 
  • Hervé Abdi
  • Published: May 8, 2012
  • DOI: 10.1371/journal.pone.0036161


As we age, individual differences in cognitive skills become more visible, an effect especially true for memory and problem-solving skills (i.e., fluid intelligence). However, in contrast with fluid intelligence, few studies have examined variability in measures that rely on one's world knowledge (i.e., crystallized intelligence). The current study investigated whether age increased the variability in text-based global inference generation, a measure of crystallized intelligence. Global inference generation requires the integration of textual information and world knowledge and can be expressed as a gist or lesson. Variability in generating two global inferences for a single text was examined in young-old (62 to 69 years), middle-old (70 to 76 years), and old-old (77 to 94 years) adults. The older two groups showed greater variability, with the middle-old group being the most variable. These findings suggest that variability may be a characteristic of both fluid and crystallized intelligence in aging.


To understand spoken or written language, we need to integrate lexical, semantic, and contextual information and generate appropriate representations [1], [2]. Obviously, this process is highly dependent upon knowledge and memory [2]–[10], which are both sensitive to aging. But what happens to language comprehension when we age? The generally accepted view suggests that memory for textual details declines as memory declines with age [2], [4], [6], [11]–[13]. By contrast, however, older adults can access semantic information and understand complex linguistic representations as well as, or even better than, young adults in contexts where language comprehension is not dependent upon memory performance [2]–[10]. This apparent stability in older adults’ language comprehension performance is intriguing because text comprehension is a very complex activity (see e.g., [14]) that typically involves remembering the gist of the text rather than the surface details [15]–[21].

Kintsch and colleagues [15] suggested that, in order to remember the information in a text, we need to reduce the amount of information by transforming the verbatim information into an abstract version of the text (see also [22], [23]). This abstract representation comes in the form of global inferences, which represent holistic concepts such as the theme or main point of a text [15], [16], [22]–[26]. These global inferences reduce the amount of information to be stored in memory because they integrate the text-specific information with the individual’s world knowledge and experience (i.e., extra-textual information). Moreover, because global inferences represent generalized information (i.e., the text information is extended to contexts beyond the text itself, see e.g., [9]), we generate global inferences in order to fill informational gaps within the text, and this allows us to incorporate the information from the text into our own world knowledge [27], [28].

Interestingly, the capacity to generate global inferences appears stable across age. For example, Ulatowska et al. [9] reported no age difference in forming global representations of text in a longitudinal study of global inference generation in older adults. Similarly, Olness [29] found no differences between college-aged, middle-aged, and older adults in generating global inferences for didactic and non-didactic texts. Yet, there is growing evidence that knowledge structures thought to remain stable in aging–such as vocabulary and global inferences–may, in fact, be variable. For example, Christensen [30] found increased variability in older adults for measures of memory, spatial, and reasoning skills (i.e., fluid intellectual abilities) as well as verbal abilities, including vocabulary (i.e., a crystallized intellectual ability). Similarly, Caskie, Schaie, and Willis [31] found considerable variability in verbal, spatial, and reasoning abilities in adults between 25 and 81 years of age. In addition, the patterns of variability were different for verbal abilities versus spatial and reasoning skills. In particular, the changes in verbal abilities showed later onset, greater variability in the timing of onset, and greater variability in the overall rate of change. At the level of text comprehension, Hertzog, Dixon, and Hultsch [32] found significant variability in memory for textual information not accounted for by text-related factors in a group of seven elderly women. Likewise, Dixon and colleagues [33] reported an age-related increase in variability in text recall for the stories used in the Logical Memory subtest of the Wechsler Memory Scale.
Together, these findings suggest that an age-related increase in variability of the knowledge structures underlying linguistic ability and global inference generation may be a hallmark of cognitive aging, in the same way as the age-related increase in variability in reaction time, memory, and other cognitive abilities [34]–[42]. Therefore, we decided to investigate whether age increases the variability of global inference generation. In order to do so, we measured age-related variability in generating global inferences among three groups of older adults.


Thirty-four participants between the ages of 62 and 94 years were divided into three age groups for the purpose of analysis. The young-old (Y-O) group consisted of 12 individuals (62 to 69 years of age); the middle-old (M-O) and old-old (O-O) groups consisted of 11 participants each (70 to 76 and 77 to 94 years of age, respectively). Each participant gave possible lessons for each of twelve Aesop's fables ([43], see Supplementary Information S.1). Each lesson was scored categorically according to the criteria outlined in Method section 4.3.1. Data were analyzed using discriminant correspondence analysis (dica) [44]–[52].

Dica is a multivariate technique developed to classify observations described by qualitative and/or quantitative variables into a-priori defined groups and has been used to discriminate clinical populations, such as early versus middle stage Alzheimer’s disease [51] and autistic paranoia from paranoid schizophrenia [52]. Based on correspondence analysis (ca), dica is a type of principal component analysis (pca)–specifically tailored for the analysis of categorical data–that represents the rows and columns as points in a (high dimensional) space [45], [49]–[51], [53]–[57]. Just like pca, dica finds the most important dimensions of variance of the data. These dimensions are uncorrelated with each other and ordered by the amount of the data variance that they explain. Rows and columns can be plotted as maps by using their coordinates on these dimensions. In order to reveal the pattern of variables associated with group differences, dica analyzes a data table in which each row sums the behavior of the participants of a given group (see [51] for more information). Dica is then obtained from the ca of this summed table. This analysis reveals the similarities and differences in patterns of performance across the age groups. See Method section 4.3.3, File S2, Figure S1 and [44], [51], [58], [59] for more information.

The dica derived two factors accounting, respectively, for 85 percent and 15 percent of the data variance. The eigenvalues (λ), proportions of explained variance (τ), and the contributions of each variable and group to the total variance for Factors 1 and 2 are shown in Table 1. The higher the contribution, the more important that variable (or observation) is in defining a given factor.

Table 1. Eigenvalues (λ), proportion of inertia (τ), contributions of the age groups and scoring categories for Factors 1 and 2.


2.1 Age-related Patterns of Global Inference Generation

The dica uncovered age-related patterns in lesson generation performance. Factor 1 separated the Y-O from the M-O and O-O groups (see Figure 1). Because dica reliably separated the Y-O from the other groups, the effect size is quite large and is detectable with our current sample size. However, to ensure that we could detect a reasonable effect size, we computed an a posteriori effect size analysis using G*Power 3 [60], [61]. For the purpose of power analysis, multivariate discriminant analysis can be considered under the manova framework [62], [63]. With an α of .05 and achieved power (1 − β) of .95, we had an effect size (f) of .41. This effect size is equivalent to a critical Pillai’s trace (V) of 0.6 across the 3 groups, meaning that the between-group variance is 60% of the total variance. This effect size and critical V were considered adequate to be able to discriminate between the Y-O, M-O, and O-O groups.

Figure 1. Discriminant correspondence analysis.

Variables shown along Factors 1 and 2. Lambda (λ) and tau (τ) are the eigenvalues and the percentage of explained inertia (i.e., variance) for a given factor (Factor 1: τ = 85%; Factor 2: τ = 15%). All sub-figures are plotted on the same scale along each factor. (A) Switch Perspective and Linguistic Form collapsed across both lesson types. Note that the young elderly group switched perspectives between lesson types more frequently than the middle or old elderly groups. (B) Generalization Level for each lesson type. Note that the young elderly group produced extra-textual lessons more frequently. Extra-textual lessons incorporate information from outside of the text. (C) Character Viewpoint for each lesson type. Note that the young elderly group more frequently adopted the viewpoint of the main character for the best lesson (lesson 1) and the supporting character for the alternate lesson (lesson 2). (D) Representation of Theme was included as a supplementary element. Supplementary elements are variables that were not included in the calculations, but were projected into the space to see their placement along the factors. They are used to aid with interpretation. Note that the young elderly group more frequently produced lessons reflecting accurate fable themes for both lesson 1 and lesson 2. Note that in correspondence analysis, the eigenvalues are never greater than 1.


The results of the dica are shown in Figure 1. The scoring categories are shown in separate displays to help reading the map. The variable contributing the most to Factor 1 is “switching perspectives between lesson types.” The young elderly group more frequently switched perspectives than the middle and old elderly groups. Success in switching between lesson 1 and lesson 2 is more strongly associated with a lesson 1 that incorporated information from outside of the text (i.e., extra-textual) and represented the main character’s viewpoint. Successful switches in perspective also were more frequently stated as proverbs and showed themes consistent with the fable for both lesson types. By contrast, the middle and older elderly groups switched perspectives less frequently than the young elderly group. Failure to switch perspective between lesson types was associated with maintaining the main character’s viewpoint for lesson 2 and producing text specific lessons for both lesson types (i.e., the information content of each lesson did not go beyond information stated explicitly in the fable). Switch failures also were characterized by more frequent use of non-proverbial linguistic forms (i.e., a concrete interpretation) and inaccurate representation of the fable theme for both lessons 1 and 2.

Factor 2 distinguished the middle and old elderly groups. The middle elderly group had a slightly greater tendency to maintain the main character’s perspective for lesson 2. Furthermore, the middle elderly group produced lesson 1 responses showing an inaccurate fable theme more frequently than the old elderly group did. However, the old elderly group’s lesson 1 had a slightly greater tendency, on average, to adopt neither the main nor the supporting character’s perspective. The old elderly group also tended to state both lessons in a non-proverbial form.

The performance of the groups and the individual participants by age group are shown in Figure 2. The young elderly participants clustered more tightly together, indicating that they were predominantly successful in switching perspectives. The tight grouping also suggests that the young elderly group showed less between participant variability in generating lessons. The middle and old elderly participants, by contrast, were more dispersed. Some of the middle and old elderly participants showed a pattern of lesson generation similar to the young elderly participants, while others did not. This suggests greater between participant variability, especially in the ability to switch perspectives between lessons 1 and 2.

Figure 2. Discriminant correspondence analysis.

Participants shown by age group along Factors 1 and 2. Lambda (λ) and tau (τ) are the eigenvalues and the percentage of explained inertia (i.e., variance) for a given factor (Factor 1: τ = 85%; Factor 2: τ = 15%). All sub-figures are plotted on the same scale along each factor. (A) Barycenters (weighted averages) of the groups. (B) Convex hulls. The convex hull encloses the performance of the individual participants within each age group. Individual participants were projected into the dica space as supplementary elements. Supplementary elements are variables or observations that were not included in the calculations, but were projected into the space to see their placement along the factors. Note that in correspondence analysis, the eigenvalues are never greater than 1.


2.2 Variability in Global Inference Generation

The variability in generating global inferences within the age groups was evaluated using a bootstrap procedure [64]–[66]. The bootstrap produced 95% confidence interval ellipses for each age group (see Figure 3; a description of the bootstrap is presented in File S2.6.2). The area of a confidence interval ellipse represents the variability within each group. When the confidence ellipses do not overlap, there is a significant difference between the groups at the α = .05 level. Consequently, the confidence ellipses show that the young elderly group is reliably different from the middle and old elderly groups because its confidence ellipse does not overlap with those of the other two groups. In addition, the young elderly group’s ellipse is smaller, indicating that there is less variability within this group. Although the middle and old elderly groups were not reliably distinguished, the middle elderly group, surprisingly, had the ellipse with the greatest area, indicating that this group showed the most variability (see also Figures 2A and 2B for the actual dispersion in group performance).
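The bootstrap logic behind these confidence ellipses can be sketched in Python. This is a simplified sketch, not the authors' implementation: the function names are ours, the group factor scores are placeholder data, and the 95% cutoff uses the chi-square quantile with 2 degrees of freedom (≈5.991) to scale the ellipse half-axes.

```python
import numpy as np

def bootstrap_centroids(scores, n_boot=1000, seed=0):
    """Bootstrap the centroid of one group's 2-D factor scores.

    scores: (n_participants, 2) array of coordinates on Factors 1 and 2.
    Returns the (n_boot, 2) cloud of resampled centroids; a 95%
    confidence ellipse is then fit to this cloud.
    """
    rng = np.random.default_rng(seed)
    n = len(scores)
    idx = rng.integers(0, n, size=(n_boot, n))  # resample participants with replacement
    return scores[idx].mean(axis=1)             # centroid of each bootstrap sample

def ellipse_axes(cloud, chi2_95=5.991):
    """Half-axis lengths of the 95% ellipse fit to a centroid cloud.

    chi2_95 is the 95th percentile of chi-square with 2 df (assumed cutoff).
    """
    eigvals = np.linalg.eigvalsh(np.cov(cloud.T))  # principal variances of the cloud
    return np.sqrt(eigvals * chi2_95)
```

Non-overlap of two groups' ellipses then indicates a reliable group difference, as described above; the ellipse area reflects within-group variability.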

Figure 3. Discriminant correspondence analysis.

95% confidence intervals for age groups shown on Factors 1 and 2. Lambda (λ) and tau (τ) are the eigenvalues and the percentage of explained inertia (i.e., variance) for a given factor (Factor 1: τ = 85%; Factor 2: τ = 15%). Confidence ellipses represent the variability within each group. Ellipses showing no overlap represent different populations. Note that in correspondence analysis, the eigenvalues are never greater than 1.


2.3 Quality of the DICA Model

We evaluated the quality of our dica model by computing the amount of variance explained by the dica (see Supplementary Information for details). We also evaluated how the model would generalize to new participants by using a jackknife procedure (also called a “leave-one-out” procedure). The jackknife procedure [64], [67], [68] removes, in turn, each of the participants from the sample and performs a new dica on the remaining participants. The distance between the removed participant (projected into the new dica space as a supplementary element) and each of the groups is computed, and the participant is assigned to the closest group (see [44] and [68] for more information about the jackknife in dica). The results of the jackknife are summarized in Table 2. The columns represent the original group assignment and the rows represent the dica assignment. As Table 2 shows, of the 34 possible assignments, only 13 were correct. The young elderly participants were more reliably assigned to their group (9 out of 12 correctly assigned) than participants from the middle and old elderly groups (2 correct assignments out of 11 participants for each group). This difference in classification reflects the larger variability of the middle and old elderly groups.
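The leave-one-out scheme can be sketched as follows. This is a simplification of the procedure described above: distances are computed between raw row profiles rather than in the full dica factor space, and the data and function name are hypothetical.

```python
import numpy as np

def jackknife_assign(counts, group, labels):
    """Leave-one-out classification sketch.

    For each participant, rebuild the group-summed profiles without that
    participant, then assign the held-out participant to the group whose
    profile is closest (Euclidean distance between row profiles).
    Returns the number of correct assignments.
    """
    correct = 0
    for i in range(len(counts)):
        keep = np.arange(len(counts)) != i
        profiles = []
        for g in labels:
            rows = counts[keep & (group == g)]
            s = rows.sum(axis=0).astype(float)
            profiles.append(s / s.sum())            # group profile without participant i
        p_i = counts[i] / counts[i].sum()           # held-out participant's profile
        dists = [np.linalg.norm(p_i - q) for q in profiles]
        assigned = labels[int(np.argmin(dists))]    # nearest-group assignment
        correct += assigned == group[i]
    return int(correct)
```

With well-separated groups this recovers most memberships; the paper's low correct-assignment rate for the M-O and O-O groups is what signals their larger variability.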

Table 2. Actual versus dica participant classification into young elderly, middle elderly, and old elderly groups.



Although most studies examining cognitive performance variability in the elderly have examined skills that are known to decrease with age (e.g., fluid intelligence abilities, reaction time (rt), or memory [39], [40], [69]–[83]), skills that remain stable or improve across age (i.e., crystallized intelligence) also show inter-trial variability. However, this age-associated pattern of variability may differ between the two intelligence domains. For example, variability in rt for speeded tasks shows that older adults are consistently more variable than younger adults [81]–[83] and that this increased rt variability is associated with poorer cognitive performance in normally aging older adults [84]–[86]. The general increase in variability in the M-O and O-O groups relative to the Y-O group supports this view and may be associated with the older two groups’ general difficulty in switching perspectives between lesson 1 and lesson 2.

By contrast, when older adults show increased variability in gist recall accuracy (rather than rt or detail recall), this increase in variability tends to be associated with poor health, rather than normal age-related change [32], [87]. In normally aging adults, increased item-to-item variability in non-speeded tasks (such as gist recall) is associated with higher mean performance and may actually be an indicator of learning rather than decline [76], [88]. The finding that the M-O group showed greater variability than the O-O group suggests that, at least in non-speeded tasks, increased variability may not be completely maladaptive. The strict view that increased variability indicates cognitive decline predicts a linear association between variability and age (see e.g., [81]–[86]), yet the current data do not show this pattern. Rather, they suggest the possibility of varying patterns of variability at different life stages, especially given that the Y-O, M-O, and O-O individuals were cognitively normal and successfully performed the task.

If we consider that learning may also be a mechanism for increased variability in aging, then the M-O group would be expected to show the greatest amount of variability because this group has the largest proportion of recently retired individuals undergoing a major life change. For example, Adam and colleagues [89], [90] have found sudden decreases in cognitive functioning immediately following retirement, a pattern which suggests that there may be an increase in variability in cognitive performance around this time. Such a change in variability would be similar to the recursive increases in variability and subsequent plateauing during periods of social and cognitive development during childhood and adolescence [88].

Although the current results show increased variability associated with age, the pattern is mixed. This suggests that multiple mechanisms may underlie the increase in performance variability for crystallized intellectual abilities in older adults and that the relationship between age and variability may not be as straightforward as with fluid intellectual skills. Nevertheless, these findings show that variability with age may not be just an indicator of decline, but may also signal new learning. As Garrett et al. [91] so aptly said, “variability is more than just noise” (p. 4914).


4.1 Participants

Thirty-four participants between the ages of 62 and 94 years were divided into three age groups for the purpose of analysis. The young elderly (Y-O) group consisted of 12 individuals (62 to 69 years of age); the middle (M-O) and older (O-O) elderly groups consisted of 11 participants each (70 to 76 and 77 to 94 years of age, respectively). All participants were highly educated, with an average of 15 years of formal education. All participants were living in the community and were self-reported native English speakers. None exhibited clinical signs of impaired cognitive performance as tested by the 7 Minute Screen [92], [93]. All participants scored within normal age limits on a hearing screening that included the Erber Sentences [94], CID Sentences [95], and a self-report of hearing loss. No participant made errors on a visual narrative screener in which they read aloud an additional fable typeset in the same font as the stimulus fables. This study was approved by the Institutional Review Board (IRB) of the University of Texas at Dallas. All participants gave written informed consent. Table 3 gives the participant characteristics.

Table 3. Participant characteristics.


4.2 Stimuli and Task

We selected twelve short narratives from George Townsend’s translations of Aesop’s fables [43]. We used fables because cultural knowledge is transmitted via their didactic form. This transmission of cultural knowledge takes the form of a lesson or moral (i.e., types of global inferences). In addition, the role of fables in transmitting knowledge or “general truths” gives them a function in discourse similar to that of proverbs. However, unlike proverbs, fables require the theme, lesson, or moral to be inferred from the characters’ actions and their consequences. Meaning in proverbs, by contrast, is derived from the text itself and not from its application to real-world contexts because proverbs are already stated in a global inference-like format [9], [96], [97]. Because fables are didactic, readers can interpret them at two levels: literally, at the level of the text itself (i.e., a textual interpretation), or metaphorically, as a guide to culturally appropriate behavior in real-life contexts (i.e., an extratextual interpretation; [98]–[102]). Given that multiple interpretations of each fable are possible, fables can be, at least in part, interpreted as each reader chooses [103], and therefore interpretations of a given fable can vary with the reader, the information that is chosen as salient during comprehension (e.g., a given character’s actions), and the overall level of generalization (i.e., textual versus extratextual).

All fables employed two characters, contained three episodes (i.e., setting, action, and resolution components), were between 10 and 21 propositions in length [15], and contained no mixture of anthropomorphized animal and human characters. We then modified the fables to exclude specific mention of character attributes (e.g., lazy, wise, etc.) and any specific mention of the moral or lesson. Fables are shown in File S1. We asked participants to generate two different lessons or morals for each of the 12 fables. We instructed participants to first give what they considered to be the “best” lesson for the fable (lesson 1). We then asked participants to generate a second possible lesson for each fable that reflected a different interpretation or perspective (lesson 2). The examiner read the fable to participants and a card with the printed fable was within view during generation of both lessons to minimize memory demands.

4.3 Analysis

4.3.1 Response coding.

Lessons were scored categorically according to: (a) whether there was a switch in perspective between lesson 1 and lesson 2, (b) whether the lesson reflected text specific or extratextual content [9], [29], (c) whether the lesson portrayed the viewpoint of the main or of the supporting character [98], and (d) whether the lesson was given in the form of a statement or proverb, that is, a literal or metaphorical interpretation [104]. The accuracy or semantic fit of each lesson theme was scored in reference to the original fable. Representation of theme was not included as an active variable in the analysis due to the high degree of accurate semantic representation produced by all three age groups (91% accurate). Table 4 shows further definitions of the scoring categories with examples.

Table 4. Scoring Criteria for lesson 1 and lesson 2 Lesson Responses.


4.3.2 Inter-rater reliability.

Inter-rater reliability was analyzed on a random 20% of the data by comparing the first author’s coding with the code ratings of a second trained rater. Point-by-point agreement was 79%. A Cohen’s Kappa was calculated to correct for chance agreement (κ = 0.621), corresponding to a “substantial” rating of agreement [105].
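Chance-corrected agreement of this kind can be computed as follows. This is a generic sketch of Cohen's kappa; the category labels in the usage example are illustrative, not the study's codes.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    # Observed proportion of point-by-point agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

For example, two raters agreeing on 3 of 4 items, with marginals of 2/2 and 1/3 over two categories, give p_o = .75, p_e = .50, and κ = .50, illustrating how kappa discounts raw agreement for what chance alone would produce.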

4.3.3 Statistical analysis.

We used discriminant correspondence analysis (dica) to analyze the coded lesson responses. Dica combines the features of correspondence analysis (ca) and discriminant analysis ([44], [106]; see also [51] for a tutorial on language-oriented applications). Correspondence analysis (ca) is a type of principal component analysis (pca)–specifically tailored for the analysis of categorical data–that represents the rows and columns as points in a (high dimensional) space [45], [49], [50], [53]–[55]. In addition, ca (and consequently, dica) can handle data sets with few observations described by many nominal variables [44], [45], [51], [107].

Just like pca, ca finds orthogonal factors or dimensions that reveal the patterns and the associations between the row and column profiles. The importance of the factors is determined by their inertia (i.e., a quantity akin to variance), denoted by λ, and the proportion of explained inertia, denoted by τ. Ca converts contingency tables into visual displays (i.e., maps) in which the row profiles and column profiles represent points in the display. The proximity of the points within the display represents their degree of association. Points distributed more closely in space are more strongly associated than those that are farther apart. In addition, ca places no constraints on the data; therefore, the pattern seen in the maps represents associations contained within the data and not those superimposed by an external model [47], [48].
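The ca factorization described above can be sketched in a few lines of numpy. This is a simplified sketch, not the authors' implementation: the function name and the toy table are ours. It computes the standardized residuals of the contingency table and extracts the factors by singular value decomposition; the squared singular values are the eigenvalues λ, and τ is each eigenvalue divided by their sum.

```python
import numpy as np

def correspondence_analysis(N):
    """Correspondence analysis of a contingency table N (rows x columns)
    via the SVD of the matrix of standardized residuals."""
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                                  # correspondence matrix
    r = P.sum(axis=1)                                # row masses
    c = P.sum(axis=0)                                # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    eig = sv ** 2                                    # eigenvalues (inertia per factor)
    row_scores = (U * sv) / np.sqrt(r)[:, None]      # row factor scores
    col_scores = (Vt.T * sv) / np.sqrt(c)[:, None]   # column factor scores
    return row_scores, col_scores, eig
```

The total inertia (the sum of the eigenvalues) equals the table's chi-square statistic divided by the grand total, and each eigenvalue is at most 1, consistent with the note in the figure captions.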

Dica is a multivariate technique developed to classify observations described by qualitative and/or quantitative variables into a-priori defined groups and therefore adds a discriminative component to ca. Here, we used dica to analyze lesson 1 and lesson 2 responses and to classify participants into pre-defined age categories: young-old (Y-O), middle-old (M-O) and old-old (O-O) groups.

For the dica, participants were grouped into the three age groups. Then, the pattern of performance of the participants in each group was combined into its common pattern of performance (see [51] for more information on how the common pattern is developed). Table 5 shows the age-group by lesson response contingency table, the common pattern of performance used for the dica in the current study.

Table 5. Frequency of occurrence of scoring categories by lesson type for the young elderly, middle elderly, and old elderly groups (contingency table input into dica).
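The construction of such a group-summed contingency table can be sketched as follows. The counts and category columns here are placeholders for illustration, not the study's data: one row per participant, one column per scoring category, with participants' rows summed within each age group.

```python
import numpy as np

# Hypothetical coded responses: 34 participants x 8 scoring categories,
# each cell counting how often that category occurred across the 12 fables
# (placeholder values, not the study's data).
rng = np.random.default_rng(0)
counts = rng.integers(0, 6, size=(34, 8))
group = np.repeat(np.array(["Y-O", "M-O", "O-O"]), [12, 11, 11])

# dica input: one row per age group, summing its participants' rows
labels = ["Y-O", "M-O", "O-O"]
group_table = np.vstack([counts[group == g].sum(axis=0) for g in labels])
```

A ca of `group_table` then yields the group factor scores; individual participants are projected afterwards into that space as supplementary rows.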


We then ran a ca on the common performances, which allowed us to examine the similarities and differences in patterns of performance across the age groups. Ca and dica can also be used to estimate the amount of variability within and between each category. To do this, 95% confidence ellipses are constructed using a bootstrap resampling technique ([108], [109]; see also File S2.6.2). A detailed mathematical appendix is included in the Supplementary Information.

Supporting Information

Figure S1.



File S1.

Supporting Information PDF file.



File S2.



Author Contributions

Conceived and designed the experiments: LJW HA. Performed the experiments: LJW. Analyzed the data: LJW JD. Contributed reagents/materials/analysis tools: JD. Wrote the paper: LJW.


  1. 1. Burke DM (2006) Representation and aging. Bialystok E, Craik FIM, editors, Lifespan Cognition: Mechanisms of Change, Oxford University Press. pp. 193–206.
  2. 2. Chesneau S, Jbabdi S, Champagne-Lavau M, Giroux F, Ska B (2007) Comprehension de textes, ressources cognitives et vieillssement [Text comprehension, cognitive resources and aging]. Psychologie et NeuroPsychiatrie du Viellissement 5: 47–64.
  3. 3. Hamm V, Hasher L (1992) Age and the availability of inferences. Psychology and Aging 7: 56–64.
  4. 4. Hosokawa A, Hosokawa T (2006) Cross-cultural study on adult age-group differences in the recall of the literal and interpretive meanings of narrative text. Japanese Psychological Research 2: 77–90.
  5. 5. Laver GD, Burke DM (1993) Why do semantic priming effects increase in old age? A metaanalysis. Psychology and Aging 8: 34–43.
  6. 6. Meyer BI (1987) Reading comprehension and aging. Schaie KW, editor, Annual Review of Gerontology and Geriatrics (Volume 7), New York: Springer.
  7. 7. Radvansky GA, Zwaan RA, Curiel JM, Copeland DE (2001) Situation models and aging. Psychology and Aging 16: 145–160.
  8. 8. North A, Ulatowska HK, Macaluso-Haynes S, Bell H (1986) Discourse performance in older adults. International Journal of Aging and Human Development 23: 267–283.
  9. 9. Ulatowska HK, Chapman SB, Highley Amy P PJ (1998) Discourse in healthy old-elderly adults: A longitudinal study. Aphasiology 12: 619–663.
  10. 10. Zacks RT, Hasher L (1988) Integrating information from discourse: do older adults show deficits? Light LL, editor, Language, Memory, and Aging, Cambridge: Cambridge University Press. pp. 117–132.
  11. 11. Adams C, Labouvie-Vief G, Hobart CJ, Dorosz M (1990) Adult age group differences in story recall style. Journals of Gerontology: Psychological Sciences 45: 17–27.
  12. Adams C (1991) Qualitative age differences in memory for text: a life-span developmental perspective. Psychology and Aging 6: 323–336.
  13. Park D, Schwarz N (1999) Cognitive aging: A primer. Philadelphia, PA: Psychology Press.
  14. Chastaing M, Abdi H (1980) La psychologie des injures [The psychology of insults]. Journal de Psychologie Normale et Pathologique 77: 31–62.
  15. Kintsch W (1998) Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press.
  16. van Dijk TA, Kintsch W (1983) Strategies of Discourse Comprehension. New York: Academic Press.
  17. Stahl C, Klauer KC (2008) A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory. Journal of Experimental Psychology: Learning, Memory, and Cognition 34: 570–586.
  18. Reyna VF, Kiernan B (1994) Development of gist versus verbatim memory in sentence recognition: effects of lexical familiarity, semantic content, encoding instructions, and retention interval. Developmental Psychology 30: 178–191.
  19. Gutchess AH, Schacter DL (2012) The neural correlates of gist-based true and false recognition. NeuroImage 59: 3418–3426.
  20. Abdi H (1990) Additive-tree representation of verbatim memory. CUMFID 16: 99–124.
  21. Abdi H (1985) Représentations arborées de l’information verbatim [Additive tree representations of verbatim information]. Bulletin de Psychologie 38: 633–643.
  22. Graesser AC, Singer M, Trabasso T (1994) Constructing inferences during narrative text comprehension. Psychological Review 101: 371–395.
  23. McKoon G, Ratcliff R (1992) Inference during reading. Psychological Review 99: 440–446.
  24. Schank R, Abelson R (1977) Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Lawrence Erlbaum Associates.
  25. van Dijk TA (1977) Semantic macro-structures and knowledge frames in discourse comprehension. Just MA, Carpenter PA, editors, Cognitive Processes in Comprehension, Hillsdale, NJ: Lawrence Erlbaum Associates.
  26. van Dijk TA (1980) Macrostructures. Hillsdale, NJ: Lawrence Erlbaum Associates.
  27. Magliano J, Graesser AC (1991) A three-pronged method for studying inference generation in literary text. Poetics 20: 193–232.
  28. Seifert CM, Robertson SP, Black JB (1985) Types of inferences generated during reading. Journal of Memory and Language 24: 405–422.
  29. Olness G (2000) Expression of Narrative Main-Point Inferences in Adults: A Developmental Perspective. Doctoral dissertation, University of Texas at Dallas, Richardson, TX.
  30. Christensen H (2001) What cognitive changes can be expected with normal ageing? Australian and New Zealand Journal of Psychiatry 35: 768–775.
  31. Caskie GIL, Schaie KW, Willis SL (1999) Individual differences in the rate of change in cognitive abilities during adulthood. Paper presented at the Gerontological Society of America Conference.
  32. Hertzog C, Dixon RA, Hultsch DF (1992) Intraindividual change in text recall of the elderly. Brain and Language 42: 248–269.
  33. Dixon RA, Hertzog C, Friesen I, Hultsch DF (1993) Assessment of intraindividual change in text recall of elderly adults. Brownell HH, Joanette Y, editors, Narrative Discourse in Neurologically Impaired and Normally Aging Adults, San Diego: Singular. pp. 77–101.
  34. Hultsch DF, Hunter M, MacDonald SWS, Strauss E (2005) Inconsistency in response time as an indicator of cognitive aging. Duncan J, Phillips L, McLeod P, editors, Measuring the Mind: Speed, Control, and Age (2nd ed.), Oxford: Oxford University Press.
  35. Luszcz MA (2004) What’s it all about: variation and aging. Gerontology 50: 5–6.
  36. Martin M, Hofer SM (2004) Intraindividual variability, change, and aging: conceptual and analytical issues. Gerontology 50: 7–11.
  37. Miller MB, Van Horn JD (2007) Individual variability in brain activations associated with episodic retrieval: a role for large-scale databases. International Journal of Psychophysiology 63: 205–213.
  38. Nesselroade JR, Ram N (2004) Studying intraindividual variability: what we have learned that will help us understand lives in context. Research in Human Development 1: 9–29.
  39. Ram N, Rabbitt P, Stoller B, Nesselroade JR (2005) Cognitive performance inconsistency: intraindividual change and variability. Psychology and Aging 20: 623–633.
  40. Shammi P, Bosman E, Stuss DT (1998) Aging and variability in performance. Aging, Neuropsychology, and Cognition 5: 1–13.
  41. Fiske DW (1957) The constraints on intra-individual variability in test responses. Educational and Psychological Measurement 17: 317–337.
  42. Daly DL, Bath KE, Nesselroade JR (1963) On the confounding of inter- and intraindividual variability in examining change patterns. Intelligence. pp. 33–36.
  43. Aesop/Townsend GF (1991) Aesop’s Fables by Aesop. Retrieved September 20, 2008 from …sop11h.htm and Project Gutenberg.
  44. Abdi H (2007) Discriminant correspondence analysis. Salkind NJ, editor, Encyclopedia of Measurement and Statistics, Thousand Oaks, CA: Sage. pp. 270–275.
  45. Abdi H, Williams LJ (2010) Correspondence analysis. Salkind NJ, editor, Encyclopedia of Research Design, Thousand Oaks, CA: Sage. pp. 267–278.
  46. Abdi H, Valentin D (2007) Multiple correspondence analysis. Salkind NJ, editor, Encyclopedia of Measurement and Statistics, Thousand Oaks, CA: Sage. pp. 651–657.
  47. Benzécri JP (1973) L’Analyse des Données. Tome 1: La Taxonomie [Data Analysis, Volume 1: Taxonomy]. Paris: Dunod.
  48. Benzécri JP (1973) L’Analyse des Données. Tome 2: L’Analyse des Correspondances [Data Analysis, Volume 2: Correspondence Analysis]. Paris: Dunod.
  49. Greenacre M (1984) Theory and Applications of Correspondence Analysis. London: Academic Press.
  50. Greenacre M (2007) Correspondence Analysis in Practice (Second ed.). Boca Raton, FL: Chapman & Hall/CRC.
  51. Williams LJ, Abdi H, French R, Orange JB (2010) A tutorial on multi-block discriminant correspondence analysis (MUDICA): A new method for analyzing discourse data from clinical populations. Journal of Speech, Language, and Hearing Research 53: 1372–1393.
  52. Pinkham AE, Sasson NJ, Beaton D, Abdi H, Kohler CG, et al. (in press) Qualitatively distinct factors contribute to elevated rates of paranoia in autism and schizophrenia. Journal of Abnormal Psychology 121.
  53. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics 2: 433–459.
  54. Blasius J, Greenacre M (2006) Correspondence analysis and related methods in practice. Greenacre M, Blasius J, editors, Multiple Correspondence Analysis and Related Methods, Boca Raton, FL: Chapman & Hall. pp. 3–40.
  55. Clausen SE (1998) Applied Correspondence Analysis: An Introduction. Thousand Oaks, CA: Sage.
  56. Abdi H (2010) Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics 2: 97–106.
  57. Krishnan A, Williams LJ, McIntosh AR, Abdi H (2011) Partial Least Squares (PLS) methods for neuroimaging: a tutorial and review. NeuroImage 56: 455–475.
  58. Abdi H, Williams LJ, Connolly AC, Gobbini MI, Dunlop JP, et al. (2012) Multiple Subject Barycentric Discriminant Analysis (MUSUBADA): How to assign scans to categories without using spatial normalization. Computational and Mathematical Methods in Medicine.
  59. Abdi H, Williams LJ, Valentin D, Bennani-Dosse M (2012) STATIS and DISTATIS: optimum multitable principal component analysis and three way metric multidimensional scaling. Wiley Interdisciplinary Reviews: Computational Statistics 4: 124–167.
  60. Faul F, Erdfelder E, Lang AG, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39: 175–191.
  61. Faul F, Erdfelder E, Buchner A, Lang AG (2009) Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behavior Research Methods 41: 1149–1160.
  62. Hwang D, Schmitt WA, Stephanopoulos G, Stephanopoulos G (2002) Determination of minimum sample size and discriminatory expression patterns in microarray data. Bioinformatics 18: 1184–1193.
  63. Chi YY (2011) Multivariate methods. Wiley Interdisciplinary Reviews: Computational Statistics 4: 35–47.
  64. Efron B, Gong G (1983) A leisurely look at the bootstrap, the jackknife, and cross-validation. The American Statistician 37: 36–48.
  65. Efron B, Tibshirani R (1993) An Introduction to the Bootstrap. Boca Raton, FL: Chapman & Hall/CRC.
  66. Lebart L (2006) Explorer l’espace des mots: du linéaire au non-linéaire [Exploring word-space: from the linear to the non-linear]. Journées internationales d’analyse statistique des données textuelles 8.
  67. Efron B (1979) Bootstrap methods: another look at the jackknife. The Annals of Statistics 7: 1–26.
  68. Abdi H, Williams LJ (2010) Jackknife. Salkind NJ, editor, Encyclopedia of Research Design, Thousand Oaks, CA: Sage. pp. 655–660.
  69. Anstey KJ, Low LF (2004) Normal cognitive changes in aging. Australian Family Physician 33: 783–787.
  70. Tucker-Drob EM, Salthouse TA (2008) Adult age trends in the relations among cognitive abilities. Psychology and Aging 23: 453–460.
  71. Salthouse TA, Ferrer-Caja E (2003) What needs to be explained to account for age-related effects on multiple cognitive variables? Psychology and Aging 18: 91–110.
  72. Salthouse TA (2001) Attempted decomposition of age-related influences on two tests of reasoning. Psychology and Aging 16: 251–263.
  73. Salthouse TA, Fristoe N, McGuthry KE, Hambrick DZ (1998) Relation of task switching to speed, age, and fluid intelligence. Psychology and Aging 13: 445–461.
  74. Lindenberger U, Baltes PB (1997) Intellectual functioning in old and very old age: cross-sectional results from the Berlin Aging Study. Psychology and Aging 12: 410–432.
  75. Lindenberger U, Mayr U, Kliegl R (1993) Speed and intelligence in old age. Psychology and Aging 8: 207–220.
  76. Allaire JC, Marsiske M (2005) Intraindividual variability may not always indicate vulnerability in elders’ cognitive performance. Psychology and Aging 20: 390–401.
  77. Bielak AAM, Hultsch DF, Strauss E, MacDonald SWS, Hunter MA (2010) Intraindividual variability in reaction time predicts cognitive outcomes 5 years later. Neuropsychology 24: 731–741.
  78. Bielak AAM, Hultsch DF, Strauss E, MacDonald SWS, Hunter MA (2010) Intraindividual variability is related to cognitive change in older adults: evidence for within-person coupling. Psychology and Aging 25: 575–586.
  79. Brose A, Schmiedek F, Lövdén M, Molenaar PCM, Lindenberger U (2010) Adult age differences in covariation of motivation and working memory performance: contrasting between-person and within-person findings. Research in Human Development 7: 61–78.
  80. Christensen H, Dear K, Anstey KJ, Parslow RA, Sachdev P, et al. (2005) Within-occasion intraindividual variability and preclinical diagnostic status: is intraindividual variability an indicator of mild cognitive impairment? Neuropsychology 19: 309–317.
  81. Dixon RA, Garrett DD, Lentz TL, MacDonald SWS, Strauss E, et al. (2007) Neurocognitive markers of cognitive impairment: exploring the roles of speed and inconsistency. Neuropsychology 21: 381–399.
  82. Hilborn JV, Strauss E, Hultsch DF, Hunter MA (2009) Intraindividual variability across cognitive domains: investigation of dispersion levels and performance profiles in older adults. Journal of Clinical and Experimental Neuropsychology 31: 412–424.
  83. Hultsch DF, MacDonald SWS, Dixon RA (2002) Variability in reaction time performance of younger and older adults. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences 57: P101–P115.
  84. Papenberg G, Bäckman L, Chicherio C, Nagel IE, Heekeren HR, et al. (2011) Higher intraindividual variability is associated with more forgetting and dedifferentiated memory functions in old age. Neuropsychologia 49: 1879–1888.
  85. MacDonald SWS, Hultsch DF, Dixon RA (2003) Performance variability is related to change in cognition: evidence from the Victoria Longitudinal Study. Psychology and Aging 18: 510–523.
  86. Jackson JD, Balota DA, Duchek JM, Head D (2012) White matter integrity and reaction time intraindividual variability in healthy aging and early-stage Alzheimer disease. Neuropsychologia 50: 357–366.
  87. Li SC, Aggen SH, Nesselroade JR, Baltes PB (2001) Short-term fluctuations in elderly people’s sensorimotor functioning predict text and spatial memory performance: the MacArthur Successful Aging Studies. Gerontology 47: 100–116.
  88. Siegler RS (1994) Cognitive variability: a key to understanding cognitive development. Current Directions in Psychological Science 3: 1–5.
  89. Adam S, Bonsang E, Germain S, Perelman S (2007) Retraite, activités non professionnelles et vieillissement cognitif: une exploration à partir des données de SHARE [Retirement, non-professional activities and cognitive aging: an investigation based on SHARE data]. Économie et Statistique 403: 83–96.
  90. Bonsang E, Adam S, Perelman S (2010) Does retirement affect cognitive functioning? Technical report, Network for Studies on Pensions, Aging and Retirement.
  91. Garrett DD, Kovacevic N, McIntosh AR, Grady CL (2011) The importance of being variable. The Journal of Neuroscience 31: 4496–4503.
  92. Solomon PR, Pendlebury WW (1998) Recognition of Alzheimer’s disease: the 7 Minute Screen. Family Medicine 4: 265–271.
  93. Solomon PR, Hirschoff A, Kelly B, Relin M, Brush M, et al. (1998) A 7 minute neurocognitive screening battery highly sensitive to Alzheimer’s disease. Archives of Neurology 55: 349–355.
  94. Erber NP (1982) Auditory Training. Washington, DC: Alexander Graham Bell Association for the Deaf.
  95. Sims DG (1975) The validation of the CID Everyday Sentence Test for use with the severely hearing impaired. Journal of the Academy of Rehabilitative Audiology 8: 70–79.
  96. Carnes P (1988) Proverbia in Fabula: Essays on the Relationship of the Fable and the Proverb. Bern: Verlag Peter Lang.
  97. Ulatowska HK, Sadowska M, Kadzielawa D, Kordys J, Rymarczyk K (2000) Linguistic and cognitive aspects of proverb processing in aphasia. Aphasiology 14: 227–250.
  98. Dorfman M, Brewer W (1994) Understanding the point of fables. Discourse Processes 17: 105–129.
  99. Gaudreault R (1994) Textes narratifs, descriptifs et autres: Une approche sémiotique [Narrative, descriptive, and other texts: a semiotic approach]. Semiotica 99: 297–317.
  100. Hanauer DI, Waksman S (2000) The role of explicit moral points in fable reading. Discourse Processes 30: 107–132.
  101. Jamet MC (1988) Linguistique textuelle et analyse littéraire [Text linguistics and literary analysis]. Lingue del Mondo 53: 79–82.
  102. Smith ME (1915) The fable and kindred forms. Journal of English and Germanic Philology 14: 519–529.
  103. Dolby-Stahl SK (1980) Sour grapes: fable, proverb, unripe fruit. Burlakoff N, Lindahl C, editors, Folklore on Two Continents: Essays in Honor of Linda Dégh, Bloomington, IN: Trickster Press. pp. 160–168.
  104. 104. Ulatowska HK, Olness GS, Williams-Hubbard LJ (2005) Macrostructure revisited: An examination of gist responses in aphasia. Brain and Language 35: 109–110.
  105. 105. Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33: 159–174.
  106. 106. Saporta G, Niang N (2006) Correspondence analysis and classification. Greenacre M, Blasius J, editors, Multiple Correspondence Analysis and Related Methods, Boca Raton, FL: Chapman & Hall/CRC. pp. 371–392.
  107. 107. Le Roux B, Rouanet H (2010) Multiple Correspondence Analysis. Thousand Oaks, CA: SAGE.
  108. 108. Abdi H, Dunlop JP, Williams LJ (2009) How to compute reliability estimates and display confidence and tolerance intervals for pattern classifiers using the Bootstrap and 3-way multidimensional scaling (DISTATIS). NeuroImage 45: 89–95.
  109. 109. Efron B (1987) Better Bootstrap Confidence Intervals. Journal of the American Statistical Association 82: 171–185.
  110. 110. Wechsler D (1998) Wechsler Adult Intelligence Scale (3rd ed.). New York: Psychological Press.
  111. 111. Wechsler D (1998) Wechsler Memory Scale (3rd ed.). New York: Psychological Press.
  112. 112. Reitan R (1958) Validity of the trail making test as an indicator of organic brain damage. Perception and Motor Skills 8: 271–276.
  113. 113. Hart RP, Kwentus JA, Wade JB, Taylor JR (1988) Modified Wisconsin Card Sorting Test in elderly normal, depressed and demented patients. Clinical Neuropsychologist 2: 49–56.