
65% of Americans believe they are above average in intelligence: Results of two nationally representative surveys

  • Patrick R. Heck,

    Roles Data curation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Autism & Developmental Medicine Institute, Geisinger Health System, Lewisburg, PA, United States of America

  • Daniel J. Simons,

    Roles Conceptualization, Data curation, Methodology, Writing – review & editing

    Affiliation Department of Psychology, University of Illinois, Champaign, IL, United States of America

  • Christopher F. Chabris

    Roles Conceptualization, Data curation, Methodology, Project administration, Supervision, Writing – review & editing

    chabris@gmail.com

    Affiliations Autism & Developmental Medicine Institute, Geisinger Health System, Lewisburg, PA, United States of America, Institute for Advanced Study in Toulouse, Toulouse, France


Abstract

Psychologists often note that most people think they are above average in intelligence. We sought robust, contemporary evidence for this “smarter than average” effect by asking Americans in two independent samples (total N = 2,821) whether they agreed with the statement, “I am more intelligent than the average person.” After weighting each sample to match the demographics of U.S. census data, we found that 65% of Americans believe they are smarter than average, with men more likely to agree than women. However, overconfident beliefs about one’s intelligence are not always unrealistic: more educated people were more likely to think their intelligence is above average. We suggest that a tendency to overrate one’s cognitive abilities may be a stable feature of human psychology.

Introduction

The statement that a majority of people claim to be more intelligent than average is literally a textbook example of overconfidence and self-enhancement [1–6]. Here we ask whether such “intelligence overconfidence” is reliably found in large samples weighted to be nationally representative, differs by method of data collection (telephone or online), and varies according to demographic factors including sex, age, and race/ethnicity. The answers to these questions will help solidify the evidence base for popular claims in psychology and contribute to research on self-perceptions, overconfidence, and intelligence.

Most demonstrations of the “smarter than average” effect are conducted using convenience samples, a method that raises concerns about generalizability [7,8]. Some studies have improved upon convenience sampling by collecting nationally representative survey data from college [9] and high school [10] students to measure change in self-positivity and narcissism over time. However, student populations suffer the limitations of failing to represent older and less-educated people, differing from the general population in income, race/ethnicity, and sex, and potentially having difficulty imagining the “average person” outside of a university environment.

Sampling from a more representative source of participants can overcome these limitations. Applying probability weighting to the sample can then account for over- and under-sampling of demographic groups. Some representative surveys of people’s beliefs about their own intelligence have been reported in the media [11,12]. However, these reports do not include important methodological details like sample sizes, weighting schemes, and inferential statistics. The only published study of a nationally representative sample of Americans reporting overconfident beliefs about relative intelligence was conducted over 50 years ago [13]. For these reasons, we decided to examine the pattern of intelligence overconfidence in the present U.S. population. From two large samples weighted to be nationally representative, drawn using distinct polling methods (telephone and online), with the second constituting a replication of the first, we report the proportions of Americans who agreed with the statement, “I am more intelligent than the average person”.

Although self-enhancement and overconfidence have been demonstrated across a broad range of traits [14,15], we chose to focus on the specific trait of intelligence because of its practical and theoretical importance. Because intelligence is normally distributed (when measured as IQ), rather than skewed like many other desirable traits [14], 50% of people in the general population will be above average (i.e., the mean and the median are the same). Additionally, general intelligence is consistently and readily measured [16], predictive of a wide variety of positive outcomes [17], relevant to trait-level overconfidence [18], and broadly perceived as highly desirable [1]. Measuring population-representative beliefs about this trait allows us to draw specific conclusions about possible demographic differences that are exploratory (e.g., sex, age, and race/ethnicity) [19] and theoretically informed (e.g., education level). With the results of each survey weighted to United States Census population data, we can directly compare patterns across survey methods (e.g., telephone and online) and demographic categories. Moreover, our approach updates the only similar study, conducted over 50 years ago [13], and improves upon it by examining whether, as a population, Americans have a calibrated sense of their own intelligence [20,21]. Specifically, we asked whether college-educated respondents, who on average are more intelligent than the average person, correctly believe that they are more intelligent than average.

Method

Survey methods

Telephone.

A large telephone survey (N = 1,838) was conducted in June of 2009 by the polling company SurveyUSA using random digit-dialing to contact land-line telephone users in the United States. Volunteer participants answered a series of questions, read aloud by a pre-recorded female voice, using their telephone’s keypad. Approximately 2.3% of the 79,014 random digit calls yielded a complete response. This response rate is typical of automated calling surveys and is associated with acceptably low amounts of sampling bias [22]. All data were collected anonymously.

Online.

A large sample of respondents (N = 983) was recruited via Amazon Mechanical Turk (MTurk) in July-August of 2011. The listing advertised that respondents would complete a “short survey of your beliefs about psychology.” Each participant was paid $0.25. Recruitment was restricted to the United States and repeat IP addresses were blocked from taking the survey. Questions were presented in the same order as the telephone survey, but appeared on screen instead of being read aloud. Of 1,020 people recruited, only 37 did not complete the survey.

These sample sizes exceeded the minimum required to detect a significant difference from 50% agreement (which would indicate no overconfidence at the population level) with 99% power. Our studies were not preregistered because they were conducted before preregistration was common in psychology. Therefore, all statistical inferences we draw may be regarded as exploratory rather than confirmatory.
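The sample-size claim above can be checked with a standard one-proportion formula. A minimal sketch (Python standard library only; the 60% alternative rate is an assumed illustrative effect size, not a figure from the paper):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_for_one_proportion(p0, p1, alpha=0.05, power=0.99):
    """n needed for a two-tailed one-sample test to detect true rate p1 vs. null p0."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_b = NormalDist().inv_cdf(power)           # quantile for the desired power
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# A hypothetical true agreement rate of 60% would need roughly 450 respondents
# at 99% power, well under both sample sizes (1,838 and 983).
n_req = n_for_one_proportion(0.50, 0.60)
```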

Procedure

The telephone survey was originally designed to achieve a nominal, nationally representative sample of 1,500 participants after weighting to the 2000 U.S. Census, and the online survey was designed to achieve a nominal sample of 750 after weighting to the 2010 census. The telephone survey was re-weighted to achieve a nominal sample of 750 participants based on 2010 census figures to allow for a direct comparison between the surveys. These sizes are typical of representative public opinion and political polls. In each case, we recruited more than the nominal number of participants to ensure adequate representativeness of the U.S. population after weighting. In addition to the item “I am more intelligent than the average person,” participants responded to items regarding popular myths about memory, attention, and the brain (full survey available at https://osf.io/zkh3e/?view_only=57b247e35eb4496399f40ca20cdf635f). These results are reported elsewhere [23–26].

For each item, participants chose one of five possible responses (Strongly Agree; Mostly Agree; Mostly Disagree; Strongly Disagree; Don’t Know). Additionally, participants provided the following information: sex, age, race/ethnicity, education, household income, region, number of psychology classes ever taken, and number of psychology books read in the last three years. All participants answered the same questions in the same order, and response options were always presented in the order shown above. All research reported in this manuscript was conducted with approval from the IRB of the University of Illinois.

Weighting to U.S. Census

To directly compare both surveys, we weighted each to a nominal nationally representative sample of 750 Americans using the 2010 U.S. Census demographics for sex (male, female), age (< 44, ≥ 44; based on median age), and race/ethnicity (white, nonwhite). Dichotomous weighting accounts for the over- or under-sampling of each combination of demographics in each polling method and is standard practice in polling and survey methodology. A greater proportion of women and older Americans completed the telephone survey, and a greater proportion of younger Americans completed the online survey. Table 1 displays the raw sample sizes and demographic weightings applied to each sample. All data are publicly available (https://osf.io/zkh3e/?view_only=57b247e35eb4496399f40ca20cdf635f).
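The cell-weighting logic can be sketched in a few lines. In this toy example (standard library; all counts and target shares are made up, not the actual census or sample figures), each respondent's weight is the ratio of the census share of their demographic cell to that cell's share of the sample:

```python
from collections import Counter

# Made-up respondents in four sex-by-age cells (a real weighting would also
# cross race/ethnicity, giving eight cells).
respondents = ([("F", "older")] * 30 + [("F", "younger")] * 20 +
               [("M", "older")] * 20 + [("M", "younger")] * 30)
# Hypothetical census target proportions for each cell.
target = {("F", "older"): 0.25, ("F", "younger"): 0.25,
          ("M", "older"): 0.25, ("M", "younger"): 0.25}

n = len(respondents)
counts = Counter(respondents)
# weight = population share of the respondent's cell / its sample share
weights = [target[cell] / (counts[cell] / n) for cell in respondents]
# After weighting, every cell's weighted share matches the census target,
# and the weights sum to the sample size.
```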

Table 1. Weighted sample proportions and demographic weighting values obtained from 2010 U.S. Census data.

https://doi.org/10.1371/journal.pone.0200103.t001

Results

Combining both surveys (resulting in 1,500 total weighted participants), 65% of participants agreed with the statement “I am more intelligent than the average person,” with 20% choosing “Strongly Agree” and 45% choosing “Mostly Agree.” The remaining 35% of participants included those who chose “Mostly Disagree,” “Strongly Disagree,” or “Don’t Know.” Because we classified “Don’t Know” responses as not agreeing, 65% represents a conservative estimate of the proportion of people who place themselves above average. Considering only people who expressed an opinion (i.e., excluding all “Don’t Know” responses), nearly three times as many people agreed (65%) as disagreed (23%) that they are above average in intelligence. Fig 1 presents, for each weighted demographic category, the percentage of participants choosing each level of agreement (Tables 2–5 display results for all demographic categories). Here and for subsequent results, we include two-tailed p-values for their heuristic value, but they should be treated as the outcomes of exploratory analyses rather than as confirmatory tests [27]. We report group differences using the z-test for differences between two independent proportions along with the 95% confidence interval around the difference.
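The two-proportion z-test used throughout this section can be sketched as follows (standard library only; fed with the rounded unweighted percentages reported below, so it reproduces the reported statistics only approximately):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(p1, n1, p2, n2):
    """z-test and 95% CI for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled rate under H0
    se0 = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se0
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-tailed
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled SE for the CI
    ci = ((p1 - p2) - 1.96 * se, (p1 - p2) + 1.96 * se)
    return z, p_value, ci

# Unweighted online (68%, n = 983) vs. telephone (62%, n = 1,838) agreement:
z, p_value, ci = two_prop_z(0.68, 983, 0.62, 1838)
```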

Fig 1. Percentages of participants reporting each level of agreement with the statement, “I am more intelligent than the average person”.

Weighted to 2010 U.S. Census categories for sex, age, and race/ethnicity. Top: telephone survey; Bottom: online survey.

https://doi.org/10.1371/journal.pone.0200103.g001

Comparing telephone and online surveys

Before weighting to the U.S. census, a greater percentage of online survey participants (68%) than telephone survey participants (62%) claimed to be more intelligent than average (difference = 5.9%, 95% CI: [2.2%, 9.6%]), z = 3.11, p = .002. This difference diminished after weighting (telephone: 65% agreement; online: 66% agreement, difference = 0.9%, 95% CI: [–3.9%, 5.7%]), z = .33, p = .75, indicating that overall agreement did not differ between weighted samples.

The smarter than average effect within weighted demographic categories

Sex. In both surveys, men were more likely than women to agree that they are more intelligent than average (telephone: 71% vs. 59%, difference = 12.4%, 95% CI: [5.5%, 19.0%]), z = 3.54, p < .001; (online: 72% vs. 60%, difference = 12.2%, 95% CI: [5.6%, 19.0%]), z = 3.57, p < .001.

Men were much more likely to “strongly agree” with the intelligence statement than were women (telephone: 29% vs. 16%, difference = 13.0%, 95% CI: [7.1%, 18.9%]), z = 4.30, p < .001; (online: 24% vs. 12%, difference = 11.9%, 95% CI: [6.5%, 17.5%]), z = 4.28, p < .001. Men and women chose “mostly agree” in similar proportions (telephone: 42% vs. 43%, difference = 0.7%, 95% CI: [–6.4%, 7.7%]), z = .19, p = .85; (online: 48% vs. 48%, difference = 0.3%, 95% CI: [–7.0%, 7.3%]), z = .04, p = .97. Thus, the overall sex difference was driven by differences in participants who “strongly agree” that they are more intelligent than the average person.

Age. Younger Americans (< 44 years) were more likely to agree than were older Americans (≥ 44 years) in the online survey (71% vs. 60%, difference = 11.6%, 95% CI: [4.9%, 18.4%], z = 3.38, p < .001), but this difference was smaller in the telephone survey (67% vs. 62%, difference = 5.0%, 95% CI: [–1.2%, 11.8%], z = 1.45, p = .147). To ensure that the age effect observed in each survey was not a spurious result of dichotomization, we regressed the dichotomized agreement variable on the continuous unweighted measure of age using logistic regression. In both surveys, younger Americans were more likely to agree: telephone exp(B) = .992, 95% CI: [.986, .997], Wald = 7.96, p = .005; online exp(B) = .978, 95% CI: [.969, .989], Wald = 17.37, p < .001.
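A logistic regression of dichotomized agreement on continuous age, as described above, can be sketched with a hand-rolled Newton-Raphson (IRLS) fit on synthetic data (NumPy; the age distribution and the slope of −0.015 log-odds per year are invented for illustration, not the survey data). The fitted odds ratio plays the same role as the reported exp(B):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(18, 80, n)
# Synthetic outcome: log-odds of agreeing decline slightly with age (made up).
p_true = 1 / (1 + np.exp(-(1.5 - 0.015 * age)))
agree = (rng.random(n) < p_true).astype(float)

# Newton-Raphson fit of logistic regression: agree ~ intercept + age.
X = np.column_stack([np.ones(n), age])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))            # fitted probabilities
    grad = X.T @ (agree - mu)                     # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])   # information matrix
    beta += np.linalg.solve(hess, grad)

# exp(B) < 1 means each additional year of age lowers the odds of agreeing.
odds_ratio_per_year = float(np.exp(beta[1]))
```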

Younger Americans (< 44 years) were more likely to “strongly agree” than were older Americans (≥ 44 years) (telephone: 27% vs. 17%, difference = 9.5%, 95% CI: [3.7%, 15.5%]), z = 3.17, p = .002; (online: 21% vs. 15%, difference = 6.7%, 95% CI: [1.5%, 12.4%]), z = 2.48, p = .013. Younger and older adults were comparably likely to respond “mostly agree” (telephone: 40% vs. 45%, difference = 4.5%, 95% CI: [–2.5%, 11.5%]), z = 1.30, p = .21; (online: 50% vs. 45%, difference = 4.8%, 95% CI: [–2.3%, 11.9%]), z = 1.32, p = .19.

Race/ethnicity. White and nonwhite Americans showed similar tendencies to believe that they were smarter than average in the telephone survey (66% vs. 63%, difference = 2.5%, 95% CI: [–4.9%, 10.1%], z = .646, p = .518) and online survey (64% vs. 71%, difference = 7.4%, 95% CI: [–0.2%, 14.3%], z = 1.91, p = .056).

Non-white Americans were more likely to “strongly agree” than were white Americans on the telephone survey (29% vs. 19%, difference = 9.2%, 95% CI: [2.7%, 16.4%]), z = 2.80, p = .005, but not on the online survey (19% vs. 18%, difference = 0.9%, 95% CI: [–4.9%, 7.2%]), z = .28, p = .78. White participants were more likely than non-white participants to select “mostly agree” on the telephone poll (46% vs. 34%, difference = 11.7%, 95% CI: [4.1%, 19.1%]), z = 2.98, p = .001, but this pattern was smaller and in the opposite direction in the online poll (white: 46% vs. nonwhite: 52%, difference = 6.5%, 95% CI: [–1.4%, 14.1%]), z = 1.61, p = .11.

Education: Are beliefs calibrated?

In both surveys, people with more education were more likely to claim above-average intelligence (see Fig 2). A linear contrast of education level, assigning contrast weights of –3, –1, 1, and 3 to the education levels of no college, some college, college graduate, and graduate degree, predicted unweighted agreement (disagreement and agreement were assigned 0 and 1, respectively) in both the telephone survey (contrast estimate = 0.802, 95% CI: [0.597, 1.01], t(1819) = 7.87, p < .001) and the online survey (contrast estimate = 0.564, 95% CI: [0.228, 0.900], t(978) = 3.29, p = .001). Although the measured education levels are not linear per se, the typical number of years of education required to attain each level follows a nearly linear structure. The proportion of Americans in our samples holding a college degree or higher (telephone: 36%; online: 42%) approximated the 2010 U.S. Census figure of 36%. Consequently, our “smarter than average” effect across education levels is unlikely to have resulted from oversampling highly educated people.
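The education contrast can be reproduced in miniature. A sketch (NumPy; the four group agreement rates and group sizes are made up, and the pooled-variance standard error is one standard way to test such a contrast, not necessarily the exact routine used for the reported t-values):

```python
import numpy as np

# Made-up 0/1 agreement data for four education levels (rates are illustrative).
groups = [np.repeat([0, 1], [45, 55]),   # no college: 55% agree
          np.repeat([0, 1], [40, 60]),   # some college: 60%
          np.repeat([0, 1], [30, 70]),   # college graduate: 70%
          np.repeat([0, 1], [25, 75])]   # graduate degree: 75%
c = [-3, -1, 1, 3]                       # linear contrast weights

means = [g.mean() for g in groups]
ns = [len(g) for g in groups]
estimate = sum(w * m for w, m in zip(c, means))   # contrast estimate

# Pooled within-group variance (ANOVA MSE) gives the contrast's standard error.
df = sum(ns) - len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df
se = (mse * sum(w ** 2 / n_j for w, n_j in zip(c, ns))) ** 0.5
t = estimate / se                        # positive t: agreement rises with education
```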

Fig 2. Percentages of participants agreeing with the statement, “I am more intelligent than the average person,” grouped by education level.

Agreement was measured as selecting either “Strongly Agree” or “Mostly Agree”.

https://doi.org/10.1371/journal.pone.0200103.g002

What proportions of people should claim above-average intelligence? Given that the average college graduate has an IQ approximately 13–15 points (one standard deviation) above the population mean (based on WAIS norms [28] and Bureau of Labor Statistics NLSY-97 data [29]), we compared the proportion of participants with a college or graduate degree who agreed that they were more intelligent than average (telephone: 73%; online: 73%) with the proportion of college graduates who could be considered more intelligent than the average American (84% by one account [28], or 81% by another [29]). This result suggests that college graduates in our samples actually slightly underestimated their relative intelligence. Conversely, data in the NLSY-97 study put the average IQ score for Americans with a high school diploma or GED at 99, which implies that only 47% of individuals in this category can be considered above average [29]. Of those in our sample who reported “no college” or “some college,” 55% of the telephone sample and 62% of the online sample claimed above-average intelligence. This result suggests that relatively uneducated participants tended to overestimate their relative intelligence [30,31]. Because only a minority of Americans have college degrees, members of the population as a whole tended to somewhat overestimate their relative intelligence.
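The benchmark percentages above follow directly from the normal distribution of IQ. A quick check (standard library; the group means of 115 and 99 are the figures quoted from the WAIS and NLSY-97 sources, and SD 15 is the conventional IQ scale):

```python
from statistics import NormalDist

# Share of college graduates (mean IQ ~115, SD 15) above the population mean of 100:
grads_above = 1 - NormalDist(115, 15).cdf(100)   # ~0.84, matching the 84% figure
# Share of the high-school/GED group (mean IQ ~99, SD 15) above 100:
hs_above = 1 - NormalDist(99, 15).cdf(100)       # ~0.47, matching the 47% figure
```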

Discussion

Two surveys, weighted to be nationally representative (total N = 2,821), found that nearly two-thirds of Americans believe that they are more intelligent than average. The survey methods (telephone, online) yielded similar overall agreement rates after weighting responses to match the U.S. population in sex, age, and race/ethnicity. In both surveys, men were more likely to express confidence in their intelligence than were women, and younger people were somewhat more likely to agree with the claim than older people.

These beliefs about relative intelligence appeared to be somewhat calibrated: Highly educated individuals were more likely to agree that they are more intelligent than the average person, whereas relatively uneducated individuals were less likely to agree [21, 31, 32]. Still, even among the least educated group of respondents, 50% or more agreed that they were above average in intelligence. These findings are consistent with several major theories of overconfidence: that the least intelligent are the most overconfident [30]; that self-perceptions are somewhat calibrated to reality [33]; and that comparative self-judgments regress toward the mean when collected from groups of educated and uneducated individuals [34, 35].

Our results do not explain why 65% of Americans agree that they are more intelligent than average. Several explanations are plausible [36]. First, although one-item, self-report measures of global intelligence correlate positively with IQ scores [37], participants may conceive of intelligence more broadly [38] and select that aspect of intelligence where they believe they outperform others. If so, more than 50% of people might actually be above average in some aspect of intelligence even if only 50% can be above average on IQ. Still, our finding that more educated people are more likely to agree suggests that participants are thinking to at least some extent about general intelligence.

Second, people may choose different baselines when comparing themselves to “the average person.” If people define “average” differently, perhaps based on who they encounter regularly [39], then more than 50% of respondents might report greater than average intelligence. Note that for this possibility to hold true and to be inconsistent with overly optimistic beliefs, people would need to systematically calibrate their notion of average downward (less intelligent people would need to choose a lower “average” than more intelligent people). Finally, it may simply be the case that people are somewhat calibrated, though overly optimistic on average, in their beliefs about their own intelligence [35].

Because these results were collected from and weighted based on the United States population, we caution against generalizing our findings before they are replicated in other cultures and regions. Our methodology was limited by the static question order presented to participants. Although we had no a priori reason to expect an order effect in this context, future research should consider this possibility. In a nationally representative study of Americans’ beliefs about competency in handling firearms, overconfidence was measured using a similar one-item, direct comparison measure [40]. The authors reported no difference in overconfidence regardless of whether or not there was a neutral scale midpoint. These results were similar across 2-, 3-, 5-, and 13-point rating scales. Thus, we have no reason to believe that including a neutral midpoint would have meaningfully affected our results. The education-based analysis was limited to comparisons based on population characteristics, not objectively measured individual performance.

Despite these limitations, we conclude that Americans’ self-flattering beliefs about intelligence are alive and well several decades after their discovery was first reported. Our results update the textbook phenomenon of intelligence overconfidence by (1) replicating the effect using large, representative, contemporary samples and two distinct survey methods, (2) demonstrating a degree of calibration across levels of education, and (3) showing moderation based on sex and age. The endurance of the smarter-than-average effect is consistent with the possibility that a tendency to overrate one’s own abilities is a stable feature of human psychology.

References

  1. Alicke MD. Global self-evaluation as determined by the desirability and controllability of trait adjectives. Journal of Personality and Social Psychology. 1985;49(6):1621–30. http://doi.org/10.1037//0022-3514.49.6.1621
  2. Bachman JG. Youth in Transition. Volume II, The Impact of Family Background and Intelligence on Tenth-Grade Boys. Institute for Social Research, University of Michigan, Ann Arbor, Michigan; 1970.
  3. Brim OG. College grades and self-estimates of intelligence. Journal of Educational Psychology. 1954;45(8):477–84. http://doi.org/10.1037/h0057492
  4. Myers D. Social psychology. McGraw-Hill Higher Education, Columbus OH; 2012.
  5. Torrance EP. Some practical uses of a knowledge of self-concepts in counseling and guidance. Educational and Psychological Measurement. 1954;14(1):120–7. http://doi.org/10.1177/001316445401400110
  6. Wylie RC. The self-concept: Theory and research on selected topics. Revised Edition. Vol. 2. Lincoln: University of Nebraska Press; 1979.
  7. Henrich J, Heine SJ, Norenzayan A. Beyond WEIRD: Towards a broad-based behavioral science. Behavioral and Brain Sciences. 2010;33(2–3):111–35. pmid:20550733
  8. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716. http://doi.org/10.1126/science.aac4716
  9. Twenge JM, Campbell WK, Gentile B. Generational increases in agentic self-evaluations among American college students, 1966–2009. Self and Identity. 2011;11(4):409–27. http://dx.doi.org/10.1080/15298868.2011.576820
  10. Trzesniewski KH, Donnellan MB, Robins RW. Do today's young people really think they are so extraordinary? Psychological Science. 2008;19(2):181–8. pmid:18271867
  11. Campbell M. 100% Canadian. The Globe and Mail [Internet]. 2000 Dec 30; Available from: http://www.craigmarlatt.com/canada/symbols_facts&lists/100_Canadian.html
  12. YouGov. Respondent Intelligence. Question 1 of 2, April 30th–May 2nd, 2014; Available from: http://cdn.yougov.com/cumulus_uploads/document/gjfw827qts/tabs_OPI_intelligence_20140502.pdf
  13. Brim OG, Neulinger J, Glass DC. Experiences and attitudes of American adults concerning standardized intelligence tests. Technical Report. New York: Russell Sage Foundation; 1965.
  14. Guenther CL, Alicke MD. Deconstructing the better-than-average effect. Journal of Personality and Social Psychology. 2010;99(5):755–70. http://dx.doi.org/10.1037/a0020959 pmid:20954785
  15. Sedikides C, Gaertner L, Cai H. On the panculturality of self-enhancement and self-protection motivation: The case for the universality of self-esteem. Advances in Motivation Science. 2015;2:185–241.
  16. Chabris CF. Cognitive and neurobiological mechanisms of the Law of General Intelligence. In: Roberts MJ, editor. Integrating the mind: Domain general versus domain specific processes in higher cognition. 2007:449–491.
  17. Kuncel NR, Hezlett SA, Ones DS. Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology. 2004;86(1):148–161. pmid:14717633
  18. Cesarini D, Lichtenstein P, Johannesson M, Wallace B. Heritability of overconfidence. Journal of the European Economic Association. 2009;7(2–3):617–627.
  19. Cesarini D, Sandewall Ö, Johannesson M. Confidence interval estimation tasks and the economics of overconfidence. Journal of Economic Behavior & Organization. 2006;61(3):453–470. https://doi.org/10.1016/j.jebo.2004.10.010
  20. Benoît J-P, Dubra J, Moore DA. Does the better-than-average effect show that people are overconfident? Two experiments. Journal of the European Economic Association. 2014;13(2):293–329. http://doi.org/10.1111/jeea.12116
  21. Heck PR, Krueger JI. Self-enhancement diminished. Journal of Experimental Psychology: General. 2015;144(5):1003. http://doi.org/10.1037/xge0000105
  22. Keeter S, Hatley N, Kennedy C, Lau A. What low response rates mean for telephone surveys. Pew Research Center; 2017.
  23. Chabris CF, Simons DJ. The invisible gorilla, and other ways our intuitions deceive us. New York: Crown; 2010.
  24. Holtzman GS, Chabris CF, Simons DJ. The Widespread Prevalence of Neuromyths, and Why Psychological Scientists Should Actively Debunk Them. Manuscript submitted for publication.
  25. Simons DJ, Chabris CF. What people believe about how memory works: A representative survey of the US population. PLoS ONE. 2011;6(8):e22757. http://doi.org/10.1371/journal.pone.0022757 pmid:21826204
  26. Simons DJ, Chabris CF. Common (mis)beliefs about memory: A replication and comparison of telephone and Mechanical Turk survey methods. PLoS ONE. 2012;7(12):e51876. http://doi.org/10.1371/journal.pone.0051876 pmid:23272183
  27. Krueger JI, Heck PR. The heuristic value of p in inductive statistical inference. Frontiers in Psychology. 2017;8:908. http://doi.org/10.3389/fpsyg.2017.00908 pmid:28210235
  28. Matarazzo JD. Wechsler's measurement and appraisal of adult intelligence. Oxford University Press; 1972.
  29. Murray C. Coming apart: The state of white America, 1960–2010. New York: Crown Forum; 2013.
  30. Kruger J, Dunning D. Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology. 1999;77(6):1121. http://dx.doi.org/10.1037/0022-3514.77.6.1121 pmid:10626367
  31. Moore DA, Healy PJ. The trouble with overconfidence. Psychological Review. 2008;115(2):502–17. http://doi.org/10.1037/0033-295X.115.2.502 pmid:18426301
  32. Krueger JI, Heck PR, Asendorpf JB. Self-enhancement: Conceptualization and assessment. Collabra: Psychology. 2017;3(1). http://doi.org/10.1525/collabra.91
  33. Zell E, Krizan Z. Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science. 2014;9(2):111–25. https://doi.org/10.1177/1745691613518075 pmid:26173249
  34. Krueger JI, Mueller RA. Unskilled, unaware, or both? The better-than-average heuristic and statistical regression predict errors in estimates of own performance. Journal of Personality and Social Psychology. 2002;82(2):180–8. http://dx.doi.org/10.1037/0022-3514.82.2.180 pmid:11831408
  35. Moore DA, Small DA. Error and bias in comparative judgment: On being both better and worse than we think we are. Journal of Personality and Social Psychology. 2007;92(6):972. pmid:17547483
  36. Benoît J-P, Dubra J. Apparent overconfidence. Econometrica. 2011;79(5):1591–625. https://doi.org/10.3982/ECTA8583
  37. Paulhus DL, Lysy DC, Yik MS. Self-report measures of intelligence: Are they useful as proxy IQ tests? Journal of Personality. 1998 Aug 1;6. https://doi.org/10.1111/1467-6494.00023
  38. Furnham A. Self-estimates of intelligence: Culture and gender difference in self and other estimates of both general (g) and multiple intelligences. Personality and Individual Differences. 2001;31(8):1381–405. https://doi.org/10.1016/S0191-8869(00)00232-4
  39. Galesic M, Olsson H, Rieskamp J. A sampling model of social judgment. Psychological Review. 2018;125(3):363. http://dx.doi.org/10.1037/rev0000096
  40. Stark E, Sachau D. Lake Wobegon’s guns: Overestimating our gun-related competences. Journal of Social and Political Psychology. 2016;4(1):8–23.