
When intents to educate can misinform: Inadvertent paltering through violations of communicative norms

  • Derek Powell ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Psychology, Stanford University, Stanford, CA, United States of America

  • Lin Bian,

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Human Development, Cornell University, Ithaca, NY, United States of America

  • Ellen M. Markman

    Roles Conceptualization, Investigation, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Psychology, Stanford University, Stanford, CA, United States of America


Paltering is a form of deception whereby true statements are used to mislead and is widely employed in negotiations, marketing, espionage, and ordinary communications where speakers hold ulterior motives. We argue that paltering is accomplished through strategic violations of communicative norms such as the Gricean cooperative principles of relevance, quantity, quality and manner. We further argue that, just as genuine paltering deceives by deliberately violating communicative norms, inadvertent violations of these norms may be just as misleading. In this work, we demonstrated that educational information presented prominently on the American Diabetes Association website violated the Gricean communicative principles and disrupted readers’ performance on a test of diabetes knowledge. To establish the effects of these communicative violations, we revised the ADA's information to preserve the original content while better adhering to pragmatic principles. When these ADA explanations were judiciously revised to minimize pragmatic violations, they were transformed from misleading to educational.


Telling the truth can be used to mislead. The act of deceiving through truthful statements is known as “paltering” [1] and is widely employed in negotiations (e.g., [2]), political discourse (e.g., [3,4]), marketing and sales (e.g., [5,6]), and espionage (e.g., [7]), as well as in many ordinary interactions where speakers hold ulterior motives (e.g., [8,9]). Palterers imply rather than assert falsehoods, leading listeners to draw false inferences from true statements. Paltering is sometimes preferable to flat-out lying because hewing closer to the truth can be more persuasive, is less likely to be discovered, preserves deniability if discovered, and can hide speakers’ bias (e.g., [10]). Paltering also offers a way to deceive without experiencing the guilt of lying outright [2]. Although genuine paltering is a form of deliberate deception, inadvertent paltering is also possible. What would count as inadvertent paltering? Following Malle and Knobe’s [11] model of the folk concept of intentionality, we suggest that inadvertent paltering takes place when the speaker (a) does not desire to deceive and (b) is not aware of communicating in a misleading way. We have previously presented one case study in which well-intentioned speakers inadvertently engaged in paltering ([12]; also see [10] for further instances of misleading through pragmatic implications). In the current work, we argue that deliberate paltering is accomplished through strategic violations of communicative norms and that inadvertent violations of these same norms can be equally misleading. We demonstrate the consequences of inadvertent paltering in materials presented on a widely-respected health website.

We first turn to one of the foundational theories of pragmatics formulated by Grice [13]. One of the fundamental insights of linguistic pragmatics is that listeners are continually drawing inferences, filling in blanks, interpreting, and making sense of what speakers say. It is important that these inferences are sound, reflecting the speaker’s communicative intent. Grice argued that this is made possible by speakers and listeners engaging in a cooperative enterprise where each is making assumptions about their mutual knowledge or common ground. Grice formulated a set of principles or maxims that listeners expect their communicative partners to adhere to. The assumption that a communicative partner will adhere to these principles guides listeners in their interpretations of their partners’ utterances. The Gricean principles of quality, quantity, relevance, and manner all work together to ensure successful communication.

The principle of “relevance” sets the expectation that a speaker’s utterances are relevant to the topic under discussion. This is essential for the interpretation of pronouns and determiners, for example. “He, she, they, that, this” can only be correctly interpreted in a sentence if the preceding context makes it clear what they refer to. But many more subtle examples exist. For example, the utterance “It looks like rain today” as an answer to a question has radically different interpretations depending on the question asked. If asked, “Should we go on a picnic?”, “It looks like rain today” means, “No, we shouldn’t”. If asked, “Is the drought ever going to end?”, “It looks like rain today” means, “Yes, maybe the drought is ending.”

The principle of “quality” sets the expectation that the utterance will be truthful and that speakers have justification or evidence for believing their statements to be truthful. To illustrate, suppose you are at the airport when you overhear one passenger mention to another that their flight was delayed, and you realize this might be your flight to New York. Then, someone approaches you and asks whether the flight to New York is delayed. If you answered “yes,” the principle of quality would lead that person to assume you not only believed but had good evidence that the flight was delayed. In this case, they might decide to leave the area and go have lunch. Clearly, when evidence is merely hearsay, it is more felicitous to say, “I think so, but I’m not sure.”

The principle of “quantity” sets the expectation that speakers will provide an appropriate amount of information and detail: as much as is needed, but no more. Someone who apologizes for running late by saying “I had a hard time finding a parking space for my Porsche today,” is providing more specific information than is needed. This could lead listeners to wonder about the speaker’s communicative intent and infer the speaker is bragging as well as apologizing.

Finally, the principle of manner sets the expectation that speakers will avoid ambiguity, present information in an orderly manner, and be brief.

All of these principles can be flouted, that is, blatantly violated in ways that are meant to be obvious to the listener. This technique often underlies irony, humor, metaphor, and sarcasm. Sarcasm, for example, flouts the quality principle. If someone has just mentioned seeing a really disturbing documentary on the Holocaust and then says “It was a really lighthearted, upbeat movie”, listeners can readily infer she means the opposite. A listener will not be misled or deceived so long as they are able to detect the intentional violation of the principle. In contrast, deception relies on the strategic violation of these principles in the hope that listeners will not detect these violations. Outright falsehoods are clearly a violation of the maxim of quality. More subtle are the violations that constitute paltering—telling the truth but deftly manipulating other communicative principles to effectively mislead or deceive.

We turn now to some examples of genuine paltering. According to a review by Luscombe [7], palterers commonly seek to “evade certain elements of the truth through vague descriptions, misplaced emphasis and omissions” (also see [1]). We argue that these methods palterers employ to accomplish their deceptions all rely on exploiting and subtly violating the Gricean principles.

Vagueness is a violation of quantity and manner that, along with the principle of relevance, invites listeners to fill in gaps with plausible, but false, inferences. Rogers et al. [2] provide an example of an executive looking to fill a position who has identified only one candidate who fits. To reduce the leverage held by the candidate, the executive might say: “There is a great deal of demand for this position from a large number of impressive individuals.” Although it may be true that many “impressive” people applied, their credentials were not a good fit for this particular position. This deception relies on the listener interpreting the vague “impressive” as being relevant to the criteria used for hiring.

Palterers also emphasize and provide irrelevant information to deflect attention away from topics they wish to avoid (e.g., [2,6,9]). One common technique is to begin a story with details of secondary importance to the reader while postponing more essential facts [1]. Misplaced emphasis is a violation of the principles of relevance and quantity, according to which listeners expect a given utterance to be directly relevant to the topic under discussion. Irrelevant information not only deflects attention, but can also be used to mislead more directly. For instance, presented with one question or topic of discussion, palterers sometimes address another. Here too, they violate the principles of relevance and quantity. Politicians are notorious for using these tactics (e.g., [3]). Emphasizing irrelevant information often goes hand-in-hand with the omission of crucial information, a further violation of the quantity principle.

Often, paltering involves violation of multiple Gricean principles. Suppose someone is selling a car and a potential buyer asks, “Does this car need any maintenance?” The car is scheduled for maintenance fairly soon, but a paltering seller might respond, “It’s a great car, it’s always run beautifully”. This violates relevance and quantity: It might be true that the car has run smoothly, but the seller is answering a different question (violating relevance) while failing to mention that the car needs maintenance soon (violating quantity).

Pragmatic violations of this sort, such as subtle and strategic implications, are widespread in marketing communications and product labeling [14]. Behavioral research has shown that pragmatic implications can be just as forceful as direct assertions in advertising contexts [15]. In some domains, such as the marketing and sales of financial services and pharmaceutical products, a certain amount of disclosure is legally mandated. Even in these cases, however, some companies contrive to provide just enough information to meet the disclosure requirements while still concealing many known risks from potential consumers [5], a violation of quantity. Still, if the misrepresentation is significant enough, companies can be held liable for paltering communications (e.g., [16]).

Our present study focuses on how these same kinds of paltering techniques might be accomplished inadvertently if speakers do not take care to respect Gricean principles of communication. In a recent case study, Powell et al. [12] found that the American Diabetes Association’s website inadvertently paltered by presenting an infelicitous list of myths about diabetes, likely misleading health consumers. The American Diabetes Association website is a highly reputable and visible source for diabetes information [17]. This content is prominently displayed on the site and has also been linked, reproduced, or paraphrased on many other prominent health websites. In contrast to the other instances of paltering we have discussed, we do not believe that the ADA was trying to be misleading; like Powell et al. [12], we assume it most likely meant to provide constructive information and advice.

The offending page presented a list of ten diabetes “myths” alongside ten explanations meant to correct those misconceptions [18]. Labeling a statement a “myth” marks it as both widely believed and robustly false. That is, a felicitous speaker would not label a statement a “myth” if it were false only on an uncharitable or highly technical reading. However, the ADA website listed as “myths” statements that were partly or even largely true. For instance, “Myth: People with diabetes can’t eat sweets or chocolates”. Obviously, labeling this statement a myth implies that people with diabetes can eat sweets and chocolates. It may even license a reader to further infer that people with diabetes may eat sweets and chocolates regularly, or just as people without diabetes do. Participants who read these infelicitous myths performed significantly worse on a test of basic diabetes knowledge than participants who had not been exposed to them. Rephrasing the myths as questions that do not carry such presuppositions (e.g., “Can people with diabetes eat sweets or chocolates?”) eliminated their deleterious effects [12]. Thus, it seems that the infelicitous use of the “myth” label misled readers through an apparently unintentional form of paltering.

Powell et al. [12] focused only on the pragmatics of the ADA’s myths. As mentioned earlier, however, the ADA’s infelicitous myths were accompanied by explanations meant to clarify the myths. One striking and unpredicted finding from Powell et al. [12] was that not only did these explanations fail to counter the detrimental effects of reading the myths, but the explanations were themselves problematic. Recall that when the myths were rephrased as questions, participants’ diabetes knowledge was preserved. In contrast, when participants read the ADA’s explanations alongside those questions, their performance on the knowledge test suffered just as much as that of participants who had read the infelicitous myths [12].

Here, we argue that the ADA’s explanations were flawed because they contained the same communicative violations that characterize genuine paltering. To illustrate, consider the ADA’s explanation for the myth that “People with diabetes are more likely to get colds and other illnesses”:

ADA’s explanation

You are no more likely to get a cold or another illness if you have diabetes. However, people with diabetes are advised to get flu shots. This is because any illness can make diabetes more difficult to control, and people with diabetes who do get the flu are more likely than others to go on to develop serious complications.

There are a number of pragmatic violations in this explanation that have the potential to mislead readers. In the first sentence, “you are no more likely to get a cold or another illness if you have diabetes,” the use of “another illness” is vague in a way that violates the principle of manner. It’s unclear whether illnesses include only things like colds and other viral infections, or also things like pneumonia, or even things like heart disease or kidney disease. Depending on the scope of “illness,” this statement might readily be false: for example, people with diabetes do have an increased risk of pneumonia. Another problem is that the explanation violates quantity and relevance by focusing solely on the likelihood of contracting an illness. When considering risk for illness, the probability of contraction is not the only relevant consideration; the severity of the illness and its effects on function and well-being also matter. By not specifically clarifying that their statements pertain only to the likelihood of contraction, the ADA’s explanation falsely invites the interpretation that people with diabetes are no more affected by illnesses than people without diabetes. Delaying this explicit point and eventually focusing only on the flu is a violation of both relevance and quantity. Altogether, these violations give the false impression that diabetes has little detrimental effect on the immune system.

Our goal was to examine the effects of the pragmatic violations rather than the semantic content of these explanations. Thus, we sought to revise them as minimally as possible, resolving the pragmatic violations without changing the actual content being presented. As much as possible, we sought to avoid adding content that was not originally presented, but in some cases we did remove distracting content in order to correct pragmatic violations. Here is a revision crafted under these constraints that we predicted should be informative rather than misleading:

Revised explanation

Although you may not be more likely to get a cold if you have diabetes, people with diabetes who do get the flu, for example, are more likely than others to go on to develop serious complications and any illness can make diabetes more difficult to control. For this reason, people with diabetes are advised to get flu shots.

In the revised explanation, we first addressed the violation of manner caused by the vague use of “another illness” by deleting this phrase. Then, we corrected the violations of quantity and relevance, whereby the ADA’s explanation focused solely on the likelihood of contraction. By beginning the first sentence with “although,” we explicitly set up a contrast between the likelihood of contraction and the potential severity of illness. By moving this point up in the explanation, we corrected the violation of the relevance principle. In addition, adding “for example” generalized the risk of increased severity beyond the flu by stipulating that complications from the flu were an example of a more general point.

On our analysis, five of the ADA’s explanations included violations of one or more of the four Gricean maxims of manner, relevance, quantity, and quality. Table 1 presents our analysis of the ADA’s explanations and their pragmatic violations in detail, along with the measures we took to correct them. By judiciously editing the ADA’s explanations to minimize pragmatic violations, we should be able to retain the important informational content while avoiding paltering. Once edited, the explanations should no longer mislead and should instead become informative.

Table 1. The ADA’s myths and accompanying explanations, along with those myths rephrased as questions, our analysis of the pragmatic violations in the ADA’s explanations, and our revised explanations.

Sources supporting the factual accuracy of our revised explanations are the same as those supporting the correct and incorrect answers for the diabetes knowledge scale, referenced in Table 2.


This study was approved by the Institutional Review Board Panel on non-medical human subjects at Stanford University (protocol: IRB-14174). Participants’ consent was obtained on a form presented online as part of the survey procedure.

Design overview

In this study we (1) analyzed how the communicative violations that drive paltering could have made the ADA’s explanations misleading and (2) revised the ADA’s explanations to avoid or minimize paltering. If our revisions were effective, these edited explanations should no longer have detrimental effects on people’s understanding.

To test this, we compared people’s knowledge of diabetes in six experimental conditions. The six conditions differed in the informational content that was presented before the diabetes knowledge test. This information included either the ADA’s original myths or those myths reframed as questions either alone, or paired with the ADA’s explanations or our revised explanations, resulting in a 2 (framing: myth-framing, question-framing) x 3 (presentation of explanations: no explanation, ADA’s explanation, revised explanation) factorial design.

Powell et al. [12] found that participants’ performance in the rephrased questions condition was identical to performance in a baseline control condition in which participants did not receive any information about diabetes. Thus, in the current study, the questions only condition served as a baseline condition. Further, we expected to replicate the earlier finding that the ADA’s explanations undermined participants’ baseline understanding of diabetes. Finally, in the key novel conditions, we presented edited explanations that avoided paltering. We predicted that these revised explanations should not impair knowledge when presented alongside the felicitous questions, and may even correct misconceptions induced by the infelicitous myths.


A sample size of 50 participants per condition was chosen based on the results of Powell et al. [12]. A total of 328 participants living in the United States were recruited from Amazon's Mechanical Turk service. Thirty-two participants were excluded from analyses after failing an attention check question, leaving 296 participants (Mean age = 33.95; 189 women, 107 men) in the final analysis. All participants were paid $0.75 for participation.

Among the people who responded, 14.9% reported that they had been diagnosed with diabetes. Among the non-diabetic participants, 14.6% reported that they were prediabetic and 67.8% reported that they had a family member or someone else close to them who had been diagnosed with diabetes.

Sixty-one percent of the participants were non-Hispanic white, 22.6% Asian American, 9.8% Black or African American, 4.4% Hispanic or Latino, 1.0% Native American and 1.0% other. The median yearly household income was $30,001 to $50,000. Sixty-four percent of the participants in the sample had at least a bachelor’s degree.

Procedure and materials

Participants were directed to an online Qualtrics survey. After a brief demographic questionnaire, they were asked to read some information from the American Diabetes Association website.

Next, participants were randomly assigned to one of six conditions in a 2 (framing: myth-framing vs. question-framing) x 3 (presentation of explanations: no explanation, ADA explanation, or revised explanation) between-subjects design. A first group of participants was recruited and each participant was randomly assigned to one of the three question-framing conditions. After reflection, we determined that another set of conditions in which participants were exposed to the myths was needed to properly align the current study with the prior work of Powell et al. [12]. A second group of participants was then recruited and each participant was randomly assigned to one of the three myth-framing conditions. For simplicity, we present these two phases together as a single experiment. We identified 5 explanations with pragmatic violations on the American Diabetes Association website. For each explanation, we revised the language to avoid paltering, generating 5 revised explanations in total. A summary of the original and the revised explanations is presented in Table 1.

Participants in the myth-framing conditions were told they would read some myths about diabetes and then presented with the 5 relevant “diabetes myths” from the ADA’s website. Participants in the question-framing conditions were told they would read about some common questions that people have about diabetes and were then presented with 5 questions reframed from the myths. Alongside the myths or the rephrased questions, participants either read no explanations, the ADA’s explanations, or our revised explanations.

To measure participants’ diabetes knowledge in each condition, we adopted the ten true-false items from Powell et al. [12] (Table 2). The correct answers to these questions were determined based on information available on ADA websites as documented by Powell et al. [12] and reproduced in the supplemental materials. Participants were asked to judge whether each statement was true or false on a 4-point scale (“definitely true”, “probably true”, “probably false”, or “definitely false”). The scale allowed us to assess both participants’ knowledge accuracy and their confidence in their answers.

Table 2. Diabetes knowledge questions and answer sources (* correct answer is false).

Table reproduced from Powell et al. [12].

Participants were also asked to enter “probably false” for an attention check question included in the questionnaire. Those who failed this attention check were excluded from analyses.


The main goal of this study was to determine whether correcting the pragmatic violations in the ADA’s explanations would mitigate their negative effects on people’s diabetes knowledge.

Recall that participants were asked to judge whether each statement was true or false on a 4-point scale (“definitely true”, “probably true”, “probably false”, or “definitely false”). Participants’ responses were re-coded as a binary accuracy measure, indicating correct and incorrect responses. The proportion of correct responses across conditions is shown in Fig 1. We also re-coded participants’ responses as either high or low confidence for both correct and incorrect answers (Fig 2).
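As a concrete illustration of this re-coding step, the sketch below shows one way the two binary measures could be derived from a raw response. The function and label names are hypothetical, and Python is used purely for illustration; the study’s actual analyses were conducted in R.

```python
def recode(response, correct_answer):
    """Re-code a 4-point response ("definitely true" ... "definitely false")
    into two binary flags: accuracy and high confidence.

    `response` is one of the four scale labels; `correct_answer` is
    "true" or "false". Names are illustrative assumptions only.
    """
    answered_true = response in ("definitely true", "probably true")
    # Correct if the true/false direction of the response matches the key
    accurate = answered_true == (correct_answer == "true")
    # "definitely ..." responses count as high confidence; "probably ..." as low
    high_confidence = response.startswith("definitely")
    return accurate, high_confidence
```

For example, a “probably false” response to an item whose correct answer is false would be coded as correct but low-confidence.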

Fig 1. Proportion of correct responses across the 10 items.

Red, circular dots show proportion correct for each individual participant (with a small amount of jitter). Black, filled points and bars indicate average proportion correct for each condition, with 95% bootstrapped confidence intervals.

Fig 2. Proportion of highly confident responses for correct (left) and incorrect (right) responses.

Red, circular dots show the proportion of highly confident responses for each individual participant (with a small amount of jitter). Black, filled points and bars indicate the average proportion of highly confident responses for each condition, with 95% bootstrapped confidence intervals.

We report Bayesian hierarchical regression analyses conducted using the BRMS R package (v2.2). The BRMS package implements Bayesian analyses using the probabilistic programming language Stan [27]. This approach allows us to model all of the data by using responses for each individual question rather than aggregating values for each participant, and to apply models that are consistent with the data-generating process (e.g., applying logistic regression for binary responses). We assume a weakly-informative prior, Normal(0,1), for the betas in each model, as suggested by Gelman and colleagues [28].
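To make the role of the weakly-informative prior concrete, here is a minimal sketch, in Python rather than the R/Stan tooling actually used, of the unnormalized log posterior for a plain logistic regression with Normal(0,1) priors on the coefficients. The hierarchical random intercepts of the full brms models are omitted for brevity, and all names are illustrative assumptions.

```python
import numpy as np

def log_posterior(beta, X, y):
    """Unnormalized log posterior for logistic regression with a
    Normal(0, 1) prior on each coefficient (random intercepts omitted).

    X is an (n, p) design matrix, y an (n,) vector of 0/1 outcomes,
    beta a (p,) coefficient vector."""
    logits = X @ beta
    # Bernoulli log-likelihood under a logistic link
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    # Normal(0, 1) log prior on the betas, up to an additive constant
    log_prior = -0.5 * np.sum(beta ** 2)
    return log_lik + log_prior
```

The prior term shrinks implausibly large coefficients toward zero while leaving moderate effect sizes essentially unconstrained.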

1. The Accuracy of Diabetes Knowledge

First, we examined how the presentation of information affected participants’ accuracy on the diabetes knowledge test. We examined condition differences by predicting accuracy (correct or incorrect) on each question from variables representing the myths and explanation-type factors and their interactions, with random intercepts by item and by participant. This hierarchical logistic regression model can be expressed in the common mixed-effects formula syntax as:

correct ~ myths * explanation + (1 | item) + (1 | participant)
We first examined whether the primary findings of Powell et al. [12] were replicated in our new study. In prior work, Powell et al. [12] found that the infelicitous “myths” negatively impacted participants’ diabetes knowledge, but that diabetes knowledge was preserved when the myths were rephrased as questions. Replicating these findings, we found that diabetes knowledge was reduced by the myths phrasing relative to the rephrasing as questions, as the myths factor coefficient was credibly negative, B = -0.49, 95% Credible Interval (CI95) = [-0.97, -0.04]. In addition, Powell et al. [12] found that, rather than improving diabetes knowledge, the ADA’s explanations actually reduced diabetes knowledge. We replicated this finding in our current study: Participants who read the ADA’s explanations scored worse on the diabetes knowledge test than participants who read no explanations at all, B = -0.61, CI95 = [-1.08, -0.15].

Having replicated these findings, we then examined the impact of our revised explanations, which better respected Gricean communicative norms, on participants’ diabetes knowledge. As predicted, we found that our revised explanations improved participants’ accuracy on the diabetes knowledge test relative to participants who saw the ADA’s explanations, B = 0.74, CI95 = [0.24, 1.22].

To further examine differences among the conditions, we performed additional comparisons among the obtained condition coefficients. For each comparison, we report the mean difference between the estimated condition coefficients (Bdiff) with 95% credible intervals. First, we examined whether our revisions rendered the explanations actually informative, as opposed to merely not misleading: specifically, whether the revised explanations were able to correct the misleading effects of the ADA’s infelicitous “myths” statements. Compared to participants who read the myths alone, performance on the diabetes knowledge test was superior among participants who read the myths paired with the revised facts, Bdiff = 0.837, CI95 = [0.314, 1.365]. Powell et al. [12] found that participants who read the myths rephrased as questions performed similarly on the diabetes knowledge test to participants in a baseline condition who read no information. Comparing participants who read the questions versus those who read the questions paired with our revised explanations, we did not see a credible improvement in diabetes knowledge, Bdiff = 0.181, CI95 = [-0.308, 0.675]. Thus, our revisions are informative enough to clarify or undo the damage done by the infelicitous myths, but they do not educate participants beyond baseline. This may be due to the relatively basic level of diabetes knowledge being addressed and assessed in this study.
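The Bdiff comparisons above can be sketched as follows: given posterior draws for two condition coefficients, the mean of their elementwise difference and an equal-tailed 95% interval are computed directly. This is a Python illustration of the general procedure; the actual comparisons were computed from the brms posterior samples, and the function name is hypothetical.

```python
import numpy as np

def coef_difference(samples_a, samples_b, level=0.95):
    """Mean difference between two sets of posterior coefficient draws
    (Bdiff) with an equal-tailed credible interval at `level`."""
    diff = np.asarray(samples_a) - np.asarray(samples_b)
    tail = (1.0 - level) / 2.0
    # Equal-tailed interval from the empirical quantiles of the differences
    lo, hi = np.quantile(diff, [tail, 1.0 - tail])
    return diff.mean(), (lo, hi)
```

A comparison is read as credible when the resulting interval excludes zero.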

To summarize, both the presentation of myths and the ADA’s explanations reduced participants’ knowledge of the basic diabetes facts. In contrast, our findings indicate that when the ADA’s explanations were revised to eliminate the pragmatic violations, they restored participants’ knowledge of diabetes.

2. Confidence in Correct and Incorrect Answers

Next, we considered how these materials influenced people’s confidence in their answers. Ideally, educational materials should increase people’s confidence in their correct responses and reduce their confidence in incorrect responses. Fig 2 shows participants’ average confidence across items for correct and incorrect responses.

Confidence in correct answers

In prior work we found that the ADA myths undermined people’s confidence in their correct responses, but pairing the ADA’s explanations with the myths helped restore this confidence [12]. To examine whether this effect replicated, participants’ confidence for correct answers was submitted to a hierarchical logistic regression analysis, predicting degree of confidence (binary, high or low) from variables representing condition, with random intercepts by item and by participant. Expressed in mixed-effects formula syntax, the model was:

high_confidence ~ condition + (1 | item) + (1 | participant)
First we assessed whether the infelicitous myths reduced confidence in correct responses. Compared to the questions-only condition, participants’ confidence in correct answers was again reduced in the myths-only condition, B = -0.90, 95% CI = [-1.44, -0.37]. Then, we examined how the explanations affected confidence for correct responses. Further replicating prior findings, the ADA’s facts did restore confidence in correct answers relative to the myths-only condition, Bdiff = 0.835, 95% CI = [0.258, 1.432]. Moreover, our revised explanations restored confidence in correct responses to a similar degree, Bdiff = 0.928, 95% CI = [0.375, 1.495].

Confidence in incorrect answers

Finally, we performed the same analysis for participants’ confidence for incorrect answers. First we assessed whether the infelicitous myths increased confidence in incorrect responses. Compared with the questions-only condition, the myths and ADA’s facts increased participants’ confidence in their incorrect answers, B = 1.29, 95% CI = [0.59, 2.00]. A similar pattern was observed for the questions and ADA facts condition, although the 95% credible interval for this coefficient includes zero, B = 0.68, 95% CI = [-0.08, 1.44].

The presence of the ADA’s explanations inflated confidence in incorrect responses, indicating that these explanations generated additional confusion about basic diabetes knowledge. In contrast, our revised explanations, free of pragmatic violations, protected confidence in correct responses without inflating confidence in incorrect responses.


Discussion

In this study, we identified and analyzed pragmatic violations on a prominent health website that we argue led to inadvertent paltering, rendering its explanations misleading rather than informative. In prior work, we identified how information on the American Diabetes Association’s “diabetes myths” page misled health consumers through the infelicitous labeling of “myths.” Here, we turned to the more detailed explanations accompanying the “myths” on this page, and determined that they violated, at one point or another, each of the Gricean cooperative principles of quality, quantity, manner, and relevance. By revising these explanations to better adhere to the pragmatic maxims while preserving their content, we were able to minimize the materials’ misleading effects. These revised explanations were no longer misleading by themselves and, more importantly, were able to counter the confusion generated by the ADA’s infelicitous myths. To take a striking example, recall that one of these “myths” was that “People with diabetes are more likely to get colds and other illnesses.” Among participants who read the material as presented on the ADA’s website, containing both this myth and its pragmatically flawed explanation, only 41% correctly indicated that “People with diabetes have a compromised immune system and are more likely to have serious infections.” However, when the myths were paired with our revised explanation, 76% of the participants answered correctly. Thus, by correcting the Gricean violations in the original explanations, we were able to transform them from misleading to educational.

When people seek health information from reputable, expert sources, they expect those sources not only to convey accurate information, but also to guide and empower them to make positive health decisions. To be successful, material must be conveyed to readers in ways that honor the Gricean maxims. This is true for all of the maxims, but the maxim of quality may have special force when the speaker is an expert. This maxim sets the expectation that communications are not just truthful, but that the speaker has evidence or justification for what they say. People assume that experts draw on a large body of knowledge and evidence to back their assertions. This leads readers to place greater credence in the information they receive from experts and to feel more confident acting on that information.

Inadvertent pragmatic violations may be more likely to occur when speakers have conversational goals beyond conveying truth, such as avoiding stigma, empowering patients, fostering inclusivity, and so forth (e.g., [29,30,31]). In this light, we suspect that the ADA had the additional goals of not blaming or stigmatizing people who have diabetes and of not making the recommended lifestyle changes appear too difficult to achieve. These goals may have led the ADA to understate the role of lifestyle choices in diabetes and to underplay the control diabetic people can achieve over their own health.

We suspect that dramatic instances of inadvertent paltering like the ADA’s myths page are rare, but less egregious violations may be more common in health communications. There is evidence that the tension between the goals of conveying information and avoiding upsetting patients is widespread in the medical profession. For example, although health professionals believe that people should be informed about their prognosis, many of them choose to withhold information to avoid depressing their patients [32]. Fallowfield, Jenkins, and Beveridge [33] presented an example of a doctor attempting to prepare a patient with lung cancer for a transition from curative to palliative care. The doctor told the patient “there are signs that things are progressing so we do not think that you should have anymore chemotherapy”. The patient believed that the doctor was saying that “things,” meaning the treatment, had progressed so well that there was no need to continue. What the doctor actually meant was that “things,” meaning the cancer, were progressing so aggressively that further treatment would not be effective. By speaking euphemistically rather than straightforwardly, the doctor inadvertently paltered and misled the patient.

Striving to avoid upsetting people can come at the cost of accurately conveying information. People who visit the ADA’s website are most likely seeking advice on how to avoid becoming diabetic or how to manage their diabetes. Underplaying the degree to which they can control their health may lead them to infer that there is little they can do or that there is no strong need for them to take action. Therefore, educators and science or health communicators should be conscious of their communicative goals. In some cases, it may be better to be more open and straightforward about these additional goals, rather than to subtly work to accomplish them unnoticed (e.g., see [34]). If these subtle machinations are not deftly realized, there is a considerable danger that unintentional paltering can result, and the public can be misled.

These concerns are all the more pressing when considered in light of the overwhelming evidence demonstrating the difficulties inherent in correcting misinformation once fallacious beliefs have taken hold (for a review see [35]). Because it is difficult to undo the damage of misinformation [36,37,38,39,40], health organizations have a responsibility to “do no harm,” and to be clear and considerate in their communications. Of course, this is not to say that correcting misconceptions is impossible; for instance, recent work examining educational interventions aimed at countering vaccine skepticism has shown improvements in participants’ attitudes toward vaccines [41]. One key to addressing misconceptions may be developing interventions that acknowledge and target the wider beliefs driving misconceptions [12]. Where misconceptions exist among the public, it is the responsibility of health organizations to combat them. Websites and social media posts are now an essential means for informing the public, so health organizations should work to ensure that the information presented on these forums is as clear and accurate as possible (cf. [42]).

Paltering may be seen as one member of a family of misleading communicative practices, among them bullshit [43], pandering [9], and white lies (e.g., [44]). Bullshitters attempt to persuade others without regard for truthfulness [43]. Likewise, panderers flatter without concern for the truth [9]. Those who tell white lies inflate the positives or twist the truth to spare others’ feelings. On our analysis, paltering results from the violation of communicative principles. That same kind of analysis might shed light on these other forms of deception. As a step in this direction, Yoon et al. [44] analyze white lies by assuming speakers balance two goals: truth and the feelings of the listener. Depending on the true state of affairs and the relative weights of these goals, speakers might choose to lie entirely (“You did an excellent job!” when it was a mediocre job) or to be truthful but not maximally informative (“Not bad!” when it was a mediocre job), which is more like paltering. It could be interesting to extend this line of work by considering the ways in which violations of Gricean norms underlie white lies as they do paltering.

To conclude, prior work on paltering has examined a number of different techniques by which the truth can be used to mislead, including misplaced emphasis, omissions, and vagueness. We argued that these diverse techniques are unified as violations of Gricean communicative norms. These same kinds of violations can occur without an explicit intent to deceive, resulting in unintentional paltering. Intentional palterers abuse communicative norms to mislead their listeners with true statements. As our findings illustrate, the neglect of these norms can be just as detrimental to communication as their abuse.


References

1. Schauer F., & Zeckhauser R. (2009). Paltering. In Harrington B. (Ed.), Deception: From ancient empires to Internet dating (pp. 38–54). Stanford, CA: Stanford University Press.
2. Rogers T., Zeckhauser R., Gino F., Norton M. I., & Schweitzer M. E. (2017). Artful paltering: The risks and rewards of using truthful statements to mislead others. Journal of Personality and Social Psychology, 112(3), 456–473.
3. Clementson D., & Eveland W. P. (2016). When politicians dodge questions: An analysis of presidential press conferences and debates. Mass Communication and Society, 19(4), 411–429.
4. Rogers T., & Norton M. I. (2011). The artful dodger: Answering the wrong question the right way. Journal of Experimental Psychology: Applied, 17(2), 139–147.
5. Brown A. (2013). Understanding pharmaceutical research manipulation in the context of accounting manipulation. The Journal of Law, Medicine & Ethics, 41(3), 611–619.
6. Druz M., Wagner A. F., & Zeckhauser R. J. (2015). Tips and tells from managers: How analysts and the market read between the lines of conference calls (No. w20991). National Bureau of Economic Research.
7. Luscombe A. (2018). Deception declassified: The social organisation of cover storying in a secret intelligence operation. Sociology, 52(2), 400–415.
8. DePaulo B. M., & Kashy D. A. (1998). Everyday lies in close and casual relationships. Journal of Personality and Social Psychology, 74(1), 63–79.
9. Isaac A. M., & Bridewell W. (2014). Mindreading deception in dialog. Cognitive Systems Research, 28, 12–19.
10. Chestnut E. K., & Markman E. M. (2018). “Girls Are as Good as Boys at Math” Implies That Boys Are Probably Better: A Study of Expressions of Gender Equality. Cognitive Science, 42(7), 2229–2249.
11. Malle B. F., & Knobe J. (1997). The folk concept of intentionality. Journal of Experimental Social Psychology, 33(2), 101–121.
12. Powell D., Keil M., Brenner D., Lim L., & Markman E. M. (2018). Misleading Health Consumers Through Violations of Communicative Norms: A Case Study of Online Diabetes Education. Psychological Science, 0956797617753393.
13. Grice H. P. (1975). Logic and conversation. In Cole P. & Morgan J. L. (Eds.), Syntax and semantics: Vol. 3. Speech acts (pp. 41–58). New York, NY: Seminar.
14. Hastak M., & Mazis M. B. (2011). Deception by implication: A typology of truthful but misleading advertising and labeling claims. Journal of Public Policy & Marketing, 30(2), 157–167.
15. Harris R. J. (n.d.). Comprehension of Pragmatic Implications in Advertising, 6.
16. Stevens B. (1999). Persuasion, probity, and paltering: The Prudential crisis. Journal of Business Communication, 36(4), 319–334.
17. Thakurdesai P. A., Kole P. L., & Pareek R. P. (2004). Evaluation of the quality and contents of diabetes mellitus patient education on Internet. Patient Education and Counseling, 53(3), 309–313.
18. American Diabetes Association (2017). Diabetes myths. Retrieved from
19. American Diabetes Association (2017a). Overweight. Retrieved July 18, 2017 from
20. American Diabetes Association (2017b). Healthy eating. Retrieved July 18, 2017 from
21. Tsai A. (2016). Important vaccines for people with diabetes. Diabetes Forecast. Retrieved July 18, 2017 from
22. Neithercott T. (2012). Top tips for better foot care with diabetes. Diabetes Forecast. Retrieved July 18, 2017 from
23. American Diabetes Association (2017c). Frequently asked questions. Retrieved July 18, 2017 from
24. American Diabetes Association (2017d). Flu and pneumonia shots. Retrieved July 18, 2017 from
25. American Diabetes Association (2009). Toolkit No. 14: All about carbohydrate counting. Retrieved July 18, 2017 from
26. American Diabetes Association (2012). Toolkit No. 1: All about your risk for prediabetes, type 2 diabetes, and heart disease. Retrieved July 18, 2017
27. Carpenter B., Gelman A., Hoffman M. D., Lee D., Goodrich B., Betancourt M., et al. (2017). Stan: A probabilistic programming language. Journal of Statistical Software, 76(1).
28. Gelman A., Lee D., & Guo J. (2015). Stan: A probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40(5), 530–543.
29. Funnell M. M., Anderson R. M., Arnold M., Barr P. A., Donnelly M., Johnson P. D., et al. (1991). Empowerment: An idea whose time has come in diabetes education. The Diabetes Educator, 17, 37–41.
30. Fallowfield L. J., & Jenkins V. A. (2004). Communicating sad, bad, and difficult news in medicine. Lancet, 363(9405), 312–319.
31. Hancock K., Clayton J. M., Parker S. M., Walder S., Butow P. N., Carrick S., et al. (2007). Discrepant perceptions about end-of-life communication: A systematic review. Journal of Pain and Symptom Management, 34(2), 190–200.
32. Lorensen M., Davis A. J., Konishi E., & Bunch E. H. (2003). Ethical issues after the disclosure of a terminal illness: Danish and Norwegian hospice nurses’ reflections. Nursing Ethics, 10(2), 175–185.
33. Fallowfield L. J., Jenkins V. A., & Beveridge H. A. (2002). Truth may hurt but deceit hurts more: Communication in palliative care. Palliative Medicine, 16(4), 297–303.
34. Brady S. T., Walton G. M., Fotuhi O., Gomez E. M., Cohen G. L., & Urstein R. (2018). A scarlet letter? Institutional messages can, but need not, induce shame and stigma. Manuscript in preparation.
35. Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., & Cook J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
36. Aikin K. J., Betts K. R., Donoghue A. C. O., Rupert D. J., Lee P. K., Amoozegar J. B., et al. (2015). Correction of overstatement and omission in direct-to-consumer prescription drug advertising, 65, 596–618.
37. Aikin K. J., Southwell B. G., Paquin R. S., Rupert D. J., O’Donoghue A. C., Betts K. R., et al. (2017). Correction of misleading information in prescription drug television advertising: The roles of advertisement similarity and time delay. Research in Social and Administrative Pharmacy, 13(2), 378–388.
38. Chan M. S., Jones C. R., Hall Jamieson K., & Albarracín D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 095679761771457.
39. Margolin D. B., Hannak A., & Weber I. (2018). Political fact-checking on Twitter: When do corrections have an effect? Political Communication, 35(2), 196–219.
40. Walter N., & Murphy S. T. (2018). How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs, 85(3), 423–441.
41. Horne Z., Powell D., Hummel J. E., & Holyoak K. J. (2015). Countering antivaccination attitudes. Proceedings of the National Academy of Sciences of the United States of America, 112(33), 10321–10324.
42. Gesser-Edelsburg A. (2016). Risk Communication and Infectious Diseases in an Age of Digital Media. Routledge.
43. Frankfurt H. (2005). On Bullshit. Princeton, NJ: Princeton University Press.
44. Yoon E. J., MacDonald K., Asaba M., Gweon H., & Frank M. C. (2017). Balancing informational and social goals in active learning. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society.