
Reflecting glory or deflecting stigma? The interplay between status and social proximity in peer evaluations

  • Erik Aadland,

    Roles Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Strategy and Entrepreneurship, BI Norwegian Business School, Oslo, Norway

  • Gino Cattani ,

    Roles Conceptualization, Project administration, Writing – original draft, Writing – review & editing

    gcattani@stern.nyu.edu

    Affiliation Department of Management & Organizations, Stern School of Business—NYU, New York, New York, United States of America

  • Denise Falchetti,

    Roles Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Strategy & Innovation, Questrom School of Business—Boston University, Boston, Massachusetts, United States of America

  • Simone Ferriani

    Roles Conceptualization, Funding acquisition, Writing – original draft, Writing – review & editing

    Affiliation Department of ‘Scienze Aziendali’, University of Bologna, Bologna, Italy

Abstract

How do candidates’ status and social proximity to members of the evaluating audience interact to shape recognition in peer-based evaluative settings? In this study, we shed light on this question by adopting a mixed-method approach. We first examined field data on the conferral of awards in a peer-based evaluative contest–“The Silver Tag”–which is one of the most prestigious digital advertising awards contests in Norway. The field study revealed the existence of a negative interaction between status and social proximity on the allocation of awards. We then conducted two experiments to probe the mechanisms responsible for this finding. In the first experiment, we replicated the main pattern observed in the field study. In the second experiment, we showed that the interaction effect is contingent on the nature of the evaluative setting. When audience members’ decisions were in the public domain (i.e., the other audience members knew them), social proximity tempered the effect of status on candidates’ recognition, but it did not when decisions were private (i.e., the other audience members did not know them). We conclude by discussing several implications of our study for research on the socio-psychological processes underlying evaluative outcomes in tournament rituals.

Introduction

Extensive evidence across cultural fields as diverse as academic publishing [1], wine tasting [2], the film industry [3], advertising [4], and screenwriting agencies [5] reveals the role of status as a key driver of evaluation and choice. Prevailing explanations for the positive association between status and evaluative outcomes posit that status serves as a source of information about actors’ unobserved quality. In this vein, one’s relative standing in a social system [6] positively affects others’ expectations as well as behaviour toward the object of evaluation. Indeed, high-status actors are assumed to be more competent [7] and more frequently attended to [6]; they are usually granted more recognition than low-status actors for equivalent performance [8]. By contrast, low-status actors are more likely to be devalued or simply ignored [8, 9]. This explanation has found further support in a few recent studies–more sensitive to the role of the evaluative context–that show how the choice of high-status actors is also more easily defensible before other evaluators because it is based on what is publicly recognized as high quality [10–12].

Social networks are also widely regarded as important drivers of evaluative outcomes. A rich body of empirical research–albeit perhaps not as systematic as the scholarship on status beliefs–supports this view across different evaluative settings in both art and science. Parsons and Shils [13] were among the first to highlight that social relationships between evaluators and candidates may shape reward allocation decisions, and so compromise universalistic standards of evaluation. Blau [14, p. 265] referred to possible social intercourse between evaluators and candidates, emphasizing how “the differentiating criterion is whether the standards that govern people’s orientation to each other are dependent on or independent of the particular relationships that exist between them.” One of the first studies to find empirical support for this intuition is Wennerås and Wold’s [15] analysis of the peer-review system of the Swedish Medical Research Council. They found that postdoctoral fellowship applicants who had relationships with reviewers (e.g., they came from the same academic institution) were judged to be more competent than those who had no such affiliation but were equally productive. Subsequent studies of academic settings have confirmed the existence of a positive association between audience-candidate network proximity and favourable evaluative outcomes [16–18].

Experimental economists have offered complementary evidence in this direction. The work by Dimant [19] on the role of social proximity in magnifying the transfer of norms among peers, and the work by Charness et al. [20] on the role of a salient state of social identity in fostering favouritism towards those of stronger social kinship, are especially relevant. Similarly, scholars in organizational sociology have noted that the attribution of awards to creative professionals in fields of artistic production tends to map onto the connectivity between candidates competing for recognition and members of the evaluating audience. Findings supportive of this claim include jurors’ preferential allocation of prizes to professionals sharing their networks in the feature film industry [21, 22], and recent findings in the context of the advertising industry exposing the patterning of award allocation choices along relational lines [4].

Overall, there is significant evidence showing how status and social proximity drive audiences’ preferential allocation of attention and recognition across competing candidates. Not surprisingly, two of the most widely used truisms to characterize how cultural markets channel resources, honours, and attention to cultural producers reflect precisely the essence of these two mechanisms: “You are as good as your last credit” and “It’s Not What You Know. It’s Who You Know.”

Although we know a great deal about how status and social proximity contribute to producing and reproducing comparative advantages in social evaluations, we know much less about how status and social proximity combine to produce evaluative outcomes. Social ties could either dampen or amplify any positive effect of status on recognition. On the one hand, given the universal nature of status-seeking [23], any positive effect the candidates’ status may have on their recognition could be even stronger when high-status candidates have ties to members of the evaluating audience. If recognition flows through the network [18], audience members will extend greater deference to high-status candidates who are connected to them because of the status boost that they themselves can indirectly enjoy via their personal affiliation to those actors. This tendency to strive for prestigious affiliations has been described as the “basking-in-reflected-glory effect” [24], or “basking in the reflection of a neighbor’s glory” [25], and it is based on the way individuals extend their perceptions from one subject to another. This logic implies that, all else being equal, the marginal effect of candidates’ status on their recognition will increase monotonically with their social proximity to audience members. On the other hand, the existence of social ties between audience members and candidates could reduce the saliency of status as a signal of the quality of candidates and their work. Status signals are most valuable when there are no or limited alternative ways of evaluating a firm’s actual quality [6]. So, when a candidate conveys information both through her status and through her direct connection to the audience, one of those information devices may be redundant, as market participants are likely to satisfice when collecting and processing information [26]. Consistent with this idea, prior evidence at the organizational level of analysis has established that status and the relationships that firms maintain with their customers are substitutive drivers of market entry decisions. Both types of social resources facilitate entry into a new market, but the importance of status diminishes in the presence of market ties, which “represent a more direct mechanism than […] status to reduce market uncertainty and increase exchange value” [27, p. 467]. Hence, assuming that status serves as a quality signal, we should expect social proximity to reduce the marginal effect of status on recognition.

Recent findings by Aadland, Cattani, and Ferriani [4] imply similar expectations but rely on different interpretations. While providing substantial evidence supporting the general stratifying effect of audience-candidate ties on candidates’ recognition, the authors also point to the plausibility of negative returns of social proximity to recognition–particularly at high levels of audience-candidate social proximity. The authors attribute this possibility to what they call “intellectual distance,” a term used to indicate audience members’ attempt to project their “interest in disinterestedness” [28, p. 112, see also pp. 87–88]. The general intuition is that audience-candidate proximity in the social network might give rise to morally problematic interpretations of audience members’ true intentions and yield reputational concerns that inhibit favourable evaluations of socially proximate candidates. Following this logic, it is then plausible to expect audience members’ reliance on status cues to decline as social proximity increases. If intellectual distance kicks in, in fact, the glory enjoyed by the target of deference will deflect stigma onto the evaluator. Thus, audience members might feel that, at higher levels of social proximity, elevating the status of candidates could more easily backfire. These arguments also suggest a negative interaction effect between status and social ties, but rather than being the result of substitutive dynamics, the effect here derives from self-motivated concerns. In summary, how the status of candidates and their social proximity to audience members interact in evaluative contexts is unclear due to the coexistence of alternative perspectives that point to different explanatory mechanisms. Which of these perspectives best characterizes the interaction effect between status and social proximity on evaluative outcomes?

Our objective in this paper is to shed light on this theoretical question. To do so, we collected data on the conferral of prestigious awards to competing candidates in a peer-based evaluative contest–“The Silver Tag”–which is one of the most prestigious digital advertising awards contests in Norway, and examined how candidates’ status and audience-candidate proximity in the underlying social network, and their interaction, explain award allocation decisions. In this field study, we find a negative interaction effect between candidates’ status and audience-candidate proximity on the conferral of awards. We offer external validity to the field study and evidence on the mechanism responsible for its results by supplementing it with two experiments. The first experiment replicates the main pattern of the effects observed in the field study, which indicates a negative interaction between status and social proximity. The second experiment reveals how the interaction effect is contingent on the nature of the evaluative setting. In doing so, we seek to distinguish–theoretically and empirically–processes associated with stigmatic perceptions from alternative explanations that imply the same empirical patterns but rely on different assumptions about the interplay between status and social ties. We show that when the evaluation is public–and so potential violations of the meritocratic ideal in social evaluation are easier to detect and stigmatize, if not punish–social proximity mitigates the effect of status on candidates’ recognition, but it does not when those decisions are private (hence audience members do not have to disclose and justify their decisions before the other members). We conclude by discussing the implications of this study for research on the socio-cognitive processes underlying the evaluation of peers in ostensibly meritocratic settings and by identifying avenues for future research.

Overview of studies

To examine the interplay between status and social ties in peer-based evaluative settings, and the socio-cognitive drivers of recognition, we used a mixed-method approach. Specifically, we conducted one field study (Study 1) and two experiments (Study 2 and Study 3) to ensure the internal and external validity of our studies. We conducted the field study in a setting where status and social ties shape peer audience evaluations, but neither the status of the candidates nor their social ties to the members of the evaluating audience were manipulated. This feature of the field study allows us to establish whether status and social ties are additive (each reinforces the effect of the other) or substitutive (the effect of one is reduced in the presence of the other).

The two experiments further examine the interaction effect and the conditions under which this effect is more or less likely to be observed. For the experimental studies, approval by the ethics committee was not required because the data were analyzed anonymously and we did not collect any personally identifiable information. Each participant also filled out an online consent form and voluntarily agreed to participate. Study 2 explores the joint effect of status and social ties on the probability of rewarding cultural works by manipulating the status of the candidates and the social ties between the candidates and the members of the evaluating audience. Study 2 replicates the findings of the field study, though with stricter quality controls. (One crucial strength of the experimental approach is the possibility of holding project quality constant. The true quality of cultural producers’ work is unobservable and difficult to infer unequivocally even after consumption. The challenge is to adopt an approach that allows the researcher to ascertain the presence of evaluative drivers independent of the quality of the producers’ work.) Study 3 probes the interaction effect by holding the presence of social ties between candidates and audience members constant, and manipulating the candidates’ status and the transparency of the evaluation process. Study 3 sheds light on the circumstances under which the negative interaction is more or less likely to operate, offering valuable insights into the nature of the mechanism underlying what we observed in the field data.

Study 1

We examined the interplay between status and social ties in peer audience evaluations in the Norwegian advertising industry, where it is customary to establish excellence through awards contests [29, 30]. (As the advertising industry is project-based, it is not uncommon for jury members to evaluate peers with whom they collaborated in the past.)

Interviews with key informants

We interviewed a panel of field insiders consisting of élite advertising professionals, advertising professionals struggling to make their mark, advertising awards contest jurors, and representatives from industry associations. (We gained access by first presenting our project to the main advertising organizations in Norway: Kreativt Forum and INMA. After securing their support, we approached a sample of agencies that varied on key dimensions of interest for our study and asked if they were willing to participate. All agencies agreed to participate and gave us access to key advertising professionals.) Each interview lasted between twenty minutes and two hours. The topics covered during the interviews included collaborative practices, the meaning and relevance of awards, advertising evaluation criteria, perceptions of distance or proximity in the social space, personal anecdotal evidence of jury decisions, and deliberation processes. We interviewed some of the interviewees several times to further probe their field experience. In total, we conducted 19 interviews and followed up with interviewees by email to validate our interpretation of the data. None of the organizations, agencies, or professionals received any compensation. Although these interviews did not form a representative sample of industry participants’ opinions, a considerable range of views was expressed and noteworthy themes emerged that enhanced our understanding of the award contests’ evaluative dynamics. Table 1 reports descriptive data on the sampled agencies and respondents. Our industry informants suggested that professionals’ social standing in the professional status hierarchy is a signal of their (uncertain) quality. This status information, in turn, influences jury evaluations, as the following quote by a copywriter and former juror in an advertising agency illustrates:

“It’s a bit like that [well known high status creative teams] have a tendency to score incredibly well on work that is really only average. And that is because you are positively biased, because they make a lot of nice work. And you are a bit positively biased to begin with. You really want the work they do to be of high quality. And, sure, if you come in [to an awards contest], if you send in something from [an out of town agency] that is not highly regarded in the industry, then you will struggle a lot.”

Table 1. Descriptive data on agencies sampled for interviews.

https://doi.org/10.1371/journal.pone.0238651.t001

Our informants also observed that jury deliberations are often enveloped in “interpersonal patterns of value commitments” that channel attention, energy, and information, while subtly shaping attributions of ability [4, p. 893]. The following quote [4, p. 893] from an account manager is telling:

“If two projects are equally good, then the project where project members and jurors know each other will win […] these people share the same opinion about what is “important” and “not important,” as well as what is “right” and not “right.” They [the projects by candidates previously tied to the jurors] might, therefore, score higher on the criteria valued by the jurors who ‘administer the truth’ about what is good and not so good.”

Our informants recognized the influence of professionals’ status and ties to jury members in shaping juries’ evaluations of their work. However, they were also deeply aware that the identity of the jury members is public information available to colleagues in the industry and that the professional relationships between the jury members and the candidate producers are relatively transparent to the other members of the industry, in particular the other jury members taking part in award decisions. In this type of socio-relational context, the social ties of a jury member with a candidate can sometimes translate into more of a liability than an advantage. Our informants stressed that susceptibility to claims of partiality in evaluations also tends to influence the results of the jury’s deliberations. An experienced jury member suggested how voicing a genuine preference for a particular project could potentially become a source of stigma due to prior collaborations with some project-team members [4, p. 894]:

“It is a big problem if they [i.e., the members of the industry] come to believe you have a vested interest. If you favour that project […] you may end up in big trouble. I usually keep quiet or alternatively try to mention what is good about other projects in such situations.”

In summary, our interviews seem to reveal a fundamental evaluative ambivalence caused by jurors’ strong susceptibility to claims against their authenticity. Avoiding conflicts of interest can be a matter of moral conviction or adherence to epistemic values. The composition of the jury is in the public domain; likewise, the existence of professional relationships between jury members and candidate producers is relatively visible to other members of the industry, as professionals have a good sense of who has worked with whom. Lurking suspicions that deliberations follow these relational lines, therefore, can easily emerge and call jury members’ moral character into question, even when they sincerely stand behind those deliberations.

Secondary data

We investigate the interaction effect between status and social ties in peer audience evaluations using the novel “The Silver Tag” dataset first described in Aadland [31]. The dataset includes all projects entered into “The Silver Tag”–the monthly Norwegian digital advertising awards contest–from May 2003 to April 2010. The data comprise a total of 1,734 distinct individuals, 350 distinct organizations, and 902 projects corresponding to 11 competitions per year and 75 contest months over the study period. The Norwegian interactive marketing interest organization responsible for the contest, INMA, combines the contest months June and July each year into one contest generation. Also, INMA combined March/April 2004 and August/September 2004 into two distinct contest generations. The data contain all winners, recipients of honorable mentions, and losers. The data also track all jury members serving on juries in “The Silver Tag” awards contest from May 2003 to March 2010. Each jury served from May to April of the following year during the years 2003–2006 and from April to March during the years 2006–2010. In total, the dataset contains 7 juries, whose size over the study period varied from 4 (for the first jury) to 11 (for the last jury) members.

Dependent variable

Following Aadland et al. [4], the dependent variable measures the bestowal of an accolade (honourable mention or award) to projects competing in a given contest month. We coded the dependent variable 0 if a project did not receive any accolade; 1 if a project received an honourable mention; 2 if a project reached 1st place (i.e., won the award). The dependent variable is ordered in terms of levels, or intensity, of peer recognition.

Independent variables

Social ties.

We captured the effect of social proximity between audience members and candidates on the likelihood of receiving an accolade by looking at the impact of direct ties. We observed direct ties when the project and jury members had worked on the same project(s) in the past. We calculated this variable by first generating bipartite project affiliation network matrices based on the monthly digital awards contest “The Silver Tag” using Ucinet, version 6 for Windows [32]. We created the adjacency matrices with a 24-month long moving window that we updated monthly. Because our unit of analysis is the project, for each project we created the variable social ties by counting only the number of jurors with direct ties to project members [4]. We also looked at the impact of having mediated (i.e., indirect) ties to jury members on the likelihood of being rewarded by calculating the median geodesic distance between the project and jury members. Following Aadland et al. [33, p. 140], we first calculated the median geodesic distance between each project member and the jury members. Consistently with the six degrees of separation theory [34], we then grouped individual producers with a degree of separation from jurors equal to or greater than 6, and assigned them the value 6. To facilitate the interpretation of the results, we measured the variable in terms of nearness between jury members and producers by calculating the reciprocal of the median geodesic distance between each project member and the jury members. As our unit of analysis is the project, we created the social proximity variable by taking the median of each project member’s median distance from jury members.
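To make these network measures concrete, the sketch below shows how direct ties and the reciprocal median geodesic distance could be computed from a small, hypothetical co-work network in Python (networkx). The original measures were built in UCINET; all names, the toy data, and the handling of unreachable pairs here are illustrative assumptions.

    import statistics
    import networkx as nx

    # Hypothetical 24-month affiliation data: past project -> people credited on it.
    past_projects = {
        "p1": {"anna", "bjorn", "carl"},
        "p2": {"carl", "dina"},
        "p3": {"dina", "erik"},
    }

    # One-mode projection: two professionals are tied if they shared a past project.
    G = nx.Graph()
    for members in past_projects.values():
        members = sorted(members)
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                G.add_edge(a, b)

    focal_project = {"anna", "erik"}   # candidates under evaluation
    jury = {"carl", "dina"}            # current jury members

    # Social ties: number of jurors with a direct (co-work) tie to any project member.
    social_ties = sum(
        any(G.has_node(j) and G.has_node(m) and G.has_edge(j, m) for m in focal_project)
        for j in jury
    )

    # Social proximity: reciprocal of the median geodesic distance, with distances
    # capped at 6; unreachable pairs are (by assumption) also set to 6.
    def median_distance_to_jury(member):
        dists = []
        for j in jury:
            try:
                d = nx.shortest_path_length(G, member, j)
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                d = 6
            dists.append(min(d, 6))
        return statistics.median(dists)

    project_median = statistics.median(median_distance_to_jury(m) for m in focal_project)
    social_proximity = 1 / project_median
    print(social_ties, social_proximity)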

Project status.

We relied on network centrality to measure status in line with previous research (for a review see [35]). We created the project status variable using Bonacich beta-centrality [36], a measure that is commonly used to derive status ordering from relational data [37–40]. The beta-centrality measure captures a professional’s prominence within the peer network as a function of both the number and the centrality of professional peers to whom s/he is connected. The status of these peers is, in turn, a function of the number and centrality of the professional peers connected to them, and so on. Hence, the beta-centrality scores determine each professional’s position within the global network’s status hierarchy. When beta is set to zero, network centrality is akin to degree centrality, focusing only on the local structure. The larger the value of beta, the more the centrality measure reflects the global structure. In our analysis, we set beta to the reciprocal of the largest eigenvalue. However, even when considering a range of values for beta, we found no substantial differences in the status scores. We used UCINET version 6 [32] to calculate our status measure. Our project status measure counts the number of professionals in the project with a Bonacich beta-centrality above the median in the global “The Silver Tag” network over the total number of individuals working on the same project in a particular contest month. We calculated our centrality scores based on the same 24-month long moving affiliation network window we used for the social ties measures. We also chose a more conservative cutoff to define high status–i.e., values greater than .85 (for a similar approach see [41])–which yielded very similar results.
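As an illustration of the underlying calculation (not the UCINET routine itself), Bonacich beta-centrality and the resulting project status score can be sketched in a few lines of numpy. The adjacency matrix and member indices below are hypothetical, and beta is set just under the reciprocal of the largest eigenvalue so that the matrix inverse exists.

    import numpy as np

    # Hypothetical symmetric co-work adjacency matrix for five professionals.
    A = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ], dtype=float)

    # Bonacich beta-centrality: c = alpha * (I - beta*A)^(-1) A 1.
    lam_max = np.max(np.linalg.eigvalsh(A))
    beta = 0.95 / lam_max   # just below 1/lambda_max so (I - beta*A) stays invertible
    alpha = 1.0
    n = A.shape[0]
    centrality = alpha * np.linalg.inv(np.eye(n) - beta * A) @ A @ np.ones(n)

    # Project status: share of project members whose centrality exceeds the
    # network-wide median (member indices are illustrative).
    project_members = [0, 3, 4]
    project_status = np.mean(centrality[project_members] > np.median(centrality))
    print(np.round(centrality, 2), project_status)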

Control variables

To rule out alternative explanations for the hypothesized relationships, we included several control variables in our models.

Project sophistication.

In “The Silver Tag,” jury members typically emphasize whether the advertising projects competing in a given contest month use new technology. The creative use of technology is perceived as a sign of technical sophistication and innovativeness that, in turn, represents a sign of higher quality digital advertising projects. Accordingly, the variable project sophistication differentiates projects based on the type of technologies that they employed. Following Aadland et al. [33, p. 141], “the variable counts the number of agencies specializing in 3D-animation, film production, radio production, or back-end streaming involved in a given project.” While it does not directly capture the use of new technologies, this variable identifies projects in which those technologies could in principle have been (and most likely were) employed.

Project size.

We controlled for the total number of individuals on each digital advertising project because the number of project participants serves as a proxy for larger project budgets and more available resources to create projects of higher quality.

Conflict of interest.

Jury members are not allowed to partake in the evaluation of a project whenever they have a conflict of interest in that project. For instance, when project and jury members work for the same firm, or when a jury member is involved in a project under evaluation, the jury member in question has to exit the jury room while that project is evaluated. Accordingly, we generated an indicator variable that is equal to 1 if one or more project members had a colleague in the jury or a juror was a member of the project, and 0 otherwise [4, 33].

Prior positive co-experience.

Some jurors may have collaborated with candidates and won with them on projects in the past. If prior candidate-juror interactions have resulted in a positive outcome, they are likely to affect the evaluators’ disposition toward the work of their past collaborators when the jurors in question cast their vote over the competing candidates [4]. Previous social network research has shown how social ties can be a source of social benefits (e.g., more favorable evaluations) or social liabilities (e.g., less favorable evaluations) depending on whether relationships between evaluators and candidates are positive or negative [42]. We then identified “The Silver Tag” projects in which a current candidate and a juror collaborated and won the award during the prior 24 months. We created the indicator variable prior positive co-experience, which is equal to 1 if there were one or more such instances for a given project, and 0 if there were no such instances.

Project median experience.

Project members’ past experience with digital advertising projects might account for their differential ability to contribute to the project and understand what exactly jury members are looking for in a project. We measured project members’ past experience by tallying the number of projects before the focal project each producer had submitted to “The Silver Tag” contest. For each project, we then calculated the median experience of all producers involved [4, 33].

Competitive intensity.

The more projects compete for recognition in a given contest month, the more intense the competition and the lower the likelihood that a given project will win [4]. We controlled for competitive intensity by counting the number of projects competing in each contest month.

Reciprocity.

Reciprocity, the giving of gifts to another in return for gifts received, is also a distance-reducing mechanism between any two parties involved in a social exchange [43]. As Sherry [44, p. 158] observed, “The giving of gifts can be used to shape and reflect social integration (i.e., membership in a group) or social distance (i.e., relative intimacy of relationships).” Accordingly, we created the reciprocity variable that “captures the extent to which jury members reward projects whose members were jurors in the past and who–in that role–had rewarded one or more of the current jury members” [4, p. 897]. For each project, the measure counts the number of current jurors who won or received an honorable mention by project members serving as jurors over the previous two years and whose work happened to be under evaluation during the focal contest month.

Jury status.

We measured the status of the jury by counting the number of jurors in the jury with a Bonacich beta-centrality above the median in the global “The Silver Tag” network over the total number of jury members in a particular contest month.

Jury median experience.

We measured jurors’ past experience by counting the number of projects each juror had submitted to “The Silver Tag” contest before the focal month. For each jury and contest month, we then calculated the median experience of all jurors involved.

Method

We modelled the probability of each project receiving more favourable evaluations by the jury members in a given contest month by using generalized linear models [45, 46]. We estimated our models with the glm command in Stata 14, specifying the binomial family and setting the binomial denominator equal to the number of jurors evaluating the competing projects in each contest month [33]. We also specified the logit link and estimated our models with maximum likelihood. We clustered the standard errors on contest month to obtain robust standard errors. For each contest month, we modelled the probability of jury members assigning an outcome for each project of either no placement (0 points), honourable mention (1 point), or winning the award (2 points).
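For readers who want to replicate this setup outside Stata, a rough Python equivalent of the grouped-binomial specification is sketched below using statsmodels. The file name, column names, and covariate list are hypothetical, and the denominator convention simply mirrors the description above; this is a sketch, not the authors’ estimation code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical project-by-contest-month data; the actual dataset is not public.
    df = pd.read_csv("silver_tag_projects.csv")
    df["status_x_ties"] = df["project_status"] * df["social_ties"]

    # Grouped binomial outcome: points earned (0 = no placement, 1 = honourable
    # mention, 2 = win) out of a juror-based denominator, as described in the text.
    endog = np.column_stack([df["points"], df["n_jurors"] - df["points"]])

    exog = sm.add_constant(df[[
        "project_status", "social_ties", "status_x_ties",
        "project_size", "project_sophistication", "competitive_intensity",
    ]])

    model = sm.GLM(endog, exog, family=sm.families.Binomial())  # logit link is the default
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["contest_month"]})
    print(result.summary())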

Results and discussion

We report descriptive statistics and correlation values for our measures in Tables 2 and 3, respectively. Since the condition number [47] for the matrix of independent variables is 10.04–well below the suggested threshold of 30–multicollinearity is not likely to be an issue in our models.
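The diagnostic itself is easy to reproduce; a minimal sketch (not the authors’ code, with a hypothetical design matrix) computes the condition number from a column-scaled design matrix:

    import numpy as np

    def condition_number(X):
        # Scale columns to unit length, then take the ratio of the largest to the
        # smallest singular value (values above ~30 usually signal multicollinearity).
        Xs = X / np.linalg.norm(X, axis=0)
        s = np.linalg.svd(Xs, compute_uv=False)
        return s.max() / s.min()

    # Hypothetical design matrix: an intercept plus three predictors.
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])
    print(condition_number(X))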

We began by estimating a model with robust standard errors in which the only predictor is project status. The model stratifies by contest month, so each stratum corresponds to a choice set for the jury in a particular month. In Model 1 of Table 4, the coefficient for project status is 1.062 (p<.01). We then estimated a model with social ties only. In Model 2, the coefficient for social ties is .322 (p<.01). The pattern and significance of the two predictors remain stable when both variables are included together (Model 3). We then proceeded to estimate the interaction between project status and social ties and the main effects for the interaction term components. In Model 4, the coefficient for the project status x social ties interaction is -.420 (p<.01), while the coefficient for status is 1.189 (p<.01) and the coefficient for social ties is .503 (p<.01). The negative joint effect of status and ties suggests that jury members are less inclined to reward high-status candidates who are socially close to them.

Table 4. Generalized linear models (clustered on contest/month).

https://doi.org/10.1371/journal.pone.0238651.t004

The next model includes our control variables (Model 5). While project size, project sophistication, competitive intensity, and reciprocity are significant and the signs of their coefficients are in the expected direction, the other controls are not statistically significant. When all these variables are controlled for (Model 6), the coefficients for project status and social ties are positive and significant. Model 7 presents the results of the full model including the controls, the interaction components, and the interaction effect. The coefficient for the interaction term is -.471 (p<.01), corresponding to an odds ratio of .624 and a 37.6% decrease in the odds of gaining recognition. The main effect coefficient for project status is 1.025 (p<.01), corresponding to an odds ratio of 2.787, and the coefficient for the main effect of social ties is .485 (p<.01), which corresponds to an odds ratio of 1.623. We also calculated the marginal effect of social ties for representative values of project status to further explore their interplay. Fig 1 plots this marginal effect. The plot reveals a positive marginal effect of social ties that decreases for higher levels of project status and eventually turns insignificant for very high levels of project status. Similarly, Fig 2 plots the average marginal effect of project status at representative values of social ties. As the number of jurors with direct ties to project members increases (i.e., the value of the variable gets closer to 5), the marginal effect of project status on receiving an honorable mention or winning (i.e., outcomes 1 and 2) decreases, suggesting that direct ties to jury members become increasingly important in shaping their reward allocation decisions. For values of social ties equal to or greater than 2, the marginal effect of project status is not significant.
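The translation from logit coefficients to the reported odds ratios is simple exponentiation; using the rounded coefficients above:

    import math

    for name, b in [("status x ties", -0.471), ("project status", 1.025), ("social ties", 0.485)]:
        odds_ratio = math.exp(b)
        print(f"{name}: OR = {odds_ratio:.3f} ({(odds_ratio - 1) * 100:+.1f}% change in the odds)")
    # exp(-0.471) ~= 0.624, i.e. roughly a 37.6% decrease in the odds of recognition.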

We also calculated the adjusted predictions for the number of social ties at representative values of project status, holding the other variables constant at their means. Fig 3 plots the adjusted predicted probabilities. When social ties = 5 and project status = 0, the adjusted predictive margin is 1.639 (p<.01). Conversely, when social ties = 5 and project status = 1, the adjusted predictive margin is .493 (p<.01). The adjusted predicted probabilities suggest that the likelihood of reward is high for projects with higher levels of social ties and low levels of project status. The likelihood of rewarding projects with higher levels of social ties decreases when project status increases. By contrast, when social ties = 0 and project status = 0, the adjusted predictive margin is .171 (p<.01). When social ties = 0 and project status = 1, the adjusted predictive margin is .462 (p<.01). The adjusted predicted probabilities suggest that the likelihood of being rewarded is low for projects with lower levels of social ties and low levels of project status, but that the likelihood increases slightly when project status increases.

Fig 3. Adjusted predictions for the number of social ties at representative values of project status.

https://doi.org/10.1371/journal.pone.0238651.g003

Overall, these results identify an important boundary condition that may alter the saliency of status cues. While social ties appear to reduce the need to rely on status-based evaluation, as encoded in a publicly observable status hierarchy, they become less salient in driving recognition as the project members’ status increases. These results, in other words, suggest that status in the project and social ties are not additive, pointing instead to a substitution effect between them.

Robustness checks.

We conducted additional analyses to gauge the validity of our findings. In particular, we looked at the impact of having mediated (i.e., indirect) ties to jury members on the likelihood of being rewarded. We re-estimated the full model (Model 8) by interacting the alternative measure of social proximity with the project status variable. The pattern and significance levels in the model remain stable: the coefficient for the interaction term is -10.245 (p<.01), while the main effect coefficient for project status is 3.115 (p<.01) and the main effect coefficient for social proximity is 9.408 (p<.01). These results also confirm the existence of a substitution effect between status and social (direct and indirect) ties. We also re-estimated the full model (Model 8) after orthogonalizing all explanatory variables and the results were the same.

To provide an alternative control for the effect of project quality, we estimated the full model 7 with a latent project quality variable. The construction of the latent project quality variable is described in detail in [4]. The coefficient for project quality is positive, 1.336, and significant (p<.01), while the overall pattern and significance levels remain stable. The coefficient for the interaction term is -.423 (p<.01), the main effect coefficient for project status is .902 (p<.01), and the main effect coefficient for social ties is .449 (p<.01).

We also collapsed the levels of the dependent variable–with non-wins = 0 and recognition (honorable mention or win) = 1–and re-estimated the full model 7 with fixed effects conditional logistic regression. The variables competitive intensity, median jury experience, and jury status are dropped from the model due to lack of within contest month variance, but the pattern and significance levels for the remaining variables in this model also remain stable. The interaction effect persists: the coefficient for the interaction term is -.546 (p<.05), while the main effect coefficient for project status is 1.081 (p<.01), and the main effect coefficient for social ties is .632 (p<.01). Finally, the results were not affected when we clustered the standard errors on firm rather than contest/month. Although the results for the last analysis are not reported here, they are available from the authors upon request.

Study 2

During our observation window, the advertising field insiders we interviewed emphasized how projects of high quality were likely to exhibit certain measurable attributes separate from the un-measurable idiosyncratic aspects of the creative idea underlying each project. Accordingly, in the field study, we controlled for some of these project-level attributes. Yet, other unobserved characteristics not captured in our analysis might affect jury members’ perception of project quality, thereby affecting the chances of a project being rewarded. In Study 2, we tried to alleviate this concern by replicating our effects in an online experimental study. By asking participants to evaluate the same advertising project–thus keeping its quality constant and varying only descriptions of candidates’ status and their social ties to evaluators–Study 2 helps rule out quality differences among projects as an explanation for our results. We replicated the field study by priming all participants to think that their evaluations were in the public domain and telling them that the jury would select the winners collectively. We developed vignettes to describe an award contest–i.e., a fictional Digital advertising contest–in which we asked participants to serve as jury members and bestow an award on a commercial. In the vignettes, we employed different cues to manipulate the status level (status vs. no-status) of the commercials’ creators, and the presence of social ties (direct ties vs. no-direct ties) between them and the experiment participants (i.e., jury members). We used award propensity as the dependent variable.

Method

Participants.

Six hundred and fourteen participants were recruited online using Amazon’s Mechanical Turk. They received $1.00 for completing the study. Participation was restricted to US residents with a 95% or greater approval rating on MTurk. To ensure that participants read and completed the questionnaire carefully, we applied two attention checks and excluded from the final analysis participants who failed them. To check whether participants really watched the commercial, we asked them “What is the commercial about?” after they watched the video. Specifically, they had five options as possible answers: Financial Service (the correct answer), Nutrition Service, Medical Service, Recycling Service or Health Care Service. The second check was an instructional manipulation check [IMC, 48] to ensure participants read the text: specifically, we instructed participants to leave blank the following question: “Do you think commercials affect your purchasing decisions? Please justify your answer with an example.” Since we required participants to watch a commercial that lasted 55 seconds, we also removed participants who did not watch the video or spent too much time watching it. We recorded the time each participant spent on the page with the commercial and computed percentiles for the time variable. In the analysis, we kept the participants whose viewing time fell between the 5th and the 95th percentile–which corresponded to 50 and 122 seconds, respectively. All these procedures are strongly recommended to ensure the pool of subjects is of high quality and to remove inattentive responses when online tools such as Mechanical Turk are used [49–53]. The final sample consisted of 518 participants (52.5% female, Mage = 36.53 years, 75.3% Caucasian).
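In practice, this kind of screening reduces to a few filtering steps; the sketch below (hypothetical column names, not the authors’ script) shows one way to apply the attention checks and the 5th–95th percentile viewing-time window:

    import pandas as pd

    df = pd.read_csv("mturk_responses.csv")  # hypothetical export of the raw responses

    # Keep respondents who answered the video check correctly and left the IMC blank.
    df = df[(df["video_check_correct"] == 1) & (df["imc_left_blank"] == 1)]

    # Keep viewing times inside the 5th-95th percentile window (about 50-122 s here).
    lo, hi = df["video_seconds"].quantile([0.05, 0.95])
    df = df[df["video_seconds"].between(lo, hi)]
    print(len(df), "participants retained")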

Material and procedure.

We randomly assigned participants to one of the four conditions in a 2 (status: status vs. no-status) x 2 (social ties: direct ties vs. no-direct ties) between-subjects experiment. Participants first read a vignette that informed them about a digital advertising competition where they had to serve as jury members. Then, they were asked to assign an award to a commercial after evaluating its aesthetic beauty and animation features. We chose these two evaluative criteria because they represent the qualities evaluated by the jurors in our field study. In order to replicate the evaluative process of the field study, we also informed the participants that “the jury selects the winner collectively thereby disclosing the vote cast by each jury member.” The purpose of this clarification is to induce the participants to think that they have to justify their evaluation before the other jury members. We used this vignette to describe the evaluative setting:

Advertising digital competition

“In your community, there are many initiatives, including an annual Competition in Digital Advertising. Everyone in the community can participate in the competition by submitting a commercial. Each commercial is judged and has the opportunity to win an award. Since you participated in the competition in the past, this year the organizers of the competition have asked you to become a jury member. As a jury member, you have to assign an award to a commercial after evaluating its aesthetic beauty and animation features.”

After reading about the evaluative setting, participants received more information concerning the commercial’s creators (authors in the vignettes). Specifically, we described the creators of the commercial in terms of their status and their social ties to the experimental participants. We designed the manipulation of status by varying the creators’ prestige and expertise. This manipulation was developed in line with the observation that expertise assessment is essentially “a status-organizing process” [54, p. 561] because individuals who have higher status are seen as more competent, whereas those who are of lower status are seen as less competent [55, p. 216]. In sum, in the status condition, the creators of the commercials were described as well-known experts in advertising, whereas in the no-status condition they were described as non-experts. The social ties manipulation was designed as the presence or absence of a direct tie to ensure consistency with the field study. Based on our manipulation, we informed the participants that they knew the commercial’s creators and had collaborated with them in the past (i.e., direct ties condition), or that they did not know any of the commercial’s creators and had never collaborated with them in the past (i.e., no-direct ties condition). Participants in the status and social ties condition read the description below (if assigned to the no-status and no-direct ties conditions, participants read the alternative wording shown in parentheses). Our scenario-based manipulations can be seen as a form of contextual priming, in which primed knowledge works as an anchor, that is, as a standard of comparison for (re)evaluating the target, possibly resulting in contrasting the target away from the prime [56]. In following this approach, we relied on prior experimental studies using descriptive texts to prime status [e.g., 57, 58] or social ties [e.g., 59–61].

“In addition to the video, the organizers provide you with some information about the authors of the commercial. Looking at this information, you realized that all the authors of the commercial are well-known experts (non-experts) in advertising, and that you know some (don’t know any) of them because you collaborated with them (never collaborated with them) on commercials in the past.”

After reading the vignettes, participants in all four conditions watched and evaluated the same commercial on a new financial service. We selected this commercial from an actual Internet advertising contest where leading industry experts serve as judges in assigning various awards to commercials. The commercial chosen for the experiment was recognized as the Best Computer: Software Online Video. Link to the competition site: http://www.iacaward.org/iac/medium/Online-Video/best-online-video.html#. Link to the commercial site: https://www.youtube.com/watch?v=JHpVhEjufyA.

Award propensity.

Our dependent variable measures whether participants are willing to assign an award to the commercial based on a 7-point scale (1 = “Definitely no”, 7 = “Definitely yes”; the question was the following: “Would you assign an award to the commercial?”).

Manipulation checks.

We included manipulation checks for both status and social ties. For the status manipulation check, we asked participants to answer the following question: “How much prestige do you think the authors have in advertising?” They rated the authors’ prestige on a 7-point scale (1 = very low prestige, 7 = very high prestige). For the social ties manipulation, we asked participants the following question: “How familiar do you feel with the authors?” Participants reported their answer on a 7-point scale (1 = not at all familiar, 7 = extremely familiar).

Results and discussion

Pre-analysis.

We first checked for the presence of outliers on our dependent variable (award propensity) and identified nine outliers based on the Z-score threshold of 2.5 SD [53, 62]. We removed these subjects from all subsequent analyses. We removed outliers at 2.5 SD in our experiments to increase power by reducing the error variance. This is also the reason why we removed inattentive participants by using the two attention checks and the commercial watching time. As Meyvis and van Osselaer [53, p. 1161] argue, the removal of participants “is often both legitimate and preferable…as this may produce both a more powerful and a more accurate test of the hypothesis.”
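The outlier rule amounts to a standard Z-score filter; a minimal sketch (with a hypothetical file and column name) is:

    import numpy as np
    import pandas as pd

    df = pd.read_csv("study2_responses.csv")  # hypothetical file

    z = (df["award_propensity"] - df["award_propensity"].mean()) / df["award_propensity"].std()
    df_clean = df[np.abs(z) <= 2.5]           # drop observations more than 2.5 SD from the mean
    print(len(df) - len(df_clean), "outliers removed")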

Manipulation checks.

First, we assessed whether the participants perceived the status manipulation by running a 2 (status: status vs. no-status) x 2 (social ties: direct ties vs. no-direct ties) between-subjects ANOVA on the rating of the creators’ prestige. The analysis showed a significant main effect for status (F (1, 505) = 63.36, p<.001): participants in the status condition rated the commercial’s creators as more ‘prestigious’ than participants in the no-status condition (Mstatus = 4.68, SDstatus = 1.22; Mno status = 3.77, SDno status = 1.36). No other significant effects were observed in the results. Similarly, to test the social tie manipulation, we ran a 2 (status: status vs. no-status) x 2 (social ties: direct ties vs. no-direct ties) between-subjects ANOVA on the rating of the creators’ familiarity. The analysis showed a significant main effect for direct tie (F (1, 505) = 29.74, p<.001): participants in the direct ties condition perceived the commercial’s creators as more ‘familiar’ than participants in the no-direct ties condition (Mdirect ties = 2.84, SDdirect ties = 1.57; Mno-direct ties = 2.12, SDno-direct ties = 1.38). No other significant effects were observed in the results. Thus, we concluded that the manipulations of our two independent variables worked as expected.

Award propensity.

We ran a 2 (status: status vs. no-status) x 2 (social ties: direct ties vs. no-direct ties) between-subjects ANOVA on award propensity. Consistent with our field study, the results showed a significant two-way interaction (F (1,505) = 11.56, p = .001). The main effects of status and social ties were not significant. In support of this finding, simple effects tests revealed that participants with direct ties to the commercial’s creators were less willing to assign the commercial an award when creators with status (M = 4.37, SD = 1.15) rather than no-status (M = 4.81, SD = 1.13; F (1,505) = 7.86, p<.01) were involved. In contrast, participants with no direct ties to the commercial’s creators were more willing to assign the commercial an award if status (M = 4.61, SD = 1.14) rather than no-status (M = 4.31, SD = 1.44; F (1,505) = 3.99, p<.05) creators were involved. Fig 4 graphs the lines, Fig 5 reports the bar charts and Table 5 reports the results.

Fig 4. Study 2: The effect of status and social ties on award propensity.

https://doi.org/10.1371/journal.pone.0238651.g004

Fig 5. Study 2: The effect of status and social ties on award propensity.

Note: Error bars are ± 1 SE.

https://doi.org/10.1371/journal.pone.0238651.g005
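The two-way ANOVA and the follow-up simple-effects comparisons reported above can be approximated with standard tools; the sketch below uses hypothetical column names, and the simple effects are estimated cell by cell rather than with the pooled error term, so it is illustrative only.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("study2_responses.csv")  # hypothetical file, one row per participant

    # 2 (status) x 2 (social ties) between-subjects ANOVA on award propensity.
    model = smf.ols("award_propensity ~ C(status) * C(social_ties)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Rough simple-effects check: effect of status within each social-ties condition.
    # (A textbook simple-effects test would reuse the pooled error term from the full model.)
    for tie_level, sub in df.groupby("social_ties"):
        sub_model = smf.ols("award_propensity ~ C(status)", data=sub).fit()
        print(tie_level, sm.stats.anova_lm(sub_model, typ=2))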

These experimental findings corroborate the concomitant influence of status and social ties in shaping individual evaluative outcomes, thereby substantiating our results from the field study. The joint effect of status and ties is negative, confirming that audience members are less inclined to reward high-status candidates who are socially close to them. In the case of no ties, no recognition deterrent is present and the positive effect of status on award propensity prevails. Study 2 offers strong validation of the negative interaction effect because we manipulated status and social ties while holding the project’s quality constant. Doing so significantly mitigates the possibility that project-level features may account for the effects observed in Study 1. In the Appendix, we report the results of a replication (experimental) study (i.e., Study 4) in which we use a different manipulation of status (‘famous’ and ‘not very famous’ instead of ‘well-known expert’ and ‘non-expert’) and a different scenario describing the evaluative setting, and we replicate the negative interaction between status and social ties. Please see the Appendix for the full description and the results of this additional study.

Study 3

While Study 2 increases the internal and external validity of Study 1’s findings, it does not allow us to isolate the precise mechanism responsible for the negative interaction. Two equally plausible mechanisms could explain such an empirical pattern. The first is based on the understanding of status and social ties as substitutive judgment devices, namely the idea that social ties may substitute for status in conveying inferential information on evaluative targets, which in turn may guide evaluative decisions. To the extent that social ties channel private information on the evaluation target, audience members with ties to candidates who are in the consideration set should be less sensitive to status information. Conversely, the signaling effect of status should be significantly stronger for audience members who lack such ties and thus have no first-hand information on which to rely in their evaluation. The second explanation relies on social pressure arguments, namely on audience members’ concern to be perceived as fair and disinterested in their evaluation since favoring candidates that are both high status and socially proximate to them can easily evoke suspicions of departure from the meritocratic ideal. The reflected glory evaluators enjoy through their connection to the winner of the tournament may predispose peers towards a morally problematic interpretation of those evaluators’ motives. Thus, while both mechanisms could account for the same result patterning, the underlying explanations are profoundly different. In the first case, audience members who are socially close to the candidates arguably are less likely to use status information to reduce their evaluative uncertainty because social ties represent a more direct mechanism to temper evaluative uncertainty. In the second, audience members who are socially close to the candidates whose work they are expected to evaluate are less sensitive to their status because any further elevation of status of the target could exacerbate the perception that they are pursuing self-serving interests (even when this concern entails overruling a genuine assessment of merit).

In Study 3, we seek to unravel this duality by manipulating the evaluative context. In particular, we reasoned that if the interaction effect reflects evaluators’ reputational concerns then the outcome of the evaluation should depend significantly on whether individual choices are private or in the public domain–and therefore subject to others’ scrutiny. Note that in the field study the decisions of each jury member are collectively socialized and hence known to the other jury members. While this is often the case in peer-based evaluative settings in cultural fields (e.g., Cannes Film Festival, NSF evaluations), there are also evaluative settings in which decision-makers remain oblivious to each other’s deliberations (e.g., Grammies, Oscars). Study 3 reproduces the previous studies as closely as possible; however, evaluators are explicitly told whether or not their evaluations are in the public domain (i.e., known to other evaluators). We then varied only the description of candidates’ status, keeping constant their ties to evaluators. Accordingly, we asked all the study participants to evaluate a commercial created by peers with whom they were directly connected, and we manipulated both the status of the authors of the commercial and the transparency of the evaluation process–i.e., whether or not evaluators’ decisions are openly and collectively debated. Specifically, we developed two distinct descriptions of the contest: one in which the participants are told that their vote will be disclosed and the other in which participants are told that their vote will not be disclosed to the other jury members–which we label public evaluation and non-public evaluation, respectively. These manipulations allow us to ensure that only the participants asked to evaluate high-status peers in the public condition might be susceptible to stigmatizing perceptions. If the pressure to pre-empt potential reputational concerns shapes evaluative considerations, then the propensity to reward any given commercial should decline when the (socially proximate) author of the commercial is high-status and the evaluator’s assessment is public. Stated differently, we should expect the probability to bestow an award on status peers–as opposed to no-status peers–to decline only when the vote is public. By contrast, when the vote is not in the public domain, we should not expect status peers to differ from no-status peers in terms of award propensity. If, on the contrary, status and social ties operate as substitute informational devices, then we should expect no difference between the public and the private conditions.

Method

Participants.

A sample of five hundred and twenty participants was recruited via Prolific, an online UK-based platform [63]. Participants were compensated £0.70 for completing the study. Recruitment was limited to residents of the United Kingdom. As in the prior study, we used two attention check questions to exclude participants who did not pay attention while taking the survey. We used the same attention check as in Study 2 to ensure that participants watched the commercial, and a similar IMC that instructed participants to leave the text-entry space blank for the following question: “In your opinion, which are the characteristics of good commercials? Please describe these characteristics below.” Since our vignettes used the same commercial as in Study 1, we ensured consistency with the first experiment by including in the analysis only participants who watched the commercial for more than 50 seconds and less than 122 seconds. As explained earlier, these methods are recommended for removing inattentive responses from online surveys and ensuring that the pool of subjects is of high quality. The final sample consisted of 402 participants (68.8% female, Mage = 35.58 years, 84.8% Caucasian).
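For concreteness, the exclusion logic described above can be expressed in a few lines of analysis code. The snippet below is a minimal sketch in Python/pandas, not the script used for the study; the file name and column names (watch_time_sec, passed_video_check, imc_text) are hypothetical placeholders for whatever the survey platform exports.

```python
import pandas as pd

# Hypothetical raw export of survey responses; column names are illustrative only.
raw = pd.read_csv("study3_raw.csv")

# Keep participants who (a) passed the video attention check,
# (b) left the IMC text box blank as instructed, and
# (c) watched the commercial for more than 50 and less than 122 seconds.
clean = raw[
    (raw["passed_video_check"] == 1)
    & (raw["imc_text"].fillna("").str.strip() == "")
    & (raw["watch_time_sec"] > 50)
    & (raw["watch_time_sec"] < 122)
]

print(f"Retained {len(clean)} of {len(raw)} participants")
```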

Material and procedure.

We randomly assigned participants to one of four conditions in a 2 (status: status vs. no-status) x 2 (public domain: public evaluation vs. non-public evaluation) between-subjects experiment. Participants read the same vignette used in Study 2, except for the information concerning whether the evaluation was public. In particular, to manipulate the public nature of the evaluation, participants in the public evaluation condition read: “Your vote will be publicly disclosed to the other jury members to collectively select the winner.” In the non-public evaluation condition, participants instead read: “Your vote will not be disclosed to the other jury members to collectively select the winner.” As in Study 2, participants received specific information regarding the commercial’s creators. In all four conditions, we held direct ties between the creators of the commercial and the experimental participants constant by telling participants that they knew the commercial’s creators and had collaborated with them in the past. We then manipulated the status of the creators by applying the same manipulation as in Study 2. Finally, participants were asked to watch and evaluate the commercial already employed in Study 2.

Award propensity.

The same question from Study 2 was used to measure the propensity to award the commercial.

Manipulation checks.

We included manipulation checks for both status and public domain. Consistent with Study 2, we checked the status manipulation by asking participants how much prestige they thought the authors had in advertising on a 7-point scale (1 = very low prestige, 7 = very high prestige). For the public domain manipulation, we asked participants the following question: “Do you think the award decision is anonymous?” Participants answered on a 7-point scale (1 = definitely no, 7 = definitely yes).

Results and discussion

Pre-analysis.

Following the same approach as in Study 2, we inspected the distribution of our dependent variable–award propensity–and identified five outliers based on a Z-score threshold of 2.5 SD [62, 53]. We removed these subjects from subsequent analyses.
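This outlier screen amounts to standardizing the dependent variable and dropping responses whose absolute Z-score exceeds 2.5. A minimal sketch is shown below, assuming the cleaned data frame from the previous sketch and a hypothetical award_propensity column.

```python
import numpy as np

# Standardize award propensity and drop observations with |z| > 2.5.
scores = clean["award_propensity"]
z = (scores - scores.mean()) / scores.std()
analysis = clean[np.abs(z) <= 2.5].copy()

print(f"Removed {len(clean) - len(analysis)} outliers; {len(analysis)} responses retained")
```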

Manipulation checks.

We tested the effectiveness of the status manipulation with a 2 (status: status vs. no-status) x 2 (public domain: public evaluation vs. non-public evaluation) between-subjects ANOVA on the rating of the creators’ prestige. The results revealed a significant main effect for status (F (1, 393) = 174.23, p<.001): participants in the status condition considered the creators of the commercial more prestigious than participants in the no-status condition (Mstatus = 5.34, SDstatus = 1.04; Mno status = 3.69, SDno status = 1.46). The analysis also showed a significant main effect for public domain (F (1, 393) = 4.29, p = .039; Mnon-public = 4.61, SDnon-public = 1.44; Mpublic = 4.46, SDpublic = 1.57) and a marginally significant interaction (F (1, 393) = 3.84, p = .051). Since these results suggest a potential confound in our experimental manipulation, we followed Perdue and Summers’ [64, p. 323] recommendation and compared the effect sizes of each factor to check the validity of our manipulation. The effect size of the status manipulation (η2status = .307) was 28 times greater than the effect size of the public domain manipulation (η2public domain = .011) and 31 times greater than the effect size of the interaction (η2interaction = .01), suggesting that our status manipulation worked as expected. To test the public domain manipulation, we ran a 2 x 2 ANOVA on the perceived anonymity of the decision. The analysis showed a significant main effect for public domain (F (1, 393) = 389.71, p<.001): participants in the non-public condition perceived the decision as more anonymous than participants in the public condition (Mnon-public = 5.28, SDnon-public = 1.54; Mpublic = 2.28, SDpublic = 1.48). We found no other significant effects. Thus, our public domain manipulation was effective.
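The manipulation checks follow a standard 2 x 2 between-subjects ANOVA, with eta-squared computed from the sums of squares so that the relative effect sizes of the two factors can be compared as described above. The sketch below illustrates one way to run such an analysis with statsmodels; it is not the authors’ script, and the column names (prestige_rating, status, public) are hypothetical.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# 2 (status) x 2 (public domain) ANOVA on the prestige manipulation check.
model = smf.ols("prestige_rating ~ C(status) * C(public)", data=analysis).fit()
table = anova_lm(model, typ=2)

# Eta-squared for each effect: SS_effect / SS_total
# (approximate for unbalanced cells when using Type II sums of squares).
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()
print(table[["F", "PR(>F)", "eta_sq"]])
```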

Award propensity.

A two-way ANOVA on award propensity revealed no significant main effect for status (F (1, 393) = 2.24, p>.05), public domain (F (1, 393) = .024, p>.05), or the interaction (F (1, 393) = 1.75, p>.05). Yet, more importantly, the results of the planned contrasts analysis showed the expected pattern. Meyvis and Van Osselaer [53] argue that “requiring authors to demonstrate a reliable main effect before allowing the testing of planned contrasts (which more precisely test the hypothesis) is not a statistically sound argument” [53, pp. 1171–1172]. Likewise, Keppel and Wickens [65] state that “when an experiment has been designed to investigate a particular planned contrast, it should be tested regardless of the significance of the omnibus F statistics” [65, p. 116]. Since we did not expect some of the experimental conditions to differ from each other, the planned contrasts analysis is more precise and powerful than the omnibus F test.

First, we examined the effect by comparing the “status and public” condition with the “no status and public” condition. As expected, the planned contrast revealed that, when their vote was public, participants were less likely to assign an award to a commercial created by status peers (Mstatus and public = 4.25, SDstatus and public = 1.17) than to a commercial created by no-status peers (Mno status and public = 4.58, SDno status and public = 1.12; t (393) = -2.00, p<.05). For the second planned contrast, we compared the “status and non-public” condition with the “no status and non-public” condition. As expected, this contrast was not significant, t (393) = -.12, p>.05 (Mstatus and non-public = 4.42, SDstatus and non-public = 1.17; Mno status and non-public = 4.44, SDno status and non-public = 1.14). Overall, the contrast tests confirmed previous findings, further supporting our conjecture that a stigma deflection effect is at play: the likelihood of a commercial receiving an award declined significantly only when jurors evaluated the work of status peers directly connected to them and their vote was public. We observed no such effect when the evaluation occurred privately. Although we cannot conclusively rule out the alternative explanation that status and social ties operate as substitutive information devices, we believe this alternative explanation is unlikely to account for the results. Fig 6 plots the interaction, Fig 7 reports the bar charts, and Table 6 reports the results.
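Each planned contrast compares two cell means using the pooled error term from the omnibus ANOVA, which is why the contrasts are evaluated on 393 error degrees of freedom. A minimal sketch of that computation is shown below, assuming the analysis data frame from the earlier sketches; the model and column names (award_propensity, status, public) are hypothetical.

```python
import numpy as np
import statsmodels.formula.api as smf
from scipy import stats

# Omnibus model on award propensity; its pooled error term feeds the contrasts.
model_award = smf.ols("award_propensity ~ C(status) * C(public)", data=analysis).fit()
mse, df_error = model_award.mse_resid, model_award.df_resid

# Planned contrast: status vs. no-status cells within the public-evaluation condition.
pub = analysis[analysis["public"] == 1]
a = pub.loc[pub["status"] == 1, "award_propensity"]   # status & public cell
b = pub.loc[pub["status"] == 0, "award_propensity"]   # no-status & public cell

diff = a.mean() - b.mean()
se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
t = diff / se
p = 2 * stats.t.sf(abs(t), df_error)
print(f"t({int(df_error)}) = {t:.2f}, p = {p:.3f}")
```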

Fig 6. Study 3: The effect of status and public domain on award propensity.

https://doi.org/10.1371/journal.pone.0238651.g006

Fig 7. Study 3: The effect of status and public domain on award propensity.

Note: Error bars are ± 1 SE.

https://doi.org/10.1371/journal.pone.0238651.g007

Discussion and conclusions

In the sociological and organizational literature, tournament rituals operate by selectively allocating recognition among competing candidates [66, 67]. Well-known examples of ceremonies in the cultural domain that epitomize peer-based recognition and signal creative achievement are the Academy Awards in motion pictures [68], the Grammies in music [67], the John Bates Clark Medal in economics [69], the Nobel Prize for advances in culture and science [70], and so on. Operating as markers of distinction, these ceremonies shape a cultural field’s status ordering [71]. As such, they have received significant attention from scholars interested in understanding the socio-cognitive mechanisms underlying these evaluative efforts [72–74, 21, 75, 25]. This research has shown how status and social networks are pervasive forces that produce and reproduce attributions of distinction in artistic and scientific evaluative settings. Although several studies have documented the importance of status or social networks across a variety of domains of cultural production, very limited research has focused on how the two mechanisms operate in tandem to shape evaluative outcomes.

We argued that proximity in the social network could moderate the almost universal association between status and recognition. Specifically, social ties to members of the evaluating audience could increase or decrease the positive effect of the candidate’s status. In a field study of award conferrals in advertising contests, we found social proximity between advertising professionals and jury members to attenuate the positive association between status and recognition. Conversely, social ties between audience members and candidates were particularly beneficial in fostering the recognition of candidates who lacked status credentials. Supplementary online experiments aimed at further corroborating this finding and clarifying the underlying mechanism confirmed the negative interaction effect. It is indeed reassuring that we observe the same pattern of effects in the real world (Study 1) and in the online experiments (Studies 2 and 3), which supports the internal and external validity of our results–an important criterion for impactful research in the social sciences [76].

However, Studies 1 and 2 cannot distinguish between two slightly different mechanisms that could underlie the negative interaction effect. One possibility–rooted in the classic understanding of social ties as channels of private information–is that social proximity reduces the saliency of status as an information device. The second possibility is that participants rely less on status when they try to project an image of disinterestedness by not rewarding high-status candidates with whom they have a history of collaboration. To understand whether there is more to the interplay of status and ties than a compensative flow of information, in Study 3 we manipulated the nature of the evaluative context. As in Study 2, we found that status reduces the probability of bestowing an award on socially proximate candidates, but only when participants are induced to think that their decisions are public–i.e., that they have to justify their evaluation before other jury members. Conversely, when the evaluation is private, the negative effect of status disappears. Since this result is obtained while keeping the audience-candidate social tie constant across conditions, it is plausible to conclude that reputational concerns–as opposed to inferential information–shape evaluators’ decisions [77]. While we can only speculate on the exact nature of this concern, the observation of differential effects across the public/non-public conditions is strongly evocative of Goffman’s [78] “front stage”–“back stage” tension that may envelop audience members’ evaluative choices. When decisions are made in the “front stage,” the risk of being perceived as pursuing implicit personal gains rather than adhering to disinterested rules and practices is likely to elicit efforts to project disinterestedness and hence deflect attention away from any signal that would render one’s choices overly susceptible to sceptical scrutiny. In our case, we speculate that insofar as social proximity heightens exposure to public criticism for the alleged pursuit of self-serving interests, audience-candidate relationships that may be publicly perceived as structuring the awarding process may also dampen the signalling saliency of status. The reason is not that the private information channelled via a social relationship replaces the information conveyed by the candidate’s status, but that such a relationship “influences how others perceive the actor” [39, p. 563], thereby lowering the evaluator’s permeability to status cues. This presumption seems particularly pertinent to ostensibly meritocratic cultural settings characterized by a strong vocational drive and professional ethos [79], where the potential stigma stemming from the alleged transgression of the meritocratic ideal may be particularly damaging to one’s reputation. Future experimental research more geared toward the analysis of evaluators’ decision-making patterns and other micro intervening processes underlying our effects is needed to probe the plausibility of this interpretation.

The findings of this study are important because they shift the emphasis away from either status or social network explanations for recognition and refocus attention on how these two mechanisms combine in the creation of prestige hierarchies, and on how such a combinatory effect may itself vary with the specific relational context in which the evaluative activity occurs. Specifically, the negative interaction effect between status and social ties has interesting implications for the dynamics of cumulative advantage. The finding that marginal returns to status diminish with social network proximity adds another piece of evidence that there might be endogenous constraints on the Matthew effect [80]. It is often assumed that the self-reinforcement of status and networks inevitably leads to “winner-take-all” dynamics, as cumulative benefits accrue mostly to those who, even by small margins, are in superior positions [81, 82]. This assumption needs to be qualified, especially if, under certain conditions, status considerations lose saliency in the eyes of the audience members in charge of allocating material and symbolic resources to competing candidates. Further research should seek to identify those conditions.

We believe that our paper takes an important step towards a more precise articulation, in both theoretical and empirical terms, of the role of evaluating audiences in explaining status-based recognition mechanisms. A better understanding of how audiences shape status dynamics is important for mitigating the tension between achievement and ascription that is at the core of meritocratic evaluative settings–whereby audiences are supposed to justify their deliberations based on standards that can be articulated independently of available options [11]. In addition, understanding how audience evaluations may change with the degree of scrutiny to which they are subject seems crucial in light of ever-increasing calls for transparency and fairness in public life [19]. It is therefore worthwhile to probe the interaction between candidates’ status and audience-candidate connectivity, particularly if one considers that social proximity between producers and audience members is a constitutive feature of peer-based evaluative settings. Because of the role-switch structure of these settings, audience members are also members of the same community as the candidates they evaluate, even though they take on different roles [83], and so–more often than not–they may have few degrees of separation from each other [22, 84].

More generally, our findings speak to prior work attentive to the role of the social-relational context in shaping assessments of merit. Ridgeway and Correll [85] consider a social-relational context to be any situation in which an actor has to take into account the expected reactions of others in determining how to act, because such reactions will have consequences for her interests. Other lines of scholarship point to how personal preferences often seem to fade in salience relative to what is publicly endorsed in a status hierarchy [11, 12]. This phenomenon is especially apparent in research on the “politics of dissimulation”–e.g., Norbert Elias’ [86] scholarship on authoritarian systems–whereby public displays of allegiance to the official credo may mask a great deal of private disagreement. Findings in experimental economics also suggest that individuals tend to change their strategic choices depending on whether they are isolated or part of a group, and on whether there is an audience observing their choices [20]. Likewise, research on social evaluation [87], as well as studies on the social transmission of ideas of fairness [88, 89], show that the apparent social validity of something (e.g., social norms on pro-social behaviour) shapes individual decision-makers’ allocative choices, independent of their personal assessments of quality or fairness. Our study adds conceptual and empirical nuance to these lines of research by revealing how the influence of crucial social cues–such as status and ties–on evaluators’ choices may depend on whether members of the evaluating audience can infer key motivational premises from those choices. In light of recent findings revealing the conditions under which social norms in collective decision-making settings may play a smaller role than previously assumed [90, 91], the extent to which socio-normative pressures–as opposed to an individual’s professional ethos, concern for her image or other behavioural mechanisms–are indeed driving this interaction effect could be an exciting area for future inquiry. To this end, one might array peer-based evaluative contexts along a continuum, from those akin to our setting, where individual choices are collectively socialized and visible to other decision-makers (e.g., Cannes Film Festival, National Science Foundation), to those where individual choices remain private (e.g., Grammies, Academy Awards).

Several questions merit further attention. Our data do not capture the process by which jury members collectively make their decisions, in particular how their (often) conflicting opinions are reconciled and consensus on which projects to reward is reached. The combination of archival and interview data offered some suggestive insights, but an ethnographic approach would be better suited to gaining a more nuanced understanding of the processes by which reward allocation decisions are collectively made. One could then probe more deeply the conditions under which the desire to distance oneself from morally dubious evaluations is, in fact, driving those decisions, independently of the evaluator’s actual personal beliefs. Another issue is that prior project collaborations capture only a subset of relevant audience-candidate interpersonal relationships. While we endeavoured to control for other possible manifestations of these social interactions, an extension could be to supplement project collaboration data with additional data on more informal interactions (e.g., advice, friendship, mentorship, and so on) and examine whether patterns of reward allocation can then be explained more accurately. Along this line, another interesting research direction would be to compare and contrast the role of formal vis-à-vis informal ties in shaping evaluative outcomes for different levels of status. These are but some of the many questions that future research could explore in greater depth.

Supporting information

S1 Fig. Study 4: The effect of status and social ties on award propensity.

https://doi.org/10.1371/journal.pone.0238651.s002

(DOCX)

S2 Fig. Study 4: The effect of status and social ties on award propensity.

https://doi.org/10.1371/journal.pone.0238651.s003

(DOCX)

References

1. Azoulay P, Stuart TE, Wang Y. Matthew: Effect or Fable? Management Science. 2013; 60(1):92–109. https://doi.org/10.1287/mnsc.2013.1755
2. Zhao W, Zhou X. Status Inconsistency and Product Valuation in the California Wine Market. Organization Science. 2011; 22(6):1435–448. https://doi.org/10.1287/orsc.1100.0597
3. Waguespack DM, Sorenson O. The Ratings Game: Asymmetry in Classification. Organization Science. 2011; 22(3):541–553. https://doi.org/10.1287/orsc.1100.0533
4. Aadland E, Cattani G, Ferriani S. Friends, Gifts and Cliques: Social Proximity and Recognition in Peer-Based Tournament Rituals. Academy of Management Journal. 2019; 62(3):883–917. https://doi.org/10.5465/amj.2016.0437
5. Bielby WT, Bielby DD. “All Hits Are Flukes”: Institutionalized Decision Making and the Rhetoric of Network Prime-Time Program Development. American Journal of Sociology. 1994; 99(5):1287–313. https://www.jstor.org/stable/2781150
6. Podolny JM. Status Signals. Princeton, NJ: Princeton University Press; 2005.
7. Berger J, Rosenholtz SJ, Zelditch M Jr. Status Organizing Processes. Annual Review of Sociology. 1980; 6(1):479–508.
8. Merton RK. The Matthew Effect in Science: The reward and communication systems of science are considered. Science. 1968; 159(3810):56–63. pmid:5634379
9. Ridgeway CL. Framed by gender: How gender inequality persists in the modern world. Oxford University Press; 2011.
10. Ridgeway CL, Correll SJ. Consensus and the Creation of Status Beliefs. Social Forces. 2006; 85(1):431–53. https://doi.org/10.1353/sof.2006.0139
11. Correll SJ, Ridgeway CL, Zuckerman EW, Jank S, Jordan-Bloch S, Nakagawa S. It’s the Conventional Thought that Counts: How Third-Order Inference Produces Status-Advantage. American Sociological Review. 2017; 82:297–327. https://doi.org/10.1177/0003122417691503
12. Sharkey AJ, Kovács B. The Many Gifts of Status: How Attending to Audience Reactions Drives the Use of Status. Management Science. 2017; 64(11):5422–5443. https://doi.org/10.1287/mnsc.2017.2879
13. Parsons T, Shils EA. Toward a General Theory of Action: Theoretical Foundations for the Social Sciences. New Brunswick, NJ: Transaction; 1951.
14. Blau PM. Exchange and Power in Social Life. New York, NY: John Wiley and Sons; 1964.
15. Wennerås C, Wold A. Nepotism and sexism in peer-review. Nature. 1997; 387:341–343. pmid:9163412
16. Godechot O. The Chance of Influence: A Natural Experiment on the Role of Social Capital in Faculty Recruitment. Social Networks. 2016; 46:60–75. https://doi.org/10.1016/j.socnet.2016.02.002
17. Zinovyeva N, Bagues M. The Role of Connections in Academic Promotions. American Economic Journal: Applied Economics. 2015; 7(2):264–92. http://dx.doi.org/10.1257/app.20120337
18. Teplitskiy M, Acuna D, Elamrani-Raoult A, Körding K, Evans J. The Sociology of Scientific Validity: How Professional Networks Shape Judgement in Peer Review. Research Policy. 2018; 47(9):1825–841. https://doi.org/10.1016/j.respol.2018.06.014
19. Dimant E. Contagion of pro- and anti-social behavior among peers and the role of social proximity. Journal of Economic Psychology. 2019; 73:66–88. https://doi.org/10.1016/j.joep.2019.04.009
20. Charness G, Rigotti L, Rustichini A. Individual behavior and group membership. American Economic Review. 2007; 97(4):1340–1352. https://doi.org/10.1257/aer.97.4.1340
21. Cattani G, Ferriani S, Allison P. Insiders, Outsiders and the Struggle for Consecration in Cultural Fields: A Core-Periphery Perspective. American Sociological Review. 2014; 79(2):258–81. https://doi.org/10.1177/0003122414520960
22. Cattani G, Ferriani S. Networks and Rewards among Hollywood Artists: Evidence for a Social Structural Ordering of Creativity. In Kaufman JC, Simonton DK, editors. The Social Science of Cinema. New York, NY: Oxford University Press; 2013; pp. 185–206.
23. Hogan R, Hogan J. Personality and Status. In Gilbert DG, Connolly JJ, editors. Personality, Social Skills, and Psychopathology. Perspectives on Individual Differences. Boston, MA: Springer; 1991; pp. 137–54.
24. Kilduff M, Krackhardt D. Bringing the individual back in: A structural analysis of the internal market for reputation in organizations. Academy of Management Journal. 1994; 37(1):87–108. https://doi.org/10.5465/256771
25. Reschke BP, Azoulay P, Stuart TE. Status Spillovers: The Effect of Status-Conferring Prizes on the Allocation of Attention. Administrative Science Quarterly. 2018; 63(4):819–47. https://doi.org/10.1177/0001839217731997
26. Hallen BL, Pahnke EC. When do entrepreneurs accurately evaluate venture capital firms’ track records? A bounded rationality perspective. Academy of Management Journal. 2016; 59(5):1535–1560. https://doi.org/10.5465/amj.2013.0316
27. Jensen M. The Role of Network Resources in Market Entry: Commercial Banks’ Entry into Investment Banking, 1991–1997. Administrative Science Quarterly. 2003; 48(3):466–97. https://doi.org/10.2307/3556681
28. Bourdieu P. Practical Reason: On the Theory of Action. Stanford University Press; 1998.
29. Baker WE, Faulkner RR, Fisher GA. Hazards of the Market: The Continuity and Dissolution of Interorganizational Market Relationships. American Sociological Review. 1998; 63(2):147–77. https://www.jstor.org/stable/2657321
30. Helgesen T. Advertising Awards and Advertising Agency Performance Criteria. Journal of Advertising Research. 1994; 34(4):43–53.
31. Aadland E. Status Decoupling and Signaling Boundaries: Rival Market Category Emergence in the Norwegian Advertising Field, 2000–2010. Series of Dissertation 5/2012. BI Norwegian Business School; 2012.
32. Borgatti SP, Everett MG, Freeman LC. Ucinet 6 for Windows: Software for Social Network Analysis. Harvard: Analytic Technologies; 2002.
33. Aadland E, Cattani G, Ferriani S. The Social Structure of Consecration in Cultural Fields: The Influence of Status and Social Distance in Audience–Candidate Evaluative Processes. Research in the Sociology of Organizations. 2018; 55:129–57.
34. Milgram S. The small world problem. Psychology Today. 1967; 2:60–67.
35. Sauder M, Lynn FB, Podolny JM. Status: Insights from Organizational Sociology. Annual Review of Sociology. 2012; 38:267–83. https://doi.org/10.1146/annurev-soc-071811-145503
36. Bonacich P. Power and Centrality: A Family of Measures. American Journal of Sociology. 1987; 92(5):1170–182. https://www.jstor.org/stable/2780000
37. Podolny JM. A status-based model of market competition. American Journal of Sociology. 1993; 98(4):829–872. https://doi.org/10.1086/230091
38. Podolny JM. Market uncertainty and the social character of economic exchange. Administrative Science Quarterly. 1994; 39(3):458–483. https://www.jstor.org/stable/2393299
39. Benjamin BA, Podolny JM. Status, quality, and social order in the California wine industry. Administrative Science Quarterly. 1999; 44(3):563–589. https://doi.org/10.2307/2666962
40. Podolny JM. Networks as the pipes and prisms of the market. American Journal of Sociology. 2001; 107(1):33–60. https://doi.org/10.1086/323038
41. Jensen M. The Use of Relational Discrimination to Manage Market Entry: When Do Social Status and Structural Holes Work Against You? Academy of Management Journal. 2008; 51(4):723–43. https://doi.org/10.5465/amr.2008.33665259
42. Labianca G, Brass DJ. Exploring the social ledger: Negative relationships and negative asymmetry in social networks in organizations. Academy of Management Review. 2006; 31(3):596–614. https://doi.org/10.5465/amr.2006.21318920
43. Parker JN, Corte U. Placing Collaborative Circles in Strategic Action Fields: Explaining Differences between Highly Creative Groups. Sociological Theory. 2017; 35(4):261–87. https://doi.org/10.1177/0735275117740400
44. Sherry J. Gift Giving in Anthropological Perspective. Journal of Consumer Research. 1983; 10(9):157–68. https://doi.org/10.1086/208956
45. McCullagh P, Nelder JA. Generalized Linear Models (2nd ed.). London: Chapman and Hall/CRC; 1989.
46. Hardin JW, Hilbe JM. Generalized Linear Models and Extensions (3rd ed.). College Station, TX: Stata Press; 2012.
47. Belsley DA, Kuh E, Welsch RE. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley and Sons; 1980.
48. Oppenheimer DM, Meyvis T, Davidenko N. Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology. 2009; 45(4):867–872. https://doi.org/10.1016/j.jesp.2009.03.009
49. Mason W, Suri S. Conducting Behavioral Research on Amazon’s Mechanical Turk. Behavior Research Methods. 2012; 44(1):1–23. pmid:21717266
50. Maniaci MR, Rogge RD. Caring about Carelessness: Participant Inattention and Its Effects on Research. Journal of Research in Personality. 2014; 48:61–83. https://doi.org/10.1016/j.jrp.2013.09.008
51. Fiske ST. How to publish rigorous experiments in the 21st century. Journal of Experimental Social Psychology. 2016; 66:145–47. https://doi.org/10.1016/j.jesp.2016.01.006 pmid:30555180
52. Curran PG. Methods for the detection of carelessly invalid responses in survey data. Journal of Experimental Social Psychology. 2016; 66:4–19. https://doi.org/10.1016/j.jesp.2015.07.006
53. Meyvis T, Van Osselaer SMJ. Increasing the Power of Your Study by Increasing the Effect Size. Journal of Consumer Research. 2017; 44(5):1157–173. https://doi.org/10.1093/jcr/ucx110
54. Bunderson SJ. Recognizing and Utilizing Expertise in Work Groups: A Status Characteristics Perspective. Administrative Science Quarterly. 2003; 48:557–91. https://doi.org/10.2307/3556637
55. Bunderson SJ, Barton MA. Status Cues and Expertise Assessment in Groups: How Group Members Size One Another Up… and Why It Matters. In Pearce JL, editor. Status in Management and Organizations. Cambridge, UK: Cambridge University Press; 2011; pp. 215–237.
56. Strack F, Mussweiler T. Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology. 1997; 73(3):437–446. https://doi.org/10.1037/0022-3514.73.3.437
57. Hahl O, Zuckerman EW, Kim M. Why elites love authentic lowbrow culture: Overcoming high-status denigration with outsider art. American Sociological Review. 2017; 82(4):828–856. https://doi.org/10.1177/0003122417710642
58. Sgourev SV, Althuizen N. Is it a masterpiece? Social construction and objective constraint in the evaluation of excellence. Social Psychology Quarterly. 2017; 80(4):289–309. https://doi.org/10.1177/0190272517738092
59. Zhao M, Xie J. Effects of social and temporal distance on consumers’ responses to peer recommendations. Journal of Marketing Research. 2011; 48(3):486–496. https://doi.org/10.1509/jmkr.48.3.486
60. Perry-Smith JE. Social network ties beyond nonredundancy: An experimental investigation of the effect of knowledge content and tie strength on creativity. Journal of Applied Psychology. 2014; 99(5):831–846. https://doi.org/10.1037/a0036385 pmid:24684668
61. Brands RA, Mehra A. Gender, brokerage, and performance: A construal approach. Academy of Management Journal. 2019; 62(1):196–219. https://doi.org/10.5465/amj.2016.0860
62. Van Selst M, Jolicoeur P. A Solution to the Effect of Sample Size on Outlier Elimination. The Quarterly Journal of Experimental Psychology. 1994; 47(3):631–50. https://doi.org/10.1080/14640749408401131
63. Peer E, Brandimarte L, Samat S, Acquisti A. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology. 2017; 70:153–163. https://doi.org/10.1016/j.jesp.2017.01.006
64. Perdue BC, Summers JO. Checking the success of manipulations in marketing experiments. Journal of Marketing Research. 1986; 23(4):317–326. https://doi.org/10.1177/002224378602300401
65. Keppel G, Wickens TD. Design and Analysis: A Researcher’s Handbook. Upper Saddle River, NJ: Pearson; 2004.
66. Appadurai A. Introduction: Commodities and the Politics of Value. In Appadurai A, editor. The Social Life of Things. Cambridge, England: Cambridge University Press; 1986; pp. 3–63.
67. Anand N, Watson MR. Tournament Rituals in the Evolution of Fields: The Case of the Grammy Awards. Academy of Management Journal. 2004; 47(1):59–80. https://doi.org/10.5465/20159560
68. Baumann S. Hollywood Highbrow. Princeton, NJ: Princeton University Press; 2007.
69. Gallus J, Frey BS. Awards: A Strategic Management Perspective. Strategic Management Journal. 2016; 37(8):1699–714. https://doi.org/10.1002/smj.2415
70. Zuckerman H. Nobel Laureates in Science: Patterns of Productivity, Collaboration, and Authorship. American Sociological Review. 1967; 32(3):391–403. https://www.jstor.org/stable/2091086 pmid:6046812
71. Rossman G, Schilke O. Close, But No Cigar: The Bimodal Rewards to Prize-Seeking. American Sociological Review. 2014; 79(1):86–108. https://doi.org/10.1177/0003122413516342
72. Allen MP, Parsons NL. The Institutionalization of Fame: Achievement, Recognition, and Cultural Consecration in Baseball. American Sociological Review. 2006; 71:808–825. https://doi.org/10.1177/000312240607100505
73. Cattani G, Ferriani S. A Core/Periphery Perspective on Individual Creative Performance: Social Networks and Cinematic Achievements in the Hollywood Film Industry. Organization Science. 2008; 19:824–44. https://doi.org/10.1287/orsc.1070.0350
74. Rossman G, Esparza N, Bonacich P. I’d Like to Thank the Academy, Team Spillovers, and Network Centrality. American Sociological Review. 2010; 75(1):31–51. https://doi.org/10.1177/0003122409359164
75. Shymko Y, Roulet TJ. When Does Medici Hurt Da Vinci? Mitigating the Signaling Effect of Extraneous Stakeholder Relationships in the Field of Cultural Production. Academy of Management Journal. 2017; 60(4):1307–338. https://doi.org/10.5465/amj.2015.0464
76. Winer RS. Experimentation in the 21st century: The importance of external validity. Journal of the Academy of Marketing Science. 1999; 27(3):349.
77. Baum JA, Oliver C. Institutional linkages and organizational mortality. Administrative Science Quarterly. 1991; 36(2):187–218. https://doi.org/10.2307/2393353
78. Goffman E. The Presentation of Self in Everyday Life. New York, NY: Anchor Books; 1959.
79. Heinich N. The Sociology of Vocational Prizes: Recognition as Esteem. Theory, Culture and Society. 2009; 26(5):85–107. https://doi.org/10.1177/0263276409106352
80. Gould SJ. The Structure of Evolutionary Theory. Cambridge, MA: Harvard University Press; 2002.
81. Magee JC, Galinsky AD. Social Hierarchy: The Self-Reinforcing Nature of Power and Status. Academy of Management Annals. 2008; 2(1):351–98. https://doi.org/10.5465/19416520802211628
82. Kuwabara K, Anthony D, Horne C. In the Shade of a Forest: Status, Reputation, and Ambiguity in an Online Microcredit Market. Social Science Research. 2017; 64:96–18. pmid:28364857
83. Wijnberg NM. Selection Processes and Appropriability in Art, Science and Technology. Journal of Cultural Economics. 1995; 19:221–35.
84. Lamont M. How Professors Think. Cambridge, MA: Harvard University Press; 2009.
85. Ridgeway CL, Correll SJ. Unpacking the gender system: A theoretical perspective on gender beliefs and social relations. Gender & Society. 2004; 18(4):510–531.
86. Elias N. The Court Society. Translated by Edmund Jephcott. Oxford: Basil Blackwell; 1983.
87. Lamont M. Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology. 2012; 38:201–221. https://doi.org/10.1146/annurev-soc-070308-120022
88. Hugh-Jones D, Ooi J. Where do fairness preferences come from? Norm transmission in a teen friendship network. No. 2017–02. School of Economics, University of East Anglia, Norwich, UK; 2017.
89. Krupka E, Weber RA. The focusing and informational effects of norms on pro-social behavior. Journal of Economic Psychology. 2009; 30(3):307–320. https://doi.org/10.1016/j.joep.2008.11.005
90. Capraro V, Rand DG. Do the Right Thing: Experimental evidence that preferences for moral behavior, rather than equity or efficiency per se, drive human prosociality. Judgment and Decision Making. 2018; 13(1):99–111. http://dx.doi.org/10.2139/ssrn.2965067
91. Capraro V, Vanzo A. The power of moral words: Loaded language generates framing effects in the extreme dictator game. Judgment and Decision Making. 2019; 14(3):309–317. http://journal.sjdm.org/19/190107/jdm190107.pdf