Table 1.
Content analyses that have examined science journalists’ source choices.
Fig 1.
Frequency distribution of journalistic references to scientific journals in a) eight German science beats between 1995 and 1996 (N = 408) and b) the New York Times between 1998 and 2012 (N = 545) [33,41]. Source: own presentation based on data a) from [33]: 408 references were analysed; these were to 1. Nature (74), 2. Science (55), 3. NEJM (43), 4. Lancet (35), 5. New Scientist (21), 6. PNAS (13), 7. JAMA (11), 8. Nature Medicine (11), 9. BMJ (9), 10. Münchner Medizinische Wochenschrift (MMW) (8), and 128 other journals (number of references missing in the publication); and b) from [41]: 1,054 references were analysed; these were to Nature (171), Science (126), PNAS (99), NEJM (44), JAMA (42), Archives of Internal Medicine (26), Lancet (26), and 267 other journals (number of references missing in the publication).
Fig 2.
Distribution of parallel selections of specific occasions in one international and three national studies.
Source: own presentation based on data from [77,78,80]. A) Data from three studies of German newspapers and TV news programmes. B) Data from an international study of TV news programmes. We only show shares for up to seven media titles because in Rössler’s studies, shares of events reported by more than seven media titles were only presented in aggregated form.
Fig 3.
Development of the number of Social Impact Papers (SIPs) and journals with SIPs.
A) Comparison of the number of studies with an MSM score ≥ 50 in the different years of the study period, showing more than a quadrupling from 2014 to 2016. B) Comparison of the number of journals with at least one SIP in the different years of the study period, which develops similarly to the number of SIPs.
Fig 4.
Complementary Cumulative Distribution Function (CCDF) of the distribution of Social Impact Papers (SIPs; N = 4,186) across journals (N = 1,036) between August 2016 and July 2018.
A) CCDFs and power law fits for the share of journals with a certain number of SIPs for the last two years of the study period (N = 709 for the number of journals from August 2016 to July 2017 and N = 646 for August 2017 to July 2018). The distributions are very similar. B) CCDF and power law fit for aggregated data from panel A).
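The CCDFs and power law fits shown in figures like this one are commonly produced with the maximum-likelihood approach of Clauset, Shalizi and Newman (2009). A minimal sketch of both steps, using hypothetical SIP-per-journal counts (not the study's data) and a fixed x_min of 1, with the x_min-selection step omitted:

```python
import math
from collections import Counter

def empirical_ccdf(values):
    """Return (x, P(X >= x)) pairs for a sample of positive counts."""
    n = len(values)
    counts = Counter(values)
    ccdf, remaining = [], n
    for x in sorted(counts):
        ccdf.append((x, remaining / n))  # share of observations >= x
        remaining -= counts[x]
    return ccdf

def powerlaw_alpha_discrete(values, xmin=1):
    """Approximate MLE for the exponent of a discrete power law
    (Clauset, Shalizi & Newman 2009, eq. 3.7)."""
    tail = [x for x in values if x >= xmin]
    n = len(tail)
    return 1 + n / sum(math.log(x / (xmin - 0.5)) for x in tail)

# hypothetical SIP-per-journal counts, for illustration only
sample = [1] * 700 + [2] * 180 + [3] * 70 + [5] * 30 + [10] * 10 + [40] * 3
ccdf = empirical_ccdf(sample)
alpha = powerlaw_alpha_discrete(sample)  # roughly 2 for this sample
```

Plotting the CCDF on log-log axes makes a power law appear as a straight line with slope -(alpha - 1), which is how panels A) and B) are typically read.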
Table 2.
Results of likelihood ratio tests for different heavy-tailed distributions.
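Likelihood ratio tests of the kind reported here usually follow the Vuong-style procedure in Appendix C of Clauset, Shalizi and Newman (2009): compare per-observation log-likelihoods of two fitted candidates and normalise the sum. A sketch for continuous data, comparing a power law against an exponential (the candidate pair and data below are illustrative assumptions, not the study's):

```python
import math
import statistics

def loglik_ratio_test(tail, xmin):
    """Vuong-style likelihood ratio test: power law vs. exponential
    for the tail x >= xmin. Positive R favours the power law; a |z|
    near zero means the test is inconclusive."""
    n = len(tail)
    # MLEs for both candidate distributions on the tail
    alpha = 1 + n / sum(math.log(x / xmin) for x in tail)
    lam = 1 / (statistics.fmean(tail) - xmin)
    l_pl = [math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin) for x in tail]
    l_ex = [math.log(lam) - lam * (x - xmin) for x in tail]
    diffs = [a - b for a, b in zip(l_pl, l_ex)]
    R = sum(diffs)
    z = R / (statistics.pstdev(diffs) * math.sqrt(n))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return R, z, p

# deterministic Pareto(alpha=2.5) quantiles as a stand-in sample
xmin, n = 1.0, 1000
sample = [xmin * (1 - (i - 0.5) / n) ** (-1 / 1.5) for i in range(1, n + 1)]
R, z, p = loglik_ratio_test(sample, xmin)  # R > 0: power law preferred
```

In practice this comparison is run for each heavy-tailed alternative (exponential, log-normal, truncated power law), which is what a table like Table 2 summarises.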
Table 3.
Parameters of power law and truncated power law fits and results of goodness-of-fit tests for the distribution of SIPs per journal in the different time periods.
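The goodness-of-fit tests referenced here are typically based on the Kolmogorov-Smirnov (KS) distance between the empirical tail and the fitted model, with a p-value obtained by a semi-parametric bootstrap (Clauset et al. 2009). A sketch of the KS statistic for a continuous power law, using exact Pareto quantiles as an artificial perfectly fitting sample; the bootstrap step is omitted:

```python
import math

def ks_distance(tail, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    tail (x >= xmin) and a fitted continuous power-law CDF."""
    xs = sorted(tail)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        model = 1 - (x / xmin) ** (-(alpha - 1))  # fitted power-law CDF
        # check the empirical step both just below and at x
        d = max(d, abs(i / n - model), abs((i + 1) / n - model))
    return d

alpha, xmin, n = 2.5, 1.0, 1000
# exact Pareto(alpha) quantiles, so the fit is perfect by construction
sample = [xmin * (1 - (i - 0.5) / n) ** (-1 / (alpha - 1)) for i in range(1, n + 1)]
d = ks_distance(sample, xmin, alpha)  # 0.5/n for exact quantiles
```

For the bootstrap p-value, one repeatedly draws synthetic samples from the fitted model, refits, and records how often the synthetic KS distance exceeds the observed one; a large p-value means the power law cannot be rejected.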
Fig 5.
Complementary Cumulative Distribution Functions (CCDFs) of the number of studies with a specific MSM score between August 2016 and July 2018 (N = 4,186).
A) CCDFs and power law fits for the share of studies with a certain MSM score for the last two years of the study period (N = 2,213 for the number of SIPs from August 2016 to July 2017 and N = 1,973 for August 2017 to July 2018). While the power law fit is a good representation of the data up to values of around 150, there is a steep drop-off in the tail. B) CCDF and power law fit for aggregated data from panel A).
Table 4.
Parameters of power law and truncated power law fits and results of goodness-of-fit tests for the distribution of MSM scores in the different time periods.