Abstract
Many citizen science projects depend on colour vision. Examples include classification of soil or water types and biological monitoring. However, up to 1 in 11 participants are colour blind. We simulate the impact of various forms of colour blindness on measurements with the Forel-Ule scale, which is used to measure water colour by eye with a 21-colour scale. Colour blindness decreases the median discriminability between Forel-Ule colours by up to 33% and makes several colour pairs essentially indistinguishable. This reduces the precision and accuracy of citizen science data and the motivation of participants. These issues can be addressed by including uncertainty estimates in data entry forms and discussing colour blindness in training materials. These conclusions and recommendations apply to colour-based citizen science in general, including other classification and monitoring activities. Being inclusive of the colour blind increases both the social and scientific impact of citizen science.
Citation: Burggraaff O, Panchagnula S, Snik F (2021) Citizen science with colour blindness: A case study on the Forel-Ule scale. PLoS ONE 16(4): e0249755. https://doi.org/10.1371/journal.pone.0249755
Editor: Adrian G. Dyer, RMIT University, AUSTRALIA
Received: February 12, 2021; Accepted: March 24, 2021; Published: April 19, 2021
Copyright: © 2021 Burggraaff et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying this study are from Novoa et al. 2013 (http://dx.doi.org/10.2971/jeos.2013.13057). All findings can be entirely replicated using the data from Novoa et al. 2013 and the protocol in our Methods section. The authors did not have any special access privileges that others would not have.
Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 776480. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Colour measurements are common in citizen science. They are often done using red-green-blue (RGB) consumer cameras such as smartphones [1–3], but also with the human eye. Human colour measurements are used in such diverse fields as coral reef monitoring [4], snail evolution [5], soil surveying [6], climate adaptation [7], and water colour [8–10]. The data are expressed through a qualitative label [5] or by comparison with a colour chart [4, 6, 8–11]. Colour is a useful proxy for underlying properties such as chemical composition [11, 12] and the simplicity of measuring with the eye enables low-cost measurements over large areas and long time series [12, 13].
Accessibility and inclusivity are key to successful citizen science [14, 15]. A large and diverse group of participants increases the social and scientific impact of citizen science [5, 14, 16, 17]. However, recruiting and retaining participants is challenging [14, 15, 18, 19]. Important motivations to participate are a feeling of contributing to science and environmental protection [6, 16–19], learning [7, 17, 18], and simply having fun [9, 14, 16, 17, 19]. Common reasons to stop participating include mis- or not understanding the project [17], perceiving the data as not valuable [4, 16, 18], and difficulty in performing the measurements [6, 9, 18, 20].
While colour vision is often assumed to be universal, many differences exist between individuals. Colour blindness, or colour deficiency, affects up to 9% of men and 2% of women, depending on ethnicity and other genetic factors [21]. It reduces or even eliminates one’s ability to distinguish certain colours, most commonly red and green [21]. Colour blindness is typically congenital [21–25], but can also be acquired through age or disease [25, 26].
Three forms of colour blindness exist, namely anomalous trichromacy, dichromacy, and monochromacy. Each affects the eye's three cone pigments in a different way. These pigments are labelled L, M, and S for long-, medium-, and short-wave, with peak sensitivities at wavelengths of 560, 530, and 420 nm, respectively [22]. In anomalous trichromacy, a single pigment has an atypical spectral response, reducing one's colour discrimination abilities [22, 23, 27]. This is called protanomaly, deuteranomaly, or tritanomaly for the L, M, and S pigments, respectively. Dichromacy is a complete lack of one pigment, similarly called protanopia, deuteranopia, or tritanopia [22]. Finally, monochromacy is the absence of two or all three cone types, resulting in a complete loss of colour vision. Monochromacy is exceedingly rare [22, 24] and is not discussed further in this work.
Colour blindness is often treated as a continuous spectrum from regular colour vision (all pigments present and typical) through degrees of anomalous trichromacy (one pigment atypical) to dichromacy (one pigment wholly missing) [21, 22, 26]. For simplicity, the three LMS deficiencies are referred to as protan, deutan, and tritan, respectively [21]. Protan and deutan are the most common, affecting for example up to 9% of men and 0.6% of women in Europe, as well as 7% of men and 2% of women in China [21]. The prevalence of tritan in the West is on the order of 1:10 000 [25], though higher prevalences have been reported in other locations [28].
Colour blindness limits the accessibility of citizen science involving colour measurements for up to 1 in 11 participants. However, to our knowledge, little research has gone into its potentially far-reaching consequences for citizen science. Comparable work does exist in science communication, for example in the design of colour maps that remain legible to colour blind readers [27, 29].
As a case study, we investigate the impact of colour blindness on water colour measurements with the Forel-Ule (FU) scale. This scale quantifies human water colour measurements [30] by assigning a numerical value from 1–21 to a predetermined set of colours, shown in Fig 1. These range from indigo blue (FU 1) through green (FU 11) to cola brown (FU 21). First used in the 1890s by Forel and Ule [31, 32], it provides the longest continuous record of ocean colour [13]. For instance, Wernand and Van der Woerd used 17 171 archival FU measurements from 1930 to 1999 to derive long-term biogeochemical trends in the Pacific Ocean [12]. Properties of a water body that can be derived from its colour include suspended particles, dissolved organic matter, and algal pigments such as chlorophyll-a [12, 13, 33].
The individual FU colours are shown on the right, with a comparison to the human gamut on the left. The gamut is plotted in (x, y) chromaticity, normalized from CIE XYZ, shown at constant brightness, and converted to sRGB. The FU scale increases from 1 (bottom left) to 21 (far right). The shaded area represents the full gamut of regular colour vision, while the coloured triangle represents the sRGB colour space, which most computer monitors are limited to. The perceived colours may vary depending on monitor or printer settings and the reader's own colour vision.
The FU scale is commonly used by professionals [12, 34] and by citizen scientists [8, 10]. Measurements are done by comparing a physical standard colour scale to a water body. For citizen science, the original scale made from 21 vials of pigment mixes [35] may be replaced with plastic filters [10] or a printed version [8], making it easier to use. Having this physical reference reduces the effects of variations in illumination, though in all cases it is difficult to guarantee colour consistency.
We use simulations to determine the effects of colour blindness on FU measurements. Such digital simulations accurately reproduce colour blind vision [27, 36]. The discriminability of the resulting shifted colours is assessed using the CIE ΔE00 colour difference measure [37]. This way, the impact of colour blindness on FU measurements is quantified.
Based on these results, we make general recommendations for dealing with colour blindness in citizen science. These include guidelines for data entry protocols and training materials, benefiting citizen motivation and data quality. Moreover, the methods applied in this work are easily generalized to other colour-based tools. This enables authors to account for colour blindness in the design stage of new citizen science projects. While some projects have opted for simplified colour scales [11], this significantly reduces the information content [33] of all data, including those from colour blind participants. Simplified colour scales are thus generally not an ideal solution to this problem.
Section 2 describes the methods used to simulate colour deficiency and assess colour discriminability. Results are presented in Section 3 and discussed in Section 4. Finally, conclusions and recommendations are drawn up in Section 5.
2 Methods
The colour blindness simulations and analysis were implemented in custom Python scripts available from https://github.com/burggraaff/cbfu.
2.1 Forel-Ule scale
Tristimulus (CIE XYZ) values for the FU scale were derived by Novoa et al. from transmission spectroscopy [35]. The corresponding (x, y) chromaticities are shown in Fig 1.
Four illuminants were considered, namely E (equal-energy) and D55, D65, and D75 (daylight). These illuminants quantify differences in lighting conditions and are used to express colour appearance in a standardised manner [38]. The FU scale is defined with an E illuminant [35] but measurements take place in daylight, making D-type illuminants more representative [10]. Conversion between illuminants was done in XYZ space using the Bradford chromatic adaptation matrices provided on Bruce Lindbloom’s website [39].
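The Bradford chromatic adaptation referred to above can be sketched as follows. This is a minimal illustration, not our production code; the Bradford matrix and XYZ white points are the standard tabulated values (CIE 1931 2° observer) and should be checked against the tables on Bruce Lindbloom's website [39].

```python
import numpy as np

# Bradford cone response matrix (standard tabulated values).
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

# XYZ white points for the four illuminants (CIE 1931 2-degree observer).
WHITES = {
    "E":   np.array([1.00000, 1.00000, 1.00000]),
    "D55": np.array([0.95682, 1.00000, 0.92149]),
    "D65": np.array([0.95047, 1.00000, 1.08883]),
    "D75": np.array([0.94972, 1.00000, 1.22638]),
}

def adapt(xyz, source="E", destination="D65"):
    """Bradford chromatic adaptation of an XYZ colour between illuminants."""
    rho_src = M_BFD @ WHITES[source]       # cone responses of the source white
    rho_dst = M_BFD @ WHITES[destination]  # cone responses of the destination white
    scale = np.diag(rho_dst / rho_src)     # von Kries-style scaling in Bradford space
    M = np.linalg.inv(M_BFD) @ scale @ M_BFD
    return M @ np.asarray(xyz)

# By construction, the source white maps exactly onto the destination white.
white_adapted = adapt(WHITES["E"], "E", "D65")
```

The adaptation is invertible, so adapting a colour to D65 and back to E recovers the original tristimulus values.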
The tristimulus values were first converted to the LMS colour space, representing the relative excitations of the LMS cones [27, 40]. This was done through the Hunt-Pointer-Estevez matrix [40], as shown in Eq (1), where [L M S]^T and [X Y Z]^T are the vector representations of a single colour in LMS and XYZ, respectively:

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.38971 & 0.68898 & -0.07868 \\ -0.22981 & 1.18340 & 0.04641 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{1}$$
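The conversion of Eq (1) takes only a few lines of NumPy. The coefficients below are the standard equal-energy-normalised Hunt-Pointer-Estevez values; this is an illustrative sketch rather than our exact implementation.

```python
import numpy as np

# Hunt-Pointer-Estevez matrix, normalised to equal-energy white,
# mapping CIE XYZ tristimulus values to LMS cone excitations (Eq 1).
M_HPE = np.array([
    [ 0.38971, 0.68898, -0.07868],
    [-0.22981, 1.18340,  0.04641],
    [ 0.0,     0.0,      1.0    ],
])

def xyz_to_lms(xyz):
    """Convert an XYZ tristimulus vector to LMS cone excitations."""
    return M_HPE @ np.asarray(xyz)

def lms_to_xyz(lms):
    """Inverse transform, used after the deficiency simulation."""
    return np.linalg.inv(M_HPE) @ np.asarray(lms)
```

Because the matrix is normalised to equal energy, the E-illuminant white [1, 1, 1] in XYZ maps (to within rounding) onto [1, 1, 1] in LMS.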
2.2 Simulation of colour blindness
Colour blindness was simulated by mapping colours from the LMS colour space representing regular vision to a reduced colour space representing colour deficiency [27, 36, 41]. This is a mathematical representation of how colour appearances shift due to colour blindness, based on the observed colour perceptions of dichromats [36]. Since two of the three cone types are unaffected in dichromats and anomalous trichromats, the responses of those cones to a given colour are unchanged. The simulation replaces the response of the third, deficient cone, producing a colour that, to a regular observer, imitates the colour perceived by a colour blind person [36, 41]. This in turn allows us to apply discriminability metrics developed for regular colour vision to the simulated perceived colours.
The LMS-space vectors were modified using a cone-deficiency transfer matrix Tk. Tk is the identity matrix I3 with one diagonal element (the first, second, or third for protan, deutan, and tritan, respectively) reduced to a relative cone contribution k. This is shown in Eqs (2) and (3) for protan with its respective matrix [27, 41]:

$$\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix} = T_k \begin{bmatrix} L \\ M \\ S \end{bmatrix} \tag{2}$$

$$T_k^P = \begin{bmatrix} k & q_1 & q_2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3}$$

k ranges continuously from 1 (regular vision) to 0 (dichromacy). It represents the relative contribution of a specific cone to colour vision but does not correspond directly to a physical property of the eye. The elements q1 and q2 of Tk shift the response of the deficient cone (L in the example) towards those of the others.
The cone transfer matrices for protan ($T_k^P$), deutan ($T_k^D$), and tritan ($T_k^T$) are as follows:

$$T_k^P = \begin{bmatrix} k & q_1 & q_2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T_k^D = \begin{bmatrix} 1 & 0 & 0 \\ q_1 & k & q_2 \\ 0 & 0 & 1 \end{bmatrix}, \quad T_k^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ q_1 & q_2 & k \end{bmatrix} \tag{4}$$
The elements q1, q2 were determined by noting that colour blind people retain regular vision for white and a complementary colour (blue for protan and deutan, red for tritan) [27, 36, 41]. In other words, Tk has eigenvectors $\vec{w} = [L_w\ M_w\ S_w]^T$ (white) and either $\vec{b} = [L_b\ M_b\ S_b]^T$ (blue) or $\vec{r} = [L_r\ M_r\ S_r]^T$ (red), each with eigenvalue 1. This is shown in Eqs (5) and (6):

$$T_k \vec{w} = \vec{w} \tag{5}$$

$$T_k \vec{b} = \vec{b} \quad \text{or} \quad T_k \vec{r} = \vec{r} \tag{6}$$
For each case, a system of two equations with two unknowns q1, q2 and one variable k was derived, with Lb, Mb, Sb the LMS coordinates of the blue reference vector $\vec{b}$ and Lr, Mr, Sr those of $\vec{r}$. For protan:

$$k L_w + q_1 M_w + q_2 S_w = L_w \tag{7}$$

$$k L_b + q_1 M_b + q_2 S_b = L_b \tag{8}$$
Solving for q1, q2 gave the following expressions:

$$q_1 = (1 - k)\,\frac{L_w S_b - L_b S_w}{M_w S_b - M_b S_w} \tag{9}$$

$$q_2 = (1 - k)\,\frac{L_b M_w - L_w M_b}{M_w S_b - M_b S_w} \tag{10}$$
The sRGB blue and red primaries are typically used for $\vec{b}$ and $\vec{r}$, respectively, as this technique originates in the field of computer graphics [27, 41]. While other primaries could be used, such as monochromatic wavelengths [36], this makes little difference [27], so we followed the convention.
We calculated Tk for protan, deutan, and tritan for k from 1 to 0 in steps of 0.01, and transformed the 21 FU colours with each Tk. The modified vectors were then transformed back to XYZ and analyzed. This was implemented in Python through NumPy's einsum function [42].
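As an illustration, the following sketch builds the protan transfer matrix from the constraints in Eqs (9) and (10) and applies a stack of such matrices to a batch of colours with a single einsum call. The Hunt-Pointer-Estevez coefficients and the XYZ values of the sRGB blue primary are standard numbers, and the random colours here merely stand in for the actual FU coordinates.

```python
import numpy as np

# Hunt-Pointer-Estevez matrix (equal-energy normalisation).
M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.0,     0.0,      1.0    ]])

# Reference vectors in LMS: equal-energy white and the sRGB blue primary
# (standard XYZ values of the blue primary).
w = M_HPE @ np.array([1.0, 1.0, 1.0])
b = M_HPE @ np.array([0.1805, 0.0722, 0.9505])

def transfer_matrix_protan(k):
    """Cone-deficiency transfer matrix Tk for protan (Eqs 3, 9, 10)."""
    Lw, Mw, Sw = w
    Lb, Mb, Sb = b
    denom = Mw * Sb - Mb * Sw
    q1 = (1 - k) * (Lw * Sb - Lb * Sw) / denom
    q2 = (1 - k) * (Lb * Mw - Lw * Mb) / denom
    T = np.eye(3)
    T[0] = [k, q1, q2]  # only the deficient (L) row is modified
    return T

# Stack matrices for k = 1.00, 0.99, ..., 0.00 and apply them to all
# colours at once, analogous to our einsum-based implementation.
ks = np.linspace(1.0, 0.0, 101)
T_stack = np.stack([transfer_matrix_protan(k) for k in ks])       # (101, 3, 3)
colours_lms = np.random.default_rng(0).uniform(0, 1, (21, 3))     # stand-in for FU colours
shifted = np.einsum("kij,cj->kci", T_stack, colours_lms)          # (101, 21, 3)
```

The eigenvector constraints can be checked directly: white and the blue primary are unchanged by any Tk, and k = 1 reproduces the input colours exactly.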
2.3 Colour discrimination
Discriminability of the transformed FU colours was assessed in the CIE Lab (1976) colour space. CIE Lab is approximately perceptually uniform, its components representing lightness (L*), green-red (a*), and blue-yellow (b*) [38]. While FU colour assignment is typically done in (x, y) chromaticity (normalized XYZ) through the hue angle [3, 33], this approach does not work for dichromacy, which reduces the chromaticity plane to a line [36]. The Euclidean distance in XYZ coordinates also could not be used, as XYZ is not perceptually uniform [43].
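The XYZ to CIE Lab conversion used here follows the standard CIE 1976 definition; the sketch below shows it for a single colour. The D65 white point values are standard tabulated numbers and serve only as an example reference white.

```python
import numpy as np

def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):
    """CIE 1976 L*a*b* coordinates of an XYZ colour relative to a reference
    white (D65 shown here as an example)."""
    x, y, z = np.asarray(xyz, dtype=float) / np.asarray(white)
    delta = 6 / 29

    def f(t):
        # Cube root above the linearity threshold, linear segment below it.
        return np.cbrt(t) if t > delta**3 else t / (3 * delta**2) + 4 / 29

    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green-red axis
    b = 200 * (fy - fz)        # blue-yellow axis
    return L, a, b
```

As a sanity check, the reference white itself maps to L* = 100 with a* = b* = 0, and an achromatic mid-grey (18% reflectance) gives L* of roughly 49.5 with zero chroma.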
Discriminability was quantified through the ΔE00 metric [37], which expresses the difference between colour pairs. The full formula for ΔE00 is given in [37] and not reprinted here due to its length; our Python implementation passed all the example cases in said paper. A value of ΔE00 = 2.3 corresponds to a just-noticeable difference (JND), the smallest difference an average observer can distinguish [38, 44].
For each deficiency simulation, the ΔE00 difference between each pair of the 21 transformed FU colours was calculated, giving a 21 × 21 confusion matrix. In this matrix, any colour pairs where ΔE00 < 1 JND cannot be discriminated at all, while pairs with 1 ≤ ΔE00 ≤ 3 JND are discriminable only with difficulty. Pairs with ΔE00 > 3 JND were considered discriminable.
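The confusion-matrix construction can be sketched as follows. For brevity, this sketch substitutes the simple Euclidean CIE76 colour difference for the full ΔE00 formula used in the paper, so the resulting numbers differ from ours; the thresholding logic is the same.

```python
import numpy as np

JND = 2.3  # just-noticeable difference in Delta E units

def delta_e_76(lab1, lab2):
    """Euclidean CIE76 colour difference -- a simplified stand-in for the
    full Delta E_00 formula used in the paper."""
    return np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float))

def confusion_matrix(lab_colours):
    """Pairwise colour-difference matrix for a list of Lab colours."""
    n = len(lab_colours)
    dE = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dE[i, j] = delta_e_76(lab_colours[i], lab_colours[j])
    return dE

def count_confusable(dE, threshold_jnd=3):
    """Number of distinct colour pairs below a discriminability threshold."""
    upper = dE[np.triu_indices_from(dE, k=1)]  # count each pair once
    return int(np.sum(upper < threshold_jnd * JND))

# Toy example: three colours, the first two nearly identical.
labs = [(50, 0, 0), (51, 1, 0), (80, 20, -30)]
dE = confusion_matrix(labs)
```

In this toy example only the first pair falls below 1 JND (ΔE ≈ 1.4 < 2.3), so it would be counted as fully indistinguishable.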
3 Results
3.1 Colour blindness simulation
The appearance of the FU scale with varying degrees of colour blindness, simulated as in Section 2.2, is shown in Fig 2. The observed changes qualitatively match those seen in previous work [27, 41] and were anecdotally confirmed by one of the authors (deuteranomalous) and a colleague (protanopic). The largest colour shifts are seen for tritan, as expected since it affects the perception of blue light and many FU colours are shades of blue.
The (modified) XYZ coordinates were adapted to a D65 illuminant, then converted to the sRGB colour space and gamma expanded [1] for visualization purposes. The perceived colours may vary depending on monitor or printer settings and the reader’s own colour vision. Readers who cannot distinguish between the colours shown here may benefit from taking a colour vision test; many variants are freely available online. The anomalous examples correspond to k = 0.50.
Colour blindness narrows the gamut of the FU scale, as shown in Fig 3. It has little effect on the lightness (L*) of the FU scale but affects its colour components. Protan and deutan (red-green blindness) reduce the range of a* (red-green) while tritan reduces the range of b* (blue-yellow). These shifts imply that colour blindness reduces the ability to discriminate FU colours based on hue, meaning the user will have to rely more on lightness.
Both regular and deficient vision are included. Regular vision is hidden in the top and bottom panels behind protan and deutan. These affect the a* (green-red) coordinate the most while tritan affects b* (blue-yellow) the most. None of the deficiencies significantly affect L* (lightness).
3.2 Colour discrimination
The discriminability of FU colours is reduced by colour blindness. The confusion matrices for regular and deficient vision, calculated as in Section 2.3, are shown in Fig 4. They show that the reduced range in a* (red-green) for protan and deutan and in b* (blue-yellow) for tritan, observed in Section 3.1, reduces the discriminability at opposite ends of the FU scale. The former primarily affect FU 10–21 (green–brown) while tritan affects FU 1–9 (blue–green).
The top panels show the full range of ΔE00, while the bottom panels have a narrower colour bar, in units of just-noticeable difference (JND, ΔE00 = 2.3). Even with regular vision, some pairs of FU colours are difficult to distinguish (ΔE00 ≤ 3 JND). Protan and deutan primarily decrease the discriminability of the middle (green) and high (brown) colours, while tritan primarily affects the low (blue) colours, as expected.
Several pairs of FU colours become fully indistinguishable. Deuteranopia causes two colour pairs (FU 19-20 and 20-21) to fall within 1 JND and thus become indistinguishable. For tritanopia, six pairs become indistinguishable, namely 1-2, 1-3, 2-3, 3-4, 4-5, and 5-6. Protanopia does not cause indistinguishable pairs.
Additionally, many more pairs exhibit reduced discriminability. While most adjacent pairs are <3 JND apart even with regular colour vision, deficiency extends this further off the diagonal. In particular, protan and deutan cause confusion between the central colours (FU 9–13), which is also apparent from Fig 3 as they have similar L*, a*, and b*. On the other hand, tritan significantly reduces the discriminability of FU 1–9. As seen in Fig 5, the number of pairs within 3 JND increases from 17 (regular) to 24 (protanopia), 21/24 (deuteranopia/deuteranomaly), or 30 (tritanopia).
The median and minimum (left) ΔE00 difference between FU colour pairs, and the number of pairs within 3 and 1 JND (right), are shown as a function of the relative cone contribution k. k ranges from 1 (full colour vision) to 0 (dichromacy), with intermediate values representing partial colour blindness (anomalous trichromacy). Pairs with ΔE00 < 1 JND are fully indistinguishable, pairs with <3 JND are difficult to distinguish (Section 3.2).
These trends also apply to partial colour blindness (anomalous trichromacy). Fig 5 shows the relation between k and median/minimum ΔE00 as well as the number of indistinguishable pairs. The median decreases smoothly for protan, deutan, and tritan (from 33 to 27, 26, and 22, respectively) from k = 1 to 0. The minimum ΔE00 decreases smoothly for protan and deutan (from 3.3 to 2.5 and 2.2, respectively) while the tritan curve is piecewise smooth. Fully indistinguishable pairs (ΔE00 < 2.3) appear at k ≤ 0.20 for deutan and tritan.
Chromatic adaptation with a daylight illuminant (Section 2.1) did not affect these results. While the ΔE00 between some pairs changed by up to 1 JND, the patterns seen in Figs 4 and 5 remained, as did the previously discussed pairs of non-discriminable colours.
3.3 Practical consequences
In practice, FU measurements always have an uncertainty of at least 1 FU unit. This is due to viewing conditions at the time of measurement, including waves, specular reflections, and uneven illumination. As seen in Section 3.2, adjacent pairs of FU colours are difficult to distinguish (ΔE00 < 3 JND) even with regular vision.
Colour blindness increases the uncertainty on FU measurements. Observers with protan or deutan experience increased difficulty in distinguishing adjacent pairs. Moreover, protans have difficulty distinguishing FU 9–13 while for deutans, FU 19-20 and 20-21 are completely indistinguishable. For a FU 20-type water body, a deutan cannot specify their observation more precisely than 19–21. Furthermore, ΔE00 = 2.33 for FU 18 and 20, further reducing this precision to 18–21 given imperfect viewing conditions. Similarly, since tritans cannot distinguish six pairs of colours in the FU 1–6 range, they can provide little precision on 99% of global surface waters [33].
This increased uncertainty affects data quality and user motivation. This is further discussed in Section 4 and recommended guidelines for considering these issues are given in Section 5.
4 Discussion
Simulating the effects of colour blindness on Forel-Ule (FU) measurements, we have found significant reductions in colour discriminability and hence precision (Sections 3.2 and 3.3). This matches the authors’ and colleagues’ experiences in the field, and the simulation methods are well-attested in other contexts [27, 29, 36]. However, wider validation specific to the FU scale, with participants representing different types of colour blindness, is desirable.
The reduction in precision due to colour blindness reduces the quality and value of citizen science data. The magnitude of this effect depends on the type and severity of colour blindness, as described in Section 3.3. Protans and deutans, the vast majority of colour blind people in the West [21, 25], experience a reduction in median discriminability (ΔE00) between FU colours of up to 21%; for tritans this is 33%. The uncertainty in FU data increases correspondingly, though not evenly. For example, tritans’ ability to identify green-brown waters (FU 10–21) changes little, but they cannot distinguish the blue water types (FU 1–6) that represent most global surface waters [33].
This reduction in data value can be addressed by modifying data entry protocols to include uncertainties. Currently, many citizen science projects require users to provide a single value, for example FU 9 or 10. An entry field for uncertainty, or allowing the user to enter multiple values, accounts for the decrease in selectivity. Participants can estimate this uncertainty themselves. Even FU measurements by participants with regular colour vision have a typical uncertainty of ±1 FU (Section 3.2), which should be accounted for when using them to validate remote sensing data [34]. Colour blindness, particularly dichromacy, increases this uncertainty to up to ±3 FU.
We propose three methods to include uncertainties in data entry forms. The first is simply to include two fields, one for the best estimate (for example FU 9) and one for the estimated uncertainty (e.g. ±2 FU). This method is commonly used in scientific publications but it may be difficult for citizens to understand and apply [45], especially for asymmetric uncertainties. The second method is to have participants estimate a sequential range of possible values (e.g. FU 8–11), optionally including a single best estimate (e.g. FU 9). This is intuitive, simple to apply, and easily translated into traditional uncertainty intervals. It is most applicable for sequential scales like FU where confusion occurs primarily between adjacent numbers (Fig 4). The third method is to have participants select any number of possible values (e.g. FU 8, 9, 11). This is the most general method for discrete colour scales but makes the uncertainties more difficult to process. It is best suited to colour scales with many non-adjacent indistinguishable pairs. Our Python code (Section 2) can be adapted to other colour scales to determine which method is most suitable. A more detailed discussion on handling uncertainty in citizen science data is provided in [46].
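The three entry methods can all be reduced to the same underlying quantity, a set of possible FU values, which makes them easy to process together. The sketch below illustrates this with a hypothetical helper; the tuple formats and function name are ours for illustration and do not come from any existing data entry system.

```python
def fu_candidates(entry):
    """Normalise the three proposed entry formats to a set of possible FU values.

    Hypothetical helper for illustration; the entry formats are:
      ("estimate", 9, 2)    -> best estimate with symmetric uncertainty (FU 9 +/- 2)
      ("range", 8, 11)      -> sequential range of possible values (FU 8-11)
      ("set", [8, 9, 11])   -> arbitrary selection of possible values
    """
    kind = entry[0]
    if kind == "estimate":
        best, uncertainty = entry[1], entry[2]
        values = range(best - uncertainty, best + uncertainty + 1)
    elif kind == "range":
        values = range(entry[1], entry[2] + 1)
    elif kind == "set":
        values = entry[1]
    else:
        raise ValueError(f"unknown entry format: {kind!r}")
    # Clip to the valid FU scale (1-21).
    return {v for v in values if 1 <= v <= 21}
```

Representing every entry as a set of candidate values also handles the asymmetric and non-adjacent cases that a single ± uncertainty cannot express.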
Colour blindness can also affect the motivation of citizen scientists. As discussed in Section 1, participants need to feel they are contributing to science with valuable data. A participant presented with a colour scale where multiple colours appear indistinguishable may dismiss the method as either too difficult or nonsensical, and stop participating [4, 6, 9, 16–18, 20]. This is especially true for one unaware of their colour blindness. Since citizen science benefits from a large and diverse group of participants [5, 14, 16, 17], participant retention is important.
Demotivation can be prevented by modifying training materials. Explaining the choice of colour scale and how colour blindness affects its appearance helps participants understand the method. Particular care should be taken in emphasising the value of citizen data, even with colour blindness. For example, while tritans cannot distinguish the FU colours covering the open sea, their ability to distinguish FU 10–21 differs little from regular vision. These cover many inland waters [3, 47], which are commonly studied with the FU scale [30], so training materials should emphasise the value of tritans’ observations there.
Training participants to estimate and provide uncertainties would further help them understand the value of their data [45]. Moreover, since uncertainty estimation is an integral part of professional science, citizen scientists may even gain motivation from learning about it [7, 17, 18]. For existing applications, if modifying data entry forms is impossible, explaining why colours may appear similar and how to pick a single colour would reduce the perceived difficulty.
The severity of these motivational effects and the efficacy of these preventative measures should be tested in practice. Comparing the retention of participants with regular and deficient colour vision, with and without modified training materials and data entry forms, would serve this purpose. This is ideally done in the design stage, as part of a co-creation process [7, 17].
Additional future work includes investigating the effects of other variations in colour perception. Even among those with regular colour vision, variations in colour perception exist [22], including demographical trends [21, 28]. Moreover, monochromacy was not discussed in this work because of its rarity [24] but likely has an even more pronounced effect on colour discriminability than the deficiencies investigated here.
Finally, unrelated to human observations, Fig 3 highlights the importance of lightness in distinguishing FU colours. Many FU index algorithms, which apply the FU scale to remote sensing data, only account for chromaticity [3, 33, 34]. Introducing lightness to these algorithms may improve their precision and accuracy.
5 Conclusions & recommendations
Citizen science projects that depend on colour vision should account for colour blindness, which affects up to 1 in 11 participants. For Forel-Ule water colour measurements, colour blindness reduces the median discriminability between colours by up to 33% and makes multiple pairs of colours fully indistinguishable. This affects data quality and citizen motivation.
Modifying data entry forms to include uncertainty estimates would reduce the impact on data quality. This can be done by letting participants estimate the uncertainty in their measurement or choose multiple colours on the scale. Our provided Python code can be adapted to determine the best suited method for different colour scales. Learning how to estimate uncertainties may also increase participants’ motivation and understanding of science.
The impact on motivation is reduced by including colour blindness in training materials. This includes explaining the colour scale and the difficulties colour blind participants may face, but also emphasising the continued value of their data. Through improved retention, this increases the number and diversity of the participants, which in turn increases both the social and scientific impact of citizen science.
Acknowledgments
The authors wish to thank Mortimer Werther, Steele Farnsworth, Emmanuel Boss, and Akupara Panchagnula for valuable discussions relating to this work. Data analysis and visualization were done using the Matplotlib, NumPy, and colorio libraries for Python. Fig 4 uses the cividis colour map [29]; the line colours in Figs 3 and 5 were obtained from https://davidmathlogic.com/colorblind/.
References
- 1. Burggraaff O, Schmidt N, Zamorano J, Pauly K, Pascual S, Tapia C, et al. Standardized spectral and radiometric calibration of consumer cameras. Optics Express. 2019;27(14):19075–19101. pmid:31503673
- 2. Leeuw T, Boss E. The HydroColor app: Above water measurements of remote sensing reflectance and turbidity using a smartphone camera. Sensors. 2018;18(1):256.
- 3. Novoa S, Wernand MR, van der Woerd HJ. WACODI: A generic algorithm to derive the intrinsic color of natural waters from digital images. Limnology and Oceanography: Methods. 2015;13(12):697–711.
- 4. Marshall NJ, Kleine DA, Dean AJ. CoralWatch: Education, monitoring, and sustainability through citizen science. Frontiers in Ecology and the Environment. 2012;10(6):332–334.
- 5. Worthington JP, Silvertown J, Cook L, Cameron R, Dodd M, Greenwood RM, et al. Evolution MegaLab: A case study in citizen science methods. Methods in Ecology and Evolution. 2012;3(2):303–309.
- 6. Bone J, Archer M, Barraclough D, Eggleton P, Flight D, Head M, et al. Public participation in soil surveys: Lessons from a pilot study in England. Environmental Science and Technology. 2012;46(7):3687–3696. pmid:22393874
- 7. Bremer S, Haque MM, Aziz SB, Kvamme S. ‘My new routine’: Assessing the impact of citizen science on climate adaptation in Bangladesh. Environmental Science & Policy. 2019;94:245–257.
- 8. Brewin RJW, Brewin TG, Phillips J, Rose S, Abdulaziz A, Wimmer W, et al. A Printable Device for Measuring Clarity and Colour in Lake and Nearshore Waters. Sensors. 2019;19(4):936. pmid:30813342
- 9. Scott AB, Frost PC. Monitoring water quality in Toronto’s urban stormwater ponds: Assessing participation rates and data quality of water sampling by citizen scientists in the FreshWater Watch. Science of the Total Environment. 2017;592:738–744.
- 10. Novoa S, Wernand MR, van der Woerd HJ. The modern Forel-Ule scale: A 'do-it-yourself' colour comparator for water monitoring. Journal of the European Optical Society: Rapid Publications. 2014;9:14025.
- 11. Castilla EP, Cunha DGF, Lee FWF, Loiselle S, Ho KC, Hall C. Quantification of phytoplankton bloom dynamics by citizen scientists in urban and peri-urban environments. Environmental Monitoring and Assessment. 2015;187(11):690.
- 12. Wernand MR, van der Woerd HJ. Ocean colour changes in the North Pacific since 1930. Journal of the European Optical Society: Rapid Publications. 2010;5:10015s.
- 13. Wernand MR, van der Woerd HJ, Gieskes WWC. Trends in Ocean Colour and Chlorophyll Concentration from 1889 to 2000, Worldwide. PLOS ONE. 2013;8(6):e63766.
- 14. Brouwer S, Hessels LK. Increasing research impact with citizen science: The influence of recruitment strategies on sample diversity. Public Understanding of Science. 2019;28(5):1–16.
- 15. Rambonnet L, Vink SC, Land-Zandstra AM, Bosker T. Making citizen science count: Best practices and challenges of citizen science projects on plastics in aquatic environments. Marine Pollution Bulletin. 2019;145:271–277.
- 16. Alender B. Understanding volunteer motivations to participate in citizen science projects: a deeper look at water quality monitoring. Journal of Science Communication. 2016;15(03):A04.
- 17. Land-Zandstra AM, Devilee JLA, Snik F, Buurmeijer F, van den Broek JM. Citizen science on a smartphone: Participants’ motivations and learning. Public Understanding of Science. 2016;25(1):45–60.
- 18. Asingizwe D, Poortvliet PM, Koenraadt CJM, van Vliet AJH, Ingabire CM, Mutesa L, et al. Why (not) participate in citizen science? Motivational factors and barriers to participate in a citizen science program for malaria control in Rwanda. PLOS ONE. 2020;15(8):e0237396. pmid:32833984
- 19. Vohland K, Land-Zandstra A, Ceccaroni L, Lemmens R, Perelló J, Ponti M, et al. The Science of Citizen Science. 1st ed. Springer International Publishing; 2017. Available from: https://www.springer.com/gp/book/9783030582777.
- 20. Budde M, Schankin A, Hoffmann J, Danz M, Riedel T, Beigl M. Participatory Sensing or Participatory Nonsense?—Mitigating the Effect of Human Error on Data Quality in Citizen Science. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2017;1(3):1–23.
- 21. Birch J. Worldwide prevalence of red-green color deficiency. Journal of the Optical Society of America A. 2012;29(3):313–320.
- 22. Neitz J, Neitz M. The genetics of normal and defective color vision. Vision Research. 2011;51(7):633–651.
- 23. Jordan G, Deeb SS, Bosten JM, Mollon JD. The dimensionality of color vision in carriers of anomalous trichromacy. Journal of Vision. 2010;10(8):1–19.
- 24. Pentao L, Lewis RA, Ledbetter DH, Patel PI, Lupski JR. Maternal uniparental isodisomy of chromosome 14: Association with autosomal recessive rod monochromacy. American Journal of Human Genetics. 1992;50(4):690–699.
- 25. Alpern M, Kitahara K, Krantz DH. Classical tritanopia. The Journal of Physiology. 1983;335(1):655–681.
- 26. Verriest G. Further studies on acquired deficiency of color discrimination. Journal of the Optical Society of America. 1963;53(1):185–195.
- 27. Viénot F, Brettel H, Mollon JD. Digital Video Colourmaps for Checking the Legibility of Displays by Dichromats. Color Research & Application. 1999;24(4):243–252.
- 28. Hashemi H, Khabazkhoob M, Pakzad R, Yekta A, Heravian J, Nabovati P, et al. The prevalence of color vision deficiency in the northeast of Iran. Journal of Current Ophthalmology. 2019;31(1):80–85. pmid:30899851
- 29. Crameri F, Shephard GE, Heron PJ. The misuse of colour in science communication. Nature Communications. 2020;11:5444.
- 30. Wernand MR, van der Woerd HJ. Spectral analysis of the Forel-Ule ocean colour comparator scale. Journal of the European Optical Society: Rapid Publications. 2010;5(0):10014s.
- 31. Ule W. Die Bestimmung der Wasserfarbe in den Seen. Kleinere Mittheilungen. Dr A Petermanns Mittheilungen aus Justus Perthes geographischer Anstalt. 1892; p. 70–71.
- 32. Forel FA. Une nouvelle forme de la gamme de couleur pour l’étude de l’eau des lacs. Archives des Sciences Physiques et Naturelles/Société de Physique et d’Histoire Naturelle de Genève. 1890;6(25).
- 33. Pitarch J, van der Woerd HJ, Brewin RJW, Zielinski O. Optical properties of Forel-Ule water types deduced from 15 years of global satellite ocean color observations. Remote Sensing of Environment. 2019;231:111249.
- 34. Nie Y, Guo J, Sun B, Lv X. An evaluation of apparent color of seawater based on the in-situ and satellite-derived Forel-Ule color scale. Estuarine, Coastal and Shelf Science. 2020;246:107032.
- 35. Novoa S, Wernand MR, van der Woerd HJ. The Forel-Ule scale revisited spectrally: preparation protocol, transmission measurements and chromaticity. Journal of the European Optical Society: Rapid Publications. 2013;8(0):13057.
- 36. Brettel H, Viénot F, Mollon JD. Computerized simulation of color appearance for dichromats. Journal of the Optical Society of America A. 1997;14(10):2647–2655.
- 37. Sharma G, Wu W, Dalal EN. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application. 2005;30(1):21–30.
- 38. Sharma G. Digital Color Imaging Handbook. Taylor & Francis Ltd; 2003.
- 39. Lindbloom B. Chromatic Adaptation; 2017. Available from: http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html.
- 40. Hunt RWG. Revised colour-appearance model for related and unrelated colours. Color Research & Application. 1991;16(3):146–165.
- 41. Schmitz J. Color Blindness Simulation Research; 2020. Available from: https://ixora.io/projects/colorblindness/color-blindness-simulation-research/.
- 42. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, et al. Array programming with NumPy. Nature. 2020;585(7825):357–362. pmid:32939066
- 43. Chickering KD. Perceptual Significance of the Differences between CIE Tristimulus Values. Journal of the Optical Society of America. 1969;59(8):986–990.
- 44. Mahy M, Van Eycken L, Oosterlinck A. Evaluation of Uniform Color Spaces Developed after the Adoption of CIELAB and CIELUV. Color Research & Application. 1994;19(2):105–121.
- 45. Gustafson A, Rice RE. A review of the effects of uncertainty in public science communication. Public Understanding of Science. 2020;29(6):614–633.
- 46. Jiménez M, Triguero I, John R. Handling uncertainty in citizen science data: Towards an improved amateur-based large-scale classification. Information Sciences. 2019;479:301–320.
- 47. Wang S, Li J, Zhang B, Spyrakos E, Tyler AN, Shen Q, et al. Trophic state assessment of global inland waters using a MODIS-derived Forel-Ule index. Remote Sensing of Environment. 2018;217:444–460.