
Discounting of delayed rewards: Missing data imputation for the 21- and 27-item monetary choice questionnaires

  • Yu-Hua Yeh,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Psychology Department, Illinois College, Jacksonville, IL, United States of America

  • Allison N. Tegge,

    Roles Conceptualization, Investigation, Methodology, Software, Writing – review & editing

    Affiliations Fralin Biomedical Research Institute at VTC, Roanoke, VA, United States of America, Department of Statistics, Virginia Tech, Blacksburg, VA, United States of America

  • Roberta Freitas-Lemos,

    Roles Data curation, Investigation, Writing – original draft, Writing – review & editing

    Affiliation Fralin Biomedical Research Institute at VTC, Roanoke, VA, United States of America

  • Joel Myerson,

    Roles Data curation, Funding acquisition, Writing – review & editing

    Affiliation Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States of America

  • Leonard Green,

    Roles Data curation, Funding acquisition, Writing – review & editing

    Affiliation Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States of America

  • Warren K. Bickel

    Roles Funding acquisition, Supervision, Writing – review & editing

    wkbickel@vtc.vt.edu

    Affiliation Fralin Biomedical Research Institute at VTC, Roanoke, VA, United States of America

Abstract

The Monetary Choice Questionnaire (MCQ) is a widely used behavioral task that measures the rate of delay discounting (i.e., k), the degree to which a delayed reward loses its present value as a function of the time to its receipt. Both the 21- and 27-item MCQs have been extensively validated and proven valuable in research. Different methods have been developed to streamline MCQ scoring. However, existing scoring methods have yet to tackle the issue of missing responses or provide clear guidance on imputing such data. To address this gap, the present study developed and compared three imputation approaches that leverage the MCQ’s structure and prioritize ease of implementation. Additionally, their performance was compared with mode imputation. A Monte Carlo simulation was conducted to evaluate the performance of these approaches in handling various numbers of missing responses per observation across two datasets from prior studies that employed the 21- and 27-item MCQs. One of the three approaches consistently outperformed mode imputation across all performance measures. This approach imputes missing values using congruent non-missing responses to the items corresponding to the same k value, introducing random responses when congruent answers are unavailable. This investigation identifies a straightforward method for imputing missing data in the MCQ while ensuring unbiased estimates. In addition, an R tool was developed for researchers to implement this strategy while streamlining the MCQ scoring process.

Introduction

Delay discounting, the decrease of the present value of a delayed reward with the increase in the time to its receipt, captures important human decision-making processes [1]. An individual’s delay discounting rate, measured by the parameter k in a hyperbolic discounting model proposed by Mazur [2], is associated with various maladaptive behaviors, including substance addiction, gambling, and obesity [3–5]. Emerging evidence suggests delay discounting is not simply an indicator of poor cognitive functioning or a personality trait of impulsivity [6]. Notably, delay discounting has been proposed as a candidate behavioral marker of addiction and obesity and a transdiagnostic process in psychiatric disorders, highlighting its unique role in clinical research [7–9].

The Monetary Choice Questionnaire (MCQ) is one of the most commonly used behavioral tasks that measure individuals’ rates of delay discounting [5, 10–13] and is currently included in the PhenX Toolkit (https://www.phenxtoolkit.org), a catalog providing recommendations on data collection protocols for biomedical research. Two validated versions, the 21- and 27-item MCQs, are available [14, 15]. Both versions comprise a series of binary choice questions corresponding to different rates of delay discounting (i.e., k) based on Mazur’s hyperbolic discounting model, V = A / (1 + kD), where V is the present, discounted value, A is the amount of the delayed reward, D is the delay to its receipt, and k is a parameter governing the rate at which value is discounted with delay [2]. For example, the choice question “$34 tonight or $35 in 43 days” in the 21-item MCQ corresponds to a k value of 0.0007; the choice question “$31 today or $85 in 7 days” in the 27-item MCQ corresponds to a k value of 0.25. By analyzing individuals’ choice patterns on the questionnaire, the individual k values can be estimated.
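The k value implied by any MCQ item follows directly from solving V = A / (1 + kD) for k at indifference, where V equals the immediate amount. As a minimal illustrative sketch (in Python here; the tool released with this article is in R):

```python
def item_k(immediate, delayed, delay_days):
    """k at which the two options are subjectively equal, obtained by
    solving V = A / (1 + kD) for k with V set to the immediate amount."""
    return (delayed - immediate) / (immediate * delay_days)

# The two example items from the text:
print(round(item_k(34, 35, 43), 4))  # 0.0007 (21-item MCQ item)
print(round(item_k(31, 85, 7), 2))   # 0.25 (27-item MCQ item)
```

Steeper discounters (larger personal k) prefer the immediate option on items with k values below their own rate and the delayed option on items above it, which is what makes the choice pattern informative.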

The utility of the MCQ is evident from the amount of research in various fields, such as addiction, education, and health [16–18]. Moreover, the MCQ is a pragmatic and efficient assessment that measures the same construct as lengthier adjusting-based behavioral tasks (e.g., titrating the amount of the immediate reward to approximate the subjective value of a delayed reward), making it particularly useful in clinical applications [10, 19]. Additionally, the MCQ is low-cost and easy to administer [5, 11, 12] and can be completed either by paper and pencil or through computerized programs. The wide application of the MCQ merits further research to enhance the rigor of its measurement, which motivates the current investigation. Like many other behavioral tasks, researchers frequently need to deal with missing data in the MCQ when forcing a response is not possible or required. Even when a response can be forced, a participant may choose to discontinue for various reasons (e.g., being indifferent between the choice options) and leave an incomplete observation. Significant challenges arise in the presence of missing data, as any number of missing responses interferes with MCQ scoring. While the methodologies for handling missing data have evolved and improved significantly in recent decades, their utilization is not completely automated, which can pose a hurdle to their implementation [20]. Finding a balance in the trade-off between complexity and convenience in missing data imputation relies on researchers’ discretion, especially given the lack of a consensus on the best practice for addressing missing responses in the MCQ. Commonly, researchers choose to exclude such data from the analysis, although this approach inevitably reduces statistical power (e.g., [21–23]). To address this challenge, the present study aims to provide guidance on handling missing values in the MCQ by evaluating different imputation approaches that emphasize ease of implementation.

The complexity involved in scoring the MCQ should be recognized. Different tools have been developed to automate the scoring process and reduce the risk of human error. For example, Kaplan et al. [24] developed freely available Excel-based scoring spreadsheets for both the 21- and 27-item MCQs. Their tool provides comprehensive information about the individual scores, including the k value, its log and natural log transformations, and summary statistics. However, their tool is limited to scoring up to 1000 responses at a time. Gray et al. [11] developed syntax to score both the 21- and 27-item MCQs in SPSS and R. Essentially, their tool conducts lookup operations using premade tables for any given choice pattern. Myerson et al. [25] proposed scoring the MCQ by calculating the proportion of items on which the larger, delayed reward is selected. Their method significantly simplifies the scoring process, and the proportional measure is highly correlated with the k value derived from the MCQ. Although convenient scoring methods are now available, none address missing responses; instead, some recommend deleting an observation if one or more responses are missing [11, 24].

The structure of the MCQ may be capitalized on to conveniently impute missing responses. Specifically, the items in the MCQ can be grouped into three amount sets (i.e., small, medium, and large), which were intended to investigate the effect of reward amount on the rate of delay discounting. These small, medium, and large items may be thought of as three alternative forms of the same questionnaire. Thus, an individual k can be estimated with any one of the three amount sets. Alternatively, a missing response to an item may be imputed from the non-missing responses to the items in the other amount sets. With this general concept, we developed three novel approaches to imputing missing data in the MCQ (see Methods section).

The objectives of the current study were to provide empirical guidance on and a practical tool for imputing missing data in the MCQ. To achieve this, we compared three different imputation approaches capitalizing on the amount set structure with mode imputation, a simple method for handling missing values in binary variables. Our investigation focused on whether these approaches prioritizing ease of implementation would yield unbiased estimates and, if so, which would produce the most accurate estimates. Specifically, a Monte Carlo simulation was conducted to evaluate the performance of each imputation approach in handling different numbers of missing responses. The findings from this investigation offer valuable insights for researchers seeking an effective imputation approach for their MCQ data analysis and contribute to enhancing the rigor of delay discounting measurement.

Methods

Monetary choice questionnaire

Both the 21- and 27-item MCQs comprise a series of dichotomous choices between smaller-immediate and larger-delayed hypothetical monetary rewards [14, 15]. By design, each item in the MCQ corresponds to a k value at which the two options (i.e., the smaller-immediate and the larger-delayed rewards) are subjectively equal according to the simple hyperbolic discounting model [2]. In addition, the items are divided into three sets based on whether the delayed amounts are small, medium, or large, and similar k values are used across the three sets. To score the MCQ, the items are ordered by their associated k values from smallest to largest. Specifically, when a choice pattern involves only one switching point at which a preference for a smaller-immediate reward changes to a preference for a larger-delayed reward, the individual k estimate can be inferred to lie between the k values of the items where the switch occurs.

When a choice pattern involves more than one switching point, a consistency score that considers all questionnaire responses must be calculated for each item. The calculation consists of counting the instances of choosing the smaller-immediate amount before the given k value and the instances of choosing the larger-delayed amount at and following the given k value. The items with the greatest consistency scores are then used to infer the individual k estimate. Notably, because the items in the MCQ can be grouped into three sets by amount, a single choice pattern on the questionnaire can generate at least four different k estimates (i.e., one k value derived from all the items, and k values for the items with small, medium, and large delayed amounts). An additional, fifth k estimate can be derived by calculating the geometric mean of the resulting k values for the small, medium, and large amounts (i.e., a composite value). Following the terms used in Kaplan et al. [24], we refer to these five k values as overall, small, medium, large, and composite ks throughout this article.
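The consistency-score logic described above can be sketched as follows (a minimal Python illustration, not the authors' R tool). Choices are coded 0 for the smaller-immediate and 1 for the larger-delayed reward, ordered by ascending item k:

```python
def consistency_scores(choices):
    """For each candidate switch point i (from before the first item to
    after the last), count immediate choices (0) before i plus delayed
    choices (1) at and after i. The i with the highest score brackets
    the individual's k estimate; ties yield multiple candidate brackets."""
    n = len(choices)
    return [sum(1 for c in choices[:i] if c == 0) +
            sum(1 for c in choices[i:] if c == 1)
            for i in range(n + 1)]

scores = consistency_scores([0, 0, 1, 1])  # a single switch after item 2
# the maximum score falls at i = 2, the switch point
```

For a pattern with exactly one switch, the maximum consistency equals the number of items and recovers the same bracket as simple inspection; with multiple switches, the scores arbitrate among the candidate brackets.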

We followed the Excel-based tool developed by Kaplan et al. [24] to develop an MCQ scorer in R. We also added new functions to allow our tool to impute missing data. These new functions are based on the imputation approaches described below. The R scoring tool, detailing every step in the procedure, is freely available at https://osf.io/p29uk/, and instruction on how to use it is provided in the supplementary document of this article (see S1 Appendix).

Imputation approaches

Approach 1 – mode imputation.

This imputation approach substitutes each missing value with the mode of the responses to the corresponding item. For example, if the majority of responses to “$31 today or $85 in 7 days” in a given sample is “$85 in 7 days”, this response is used to replace any missing responses for this item.
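A minimal Python sketch of this approach (illustrative only; the released tool is in R), operating on one item's responses across a sample, with `None` marking missing data:

```python
from statistics import mode

def mode_impute(item_responses):
    """Approach 1 sketch: replace missing (None) responses to a single
    item with the modal observed response across the sample."""
    observed = [r for r in item_responses if r is not None]
    fill = mode(observed)  # most common observed choice (0 or 1)
    return [fill if r is None else r for r in item_responses]

mode_impute([1, 1, 0, None])  # the missing entry becomes 1
```

Note that because the fill value depends on the sample, this approach pulls imputed observations toward the majority choice, which is the source of the bias reported in the Results.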

Approach 2 – group geometric mean (GGM).

The standard scoring procedure calculates the composite k only when all small, medium and large ks are available. This imputation approach relaxes this prerequisite and calculates the composite k when at least one of the three amount set ks is fully available. For example, if the small k cannot be derived due to missing responses, the composite k will be calculated with the medium and large ks; if both small and medium ks cannot be derived due to missing responses, the composite k will be equal to the large k. In sum, this imputation approach permits the estimation of an individual composite k when the missing responses do not appear in all three amount sets.
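The relaxed composite calculation can be sketched in Python (an illustrative reimplementation of Approach 2, not the authors' R code):

```python
import math

def ggm_composite_k(small_k=None, medium_k=None, large_k=None):
    """Approach 2 sketch: geometric mean of whichever amount-set ks
    could be derived; returns None only when all three are missing."""
    ks = [k for k in (small_k, medium_k, large_k) if k is not None]
    return math.prod(ks) ** (1 / len(ks)) if ks else None

ggm_composite_k(medium_k=0.01, large_k=0.04)  # geometric mean of two ks, 0.02
```

With all three ks present, this reduces to the standard composite k; with one present, it degenerates to that single amount-set k, exactly as described above.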

Approach 3 – item nearest neighbor (INN) without random.

This imputation approach replaces the missing value with the congruent non-missing responses to the items corresponding to the same k value. For example, in the 21-item MCQ, if the response to “$34 tonight or $35 in 43 days” (small amount) is missing, then the responses to “$53 tonight or $55 in 55 days” (medium amount) and “$83 tonight or $85 in 35 days” (large amount) will be referenced because all three items correspond to the same k value (i.e., 0.0007). Suppose the responses to the medium and large amount items are congruent (i.e., choosing immediate or delayed rewards for both items). In that case, the same choice will be assumed for the small amount item. However, if the responses to the medium and large amount items are incongruent, the response to the small amount item will be left missing. Notice that in the case where only one item could be referenced (e.g., the responses to both the medium and large amount items are missing, but the response to the small amount item is non-missing), the single non-missing data will replace the missing responses to the other items corresponding to the same k value.
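The congruence rule can be sketched as follows (a minimal Python illustration of Approach 3; the data layout, a mapping from each k value to its [small, medium, large] responses, is an assumption made for the example):

```python
def inn_impute(by_k):
    """Approach 3 sketch: by_k maps each k value to the [small, medium,
    large] responses (0/1, or None if missing). A missing response is
    filled only when every non-missing sibling item gives the same answer."""
    for k, trio in by_k.items():
        observed = {r for r in trio if r is not None}
        if len(observed) == 1:  # congruent siblings, or a single observed one
            fill = observed.pop()
            by_k[k] = [fill if r is None else r for r in trio]
        # incongruent siblings (or all three missing): leave the gaps
    return by_k

inn_impute({0.0007: [None, 1, 1], 0.25: [None, 0, 1]})
# the first item is filled with 1; the second stays missing (incongruent)
```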

Approach 4 – item nearest neighbor (INN) with random.

This imputation approach is identical to approach 3, except that when a missing response cannot be resolved, this datum will be randomly replaced with 0 or 1, corresponding to choosing immediate or delayed rewards, respectively. As such, no data will be missing when this imputation approach is implemented.
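The additional random-fill step can be sketched as follows (an illustrative Python fragment; it assumes the same k-to-trio data layout as the Approach 3 description and runs after the congruence-based step):

```python
import random

def fill_remaining(by_k, rng=None):
    """Approach 4 sketch: after the congruence-based step of Approach 3,
    replace each still-missing response with a random 0 (immediate) or
    1 (delayed), so every observation becomes scoreable."""
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    return {k: [rng.randint(0, 1) if r is None else r for r in trio]
            for k, trio in by_k.items()}
```

Because this step runs only on the residue the congruence rule could not resolve, the two INN variants produce identical results whenever Approach 3 can fully impute an observation.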

Approach evaluation

Datasets.

To evaluate the performance of each imputation approach, two datasets from previously published studies were utilized [26, 27]. The first and second datasets comprise 900 and 512 complete observations (i.e., no missing data) for the 21- and 27-item MCQs, respectively. In both studies, a choice questionnaire with a structure similar to the MCQ was developed to measure probability discounting, the decrease in the subjective value of a probabilistic reward as the likelihood of its occurrence decreases. Permission to use the datasets was obtained by contacting the corresponding authors. Both datasets were de-identified, and only the information relevant to the analysis in this study was included.

Monte Carlo simulation.

Monte Carlo simulation is a what-if analysis that relies on repeated random sampling to quantify the uncertainty associated with different data conditions [28]. In this study, we examined the performance of each imputation approach in handling different numbers of missing responses (rs) to the MCQ, ranging from 1 to 5 for the 21-item version and from 1 to 7 for the 27-item version (approximately 25% of the items in each version), with 1000 iterations. In each iteration, a series of performance measures was computed for the 12 conditions (5 for the 21-item version and 7 for the 27-item version). Specifically, under each condition, a fixed number of responses was randomly chosen and removed from each observation, yielding a dataset with missing data. Each imputation approach was then applied to this dataset to obtain individual composite k values. The mean difference between the composite ks derived from the datasets with and without missing data was calculated as a performance measure at the group level. The root-mean-square deviation (RMSD), the square root of the mean of the squared differences between the actual and imputed composite ks, was calculated as a measure of performance at the individual level. Furthermore, the correlation between the true and imputed composite ks was computed. Unlike mode imputation (approach 1) and INN with random (approach 4), the ability of GGM (approach 2) and INN without random (approach 3) to impute a dataset depends on the pattern of missing responses across the amount sets; thus, the proportion of observations in each dataset that these approaches could not handle was also calculated. Finally, to evaluate the influence of the imputation approaches on the delay discounting research results, the correlation between the natural logarithmic composite k (to approximate a normal distribution) and the probability discounting measure was calculated and pitted against the true value from the complete dataset.
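The two deviation measures can be expressed compactly (an illustrative Python sketch; the study's analyses used the authors' R tooling):

```python
import math

def mean_difference(true_ks, imputed_ks):
    """Group-level bias: mean of (imputed - true) composite ks.
    Zero-centered distributions across iterations indicate unbiasedness."""
    return sum(i - t for t, i in zip(true_ks, imputed_ks)) / len(true_ks)

def rmsd(true_ks, imputed_ks):
    """Individual-level accuracy: root-mean-square deviation between
    true and imputed composite ks (lower is better)."""
    return math.sqrt(sum((i - t) ** 2 for t, i in zip(true_ks, imputed_ks))
                     / len(true_ks))
```

Mean difference can mask offsetting over- and under-estimates across individuals, which is why RMSD is reported alongside it as the individual-level measure.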

Results

Figs 1 and 2 depict the results of the simulations for the 21- and 27-item MCQs, respectively, and Tables 1 and 2 summarize the mean and standard deviation of each distribution. As anticipated, with an increase in r, the deviations of the imputed composite k grew at both the group and individual levels across the approaches. The mean difference for mode imputation slightly increased with r, whereas the distributions of the mean difference for the other imputation approaches were centered around 0, indicating unbiased estimation at the group level. Notably, across all performance measures (i.e., mean difference, RMSD, and correlation with the true score), INN with random (approach 4) consistently outperformed the other approaches and had the advantage of imputing all observations. Similarly, in the distributions of the correlation between the natural logarithmic composite k and probability discounting, INN with random and mode imputation exhibited comparable performance, yielding the least biased measure (see Fig 3).

Fig 1. Performance of each imputation approach for the 21-item MCQ.

Panels A-D, E-H, I-L, and M-P are the results from mode imputation (approach 1), group geometric mean (GGM; approach 2), item nearest neighbor without random (INN w/o Random; approach 3) and item nearest neighbor with random (INN w/ Random; approach 4), respectively. RMSD = root-mean-square deviation; r = number of missing responses in each observation.

https://doi.org/10.1371/journal.pone.0292258.g001

Fig 2. Performance of each imputation approach for the 27-item MCQ.

Panels A-D, E-H, I-L, and M-P are the results from mode imputation (approach 1), group geometric mean (GGM; approach 2), item nearest neighbor without random (INN w/o Random; approach 3) and item nearest neighbor with random (INN w/ Random; approach 4), respectively. RMSD = root-mean-square deviation; r = number of missing responses in each observation.

https://doi.org/10.1371/journal.pone.0292258.g002

Fig 3.

The distributions of the correlations between the imputed estimates and the probability discounting for the (A-D) 21-item and (E-H) 27-item MCQs. Panels A and E, B and F, C and G, and D and H are the results from mode imputation (approach 1), group geometric mean (GGM; approach 2), item nearest neighbor without random (INN w/o Random; approach 3) and item nearest neighbor with random (INN w/ random; approach 4), respectively. The black solid vertical lines indicate the observed correlations calculated from the original datasets. r = number of missing responses in each observation.

https://doi.org/10.1371/journal.pone.0292258.g003

Table 1. Average performance of imputation approaches across performance measures for the 21-item MCQ.

https://doi.org/10.1371/journal.pone.0292258.t001

Table 2. Average performance of imputation approaches across performance measures for the 27-item MCQ.

https://doi.org/10.1371/journal.pone.0292258.t002

Discussion

The aim of the current study was to evaluate three novel approaches designed to impute missing data in the 21- and 27-item MCQs and compare their performance with mode imputation. The Monte Carlo simulation with different numbers of missing responses in each observation revealed that INN with random (approach 4) consistently outperformed the other approaches, including mode imputation. This approach involves replacing the missing value with the congruent non-missing responses from items corresponding to the same k value. Any residual missing values are then replaced with random responses.

The fact that adding randomness to the data produced a more precise, unbiased estimate may appear puzzling. However, this finding can be explained by the scoring procedure of the MCQ. Instead of deriving a score by summing responses like many other questionnaires, the scoring of the MCQ relies on the overall choice pattern that determines the consistency score of each item. Replacing a missing value with a random response has no effect on the individual k estimate unless that response would change which item has the highest consistency score. Notably, in the current study, INN without and with random (approaches 3 and 4) are mostly identical and would produce the same results whenever the former approach can fully impute a tested dataset. The advantage of INN with random observed in the performance measures arises simply because this approach could impute all missing data across conditions. This finding indicates that adding randomness is more beneficial than removing observations when handling missing data in the MCQ, highlighting the importance of maximizing the sample size with imputation methods.

Unlike GGM (approach 2), which is designed to impute the individual k estimate, INN without and with random (approaches 3 and 4) are intended to impute the missing data at the item level, which makes them versatile. Specifically, alternative ways to score the MCQ without yielding a k value exist, such as calculating the proportion of items on which the larger, delayed reward is selected [25]. For researchers who opt for alternative scoring approaches, both INN without and with random can still be used to impute the missing data. Although the current study did not evaluate the influence of these two imputation approaches on the alternative scores, similar outcomes can be assumed, considering the high correlations that have been observed between the k value and the alternative measures [25, 29]. Mode imputation is another convenient way to handle missing data in this regard. However, our simulation results recommend against its use because it may produce biased estimates.

Although not in the scope of this research, multilevel logistic modeling is a relevant approach to treating missing data and may be a preferred method to score the MCQ. The multilevel modeling approach makes inferences from all observed data and does not rely on imputation. When the assumption of missing completely at random (i.e., no systematic differences between the missing and the observed data) or missing at random (other observed variables can entirely explain the systematic differences between the missing and the observed data) is met, this approach is free from sample and estimate biases [30]. In addition, multilevel modeling eliminates any issues associated with the two-stage analytical approach in which individual discounting rates are determined and later used in subsequent analysis. However, multilevel logistic modeling significantly increases the overall complexity of the analysis, and the estimates may fail to converge [29, 31]. Moreover, this approach produces two separate coefficients, one for Amount and one for Delay, instead of a unified discount rate estimate (e.g., k value), which complicates comparison with the existing literature. Thus, some researchers may choose the conventional MCQ scoring tools over multilevel logistic modeling, and our imputation approaches are complementary to handling missing data in this scenario.

The present study possesses several limitations that warrant acknowledgment. Firstly, our evaluation of different imputation approaches is restricted to observations with up to 5 and 7 missing responses for the 21- and 27-item MCQs, respectively. Consequently, the precise imputation capacity of these approaches for observations with higher numbers of missing responses remains uncertain. Secondly, both datasets employed in this study consisted of online samples from general populations. As the MCQ holds clinical relevance, the generalizability of our findings to diverse populations necessitates further investigation. Thirdly, an imputation approach discussed by Gray et al. [11] that also capitalizes on the structure of the MCQ was not evaluated. That approach replaces the missing data with the response to the item with the closest k value in the same amount set; in other words, a response to an item corresponding to a different k value is used to replace the one that is missing. Given that this approach will inevitably produce biased estimates, we opted not to include it in the current investigation.

Finally, whether other generic imputation approaches such as logistic regression would perform better than INN with random remains unclear. Nonetheless, such an investigation is beyond the scope of the current study, as our primary objective is to offer guidance on handling missing values in the MCQ while balancing complexity and convenience. In contrast to these more complex methods, which often incorporate additional information such as demographics or performance on other measures, the approaches examined in our study rely solely on MCQ data. This distinction underscores the simplicity of our chosen approaches, potentially promoting their adoption, enhancing the scientific integrity of measurement, and facilitating data reproducibility.

Supporting information

S1 Appendix. Instructions for the R tool for scoring the 21- and 27-item Monetary Choice Questionnaire (MCQ).

https://doi.org/10.1371/journal.pone.0292258.s001

(DOCX)

References

  1. Green L, Myerson J. A discounting framework for choice with delayed and probabilistic rewards. Psychol Bull. 2004;130:769–92. pmid:15367080
  2. Mazur JE. An adjusting procedure for studying delayed reinforcement. In: Commons ML, Mazur JE, Nevin JA, editors. The effect of delay and of intervening events on reinforcement value. New Jersey: Lawrence Erlbaum Associates, Inc; 1987. p. 55–73.
  3. Amlung M, Petker T, Jackson J, Balodis I, MacKillop J. Steep discounting of delayed monetary and food rewards in obesity: a meta-analysis. Psychol Med. 2016;46:2423–34. pmid:27299672
  4. Ioannidis K, Hook R, Wickham K, Grant JE, Chamberlain SR. Impulsivity in gambling disorder and problem gambling: a meta-analysis. Neuropsychopharmacology. 2019;44:1354–61. pmid:30986818
  5. MacKillop J, Amlung MT, Few LR, Ray LA, Sweet LH, Munafò MR. Delayed reward discounting and addictive behavior: a meta-analysis. Psychopharmacology. 2011;216:305–21. pmid:21373791
  6. Yeh Y-H, Myerson J, Green L. Delay discounting, cognitive ability, and personality: what matters? Psychon Bull Rev. 2021;28:686–94. pmid:33219456
  7. Amlung M, Marsden E, Holshausen K, Morris V, Patel H, Vedelago L, et al. Delay discounting as a transdiagnostic process in psychiatric disorders: a meta-analysis. JAMA Psychiatry. 2019;76:1176–86. pmid:31461131
  8. Bickel WK, Koffarnus MN, Moody L, Wilson AG. The behavioral- and neuro-economic process of temporal discounting: a candidate behavioral marker of addiction. Neuropharmacology. 2014;76 Pt B:518–27. pmid:23806805
  9. Bickel WK, Freitas-Lemos R, Tomlinson DC, Craft WH, Keith DR, Athamneh LN, et al. Temporal discounting as a candidate behavioral marker of obesity. Neurosci Biobehav Rev. 2021;129:307–29. pmid:34358579
  10. Amlung M, Vedelago L, Acker J, Balodis I, MacKillop J. Steep delay discounting and addictive behavior: a meta-analysis of continuous associations. Addiction. 2017;112:51–62. pmid:27450931
  11. Gray JC, Amlung MT, Palmer AA, MacKillop J. Syntax for calculation of discounting indices from the monetary choice questionnaire and probability discounting questionnaire. J Exp Anal Behav. 2016;106:156–63. pmid:27644448
  12. Matta A, Gonçalves FL, Bizarro L. Delay discounting: concepts and measures. Psychol Neurosci. 2012;5(2):135–46.
  13. Weinsztok S, Brassard S, Balodis I, Martin LE, Amlung M. Delay discounting in established and proposed behavioral addictions: a systematic review and meta-analysis. Front Behav Neurosci. 2021;15:786358. pmid:34899207
  14. Kirby KN, Maraković NN. Delay-discounting probabilistic rewards: rates decrease as amounts increase. Psychon Bull Rev. 1996;3:100–4. pmid:24214810
  15. Kirby KN, Petry NM, Bickel WK. Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. J Exp Psychol Gen. 1999;128:78–87. pmid:10100392
  16. Duckworth AL, Seligman MEP. Self-discipline outdoes IQ in predicting academic performance of adolescents. Psychol Sci. 2005;16:939–44. pmid:16313657
  17. Kirby KN, Finch JC. The hierarchical structure of self-reported impulsivity. Pers Individ Dif. 2010;48:704–13. pmid:20224803
  18. Kirby KN, Petry NM. Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction. 2004;99:461–71. pmid:15049746
  19. Wan H, Myerson J, Green L. Individual differences in degree of discounting: do different procedures and measures assess the same construct? Behav Processes. 2023;208:104864. pmid:37001683
  20. Enders CK. Missing data: an update on the state of the art. Psychol Methods. 2023. pmid:36931827
  21. Mishra S, Lalumière ML. Associations between delay discounting and risk-related behaviors, traits, attitudes, and outcomes. J Behav Decis Mak. 2017;30:769–81.
  22. Simmen-Janevska K, Forstmeier S, Krammer S, Maercker A. Does trauma impair self-control? Differences in delaying gratification between former indentured child laborers and nontraumatized controls. Violence Vict. 2015;30:1068–81.
  23. Teti Mayer J, Nicolier M, Tio G, Mouchabac S, Haffen E, Bennabi D. Effects of high frequency repetitive transcranial magnetic stimulation (HF-rTMS) on delay discounting in major depressive disorder: an open-label uncontrolled pilot study. Brain Sci. 2019;9. pmid:31514324
  24. Kaplan BA, Amlung M, Reed DD, Jarmolowicz DP, McKerchar TL, Lemley SM. Automating scoring of delay discounting for the 21- and 27-item monetary choice questionnaires. The Behavior Analyst. 2016;39:293–304. pmid:31976983
  25. Myerson J, Baumann AA, Green L. Discounting of delayed rewards: (a)theoretical interpretation of the Kirby questionnaire. Behav Processes. 2014;107:99–105. pmid:25139835
  26. Jarmolowicz DP, Bickel WK, Carter AE, Franck CT, Mueller ET. Using crowdsourcing to examine relations between delay and probability discounting. Behav Processes. 2012;91:308–12. pmid:22982370
  27. Yeh Y-H. Evaluating everyday behaviors with delayed and/or probabilistic consequences through a discounting framework [dissertation]. St. Louis (MO): Washington University in St. Louis; 2021.
  28. Mooney CZ. Monte Carlo simulation. New York: SAGE; 1997.
  29. Wileyto EP, Audrain-McGovern J, Epstein LH, Lerman C. Using logistic regression to estimate delay-discounting functions. Behav Res Methods. 2004;36:41–51. pmid:15190698
  30. Gueorguieva R, Krystal JH. Move over ANOVA: progress in analyzing repeated-measures data and its reflection in papers published in the Archives of General Psychiatry. Arch Gen Psychiatry. 2004;61:310–17. pmid:14993119
  31. Young ME. Discounting: a practical guide to multilevel analysis of indifference data. J Exp Anal Behav. 2017;108:97–112. pmid:28699271