Reader Comments
Comments on your recent publication in PLOS ONE
Posted by sabel on 18 Apr 2016 at 23:06 GMT
Dear authors:
I have read your recent article, published in PLOS ONE, with interest. I have some background in statistics, so I was able to follow your paper. I am also in the process of learning R.
It is commendable that you are working on this problem, particularly on adjusting the critical value to restore the Type 1 error rate. The publication biases you address are indeed important. I have encountered such difficulties myself, where unpublished data or other information underlying a published report causes problems. Authors often include such information as online supplemental material, but that data can be difficult to obtain, and standardized definitions are also lacking.
Critical values are fascinating subjects of study. The critical value for a hypothesis test depends on the test statistic, which is specific to the type of test, and on the significance level α, which defines the sensitivity of the test. A value of α = 0.05 means that a true null hypothesis is rejected 5% of the time. The choice of α is somewhat arbitrary, although values of 0.1, 0.05, and 0.01 are common in practice. Critical values are essentially cut-off points defining the region where the test statistic is unlikely to fall if the null hypothesis is true; the statistic exceeds the critical value with probability α under the null. The null hypothesis is rejected when the test statistic falls in this region, which is often referred to as the rejection region. Making reasonable and appropriate adjustments to these values is important.
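
To make this concrete, here is a minimal sketch of the generic two-sided z-test case in Python (this is purely an illustration on my part, not code from your article, and it assumes the scipy package is available):

    from scipy.stats import norm

    alpha = 0.05  # significance level

    # Two-sided z-test: the critical value is the (1 - alpha/2) quantile
    # of the standard normal distribution.
    z_crit = norm.ppf(1 - alpha / 2)
    print("critical value:", round(z_crit, 3))  # about 1.960

    # The rejection region is |z| > z_crit; reject H0 when the observed
    # test statistic falls inside it.
    z_obs = 2.3
    print("reject H0 at alpha =", alpha, ":", abs(z_obs) > z_crit)

An adjusted critical value, as you propose, would simply replace z_crit in a scheme like this.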
You have also referred to satisficing models. These are often studied in the field of artificial intelligence, where a "satisficing model" is defined as an approximate model that enables an agent to reliably perform at an acceptable level of effectiveness. However, the agents involved in such efforts must be able to evaluate the accuracy and reliability of their current models, predict the implications of building more accurate models, and analyze which components of their world models would yield the largest incremental payoff if enhanced. It seems that, for the file-drawer problem, a satisficing model may be worthwhile.