To the Editor:
One major problem not mentioned in your article, “A New Theory on How Researchers Can Solve the Reproducibility Crisis: Do the Math” (The Chronicle, June 28), is the frequent lack of consensus among even supposedly “expert” statisticians when it comes to the proper selection of analytic tools and interpretation of results. In my field we rely quite a bit on graphic analysis of time-series data. This approach is often criticized as overly subjective. What I always tell my students is that it is no different in the statistical arena. Once you get beyond basic statistical issues (e.g., sample size and power), you can ask 10 different statistical experts about selecting and interpreting the “right” tests and analyses and get 10 different answers. This becomes very evident in the federal grant-application process. One can employ a statistical expert to help with that aspect of a proposal, and then gnash one’s teeth in frustration when the reviews come back with the proposed analytic methods torn to shreds by the reviewers. So what is being described may be a reflection of the lack of consensus among statisticians more than of supposed errors committed by researchers.