What follows is a slight reworking of a letter (or the modern electronic version of one) that I wrote to a friend who sought my advice on the standards to which I hold myself in my scientific pursuits and on how best to conduct original research:
I’ve been getting very disillusioned with the whole academic practice of science of late. While I strive for rigor and scientific integrity, I have seen so much lackluster work being distributed, along with a highly distressing culture of publish-or-perish and reputation-building, people scooping each other’s ideas, and a general lack of collaborative spirit, all of which are making me despise the establishment way of conducting science. Disclaimer: I have not systematically surveyed the entire establishment, and my observations are likely limited in how broadly they apply; I am merely reporting from my own milieu, hopelessly trapped within my epistemic bubble as I am.
My complaints notwithstanding, I do not wish to abandon the pursuit of knowledge. Research is still most definitely my thing, but I’m angling for an industry position conducting scientific research. The way I see it, I’ll still design experiments and ask the interesting questions, but I’ll be compensated for it (if I’m fortunate enough to work for a highly profitable business), and it will not define my career the way that constant publishing and grant applications define a university academic’s.
With regard to the standards for conducting research: one lesson I’ve learned about scientific integrity is to have as much as possible determined before actually running an experiment. This came after my many failed fishing expeditions, in which I would follow a hunch and inevitably end up with uninterpretable data. I have seen it become a very common practice for researchers to scramble to piece together a mish-mash of a paper out of the various findings from an exploratory experiment; the analysis is entirely post hoc but is made to look confirmatory in the publication. So I would strongly encourage determining, in advance and even before piloting anything, a detailed description of the hypothesis and research question, the full design, the number of participants needed, and all other details of the experiment. Of course, after piloting, methodologies may still change, but there ought to be a protocol devised beforehand (a preregistration, in effect), and it should be adhered to. This applies just as much to highly computational projects as to simpler experimental designs: in all cases, researchers should have as much of the methodology and analysis planned out upfront as possible, so that the tendency to “p-hack” is minimized.
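To make that concrete, here is a minimal sketch (my own illustration, not part of the original letter) of what freezing the analysis plan might look like in a computational project. The hypothesis, sample size, and test named here are hypothetical placeholders:

```python
# A minimal, hypothetical pre-registration sketch: every analytic decision
# is fixed here, before any data are collected or inspected.
from dataclasses import dataclass

from scipy import stats  # any standard stats library would do


@dataclass(frozen=True)  # frozen: the plan cannot be changed after the fact
class Protocol:
    hypothesis: str = "Group A scores higher than Group B on task X"
    n_per_group: int = 64       # placeholder; from an a priori power analysis
    alpha: float = 0.05
    test: str = "independent-samples t-test, two-sided"


PLAN = Protocol()


def preregistered_analysis(group_a, group_b):
    """Run exactly the analysis specified in PLAN, nothing more."""
    assert len(group_a) == len(group_b) == PLAN.n_per_group
    t, p = stats.ttest_ind(group_a, group_b)
    return {"t": t, "p": p, "significant": p < PLAN.alpha}
```

The point of the frozen dataclass is simply that the plan is written down, and locked, before any data exist; by the time the analysis runs, there are no researcher degrees of freedom left to exploit.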
A word about this “p-hacking”: I know I’ve done it. It has become something of a common practice in our field, despite vehement condemnation from the very people doing it. Unfortunately, the way the behavioural sciences use statistical conventions all but guarantees that the temptation will be nearly irresistible. The search for that sacrosanct p below 0.05 has dwarfed all other means of assessing the significance of a finding, and this deplorable state of affairs makes me deeply uncomfortable. I am glad that the field is moving away from the binary nature of this criterion, that journals now ask for reports of effect sizes and confidence intervals, and that Bayesian methods of hypothesis testing are gaining traction, but I am not sure how long it will take to overcome the inertia of poor science.
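To see why the temptation is so strong, consider a quick simulation (again my own illustration, with made-up numbers): on pure noise, scanning twenty outcome measures for p < 0.05 will produce at least one “significant” result about 64% of the time (1 − 0.95^20 ≈ 0.64).

```python
# Illustration of p-hacking on pure noise: test enough outcomes and
# spurious "significant" results are nearly guaranteed. Numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_outcomes, n_per_group = 20, 30   # 20 measures, none with a real effect

false_hits = 0
for _ in range(n_outcomes):
    group_a = rng.normal(size=n_per_group)  # both groups drawn from the
    group_b = rng.normal(size=n_per_group)  # same distribution: no effect
    _, p = stats.ttest_ind(group_a, group_b)
    false_hits += p < 0.05

# Expected: the chance of at least one p < 0.05 across 20 independent
# null tests is 1 - 0.95**20, roughly 64%.
print(f"{false_hits} of {n_outcomes} null comparisons reached p < 0.05")
```

Correcting for multiple comparisons helps, but pre-specifying the single primary outcome, as above, is what keeps this from contaminating the literature in the first place.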
Finally, humility is a really important trait in scientists, as it is in humans more generally. We do have to stand on the shoulders of giants, after all. Being extremely thorough in your reading of the literature will benefit you enormously, in contrast to the hubris of placing too much faith in your own scientific intuition. Try to develop the habit of routinely reading the major journals and articles in your broader area; it helps a great deal, and it’s something no one ever really taught me, so I had to develop the habit on my own. You are at your best as a scientist when you situate your questions within the broader field in which you are working. This may seem obvious, but all too often I’ve seen people operating in a vacuum, oblivious to the many other findings that connect to their interests. A reviewer may well bring several of these to light and ask you to reference them in the discussion, but it serves you far better to have been aware of them while you were designing the experiment. In other words, the motivation for an experiment ought to be to answer the open questions that can be synthesized from an extremely thorough reading of the literature. After all, Einstein’s theory of relativity arguably would not have been possible without his knowledge of the null result of the Michelson-Morley experiment, and of the many patents he read during his time working at the Patent Office.
Humility also matters in not concluding more than the data can support. Again, I keep seeing people oversell their results as revolutionary and ground-breaking, all in the interest of building their reputations, winning bigger and better grants, and getting noticed by the community, all of which detracts from the honest and diligent application of the scientific method. I much prefer remaining honest about what the data say and about the legitimate limitations of the experimental design, even if that waters the conclusions down to the highly tentative. I feel very strongly that even my null findings are of the utmost importance, and I strongly oppose the bias toward publishing only positive results. Unfortunately, that is not the academic climate we currently operate in.
I am a proponent of the scientific method. I believe it is an admirable application of pure reason and observation, the critiques of the former aside. If we are to take the epistemic leap and eschew solipsism, then this method is the very best that we humans can do to advance our collective knowledge. It does, however, rely on the integrity and honesty of each scientist who applies it, and so it is inevitably error-prone. Even that is acceptable, because the criterion of replicability makes the method self-correcting, though only in the long run. In the meantime, we must acquiesce to being surrounded by the lackluster a while longer, until the next genius comes along and revolutionizes our field by unearthing the real treasures that Nature so mischievously hides from us.
It’s been 20 years now since I worked on a research grant aimed at understanding the hormonal implications of repeated stress. I was an undergraduate at Boulder working in Dr. Spencer’s Neuroscience lab. I was working on a tiny part of the bigger picture of what we were doing in the lab, but that experience gave me a glimpse into this world of publish or perish. It seems things haven’t changed that much… I think we called it p-chasing back then… The puffing up of studies… The lack of a coordinated mission or a thorough review of the literature prior to embarking on research.
Nevertheless, you’re doing some cool stuff and I’m not sure if the private sector is much better. More lucrative, but maybe shadier in some ways.
Thanks for the comment, Cort! Yeah, I agree with what you said re the private sector, but I’m at a point where I want to try something different to see how I like it. You may be right about it being shadier, but perhaps the compensation will make up for that, or maybe I’ll find my way back to the academic land. Who knows what life has in store for us!?