Evaluating personality research on its own merits
The credibility revolution has been going strong in personality psychology for the last decade, building on many decades of emphasis on methods and rigour in personality science. However, a fundamental question in research evaluation remains: How can we tell which scientific findings are credible? Peer-reviewed journals, even prestigious ones, do not provide much assurance regarding the credibility of any individual paper. Ideally, we would read each paper carefully when deciding what to trust, but this is often impossible (e.g., when we lack the expertise to evaluate the methods) or impractical (e.g., when we need to evaluate research at scale). I present a proposal for eliciting structured quantitative ratings of quality for personality and assessment research. Scores along multiple dimensions could be combined into a variety of metrics, or “Quality Factors” (QFs), that vary in the weight placed on different qualities. These QFs would provide easily digestible and flexible quality ratings of individual papers that could be useful to other scientists, to journalists and policymakers, and to the public. QFs would also help incentivize authors to “get it right” rather than merely get published in prestigious journals, because rewards and recognition could be tied to these more transparent, accountable, and valid metrics rather than to journal prestige.
Speaker bio: Simine Vazire’s research examines whether and how science self-corrects, focusing on psychology. She studies the research methods and practices used in psychology, as well as structural systems in science, such as peer review. She also examines whether people know themselves, and where the blind spots are in our self-knowledge. She teaches research methods. She is Editor in Chief of Collabra: Psychology, one of the Principal Investigators on the repliCATS project, and was the co-founder (with Brian Nosek) of the Society for the Improvement of Psychological Science.