
Publications



READ OUR RESULTS

Selected Research Articles






CORE Lab

VOIS: A framework for recording Voice Over Internet Surveys

Abstract

Verbal data provides researchers insight beyond that offered by text-based responses, including tone, reasoning elaboration, and experienced difficulty, among other processes. It also offers a less cognitively taxing way for participants to provide long responses. Verbal data collection methods are found in a variety of fields, but they are mostly conducted in lab-based settings or require specialized hardware. Restricting verbal protocols to lab-based settings has several drawbacks, including decreased sample sizes, biased populations, reduced adoption, and incompatibility with potential social distancing requirements. No method currently exists for researchers to collect verbal data in major online survey collection platforms. The current paper offers a user-friendly approach for collecting verbal data online, where a researcher can copy and paste JavaScript code into the desired survey platform. By providing a framework that does not require any advanced programming ability, researchers can collect verbal data in a scalable way using familiar modalities.
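The abstract describes copy-and-paste JavaScript for recording audio inside a survey platform. As a rough illustration of the general idea, and not the published VOIS code, a browser's standard MediaRecorder API can capture microphone audio and hand it back as a base64 data URL; the `chunkString` helper, and the notion of splitting the encoding to fit a platform's embedded-data field size limits, are assumptions made for this sketch.

```javascript
// Split a base64 string into fixed-size pieces, e.g. to fit a survey
// platform's embedded-data fields (the size limit is an assumption).
function chunkString(s, size) {
  const pieces = [];
  for (let i = 0; i < s.length; i += size) {
    pieces.push(s.slice(i, i + size));
  }
  return pieces;
}

// Browser-only wiring: ask for the microphone, record for durationMs,
// and pass the base64-encoded audio (a data URL) to a callback on stop.
function startVoiceCapture(onAudioReady, durationMs = 10000) {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: "audio/webm" });
      const reader = new FileReader();
      reader.onloadend = () => onAudioReady(reader.result);
      reader.readAsDataURL(blob);
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

`startVoiceCapture` only runs in a browser; `chunkString` is a plain helper usable anywhere.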

Citation

Ristow, T., & Hernandez, I. (2023). VOIS: A framework for recording Voice Over Internet Surveys. Behavior Research Methods. https://doi.org/10.3758/s13428-022-02045-6







The AI-IP: Minimizing the Guesswork of Personality Scale Item Development Through Artificial Intelligence

Abstract

We propose a framework for integrating various modern natural language processing models to assist researchers in developing valid psychological scales. Transformer-based deep neural networks offer state-of-the-art performance on a variety of natural language tasks (Vaswani et al., 2017). This project adapts the transformer model GPT-2 (Radford et al., 2019) to learn the structure of personality items and generate the largest openly available pool of personality items, consisting of one million new items. We then use that Artificial Intelligence-based Item Pool (AI-IP) to provide a subset of items that are potentially relevant to a specified construct. To make recommendations for a desired construct, we train a paired-input BERT classification model to predict the observed correlation between personality items using only their text. We also demonstrate how zero-shot models can help balance desired content domains. In combination with the AI-IP, these models narrow the large item pool to the items most correlated with a set of initial items. We demonstrate the ability of this multi-model framework to develop longer, cohesive scales from a small set of construct-relevant items. We found reliability, validity, and fit for AI-assisted scales equivalent to those of scales developed and optimized through traditional methods. By leveraging neural networks' ability to generate text relevant to a given topic and infer semantic similarity, this project demonstrates how to support the creative and open-ended elements of the scale development process, increasing the likelihood that one's initial scale is valid and minimizing the need to modify and re-validate it.
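The narrowing step described above, ranking a large item pool by relatedness to a small seed set, can be mimicked in miniature. This sketch is not the paper's BERT-based model; it substitutes plain cosine similarity over invented, pre-computed item embeddings purely to illustrate the ranking logic.

```javascript
// Cosine similarity between two equal-length numeric vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Average the seed items' embeddings into a centroid, then rank each
// candidate item by its similarity to that centroid (highest first).
function rankCandidates(seedVecs, candidates) {
  const dim = seedVecs[0].length;
  const centroid = new Array(dim).fill(0);
  for (const v of seedVecs) {
    for (let i = 0; i < dim; i++) centroid[i] += v[i] / seedVecs.length;
  }
  return candidates
    .map((c) => ({ ...c, score: cosine(centroid, c.vec) }))
    .sort((x, y) => y.score - x.score);
}
```

With two extraversion-flavored seed vectors, a candidate whose (hypothetical) embedding points the same way rises to the top of the list.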

Citation

Hernandez, I., & Nie, W. (2022). The AI-IP: Minimizing the guesswork of personality scale item development through artificial intelligence. Personnel Psychology, peps.12543. https://doi.org/10.1111/peps.12543







The importance of being unearnest: Opportunists and the making of culture.

Abstract

Opportunistic actors, who behave expediently, cheating when they can and offering minimal cooperation only when they have to, play an important role in producing some puzzling phenomena, including the flourishing of strong reciprocity, the peculiar correlation between positive and negative reciprocity within cultures of honor, and low levels of social capital within tight and collectivist cultures (which one might naively assume would produce high levels of social capital). Using agent-based models and an experiment, we show how Opportunistic actors enable the growth of Strong Reciprocators, whose strategy is the exact opposite of the Opportunists'. Additionally, previous research has shown how the threat of punishment can sustain cooperation within a group. However, the present studies illustrate how stringent demands for cooperation and severe punishments for noncooperation can also backfire and reduce the amount of voluntary, uncoerced cooperation in a society. The studies illuminate the role Opportunists play in producing these backfire effects. In addition to highlighting other features shaping culture (e.g., risk and reward in the environment, "founder effects" requiring a critical mass of certain strategies at a culture's initial stage), the studies help illustrate how Opportunists create aspects of culture that otherwise seem paradoxical, are dismissed as "error," or produce unintended consequences.

Citation

Hernandez, I., Cohen, D., Gruschow, K., Nowak, A., Gelfand, M. J., & Borkowski, W. (2022). The importance of being unearnest: Opportunists and the making of culture. Journal of Personality and Social Psychology, 123(2), 249–271. https://doi.org/10.1037/pspa0000301







Results everyone can understand: A review of common language effect size indicators to bridge the research-practice gap.

Abstract

Health psychology, as an applied area, emphasizes bridging the gap between researchers and practitioners. While rigorous research relies on advanced statistics to illustrate an underlying psychological process or treatment effectiveness, these statistics have less immediate applicability for practitioners, who need to know the relative magnitude of practical benefits. One way to reduce this research-practice gap is to translate reported effects into nontechnical language that focuses on the likelihood of benefiting an individual. Common Language Effect Size (CLES) indicators offer a more intuitive way to understand statistical results from research but may not be widely known to researchers. This article synthesizes the literature on available CLES indicators and how they overcome limitations of traditional effect sizes. To promote adoption, we summarize all existing measures in a compact table, which includes their analogous effect size, context, interpretation, calculation, and citation. We present evidence describing the effectiveness of CLES indicators at facilitating research interpretability compared to traditional effect size indicators. We discuss some limitations of CLES indicators and reasons that they are not used in psychology. Finally, this review offers some future directions for the use and study of CLES indicators. In general, CLES indicators are tools that can benefit health psychology because of their shared goals of aiding practitioners in understanding research findings and making informed decisions.
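One canonical CLES indicator in this literature is McGraw and Wong's (1992) common language effect size for two independent groups, CL = Φ(d/√2): the probability that a randomly chosen member of one group outscores a randomly chosen member of the other, computed from Cohen's d. A minimal sketch of the calculation, using a standard numerical approximation to the normal CDF:

```javascript
// Abramowitz & Stegun 7.1.26 rational approximation to erf
// (absolute error below 1.5e-7).
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

// Standard normal cumulative distribution function, Phi(x).
function normalCdf(x) {
  return 0.5 * (1 + erf(x / Math.SQRT2));
}

// Common language effect size for two independent groups:
// CL = Phi(d / sqrt(2)), where d is Cohen's d.
function commonLanguageES(cohensD) {
  return normalCdf(cohensD / Math.SQRT2);
}
```

For a "medium" effect of d = 0.5 this yields roughly .64, i.e., about a 64% chance a random member of the treated group outscores a random member of the control group.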

Citation

Mastrich, Z., & Hernandez, I. (2021). Results everyone can understand: A review of common language effect size indicators to bridge the research-practice gap. Health Psychology, 40(10), 727–736. https://doi.org/10.1037/hea0001112







Curbing curbstoning: Distributional methods to detect survey data fabrication by third-parties.

Abstract

Curbstoning, the willful fabrication of survey responses by outside data collectors, threatens the integrity of the inferences drawn from data. Researchers who outsource data collection to survey collection panels, field interviewers, or research assistants should validate whether each collection agent actually collected the data. Our review of the survey auditing literature demonstrates a consistent presence of curbstoning, even at professional levels. This study proposes several simple, general survey questions with statistical distributions that are known a priori as a method to detect curbstoning. Because fabricators commonly lack the statistical understanding to mimic these distributions, researchers can compare the responses imputed to these questions against their empirically known distributions and flag deviations from the expected pattern of responses. We examined both authentic and fabricated surveys that included these questions and compared the observed distributions with the expected distributions. The majority of the proposed methods had Type I error rates near or below the specified alpha level (.05). The methods correctly detected false responses 48%–90% of the time across two samples when surveying at least 50 participants. While the methods varied in effectiveness, combining them yielded the highest statistical power, with Type I error rates lower than 1%. Additionally, even with smaller sample sizes (e.g., N = 30), the combined methods remain effective at detecting curbstoning. These methods provide a simple and generalizable way for researchers not present during data collection to obtain accurate data.
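As a hedged illustration of the distributional idea, and not the paper's exact questions or procedure, suppose honest answers to a question such as "What is the last digit of your phone number?" should be roughly uniform across the ten digits. A chi-square goodness-of-fit test can then flag response sets that deviate from that known distribution; the example counts below are invented.

```javascript
// Chi-square goodness-of-fit statistic against a uniform expectation:
// sum over categories of (observed - expected)^2 / expected.
function chiSquareUniform(observed) {
  const n = observed.reduce((a, b) => a + b, 0);
  const expected = n / observed.length;
  return observed.reduce((sum, o) => sum + (o - expected) ** 2 / expected, 0);
}

// Critical value for df = 9 (10 categories) at alpha = .05.
const CRITICAL_DF9 = 16.919;

// Invented example data: 50 responses over the digits 0-9.
const fabricated = [30, 2, 3, 2, 3, 2, 3, 2, 2, 1]; // piles up on one digit
const plausible  = [5, 4, 6, 5, 5, 4, 6, 5, 5, 5];  // close to uniform
```

Here `chiSquareUniform(fabricated)` is 139.6, far above the 16.919 cutoff and so flagged, while `chiSquareUniform(plausible)` is 0.8 and passes. Combining several such questions, as the paper does, raises power further.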

Citation

Hernandez, I., Ristow, T., & Hauenstein, M. (2022). Curbing curbstoning: Distributional methods to detect survey data fabrication by third-parties. Psychological Methods, 27(1), 99–120. https://doi.org/10.1037/met0000403






Book Chapters







Big Data in Social Psychology

Abstract

Social psychology studies how situations and interactions with others affect (often in subtle ways) a person's thoughts, feelings, and behaviors (Gilovich, Keltner, Chen, & Nisbett, 2016). These behavioral, affective, and cognitive outcomes occur in a variety of social contexts with countless possible precursors found in the social environment. As a result of this topic diversity, social psychologists apply a broad array of metrics and use a great deal of creativity in translating hypotheses to a multitude of social contexts. Big data, a new form of data made possible by recent computational advances, has become increasingly leveraged by social psychologists to help address existing questions in novel ways.

Citation







Twitter Analysis: Methods for Data Management and a Word Count Dictionary to Measure City-level Job Satisfaction