Researchers in the psychological sciences can use generative AI systems for tasks such as generating simulated data and new stimuli, and for gaining insights into data. Responsible use of these AI systems requires consideration of how sociocultural systems such as racism are embedded in their development and training.
Acknowledgements
The author thanks members of The Human in Computing and Cognition (THiCC) Lab for helpful comments. This work was supported by the NSF AI institute for Societal Decision Making (AI-SDM) under grant number IIS 2229881.
Ethics declarations
Competing interests
The author declares no competing interests.
Dancy, C.L. How to use generative AI more responsibly. Nat Rev Psychol (2024). https://doi.org/10.1038/s44159-024-00339-4