Social scientists must address ChatGPT’s ethical challenges before using it for research

Outlining the challenges that ChatGPT poses, researchers from Penn's School of Social Policy & Practice and Annenberg School for Communication offer recommendations in five areas for the ethical use of the technology in a new paper.

Researchers at Penn’s School of Social Policy & Practice (SP2) and Annenberg School for Communication have published recommendations to ensure the ethical use of artificial intelligence resources such as ChatGPT by social work scientists. The paper, titled “ChatGPT for Social Work Science: Ethical Challenges and Opportunities,” is published in the Journal of the Society for Social Work and Research.

Image: An AI chatbot selects an option from a screen. (iStock/Guillaume)

The article is co-authored by Desmond Upton Patton, Aviv Landau, and Siva Mathiyazhagan. Patton, a pioneer in the interdisciplinary fusion of social work, communications, and data science, holds joint appointments at Annenberg and SP2 as the Brian and Randi Schwartz University Professor.

Outlining the challenges that ChatGPT and other large language models pose in terms of bias, legality, ethics, data privacy, confidentiality, informed consent, and academic misconduct, the paper offers recommendations for ethical use of the technology in five areas: transparency, fact-checking, authorship, anti-plagiarism, and inclusion and social justice.

Of particular concern to the authors are the limitations of artificial intelligence in the context of human rights and social justice.

“Similar to a bureaucratic system, ChatGPT enforces thought without compassion, reason, speculation, or imagination,” they write.

Read more at SP2 News.