Artificial Intelligence – “AI” – continues to be the subject of hot debate around the world as governments seek ways to regulate it to protect the public, and developers continue to push towards AI with more human-like capabilities. What’s at stake depends on who you listen to: some extol the benefits of AI to “transform” the way we live and work, downplaying the potential for negative impacts on society, while others warn of an existential threat to humanity. Most perspectives land somewhere in between. We see AI, like other technological advances before it, as an exciting tool with tremendous potential. As such, it is not inherently helpful or harmful: its impacts depend on how it is used. Now is the time for the thoughtful and extensive integration of social science evidence and expertise into AI development, deployment, implementation, and use, so that AI can be as effective and beneficial as possible while minimizing risks of harm to society.
AI has existed in various forms for decades and, until recently, was developed under tightly constrained parameters to perform specific tasks. However, the 2022 launch of easy-to-access Large Language Model (LLM) tools such as ChatGPT, which are trained on massive amounts of data and generate conversational responses to questions, had leaders across many sectors – from education to business – scrambling to set guidelines and parameters for AI’s use in their domains. Indeed, AI and other technologies are not typically implemented in isolation but in systems. Social science approaches can help us understand and address these technologies’ reach, implications, and impact within these “AI systems.”