How Inclusive is Generative AI? DSI’s Emerging Data Science Program ChatGPT Workshop Sparks Dialogue

by Sara Elhawash

As generative AI continues to advance at a rapid pace, researchers are examining the biases built into these technologies.

This issue was a focus of the Fairness – ChatGPT Workshop held on January 26 and 27, where professionals, researchers, and students explored the responsible development, implementation and use of generative AI, with particular attention to ChatGPT's impact on diverse communities.

“The people who really benefit from AI are those who are already privileged,” said Professor Munmun De Choudhury of the Georgia Institute of Technology, whose keynote address laid the foundation for discussions on how inherent biases contribute to some of the challenges and ethical considerations surrounding generative AI. 

The Data Sciences Institute funds the Toward a Fair and Inclusive Future of Work with ChatGPT program as part of its Emerging Data Science Program. The initiative is led by University of Toronto Professors Syed Ishtiaque Ahmed (Department of Computer Science, Faculty of Arts & Science), Shurui Zhou (Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering), Shion Guha and Anastasia Kuzminykh (Faculty of Information) and Lisa Austin (Faculty of Law).

“It is our mission to unravel the complexities of generative AI’s impact on marginalized communities,” says Professor Zhou. “In the realm of responsible technology, our workshop sought to bridge the gap between innovation and inclusivity. Together, we’ve set the stage for a future where AI understands the importance of fairness and ethical considerations in its applications.”  

Day One of the workshop featured presentations from researchers and industry leaders who gave participants insights and tools to understand ChatGPT and its impact on diverse communities, with a focus on the capabilities, limitations and ethical considerations of AI. “ChatGPT provides the most accurate results only when using the English language setting,” said Ping Hu, a PhD student at the Ontario Institute for Studies in Education. “If you use ChatGPT from different regions, you may get different results that are not reliable.”

Professor Matt Ratto, Faculty of Information, questioned what is considered ‘human-like’ and how these concepts impact AI design, while Professor Dakuo Wang of Northeastern University shifted the focus to Human-Centered AI (HCAI), exploring the paradigm of human-AI collaboration. 

Gender disparities in rankers based on Large Language Models (LLMs) were addressed by Professor Ebrahim Bagheri of Toronto Metropolitan University, who emphasized the need for automated ways to judge datasets. Professor Diyi Yang of Stanford University proposed a human-AI collaboration model to address conflicts and improve communication. 

“Can we think about tools that will allow people to personalize the process of building the models that are more accessible?” added Professor Swati Mishra of McMaster University. 

The second day of the workshop featured two panel discussions that brought together industry experts and researchers to explore the multifaceted role of LLMs.

The first panel, led by Professor Guha of the Faculty of Information, explored Responsive LLM Development. The second, moderated by Professor Zhou, focused on integrating LLMs into education. The discussions included valuable industry insights from Dr. Alex Williams of Amazon and Mr. Hunter Kemeny of IBM Quantum.

“If you have a hard time teaching a person something, then you will have a hard time teaching it to a machine,” emphasized Dr. Williams.

One question was posed to the attendees: “In the process of creating systems, should we let conceptual ideas shape their development, or does the actual development of these systems shape and refine the nuances of the concepts?”

A working group report synthesized the collaborative efforts and key insights generated during the workshop. The event wrapped up with a closing keynote by Professor Edith Law of the University of Waterloo, who explored the challenges of aligning AI technologies with human values and highlighted how nuanced those values become in practical contexts.

The Fairness – ChatGPT Workshop served as a platform for dialogue and laid the groundwork for a community committed to responsible AI development, with the goal of promoting trust, accountability and transparency in the evolving landscape of generative AI. The workshop is one of many activities planned under the program, including a speaker series.
