Fairness – ChatGPT Workshop

Please note: this event has ended. The information below is being left up for reference.
Stay tuned for next year’s event.

ChatGPT and other forms of generative AI hold tremendous potential for local and global innovation. But what ethical considerations are raised when we study the relationship between their theoretical design, development, and evaluation and their real-world applications? And how might we envision and bring into practice a fair and inclusive future of work with ChatGPT, particularly for minority and under-served groups?

Join us in a two-day event where we will begin to answer these questions. On day one, presentations from Canadian academics and industry leaders will equip participants with a practical and intellectual toolkit for understanding what ChatGPT is and what we know about its impact on diverse communities. On day two, participants will reconvene for breakout group activities that will engage with the concerns and opportunities of leveraging ChatGPT and offer hands-on experience in exploring the ethical dimensions of generative AI.

This Workshop is part of the Toward a Fair and Inclusive Future of Work with ChatGPT program. It will pave the way for a community of Canadians united by a commitment to the responsible development of guidelines, policies, and safeguards that promote trust, accountability, and transparency and lead to the fair and ethical use of generative AI. It is designed to create an inclusive space for the public – including students, instructors, practitioners, academics, artists, and members of minority communities – to share their opinions on the evolving landscape of generative AI, to participate in lively discussion, and to be a part of social and technological transformation.

Breakfast and light refreshments will be provided.

Program

January 26, 9:00 am – 5:00 pm
January 27, 9:00 am – 12:30 pm

In-person only
10th floor DSI Seminar room 10031/10032
700 University Avenue,
Toronto, ON

January 26, 2024
9:00 – 9:30 am
Registration and Breakfast
9:30 – 9:40 am
Opening Remarks
Prof. Shurui Zhou, Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto
Prof. Ishtiaque Ahmed, Department of Computer Science, Faculty of Arts & Science, University of Toronto
Prof. Shion Guha, Faculty of Information, University of Toronto
Prof. Anastasia Kuzminykh, Faculty of Information, University of Toronto
9:40 – 10:20 am
Keynote
Large Language Models Towards (In)equitable Futures in Healthcare
Prof. Munmun De Choudhury, School of Interactive Computing, Georgia Institute of Technology

10:20 – 11:00 am
Humanness and AI
Prof. Matt Ratto, Faculty of Information, University of Toronto

Abstract: The power of generative AI systems is often described as their ability to produce ‘human-like’ creative outputs, including images and text. But what is considered ‘human-like’? What humans and behaviors serve as the models for generative AI systems? And how might unexamined concepts of ‘humans’ impact the design and operation of AI systems? As has been described by scholars including Sylvia Wynter, Rosi Braidotti, and Katherine Hayles, reductive concepts of humanness have been the source of many inequities in society, including apartheid (Bowker and Star, 2000), colonialism (Fanon, 1967; Irigaray, 1985; Said, 2003), and gender inequality (Spivak, 1988; Haraway, 1991; Braidotti, 2013, 2019). In this talk, I will explore why a deeper engagement with ‘humanness’ is important for both critical and creative approaches to AI, using concepts from critical and posthumanist scholarship to interrogate key concepts currently used in AI design, including the ‘Godspeed’ scale (Bartneck et al., 2009). To advance our understanding of humanness in the context of generative AI, three empirical questions are important: what kinds of humans serve as the models for AI; what human traits are considered appropriate and appropriable; and how are these traits operationalized within specific AI systems?
11:00 – 11:20 am
Break
11:20 am – 12:00 pm
Designing Human-Centered AI Systems for Human-AI Collaboration
Prof. Dakuo Wang, Khoury College of Computer Sciences and the College of Arts, Media and Design, Northeastern University

Abstract: Human-Centered AI (HCAI) refers to the research effort that aims to design and implement AI techniques to support various human tasks while taking human needs into consideration and preserving human control. Prior work has focused on human-AI interaction interface design and explainable AI (XAI). However, despite these fruitful research results, why do many so-called “human-centered” AI systems still fail in the real world? In this talk, I will discuss the human-AI interaction paradigm and show how we can learn from human-human collaboration to design and build AI systems that lead to a successful interaction paradigm. This work serves as a cornerstone towards the ultimate goal of Human-AI Collaboration, where AI and humans can take complementary and indispensable roles to achieve a better outcome and experience.
12:00 – 12:40 pm
Gender Disparities in LLM-based Rankers
Prof. Ebrahim Bagheri, Department of Electrical, Computer and Biomedical Engineering at Toronto Metropolitan University
12:40 – 1:30 pm
Lunch
1:30 – 2:10 pm
Human-AI Collaboration in the Age of Large Language Models
Prof. Diyi Yang, Computer Science, Stanford University

Abstract: Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI collaboration using LLMs. The first explores how large language models transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second looks at social skill learning via LLMs, supporting therapists and learners with LLM-powered feedback and deliberate practice. These two works demonstrate how human-AI collaboration via LLMs can empower individuals and foster positive change. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.

2:10 – 2:50 pm
Panel: Responsible LLM Development
Moderator – Prof. Shion Guha, Faculty of Information and Department of Computer Science, Faculty of Arts & Science, University of Toronto
Panelists –
Dr. Alex Williams, Interactive Machine Intelligence, AWS AI, Amazon
Prof. Nicholas Vincent, School of Computing Science, Simon Fraser University
Prof. Diyi Yang, Computer Science, Stanford University
Prof. Swati Mishra, Computer Science and Software Engineering, McMaster University
2:50 – 3:30 pm
UofT Lightning Talks
Prof. Shurui Zhou
Prof. Shion Guha
Prof. Anastasia Kuzminykh
3:30 – 5:00 pm
Refreshments and Social Hour
January 27, 2024
9:00 – 9:30 am
Breakfast
9:30 – 10:30 am
Panel: Integrating LLMs into Education
Moderator – Prof. Shurui Zhou, Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto
Panelists –
Prof. Swati Mishra, Computer Science and Software Engineering, McMaster University
Prof. Ian Arawjo, Human-Computer Interaction, Department of Computer Science and Operations Research (DIRO), Université de Montréal
10:30 – 10:40 am
Break
10:40 – 11:20 am
Breakout Discussions
11:20 – 11:30 am
Break
11:30 – 11:45 am
Working Group Report
11:45 am – 12:25 pm
Closing Keynote: Consideration of Human Values in the Design of Technology
Prof. Edith Law, Director, Augmented Intelligence Lab, David R. Cheriton School of Computer Science, University of Waterloo

Abstract: AI technologies have become prevalent tools in our society, forcing seismic changes in how we learn, create, and work. There has also been an overwhelming amount of discussion around value alignment, i.e., how to ensure that AI systems are designed to respect human values. In this talk, I will describe the challenges of value alignment, the nuanced and complex nature of human values in the real world, and the implications for technology design.
12:25 – 1:30 pm
Lunch

Speakers

Ishtiaque Ahmed 

Assistant Professor  
Department of Computer Science, Faculty of Arts & Science, University of Toronto 

Ishtiaque Ahmed directs the Third Space research group. His research focuses on the intersection of computer science and critical social sciences, addressing issues of bias and oppression. Ishtiaque is an advocate for diversity in academia, co-directing the PRISM program for marginalized students and organizing the UofT Critical Computing Seminar. He has received several prestigious awards, including being named a Connaught Scholar and a Schwartz Reisman Fellow.

Ian Arawjo 
Assistant Professor  
Human-Computer Interaction, Université de Montréal 

Ian Arawjo leads the Montréal Human-Computer Interaction (HCI) Group at the Université de Montréal. Previously, he served as a Postdoctoral Fellow at Harvard University under Prof. Elena Glassman. He has experience applying a range of HCI methods, from ethnographic fieldwork and archival research to developing novel systems and running usability studies. He currently works on projects at the intersection of programming, AI, and HCI, such as how new AI capabilities can help us reimagine the practice of programming, and he builds LLM evaluation tooling through high-visibility open-source projects such as ChainForge. His first-authored papers have won awards at top HCI conferences, including CHI, CSCW, and UIST.

Ebrahim Bagheri 
Professor  
Department of Electrical, Computer and Biomedical Engineering at Toronto Metropolitan University

Ebrahim Bagheri is an interdisciplinary researcher with a focus on Efficient and Responsible Information Retrieval methods who has impacted industry, government, and civil society through community engagement and knowledge translation. His NSERC Responsible AI initiative is unique in that it highlights the need to balance economic development with social good. He co-founded the International Workshop on Mining Actionable Insights from Social Networks (MAISoN). He is an Associate Editor of IEEE Transactions on Network Science and Engineering and ACM Transactions on Intelligent Systems and Technology.

Munmun De Choudhury 
Associate Professor 
School of Interactive Computing, Georgia Institute of Technology 

Munmun De Choudhury is best known for laying the foundation of a new line of research that develops AI and machine learning approaches to understand how social media can inform us of, or influence, varied mental health outcomes. Dr. De Choudhury has been recognized with the 2023 SIGCHI Societal Impact Award, the 2023 AAAI ICWSM and 2022 Web Science Trust Test-of-Time Awards, the 2021 ACM-W Rising Star Award, the 2019 Complex Systems Society Junior Scientific Award, and over a dozen best paper and honorable mention awards from the ACM and AAAI. Her work has been featured in the popular press, including the New York Times, NPR, and the BBC. Dr. De Choudhury has served on a committee of the National Academies of Sciences, Engineering, and Medicine that examined the impact of social media on the wellbeing of young people. She has also contributed to the U.S. Surgeon General’s 2023 Advisory on the Healing Effects of Social Connection.

Shion Guha 
Faculty of Information and Department of Computer Science, Faculty of Arts & Science, University of Toronto 

Shion Guha directs the Human-Centered Data Science Lab and is part of the broader Critical Computing research community. His research interests are broadly concerned with the nascent field of Human-Centered Data Science, which he has helped to develop. He is interested in algorithmic decision-making, especially in public services, as well as the intersection between AI and public policy.

Anastasia Kuzminykh  
Assistant Professor 
Human-Computer Interaction, Faculty of Information, University of Toronto 

Anastasia Kuzminykh’s research group, the COoKIE Group, explores the role of technology in Communication, Organization of Knowledge, and Information Ecosystems. She is the founder and director of the Toronto Human-AI Interaction Research School (THAI RS).

Edith Law 
Associate Professor and Director of the Augmented Intelligence Lab  
David R. Cheriton School of Computer Science 
University of Waterloo 

Edith Law is broadly interested in social computing technology that coordinates everything from small groups to large crowds, new models of interaction with machine intelligence, and how technology can be designed to foster and celebrate certain human values. The research conducted by Law and her students has received several best paper awards and honorable mentions at the CHI, CSCW, and DIS conferences. Her recent research focuses on the design of personal tools and collaboration systems for value discovery, articulation, and negotiation.

Swati Mishra 
Assistant Professor 
Computer Science and Software Engineering, McMaster University 

Swati Mishra’s research focuses on designing tools that improve ML systems’ usability, reliability, and interactivity with stakeholders. She is interested in leveraging Machine Teaching (MT), an inverse problem of Machine Learning, to improve teacher efficiency in applications for healthcare and computational journalism. Before joining McMaster, she received her Ph.D. in Information Science from Cornell University, where her research was funded by a multi-year Data Science Fellowship from Bloomberg AI. She also received an M.Sc. in Computer Science from Cornell University and an M.Sc. in Human-Computer Interaction from Indiana University. She has worked in the AI industry for 9 years, building and leading state-of-the-art AI products. Her research has been published in top-tier conferences and journals such as ACM SIGCHI, CSCW, TEI, UMAP, SIGIR, and IEEE VIS, and has won best paper awards at ACM SIGCHI. Her lab currently focuses on applying cognitive modeling techniques to understand end-user behavior and leveraging it to build reliable AI systems for decision-making, sensemaking, and storytelling.

Matt Ratto  
Professor and Associate Dean, Research
Faculty of Information, University of Toronto

Matt Ratto studies and practices ‘critical making’, work that combines humanities insights and engineering practices, and he has published extensively on this concept. He publishes across a wide range of disciplines, including recent work on hope and interventional digital projects (CSCW 2023), generative AI and mental health (JMIR 2023), and additive manufacturing and prosthetics (CJPO 2020; JPO 2020).

Nicholas Vincent 
Assistant Professor 
School of Computing Science, Simon Fraser University 

Nicholas Vincent’s research focuses on studying the relationship between human-generated data and modern computing technologies, including systems often referred to as “AI”. The overarching goal of this research agenda is to work towards an ecosystem of widely beneficial, highly capable AI technologies that mitigate inequalities in wealth and power rather than exacerbate them. His work touches on concepts such as “data dignity”, “data as labor”, “data leverage”, and “data dividends”.

Dakuo Wang 
Associate Professor
Northeastern University and Visiting Scholar at Stanford University

Dakuo Wang’s research lies at the intersection of human-computer interaction (HCI) and artificial intelligence (AI), with a focus on the exploration, development, and evaluation of human-centered AI (HCAI) systems. The overarching goal is to democratize AI for every person and every organization, so that they can access their own AI and collaborate with real-world AI systems (human-AI collaboration). Before joining Northeastern, Dakuo was a Senior Staff Member at IBM Research, a Principal Investigator at the MIT-IBM Watson AI Lab, and a Visiting Scholar at the Stanford Institute for Human-Centered Artificial Intelligence. He serves on organizing and program committees and editorial boards for a variety of venues, and ACM has recognized him as an ACM Distinguished Speaker.

Alex Williams 
Applied Scientist II 
Interactive Machine Intelligence, AWS AI, Amazon

Alex Williams is an applied scientist in the Human-in-the-Loop Science team at AWS AI where he designs, engineers, and studies facets of interactive machine intelligence. Before joining AWS, he was a professor in the University of Tennessee, Knoxville’s EECS Department and a postdoctoral researcher in the University of California, Irvine’s Informatics Department. During his PhD, he spent several summers with the Next-Generation Productivity group at Microsoft Research and the Emerging Technologies team at Mozilla Research. He earned his PhD in Computer Science from the University of Waterloo and holds Master’s and Bachelor’s degrees in Computer Science from Middle Tennessee State University. 

Diyi Yang 
Assistant Professor 
Computer Science, 
Stanford University 

Diyi Yang is affiliated with the Stanford NLP Group, Stanford HCI Group, and Stanford Human-Centered Artificial Intelligence (HAI). Her research focuses on natural language processing, machine learning, and computational social science. Her work has received multiple best paper nominations or awards at top NLP and HCI conferences. She is a recipient of the IEEE “AI’s 10 to Watch” award (2020), the Intel Rising Star Faculty Award (2021), a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), and an ONR Young Investigator Award (2023).

Shurui Zhou 
Assistant Professor  
Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto

Shurui Zhou leads FORCOLAB, focusing on enhancing collaboration in software development, especially in modern open-source and interdisciplinary projects, including AI-enabled systems. Her research applies software engineering best practices to improve collaborative Computer-Aided Design (CAD).