
Questioning Reality: Exploring the Future of Virtual Reality and Impact on Social Interactions 

Photo courtesy of Pablo Pérez, Nokia XR Labs, Madrid, Spain

By: Cormac Rea

The Data Sciences Institute (DSI) at the University of Toronto hosts the annual Questioning Reality: Explorations of Virtual Reality conference where leading scholars, industry professionals, and VR enthusiasts are invited to discuss the future of virtual reality (VR) and its impact on social interactions.

The conference is led by Bree McEwan, DSI lead for Responsible Data Science and Associate Professor in the Institute for Communication, Culture, Information and Technology at the University of Toronto Mississauga and Sun Joo (Grace) Ahn, Director of the Center for Advanced Computer-Human Ecosystems and Professor at the University of Georgia.

The 2025 Questioning Reality conference will feature speaker Dr. Pablo Pérez, a distinctive voice in the extended reality (XR) field. Pérez has a deep understanding of both the technical challenges and the social communication processes involved in improving human interactions through immersive technologies.

Pérez is the lead researcher in Nokia’s XR labs in Madrid, Spain, drawing on his extensive experience in both academic and industry environments. His work helps us to understand the way that visual images and communication processes come together to create rich and meaningful co-presence in mediated environments.

Profs. McEwan and Ahn invited Dr. Pérez to speak on the challenges and opportunities facing the VR field as artificial intelligence (AI) is integrated into social VR experiences, including generative imagery and the large language models that run virtual agents.

“Developments in artificial intelligence will drive the next generation of immersive environments, whether it is making the metaverse come alive through virtual imagery generated in real-time or interacting with virtual agents who might populate these virtual scenes,” says Prof. McEwan.

“Dr. Pérez’s research stands at the bleeding edge of interdisciplinary inquiries of AI, its integration into metaverse spaces, and social interactions between humans and machines. I have been following his research with interest for quite some time now and we are delighted to have him join the Questioning Reality 25 conference,” says Prof. Ahn.

In advance of his talk, DSI spoke with Dr. Pérez about the “Realverse,” XR, the “realism” that AI can bring to social interactions and the concerns that society should have about these technologies. 

Click here for event registration and further information about speaker Dr. Pablo Pérez. 

What drew you to research extended reality and the “Realverse”?  

Eight years ago, Nokia launched a new research lab in Madrid to investigate the end-to-end delivery of VR and AR. At the time, we were looking for a research direction that might have long-term impact, in a similar way to how smartphones revolutionized our lives. Then I asked myself: what reason could lead my 70-year-old mother to wear a VR headset? The only answer that came to me: to visit my brother, who lives abroad. This was the inspiration to explore the potential of XR technologies in bringing people together. 

What types of experiences are better suited to XR and immersive technologies than the physical world?   

I don’t think any technology can be better than face-to-face communication. But what XR can do is help us break some of the barriers we encounter when communicating. The most obvious one is distance. Telegraphy made instantaneous news distribution around the world possible. Telephony extended this capability to personal communications. Video calls have made face-to-face conversations possible at a distance. XR can bring the next step, where I not only see your face when talking to you, but can see what you see and share your space. This has enormous potential to connect people, and it also has tremendous economic implications. Imagine that you could hold a remote meeting, or set up a remote workplace, with exactly the same effectiveness as in person. This would change everything. 

How can social XR be designed to highlight the “human” side of communication, like emotions and support?   

Distance is not the only barrier to overcome; mediated communication makes it difficult to convey emotional cues such as facial expressions or body language. But it also provides an advantage: there is already a device taking part in the communication, so we can use the power of artificial intelligence to augment our emotional intelligence. The key is how we address the problem: not using the system to gain advantage over the other person, to try to detect what they are trying to hide, but to gain agency over the emotions that we want to include in the conversation. Let me give a couple of examples. An XR system could be trained to detect and encode my emotional cues and represent them in a different way. When I smile, it could subtly modify the environment to display a warmer color palette, for instance. This would help me express my emotions in a way that I control. A second example is personalized emotion regulation. The system could be trained to detect moments when I am getting overly emotional in the conversation, such as when I get too angry, and alert me so that I can rethink what I am doing and let my long-term rational sense take over. This would be hacking the fast-thinking system and letting the slow-thinking system kick in when needed, in Daniel Kahneman’s terms. Note that in both examples the user has full control over the system and its outcomes; there is no unethical intrusion into the other person’s inner state. This is the key. 
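
To make the two examples concrete, here is a minimal sketch in Python, assuming a hypothetical affect estimator and scene renderer rather than anything from Pérez’s or Nokia’s labs; the function names, the smile-to-colour-temperature mapping, and the alert threshold are illustrative placeholders only.

import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmotionEstimate:
    smile: float    # 0..1, probability that the user is smiling
    arousal: float  # 0..1, how "heated" the user currently sounds

def estimate_emotion() -> EmotionEstimate:
    """Placeholder for a real-time affect estimator (camera + microphone models)."""
    return EmotionEstimate(smile=random.random(), arousal=random.random())

def scene_color_temperature(smile: float, base_kelvin: float = 6500.0) -> float:
    """Map a smile score to a warmer rendering palette (lower colour temperature)."""
    return base_kelvin - 2000.0 * smile

def maybe_alert_user(arousal: float, threshold: float = 0.85) -> Optional[str]:
    """Private nudge shown only to the user; nothing is disclosed to the other party."""
    if arousal > threshold:
        return "You sound agitated. Pause before replying?"
    return None

if __name__ == "__main__":
    for _ in range(5):  # stands in for the per-frame update loop of an XR session
        estimate = estimate_emotion()
        kelvin = scene_color_temperature(estimate.smile)
        nudge = maybe_alert_user(estimate.arousal)
        print(f"smile={estimate.smile:.2f} -> palette {kelvin:.0f} K | nudge: {nudge}")

Both interventions act only on what the user has chosen to express or be told, which is the design constraint Pérez emphasizes.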

How realistically can AI-based agents simulate social interactions in virtual environments?   

The explosion of large language models has shown that it is relatively easy for a virtual agent to communicate in natural language. In a sense, simulating a social interaction is an almost-solved problem in a text chat. Translating this into a virtual environment requires solving two problems: the interface and the role. Regarding the interface, LLMs currently operate mostly on discrete blocks of text or multimodal inputs, but this is not how a conversation works. Next-generation agents should be able to continuously process a flow of information and decide when and how to take part in the conversation, including interrupting, taking turns, and deciding what to do at any moment. This is not an extremely hard problem, but it is not solved yet. The second problem is understanding what the role of a virtual agent in the conversation should be. AI-based agents are already being incorporated as NPCs in gaming, or as support systems in customer service. But social XR could bring new use cases, for instance personalized agents used for asynchronous communication. Imagine that, instead of sending you a recorded message, I send you a representation of myself that delivers the message and is also able to have a conversation about it, because it knows the context of the message itself. It won’t be equivalent to being there in person, but it could be better than not being present at all. 
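
As a rough sketch of the “interface” problem described above, the following Python example shows an agent that consumes a continuous stream of conversation events and decides when to speak, not only what to say; the Utterance format, the end-of-turn flag, and the reply_with_llm stub are assumptions made for illustration, not features of any existing system.

from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    text: str
    ends_turn: bool  # crude stand-in for prosodic end-of-turn detection

def reply_with_llm(history: list) -> str:
    """Placeholder for a language-model call conditioned on the dialogue so far."""
    return f"(agent responds to: '{history[-1].text}')"

def should_speak(history: list, silence_s: float) -> bool:
    """Speak if addressed directly, or if the floor has been open long enough."""
    last = history[-1]
    return "agent" in last.text.lower() or (last.ends_turn and silence_s > 1.5)

def run_agent(stream: list) -> None:
    """Consume a continuous stream of (utterance, trailing-silence) events."""
    history = []
    for utterance, silence_after in stream:
        history.append(utterance)
        if should_speak(history, silence_after):
            print("AGENT:", reply_with_llm(history))

if __name__ == "__main__":
    demo = [
        (Utterance("ana", "I think we should move the meeting.", ends_turn=False), 0.3),
        (Utterance("ana", "Agent, what slots are free on Friday?", ends_turn=True), 0.2),
        (Utterance("ben", "Friday afternoon works for me.", ends_turn=True), 2.0),
    ]
    run_agent(demo)

The point of the sketch is the control loop: the hard open questions are in the two decision functions, not in generating the reply text itself.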

But how can this technological toolbox be strategically leveraged to find the “killer app” that drives widespread adoption of XR communication?   

A big problem with XR technologies is that the “wow effect” leads people to judge their first impressions of the technology very indulgently, but in the long run users quickly tire of wearing a head-mounted display (HMD) regularly. As a side effect, XR devices and applications are normally designed for “geeks”: you need a substantial adaptation period before you can handle XR devices comfortably. This might not be obvious if you are a frequent technology user, but it appears quickly when you try to get a non-technical person to use XR. So it would probably be better to design the system together with people who are not able or willing to adapt. In our lab, we have learned a lot by using our systems with older adults and with people with intellectual disabilities. We now think that any long-term vision must first be validated by, and when possible co-designed with, users who are going to experience difficulties with your technology. By adopting an inclusive-by-design approach, XR technology can enhance human communication by addressing individual limitations and augmenting personal capabilities, effectively providing each user with personalized “superpowers” that improve accessibility and empathy in daily interactions.  

What concerns should society have about these technologies?   

XR technologies can augment the way we communicate, which is in principle positive, but of course it is not free from risks. The good news is that those risks are basically the ones already identified in other technological flavours. All the concerns about social media and the overuse of screens, such as losing the connection with reality, privacy issues, echo chambers, and the loss of attention span, will still be there for social XR. It is key for the research community to address them upfront, so that we steer the development of XR in the direction of mitigating them instead of reinforcing them. 

This talk and reception are co-sponsored by the Alfred P. Sloan Foundation and U of T’s Schwartz Reisman Institute for Technology & Society (SRI).

The Sloan Foundation is a not-for-profit, mission-driven grantmaking institution dedicated to improving the welfare of all through the advancement of scientific knowledge.

SRI’s mission is to deepen knowledge of technologies, societies, and what it means to be human by integrating research across traditional boundaries and building human-centred solutions that really make a difference.

The talk is hosted at the Schwartz Reisman Innovation Campus at the heart of Toronto’s innovation district. 

Drawing on AI and Other Data Sciences to Design Next-Gen Joint Replacements

Photo courtesy of Faculty of Applied Sciences and Engineering, University of Toronto (credit: Neil Ta)

By: Cormac Rea

A major challenge for the Canadian healthcare system involves creating biomedical implants, such as knee and hip replacements, that will not require extensive follow-up or revision surgery. The demand for expensive revision surgeries continues to grow as the population ages, so there is an urgent need to reduce revision rates. When University of Toronto researcher Yu Zou learned of the problem, he wanted to help.  

“I’m a material scientist and really want to make materials that are useful to people and society,” said Zou.  

Zou also needed to understand how and why implants fail, as post-surgery complications are attributed to various failure modes of implant materials and are also associated with patients’ identity factors, such as sex, age, physical disability, activity level, and body mass index, as well as the regions where patients live. An interdisciplinary team was needed to apply sound data science methods to identify these variables in national health data sets.  

“I had a chance to speak with some hospital doctors and they told me there can be problems with the materials, specifically the durability of implants. Millions of dollars from the healthcare system are spent on joint replacements, often leading to revision surgery if certain parts don’t work well,” he added.  

Supported by a Data Sciences Institute catalyst seed grant, Professors Zou (Associate Professor, Faculty of Applied Science & Engineering, University of Toronto), Qiang Sun (Associate Professor, Department of Statistical Sciences and Department of Computer Science, University of Toronto), and Adele Changoor (Staff Scientist, Orthopaedic Surgery, Lunenfeld-Tanenbaum Research Institute, and Assistant Professor, Department of Laboratory Medicine & Pathology, Temerty Faculty of Medicine, University of Toronto) came together to employ data science methodologies, combined with AI tools, to analyze massive datasets on joint replacement patients and help design complex microstructured materials. 

In developing new implants, Zou’s team needed to work with expensive materials more common in aerospace engineering to come up with microstructures that could provide the necessary strength and durability, but with the lower elastic modulus required of a human joint.   

“In our lab we use data science tools and AI tools together to help us develop and manufacture new generation materials for extreme environments,” said Zou.  

Using data science insights from hip and knee replacement revision surgery registries, the researchers created algorithms to derive insights from machine learning tools, in turn expediting the development of new implant materials. 

“It is just like ‘cooking’ meals,” said Zou. “We tried something and tested it, tried something different and tested again, and so on. So initially the efficiency was very low and there was a very high cost, both in terms of the funding required but also the time cost for those working on the project.”  

“Statistics and AI can streamline the lengthy trial-and-error process, narrowing thousands of possibilities down to a select few best options,” added Sun. 

“In this way, we only need to test about ten samples instead of thousands. This greatly shortens the research cycles and associated costs,” concluded Zou.  
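
As a rough sketch of how such a workflow can cut thousands of possibilities down to about ten, and not a description of the team’s actual pipeline, the following Python example fits a surrogate model to a small set of measured samples and ranks hypothetical candidate compositions for the next round of testing; the descriptors and the toy performance score are invented for the example.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: 2,000 candidate compositions, three descriptors each
candidates = rng.random((2000, 3))

# A handful of compositions that have already been fabricated and measured
tested_X = rng.random((12, 3))
tested_y = 1.0 - np.abs(tested_X.sum(axis=1) - 1.5)  # toy "performance" score

# Fit a Gaussian-process surrogate to the measured samples
surrogate = GaussianProcessRegressor().fit(tested_X, tested_y)

# Rank untested candidates by predicted performance plus a bonus for uncertainty
mean, std = surrogate.predict(candidates, return_std=True)
acquisition = mean + 0.5 * std

# Select roughly ten candidates for the next round of fabrication and testing
next_batch = np.argsort(acquisition)[::-1][:10]
print("Candidate indices to synthesize next:", next_batch)

In practice, each selected batch would be fabricated and tested, the results added to the training set, and the ranking repeated, which is how the number of physical experiments stays small.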

The researchers continue to develop the data sets and the necessary microstructures, with the intent of building further partnerships with hospitals and a vision of creating a product that frontline hospital clinicians can use. Given that patient-specific biology (e.g., bone density, activity levels) contributes to implant survivability, the long-term goal is to build open-source tools that clinicians can easily use at hospitals.  

“In the future, doctors could possibly visualize an accelerated simulation of the joint implant’s suitability, based on the patient, and see how the materials would change or degrade over five, ten or twenty years,” revealed Zou. 

With preliminary results of their research in place, Zou’s team was successful in applying for external funding in 2024.  

“The initial support funding from DSI was very helpful in securing external funding streams,” said Zou. “The New Frontiers in Research Fund from the federal government will support us in our work for another two years.”  

“Statistics and data sciences, including AI, have the potential to transform fields that heavily rely on trial-and-error approaches,” said Sun. “Their impact will likely be seen across many disciplines.”

Bridging The Gap: CrossTALK Bootcamp Unites Computational And Experimental Scientists For Drug Discovery

By: Sofia Mellou

The buzz around artificial intelligence (AI) in drug discovery is undeniable, but a major bottleneck remains: the field lacks the openly available, high-quality, large-scale datasets needed to train machine learning (ML) models and advance drug discovery efforts. As the Structural Genomics Consortium (SGC) enters its third decade, it is tackling this challenge head-on by generating open-science, ML-ready protein-ligand training datasets at an unprecedented scale. To support these efforts, SGC’s research site at the University of Toronto launched the CrossTALK Bootcamp, a training program designed to bring together computational scientists and experimental researchers in a unique setting.

Funded by the Data Sciences Institute (DSI) at the University of Toronto as part of the DSI Emergent Data Sciences Program, this innovative program aims to train the next generation of drug discovery experts by providing them with the skills to interpret complex experimental data and harness AI-driven approaches. The program is led by a powerhouse team of professors: Matthieu Schapira, Rachel Harding, Mohamed Moosavi, Chris Maddison, Benjamin Sanchez-Lengeling, Hui Peng, and Benjamin Haibe-Kains, drawing expertise from pharmacology, chemistry, engineering, and AI.

What makes this initiative so exciting? It’s not just another training program; it is a hands-on, interactive experience where computational scientists step into the lab, and experimentalists take a 15-hour dive into the world of data science. The quarterly workshops feature dynamic sessions and lab visits, fostering real collaboration between two fields that often work in silos.

A Look Inside the CrossTALK Bootcamp Launch

The energy at the launch of the first Bootcamp last month was palpable. After an introductory overview of the program by Dr. Matthieu Schapira, Dr. Benjamin Sanchez-Lengeling took the stage and his opening question set the tone: Why do we need molecules? From there, he took the audience on a whirlwind tour of the molecular discovery pipeline, where creativity, diversity, and scientific rigor collide to shape the future of early drug discovery. He emphasized that transformative breakthroughs require not just data and cutting-edge tools, but also the right people coming together to innovate.

Dr. Rachel Harding followed with a deep dive into the mechanics of experimental data generation and hit validation. Using the apt metaphor of a key and a lock, she illustrated the complexity of a molecule binding to its target protein and the crucial role SGC plays in validating hits. “If we combine AI with high-quality experimental validation, we can change the game in drug discovery,” she emphasized.

Excitement from the Experts

The enthusiasm for this initiative was evident when we caught up with two of the program’s leaders after the event.

“It’s thrilling to see machine learning gaining momentum in drug discovery. The response has been phenomenal: over 140 applicants for our first Bootcamp cohort! We could only take 30 this time, but the demand is clear. There will be many more opportunities to join in the following quarters, and while this pilot initiative is focused on the University of Toronto at the moment, my dream is to expand it nationally and beyond,” Dr. Matthieu Schapira commented.

“What excites me most is the broadness of backgrounds and different disciplines among the participants. Seeing computational and bench scientists side by side, eager to learn from each other, is exactly what this field needs. Each cohort gets hands-on lab experience at SGC-Toronto, learning how we validate hits, produce proteins, and design assays. This is just the beginning,” Dr. Harding added.

Registration for the second CrossTALK (Cross-Training in AI and Laboratory Knowledge for Drug Discovery) Bootcamp is now open! The second nine-week bootcamp, open to students, postdoctoral researchers, and staff with computer or biological science backgrounds, will take place from April to June 2025. Interested individuals are encouraged to submit their applications to secure their spot; registration is complimentary.  

More information: https://datasciences.utoronto.ca/early-stage-drug-discovery/ 

Emergent Data Solutions: Harnessing Data Science and AI to Revolutionize Aging and Neurodegeneration Research 

By: Cormac Rea

There is an urgent need among healthcare researchers for creative solutions to the challenges of caring for our growing aging population in diverse healthcare settings, including the need to predict disease development and treatment outcomes. Data science, including artificial intelligence (AI), can revolutionize how we understand the human brain, offering more affordable, precise tools for detecting neurodegenerative conditions.

However, AI’s integration into clinical practice faces a major barrier: the multidisciplinary collaboration necessary to design, implement, and refine AI tools effectively. Data scientists working on predictive algorithm development may lack the clinical context needed to tailor these tools to real-world healthcare workflows. At the same time, healthcare leaders must collaborate with both scientists and clinicians to ensure AI-informed decisions are sound and impactful. 

Enter Advancing Aging and Neurodegeneration Research through Data Science — a unique initiative by the Data Sciences Institute’s (DSI) Emergent Data Science Program that aims to bridge these gaps by fostering learning and training opportunities between data scientists, basic scientists, clinicians, and educators. The initiative is led by professors Rosanna Olsen (Rotman Research Institute, Baycrest Academy for Research and Education; Department of Psychology, Faculty of Arts & Science, University of Toronto); Malcolm Binns (Rotman Research Institute, Baycrest Academy for Research and Education; Dalla Lana School of Public Health, University of Toronto);  Bradley Buchsbaum (Rotman Research Institute, Baycrest Academy for Research and Education; Department of Psychology, Faculty of Arts & Science, University of Toronto); Jean Chen (Rotman Research Institute, Baycrest Academy for Research and Education; Department of Medical Biophysics, Temerty Faculty of Medicine, University of Toronto) and Kamil Uludag (Krembil Brain Institute, University Health Network; Department of Medical Biophysics, Temerty Faculty of Medicine, University of Toronto).   

The AI & Aging team will provide training and learning opportunities, bringing together data scientists, clinicians, and educators to explore the development of new areas of research that may ultimately benefit treatment and healthcare services. Additionally, the initiative will showcase experts in data science and aging, creating a forum to highlight and discuss key emergent issues.  

The program will launch this spring with a talk from a renowned researcher in neuroimaging and machine learning. Prof. Christos Davatzikos, Wallace T. Miller Sr. Professor of Radiology at the University of Pennsylvania and Director of the recently founded AI2D Center for AI and Data Science for Integrated Diagnostics, will speak on “Machine learning in neuroimaging: understanding the heterogeneity of brain aging and neurodegeneration, and building personalized imaging biomarkers” on March 27.  

Dr. Davatzikos has been the founding Director of the Center for Biomedical Image Computing and Analytics since 2013, and the director of the AI in Biomedical Imaging Lab (AIBIL). He oversees a diverse research program ranging from basic problems of imaging pattern analysis and machine learning to a variety of clinical studies of aging and Alzheimer’s disease, schizophrenia, brain cancer, and brain development. He is an IEEE fellow, and a fellow of the American Institute for Medical and Biological Engineering. 

“I am thrilled to co-lead this exciting new EDSP program, Advancing Aging and Neurodegeneration Research through Data Science, supported by the Data Sciences Institute at the University of Toronto,” said Olsen. “This initiative brings together experts who use cutting-edge AI and data-driven approaches to tackle some of the most pressing challenges in aging and neurodegenerative disease research.”  

“We are especially honored to kick off the series with Dr. Christos Davatzikos, a true leader in AI-driven biomedical imaging, whose work is transforming how we understand and detect different types of brain disorders.”   

The DSI spoke with Dr. Davatzikos about his background, research focus, and the potential future uses of machine learning in aging.  

Tell us a little bit about yourself. How did you become interested in your area of research (neuroimaging, aging, machine learning)?  

CD: I went through education and training in engineering and computer science, but was always interested in biomedical applications of technology, especially in neuroscience. When machine learning methods were in their infancy in the 90s, I thought that they were the tools needed to help us see in the data what we can’t otherwise see; for example, brain signatures of neuropsychiatric and neurodegenerative diseases that cannot be detected visually and/or are predictive of clinical outcomes. 

Do you have a favorite paper or research finding from your own group or from other researchers that you would like to share with us?  

CD: A recent paper in Nature Medicine, “Brain aging patterns in a large and diverse cohort of 49,482 individuals,” is one of my favorites. It helps us understand the heterogeneity of brain aging trajectories, as well as their genetic, clinical, and lifestyle correlates.   

What are you most excited about for the future of our field?  Do you anticipate any breakthroughs in the field of aging research in the next five years?  

CD: Among many potentially exciting directions, I am particularly excited about seeing more emphasis on prevention and early detection. Improving our understanding of the role of genetic and lifestyle risk factors, and being able to identify individuals at risk, can inform clinical trials and personal health management.  

Machine learning can play a significant role in this direction in many ways, two of them being the following: 1) it helps us develop endophenotypes, in part by looking at complex patterns of biomarkers of all sorts, and hence identifying individuals who not only have a risk factor, but who also seem to be “expressing” respective endophenotypes/patterns that have been linked with that risk factor; 2) it helps us build predictive models of future brain and clinical trajectories.   

Another exciting direction is that of using machine learning methods for drug repurposing and development, by learning more about genetic correlates of brain aging and associated neuropathologic processes and identifying drugs that can slow down these processes.  

Since the development process for applied machine learning tools requires multidisciplinary input across an array of clinical, measurement and data experts, do you have suggestions for optimizing collaboration and communication across professionals with different immediate goals? 

CD: As with other similar technical fields that have become an integral part of medicine and biomedical research (e.g., medical physics and biostatistics), I think that a new generation of biomedical scientists and clinicians will emerge: people who have cross-training and interests in both data science/AI and biomedical domains.  

Do you have any thoughts on sustainable AI, in health research and beyond? 

CD: AI is a technology that will become an integral part of our daily lives, including medicine and biomedical research, much like other technologies before it, from farm machinery and the automobile to the cellphone and the internet. As such, we will have to develop mechanisms that constantly maintain and enhance AI tools. By its nature, AI is a technology that continuously adapts and learns from new data and new knowledge: the more we use it, the better it will become. 

Emergent Data Sciences Program 
Through the Emergent Data Science Program, DSI funds a broad span of activity that can lead to the development of innovative data science methodologies, deep connections with computation and applied disciplines, new training programs, collaboration, knowledge mobilization, and impact beyond the academy. Applications for the 2025 program are now being accepted. LOI Deadline: March 28   
Learn more about the application process.

Upskill Canada Boosts Investment and Propels Growth for the Data Sciences Institute’s In-Demand Skills Certificates  

By: Cormac Rea

Certificates that are equipping hundreds of professionals with skills in data science and machine learning software have received a vote of confidence with renewed investment from their government partner. 

Following a dynamic launch year offering in-demand skills training and career wayfinding for professionals, Upskill Canada has invested a second wave of funding in the Data Science and Machine Learning Software Foundations certificates offered by the University of Toronto’s Data Sciences Institute (DSI), bringing the overall investment to $3.9M by 2026. This key funding will enable a total of 680 participants to access critical training over two and a half years, preparing them for jobs in key innovation sectors.

“Our ongoing partnership with DSI allows more workers to gain the knowledge and skills necessary for the jobs of tomorrow,” said Ann Buller, Interim CEO of Palette Skills. “The program has received such excellent feedback from employers and students alike — we are thrilled to offer continued support.”

In the first year of this DSI certificate program, which pairs technical skills with job-readiness support and strong employer connections, almost half of all graduates secured new employment, received promotions, or transitioned into new roles within six months of completion. 

“The confidence shown in the DSI through this renewed investment reflects our success at connecting learners with the immense demand for data science literacy and skills at the heart of the Canadian digital economy,” says Lisa Strug, Director of the Data Sciences Institute and Professor in the Departments of Statistical Sciences and Computer Science (Faculty of Arts & Science) and the Division of Biostatistics (Dalla Lana School of Public Health) at the University of Toronto (U of T). Strug is also a Senior Scientist at The Hospital for Sick Children.

The DSI certificates are an initiative of Upskill Canada, powered by Palette Skills and funded by the Government of Canada. Upskill Canada is designed to meet the talent needs of high-growth sectors to increase productivity and innovation in Canada. 

“We are proud to receive further financial support, allowing our work in targeting training in key areas of data science and machine learning to evolve, and increasing the available data science talent pool in Canada across a range of sectors,” added Strug.  

Equipping workers with these skills creates new career pathways for Canadians and better positions Canadian companies to compete both domestically and internationally. The funding will enable the DSI to continue its mission to accelerate the impact of data sciences, leveraging U of T’s global reputation in data science and machine learning.  

“Coming from a non-technical background, this journey has been both challenging and incredibly rewarding,” said David Vaz, who completed the Machine Learning Software Foundations Certificate and started a new job in October 2024 as a Manager of Strategic Initiatives and Partnerships at Skills for Change.  

“The Certificate has equipped me with a comprehensive foundation in data science and machine learning, covering everything from fundamental programming to cutting-edge AI applications.” 

Given the need for data science training across a range of sectors, the certificates are designed to empower participants with the skills needed to succeed in cutting-edge careers.  

“Looking ahead, I’m particularly excited about exploring applications of computer vision and NLP to create human-centered AI solutions,” added Vaz. “My goal is to contribute to the growing field of AI, focusing on developing tools that enhance and support people’s daily lives.” 

“I found the job readiness sessions extremely helpful in updating my LinkedIn profile and resume, allowing me to better highlight my skills, experience, education, and certifications,” said Zarrin Rasizadeh, who completed the Machine Learning Software Foundations Certificate and was recently hired.  

“Additionally, the mock interviews were invaluable in boosting my confidence and preparing me more effectively for actual job interviews.” 

Both DSI certificates offer foundational concepts in data science and machine learning and provide opportunities for practical application through employer case studies. Each certificate also includes sessions dedicated to career advancement, from support for resume writing to networking and interview skills development. 

About the Data Sciences Institute Upskilling Certificates 

The certificate modules and job readiness sessions are offered part-time over 16 weeks, allowing learners time to balance existing commitments while still accomplishing their career goals. The training is offered to learners at a substantially reduced rate of $525 (+HST) per certificate, thanks to the support of Upskill Canada. The DSI has also committed accessibility funding for those with financial need. To learn more about upcoming sessions: https://certificates.datasciences.utoronto.ca/