Archives for November 22, 2023

Data Sciences Institute Supported Research Reveals How Automating Food Analysis Can Improve Health Policy

by Sara Elhawash

When purchasing foods, many consumers give food labels only a cursory scan, taking in information such as calorie levels or sodium content. Why is streamlining this process crucial from a public-health and policy perspective? 

Creating and maintaining the databases needed by researchers and others to establish food policies and monitor the food supply is a significant task. This involves classifying and analyzing hundreds of thousands of foods, a process that is typically done manually and infrequently. 

Guanlan Hu, Postdoctoral Fellow in the Department of Nutritional Sciences (Temerty Faculty of Medicine, U of T), is on a mission to simplify this complex process. Her research explores the use of pre-trained language models and supervised machine learning to analyze unstructured food label text, thereby streamlining food categorization and other important classification tasks. A primary goal is to transform how ultra-processed foods (UPFs) are understood and categorized, for the benefit of both the public and policy makers, underscoring the broader impact and significance of her research. 

Supervised by Professor Emerita Mary R. L’Abbé (Temerty Faculty of Medicine, U of T), and co-authored by Postdoctoral Fellow Mavra Ahmed and PhD student Nadia Flexner, Hu’s presentation at the DSI Research Day signals a shift in the landscape of food classification and health policy.  

“Using cutting-edge language models and machine learning, we’ve automated food categorization, nutrition quality scoring and food processing level classification,” says Hu. “This streamlines food analysis and holds promise for swift, scalable monitoring of the global food supply, particularly in identifying ultra-processed foods.” 

Leveraging pre-trained language models and the XGBoost multi-class classification algorithm, Hu’s methodology achieved an impressive accuracy score of 0.98 in predicting both major- and sub-category classifications of foods, outperforming traditional bag-of-words methods and presenting a powerful tool for efficiently determining food categories and food processing levels.  

“The research holds the potential to expedite the monitoring and regulation of ultra-processed foods in the global food supply, offering a transformative impact on public health and regulatory practices,” says Professor L’Abbé. 

This research is part of a DSI Catalyst Grant project, Using deep learning and image recognition to develop AI technology to measure child-directed marketing on food and beverage packaging and investigate the relationship between marketing, nutritional quality and price, awarded to L’Abbé and Professors David Soberman (Joseph L. Rotman School of Management), Laura Rosella (Dalla Lana School of Public Health), and Steve Mann (Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering). The Collaborative Research Team includes trainees such as Hu. 

By refining food analysis and offering a better method for policymakers to monitor and regulate UPFs, Hu especially hopes to improve public health and dietary understanding in countries where highly processed foods contribute significantly to daily energy intake, such as Canada, the United States and Argentina, where Hu has applied her work. 

Her just-completed research, though, is simply a first step. “Much like the continual evolution of technology,” says Hu, “our work demands continuous development and evolution in this pioneering field.” 

In the meantime, Hu’s work underscores the potential of machine learning and natural language processing in nutrition sciences and the interdisciplinary nature of such breakthroughs, reflecting the importance of Data Sciences Institute grants in fostering collaborative research. 

As a collaborative community, the DSI promotes innovation and facilitates the exchange of ideas, connecting diverse groups of researchers and trainees spanning various disciplines. One of the many ways that trainees can get involved is through the DSI’s Postdoctoral Fellowship, designed to support multi- and interdisciplinary training and collaborative research in data sciences. 

The Interdisciplinary Work Forging a Path between Causal Inference and Policy

By Kate Baggott 

“Causal inference is hard.”  

That’s not a conclusion. It’s an observation Rahul G. Krishnan was brave enough to make at the Forging a Path: Causal Inference and Data Science for Improved Policy Workshop on November 10 to an audience of over 100 faculty, students and participants from external organizations.  

The difficulty of causal inference is not a matter of methodological rigour or reporting. The difficulty comes from the interdisciplinary nature of the process. The community doing causal inference is not one community, Krishnan reminded those present. Rather, causal inference is a process that many communities (biostatisticians, economists, epidemiologists, computer scientists, and data scientists, among others) engage in to make decisions and form policies.  

“Among these communities, different language is used to describe the same phenomenon,” Krishnan said.  

The workshop was created to bring together practitioners from multiple disciplines who employ a variety of methodologies. The Data Sciences Institute funds the Causal Inference Emerging Data Science Program and held the workshop in collaboration with the Forward Society (FOS) Lab. The program was initiated by the University of Toronto’s Linbo Wang (Department of Statistical Sciences, University of Toronto Scarborough), Gustavo J. Bobonis (Department of Economics, Faculty of Arts & Science), Ismael Mourifié (Department of Economics, Faculty of Arts & Science), and Raji Jayaraman (Department of Economics, Faculty of Arts & Science). The workshop was the first of three workshops and a seminar series planned over the next two years of the program. 

The challenge put to participants was not to create a common language, but to create a shared understanding of how to manage the reams of data collected on human activity and explain them in ways that help policymakers improve their decision-making in all areas, from public health to education, and from social security to law and justice.  

Throughout the presentations from practitioners, there was an emphasis on description, shared definitions, and clear communication when working with decision-makers. 

Econometrician and empirical microeconomist Alberto Abadie (MIT Economics) talked about estimating the value of evidence-based decision-making (EBDM) itself in his keynote presentation.  

“Despite the ubiquity of EBDM, we are unaware of empirical tools that organizations can use to assess the value of their EBDM practices,” he reminded attendees of the workshop. “Part of the challenge in evaluating the value of EBDM is that it requires a description of what organizations will do with and without various amounts of evidence that they can choose to generate at some cost.” 

Professor Elizabeth Halloran (Fred Hutchinson Cancer Center) is a world leader in using mathematical and statistical methods to study infectious diseases and a pioneer in the design and analysis of vaccine studies.  

“Important examples of global public health policies where causal inference with interference can make a difference include vaccines and vaccination programs,” she reminded participants.  

Causal estimates demonstrating indirect effects of intervention programs, she said, can make policies in all fields more cost-effective. 

The workshop concluded with a student-led roundtable discussion where Vahid Balazadeh, Sonia Markes, Stephen Tino, Dario Toman, and Atom Vayalinkal outlined next steps in the efforts to bring together causal inference and data sciences communities.