Report from Workshop on Public Service Media and Cultural and Social Perspectives on Machine Learning
On March 5th, Katrine F. Baunvig, Georgina Born and Kristoffer Nielbo hosted a workshop on Public Service Media and Cultural and Social Perspectives on Machine Learning. The purpose of the workshop was to explore possibilities for future collaboration and to address whether it is timely to develop a collaborative initiative between academics and PSM professionals.
The workshop took place at the Aarhus Institute for Advanced Studies (AIAS), Aarhus University, and consisted of a full day of presentations followed by plenary discussions.
Session 1: Commonality, Personalization, Curation and the Challenges to Public Service Media
This session began by introducing recent research on translating normative principles from public service media (PSM) into algorithmic design via a new ‘commonality’ metric, dwelling on the methodological challenges involved – a project now in dialogue with the BBC. It then turned to a collaboration involving the Australian ABC aimed at overcoming the limitations of personalisation while integrating public service values, and finally to recent developments at DR, the Danish Broadcasting Corporation.
- Georgina Born (University College London) & Fernando Diaz (Carnegie Mellon Univ. & Google) – Translating Values Into Design – the Commonality Metric and Its Methodological Basis
We will outline the foundations of our research translating public service media principles into the design of a ‘commonality’ metric for recommender systems, intended to measure shared (or ‘universal’) exposure across a population of users to progressive editorial interventions – specifically, the promotion of diverse cultural content in the service of strengthening cultural citizenship. The result of sustained interdisciplinary collaboration, commonality complements existing metrics and suggests the need for non-personalized interventions in recommender systems attuned to wider cultural processes. We are in the early stages of a dialogue about testing it experimentally with the BBC and Radio Canada.
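To make the idea concrete, the sketch below shows one minimal way such a metric could be computed. It is an illustration under simplifying assumptions, not the published definition: it treats commonality as the fraction of users whose top-k recommendations expose them to at least one item from each editorially promoted category, averaged over categories. The function and variable names (`commonality`, `recommendations`, `category_items`) are hypothetical.

```python
def commonality(recommendations, category_items, k=10):
    """Illustrative commonality score: for each editorially promoted
    category, the fraction of users exposed to at least one of its
    items within their top-k recommendations, averaged over categories.

    recommendations: dict mapping user_id -> ranked list of item_ids
    category_items:  dict mapping category name -> set of item_ids
                     that editors want the whole population to share
    """
    n_users = len(recommendations)
    if n_users == 0 or not category_items:
        return 0.0
    per_category = []
    for items in category_items.values():
        exposed = sum(
            1 for ranking in recommendations.values()
            if any(item in items for item in ranking[:k])
        )
        per_category.append(exposed / n_users)
    # Averaging over categories rewards systems that give the whole
    # population shared exposure to every promoted category, unlike
    # engagement metrics that reward concentrating on popular content.
    return sum(per_category) / len(per_category)


# Hypothetical toy example: two of three users see a promoted drama,
# one sees a promoted documentary -> (2/3 + 1/3) / 2 = 0.5
recs = {"u1": ["a", "b"], "u2": ["c", "b"], "u3": ["d", "e"]}
promoted = {"danish_drama": {"b"}, "documentary": {"d"}}
print(commonality(recs, promoted, k=2))  # 0.5
```

The contrast with a personalized relevance metric is the point: a score like this is maximized by exposing everyone to some of the same curated content, which is precisely the kind of non-personalized intervention the abstract describes.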
- Mark Andrejevic (Monash University) – An Editorial Value Algorithm for Public Service Media: Australia’s ABC
This is a developing collaboration between the ABC (the Australian public broadcaster) and the ARC Centre for Automated Decision Making and Society (a government-funded research consortium). The collaboration focuses on both internal editorial processes and automated forms of curation and distribution. The goal is twofold: to develop a model for content distribution on ABC platforms that overcomes the limitations of personalisation while integrating public service values, and to use the algorithm for internal editorial processes, including story commissioning. This project is still in its very early stages, but I will map out the steps the project will follow and the challenges we face.
- Kåre Vedding (Danish Broadcasting Corporation, DR) – AI Public Service in DR
New technologies and digitalization have always been both a blessing and a curse for PSM. How and when to embrace them is the million-dollar question. Is AI just another round of this, or are we playing a whole new ball game? Vedding will discuss how DR embraces AI and what its considerations are, both for internal use and where Danish citizens are involved.
Session 2: Applications of the Commonality Metric to News Provision – and Beyond
2024 is a record year for elections around the world. Given the alarming levels of disinformation circulating in recent elections, we ask whether and how the commonality metric could be applied to news provision by PSMs and by other responsible news organisations committed to principles of impartiality, accuracy, balance and diversity – and also to other kinds of content. We also address whether the commonality metric might be useful for supporting local democracy and encouraging the formation of publics, as well as the trade-offs between popularity and public service value when deploying personalized recommender systems.
- Sanne Vrijenhoek (University of Amsterdam) – Diversity and Commonality – Two Sides of the Same Coin?
At the University of Amsterdam, we have been working on conceptualizing metrics that evaluate news recommender systems on their normative diversity – that is, on whether they inform users and enable them to play an active role in democratic society. However, diversity is an ambiguous and multi-faceted concept, and may mean something different depending on the purpose of the recommender system it is built into. In this talk I briefly describe our approach to conceptualizing diversity and compare it to the commonality metric. We will discuss similarities, differences and, most importantly, how and where the two metrics can complement each other.
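As a rough illustration of the kind of diversity metric at issue, the sketch below compares the topic mix of a user's recommendations against the topic mix of the full article pool using KL divergence. This is a deliberately simplified stand-in (the Amsterdam work develops rank-aware, normatively grounded variants); the names (`calibration_gap`, `topics`) are hypothetical.

```python
import math
from collections import Counter

def topic_distribution(article_ids, topics):
    """Normalized topic frequencies over a list of article ids.
    `topics` maps article_id -> topic label."""
    counts = Counter(topics[a] for a in article_ids)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), smoothing topics absent from q to avoid log(0)."""
    return sum(pv * math.log(pv / q.get(t, eps)) for t, pv in p.items())

def calibration_gap(recommended, pool, topics):
    """Distance between the topic mix a user is shown and the topic
    mix available in the pool: 0 means the recommendations mirror
    the pool's diversity exactly; larger values indicate a narrower
    or skewed selection."""
    return kl_divergence(
        topic_distribution(recommended, topics),
        topic_distribution(pool, topics),
    )
```

A per-user divergence like this captures one sense of diversity (exposure relative to supply), while commonality is a property of the whole population's exposure; that difference in scope is one reason the two metrics can complement rather than replace each other.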
- Anna Schjøtt Hansen (University of Amsterdam) – Towards a ‘Publicist’ Use of News Recommenders: How the Commonality Metric Might Address Three Recurrent Challenges
This talk will consider how the commonality metric might address three recurrent challenges for ‘publicist’ media, based on insights from fieldwork amongst teams developing recommender systems at a large Danish regional media organisation and at the BBC. It will particularly discuss how the commonality metric (1) could be a doorway to supporting local democracy, (2) could help to serve underserved audiences and deliver on universality (and growth), and (3) could help to better monitor the ‘global’ effect of recommenders on media consumption.
- Oli Elliott (BBC Public Value team) – Measuring Public Purposes in BBC Digital Products
The BBC’s digital products (iPlayer, Sounds, News & Sport) are becoming increasingly personalised and led by user-engagement metrics. To complement this work, we are developing metrics to measure the BBC’s public purposes. We hope this will help editorial, product and data science practitioners make informed trade-offs between popularity and public service value when each audience member sees different content. This talk will share our work so far and speculate on the applications of commonality in BBC News and BBC Sounds.
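One simple way to make such a popularity/public-service trade-off explicit is a blended ranking score with a tunable weight, sketched below. This is not the BBC's method, just a minimal illustration; `engagement`, `public_value` and `alpha` are hypothetical names, and both scores are assumed to be normalized to [0, 1].

```python
def blended_score(engagement, public_value, alpha=0.7):
    """Convex combination of a predicted-engagement score and an
    editorial public-value score. alpha = 1.0 reproduces a purely
    engagement-led ranking; lowering alpha shifts weight towards
    public purposes, making the trade-off visible and tunable."""
    return alpha * engagement + (1 - alpha) * public_value

def rerank(candidates, alpha=0.7):
    """candidates: list of (item_id, engagement_score, public_value_score)."""
    return sorted(
        candidates,
        key=lambda c: blended_score(c[1], c[2], alpha),
        reverse=True,
    )

# Hypothetical example: at alpha=0.7 the more clickable item stays on
# top; at alpha=0.5 the high-public-value documentary overtakes it.
items = [("celebrity_clip", 0.9, 0.1), ("documentary", 0.5, 0.9)]
print(rerank(items, alpha=0.7))  # celebrity_clip first (0.66 vs 0.62)
print(rerank(items, alpha=0.5))  # documentary first   (0.70 vs 0.50)
```

The value of even a toy formulation like this is that it turns an implicit editorial judgement into an explicit, auditable parameter that editorial, product and data science teams can reason about together.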
Session 3: Normative Perspectives on Contemporary Language Technologies
Bias and cultural alignment are fundamental quality challenges in large language models (LLMs). These issues stem from the data on which models are trained and from insufficient validation. Without careful attention and corrective measures, LLMs will perpetuate or amplify these issues, affecting fairness and equity across a range of applications. In this session we want to go beyond simple corrective procedures for de-biasing models and discuss a multifaceted approach, including diverse data collection, ethical AI practices, and continuous evaluation of LLMs for bias and fairness.
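As one concrete example of the continuous-evaluation component, the sketch below shows a minimal counterfactual probe: fill templates with terms for two groups and compare the model's scores for the resulting sentence pairs. This is an illustration, not a complete fairness audit; `score` stands for any model-derived scoring function (e.g. a sentiment score or a log-likelihood), and all names and templates are hypothetical.

```python
def paired_bias_gap(templates, group_terms, score):
    """Mean score gap across templates filled with terms for two
    groups. `score` is any callable mapping a sentence to a number.
    A mean gap far from zero suggests the model systematically
    treats the two groups differently on these templates."""
    gaps = []
    for template in templates:
        a, b = (score(template.format(term=t)) for t in group_terms)
        gaps.append(a - b)
    return sum(gaps) / len(gaps)

# Hypothetical usage with a scoring function of your choice:
# templates = ["{term} are good at science.", "{term} make reliable leaders."]
# gap = paired_bias_gap(templates, ("Women", "Men"), score=my_sentiment_model)
```

Template probes of this kind only surface the biases the evaluator thought to test for, which is exactly why the session pairs them with diverse data collection and ongoing evaluation rather than treating de-biasing as a one-off fix.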
- Rebekah B. Baglini – Contemporary NLP
This talk covers recent developments in large language models, focusing particularly on the implications of their increased accessibility and on sources of disconnect between the cultural and value alignments of industry AI developers and the interests of the public.
- Kenneth Enevoldsen & Lasse Hansen – National Language Models
This session delves into the state of the art in Danish Natural Language Processing (NLP). It will begin with an introduction to developments in the national generative AI effort, followed by an in-depth presentation of the collaborative efforts between the Alexandra Institute and the Center for Humanities Computing at Aarhus University to build a Danish-first LLM.
- Ida Marie Schytt Lassen – Silencing in Automated Systems
In this presentation, Lassen will examine how bias and marginalization in recommender systems can be illuminated through the framework of epistemic injustice, with a focus on silencing. Conceptualizing cultural content as epistemic content allows us to see the normative issues in recommender systems as an epistemic concern and not only as a distributive injustice. Through the concept of commonality, we can see epistemic participation as cultural citizenship, emphasizing the importance of inclusive representation. A substantial part of the literature on the philosophy of algorithms deals with issues related to the opacity of data-driven systems. Lassen’s work takes a distinct approach, directing attention instead towards data science practices in order to explore sources of epistemic injustice.