The program below is provisional, so please check for updates.
Each paper presentation is allocated 15 minutes INCLUDING time for questions. We recommend that presenters speak for 10-12 minutes to allow a few minutes for questions. The paper sessions will be strictly timed so that we don’t overrun. Presenters will also be asked to submit their presentation slides in advance and to be in the room ahead of their session (e.g. during the preceding break period) to meet the session Chair. Further details will be provided to presenters in due course.
The proceedings are now available in the Digital Library.
-
ECR Event
Join us for the Early Career Researcher Event at the TAS Symposium, a special gathering designed to connect, inspire, and support PhD students, postdoctoral researchers and early career academics in trustworthy autonomous systems. The event will include invited speakers and discussion activities. Participants will have opportunities to network, socialise, gain mentorship insights and share their experiences in a more relaxed and informal setting.
Please sign up for the ECR Event during symposium registration.
-
Morning workshops
Please register for a workshop during symposium registration—there is no extra cost.
-
Trustworthy AI for Enhancing Decision-Making in Healthcare
As AI technology advances, its role in healthcare decision-making becomes increasingly prominent, necessitating a focus on trustworthiness and collaboration. This workshop on AI and Healthcare is dedicated to building community and exploring new partnerships and project opportunities in this important area. First, we will introduce the TAME Pain case study, showcasing how AI-driven solutions can improve pain management through successful collaborations in the UK and US, highlighting both the challenges faced and the impactful results achieved. Then, we will introduce the HEAD collaboration and its mission to foster interdisciplinary research and innovation in AI and healthcare. We will discuss three innovative projects that emerged from the recent summer residency program, detailing their objectives, methodologies, and potential impacts. Each presentation will include a feedback and discussion session, fostering engagement and insights into the future of AI in healthcare. This workshop will provide a platform for knowledge exchange and the formation of meaningful collaborations.
-
Coming Together: Addressing Ethical AI with Diverse Teams and Perspectives
It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept relating to autonomous systems, as outlined in AI Ethics guidelines, frameworks, and regulation. While there is broad agreement on the importance of trust, and there are common key principles, there is no agreement on what defines trust or on how to develop, design and deploy trustworthy systems. Furthermore, different disciplines approach trust in different ways. This workshop aims to facilitate interactive discussions on how to address issues of trustworthiness in autonomous AI systems through an interdisciplinary lens. Workshop participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following a presentation, participants will be encouraged to approach AI from a set of diverse lenses, with the goal of reaching consensus on key ethical issues through a case study challenge. This event is designed to appeal broadly to researchers from various backgrounds (both technical and non-technical) working on or interested in issues at the intersection of AI and ethics.
-
Automating Telepresence and Computer-mediated Communication: Identifying Implications and Challenges
The implementation of autonomous features in computer-mediated communication and telepresence technologies (from videoconferencing to VR and robotic telepresence) is a growing trend. While automation in such technologies can offer many benefits (e.g., faster communication and reduced mental workload), it can also reduce users' agency over how they present themselves in social contexts. In this workshop, we will identify and discuss the ethical considerations, as well as implications relating to usability and accessibility, that automation introduces when used in systems for remote communication. This will be a half-day, hybrid workshop consisting of brainstorming and guided discussion activities, aiming to initiate a conversation regarding the multifaceted implications of automation in this field and set directions for future work. Please visit [our website](https://sites.google.com/view/tas24automatingtelepresence/home) for more information on the workshop and how you can participate!
-
-
Lunch (12:30–13:30)
-
Afternoon workshops
Please register for a workshop during symposium registration—there is no extra cost.
-
Human-AI Alignment: Developing a research agenda by bridging interdisciplinary approaches
With the increased capability and proliferation of AI systems across various domains, ensuring that these systems align with human intentions and values is crucial. Research across disciplines, such as computer science, human-computer interaction, philosophy, and policy, usually targets one aspect of AI alignment, leading to a siloed understanding of its challenges. This workshop aims to foster a comprehensive understanding of human-AI alignment through the integration of diverse disciplinary perspectives.
-
Responsible Internet Futures
The Internet is of significant interest to research communities across academic disciplines and across the globe. Areas of current focus include digital divides, online privacy, online wellbeing, governance, censorship, and information propagation. It is essential our communities come together to consider how it might be possible to foster responsible Internet futures. This involves identifying the opportunities and challenges regarding a safe, equitable, accessible and useful Internet. It also requires taking a global perspective.
In this half-day workshop we bring together interested researchers to suggest and debate visions for responsible Internet futures. We will combine invited speaker presentations with discussion sessions and we seek to co-create an initial pathway towards responsibility as part of these discussions. All TAS’24 Symposium attendees are welcome to join the workshop.
Outline of the event:
1.30pm: Welcome and Introduction
1.45pm: Provocations. Four invited speakers will give their responses (10-15 mins each) to either or both of the following questions:
What could a responsible Internet look like?
What is a suitable pathway towards achieving a responsible Internet?
The speakers are:
Robin Wilton, Director for Internet Trust, the Internet Society
Sharon Strover, Philip G. Warner Regents Professor in Communication, University of Texas at Austin
Thiago Guimaraes Moraes, PhD candidate at Universidade de Brasilia (UnB) and Vrije Universiteit Brussels, and Coordinator of Innovation and Research at the Brazilian Data Protection Authority
Jeremie Clos, Assistant Professor of Computer Science, University of Nottingham
3.10pm: Break
3.30pm: Small group discussions: what is possible for a responsible Internet, who needs to be involved, what are the obstacles, how might they be overcome?
4.10pm: Plenary discussion and next steps
4.45pm: Close
The workshop is being run within the UKRI-funded Responsible AI UK project ‘TAS-Hub and Good Systems Strategic Collaboration’. Two further online workshops on Responsible Internet Futures are planned within the project for 2025. Interested attendees at the TAS’24 workshop will be invited to attend these subsequent workshops too.
To find out more about the workshop, please contact helena.webb@nottingham.ac.uk.
-
A Hands-on Workshop for Responsible Research and Innovation
Responsible Research and Innovation (RRI) is a continuous process of anticipating how research and innovation outcomes and processes may affect people and the environment in the future, and acting in the present to gain the most benefit, minimise risks, and avoid harm. A gap remains between the theory and practice of RRI. Through collaborative activities using Responsible Innovation Prompts and Practice Cards with case studies, attendees will gain knowledge and hands-on experience in systematically identifying responsibility challenges, reflecting on them, and making action plans to ensure inclusive practices, foster ethical and responsible decision-making, and embed RRI in projects.
-
The regulation of workplaces in the age of collaborative robotics: towards trustworthy embodied autonomous systems
Since the first industrial revolution, workplaces have been a highly regulated and governed area of activity. From the early development of health and safety law to developments around working time, the relationship between humans, their employers and their fellow employees has been an important area of intervention.
When static robots were introduced onto production lines, they were required to be guarded like any other tool. However, with the development of robotics and the embodiment of artificial intelligence in robots made to collaborate, the old models of regulating robots are outdated. Human-robot collaboration has the potential to make a huge contribution to the future economy, enabling manufacturing processes that bring together the best of both humans and robots. For this future to be realised, we have to ensure that the appropriate regulatory framework is in place, enabling workers to both feel and be safe, and giving businesses the confidence to introduce collaborative robots into their workplaces.
This workshop aims to explore the challenges of regulating a workplace that uses (or wishes to use) collaborative robots. It seeks to identify the issues that require further research or would benefit from consideration by policy-makers.
-
-
Welcome Reception
Join us for a reception the night before the main symposium activities kick off.
-
The main symposium activities start tomorrow!
-
Registration
Register and collect your badges for TAS '24. Refreshments on arrival.
-
Welcome to TAS '24!
A welcome from the TAS '24 General Chairs, Liz Dowthwaite and Justin Hart.
-
The TAS Program and its legacy
-
Dr Kate Devlin will give a keynote address about the purpose and legacy of the Trustworthy Autonomous Systems Program. Dr Devlin is a Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King's College London. With an undergraduate degree in archaeology (Queen's University Belfast) and an MSc (Queen's University Belfast) and PhD (University of Bristol) in computer science, her research investigates how people interact with and react to technologies, both past and future. Kate is the author of the critically acclaimed book Turned On: Science, Sex and Robots (Bloomsbury, 2018), which examines the ethical and social implications of technology and intimacy.
-
-
Refreshment break (10:30–10:50)
-
Human-Machine/AS interaction
-
Negotiating Autonomy and Trust when Performing with an AI Musician
Steve Benford, Marco Amerotti, Bob Sturm and Juan Martinez Avila
-
Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users’ Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools
Cheng Chen, Sangwook Lee, Eunchae Jang and S. Shyam Sundar
-
Fostering Trust Through User Interface Design in Multi-Drone Search and Rescue
Johanna Ahlskog, Maria-Theresa Bahodi, Artur Lugmayr and Timothy Merritt
-
Show Me What’s Wrong: Impact of Explicit Alerts on Novice Supervisors of a Multi-Robot Monitoring System
Maria-Theresa Bahodi, Niels van Berkel, Mikael Skov and Timothy Merritt
-
-
Lunch and poster session (12:00–13:30)
-
Poster panel
Extended Abstract Posters
-
Design and Evaluation of a Tool to assist Small-Medium Organisations (SMOs) to implement Automated Decision-Making (ADM)
Kathryn Baguley, Joel Fischer and Richard Hyde
-
An Interdependence Frame for (Semi) Autonomous Robots: The Case of Mobile Robotic Telepresence
Andriana Boudouraki and Gisela Reyes-Cruz
-
Investigating the Impact of Generative AI on Students and Educators: Evidence and Insights from the Literature
Jeremie Clos and Yoke Yie Chen
-
Designing and Evaluating a Discourse Analysis Dashboard
Xin Yu Liew, Nazia Hameed, Jeremie Clos and Joel Fischer
-
Responsibility and Regulation: Exploring Social Measures of Trust in Medical AI
Glenn McGarry, Andrew Crabtree, Alan Chamberlain and Lachlan D Urquhart
-
The Final 100 Meters: Touch and Joystick Controls for Guiding Autonomous Vehicles
Timothy Merritt, Eleftherios Papachristos, Victor Barsted, Hogir Hiva Ibrahim Sabir and Peter Højer Holdensgaard
-
Sound Strategies for Safe Driving: Exploring Auditory Interventions to Counteract Passive Driver Fatigue
Eleftherios Papachristos, Timothy Merritt, Eike Schneiders, David Jahanshiri, Alef Pir and Andrei Ciobanu
-
Responsibility Statement on research project outputs - to who and what for?
Virginia Portillo, Helena Webb, Peter Craigon, Robin Wilton, Liz Dowthwaite and Ephraim Luwemba
-
Reimagining the Design of Mobile Robotic Telepresence: Reflections from a Hybrid Design Workshop
Gisela Reyes-Cruz, Juan Martinez Avila, Eike Schneiders and Andriana Boudouraki
-
Meta-Meme: a Responsible Researcher's Tool for the analysis of Internet Memes
Giovanni Schiazza, Helena Webb, Jeremie Clos, Patrick Brundell and Annemarie Walter
-
A Survey of Lay People's Willingness to Generate Legal Advice using Large Language Models (LLMs)
Tina Seabrooke, Eike Schneiders, Liz Dowthwaite, Joshua Krook, Natalie Leesakul, Jeremie Clos, Horia Maior and Joel Fischer
-
Uneven Eyes: The Impact of Inconsistent Local Surveillance Policies on Public Trust
Sharon Strover, Sheila Lalwani and Azza El-Masri
-
Responsible AI in policing
Helena Webb, Nicholas Fitzroy-Dale, Saamiya Aqeel, Anna Maria Piskopani, Quentin Stafford-Fraser, Christos Nikolaou, Liz Dowthwaite, Derek McAuley and Christopher Hargreaves
-
Examining the Feasibility of AI-Generated Questions in Educational Settings
Omar Zeghouani, Zawar Ali, William Simson van Dijkhuizen, Jia Wei Hong and Jeremie Clos
Non-Archival Posters
-
A Facilitated Residential Approach: Coalescing Collaborative Research Projects for Health Equity in AI Decisions - HEAD Residency 2024
Pepita Barnard
-
Digital Placemaking and Co-Constitutive Evolution: The Role of Virtual Reality in How People Form a Sense of Place
Takayuki Suzuki
-
U-Trustworthy Models. Reliability, Competence, and Confidence in Decision-Making
Ritwak Vashistha
-
TAME Pain: Trustworthy AssessMEnt of Pain from Speech and Audio for the Empowerment of Patients
Tu-Quyen Dao
-
Utilising AI to measure the unmeasurable: Trust in a black-box solution to measure citizen science impact
James Sprinks
-
Guiding Concepts for Responsible AI
Minha Lee and Joel Fischer
-
Health AI Platform for Decision Support toward Equitable Delivery of Healthcare to Multi-Factor Isolated Communities
Matt Kammer-Kerwick
-
Creating a dynamic archive of responsible ecosystems in the context of creative AI
Pat Brundell, Oliver Miles, Megan Drury, Lydia Farina, Helena Webb, Steve Benford, Bernd Stahl, Craig Vear, Elvira Perez, Spencer Jordan, Gabriella Giannachi and John Moore
-
-
Refreshment break (14:30–14:50)
-
Robots and robotics
-
Human-Robot Interaction Experiment: Minor Changes; Significant Differences
Zahra Rezaei Khavas and Paul Robinette
-
Swift Trust in Mobile Ad Hoc Human-Robot Teams
Sanja Milivojevic, Mehdi Sobhani, Nicola Webb, Zachary Madin, James Ward, Sagir Yusuf, Chris Baber and Edmund Hunt
-
Mapping Safe Zones for Co-located Human-UAV Interaction
Ayodeji Abioye, Lisa Bidgood, Sarvapali Ramchurn and Mohammad Soorati
-
Trust Transfer in Robots between Task Environments
Theresa Law, Meia Chita-Tegmark and Matthias Scheutz
-
-
Refreshment break (15:40–16:00)
-
Governance and regulation
-
Ethical AI Regulatory Sandboxes: Insights from cyberspace regulation and Internet governance
Thiago Moraes
-
I can do anything with my AV data (but I won’t do that): Public attitudes towards data recorders in self-driving cars
Jo-Ann Pattinson, Carolyn Ten Holter, Keri Grieman, Pericle Salvini, Lars Kunze and Marina Jirotka
-
Not the Law's First Rodeo: Towards regulating trustworthy collaborative industrial embodied autonomous systems
Natalie Leesakul, Jeremie Clos and Richard Hyde
-
Trustworthy Airspaces of the Future: Hopes and concerns of experts regarding Uncrewed Traffic Management systems
Harriet Cameron, Neil McBride, Paschal Ochang and Bernd C. Stahl
-
-
Close of Day 1
-
Symposium banquet (optional)
Join us for a banquet at the Texas Science and Natural History Museum. Included in symposium registration.
-
Morning refreshments (09:00–09:30)
-
Human-Centered approaches
-
What’s missing from this picture? Ethical, legal, and practical challenges for autonomous-vehicle data-recorders
Carolyn Ten Holter, Lars Kunze, Jo-Ann Pattinson, Pericle Salvini, Jonathan Attias and Marina Jirotka
-
Encoding Social & Ethical Values in Autonomous Navigation: Philosophies Behind an Interactive Online Demonstration
Yun Tang, Luke Moffat, Weisi Guo, Corinne May-Chahal, Joe Deville and Antonios Tsourdos
-
"Trust equals less death - it's as simple as that": Developing a Socio-technical Framework for Trustworthy Defence and Security Automated Systems
Asieh Salehi Fathabadi and Pauline Leonard
-
Brokerbot: A Cryptocurrency Chatbot in the Social-technical Gap of Trust
Minha Lee, Lily Frank and Wijnand IJsselsteijn
-
-
Refreshment break (10:40–11:00)
-
Toward Trustworthy Automation: Ensuring Human Agency with Warranted Cues and Interactive Actions
-
This keynote talk will discuss psychological aspects of autonomous systems by drawing out the tension between machine agency and human agency, especially as they play out in the context of online media platforms. While automated features offer many conveniences, they also threaten our privacy, promote addictive use, and lead us down rabbit holes of extreme content, making us vulnerable to misinformation. If users are to avoid such negative consequences, they will need to be more deliberate and mindful in their interactions, which might detract from the very purpose of automation. This poses a design challenge, which could be addressed by making automation a technological affordance, and drawing upon concepts and mechanisms from the speaker’s model of Human-AI Interaction based on his Theory of Interactive Media Effects (HAII-TIME), as we attempt to build socially responsible trust in autonomous systems.
S. Shyam Sundar (PhD, Stanford University) is Evan Pugh University Professor and James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory, and Director of the Center for Socially Responsible Artificial Intelligence at Penn State University. Prof. Sundar is a theorist as well as an experimentalist. His theoretical contributions include several original models on the social and psychological consequences of communication technology, such as the Modality-Agency-Interactivity-Navigability (MAIN) Model and the Theory of Interactive Media Effects (TIME), along with its extension to Human-AI Interaction (HAII-TIME). His research examines the social and psychological effects of interactive media, specifically the role played by technological affordances in shaping user experience of mediated communications. Current research pertains to psychological effects of Human-AI interaction in media contexts, ranging from personalization and recommendation to fake news and content moderation. His research portfolio includes extensive examination of user responses to online sources, including machine sources such as chatbots and smart speakers. His research is supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH), among others. He is editor of the first-ever Handbook of the Psychology of Communication Technology (Blackwell Wiley, 2015). He served as editor-in-chief of the Journal of Computer-Mediated Communication from 2013 to 2017.
-
-
Lunch (posters available) (12:00–13:00)
-
AS risk and failure
-
A Taxonomy of Domestic Robot Failure Outcomes: Understanding the impact of failure on trustworthiness of domestic robots
Harriet R. Cameron, Simon Castle-Green, Muhammad Chughtai, Liz Dowthwaite, Ayse Kucukyilmaz, Horia Maior, Victor Ngo, Eike Schneiders and Bernd C. Stahl
-
A risk-based trust framework for assuring the humans in human-machine teaming
Zena Assaad
-
A Multimethod Analysis of US Perspectives towards Trustworthy Autonomous Systems
Pepita Barnard, Andriana Boudouraki and Jeremie Clos
-
Supporting Ethical Decision-Making for Lethal Autonomous Weapons
Spencer Kohn, Marvin Cohen, Athena Johnson, Mikhail Terman, Gershon Weltman and Joseph Lyons
-
-
Refreshment break (14:10–14:30)
-
The Limits of Automation
-
Machine learning is increasingly being used to inform decision-making in high-stakes settings. Crucial questions in such contexts are: What can or should be automated? What is the role of human decision-makers in the era of AI? How to design systems for human-AI complementarity? In this talk, I will focus on a fundamental reason for why certain decisions cannot and should not be automated: what we can predict is often not what we care about. I will provide conceptual groundings for characterizing this problem and an empirical illustration of its consequential implications. I will conclude by offering a path forward by proposing an affordance-based perspective for the design and evaluation of AI capabilities.
Maria De-Arteaga is an Assistant Professor in the Information, Risk and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of her work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves on the Executive Committee of the ACM FAccT Conference.
-
-
Refreshment break (15:15–15:35)
-
Building trust in automation
-
Technology for Environmental Policy: Exploring Perceptions, Values, and Trust in a Citizen Carbon Budget App
Liz Dowthwaite, Gisela Reyes-Cruz, Yang Lu, Justyna Lisinska, Peter Craigon, Anna-Maria Piskopani, Elnaz Shafipour, Sebastian Stein and Joel Fischer
-
When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems
Cheng Chen, Mengqi Liao and S. Shyam Sundar
-
LOOM: a Privacy-Preserving Linguistic Observatory of Online Misinformation
Jeremie Clos, Emma McClaughlin, Pepita Barnard, Tino Tom and Sudarshan Yajaman
-
Measurable Trust: The Key to Unlocking User Confidence in Black-Box AI
Puntis Palazzolo, Bernd Stahl and Helena Webb
-
-
Symposium ends