TAS '24 program

The program below is provisional, so please check for updates.

Each paper presentation is allocated 15 minutes INCLUDING time for questions. We recommend that presenters speak for 10-12 minutes to allow a few minutes for questions. The paper sessions will be strictly timed so that we don’t overrun. Presenters will also be asked to submit their presentation slides in advance and to be in the room ahead of their session (e.g. during the preceding break period) to meet the session Chair. Further details will be provided to presenters in due course.

The proceedings are now available in the Digital Library.

  • ECR
    Starts at 14:00
    Ends at 18:00

    ECR Event

    Join us for the Early Career Researcher Event at the TAS Symposium, a special gathering designed to connect, inspire, and support PhD students, postdoctoral researchers and early career academics in trustworthy autonomous systems. The event will include invited speakers and discussion activities. Participants will have opportunities to network, socialise, gain mentorship insights and share their experiences in a more relaxed and informal setting.

    Please sign up for the ECR Event during symposium registration.

  • Workshops
    Starts at 09:00
    Ends at 12:30

    Morning workshops

    Please register for a workshop during symposium registration—there is no extra cost.

    1. As AI technology advances, its role in healthcare decision-making becomes increasingly prominent, necessitating a focus on trustworthiness and collaboration. This workshop on AI and Healthcare is dedicated to building community and exploring new partnerships and project opportunities in this important area. First, we will introduce the TAME Pain case study, showcasing how AI-driven solutions can improve pain management through successful collaborations in the UK and US, highlighting both the challenges faced and the impactful results achieved. Then, we will introduce the HEAD collaboration and its mission to foster interdisciplinary research and innovation in AI and healthcare. We will discuss three innovative projects that emerged from the recent summer residency program, detailing their objectives, methodologies, and potential impacts. Each presentation will include a feedback and discussion session, fostering engagement and insights into the future of AI in healthcare. This workshop will provide a platform for knowledge exchange and the formation of meaningful collaborations.
    2. It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept relating to autonomous systems as is outlined in AI Ethics guidelines, frameworks, and regulation. While there is universal agreement on the importance of trust and there are common key principles, there is no agreement on what defines trust and how to develop, design and deploy trustworthy systems. Furthermore, different disciplines approach trust in various ways. This workshop aims to facilitate interactive discussions on how to address issues of trustworthiness in autonomous AI systems through an interdisciplinary lens. Workshop participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following a presentation, participants will be encouraged to approach AI from a set of diverse lenses with the goal of reaching consensus on key ethical issues through a case study challenge. This event is designed to broadly appeal to researchers from various backgrounds (both technical and non-technical) working on or interested in issues at the intersection of AI and ethics.
    3. The implementation of autonomous features in computer-mediated communication and telepresence technologies (from videoconferencing to VR and robotic telepresence) is a growing trend. Whilst automation in such technologies can offer many benefits (e.g., faster communication and reduced mental workload), it can also reduce the users' agency over how they present themselves in social contexts. In this workshop, we will be identifying and discussing the ethical considerations, as well as implications relating to usability and accessibility, that automation introduces when used in systems for remote communication. This will be a half-day, hybrid workshop consisting of brainstorming and guided discussion activities, aiming to initiate a conversation regarding the multifaceted implications of automation in this field and set directions for future work. Please visit [our website](https://sites.google.com/view/tas24automatingtelepresence/home) for more information on the workshop and how you can participate!
  • Lunch (12:30–13:30)
  • Workshops
    Starts at 13:30
    Ends at 17:00

    Afternoon workshops

    Please register for a workshop during symposium registration—there is no extra cost.

    1. With the increased capability and proliferation of AI systems across various domains, ensuring that these systems align with human intentions and values is crucial. Research across disciplines, such as computer science, human-computer interaction, philosophy, and policy, usually targets one aspect of AI alignment, leading to a siloed understanding of its challenges. This workshop aims to foster a comprehensive understanding of human-AI alignment through the integration of diverse disciplinary perspectives.
    2. The Internet is of significant interest to research communities across academic disciplines and across the globe. Areas of current focus include digital divides, online privacy, online wellbeing, governance, censorship, and information propagation. It is essential that our communities come together to consider how it might be possible to foster responsible Internet futures. This involves identifying the opportunities and challenges regarding a safe, equitable, accessible and useful Internet. It also requires taking a global perspective.

      In this half-day workshop we bring together interested researchers to suggest and debate visions for responsible Internet futures. We will combine invited speaker presentations with discussion sessions and we seek to co-create an initial pathway towards responsibility as part of these discussions. All TAS’24 Symposium attendees are welcome to join the workshop.

      Outline of the event:
      1.30pm: Welcome and Introduction
      1.45pm: Provocations. Four invited speakers will give their responses (10-15 mins each) to either or both of the following questions:
           What could a responsible Internet look like?
           What is a suitable pathway towards achieving a responsible Internet?

      The speakers are:
      Robin Wilton, Director for Internet Trust, the Internet Society
      Sharon Strover, Philip G. Warner Regents Professor in Communication, University of Texas at Austin
      Thiago Guimaraes Moraes, PhD candidate at Universidade de Brasilia (UnB) and Vrije Universiteit Brussels, and Coordinator of Innovation and Research at the Brazilian Data Protection Authority
      Jeremie Clos, Assistant Professor of Computer Science, University of Nottingham

      3.10pm: Break

      3.30pm: Small group discussions: what is possible for a responsible Internet, who needs to be involved, what are the obstacles, how might they be overcome?

      4.10pm: Plenary discussion and next steps

      4.45pm: Close

      The workshop is being run within the UKRI-funded Responsible AI UK project ‘TAS-Hub and Good Systems Strategic Collaboration’. Two further online workshops on Responsible Internet Futures are planned within the project for 2025. If interested, attendees at the TAS’24 workshop will be invited to attend these subsequent ones too.

      To find out more about the workshop, please contact helena.webb@nottingham.ac.uk.

    3. Responsible Research and Innovation (RRI) is a continuous process to anticipate how research/innovation outcomes and processes may affect people and the environment in the future, and to act in the present to gain the most benefit, minimise risks, and avoid harm. A gap remains between the theory and practice of RRI. Through collaborative activities using Responsible Innovation Prompts and Practice Cards with case studies, attendees will gain knowledge and hands-on experience in systematically identifying responsibility challenges, reflect on them, and make action plans to ensure inclusive practices, foster ethical and responsible decision-making, and embed RRI in projects.
    4. Since the first industrial revolution, workplaces have been a highly regulated and governed area of activity. From the early developments of health and safety law to development around working time, the relationship between humans, their employers and their fellow employees has been an important area of intervention.

      When static robots were introduced onto production lines, they were required to be guarded like any other tool. However, with the development of robotics and the embodiment of artificial intelligence in robots made to collaborate, the old models of regulating robots are outdated. Human-robot collaboration has the potential to make a huge contribution to the future economy, enabling manufacturing processes that bring together the best of both humans and robots. For this to become the future, we have to ensure that the appropriate regulatory framework is in place, to enable workers to both feel and be safe, and for businesses to have confidence in introducing collaborative robots into their workplaces.

      This workshop aims to explore the challenges of regulating a workplace that uses (or wishes to use) collaborative robots. It seeks to identify the issues that require further research or would benefit from consideration by policy-makers.

  • Reception
    Starts at 17:30
    Ends at 19:30

    Welcome Reception

    Join us for a reception the night before the main symposium activities kick off.

  • The main symposium activities start tomorrow!
  • Registration
    Starts at 09:00
    Ends at 09:30

    Registration

    Register and collect your badges for TAS '24. Refreshments on arrival.

  • Plenary
    Starts at 09:30
    Ends at 09:45

    Welcome to TAS '24!

    A welcome from the TAS '24 General Chairs, Liz Dowthwaite and Justin Hart.

  • Keynote
    Starts at 09:45
    Ends at 10:30

    The TAS Program and its legacy

    1. Dr. Kate Devlin will give a keynote address about the purpose and legacy of the Trustworthy Autonomous Systems Program. Dr. Devlin is a Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King's College London. With an undergraduate degree in archaeology (Queen's University Belfast) and an MSc (Queen's University Belfast) and PhD (University of Bristol) in computer science, her research investigates how people interact with and react to technologies, both past and future. Kate is the author of the critically acclaimed book Turned On: Science, Sex and Robots (Bloomsbury, 2018), which examines the ethical and social implications of technology and intimacy.
  • Refreshment break (10:30–10:50)
  • Papers
    Starts at 10:50
    Ends at 12:00

    Human-Machine/AS interaction

    1. Negotiating Autonomy and Trust when Performing with an AI Musician
      Steve Benford, Marco Amerotti, Bob Sturm and Juan Martinez Avila
    2. Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users’ Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools
      Cheng Chen, Sangwook Lee, Eunchae Jang and S. Shyam Sundar
    3. Fostering Trust Through User Interface Design in Multi-Drone Search and Rescue
      Johanna Ahlskog, Maria-Theresa Bahodi, Artur Lugmayr and Timothy Merritt
    4. Show Me What’s Wrong: Impact of Explicit Alerts on Novice Supervisors of a Multi-Robot Monitoring System
      Maria-Theresa Bahodi, Niels van Berkel, Mikael Skov and Timothy Merritt
  • Lunch and poster session (12:00–13:30)
  • Panel
    Starts at 13:30
    Ends at 14:30

    Poster panel

    Extended Abstract Posters
    1. Design and Evaluation of a Tool to assist Small-Medium Organisations (SMOs) to implement Automated Decision-Making (ADM).
      Kathryn Baguley, Joel Fischer and Richard Hyde
    2. An Interdependence Frame for (Semi) Autonomous Robots: The Case of Mobile Robotic Telepresence
      Andriana Boudouraki and Gisela Reyes-Cruz
    3. Investigating the Impact of Generative AI on Students and Educators: Evidence and Insights from the Literature
      Jeremie Clos and Yoke Yie Chen
    4. Designing and Evaluating a Discourse Analysis Dashboard
      Xin Yu Liew, Nazia Hameed, Jeremie Clos and Joel Fischer
    5. Responsibility and Regulation: Exploring Social Measures of Trust in Medical AI
      Glenn McGarry, Andrew Crabtree, Alan Chamberlain and Lachlan D Urquhart
    6. The Final 100 Meters: Touch and Joystick Controls for Guiding Autonomous Vehicles
      Timothy Merritt, Eleftherios Papachristos, Victor Barsted, Hogir Hiva Ibrahim Sabir and Peter Højer Holdensgaard
    7. Sound Strategies for Safe Driving: Exploring Auditory Interventions to Counteract Passive Driver Fatigue
      Eleftherios Papachristos, Timothy Merritt, Eike Schneiders, David Jahanshiri, Alef Pir and Andrei Ciobanu
    8. Responsibility Statement on research project outputs- to who and what for?
      Virginia Portillo, Helena Webb, Peter Craigon, Robin Wilton, Liz Dowthwaite and Ephraim Luwemba
    9. Reimagining the Design of Mobile Robotic Telepresence: Reflections from a Hybrid Design Workshop
      Gisela Reyes-Cruz, Juan Martinez Avila, Eike Schneiders and Andriana Boudouraki
    10. Meta-Meme: a Responsible Researcher's Tool for the analysis of Internet Memes
      Giovanni Schiazza, Helena Webb, Jeremie Clos, Patrick Brundell and Annemarie Walter
    11. A Survey of Lay People's Willingness to Generate Legal Advice using Large Language Models (LLMs)
      Tina Seabrooke, Eike Schneiders, Liz Dowthwaite, Joshua Krook, Natalie Leesakul, Jeremie Clos, Horia Maior and Joel Fischer
    12. Uneven Eyes: The Impact of Inconsistent Local Surveillance Policies on Public Trust
      Sharon Strover, Sheila Lalwani and Azza El-Masri
    13. Responsible AI in policing
      Helena Webb, Nicholas Fitzroy-Dale, Saamiya Aqeel, Anna Maria Piskopani, Quentin Stafford-Fraser, Christos Nikolaou, Liz Dowthwaite, Derek McAuley and Christoper Hargreaves
    14. Examining the Feasibility of AI-Generated Questions in Educational Settings
      Omar Zeghouani, Zawar Ali, William Simson van Dijkhuizen, Jia Wei Hong and Jeremie Clos
    Non-Archival Posters
    1. A Facilitated Residential Approach: Coalescing Collaborative Research Projects for Health Equity in AI Decisions - HEAD Residency 2024
      Pepita Barnard
    2. Digital Placemaking and Co-Constitutive Evolution: The Role of Virtual Reality in How People Form a Sense of Place
      Takayuki Suzuki
    3. U-Trustworthy Models. Reliability, Competence, and Confidence in Decision-Making
      Ritwak Vashistha
    4. TAME Pain: Trustworthy AssessMEnt of Pain from Speech and Audio for the Empowerment of Patients
      Tu-Quyen Dao
    5. Utilising AI to measure the unmeasurable: Trust in a black-box solution to measure citizen science impact
      James Sprinks
    6. Guiding Concepts for Responsible AI
      Minha Lee and Joel Fischer
    7. Health AI Platform for Decision Support toward Equitable Delivery of Healthcare to Multi-Factor Isolated Communities
      Matt Kammer-Kerwick
    8. Creating a dynamic archive of responsible ecosystems in the context of creative AI
      Pat Brundell, Oliver Miles, Megan Drury, Lydia Farina, Helena Webb, Steve Benford, Bernd Stahl, Craig Vear, Elvira Perez, Spencer Jordan, Gabriella Giannachi and John Moore
  • Refreshment break (14:30–14:50)
  • Papers
    Starts at 14:50
    Ends at 15:40

    Robots and robotics

    1. Human-Robot Interaction Experiment: Minor Changes; Significant Differences
      Zahra Rezaei Khavas and Paul Robinette
    2. Swift Trust in Mobile Ad Hoc Human-Robot Teams
      Sanja Milivojevic, Mehdi Sobhani, Nicola Webb, Zachary Madin, James Ward, Sagir Yusuf, Chris Baber and Edmund Hunt
    3. Mapping Safe Zones for Co-located Human-UAV Interaction
      Ayodeji Abioye, Lisa Bidgood, Sarvapali Ramchurn and Mohammad Soorati
    4. Trust Transfer in Robots between Task Environments
      Theresa Law, Meia Chita-Tegmark and Matthias Scheutz
  • Refreshment break (15:40–16:00)
  • Papers
    Starts at 16:00
    Ends at 17:10

    Governance and regulation

    1. Ethical AI Regulatory Sandboxes: Insights from cyberspace regulation and Internet governance
      Thiago Moraes
    2. I can do anything with my AV data (but I won’t do that): Public attitudes towards data recorders in self-driving cars
      Jo-Ann Pattinson, Carolyn Ten Holter, Keri Grieman, Pericle Salvini, Lars Kunze and Marina Jirotka
    3. Not the Law's First Rodeo: Towards regulating trustworthy collaborative industrial embodied autonomous systems
      Natalie Leesakul, Jeremie Clos and Richard Hyde
    4. Trustworthy Airspaces of the Future: Hopes and concerns of experts regarding Uncrewed Traffic Management systems
      Harriet Cameron, Neil McBride, Paschal Ochang and Bernd C. Stahl
  • Close of Day 1
  • Reception
    Starts at 18:30
    Ends at 20:30

    Symposium banquet (optional)

    Join us for a banquet at the Texas Science and Natural History Museum. Included in symposium registration.

  • Morning refreshments (09:00–09:30)
  • Papers
    Starts at 09:30
    Ends at 10:40

    Human-Centered approaches

    1. What’s missing from this picture? Ethical, legal, and practical challenges for autonomous-vehicle data-recorders
      Carolyn Ten Holter, Lars Kunze, Jo-Ann Pattinson, Pericle Salvini, Jonathan Attias and Marina Jirotka
    2. Encoding Social & Ethical Values in Autonomous Navigation: Philosophies Behind an Interactive Online Demonstration
      Yun Tang, Luke Moffat, Weisi Guo, Corinne May-Chahal, Joe Deville and Antonios Tsourdos
    3. "Trust equals less death - it's as simple as that": Developing a Socio-technical Framework for Trustworthy Defence and Security Automated Systems
      Asieh Salehi Fathabadi and Pauline Leonard
    4. Brokerbot: A Cryptocurrency Chatbot in the Social-technical Gap of Trust
      Minha Lee, Lily Frank and Wijnand IJsselsteijn
  • Refreshment break (10:40–11:00)
  • Keynote
    Starts at 11:00
    Ends at 12:00

    Toward Trustworthy Automation: Ensuring Human Agency with Warranted Cues and Interactive Actions

    1. This keynote talk will discuss psychological aspects of autonomous systems by drawing out the tension between machine agency and human agency, especially as they play out in the context of online media platforms. While automated features offer many conveniences, they also threaten our privacy, promote addictive use, and lead us down rabbit holes of extreme content, making us vulnerable to misinformation. If users are to avoid such negative consequences, they will need to be more deliberate and mindful in their interactions, which might detract from the very purpose of automation. This poses a design challenge, which could be addressed by making automation a technological affordance, and drawing upon concepts and mechanisms from the speaker’s model of Human-AI Interaction based on his Theory of Interactive Media Effects (HAII-TIME), as we attempt to build socially responsible trust in autonomous systems.

      S. Shyam Sundar (PhD, Stanford University) is Evan Pugh University Professor and James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory, and Director of the Center for Socially Responsible Artificial Intelligence at Penn State University. Prof. Sundar is a theorist as well as an experimentalist. His theoretical contributions include several original models on the social and psychological consequences of communication technology, such as the Modality-Agency-Interactivity-Navigability (MAIN) Model and the Theory of Interactive Media Effects (TIME), along with its extension to Human-AI Interaction (HAII-TIME). His research examines social and psychological effects of interactive media, specifically the role played by technological affordances in shaping user experience of mediated communications. Current research pertains to psychological effects of Human-AI interaction in media contexts, ranging from personalization and recommendation to fake news and content moderation. His research portfolio includes extensive examination of user responses to online sources, including machine sources such as chatbots and smart speakers. His research is supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH), among others. He is editor of the first-ever Handbook of the Psychology of Communication Technology (Blackwell Wiley, 2015). He served as editor-in-chief of the Journal of Computer-Mediated Communication, 2013-2017.
  • Lunch (posters available) (12:00–13:00)
  • Papers
    Starts at 13:00
    Ends at 14:10

    AS risk and failure

    1. A Taxonomy of Domestic Robot Failure Outcomes: Understanding the impact of failure on trustworthiness of domestic robots
      Harriet R. Cameron, Simon Castle-Green, Muhammad Chughtai, Liz Dowthwaite, Ayse Kucukyilmaz, Horia Maior, Victor Ngo, Eike Schneiders and Bernd C. Stahl
    2. A risk-based trust framework for assuring the humans in human-machine teaming
      Zena Assaad
    3. A Multimethod Analysis of US Perspectives towards Trustworthy Autonomous Systems
      Pepita Barnard, Andriana Boudouraki and Jeremie Clos
    4. Supporting Ethical Decision-Making for Lethal Autonomous Weapons
      Spencer Kohn, Marvin Cohen, Athena Johnson, Mikhail Terman, Gershon Weltman and Joseph Lyons
  • Refreshment break (14:10–14:30)
  • Plenary
    Starts at 14:30
    Ends at 15:15

    The Limits of Automation

    1. Machine learning is increasingly being used to inform decision-making in high-stakes settings. Crucial questions in such contexts are: What can or should be automated? What is the role of human decision-makers in the era of AI? How to design systems for human-AI complementarity? In this talk, I will focus on a fundamental reason for why certain decisions cannot and should not be automated: what we can predict is often not what we care about. I will provide conceptual groundings for characterizing this problem and an empirical illustration of its consequential implications. I will conclude by offering a path forward by proposing an affordance-based perspective for the design and evaluation of AI capabilities.

      Maria De-Arteaga is an Assistant Professor in the Information, Risk and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of her work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves on the Executive Committee of the ACM FAccT Conference.
  • Refreshment break (15:15–15:35)
  • Papers
    Starts at 15:35
    Ends at 16:45

    Building trust in automation

    1. Technology for Environmental Policy: Exploring Perceptions, Values, and Trust in a Citizen Carbon Budget App
      Liz Dowthwaite, Gisela Reyes-Cruz, Yang Lu, Justyna Lisinska, Peter Craigon, Anna-Maria Piskopani, Elnaz Shafipour, Sebastian Stein and Joel Fischer
    2. When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems
      Cheng Chen, Mengqi Liao and S. Shyam Sundar
    3. LOOM: a Privacy-Preserving Linguistic Observatory of Online Misinformation
      Jeremie Clos, Emma McClaughlin, Pepita Barnard, Tino Tom and Sudarshan Yajaman
    4. Measurable Trust: The Key to Unlocking User Confidence in Black-Box AI
      Puntis Palazzolo, Bernd Stahl and Helena Webb
  • Papers
    Starts at 16:45
    Ends at 17:00
  • Symposium ends