Workshops at TAS '24

On 16th September 2024, the day before the main program, we will host a series of workshops related to the themes of the conference. You will be able to register for these when you register for the main conference, at no extra cost. All workshops will take place in the Julius Glickman Conference Center.

The Internet is of significant interest to research communities across academic disciplines and across the globe. Areas of current focus include digital divides, online privacy, online wellbeing, governance, censorship, and information propagation. It is essential our communities come together to consider how it might be possible to foster responsible Internet futures. This involves identifying the opportunities and challenges regarding a safe, equitable, accessible and useful Internet. It also requires taking a global perspective.

In this half-day workshop we bring together interested researchers to suggest and debate visions for responsible Internet futures. We will combine invited speaker presentations with discussion sessions and we seek to co-create an initial pathway towards responsibility as part of these discussions. All TAS’24 Symposium attendees are welcome to join the workshop.

The workshop is being run within the UKRI-funded Responsible AI UK project ‘TAS-Hub and Good Systems Strategic Collaboration’. Two further online workshops on Responsible Internet Futures are planned within the project for 2025. Attendees at the TAS’24 workshop who are interested will be invited to attend these subsequent workshops too.

To find out more about the workshop, please contact helena.webb@nottingham.ac.uk.

The implementation of autonomous features in computer-mediated communication and telepresence technologies (from videoconferencing to VR and robotic telepresence) is a growing trend. Whilst automation in such technologies can offer many benefits (e.g., faster communication and reduced mental workload), it can also reduce users’ agency over how they present themselves in social contexts. In this workshop, we will identify and discuss the ethical considerations, as well as the implications for usability and accessibility, that automation introduces when used in systems for remote communication. This will be a half-day, hybrid workshop consisting of brainstorming and guided discussion activities, aiming to initiate a conversation about the multifaceted implications of automation in this field and to set directions for future work. Please visit our website for more information on the workshop and how you can participate!

As AI technology advances, its role in healthcare decision-making becomes increasingly prominent, necessitating a focus on trustworthiness and collaboration. This workshop on AI and Healthcare is dedicated to building community and exploring new partnerships and project opportunities in this important area. First, we will introduce the TAME Pain case study, showcasing how AI-driven solutions can improve pain management through successful collaborations in the UK and US, highlighting both the challenges faced and the impactful results achieved. Then, we will introduce the HEAD collaboration and its mission to foster interdisciplinary research and innovation in AI and healthcare. We will discuss three innovative projects that emerged from the recent summer residency program, detailing their objectives, methodologies, and potential impacts. Each presentation will include a feedback and discussion session, fostering engagement and insights into the future of AI in healthcare. This workshop will provide a platform for knowledge exchange and the formation of meaningful collaborations.

It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept for autonomous systems, as outlined in AI ethics guidelines, frameworks, and regulation. While there is universal agreement on the importance of trust, and there are common key principles, there is no agreement on what defines trust or on how to develop, design, and deploy trustworthy systems. Furthermore, different disciplines approach trust in various ways. This workshop aims to facilitate interactive discussions on how to address issues of trustworthiness in autonomous AI systems through an interdisciplinary lens. Workshop participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following the presentation, participants will be encouraged to approach AI through a set of diverse lenses, with the goal of reaching consensus on key ethical issues through a case study challenge. This event is designed to appeal broadly to researchers from various backgrounds (both technical and non-technical) working on or interested in issues at the intersection of AI and ethics.

With the increased capability and proliferation of AI systems across various domains, ensuring that these systems align with human intentions and values is crucial. Research across disciplines, such as computer science, human-computer interaction, philosophy, and policy, usually targets one aspect of AI alignment, leading to a siloed understanding of its challenges. This workshop aims to foster a comprehensive understanding of human-AI alignment through the integration of diverse disciplinary perspectives.

Since the first industrial revolution, workplaces have been a highly regulated and governed area of activity. From the early developments of health and safety law to development around working time, the relationship between humans, their employers and their fellow employees has been an important area of intervention.

When static robots were introduced onto production lines, they were required to be guarded like any other tool. However, with the development of robotics and the embodiment of artificial intelligence in robots made to collaborate, the old models of robot regulation are outdated. Human-robot collaboration has the potential to make a huge contribution to the future economy, enabling manufacturing processes that bring together the best of both humans and robots. For this to be the future, we have to ensure that the appropriate regulatory framework is in place, so that workers can both feel and be safe, and so that businesses can be confident in introducing collaborative robots into their workplaces.

This workshop aims to explore the challenges of regulating a workplace that uses (or wishes to use) collaborative robots. It seeks to identify the issues that require further research or would benefit from consideration by policy-makers.

Responsible Research and Innovation (RRI) is a continuous process of anticipating how research and innovation outcomes and processes may affect people and the environment in the future, and of acting in the present to gain the most benefit, minimise risks, and avoid harm. There remains a gap between the theory and practice of RRI. Through collaborative activities using Responsible Innovation Prompts and Practice Cards with case studies, attendees will gain knowledge and hands-on experience in systematically identifying responsibility challenges, reflecting on them, and making action plans to ensure inclusive practices, foster ethical and responsible decision-making, and embed RRI in projects.