This virtual workshop is scheduled for 14 November, 14:00 – 15:00 GMT.
The Cambridge Centre for AI in Medicine invites you to take part in a workshop on building safe, stable, and trustworthy AI.
The workshop focuses on the challenges of developing and applying AI technologies in environments that demand safety, robustness, accuracy, and trust be guaranteed simultaneously. These demands are inherent in healthcare and medical applications, but they are equally imperative in many other critical areas of significant national interest and impact, such as energy production and distribution, autonomous driving and automated logistics, law, security (including policing), and defence.
We envision this workshop as a platform for participants and stakeholders from industry, academia, and other relevant sectors to express what safety, robustness, trust, and accuracy mean in the context of their work, to discuss the issues and limitations of state-of-the-art understanding and methodologies, and to debate what must be done next to take a step towards safe, robust, accurate, and trustworthy AI.
The workshop is structured as follows. We will begin with a short overview of the most pressing challenges, organised into three major groups: the notion and challenges of reality-centric AI, current foundational and methodological barriers, and paradoxes of stability and adversarial data. This will be followed by an open forum and discussion.
A preliminary agenda for the meeting is provided below: