This is a virtual workshop scheduled for 14 November, 14:00–15:00 GMT.
The Cambridge Centre for AI in Medicine invites you to take part in a workshop on building safe, stable, and trustworthy AI.
The workshop is jointly organised by Prof Anders Hansen, Prof Mihaela van der Schaar (both CCAIM), and Prof Ivan Tyukin (King’s College London).
The workshop focuses on the challenges surrounding the development and application of AI technologies in environments that require safety, robustness, accuracy, and trust to be guaranteed simultaneously. These demands are inherent in healthcare and medical applications, but they are also imperative in many other critical areas of significant national interest and impact, such as energy production and distribution, autonomous driving and automated logistics, law, security (including policing), and defence.
We envision that this workshop will provide a platform for participants and stakeholders from industry, academia, and other relevant sectors to express their views on what safety, robustness, trust, and accuracy mean in the context of their work; to discuss the issues and limitations of state-of-the-art understanding and methodologies; and to debate what needs to be done to take a step towards safe, robust, accurate, and trustworthy AI.
The workshop is structured as follows. We will begin with a short overview of the most pressing challenges and issues. The challenges are organised into three major groups: the notion and challenges of reality-centric AI, current foundational and methodological barriers, and paradoxes of stability and adversarial data. This will be followed by an open forum and discussion.
A preliminary agenda for the meeting is provided below:
1. The call for reality-centric AI and the need to adapt to change (Prof Mihaela van der Schaar)
2. Foundational and methodological barriers of modern AI (Prof Anders Hansen)
3. Paradoxes of stability, robustness, and adversarial data in AI (Prof Ivan Tyukin)
The talks will be followed by an open discussion with the audience on a list of relevant topics, including but not limited to:
Q1: What would participants expect from reality-centric AI?
Q2: If methodological barriers exist, what would their solutions be? Should we accept the limitations of AI and learn to live with them, or should we consider changing the current approaches and paradigms in AI development? (10 mins)
Q3: Are we happy with instabilities and a lack of predictability as long as AI solves our problems under laboratory conditions? What stability guarantees are acceptable?