Jean Monnet Centre of Excellence Annual Lecture by Professor Helga Nowotny
This lecture has been jointly produced by the University of South Australia's Jean Monnet Centre of Excellence, Australia, and the University of Vienna, Austria.
20 July 2021
As we move into a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the nature of AI and its implications for human agency. In this lecture Professor Nowotny argues that at the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future.
These developments alter our temporal bearings and the ways in which we experience the present and see the future. We create a mirror world, entering into multiple and dynamic interactions with the digital Others that inhabit it and giving rise to anxieties about identity. We are now moving into an era in which our control over the digital machines we have created becomes limited as AI monitors our actions, posing the threat of surveillance while also offering the opportunity to reappropriate control and transform it into care. The narrative of progress that dominated modernity is no longer sufficient as a guide.
Presented by the UniSA Jean Monnet Centre of Excellence and UniSA Justice & Society in association with the University of Vienna
Helga Nowotny is Professor emerita of Science and Technology Studies at ETH Zurich and a founding member of the European Research Council (ERC). In 2007 she was elected ERC Vice-President, and from March 2010 until December 2013 she served as President of the ERC. Currently she is a member of the Austrian Council and Vice-President of the Council for the Lindau Nobel Laureate Meetings. She is a Visiting Professor at Nanyang Technological University, Singapore.
From 2014 to 2019 she was Chair of the ERA Council Forum Austria.
She holds a Ph.D. in Sociology from Columbia University, New York, and a doctorate in jurisprudence from the University of Vienna. She has held teaching and research positions at the Institute for Advanced Study, Vienna; King's College, Cambridge; the University of Bielefeld; the Wissenschaftskolleg zu Berlin; the École des Hautes Études en Sciences Sociales, Paris; the Science Center for Social Sciences, Berlin; and the Collegium Budapest.
Before joining ETH Zurich, Professor Nowotny was Professor of Science and Technology Studies at the University of Vienna. Among other roles, she is a Foreign Member of the Royal Swedish Academy of Sciences and continues to serve on many international advisory boards in Austria and throughout Europe. To mention a few: she is a Member of the Steering Board of the Falling Walls Foundation, Chair of the Scientific Advisory Board of the Complexity Science Hub Vienna, Chair of the Advisory Board of the Center for Research and Interdisciplinarity, Paris, a Member of the Scientific Advisory Board of the Institut d'études avancées de Paris and a Member of the Strategic Research Advisory Board of the Austrian Institute of Technology. Helga Nowotny has published widely in Science and Technology Studies (STS) and on social time. Throughout her professional career she has been engaged in science and innovation policy matters and continues to serve as an advisor at national and EU level. From 2001 to 2005 she was Chair of the European Research Advisory Board (EURAB), advising the European Commission.
Professor Nowotny's lecture draws on themes and research from her upcoming book, In AI We Trust: Power, Illusion and Control of Predictive Algorithms, available 30 September 2021.
One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.
At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.
As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.