AI no longer has a plug
Part III in the series 'The future of artificial intelligence (AI)' - Making choices in and for the future
Foreword: A handle for the future of AI
By Maria de Kleijn-Lloyd
Senior Principal, Kearney,
Chair of the STT think tank on the futures exploration of AI
What is the future of AI in the Netherlands? That is a question that is almost impossible to answer. After all: what exactly is AI, what future scenarios are there, and who determines 'what we want', and on what basis? The third part of the STT trilogy 'The future of AI' focuses on that third, normative question. The aim is to generate a broad social discussion about this issue, because AI will touch us all in one way or another: directly, as users of apps, but also indirectly, when other people and organizations use AI, for instance doctors who have a scan analysed algorithmically in order to make a diagnosis. This is not science fiction; much of it is already possible. Even today, the impact of AI is significant, and it is expected to grow. That is why it is good to focus explicitly on the associated ethical and social choices.
A lot of work is already being done. A High-Level Expert Group of the EU, for instance, has described the main ethical principles of AI, such as explainability and fairness, in great detail. But that is not enough, because it is relatively easy to agree when it comes to general principles: of course we want privacy, of course we want fair results. In discussions about vital infrastructure, these are also known as feel-good principles. Of course we are in agreement.
Things tend to become more complicated when we try to translate the principles into practical applications, where we face two challenges. Firstly, we need to find a way to apply the principle in practice. For example, what is a transparent algorithm? One of which the entire code, sometimes multiple terabytes, is published, but which can only be understood by a select group of experts? Or one that comes with information, written in layman's terms, about the main design choices, source data, operation and side effects? Secondly, some principles can conflict when put into practice, for instance transparency and privacy. Here again, context matters: medical information is different from Netflix preferences. Who decides which principle takes precedence? We need to take that complicated next step together, because these are choices that the designers of algorithms are already making every day.
This foreword was written during the intelligent lockdown of the corona crisis, in April 2020. For those who were worried that the Netherlands would miss out when it comes to digitization and AI: our physical infrastructure (from cables to cloud) turns out to be robust enough, and most consumers and businesses were also able to switch to working from home with relative ease. We are all developing new digital skills with remarkable speed. But what is perhaps even more interesting is that we are also using the media on a massive scale to take part in the dialogue about algorithms and apps. Take, for example, the 'appathon' that the Ministry of Health organized around the corona app. How do you create that app in such a way that it safeguards the privacy of citizens, cannot be misused, and is accurate at the same time? And when we say accurate, does that mean 'not missing any corona cases' (no false negatives) or 'nobody being quarantined needlessly' (no false positives)? As such, the current situation, however sad, helps us create clarity about a number of ethical choices in AI. With 17 million participants nationwide.
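The tension between false negatives and false positives can be made concrete with a small calculation. The sketch below uses purely hypothetical numbers (not from this publication) to show why an app cannot minimize both at once: a stricter alert threshold quarantines fewer healthy people but misses more real cases, and a more lenient threshold does the reverse.

```python
# Toy illustration with hypothetical numbers: 1,000 contacts are
# screened, of whom 100 are actual corona cases. Two alert
# thresholds for the same app are compared.

def rates(tp, fn, fp, tn):
    """Return (false-negative rate, false-positive rate).

    tp: sick people correctly alerted
    fn: sick people missed
    fp: healthy people needlessly alerted
    tn: healthy people correctly left alone
    """
    return fn / (tp + fn), fp / (fp + tn)

# Strict threshold: few needless quarantines, but many missed cases.
strict_fnr, strict_fpr = rates(tp=70, fn=30, fp=20, tn=880)

# Lenient threshold: almost no missed cases, but many needless quarantines.
lenient_fnr, lenient_fpr = rates(tp=98, fn=2, fp=200, tn=700)

print(f"strict:  missed cases {strict_fnr:.0%}, needless quarantine {strict_fpr:.1%}")
print(f"lenient: missed cases {lenient_fnr:.0%}, needless quarantine {lenient_fpr:.1%}")
```

With these illustrative figures, the strict threshold misses 30% of the real cases while needlessly alerting about 2% of healthy contacts; the lenient threshold misses only 2% of cases but alerts over 20% of healthy contacts. Which point on that trade-off is acceptable is exactly the kind of normative choice this publication asks society, not just the designers, to make.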
I hope that, with the help of this study, and in particular via the interactive online components, we will be able to continue a focused yet broad dialogue and translate it into practical handles that lead to an AI future in the Netherlands that we not only accept but can truly embrace.