Recently, Young STT travelled to Cambridge and London for three days to talk with professors, technologists, policy-makers and startups about superintelligence. From the Netherlands, the group of challengers was given questions by Mariette Hamer (SER) and Bernard ter Haar (SZW) about the impact and social consequences of artificial intelligence. Clym Stock-Williams and his group focused on the impact of AI on our job market, and Clym shares his insights here.
Recently I had the privilege and pleasure of joining a study group trip to the UK, organised by STT, to discuss the ethics of Artificial Intelligence. In our group of five, our theme was “the consequences of AI for the Dutch job market” – it was such an interesting experience that I want to share a summary of our discussions and conclusions.
We started off in Cambridge with a very interesting discussion of the influence of AI on cybersecurity with Shahar Avin, one of the lead authors of a recent report on this topic. The most important message I took away was the need for the machine learning community to consider more carefully exactly how open we are. The potential uses of a piece of software or algorithm should be carefully considered, and publication chosen accordingly on a very finely-graded scale, from TensorFlow-style (open source code with lots of written and video tutorials) all the way to “old-fashioned” proprietary secrets.
Alan Turing Institute
Another highlight was spending time with the Alan Turing Institute, where among other things we considered ways to reduce bias in the machine learning systems which are used to approve (or deny) loans. Their work on developing methods which can demonstrate the reasons for these systems’ decisions – working as well on the most opaque algorithms as on the simplest – is clearly an important step, if we are to ensure that society’s current biases and inequalities are not embedded into a future with pervasive AI.
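To make this concrete: one widely used model-agnostic way to show what a black-box system is relying on is permutation importance – shuffle one feature and measure how much the model’s accuracy drops. The sketch below is purely illustrative and is not the Institute’s method; the features (including the `postcode_proxy` standing in for a biased attribute) and the scoring rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic loan data; all feature names are illustrative assumptions.
# "postcode_proxy" stands in for a feature that correlates with a
# protected attribute -- the kind of hidden bias we want to surface.
n = 2000
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
postcode_proxy = rng.integers(0, 2, n).astype(float)
X = np.column_stack([income, debt_ratio, postcode_proxy])
y = (income - 40 * debt_ratio + 5 * postcode_proxy + rng.normal(0, 5, n)) > 20

# Treat the model as an opaque black box: we only get to call predict().
def model_predict(X):
    return (X[:, 0] - 40 * X[:, 1] + 5 * X[:, 2]) > 20

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Mean drop in accuracy when one feature is shuffled: a
    model-agnostic signal of how much the model relies on it."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy this feature's information
            drops.append(base - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(model_predict, X, y)
for name, score in zip(["income", "debt_ratio", "postcode_proxy"], imp):
    print(f"{name}: {score:.3f}")
```

If the proxy feature shows a non-trivial importance, the audit has done its job: it tells you the “reason” for decisions without needing to open up the model itself.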
This led us to two important points:
- It is easy to assume that the world is in a steady state; after all, this certainly makes modelling easier. But once you’ve collected that lovely database of debtors’ default status, addresses, genders, ages, names and so on, you really have to consider whether the generalisations your machine learning model is going to pull out of it will stand the test of time.
- AI has the potential to be far more effective than humans at working out how to reach goals. However, it cannot set those goals. As humans we have to become much better at discussing the kind of society we want to live in: what is really important to us.
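The first point above – that the world drifts away from your training data – can be checked with very little machinery. The sketch below is a minimal, assumed example (the income figures are invented): it compares the distribution of one feature at training time against what the model sees today, using a standardised mean shift as a crude, model-free alarm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative only: debtor incomes at training time vs. incomes seen
# today. In a real pipeline these would come from the training set and
# from live traffic, not from a random generator.
train_income = rng.normal(50, 10, 5000)  # what the model learned from
live_income = rng.normal(58, 12, 5000)   # the world has moved on

def drift_score(reference, current):
    """Standardised mean shift between two samples of one feature.
    Crude but model-free: large values warn that generalisations
    learned from `reference` may no longer hold on `current`."""
    pooled_std = np.sqrt((reference.std() ** 2 + current.std() ** 2) / 2)
    return abs(current.mean() - reference.mean()) / pooled_std

score = drift_score(train_income, live_income)
print(f"drift score: {score:.2f}")
```

A score near zero means the feature still looks like the training data; a large score (here, driven by the 8-point shift in mean income) is a prompt to revisit the model before trusting its decisions.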
Applying this to the workplace, it means challenging applications of machine learning to recruitment which just serve to justify and embed current biases. We see, as an alternative, great potential for systems which use NLP and recommender systems to match employer and employee skills, experiences and preferences, globally. Not to make the recruitment decisions, but to enable the right people to meet each other.
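The matching idea can be sketched in a few lines. This is a toy, not a real recruitment system: a genuine version would use NLP embeddings of free-text CVs and vacancies, whereas here the skill vocabulary, profiles and vacancy names are all invented, and matching is plain cosine similarity over a bag of skills.

```python
import numpy as np

# Hypothetical skill vocabulary -- a stand-in for learned text embeddings.
skills = ["python", "statistics", "nursing", "welding", "negotiation"]

def to_vector(profile):
    """Encode a set of skills as a binary vector over the vocabulary."""
    return np.array([1.0 if s in profile else 0.0 for s in skills])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

candidate = to_vector({"python", "statistics"})
vacancies = {
    "data analyst": to_vector({"python", "statistics", "negotiation"}),
    "ward nurse": to_vector({"nursing"}),
}

# Rank vacancies by similarity: surfacing likely matches for the people
# involved to consider, not making the recruitment decision for them.
ranking = sorted(vacancies, key=lambda v: cosine(candidate, vacancies[v]),
                 reverse=True)
print(ranking)  # → ['data analyst', 'ward nurse']
```

The design point is in the last comment: the system proposes an ordering, and the humans on both sides keep the decision.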
In all our discussions, we ended up optimistic, always seeing ways to avoid the nightmare futures that are sometimes presented. And this is the area of public analysis which we believe hasn’t seen enough attention: investigation of which policy actions and inactions can result in, or lead away from, those ultimate utopias or dystopias.
Fundamentally, AGI can free us up to reach our unique human potential: setting meaningful long-term goals. The tyranny of using man as a machine in the workplace could be coming to an end. But making a success of the transition period, which has already started, will require important decisions to be taken at the right time. Social unrest is a very real outcome unless widespread education and support are made available, enabling people to make informed choices about their lives and careers. Decisions taken by individuals will therefore be as important as those taken by national and supra-national organisations.
We developed the “4R Framework” below to categorise the likely impact of AI on job types. We hope it can help classify jobs and thereby determine what plans should be put in place during this transition, according to the needs of the individual or policy-maker.
- “Retained”. Some jobs will change very little, often because they involve ultimate responsibility for human safety. One example is the airline pilot, who, despite being in charge of a machine which can fly itself under normal circumstances, is occasionally called upon to save lives. Maybe taxi and bus drivers are in this category… what do you think?
- “Removed”. The opposite of the previous category, and the focus of much attention, there are certainly some jobs which will be no longer required, at least in the same quantities as now. One likely example, as automated calendar scheduling and chatbots develop, is the secretary. Of course, the Industrial Revolution didn’t entirely kill off job types: for instance, the desire for expensive hand-made bread still enables a niche business, but it is not a mass-employer.
- “Re-invented”. Some jobs will move much more towards a supervisory role, with changes in skill requirements and use of time. One example is the doctor, where diagnosis and surgery will become much more supported by machine learning and robotics.
- “Realised”. Every revolution in technology produces new jobs that did not exist before. My personal bet is on the “goal engineer”: a person skilled in defining unambiguous objectives which reinforcement learning systems will then optimise towards, taking account of complexity which would be unimaginable for humans. Here are some more ideas!
We will continue to work on these ideas, and others, particularly with a focus on the Dutch economy. In the meantime, we’d love to hear your thoughts. How are you planning to adapt to the developments in AI you expect in the next 10 or 20 years? What would you like your government to be focusing on?
(c) April 2018 Clym Stock-Williams, Dhoya Snijders, Maarten van der Lee, Peter Biever, Regina Luttge