A note on artificial intelligence and autonomy

Much of the debate around potential job losses revolves around rapid advances in artificial intelligence. Though there is no commonly agreed definition of artificial intelligence, the term is often highly emotive, as it implies a parallel version of – and potentially superior substitute for – human intelligence, for which there is equally no agreed definition.

Complicating the issue further, the term artificial intelligence is often used to refer to what computer scientists define as general artificial intelligence: the ability of a software programme, or collection of programmes, to apply insights, predictions and conclusions from one set of data or tasks to a wide range of data and circumstances – a capability currently unique to humans. General artificial intelligence does not exist today, and predictions as to when and whether it will ever exist – as well as whether it should be encouraged – vary widely. All current capabilities in the field relate instead to what is defined as specific (or narrow) artificial intelligence: the ability of a software programme to perform specific tasks within narrow parameters.

Similar confusion and emotive associations surround the term autonomous. Artificial intelligence capabilities are often associated with giving software and robots autonomy – that is, the freedom to act independently or to self-govern. This conjures up images of software – generally personified in a robot – that has free will and can take unpredictable, unstoppable and unexplainable actions. A more accurate description of the current capabilities enabled by artificial intelligence would be automated: the ability of a software programme to carry out steps in a process without human intervention, yet within parameters and towards an end goal specified by the programmer. Advances in artificial intelligence mean that programmers can set those parameters more broadly than in the past, leading to a far higher potential for unintended consequences and giving the appearance of autonomy.

As a theoretical example, a robot programmed to clean up a room may throw out valuable objects left lying around as well as sandwich wrappers that have fallen on the floor, unless it is programmed to recognise objects matching a taxonomy of 'valuable objects' or to remove only objects fitting a taxonomy of 'rubbish' (the first sketch below illustrates this kind of explicitly programmed parameter). It may navigate the room using a pathway that the programmer could not predict. Yet the robot still does not have autonomy. Its goal of tidying the room was programmed, as were the parameters governing its actions to achieve that goal (Amodei et al., 2016). Whilst its exact pathway in this example has not been programmed, its actions on that pathway (avoid wet surfaces, move around objects that display certain characteristics, etc.) have.

The same applies to reinforcement learning, where the software programme discovers which actions lead to a desired result by trial and error, and to inverse reinforcement learning, where the software's end goal is to work out the goal or intention behind a series of actions (human or otherwise). In both cases the end goal is still specified by the programmer, not by the algorithm (the second sketch below makes this concrete). Humans set the parameters within which artificial intelligence algorithms operate.
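To make the tidying example concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (RUBBISH_TAXONOMY, tidy_room, the navigator interface) is hypothetical rather than drawn from any real robot platform; the point is simply that the goal and the parameters are fixed in code by a human, even though the exact route is not.

    # Illustrative only: a hypothetical tidying robot whose goal and
    # operating parameters are entirely fixed by the programmer.
    RUBBISH_TAXONOMY = {"sandwich wrapper", "crumpled paper", "food scrap"}

    def tidy_room(detected_objects, navigator):
        """Remove only objects the programmer has classified as rubbish.

        detected_objects: iterable of (label, position) pairs from perception.
        navigator: a path planner; the route it chooses is not scripted in
        advance, but its constraints (avoid wet surfaces, etc.) are programmed.
        """
        for label, position in detected_objects:
            if label in RUBBISH_TAXONOMY:         # programmer-defined parameter
                route = navigator.plan(position)  # path unpredicted, not 'free'
                navigator.follow(route)
            # Anything outside the taxonomy, including valuables, is left alone.

The robot's behaviour may surprise an observer, but nothing here is self-governing: change the taxonomy and the behaviour changes accordingly.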
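The same point can be made for reinforcement learning with a standard tabular Q-learning sketch (textbook form, not taken from this paper). The algorithm discovers by trial and error which actions lead to reward, but the reward function itself – the end goal – is written by the programmer.

    import random

    # Programmer-defined parameters: the states, the actions and, crucially,
    # the reward function. The end goal is specified here, not discovered.
    STATES, ACTIONS = range(5), ["left", "right"]
    GOAL_STATE = 4

    def reward(state):
        return 1.0 if state == GOAL_STATE else 0.0  # goal set by a human

    def step(state, action):
        return min(state + 1, 4) if action == "right" else max(state - 1, 0)

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for episode in range(200):
        s = 0
        while s != GOAL_STATE:
            # Trial and error: explore occasionally, otherwise act greedily.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = step(s, a)
            # Learn which actions lead to the programmer-specified reward.
            target = reward(s2) + gamma * max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2

    # After training, the learned policy at state 0 is "right": the algorithm
    # found the route to the goal, but the goal itself was never its choice.
    print(max(ACTIONS, key=lambda act: Q[(0, act)]))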
Those parameters are guided by legal requirements and safety standards as well as by an assessment of commercial risk, and how tightly they are set varies between sectors. In the industries covered in this paper, robust safety standards and regulations, together with customer requirements for extremely high reliability and precision, mean these parameters are likely to remain tightly set in robot applications to ensure predictability.
