4. Wider impacts

Robot ethics

Moor (2009) proposed that a robot may be one of four kinds of ethical agent:
1. Ethical impact agents, whose actions have ethical consequences whether intended or not;
2. Implicit ethical agents, which have ethical considerations built into (i.e. implicit in) their design (e.g. safety and security considerations);
3. Explicit ethical agents, which can identify and process ethical information about a variety of situations and make sensitive determinations about what should be done; and
4. Full ethical agents, which make ethical judgements about a wide variety of situations.

Robots and cobots can generally be classed as implicit ethical agents, in that they are specified and designed to operate safely in a given environment. It is possible, however, that some cobots may soon be intended as explicit ethical agents (though there is some argument about the distinction between ‘ethical’ and ‘safe’; see for example Sharkey (2017)).

Winfield (2018) identified three risks associated with robots intended as explicit ethical agents:
1. Unscrupulous manufacturers might insert unethical behaviours into the robots.
2. Robots with user-adjustable ethics settings (e.g. a choice between maximising length of life or quality of life) may have their settings set outside an ‘ethical envelope’.
3. The rules may be vulnerable to malicious hacking.

Winfield concluded that even with strong encryption there is always a risk of hacking, so the responsibility for ethical behaviour must always lie with human beings. Cave et al. (2019) explored the risks beyond safety considerations and concluded that, unless those risks can be properly managed, it is unwise to develop explicit ethical machines.

Box 3: Can robots replace the human touch?

ElliQ can respond to voice, gaze and touch, and suggest personalised activities at the right time to keep her companions sharp, active and engaged. But ElliQ is not human; she is an AI-driven social companion robot designed to help the elderly who live alone keep connected with their family, friends and the world around them (ElliQ, 2019). Her creator, San Francisco-based Intuition Robotics, describes ElliQ as ‘the sidekick for happier ageing’.

According to human-robot interaction researcher Danielle Ishak, Intuition Robotics’ research has found that ElliQ’s human beta testers tend to form an attachment with the robot and are also more likely to open up to her, telling her when they’re depressed or lonely because they don’t feel they will be judged (Bloomberg, 2018). Despite this, however, there is only so much ElliQ can do to cheer those who are severely depressed through loneliness and isolation; no AI robot will ever be able to replace human companionship or care, Ishak adds.