Andrew Joint, commercial technology partner at Kemp Little, outlines the EU Parliament’s current discussions on giving robots special legal status, the highly complex issue of liability as AI becomes more autonomous, and how genuine legal discussion is not helped by science fiction.
Will robots be given special legal status under the proposed new laws?
This is certainly a point the EU Committee on Legal Affairs raises for consideration in its report published in January 2017: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE582.443%2B01%2BDOC%2BPDF%2BV0//EN. The Committee explicitly asks whether legislation should categorise certain types of robots/AI – those that can be considered to have a significant degree of autonomy – in the way that legislation across various jurisdictions has categorised natural persons (humans), legal persons (e.g. corporations or trusts), animals and other objects. The report queries whether this personhood status should be the same as the other categories of ‘person’ – or perhaps a new/different category. Granting some form of legal status makes it easier to deal with concepts such as liability and ownership (as they currently exist and operate). Being clear on a legal status, and on the criteria for obtaining it, will be an important part of the development of legislation for AI/robots generally.
What does that mean in practice and what are the implications?
Once something has a legal status, a legal personhood, it means that the thing has: (i) the potential to benefit from the protections that laws typically give things with a legal personality; and (ii) the potential to take on some legal responsibility or liability.
It means that the law could start to allocate responsibility and ownership to the AI/robot, if legislators determined that was where it should appropriately sit. Once a robot had the ability to own something, or to be responsible for something, our use of robots, and our interaction with them, would need to change fundamentally. If you knew your robot might own the shopping you asked it to buy for you, or that you might owe a duty of care in how you treat your robot, would that force you to think differently about how you use it? These are the sorts of impact that giving a robot/AI some form of legal personhood would have.
Who will be liable for mistakes or damage?
The liability section of the EU report notes that the various liability regimes do not currently hold robots liable per se for their actions. Regardless of the growing sophistication and autonomy of the technology, the law does not recognise the robot/AI as an actor in its own right, and traditional legal rules look to apply liability to either the user of the technology or the provider of the technology, with where that liability sits depending on the circumstances of use. The newer thinking, indicated in the EU report and hinted at by the House of Commons Science and Technology Committee in October 2016 (http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14502.htm), is that the current liability regime could be amended. It could move to an allocation of responsibility focused on a proportionate allocation of liability linked to the actual level of instructions given to the robot/AI and its degree of autonomy. Interestingly, the EU report also notes that the availability of insurance schemes might help shape where the liability should lie.
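To make the ‘proportionate allocation’ idea concrete, here is a purely illustrative sketch in Python. The autonomy score, the two-party split between provider and user, and the allocate_liability function are all invented for illustration – the EU report does not prescribe any formula:

```python
# Illustrative sketch only: liability split between the parties in proportion
# to how autonomously the robot acted versus how closely it followed user
# instructions. The autonomy_score input and two-party split are hypothetical
# simplifications, not anything the EU report actually specifies.

def allocate_liability(damages: float, autonomy_score: float) -> dict:
    """Split damages between provider and user.

    autonomy_score: 0.0 = robot acted purely on user instructions,
                    1.0 = robot acted fully autonomously.
    """
    if not 0.0 <= autonomy_score <= 1.0:
        raise ValueError("autonomy_score must be between 0.0 and 1.0")
    return {
        # More autonomy -> more liability sits with the provider.
        "provider": round(damages * autonomy_score, 2),
        # More direct instruction -> more liability sits with the user.
        "user": round(damages * (1.0 - autonomy_score), 2),
    }

# Example: £10,000 of damage caused by a robot judged 70% autonomous.
print(allocate_liability(10_000.0, 0.7))  # {'provider': 7000.0, 'user': 3000.0}
```

Even this toy version shows why the debate matters: the same incident produces very different outcomes depending on how ‘autonomy’ is measured.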
Will robots start owning their IP?
Again, the scope of any future IP ownership regime is not clear from the current report/proposals. As with the rules on liability, the current IP ownership regimes do not recognise that robots/AI may own IP – and this is consistent with other analogous challenges to IP law, most famously demonstrated by the ‘monkey selfie’ discussions and cases of 2014/15: http://www.bbc.co.uk/news/uk-wales-37061484. The question isn’t really ‘will robots start owning IP?’; it is ‘should we agree that there is a good moral/legal/philosophical/commercial reason why robots should own IP?’
What are the legal implications for the engineer?
Whilst the thinking and legal debate is generally fairly ‘high level’, there are some clear indicators which developers and engineers should keep firmly in mind. Firstly, Asimov’s Three Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) from 1942 still form a fundamental part of the basis for AI/robotics development, and engineers should be cognisant that embedding those rules into any robot/AI would be a sensible ‘future proofing’ exercise – it isn’t a legislative requirement yet, but the indicators are that legislators see these rules as a bedrock for AI/robot development (a toy illustration of what ‘embedding’ could mean follows below). In addition, the EU report contains an advisory code of conduct for robotics engineers aimed at guiding the ethical design, production and use of robots. Operating within the scope of that code would be sensible – whilst not binding, it is likely to form the basis of a binding version.
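As a minimal, hypothetical sketch of ‘embedding the rules’, the Three Laws can be read as a precedence-ordered safety check run before any action is executed. The Action class and its boolean flags are invented for illustration – a real system would need a far richer world model to evaluate these conditions:

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority-ordered guard.
# Human safety first, then obedience, then self-preservation.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False      # would the action injure a human?
    ordered_by_human: bool = True  # was the action instructed by a human?
    harms_robot: bool = False      # would the action damage the robot itself?

def is_permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    if action.harms_human:
        return False  # First Law always wins
    if not action.ordered_by_human:
        return False  # Second Law (simplified here as: act only on human instruction)
    if action.harms_robot:
        return False  # Third Law: self-preservation, lowest priority
    return True

print(is_permitted(Action("fetch shopping")))                    # True
print(is_permitted(Action("push bystander", harms_human=True)))  # False
```

The design point is the fixed ordering: a lower-priority rule can never override a higher one, which is exactly the property legislators seem to regard as a bedrock for AI/robot development.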
Is it likely that liability will be dealt with by an insurance scheme?
The reports note that the availability and accessibility of insurance is an important part of the considerations as to where liability should sit, and in what proportion. This is as clear an indicator as we’ve yet had from legislators that insurance schemes will strongly shape the legislative direction of travel with regards to liability. In some ways the legislators have already been overtaken by the insurance industry with regards to AI/robot liability – the Department for Transport has noted it plans to expand compulsory motor insurance to driverless cars (http://www.insurancebusinessmag.com/uk/news/breaking-news/government-announces-plans-for-driverless-car-insurance-42395.aspx) and car manufacturers have started to make clear that they’ll take on liability for driverless cars (http://www.theregister.co.uk/2015/10/13/volvo_to_accept_full_liability_for_crashes_involving_driverless_cars/). Where legislators have been slow to react, industry typically steps into the certainty vacuum, and it is likely that the insurance industry will play a significant part in shaping any future legislation.
Does Parliament perceive robotics and AI to be a threat? Should they?
It isn’t yet clear what status Parliament wants to give AI. It is clear that Parliament wants to consider whether robots/AI should have some form of legal status. Whether that status is the same as a human’s, different but still a legal status, or no status at all, is what the current rounds of discussion and feedback are all about. Personally, I think we need to consider the status of AI/robots that can show truly autonomous actions and cognitive behaviour. I believe we will see technology that means we need to revisit how the law works in a number of areas touching on fundamental aspects of law – ownership, liability and responsibility.
The real issue is trying to deal in a general way with robotics/AI technology that can be very different from product to product – the debate is not particularly nuanced at the moment. There might be some circumstances where a robot should be liable for its actions to the extent that this excuses its human or corporate owners, but the technology does not yet seem to be there, so there are few real scenarios where this might be the case. It means we are currently discussing the issue at a philosophical rather than a factual level – that isn’t fatal to reaching a conclusion, but it does make it harder to be definitive when the examples are often still based in science fiction rather than actual technology.
This article first appeared in the January Legal IT Insider newsletter – register here to receive your free monthly copy – http://legaltechnology.com//latest-newsletter/
See link to story that broke earlier today: http://uk.businessinsider.com/european-meps-to-vote-on-robots-ai-2017-2?r=US&IR=T