by Gregory Bufithis

The growing capabilities and widening use of artificial intelligence applications (AI apps) in mainstream consumer devices (eg Siri on the iPhone 6S and Amazon’s Alexa being just two examples, plus the whole conversations-as-a-platform development) are converging to pose interesting intellectual property challenges. While currently the most sophisticated of these apps are, at best, in an advanced-alpha or early-beta version, this technology is fueled by innovation moving at an exponential rate.

About six months ago I became involved in an intellectual property infringement case involving artificial intelligence applications. Once an IP lawyer, always an IP lawyer. But – finally – my academic work in AI was bearing fruit.

The case involves what are called Level B apps, part of a computational capability-continuum first proposed by Eran Kahana, a technology and intellectual property attorney and a senior Fellow at Stanford Law School. His categorization of AI apps based on such capabilities offers a glimpse into a legal framework designed to deal with the behavior of such apps. The common denominator for all four levels of AI apps identified here is that they can be programmed with every datum of known IP law. Additionally, each of the more sophisticated app iterations can perform all of the functions of the lesser-sophisticated ones.

Level A apps vary in their query-response sophistication, but they are programmatically constrained to perform that specific operation and are incapable of operational variance.

Level B apps respond to user queries and commands relative to retrieving data from sources external to the host device (such as an iPhone). Examples of these sources can be websites and other apps resident on any mobile devices that have granted the necessary access rights (whether on a device level or app level). The Level B apps also feature infringement-minimizing instruction sets to fit various IP environments in which they are intended to operate.

Level C apps feature autonomous decision-making capabilities. The Level C app can, for example, dynamically evaluate and decide which sources to use, what data to retrieve, and how most effectively to present it.

Finally, the Level D app manifests intelligence levels so sophisticated that it can identify and reprogram any portion of its behavior (in unpredictable ways); ie it has a “self-awareness” capacity and can create other apps without human involvement. The Level D can also use data it finds in any manner it decides, in ways that indistinguishably replicate (and even exceed) human behavior.
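To make the continuum concrete, here is a minimal Python sketch of the Level A–D taxonomy. It is my own illustration, not Kahana's formalism; the capability names are hypothetical labels for the behaviors described above. The one property it captures precisely is the cumulative rule: each level can perform all the functions of the levels below it.

```python
from enum import IntEnum

class AILevel(IntEnum):
    """Kahana's capability continuum, ordered from least to most autonomous."""
    A = 1  # fixed query-response; no operational variance
    B = 2  # retrieves external data; infringement-minimizing instruction sets
    C = 3  # autonomous choice of sources, data, and presentation
    D = 4  # self-reprogramming, self-replicating, human-indistinguishable

# Capabilities introduced at each level (names are hypothetical).
NEW_CAPABILITIES = {
    AILevel.A: {"fixed_query_response"},
    AILevel.B: {"external_retrieval", "infringement_minimizing_rules"},
    AILevel.C: {"autonomous_source_selection"},
    AILevel.D: {"self_reprogramming", "autonomous_app_creation"},
}

def capabilities(level: AILevel) -> set:
    """Cumulative capability set: a level inherits everything below it."""
    return set().union(*(NEW_CAPABILITIES[l] for l in AILevel if l <= level))
```

So, for example, `capabilities(AILevel.C)` contains everything a Level A or B app can do plus autonomous source selection, mirroring the "each iteration can perform all of the functions of the lesser-sophisticated ones" rule.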

Current IP law does not support a finding of infringement that is independent of human involvement. If there is infringement, a human-based “smoking gun” is a prerequisite for liability and an appropriate remedy. For example, suppose certain web content is copied and misused through use of a spider. Liability is attributed and remedies are sought against the designer, master or both. Simple enough. In contrast, similar activity undertaken by a Level D app is activity and harm for which the law currently has no answer. A default strict liability standard against the human developer/deployer is misguided for a number of reasons.

Perhaps the most significant of these is that, particularly in Level D cases, an individual probably could not have reasonably foreseen the infringement. Instead, Kahana proposes that an iterative liability (IL) standard be adopted. Under it, the infringement inquiry can begin with the original developer/deployer, but where the facts indicate that the AI app behaved sufficiently independently, that individual should not be held liable. He says:

Once we dispose with the human-centric side of the inquiry, we need a legal framework that can handle assigning liability and dispensing remedy vis-à-vis these hyper intelligent AI apps. We could take a pull-the-plug approach and conclude that any AI app that is deemed infringing will be summarily deleted. While that may be feasible in the short-run, there is no assurance this will work in the long-term, especially where Level D apps self-propagate and know how to evade detection.
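The IL standard is, at bottom, a decision procedure, and it can be sketched as one. The following is a toy rendering of my own reading of Kahana's proposal, not his formulation; the threshold for "sufficiently independent" is exactly the open question the framework leaves to the courts.

```python
def assess_liability(app_level: str, acted_independently: bool) -> str:
    """Toy sketch of an iterative liability (IL) inquiry (my reading of Kahana).

    The inquiry starts with the original developer/deployer. For constrained
    apps (Levels A and B), behavior traces back to human programming. For
    autonomous apps, if the facts show the app behaved sufficiently
    independently, liability does not attach to that individual.
    """
    if app_level in ("A", "B"):
        # Programmatically constrained: conduct is attributable to a human.
        return "developer/deployer liable"
    if acted_independently:
        # e.g. a Level D app that reprogrammed its own behavior
        return "developer/deployer not liable; no framework yet for the app"
    return "developer/deployer liable"
```

Note what the sketch makes plain: once the human is out of the inquiry, the procedure returns no defendant at all, which is precisely the gap Kahana says the law must fill.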

The nature of artificial intelligence programs almost begs for ownership disputes. This week in law.com Scott Graham notes an interesting approach by Southwestern Law School professor Ryan Abbott who believes that computers are even generating patentable subject matter. We just don’t know about it, he says, because disclosing it on an application might render the invention unpatentable. In a scholarly article published last month, Abbott called on Congress, the courts and the U.S. Patent and Trademark Office to start setting some ground rules. His provocative recommendation: that computers themselves be eligible for inventorship status, with patent rights assigned to their owners:

“Treating nonhumans as inventors would incentivize the creation of intellectual property by encouraging the development of creative computers.”

Graham notes that some patent prosecutors say that the ability of machines to create patentable subject matter on their own remains well off in the future. Human beings still have to train the computer and then provide the input that launches the AI process. He notes one commentator:

“I haven’t seen anyone say, ‘Machine, run, come up with an invention.’”

And Abbott’s proposal is swimming against the tide of IP law. The Copyright Office formally declared in 2014 that it will not register works generated by machines, and U.S. District Judge William Orrick III endorsed that logic earlier this year when he ruled in what will be a legendary decision, no doubt, that a monkey could not legally author a copyrighted work. The selfie industry was devastated :-)

But the law is less settled in the patent realm. As I noted, the nature of artificial intelligence programs almost begs for ownership disputes. Those issues will come up, and they will be complicated. Graham notes one case which he terms a “sneak peek”. No, not robots but interesting nonetheless.

It is Meng v. Chu, a dispute between a university laboratory leader who says he conceived of a new superconducting material, and a research assistant who says she is the person who actually invented the process for synthesizing it. In this analogy, the research assistant acting on a supervisor’s instructions can be seen as the rough equivalent of a computer carrying out programming.

Any true AI won’t simply be a function of an algorithm written by a programmer, but a combination of programming by one source and training by other sources and learning/adaptation by the AI itself. So you can imagine multiple claimants to IP generated by the AI.