Copyright & AI Symposium Q&A with Nancy Wolff

March 2, 2020



Rick: Nancy, you get to have all the fun! Recently you attended the all-day symposium on Copyright & Artificial Intelligence in DC, co-sponsored by the United States Copyright Office and the World Intellectual Property Organization.

Nancy: As a copyright geek, it was a version of fun, but mostly it was something I have been thinking about for some time. Artificial intelligence (AI) is infiltrating all parts of modern life, and the effects on the content licensing industry, creators, and copyright will be enormous. Even mind-boggling at times.

Rick: What topics did it cover?

Nancy: The symposium, and I quote “took an in-depth look at how the creative community currently is using artificial intelligence (AI) to create original works, the relationship between AI and copyright; what level of human input is sufficient for the resulting work to be eligible for copyright protection; the challenges and considerations for using copyright-protected works to train a machine or to examine large data sets; and the future of AI and copyright policy.”

Rick: Your biggest takeaway?

Nancy: No one has any answers yet, just questions. For example, do current copyright laws in the US and abroad adequately address the copyright issues that arise with AI? In the US, copyright protection requires human authorship. With AI, where does the human authorship stop and computer-generated content begin? EU copyright is based on the personality of the artist in the work. Do computers have personality? The basis of US copyright law in the U.S. Constitution offers incentives to authors to create works. Do we want to incentivize machines over humans? Will works created by AI be in the public domain? Who will be liable if the AI output infringes preexisting works? Will the owners of AI-created works be shielded from copyright liability because the machine (software) did not create the infringing work by any volitional act?

Rick: I have visions of Arnold as Content Terminator hunting Martin Scorsese and the first robot to win an Oscar. Tell me more.

Nancy: Other questions relate to consent. For example, do you need consent from the owner to use copies of in-copyright works solely for machine learning? In AI learning, the software is trained to create new works by being fed lots of similar content from which it learns. Software designed to create new flowers must be “fed” many images of flowers to learn what makes an object look like a flower. Does the programmer need consent from the content owners if the content is only used for “training” and may not be recognizable in the output? In the US, would this fall under “fair use” or not? To date, cases that involved ingesting high quantities of content, such as Google Books, did so for noncommercial purposes, i.e. to offer “snippets” so you can find the content.

Rick: I see a derivative, albeit micro-derivative use. And what if the content is designed to compete with the “ingested” content?

Nancy: At least for now, AI can only mimic what it has been taught, so yes, the output is very derivative. The questions can go on and on. The answers lie both in policy and in law as many of the questions are ethical.

Rick: Something you didn’t know?

Nancy: Just how pervasive AI is in the many tools we already use and are developing. AI is being used in a wide range of design and development tools: text-to-audio, AI news reports, “yoga music,” self-driving cars, medical information, anti-poaching, you name it. AI is becoming integral to many products through all levels of society. AI could replace or reduce the need for many authors and artists in the future.

Rick: The biggest risks in developing content or products using AI?

Nancy: Copyright issues are not the only risks and challenges in using AI. There are other legal concerns anyone developing a product must consider. For example, companies that use images of people to create facial recognition software have to address privacy laws. Many of these laws are being enacted at the state level, and there can be real liability. Illinois, for example, has enacted a law that requires consent from the person, consent which I believe can be withdrawn. Bias in the training database is a big concern, as bias may be difficult to detect. Requiring databases built for training to keep track of the content source would be important to reduce liability and risk.

Rick: How do you see DMLA’s role here?

Nancy: In addition to being part of the conversation and discussion, DMLA members can offer some partial solutions. DMLA members have vast digital databases that could potentially be licensed for training purposes without the same risks as scraping images from the internet. These databases contain many released images, so they can supply the diverse visual content that training requires while avoiding privacy law violations, because the models have given their consent.