Question and Answer

In the U.S., is it legal for developers to use copyrighted material to train generative AI tools?

There isn’t a clear answer yet. Some say it is unlawful, and several lawsuits are underway against companies like OpenAI, Microsoft, and Stability AI. Many artists and writers feel that AI is appropriating their work without consent or compensation, threatening their creative livelihoods.

Others say that training AI models on copyrighted works is fair use. They argue that AI models learn from these works to generate transformative original content, so no infringement occurs. 

Many scholars and librarians agree that training AI language models on copyrighted works is fair use and essential for research. If restricted to public domain materials, AI models would lack exposure to newer works, limiting the scope of inquiries and omitting studies of modern history, culture, and society from scholarly research. 

This issue is complex, and it will likely take a long time before the lawsuits are resolved. Some courts have dismissed parts of these lawsuits while allowing other claims to proceed, and some cases may be settled out of court.

In the meantime, companies like Adobe, Google, Microsoft, and Anthropic have offered to cover legal costs for customers who are sued over content generated with those companies' tools.
