Who owns what an AI produces? The question sounds philosophical. In practice, it determines whether a company can protect its AI-generated marketing assets, whether a creator can register an AI-assisted illustration, and whether a platform is exposed when its AI tool produces output that resembles someone else's protected work.
Brazilian law has a position — but it is not yet a complete answer.
What the Copyright Law Requires for Authorship
Brazil's Copyright Law (LDA, Law No. 9,610/1998) protects intellectual works created by natural persons. This is explicit: the law recognizes the author as the natural person who creates a literary, artistic, or scientific work. Legal entities can hold economic rights through assignment, but they cannot be original authors.
AI systems are not natural persons. Under the current LDA framework, a work generated entirely by an AI system — without meaningful human creative contribution — does not fit the definition of a protected work. It may fall into the public domain from the moment of creation.
This position is consistent with the United States and the European Union, where human creativity is likewise a condition of protection, though each reached it by a different legal path. The United Kingdom diverges: its Copyright, Designs and Patents Act 1988 grants limited-term protection to computer-generated works, attributing authorship to the person who made the arrangements for their creation.
Outputs 100% Generated by AI: Public Domain or Grey Zone?
The clearest scenario: a user types a one-word prompt into an image generator and publishes the first result without modification. The human contribution is minimal. Under the LDA, there is a strong argument that no copyright protection attaches to that output.
The practical consequence: anyone can copy, reproduce, and use that image without infringing copyright — because no one holds it.
For businesses building marketing libraries with AI-generated assets, this creates a specific risk: competitors can freely use the same or similar outputs, and the "assets" may not be defensible as intellectual property.
Human Contribution: Where Protection May Arise
The analysis shifts as human contribution increases. Three types of contribution that strengthen the copyright argument:
- Elaborate prompting: a detailed, multi-step prompt that involves creative choices — composition, style, mood, specific exclusions, iterative refinement — reflects creative authorship, even if the final rendering is done by the model
- Selection and curation: choosing which output among many is the final work involves aesthetic judgment — a form of creative decision analogous to photography (the photographer does not create the scene, but selects it)
- Post-production editing: modifying, combining, retouching, or compositing AI output with human-created elements creates a derivative work with its own copyright in the human-added elements
The more a human can demonstrate creative decision-making throughout the process, the stronger the case for protection. Documentation matters: prompt logs, version history, selection notes.
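One lightweight way to keep the documentation recommended above is an append-only provenance log recording each prompt, iteration, and selection decision. A minimal sketch in Python — the field names and file format here are illustrative assumptions, not a legal or registration standard:

```python
import json
from datetime import datetime, timezone

def log_generation_step(path, prompt, model, note):
    """Append one record of an AI-assisted creative step to a JSONL file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,        # the exact prompt text used
        "model": model,          # tool/model identifier and version
        "selection_note": note,  # why this output was chosen or rejected
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: documenting one iteration of an image-generation session
log_generation_step(
    "provenance.jsonl",
    prompt="watercolor skyline of São Paulo at dusk, muted palette, no text",
    model="image-gen-v1",
    note="iteration 3 of 7; chosen for composition and color balance",
)
```

Because records are timestamped and appended rather than overwritten, the log preserves the sequence of creative decisions — exactly the kind of evidence that supports a claim of human authorship if a registration is challenged.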
Registering an AI-Assisted Work in Brazil
Brazil's Copyright Office (EDA at the National Library Foundation) handles registration of literary, artistic, and similar works; software has its own registration regime at INPI. Both remain available for AI-assisted material with meaningful human contribution. There is currently no administrative requirement in Brazil to declare AI use in the registration process — but this may change as the regulatory framework develops.
The prudent approach: maintain detailed documentation of the creative process (prompts used, iterations, selection criteria, editing steps), and be prepared to demonstrate human authorship if the registration is challenged.
Training Data and Third-Party Rights
A distinct but related risk: using AI models trained on copyrighted works. Brazil's LDA does not include an explicit exception for text-and-data mining for AI training purposes — a gap that creates legal uncertainty for both model developers and users.
The risk materializes most clearly when:
- An AI output closely resembles a specific protected work in a way that goes beyond style
- The output could substitute for the protected work in the market (e.g., reproduces recognizable characters, melodies, or visual signatures)
- The AI provider's terms of service disclaim liability for infringement in outputs
That last point matters: most major AI providers include indemnification carve-outs or limitations of liability for IP infringement in their terms. Understanding what protection — if any — your contract with the AI provider offers is essential before deploying AI-generated content at scale.
PL 2,338/2023: What the Pending AI Bill Would Change
AI Bill No. 2,338/2023 (Marco Legal da IA) is pending in Brazil's Congress as of this publication. It proposes a risk-based regulatory framework inspired by the EU AI Act. Key provisions relevant to businesses using AI for content:
- Transparency obligation: systems interacting with users must identify themselves as AI when asked
- Synthetic content labeling: AI-generated audio, video, or image content may need to be labeled as such
- Civil liability: operators of high-risk AI systems face liability for damages caused by those systems
- Prohibited uses: social scoring, manipulation through subliminal techniques
The bill does not directly resolve the copyright authorship question — that remains a matter for the LDA and its eventual reform. But it adds a compliance layer for any company deploying AI systems in Brazil.
Companies should monitor the bill's progress, begin internal impact assessments, and start drafting AI governance policies now rather than after enactment.
AI Clauses in Contracts
Any contract involving content creation — with a creator, agency, or freelancer — should now address AI use explicitly. Minimum elements:
- Disclosure: does the contract require the creator to disclose use of AI tools? Under what circumstances?
- IP warranty: the creator warrants that AI-generated content does not infringe third-party rights and that the creator holds (or has cleared) rights to any training data inputs
- Synthetic content labeling: obligation to label AI-generated content where required by law or platform policy
- Output ownership: who owns the copyright in AI-assisted content? This requires a specific assignment clause — not just a general "work for hire" provision, which may not apply as expected under Brazilian law
- Confidentiality: prompts and sensitive business data input into AI systems should be treated as confidential
We assist companies in navigating AI and intellectual property questions under Brazilian law. Our practice covers AI law and intellectual property.
FAQ
Is content generated entirely by AI protected by copyright in Brazil?
Brazil's Copyright Law (Law No. 9,610/1998) protects works by natural persons, not systems. Output generated entirely by AI without meaningful human creative contribution may not be protected by copyright in Brazil. With substantial human contribution — detailed prompting, output curation, editing, and refinement — protection may arise for the human user. Each provider's terms (OpenAI, Anthropic, Google, etc.) also allocate output rights and should be reviewed.
Can an AI-assisted work be registered in Brazil?
Yes, provided there is meaningful human creative contribution. The Copyright Law requires human authorship but does not prohibit the use of tools — camera, editing software, or AI. The greater the human contribution to conception, selection, and refinement, the stronger the position. Brazil's copyright registration office (EDA at the National Library Foundation) handles literary, artistic, and similar works; software has its own registration regime at INPI. There is no express administrative guidance in Brazil yet mandating disclosure of AI use at registration. Documenting the creative process is prudent.
Is there legal risk in using AI models trained on copyrighted works?
There is relevant legal risk, especially in jurisdictions with active litigation (US, UK, EU). Brazil's Copyright Law does not provide an explicit text-and-data-mining exception for AI training, creating uncertainty. Risk is higher when output reproduces recognizable elements of protected works or interferes with their normal economic exploitation. Indemnification clauses in AI provider contracts are critical for mitigation.
Is Brazil's AI bill already in force?
No. AI Bill No. 2,338/2023 (Marco Legal da IA) is pending in Brazil's Congress. It proposes a risk-based framework inspired by the EU AI Act, with transparency obligations, synthetic content labeling, a ban on social scoring, and civil liability for high-risk systems. Companies should monitor the bill's progress and begin internal impact assessments before enactment.
What should an AI clause in a content-creation contract cover?
Suggested minimum: (i) representation on AI use in producing the content; (ii) warranty that delivered material does not infringe third-party rights; (iii) obligation to label synthetic content where required; (iv) liability for inputs supplied to AI systems; (v) ownership of outputs; (vi) confidentiality on prompts and sensitive data. The clause should align with the company's internal AI policy.
