The question creators and info-product sellers ask in 2026 is no longer "should I use AI?" — it is "what are the legal limits when I do?" Every digital business is now built on top of some AI layer: ChatGPT for copy, Midjourney for visuals, ElevenLabs for voice, Synthesia for video. The legal questions follow the use cases: terms of service, copyright, image rights, consumer protection, platform rules.

This article cuts through the noise and addresses the three questions that actually matter when AI-generated content moves from creation to commerce.

Question one — does the tool's license allow commercial use?

This is the first filter, and the one most often skipped.

Each AI platform sets its own terms about commercial use of outputs. The terms vary by tool, by tier, and over time:

  • OpenAI (ChatGPT, DALL·E) — paid tiers generally permit commercial use of outputs; the free tier has had more variation
  • Midjourney — paid plans generally permit commercial use; the basic free tier has been more restrictive at various points
  • Stable Diffusion (self-hosted, open source) — depends on the specific model license; many permit commercial use, but lineage of the model and its training data matters
  • Adobe Firefly — explicitly licensed for commercial use, with indemnification provisions for IP claims
  • ElevenLabs / Synthesia / similar voice and video tools — terms typically restrict cloned voices and likenesses to uses covered by explicit authorization

Operational practice:

  • Read the terms in effect at the moment of generation
  • Keep a screenshot or PDF of those terms
  • Confirm the tier under which the content was generated
  • If a project will run for years, periodically re-check terms (they change)

This is housekeeping, not legal theatre. When a third party challenges use of AI-generated content, the platform terms become the first document anyone asks for.
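The archiving step can be sketched as a small script. This is a hypothetical helper (the function name and record fields are illustrative, not part of any platform's tooling): it hashes the saved terms text and appends a timestamped record, so you can later show which version of the terms was in effect when the content was generated.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_terms_snapshot(tool: str, tier: str, terms_text: str,
                          log_path: str = "terms_log.jsonl") -> dict:
    """Append an audit record of the terms in effect at generation time.

    The SHA-256 hash lets you later show that the saved copy of the
    terms is the one you relied on; the timestamp ties it to the date
    the content was generated.
    """
    record = {
        "tool": tool,                      # e.g. "Midjourney"
        "tier": tier,                      # e.g. "Pro (paid)"
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "terms_sha256": hashlib.sha256(terms_text.encode("utf-8")).hexdigest(),
    }
    # One JSON record per line keeps the log append-only and easy to audit
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```

Pair each log entry with the saved PDF or screenshot of the terms themselves; the hash links the two.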

Question two — does the output infringe third-party rights?

This question is more dangerous than the licensing one. Generative models were trained on massive datasets and can reproduce, to varying degrees:

  • Copyrighted works — phrases, passages, visual elements from training data
  • Registered trademarks — logos, names, slogans
  • Protected likenesses — recognizable people, including living artists with distinctive style
  • Protected characters — copyrighted fictional characters

Before commercializing AI-generated content, run the basic checks:

For text content

  • Plagiarism check tools (Grammarly Plagiarism, Copyscape, Turnitin) catch direct passages from sources
  • For technical content: check if the AI is recapitulating definitions or structures from specific copyrighted books or articles

For visual content

  • Reverse image search (Google Images, TinEye) on the generated image
  • Trademark search at INPI for any text or logo elements
  • Similarity check against well-known characters, slogans, or trade dress

For audio/voice content

  • If using AI voice cloning, ensure the voice is original or has explicit license — cloning a real person's voice without authorization implicates personality rights and, depending on use, copyright
  • For music generation, similarity to known compositions is a real risk

When an output is too close to an existing protected work, the risk is real even if the AI generated it. Regenerate, adjust the prompt, or use a different output.

Question three — what is the buyer actually buying?

Brazil's Consumer Protection Code (CDC, Law No. 8,078/1990) prohibits misleading advertising in Art. 37. The rule is broader than "do not lie" — it includes omissions and framings that lead the consumer to conclude something different from reality.

AI-generated content has specific exposure here:

  • An ebook sold as "the result of 20 years of personal experience" but actually generated by ChatGPT in an afternoon implicates Art. 37
  • A course presented as containing "exclusive insights from the instructor" when sections are AI-paraphrased without disclosure can be challenged on similar grounds
  • A consultancy positioned as "personal advice" but delivering AI-generated reports without meaningful expert review carries the same risk

The line is not a prohibition on AI use. It is an obligation to align what is sold with what is delivered. Practices that hold up:

  • Use AI as a production accelerator, but maintain human curation, editing, and accountability over the final product
  • When the value proposition is specifically about the seller's personal expertise (executive consulting, personal coaching, expert analysis), be transparent if AI tools are part of the production process
  • When the value proposition is about output quality and reliability (a polished ebook, an effective marketing email), the AI involvement is generally not material to disclose, as long as the quality matches the claim

Platform rules

Platforms used to distribute or sell content have their own policies on AI:

Amazon Kindle Direct Publishing (KDP)

KDP has updated its policies in recent years to require disclosure when AI is used to create or significantly contribute to a book's content. The rule distinguishes "AI-generated" content from "AI-assisted" content (where AI was used as a tool but the final work reflects substantive human authorship).

Stock content platforms

Adobe Stock, Shutterstock, Getty, and similar platforms have specific policies for AI-generated content — some accept it with disclosure, some prohibit it, some require specific provenance or training data attestations.

Course platforms

Udemy, Hotmart, Kajabi and similar tend to focus on content quality and learner outcomes rather than the production method. AI-assisted content with strong human curation typically passes; AI-generated content of poor quality fails on the quality dimension regardless of the method.

Social platforms

Instagram, TikTok, and YouTube have evolving labeling requirements for AI-generated or significantly AI-modified content, especially when it depicts real people or events.

Practical synthesis

For a creator or info-product seller planning to monetize AI-generated content in Brazil:

  1. Verify the tool license — paid tier with commercial use permission, terms saved
  2. Check the output for infringement — reverse image, plagiarism, trademark search proportional to the use
  3. Match the offer to reality — what the buyer is paying for matches what is delivered, no Art. 37 exposure
  4. Comply with platform rules — disclose AI use where required by the distribution platform
  5. Document the process — prompts, outputs, edits, decisions; this matters if anything later is challenged
  6. Register trademarks for visual identity — copyright on AI output is uncertain ground; trademark registration at INPI is the robust protective layer
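Step 5 above (document the process) can be made concrete with a minimal provenance entry per generated asset. The field names are illustrative assumptions, not a standard schema:

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(tool: str, tier: str, prompt: str,
                      output_bytes: bytes, human_edits: str) -> dict:
    """Build one provenance entry for a generated asset.

    Hashing the final output ties the record to the exact file
    delivered; the human_edits field documents the curation layer
    applied on top of the raw AI output.
    """
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "tier": tier,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
        "human_edits": human_edits,
    }
```

A folder of such records, kept alongside the prompts and intermediate drafts, is exactly the paper trail that helps if a license, infringement, or CDC question is later raised.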

AI does not create a new legal universe. It interacts with frameworks that already exist: the Copyright Act (LDA), the Industrial Property Law (LPI), the Consumer Protection Code (CDC), the General Data Protection Law (LGPD), and the Civil Code. The operators who do well are not the ones who avoid the legal questions; they are the ones who answer them clearly and proceed.

FAQ

Do ChatGPT, Midjourney, etc. terms really allow commercial use?

Generally, paid tiers of the main tools (ChatGPT/DALL·E, Midjourney, Stability AI, Adobe Firefly) allow commercial use of outputs. But terms change — checking the version in effect at the moment of generation and keeping a copy is essential. Typical watch-outs: (i) the free tier often has a more restrictive license; (ii) content that reproduces protected elements (logos, characters, style of living artists) may be outside the license scope; (iii) some tools offer indemnification against copyright claims (Adobe Firefly is an example), while others do not.

Do I have copyright over the content the AI generated for me?

This is a frontier issue in Brazil. The Copyright Act (Law No. 9,610/1998), in Art. 11, defines the author as a natural person, and majority doctrine holds that originality presupposes a human creative act. Pure AI output, without relevant human creative input, likely does not receive full copyright protection. Output shaped by substantive editing, deliberate prompting, and authorial curation tends to be treated as a protected work to the extent of the human contribution. There is no consolidated guidance from Brazil's Superior Court of Justice (STJ) yet. Strategy: document the human creative process and, for visual identity, register the trademark at INPI.

What if the AI generates something similar to an existing work?

Real and relevant risk. Generative models were trained on large datasets and can reproduce, at varying levels, elements protected by copyright, trademarks, or image rights. Before commercializing AI-generated content: (i) reverse image search if visual (Google Images, TinEye); (ii) textual similarity search in plagiarism tools if text; (iii) trademark search at INPI if logo; (iv) verification of similarity to known characters, slogans, or trade dress. If too similar, regenerate.

Can I sell ebooks, courses, or consultancy written by AI?

Yes, with adequate transparency. The Consumer Protection Code (Art. 37) prohibits misleading advertising: presenting as "original work, fruit of 20 years of experience" something generated in five minutes by ChatGPT may constitute deception about the nature of the product. The line is not a prohibition on AI use; it is an obligation not to mislead the buyer about what they are buying. Sound practices: use AI as a production accelerator but keep human curation, editing, and accountability over the final product; be transparent when the offer implies specific personal expertise.

Do sales platforms (Kindle, Hotmart, Udemy) accept AI content?

Each platform has its own rules, and they change. Observed patterns: (i) Amazon Kindle Direct Publishing (KDP) requires disclosure about AI use in book creation — the rule has evolved in recent years; (ii) some stock platforms (Adobe Stock, Shutterstock) have specific policies for generative AI; (iii) course platforms (Udemy, Hotmart, Kajabi) tend to focus on content quality and originality, not the production method. Checking the platform's current policies before listing the product prevents removal and account blocks.

Author

Managing Partner and founder of Hosaki Advogados. Practice in intellectual property, digital law, and creator economy. Over 10 years at the intersection of technology and law.