Most companies sign AI provider agreements the same way they sign any SaaS contract: click-through, terms accepted, move on. The difference is that AI contracts allocate rights over what the system produces, grant licenses over what you feed into it, and create liability exposure that ordinary SaaS contracts don't.
For companies operating in Brazil and using AI tools that process business data, client data, or personal data, the stakes are higher. The LGPD adds a compliance layer to every AI integration. Understanding the contract before signing is not legal formalism — it is risk management.
Why an AI Contract Is Not "Just Another SaaS"
A standard SaaS contract licenses software access. An AI provider contract governs three additional things simultaneously:
- Inputs: the provider receives your prompts, documents, data, and images, and the contract grants it a license to process them
- Outputs: the system produces text, code, images, and analysis, and the contract allocates rights over those outputs
- Training: the provider may use your inputs to improve its model; this is the clause most companies miss
Each of these dimensions requires attention that a standard software license review does not address.
Input and Output Ownership: What Each Model Delivers
Inputs: when you submit a prompt, the provider typically receives a license to process it to deliver the service. The key question: does that license extend to using your input for model training or improvement?
Outputs: most major providers assign output ownership or a broad usage right to the user. But three caveats are common:
- Other users may independently generate identical or similar outputs — the provider's terms typically disclaim exclusivity
- The provider may reserve rights to use outputs for safety monitoring
- Inherently unprotectable material (e.g., factual summaries, ideas) cannot be "owned" regardless of what the contract says
Under Brazil's Copyright Law (LDA, Law No. 9,610/1998), a further layer applies: output without meaningful human creative contribution may not receive copyright protection at all, making the contractual "ownership" economically hollow if the output is not protectable in the first place.
Training Data Use: Opt-Out, Retention, Deletion
This is the clause most frequently overlooked. Standard consumer plans from major providers typically include a right to use submitted content to train, improve, or fine-tune the model — often with an opt-out mechanism that is not enabled by default.
For enterprise and API plans, the standard position is usually reversed: no training use by default, with limited and defined retention periods. But "no training use" does not automatically mean "immediate deletion" — retention for safety, legal compliance, or abuse-prevention purposes may still apply for defined periods.
Before integration, confirm in writing:
- Is training use enabled or disabled for your plan?
- What is the retention period for prompts and outputs?
- How can you request deletion of previously submitted data?
For data subject requests under the LGPD (Art. 18): if personal data submitted to an AI provider cannot be deleted upon request, you are exposed. The sub-processor arrangement must include deletion cooperation.
Confidentiality: Prompts and Sensitive Data
Prompts frequently contain sensitive business information: client names, financial data, product strategy, unpublished research. The confidentiality clause in an AI contract governs whether the provider can use, reference, or share that information.
Before submitting sensitive data, verify:
- Is the provider's confidentiality obligation contractual (in the enterprise terms) or merely a policy?
- Which employees of the provider may access your data for safety review?
- Are prompts retained in logs that could be subpoenaed in third-party litigation?
- Does the sub-processor list include parties in jurisdictions with broad government surveillance access?
An internal AI use policy should classify data by sensitivity and specify which categories may be submitted to which providers under which configurations. This is not an optional exercise for regulated industries or companies handling client data.
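One way to operationalize such a policy is a simple lookup table that maps data sensitivity tiers to the provider configurations under which they may be submitted. A minimal sketch in Python; the category names, plan labels, and configuration flags below are illustrative assumptions, not any provider's actual settings:

```python
# Hypothetical internal AI use policy: which data categories may be
# submitted under which provider configurations. All names illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    provider: str
    plan: str              # e.g. "enterprise_api" vs "consumer"
    training_disabled: bool
    dpa_signed: bool       # LGPD-covering Data Processing Agreement in place

# Rules per sensitivity tier: each maps a config to allowed/blocked.
POLICY = {
    "public":       lambda c: True,
    "internal":     lambda c: c.training_disabled,
    "confidential": lambda c: c.training_disabled and c.plan == "enterprise_api",
    "personal":     lambda c: c.training_disabled and c.dpa_signed,
}

def may_submit(category: str, config: ProviderConfig) -> bool:
    """Return True if data of this category may be sent under this config."""
    rule = POLICY.get(category)
    if rule is None:
        return False  # unknown categories are blocked by default
    return rule(config)

enterprise = ProviderConfig("vendor-x", "enterprise_api", True, True)
consumer = ProviderConfig("vendor-x", "consumer", False, False)
print(may_submit("personal", enterprise))    # True
print(may_submit("confidential", consumer))  # False
```

The deny-by-default rule for unknown categories mirrors the policy goal: data that has not been classified should not reach any provider.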
Indemnification: Scope and Practical Limits
Some major providers have introduced intellectual property indemnification for enterprise customers: if a third party claims that the provider's output infringes their IP, the provider will defend and indemnify the customer, subject to conditions and caps.
Before relying on this protection, understand:
- Scope: does it cover copyright, trademark, trade secret, or only specific categories?
- Cap: what is the monetary ceiling?
- Trigger conditions: must you notify the provider promptly? Must you cede control of the defense?
- Exclusions: most indemnification clauses exclude situations where the customer submitted infringing inputs, disabled safety features, or used the output in a way that violated the provider's terms
In high-volume content production, the indemnification clause may be determinative in provider selection. An enterprise plan with IP indemnification provides a materially different risk profile than a standard API plan without it.
LGPD Compliance in AI Integrations
Every AI integration that involves personal data of Brazilian data subjects requires an LGPD analysis:
- Legal basis: what is the legal basis for processing personal data through the AI system? (Art. 7, LGPD)
- Transparency: data subjects must be informed that their data is processed by an AI system
- Data Processing Agreement (DPA): the AI provider acting as a processor must sign a DPA that covers LGPD obligations — including sub-processor controls, security requirements, and data subject request cooperation
- International transfer: if the provider's servers are outside Brazil, the transfer must comply with LGPD's international transfer requirements
Most major AI providers offer DPAs for enterprise customers. Verifying that the DPA's scope covers your specific use case — and that it addresses LGPD's requirements rather than only GDPR — is the due diligence step most companies skip.
Contract Review Checklist
Before signing or renewing an AI provider agreement:
- Who owns the outputs? Is exclusivity disclaimed?
- Is training use enabled or disabled for your plan? Can it be disabled contractually?
- What is the prompt retention period? How do you request deletion?
- Is the confidentiality clause contractual or a changeable policy?
- What does the indemnification clause cover — and what does it exclude?
- Is there a DPA that covers LGPD obligations, including sub-processors?
- Where are servers located, and does the international transfer mechanism comply with LGPD?
- What are the SLA commitments, and what is the migration/exit path if the provider discontinues the model?
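For teams tracking several provider agreements at once, the checklist above can be kept as structured data so open items are visible at a glance. A hypothetical sketch; the field names simply mirror the checklist and are assumptions, not a standard schema:

```python
# Hypothetical review record for one AI provider agreement.
# Field names mirror the checklist; values are illustrative.
from dataclasses import dataclass, fields

@dataclass
class ContractReview:
    output_ownership_assigned: bool
    training_use_disabled: bool
    retention_period_defined: bool
    confidentiality_contractual: bool
    indemnification_reviewed: bool
    lgpd_dpa_signed: bool
    transfer_mechanism_verified: bool
    exit_path_documented: bool

def open_items(review: ContractReview) -> list[str]:
    """List checklist items not yet satisfied for this agreement."""
    return [f.name for f in fields(review) if not getattr(review, f.name)]

review = ContractReview(True, True, True, False, True, True, False, True)
print(open_items(review))
# ['confidentiality_contractual', 'transfer_mechanism_verified']
```

Keeping the record per agreement, per renewal cycle, makes it easy to show during an audit which items were verified and when.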
Our practice covers AI law and digital contracts. See also: Who Owns AI-Generated Content in Brazil?
FAQ
Who owns the output generated by the AI?
It depends on the contract. Most terms assign ownership or broad usage rights over the output to the user, with limits — including reservations on inherently unprotectable material. Brazil's Copyright Law (Law No. 9,610/1998) adds a layer: without meaningful human creative contribution, the output may lack copyright protection regardless of what the contract says about ownership.
Are my prompts used to train the model?
It depends on the plan and configuration. On consumer plans, training use is often the default, with opt-out available from some providers. On enterprise/API plans, the standard is usually no training use, with limited retention. For personal data, the configuration must be compatible with the LGPD legal basis and with your contractual terms with your own clients and users.
Is it safe to submit company or client data to an AI tool?
It can be. Before submitting confidential, secret, or personal data, verify: the provider's confidentiality clause, the training opt-out configuration, prompt retention, sub-processors, and server location. For personal data, you also have LGPD obligations on legal basis, transparency to data subjects, and international transfer requirements. The internal AI use policy should list what can and cannot be submitted.
What does the provider's indemnification actually cover?
It varies. Some providers cover third-party IP claims over the output, with caps and exclusions. Others exclude output indemnification entirely. Points to verify: scope (copyright, privacy, trademark?), monetary cap, triggering conditions, and exclusions (use outside the terms, infringing data submitted by the user). In intensive use, this clause can drive provider selection.
Can AI provider contracts be negotiated?
On consumer/standard plans, rarely — they are adhesion contracts. On enterprise plans, high-volume API, and strategic partnerships, yes — the typically negotiable points are: a no-training-use clause, prompt retention, expanded indemnification, SLA, governing law, and venue. For companies in Brazil, it is prudent to seek clauses expressly recognizing LGPD application and providing data-subject request cooperation mechanisms.
