An AI agent is not a SaaS vendor. The agent doesn't just process — it acts, decides, and sometimes contracts on your behalf. Schedules meetings. Closes orders. Buys ads. Replies to customers. Updates records.
When the agent does the right thing, you save time. When the agent does the wrong thing, you answer for it as if you'd done it yourself.
This guide gives you the 12 essential clauses your AI agent contract needs — and the questions to ask before signing.
Read first: for the broader context on AI vendor contracts, see AI vendor contracts in Brazil (Portuguese). This post focuses specifically on agents — autonomous AI systems that take actions on your behalf.
First Question: Do You Actually Need an Agent Contract?
Not every AI tool is an agent. Quick test:
| If the tool... | It's a... |
|---|---|
| Generates content for you to review | SaaS / tool |
| Suggests an action you approve | Recommender |
| Executes action without per-action approval | Agent |
| Negotiates and closes deals autonomously | Agent |
| Maintains state and acts over time | Agent |
Agent = takes action on your behalf. If your tool fits the bottom three rows, you need an agent contract.
Who Are the Parties?
Three layers in any agent deployment:
- You (the user / principal) — accountable for what the agent does
- The agent provider — sells access to the agent
- The model provider (sometimes the same, sometimes underneath) — runs the underlying model
The contract is usually between you and the agent provider. The model provider sits behind, with its own terms that flow through to you. Understanding that chain is the first piece of the puzzle.
The 12 Essential Clauses
1. Scope of Authority
What can the agent do? Define precisely:
- Allowed actions (book travel? send invoices? close contracts?)
- Counterparties (employees only? customers? third parties?)
- Geographic and temporal limits
- Channels (email? Slack? phone? web forms?)
Vague scope = liability for actions you didn't expect.
2. Decision Boundaries
Within authorized actions, what limits apply?
- Maximum amount per transaction
- Maximum frequency per period
- Type of contract or commitment
- Risk tier triggering escalation
Decision boundaries are the clause that prevents agent runaway scenarios.
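Scope and boundaries can be made machine-enforceable, not just contractual. A minimal sketch, with action names, limits, and policy structure invented for this example:

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    action: str        # e.g. "send_invoice", "close_contract" (illustrative)
    amount_brl: float  # transaction value in BRL
    counterparty: str  # "employee", "customer", "third_party"

# Hypothetical policy mirroring the contract's scope and boundary clauses
POLICY = {
    "allowed_actions": {"book_travel", "send_invoice"},
    "allowed_counterparties": {"employee", "customer"},
    "max_amount_per_transaction_brl": 5_000.00,
}

def check(request: ActionRequest) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action."""
    if request.action not in POLICY["allowed_actions"]:
        return "deny"      # outside the contractual scope of authority
    if request.counterparty not in POLICY["allowed_counterparties"]:
        return "deny"      # counterparty the contract doesn't cover
    if request.amount_brl > POLICY["max_amount_per_transaction_brl"]:
        return "escalate"  # within scope, but above the decision boundary
    return "allow"
```

The point of the sketch: scope violations are denied outright, while in-scope actions above a threshold escalate to a human rather than executing.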
3. Human Oversight and Escalation
When must a human approve before the agent acts?
- Specific transaction types
- Above defined thresholds
- Edge cases the agent flags
- Mandatory human-in-the-loop scenarios (regulated transactions, high-value deals)
Escalation triggers are how you keep control while delegating.
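In code, escalation triggers reduce to a gate the agent must pass before executing. A sketch under assumed names and thresholds (none of these values come from a real contract):

```python
# Assumed values for illustration only
HIGH_VALUE_THRESHOLD_BRL = 10_000.00
REGULATED_ACTIONS = {"grant_credit", "sign_regulated_contract"}

def needs_human_approval(action: str, amount_brl: float,
                         agent_flagged: bool) -> bool:
    """True when the contract requires a human in the loop before acting."""
    return (
        action in REGULATED_ACTIONS               # mandatory human-in-the-loop
        or amount_brl > HIGH_VALUE_THRESHOLD_BRL  # above defined thresholds
        or agent_flagged                          # edge cases the agent flags
    )
```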
4. Input and Output Data
What data flows in and out?
- Data sources the agent can access (CRM? email? calendar? customer database?)
- Data the agent generates (decisions, content, transaction records)
- Confidentiality and segregation of sensitive data
- Cross-border data flows (interface with LGPD)
Without this clause, you don't know what the agent has touched.
5. Ownership of Outputs and Prompts
Who owns what the agent produces?
- Output content (text, decisions, executed transactions)
- Prompts and instructions you provide
- Fine-tuning data, if any
- Usage logs
Common default: you own outputs, provider owns the model. Variations matter — read carefully.
6. Liability Allocation
When the agent errs, who pays?
- User responsibility for actions within scope
- Provider responsibility for tool defects
- Mutual indemnification structure
- Liability caps and exclusions
- Insurance requirements
This is where most contracts are too provider-friendly. Negotiate.
7. Logging and Audit
What is recorded and who can see it?
- Input, output, decision, action taken
- Access controls on logs
- Retention period
- Export format and rights
Logs are evidence when something goes wrong. Negotiate access and retention upfront.
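One way to picture what the clause should guarantee: a structured, exportable record per action. The field names below are assumptions for the sketch, not a standard schema:

```python
import json
from datetime import datetime, timezone

def audit_record(input_text: str, output_text: str,
                 decision: str, action_taken: str) -> dict:
    """Build one per-action audit entry: input, output, decision, action."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_text,           # what the agent received
        "output": output_text,         # what the agent produced
        "decision": decision,          # why it acted (or escalated)
        "action_taken": action_taken,  # what actually executed
    }

# JSON Lines: one record per line, easy to retain, export, and hand to auditors
line = json.dumps(audit_record("customer email asking for a refund",
                               "drafted refund confirmation",
                               "within refund policy scope",
                               "reply_sent"))
```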
8. Stop and Shutdown
How do you turn it off?
- Technical ability to shut down immediately
- Pause without permanent shutdown
- Transition plan if the provider discontinues the agent
- Data return or destruction at end
- Survival of post-shutdown obligations
Stop clauses matter most when you need them most.
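Technically, the stop clause maps to a kill switch the agent consults before every action. A minimal sketch with invented names, distinguishing pause from permanent shutdown:

```python
import threading

class KillSwitch:
    """Gate the agent checks before each action; backs the contractual stop clause."""

    def __init__(self) -> None:
        self._paused = threading.Event()
        self._stopped = threading.Event()

    def pause(self) -> None:
        self._paused.set()       # temporary: the agent can resume later

    def resume(self) -> None:
        self._paused.clear()

    def stop(self) -> None:
        self._stopped.set()      # permanent shutdown, no resume path

    def may_act(self) -> bool:
        return not (self._paused.is_set() or self._stopped.is_set())
```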
9. Confidentiality
What data is confidential?
- Definition of confidential information
- Use restrictions
- Subprocessors and chain
- Survival period
Standard but easy to draft poorly.
10. LGPD and Sector Compliance
Where regulations apply:
- Personal data processing under LGPD
- Sector-specific rules (financial, healthcare, telecom)
- Roles (controller, processor, joint controllership)
- Incident response
Integrate with your existing compliance program — don't create a parallel regime.
11. Indemnification and Insurance
Real protection against agent errors:
- Mutual indemnification scope
- Cap and basket
- Survival period
- Insurance requirements (provider liability insurance, cyber coverage)
12. Model Updates and Versioning
What happens when the model changes?
- Prior notice requirement
- Versioning (keep old model for a period)
- Continuity of contractual clauses
- Termination right on material change
Without this, the agent's behavior can shift without warning.
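In deployment terms, the clause translates to pinning the model version in configuration so an upgrade is an explicit, noticed change. A config sketch with invented keys and values:

```python
# Hypothetical deployment config; keys and values are illustrative
AGENT_CONFIG = {
    "model": "provider-model-name",
    "model_version": "2025-01-15",     # pinned: upgrades require a config change
    "fallback_version": "2024-09-01",  # old model kept available for a period
    "upgrade_notice_days": 30,         # prior notice the contract requires
}

def version_pinned(config: dict) -> bool:
    """A pinned version means behavior can't shift without a deliberate change."""
    return bool(config.get("model_version"))
```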
Common Mistakes
- Treating it as a SaaS contract. Agent ≠ tool. Authority and liability differ.
- Vague authority scope. The agent acts beyond your expectations; you answer for it.
- No decision boundaries. A single transaction can wipe out your budget.
- No escalation to human oversight. Regulated or high-value moves go through without review.
- No logging clause. When something goes wrong, you have no evidence.
- Default liability. Most provider templates push everything to you.
- No update notice. Model changes; agent behavior changes; you don't know.
- No stop clause. Trying to disable a misbehaving agent without contractual right is messy.
How to Move Forward
- Identify whether your tool is an agent or just AI SaaS
- Map the parties — provider, model provider, you
- Define scope of authority and decision boundaries
- Negotiate liability and indemnification — don't accept defaults
- Document logging, oversight, and stop mechanisms
- Integrate with LGPD and sector compliance
- Plan for model updates from day one
Talk to Hosaki Advogados
Rolling out an AI agent for sales, customer support, internal operations, content generation, or transactions? Send us your draft contract. We flag missing clauses and uncovered risks, and adapt the terms to Brazilian realities. One week to a solid contract before you sign with the vendor.
We work with founders, legal teams, and operations leaders deploying AI agents in Brazilian and cross-border contexts.
Reach us at hosakiadvocacia.com.br // contato@hosakiadvocacia.com.br // send us your draft to review.
FAQ
What's the difference between an AI SaaS contract and an AI agent contract?
In an AI SaaS contract, the user interacts with the tool — asks for something, gets a response, decides what to do. In an AI agent contract, the tool acts on its own on behalf of the user — schedules meetings, closes orders, contracts vendors, executes decisions within a defined scope. The practical difference: in SaaS, the user is responsible for the final decision; with an agent, the agent makes the decision and the user answers for it. Liability clauses, authority scope, and supervision are radically different in the two cases.
Who is liable when the AI agent makes a mistake?
As a rule, the user of the agent — because the agent acts on behalf of the user, similar to a mandate under the Civil Code. The agent provider can be liable in specific scenarios: product defect under consumer law (CDC) when the error stems from tool failure, breach of contractual warranty, violation of a duty of means (security, updates). Liability allocation is negotiated in the contract. Without a clear clause, the default rule tends to push liability to the user. That's why mutual indemnification, insurance, and liability caps matter so much.
Can I limit what the agent is authorized to do?
Yes — and this is an essential clause. The agent's authority scope must be precisely defined: what actions the agent may take, within what limits (max amount per transaction, max frequency, counterparty type, geography), and which decisions require human approval before execution. Without clear definition, the agent may act beyond expectations and the user answers. Decision boundaries are the clause that protects the user most in real use.
Who owns the agent's outputs and my prompts?
Depends on the contract. The ownership clause should cover: ownership of the generated output (text, decision, executed transaction), the prompts and instructions given to the agent, the data used to train or fine-tune the agent. Common default in current contracts: user owns the output, provider keeps the model. But there are variations — some providers reserve rights over outputs, others over usage logs for service improvement. Reading this clause carefully prevents surprises later.
How does LGPD apply to an AI agent?
The agent processes personal data when: it receives input with personal data, operates on customer records, decides about individuals, or maintains interaction logs. Each of these is data processing under LGPD. The contract should define: applicable legal bases, controller/processor roles between user and provider, technical and organizational security measures, international data transfers, data subject rights, incident plan. For a digital company, integrating agent operations into the existing LGPD program is the path — not creating a parallel regime.
What happens when the provider updates the model?
A model update can change agent behavior. The update clause should cover: prior notice to the user, possibility of versioning (keep old model for a period), continuity guarantee for contractual clauses, termination right if the update materially changes the service. Without this clause, the user may wake up to an agent that behaves differently — a real risk in AI deployment.
What should the logging and audit clause cover?
The log and audit clause defines: what is recorded (input, output, decision, action taken), who has access to the logs, retention period, format, and exportability. For a digital company, the agent's log is primary evidence when something goes wrong — regulatory dispute, customer complaint, internal investigation. Without structured logs, reconstructing what happened is hard. Without a contractual clause on logs, the provider may not deliver them or may charge to deliver them later.
How do I shut the agent down?
The stop and shutdown clause should be explicit. It includes: technical ability to shut down immediately, ability to pause without permanent shutdown, transition plan if the agent is discontinued by the provider, return or destruction of data at the end, survival of post-shutdown obligations (confidentiality, logs, indemnification for past acts). Without this clause, the user may struggle to stop using an agent that's causing a problem.
