Synthetic media is no longer a future-state concern for creators in Brazil. Tools that produce convincing video and audio deepfakes are now consumer-grade. Creators with a public profile have already discovered themselves "endorsing" cryptocurrency platforms, weight-loss products, or political messages they would never endorse.
This article covers what Brazilian law already provides — without waiting for AI-specific legislation — and the operational sequence for the first 24 hours after a deepfake is discovered.
What a deepfake violates under Brazilian law today
There is no single dedicated deepfake statute in Brazil, but the act sits at the intersection of multiple legal frameworks. A non-authorized deepfake of a creator's image and voice typically triggers all of the following simultaneously:
Personality rights. The Federal Constitution (Art. 5, X) and the Civil Code (Arts. 11-21) protect honor, image, and voice as inalienable personality rights. Use without authorization, especially for commercial purposes, generates a direct claim.
Related copyright (direitos conexos). The Copyright Act (Law No. 9,610/1998), in Art. 89 and following, protects performers' rights over their interpretations — the use of a creator's voice and performance in synthetic content produced without authorization is unauthorized exploitation.
Data protection. Biometric data (face, voice patterns) is sensitive personal data under LGPD Art. 11, requiring a specific legal basis. Training a model on a person's biometric data, or processing it to generate synthetic content of that person, without consent or another valid legal basis, is unlawful processing.
Criminal offenses. Depending on the content, the deepfake may also constitute crimes against honor (Penal Code Arts. 138-140 — calumny, defamation, insult), fraud (Art. 171), and false statement in a document (falsidade ideológica, Art. 299). The criminal qualification matters for the criminal complaint pathway, which is separate from civil action.
The first 24 hours
Speed and documentation in the first day determine the trajectory of the response.
Document everything
Before anything else: screenshot the video itself, the URL, the posting profile, the timestamp, the view count, the comments. If the content is migrating across platforms, capture each instance. This becomes the evidentiary base for everything that follows.
For high-stakes cases — high reach, monetary loss, severe reputational damage — request an ata notarial (notarized record of fact) from a notary office (cartório). Brazilian courts give significant evidentiary weight to an ata notarial documenting digital content, since it freezes the state of an online publication at a specific moment.
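For creators or their teams who want a technical layer of self-collected evidence alongside screenshots, the capture step above can be partly automated. The sketch below is a minimal, hypothetical example (the function name and file layout are illustrative, not from any standard): it stores the raw captured bytes of a page or media file, records a SHA-256 hash and a UTC timestamp, and appends an entry to a manifest. It complements, and does not replace, an ata notarial.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def record_evidence(body: bytes, url: str, out_dir: str = "evidence") -> dict:
    """Store captured bytes and log a hash + timestamp for later verification.

    `body` is the raw content already fetched (e.g., the downloaded video or
    the HTML of the posting page); `url` is where it was found.
    """
    Path(out_dir).mkdir(exist_ok=True)

    # SHA-256 lets anyone later verify the stored file is unaltered.
    digest = hashlib.sha256(body).hexdigest()

    # Save the raw bytes under a name derived from the hash.
    blob = Path(out_dir) / f"{digest[:16]}.bin"
    blob.write_bytes(body)

    # Append a manifest entry: URL, hash, capture time (UTC), file path.
    entry = {
        "url": url,
        "sha256": digest,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "file": str(blob),
    }
    with (Path(out_dir) / "manifest.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Usage would be as simple as `record_evidence(downloaded_bytes, "https://...")` for each instance of the content found. The hash-plus-timestamp manifest does not carry the legal weight of a notarized record, but it makes tampering detectable and helps organize evidence when the same deepfake migrates across platforms.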
Platform takedown
Major platforms (Instagram, TikTok, YouTube, X, Meta-owned products) have specific reporting flows for deepfakes, impersonation, and synthetic media. Use the most specific category available — generic "abuse" reports tend to take longer. Most platforms also have copyright takedown channels that can be parallel-filed if the deepfake includes any of the creator's original audio or visual elements.
Platform response times vary. Some content disappears within hours; some takes days. Persistent escalation through verified channels (creator support, partner programs) helps.
Communicate proactively
Within hours, post a public alert through verified channels: original platforms, email list, brand partners. The communication does two things:
- Mitigates conversion of viewers into victims of the underlying scam (if the deepfake is selling something fraudulent)
- Protects partners and brand relationships from confusion
Keep the language factual. Avoid speculation about who produced the content. Reference the takedown process in motion.
Civil action
If the deepfake creator is identifiable and within Brazilian jurisdiction, civil action under Civil Code Arts. 11-21 (personality rights) combined with the Copyright Act (related rights) is the standard pathway. Damages can include:
- Moral damages — non-pecuniary harm from the violation of honor and image
- Material damages — actual losses (lost partnerships, lost contracts, brand impact) and any commercial benefit obtained by the offender from the unauthorized use
Emergency relief (tutela de urgência) requesting takedown can be granted by Brazilian courts when documentation is clear and the harm is ongoing. This is parallel to the platform takedown — judicial removal binds the platform legally even if voluntary takedown stalls.
Criminal action
Criminal action runs in parallel to civil action, not as a substitute. The criminal complaint (queixa-crime for honor crimes; representation for fraud) is filed with police or directly with the prosecution depending on the offense. Documentation collected for the civil action is reused.
Criminal proceedings tend to move more slowly than civil ones but produce different outcomes — they target the offender personally, not just compensation, and a criminal record acts as a deterrent against repeat conduct.
What about the foreign offender
Many deepfake operations originate outside Brazil. Brazilian jurisdiction can still apply if the content is accessible in Brazil and harms a person within Brazilian territory (Marco Civil Art. 11), but enforcing a decision against a foreign defendant requires international cooperation, which is slow.
The practical sequence:
- Immediate: takedown via a platform with Brazilian operations (which can be ordered by Brazilian courts, regardless of where the content originated)
- Short-term: civil action against the platform itself if needed, including obligation to preserve poster data for investigation
- Medium-term: investigation to identify the actual person behind the deepfake; only then is the criminal/civil action against that individual practical
Most creators, in practice, focus on takedown and reputational protection rather than tracking the offender across jurisdictions — unless monetary stakes or political motivations make that pursuit worth the cost.
What to ask for in contracts going forward
As a defensive measure, contracts with brands, platforms, and any party that processes the creator's image or voice should include explicit clauses on:
- Prohibition of generating synthetic content from the creator's image, voice, or performance without specific written consent for that synthetic use
- Audit rights over how biometric data captured (e.g., during a shoot) is stored, retained, and disposed of
- Indemnification for unauthorized synthetic use originating from data shared during the engagement
- Specific deepfake/impersonation insurance coverage where applicable
These clauses are not yet market-default in Brazil. Adding them shifts a creator's posture from reactive (responding to deepfakes after the fact) to preventive (controlling who has access to the inputs that make convincing deepfakes possible).
What changes when AI-specific legislation passes
Brazil's AI regulation framework is still under legislative discussion and not yet enacted at the time of writing. The expected direction includes specific provisions on synthetic media, transparency obligations, and operator liability for AI systems used to produce harmful content. When it passes, the legal bases above will be reinforced — not replaced.
For now: the existing framework is sufficient. The challenge is operational, not legal. Creators with documentation, platform relationships, and a fast response sequence handle deepfake incidents in days. Those without, in months.
FAQ
What should a creator do immediately upon discovering a deepfake?
First, document: screenshot of the video, URL, screenshot of the posting profile, capture of view count at the moment. Second, notify the platform — Instagram, TikTok, YouTube, and Meta have specific forms for deepfake and impersonation. Third, preserve evidence with a notary's certified record (ata notarial) if the case is serious. In parallel: alert partners and followers via official channels about the fake content, to mitigate reputational damage and prevent fraudulent conversions.
Is there a specific law against deepfakes in Brazil?
There is no single dedicated law, but the act is covered by multiple instruments: Constitution Art. 5, X (protection of honor and image); Civil Code Arts. 11-21 (personality rights — name, image, voice); Copyright Act (Law No. 9,610/1998) Art. 89 et seq. (related rights over voice and performance); LGPD (Law No. 13,709/2018) Art. 11 (biometric data is sensitive data, requires specific legal basis); Penal Code Arts. 138-140 (calumny, defamation, insult) where content is offensive. The combination provides robust grounds for civil and/or criminal action.
Can a creator claim damages for a deepfake?
Yes. Unauthorized use of one's image for commercial purposes or that damages honor generally generates compensable moral damages (Civil Code Arts. 11-21 and consolidated jurisprudence of Brazil's Superior Court of Justice — STJ). The amount varies according to reach of the publication, severity of the content, intent of the offender, and economic capacity of the offender. In cases involving unlawful commercial benefit (a deepfake selling a product), material damages may also be claimed — the profit obtained from unauthorized use.
How is deepfake content removed from platforms?
Under Brazil's Marco Civil da Internet (Law No. 12,965/2014, Art. 19), platform civil liability for third-party content depends on a specific judicial order for takedown — except for copyright violations (separate rule) and non-consensual intimate imagery (Art. 21, faster rule). For deepfakes, the fastest path is usually: formal notice to the platform → emergency judicial measure (tutela de urgência) seeking removal. Judges tend to grant preliminary injunctions when documentation is clear.
What if the deepfake was posted from outside Brazil?
More complex — it involves international jurisdiction. If the content is accessible in Brazil and affects a Brazilian rights holder, there is grounding for Brazilian jurisdiction under the Marco Civil (Art. 11). But enforcing a decision against a foreign defendant depends on international cooperation. The practical path tends to be: immediate takedown via the platform (which has Brazilian operations) + civil action against the platform for data preservation of the original poster + investigation to identify the actual responsible party.
