Important: PL 2338/2023 is a bill under deliberation in Brazil's Federal Senate. It is not law. This article describes what the bill proposes — not what Brazilian law currently requires.
What the Bill Proposes
PL 2338/2023, authored by Senator Rodrigo Pacheco, adopts a risk-based approach similar to the EU AI Act. It proposes four categories of AI systems:
Unacceptable risk (prohibited): Systems that manipulate people subliminally or exploit their vulnerabilities; state-run social scoring; and real-time remote biometric identification in public spaces, which the bill bars except in narrowly defined situations (such as certain law-enforcement uses).
High risk: Systems that make or influence decisions in sensitive areas — health, education, employment, credit, social security, public security, and law enforcement. For these, the bill proposes mandatory human oversight, documentation and record-keeping, transparency to affected individuals, and algorithmic impact assessments.
Limited risk: Transparency obligations only — systems interacting with users must identify themselves as AI.
Minimal risk: No specific obligations proposed. Most AI tools used for content creation fall in this category.
What This Means for Creators and Digital Businesses
If you use AI tools as creative or productivity assistants — for copywriting, image generation, code, or translation — the bill as drafted imposes minimal compliance obligations on you.
The heaviest burdens fall on those who develop or operate high-risk AI systems. A startup building an AI-powered hiring tool would face substantial requirements. A creator using generative AI for content would not.
Transparency disclosure: The bill does propose that AI-generated content that could be mistaken for human-created content must be identified as AI-generated in relevant contexts. This aligns with directions already developing in CONAR guidelines and international best practices.
ANPD as AI Regulator
The bill proposes ANPD as the primary AI oversight authority, coordinating with existing sectoral regulators — BACEN for financial AI, ANS for health AI, and others. This builds on ANPD's existing enforcement infrastructure under the LGPD.
What Already Applies Today (Without PL 2338)
Existing frameworks already govern AI use in Brazil:
- LGPD, Art. 20: Automated decisions affecting individuals must be disclosed; data subjects have the right to request human review.
- CDC: AI-powered services offered to consumers are subject to consumer protection rules.
- Copyright law: Using third-party protected works to train AI models without authorization may infringe copyright — an area of active legal debate in Brazil.
- Civil Code: Creating realistic AI representations of real people without consent may violate image rights.
FAQ
Is PL 2338/2023 already in force?
No. PL 2338/2023 is a bill under deliberation in the Federal Senate. Until approved by Congress and signed by the President, it has no legal effect.
Which AI systems would count as high-risk?
The bill classifies as high-risk those systems that make or influence decisions in sensitive areas — such as health, education, employment, credit, and law enforcement. For these systems, the draft proposes additional obligations of transparency, governance, and human oversight.
Would creators who use AI tools have to comply?
For most creators using AI tools as creative assistants, the proposed obligations would be minimal. The heaviest compliance burdens fall on those who develop or operate high-risk AI systems — not on end users of those tools.
Who would regulate AI under the bill?
The bill proposes ANPD (Brazil's National Data Protection Authority) as the competent authority for AI system oversight, coordinating with sectoral regulators.
Does the bill ban deepfakes?
The bill proposes restrictions on unacceptable-risk applications — including subliminal manipulation and real-time biometric identification in public spaces. Non-consensual deepfakes already fall under existing rights (image rights, LGPD) and may be covered by specific prohibitions in the bill, depending on the final text.
What rules already apply to AI in Brazil today?
LGPD (Art. 20 — right to human review of automated decisions), the Consumer Protection Code for AI-powered services offered to consumers, copyright law for the use of protected works in model training, and the Civil Code for image rights violations.
