Allende & Brea – Estudio Jurídico

This report does not constitute legal advice, or advice of any other kind, from Allende & Brea. For any questions, please do not hesitate to contact us.

Navigating the AI Landscape in Argentina: Data Protection, Guidelines & Judicial Responses

Summary

  • Argentina currently has no specific AI regulation, though existing frameworks on personal data protection and automated decision-making (via the PDPL and Argentina’s accession to Convention 108+) provide the baseline for private-sector AI oversight.
  • The AAIP’s Interpretative Guidance and its “Guide for Public and Private Entities on Transparency and Personal Data Protection for Responsible AI” extend AI governance beyond public agencies, promoting ethical development through transparency, impact assessments, and accountability.
  • Legislative activity is ongoing, with various AI-related bills introduced, though none have become law yet.
  • Judicial practice is emerging; courts are using general legal provisions to address AI-related challenges—examples include prosecutions for deepfake dissemination during electoral campaigns and criminal liability for AI-generated child pornography.

Overview

Despite rapid AI adoption, Argentina has not enacted a specific AI Act. However, a robust legal and regulatory ecosystem already applies to AI through the existing Personal Data Protection Law (Ley 25.326, the “PDPL”), its implementing decree, and the country’s accession to Convention 108+, which aligns Argentina with international data protection frameworks. Automated decision-making falls within the ambit of these rules, particularly under Section 20 of the PDPL.

Complementing formal regulations, the AAIP (Agency for Access to Public Information) issued interpretative guidance confirming that the PDPL applies to private actors using automated decision-making mechanisms. The AAIP also published the “Guide for Public and Private Entities on Transparency and Personal Data Protection for Responsible AI,” embedded within its National Program for Transparency and Data Protection in the Use of AI (Resolution 161/2023). The guide promotes responsible AI use through measures such as impact assessments, transparency by design, interdisciplinary review, explainability, and data protection throughout the AI lifecycle.

Multiple AI-related bills have been proposed, addressing matters such as expanded transparency mechanisms, the use of AI to create deepfakes, and facial recognition for public security purposes. However, none of these bills has become law at this stage.

In the absence of AI-specific legislation, judicial decisions are filling the gap. Notably, the National Electoral Chamber confirmed prosecutions for circulating deepfakes during an election campaign—addressing electoral integrity through existing criminal and media laws. Another case held that using AI to create child pornography constitutes a criminal offense—applying preexisting criminal statutes to AI-generated sexual content involving minors.

