The Power of AI Legislation in Defining a Nation’s Path

April 17, 2026

Laura Hernández Bethermyt

Senior Associate, Alessandri Abogados

Artificial intelligence (AI) is not merely a technological disruption. It is, above all, a driver of economic, legal, and cultural transformation that redefines global competitiveness. AI legislation is no longer a technical exercise but has become a strategic tool: its design can determine whether a country actively integrates into the digital economy or remains trapped in symbolic regulation, unable to generate real innovation or effectively protect the rights it claims to safeguard.

The global debate is illustrative. The European Union is committed to the AI Act[i] as a comprehensive regulatory model, with a risk-based approach and increasing obligations for high-impact and general-purpose systems. The United States, on the other hand, oscillates between sector-specific regulations, ex post enforcement, and calls for temporary moratoriums, prioritizing flexibility and enforceability. Japan is moving forward with an enabling approach, where regulation accompanies technological experimentation rather than anticipating it. Latin America observes, debates, and legislates, but does so from a structurally distinct position, marked by capacity gaps, technological inequality, and institutional weakness.

Regulatory Power as a Form of Sovereignty

Legislating on artificial intelligence is also about defining a country’s “regulatory brand.” Not all regulations convey the same message: some signal certainty, others caution, and many merely good intentions. In Latin America, 193 legislative initiatives on AI were recorded across 13 countries between 2021 and 2025—a volume that reveals dynamism, but also fragmentation. Nearly 60% of these bills have a predominantly regulatory focus[ii], compared to just 26% with an enabling or pro-innovation orientation. The recurring emphasis on crimes, criminal aggravating factors, and general frameworks contrasts with the scant attention paid to education, digital infrastructure, or technological promotion.

National examples show divergent paths. Peru has moved forward with several specific laws and regulations; El Salvador adopted an openly enabling law in 2025, even creating a national artificial intelligence authority to attract investment and talent; Chile, on the other hand, has promoted a bill inspired by the AI Act that has been criticized for the risk of overregulation and for its limited practical viability. This diversity reflects an unresolved structural tension: how to control the risks of AI without stifling its transformative potential.

Regulating without strengthening technical capabilities, creating specialized agencies, or establishing real enforcement mechanisms leads to a familiar scenario in the region: regulations that are ambitious on paper but unenforceable in practice. The result is a dangerous paradox: high compliance costs for those who do try to comply (generally startups and SMEs) and ample room for informality or evasion for actors with greater economic and technological power.

AI and Intellectual Property: Authorship in Crisis

One of the areas where this tension manifests most clearly is intellectual property. The emergence of generative systems challenges classic notions of authorship, ownership, and originality. Questions that once seemed theoretical are now operational: Can a work created entirely by AI be protected by copyright? Who owns the results when developers, users, third-party datasets, and models trained on millions of pre-existing works are involved?

Comparative experience is beginning to outline answers. U.S. case law has been clear in requiring significant human intervention as a prerequisite for copyright protection. This forces companies, creators, and law firms to rethink their practices: from due diligence on datasets (their origin, licenses, and potential opt-outs) to documenting inventorship in AI-assisted patents and the contractual management of generated outputs.

The lack of regulatory clarity regarding model training and the legal status of the results not only threatens legal certainty but also discourages responsible innovation. Proposals such as collective licenses for training, compensation schemes for the use of protected works, or mandatory labeling of AI-generated content appear as interim solutions, but they require coordination, common standards, and political will to be effective.

The Risk of Regulatory Mimicry

Against this backdrop, one of the greatest risks for Latin America is regulatory mimicry: the temptation to copy European models without thoroughly adapting them to local realities. The AI Act is designed for an ecosystem with robust agencies, large budgets, and a long tradition of regulatory compliance. Transferring that framework, with almost no adjustments, to contexts with limited institutional capacity could produce exactly the opposite effect: discouraging innovation, driving away talent, and reinforcing dependence on external technology.

The Chilean case is illustrative. Various technology associations have warned that excessively rigid regulation, without implementation phases or clear incentives, risks turning AI into a regulatory burden rather than a lever for development. The problem is not regulation itself, but regulating without a serious diagnostic assessment or regulatory impact analysis.

Toward a Sustainable Regulatory Strategy

If legislation on artificial intelligence is to be more than a mere statement of principles, the region needs a shift in approach. First, we must move beyond fear of technology and legislate based on evidence, prioritizing impact assessments and experimental frameworks such as regulatory sandboxes. Combining clear obligations with tax incentives, adoption programs, and support for startups can create a more virtuous balance between protection and innovation.

Second, AI and intellectual property must be coherently integrated. This involves updating IP laws to incorporate exceptions for text and data mining, transparency protocols for datasets, and clear protection for digital replicas and moral rights. Alignment with international standards (such as those of the OECD, UNESCO, and the AI Act) should not be mimetic but functional: reducing transnational friction without sacrificing local flexibility.

Finally, it is essential to move toward effective regional governance. Latin America has produced numerous declarations, but few have translated into operational instruments. Common metrics, shared technical guidelines, financing mechanisms, and specialized authorities could turn regional rhetoric into real capabilities.

Conclusion

The regulation of artificial intelligence is not an end in itself. It is a strategic decision that defines competitiveness, technological sovereignty, and the quality of rights in the digital age. For Latin America, the challenge is not to legislate more, but to legislate better: with institutional realism, strategic ambition, and a deep understanding of how AI is reshaping the knowledge economy.

The most solid path is not to copy a single model, but to intelligently combine different strengths: European regulatory clarity, British agility, U.S. enforcement capacity, Chinese content governance, and the ethical approach of UNESCO and the OECD—all adapted to local priorities and the urgent need to bridge digital divides.


[i] The EU Artificial Intelligence Act (AI Act), the first comprehensive regulation of AI adopted by a major regulator – https://artificialintelligenceact.eu/

[ii] Report on AI Regulatory Challenges in Latin America – https://niubox.legal/informedesafios-ia/