Algorithmic Transparency Redefines Scoring in Chile
January 22, 2026

Law No. 21,719 requires disclosure when a credit assessment is based on automated decisions and mandates a general explanation of how those models work. The standard is no longer focused solely on data quality; it now extends to transparency about the process that gives rise to the decision.
Cristina Valenzuela
Associate at Alessandri
Law No. 21,719 on Personal Data Protection introduces a structural shift in the processing of financial information in Chile. This regulatory turn directly affects credit scoring, or risk scores, an essential mechanism for credit assessment.
Until now, the regulatory framework—comprising Law No. 20,575, which establishes the purpose limitation principle for personal data processing, and Article 17 of Law No. 19,628 on the protection of private life—focused on the objectivity of the data and the exclusive purpose of assessing commercial risk. The new law adds a fundamental requirement: transparency regarding the inferential process that produces the rating.
Relationship Between the General Framework and the Financial Sector Regulation
Chile’s risk assessment system has operated under a clear rule: under Law No. 20,575, economic and financial data may only be used to evaluate credit risk. Traditionally, this mandate was understood to be satisfied through the use of truthful and objective data. That standard, however, left a critical point unaddressed: how those data were weighted and transformed into a score.

Law No. 21,719 introduces an unprecedented obligation in the financial domain: making transparent how risk scores are built. Its Article 8 bis provides that decisions affecting a data subject cannot rely exclusively on automated processing without the data subject knowing, at least in general terms, how the system operates. This means that entities must describe the variables they consider and how those variables influence the final outcome, as the sketch below illustrates. The standard shifts toward understanding the reasoning the model follows.
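To make that obligation concrete, consider a minimal sketch of what a general-terms explanation might look like for a simple additive scoring model. The variable names, weights, and thresholds below are hypothetical assumptions chosen for illustration; they are not any institution's actual methodology, nor a format prescribed by the law.

```python
import math

# Hypothetical weights: which variables the model considers and how much
# each one influences the outcome. Names and values are illustrative
# assumptions, not any institution's actual scoring model.
WEIGHTS = {
    "payment_history_on_time_pct": 1.9,   # share of bills paid on time
    "debt_to_income_ratio": -2.8,         # higher debt load lowers the score
    "months_since_last_default": 0.04,    # distance from last default helps
}
INTERCEPT = -1.5

def score(applicant: dict) -> tuple[float, dict]:
    """Return a score in [0, 1] and each variable's contribution to it."""
    contributions = {name: w * applicant[name] for name, w in WEIGHTS.items()}
    logit = INTERCEPT + sum(contributions.values())
    return 1 / (1 + math.exp(-logit)), contributions

def explain(applicant: dict) -> str:
    """A general-terms explanation of the kind Article 8 bis contemplates:
    the variables considered and the direction and size of their influence."""
    probability, contributions = score(applicant)
    lines = [f"Estimated repayment likelihood: {probability:.0%}"]
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raises" if c > 0 else "lowers"
        lines.append(f"- '{name}' {direction} the score (contribution {c:+.2f})")
    return "\n".join(lines)

print(explain({
    "payment_history_on_time_pct": 0.95,
    "debt_to_income_ratio": 0.40,
    "months_since_last_default": 36,
}))
```

Even a breakdown this simple gives the data subject what the provision asks for in general terms: which variables enter the model and in which direction each one pushes the result.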
Case Law and Recognition of the Score as Personal Data
This shift had been foreshadowed by Supreme Court case law. In Case No. 7613-2024, the court held that those who process financial information to generate reports act as data banks subject to Law No. 19,628 and cannot invoke trade secrets to deny access to the information generated.

In Case No. 60876-2021, the Court determined that the risk score constitutes personal data, understood as a mathematical result constructed from information that belongs to the data subject. Law No. 21,719 expressly incorporates this view by requiring transparency regarding the model’s criteria. Thus, the data subject will be able to exercise their rights not only with respect to the data they provide, but also with respect to the methodology used and the result obtained through its processing.
Data Quality and Limits on Sources Used
The interaction between the new law and the special financial regime redefines the boundaries of what data may feed a scoring model. Law No. 20,575 already prohibited requesting economic information for purposes other than credit evaluation. With Law No. 21,719, principles such as lawfulness, proportionality, and data minimization, along with the right to object, impose additional restrictions. These limitations make it harder to incorporate data from social networks, open sources, or mass-collection techniques where there is no direct and necessary link to the data subject’s creditworthiness; mere availability or public access is no longer sufficient. The result is a stricter standard in which the relevance and necessity of the data take on a central role, as the sketch below suggests.
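One way to picture that stricter standard is an admissibility gate run before any variable reaches the model. The allowlist, field names, and justifications below are hypothetical assumptions for illustration; the statute prescribes principles such as relevance and necessity, not any particular mechanism.

```python
# Hypothetical admissibility gate applied before a variable feeds the model.
# The allowlist and its justifications are illustrative assumptions; the law
# sets out principles (relevance, necessity), not this specific mechanism.
CREDIT_RELEVANCE_BASIS = {
    "repayment_history": "directly evidences past credit behaviour",
    "current_outstanding_debt": "directly measures repayment capacity",
    "declared_income": "directly measures repayment capacity",
}

def admissible_features(candidates: dict) -> dict:
    """Keep only variables with a documented, direct link to creditworthiness.
    Mere public availability (e.g. social media data) is not a valid basis."""
    admitted = {k: v for k, v in candidates.items() if k in CREDIT_RELEVANCE_BASIS}
    rejected = sorted(set(candidates) - set(admitted))
    if rejected:
        print(f"Excluded (no direct link to creditworthiness): {rejected}")
    return admitted

features = admissible_features({
    "repayment_history": [1, 1, 0, 1],
    "declared_income": 1_200_000,
    "social_media_activity": "high",  # publicly accessible, but not relevant
    "web_scraped_interests": ["travel"],
})
```

The design posture matters more than the code: each variable enters the model with an articulated credit-relevance justification, rather than by default because it happens to be available.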
Conclusion: From Data Objectivity to Process Clarity
Law No. 21,719 does not replace sector-specific regulation on credit scoring; it complements it by imposing a more demanding transparency standard. The industry will have to move from a model based exclusively on the objectivity of the data entered into the system to one that requires explaining, in an understandable way, the logic of the score produced. Going forward, automated risk assessment will only be valid if it can justify its conclusions to the data subject. The challenge no longer lies solely in processing accurate data, but in being able to explain the logic of the algorithm.