The Oregon Attorney General’s recently released AI guidance stands out for its approachable tone and engaging style, more reminiscent of a blog post than a government directive. By presenting complex issues in straightforward language, the document ensures accessibility for businesses and developers navigating AI-related challenges.
Unlike some AI policies bogged down by dense jargon, this guidance uses a refreshingly simple definition of artificial intelligence, describing it as “a variety of new products that pass the Turing Test.” This nod to the test’s foundational concept—determining whether a machine can mimic human behavior indistinguishably—offers clarity amid a sea of convoluted definitions.
The guidance doesn't aim to be comprehensive. Instead, it underscores how existing laws apply to AI, emphasizing that technological innovation doesn't absolve businesses from their legal obligations.
While acknowledging AI’s potential economic benefits for Oregon, the guidance places stronger emphasis on mitigating risks. Key concerns include privacy violations, discriminatory practices, and accountability in automated decision-making.
📍 Unlawful Trade Practices Act (UTPA)
The AG interprets this law to address AI-related risks in consumer transactions, citing several examples of potential violations:
1. AI tools failing to provide accurate information to consumers.
2. Non-disclosure of known material defects or nonconformities in AI products at the time of delivery.
3. Misrepresentation of product characteristics, uses, or benefits through AI.
4. Using AI to falsely advertise price reductions.
5. Exploiting AI to set unconscionably excessive prices during emergencies (violating anti-price-gouging laws).
6. AI-generated voices in robocalls that misrepresent the caller’s identity or purpose.
📍 Oregon Consumer Privacy Act (OCPA)
The AG clarifies that developers using third-party datasets to train AI models may qualify as "controllers" under the OCPA. Consequently, they must meet the same compliance standards as the original data collectors.
The guidance highlights several obligations:
1. Providing transparent, accessible privacy notices.
2. Obtaining explicit consent for sensitive data before using it to train AI models.
3. Prohibiting retroactive or passive alterations to privacy policies to legitimize the use of previously collected personal data for AI training.
4. Allowing consumers to opt out of AI-driven profiling in decisions with significant legal or personal impacts (e.g., housing, education, lending).
5. Respecting the right to be forgotten.
6. Conducting data protection assessments for activities involving heightened risks, such as processing sensitive data or profiling.
📍 Oregon Consumer Information Protection Act
This act, which safeguards personal information and requires breach notifications, also applies to entities using AI. The AG stresses that its security and notification obligations remain fully in force for businesses building or deploying AI systems.
📍 Oregon Equality Act
Strengthened anti-discrimination protections under this act extend to AI use cases. The AG underscores that AI applications must not adversely affect protected groups.
Overall, this document provides a user-friendly guide, enriched with relatable examples, to help businesses recognize and address the legal risks associated with AI. It serves as a practical resource for fostering compliance and mitigating potential liabilities.