A preview of Leonard Wills' upcoming SiRAcon '26 talk "Trustworthy AI Begins Here: NIST's Risk Management Framework and Public Safeguards." Read more about Leonard's talk here.
Over the past few years, organizations have faced increasing scrutiny from federal and state regulators over how they deploy and govern AI. The Federal Trade Commission (FTC) enforcement action against Rite Aid serves as a lesson for organizations to develop and maintain AI governance programs that adequately reduce foreseeable harm.
FTC Enforcement Action Against Rite Aid
Overview
In 2023, the FTC found that between 2012 and 2020, Rite Aid failed to take reasonable steps to manage risks to consumers arising from its use of AI facial recognition technology (FRT) in retail stores. The company deployed AI FRT to deter retail theft but failed to implement controls to address the risks associated with the technology.
Foreseeable Harm
The FTC determined that the risks associated with deploying FRT – including inaccurate match alerts and their harmful impact on consumers – were reasonably foreseeable. The system generated false-positive matches that disproportionately affected women and people of color, increasing the likelihood of repeated and unequal harm across certain demographic groups. Despite this, Rite Aid did not implement adequate safeguards to assess, monitor, or mitigate those risks prior to or during deployment.
Operational Failures
Retail employees received match alerts from the FRT with limited guidance on how to verify their accuracy. Employees often treated these alerts as reliable rather than as system-generated outputs requiring verification. On numerous occasions, employees acted on these alerts and confronted customers about their identity. As a result, customers experienced tangible harm: wrongful stops, unwarranted suspicion, denial of service, and, in some cases, wrongful arrests.
The Need for AI Governance
NIST AI RMF
What’s one of the takeaways from the FTC enforcement action against Rite Aid?
Organizations must develop an AI governance program to address risks that cause foreseeable harm. Implementing the NIST AI Risk Management Framework (RMF) provides a strong starting point for addressing AI risks from procurement through deployment.
The framework consists of four core functions: GOVERN, MAP, MEASURE, and MANAGE.
The GOVERN function requires organizations to develop policies, procedures, and processes to address AI risks across the AI lifecycle.
The MAP function requires organizations to discover and document AI risks.
The MEASURE function requires organizations to apply methods to measure and evaluate AI risks across the AI lifecycle and provide feedback to ensure the accuracy of measurements.
The MANAGE function requires organizations to proactively prioritize and mitigate AI risks based on the results of the MAP and MEASURE functions.
Although the NIST AI RMF offers guidance on how organizations should structure their AI risk management program, the framework does not offer a model to calculate AI risks in financial terms.
Enter the FAIR model.
FAIR provides an enterprise-scalable model to manage and quantify AI risks in financial terms, regardless of the technology stack, organization, or industry. Organizations can leverage both the NIST AI RMF and FAIR to develop and strengthen their AI governance program.
NIST AI RMF & FAIR Implementation
What’s one way to consider implementing the NIST AI RMF with the FAIR approach?
Organizations can implement the four core functions – GOVERN, MAP, MEASURE, and MANAGE – and integrate the FAIR model across these functions to analyze, measure, and manage AI risks.
GOVERN
Under the GOVERN function, an organization establishes policies, procedures, and structures to oversee AI systems across their lifecycle. An organization must define roles, responsibilities, and accountability for those individuals who design, develop, deploy, and monitor AI systems. This function also aligns AI activities with organizational values, policies, and strategic priorities, and incorporates processes to assess potential impacts to an organization, users, and the public.
The GOVERN function ensures that risk identification under the MAP function, risk quantification under the MEASURE function, and risk response under the MANAGE function occur consistently and align with an organization’s risk management program and regulatory requirements.
MAP
Under the MAP function, the FAIR model – unlike a traditional risk register – provides a common language to define and analyze risks across an organization. A traditional risk register documents risks at a point in time using qualitative categories (e.g., “Low,” “Medium,” and “High”). The traditional risk register does not answer: (1) who causes the harm, (2) how often the harm occurs, (3) what conditions make the harm more likely, and (4) what the losses involve. Without addressing these questions, an organization may overlook AI risks that lead to “foreseeable harm,” which may result in federal and state enforcement actions.
Mapping AI risks with FAIR requires an organization to develop risk scenarios that tie a specific threat community, threat type, and threat event to loss event frequency and loss magnitude. For instance, a retail employee (threat community) relies on an erroneous system output (threat type) generated by AI FRT. The system generates a false positive match alert, which the employee considers valid without performing any verification (threat event). The employee then confronts the customer, asking the individual to leave the store in front of others, including family members.
These customer interactions may occur repeatedly based on the rate of false positive alerts and the likelihood that employees will act on those alerts without verification (loss event frequency). As a result, customers may experience wrongful stops, unwarranted suspicion, denial of service, and, in some cases, wrongful arrests, while an organization may face reputational damage and regulatory exposure (loss magnitude).
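The scenario decomposition above, tying a threat community, threat type, and threat event to loss event frequency and loss magnitude, can be captured in a simple structure. A minimal sketch in Python follows; the field names, class name, and all numeric ranges are illustrative assumptions, not figures from the Rite Aid matter or a FAIR-mandated schema:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """FAIR-style scenario: ties a threat to frequency and magnitude inputs."""
    threat_community: str  # who takes the action
    threat_type: str       # how the harm arises
    threat_event: str      # the specific loss-triggering action
    lef_range: tuple       # loss events per year: (min, most likely, max)
    lm_range: tuple        # loss magnitude per event in $: (min, most likely, max)

# Hypothetical FRT scenario modeled on the discussion above
frt_scenario = RiskScenario(
    threat_community="retail employee",
    threat_type="erroneous system output (false-positive FRT match)",
    threat_event="employee acts on an alert without verification and confronts the customer",
    lef_range=(5, 20, 60),               # illustrative calibrated estimates
    lm_range=(10_000, 75_000, 500_000),  # illustrative, primary + secondary loss
)
```

Expressing scenarios as structured records, rather than free-text risk-register rows, forces answers to the who, how often, and how much questions a traditional register leaves open.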
MEASURE
The MEASURE function builds on the risk scenarios developed under the MAP function. Using these scenarios, an organization estimates loss event frequency by assessing both the rate of false positive alerts and the probability that employees act on match alerts without verification. An organization expresses these inputs as ranges, which define loss event frequency and loss magnitude. Loss magnitude includes primary loss – direct impacts to an organization such as operational disruption and lost revenue – and secondary loss, which includes downstream effects such as reputational damage, fines, and judgments.
An organization uses these ranges in a Monte Carlo simulation to model a loss distribution and estimate annualized loss exposure (ALE). This distribution provides a data-driven view of AI risks and informs an organization’s understanding of foreseeable harm associated with deploying AI technologies. An organization can show that harmful outcomes – such as employee-customer interactions driven by false positives – occur at a defined frequency and result in measurable loss.
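As a sketch of the simulation step described above, the snippet below samples loss event frequency and per-event loss magnitude from triangular (min, most likely, max) ranges, builds a loss distribution, and derives an ALE estimate. All numeric inputs are hypothetical placeholders, and a real analysis would use calibrated estimates and a purpose-built FAIR tool:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical calibrated ranges (illustrative only)
LEF_MIN, LEF_MODE, LEF_MAX = 5, 20, 60              # loss events per year
LM_MIN, LM_MODE, LM_MAX = 10_000, 75_000, 500_000   # $ per event

def simulate_annual_loss(iterations: int = 10_000) -> list:
    """Monte Carlo: for each simulated year, sample an event count,
    then sum a sampled loss magnitude for each event."""
    annual_losses = []
    for _ in range(iterations):
        events = round(random.triangular(LEF_MIN, LEF_MAX, LEF_MODE))
        total = sum(
            random.triangular(LM_MIN, LM_MAX, LM_MODE) for _ in range(events)
        )
        annual_losses.append(total)
    return annual_losses

losses = sorted(simulate_annual_loss())
ale = sum(losses) / len(losses)            # annualized loss exposure (mean)
p90 = losses[int(0.9 * len(losses))]       # 90th percentile of annual loss
print(f"ALE: ${ale:,.0f} | 90th percentile annual loss: ${p90:,.0f}")
```

The resulting distribution, rather than a single point estimate, is what lets an organization state that a harmful outcome occurs at a defined frequency with a measurable loss.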
MANAGE
The MANAGE function requires an organization to proactively prioritize AI risks and implement appropriate controls. The FAIR model supports this prioritization by quantifying loss exposure and modeling a distribution of potential loss events. An organization may manage these AI risks with directive, preventative, detective, corrective, and deterrent controls, among others.
- Directive controls establish guidance or requirements to enforce compliance with an organization’s AI policies (e.g., AI use policy that defines roles and accountability).
- Preventative controls stop or reduce the likelihood of actions that violate an organization’s AI policies (e.g., bias testing).
- Detective controls identify erroneous actions that have occurred (e.g., logging and monitoring system-generated match alerts).
- Corrective controls address errors and mitigate their negative consequences (e.g., customer remediation processes, incident response procedures).
- Deterrent controls discourage actions that violate AI policies (e.g., audit review, disciplinary actions).
Additionally, an organization should measure control effectiveness. This measurement evaluates how each control reduces loss exposure, including foreseeable harm. With this information, executives can determine which controls adequately reduce AI risks and prioritize investments in controls that align with business objectives and regulatory requirements.
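One simple way to express control effectiveness in FAIR terms is to compare loss exposure with and without a control and net out the control's cost. The figures below are hypothetical assumptions (e.g., ALE values taken from a prior Monte Carlo run), used only to show the arithmetic:

```python
# Hypothetical ALE values, e.g., from separate Monte Carlo runs
ale_baseline = 5_400_000           # ALE with no verification control
ale_with_verification = 1_200_000  # ALE after mandatory alert verification
control_cost = 250_000             # annual cost of operating the control

risk_reduction = ale_baseline - ale_with_verification
roi = (risk_reduction - control_cost) / control_cost

print(f"Risk reduction: ${risk_reduction:,}")  # Risk reduction: $4,200,000
print(f"Control ROI: {roi:.1%}")               # Control ROI: 1580.0%
```

Framing each control this way gives executives a common financial basis for prioritizing investments across directive, preventative, detective, corrective, and deterrent controls.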