NIST AI Risk Management Framework Overlap with SOC 2

October 14, 2024

The National Institute of Standards and Technology (NIST) released version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) on January 26, 2023. This voluntary framework provides a roadmap for organizations seeking to identify, assess, and mitigate risks associated with AI systems while promoting responsible and trustworthy AI development. At the same time, organizations have been increasingly engaged in System and Organization Controls (SOC) reporting, particularly SOC 2, which evaluates controls related to security, availability, processing integrity, confidentiality, and privacy.

Both the AI RMF and SOC reporting serve to protect organizations and their stakeholders from technological and operational risks. While the AI RMF focuses specifically on managing risks related to AI systems, the SOC 2 framework provides a robust foundation for managing system controls more generally, covering many of the same core areas that are critical for AI, such as security, privacy, and governance. Organizations already familiar with SOC 2 can leverage their existing policies and procedures—such as those for risk assessment, governance, monitoring, and system development lifecycle (SDLC)—to effectively implement the AI RMF.

Organizations that already comply with SOC 2 will find they have a strong starting point for many of the requirements set out by the AI RMF.

Overview of the NIST AI RMF

  1. Govern: This function focuses on organizational policies and practices for AI risk management. It stresses the importance of establishing legal, regulatory, and operational frameworks, along with setting clear roles and responsibilities across the AI lifecycle. Organizations should develop robust governance frameworks for managing AI risks, with policies that address risk assessment, monitoring, and the system development lifecycle (SDLC). Policies developed as part of the SOC audit process, such as those for governance, risk assessment, and monitoring, will support the AI RMF’s Govern function.
  2. Map: Mapping is about understanding the context in which AI systems operate, identifying stakeholders, and documenting risks related to the AI system. This function encourages organizations to assess the potential impacts of AI systems and understand how they fit into broader business goals. The SOC 2 framework already requires a structured approach to assessing risks, and this maps closely to the AI RMF’s guidance, which emphasizes continuous risk assessment throughout the AI lifecycle.
  3. Measure: The Measure function emphasizes evaluating the performance and trustworthiness of AI systems. Organizations are encouraged to track the system's effectiveness, accuracy, and compliance with defined standards over time. SDLC policies and processes will need to incorporate new assessments of system output. This includes developing specific metrics to determine system fit and function and evaluating whether the AI system meets trustworthiness standards such as fairness, transparency, and accountability.
  4. Manage: Managing AI risks involves taking concrete steps to address identified risks, mitigate potential harms, and adjust systems as they evolve. Organizations should develop continuous feedback loops, ensuring that AI systems are regularly evaluated for emerging risks and opportunities. As part of this, the SDLC should account for ongoing monitoring of system performance against trustworthiness benchmarks.
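To make the Measure function concrete, the sketch below shows one way an organization might track an AI system's accuracy alongside a simple fairness metric (demographic parity difference) and compare both against internally defined benchmarks. The metric choices, data, and threshold values here are illustrative assumptions, not requirements of the AI RMF.

```python
# Illustrative sketch only: metrics and thresholds are hypothetical
# examples of organization-defined trustworthiness benchmarks.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Example evaluation data (hypothetical).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = accuracy(preds, labels)                       # 6/8 = 0.75
dpd = demographic_parity_difference(preds, groups)  # |0.75 - 0.25| = 0.5

# Organization-defined benchmarks (illustrative thresholds).
BENCHMARKS = {"min_accuracy": 0.70, "max_parity_gap": 0.20}
meets_accuracy = acc >= BENCHMARKS["min_accuracy"]
meets_fairness = dpd <= BENCHMARKS["max_parity_gap"]
```

Run on a recurring schedule and logged over time, checks like these give the continuous evidence trail that both the Measure function and SOC 2 monitoring controls anticipate.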

Preparing for AI's Organizational Impact

Beyond technical changes, organizations will also need to revisit human resource and training practices as AI systems evolve. The rapid pace of AI development demands that HR policies ensure ongoing industry awareness and continuous learning. Training programs should focus not only on AI system operation but also on the ethical and regulatory aspects of AI, helping employees stay up to date with the latest advancements and risks.

In conclusion, while the NIST AI RMF introduces new frameworks for managing AI risks, organizations that already comply with SOC 2 will find alignment between the two systems. Leveraging existing SOC controls to implement the AI RMF can help organizations build trustworthy, secure, and accountable AI systems, ensuring they are better prepared for the risks associated with AI.

Larson and Company has developed a suite of services specifically to serve the needs of companies of all sizes in a wide range of industries. Reach out to us today for additional guidance on this topic.

Written with AI assistance

Sources:

AICPA Guide: (SOC 2) Reporting on an Examination of Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy (Updated as of October 15, 2022)

NIST Resources, including:

NIST AI RMF Playbook

NIST AI RMF Roadmap

NIST Perspectives