October 14, 2024
The National Institute of Standards and Technology (NIST) released version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) on January 26, 2023. The framework provides a voluntary roadmap for organizations seeking to identify, assess, and mitigate risks associated with AI systems while promoting responsible and trustworthy AI development. At the same time, organizations have been increasingly engaged in System and Organization Controls (SOC) reporting, particularly SOC 2, which evaluates controls related to security, availability, processing integrity, confidentiality, and privacy.
Both the AI RMF and SOC reporting serve to protect organizations and their stakeholders from technological and operational risks. While the AI RMF focuses specifically on managing risks related to AI systems, the SOC 2 framework provides a robust foundation for managing system controls more generally, covering many of the same core areas that are critical for AI, such as security, privacy, and governance. Organizations already familiar with SOC 2 can leverage their existing policies and procedures—such as those for risk assessment, governance, monitoring, and system development lifecycle (SDLC)—to effectively implement the AI RMF.
Organizations that already comply with SOC 2 therefore have a strong starting point for many of the AI RMF's requirements: existing controls for risk assessment, access management, change management, and monitoring map naturally onto the framework's four core functions of Govern, Map, Measure, and Manage.
Beyond technical changes, organizations will also need to revisit human resource and training practices as AI systems evolve. The rapid pace of AI development demands that HR policies ensure ongoing industry awareness and continuous learning. Training programs should focus not only on AI system operation but also on the ethical and regulatory aspects of AI, helping employees stay up to date with the latest advancements and risks.
In conclusion, while the NIST AI RMF introduces a new framework for managing AI-specific risks, organizations that already comply with SOC 2 will find substantial alignment between the two. Leveraging existing SOC 2 controls to implement the AI RMF can help organizations build trustworthy, secure, and accountable AI systems and better prepare for the risks that AI introduces.
Larson and Company has developed a suite of services specifically to serve the needs of companies of all sizes in a wide range of industries. Reach out to us today for additional guidance on this topic.
Written with AI assistance
Sources:
AICPA Guide: (SOC 2) Reporting on an Examination of Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy (updated as of October 15, 2022)
NIST Resources, including:
NIST AI RMF Playbook
NIST AI RMF Roadmap
NIST Perspectives