The Good, the Bad and the Ugly
We recently reviewed new additions to the global collection of LRRs (laws, rules and regulations), frameworks, guidelines and standards that aim to drive alignment and agreement on what it means to be trustworthy and responsible in regard to AI. The Association of Southeast Asian Nations (ASEAN) Guide on AI Governance and Ethics and the Expanded ASEAN Guide on AI Governance and Ethics - Generative AI are very interesting reads and have been added to our library. Almost every day a new artifact appears in our feeds, with more on the horizon. Most speak in similar terms and try to provide a taxonomy, vocabulary and "incentives" to allow diverse stakeholders to communicate in a common language that articulates "good."
However, there are many gaps, and even more areas have no stakeholder or named custodian. To paraphrase Douglas Adams, "The secret to being invisible is to be someone else's problem." You never see the one that gets you, so mind the gap. In the WWT spirit of no surprises, let's look at the joined-up guidance from around the planet and settle on the Good, the Bad and the Ugly…
While compliance does NOT equal security, compliance frameworks are very useful when interacting with other stakeholders because they facilitate a shared frame of reference across diverse communities. The good news: there is no shortage of compliance frameworks, LRRs, best practices and unsolicited advice from people like us ;) How does anyone make sense of it all? Which pony should I bet on? Our response: why limit yourself to one when you can leverage them all?
Leveraging compliance frameworks, LRRs and best practices
Let's provide some context in the form of ground truths and guiding principles:
- System confidence is a combination of trust and control.
- Flexibility, agility and resilience are valued traits.
- Holistic (multi-domain), trustworthy and responsible AI is our north star.
If we are going to cherry-pick from the shared global brain trust, where do we start? Unified compliance frameworks have been around for a long time. We are big fans of the concept and have seen it used effectively in many practices. When viewed through a lens of connected data, priorities and opportunities, clear "exploitable insight" emerges.
Our field-proven approach ingests, connects and synthesizes relevant artifacts that our analysts, consultants and advisors use to provide prescriptive guidance fueled by data. The heat map below is one such artifact.
- The Good: The detailed heat map above reveals consistently high alignment with NIST 800-53 controls, indicating a robust compliance posture and a mature control framework. This coverage provides strong assurance of audit readiness, boosts stakeholder confidence and streamlines integration with other compliance standards. It also underscores that the organization has solid foundations for AI governance, reducing potential operational and security gaps while enabling continuous improvement.
- The Bad: Despite this high level of overall alignment, three critical requirements lack mapped controls: Govern 3.1, Measure 2.9 and Measure 2.12. Without diversity, equity and inclusion (DEI)-focused governance (Govern 3.1), AI systems may develop blind spots, increasing the risk of bias and failing to meet the needs of diverse stakeholder groups. Similarly, missing controls for AI model explainability, validation and governance (Measure 2.9) can result in opaque, unverified and misinterpreted AI outputs, undermining trust and accountability. Missing controls for environmental, social and governance (ESG) considerations (Measure 2.12) mean organizations may be overlooking the broader societal and environmental impacts of AI. Together, these gaps can weaken responsible AI oversight, leading to unintended risks and reduced alignment with ethical and regulatory expectations.
- The Ugly: Our unique insight suggests that there are major gaps in most, if not all, generative AI-specific frameworks. NIST AI 600-1, the Generative AI Profile of the AI RMF, highlights the most novel and potentially impactful risk categories, aka the "Dirty Dozen," yet the DEI and ESG threat categories lack mapped controls. If unaddressed, this can lead to compliance shortfalls, ethical missteps and reputational damage. Failing to measure AI risks or demonstrate robust governance could trigger cascading operational issues, such as uncontrolled model drift, unchecked biases or failure to meet emerging regulatory requirements. In turn, these shortcomings may erode stakeholder trust, increase liability and undermine the progress shown elsewhere in the heat map.
By swiftly prioritizing and closing these gaps, particularly around foundational AI governance, risk measurement and ESG impacts, organizations can adopt a fit-for-purpose baseline, ensuring alignment with the NIST AI Risk Management Framework and safeguarding against evolving AI risks.
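To make the gap analysis above more concrete, here is a minimal sketch of how a crosswalk check could flag unmapped requirements. The mapping table and implemented-control list below are illustrative placeholders invented for the example, not the official NIST crosswalk or any organization's real data:

```python
# Minimal gap-analysis sketch: crosswalk NIST AI RMF subcategories against the
# NIST 800-53 controls an organization has implemented, then flag any
# requirement with no mapped (or no implemented) control.

# Hypothetical crosswalk: AI RMF subcategory -> candidate 800-53 controls
CROSSWALK = {
    "GOVERN 1.1": ["PM-1", "PL-1"],
    "GOVERN 3.1": [],            # DEI-focused governance: nothing mapped
    "MEASURE 2.5": ["CA-2", "SA-11"],
    "MEASURE 2.9": [],           # explainability/validation: nothing mapped
    "MEASURE 2.12": [],          # ESG considerations: nothing mapped
}

# Controls the organization has implemented (e.g., exported from a GRC tool)
IMPLEMENTED = {"PM-1", "PL-1", "CA-2", "SA-11"}

def find_gaps(crosswalk, implemented):
    """Return requirements whose mapped controls are missing or not implemented."""
    gaps = {}
    for requirement, controls in crosswalk.items():
        covered = [c for c in controls if c in implemented]
        if not covered:
            gaps[requirement] = controls  # empty list means no mapping at all
    return gaps

if __name__ == "__main__":
    for req, controls in sorted(find_gaps(CROSSWALK, IMPLEMENTED).items()):
        reason = "no controls mapped" if not controls else f"mapped but unimplemented: {controls}"
        print(f"{req}: {reason}")
```

The same coverage logic can be extended to score each function or category, which is the kind of rollup that feeds a heat map like the one discussed above.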
A way forward
Our initial read suggests ASEAN members are starting from a great place and have laudable objectives that can be accelerated by adopting our approach. However, it takes a (global) village to make the vision real. Don't go it alone; leverage our curated threat catalogs when discussing use cases with stakeholders. Understand which control affinities work and support the vision and mission. Be prescriptive but flexible, offering your stakeholder community "facilitated choice" while making it easy to do the right thing and hard to do the wrong thing, because AI Matters…