Recent research has revealed that whilst almost 50% of companies have an ‘AI strategy’, only 13% align that strategy with a formal AI governance framework.
This same study, published by UNESCO and the Thomson Reuters Foundation, found that a mere 12% have policies in place to ensure human oversight of AI processes.
This is alarming. But it’s not surprising. When you look at the rate at which AI is evolving – and the excitement around the outcomes it can deliver across various business areas – it’s understandable that organisations have embraced innovation first, and governance second.
This approach is not one that we would advocate. Aligning with governance frameworks, and embedding practical AI guardrails, should not – cannot – be an afterthought.
ℹ️ AI governance should be seen as a core organisational capability, not just a compliance exercise.
So, today we’ll:
- Take a high-level look at some of the key AI governance frameworks
- Explore the AI guardrails and processes you can put in place
- Understand the impact of AI governance on long-term success and adoption
And then we’ll ask you: Are your current AI policies and approaches to risk management enough?
Are you seeking specialist support to embed AI sustainably in your organisation? Explore our range of dedicated AI solutions – each designed to enhance adoption and accelerate value.
AI governance vs AI guardrails: What’s the difference?
AI governance frameworks provide a structure for how AI should be managed across an organisation, with clear standards and policies.
Examples include ISO/IEC 42001, which was one of the first international standards specifically designed for the responsible and ethical application of AI.
There’s also the EU AI Act, a risk-based framework that classifies AI usage along a spectrum from unacceptable risk down to limited or minimal risk. Impacted organisations in the EU must legally comply with this legislation, which came into force in 2024; core obligations are expected to apply in summer 2026.
AI guardrails are the tangible actions and processes that you put in place to comply with your governance framework(s).
💡 AI Guardrails Use Case
In our work with a long-standing, international client, we’ve embedded clear AI guardrails to enable them to use Rovo, Atlassian’s flagship AI solution, safely, sustainably and in line with the organisation’s governance frameworks.
These guardrails include:
- A Rovo solution where all transcripts more than 30 days old are deleted, in line with company data retention policies (see the sketch after this list).
- A mandated ‘human-in-the-loop’ principle, ensuring that any decisions or actions made by Rovo are reviewed by a person.
- Improved Confluence site and space permission structures, ensuring that Rovo can only do and see what the user can do and see, to mitigate security issues.
- Work with teams to help them understand what Rovo is good at (and where it has limitations), so that they use the right tool, and use it securely.
- A data quality clean-up, ensuring that the input data Rovo uses produces results of the highest possible quality.
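To make that first guardrail concrete, here’s a minimal sketch of how a 30-day transcript retention rule could be automated against the Confluence Cloud REST API. The site URL, credentials and the ‘rovo-transcript’ label are all our assumptions for illustration – how transcripts are actually stored will depend on your Rovo configuration.

```python
"""Illustrative retention guardrail: delete Confluence pages that hold
Rovo transcripts once they are more than 30 days old.

Assumptions (ours, not from the client solution): transcripts live as
Confluence pages carrying a 'rovo-transcript' label, and a service
account API token with delete rights is available. The endpoints are
the standard Confluence Cloud REST API."""
import requests

BASE_URL = "https://your-site.atlassian.net/wiki"  # hypothetical site
AUTH = ("retention-bot@example.com", "api-token")  # hypothetical credentials

# CQL: pages with the transcript label, created more than 30 days ago
CQL = 'type = page AND label = "rovo-transcript" AND created < now("-30d")'

resp = requests.get(
    f"{BASE_URL}/rest/api/content/search",
    params={"cql": CQL, "limit": 50},
    auth=AUTH,
)
resp.raise_for_status()

for page in resp.json()["results"]:
    # Soft-delete each expired transcript (moves the page to the trash)
    requests.delete(
        f"{BASE_URL}/rest/api/content/{page['id']}", auth=AUTH
    ).raise_for_status()
    print(f"Deleted expired transcript {page['id']}: {page['title']}")
```

A job like this would run on a schedule and, in practice, handle pagination; the point is that the retention policy becomes an enforced process rather than a written intention.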
Are you facing barriers to Rovo or AI tooling adoption in your organisation? Our Rovo AI Accelerator solution may be just what you need.
Why do AI guardrails and compliance with governance frameworks matter?
Governance frameworks like ISO 42001 matter because they set a universal standard that organisations need to meet – and which instils trust in employees and customers alike.
Without these kinds of frameworks and compliance principles, you face the following challenges:
- Lack of accountability
- Lack of due process
- Shaky foundations for growth
- Limited adoption opportunities
- Potential for unethical or irresponsible use
- Increased risk of delayed decisions and late-stage interventions
- Inconsistent treatment of AI use cases
- Difficulty responding to audits
At the organisational level, these issues may cause regulatory challenges and internal compliance problems, and erode that trust we mentioned above. Combined, these factors can all slow down innovation.
Why do so few organisations have them?
Let’s go back to that research.
Less than half of companies report having an AI strategy – and of those that do, 76% could not demonstrate that concrete policies were in place. At a governance level, only 13% align AI strategy with governance frameworks (with over half citing the EU AI Act as their chosen framework).
Meanwhile, when we look at support for teams on the ground, only 12% of companies offer structured training on AI tooling.
Why are these numbers so low?
As we alluded to at the beginning of this post, some organisations may equate laying the foundations of AI strategy and governance with a delay in innovation. By focusing on speed, teams may relegate policies and due process to the ‘to do’ pile. (Which, in our experience, rarely get done!)
We’ve also seen many organisations mistakenly believe that existing controls for data protection or IT are sufficient for AI. In practice, these controls often operate in silos and do not address the unique challenges posed by AI Agents operating in cross-functional value streams.
Our AI accelerators help you prioritise the right use cases, connect AI to real workflows, and scale solutions that deliver consistent, provable ROI.
What AI guardrails could you put in place?
We already shared some real-world examples of specific guardrails we’ve implemented for a client using Atlassian’s Rovo AI solution. Here are some other processes and principles that we’d recommend (depending on your use cases):
- Data minimisation policies: In line with your wider data protection processes, consider how you can minimise the data used by your AI solutions. If we look at Atlassian Rovo, for example, this could include adjusting Confluence permissions to ensure Rovo can only access specific content (see the sketch below).
- Data mapping: As part of your existing data protection compliance efforts, you should have completed data-mapping exercises to understand how and where you process and store data. Now you need to apply an AI-lens to this, clearly mapping out how your data will flow into your AI tooling, where it will be stored, and how it will be used.
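As a starting point for the minimisation guardrail above, here’s a minimal sketch of a permissions audit using the Confluence Cloud REST API: it flags pages in a sensitive space that carry no read restrictions, so you can review what a permissions-respecting tool like Rovo could surface. The space key, site URL and credentials are placeholders.

```python
"""Illustrative data-minimisation audit: flag pages in a sensitive
Confluence space that carry no read restrictions.

Assumptions: 'HR' is a placeholder space key, credentials are
placeholders, and the response parsing follows the documented shape of
the Confluence Cloud 'restriction/byOperation' endpoint."""
import requests

BASE_URL = "https://your-site.atlassian.net/wiki"  # hypothetical site
AUTH = ("auditor@example.com", "api-token")        # hypothetical credentials
SPACE_KEY = "HR"                                   # hypothetical sensitive space

pages = requests.get(
    f"{BASE_URL}/rest/api/content",
    params={"spaceKey": SPACE_KEY, "type": "page", "limit": 50},
    auth=AUTH,
).json()["results"]

for page in pages:
    # Fetch the page's restrictions, grouped by operation (read/update)
    restrictions = requests.get(
        f"{BASE_URL}/rest/api/content/{page['id']}/restriction/byOperation",
        auth=AUTH,
    ).json()
    read = restrictions.get("read", {}).get("restrictions", {})
    restricted = any(
        read.get(kind, {}).get("results") for kind in ("user", "group")
    )
    if not restricted:
        # No read restrictions: the page is as visible as the space allows
        print(f"Review visibility of page {page['id']}: {page['title']}")
```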
💡 AI governance documentation tip!
Centralise all your AI governance and guardrail documentation, from policies to process mapping and risk assessments, on a platform like Confluence.
If you use a tool like the Workflows for Confluence Marketplace app (developed by our colleagues at AppFox), you can automate user access and permissions, build automated publishing and approval workflows to ensure policy documents stay up to date, and ensure you maintain a robust audit log of changes and metadata.
- Risk assessments: Identify where the potential risks (and opportunities) lie in your AI toolset. This could be around data protection, information security, quality management, bias, discrimination, and more. You should then clearly document how you will respond to and mitigate risks (see the sketch below).
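To make ‘clearly document’ concrete, here’s a minimal, hypothetical risk-register entry expressed as structured data. The fields and values are illustrative rather than a prescribed schema – a Confluence table or a dedicated GRC tool serves the same purpose.

```python
# A minimal, hypothetical AI risk-register entry; fields are illustrative
RISK_REGISTER = [
    {
        "risk": "Rovo surfaces PII from unrestricted Confluence pages",
        "category": "data protection",
        "likelihood": "medium",
        "impact": "high",
        "mitigation": "Automated PII scans plus tightened page restrictions",
        "owner": "AIMS Manager",
        "next_review": "2026-01-01",
    },
]

for entry in RISK_REGISTER:
    # Surface each risk's impact and who is accountable for it
    print(f"[{entry['impact'].upper()}] {entry['risk']} -> {entry['owner']}")
```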
How confident are your team in building and deploying AI Agents? From Atlassian Rovo to AWS Strands, we’ve developed custom Agents for our clients that make a real impact on efficiency and speed. Explore our AI Agent Use Case library today!
- PII detection and redaction: Let’s say you’re using Rovo in Confluence again. Do you know where PII and sensitive data may be hiding? You need a way to find this, whichever platforms you’re using, as you must have clear processes and policies to protect sensitive data in line with wider regulations and governance frameworks (see the tip and sketch below).
💡 Sensitive data and AI guardrails tip!
Another Marketplace app recommendation from us. If you need to know where PII may be stored in your Confluence pages, it’s worth trying the Compliance for Confluence app (again, created by our development arm, AppFox). Compliance for Confluence can detect sensitive data in Confluence, and automatically redact or remove it.
Running a regular automated scan for sensitive data is a prime example of a practical AI guardrail, and strengthens your compliance with both AI and data protection governance frameworks.
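To give a sense of what such a scan does under the hood, here’s a deliberately simple, hand-rolled sketch using regular expressions. The patterns are illustrative and far from exhaustive – purpose-built tooling like Compliance for Confluence handles detection and redaction far more robustly.

```python
"""Illustrative PII scan: a simplified sketch of the kind of automated
check described above. Patterns and the sample page body are
placeholders; real detection needs broader patterns plus context."""
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance
}

def scan_page(title: str, body: str) -> list[str]:
    """Return a list of possible PII findings for one page body."""
    findings = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(body):
            findings.append(f"{title}: possible {kind} -> {match.group()!r}")
    return findings

# Placeholder text standing in for a fetched Confluence page body
sample = "Contact Jane on jane.doe@example.com or 020 7946 0958."
for finding in scan_page("Onboarding notes", sample):
    print(finding)
```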
- Human review for high-stakes decisions: As we mentioned in our client use case, we’ve embedded a human-in-the-loop for all agentic AI outcomes, to ensure there’s still human oversight in every process (see the sketch after this list).
- Clear accountability for AI outcomes: Your AI policies and organisation charts should clearly identify roles and responsibilities around the ethical use of AI. Just as organisations may need to appoint a Data Protection Officer to comply with GDPR or the UK DPA, you’ll need to assign an AIMS Manager, for example, to align with the ISO/IEC 42001 framework.
- Third-party vendor risk assessments: Any AI solution provider should have a Trust Center where you can find detailed information about the models they use, data residency, information security, sub-processors, and more. You’ll need to be clear on how solutions can adhere to your internal policies, and ensure that team members don’t install any AI tooling that doesn’t meet these standards.
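To illustrate the human-in-the-loop guardrail from the list above, here’s a minimal, tool-agnostic sketch of an approval gate: agent-proposed actions are queued, and nothing executes without explicit sign-off. The Action type and console prompt are placeholders for whatever review channel you actually use (a Jira approval workflow, for instance).

```python
"""Illustrative human-in-the-loop gate: every agent-proposed action is
held for explicit human approval before its side effect runs."""
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str             # human-readable summary for the reviewer
    execute: Callable[[], None]  # the side effect the agent wants to run

def review_and_run(proposed: list[Action]) -> None:
    """Hold each proposed action until a person explicitly approves it."""
    for action in proposed:
        answer = input(f"Approve '{action.description}'? [y/N] ").strip().lower()
        if answer == "y":
            action.execute()
        else:
            print(f"Rejected: {action.description}")

# Example: an agent proposes closing a ticket; nothing runs unapproved
queue = [Action("Close ticket OPS-123 as resolved", lambda: print("Ticket closed"))]
review_and_run(queue)
```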
Are your AI guardrails up to scratch?
Sometimes finding the time to step off the hamster wheel to review your processes, policies and more can be challenging in itself.
So, bring in outside eyes. A fresh perspective from AI specialists can help you:
- Ensure your AI strategy is aligned with best practice and governance principles
- Review or advise on new guardrails to actively protect your data in line with your governance frameworks
- Understand how responsible, well-governed AI models lead to stronger, long-term outcomes
Talk to our team here at AC. We’re an independent, boutique AI, Agile and DevOps Consultancy, with 20+ years’ experience across all stages of the software lifecycle and proven, hands-on Agentic AI expertise.