AI safety in New Zealand is no longer an abstract policy debate. It is now a day-to-day operating problem.
The key 2026 pattern is simple. AI use is already widespread across New Zealand workplaces, but safety systems are not keeping pace with the speed of adoption.
The safest organisations will not be the ones that avoid AI. They will be the ones that pair fast adoption with clear rules, practical training, visible review steps, and worker trust.
AI safety statistics in New Zealand: the headline numbers
- 81% of New Zealanders believe AI regulation is required. (KPMG NZ, 2025)
- 89% want laws and action to combat AI-generated misinformation. (KPMG NZ, 2025)
- Only 23% believe current safeguards are sufficient to make AI use safe. (KPMG NZ, 2025)
- Only 44% believe the benefits of AI outweigh the risks. (KPMG NZ, 2025)
- Only 24% have undertaken AI-related training or education. (KPMG NZ, 2025)
- Only 36% believe they have the skills to use AI appropriately. (KPMG NZ, 2025)
- 97% of workers have heard of AI, but only 34% can clearly explain what it is. (MBIE citing Verian, 2024)
- 43% of non-users cite lack of expertise as their main reason for not adopting AI. (MBIE citing Datacom, 2024)
- 84% of New Zealand knowledge workers already use generative AI at work. (Microsoft NZ, 2024)
- 81% of NZ AI users are bringing their own AI tools to work. (Microsoft NZ, 2024)
- 74% of NZ leaders worry their organisation lacks a plan and vision to implement AI. (Microsoft NZ, 2024)
- 91% of Kiwi workers use generative AI to some degree, 56% use it regularly or almost every day, and 26% use it every day. (Robert Half NZ, 2025)
- 93% are transparent with employers about their AI use, and 87% say AI skills are necessary for career success. (Robert Half NZ, 2025)
- 91% of businesses report efficiency improvements from AI, 77% report lower operating costs, and 50% cite positive financial impacts. (AI Forum NZ, 2025)
- 75% say AI setup costs are under $5,000, lowering the barrier to rapid adoption. (AI Forum NZ, 2025)
- 55% of NZ employers say AI has already increased workforce productivity. (Randstad NZ, 2026)
- 60% of employers think AI will affect a high proportion of work tasks, but only 48% of talent agrees. (Randstad NZ, 2026)
- 59% of NZ talent believe workplace AI will mainly benefit companies, not them. (Randstad NZ, 2026)
1. Safety pressure is rising because usage is already mainstream
The first safety lesson is that widespread AI use is already here.
- 84% of New Zealand knowledge workers already use generative AI at work.
- 81% of NZ AI users are bringing their own AI tools to work.
- 91% of Kiwi workers use generative AI to some degree.
- 56% use it regularly or almost every day, and 26% use it every day.
That means AI safety is not a future-readiness topic. It is a live workplace reality. Policy, review, and training now have to catch up with behaviour that is already happening.
Soundbite
AI safety gets harder when 84% are already using AI and 81% are bringing their own tools.
The real risk is not late adoption. It is unmanaged adoption.
2. New Zealanders want stronger rules, not vague reassurance
The public numbers are clear. People do not want safety to depend on good intentions alone.
- 81% believe AI regulation is required.
- 89% want laws and action to combat AI-generated misinformation.
- Only 23% believe current safeguards are sufficient to make AI use safe.
- Only 44% believe AI benefits outweigh AI risks.
That is a strong signal that AI safety in New Zealand needs visible guardrails. People want to know who is responsible, how outputs are checked, and what happens when systems go wrong.
Soundbite
81% want regulation, 89% want misinformation laws, and only 23% trust current safeguards.
In New Zealand, safer AI will need governance people can actually see, not just promises in the background.
3. Literacy and training are still too weak for safe scaling
Safety depends on more than policies. It also depends on whether people understand what they are using.
- 97% of workers have heard of AI, but only 34% can clearly explain what it is.
- 43% of non-users cite lack of expertise as their main adoption barrier.
- Only 24% have undertaken AI-related training or education.
- Only 36% believe they have the skills to use AI appropriately.
- 87% say AI skills are necessary for career success.
Those numbers point to a safety gap hiding inside a capability gap. Awareness is high, but practical judgment, verification habits, and role-specific skill still lag behind.
4. Safety is also a leadership and operating-model problem
A lot of AI risk comes from weak structure rather than dramatic failure.
- 74% of NZ leaders worry their organisation lacks a plan and vision to implement AI.
- 93% of workers say they are transparent with employers about using generative AI.
- 60% of employers think AI will affect a high proportion of work tasks, but only 48% of talent agrees.
- 59% of NZ talent believe workplace AI will mainly benefit companies, not them.
Workers are often not hiding AI use. The bigger issue is whether leadership has built a credible operating model around review, acceptable use, quality control, and worker buy-in.
5. The gains are real, which is exactly why safety matters more
The business case for AI is strong enough that adoption will keep accelerating.
- 91% of businesses report efficiency improvements from AI.
- 77% report lower operating costs.
- 50% cite positive financial impacts.
- 75% say setup costs are under $5,000.
- 55% of NZ employers say AI has already increased workforce productivity.
This is what makes AI safety urgent. The incentives to deploy AI are strong, the costs are falling, and the upside is visible. Without better guardrails, the pace of adoption will keep outrunning the systems designed to keep it safe and trustworthy.
Soundbite
91% see efficiency gains and 77% see lower costs, so safer AI is now a scaling issue, not a side issue.
When the upside is this visible, safety has to be built into the rollout rather than added later.
What these NZ AI safety statistics really mean
The clearest reading of the numbers is this:
- AI use is already widespread across New Zealand work.
- Public demand for stronger guardrails is already high.
- Current safeguard confidence is weak.
- Training and literacy are still not strong enough for safe scale.
- The winners will be organisations that treat safety as part of the operating model, not as legal cleanup after adoption.
For most NZ organisations, the safest next step is not to slow everything down. It is to make AI use more explicit, more reviewable, and more teachable.
- Define which AI use cases are low, medium, and high risk inside your organisation.
- Set simple review rules for customer-facing, legal, financial, and sensitive outputs.
- Train teams by role so safe use is practical instead of abstract.
- Normalise transparency so people can say when and how AI was used.
- Measure success using both efficiency gains and trust signals, not productivity alone.
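For teams that want to make the first two steps concrete, the risk tiers and review rules above can be sketched as a simple lookup. This is an illustrative sketch only: the use cases, tier names, and review steps below are hypothetical examples, not a standard taxonomy, and any real register should reflect your own organisation's risk assessment.

```python
# Illustrative AI use-case risk register (hypothetical examples).
# Tiers: low = spot checks, medium = peer review,
# high = sign-off by a named approver before the output leaves the organisation.

RISK_REGISTER = {
    "internal_brainstorming": "low",
    "drafting_internal_docs": "medium",
    "customer_facing_content": "high",
    "legal_or_financial_output": "high",
}

REVIEW_RULES = {
    "low": "spot-check",
    "medium": "peer-review",
    "high": "named-approver",
}

def review_step(use_case: str) -> str:
    """Return the review step for a use case.

    Unknown or unlisted use cases default to high risk, so new AI uses
    get a named approver until someone has explicitly classified them.
    """
    tier = RISK_REGISTER.get(use_case, "high")
    return REVIEW_RULES[tier]

if __name__ == "__main__":
    for case in ("internal_brainstorming", "customer_facing_content", "new_unlisted_tool"):
        print(f"{case} -> {review_step(case)}")
```

The useful design choice here is the default: anything not yet classified is treated as high risk, which mirrors the article's point that unmanaged adoption, not adoption itself, is the real hazard.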
In 2026, AI safety in New Zealand is not mainly about stopping people from using AI. It is about helping them use it in ways that are reliable enough to scale.
Frequently asked questions
What do AI safety statistics measure?
They measure how prepared people and organisations are to use AI without creating unnecessary harm: confidence in safeguards, demand for regulation, literacy, training, transparency, and the gap between rapid adoption and safe operating practice.
Are New Zealanders confident current AI safeguards are strong enough?
Not really. KPMG found only 23% of New Zealanders believe current safeguards are sufficient to make AI use safe, while 81% believe AI regulation is required.
Why is AI safety now a workplace issue in New Zealand?
Because workplace use is already mainstream. Microsoft reported 84% of New Zealand knowledge workers use generative AI at work, and 81% of AI users are bringing their own tools to work, which means governance has to catch up with real behaviour.
What is the biggest AI safety gap in NZ right now?
The clearest gap is the combination of capability and guardrails. Awareness is high, but MBIE says only 34% of workers can clearly explain what AI is, KPMG found only 24% have had AI-related training, and only 36% believe they have the skills to use AI appropriately.
What should NZ leaders do with these safety statistics?
Treat them as an operating brief: set practical AI rules, train teams by role, define review and approval steps for high-risk work, make transparency normal, and show employees where AI improves their own work rather than just company efficiency.
Sources
Every statistic on this page is grounded in a public source so you can inspect the original reporting yourself.
- KPMG NZ — Trust, attitudes and use of artificial intelligence
- MBIE — Addressing barriers to AI uptake in New Zealand
- Microsoft NZ — AI at work is here. Now comes the hard part.
- Robert Half NZ — New Zealand workers embrace Gen AI and see AI skills as imperative to career success
- AI Forum NZ — AI in Action: Key Findings from New Zealand’s Third AI Productivity Report
- Randstad NZ — The AI strategic risk: how New Zealand leaders can scaffold AI-augmented roles for future productivity
Need AI systems that are useful without becoming risky or messy?
The NZ numbers point the same way: fast adoption only holds up when teams have clear rules, practical training, and review habits that make AI outputs safe to trust.
OpenClaws NZ helps New Zealand businesses turn fast-moving AI adoption into practical systems with clearer rules, safer workflows, and training that people can actually use.