NZBusiness Magazine

AI · Editors Pick · Technology

Taking the ‘zero trust’ approach to AI

Glenn Baker
September 11, 2023

Artificial intelligence deserves the same ‘zero trust’ approach we take with cybersecurity, says Matthew Evetts, so constantly check and verify AI systems to ensure the best outcomes.

When customers ask me what the best way to secure their organisations is, my response is just two words: “zero trust”.

This is the increasingly predominant approach to improving cybersecurity maturity that rests on the premise that you can’t trust anything or anyone connected to or accessing your network, applications and devices. Instead, you basically assume your computers are already compromised, that everyone is a risk. You verify everything, all the time.

That’s a shift away from the “trust but verify” approach that long dominated cybersecurity – an approach that assumed that once you were logged in, you were trusted. That approach has become unsustainable in a hyper-connected world of increasingly sophisticated and frequent cyber-attacks.

Zero trust implementations of cybersecurity are still evolving and every day we help our customers employ zero trust principles for better cyber resilience. But we have also quite suddenly found ourselves in the era of generative AI.

 

AI and its risks

Services based on large language models (LLMs) from the likes of OpenAI, Microsoft, AWS, Google and Meta are already powering customer service chatbots, writing computer code, and summarising contracts and legal documents.

But our trust model for AI is incredibly immature. Even OpenAI, creator of the generative pre-trained transformer (GPT) technology that underpins ChatGPT, the most rapidly adopted technology in history, can't fully explain the answers it comes up with.

We’ve been told to expect generative AI systems to make and repeat mistakes and occasionally ‘hallucinate’ or generate false information. That doesn’t mean we shouldn’t use these systems. They offer the potential to boost productivity and power compelling new products and services, but we need to come up with methods to ensure we can trust what they produce and protect how we interact with them.

The field of AI needs to go through the evolution cybersecurity did to get to zero trust, but on an accelerated timeline. If the zero trust triangle in cybersecurity rests on verifying devices, users and applications (all actions and actors), then for AI systems it rests on verifying input data, outputs, and the users and machines who access those outputs.
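That three-point triangle can be sketched in code. The following is purely illustrative — every function and marker below is a hypothetical stand-in, not any vendor's real API — but it shows the zero trust idea of checking user, input and output on every single request rather than trusting anything by default:

```python
# Illustrative sketch of a zero-trust gate around an AI system.
# All names and checks here are hypothetical placeholders.

def verify_user(user_role: str) -> bool:
    """Only roles explicitly granted access may read outputs."""
    return user_role in {"analyst", "manager"}

def verify_input(prompt: str) -> bool:
    """Reject inputs carrying markers of sensitive data."""
    blocked = ("confidential", "password", "ird number")
    return not any(marker in prompt.lower() for marker in blocked)

def verify_output(answer: str) -> bool:
    """Reject outputs that would leak blocked markers to the user."""
    return "confidential" not in answer.lower()

def ask_ai(prompt: str, user_role: str) -> str:
    """Every request is checked at all three points of the triangle."""
    if not verify_user(user_role):
        return "DENIED: user not authorised"
    if not verify_input(prompt):
        return "DENIED: input failed data check"
    answer = f"(model answer to: {prompt})"  # stand-in for a real LLM call
    if not verify_output(answer):
        return "DENIED: output failed check"
    return answer
```

Nothing is trusted on the basis of an earlier check: the gate runs in full for every call, which is exactly the shift from "trust but verify" to "never trust, always verify".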

The best AI minds in the world have developed these incredible LLMs and transformer technologies. But AI is fundamentally still about data stewardship. It’s about being very clear that you understand the data you are feeding into an AI system and how appropriate its application is for the intended use case. Determining who should have access to that data and to the outputs being generated from it is also an essential consideration.

There’s a fairly good understanding in the business community that feeding proprietary and sensitive data into AI systems like ChatGPT is a bad idea – it may be shared with the underlying model, potentially exposing sensitive data in the results served up to other users.

But even when your data is ‘ring-fenced’ there are still a host of questions you need to ask yourself about what data AI systems should draw on and when. What data will be relevant to the intended outcome? Do we have permission to use the data? Are there likely to be biases in there? Could the data be unintentionally exposed?

We need to be constantly verifying and checking the data fed into AI systems. On the other side we should also be constantly verifying and checking the outputs served up by AI. This can’t be a one-off – outputs need to be monitored over time. The AI system in the middle may, essentially, be a black box to many organisations, particularly those using systems from third party vendors like OpenAI, Microsoft or AWS.
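Because the model in the middle is a black box, the practical lever is ongoing monitoring of what comes out of it. As a minimal sketch (the class and thresholds below are assumptions for illustration, not a real product's tooling), an organisation might track the rate at which outputs fail its own checks over a rolling window and flag the system for human review when that rate climbs:

```python
from collections import deque

class OutputMonitor:
    """Illustrative only: keep a rolling window of AI output check
    results and flag the system when the failure rate rises, so that
    verification is continuous rather than a one-off exercise."""

    def __init__(self, window: int = 100, threshold: float = 0.2):
        self.results = deque(maxlen=window)  # True = output passed checks
        self.threshold = threshold           # tolerable failure rate

    def record(self, passed_check: bool) -> None:
        self.results.append(passed_check)

    def needs_review(self) -> bool:
        if not self.results:
            return False
        fail_rate = self.results.count(False) / len(self.results)
        return fail_rate > self.threshold
```

The design point is the rolling window: a system that behaved well at deployment can drift as its inputs change, so the question is never "did it pass once?" but "is it still passing now?".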

But we can take a zero trust approach to data stewardship and to verifying exactly who has access to the data at every step of the process. That requires human oversight, though tools are increasingly available to help automate parts of the process.

 

Self-preserving distrust

“Is a zero-trust AI mentality always necessary?” IEEE Fellows Phil Laplante and Jeffrey Voas asked in an editorial for Computer last year that preceded the arrival of GPT-4.

“No. But we are suggesting that you should extend this instinctual, self-preserving distrust to AI. And we suggest that any critical AI-based product or service should be continuously questioned and evaluated; however, we acknowledge that there will be an overhead cost in doing so.”

At Datacom, we are helping our customers make the most of AI while ensuring the process of protecting their systems and data is manageable and affordable. We’ve kicked off a series of proof-of-concept projects in the AI space. One involves us exploring uses of Microsoft’s new AI-powered Copilot product, which will soon be in the hands of millions of users.

In our cybersecurity practice, we are looking at it through a risk lens – examining data access and potential leakage, making sure that verification processes can constantly take place to ensure AI tools serve up trustworthy results to the right people. This work is giving us insights into the opportunities and the challenges products like Copilot represent.

AI is also changing the face of cybersecurity itself. We’ve already seen extensive use of AI to automate network, device and data security. Generative AI will allow for faster and more intuitive security assessments and help tackle the cyber skills shortage.

On the flipside, AI is being adopted by the threat actors trying to exploit our customers’ networks, devices, data, and the identities of their employees. There are already ChatGPT equivalents for hackers. For us to keep up, our use of AI has to augment our response, making us faster and more effective.

The onus is on us, our customers and the vendors we partner with to harness AI to stay ahead of the threat actors who are employing it to their own ends.

The zero trust approach to cybersecurity is helping us do that. It will serve us well in the world of AI as well.

 

Matthew Evetts is director of connectivity and security at Datacom.
