When your new hire brings their own AI: The next governance frontier
As employees begin bringing their own AI tools – and even personal “digital twins” – into the workplace, boards face a new governance frontier. Institute of Directors leaders Trish Oakley and Kirsten (KP) Patterson explore what BYO-AI means for productivity, risk, culture and the future of work.
Who doesn’t enjoy interviewing? We see it as a window into someone’s world – a chance to glimpse their strengths, values, and how they show up day to day. It showcases attitudes to learning and personal growth.
But in the age of AI, what does that interview window look like? Consider a recent example we heard about, sparked by the classic “is there anything you would like to ask us?” question.
This is the reply from a young candidate for a non-tech, general business role: “What technology stack does your organisation use?”
The candidate had built their own ‘digital twin’ and they wanted to bring it to work. This was a personal AI assistant they had trained for years. It was their co-worker, one who helped them organise, plan, research, interpret, draft, think and make decisions.
This isn’t hypothetical. It’s real. This isn’t about AI finessing language here or fixing coding errors there. This is about deliberate capability enhancement and greater productivity.
The ‘twin’ was a trusted, valued part of the employee package being offered.
From BYO device to BYO twin
Remember the BYOD – bring your own device – days? Organisations wrestled with security around phones, tablets and laptops. Today, those devices have become normal because we adapted.
What if now we face:
- BYOA – bring your own AI
- BYOT – bring your own twin
If this is part of how our employees are going to show up each day, how do we govern that?
This isn’t just a tech story, it’s a governance story. It’s about value, productivity, risk, culture and the future of work. Here are five thought starters:
- What does this mean for productivity and value?
If an employee becomes more productive because they bring an AI twin, who gets the credit?
Does the employee deserve higher remuneration? Work fewer hours? Receive bigger bonuses? Or is the twin just an “assistant”, like Excel or Outlook?
If two employees produce the same output and one uses AI and the other doesn’t, should they be treated the same?
The Institute of Directors Four Pillars of Governance Best Practice notes that remuneration frameworks should support “performance, behaviours and outcomes aligned with organisational purpose and strategy”. But now “performance” itself is changing, and frameworks may need revisiting.
- Will we need to redefine work?
Personal AI tools already create reminders, summarise meetings, draft reports, review data and prepare presentations. Should employment models evolve to reflect AI-enhanced workflows?
Might roles split into human-AI partnerships where the twin is a “co-worker” on the payroll? We recently read an article about a bank with 100 digital employees, each complete with an email address, a login, a human manager and performance reviews.
Are we measuring output or effort?
- Are our risk and cyber controls up to date?
Naturally, your mind turns to risk and cyber security. We have already been talking about this in the context of AI:
- Could the employee expose sensitive information? (Could the agent access sensitive information?)
- Could they carry proprietary data out the door? (IP is dealt with in employment agreements, but what about digital twin-generated IP?)
- Who’s responsible if a bot makes a mistake?
When tools are outside visibility or control, your digital perimeter becomes porous.
The IoD’s Director’s Guide to AI Board Governance suggests boards should “identify, assess and manage current and emerging risks, including technology, information and cyber risks”. Organisations may need to extend oversight beyond organisational systems to include personally operated AI agents.
- What kind of culture are we building?
Young professionals want more than purpose-filled work and flexibility. They want to work smarter and, increasingly, that means working with AI.
Whether they bring their own curated workflows in the form of a digital twin or expect the organisation to provide one, culture and policy must adapt.
Perhaps your organisation could consider:
- Do our people policies consider human augmentation? How do we assess performance and design teams in this context?
- Are we rewarding outdated ways of working?
- What does professional development look like when learning is continuous, self-directed, and supported by an intelligent twin?
- Is our culture open to AI collaboration?
- Is our board AI-literate and future-focused?
Board directors don’t all need to be bleeding-edge AI experts, but they do need to understand the implications of AI for work, culture, ethics and strategy.
The IoD’s nine principles for AI board governance provide a framework:
- Take action from a strategic perspective
- Seize the opportunities
- Categorise and address the risks
- Build board capability
- Select the right board structure
- Oversee AI use and data governance
- Look after your people
- Proactively build trust
- Embrace AI as part of your governance practice
Where to from here?
This isn’t a problem to solve overnight. But it’s a conversation worth having. Suggested board meeting discussion points include:
- Do we have digital twins in our workplace and how would we know?
- Do our policies and risk frameworks account for this?
- How could we turn this into a strategic advantage?
- Are we ready to govern a hybrid workforce of people and their AI co-workers?
Final word: Will the twin be welcome?
That candidate who asked about the tech stack wasn’t being clever. They were showing us the future: a future where employees don’t just bring skills, they bring partners in how they work.
As directors and leaders, how do we govern and manage in a world where the human and digital now blend?
And the final question to ponder: Are you ready to build your own digital twin?