Leading the Way in Governing AI for People and Performance
Aaron Neilson
Published: May 12, 2026
The world of work is fundamentally changing. The integration of digital work systems and Artificial Intelligence (AI) into our daily operations is no longer a futuristic concept; it is the present reality. For those of us in Human Resources, this shift presents a new frontier, one that requires us to proactively manage the profound impact of technology on employee experience, trust, and mental health. This evolution is now being met with decisive legislative action, most recently in NSW, but the message is universal: the digital workplace demands the same management rigour as any physical site.

Strengthening the Digital Guardrails
The NSW government recently made headlines by strengthening the ‘guardrails’ around digital safety in workplaces, essentially recognising the unique psychosocial health and safety risks posed by digital work systems and algorithmic management. This regulatory push clarifies the duty of care businesses have in this rapidly evolving space.
For HR professionals, this new focus is a call to action. While the primary WHS duty lies with the business, the management of risks related to excessive surveillance, unreasonable performance tracking, and discriminatory work allocation – all explicitly covered by the proposed legislation – falls squarely within the HR domain. This legislative shift underscores that relying solely on AI for sensitive decisions, such as rostering or performance review inputs, is now a major legal and talent risk.
The Dual Challenge of AI Governance and Trust
The governance of AI is the immediate challenge. AI offers incredible efficiencies, yet the risk of unintentional bias and discrimination in hiring, promotion, and performance evaluation is high if the technology is not carefully audited. The very efficiency tools we introduce can inadvertently create new sources of stress, job insecurity, and a loss of control for our people.
As HR Leader notes, the Business Council of Australia (BCA) has expressed strong caution regarding the unprecedented access this Bill may grant to digital systems. Regardless of the final form of the legislation, the debate highlights the tension between innovation and worker protection.
Crucially, the introduction of any new technology that may significantly affect employees triggers consultation requirements under the Fair Work Act 2009 (Cth). HR must lead this consultation, ensuring workers are partners, not passive recipients, in shaping how AI is deployed, which is a key recommendation from the Australian Council of Trade Unions (ACTU).
Beyond Compliance: Solving the ‘How’ of People Risk
The legislative changes provide the ‘why’: a clear duty to manage people-related risk. The most common challenge we see across Australian industry, however, is grappling with the ‘how’.
At a recent forum hosted by The Next Group, the collective observation was stark: while most organisations understand the duty of care and legislative requirement to address psychosocial risk, many still struggle with the practical implementation. This is often rooted in issues of organisational maturity and culture. Leadership support from the board down to middle managers is vital.
As one forum participant noted, the risk is "inherently messy" because it’s intertwined with performance management, organisational change, and human behaviours like gossip. It’s a holistic challenge, requiring a sustained effort and aligning psychosocial risk management with broader HR, Safety, and wellbeing strategies to ensure consistency across the entire employee lifecycle. For HR, this means recognising that poor work design is a WHS hazard, not just a productivity issue.
Empowering Human-Centred Leadership
The burden of managing these risks falls heavily on leadership and middle management. These leaders are the fulcrum, caught between operational demands and the human complexities of their teams. Effective leadership in this context involves authentic, human-centred conversations and role-modelling supportive behaviours.
HR must equip managers to handle complex situations, such as workload challenges, conflict, and job insecurity caused by AI, without causing additional harm. This requires capability building, such as establishing mental health literacy through simple frameworks that give managers a common language and a clear path to action. The focus must shift from merely mitigating risk to actively fostering protective factors: strong relationships, respect, connection, and supportive environments. The Australian HR Institute (AHRI) report on psychosocial risks confirms this, highlighting that effective leadership not only reduces risk but builds cultures of trust, engagement, and innovation.
Integrating Technology with Integrity
Technology itself can be a powerful protective factor if implemented thoughtfully. Psychosocial risk assessment platforms and digital pulse-check tools offer consistent, data-driven insights into both risk and protective factors. These tools support early detection and consistency, allowing organisations to measure and analyse the true health of the workforce.
The key is integrity. The technology must be implemented in a way that supports the human element of work, not in a way that replaces sound managerial judgement or compromises the psychological safety of the worker. When introducing AI, the Australian Psychological Society stresses that the implementation must be psychologically informed, with safeguards that consider motivation, trust, job design, and culture.
The Path Forward
For Australian businesses, the new digital guardrails are an opportunity for HR to re-evaluate our strategic approach. This is about more than just compliance; it is about acknowledging the whole worker: the digital, the psychological, and the physical.
As HR professionals and business leaders, our work must focus on ensuring that our systems – both digital and human – are designed to support clear job roles, appropriate workloads, and a culture where people feel safe, respected, and heard. This commitment to transparent consultation, ethical governance, and human-centred leadership is how we will truly manage the algorithmic edge and foster a genuinely safe, healthy, and highly engaged workplace.