Uncertainty evokes anxiety
The possibility that agentic AI could replace public sector jobs elicits anxiety. Could leadership-facilitated open dialogue about the limits of what can be known produce a sense of containment?
In considering this, I find Humberto Maturana's thinking on the domains of human interaction helpful:
The capacity to explicitly shift our frame of thinking between the pragmatic and relational (aesthetic) aids reflection upon organisational life. Pragmatic thinking might ask ‘what is the next action?’. Aesthetic considerations ask ‘what relational pattern am I observing, and how do I contribute to it?’. Deploying both frames in parallel creates opportunities for learning and change.
Clearly, agentic AI will make itself known in both the pragmatic and aesthetic domains. However, I would guess that a good portion of staff remain unaware of what is meant by agentic AI specifically.
Agentic AI is authorised to act independently between human prompts. Deploying this locally might mean having an AI agent read and screen incoming stakeholder correspondence, check a rota for available staff appropriately placed to respond, book a meeting room, communicate the meeting details, then write and send a report following the meeting… without human authorisation between steps.
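To make the distinction concrete, the chain above can be sketched as a programme that runs every step itself. This is a minimal, hypothetical illustration, not any real agent framework: all function names, data fields, and addresses are invented placeholders. The point it demonstrates is structural, that no human approval gate sits between the steps.

```python
# Hypothetical sketch of an agentic workflow: screen correspondence,
# match staff from a rota, then take follow-on actions, with no human
# authorisation between steps. All names and data are illustrative.

def screen_correspondence(inbox):
    """Keep only items that appear to need a meeting."""
    return [msg for msg in inbox if "meeting" in msg["subject"].lower()]

def find_available_staff(rota, topic):
    """Pick the first staff member whose remit covers the topic."""
    return next(s for s in rota if topic in s["remit"])

def run_agent(inbox, rota):
    """Execute the whole chain autonomously and return the actions taken."""
    actions = []
    for msg in screen_correspondence(inbox):
        staff = find_available_staff(rota, msg["topic"])
        actions.append(f"booked room for {staff['name']}")
        actions.append(f"emailed meeting details to {msg['sender']}")
        actions.append(f"drafted follow-up report on {msg['topic']}")
    return actions

# Invented sample data standing in for real correspondence and a real rota.
inbox = [
    {"subject": "Meeting request: housing policy", "topic": "housing",
     "sender": "stakeholder@example.org"},
    {"subject": "Newsletter", "topic": "news", "sender": "list@example.org"},
]
rota = [{"name": "A. Officer", "remit": {"housing", "planning"}}]

for action in run_agent(inbox, rota):
    print(action)
```

A human-in-the-loop version of the same workflow would pause for sign-off between each `actions.append(...)`; the agentic version collapses those gates into a single uninterrupted run.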
While the private sector can deploy agentic AI more nimbly, the public sector's ethical responsibilities make early deployment more precarious.
Perhaps public sector leaders, aware of the potential cost savings (particularly in the current economic climate), take a wait-and-see approach: watching agentic AI's use cases become established in the private sector, waiting for the utility to be demonstrated and the risks and legalities to be thrashed out. Meanwhile, front-line public sector staff experience the anxiety of an increasingly uncertain future. All the while leaders, inundated with the day-to-day, do not explicitly discuss the potential coming changes (to staffing, for example) or the limits of their own understanding.
Perhaps front-line public sector staff would feel empowered by learning which kinds of tasks agentic AI can automate and what that implies, while leadership could contribute to a sense of containment by making explicit the limits of what can be known at present?
I couldn't pretend to know the answer. The question might resonate, though, and those resonances can contribute to productive systemic change in which the unspoken is made explicit.