When routine becomes a time sink
Piles of applications, emails, and documents land every day on the desks of public sector staff. An enormous amount of time is spent on recurring tasks: checking forms, cross-referencing data, answering standard enquiries. All of this is necessary, but it’s also incredibly time-consuming.
A 2023 McKinsey study found that up to 70 per cent of activities in German public agencies could be automated. This is a clear signal of just how much routine work holds the public sector back.
Europe's digital dilemma
Automation seems the obvious path forward. However, the market for high-performance AI systems and the necessary cloud infrastructure is dominated by a handful of non-European tech giants. This creates a difficult balancing act for Europe's public sector, caught between the promise of efficiency and the risk of digital dependency. One particularly critical issue is the processing of sensitive administrative data in compliance with European data protection regulations.
One way forward could be to host open-source AI models locally. Although these models are less powerful than the leading proprietary systems, they offer complete control over data and processes. Initial trials show this approach is viable in practice. At the KIPITZ hackathon, hosted by the German Federal IT Centre (ITZBund), public sector employees worked alongside development teams to create AI prototypes. The goal of building tangible, everyday solutions led to the realisation that AI can indeed support public administration when deployed in a controlled and transparent manner. Building on these findings, ZenDiS plans to work closely with KIPITZ, the Competence Centre for AI in the Public Sector, to integrate the first AI functions directly into openDesk and create a sovereign, practical alternative.
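To illustrate what local hosting can look like in practice, here is a minimal sketch in Python. It assumes an open-source model served on an agency's own infrastructure through an Ollama server; the endpoint address and the model name "llama3" are placeholders, not details of the KIPITZ prototypes or of openDesk.

```python
import requests

# Hypothetical setup: an open-source model running on local
# infrastructure, served via an Ollama instance on this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model and return its answer."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarise this application in two sentences: ..."))
```

Because the request never leaves the local network, sensitive administrative data stays under the agency's control, which is precisely the trade-off this approach makes against raw model capability.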
Taking technological risks seriously
The focus on potential efficiency gains must not overshadow the risks, however. AI models are susceptible to manipulation. If training data is deliberately corrupted, models can be systematically misled. It would be particularly alarming if such systems were tasked with making decisions about applications, grants and subsidies, or taxpayer-funded benefits.
Another problem is hallucination. Large language models (LLMs) present false information with striking confidence. The reason lies in how they work: they merely calculate the statistically most probable response, regardless of its truthfulness. If information is missing, the AI invents plausible details. Errors can occur even when all the data is available, for example, if a response is generated before an uploaded document has been fully processed by the model.
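The mechanism can be made concrete with a small sketch. Using the openly available GPT-2 model as a stand-in for a production LLM, the code below inspects the probability distribution over possible next tokens; the prompt is invented for illustration. The point is that the model simply ranks continuations by probability, with no notion of whether any of them is true.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open model as a stand-in: the network does not "know" facts,
# it only assigns probabilities to candidate next tokens.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The deadline for submitting the application is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

# The model will continue with whichever token is most probable,
# whether or not any real deadline exists.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```

A fluent, date-like continuation will appear among the top candidates even though no deadline was ever provided, which is exactly how a plausible-sounding fabrication arises.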
In citizen services, an error might be merely annoying. In finance or public planning departments, though, it could have serious legal consequences. The technology is not yet mature enough to function without human oversight, an assessment shared by Dr Dieter Bolz, in-house AI expert at ZenDiS. He emphasises that the solution lies not in abandoning the technology, but in shaping it prudently: