Blog

Integrating AI in public administration processes

23/07/25 · Digital transformation
Jenna Brinning
8 min. reading time
Can AI be integrated into administrative processes?

The backlog keeps growing and citizen affairs are getting more complex, yet there are still only 24 hours in a day. While the private sector has long embraced digital assistants, public administration continues to lag behind. Could AI be the answer?

When routine becomes a time sink

Piles of applications, emails, and documents land every day on the desks of public sector staff. An enormous amount of time is spent on recurring tasks: checking forms, cross-referencing data, answering standard enquiries. All of this is necessary, but it’s also incredibly time-consuming.

A 2023 McKinsey study found that up to 70 per cent of activities in German public agencies could potentially be automated. This is a clear signal of just how much routine work holds the public sector back.

Europe's digital dilemma

Automation seems the obvious path forward. However, the market for high-performance AI systems and the necessary cloud infrastructure is dominated by a handful of non-European tech giants. This creates a difficult balancing act for Europe's public sector, caught between the promise of efficiency and the risk of digital dependency. One particularly critical issue is the processing of sensitive administrative data in compliance with European data protection regulations.

One way forward could be to locally host open-source AI. Although these models are less powerful, they offer complete control over data and processes. Initial trials show this approach is viable in practice. At the KIPITZ hackathon, hosted by the German Federal IT Centre (ITZBund), public sector employees worked alongside development teams to create AI prototypes. The goal of building tangible, everyday solutions led to the realisation that AI can indeed support public administration when deployed in a controlled and transparent manner. Building on these findings, ZenDiS plans to work closely with the Competence Centre for AI in the Public Sector (KIPITZ) to integrate the first AI functions directly into openDesk and create a sovereign, practical alternative.

Taking technological risks seriously

The focus on potential efficiency gains must not overshadow the risks, however. AI models are susceptible to manipulation. If training data is deliberately corrupted, models can be systematically misled. It would be particularly alarming if such systems were tasked with making decisions about applications, grants and subsidies, or taxpayer-funded benefits.

Another problem is hallucination. Large language models (LLMs) present false information with remarkable confidence. The reason lies in how they work: they simply calculate the statistically most probable response, regardless of its truthfulness. If information is missing, the AI invents plausible details. Errors can even occur when all data is available, for example, if a response is generated before an uploaded document has been fully processed by the model.
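The mechanism behind hallucination can be made concrete with a deliberately tiny toy. This is not a real language model: the prompt, vocabulary, and probabilities below are invented for illustration. It only shows the core point from the paragraph above, that the most statistically probable continuation wins, whether or not it is true.

```python
def most_probable_continuation(prompt: str, probabilities: dict) -> str:
    """Pick the statistically most likely continuation (greedy decoding).

    The prompt is unused in this toy; a real model would condition on it.
    """
    return max(probabilities, key=probabilities.get)

# The model has seen many sentences like "The office opens at ...", so it
# confidently completes the familiar pattern, with no check against reality.
learned = {"8 a.m.": 0.55, "9 a.m.": 0.30, "noon": 0.15}
completion = most_probable_continuation("The council office opens at", learned)
print(completion)  # the most frequent pattern wins, true or not
```

Truthfulness never enters the calculation, which is exactly why a confident-sounding answer can still be wrong.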

In citizen services, an error might be annoying. In finance or public planning departments though, it could have serious legal consequences. The technology is not yet mature enough to function without human oversight, an assessment shared by Dr Dieter Bolz, in-house AI expert at ZenDiS. He emphasises that the solution lies not in abandoning the technology, but in shaping it prudently:

It would be irresponsible not to seize the opportunities offered by AI technology or to stay on top of the latest developments. But it would be equally irresponsible not to place people at the centre, or to remove the layer of human accountability. At ZenDiS, we want to help shape the use of AI responsibly. And that's exactly why we want to take a 'human in the loop' approach for openDesk.
Dr Dieter Bolz, ZenDiS GmbH
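The 'human in the loop' approach mentioned in the quote can be sketched as a simple control flow: the AI only drafts a decision, and a named person confirms or overrules it. All function names and fields here are illustrative assumptions, not part of openDesk.

```python
def ai_draft_decision(application: dict) -> str:
    """Stand-in for a model call; returns a suggested outcome only."""
    return "approve" if application.get("criteria_met") else "reject"

def decide(application: dict, human_review) -> dict:
    draft = ai_draft_decision(application)
    # The reviewer can overrule the draft; the final decision and the
    # accountability for it always rest with a person, never the model.
    final = human_review(application, draft)
    return {"draft": draft, "final": final, "decided_by": "human"}

outcome = decide({"criteria_met": True},
                 human_review=lambda app, draft: draft)  # reviewer concurs
print(outcome["final"])  # approve
```

The essential design choice is that the model's output is structurally incapable of becoming a final decision on its own.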

Real opportunities: Where AI can be used effectively

When used sensibly, AI offers significant potential. Areas with clearly defined, rule-based processes where people retain control are particularly suitable.

Document processing

AI could help sort documents, identify and categorise application types, or check for completeness. Initial responses to citizens’ common enquiries might be automated, so that caseworkers could devote their time to more complex issues.
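A completeness check of the kind described above can be surprisingly simple. The sketch below is a rule-based version under assumed field names; a real system would sit behind a form-processing pipeline and hand flagged cases to a caseworker.

```python
from dataclasses import dataclass, field

# Hypothetical required fields for an application form
REQUIRED_FIELDS = {"name", "address", "date_of_birth", "signature"}

@dataclass
class ReviewResult:
    complete: bool
    missing: list = field(default_factory=list)

def check_completeness(application: dict) -> ReviewResult:
    """Flag missing required fields so a caseworker sees the gaps at once."""
    missing = sorted(f for f in REQUIRED_FIELDS if not application.get(f))
    return ReviewResult(complete=not missing, missing=missing)

result = check_completeness({"name": "J. Doe", "address": "Examplestr. 1"})
print(result.missing)  # ['date_of_birth', 'signature']
```

Routine triage like this is where the time savings come from: incomplete applications are returned immediately instead of waiting in a queue.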

Citizen services

Typical enquiries such as "When is the council office open?" or "What documents do I need to renew my passport?" make up a large part of daily communications in public services. Chatbots can offer relief here. Stuttgart, for example, already uses an AI-based chatbot to field such questions. The system combines pre-formulated content with AI-generated answers, and it does so in multiple languages. In Essen, too, a bot is being tested for the public service hotline. It currently operates on a rule-based system, with AI features to follow.
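The hybrid pattern described above, curated answers first, AI fallback second, human handover last, can be sketched in a few lines. This is a generic illustration, not Stuttgart's or Essen's actual system; the FAQ entries, the matching threshold, and the `generate_answer` stand-in are all assumptions.

```python
import difflib

# Pre-formulated, editorially approved answers
FAQ = {
    "when is the council office open": "Mon-Fri, 8:00-16:00.",
    "what documents do i need to renew my passport": "Your old passport and a biometric photo.",
}

def generate_answer(question: str) -> str:
    # Stand-in for an LLM call; a real deployment would also log the
    # question so staff can review unanswered enquiries.
    return "I'll forward this to a member of staff."

def answer(question: str, threshold: float = 0.6) -> str:
    q = question.lower().strip("?! ")
    match = difflib.get_close_matches(q, FAQ, n=1, cutoff=threshold)
    if match:
        return FAQ[match[0]]          # curated, pre-formulated content
    return generate_answer(question)  # fall back to the AI model

print(answer("When is the council office open?"))
```

Putting the curated answers first keeps the most common responses fully under editorial control, while the fallback catches the long tail.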

Use of chatbots in public administration

A key advantage here is that chatbots have no opening hours; they’re available 24/7 and thus reduce wait times and follow-up queries.

Data analysis

From traffic counts and economic data to population statistics, public bodies hold vast amounts of data. AI could help identify patterns and produce informed forecasts. Where are new schools needed? How are rush-hour traffic flows evolving? A data-driven administration could act more proactively.
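A minimal example of the kind of forecast mentioned above is a least-squares trend over yearly counts. The figures are invented for illustration; real planning data would be larger and far noisier, and would call for proper statistical tooling.

```python
def linear_forecast(years: list, values: list, target_year: int) -> float:
    """Fit y = a + b*x by ordinary least squares and extrapolate."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
    a = mean_y - b * mean_x
    return a + b * target_year

# Hypothetical primary-school enrolments per year
years = [2020, 2021, 2022, 2023]
pupils = [400, 420, 440, 460]
print(round(linear_forecast(years, pupils, 2026)))  # 520
```

Even a trend line this simple answers the "where are new schools needed?" question more proactively than reacting once waiting lists appear.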

Compliance checks

Whether an application meets all prerequisites, a grant is eligible for approval, or deadlines have been met, compliance checks like these are well-suited for automation. The German Federal Office for Migration and Refugees (BAMF) already uses AI to pre-screen asylum applications, while final decisions remain in human hands. Similar approaches could be used to process planning applications, tax returns or social welfare requests.
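A pre-screening step of this kind can be sketched as rules that sort the queue without ever deciding. The rules and field names below are invented for illustration and have nothing to do with BAMF's actual system.

```python
from datetime import date

def prescreen(application: dict, deadline: date) -> dict:
    """Collect formal issues and route the case; humans decide everything."""
    issues = []
    if application["submitted"] > deadline:
        issues.append("submitted after deadline")
    if not application.get("income_verified"):
        issues.append("income not verified")
    # The pre-check only prioritises the queue; a caseworker still
    # reviews and decides every single case.
    return {"routing": "fast-track review" if not issues else "detailed review",
            "issues": issues}

result = prescreen({"submitted": date(2025, 3, 1), "income_verified": True},
                   deadline=date(2025, 3, 31))
print(result["routing"])  # fast-track review
```

The point of the pattern is the separation of concerns: automation surfaces formal issues, people make the decisions.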

A gradual rollout, not a rush to act

The move towards an AI-supported public sector is not just another IT project. Technology alone won't cut it; what's needed is a strategic approach, one that involves staff from the outset and takes legal frameworks into account.

A common mistake is to launch technology-driven projects without involving those who use the systems every day. Yet it's the acceptance of everyday users which ultimately determines success. The first step, therefore, should be dialogue: Which tasks are consuming the most resources? Where are bottlenecks in our processes? What would genuinely make our daily workload a bit easier? Feedback like this is more valuable than any external expert opinion. It helps create AI solutions that are actually useful.

Change needs trust

While new technology-focused roles will emerge, their number will fall far short of the jobs likely to disappear. Redeployment alone cannot bridge this gap. It will take targeted training and clear career prospects to help staff move into new roles. Theory alone is rarely convincing. What makes a difference are practical training courses and small pilot projects where teams can experience the benefits of AI for themselves.

AI applications in the public sector

Rather than trying to transform an entire administration all at once, it makes sense to start with concrete, clearly defined problems. Automatically sorting emails, introducing a simple bot for frequently asked questions or digitising a widely used form are all manageable use cases with real value. Success with those first smaller projects creates momentum for further initiatives.

Factoring in legal requirements from the start

Data protection, IT security and administrative regulations are not obstacles, but essential foundations for viable solutions. Factoring them in from the outset prevents setbacks further down the line. What matters most is that AI decisions remain auditable and transparent, supported by clear documentation of data processing. Data protection officers should be involved from the very beginning.

Finally, when choosing technology, it’s worth looking at open-source alternatives. Proprietary solutions often promise quick wins, but they create new dependencies. While open source takes more up-front responsibility, it offers full control over algorithms and data.

AI is no substitute for reform

For all its potential, AI cannot solve structural problems. Even with the best technology, staff shortages, outdated processes, and rigid legislation will persist. Moreover, new requirements will arise: on data quality, technical maintenance, and human oversight.

This raises a central question: who bears responsibility? If an AI system makes a wrong decision, who is liable? How can discrimination be prevented? And who should set the ethical guardrails for its use in public institutions – tech companies, the authorities, parliament, or the citizens themselves? These fundamental questions about the democratic oversight of algorithmic decision-making are yet to be fully resolved.
