Four mistakes when implementing AI in municipal management (and how to avoid them)
Every year, more city councils announce their move into artificial intelligence. The headlines sound promising: automated processes, citizen chatbots, efficient management. But a few weeks later, the teams are overwhelmed, the suppliers disappear from view, and the projects fizzle out in administrative silence.
It's not a technology problem. It's a problem of how change is managed. AI doesn't fail because of the algorithms, but because of the people, the processes, and the lack of institutional structure to support the transition.
Published October 9, 2025 · AI and digitization · Institutions

Resistance to change is not a rejection of innovation: it is a fear of losing control.
In this article we review the four most common mistakes when implementing AI in municipal management, and how to avoid them with viable and human solutions.
1. Believing that implementing AI is a technical matter
In many city councils, AI projects are launched from the ICT or Innovation department, with good intentions, but without change management. They are presented as a "necessary" modernization, without explaining the why or the wherefore. The result is predictable: confusion, fear, and internal resistance.
In public administration, teams operate under high levels of responsibility, administrative burden, and strict regulations. Even a small change in systems can affect the daily work of dozens of people. When the impact is not properly explained, the immediate effect is stress, anxiety, and mistrust.
Imagine a realistic scenario: a city council revamps its case management system to improve traceability and connect various platforms. In the second week of implementation, complaints, user errors, and even anxiety-related absences would almost certainly arise. Unions might demand the project be halted "until job security is guaranteed," and the technicians would revert to the old system out of inertia.
These types of situations are documented by the FEMP: in its reports on local innovation, it points to a lack of support and communication as the main causes of resistance to change.
AI is implemented through technology, but it is consolidated through people.
How to avoid it
- Communicate from the beginning what specific improvement the change brings: fewer repetitive tasks, more operational time.
- Involve staff representatives in pilot tests.
- Provide practical, not theoretical, training: how it helps in everyday life, not how the algorithm works.
- Celebrate visible results: reports written in less time, fewer errors in files, better traceability.
Unaccompanied implementation: abrupt change, lack of information, anxiety, and internal rejection.
Implementation with change management: clear communication, progressive training, and trust in the process.
2. Confusing digitization with intelligence
Many municipalities have made a tremendous effort to digitize processes. But digitization is not the same as automation, and automation is not the same as applying artificial intelligence. The confusion between these three levels explains a large part of the institutional failures.
As the OECD notes in its report on AI in the public sector, digitizing does not equal making intelligent: without analysis, governance, or human supervision, systems end up generating more burden than efficiency.
A common example: an AI tool for citizen services is acquired with the hope of reducing workload. The system answers basic questions correctly, but it doesn't understand administrative language or regulatory exceptions. Within a few days, citizens lose trust, and staff have to manually review each response. The workload doesn't decrease; it increases.
Digitizing is not automating. Automating is not making things smart.
How to avoid it
- Define processes where AI makes sense: drafting, classifying documents, summarizing minutes.
- Create an inventory of data and internal flows before bidding.
- Define clearly what time savings or quality improvements are being sought.
- Start with limited pilot projects, with measurable objectives and technical supervision (see the sketch after this list for what tracking those objectives can look like).
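As an illustration of what "measurable objectives and technical supervision" can mean in practice, the sketch below shows a hypothetical pilot for classifying incoming documents in which every suggestion is confirmed by a member of staff and two simple indicators are tracked: agreement with the human decision and time saved per file. All names, categories, and figures are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of how a limited AI pilot could be monitored.
# All names, categories, and figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PilotRecord:
    document_id: str
    ai_suggestion: str      # category proposed by the tool
    human_decision: str     # category confirmed by the member of staff
    minutes_manual: float   # estimated time for fully manual handling
    minutes_with_ai: float  # time actually spent with the tool plus review

def pilot_report(records: list[PilotRecord]) -> dict:
    """Two simple indicators: agreement with human validation and time saved."""
    total = len(records)
    agreed = sum(1 for r in records if r.ai_suggestion == r.human_decision)
    saved = sum(r.minutes_manual - r.minutes_with_ai for r in records)
    return {
        "documents_reviewed": total,
        "agreement_rate": round(agreed / total, 2) if total else 0.0,
        "minutes_saved": round(saved, 1),
    }

# Three files from a hypothetical registry pilot
records = [
    PilotRecord("EXP-001", "building permit", "building permit", 12, 4),
    PilotRecord("EXP-002", "event licence", "building permit", 12, 15),
    PilotRecord("EXP-003", "building permit", "building permit", 12, 5),
]
print(pilot_report(records))
```

Even an indicator set this small lets the team decide with data whether the pilot should be extended, adjusted, or stopped.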
3. Not preparing a control structure or roles
Another common mistake is implementing AI projects without an institutional structure to govern them. Each department acts independently: communications uses a tool to draft memos, IT tests a document classification model, and the environment department experiments with predictive analytics. No one coordinates or validates the results.
AI projects need a cross-cutting governance structure, even if minimal: a technical committee, a commission or a responsible figure that ensures coherence and validation.
The European guidelines for trustworthy AI recommend establishing oversight mechanisms and defined roles as the basis for ethical and effective adoption.
How to avoid it
- Create an AI governance board with profiles from ICT, the Secretariat, Communication, and service areas.
- Establish clear roles: who designs, who validates, and who maintains.
- Document flows and results from the beginning (a minimal sketch of such a record follows this list).
- Train middle managers in AI assessment and traceability.
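To make those roles and records tangible, here is one possible, purely hypothetical shape for a minimal internal register of AI use cases, recording for each one who designs, who validates, and who maintains it. The field names and example entries are assumptions, not a prescribed format.

```python
# Hypothetical minimal register of AI use cases for a governance board.
# Field names and example entries are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    department: str
    designs: str       # who designs and configures the tool
    validates: str     # who reviews and signs off on the outputs
    maintains: str     # who monitors versions, data and incidents
    status: str = "pilot"
    notes: list[str] = field(default_factory=list)

register = [
    AIUseCase(
        name="Drafting of standard replies",
        department="Communication",
        designs="ICT",
        validates="Secretariat",
        maintains="ICT",
        notes=["Outputs always reviewed before sending"],
    ),
    AIUseCase(
        name="Classification of incoming registry documents",
        department="Registry",
        designs="ICT",
        validates="Registry manager",
        maintains="ICT",
    ),
]

# A governance board can start each review from a simple overview like this:
for uc in register:
    print(f"{uc.name} [{uc.status}] - validated by {uc.validates}")
```

Whether this lives in a spreadsheet, a database, or a shared document matters less than the fact that it exists and is kept up to date from the first pilot.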
4. Delegating intelligence to the provider
In many cases, management buys turnkey AI solutions, trusting that the vendor will resolve the technical complexities. But without internal expertise, control is diluted. Models are trained on unknown data, versions change without notice, and decisions are based on opaque rules.
| Risk | Example | Safe alternative |
|---|---|---|
| Supplier dependence | Closed model with no access to settings | Require documentation and explainability clauses (CLAD, 2022) |
| Loss of sovereignty | Data hosted on external servers not under your control | Ensure public ownership of data and version control |
| Technical opacity | Automatic updates without review | Periodic audits and documented traceability |
Giving up control of AI is giving up institutional sovereignty.
5. Case study
Let's consider a medium-sized municipality that decides to automate part of its building permit registration process using AI. The system, though well-intentioned, is launched quickly, without a technical committee or staff training.
In this situation, it's highly likely that within two weeks there would be complaints from the technicians, errors in the classifications, and a climate of widespread distrust. The unions would demand a halt to the project until "reliability is guaranteed," and the public would receive inconsistent answers.
The problem wouldn't be the technology itself, but how it's implemented. With a different approach (a coordination committee, human validation, and ongoing support), that same system could become an effective tool: it would free up time on repetitive tasks, guarantee traceability, and increase the quality of service.
The Spanish Network of Smart Cities (RECI) documents similar cases in which the key to success has been internal support and cross-functional coordination.
6. Institutional AI maturity checklist
- Does the staff understand why AI is being applied and how it benefits them?
- Is there a strategy or just isolated projects?
- Is there a responsible person or cross-cutting committee?
- Are the data and models controlled by management?
- Is ongoing training and communication provided during implementation?
If three or more answers are “no”, the priority is not more technology, but more structure.
7. Our conclusions
Artificial intelligence projects in public administration don't usually fail because of the technology. They fail because they are not accompanied by change management, structure, and real governance. Algorithms are the easy part; the complex part is aligning people, processes, and responsibilities.
The good news is that these errors are avoidable. With a clear strategy, technical support, and constant communication, AI can save time, improve traceability, and strengthen institutional trust.
Innovation is not about running faster, but about moving forward with purpose and control.
👉 At Direction & Results, we help municipalities implement AI with strategy, support, and traceability, so that teams can save time without losing control or confidence.
From strategic definition to operational change management, we help you apply real intelligence in public administration.