Nine out of ten Swedish municipalities are already experimenting with AI, according to AI Sweden. But many of the initiatives are small, isolated pilot projects, which risks fragmentation as each municipality builds its own solutions. Meanwhile, cyberattacks are relentless: the latest in a series of hacker attacks targeted Svenska kraftnät, Sweden's electricity transmission system operator, and new figures show that the Swedish public sector was exposed to an average of 2,700 cyberattacks per week during September. In a data leak earlier this year, information on over two million Swedes was exposed, some of it more than 20 years old. The question is no longer what we do after a breach occurs, but how we secure our data before something happens.
Trust in the public sector is at the core of the Swedish welfare model. If citizens cannot trust that their personal data is handled correctly, the entire digital transformation risks losing its support. This is especially true now, when the privacy reform is expected to increase data sharing between authorities. How do we share data securely, and, above all, how do we ensure that the right data is shared, whether between authorities, within healthcare or in individual municipalities?
The problem is not a lack of rules. Sweden already has guidelines from DIGG (the Agency for Digital Government) and IMY (the Swedish Authority for Privacy Protection) for how municipalities should handle AI, personal data and information security.
The guidelines require that municipalities identify information security risks, limit what information is entered into AI systems, use role-based access controls and ensure that their handling complies with the laws on public documents and confidentiality. In addition, the NIS2 directive requires municipalities and suppliers to demonstrate that their systems can withstand attacks, that their partners are secure and that the entire chain can be followed and audited.
But guidelines and laws on paper are not enough; responsibility must be demonstrated in practice, in the actual movement of data. It is not a matter of writing more policy documents, but of building accountability directly into the systems that move, log and secure data. SDK (Secure Digital Communication) is today the chosen standard for secure information exchange between authorities, regions and municipalities. In practice, however, SDK can be complex to implement and connect to existing systems, and this is where well-functioning integrations play an important role. SDK covers communication between organizations; a gap remains when it comes to tools for the actual data management.
At the same time, there is a growing need for digital cleaning: personal data sometimes lingers for years, even though the GDPR does not permit it to be retained. In light of the wave of cyberattacks, the Swedish Authority for Privacy Protection is also signalling the possibility of higher administrative fines for GDPR violations.
When we fail to protect data, it is not merely a breach of the rules; it is people's privacy that is at stake, and that is precisely why the regulations exist.
It is clear that automated processes are needed, acting as "data cleaners" that purge outdated information and log every step — solutions that are already available on the market today.
As AI becomes a natural part of society’s everyday life, Sweden must decide: do we want to build on insecure data foundations, or create the infrastructure that makes AI safe, traceable and transparent?
Integrations are often forgotten in the debate about AI and data management. Without them, AI initiatives remain isolated experiments instead of real improvements to operations; in practice, an engine is needed that connects systems and data. It is not just about automating data flows, but also about making the work fully traceable: every decision can be logged and reviewed, which provides the control and security the law requires. It is not about letting AI take over, but about giving people control over AI.
Securing data flows is not a technical sidetrack; it is the very prerequisite for AI to work. Only when the public sector can show who has used what data, when and why, can citizens truly trust that AI is being used to strengthen society.
Robert Jakobsson is Head of Public Sector Sweden at Frends.
This article was originally published in Swedish in the newspaper Dagens Industri.