Insights from Talkflows Tampere on why AI fails without integration and how organizations can move from experimentation to production with governed processes, reliable data and automation.
On a late afternoon in Tampere, Finland, in a room filled with developers, architects, and private and public sector leaders, the hot topic was AI, much like at any other tech gathering in recent months. But this time the conversation felt refreshingly grounded.
At Talkflows Tampere, hosted by Frends and Netum, the focus quickly moved beyond hype. Instead of debating what AI might become, the panel tackled a more pressing question: why so many AI initiatives struggle to deliver real results and what it actually takes to make them work in production.
The answer, repeated from multiple perspectives throughout the discussion, was clear. AI fails because it isn’t connected; it lacks one important backbone: integrations.
The opening speech by Juho Elometsä, Head of Integrations at Netum, set the tone:
“AI usually doesn’t fail because the models are weak, it fails because of integrations.”
AI is powerful, just not in the way many think
One of the first points raised during the panel was the growing gap between how AI is perceived and how it actually works. While generative AI has created the impression of intelligence, the reality is more nuanced.
The topic was explored by Professor Pekka Abrahamsson, founder of the GPT-Lab at Tampere University and AI Scientist of the Year in Finland in 2025:
“These systems are not intelligent. They are very good at predicting language, but that’s not the same as understanding.”
This distinction matters. When organizations treat AI as something that can independently reason, decide and act, they risk building systems on unstable ground. AI can generate impressive outputs, but without structure around it, those outputs remain disconnected from real-world processes.
This is where many initiatives begin to break down—not because the technology fails, but because expectations are misplaced.
From promising experiments to real-world complexity
Across both public and private sector perspectives, the discussion highlighted a familiar pattern. AI works well in controlled environments, but struggles when exposed to the complexity of real operations.
In practice, that complexity includes legacy systems, fragmented data, regulatory constraints, and dependencies across teams and organizations. Moving AI into production means navigating all of these at once.
As Abrahamsson put it:
“If the process isn’t clear, AI won’t fix it. It will just make the problems harder to see.”
This insight reframes the challenge. AI does not solve broken processes; on the contrary, it amplifies them. Introducing AI without clarity, structure, and ownership can make systems less transparent rather than more effective.
Why integration is the foundation, not an afterthought
If there was one idea that unified the panel, it was the role of integration as the true enabler of AI.
AI needs access to reliable data, but also context — an understanding of how that data fits into business processes — and the ability to act on insights in a controlled way. All of this depends on how systems are connected.
Without integration, AI remains isolated. It can suggest, analyze, and generate, but it cannot influence outcomes.
This is why integration was repeatedly described not as a supporting component, but as the foundation. It is what allows AI to move from outputs to actions, from isolated use cases to operational capability.
Public sector perspective: control before scale
The discussion brought a particularly grounded perspective from the public sector, where the stakes of AI adoption are different.
In environments like cities and municipalities, AI is not just a tool for efficiency but part of systems that affect citizens directly. That changes how it must be approached.
“We don’t use AI to make decisions for citizens. There always has to be a human responsible,” explained Klaus Nylamo, Senior Information Systems Engineer, Data and Artificial Intelligence Services at the City of Tampere.
This emphasis on accountability shapes how AI is implemented. Rather than pushing for rapid deployment, public organizations focus on validation, governance and maintaining human oversight.
While this approach can appear cautious, it reflects a deeper understanding of what it takes to build systems that are functional and trustworthy.
AI must operate inside controlled processes
From a technical perspective, the conversation turned to where AI initiatives most often fail and how to avoid those pitfalls.
A recurring theme was the danger of allowing AI to operate without sufficient control. When AI interacts with systems without clear rules, validation and observability, it introduces risk rather than value.
This was one of the central points in Asmo Urpilainen's contribution to the panel. The Frends CTO has been vocal about the need to implement AI with guardrails in place, such as embedding it into a workflow:
“AI becomes dangerous when it acts without control. That’s why it must be part of a controlled process.”
This is where integration and automation platforms play a critical role. They provide the structure in which AI can operate safely, ensuring that every input is validated, every action is traceable and every outcome aligns with business rules.
In this model, AI is not the system itself, but a component within a larger, governed workflow.
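The pattern described above, an AI component wrapped in validation, traceability, and business rules inside a larger workflow, could be sketched roughly as follows. This is a minimal illustration, not anything presented at the event: the ticket scenario, the `ALLOWED_ACTIONS` rule set, and the stubbed `call_model` function are all hypothetical, with the stub standing in for a real model call.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

# Business rule: only these outcomes may leave the workflow step.
ALLOWED_ACTIONS = {"refund", "escalate", "close"}

@dataclass
class StepResult:
    approved: bool
    action: str
    reason: str

def validate_input(payload: dict) -> dict:
    # Guardrail 1: reject malformed input before it ever reaches the model.
    if "ticket_id" not in payload or "text" not in payload:
        raise ValueError("payload must contain 'ticket_id' and 'text'")
    return payload

def call_model(text: str) -> str:
    # Hypothetical stand-in for an LLM call that suggests an action.
    return "refund" if "refund" in text.lower() else "escalate"

def run_step(payload: dict) -> StepResult:
    data = validate_input(payload)
    suggestion = call_model(data["text"])
    # Guardrail 2: every model output is logged, so actions stay traceable.
    log.info("ticket=%s model_suggestion=%s", data["ticket_id"], suggestion)
    # Guardrail 3: the suggestion is checked against business rules; the
    # workflow, not the model, decides whether anything happens next.
    if suggestion not in ALLOWED_ACTIONS:
        return StepResult(False, "none", f"'{suggestion}' is not an allowed action")
    return StepResult(True, suggestion, "validated against business rules")

result = run_step({"ticket_id": "T-42", "text": "Customer requests a refund"})
print(result.approved, result.action)  # → True refund
```

The key design choice mirrors the panel's point: the model only ever *suggests*, while validation, logging, and the rule check sit around it in the workflow, so no unvetted output can act on a downstream system.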
The real challenge: alignment, not technology
While much of the discussion focused on architecture and systems, the panel also highlighted a broader challenge, one that is often underestimated.
Implementing AI at scale requires alignment across organizations, teams and partners. And in many cases, this proves more difficult than the technology itself.
“The technology is not the hardest part. The hardest part is getting people, organizations, and partners to work together,” said Vesa-Matti Ruottinen, Development Manager Regional Development, Data and Technologies at the City of Tampere.
This insight reinforces a key theme of the event: AI must be an organizational initiative.
Success depends on collaboration, shared understanding and clear ownership — factors that cannot be solved with tools alone.
From discussion to real capability
As the event moved into informal discussions and networking, the idea that AI is an amplifier of transformation continued to resonate across all perspectives.
When built on strong foundations, with clear processes, reliable data and well-designed integrations, it can accelerate decision-making, improve efficiency and unlock new capabilities. But without those foundations, it adds complexity without delivering value.
Talkflows Tampere made one thing clear: the future of AI will be defined by how well models are connected to systems, to processes, and to the reality of how organizations operate every day.