Jan 16, 2026

One year with AI agents: From promise to reality

A year ago, we wrote about AI agents as one of the most promising concepts in the latest wave of AI. Since then, we have built them. Put them into production. Seen them fail. And seen them create real value. In the meantime, interest has only grown: today, AI agents are presented as the solution to everything from inefficiency and bottlenecks to labor shortages and increasing complexity.

But reality is more down to earth. Across organizations, AI agents have begun to demonstrate that they can create value, but still far from the scale many had hoped for. Expectations remain higher than what can be documented in real-world operations. Agents stall, make incorrect decisions, or become difficult to maintain when the data foundation is messy, processes are unclear, or a new version of a language model changes behavior from one day to the next.

Motivated by these experiences, this article is not yet another piece about what AI agents could become, but an honest look at what they are in practice. We examine where they actually create value, where they break down, and what it takes to make them work reliably in a real organization: far removed from AI James Bond fantasies, but close to day-to-day operations.

An AI agent is a digital human, not a magical engine

In practice, we have learned that the most useful way to understand an AI agent is not as a piece of software, but as a new kind of digital employee. That may sound counterintuitive if you are used to thinking of IT as deterministic and predictable, but AI agents do not behave like classical software. They reason, interpret, prioritize, and choose actions in a way that resembles human behavior far more than a rule engine.

This also means they share many of the same limitations. An AI agent can read, understand, and act, but only within the context it has access to. It cannot “see” the entire organization, just as a human cannot. It operates based on the data, documents, and systems it is connected to, and everything outside of that effectively does not exist to it.

And just as with people, the quality of an agent’s output is entirely determined by the quality of the environment it operates in. Give an agent messy data, unclear concepts, and inconsistent naming, and it will produce messy decisions, just faster. Where a human might stop and say, “this doesn’t make sense,” an AI agent will typically try to find an explanation and act on it, even when the foundation is wrong. That is exactly what it has been trained to do. This makes poor data far more dangerous in an agent-driven world than it ever was before.

The same applies to boundaries and responsibility. A human employee can often navigate unclear rules, apply common sense, and ask questions when something is ambiguous. An AI agent cannot, which is why it requires extremely clear guardrails around what it is allowed to do, what it must not do, and especially how it should react when it is uncertain.
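
To make that concrete, below is a minimal sketch of what such guardrails can look like in code. The action names, the confidence threshold, and the two helper functions are invented for this illustration, not a specific framework's API; the point is the structure: explicit allowlists, explicit prohibitions, and a defined escalation path for uncertainty.

    # Minimal guardrail sketch (illustrative; names and threshold are examples).
    ALLOWED_ACTIONS = {"read_order", "draft_reply"}        # what the agent may do
    FORBIDDEN_ACTIONS = {"issue_refund", "delete_record"}  # what it must never do
    CONFIDENCE_THRESHOLD = 0.8                             # below this, ask a human

    def escalate_to_human(action: str, reason: str) -> str:
        # Hypothetical hand-off: queue the case for a human decision.
        return f"escalated: {action} ({reason})"

    def run_action(action: str) -> str:
        # Hypothetical dispatcher into the real system integrations.
        return f"executed: {action}"

    def execute(action: str, confidence: float) -> str:
        if action in FORBIDDEN_ACTIONS:
            return escalate_to_human(action, "forbidden action")
        if action not in ALLOWED_ACTIONS:
            return escalate_to_human(action, "unknown action")
        if confidence < CONFIDENCE_THRESHOLD:
            return escalate_to_human(action, "agent is uncertain")
        return run_action(action)  # only now is the agent allowed to act

    print(execute("draft_reply", 0.93))   # executed: draft_reply
    print(execute("issue_refund", 0.99))  # escalated: issue_refund (forbidden action)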

Seen this way, an AI agent is neither a miracle tool nor an autonomous superintelligence. It is something far more mundane, and far more demanding: a digital employee that can work quickly and cheaply, but that requires structure, clarity, and guidance to create value.

The data illusion: AI does not clean up messy organizations

One of the myths of the AI wave is the idea that “AI can just read everything and understand it.” In presentations and demos, it often looks that way: the model answers questions across documents, emails, databases, and systems. So why would data quality still be a problem?

In practice, the opposite is almost always true. If your data is messy, contradictory, or incomplete, an AI agent cannot rise above it. It cannot distinguish between truth and noise; it can only distinguish between patterns. And when the patterns in your data reflect organizational chaos, historical compromises, and informal truths, the agent will learn exactly that.

Large language models are extremely good at interpreting what they see. They excel at filling in gaps, creating coherence, and formulating plausible explanations. But they are not good at detecting when the underlying foundation itself is wrong. This means they do not just use your data; they amplify it. An organization with high data maturity gets a powerful multiplier. An organization with low data maturity gets its problems scaled.

This becomes very clear in practice:

  • If your customers exist in three different systems, the agent will choose one: the most likely one, but not necessarily the correct one (a sketch of this case follows the list).

  • If naming conventions are ambiguous, the agent will invent an interpretation that sounds convincing.

  • If your CRM reflects politics rather than reality, the agent will treat politics as truth.
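
The first bullet takes only a few lines to reproduce. In the sketch below, the customer records and the resolution rule are invented, but the pattern is real: the agent's "most likely" choice is a heuristic, not a verified truth.

    # The same customer in three systems (invented data). A naive resolution
    # rule picks the most recently updated record - plausible, but wrong if,
    # say, the support address is stale.
    records = [
        {"source": "CRM",     "email": "j.smith@acme.com",  "updated": "2025-03-01"},
        {"source": "ERP",     "email": "jsmith@acme.com",   "updated": "2025-06-12"},
        {"source": "Support", "email": "john@acme-old.com", "updated": "2025-07-30"},
    ]

    # ISO dates compare correctly as strings, so "newest wins" is one line.
    chosen = max(records, key=lambda r: r["updated"])
    print(chosen["source"], chosen["email"])  # Support john@acme-old.com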

The brutal truth is therefore that AI agents do not clean up your organization. They turn on the lights. They make bad data more visible, processes more transparent, and disagreements more obvious. And for many organizations, that is far more confronting than expected.

The part no one talks about: LLM vendor risk

Vendor risk is one of the realizations you only get once AI agents are running in production. Unlike most other software, AI agents depend on a component you do not control: a large language model. Unless that model is open source and runs inside your own IT environment, it will be updated and changed continuously, often without prior notice.

This means the lifecycle of an AI agent has a variable component that you cannot control. The model may gain new capabilities, new limitations, a new tone, or new failure modes without you asking for it. And because the agent’s reasoning and decisions depend directly on the model’s behavior, even small changes can have major operational consequences. That is, needless to say, a huge challenge when the system is business-critical.

In classical software, you decide when to upgrade: changes are tested and only then rolled out. In the agent world, it is often the reverse. A vendor like OpenAI upgrades the model according to its own opaque roadmap, and the model’s behavior changes over time.

Often, these changes are subtle. A model might become slightly more “creative,” slightly less inclined to follow instructions rigidly, or slightly more willing to fill in missing information. But for an agent acting on behalf of an organization, such shifts can be the difference between correct execution and systematic errors.
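
One practical countermeasure is a behavioral regression suite that runs whenever the model changes. Below is a minimal sketch, assuming the agent is expected to answer with strict JSON; the output contract, the prompts, and the stubbed agent_answer are illustrative, and in production the stub would call the actual model.

    # Behavioral regression sketch (pytest style). The expectation - strict
    # JSON with exactly these keys - is an example contract, not a standard.
    import json

    REGRESSION_PROMPTS = [
        "Customer asks for an invoice copy.",
        "Customer demands a refund outside policy.",
    ]

    def agent_answer(prompt: str) -> str:
        # Stub standing in for a call to the pinned model version.
        return '{"action": "draft_reply", "confidence": 0.9}'

    def test_agent_keeps_its_output_contract():
        for prompt in REGRESSION_PROMPTS:
            reply = json.loads(agent_answer(prompt))       # must parse as JSON
            assert set(reply) == {"action", "confidence"}  # no invented fields
            assert 0.0 <= reply["confidence"] <= 1.0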

As a result, building AI agents is not just an AI engineering challenge. It is equally a question of vendor strategy, architecture, and operational processes. AI agents based on commercial language models will always depend on an actor that optimizes for its own objectives, not for the stability of your specific agent.

This is why classical software disciplines suddenly become critical: the ability to pin specific model versions, continuously test agent behavior, and have clear rollback strategies when something changes unexpectedly. Without this, you risk building systems that work brilliantly today and unpredictably tomorrow.
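
In practice, the first of those disciplines can be as simple as refusing to write “latest” anywhere. The sketch below pins dated model snapshots per environment; the identifiers follow OpenAI’s dated-snapshot naming convention, but which versions you pin is of course your own decision.

    # Pin dated model snapshots per environment so an upgrade is an explicit,
    # testable change rather than a silent one. Identifiers are examples.
    MODEL_VERSIONS = {
        "production": "gpt-4o-2024-08-06",  # currently verified snapshot
        "staging":    "gpt-4o-2024-11-20",  # candidate under regression testing
        "rollback":   "gpt-4o-2024-05-13",  # last known-good snapshot
    }

    def model_for(env: str) -> str:
        # Fail loudly on unknown environments; never fall back to "latest".
        return MODEL_VERSIONS[env]

    print(model_for("production"))  # gpt-4o-2024-08-06

Promoting the staging snapshot to production then becomes a deliberate step gated by the regression suite above, and rolling back is a one-line change.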


Concluding thoughts: Agents are not hype, but they are not plug-and-play either

AI agents are not AI James Bond fantasies. But they are not just chatbots either. They are something far more interesting, and far more demanding: autonomous digital employees that can read, understand, reason, and act on behalf of an organization. When they work, they can take over tasks that previously required humans and perform them faster, cheaper, and more consistently.

However, one year of real-world projects has made one thing clear: AI agents do not create value by virtue of their technology alone. They create value when deployed on the right tasks, with the right data foundation, and within the right constraints. Some tasks are naturally suited for autonomy; others are not. Some processes are ready to be automated; others remain too unclear, too political, or too messy.

The winners, therefore, will not be those who deploy the most agents, but those who understand where agents can actually make a difference. It is about choosing the tasks where a digital employee makes sense, ensuring that the data the agent sees is accurate, and accounting for the technological risks that come with letting external models make decisions.

AI agents are thus neither a miracle cure nor a temporary trend. They represent a new degree of digitization. A tool for increasing organizational productivity. An opportunity with enormous potential, and very real limitations.
