
From LLMs to Agentic Workflows: How Domain Intelligence Matures


Large Language Models (LLMs) have changed how we interact with information. They can summarise, explain, draft, and reason at a level that was unthinkable only a few years ago.

However, as organisations begin using AI in domain-specific, high-stakes contexts—regulation, law, finance, healthcare, or strategy—a critical question emerges:

When is an LLM enough, and when do you need something more structured?

This article explains the progression from simple LLM usage to fully agentic workflows, and why that evolution matters.

  1. The core problem: fluency is not reliability

LLMs are excellent at producing answers that sound correct.
That strength is also their weakness.

In general domains, this is acceptable. In regulated or expert domains, it is not.

Domain work requires:
1. Correct interpretation of formal rules
2. Clear jurisdictional boundaries
3. Traceability back to authoritative sources
4. Defensible reasoning under scrutiny

LLMs alone do not guarantee these properties.

  2. A maturity continuum, not a binary choice

AI systems used for domain intelligence tend to evolve along a continuum:
1. Basic LLM
General reasoning and language generation. Fast, flexible, but ungrounded.
2. LLM + Prompt Discipline
Better consistency through structured prompts, still reliant on model recall.
3. LLM + RAG (Retrieval-Augmented Generation)
Answers are grounded in documents, policies, or knowledge bases.
4. Semi-Agentic Systems
Tools, checks, and limited validation steps are introduced.
5. Full Agentic Workflows
Explicit rules, verification steps, jurisdiction control, and failure handling.

Each step along this continuum adds control, reliability, and auditability.

Agentic workflows are not an alternative to LLMs; they are their governed, operational form.
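The jump from stage 2 to stage 3 can be sketched in a few lines. This is a toy illustration, not a production pattern: a naive keyword-overlap retriever stands in for a real vector store, and the function names (`retrieve`, `build_grounded_prompt`) are illustrative, not from any particular library.

```python
# Minimal RAG sketch: ground the prompt in retrieved documents
# instead of relying on model recall.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Stage 3 of the continuum: the model answers only from cited sources."""
    sources = "\n".join(
        f"[{i + 1}] {d}" for i, d in enumerate(retrieve(query, documents))
    )
    return (
        "Answer using ONLY the sources below. Cite by number. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

docs = [
    "Policy 4.2: Client funds must be held in segregated accounts.",
    "Policy 7.1: Marketing material requires compliance sign-off.",
]
prompt = build_grounded_prompt("Where must client funds be held?", docs)
print(prompt)
```

The key move is in the prompt template, not the retriever: the model is told to cite and to admit when sources are missing, which is what makes the answer traceable.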

  3. When does an agentic approach become necessary?

The deciding factor is not technical sophistication.
It is risk.

Two questions determine the appropriate architecture:
1. What is the cost of being wrong?
2. Do I need to explain or defend the answer to someone else?

If both are low, a simple LLM is sufficient.
If either is high, relying on a single model becomes dangerous.

This is why agentic workflows appear first in:
• Regulatory analysis
• Legal reasoning
• Financial decision support
• Safety-critical or compliance-driven domains

In these contexts, confidence without justification is a liability.

  4. What makes an agentic workflow different?

An agentic workflow introduces elements that LLMs do not provide on their own:
1. Explicit rules
Formal constraints encoded outside the model.
2. Source authority
Clear prioritisation of documents, clauses, or standards.
3. Verification steps
Independent checks before answers are finalised.
4. Failure states
The system can stop, flag uncertainty, or request clarification.
5. Traceability
Every conclusion can be traced back to inputs and rules.

This transforms AI from a conversational assistant into a domain system.
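The elements above can be sketched as a thin wrapper around a model call. This is a minimal illustration under stated assumptions: `llm` is a hypothetical callable standing in for any model API, the jurisdiction rule and the "cited source" check are placeholder examples, and a real system would carry far richer rules and verifiers.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Result:
    status: str                      # "answered" or "needs_review"
    answer: Optional[str]
    trace: list[str] = field(default_factory=list)

def agentic_answer(question: str, jurisdiction: str,
                   llm: Callable[[str], str]) -> Result:
    trace = [f"question={question!r}", f"jurisdiction={jurisdiction}"]

    # Explicit rule, encoded outside the model: unsupported
    # jurisdictions stop the workflow instead of guessing.
    if jurisdiction not in {"UK", "EU"}:
        trace.append("rule: unsupported jurisdiction -> stop")
        return Result("needs_review", None, trace)

    draft = llm(question)
    trace.append("draft generated")

    # Verification step: an independent check before finalising.
    if "source:" not in draft.lower():
        # Failure state: flag uncertainty rather than ship the draft.
        trace.append("verification failed: no cited source")
        return Result("needs_review", None, trace)

    trace.append("verification passed")
    # Traceability: every conclusion carries its decision log.
    return Result("answered", draft, trace)

fake_llm = lambda q: "Funds must be segregated. Source: Policy 4.2."
result = agentic_answer("Where are client funds held?", "UK", fake_llm)
print(result.status)   # answered
```

Note that both failure paths return the same structured `Result` with a trace, so a reviewer can see exactly why the system declined to answer.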

  5. Why complexity can be justified

Agentic systems are more complex to build.
That complexity only creates value when it buys certainty.

In low-risk scenarios, complexity is wasteful.
In high-risk scenarios, simplicity is irresponsible.

The mistake many organisations make is treating all AI use cases as equal. They are not.

  6. A practical rule of thumb
    1. If an answer only needs to help you think, use an LLM.
    2. If an answer must stand up to scrutiny, use an agentic workflow.

  7. Final thought

The future of applied AI is not about choosing between LLMs and agents.
It is about placing LLMs inside systems that understand rules, risk, and responsibility.

That is how domain intelligence matures.



Making AI Work for You, Not Against You


According to ChatGPT, I was in the top 1% of its worldwide users in 2025. This is its description of my profile: “This profile reflects power-user, professional-grade usage. High message volume, many distinct chats, and heavy long-form writing signals indicate you used ChatGPT as a thinking partner, drafting engine, and systems tool—not entertainment or light assistance.”


Here are some lessons that I learned:

  1. The closer you work in a deterministic rather than probabilistic way, the more value you will get. Set the rules upfront so the AI knows how to behave: personalise it by asking it to save your preferences to memory. This enables long-term consistency, so you can work at speed.

  2. Start with the end goal in mind and explain your context. What do I want AI to do for me? Is it helping me think, review, generate possibilities, summarise, or explain something?

  3. Think about the form and content most appropriate for the answer you want. Is it better presented as a table, an infographic, ASCII text, bullet points, or paragraphs?

  4. Most output is easier to understand in table format. Working with tables helps you think in different dimensions while keeping the scope narrow and avoiding rabbit holes.

  5. Scope creep is real. The role of AI is to generate high-probability possibilities. It will try to guess beyond what you asked, and you risk embellishment before the primary question is even answered.

  6. Most AI benchmarks do not reflect real-world use. What is the use of a high benchmark score if you cannot trust the model to get a reference right? Always check external references to avoid problems down the line. Verification matters for important tasks.

  7. Long-form conversations will drift and become less accurate. Alongside scope creep there is also coherence creep: the longer the conversation, the higher the likelihood of losing the thread.

  8. Start general and then dive into the specifics. To deal with complexity, try to understand the architecture and the model before the instructions and tasks. Ask AI to brainstorm with you first to agree the high-level vision, then work on the details of the task. That way you always have a map to reference back to.

  9. Ask for feedback. After working with AI for a while, your profile becomes more visible to the model, and you can ask it to review where you can improve, including how to ask better questions.

