
Building Practical AI Agents

November 15, 2025 · 2 min read

Tags: lab · agents · langgraph · llm

“AI agents” is overloaded. A lot of demos are a prompt plus a loop plus vibes. I care about agents that close a real loop:

  • produce an artifact (a summary, a report, a PR),
  • verify it against explicit criteria,
  • and revise until it’s acceptable (or fail loudly with a reason).

If you can’t explain the loop, instrument it, and test it with real inputs, you don’t have an agent. You have a generator.

TL;DR

  • Make the loop explicit: draft → verify → revise with a max revision budget.
  • Verify against something concrete: schemas, checklists, citations, invariants.
  • Log state transitions and the reasons for revisions. “It got better” is not a debug strategy.
  • Treat tools as untrusted I/O: timeouts, retries, and bounded outputs.

Real-World Implementation: News Summarizer

Instead of talking in the abstract, here’s the core pattern from my News Analyzer agent. It uses a “draft → verify → revise” workflow: a fast model produces a draft, and a stronger model acts as a critic.

One detail that matters operationally: the agent talks to an OpenAI-compatible endpoint (I run it behind a LiteLLM proxy). That means the same code can point at local models in my homelab or a hosted provider by changing config, not rewriting prompts.
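Because the endpoint is OpenAI-compatible, "swap the model" reduces to swapping a base URL and a model name. A sketch of that config indirection — the env var names, defaults, and the `make_llm_kwargs` helper are my invention, not the project's actual code:

```python
import os

# Hypothetical helper: resolve the endpoint from environment variables,
# falling back to a local LiteLLM proxy. Prompts and agent code never change.
def make_llm_kwargs():
    return {
        "base_url": os.environ.get("LLM_BASE_URL", "http://localhost:4000/v1"),
        "api_key": os.environ.get("LLM_API_KEY", "sk-local"),
        "model": os.environ.get("LLM_MODEL", "local-default"),
    }
```

These kwargs can then be passed to any OpenAI-compatible client constructor; the agent only ever sees the abstraction.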

The Agent Architecture

I use LangGraph to define the behavior as a small state machine. The point is not the framework. The point is making the loop explicit and inspectable.

Here is the core graph definition from agent.py:

from typing import TypedDict

from langgraph.graph import END, StateGraph

class SpecState(TypedDict):
    task: str
    draft: str            # current draft from the fast model
    verified: str         # approved output (or critic feedback mid-loop)
    needs_revision: bool
    revision_count: int
    max_revisions: int    # hard budget so the loop always terminates

def _build_graph(self):
    workflow = StateGraph(SpecState)

    # Define the nodes
    workflow.add_node("draft_step", self.draft_node)
    workflow.add_node("verify_step", self.verify_node)
    workflow.add_node("revise_step", self.revise_node)

    # Define the flow
    workflow.set_entry_point("draft_step")
    workflow.add_edge("draft_step", "verify_step")

    # Conditional logic: If verification fails, revise. Otherwise, end.
    workflow.add_conditional_edges(
        "verify_step",
        self.should_revise,
        {"revise": "revise_step", "end": END}
    )
    workflow.add_edge("revise_step", "verify_step")

    return workflow.compile()

Self-Correction in Action

The reliability gain comes from the verification step. The verification model acts as a critic: it checks a concrete rubric and either approves or returns actionable feedback.

from langchain_core.messages import HumanMessage, SystemMessage

def verify_node(self, state: SpecState) -> dict:
    """Quality verification using the (stronger) orchestrator model."""
    response = self.verify_model.invoke([
        SystemMessage(content="Review this draft. If acceptable, output APPROVED. If issues, output REVISE: <feedback>"),
        HumanMessage(content=f"Task: {state['task']}\n\nDraft:\n{state['draft']}")
    ])

    # Return a partial state update; LangGraph merges it into the state.
    if "APPROVED" in str(response.content):
        return {"verified": state['draft'], "needs_revision": False}
    # On failure, stash the critic's feedback so the revise step can act on it.
    return {"needs_revision": True, "verified": str(response.content)}

This lets the agent catch hallucinations and missed details without a human in the loop. In practice, it’s the difference between “pretty output” and “a system you can lean on at 6am.”

Comparison to Standard RAG

Many “agents” are closer to RAG pipelines. This differs in one important way: verification is a gate, not an afterthought. The system can decide to loop back and fix its own mistakes based on should_revise.
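`should_revise` is that gate. The version below is my reconstruction from the graph wiring above (the real implementation may differ): loop back only while verification failed and the revision budget holds.

```python
def should_revise(state) -> str:
    # Route back to revise_step only if the critic rejected the draft AND
    # we still have revision budget; otherwise terminate the graph.
    if state["needs_revision"] and state["revision_count"] < state["max_revisions"]:
        return "revise"
    return "end"
```

The returned string is looked up in the `{"revise": "revise_step", "end": END}` mapping from the graph definition.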

Concrete practices that helped:

  1. Start simple: small state + small tools beats a mega-prompt.
  2. Log everything: inputs, outputs, timings, and the exact revision feedback.
  3. Test with real tasks: synthetic tasks hide the failure modes you care about.
  4. Be honest about models: smaller models are great for drafting and extraction; verification often needs a stricter model or a narrower rubric.
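For the logging point, one structured record per state transition makes the loop auditable after the fact. A minimal sketch — the field names and helpers are illustrative, not the agent's actual logging code:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent")

def transition_record(node, state, feedback=None, elapsed_s=None):
    # One flat dict per transition: grep-able, diff-able, plottable.
    return {
        "node": node,
        "revision_count": state.get("revision_count", 0),
        "feedback": feedback,
        "elapsed_s": elapsed_s,
    }

def log_transition(node, state, **kw):
    log.info(json.dumps(transition_record(node, state, **kw)))
```

With this in place, "why did it revise three times at 6am?" is a grep, not an archaeology project.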

Check out the Agent Orchestrator project for more details.
