I had the opportunity (16-Mar-2026) to present at Memphis AgentCamp. Thank you Doug Starnes for a great event!
The description, link to github repo, and slides follow.
Making Agents Work
Building more powerful AI Agents seems to be getting easier by the day. They are powered by incredible models, have access to tools, and can work in teams. But how can we have confidence in non-deterministic systems that make consequential decisions?
This talk explores four approaches for building that confidence.
1. Observability platforms – You can’t improve what you can’t see. We’ll explore tools that make the hard-to-see stuff visible.
2. Evals (evaluations) – Moving beyond LGTM (looks good to me), evals wrap agents in formal testing structures to measure accuracy, consistency, and edge case handling – both before and after your Agent goes live.
3. Safety guardrails – Content filtering, PII detection, and hallucination detection from both platform vendors and standalone models. Let’s see how they fit into your agent stack.
4. Selective determinism – Sometimes we make better AI solutions by knowing when NOT to use AI. We will discuss mixing in deterministic logic with our non-deterministic behaviors.
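To make the last idea concrete, here is a minimal sketch of selective determinism. It assumes a hypothetical refund-handling agent (the names `classify_intent`, `handle_request`, and `REFUND_LIMIT` are illustrations, not part of Agent Framework): the model handles the fuzzy step of classifying intent, while the consequential decision of whether money actually moves is enforced in plain deterministic code.

```python
# Sketch of "selective determinism": the model handles fuzzy intent
# classification, but the consequential business rule lives in plain code.
# All names here are hypothetical illustrations, not Agent Framework APIs.

REFUND_LIMIT = 100.00  # deterministic policy threshold


def classify_intent(message: str) -> str:
    """Stand-in for a model call; a real agent would ask the LLM here."""
    return "refund" if "refund" in message.lower() else "other"


def handle_request(message: str, amount: float) -> str:
    intent = classify_intent(message)  # non-deterministic step (stubbed)
    if intent != "refund":
        return "routed to general support"
    # Deterministic step: the model never decides whether money moves.
    if amount <= REFUND_LIMIT:
        return f"refund of ${amount:.2f} approved"
    return "escalated to a human reviewer"
```

For example, `handle_request("Please refund my order", 42.50)` approves the small refund, while the same request for $500 is escalated regardless of what the model says.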
Concepts are platform-agnostic, but demos will use Microsoft Foundry and the Agent Framework (currently in preview). (In case you haven’t been following along, Microsoft Foundry was previously known as Azure AI Foundry, and before that as Azure AI Studio. And Agent Framework is the next generation of both Semantic Kernel and AutoGen.)
Target audience: Those new to building production agent systems seeking approaches beyond the “hello world” tutorials – which described me not too long ago.
- Repo is here: https://github.com/CrankingAI/goat-agent
- Deck is here: