I had the opportunity (28-Mar-2026) to present at the 40th running of Boston Code Camp. Thank you to the incredible pros who run these events twice yearly, making them happen for a grateful greater-Boston tech community.
Thank You to the Speakers, Sponsors, and Organizers
Thank you to all the speakers:
Anirban Tarafder · Bala Subra · Bill Wilder · Bob German · Bryan Hogan · Chris Seferlis · Cole Flenniken · Dave Davis · Dave Finn · Dekel Cohen Sharon · Fnu Tarana · Gleb Bahmutov · Harry Kimpel · Jason Haley · Jeff Blanchard · Jesse Liberty · Jim Wilcox · John Miner · Joseph Parzel · Josh Goldberg · Juan Pablo Garcia Gonzalez · Keith Fitts · Matt Ferguson · Matthew Norberg · Michael Mintz · Pavan Kumar Kasani · Richard Crane · Sunil Kadimdiwan · Taiob Ali · Ty Augustine · Udaiappa Ramachandran · Varsham Papikian · Vijaya Vishwanath · Viswa Mohanty
And thank you sponsors:
Hosting: Microsoft · Gold: MILL5 · Silver: Pulsar Security · Progress Telerik · Triverus · Brightstar · In-kind: Sessionize

Making Agents Work
My session, Making Agents Work, highlighted some of “the boring side” of building an AI Agent – but these boring details can be super-valuable. The talk was inspired by work I do in my day job as CTO at Open Admissions, where I am using an AI Agent to scale a 30-year-old methodology that helps people understand themselves better and use those insights to choose a more aligned college, major, job, or other consequential life decision. Doing this with an AI Agent is a huge responsibility and, as I shared, putting together the initial agent was the easy part. Being confident it is consistent, accurate, well behaved, and robust if attacked or misused – while still easy to use – that was the hard and boring part!
The talk uses a different AI Agent – a simple one that accepts a movie and returns a rating summary – to illuminate some of the points. For example, it uses the Agent Framework with a fan-out/fan-in workflow in its internal agent architecture, runs on Microsoft Foundry with a modern tech stack, and uses Azure Monitor for OTel-aligned observability.
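To make the fan-out/fan-in shape concrete, here is a minimal sketch in plain Python asyncio. This is illustrative of the pattern only, not the Agent Framework API; the two “reviewer” functions are hypothetical stand-ins for what would be concurrent model calls in the real agent.

```python
import asyncio

# Hypothetical sub-agents: each rates a movie independently.
# In a real agent these would be async LLM calls; stubs keep the
# fan-out/fan-in structure visible.
async def critic_score(movie: str) -> float:
    await asyncio.sleep(0)  # stand-in for an async model call
    return 7.0

async def audience_score(movie: str) -> float:
    await asyncio.sleep(0)
    return 8.0

async def rate_movie(movie: str) -> str:
    # Fan-out: launch the independent reviewers concurrently.
    scores = await asyncio.gather(critic_score(movie), audience_score(movie))
    # Fan-in: aggregate the partial results into one summary.
    avg = sum(scores) / len(scores)
    return f"{movie}: {avg:.1f}/10"

print(asyncio.run(rate_movie("The Matrix")))  # → The Matrix: 7.5/10
```

The point of the pattern is that the slow, independent steps run in parallel and only the aggregation step waits on all of them.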
The full description, link to the GitHub repo, and slides follow.
But first, please find some elaboration on OTel Traces, inspired by my OTel demo snafu at the live event. That blog post is here: https://blog.codingoutloud.com/2026/03/30/otel-traces-for-the-win/
OTel Traces for the Win
Speaking of OTel… Due 100% to user error (that would be me!), the demo I had prepared to show the incredible power of OTel hit a technical glitch. I have attempted to remedy that with a blog post I’m calling OTel Traces for the Win – please hop over there if you are interested.
Making Agents Work – the official talk description
Building more powerful AI Agents seems to be getting easier by the day. They are powered by incredible models, have access to tools, and can work in teams. But how can we have confidence in non-deterministic systems that make consequential decisions?
This talk explores four approaches for building that confidence.
1. Observability platforms – You can’t improve what you can’t see. We’ll explore tools that make the hard-to-see stuff visible.
2. Evals (evaluations) – Moving beyond LGTM (looks good to me), evals wrap agents in formal testing structures to measure accuracy, consistency, and edge case handling – both before and after your Agent goes live.
3. Safety guardrails – Content filtering, PII detection, and hallucination detection from both platform vendors and standalone models. Let’s see how they fit into your agent stack.
4. Selective determinism – Sometimes we make better AI solutions by knowing when NOT to use AI. We will discuss mixing in deterministic logic with our non-deterministic behaviors.
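The fourth point – knowing when NOT to use AI – can be sketched with a tiny example: run a cheap deterministic check before ever invoking the non-deterministic model. The regex-based SSN screen and the `call_model` stub below are hypothetical illustrations, not any vendor’s guardrail API.

```python
import re

# Deterministic pre-check: refuse inputs with an obvious SSN-like
# pattern before spending a model call. Illustrative only.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def call_model(prompt: str) -> str:
    # Stand-in for the non-deterministic LLM call.
    return f"(model answer for: {prompt})"

def answer(prompt: str) -> str:
    if SSN_PATTERN.search(prompt):
        # Deterministic branch: no AI involved, fully predictable,
        # trivially testable.
        return "Sorry, I can't process messages containing SSNs."
    return call_model(prompt)

print(answer("My SSN is 123-45-6789"))
print(answer("Rate the movie Inception"))
```

The deterministic branch costs nothing to evaluate and behaves identically every time – exactly the kind of confidence a purely model-driven path cannot give you.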
Concepts are platform-agnostic, but demos will use Microsoft Foundry and the Agent Framework (currently in preview). (In case you haven’t been following along, Microsoft Foundry was previously known as Azure AI Foundry, and before that was Azure AI Studio. And Agent Framework is the next generation of both Semantic Kernel and AutoGen.)
Target audience: Those new to building production agent systems seeking approaches beyond the “hello world” tutorials – which described me not too long ago.
Source Code
- The code used to implement the Movie Rating Agent is in the GitHub repo here: https://github.com/CrankingAI/movie-trivia-agent
Presentation
- The slides I presented are here: