Author Archives: Bill Wilder

About Bill Wilder

My professional developer persona: coder, blogger, community speaker, founder/leader of Boston Azure cloud user group, and Azure MVP

Talk: Building an Agent with Semantic Kernel

Today I attended and spoke at the 37th Boston Code Camp. The rainy weather was just enough to maximize attendance.

An incredibly energetic group of inquisitive people attended my talk, which was about how you can give your LLM a goal and some tools and let it figure out how to move ahead! Lots of questions came from this highly engaged group.

The details of my talk follow.

Building an AI Agent with Semantic Kernel

The classic approach to managing complexity is through abstraction. While also useful in the physical world (you can know how to use a “car” without needing to know about all the parts under the hood), it is an essential tool in software.

To program against current generative AI models you can use each vendor's native abstraction (its SDK). But there are other options too, one of which is Semantic Kernel, an open-source library from Microsoft.

In this talk we will work through the first-class abstractions Semantic Kernel offers, starting from the granular Function and building up to an Agent, with a couple of steps in between.

This talk will be a mix of explaining AI-relevant and Semantic Kernel-relevant topics + some explanatory sample code. We may also sneak in a little Prompty.

By the end of this talk you will appreciate why you might (or might not) want to build your AI solution with Semantic Kernel (SK) and how you would approach it.

This talk assumes you have used LLMs (like ChatGPT or others), know the very basics of iterating on prompts, and have experienced firsthand that GenAI systems can make decisions based on human language. Anything beyond this will be explained in the talk.
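For a flavor of the most granular of these abstractions, here is a minimal sketch of a KernelFunction built from a prompt. This is not taken from the sample repo below; the deployment name, endpoint, and environment variable are hypothetical placeholders.

```csharp
// Minimal Semantic Kernel sketch: a prompt-based KernelFunction.
// Deployment name, endpoint, and AOAI_KEY are hypothetical placeholders.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://YOUR-RESOURCE.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AOAI_KEY")!);
Kernel kernel = builder.Build();

// The Function is SK's most granular abstraction; this one wraps a prompt.
KernelFunction summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in one sentence: {{$input}}");

var result = await kernel.InvokeAsync(summarize,
    new() { ["input"] = "Semantic Kernel is an open-source library from Microsoft..." });
Console.WriteLine(result);
```

From here the talk builds upward: functions grouped into plugins, plugins handed to the model as tools, and ultimately an agent pursuing a goal.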

The sample application used in the talk can be found here:

https://github.com/semantickerneldev/icon-agent

The deck used in the talk can be found here:

Talk: Hello Semantic Kernel and Giving your AI a Goal

At Virtual Boston Azure tonight, Jason Haley and I teamed up to talk about ways generative AI can be integrated with your existing systems. Many existing enterprise software systems are written in C# or Java, both languages supported by Semantic Kernel. Semantic Kernel also supports Python, which is a great language, but all things being equal, using a language and technology stack already familiar to your team is attractive. So considering a library like Semantic Kernel is a productive angle when looking across the spectrum of AI tools.

Much of my talk focused on how to use Semantic Kernel (in C# and .NET 8) to give your AI a goal and have it solve it. The deck I presented and a recording of the talk follow. <I will likely update this post to link to the code used in the demo and other artifacts as they become available>
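For illustration, here is a hedged sketch of the general technique of giving your AI a goal in Semantic Kernel with C# (not the demo code from the talk): register tools, enable automatic function calling, and let the model decide which tools to invoke. The WeatherPlugin, endpoint, key, and deployment name are all hypothetical.

```csharp
// Hedged sketch: give the model a goal and tools, let it plan the calls.
// Endpoint, key, deployment name, and WeatherPlugin are all hypothetical.
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://YOUR-RESOURCE.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AOAI_KEY")!);
builder.Plugins.AddFromType<WeatherPlugin>();
Kernel kernel = builder.Build();

// Auto-invoke lets the model call GetTemperature on its own to meet the goal.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
var answer = await kernel.InvokePromptAsync(
    "Should I wear a coat in Boston today?", new(settings));
Console.WriteLine(answer);

// A hypothetical tool the model may choose to call.
public class WeatherPlugin
{
    [KernelFunction, Description("Gets the current temperature for a city.")]
    public string GetTemperature(string city) => $"It is 8C in {city}.";
}
```

Note that the prompt asks for an outcome, not a procedure; the model figures out it needs the temperature and calls the tool itself.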

Talk: Season of AI at Boston Azure

Last night at Boston Azure I teamed up with Jason Haley to cover the current Azure AI topics from the Microsoft-created Season of AI program. An engaged group showed up at NERD in Cambridge to hear all about it.

Jason Haley’s code and materials are here: https://jasonhaley.com/2024/06/25/boston-azure-june-2024/

For my part, I pulled content from Generative AI for .NET Developers and Getting Started with Azure AI Studio and blended in some of my own. The combined mega-deck is attached to this post, though the deck spans much more than I had time to go through.

If you attended and have not yet had the opportunity to give Microsoft some feedback, the survey has only a few quick questions.

Take the survey here: aka.ms/AttendeeSurveySeasonOfAI

Additional Resources

Also, compliments of the Season of AI team, check out these resources.

Join the Azure AI Community on Discord

Connect with fellow enthusiasts, engage with Microsoft experts and MVPs, discuss your favorite sessions, and delve into AI discussions. Your space to ask, share, and explore!

aka.ms/AzureAI/Discord

Get started skilling with AI on Microsoft Learn

Build AI skills, connect with the community, earn Microsoft Credentials, learn from experts, and take the Cloud Skills Challenge.

aka.ms/LearnAtAITour

Download Deck Bill Presented

A diagram we spent time on (slightly updated, source here):

And finally, the deck I used follows:

And here are a couple of the links I showed during the talk that got a lot of discussion or attention:

  1. https://platform.openai.com/tokenizer
  2. https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4

Recall that the third example shown, Telugu, was wildly more expensive in terms of token count than English (50 tokens) and Chinese (75 tokens): Telugu weighed in at 353 tokens.
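If you want to reproduce that comparison yourself, here's a small sketch, assuming the Microsoft.ML.Tokenizers package; the sample strings are my placeholders, not the exact texts from the talk, so counts will differ.

```csharp
// Hedged sketch: count tokens for roughly equivalent text in three languages.
// Assumes the Microsoft.ML.Tokenizers package; sample strings are placeholders.
using Microsoft.ML.Tokenizers;

var tokenizer = TiktokenTokenizer.CreateForModel("gpt-4");

string[] samples =
{
    "Hello, how are you doing today?",  // English: fewest tokens
    "你好，你今天过得怎么样？",              // Chinese: somewhat more
    "హలో, మీరు ఈ రోజు ఎలా ఉన్నారు?",     // Telugu: many more
};

foreach (var s in samples)
    Console.WriteLine($"{tokenizer.CountTokens(s),4} tokens: {s}");
```

The point carries over directly to cost: models are billed per token, so the same semantic content can cost several times more in languages the tokenizer handles less compactly.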

Talk: Top AI Highlights from Microsoft Build 2024 from Boston Azure

Last night at Virtual Boston Azure I teamed up with Jason and Veronika and the three of us covered some of the topics from Microsoft Build 2024 that we found most impactful and interesting.

Here’s a direct link to the video on the Boston Azure YouTube channel: https://www.youtube.com/watch?v=odwHlnk_tzI and the same video is embedded immediately below.

hello-ai: A Simple Demonstration of Azure OpenAI

I wrote some code demonstrating how to use Azure OpenAI to support the AI mini-workshop we ran for Virtual Boston Azure. I created versions in Python and C#.

This weekend I created a web front-end for it and deployed it as an Azure Static Web App, with an Azure Function hosting the refactored C# logic that executes the Azure OpenAI service calls.

The new app is running here: https://hello-ai.doingazure.com

You can find the source code here: https://github.com/codingoutloud/hello-ai

Note that while the additional grounding fails to stop all of the hallucinations, it does help with the most obvious one (so we are making progress), but there's more to be done.

Workshop: AI Mini-Workshop at Boston Azure

The March 28 Virtual Boston Azure was headlined by Pamela Fox from Microsoft. She explained all about the RAG pattern, which is commonly used for building effective applications based on Large Language Models (“LLMs”) and Generative AI (“GenAI”). Pamela shared many superb insights, with lots of depth, while answering a ton of interesting follow-up questions. It was a fantastic talk. Boston Azure has a YouTube channel at youtube.com/bostonazure where you can find recordings of many past events; Pamela Fox’s talk is available there as the 48th video posted to the channel.

After Pamela’s talk around 15 people stuck around to participate in our first ever “AI mini-workshop” hands-on experience. The remainder of this post is about that mini-workshop.

The AI mini-workshop was a facilitated hands-on coding experience with the following goals:

1. Demystify Azure OpenAI

As background, OpenAI’s ChatGPT burst onto the scene in November 2022. That led to an explosion of people learning about AI and associated technologies such as “LLMs,” the common shorthand for Large Language Models.

The vast majority of people interact with LLMs via chat interfaces such as OpenAI’s public version of ChatGPT or Copilot in Microsoft Bing search. There’s also a more integrated programming experience surfaced through GitHub Copilot for use with VS Code and several other popular IDEs.

But what about programming your own solution that uses an LLM? Microsoft has done a great job of providing an enterprise-grade version of the OpenAI LLM as a set of services known as Azure OpenAI.

The first goal of this AI mini-workshop was to demystify this programming experience.

This was accomplished by giving the mini-workshop participants a working C# or Python program that fit on a page. Only around 10 lines of meaningful code were needed to interact with the AI service. This is NOT that complex.
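For a sense of scale, here is a hedged sketch of that core interaction using the current Azure.AI.OpenAI client library; this is not the workshop's exact code, and the endpoint, key, and deployment name are hypothetical placeholders.

```csharp
// Hedged sketch: the core of calling Azure OpenAI is roughly this small.
// Endpoint, AOAI_KEY, and deployment name are hypothetical placeholders.
using Azure;
using Azure.AI.OpenAI;
using OpenAI.Chat;

var client = new AzureOpenAIClient(
    new Uri("https://YOUR-RESOURCE.openai.azure.com/"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AOAI_KEY")!));
ChatClient chat = client.GetChatClient("gpt-4o");

ChatCompletion completion = chat.CompleteChat(
    new SystemChatMessage("You are a concise assistant."),
    new UserChatMessage("In one sentence, what is Boston Azure?"));
Console.WriteLine(completion.Content[0].Text);
```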

Creating a production-grade application has additional requirements, but at its core, interacting with the Azure OpenAI service programmatically is straightforward.

The hoped-for “Aha!” moments were these:

Aha #1! I can do this! I can programmatically interact with the Azure OpenAI LLM. It isn’t that mysterious after all.

Aha #2! This is possible without much code! In the Python and C# solutions shared there were only around 10 lines of core code.

2. Understand Some AI Concepts

Part of the mini-workshop exercise was to recognize a hallucination and fix it through some additional grounding using a very simple form of RAG.

The hope here is for some “Aha!” moments:

Aha #3! Here’s a concrete, understandable example of a hallucination!

Aha #4! And here’s a concrete, simple example use of RAG pattern to better ground the AI so that it no longer hallucinates about today’s date! But do note that other hallucinations remain…
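To make that concrete, here is a hedged sketch of the kind of trivial grounding involved (not the workshop's exact code; endpoint, key, and deployment name are placeholders as before). The "retrieval" step is simply looking up today's date, the one fact the model lacks, and injecting it into the prompt.

```csharp
// Hedged sketch of trivial RAG-style grounding: the model cannot know
// today's date, so we retrieve it locally and ground the system prompt.
// Endpoint, AOAI_KEY, and deployment name are hypothetical placeholders.
using Azure;
using Azure.AI.OpenAI;
using OpenAI.Chat;

var client = new AzureOpenAIClient(
    new Uri("https://YOUR-RESOURCE.openai.azure.com/"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AOAI_KEY")!));
ChatClient chat = client.GetChatClient("gpt-4o");

string today = DateTime.Now.ToString("dddd, MMMM d, yyyy");
ChatCompletion completion = chat.CompleteChat(
    new SystemChatMessage($"Today's date is {today}."),  // the grounding
    new UserChatMessage("What is today's date?"));
Console.WriteLine(completion.Content[0].Text);           // grounded answer
```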

3. Wield Great Power

The ability to program an LLM to generate unique content is something that essentially NO DEVELOPER COULD DO, EVER, before the super-powerful LLMs that were developed at costs of hundreds of millions of dollars and democratized by the Microsoft Azure OpenAI service (as well as by OpenAI themselves).

The hands-on AI mini-workshop required either (a) a functional Python 3 environment, or (b) a functional C#/.NET environment – everything else was provided, including sufficient access to the Azure OpenAI LLM service to complete the mini-workshop.

But in the end, with very little coding, you can get to the 5th “Aha!” moment, which is:

Aha #5! I have at my command capabilities that have not been possible in all of the history of computers. The magic of LLMs available via Azure OpenAI gives me superpowers, and we are only at the very beginning of understanding the ways they can be put to use.


The source code for the AI mini-workshop is available here. Note that the API key has subsequently been rolled (invalidated), but the code works pretty well otherwise. 🙂

My original thinking was to distribute the keys separately (like this). If this had been an in-person workshop I would have kept the configuration values separate from the source, but given the added challenge of doing this with an online, distributed audience, I decided to simplify the mini-workshop by including the configuration values directly in the source code. Looking back, I believe it was a good concession to minimize obstacles to learning, so I’d do it again next time.

Talk: Boston Code Camp 36 – Meet GitHub Copilot, Your AI Coding Assistant!

23-Mar-2024

Always great to hang out with the greater Boston tech community. Today I attended and contributed a talk to Boston Code Camp 36 (the 36th edition of this event).

I made the trip with Maura (she gave a talk on blockchain) and we met a lot of cool people and had a great time.

I spoke on GitHub Copilot. Much of my talk was demo and discussion – you have to see this in action (or use it) to appreciate what’s happening. I consider this a glimpse into the future – it will surely become the norm to have an AI assistant when programming.

It is fun to have one AI 🤖 (GitHub Copilot) help us program another AI 🤖 (Azure OpenAI). 🤖 🤖 🤖 😀 After Copilot Chat explained that Azure OpenAI has no notion of “today,” we used Copilot to implement a trivial version of RAG to anchor the prompt to the current day.

We saw how agents like @workspace can explain a body of code and even help us figure out where to implement a new feature (such as the --joke command line param).

Another demo was to get Copilot to write unit tests for me. The demo gods were not helpful 😱 😱 😱 and I ran into an error. I moved on without fixing it since time was short; diagnosing it later, it turns out I had double-pasted (classic demo failure!), which caused the problem. We did use /tests to create unit tests, which were initially NUnit tests, but then we asked Copilot to recast them as xUnit tests, and then to create test cases more efficiently by using the InlineData attribute to consolidate similar cases (a sketch of that consolidation follows). We didn’t get to run the tests at the end, but hopefully the power of GitHub Copilot in helping to create unit tests came through.
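Here is a hedged sketch of what that InlineData consolidation looks like in xUnit; the Joke class is a hypothetical stand-in for the demo's code, not what we wrote on stage.

```csharp
// Hedged sketch: one xUnit [Theory] with [InlineData] rows replaces several
// near-duplicate [Fact] tests. The Joke class is a hypothetical stand-in.
using Xunit;

public static class Joke
{
    public static bool IsTellable(string? text) =>
        !string.IsNullOrWhiteSpace(text);
}

public class JokeTests
{
    [Theory]
    [InlineData("Why did the chicken cross the road?", true)]
    [InlineData("", false)]
    [InlineData("   ", false)]
    public void IsTellable_HandlesTypicalAndEdgeCases(string joke, bool expected)
    {
        Assert.Equal(expected, Joke.IsTellable(joke));
    }
}
```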

I also had the opportunity to hang out with some smart soon-to-be graduates of my alma mater, the University of Massachusetts at Boston (among them Rohini Deshmukh, Master’s in Information Technology; Kunal Sahjwani, Master’s in Information Technology; and Shounak Kulkarni, Master’s in Business Analytics). Chatting with these very smart and capable technologists, analysts, and future leaders, it’s great to see our profession is in such good hands.

Here is the published talk abstract for the talk I delivered – and though much of the session was demos, the PowerPoint deck is attached after the abstract.

Meet GitHub Copilot, Your AI Coding Assistant

Imagine an assistant who anticipates your needs as you code, handling mundane and time-consuming steps, allowing you to focus on more complex challenges (the fun stuff). Allow me to introduce you to GitHub Copilot.

GitHub Copilot is an AI-powered coding assistant that integrates with your developer IDE adding many powerful productivity features. Backed by the same OpenAI Large Language Model (LLM) behind ChatGPT, it has an uncanny ability to suggest code snippets that you were about to type in. But suggesting code snippets is just the beginning.

In this demo-heavy talk, we’ll show usage basics, distinguish scenarios where it excels vs. some it finds challenging, and point out a few common anti-patterns so you can avoid them.

Since it is still early days, big features are still showing up at a fast clip, so we’ll highlight some recent features and some others just emerging. At the end we’ll squeeze in just a bit of prognosticating about what it might mean for the future of software development.

As you’ll learn in more depth during this session, the promise of GitHub Copilot is to help you be more productive – go faster, stay in flow, build even more robust software. We are not fully there but we are on the way. This imperfect tool is still a game changer.

I believe the rise of AI coding assistants is unstoppable at this point – and that’s a good thing. By the end of this session, you might agree. And maybe you’ll join the revolution early.

Talk: Orlando Code Camp – Meet GitHub Copilot

24-Feb-2024

Had a great time hanging out with the Orlando tech community at their annual Code Camp. I made the trip with Maura (she gave a talk on blockchain) and we met a lot of cool people and had a great time.

I spoke on GitHub Copilot. Much of my talk was demo and discussion – you have to see this in action (or use it) to appreciate what’s happening. I consider this a glimpse into the future – it will surely become the norm to have an AI assistant when programming.

One part of the demo showed using one AI (GitHub Copilot) to help program another AI (Azure OpenAI).

It is invigorating to engage with a vibrant community of technologists. Thank you Orlando Code Camp organizers, sponsors, and all those in the tech community!

The deck I used in the talk is both attached for download and available on slideshare.

Talk: Meet GitHub Copilot, your AI Coding Assistant at Granite State Code Camp 02-Dec-2023

My talk description:

According to legend, programmers back in the stone age would write code without IntelliSense and refactoring tools. The next generation of developers will wonder how our generation got anything done without AI-powered assistants. If you don’t know what GitHub Copilot is all about then come on by to get a glimpse of the future.

In this fast-paced demo-heavy talk we will see how you can go faster, stay in flow, and maybe even do more (unit tests anyone?) with GitHub Copilot, which became commercially available during 2023. Along the way we’ll learn to talk like an AI nerd by explaining and examining terms like “prompt engineering” (how to get Copilot to do what we really want), prompts vs. suggestions, what is a “conversational AI”, what do you mean by “non-deterministic”, and how does this relate to ChatGPT (and its underlying LLM). And hallucinations. All will be explained.

The deck I used:

Talk: Exploring DORA! at Boston Code Camp 35

(I gave two talks at this event – the other one was on GitHub Copilot.)

Always great to engage with the OG Code Camp crew at Boston Code Camp. My talk description:

Haven’t heard about DORA yet? You will.

The annual DevOps Research and Assessment Report — affectionately known as the “DORA Report” — is a data-driven, research-backed set of practices and metrics that will make engineers happier and more productive while improving not just dev, ops, and security outcomes, but also business outcomes. DORA also tends to shine a light on practices common within teams that are measurably more effective than industry averages – examples will be drawn from cloud technologies, automation, and security outcomes.

In this talk we’ll explore the DORA report as a data-driven toolbox for helping you “get better at getting better” in the software delivery realm. We will give some historical context, then zoom in on findings from the recently published 2023 report. The goal is for you to leave this talk with an overall appreciation of the breadth of coverage and the impact of DORA as well as some specific metrics and capabilities you’ll want to focus on to level up your own teams.

The deck I used: