October Azure Cloud Events in Boston Area

Are you interested in Cloud Computing generally, or specifically Cloud Computing using the Windows Azure Platform? Listed below are the upcoming Azure-related events in the Greater Boston area which you can attend in person and for FREE (or low cost).

Since this summary page is – by necessity – a point-in-time SNAPSHOT of what I see is going on, it will not necessarily be updated when event details change. So please always double-check with official event information!

Updates: Added Boston Tech event + more details on Boston Azure

Know of any more cloud events of interest to the Windows Azure community? Have any more information or corrections on the events listed? Please let us know in the comments.

Events are listed in the order in which they will occur.

October Events

1. Mongo Boston

  • when: Mon 03-Oct-2011, 9:00 AM – 5:00 PM
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Provided
  • cost: $30
  • what: The main Azure-related content is a talk by Jim O’Neil on using Mongo with the Windows Azure Platform – from the published program description: “MongoDB in the Cloud, Jim O’Neil – Developer Evangelist, Microsoft: MongoDB is synonymous with scale and performance, and, hey, so is cloud computing! It’s peanut butter and chocolate all over again as we take a look at why you might consider running MongoDB in the cloud in general and also look at the alpha release of MongoDB on Azure, a collaboration from 10gen and Microsoft.”
  • more info: http://www.10gen.com/events/mongo-boston-2011
  • register: http://www.10gen.com/events/mongo-boston-2011
  • twitter: @mongodb

2. Cloud Camp

  • when: Thu 06-Oct-2011, 5:15 – 8:30 PM (then after-party)
  • where: CloudCamp Boston #5 is co-located with the OpenStack Design Summit. Intercontinental Hotel, 510 Atlantic Ave, Salon A (between Congress St & Fort Hill Wharf), Boston, MA 02210
  • wifi: (not sure)
  • food: (not sure, though food and beer were offered last time)
  • cost:
  • what: (from the event description on cloudcamp.org) “CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas.  With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions.  At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing.  End users, IT professionals and vendors are all encouraged to participate.”
  • more info: http://www.cloudcamp.org/boston
  • register: here
  • twitter: (not sure)

3. Boston Tech: What Can Cloud Do for YOU?

  • when: Mon 24-Oct-2011, 6:30 – 8:30 PM
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Pizza and drinks will be provided
  • cost: FREE
  • what: “You’ve probably seen the “to the cloud” commercial and are aware of the hype
    that makes cloud computing sound like the next best thing since sliced bread,
    but do you really know what cloud computing is? And what it’s not? When does it
    make sense? And when doesn’t it? What does it mean to us as software developers,
    startup entrepreneurs, and end-users? And how do you sort through all of the
    vendors and offerings to determine whose cloud portfolio offers the most value
    to you? We’ll look at all of these questions and more as we spend the evening
    navigating through the cloudscape.” (text taken from the Meetup listing)
  • more info: http://www.meetup.com/BostonTech/events/33357482/
  • register: http://www.meetup.com/BostonTech/events/33357482/
  • twitter:

4. Boston Azure User Group meeting: Cloud Architecture Patterns

  • when: Thu 27-Oct-2011, 6:00 – 8:30 PM
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Pizza and drinks will be provided
  • cost: FREE
  • what: Featured talk: “There are some big ideas in software architecture that are particularly relevant for cloud platforms. In this talk we will introduce a few of these big ideas – eventual consistency, scale out, and design for failure – and discuss the implications of these big ideas on cloud application architecture generally, with specific examples of useful patterns and services drawn from the Windows Azure Platform.” There will also be a shorter opening topic.
  • more info: See our (new) Boston Azure Meetup.com site for more info
  • register: http://www.meetup.com/bostonazure/events/35904052/
  • twitter: #bostonazure

5. New England Code Camp #16

While not strictly an Azure-only event, there will be Azure content at this community-driven event. Hope to see you there!

Omissions? Corrections? Comments? Please leave a comment or reply on the Twitters!

Vermont Code Camp III

Along with Maura Wilder and Joan Wortman, I made the trek to Vermont from Boston to hang out with the cool kids at Vermont Code Camp III. The three of us gave talks and attended a bunch of excellent sessions. For my part, I attended talks on Hadoop, Visual Studio tools for Unit Testing, EF, software consulting, and Maura and Joan’s talk Introduction to the Ext JS JavaScript framework “for Rich Apps in Every Browser” (after which I admit I was convinced that this is a framework to take seriously – very impressive).

I presented a talk in the morning called Cloud Architecture Patterns for Mere Mortals (with Examples in Windows Azure). If you are interested, my slide deck is attached: Vermont Code Camp III – Cloud Architecture Patterns for Mere Mortals – Bill Wilder – 10-Sept-2011 (also available on Slideshare)

Also, you are all invited to the (free) Boston Azure Bootcamp to be held in the Boston area (Cambridge, MA) on Friday September 30 and Saturday October 1. Sign up here, and please help spread the word. Hope to see some Vermont Code Camp friends there! Let me know if you have a strong desire to “couch surf”, especially on the middle night, and I’ll see if I can help out. Tickets won’t last forever, so I encourage you to sign up sooner than later.

Thank you to all the Vermont Code Camp III organizers, volunteers, and sponsors – like last year, this was an inspired event and I’m glad I made the trip. Find them on Twitter at @VTCodeCamp.

A handful of Vermont Code Camp photos follow… (and a couple from Sunday night on Church Street in Burlington)


Azure FAQ: How frequently is the clock on my Windows Azure VM synchronized?

Q. How often do Windows Azure VMs synchronize their internal clocks to ensure they are keeping accurate time?

A. This basic question comes up occasionally, usually when there is concern around correlating timestamps across instances, such as for log files or business events.  Over time, like mechanical clocks, computer clocks can drift, with virtual machines (especially when sharing cores) affected even more. (This is not specific to Microsoft technologies; for example, it is apparently an annoying issue on Linux VMs.)

I can’t find any official stats on how much drift happens generally (though some data is out there), but the question at hand is what to do to minimize it. Specifically, on Windows Azure Virtual Machines (VMs) – including Web Role, Worker Role, and VM Role – how is this handled?

According to this Word document – which specifies the “MICROSOFT ONLINE SERVICES USE RIGHTS SUPPLEMENTAL LICENSE TERMS, MICROSOFT WINDOWS SERVER 2008 R2 (FOR USE WITH WINDOWS AZURE)” – the answer is once a week. (Note: the title above includes “Windows Server 2008 R2” – I don’t know for sure if the exact same policies apply to the older Windows Server 2008 SP2, but would guess that they do.)

Here is the full quote, in the context of which services you can expect will be running on your VM in Windows Azure:

Windows Time Service. This service synchronizes with time.windows.com once a week to provide your computer with the correct time. You can turn this feature off or choose your preferred time source within the Date and Time Control Panel applet. The connection uses standard NTP protocol.

So Windows Azure roles use the time service at time.windows.com to keep their local clocks up to snuff.  This service uses the venerable Network Time Protocol (NTP), described most recently in RFC 5905.

UDP Challenges

The documentation around NTP indicates it is based on the User Datagram Protocol (UDP). While Windows Azure roles do not currently let you build network services that expose UDP endpoints (though you can vote up the feature request here!), the restriction does not apply in the other direction: Windows Azure roles are able to make outbound UDP requests, though generally only to services within the Azure data center. This is how some of the key internet plumbing based on UDP still works, such as Domain Name System (DNS) lookups and – of course – time synchronization via NTP.

This may lead to some confusion, since UDP endpoint support is limited while NTP is already provided.

The document cited above mentions you can “choose your preferred time source” if you don’t want to use time.windows.com. There are other sources from which you can update the time of a computer using NTP, such as free options from the National Institute of Standards and Technology (NIST).

Here are the current NTP Server offerings as seen in the Control Panel on a running Windows Azure Role VM (logged in using Remote Desktop Connection). The list includes time.windows.com and four options from NIST.

Interestingly, when I manually changed the time on my Azure role through a Remote Desktop session, any change I made was immediately corrected. I am not sure whether an automatic NTP correction fires after every manual time change, but my guess is that something else was going on, since the advertised time of the next NTP sync did not change in response.

Choosing a different NTP Server did not always succeed (the sync request sometimes timed out), but I did see it succeed, as in the following:

The interesting part of seeing any successful sync with time.nist.gov is that it implies UDP traffic leaving and re-entering the Windows Azure data center. This, in general, is just not allowed – all UDP traffic leaving or entering the data center is blocked (unless you use a VM Role with Windows Azure Connect). To prove this for yourself another way, configure your Azure role VM to use a DNS server which is outside of the Azure data center; all subsequent DNS resolution will fail.

If “weekly” is Not Enough

If the weekly synchronization frequency is somehow inadequate, you could write a Startup Task to adjust the frequency to, say, daily. This can be done via the Windows Registry (full details here including all the registry settings and some tools, plus there is a very focused summary here giving you just the one registry entry to tweak for most cases).
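As one hedged illustration (not taken from the linked articles), here is a minimal C# sketch of that registry tweak; SpecialPollInterval is the standard W32Time polling interval, in seconds, and the Windows Time service may need a restart before it picks up the change. Run it elevated, for example from a Startup Task with executionContext=”elevated”:

using Microsoft.Win32;

// Minimal sketch: set the W32Time NTP poll interval to daily (86,400 seconds).
class SetNtpPollInterval
{
    static void Main()
    {
        const string keyPath =
            @"SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient";
        using (var key = Registry.LocalMachine.OpenSubKey(keyPath, writable: true))
        {
            // SpecialPollInterval is in seconds; 86400 = once per day
            // (versus the weekly default described above).
            key.SetValue("SpecialPollInterval", 86400, RegistryValueKind.DWord);
        }
    }
}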

How frequent is too frequent? Not sure about time.windows.com, but time.nist.gov warns:

All users should ensure that their software NEVER queries a server more frequently than once every 4 seconds. Systems that exceed this rate will be refused service. In extreme cases, systems that exceed this limit may be considered as attempting a denial-of-service attack.

Of further interest, check out the NIST Time Server Status descriptions:

Name              IP Address      Location                        Status
time-a.nist.gov   129.6.15.28     NIST, Gaithersburg, Maryland    ntp ok; time, daytime busy; not recommended
time-b.nist.gov   129.6.15.29     NIST, Gaithersburg, Maryland    ntp ok; time, daytime busy; not recommended
time-nw.nist.gov  131.107.13.100  Microsoft, Redmond, Washington  ntp, time ok; daytime busy; not recommended
time.nist.gov     192.43.244.18   NCAR, Boulder, Colorado         All services busy; not recommended

They recommend against using any of the servers, at least at the moment I grabbed these Status values from their web site.  I find this amusing since – other than the default time.windows.com – these are the only four servers offered as alternatives in the User Interface of the Control Panel applet. As I mentioned above, sometimes these servers timed out on an on-demand NTP sync request I issued through the applet user interface; this may explain why.

It may be possible to use a commercial NTP service, but I don’t know if the Windows Server 2008 R2 configuration supports it (at least I did not see it in the user interface), and if there was a way to specify it (such as in the registry), I am not sure that the Windows Azure data center will allow the UDP traffic to that third-party host. (They may – I just don’t know. They do appear to allow UDP requests/responses to NIST servers. Not sure if this is a firewall/proxy rule, and if so, is it for NTP, or just NTP to NIST?)

And – for the (good kind of) hacker in you – if you want to play around with accessing an NTP service from code, check out this open source C# code.
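For a flavor of the protocol itself, here is a minimal, illustrative SNTP query in C# – a sketch that ignores the fractional-seconds field and all error handling:

using System;
using System.Net.Sockets;

// Illustrative SNTP client query (RFC 5905 wire format) - a sketch, not production code.
class SntpQuery
{
    static void Main()
    {
        var packet = new byte[48];
        packet[0] = 0x1B; // LI = 0, Version = 3, Mode = 3 (client)

        using (var socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Dgram, ProtocolType.Udp))
        {
            socket.ReceiveTimeout = 3000; // ms; servers may be busy or unreachable
            socket.Connect("time.windows.com", 123); // NTP is UDP port 123
            socket.Send(packet);
            socket.Receive(packet); // reply reuses the same 48-byte layout
        }

        // Transmit timestamp: big-endian seconds since 1-Jan-1900, at offset 40
        // (the 32-bit fractional part is ignored here for simplicity).
        uint seconds = (uint)((packet[40] << 24) | (packet[41] << 16)
                            | (packet[42] << 8) | packet[43]);
        DateTime utc = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                           .AddSeconds(seconds);
        Console.WriteLine("Server time (UTC): {0:o}", utc);
    }
}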

Is this useful? Did I leave out something interesting or get something wrong? Please let me know in the comments! Think other people might be interested? Spread the word!

Four tips for developing Windows Services more efficiently

Are you building Windows Services?

I recently did some work with Windows Services, and since it had been rather a long while since I’d done so, I had to recall a couple of tips and tricks from the depths of my memory in order to get my “edit, run, test” cycle to be efficient. The singular challenge for me was quickly getting into a debuggable state with the service. How I did this is described below.

Does Windows Azure support Windows Services?

First, a trivia question…

Trivia Question: Does Windows Azure allow you to deploy your Windows Services as part of your application or cloud-hosted service?

Short Answer: Windows Azure is more than happy to run your Windows Services! While a more native approach is to use a Worker Role, a Windows Service can surely be deployed as well, and there are some very good use cases to recommend them.

More Detailed Answer: One good use case for deploying a Windows Service: you have legacy services and want to use the same binary on-premises and on Azure. Maybe you are doing something fancy with Azure VM Roles. These are valid examples. In general – for something only targeting Azure – a Worker Role will be easier to build and debug. If you are trying to share code across a legacy Windows Service and a shiny new Windows Azure Worker Role, consider a good software engineering practice you may want to follow anyway: factor out the “business logic” into its own class(es) and invoke it with just a few lines of code from either host (or a console app, a Web Service, a unit test (ahem), etc.), as sketched below.
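Here is a minimal sketch of that factoring (the class and method names are hypothetical, not from any particular project):

using System.ServiceProcess;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class OrderProcessor                  // the factored-out "business logic"
{
    public void ProcessPendingWork() { /* ... real work goes here ... */ }
}

public class MyWindowsService : ServiceBase  // host 1: the legacy Windows Service
{
    private Thread worker;

    protected override void OnStart(string[] args)
    {
        // OnStart must return promptly, so do the work on a background thread.
        worker = new Thread(() => new OrderProcessor().ProcessPendingWork());
        worker.IsBackground = true;
        worker.Start();
    }
}

public class WorkerRole : RoleEntryPoint     // host 2: the Azure Worker Role
{
    public override void Run()
    {
        var processor = new OrderProcessor();
        while (true)
        {
            processor.ProcessPendingWork();
            Thread.Sleep(1000); // simple pacing, just for the sketch
        }
    }
}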

Windows Services != Web Services

Most readers will already understand and realize this, but just to be clear, a Windows Service is not the same as a Web Service. This post is not about Web Services. However, Windows Azure is a full-service platform, so of course has great support for not only Windows Services but also Web Services. Windows Communication Foundation (WCF) is a popular choice for implementing Web Services on Windows Azure, though other libraries work fine too – including in non-.NET languages and platforms like Java.

Now, on to the main topic at hand…

Why is Developing with Windows Services Slower?

Developing with Windows Services is slower than some other types of applications for a couple of reasons:

  • It is harder to stop in the Debugger from Visual Studio. This is because a Windows Service does not want to be started by Visual Studio, but rather by the Service Control Manager (the “scm” for short – pronounced “the scum”). This is an external program.
  • Before being started, Windows Services need to be installed.
  • Before being installed, Windows Services need to be uninstalled (if already installed).

Tip 1: Add Services applet as a shortcut

I find myself using the Services applet frequently to see which Windows Services are running, and to start/stop and other functions. So create a shortcut to it. The name of the Microsoft Management Console snapin is services.msc and you can expect to find it in Windows/System32, such as here: C:\Windows\System32\services.msc

A good use of the Services applet is to find out the Service name of a Windows Service. This is not the same as the Windows Service’s Display name that you see in the Name column. For example, see the Windows Time service properties – note that W32Time is the real name of the service.
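If you prefer code to the applet, here is a quick C# sketch (my illustration) that lists both names side by side using the ServiceController class:

using System;
using System.ServiceProcess;

// Quick sketch: list service names next to display names.
class ListServices
{
    static void Main()
    {
        foreach (ServiceController sc in ServiceController.GetServices())
        {
            // ServiceName is what net stop / sc delete care about;
            // DisplayName is the friendly text shown in the Services applet.
            Console.WriteLine("{0,-30} {1}", sc.ServiceName, sc.DisplayName);
        }
    }
}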

Tip 2: Use Pre-Build Event in Visual Studio

Visual Studio projects have the ability to run commands for you before and after the regular compilation steps. These are known as Build Events and there are two types: Pre-build events and Post-build events. These Build Events can be accessed from your Project’s properties page, on the Build Events side-tab. Let’s start with the Pre-build event.

Use this event to make sure there are no traces of the Windows Service installed on your computer. Depending on where you install your services from (see Tip 3), you may find that you can’t even recompile your service until you’ve at least stopped it; this smooths out that situation, and goes beyond it to make the usual steps happen faster than you can type.

One way to do this is to write a command file –  undeploy-service.cmd – and invoke it as a Pre-build event as follows:

undeploy-service.cmd

You will need to make sure undeploy-service.cmd is in your path, of course, or else you could invoke it with the path, as in c:\tools\undeploy-service.cmd.

The contents of undeploy-service.cmd can be hard-coded to undeploy the service(s) you are building every time, or you can pass parameters to modularize it. Here, I hard-code for simplicity (and since this is the more common case).

set ServiceName=NameOfMyService
net stop %ServiceName%
:: installutil /u expects the path to the service assembly (.exe) - adjust this line if the service name alone does not resolve to your built executable
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u %ServiceName%
sc delete %ServiceName%
exit /b 0

Here is what the commands each do:
  1. Set a reusable variable to the name of my service (set ServiceName=NameOfMyService)
  2. Stop it, if it is running (net stop)
  3. Uninstall it (installutil.exe /u)
  4. If the service is still around at this point, ask the SCM to nuke it (sc delete)
  5. Return from this .cmd file with a success status so that Visual Studio won’t think the Pre-Build event ended with an error (exit /b 0 => that’s a zero on the end)
In practice, you should not need all the horsepower in steps 2, 3, and 4 since each of them does what the prior one does, plus more. They are increasingly powerful. I include them all for completeness and your consideration as to which you’d like to use – depending on how “orderly” you’d like to be.

Tip 3: Use Post-Build Event in Visual Studio

Use this event to install the service and start it up right away. We’ll need another command file – deploy-service.cmd – to invoke as a Post-build event as follows:

deploy-service.cmd $(TargetPath)

What is $(TargetPath) you might wonder. This is a Visual Studio build macro which will be expanded to the full path to the executable – e.g., c:\foo\bin\debug\MyService.exe will be passed into deploy-service.cmd as the first parameter.  This is helpful so that deploy-service.cmd doesn’t need to know where your executable lives. (Visual Studio build macros may also come in handy in your undeploy script from Tip 2.)

Within deploy-service.cmd you can either copy the service executables to another location, or install the service inline. If you copy the service elsewhere, be sure to copy needed dependencies, including debugging support (*.pdb). Here is what deploy-service.cmd might contain:

set ServiceName=NameOfMyService
set ServiceExe=%1
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe %ServiceExe%
net start %ServiceName%

Here is what the commands each do:
  1. Set a reusable variable to the name of my service (set ServiceName=NameOfMyService)
  2. Set a reusable variable to the path to the executable (passed in via the expanded $(TargetPath) macro)
  3. Install it (installutil.exe)
  4. Start it (net start)
Note that net start will not be necessary if your Windows Service is designed to start automatically upon installation. That is specified through a simple property if you build with the standard .NET template.
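For illustration, here is a hedged sketch of what that installer class might look like if you build with the standard .NET Windows Service template (NameOfMyService is the placeholder name from the scripts above; the AfterInstall handler is one way to start the service immediately after installation, making the net start line unnecessary):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalService  // pick the account your service needs
        };
        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "NameOfMyService",        // matches the scripts above
            StartType = ServiceStartMode.Automatic  // the "simple property"
        };

        // Start the service right after installation completes.
        serviceInstaller.AfterInstall += (sender, args) =>
            new ServiceController(serviceInstaller.ServiceName).Start();

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}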

Tip 4: Use System.Diagnostics.Debugger in your code

If you follow Tip 2 when you build, you will have no trouble building. If you follow Tip 3, your code will immediately begin executing, ready for debugging. But how to get it into the debugger? You can manually attach a debugger to the running process, such as through Visual Studio’s Debug menu with the Attach to Process… option.

I find it is often more productive to drop a directive right into my code, as in the following:

void Foo()
{
    int x = 1;
    System.Diagnostics.Debugger.Launch(); // use this…
    System.Diagnostics.Debugger.Break();  // … or this — but not both
}

System.Diagnostics.Debugger.Launch will launch a debugger session once execution hits that line of code, and System.Diagnostics.Debugger.Break will break on that line. They are both useful, but you only need one of them – you don’t need them both – I only show both here for illustrative purposes. (I have seen problems with .NET 4.0 when using Break, but I am not sure whether .NET 4.0 or Break is the real culprit. I have not experienced any issues with Launch.)

This is the fastest way I know of to get into a debugging mood when developing Windows Services. Hope it helps!

Quick: How many 9s are in your SLA?

I recently attended an event where one of the speakers was the CTO of a company built on top of Amazon cloud services, the most critical of these being the Simple Storage Service known as Amazon S3.

The S3 service runs “out there” (in the cloud) and provides a scalable repository for applications to store and manage data files. The service can support files of any size, as well as any quantity. So you can put as much stuff up there as you want – and since it is a pay-as-you-go service, you pay for what you use. The S3 service is very popular. An example of a well-known customer, according to Wikipedia, is SmugMug:

Photo hosting service SmugMug has used S3 since April 2006. They experienced a number of initial outages and slowdowns, but after one year they described it as being “considerably more reliable than our own internal storage” and claimed to have saved almost $1 million in storage costs.

Good stuff.

Of course, Amazon isn’t the only cloud vendor with such an offering. Google offers Google Storage, and Microsoft offers Windows Azure Blob Storage; both offer features and capabilities very similar to those of S3. While Amazon was the first to market, all three services are now mature, and all three companies are experts at building internet-scale systems and high-volume data storage platforms.

As I mentioned above, S3 came up during a talk I attended. The speaker – CTO of a company built entirely on Amazon services – twice touted S3’s incredibly strong Service Level Agreement (SLA). He said this was both a competitive differentiator for his company, and also a competitive differentiator for Amazon versus other cloud vendors.

Pause and think for a moment – any idea? – What is the SLA for S3? How about Google Storage? How about Windows Azure Blob Storage?

Before I give away the answer, let me remind you that a Service Level Agreement (SLA) is a written policy offered by the service provider (Amazon, Google, and Microsoft in this case) that describes the level of service being offered, how it is measured, and consequences if it is not met. Usually, the “level of service” part relates to uptime and is measured in “nines” as in 99.9% (“three nines”) and so forth. More nines is better, in general – and wikipedia offers a handy chart translating the number of nines into aggregate downtime/unavailability. (More generally, an SLA also deals with other factors – like refunds to customers if expectations are not met, what speed to expect, limitations, and more. I will focus only on the “nines” here.)
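To make the nines concrete, here is a small C# sketch (my own illustration, not from any vendor’s documentation) that turns an uptime percentage into allowed downtime per year:

using System;

// Back-of-the-envelope: translate an uptime percentage into downtime per year.
class NinesMath
{
    static void Main()
    {
        double secondsPerYear = 365 * 24 * 3600; // 31,536,000
        foreach (double uptimePercent in new[] { 99.9, 99.99, 99.999, 99.999999999 })
        {
            double downtimeSeconds = secondsPerYear * (1 - uptimePercent / 100.0);
            Console.WriteLine("{0}% uptime allows {1} seconds of downtime per year",
                              uptimePercent, downtimeSeconds);
        }
    }
}

Three nines (99.9%) works out to roughly 8.8 hours of downtime per year; eleven nines would allow well under a millisecond.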

So… back to the question… For S3 and equivalent services from other vendors, how many nines are in the Amazon, Google, and Microsoft SLAs? The speaker at the talk said that S3 had an uptime SLA with 11 9s. Let me say that again – eleven nines – or 99.999999999% uptime. If you attempt to look this up in the chart mentioned above, you will find this number is literally “off the chart” – the chart doesn’t go past six nines! And my back-of-the-envelope calculation (31,536,000 seconds in a year × a downtime fraction of 0.00000000001) says it amounts to – on average – roughly a third of a millisecond of downtime per year. A blink of your eye takes a few hundred milliseconds, so this is on the order of a thousandth of an eye-blink.

This is an impressive number! If only it were true. It turns out the real SLA for Amazon S3 has exactly as many nines as the SLA for Windows Azure Blob Storage and the SLA for Google Storage: they are all 99.9%.

Storage SLAs for Amazon, Google, and Microsoft all have exactly the same number of nines: they are all 99.9%. That’s three nines.

I am not picking on the CTO I heard gushing about the (non-existent) eleven-nines SLA. (In fact, his or her identity is irrelevant to the overall discussion here.) The more interesting part to me is the impressive reality distortion field around Amazon and its platform’s capabilities. The CTO I heard speak got it wrong, but this is not the first time the number has been misinterpreted as an SLA, and it won’t be the last.

I tracked down the origin of the eleven nines. Amazon CTO Werner Vogels mentions in a blog post that the S3 service is “design[ed]” for “99.999999999% durability” – choosing his words carefully. Consistent with Vogels’ language is the following Amazon FAQ on the same topic:

Q: How durable is Amazon S3? Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

First of all, these mentions are a comment on a blog and an item in an FAQ page; neither is from a company SLA. And second, they both speak to durability of objects – not uptime or availability. And third, also critically, they say “designed” for all those nines – but guarantee nothing of the sort. Even still, it is a bold statement. And good marketing.

It is nice that Amazon can have so much confidence in their S3 design. I did not find a comparable statement about confidence in the design of their compute infrastructure… The reality is that [cloud] services are about more than design and architecture – they are also about implementation, operations, management, and more. To have any hope, architecture and design need to be solid, of course, but alone they cannot prevent a general service outage which could take your site down with it (and even still lose data occasionally). Some others on the interwebs are as skeptical as I am, not just of Amazon, but of anyone claiming too many nines.

How about the actual 99.9% “three-nines” SLA? Be careful in your expectations. As a wise man once told me, there’s a reason they are called Service Level Agreements, rather than Service Level Guarantees. There are no guarantees here.

This isn’t to pick on Amazon – other vendors have had – and will have – interruptions in service. For most companies, the cloud will still be the most cost-effective and reliable way to host your applications; few companies can compete with the big platform cloud vendors for expertise, focus, reliability, security, economies-of-scale, and efficiency. It is only a matter of time before you are there. Today, your competitors (known and unknown) are moving there already. As a wise man once told me (citing Crossing the Chasm), the innovators and early adopters are those companies willing to trade off risk for competitive advantage. You saw it here first: this Internet thing is going to stick around for a while. Yes, and cloud services will just make too much sense to ignore. You will be on the cloud; it is only a matter of where you’ll be on the curve.

Back to all those nines… Of course, Amazon has done nothing wrong here. I see nothing inaccurate or deceptive in their documentation. But those of us in the community need to pay closer attention to what is really being described.  So here’s a small favor I ask of this technology community I am part of: Let’s please do our homework so that when we discuss and compare the cloud platforms – on blogs, when giving talks, or chatting 1:1 – we can at least keep the discussions based on facts.

July Boston Azure User Group – Recap

The July Boston Azure User Group meeting had a tough act to follow: the June meeting included a live, energy-packed Rock, Paper, Azure hacking contest hosted by Jim O’Neil! The winners were chosen completely objectively since the Rock, Paper, Azure server managed the whole competition. First prize was taken by two teenagers (Kevin Wilder and T.J. Wilder) whose entry beat out around 10 others (including a number of professional programmers!).

This month’s July Boston Azure User Group meeting was up for the challenge.

Hope to see you at the Boston Azure meeting in August (Windows Phone 7 + Azure), two meetings in September (one in Waltham (first time EVER), and the “usual” one at NERD), and then kicking off a two-day Boston Azure Bootcamp!

Details on ALL upcoming Boston-area events of interest to Azure folks (that I know about) can be found in this blog post about Boston-events in August and September. Those hosted by Boston Azure are also at www.bostonazure.org and the upcoming events page.

August and September Azure Cloud Events in Boston Area

Are you interested in Cloud Computing generally, or specifically Cloud Computing using the Windows Azure Platform? Listed below are the upcoming Azure-related events in the Greater Boston area which you can attend in person and for FREE.

[Note – this post originally was mis-titled to say July and August instead of the correct August and September. I have not changed its URL, but did fix the title.]

Since this summary page is – by necessity – a point-in-time SNAPSHOT of what I see is going on, it will not necessarily be updated when event details change. So please always double-check with official event information!

Know of any more cloud events of interest to the Windows Azure community? Have any more information or corrections on the events listed? Please let us know in the comments.

Events are listed in the order in which they will occur.

August Events

1. Boston Azure User Group meeting: Special Guest John Garland on Windows Phone

  • when: Thu 25-Aug-2011, 6:30 – 8:30 PM
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Pizza and drinks will be provided
  • cost: FREE
  • what: Windows Phone 7 expert John Garland (a Senior Consultant at Wintellect) is the featured speaker. John’s presentation will show how the Windows Azure Toolkit for Windows Phone 7 can be used to quickly create Azure-enabled applications for the Windows Phone platform.  This talk will also include a discussion of some of the new features available in the Windows Phone “Mango” release due later this Fall (update), and how they can be used to further enhance the experience of working with Azure-based applications.
  • more info: See the Boston Azure cloud user group site for more info, or join the (low volume) Boston Azure mailing list to keep most up to date
  • register: here
  • twitter: #bostonazure

September Events

2. Vermont Code Camp 2011

While not strictly a “Boston-area” event, this may be of interest still. I attended Vermont Code Camp 2010 as both an attendee (hitting lots of great sessions) and as a speaker (spoke about Azure, of course). There was a great deal of buzz and energy at the event. There was also major swag – some really good stuff. I don’t know what this year will hold, but they did set a pretty high bar last year across the board. I will be attending again this year (and have proposed a talk: Applying Architecture Patterns for Scalability and Reliability to the Windows Azure Cloud Platform). Hope to see you there!

  • when: Saturday, September 10, 2011, 8 AM – 6 PM
  • where: Kalkin Hall on the University of Vermont campus in Burlington, VT
  • wifi: (I think so)
  • food: (Pretty sure)
  • cost: FREE
  • what: It’s a Code Camp! (from http://vtcodecamp.org/):  “Last year’s event had four rooms with sessions on .NET, PHP, Ruby, Python, and more. Two of the rooms had .NET topics and another had sessions on free/open source software. There was a fourth room where developers were introduced to various technologies that they may not use every day. Check back for details about Vermont Code Camp 2011 or follow us on Twitter.”
  • more info: http://vtcodecamp.org/
  • register: http://vtcodecamp.org/register/
  • twitter: @VTCodeCamp

3. Cloudy Mondays

  • when: Mon 19-Sep-2011, 5:00 – ?:?? PM
  • where: Venture Development Center, 100 Morrissey Blvd, Boston, MA
  • wifi: (not sure)
  • food: (not sure, though food and beer were offered last time)
  • cost: FREE
  • what: (topics not posted yet, though generally cloud and cloud startup-related)
  • more info: http://www.meetup.com/Cloudy-Mondays/
  • register: here
  • twitter: (not sure)

4. Boston Azure User Group meeting in Waltham: Special Guest Thom Robbins!

In this special event, the Boston Azure User Group is combining forces with both the Boston .NET Architecture Study Group and the New England ASP.NET Professionals Group to host this talk from Thom Robbins. Note the location is Waltham (not NERD).

  • when: Wed 21-Sept-2011, 6:00 – 8:30 PM
  • where: Hosted at Microsoft Office in Waltham (201 Jones Road, Waltham, MA 02451 – come to the 6th floor) – ample free parking is available
  • wifi: Wireless Internet access is NOT available to attendees
  • food: Pizza and drinks will be provided
  • cost: FREE
  • what: Special Guest speaker is Thom Robbins

Kentico CMS: A Case Study in Building for Today’s Web

Building software is a set of smart choices to meet the needs of your customers and the possibilities of technology.  Today’s Web demands that customers have a choice in how they deploy their applications. With over 7,000 websites in 84 countries, Kentico CMS for ASP.Net is delivered as a single code base for use as a cloud, hosted, or on-premise solution. With over 34 out of the box modules and everything built on a SQL Server backend – How did we do it?  What tradeoffs did we make? In this session we will answer that question and look at how to build a rich and compelling website using Windows Azure.

About Thom Robbins

Thom Robbins is the Chief Evangelist for Kentico Software. He is responsible for evangelizing Kentico CMS for ASP.NET with Web developers, Web designers and interactive agencies. Prior to joining Kentico, Thom joined Microsoft Corporation in 2000 and served in a number of executive positions.  Most recently, he led the Developer Audience Marketing group that was responsible for increasing developer satisfaction with the Microsoft platform. Thom also led the .NET Platform Product Management group responsible for customer adoption and implementation of the .NET Framework and Visual Studio. Thom was also a Principal Developer Evangelist working with developers across New England implementing .NET based solutions. A regular speaker and writer, he currently resides in Seattle with his wife and son. He can be reached at thomasr@kentico.com or on Twitter at @trobbins.

5. Boston Azure User Group meeting: Special Guest Brian Prince!

  • when: Thu 29-Sep-2011, 6:00 – 8:30 PM
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Pizza and drinks will be provided
  • cost: FREE
  • what: Brian Prince – from Microsoft, and co-author of the most excellent Azure in Action book – is our featured speaker.
  • more info: See the Boston Azure cloud user group site for more info, or join the (low volume) Boston Azure mailing list to keep most up to date.
  • register: (will open in early September)
  • twitter:

6. Boston Azure Bootcamp

  • when: Fri/Sat Sep 30 – Oct 1 (full days, but start/end times are tbd)
  • where: Hosted at NERD Center
  • wifi: Wireless Internet access will be available
  • food: Expected to be provided, but details being worked out
  • cost: FREE
  • what: This free event is a two-day, hands-on bootcamp with the goal of learning a whole lot about the Windows Azure Platform. The primary programming environment will be Visual Studio 2010 (a free version is available). Coding will be primarily done in C#. (Other programming environments and other languages are available for Windows Azure. If you plan to program in something other than Visual Studio and C#, please let us know in advance in the “Any Other Comments” section of the sign-up form.) The two days will largely consist of a sequence of segments where important general topics in cloud computing will be introduced, and the Windows Azure approach will be discussed in detail. Each of these segments will include a lecture by an Azure expert followed by a hands-on lab where you code a basic solution to get these concepts to really sink in. Azure experts will be in the room to help you with any questions or issues during labs. At the end of these two days, you will have learned key cloud and Windows Azure concepts, and have hands-on experience building, debugging, and deploying real applications. You need to bring your own Azure-ready laptop – or let us know on the signup form if you would like a loaner or would like to pair with someone for the coding part.
  • more info: See the Boston Azure Bootcamp page on Eventbrite for more info
  • register: Registration is LIMITED BY SPACE – register here
  • twitter: #bostonazurebootcamp

Omissions? Corrections? Comments? Please leave a comment or reply on the Twitters!

Talk: Architecture Patterns for Scalability and Reliability in Context of Azure Platform

I spoke last night to the Boston .NET Architecture Study Group about Architecture Patterns for Scalability and Reliability in Context of the Windows Azure cloud computing platform.

The deck is attached at the bottom, after a few links of interest for folks who want to dig deeper.

Command Query Responsibility Segregation (CQRS):

Sharding is hard:

NoSQL:

CAP Theorem:

PowerPoint slide deck used during my talk:

Azure FAQ: Can I create a Startup Task that executes only when really in the Cloud?

Q. Can I create a Startup Task that executes only when really in the Cloud? I mean really in the cloud. In other words, can I get my Startup Task to NOT RUN when I debug/deploy my Windows Azure application on my development machine?

A. The short answer is that while there is no built-in support for this, you can get the same effect by using a simple trick to add logic to your Startup Script to provide sufficient control. Before getting into that, let’s describe the problem in a bit more detail. Update 14-Oct-2011: Stop the presses!! This capability is now built into Windows Azure! Steve Marx has a blog post on the matter. I will leave this blog post around since the details in it may be of value for other scenarios.

Suppose you want to use ASP.NET MVC 3 in your Windows Azure Web Role. At the time of this writing, MVC 2 is installed on Azure VMs, but not MVC 3. What to do? The short answer is that you can install MVC 3 along with your application at deployment time in the cloud. This type of prerequisite installation is most conveniently handled using a Startup Task: I include the ASP.NET MVC 3 bits with my app and define a Startup Task that installs them, and (via a Simple Startup Task) I can easily arrange for the installation to complete before my Web Role tries to run. This is a pretty clean solution. (For more on Startup Tasks and how to configure them see How to Define Startup Tasks for a Role. For specific guidance on installing ASP.NET MVC 3 as a Startup Task, see Technique #2 in the ASP.NET MVC 3 in Windows Azure post on Steve Marx’s blog.)

Example Startup Task That ALWAYS Runs

Of course, installing ASP.NET MVC 3 is only one example. Here is another example – a Startup Task that enables support for ADSI with IIS – let’s call it enable-webmetabase.cmd. First, you would add a Task entry to ServiceDefinition.csdef; note that the Startup element belongs inside your role element (a WebRole here, with a placeholder name):

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="NameOfMyAzureApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="NameOfMyWebRole">
    <Startup>
      <Task commandLine="enable-webmetabase.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
    <!-- ... the rest of the role definition ... -->
  </WebRole>
</ServiceDefinition>

The contents of enable-webmetabase.cmd would be something like the following (first enabling PowerShell scripting, then executing a specific script):

powershell -command "Set-ExecutionPolicy Unrestricted"
powershell .\enable-webmetabase.ps1

Though the specifics are not important for these instructions, since this script invokes a PowerShell script – let’s call it enable-webmetabase.ps1 – here is what that might look like:

Import-Module ServerManager
Add-WindowsFeature Web-Metabase

And as a final step, you would include both enable-webmetabase.cmd and enable-webmetabase.ps1 with your Visual Studio Project, and set the Copy to Output Directory property on each of these two files to be Copy always. Now, every time you deploy this Azure solution this Startup Task will be executed – and you can feel confident that you won’t have to worry about ADSI in IIS not being available (or whatever it is your Startup Tasks do for you).

Startup Tasks Run in Development Too

But what happens when I wish to deploy this solution on my development machine so I can quickly test it out while I am in the midst of development? Since the Windows Azure Platform has an outstanding local cloud simulation environment (which can be downloaded for free), “local” is the most common deployment target! It is not ideal that the Startup Tasks will run locally – I do not want to continually install ASP.NET MVC (or re-enable web metabase support, etc.) since that will just slow me down.

The Simple Workaround

I know of no built-in support that makes it easy for a Startup Task to “know” whether it is running in the cloud or in your local development environment. But it is simple to roll your own. Here’s what I do:

  • Create an Environment Variable called AZURE_CLOUD_SIMULATION_ENVIRONMENT. While the exact value of this variable does not matter, for the sake of someone else who may see it and be puzzled, I set mine to be “set manually per http://bit.ly/rs5SRN” where the bit.ly link points back to this blog post. 🙂 It also doesn’t matter if the Environment Variable is user-specific or System-wide. If it is a shared development machine, I would make it System-wide (for all users).
  • It is common practice when defining Startup Tasks to create a command script using a .cmd file and have that be the Startup Task. Within the Startup Task .cmd file, use the defined keyword (supported in the command shells of recent versions of Windows, such as those you will be using for Azure development and deployment) to add a little logic so that you run only those commands you wish to execute in the current environment.

To set up the AZURE_CLOUD_SIMULATION_ENVIRONMENT environment variable:

  1. Run SystemPropertiesAdvanced.exe to bring up the System Properties dialog box.
  2. Click the Environment Variables button to bring up the Environment Variables dialog box.
  3. Click the New… button at the bottom to bring up the New System Variable dialog box.
  4. Type AZURE_CLOUD_SIMULATION_ENVIRONMENT into the Variable name field, and type set manually per http://bit.ly/rs5SRN into the Variable value field.
  5. Hit a few OK buttons and you’ll be done.

Revised “Smart” Startup Task

Of course the trick is that the AZURE_CLOUD_SIMULATION_ENVIRONMENT variable will only be set on development machines, so it will NOT be set in the real cloud, getting you the desired results. Here is the same enable-webmetabase.cmd Startup Task script from above, except rewritten so that when you run it locally it will not do anything to your development machine.

if defined AZURE_CLOUD_SIMULATION_ENVIRONMENT goto SKIP

powershell -command "Set-ExecutionPolicy Unrestricted"
powershell .\enable-webmetabase.ps1

:SKIP

The line “if defined AZURE_CLOUD_SIMULATION_ENVIRONMENT goto SKIP” simply checks whether AZURE_CLOUD_SIMULATION_ENVIRONMENT exists in the environment, and if it does exist, the script jumps over the two powershell lines. This is pretty handy!

Again, in summary, if you follow the very simple approach in this post, the AZURE_CLOUD_SIMULATION_ENVIRONMENT will exist only on development machines – in the simulated cloud – and not out in the “real” cloud.

Not to be Confused with RoleEnvironment.IsAvailable

There is another technique – that is built into Azure – which you can use in code that needs to behave one way when running under Windows Azure, and another way when not running under Windows Azure: RoleEnvironment.IsAvailable. This is good for code that might be deployed both in, say, an Azure Web Role and in a non-Azure ASP.NET web site. For Azure applications, RoleEnvironment.IsAvailable will be true for both the local development machine and when deployed into the public cloud.

While RoleEnvironment.IsAvailable and AZURE_CLOUD_SIMULATION_ENVIRONMENT serve different purposes, they are complementary and can be used together.

For more information on RoleEnvironment.IsAvailable, there is documentation and a good description of its use.
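As a minimal sketch of how it is typically used (the setting name here is hypothetical):

using System.Configuration;                  // requires a reference to System.Configuration
using Microsoft.WindowsAzure.ServiceRuntime; // Azure SDK assembly

public static class ConfigHelper
{
    public static string GetSetting(string name)
    {
        if (RoleEnvironment.IsAvailable)
        {
            // True under Azure - in the cloud AND in the local compute emulator.
            return RoleEnvironment.GetConfigurationSettingValue(name);
        }

        // Not hosted by Azure at all (e.g., plain ASP.NET or a unit test host).
        return ConfigurationManager.AppSettings[name];
    }
}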

Other Uses for the Technique

Maybe you want to do certain things ONLY in your development environment. For example, perhaps you wish to launch Fiddler. Or maybe uninstall a Windows Service (via InstallUtil /u <service exe name>). Whatever your needs – you can use the same simple technique to make this easy. The following syntax is also supported – each bullet being a single line (though some of them may appear on more than one line in this blog post):

  • if defined AZURE_CLOUD_SIMULATION_ENVIRONMENT (echo AZURE_CLOUD_SIMULATION_ENVIRONMENT equals %AZURE_CLOUD_SIMULATION_ENVIRONMENT%) else (echo AZURE_CLOUD_SIMULATION_ENVIRONMENT is NOT defined)
  • if defined AZURE_CLOUD_SIMULATION_ENVIRONMENT echo DOING SOMETHING
  • if NOT defined AZURE_CLOUD_SIMULATION_ENVIRONMENT echo DOING SOMETHING ELSE

Is this useful? Did I leave out something interesting or get something wrong? Please let me know in the comments! Think other people might be interested? Spread the word!

Platform as a Service (PaaS) is a Business Differentiator

I am a big fan of my friend Jason Haley‘s blog where he posts “Interesting Finds” on a daily basis – always highlighting good reads on many topics relevant to me and so many other developers, architects, and entrepreneurs out in the real world – especially those of us who want to still be relevant next year (and the year after). Some of the areas highlighted are “hard core” topics like Mobile, Web, Database, .NET, and Security; “soft skill” topics like Career, Agile, and Business; and, of course, my favorite: Cloud Computing.

As I was working through the Interesting Finds: June 23, 2011 posts on Cloud Computing I drilled into one from the Official Google Enterprise Blog titled Businesses innovate and scale faster on Google App Engine. It is a very well crafted post which includes some great customer quotes and a couple of videos. I must say, it does a great job of promoting the value in the Google App Engine (GAE) platform, essentially as mini-case studies. Well done!

What struck me as particularly interesting about this post, however, is the types of benefits the GAE customers say they value:

  • The first embedded video features Dan Murray, founder and managing director of a cloud-based SEC-filings company called WebFilings. Mr. Murray mentions they needed a platform that would be secure and would support rapid growth. He goes on (at 1:50 into the video): “Google App Engine provides a platform that takes the infrastructure management off of our hands, we don’t have to worry about it, so it’s easy for us to build and deploy apps. For us right now it’s about execution and making sure that we’re scaling our business, while App Engine provides the ability to scale the technology and platform.”
  • The second embedded video features Jessica Stanton from the famous Evite event invitation site. Ms. Stanton mentions (at 0:52 into the video) that “the things that [make] App Engine especially desirable for us are the autoscaling and … monitoring systems” that Google provides. Near the end (at 1:12 into the video) she emphasizes: “the opportunity that App Engine has afforded to us is more time to do what we need to do. To just get things done and to get new features out and not have to worry so much about load and things going down because we take on 16-18 million unique users a month.  It’s really nice to see instances spin up and come down and we never had to touch anything.”
  • Quote from Gary Koelling of Best Buy: “… we don’t have to spend any time doing system administration or setting up servers, which allows us to focus on the development and testing new ideas.”

The funny thing is, the benefits touted are really the benefits of Platform as a Service (PaaS). These services could just as easily have been built on the Windows Azure Platform!

  • Mr. Murray from WebFilings mentioned the need for a platform based on a great security infrastructure. Both Microsoft and Google have some of the industry’s best and brightest working for them in their state-of-the-art, world-class data centers. Here are some good resources relating to security in the Windows Azure data centers. If you want a secure data center and secure platform, I don’t think you can go wrong with either Microsoft or Google. (Frankly, I expect you are more likely to have problems – including with cost and security – if you roll your own data center. Your company will not have the top experts in the world on your payroll.)
  • Both Ms. Stanton from Evite and Mr. Koelling of Best Buy emphasize that they benefit from being able to focus on building software – and not being distracted by needing to worry about infrastructure. This is what Platform as a Service (PaaS) is all about. Both Microsoft and Google offer PaaS. GAE supports apps which run on the JVM (e.g., Java) and apps written in Python. Windows Azure supports programming in any .NET language (e.g., C#), plus a plethora of other platforms that run on Windows – PHP, Java, Python, Ruby, C++, and so many more. GAE has database support with a query language they call GQL, and Azure has SQL Azure which supports the regular SQL you know and love. Each platform has other features as well, making it a place where you can focus on your app – not your infrastructure.
  • Ms. Stanton mentions that they have a team of 5 developers. I wonder how large the Evite team would need to be if they were not running on PaaS?

Mr. Murray from WebFilings mentions that they began using GAE back in 2008 – and the Windows Azure Platform was not announced until late in 2008 (at Microsoft PDC in November 2008), so it was not yet an option for them. It is not mentioned when the other companies began to use GAE. If they were starting today, I wonder how many would choose Azure?