Category Archives: Programming

Posts related to some aspect of programming: software development, related tools, supporting technologies, standards, and so on.

What causes “specified container does not exist” error message in Windows Azure Storage?

In debugging some Windows Azure Storage code, I ran across a seemingly spurious, unpredictable exception in Azure Blob code where I was creating Blob containers and uploading Blobs to the cloud. The error would appear sometimes… at first there was no discernible pattern… and the code would always work if I ran it again immediately after a failure. Mysterious…

A Surprising Exception is Raised

When there was an exception raised, this was the error message with some details:

StorageClientException was unhandled - The specified container does not exist

The title bar reads “StorageClientException was unhandled” which is accurate, since that code was not currently in a try/catch block. No problem or surprise there, at least with that part. But the exception text itself was surprising: “The specified container does not exist.”

Uhhhh, yes it does! After calling GetContainerReference, container.CreateIfNotExist() was called to ensure the container was there. No errors were thrown. What could be the problem?

A Clue

Okay, here’s a clue: while running, testing, and debugging my code, occasionally I would want a completely fresh run, so I would delete all my existing data stored in the cloud (that this code cared about at least) by deleting the whole Blob container (called “AzureTop40”). This was rather convenient using the handy myAzureStorage utility:

This seemed like an easy thing to do, since my code re-created the container and any objects needed. Starting from scratch was a convenience for debugging and testing. Or so I thought…

Azure Storage is Strongly Consistent, not Eventually Consistent

Some storage systems are “eventually consistent” – a technique used in distributed scalable systems in which a trade-off is made: we open a small window of inconsistency with our data, in exchange for scalability improvements. One example system is Amazon’s S3 storage offering.

But, per page 130 of Programming Windows Azure, “Windows Azure Storage is not eventually consistent; it is instantly/strongly consistent. This means when you do an update or a delete, the changes are instantly visible to all future API calls. The team decided to do this since they felt that eventual consistency would make writing code against the storage services quite tricky, and more important, the[y] could achieve very good performance without needing this.”

So there should be no problem, right? Well, not exactly.

Is Azure Storage actually Eventually Strongly Consistent?

Okay, “Eventually Strongly Consistent” isn’t a real term, but it does seem to fit this scenario.

I’ve heard more than once (can’t find authoritative sources right now!??) that you need to give the storage system time to clean up after you delete something – such as a Blob container – which is immediately not available (strongly consistent) but is cleaned up as a background job, with a garbage collection-like feel to it. There seems to be a small problem: until the background or async cleanup of the “deleted” data is complete, the name is not really available for reuse. This appears to be what was causing my problem.

Another dimension of the problem was that there was no error from the code that purportedly ensured the container was there waiting for me. At least this part seems to be a bug: it seems a little eventual consistency is leaking into Azure Storage’s tidy instantly/strongly consistent model.

I don’t know what the Azure Storage team will do to address this, if anything, but at least understanding it helps suggest solutions. One work-around would be to just wait it out – eventually the name will be available again. Another is to use different names instead of reusing names from objects recently deleted.
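One way to implement the wait-it-out work-around is a simple retry loop with a growing delay. The sketch below is illustrative Python (the original code is C#); `ContainerBeingDeletedError` and the `create` callable are hypothetical stand-ins for the StorageClientException scenario and a CreateIfNotExist() wrapper:

```python
import time

class ContainerBeingDeletedError(Exception):
    """Stand-in for the 'specified container does not exist' failure."""

def create_container_with_retry(create, attempts=5, delay=2.0):
    """Retry container creation while a just-deleted container with the
    same name is still being cleaned up in the background."""
    for attempt in range(attempts):
        try:
            return create()
        except ContainerBeingDeletedError:
            if attempt == attempts - 1:
                raise                           # out of patience; surface the error
            time.sleep(delay * (attempt + 1))   # wait it out, backing off
```

In practice you would want to inspect the storage error details so you only retry the container-cleanup case, not every failure.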

I see other folks have encountered the same issue, also without a complete solution.

Vermont Code Camp – Building Cloud-Native Applications with Azure

I attended Vermont Code Camp 2 yesterday (11-Sept-2010) at the University of Vermont.  Many thanks to the awesome crew of Vermonters who put on an extremely well-organized and highly energetic event! I look forward to #vtcc3 next year. (Twitter stream, while it lasts: #vtcc2)

I presented a talk on Building Cloud-Native Applications using Microsoft Windows Azure. My slides are available as a PPT download and on slideshare.net.

<aside>Maura and I went to Vermont a day early. We put that time to good use climbing to the summit of Vermont’s highest mountain: Mt. Mansfield. We hiked up from Underhill State Park, up the Maple Ridge Trail, over to the Long Trail, up to the summit, then down the Sunset Ridge Trail (map). It was a really tough climb, but totally worth it. I think the round trip was around 7 miles.

</aside>

Chris Bowen Speaks at August 2010 Boston Azure Meeting

Many thanks to Chris Bowen who was the guest speaker at the August 2010 Boston Azure user group meeting. The topic was ASP.NET MVC, with an Azure perspective.


There was no slide deck – Chris jumped right into the code. Here are a few of my rough notes:

Consider Web Platform Installer 2.0 to install Azure tooling.

  • Windows Azure Platform Tools
  • Visual Web Developer 2010 Express

ASP.NET MVC concepts / benefits:

  • “A lot of convention” – great in the long run, hard to grasp at first…
  • Separation of Concerns – controller then view
  • ASP.NET MVC is closer to the metal than traditional ASP.NET – if you want to implement, say, XHTML, then nothing stands in your way.
  • Strongly-typed Controllers and Views can be generated once your model is in place.
  • Controller may choose to pass along only a ViewModel – subset of full Model, or perhaps enhanced
  • Model Binding is also by convention
  • Hackable URLs

Tips and Tricks:

  • Ctrl-Shift-Click on Visual Studio in Win 7 will launch in Admin mode which Azure requires.
  • Can modify the T4 template for MVC to alter its UI options in wizards.
  • Ctrl-M-G – bring me to the appropriate View for this Action

New in MVC 2 / ASP.NET 4:

  • Html.DisplayForModel
  • RenderActions – new in MVC 2
  • New in ASP.NET 4 (not just ASP.NET MVC 2) is <%: “foo” %> where the “:” is a new feature as shortcut for HTML.Encode for the content.
  • MVC 2 has powerful client-side validation based on characteristics of your model. Does not require a server-side round trip. You specify e.g., [Required] attribute on Model data – and you don’t need to write any imperative code.


http://asp.net/mvc – many great resources.

Windows Azure developer fabric – also known as “the fog” – is the Azure cloud simulator running locally.

Also check out Arra Derderian’s write-up of the same Boston Azure meeting.

There were around 30 people in attendance at the meeting.

Three Types of Scaling in the Cloud: Scale Up, Scale Out, and now Scale Side-by-Side (with Juxtaposition Scaling)

Computer systems or individual applications have capacity limits. A web site might be working just fine with one or two or fifty users, but when use goes way up, it may no longer work correctly – or at all. A desktop application may work fine for a long time – then one day, we try loading a really large file or data set, and it can’t handle it. These are scalability challenges.

After our system or application reaches its capacity limits, what are our options to make it work even with the new demands? In other words, how do we make it scale?

The following scalability approaches allow us to handle more computations (with vertical and horizontal scaling) or more system instances (with juxtaposition scaling).

There are other very important scaling patterns that we might address in a future post – such as scalability through algorithms that embrace parallelism (such as Map/Reduce), NoSQL-like schema-less storage, and data sharding. These are not covered in this article.

Scale Up with More Powerful Hardware

The obvious option in many cases is to address a scalability problem with better, faster, more capable hardware. If we can’t load that giant spreadsheet model on a computer with 512MB of RAM, we install 2GB and give it another try. If it is still too slow, we can use a machine with a faster processor or faster hard disk.

This approach can also be applied to web servers, database servers, and other parts of your system. Got an architecture problem? Get some better hardware.

This approach is variously called “scaling up” or “vertical scaling” – since we are addressing the problem by substituting a more capable system (usually a single server), but one that is still logically equivalent.

The essential point here is that, generally speaking, the limits of scalability are due to the limits of a single computer (or perhaps the limits of an affordable single computer).

In Scaling Up (also known as Vertical Scaling) the limitation is hardware related in a very specific way: how much memory, disk, and processor a single server can support…

The key challenge with Scaling Up is that you might run out of hardware options. What happens if you are running on the fastest available machine, or it can’t take any more memory? You may be out of luck.

Scale Out with More Hardware Instances

Another option in some cases is to leave the existing machines in place, and add additional machines to the mix to share the burden. This is variously called “scaling out” or “horizontal scaling” – a metaphor suggestive of spreading out the system as we add more machines beside the existing ones.

The key point here is that systems need to be architected to support Scaling Out – though the benefit is that they can generally scale a lot further than a Scale Up system – and scalability is enabled by the software architecture.

In Scaling Out (also known as Horizontal Scaling) scalability must be architected into the system… it is not automatic and is generally more challenging than Scaling Up. You scale by running on more instances of the hardware – and having these hardware instances share the workload.

As mentioned, scaling out is an attribute of the architecture of the system. This is a great fit for the elastic nature of cloud computing platforms.
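The idea can be pictured as a dispatcher spreading requests across interchangeable instances – add another instance and capacity grows. A toy sketch in Python (instance names are illustrative; real systems use a hardware or software load balancer):

```python
import itertools

class LoadBalancer:
    """Round-robin dispatch across interchangeable server instances --
    the essence of scaling out: add more boxes, not a bigger box."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # each request goes to the next instance in rotation
        return next(self._cycle), request

lb = LoadBalancer(["web-1", "web-2", "web-3"])
print([lb.route(r)[0] for r in ["a", "b", "c", "d"]])
# → ['web-1', 'web-2', 'web-3', 'web-1']
```

Because each instance must be able to serve any request, the application itself has to be written so no request depends on a particular machine – which is exactly the architectural demand scaling out makes.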

Scale Side-by-Side with More Systems

In the real world, not all of our scaling concerns are with “the” system – we tend to have many copies of systems. I recently heard that for every production instance of SAP, there are seven non-production instances. And in my own experience, organizations *always* need many instances of systems: for development, test, training and … then we have different versions of all these systems … and the list goes on.

It turns out that another great use of the cloud generally (including the Azure Cloud) is for spinning up these other instances of our system for many purposes – sometimes we don’t want 1 N-node app, we want N 1-node apps.

I dub this use of cloud to be “scaling side-by-side” or “juxtaposition scaling” – a metaphor suggestive of putting similar systems beside each other, since they are a related collection of sorts, even though the instances of systems scaled side-by-side are not connected to, or operationally related to, any of the other instances.

Scaling Side-by-Side (also known as Juxtaposition Scaling) happens when you use the cloud’s elastic nature to create additional (often temporary) instances of a system – such as for test or development.

Also, scaling side-by-side (juxtaposition scaling) is orthogonal to scaling up (vertical scaling) or scaling out (horizontal scaling). It is more about scaling to support more uses of more variants (versions, test regions, one for training, penetration testing, stress testing, …) for overall environmental efficiency.

And, finally, like other ways to leverage cloud infrastructure, to efficiently scale side-by-side you will benefit from some automation to easily provision an instance of your application. Azure has management APIs you can call to make the whole process automagic. Consider PowerShell for building your automation…
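The side-by-side idea fits in a few lines of illustrative Python – here `provision` is a hypothetical helper standing in for calls to the Azure management APIs or a PowerShell deployment script:

```python
def provision(environment, nodes=1):
    # Hypothetical stand-in for automated deployment of one
    # independent instance of the whole system.
    return {"env": environment, "nodes": nodes, "status": "running"}

# N independent 1-node systems, not one N-node system
environments = ["dev", "test", "training", "stress-test"]
instances = [provision(env) for env in environments]
```

The point of the automation is that each environment is cheap to stand up and tear down, so the non-production copies only exist (and only cost money) while they are needed.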

[It was in a conversation at the Hub Cloud Club with several folks, including William Toll and John Treadway. John mentioned the SAP statistic and also suggested that adding more instances is just another type of scaling in the cloud. I agreed and still agree. So I am giving that type of scalability a name… Scaling Side-by-Side or Juxtaposition Scaling. Neither seems to have any real hits in Google, but let’s see if this catches on.]

4 Reasons to embrace the “www” subdomain prefix in your Web Addresses, and how to do it right

In support of the www subdomain prefix

For web addresses, I used to consider the “www” prefix an anachronism and argued that its use be deprecated in favor of the plain-old domain. In other words, I used to consider forms such as bostonazure.org superior to the more verbose www.bostonazure.org.

I have seen the light and now advocate the use of the “www” prefix – which is technically a  subdomain – for clarity and flexibility. I now consider www.bostonazure.org superior to the overly terse bostonazure.org.

I am not alone in my support of the www subdomain. Not only is there a “yes www” group – found at www.yes-www.org – advocating we keep using the www prefix, there is also an “extra www” group – found at www.www.extra-www.org [sic] – advocating we go all in and start using two sets of www prefixes. While I’m not ready to side with the extra www folks (which would give us www.www.bostonazure.org), for those who do, you might want to know they offer the following nifty badge for your displaying pleasure.

image

While use of two “www” prefixes may be one too many, here are 4 reasons to embrace a single “www” prefix, followed by 2 tips on how to implement it correctly.

Four reasons to embrace the www prefix

traffic light

Reason #1: It’s a user-friendly signal, even if occasionally redundant

The main, and possibly best, reason is that it is user-friendly. Users have simply come to expect a www prefix on web pages.

The “www” prefix provides a good signal. You might argue that it is redundant: Perhaps the http:// protocol is sufficient? Or the “.com” at the end?

First, consider that the http:// protocol is not always specified; it is common to see sites advertised in the form www.example.com.

Second, consider that the TLD (top-level-domain) can vary – not every web site is a “dot com” – it might be a .org, .mil, or a TLD from another country – many of which may not be obvious as web addresses to the common user without a www prefix, even with the http:// protocol.

Third, consider that even if there are cases where the www is redundant, that is still okay. An additional, familiar signal to humans letting them know with greater confidence that, yes, this is a web address, is a benefit, not a detriment.

Today, most users probably think that the Web and the Internet are synonymous anyway. To most users, there is nothing but the www – we need to realize that today’s Internet is inhabited by regular civilians (not just programmers and hackers).  Let’s acknowledge this larger population by utilizing the www prefix and reducing net confusion (pun intended).

Reason #2: Go with the flow

The application and browser vendors are promoting the www prefix.

Microsoft Word and Microsoft Outlook – two of the most popular applications in the world – both automatically recognize www.bostonazure.org as a web address, while neither automatically recognizes bostonazure.org. (Both also auto recognize http://bostonazure.org.) Other text processing applications have similar detection capabilities and limitations.

Browsers also assume we want the www prefix; in any browser, type in just “twitter” followed by Ctrl-Enter – the browser will automatically prepend “http://www.” and append “.com”, forming “http://www.twitter.com” (though then we are immediately redirected to http://twitter.com). [Note that browsers are typically configured to append something other than “.com” if that is not the most common TLD there; country-specific settings are in force.] For the less common cases where you are typing in a .org or other non-default address, the browser can only be so smart; you need to type some addresses in fully on your own.

Reason #3: Advantages on high volume sites

While I have been aware of most of the raw material used in this blog post for years, this one was new to me.

High traffic web sites can get performance benefits by using www, as described in the Yahoo! Best Practices for Speeding Up Your Web Site, though there is a workaround (involving an additional images domain) that still would allow a non-www variant, apparently without penalty.

Reason #4: Azure made me do it!

It turns out that Windows Azure likes you to use the www prefix, as described by Steve Marx in his blog post on custom domain names in Azure. This appears to be due to the combined effects of how Azure does virtualization for highly dynamic cloud environments – plus limitations of DNS.

In fact, it was this discovery that caused me to rethink my long-held beliefs around the use of www. Though I didn’t find any posts that viewed this exactly as I do, my conclusion is the following:

I concluded the Internet community has changed over the years and is now dominated by non-experts. The “www” affordance inserted into the URLs makes enough of a difference in the user experience for non-expert users that we ought to just use the prefix, even if expert users see it as redundant and repetitive – as I used to.

In other words, nobody is harmed by use of the www prefix, while most users benefit.

Two tips to properly configure the www prefix

One of the organizations promoting dropping the www – http://no-www.org/ – describes three classes of “no www” compliance:

  • Class A: Do what most sensible sites do and allow both example.com and www.example.com to work. This is probably the most easily supported in GoDaddy, and probably the most user-friendly, since anything reasonable done by the user just works.
  • Class B: Redirect traffic from example.com to www.example.com, presumably with a 301 (Permanent) http redirect; this approach is most SEO/Search Engine-friendly, while maintaining similar user-friendliness to Class A.
  • Class C: Have the www variant fail to resolve (so browser would give an error to the user attempting to access it). This is not at all user friendly, but is SEO-friendly.

So what are the two tips for properly configuring the www prefix?

Tip #1: Be user- and SEO-friendly with 301 redirect

Being user-friendly argues for Class A or Class B approach as mentioned above.

You don’t want search engines to be confused about whether the www-prefixed or the non-www variant is the official site. Such ambiguity is not Search Engine Optimization (SEO)-friendly; it will hurt your search engine rankings. This argues for the Class B or Class C approach mentioned above.

For the best of both worlds, the Class B approach is the clear winner. Set up a 301 permanent http redirect from your non-www domain to your www-prefixed variant.

You can set this up in GoDaddy with the Forward Subdomain feature in Domain Manager, for example.

You can also set it up with IIS :
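For example, with the IIS URL Rewrite module, a web.config fragment along these lines performs the 301 (the domain name here is just this post’s example; adjust to your own):

```xml
<!-- web.config fragment; requires the IIS URL Rewrite module -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^bostonazure\.org$" />
        </conditions>
        <action type="Redirect" url="http://www.bostonazure.org/{R:1}"
                redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```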

Or with Apache:
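With Apache, the equivalent is a couple of mod_rewrite lines in the vhost or .htaccess (again using this post’s example domain):

```apache
# 301-redirect the bare domain to the www-prefixed variant
RewriteEngine On
RewriteCond %{HTTP_HOST} ^bostonazure\.org$ [NC]
RewriteRule ^(.*)$ http://www.bostonazure.org/$1 [R=301,L]
```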

Tip #2: Specify your canonical source for content

While the SEO comment above covers part of this, you also want to be sure that if you are on a host or environment where you are not able to set up a 301 redirect, you can at least let the search engines know which variant ought to get the SEO-juice.

In your HTML page header, be sure to set the canonical source for your content:

<head>
    <link rel="canonical" href="http://www.bostonazure.org/" />
    ...
</head>

Google honors this tag currently, and is even looking at cross-domain support for the canonical tag (though other search engines have not announced plans for cross-domain support). An official Bing Webmaster blog post from Feb 2009 said Bing will support it, though reportedly Bing and Yahoo! were not yet supporting it very well as of this writing – it appears they have either just implemented it, or are about to.

You can also configure Google Webmaster Tools (and probably the equivalents in Bing and Yahoo!) to say which variant you prefer as the canonical source.

Unusual subdomain uses

There are some odd uses of subdomain prefixes. Some are designed to be extremely compact – such as URL shortening service bit.ly. Others are plain old clever – such as social bookmarking site del.icio.us. Still others defy understanding – in the old days (but not *that* old!), I recall adobe.com did not resolve – there was no alias or redirect, just an error – if you did not type in the www prefix, you were out of luck.

Another really interesting case of subdomain shenanigans is still in place over at MIT where you will find that www.mit.edu and mit.edu both resolve – but to totally different sites! This is totally legal, though totally unusual. There is also a web.mit.edu which happens to match mit.edu, but www.mit.edu is in different hands.

In the early days of the web, the Wall Street Journal was an early adopter and they used to advertise as http://wsj.com. These days both wsj.com and www.wsj.com resolve, but they both redirect to a third place, online.wsj.com. Also totally legal, and a bit unusual.

[edit 11-April-2012] Just noticed this related and interesting post: http://pzxc.com/cname-on-domain-root-does-work [though it is not http://www.pzxc.com .. :-)]

Credit for Traffic Light image used above:

  1. capl@washjeff.edu
  2. http://capl.washjeff.edu/browseresults.php?langID=2&photoID=3803&size=l
  3. http://creativecommons.org/licenses/by-nc-sa/3.0/us/
  4. http://capl.washjeff.edu/2/l/3803.jpg

Presented on Windows Azure at Hartford Code Camp

Today at Hartford Code Camp #3 in Connecticut, I presented two talks on Windows Azure.

The first talk was an introduction to Cloud Computing, with a Microsoft slant towards Windows Azure. The second drilled into the Two Roles and a Queue (TRAAQ) design pattern – a key pattern for architecting systems for the cloud.

The PowerPoint slides are available here:

Also plugged the Boston Azure User Group to those attending my talks! Hope to see some of you at NERD in Cambridge, MA for talks and hands-on-coding sessions. Details always at bostonazure.org.

Introducing the Boston Azure Project

Cloud Computing on Microsoft’s Windows Azure platform is still new, but will be big. I believe that. That belief fueled my interest in starting the Boston Azure cloud computing user group (henceforth in this blog post, simply “Boston Azure”) back in the fall, even before Azure was released. Boston Azure is a cloud computing community group focused on learning about Azure.

Currently Boston Azure meets monthly on the 4th Thursday of the month in Cambridge, MA in the USA. This is an in-person meeting. I have received a loud and clear vibe from the Boston Azure membership that there is a thirst for more hands-on stuff. That was fueled further first by the hands-on Azure SDK meeting we held April 29, then again by the all-day Firestarter held May 8. But we need more. So, I had this idea for an ongoing community coding project that we can hack on together at Boston Azure meetings and other times… I bounced the idea off the community at the May meeting… since I received a really positive response, I now officially declare I plan to go ahead with it…

Introducing the Boston Azure Project

Why are we doing this Project?

The community wants to code. There is a desire to learn a lot about programming in Windows Azure – and what better way to get really good at programming Windows Azure than by programming Windows Azure.

The primary goal of the project is to learn – to get good – really good – at Windows Azure.

How will the Project work?

To be hands-on, we need a project… so here’s a project to provide us with focus:

We shall build a “gently over-engineered” version of bostonazure.org.

This “gently over-engineered” version of bostonazure.org:

(a) will provide a productive environment where participants (developers and otherwise) can learn about Azure through building a real-world application by contributing directly to the project (through code, design, ideas, testing, etc., …), and

(b) will do so by taking maximum advantage of the technology in the Windows Azure platform in the advancement of the bostonazure.org web site (though thinking of it as “just a web site” is limiting – there is nothing stopping us from, say: adding an API; exporting OData or RSS feeds; being mobile-friendly for our visitors with iPhone, Android, and Windows Phone 7 devices; etc.), and

(c) will serve the collaboration and communication needs of the Boston Azure community, and

(d) will provide an opportunity for a little fun, meet other interesting people, and enhance our skills through sharing knowledge and learning from each other.

When will we code?

We will reserve time at Boston Azure meetings so we can collaborate in-person on a monthly basis. Participants are also free to hack at other times as well, of course.

Wait a second… Does it make sense to port a little web site like bostonazure.org to Azure?

It does not make sense – not in isolation. Go ahead and crunch the numbers on Windows Azure pricing and compare with an ISP-hosted solution. However, this is the “gently over-engineered” part: we are doing it this way to show off the capabilities of Windows Azure and learn a bunch in the process.

What is the output of the Project?

This project will be feature rich, easy to use, accessible, flexible… and open source.

Keep in mind: Since bostonazure.org is the web presence for the Boston Azure community…

It Has To Work!

This project is for and by the community.

Anyone can contribute – at any seniority level, with any skill set, with many possible roles (not just developers).

Then how do we reconcile anyone can contribute with it has to work? The community process needs to be able to make the code work before we put it into production. We have to make this work. And we will.

So, now you’ve heard it all – the whole idea – at least the Big Picture. I will post more details later, but for now that’s it.

Next Steps

Please contact me (on twitter or by comment to this blog post or by email) if you want to be one of the very first participants – I would like a couple of folks to be in a “private beta” to get some details squared away before I make the CodePlex site public.

Update 23-June-2010: The project is now live on CodePlex at bostonazure.codeplex.com.

Fermat’s Last Theorem is safe

I saw on twitter this morning a long time ago [it was a long time ago when I wrote this post but didn’t publish it] (from Jeff Atwood, of Coding Horror blog and Stack Overflow fame) the following elegantly and concisely stated counter-example that would – if true – disprove perhaps the most famous of mathematical theorems, Fermat’s Last Theorem (FLT):

1782^12 + 1841^12 = 1922^12

Wow! A counter-example for FLT. A theorem I’ve known about since I was a kid. One counter-example is all it takes to disprove the whole deal.

Fermat’s theorem states that the equation a^n + b^n = c^n has no solutions for integer n > 2, and integers a, b, and c not equal to zero. For n = 2 we have many solutions (Pythagorean triples), but none for n > 2. Nor should we, according to English mathematician Andrew Wiles, who proved FLT in 1995.

Until now. Or do we? The equation Jeff posted is a little awkward to validate since most calculators cannot handle numbers this size at full precision. They appear equal with a normal calculator – due to precision limits (round-off errors). Same problem with Excel.

So, since I’ve recently started playing with F#, I put together a trivial F# program (included below) to show the math at full precision, with the following results:

1782^12 + 1841^12 = 2541210258614589176288669958142428526657
and
1922^12 = 2541210259314801410819278649643651567616
which differ by
700212234530608691501223040959
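Any language with arbitrary-precision integers can reproduce this check; here is the same computation in Python, which (like F#’s BigInteger) does the arithmetic exactly rather than rounding:

```python
# Exact integer arithmetic -- no round-off, unlike a calculator or Excel
lhs = 1782**12 + 1841**12
rhs = 1922**12
print(lhs)         # ends in ...7
print(rhs)         # ends in ...6
print(rhs - lhs)   # non-zero: Fermat is safe
```

In fact the last digits alone give it away: 1782 is even and 1841 is odd, so the left side is odd, while 1922^12 is even – so the two sides cannot possibly be equal.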

So Fermat is safe. Saved by F#. 🙂 But don’t feel bad if you fell for it – just be glad you knew what it meant. Bonus if  you noticed it on The Simpsons or Futurama.

Boston Azure Firestarter Wrap Up

Boston Azure Firestarter a Success!

We had 60-something folks attend the Boston Azure Firestarter (more photos) on May 8, 2010 in Cambridge, MA. This event provided both talks about important Azure concepts and hands-on-roll-up-your-sleeves-and-write-some-code Labs. Yes, attendees brought laptops! Feedback was positive. Many thanks to all the folks who helped make this event possible. This was a Boston Azure cloud computing user group event, supported by and hosted at Microsoft.

Many Thanks!

Those who helped prepare for the event, work the sign-in desk, help with technical problems, and handle the pair-programmer matching service included Nazik Huq, Chander Khanna, Joan Linskey, and Maura Wilder. Jim O’Neil and Chris Bowen (our East Coast Microsoft Developer Evangelists) were also on hand for trouble-shooting and general support and help.

 

Here was our speaker lineup:

  1. David Aiken from Microsoft’s Windows Azure team came from the left-coast in Redmond to the right-coast in Boston to keynote the event. David gave many demos, a couple of which were My Azure Storage and his new URL shortening service hmbl.me.
    David’s keynote was followed by:
  2. Bill Wilder: Roles and Queues talk + lab (http://hmbl.me/1OHBMZ)
  3. Ben Day: Azure Storage + lab
  4. Andy Novick: SQL Azure + lab (http://hmbl.me/1H46PK)
  5. Jim O’Neil: Dallas and OData (http://hmbl.me/1OHC5W)
  6. Panel Q&A (in the order shown in photo below): Mark Eisenberg (Microsoft), Bill Wilder, Ben Day, Jason Haley, and Jim O’Neil

After hours, a smaller group unwound at the sports bar over at the Marriott. This included Jim O’Neil, Maura Wilder, Joan Linskey, Bill Wilder, Sri from New Jersey, (okay, other names are vague!) …

Two Roles and a Queue – Creating an Azure Service with Web and Worker Roles Communicating through a Queue

Two Roles and a Queue Lab from Boston Azure Firestarter

At the Firestarter event on May 8, 2010, I spoke about Roles and Queues and worked through a coding lab on same. The final code is available in a zip file. The Boston Azure Firestarter – Bill Wilder – Roles and Queues deck can be downloaded – though with so many questions, we didn’t get to cover a number of the slides – this was a hot topic!

The remainder of this post contains the narrative for the LAB we did as a group at the Firestarter. It probably will not stand alone super well, but may be of interest to some folks, so I’ve posted it.

The following procedure assumes Microsoft Visual Web Developer 2010 Express on Windows 7. The same general steps apply to Visual Studio 2008, Visual Studio 2010, and Web Developer 2008 Express versions, though details will vary.

0. Open Microsoft Visual Web Developer 2010 Express and select File | New Project

1. Select Windows Azure Service and click Okay:

image

If you have trouble finding the Windows Azure Service template, you can type “Azure” into the search box in the top-right to narrow the options. Also, if you don’t have the Windows Azure SDK installed, you will need to install that before proceeding – but there will be a link provided by Visual Web Developer 2010 Express that will direct you to the right page. Install it if you need to and try again up to this point.

2. You will see a special dialog box for New Cloud Service Project from which you will add both a Web Role

image

and a Worker Role

image

Verify that both WebRole1 and WorkerRole1 are in the list on the right side, then click OK.

3. Before you begin making code changes, you can run your new application. You can run it in the debugger by pressing the F5 key.

You will probably get the following error message:

image

The error message is telling you that you need to close Visual Web Developer 2010 Express and restart it with elevated privileges.

4. To start any Windows program with elevated privileges, right-click on the application, then choose Run as administrator from the pop-up menu:

image

Before it obeys your request to run as administrator, Windows 7 will double-check by popping up a security dialog.

Now you can reload your project and try running it again. The app should run and you should see a blank web browser page.

5. Once you’ve proven your application runs, it is time to make some changes.

Make the code changes indicated for the Two Roles and A Queue Lab in CODING STEP 1.

Note: “coding step 1” and the later coding steps were handouts (paper!) at the Boston Azure Firestarter on Sat May 8, 2010. In lieu of reproducing them here, I will post the final solution.

This lab will establish some WebRole basics.

6. When done applying CODING STEP 1, run the application again.

7. After demonstrating your application runs, deploy it to Azure.

This is a simple application so it helps us get through the initial deployment with minimal challenges.

8. Apply CODING STEP 2 – Add Queue (in local dev fabric storage)
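Since the CODING STEP handouts aren’t reproduced here, a rough sketch of what this step sets up follows, using the StorageClient library from the Windows Azure SDK of that era. The queue name matches the “updatemessagequeue1” value configured later in the lab, but the actual handout code may differ in detail:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueSetup
{
    public static CloudQueue CreateDevQueue()
    {
        // Point at local development storage (the dev fabric), not the cloud
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        // Create a queue client and make sure our queue exists
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("updatemessagequeue1");
        queue.CreateIfNotExist();

        // Enqueue a status update message
        queue.AddMessage(new CloudQueueMessage("Hello from the dev fabric"));
        return queue;
    }
}
```

Note that nothing in this sketch touches the cloud yet – swapping in a cloud storage account is exactly what the later steps configure.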

9. CODING STEP 3 – Add “DumpQueue” method and “FirestarterWebRoleHelpers.cs”

image

You will get the following dialog box – type “code file” into the search area on the top-right, select Visual C# Code File, and type in the filename “FirestarterWebRoleHelpers.cs” as shown and click Add:

image

The new file “FirestarterWebRoleHelpers.cs” will open in the editor. It should be empty to begin with. Cut and Paste in the contents from http://bostonazure.org/files/FirestarterWebRoleHelpers.cs.txt.

Why? The contents of this file have little to do with Windows Azure, so we don’t want to focus on it. But we want to use some utility routines from it so that we can focus on Azure concepts.

10. CODING STEP 4 – Adding Cloud-based Queue

First we need to configure the cloud.

Go to http://windows.azure.com and log in. You may wish to consult instructions on redeeming a token at https://blog.codingoutloud.com/2010/05/06/redeeming-an-azure-token/ or http://bit.ly/dgCuMn

image

Your storage account has a subdomain, as circled above. This – and the Access Key – need to be added to your Web Role and Worker Role so that they can access (and share the same queue within) cloud-hosted storage.

Right-click in Visual Studio on the WebRole1, select Properties, and select the Settings tab on the left. It will appear something like this:

image

Now click on Add Setting and give the new item the name “DataConnectionString”, the Type “Connection String”, and click on the “…”

image

This will bring up the Storage Connection String editor – fill in the fields – where your “Account name” is the same as the subdomain shown on the Storage Service (see above – in that screen shot it is “bostonazurequeue”) and the Key can be either Primary or Secondary Access Key (from same area in the Azure Portal):

image

You are NOT DONE in this screen yet. Also add a Setting named “StatusUpdateQueueName” – of Type “String” – with Value “updatemessagequeue1” as follows:

image

Click OK.

11. Now REPEAT BOTH STEPS for WorkerRole1.

Yes, add both Settings to WorkerRole1 as well – both roles will end up with the same settings. You can “cheat” with cut and paste in the .cscfg and .csdef files.
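For reference, the Role section of the ServiceConfiguration.cscfg ends up looking roughly like this (the account name matches the “bostonazurequeue” screenshot above; the key value here is a placeholder, not a real key):

```xml
<Role name="WebRole1">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting name="DataConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=bostonazurequeue;AccountKey=YOUR-ACCESS-KEY" />
    <Setting name="StatusUpdateQueueName" value="updatemessagequeue1" />
  </ConfigurationSettings>
</Role>
```

WorkerRole1 gets an identical ConfigurationSettings block, and each Setting name must also be declared (without a value) in the matching Role section of the .csdef file.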

12. Enable Cloud-hosted Queue from Web Role

Now you are ready to go on and make the code changes to use this new configuration item.

Apply CODING STEP 4: Enabling the Cloud-hosted Queue from the Web Role
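The gist of this step, sketched with the StorageClient and ServiceRuntime APIs – the handout code may differ, and `statusText` here stands in for whatever the user typed into the web page:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class WebRoleQueueSender
{
    public static void SendStatusUpdate(string statusText)
    {
        // Required once (typically in WebRole.OnStart) before
        // FromConfigurationSetting will work
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

        // Read the two settings added in step 10
        CloudStorageAccount account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        string queueName =
            RoleEnvironment.GetConfigurationSettingValue("StatusUpdateQueueName");

        // Enqueue the message into cloud-hosted (not dev fabric) storage
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(statusText));
    }
}
```

Because both roles read the same DataConnectionString and StatusUpdateQueueName settings, the Worker Role will see exactly the queue the Web Role writes to.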

Now run your application using cloud storage for the queue:

image

Note that you can also examine the contents of the queue online by visiting http://myAzureStorage.com and providing the same credentials you used when setting up the DataConnectionString above for both the Web and Worker roles.

13. Enable Cloud-hosted Queue from Worker Role

Now you are ALMOST ready to go on and make the code changes to use this new configuration item.

Before applying the coding step, we need to add a project reference (otherwise you won’t be able to resolve the networking classes used in FirestarterWorkerRoleHelpers.cs). In Visual Studio on the right side, under the Solution Explorer, right-click on the References element underneath WorkerRole1 and select Add Reference, then from the .NET tab, select System.Web and click OK:

image

Also, similar to step 9 above, add a new Code File called “FirestarterWorkerRoleHelpers.cs” to hold some additional needed (but not core to Azure) code.

The new file “FirestarterWorkerRoleHelpers.cs” will open in the editor. It should be empty to begin with. Cut and Paste in the contents from http://bostonazure.org/files/FirestarterWorkerRoleHelper.cs.txt.

Now you can apply CODING STEP 5: Enabling the Cloud-hosted Queue from the Worker Role.
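The Worker Role side is the consumer half of the pattern: a polling loop that pulls messages off the same queue. A sketch of that loop follows – `ProcessStatusUpdate` is a hypothetical stand-in for whatever the handout code does with each message:

```csharp
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        CloudStorageAccount account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference(
            RoleEnvironment.GetConfigurationSettingValue("StatusUpdateQueueName"));
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage(); // null when queue is empty
            if (msg != null)
            {
                ProcessStatusUpdate(msg.AsString);
                queue.DeleteMessage(msg); // remove only after successful processing
            }
            else
            {
                Thread.Sleep(1000); // back off when there is no work
            }
        }
    }

    private void ProcessStatusUpdate(string text)
    {
        // Hypothetical placeholder for the lab's real processing logic
    }
}
```

Deleting the message only after processing succeeds is the important bit: if the role instance dies mid-processing, the message reappears on the queue after its visibility timeout and gets handled again.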

14. Deploy to the Staging Area in the Cloud

15. Cutover from Staging to Production

16. Add in secret Twitter posting code from your Worker Role…

Yes, this can be done by including a hash character (#) as part of the message you type into your web application.
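In other words, the Worker Role peeks at each dequeued message for the trigger character before deciding whether to tweet. A minimal sketch of that check – `PostToTwitter` is a hypothetical helper wrapping whatever Twitter-posting code the lab supplies:

```csharp
// Inside the Worker Role's message-processing code:
// only messages containing a hash character trigger a tweet
string text = msg.AsString;
if (text.Contains("#"))
{
    PostToTwitter(text); // hypothetical helper wrapping the Twitter API call
}
```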