Are you new to Windows Azure? Experienced with Windows Azure? Wondering what all the buzz is about…
You can Meet #WindowsAzure in a live stream featuring keynote speaker Scott Guthrie (@ScottGu) along with other Azure/cloud experts. The event is June 7 at 4:00 PM Boston time (UTC-4).
I will be watching, and you can find discussions on the Twitters… I am @codingoutloud, the event hashtag is #MeetAzure, and be sure to check out the Lanyrd page that Magnus set up.
Also, if you are an Azure fan in the Boston area, please check out the Boston Azure cloud user group (www.bostonazure.org). The group meets monthly, with occasional special events such as the 2-day bootcamp later this month. Group events are usually held at NERD in Cambridge, MA.
My talk was the opening act. I built on my Hadoop on Azure talk from February and climbed up the stack to talk about Hive and Pig, which generate MapReduce under the hood. I only used a few slides in the talk, but they have some useful references and are available here: Hadoop-BostonAzure-29-Mar-2012.
New England Give Camp 2012 (A handful of Boston Azure members are signed up – including (off the top of my head) me, Jason, Maura, Joan, and Gladys): http://newenglandgivecamp.org/
UX and cloud studies were discussed, and the vast majority of attendees indicated they would prefer that such offers and opportunities be shared with the Boston Azure community – I will post them to the Meetup.com distribution list.
Upcoming Boston Azure events include:
Tuesday April 24 – Michael Cummings from Magenic on building mobile games with a Windows Azure back-end (signup here)
Wednesday May 30 – SDK again – but this time it will be hands-on – BYO laptop! (Tables and power strips will be provided.) (signup here)
Fri-Sat June 22-23 – TWO FULL DAYS of Bootcamp! (signup here (Fri) and here (Sat))
Thu June 28 – (Awesome topic TBD) – Michael Collier from Neudesic (signup here)
There was interest in incorporating some topics into future Boston Azure meetings:
Tools – what tools are useful in building/managing/monitoring cloud applications, especially Azure ones (PowerShell, cspack, Cerebrata, Storage Explorer from CodePlex, CloudBerry Explorer for Azure Blob Storage, monitoring tools, IntelliTrace from Visual Studio Ultimate, other Visual Studio tips/tricks, …)
Comparisons of Azure with other clouds, e.g., OpenStack, AWS, Eucalyptus (note: AWS and Eucalyptus are now partners), Rackspace, others…
More big data via Hadoop on Azure (maybe a whole dedicated meeting is in order)
Earlier this month I hung out with Jim O’Neil at the Farmington, CT offering of the Windows Azure DevCamp series. The format of the camp was a quick-ramp introduction to the Windows Azure Platform followed by some hands-on coding on the RockPaperAzure challenge.
Jim introduced cloud computing and presented specifics on the Blob and Table storage services and SQL Azure. I had the opportunity to present one of the sections – mine was a combination of Windows Azure Compute services and the Windows Azure Queue service, with some basics around using these services to assemble “cloud native” applications. The official slides for the Windows Azure DevCamp series appear to be here, though my slides were a little different and are also available (WindowsAzureDeveloperCamp-FarmingtonCT-07Dec2011-BillWilder). At the end, Jim also ran through the creation of a RockPaperAzure “bot” and it was (literally!) game on as attendees raced to create competitive entries.
I took a few photos at the event – some of Jim presenting, some showing participants at the end coming to claim their prizes from the RockPaperAzure challenge – and none from the middle!
Are you interested in Cloud Computing generally, or specifically in Cloud Computing using the Windows Azure Cloud Platform? Listed below are upcoming Azure-related events in the Greater Boston area that you can attend in person. Most are offered FREE, and the few that cost money are inexpensive.
Since this summary page is – by necessity – a point-in-time SNAPSHOT of what I see going on, it will not necessarily be updated when event details change. So please always double-check with official event information!
I’ve attempted to list events of interest to the local Azure community – not just topics specific to the Windows Azure Cloud Platform. Know of any more cloud events of interest? Have any more information or corrections on the events listed? Please let everyone know about them by adding a comment.
Events are listed in the order in which they will occur.
November 2011 Events
1. Cloudy Mondays
when: Mon 14-Nov-2011, 5:00 – ?:?? PM
where: Small Business Development Center (note: NOT Venture Development Center!), 100 Morrissey Boulevard, Boston, MA
wifi: (not sure)
food: (not sure, though food and spirits were provided last time)
cost: FREE
what: Discuss Amazon’s cloud: launching projects on AWS, comparing AWS to other public clouds, etc. One speaker is Jason Haruska, Chief Architect at Backupify. Vikram Kumar, CTO and Founder of OfficeDrop.com, will also be discussing how OfficeDrop.com launched using AWS.
food: (looks like cocktails and hors d’oeuvres towards the end)
cost: FREE
what: From the description on the web site: “The cloud is a fundamental paradigm shift from our current or past thinking about scalable architecture and there are security tradeoffs: less control of data, new vulnerability classes, and compliance challenges. However, if managed properly, these risks can be mitigated. This interactive seminar will discuss the challenges of cloud computing, demonstrate how to build a secure and redundant system, and touch upon real-world examples of cloud computing gone bad.”
3. Boston Azure User Group meeting: Introduction to Cloud, Windows Azure, Azure Dev Tools
when: Thu 17-Nov-2011, 6:00 – 8:30 PM
where: Hosted at NERD Center, 1 Memorial Drive, Cambridge, MA (directions)
wifi: Wireless Internet access will be available
food: Pizza and drinks will be provided
cost: FREE
what: Get ramped up on Cloud and Windows Azure – what is this cloud thing all about? How does the Windows Azure Cloud Platform fit in? And how can you get started using Visual Studio and the Windows Azure tools? Get all these questions answered in one night!
what: Speaker: Dan Bartow on Testing in the Cloud – description from event site: “Achieving real performance on the web begins with realistic testing and an understanding of the application and its infrastructure. Testing realistically and for real results across web & mobile applications means leveraging the cloud. Agile development processes, complex multi-tier architectures, and the potential for massive (and sudden) load all require a different approach than historical apps. Come hear from SOASTA, the makers of the CloudTest platform and CloudTest Lite, about their experiences and why companies need to change their expectations about what “testing” means. Dan Bartow is Vice President and CloudTest Evangelist at SOASTA, the leader in performance testing from the cloud. Prior to joining SOASTA he was Senior Manager of Engineering at Intuit, where his team was responsible for the speed and stability of TurboTax Online, the #1 rated, best-selling online tax software. Over the past decade he has been responsible for the speed and scalability of websites for such well-known brands as American Eagle Outfitters, AT&T, Best Buy, Finish Line, J.Crew, Neiman Marcus and Sony Online Entertainment, among others. Dan has set multiple industry precedents, including launching the world’s largest stateful JBoss cluster and using over 2,000 cloud computing cores to generate load against a live web site. Dan is a frequent industry presenter and has spoken at leading testing and cloud computing conferences such as Software Test & Performance (STP), O’Reilly’s Web 2.0 Expo, Amazon Web Services Road Show, and SYS-CON’s Cloud Computing Expo.” Visit the event web site to view the Cloud Testing Bill of Rights.
food: (food is either available or provided … not sure which)
cost: FREE
what: Cloud architecture patterns (generally), then a look at how the Windows Azure Platform helps you realize these patterns. Speaker: Bill Wilder, Boston Azure user group leader, Windows Azure MVP, and independent Azure consultant.
8. Using Windows Azure to Build Cloud Enabled Windows Phone Apps at Microsoft DevBoston
when: Wed 14-Dec-2011, 6:00 – 8:30 (?) PM
where: Microsoft Corporation, 201 Jones Road, Waltham, MA
wifi: I don’t think so (given the location)
food: (I am guessing there will be pizza, but not 100% sure, please check with the group organizers)
cost: FREE
what: “Speaker: John Garland. This presentation will discuss how Windows Phone applications can be enriched by leveraging the power of the Cloud that is made available by Windows Azure. The Windows Azure Toolkit for Windows Phone will be explored to show how to quickly tap into resources in the cloud for computation, storage, identity, and communication from within a Windows Phone application.”
Just got back from Harvard where I teamed up with Jim O’Neil to talk about the Windows Azure Cloud Platform to the class CSCI E-175 Cloud Computing and Software as a Service. This was at the invitation of Dr. Zoran B. Djordjevic – who also hosted us last year; the year before that it was Jim and some guy named Chris.
Like last year, the class was engaged, asking tough and interesting questions… which is all the more impressive since this class meets on FRIDAY NIGHT. Must be a Harvard thing… Anyhow, we went from around 5:30 – 8:00… ON FRIDAY NIGHT. 🙂
On Friday September 30 and Saturday October 1 the Boston Azure cloud user group hosted the Boston Azure Bootcamp – with a few of our friends – and it was a big success.
Here are a few links that folks attending might have been told about, plus answers to a couple of questions I offered to follow up on offline.
Where can I get the materials used in the Bootcamp?
However, as I explained at the bootcamp, the actual materials used at our sessions were a mix of what is posted on the web and some slide decks that had been updated (mostly for the Azure SDK 1.5, but also other changes in some cases). So you can pull the materials as linked to above and you’ll be pretty close, but the updated ones are not yet publicly posted.
You can thank our TWO MAJOR SPONSORS: This event was provided free to you because our Gold Sponsor SNI TECHNOLOGY generously sponsored the food, and Microsoft NERD donated the space. Many thanks to these major sponsors!
Without these sponsors this event would simply not have happened.
And you can thank the Boston Azure Bootcamp team which included (in alphabetical order): Andy Novick (who led the SQL Azure segment), Arra Derderian (helped during labs), George Babey (“swag guy” – and helped during labs), Jim O’Neil (lab-time tech support, lecture-time answer-man), Joan Wortman (ran the registration), Maura Wilder (who led the Azure Table Storage segment – and helped during labs), Nazik Huq (“twitter guy” – plus made sure there was food – and helped during labs), and William Wilder (yes, that’s me; you can call me “Bill” but I wanted to be listed last…). Also, many thanks to Martha O’Neil for baking us a cloudy cake. 🙂
We are planning another Boston Azure Bootcamp in 2012. Stay tuned!
Update 22-Oct-2011: Here is contact info for our Gold sponsors at SNI TECHNOLOGY:
I recently did some work with Windows Services, and since it had been rather a long while since I’d done so, I had to recall a couple of tips and tricks from the depths of my memory in order to get my “edit, run, test” cycle to be efficient. The singular challenge for me was quickly getting into a debuggable state with the service. How I did this is described below.
Does Windows Azure support Windows Services?
First, a trivia question…
Trivia Question: Does Windows Azure allow you to deploy your Windows Services as part of your application or cloud-hosted service?
Short Answer: Windows Azure is more than happy to run your Windows Services! While a more native approach is to use a Worker Role, a Windows Service can surely be deployed as well, and there are some very good use cases to recommend them.
More Detailed Answer: One good use case for deploying a Windows Service: you have legacy services and want to use the same binary on-prem and on Azure. Maybe you are doing something fancy with Azure VM Roles. These are valid scenarios. In general – for something targeting only Azure – a Worker Role will be easier to build and debug. If you are trying to share code across a legacy Windows Service and a shiny new Windows Azure Worker Role, consider a good software engineering practice (something you may want to do anyway): factor the “business logic” out into its own class(es) and invoke it with just a few lines of code from either host (or a console app, a Web Service, a unit test (ahem), etc.).
Windows Services != Web Services
Most readers will already realize this, but just to be clear, a Windows Service is not the same as a Web Service. This post is not about Web Services. However, Windows Azure is a full-service platform, so of course it has great support for not only Windows Services but also Web Services. Windows Communication Foundation (WCF) is a popular choice for implementing Web Services on Windows Azure, though other libraries work fine too – including those in non-.NET languages and platforms like Java.
Now, on to the main topic at hand…
Why is Developing with Windows Services Slower?
Developing with Windows Services is slower than developing some other types of applications, for a couple of reasons:
It is harder to stop in the debugger from Visual Studio. This is because a Windows Service does not want to be started by Visual Studio, but rather by the Service Control Manager (the “SCM” for short – pronounced “the scum”), which is a program external to Visual Studio.
Before being started, Windows Services need to be installed.
Before being installed, Windows Services need to be uninstalled (if already installed).
Tip 1: Add Services applet as a shortcut
I find myself using the Services applet frequently to see which Windows Services are running, to start and stop them, and for other functions. So create a shortcut to it. The name of the Microsoft Management Console snap-in is services.msc and you can expect to find it in Windows\System32, such as here: C:\Windows\System32\services.msc
A good use of the Services applet is to find out the Service name of a Windows Service. This is not the same as the Windows Service’s Display name shown in the Name column. For example, see the Windows Time service properties – note that W32Time is the real name of the service.
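If you prefer the command line, the sc utility can do the same lookup. A minimal sketch, using the Windows Time service from the example above (substitute your own service’s display name):

rem Look up the real service (key) name from its display name
sc getkeyname "Windows Time"
rem ...or go the other direction, from service name to display name
sc getdisplayname W32Time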
Tip 2: Use Pre-Build Event in Visual Studio
Visual Studio projects have the ability to run commands for you before and after the regular compilation steps. These are known as Build Events and there are two types: Pre-build events and Post-build events. These Build Events can be accessed from your Project’s properties page, on the Build Events side-tab. Let’s start with the Pre-build event.
Use this event to make sure there are no traces of the Windows Service installed on your computer. Depending on where you install your services from (see Tip 3), you may find that you can’t even recompile your service until you’ve at least stopped it; this smooths out that situation, and goes beyond it to make the usual steps happen faster than you can type.
One way to do this is to write a command file – undeploy-service.cmd – and invoke it as a Pre-build event as follows:
undeploy-service.cmd
You will need to make sure undeploy-service.cmd is in your path, of course, or else you could invoke it with the path, as in c:\tools\undeploy-service.cmd.
The contents of undeploy-service.cmd can be hard-coded to undeploy the service(s) you are building every time, or you can pass parameters to modularize it. Here, I hard-code for simplicity (and since this is the more common case). The script does the following (a sketch of the full file appears below):
1. Set a reusable variable to the name of my service (set ServiceName=NameOfMyService)
2. Stop it, if it is running (net stop)
3. Uninstall it (installutil.exe /u)
4. If the service is still around at this point, ask the SCM to nuke it (sc delete)
5. Return from this .cmd file with a success status so that Visual Studio won’t think the Pre-build event ended with an error (exit /b 0 => that’s a zero on the end)
In practice, you should not need all the horsepower in steps 2, 3, and 4, since each is more forceful than the one before it. I include them all for completeness and for your consideration as to which you’d like to use – depending on how “orderly” you’d like to be.
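For reference, here is a minimal sketch of what undeploy-service.cmd might contain, with the service name and executable path hard-coded to match the examples used in this post (adjust both for your own service):

@echo off
rem Step 1 (plus the executable path): hard-coded here for simplicity
set ServiceName=NameOfMyService
set ServiceExe=c:\foo\bin\debug\MyService.exe
rem Step 2: stop the service if it happens to be running
net stop %ServiceName%
rem Step 3: uninstall it
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /u %ServiceExe%
rem Step 4: if it is somehow still registered, ask the SCM to delete it
sc delete %ServiceName%
rem Step 5: report success so Visual Studio does not treat the Pre-build event as failed
exit /b 0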
Tip 3: Use Post-Build Event in Visual Studio
Use this event to install the service and start it up right away. We’ll need another command file – deploy-service.cmd – to invoke as a Post-build event as follows:
deploy-service.cmd $(TargetPath)
What is $(TargetPath), you might wonder? It is a Visual Studio build macro which will be expanded to the full path of the executable – e.g., c:\foo\bin\debug\MyService.exe will be passed into deploy-service.cmd as the first parameter. This is helpful because deploy-service.cmd doesn’t need to know where your executable lives. (Visual Studio build macros may also come in handy in your undeploy script from Tip 2.)
Within deploy-service.cmd you can either copy the service executables to another location, or install the service inline. If you copy the service elsewhere, be sure to copy needed dependencies, including debugging support (*.pdb). Here is what deploy-service.cmd might contain:
set ServiceName=NameOfMyService
set ServiceExe=%1
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe %ServiceExe%
net start %ServiceName%
Here is what the commands each do:
1. Set a reusable variable to the name of my service (set ServiceName=NameOfMyService)
2. Set a reusable variable to the path to the executable (passed in via the expanded $(TargetPath) macro)
3. Install it (installutil.exe)
4. Start it (net start)
Note that net start will not be necessary if your Windows Service is designed to start automatically upon installation. That is specified through a simple property if you build with the standard .NET template.
Tip 4: Use System.Diagnostics.Debugger in your code
If you follow Tip 2 when you build, you will have no trouble building. If you follow Tip 3, your code will immediately begin executing, ready for debugging. But how to get it into the debugger? You can manually attach it to a running debug session, such as through Visual Studio’s Debug menu with the Attach to Process… option.
I find it is often more productive to drop a directive right into my code, as in the following:
void Foo()
{
    int x = 1;
    System.Diagnostics.Debugger.Launch(); // use this…
    System.Diagnostics.Debugger.Break(); // … or this — but not both
}
System.Diagnostics.Debugger.Launch will launch a debugger session once execution hits that line of code, and System.Diagnostics.Debugger.Break will break on that line. They are both useful, but you only need one of them – you don’t need them both – I only show both here for illustrative purposes. (I have seen problems with .NET 4.0 when using Break, but I am not sure whether .NET 4.0 or Break is the real culprit. I have not experienced any issues with Launch.)
This is the fastest way I know of to get into a debugging mood when developing Windows Services. Hope it helps!
I recently attended an event where one of the speakers was the CTO of a company built on top of Amazon cloud services, the most critical of these being the Simple Storage Service known as Amazon S3.
The S3 service runs “out there” (in the cloud) and provides a scalable repository for applications to store and manage data files. The service can support files of any size, as well as any quantity. So you can put as much stuff up there as you want – and since it is a pay-as-you-go service, you pay for what you use. The S3 service is very popular. An example of a well-known customer, according to Wikipedia, is SmugMug:
Photo hosting service SmugMug has used S3 since April 2006. They experienced a number of initial outages and slowdowns, but after one year they described it as being “considerably more reliable than our own internal storage” and claimed to have saved almost $1 million in storage costs.
Good stuff.
Of course, Amazon isn’t the only cloud vendor with such an offering. Google offers Google Storage, and Microsoft offers Windows Azure Blob Storage; both offer features and capabilities very similar to those of S3. While Amazon was the first to market, all three services are now mature, and all three companies are experts at building internet-scale systems and high-volume data storage platforms.
As I mentioned above, S3 came up during a talk I attended. The speaker – CTO of a company built entirely on Amazon services – twice touted S3’s incredibly strong Service Level Agreement (SLA). He said this was a competitive differentiator both for his company and for Amazon versus other cloud vendors.
Pause and think for a moment – any idea? – What is the SLA for S3? How about Google Storage? How about Windows Azure Blob Storage?
Before I give away the answer, let me remind you that a Service Level Agreement (SLA) is a written policy offered by the service provider (Amazon, Google, and Microsoft in this case) that describes the level of service being offered, how it is measured, and the consequences if it is not met. Usually, the “level of service” part relates to uptime and is measured in “nines” as in 99.9% (“three nines”) and so forth. More nines is better, in general – and Wikipedia offers a handy chart translating the number of nines into aggregate downtime/unavailability. (More generally, an SLA also deals with other factors – like refunds to customers if expectations are not met, what speed to expect, limitations, and more. I will focus only on the “nines” here.)
So… back to the question… For S3 and equivalent services from other vendors, how many nines are in the Amazon, Google, and Microsoft SLAs? The speaker at the talk said that S3 had an uptime SLA with 11 9s. Let me say that again – eleven nines – or 99.999999999% uptime. If you attempt to look this up in the chart mentioned above, you will find this number is literally “off the chart” – the chart doesn’t go past six nines! My back-of-the-envelope calculation says it amounts to – on average – about a third of a millisecond of downtime per year. That is far less time than a single blink of your eye.
Storage SLAs for Amazon, Google, and Microsoft all have exactly the same number of nines: they are all 99.9%. That’s three nines.
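For those keeping score, here is the back-of-the-envelope arithmetic behind both figures, assuming a 365-day year of 31,536,000 seconds:

Eleven nines: (1 - 0.99999999999) × 31,536,000 s ≈ 0.0003 s, or about a third of a millisecond of downtime per year
Three nines: (1 - 0.999) × 31,536,000 s = 31,536 s, or roughly 8.8 hours of downtime per year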
I am not picking on the CTO I heard gushing about the (non-existent) eleven-nines SLA. (In fact, his or her identity is irrelevant to the overall discussion here.) The more interesting part to me is the impressive reality distortion field around Amazon and its platform’s capabilities. The CTO I heard speak got it wrong, but this is not the first time that figure has been misinterpreted as an SLA, and it won’t be the last.
I tracked down the origin of the eleven nines. Amazon CTO Werner Vogels mentions in a blog post that the S3 service is “design[ed]” for “99.999999999% durability” – choosing his words carefully. Consistent with Vogels’ language is the following Amazon FAQ on the same topic:
Q: How durable is Amazon S3? Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.
First of all, these mentions are a blog post and an item on an FAQ page; neither is from a company SLA. Second, they both speak to durability of objects – not uptime or availability. And third, also critically, they say “designed” for all those nines – but guarantee nothing of the sort. Even so, it is a bold statement. And good marketing.
It is nice that Amazon can have so much confidence in their S3 design. I did not find a comparable statement about confidence in the design of their compute infrastructure… The reality is that [cloud] services are about more than design and architecture – they are also about implementation, operations, management, and more. To have any hope, architecture and design need to be solid, of course, but alone they cannot prevent a general service outage which could take your site down with it (and even still lose data occasionally). Some others on the interwebs are as skeptical as I am, not just of Amazon, but of anyone claiming too many nines.
How about the actual 99.9% “three-nines” SLA? Be careful in your expectations. As a wise man once told me, there’s a reason they are called Service Level Agreements, rather than Service Level Guarantees. There are no guarantees here.
This isn’t to pick on Amazon – other vendors have had – and will have – interruptions in service. For most companies, the cloud will still be the most cost-effective and reliable way to host applications; few companies can compete with the big cloud platform vendors for expertise, focus, reliability, security, economies of scale, and efficiency. It is only a matter of time before you are there. Today, your competitors (known and unknown) are moving there already. As a wise man once told me (citing Crossing the Chasm), the innovators and early adopters are those companies willing to trade off risk for competitive advantage. You saw it here first: this Internet thing is going to stick around for a while. Yes, and cloud services will just make too much sense to ignore. You will be on the cloud; it is only a matter of where you’ll be on the curve.
Back to all those nines… Of course, Amazon has done nothing wrong here. I see nothing inaccurate or deceptive in their documentation. But those of us in the community need to pay closer attention to what is really being described. So here’s a small favor I ask of this technology community I am part of: Let’s please do our homework so that when we discuss and compare the cloud platforms – on blogs, when giving talks, or chatting 1:1 – we can at least keep the discussions based on facts.
The July Boston Azure User Group meeting had a tough act to follow: the June meeting included a live, energy-packed Rock, Paper, Azure hacking contest hosted by Jim O’Neil! The winners were chosen completely objectively, since the Rock, Paper, Azure server managed the whole competition. First prize was taken by two teenagers (Kevin Wilder and T.J. Wilder) whose entry beat out around 10 others (including a number of entries from professional programmers!).
This month’s July Boston Azure User Group meeting was up for the challenge.
Mark Eisenberg of Microsoft then shared some great insights about the cloud and the Windows Azure Platform – what they really are, why they matter, and how they fit into the real world. You can find Mark on Twitter @azurebizandtech.
We wrapped up the meeting with a short live demonstration of the Windows Azure Portal doing its thing. Then a few of us retired to the Muddy.
Hope to see you at the Boston Azure meeting in August (Windows Phone 7 + Azure), two meetings in September (one in Waltham (first time EVER), and the “usual” one at NERD), and then kicking off a two-day Boston Azure Bootcamp!
I spoke last night to the Boston .NET Architecture Study Group about Architecture Patterns for Scalability and Reliability in the context of the Windows Azure cloud computing platform.
The deck is attached at the bottom, after a few links of interest for folks who want to dig deeper.
Command Query Responsibility Segregation (CQRS):
I’m a big fan of Bertrand Meyer’s work, and I just learned that CQRS is based on his earlier Command-Query Separation (CQS) principle
Martin Fowler has an entry on CQRS (recently added; I will now read this)