Category Archives: Cloud Computing

Start Windows Azure Storage Emulator from a Shortcut

When building applications to run on Windows Azure you can get a lot of development and testing done without ever leaving your developer desktop. Much of this is due to the convenient fact that most code “just works” on Windows Azure. How can that be, you might wonder? Running on Windows Azure in many cases amounts to nothing different than running on Windows Server 2012 (or Linux, should you choose). In other words, most generic PHP, C#, C++, Java, Python, and <your favorite language here> code just works.

Once your code starts accessing specific cloud features, you face a choice: access those services in the cloud, or use the local development emulator. You can access most cloud services directly from code running on your developer desktop – it usually just amounts to a REST call under the hood (with some added latency from desktop to cloud and back) – and it is an efficient and effective way to debug. But the development emulator gives you another option for certain Windows Azure cloud services.

A common use case for the local development emulator is debugging web applications – ASP.NET, ASP.NET MVC, or Web API – that run either in Cloud Services or just in a Web Site. The difference matters because when debugging a Cloud Service, Visual Studio will start the Storage Emulator automatically, but this will not happen if you are debugging web code that does not run from a Cloud Service. So if your web code is accessing Blob Storage, for example, running it locally will produce a timeout when it attempts to access Storage. That is, unless you ensure that the Storage Emulator has been started. Here’s an easy way to do this. Normally you only need to do it once per login, since the emulator keeps running until you stop it.
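
For context, code that talks to the emulator typically just points at the development storage account. Here is a minimal sketch (assuming the Windows Azure storage client library; the container name is made up):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class EmulatorBoundStorage
    {
        static void Main()
        {
            // Equivalent to parsing the connection string "UseDevelopmentStorage=true".
            // Calls below will time out if the Storage Emulator is not running.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

            CloudBlobClient blobClient = account.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
            container.CreateIfNotExists(); // this is the call that would hang without the emulator
        }
    }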

In my case, it was very convenient to have a shortcut that I could click to start the Storage Emulator on occasion. Here’s how to set it up. I’ll explain it as a shortcut (such as on a Windows 8 desktop), but the key step is very simple and easily used elsewhere.

Creating the Desktop Shortcut

  1. Right-click on the desktop
  2. From the pop-up menu, choose New –> Shortcut
  3. You get a dialog box asking what you’d like to create a shortcut for
  4. HERE’S IMPORTANT PART 1/2: click the Browse button, navigate to wherever your Windows Azure SDK is installed, and drill in to select csrun.exe
  5. In my case this places the path "C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe" into the text field.
  6. HERE’S IMPORTANT PART 2/2: after the end of the path (after the second double quote) add the parameter /devstore:start, which tells csrun to start up the Storage Emulator.
  7. Click Next to reach the last step – naming the shortcut.
  8. Perhaps change the name of the shortcut from the default (csrun.exe) to something like Start Storage Emulator.
  9. Done! Now you can double-click this shortcut to fire up the Windows Azure Storage Emulator.

On my dev computer, the path to start the Windows Azure Storage Emulator was: "C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe" /devstore:start
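
If you would rather start the emulator from code (say, at the start of an integration-test run), the same command works via Process.Start. A minimal sketch, assuming the same SDK path as above:

    using System.Diagnostics;

    class StartStorageEmulator
    {
        static void Main()
        {
            // Same path as the shortcut above (yours may differ by SDK version).
            // In my experience /devstore:start is harmless if the emulator is
            // already running – csrun just reports that and exits.
            var startInfo = new ProcessStartInfo(
                @"C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe",
                "/devstore:start") { UseShellExecute = false };

            using (var process = Process.Start(startInfo))
            {
                process.WaitForExit();
            }
        }
    }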

Now starting the Storage Emulator without having to use a Cloud Service from Visual Studio is only a double-click away.


Examine User Identity and Claims from Visual Studio Debugger

When debugging a claims-aware application (you ARE using claims, aren’t you?), sometimes it is useful to answer the question “which user is logged in (if any) and (if so) which claims are associated with said user.”

Assuming you are using Visual Studio and .NET 4.5, the simple solution is to add the following to one of your Visual Studio Watch windows:

System.Threading.Thread.CurrentPrincipal

[If you happen to be debugging ASP.NET code, you could save a little typing and instead add User to your Watch window. User should have the same value as the CurrentPrincipal in the context of ASP.NET. For ASP.NET WebForms User is a property of the Page class (Page.User), while for ASP.NET MVC User is a property of both the Controller class (Controller.User) and the HttpContext class (HttpContext.User).]

Drill in, and you will see something like the following:

[Watch window screenshot: System.Threading.Thread.CurrentPrincipal expanded]

If you then expand the Results View entry under Claims (the one that says “Expanding the Results View will enumerate the IEnumerable”), you will see all the claims.

In my case, some claims were flowing through Windows Azure Access Control Service (ACS), and these list the ACS namespace as the Issuer. Other claims were added at runtime by my code using a ClaimsAuthenticationManager module, and these list LOCAL AUTHORITY as the Issuer.

[Watch window screenshot: the enumerated claims, showing each claim’s Issuer]
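
If you would rather dump the same information from code than drill through Watch windows, a small helper like the following sketch (assuming .NET 4.5) does the trick:

    using System;
    using System.Security.Claims;
    using System.Threading;

    static class ClaimsDumper
    {
        // Dumps the current user's claims, with issuers, to the console.
        public static void DumpClaims()
        {
            var principal = Thread.CurrentPrincipal as ClaimsPrincipal;
            if (principal == null)
            {
                Console.WriteLine("No ClaimsPrincipal on the current thread.");
                return;
            }

            Console.WriteLine("Authenticated: {0}", principal.Identity.IsAuthenticated);
            foreach (Claim claim in principal.Claims)
            {
                // Issuer will show, e.g., an ACS namespace, or LOCAL AUTHORITY
                // for claims added at runtime by a ClaimsAuthenticationManager.
                Console.WriteLine("{0} = {1} (Issuer: {2})",
                    claim.Type, claim.Value, claim.Issuer);
            }
        }
    }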

Alternatively, you can add a more complex expression directly to your Watch window – using a cast to coerce the right type:

((System.Security.Claims.ClaimsPrincipal)(System.Threading.Thread.CurrentPrincipal))

This will also do the job – with a little less drilling.

Talk: What’s New in Windows Azure – New England Microsoft Dev Group

A couple of nights ago, I had the privilege of speaking at the New England Microsoft Dev Group in Waltham, MA. The talk was a general, high-level overview of the broad capabilities of the Windows Azure Cloud Platform, with some specific topics added by attendees as well. It turned out to be an interactive session with good questions from the group.

We agreed I would come back after the summer for an architecture-focused session; the session presented this time was more feature- and technology-oriented.

A few followups:

The deck I used is pretty short, but here it is in case you are interested:

My book, if you are interested, is described here (note: my next talk to the group will cover material more closely associated with the book, which is more focused on patterns and architecture in the context of designing effective cloud applications).

Talk: Architecting for the Cloud at Nashua Cloud .NET User Group

Last night I had the privilege of speaking at the Nashua Cloud .NET User Group in Nashua, NH. It was an engaged group to be sure – thanks for all the great questions.

A few followups:

  • Azure VM pricing: the $0.013/hour pricing mentioned for Extra Small instances of the Infrastructure as a Service (IaaS) Virtual Machine is shown here to be a promotional price, with the regular price of $0.02/hour (two cents per hour) kicking in on June 1. The architectures we spoke of in the talk used Platform as a Service (PaaS) Virtual Machines and the pricing for those is very similar, though slightly lower, and is shown here.
  • How many customers does Azure have: here is the 10,000 number that Udai shared, which is from about three years ago, when most of the tech world had not yet even heard of Azure. More recently, it was mentioned that there are 200,000 Azure customers and that Azure has passed $1 billion in revenue. So, according to those numbers, it appears to have grown 20x in a little less than three years. Additional interesting numbers are mentioned here and here.
  • We focused on use of Cloud Services last night, but we also mentioned Virtual Machines (part of what Microsoft is calling Infrastructure Services, i.e., IaaS) and Web Sites, noting that all use different approaches. You can read more about all of them here, where you’ll find write-ups for each specific area.
  • I mentioned that Blob Storage is also being used to support the persistent disks on the Infrastructure Services Virtual Machines, enabled in part by a new high-performance network architecture. I wrote about some of this before in a blog post titled Azure Cloud Storage Improvements Hit the Target.

The deck I used follows.

Architecting for the Cloud — NH Azure — 15-Mar-2013 — Bill Wilder (blog.codingoutloud.com)

My book, if you are interested, is described here. And the Boston Azure Cloud User Group can be found here.

[book cover: Cloud Architecture Patterns]

Clash of the Clouds Followup

Last night, Mark Eisenberg and I represented the Windows Azure Cloud Platform in a Clash of the Clouds panel discussion/debate opposite Erik Sebesta  and Ed Brennan who represented the Open Source cloud alternatives. Erik & Ed declared OpenStack to be the strongest of the open source options today, so it became about Azure vs. OpenStack.

While I will not attempt to reproduce the discussion (sorry! – though there are a few photos), I do want to follow up on a few questions that I offered to provide references on. If you have further questions, please feel free to put a comment on this post. Also, at the end of this post, you will find a link to the short “Azure in 3 minutes or less” deck we used to introduce the Windows Azure Cloud Platform at the very beginning (per the ground rules of the panel – we limited the intro to 3 minutes).

  • In response to the question about scalability of Windows Azure Blobs, here is the write-up I referenced on Windows Azure Storage Scalability Targets. Here is an additional (more comparative) discussion (follow links) you may find helpful: Azure Cloud Storage Improvements Hit the Target.
  • In response to the question about pricing, check out the Windows Azure pricing calculator. Note that for the Microsoft Server products (e.g. Windows Server, or SQL Server on Windows Azure SQL Database (offered as a service) or on a Virtual Machine (that you manage)), the cost of the license is baked into the hourly rental cost.
  • In response to the question about the ability to support different types of apps (whether new ones from startups, existing ones from a big company, etc.), see the spectrum of offerings described here: https://www.windowsazure.com/en-us/develop/net/fundamentals/compute/. In a nutshell: Web Sites is for hosting basic, low-scale sites (with a free tier), though these can scale very nicely too; Cloud Services is for building cloud-native applications using PaaS (which my book focuses on); Virtual Machines (parallel to what OpenStack offers in terms of managed VMs) is more useful for applications you want to run in the cloud with minimal change; and Virtual Networking allows many options for connecting your data center with a secure private network on Windows Azure, among other options.
  • In response to the question about openness, any programming language or platform can access the Windows Azure services through REST APIs, but here is the list of those with first-class SDKs: http://www.windowsazure.com/en-us/downloads/
  • For any further follow-up questions, feel free to leave a COMMENT below and I will update this post.

Windows Azure is not the only full-service, rock-solid cloud platform out there, but I hope you got an appreciation for how it might help you and why you might wish to choose it for your applications and services. If you are interested in learning more about Windows Azure, you may wish to check out the Boston Azure User Group, which has been meeting regularly at NERD since October 2009. Our next meeting is in just a few days: Tuesday May 9.

The SLIDE DECK we used for the 3 minute intro is here:

 

Talk: Azure Best Practices – How to Successfully Architect Windows Azure Apps for the Cloud

Webinar Registration:

  • Azure Best Practices – How to Successfully Architect Windows Azure Apps for the Cloud @ 1pm ET on 13-March-2013
  • VIEW RECORDING HERE: http://bit.ly/ZzQDDW 

Abstract:

Discover how you can successfully architect Windows Azure-based applications to avoid and mitigate performance and reliability issues with our live webinar.
Microsoft’s Windows Azure cloud offerings provide you with the ability to build and deliver a powerful cloud-based application in a fraction of the time and cost of traditional on-premises approaches. So what’s the problem? Tried-and-true traditional architectural concepts don’t apply when it comes to cloud-native applications. Building cloud-based applications must factor in answers to such questions as:

  • How to scale?
  • How to overcome failure?
  • How to build a manageable system?
  • How to minimize monthly bills from cloud vendors?

During this webinar, we will examine why cloud-based applications must be architected differently from traditional applications, and break down key architectural patterns that truly unlock cloud benefits. Items of discussion include:

  • Architecting for success in the cloud
  • Getting the right architecture and scalability
  • Auto-scaling in Azure and other cloud architecture patterns

If you want to avoid long nights, help-desk calls, frustrated business owners and end-users, then don’t miss this webinar or your chance to learn how to deliver highly-scalable, high-performance cloud applications.

Deck:

Book:

The core ideas were drawn from my Cloud Architecture Patterns (O’Reilly Media, 2012) book:

[book cover: Cloud Architecture Patterns]

Hosted by Dell.

Azure Cloud Storage Improvements Hit the Target

Windows Azure Storage (WAS)

[Video: Brad Calder delivering the SOSP talk – http://www.youtube.com/watch?v=QnYdbQO0yj4]

Since its initial release, Windows Azure has offered a storage service known as Windows Azure Storage (WAS). According to the SOSP paper and related talk published by the team (led by Brad Calder), WAS is architected to be a “Highly Available Cloud Storage Service with Strong Consistency.” Part of being highly available is keeping your data safe and accessible. The SOSP paper mentions that the WAS service retains three copies of every stored byte, plus (announced a few months before the SOSP paper) another asynchronously geo-replicated trio of copies in a data center hundreds of miles away in the same geo-political region. Six copies in total.

WAS is a broad service, offering not only blob (file) storage, but also a NoSQL store and a reliable queue.

Further, all of these WAS storage offerings are strongly consistent (as opposed to other storage approaches, which are sometimes eventually consistent). Again citing the SOSP paper: “Many customers want strong consistency: especially enterprise customers moving their line of business applications to the cloud.” This is because traditional data stores are strongly consistent, and code needs to be specially crafted to handle an eventually consistent model; strong consistency thus simplifies moving existing code into the cloud.

The points made so far are just to establish some basic properties of this system before jumping into the real purpose of this article: performance at scale. The particular points mentioned (highly available, storage in triplicate and then geo-replicated, strong consistency, plus NoSQL database and reliable queuing features) were highlighted because they are rich capabilities that might be assumed to hamper scalability and performance. Except that they don’t hamper scalability and performance at all. Read on for details.

Performance at Scale

A couple of years ago, Nasuni began benchmarking the major public cloud vendors on how their services perform on cloud file storage at scale (using workloads modeled after those observed in real-world business scenarios). Among the public clouds tested were Windows Azure Storage (though only the blob/file storage aspect was considered), Amazon S3 (an eventually consistent file store), and a couple of others.

In the first published result in 2011, Nasuni declared Amazon S3 the overall winner, prevailing over Windows Azure Storage and others, though WAS finished ahead of Amazon in some of the tests. At the time of these tests, WAS was running on its first-generation network architecture and supported capacity as described in the team’s published scalability targets from mid-2010.

In 2012, Microsoft network engineers were busy implementing a new data center network design they are calling Quantum 10 (or Q10 for short). The original network design was hierarchical, but the Q10 design is flat (and uses other improvements like SSD for journaling). The end result of this dramatic redesign is that WAS-based network storage is much faster, more scalable, and as robust as ever. The corresponding Q10 scalability targets were published in November 2012 and show substantial advances. EDIT: the information on scalability targets and related factors is kept up to date in official documentation here.

Q10 was implemented during 2012 and apparently was in place before Nasuni ran its updated benchmarks between November 2012 and January 2013. With its fancy new network design in place, WAS really shined. While the results in 2011 were close, with Amazon S3 being the overall winner, in 2012 the results were a blowout, with Windows Azure Storage being declared the winner, sweeping all other contenders across the three categories.

“This year, our tests revealed that Microsoft Azure Blob Storage has taken a significant step ahead of last year’s leader, Amazon S3, to take the top spot. Across three primary tests (performance, scalability and stability), Microsoft emerged as a top performer in every category.” – Nasuni report

The Nasuni report goes on to mention that “the technology [Microsoft] are providing to the market is second to none.”

Reliability

One aspect of the report I found very interesting was the error rates. For several of the vendors (including Amazon, Google, and Azure), Nasuni reported that not a single error was detected during 100 million write attempts. And Microsoft stood alone for the read tests: “During read attempts, only Microsoft resulted in no errors.” In my book, I write about the Busy Signal Pattern, which is needed whenever transient failures result during attempts to access a cloud service. The scenario described in the book shows the number of retries needed when I uploaded about four million files. Of course, the Busy Signal Pattern will still be needed for storage access and other services – not all transient failures can be eliminated from multitenant cloud services running on commodity hardware served over the public internet – and while these results are no guarantee there won’t be any, they do bode well for improvements in throughput and user experience.

And while it’s always been the case that you can trust WAS for HA, these days it is very hard to find any reason – certainly not performance or scalability – not to consider Windows Azure Storage. Further, WAS, S3, and Google Storage all have similar pricing (already low – and trending towards even lower prices) – and Azure, Google, and Amazon have the same SLAs for storage.

References

Note that the Nasuni report was published February 19, 2013 on the Nasuni blog and is available from their web site, though it is gated, requiring that you fill out a contact form for access. The link is here: http://www.nasuni.com/blog/193-comparing_cloud_storage_providers_in

Other related articles of interest:

  1. Windows Azure beats the competition in cloud speed test – Oct 7, 2011 – http://yossidahan.wordpress.com/2011/10/07/windows-azure-beats-the-competition-in-cloud-speed-test/
  2. Amazon bests Microsoft, all other contenders in cloud storage test – Dec 12, 2011 –
  3. Only Six Cloud Storage Providers Pass Nasuni Stress Tests for Performance, Stability, Availability and Scalability – Dec 11, 2011 – http://www.nasuni.com/news/press_releases/46-only_six_cloud_storage_providers_pass_nasuni_stress
  4. Dec 3, 2012 – http://www.networkworld.com/news/2012/120312-argument-cloud-264454.html – Cloud computing showdown: Amazon vs. Rackspace (OpenStack) vs. Microsoft vs. Google
  5. http://www.networkworld.com/news/2013/021913-azure-aws-266831.html?hpg1=bn – Feb 19, 2013 – Microsoft Azure overtakes Amazon’s cloud in performance test

Talk: Architecting for the Cloud at Boston Code Camp #19

On Saturday March 9, 2013, I teamed up with Joan Wortman on a talk at the 19th (!) Boston Code Camp. Some of the patterns I discuss require different thinking about application architecture, including aspects that impact the user experience (UX), so Joan (a UX expert) joined me to add context around how to deal with some of these UX challenges as they intersect with architecture.

I also hope to see many of the attendees at future Boston Azure meetings (held at the same location as the Boston Code Camp – NERD in Cambridge, MA). Also feel free to post follow-up questions to this post, email me (codingoutloud on gmail), or ask me on twitter where I am @codingoutloud.

Here are a couple of questions that came up in the talk:

  1. How much does the cloud cost? As I mentioned, this question deserves some discussion since it is not as simple as looking at the pricing calculator (which can be found here). Sometimes the cloud will be less costly, sometimes more costly. (I did point out there is a free tier for Windows Azure Web Sites.) One major factor is the cost of resources (which is trending down over time). Another major factor is the impact of reducing resource usage when it is not needed. For example, consider a Line of Business application which is used only during business hours in North America and can be turned off completely (accruing no VM usage charges) during non-business hours, weekends, and holidays. As another example, consider that you don’t need to own resources for the “spike” at the Superbowl (like the Shazam scenario described by Joan) since you can “give it all back” (stop paying) once the rush is over. There are also other considerations when you get into DR, HA, and geo-distribution. (I wrote about the RPO and RTO terms in the context of Engineering for DR in the Cloud recently.) And still another factor is understanding what you are paying for – don’t forget the Iceberg idea – so do not compare cloud pricing with that of traditional hosting (unless that’s what you really want), since hosting is not cloud computing!
  2. Why can I only access 32 messages at a time from the Windows Azure Storage Queue? The same limit applies whether we are talking about “peeking” (looking at what’s on the queue without removing it) or retrieving messages for exclusive access. I don’t know why this particular limit was chosen (why not 20? why not 100?) so I could only speculate on that. The bottom line is that all messages can be accessed – it just sometimes requires more than one call (see the sketch following this list). I wish I had time to probe into the application scenario that would benefit from grabbing so many messages at once, but due to time constraints did not do that. I will answer the question further if I get a follow-up question.
  3. Where can I find the mail app that Joan mentioned? The Mailbox app is for iOS and can be found in your app store or directly on iTunes here: https://itunes.apple.com/us/app/mailbox/id576502633?mt=8 (and there’s a lot of press – such as this story here).
  4. OTHER QUESTIONS? Send ’em along!
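
To make the answer to question 2 concrete, here is a minimal sketch of draining a queue in batches of 32, assuming the storage client library; the queue name and the use of the development storage account are just for illustration:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class DrainQueue
    {
        static void Main()
        {
            // Development storage account used purely for illustration.
            CloudQueueClient queueClient =
                CloudStorageAccount.DevelopmentStorageAccount.CreateCloudQueueClient();
            CloudQueue queue = queueClient.GetQueueReference("myqueue");
            queue.CreateIfNotExists();

            while (true)
            {
                // 32 is the per-call maximum; each retrieved message is hidden
                // from other consumers for the default visibility timeout.
                var batch = queue.GetMessages(32);
                bool sawAny = false;
                foreach (CloudQueueMessage message in batch)
                {
                    sawAny = true;
                    Console.WriteLine(message.AsString);
                    queue.DeleteMessage(message); // delete after successful processing
                }
                if (!sawAny) break; // queue (momentarily) empty
            }
        }
    }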

Hope to see you at Boston Azure!


Much of the material for the talk also appears in my book:

[book cover: Cloud Architecture Patterns]

Talk: How is Architecting for the Cloud Different?

On Thursday 07-February-2013 I spoke at DevBoston about “How is Architecting for the Cloud Different?”

Here is the abstract:

If my application runs on cloud infrastructure, am I done? Not if you wish to truly take advantage of the cloud. The architecture of a cloud-native application is different than the architecture of a traditional application and this talk will explain why. How to scale? How do I overcome failure? How do I build a system that I can manage? And how can I do all this without a huge monthly bill from my cloud vendor? We will examine key architectural patterns that truly unlock cloud benefits. By the end of the talk you should appreciate how cloud architecture differs from what most of us have become accustomed to with traditional applications. You should also understand how to approach building self-healing distributed applications that automatically overcome hardware failures without downtime (really!), scale like crazy, and allow for flexible cost-optimization.

Here are the slides:

How is Architecting for the Cloud Different — DevBoston — 06-Feb-2013 — Bill Wilder (blog.codingoutloud.com)

Here is the book we gave away copies of (and from which some of the material was drawn):

[book cover: Cloud Architecture Patterns]

Ready to learn more about Windows Azure? Come join us at the Boston Azure Cloud User Group!

[Boston Azure cloud user group logo]

Beyond IaaS for the IT Pro (Part 21 of 31)

[This post is part 21 of the 31 Days of Server (VMs) in the Cloud Series – I contributed the article below, but all others have been contributed by others – please find the index for the whole series by clicking here.]

As technology professionals we need to be careful about how we spend our time. Unless we want short careers, we find time to keep up with at least some new technologies, but there isn’t time in anyone’s day to keep up with every technology. We have to make choices.

For the IT Pro looking at cloud technologies, the IaaS capabilities are a far more obvious area on which to spend time than PaaS capabilities. In this post, we’ll take a peek into PaaS. The goal is to clarify the difference between IaaS and PaaS, understand what PaaS is uniquely good for, and offer some reasons why a busy IT Pro might want to invest some time learning about PaaS.

While the concepts in this post can apply generally to many platforms – including public and private clouds, Microsoft technologies and competing solutions – this post focuses on IaaS and PaaS capabilities within the Windows Azure Cloud Platform. Virtual machines and SQL databases are highlighted since these are likely of greatest interest to the IT Pro.

The Options – From Ten Thousand Feet

The NIST Definition of Cloud Computing (SP800-145) defines some terms that are widely used in the industry for classifying cloud computing approaches. One set of definitions delineates Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). You can read the NIST definitions for more details, but the gist is this:

Service Model | What You Provide | Target Audience | Control & Flexibility | Expertise Needed | Example
SaaS | Users | Business Users | Low | App usage | Office 365
PaaS | Applications | Developers | Medium | App design and mgmt | Windows Azure Cloud Services
IaaS | Virtual Machines | IT Pros | High | App design and mgmt + VM/OS mgmt | Windows Azure Virtual Machines (Windows Server, Linux)

Generally speaking, as we move from SaaS through PaaS to IaaS, we gain more control and flexibility at the expense of more cost and expertise needed due to added complexity. There are always exceptions (perhaps a SaaS solution that requires complex integration with an on-premises solution), but this is good enough to set the stage. Now let’s look at the core differences between PaaS and IaaS as they relate to the IT Pro.

Not All VMs are Created Equal

Even though Windows Azure has vastly more to offer (more on that later), the most obvious front-and-center offering is the humble VM. This is true both for PaaS and IaaS. So what distinguishes the two approaches?

The VMs for PaaS and IaaS behave very differently. The PaaS VM has a couple of behaviors that may surprise you, while the IaaS VM behavior is more familiar. Let’s start with the most far-reaching difference: On a PaaS VM, local storage is not durable.

This has significant implications. Suppose you install software (perhaps a database) on a PaaS VM and it stores some data locally. This will work fine… at least for a short while. At some point, Azure will migrate your application from one node to another… and it will not bring local data with it. Your locally-stored database data, not to mention any custom system tuning you did during installation, are gone. And this is by design. (For a list of scenarios where PaaS VM drive data is destroyed, see the bottom of this document.)

How can this possibly be useful: a VM that doesn’t hold on to its local data…

You might wonder how this can possibly be useful: a VM that doesn’t hold on to its data. The fact of the matter is that it is not very useful for many applications written with conventional (pre-cloud) assumptions (such as guarantees around the durability of data). [PaaS may not be good at running certain applications, but is great at running others. So please keep reading!]

PaaS VM Local Storage

The PaaS VM drives use conventional server hard drives. These can fail, of course, and they are not RAID or high-end drives; this is commodity hardware optimized for high value for the money. And even if drives don’t outright fail, there are scenarios where the Azure operating environment does not guarantee durability of locally stored data (as referenced earlier).
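
For what it’s worth, here is roughly what asking for that local scratch space looks like from PaaS role code – a sketch, where “TempStore” is a hypothetical LocalStorage resource you would declare in ServiceDefinition.csdef:

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    class ScratchSpace
    {
        static void WriteScratchFile()
        {
            // "TempStore" is a hypothetical LocalStorage resource declared in
            // ServiceDefinition.csdef; anything written here must be recomputable.
            LocalResource scratch = RoleEnvironment.GetLocalResource("TempStore");
            string path = Path.Combine(scratch.RootPath, "cache.tmp");
            File.WriteAllText(path, "disposable data only - this disk is not durable");
        }
    }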

IaaS VM Local Storage

On the other hand, IaaS VMs do have persistent/durable local drives. This is what makes them so much more convenient to use – and why they have a more familiar feel to IT Pros (and developers). But these drives are not the local server hard drives (other than the D: drive, which is expected to be used only for temporary caching); instead they use a high-capacity, highly scalable data storage service known as the Windows Azure Blob service (“blobs” for short, where each blob is roughly equivalent to a file, and each drive referenced by the VM is a VHD stored as one of these files). Data stored in blobs is safe from hardware failure: it is stored in triplicate by the blob service (each copy on a different physical node), and is then also geo-replicated in the background to a data center in another region, resulting (after a few minutes of latency) in an additional three copies.

IaaS VMs have persistent/durable local storage backed by blobs… this makes them so much more convenient to use – and more familiar to IT Pros

Storing redundant copies of your data offers a RAID-like feel, though it is more cost-efficient at the scale of a data center.

Since blob storage transparently handles the drives for IaaS VMs (the operating system drive plus one or more data drives) and is external to any particular VM instance, the model is not only familiar but also extremely robust and convenient.

Summarizing Some Key Differences

 | PaaS VM | IaaS VM
Virtual Machine image | Choose from Win 2008 SP2, Win 2008 R2, and Win 2012; there are patch releases within each of these families. | Many to choose from, including images you can create yourself; can be Windows or Linux.
Hard Disk Persistence | Not durable; could be lost due to hardware failure or when moved from one machine to another. | Durable; backed by a blob (blobs are explained below).
Service Level Agreement (SLA) | 99.95% for two or more instances (details); no SLA offered for a single instance. | 99.95% for two or more instances; 99.9% for a single instance (preliminary details).

SLA details for the IaaS VM are preliminary since the service is still in preview as of this writing.

SQL Database Options: PaaS vs IaaS

Windows Azure offers a PaaS database option, formerly called SQL Azure, and today known simply as SQL Database. This is really SQL Server behind the scenes, though it is not exactly the same as SQL Server 2012 (“Denali”).

SQL Database is offered as a service. This means with a few mouse clicks (or a few lines of PowerShell) you can have a database connection string that’s ready to go. Connecting to this database will actually connect you to a 3-node SQL Server cluster behind the scenes, but this is not visible to you; it appears to you to simply be a single-node instance. Three copies of your data are maintained by the cluster (each on different hardware).
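
Getting started looks like ordinary ADO.NET. A hedged sketch, with a hypothetical server name and credentials standing in for the values the portal gives you:

    using System.Data.SqlClient;

    class SqlDatabaseConnect
    {
        static void Main()
        {
            // Hypothetical server and credentials – the portal supplies real values.
            const string connectionString =
                "Server=tcp:yourserver.database.windows.net,1433;" +
                "Database=mydb;User ID=user@yourserver;Password=...;" +
                "Encrypt=True;Connection Timeout=30;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // behind the scenes: a 3-node cluster, invisible to you
                // ...ordinary ADO.NET from here on
            }
        }
    }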

Consider the three copies of every byte to be great for High Availability (HA), but they offer no defense against Human Error (HE). If someone drops the CUSTOMER table, that drop will be immediately replicated to all three copies of your data. You still need a backup strategy.

One big benefit of the SQL Database service is that the server is completely managed by Windows Azure… with the flip side of that coin being that an IT Pro simply cannot make any adjustments to the configuration. Note that SQL tuning and database schema design skills have not gone anywhere; this is all just as demanding in the cloud as outside the cloud.

SQL Database Service has a 150 GB Limit

SQL Database has some limitations. The most obvious is that you cannot store more than 150 GB in a single instance. What happens when you have 151 GB? This brings to light another PaaS/IaaS divergence: the IaaS approach is to grow the database (“scale up” or “vertical scaling”) while the PaaS approach is to add additional databases (“scale out” or “horizontal scaling”). For the SQL Database service in Windows Azure, only the “horizontal scaling” approach is supported – it is up to the application to distribute its data across more than one physical database, an approach commonly known as sharding, where each shard represents one physical database server. This can be a big change for an application to support since the database schema needs to be compatible, which usually means it needs to have been originally designed with sharding in mind. Further, the application needs to be built to handle finding and connecting to the correct shard.

For PaaS applications that wish to support sharding, the Federations in SQL Database feature provides robust support for handling most of the routine tasks. Without the kind of support offered by Federations, building a sharding layer can be far more daunting. Federations simplifies connection string management, has smart caching, and offers management features that allow you to repartition your data across SQL Database nodes without experiencing downtime.
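
As a rough illustration of the routing Federations handles for you, the sketch below switches an open connection to the federation member holding a given customer; the federation name and distribution key are hypothetical:

    using System.Data.SqlClient;

    class FederationRouting
    {
        // Routes an open connection to the federation member (shard) holding
        // the given customer; federation and key names here are hypothetical.
        static void ConnectToShard(SqlConnection connection, long customerId)
        {
            using (SqlCommand command = connection.CreateCommand())
            {
                // USE FEDERATION wants a literal key value; inlining is safe
                // here because customerId is numeric. FILTERING = ON scopes
                // subsequent queries to this customer's rows.
                command.CommandText = string.Format(
                    "USE FEDERATION CustomerFederation (cust_id = {0}) " +
                    "WITH RESET, FILTERING = ON", customerId);
                command.ExecuteNonQuery();
            }
        }
    }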

The alternative to SQL Database is for you to simply use an IaaS VM to host your own copy of SQL Server. You have full control (you can configure, tune, and manage your own database, unlike with the SQL Database service where these functions are all handled and controlled by Windows Azure). You can grow it beyond 150 GB. It is all yours.

But realize that in the cloud, there are still limitations. All public cloud vendors offer a fixed menu of virtual machine sizes, so you will need to ensure that your self-managed IaaS SQL Server will have enough resources (e.g., RAM) for your largest database.

Any database can outgrow its hardware, whether on the cloud or not.

It is worth pointing out that any database can outgrow its hardware. And the higher end the hardware, the more expensive it becomes from a “capabilities for the money” point of view. Eventually you can reach the point where (a) you can’t afford sufficiently large hardware, or (b) the needed hardware is so high end that it is not commercially available. This will drive you towards either a sharding architecture or some other approach to make your very large database smaller so that it will fit in available hardware.

SQL Database Service is Multitenant

Another significant difference between the SQL Database service and a self-hosted IaaS SQL Server is that the SQL Database service is multitenant: your data sits alongside the data of other customers. This is secure – one customer cannot access another customer’s data – but it does present challenges when one customer’s queries are very heavy, and another customer [potentially] experiences variability in performance as a result. For this reason, the SQL Database service protects itself and other customers by not letting any one customer dominate resources – this is accomplished with “throttling” and can manifest in multiple ways, from a delay in execution to dropping a connection (which the calling application is responsible for reestablishing).

Don’t underestimate the importance of properly handling throttling. Applications need to be written to handle these scenarios in order to function correctly. Throttling can happen even if your application is doing nothing wrong.

Handling throttling should not be underestimated. Proper throttling handling requires that application code handle certain types of transient failures and retry. Most existing application code does not do this. Blindly pointing an existing application at a SQL Database instance might seem to work, but it will also potentially experience occasional odd errors that may be hard to track down or diagnose if the application was written (and tested) in an environment where interactions with SQL Server always succeeded.
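
To make this concrete, here is a minimal retry sketch. It is a simplification – real code would more likely use something like the Transient Fault Handling Application Block from Patterns & Practices – and it checks only the common 40501 “service is busy” error number:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    static class ThrottlingRetry
    {
        // 40501 = "The service is currently busy" – the throttling error number.
        const int Throttled = 40501;

        public static void ExecuteWithRetry(string connectionString, string sql)
        {
            const int maxAttempts = 4;
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand(sql, connection))
                    {
                        connection.Open();
                        command.ExecuteNonQuery();
                        return; // success
                    }
                }
                catch (SqlException ex)
                {
                    // Rethrow anything that is not throttling, or when out of attempts.
                    if (ex.Number != Throttled || attempt == maxAttempts)
                        throw;
                    Thread.Sleep(TimeSpan.FromSeconds(10 * attempt)); // simple back-off
                }
            }
        }
    }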

The self-managed IaaS database does not suffer this unpredictability since you presumably control which application can connect and can manage resources more directly.

SQL Database Service has Additional Services

The SQL Database service has some easy-to-enable features that may make your life easier. One example is the database sync service that can be enabled in the Windows Azure Portal. You can easily configure a SQL Database instance to be replicated with one or more other instances in the same or a different data center. This can help with an offsite-backup strategy or with mirroring globally to reduce latency, and is one area where PaaS shines.

[Portal screenshot: SQL Data Sync configuration]

SQL Database Service is SQL Server

Windows Azure today offers the SQL Database service based on SQL Server 2012. If your application (for some reason) needs an older version of SQL Server (perhaps it is a vendor product and you don’t control this), then your hands are tied.

Or perhaps you want another database besides SQL Server. Windows Azure has a partner offering MySQL, and other vendor products will likely be offered over time. NoSQL Databases are also becoming more popular. Windows Azure natively offers the NoSQL Windows Azure Table service, and a few examples of other third-party ones include MongoDB, Couchbase, RavenDB, and Riak. Unless (or until) these are offered as PaaS services through the Windows Azure Store, your only option will be to run them yourself in an IaaS VM.

WazOps Features and Limitations

The main thrust of PaaS is to make operations efficient for applications designed to align with the PaaS approach – for example, applications that can deal with throttling, or can deal with a PaaS VM being migrated and losing all locally stored data. This is all doable – and without degrading the user experience – it just so happens that most applications that exist today (and will still exist tomorrow) don’t work this way.

The PaaS approach can be used to horizontally scale an application very efficiently (whether computational resources running on VMs or database resources sharded with Federations for SQL Database), overcome disruptions due to commodity hardware failures, gracefully handle throttling (whether from SQL Database or other Azure services not discussed), and do so with minimal human interaction. But getting to this point is not automatic.

WazOps – DevOps, Windows Azure style! – is the role that will build out this reality. There are auto-scaling tools – both external services and some that we can run ourselves, like the awesome WASABi auto-scaling application block from Microsoft’s Patterns & Practices group – that can be configured to scale an application on a schedule or based on environmental signals (like the CPU spiking in a certain VM).

There is also the mundane. How to script a managed deployment so our application can be upgraded without downtime? Windows Azure PaaS services have features for this, such as the in-place update and the VIP Swap. But we still need to understand them and create a strategy to use them appropriately.

Further, there are at least some of the same old details. For example, it is easy to deploy an SSL certificate to a PaaS VM running IIS… but that certificate will still expire in a year, and someone still needs to know this – and know what to do about it before it results in someone being called at 2:00 AM on a Sunday.

Should IT Pros Pass on PaaS?

Clearly there are some drawbacks to running PaaS since most existing applications will not run successfully without some non-trivial rework, but will work just fine if deployed to IaaS VMs.

However, that does not mean that PaaS is not useful. It turns out that some of the most reliable, scalable, cost-efficient applications in the world are architected for this sort of PaaS environment. The Bing services behind bing.com take this approach, as only one example. The key here is that these applications are architected assuming a PaaS environment. I don’t use the term “architected” lightly, since architecture dictates the most fundamental assumptions about how an application is put together. Most applications that exist today are not architected with PaaS-compatible assumptions. However, as we move forward, and developer skills catch up with the cloud offerings, we will see more and more applications designed from the outset to be cloud-native; these will be deployed using these PaaS facilities.

A stateless web tier (with no session affinity in the load balancer) is a good example today of an application tier that could run successfully in a PaaS environment – though I’ll be quick to note that other tiers of that application may not run so well in PaaS. Which brings up an obvious path going forward: hybrid applications that mix PaaS and IaaS. This will be a popular mix in the coming years.

Hybrid Applications

Consider a 3-tier application with a web tier running in IIS, a service tier, and a SQL Server back-end database. If built with conventional approaches, not considering the PaaS cloud, none of these three tiers would be ready for a PaaS environment. So we could deploy all three tiers using IaaS VMs.

As a software maintenance step, it would be reasonable to upgrade the web site (perhaps written in PHP or ASP.NET) to be stateless and not need session affinity (Windows Azure PaaS Cloud Services do not support session affinity from the load balancer). These types of changes may be enough to allow the web tier to run more efficiently using PaaS VMs, while still interacting with a service tier and database running on IaaS VMs.

A future step could upgrade the service tier to handle SQL Database throttling correctly, allowing the SQL Server instance running on an IaaS VM to be migrated to the SQL Database service. This will reduce the number of Windows servers and SQL Servers being managed by the organization (shifting these to Windows Azure), and may also simplify some other tasks (like replicating that data using the Data Sync Service). Each service and VM also has its own direct costs (our monthly bill to Microsoft for the Windows Azure Cloud services we consume), which are detailed in the pricing section of the Windows Azure Portal.

Still another future step could be to migrate the middle tier to be stateless – but maybe not. All of these decisions are business decisions; perhaps the cost-benefit is not there. It depends on your application, your business, and the skills and preferences of the IT Pros and developers in the organization.

Conclusions

I’ll summarize here with some of the key take-aways for the IT Pro who is new to PaaS services:

  1. Be aware of the challenges in migrating existing applications onto either PaaS VMs or SQL Database. If the application is not architected with the right assumptions (stateless VMs, SQL operations that may be throttled, 150 GB limit), it will not work correctly – even though it might seem to work at first. IaaS VMs will often present a better option.
  2. SQL Database does not support all of the features that SQL Server 2012 supports, though it does have some special ones of its own: it always runs as a three-node cluster for HA, and it has Federation support.
  3. PaaS is increasingly the right choice for new applications that can be architected for it from the outset. This assumes the team understands PaaS and has learned the needed skills! (I wrote a book – Cloud Architecture Patterns – to illuminate these new skills.)
  4. Pure IaaS and pure PaaS are not the only approaches. Hybrid approaches will be productive.
  5. PaaS will gain momentum long-term due to the economic benefits, since PaaS applications can be cheaper to run and maintain. There are direct costs, which are easy to measure (since you get a detailed bill), and indirect/people costs, which are more challenging to measure.
  6. WazOps (DevOps with an Azure spin) will be the role that delivers on the promise of PaaS going forward. Not only will the well-informed WazOps professional help avoid the issues of going too fast (see earlier points which speak to not all applications being PaaS-ready), but they will also understand the business drivers and economics of investing to move faster where appropriate for your business.

Feedback always welcome and appreciated. Good luck in your cloud journey!

[This post is part 21 of the 31 Days of Server (VMs) in the Cloud Series – please return to the series index by clicking here]