
Azure Cloud Storage Improvements Hit the Target

Windows Azure Storage (WAS)

Brad Calder delivering the SOSP talk (video: http://www.youtube.com/watch?v=QnYdbQO0yj4)

Since its initial release, Windows Azure has offered a storage service known as Windows Azure Storage (WAS). According to the SOSP paper and related talk published by the team (led by Brad Calder), WAS is architected to be a “Highly Available Cloud Storage Service with Strong Consistency.” Part of being highly available is keeping your data safe and accessible. The SOSP paper mentions that the WAS service retains three copies of every stored byte, plus (as announced a few months before the SOSP paper) another asynchronously geo-replicated trio of copies in a data center hundreds of miles away in the same geo-political region. Six copies in total.

WAS is a broad service, offering not only blob (file) storage, but also a NoSQL store and a reliable queue.

Further, all of these WAS storage offerings are strongly consistent (as opposed to other storage approaches, which are sometimes only eventually consistent). Again citing the SOSP paper: “Many customers want strong consistency: especially enterprise customers moving their line of business applications to the cloud.” Traditional data stores are strongly consistent, and code needs to be specially crafted to handle an eventually consistent model, so offering strong consistency simplifies moving existing code into the cloud.
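
To make the difference concrete, here is a minimal sketch of the extra care application code has to take when a store is only eventually consistent. The blob_store client and its put/get methods are hypothetical stand-ins, not any particular SDK:

```python
import time

def read_after_write_strong(blob_store, key, value):
    # With a strongly consistent store (like WAS), a read issued
    # immediately after a successful write is guaranteed to see it.
    blob_store.put(key, value)
    return blob_store.get(key)  # always returns the value just written

def read_after_write_eventual(blob_store, key, value, timeout=30.0):
    # With an eventually consistent store, the application itself must
    # tolerate stale reads, for example by polling until the write is visible.
    blob_store.put(key, value)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if blob_store.get(key) == value:
            return value
        time.sleep(0.5)  # back off and re-read while the replicas converge
    raise TimeoutError("write not yet visible; the application must handle stale data")
```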

The points made so far just establish some basic properties of the system before jumping into the real purpose of this article: performance at scale. The particular points mentioned (high availability, storage in triplicate and then geo-replicated, strong consistency, and support for a NoSQL database and reliable queuing) were highlighted because they might be assumed to be disadvantages – rich capabilities that could be expected to hamper scalability and performance. Except that they don’t hamper scalability and performance at all. Read on for details.

Performance at Scale

A couple of years ago, Nasuni benchmarked the major public cloud vendors on cloud file storage at scale (using workloads modeled after those observed in real-world business scenarios). Among the public clouds tested were Windows Azure Storage (though only the blob/file storage aspect was considered), Amazon S3 (an eventually consistent file store), and a couple of others.

In the first published result in 2011, Nasuni declared Amazon S3 the overall winner, prevailing over Windows Azure Storage and others, though WAS finished ahead of Amazon in some of the tests. At the time of these tests, WAS was running on its first-generation network architecture and supported capacity as described in the team’s published scalability targets from mid-2010.

In 2012, Microsoft network engineers were busy implementing a new data center network design they call Quantum 10 (Q10 for short). The original network design was hierarchical, but the Q10 design is flat (and adds other improvements, such as SSDs for journaling). The end result of this dramatic redesign is that WAS-based network storage is much faster, more scalable, and as robust as ever. The corresponding Q10 scalability targets were published in November 2012 and show substantial advances. EDIT: the information on scalability targets and related factors is kept up to date in the official documentation here.

Q10 was implemented during 2012 and apparently was in place before Nasuni ran its updated benchmarks between November 2012 and January 2013. With its fancy new network design in place, WAS really shone. While the 2011 results were close, with Amazon S3 the overall winner, this time the results were a blowout: Windows Azure Storage was declared the winner, sweeping all other contenders across the three categories.

“This year, our tests revealed that Microsoft Azure Blob Storage has taken a significant step ahead of last year’s leader, Amazon S3, to take the top spot. Across three primary tests (performance, scalability and stability), Microsoft emerged as a top performer in every category.” – Nasuni report

The Nasuni report goes on to mention that “the technology [Microsoft] are providing to the market is second to none.”

Reliability

One aspect of the report I found very interesting was the error rates. For several of the vendors (including Amazon, Google, and Azure), Nasuni reported that not a single error was detected during 100 million write attempts. And Microsoft stood alone on the read tests: “During read attempts, only Microsoft resulted in no errors.” In my book, I write about the Busy Signal Pattern, which is needed whenever transient failures occur during attempts to access a cloud service. The scenario described in the book showed the number of retries needed when I uploaded about four million files. Of course, the Busy Signal Pattern will still be needed for storage access and other services – not all transient failures can be eliminated from multitenant cloud services running on commodity hardware and served over the public internet – and while these results are no guarantee there won’t be any, they do bode well for improvements in throughput and user experience.
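
For readers who have not seen the pattern, the core idea is simply to retry transient failures with a backoff before giving up. Here is a minimal sketch of that idea; the TransientStorageError type and the upload_once callable are hypothetical stand-ins, not part of any real SDK:

```python
import random
import time

class TransientStorageError(Exception):
    """Stand-in for the transient 'busy' errors a multitenant storage service can return."""

def upload_with_retries(upload_once, max_attempts=5, base_delay=0.5):
    # Retry transient failures with exponential backoff plus a little jitter;
    # after max_attempts, give up and let the caller decide what to do.
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_once()
        except TransientStorageError:
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)
```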

And while it has always been the case that you can trust WAS for high availability, these days it is very hard to find any reason – certainly not performance or scalability – not to consider Windows Azure Storage. Further, WAS, S3, and Google Storage all have similar pricing (already low, and trending even lower), and Azure, Google, and Amazon have the same SLAs for storage.

References

Note that the Nasuni report was published February 19, 2013 on the Nasuni blog and is available from their web site, though it is gated, requiring that you fill out a contact form for access. The link is here: http://www.nasuni.com/blog/193-comparing_cloud_storage_providers_in

Other related articles of interest:

  1. Windows Azure beats the competition in cloud speed test – Oct 7, 2011 – http://yossidahan.wordpress.com/2011/10/07/windows-azure-beats-the-competition-in-cloud-speed-test/
  2. Amazon bests Microsoft, all other contenders in cloud storage test – Dec 12, 2011 –
  3. Only Six Cloud Storage Providers Pass Nasuni Stress Tests for Performance, Stability, Availability and Scalability – Dec 11, 2011 – http://www.nasuni.com/news/press_releases/46-only_six_cloud_storage_providers_pass_nasuni_stress
  4. Cloud computing showdown: Amazon vs. Rackspace (OpenStack) vs. Microsoft vs. Google – Dec 3, 2012 – http://www.networkworld.com/news/2012/120312-argument-cloud-264454.html
  5. Microsoft Azure overtakes Amazon’s cloud in performance test – Feb 19, 2013 – http://www.networkworld.com/news/2013/021913-azure-aws-266831.html?hpg1=bn

Quick: How many 9s are in your SLA?

I recently attended an event where one of the speakers was the CTO of a company built on top of Amazon cloud services, the most critical of these being the Simple Storage Service known as Amazon S3.

The S3 service runs “out there” (in the cloud) and provides a scalable repository for applications to store and manage data files. The service can support files of any size, in any quantity. So you can put as much stuff up there as you want – and since it is a pay-as-you-go service, you pay only for what you use. The S3 service is very popular. An example of a well-known customer, according to Wikipedia, is SmugMug:

Photo hosting service SmugMug has used S3 since April 2006. They experienced a number of initial outages and slowdowns, but after one year they described it as being “considerably more reliable than our own internal storage” and claimed to have saved almost $1 million in storage costs.

Good stuff.

Of course, Amazon isn’t the only cloud vendor with such an offering. Google offers Google Storage, and Microsoft offers Windows Azure Blob Storage; both offer features and capabilities very similar to those of S3. While Amazon was the first to market, all three services are now mature, and all three companies are experts at building internet-scale systems and high-volume data storage platforms.

As I mentioned above, S3 came up during a talk I attended. The speaker – CTO of a company built entirely on Amazon services – twice touted S3’s incredibly strong Service Level Agreement (SLA). He said this was a competitive differentiator both for his company and for Amazon versus other cloud vendors.

Pause and think for a moment – any idea? – What is the SLA for S3? How about Google Storage? How about Windows Azure Blob Storage?

Before I give away the answer, let me remind you that a Service Level Agreement (SLA) is a written policy offered by the service provider (Amazon, Google, and Microsoft in this case) that describes the level of service being offered, how it is measured, and the consequences if it is not met. Usually, the “level of service” part relates to uptime and is measured in “nines,” as in 99.9% (“three nines”) and so forth. More nines is better, in general – and Wikipedia offers a handy chart translating the number of nines into aggregate downtime/unavailability. (More generally, an SLA also deals with other factors – like refunds to customers if expectations are not met, what speed to expect, limitations, and more. I will focus only on the “nines” here.)

So… back to the question… For S3 and equivalent services from other vendors, how many nines are in the Amazon, Google, and Microsoft SLAs? The speaker at the talk said that S3 had an uptime SLA with 11 9s. Let me say that again – eleven nines – or 99.999999999% uptime. If you attempt to look this up in the chart mentioned above, you will find this number is literally “off the chart” – the chart doesn’t go past six nines! But my back-of-the-envelope calculation says it amounts to – on average – about a third of a millisecond of downtime per year, far less than a single blink of your eye.
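
The arithmetic is simple enough to check. Here is my back-of-the-envelope math expressed as a few lines of Python (my own rough figures, not anyone's official numbers):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # roughly 31.5 million seconds

def downtime_per_year(nines):
    # Average annual downtime implied by an uptime with that many nines.
    unavailability = 10 ** -nines
    return SECONDS_PER_YEAR * unavailability

print(downtime_per_year(3))   # three nines  -> ~31,536 seconds (almost 9 hours)
print(downtime_per_year(6))   # six nines    -> ~31.5 seconds
print(downtime_per_year(11))  # eleven nines -> ~0.0003 seconds (about a third of a millisecond)
```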

This is an impressive number! If only it were true. It turns out the real SLA for Amazon S3 has exactly as many nines as the SLAs for Windows Azure Blob Storage and Google Storage: they are all 99.9%.

Storage SLAs for Amazon, Google, and Microsoft all have exactly the same number of nines: they are all 99.9%. That’s three nines.

I am not picking on the CTO I heard gushing about the (non-existent) eleven-nines SLA. (In fact, his or her identity is irrelevant to the overall discussion here.) The more interesting part to me is the impressive reality distortion field around Amazon and its platform’s capabilities. The CTO I heard speak got it wrong, but this is not the first time that figure has been misinterpreted as an SLA, and it won’t be the last.

I tracked down the origin of the eleven nines. Amazon CTO Werner Vogels mentions in a blog post that the S3 service is “design[ed]” for “99.999999999% durability” – choosing his words carefully. Consistent with Vogels’ language is the following Amazon FAQ on the same topic:

Q: How durable is Amazon S3?
Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

First of all, these statements are from a blog post and an FAQ page; neither is from a company SLA. Second, they both speak to durability of objects – not uptime or availability. And third, also critically, they say the service is “designed” for all those nines – they guarantee nothing of the sort. Even so, it is a bold statement. And good marketing.
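
To be fair, the durability arithmetic in the FAQ does hold together. A quick sanity check (again just back-of-the-envelope, not an official calculation):

```python
durability = 0.99999999999            # eleven nines of designed durability
annual_loss_rate = 1 - durability     # ~1e-11, i.e. 0.000000001% of objects per year

objects_stored = 10_000
expected_losses_per_year = objects_stored * annual_loss_rate  # ~1e-7 objects per year
years_per_lost_object = 1 / expected_losses_per_year          # ~10,000,000 years

print(expected_losses_per_year, years_per_lost_object)
```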

It is nice that Amazon can have so much confidence in their S3 design. I did not find a comparable statement of confidence in the design of their compute infrastructure… The reality is that cloud services are about more than design and architecture – they are also about implementation, operations, management, and more. To have any hope, the architecture and design need to be solid, of course, but alone they cannot prevent a general service outage that could take your site down with it (or, even still, the occasional loss of data). Others on the interwebs are as skeptical as I am – not just of Amazon, but of anyone claiming too many nines.

How about the actual 99.9% “three-nines” SLA? Be careful in your expectations. As a wise man once told me, there’s a reason they are called Service Level Agreements, rather than Service Level Guarantees. There are no guarantees here.

This isn’t to pick on Amazon – other vendors have had – and will have – interruptions in service. For most companies, the cloud will still be the most cost-effective and reliable way to host applications; few companies can compete with the big platform cloud vendors on expertise, focus, reliability, security, economies of scale, and efficiency. It is only a matter of time before you are there. Today, your competitors (known and unknown) are moving there already. As a wise man once told me (citing Crossing the Chasm), the innovators and early adopters are those companies willing to trade off risk for competitive advantage. You saw it here first: this Internet thing is going to stick around for a while. Yes, and cloud services will just make too much sense to ignore. You will be on the cloud; it is only a matter of where you’ll be on the curve.

Back to all those nines… Of course, Amazon has done nothing wrong here. I see nothing inaccurate or deceptive in their documentation. But those of us in the community need to pay closer attention to what is really being described.  So here’s a small favor I ask of this technology community I am part of: Let’s please do our homework so that when we discuss and compare the cloud platforms – on blogs, when giving talks, or chatting 1:1 – we can at least keep the discussions based on facts.