Category Archives: DevOps

Sorting out Digital Certificates – Dec 2012 Boston Azure Meeting

At the December 13, 2012 meeting of the Boston Azure Cloud User Group, I gave a short talk on how digital certificates work (cryptographically speaking).

The backstory is that Windows Azure uses certificates in a few different ways, and understanding those different uses is key to understanding why certificates are deployed and managed the way they are.

The slide deck is here:

Sorting Out Digital Certificates – 13-Dec-2012 – Bill Wilder – Boston Azure


Engineering for Disaster Recovery in the Cloud (Avoiding Data Loss)

Disaster Recovery, or DR, refers to your approach for recovering from an event that results in failure of your software system. Some examples of such events: hurricanes, earthquakes, and fires. The common thread is that these events are not your fault and they happen suddenly, usually at the most inconvenient of times.

[Image: storm clouds] Clouds are not always inviting! Be prepared for storm clouds.

Damage from one of these events might be temporary: a prolonged power outage that is eventually restored. Damage might be permanent: servers immersed in water are unlikely to work after drying out.

Whether it is a one-person shop with all the customer data on a single laptop or a large multinational with its own data centers, any business that uses computers to manage data important to that business needs to consider DR.

The remainder of this article focuses on some useful DR approaches for avoiding loss of business data when engineering applications for the cloud. The detailed examples are specific to the Windows Azure Cloud Platform, but the concepts apply more broadly, such as to Amazon Web Services and other cloud platforms. Notably, this post does not discuss DR approaches as they apply to other parts of the infrastructure, such as web server nodes or DNS routing.

Minimize Exposure

Your first line of defense is to minimize exposure. Consider a cloud application with business logic running on many compute nodes.

Terminology note: I will use the definition of node from page 2 of my Cloud Architecture Patterns book (and elsewhere in this post I will reference patterns and primers from the book where they add more information):

An application runs on multiple nodes, which have hardware resources. Application logic runs on compute nodes and data is stored on data nodes. There are other types of nodes, but these are the primary ones. A node might be part of a physical server (usually a virtual machine), a physical server, or even a cluster of servers, but the generic term node is useful when the underlying resource doesn’t matter. Usually it doesn’t matter.

In cloud-native Windows Azure applications, these compute nodes are Web Roles and Worker Roles. The thing to realize is that local storage on Web Roles and Worker Roles is not a safe place to keep important data long term. Well before getting to an event significant enough to be characterized as needing DR, small events such as a hard-disk failure can result in the loss of such data.

While not a DR issue per se due to the small scope, these applications should nevertheless apply the Node Failure Pattern (Chapter 10) to deal with this.

But the real solution is to not use local storage on compute nodes to store important business data. This is part of an overall strategy of using stateless nodes to enable your application to scale horizontally, which comes with many important benefits beyond just resilience to failure. Further details are described in the Horizontally Scaling Compute Pattern (Chapter 2).

Leverage Platform Services

In the United States, there are television commercials featuring “The Most Interesting Man in the World” who lives an amazing, fantastical life, and doesn’t always drink beer, but when he does he drinks DOS EQUIS.


In the cloud, our compute nodes do not always need to persist data long-term, but when they do, they use cloud platform services.

And the “DOS” in “DOS EQUIS” stands for neither Disk Operating System nor Denial of Service here – it is simply the number two in Spanish. But cloud platform services for data storage do better than dos; they have tres – as in three copies.

Windows Azure Storage and Windows Azure SQL Database both write three copies of each byte, onto three independent disks on three independent servers. The hardware is commodity hardware – chosen for high value, not strictly for high availability – so it is expected to fail, and the failures are overcome by keeping multiple copies of every byte. If one of the three instances fails, a new third instance is created by making copies from the other two. The goal state is to continually have three copies of every byte.

Windows Azure Storage is always accessed through a REST interface, either directly or via an SDK that uses the REST interface under the hood. For any REST API call that modifies data, the API does not return until all three copies of the bytes are successfully stored.
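For illustration, here is a minimal sketch of such a direct REST call storing a block blob, written in Python with the requests library. The account, container, blob name, and SAS token are hypothetical placeholders (the SAS token would be generated elsewhere); in practice you would usually let an SDK handle authentication and retries for you.

```python
import requests

account = "myaccount"          # hypothetical storage account name
container = "orders"           # hypothetical container
blob_name = "order-123.json"   # hypothetical blob name
sas_token = "sv=...&sig=..."   # Shared Access Signature generated elsewhere (elided)

url = f"https://{account}.blob.core.windows.net/{container}/{blob_name}?{sas_token}"
response = requests.put(
    url,
    data=b'{"id": 123, "total": 19.99}',
    headers={"x-ms-blob-type": "BlockBlob"},  # the Put Blob operation requires the blob type header
)
# A 201 Created response means the write is durably committed
# (three local copies, per the discussion above).
response.raise_for_status()
```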

Windows Azure SQL Database is always accessed through TDS (Tabular Data Stream), the same wire protocol used by SQL Server. While your application is provided a single connection string, and you create a single TDS connection, behind the scenes there is a three-node cluster. For any operation that modifies data, the operation does not return until at least two copies of the update have been successfully applied on two of the nodes in this cluster; the third node is updated asynchronously.
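As a sketch of what the application sees, here is a hypothetical connection over TDS using Python and pyodbc. The server, database, table, and credentials are placeholders, and the ODBC driver name shown is a present-day one used purely for illustration; the point is that your code holds one connection string while the platform maintains the replicas behind the scenes.

```python
import pyodbc

# One logical endpoint; the replicated cluster behind it is invisible to the application.
connection = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"   # hypothetical server
    "Database=mydb;Uid=myuser;Pwd=<password>;Encrypt=yes;"
)

cursor = connection.cursor()
cursor.execute(
    "INSERT INTO Orders (Id, Payload) VALUES (?, ?)",  # hypothetical table
    123,
    '{"total": 19.99}',
)
connection.commit()  # per the text above, the write is applied to a quorum of replicas before returning
```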

So if you have a Web Role or Worker Role in Windows Azure, and that node has to save data, it should use one of the persistent storage mechanisms just mentioned.
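As a minimal sketch of that guidance, the snippet below persists a piece of business data to blob storage instead of the node’s local disk. It uses the present-day azure-storage-blob Python package purely for illustration (role code of this era would typically use the .NET storage client, but the shape of the call is the same), and the connection string, container, and blob names are hypothetical.

```python
import os
from azure.storage.blob import BlobServiceClient

# Anti-pattern (don't do this on a compute node): the local disk is ephemeral.
# with open("/local/orders/order-123.json", "w") as f:
#     f.write(order_json)

# Instead, hand the bytes to the durable storage service.
order_json = '{"id": 123, "total": 19.99}'
service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
blob = service.get_blob_client(container="orders", blob="order-123.json")
blob.upload_blob(order_json, overwrite=True)  # returns only after the write is durably stored
```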

What about Windows Azure Virtual Machines?

Windows Azure also offers Virtual Machine nodes that you can deploy (Windows or Linux flavored), and the hard disks attached to those nodes are persistent. How can that be? It turns out they are backed by Windows Azure Blob storage, so the model still holds: these nodes also have some storage that is truly local, which can be used for caching-type functions, but any long-term data is persisted to blob storage, even though from the point of view of code running on the virtual machine it is indistinguishable from a local disk drive.

But wait, there’s more!

In addition to this, Windows Azure Storage asynchronously geo-replicates blobs and tables to a sister data center. There are eight Azure data centers, and they are paired as follows: East US-West US, North Central US-South Central US, North Europe-West Europe, and East Asia-Southeast Asia. Note that the pairs are chosen to be in the same geo-political region to simplify regulatory compliance in many cases. So if you save data to a blob in East US, three copies will be synchronously written in East US, then three more copies will be asynchronously written to West US.

It is easy to overlook the immense value of having data stored in triplicate and transparently geo-replicated. While the feature comes across rather matter-of-factly, you get incredibly rich DR features without lifting a finger. Don’t let the ease of use mask the great value of this powerful feature.

All of the local and geo-replication mentioned so far happens for free: it is included in the listed at-rest storage costs, and no action is needed on your part to enable it (though you can turn geo-replication off).

Enable More as Needed

All the replication listed above will help DR. If a hardware failure takes out one of your three local copies, the system self-heals – you will never even know most types of failures happen. If a natural disaster takes out a whole data center, Microsoft decides when to reroute DNS traffic for Windows Azure Storage away from the disabled data center and over to its sister data center which has the geo-replicated copies.

Note that geo-replication is only available out-of-the-box today for Windows Azure Storage (and only for blobs and tables – not for queues), not for SQL Database. However, geo-replication for SQL Database can be set up using the sync service available today – you decide how many copies, to which data centers, and at what frequency.

Note that there are additional costs associated with using the sync service for SQL Database, for the sync service itself and for data center egress bandwidth.

Regardless of the mechanism, there is always a time lag in asynchronous geo-replication, so if a primary data center were lost suddenly, the last few minutes’ worth of updates might not have been fully replicated. Of course, you could choose to write synchronously to two data centers for super-extra safety, but please consult the Network Latency Primer (Chapter 11) before doing so.

This is all part of the overall Multisite Deployment Pattern (Chapter 15), though servicing a geo-distributed user base is another feature of this architecture pattern, beyond the DR features.

Where’s the Engineering?

The title of this blog post is “Engineering for Disaster Recovery in the Cloud” but where did all the engineering happen?

Much of what you need for DR is handled for you by cloud platform services, but not all of it. From time to time this post has alluded to design patterns that your application needs to follow in order for these platform services to pay off. As one example, if your application is written to assume it is safe to use local storage on your web server as a long-term home for business data, well… the awesomeness built into cloud platform services isn’t going to help you.

There is an important assumption here if you want to leverage the full set of services available in the cloud: you need to build cloud-native applications. These are cloud applications architected to align with the architecture of the cloud.

I wrote an entire book explaining what it means to architect a cloud-native application and detailing specific cloud architecture patterns to enable that, so I won’t attempt to cover it in a blog post, except to point out that many of the architectural approaches of traditional software will not be optimal for applications deployed to the cloud.

Distinguish HE from DR

Finally, we need to distinguish DR from HE – Disaster Recovery from Human Error.

Consider how the DR features built into the cloud will not help with many classes of HE. If you mistakenly modify or delete data, your changes will dutifully be replicated throughout the system. There is no magic “undo” in the cloud. This is why you will usually still want to take control of making backups of certain data.

So backups are still desirable. There are cloud platform services to help you with backups, and some great third-party tools as well. Details on which to choose warrant an entire blog post of their own, but hopefully this post at least clarifies the different needs driven by DR vs. HE.
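One minimal approach, sketched below with the same hypothetical names used earlier (and again using the present-day azure-storage-blob Python package purely for illustration), is to periodically copy important blobs to a separate, timestamped location so that a human mistake in the live data does not also destroy the only copy. Real backup tooling does much more (retention, verification, separate accounts), so treat this only as an illustration of the idea.

```python
import os
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])

# Read the current version of the business data...
source = service.get_blob_client(container="orders", blob="order-123.json")
data = source.download_blob().readall()

# ...and keep a timestamped copy that later edits or deletes will not touch.
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
backup = service.get_blob_client(container="backups", blob=f"order-123.json.{stamp}")
backup.upload_blob(data)
```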

Is This Enough?

Maybe. It depends on your business needs. If your application is one of those rare applications that needs to be responsive 24×7 without exception, not even for a natural disaster, then no, this is not enough. If your application is a line-of-business application (even an important one), often it can withstand a rare outage under unusual circumstances, so this approach might be fine. Most applications are somewhere in between and you will need to exercise judgement in weighing the business value against the engineering investment and operational cost of a more resilient solution.

And while this post talked about how following specific cloud architecture patterns to design cloud-native applications provides a great deal of out-of-the-box resilience in DR situations, it did not cover ongoing continuity, such as for computation, or immediate access to data from multiple data centers. If you rely entirely on the cloud platform to preserve your data, you may not have access to it for a while, since (as mentioned earlier, and emphasized nicely in Neil’s comment) you don’t control all the failover mechanisms; you will need to wait until Microsoft decides to fail over the DNS for Windows Azure Storage, for example. And remember that background geo-replication does not guarantee zero data loss: because of the latency involved in moving data across data centers, the most recent changes may be lost, and not all data is geo-replicated (such as queued messages and some other data not discussed).

The ITIL term for “how much data can I stand to lose” is the recovery point objective (RPO). The ITIL term for “how long can I be down” is the recovery time objective (RTO). RPO and RTO are useful concepts for modeling DR.
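As a tiny worked example of using these terms (all the numbers here are made up for illustration): if the business says it can tolerate losing at most 15 minutes of data (RPO) and being down for at most 4 hours (RTO), you can sanity-check your replication schedule and restore drills against those targets.

```python
# Hypothetical numbers purely for illustration.
rpo_minutes = 15            # business tolerance: lose at most 15 minutes of data
rto_minutes = 4 * 60        # business tolerance: be back up within 4 hours

replication_interval_minutes = 5   # how often changes reach the secondary copy
measured_restore_minutes = 90      # how long the last failover/restore drill took

print("RPO met" if replication_interval_minutes <= rpo_minutes else "RPO at risk")
print("RTO met" if measured_restore_minutes <= rto_minutes else "RTO at risk")
```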

So the DR capabilities built into cloud platform services are powerful, but somewhat short of all-encompassing. They do, however, offer a toolbox that gives you unprecedented flexibility in meeting your RPO and RTO goals.

Is This Specific to the Cloud?

The underlying need to understand RPO and RTO and use them to model for DR is not specific to the cloud. These are very real issues in on-premises systems as well. The approaches to addressing them may vary, however.

Generally speaking, while the cloud does not excuse you from thinking about these important characteristics, it does provide some handy capabilities that make it easier to overcome some of the more challenging data-loss threats. Hopefully this allows you to sleep better at night.

—-

Bill Wilder is the author of the book Cloud Architecture Patterns – Develop Cloud-Native Applications from O’Reilly. This post complements the content in the book. Feel free to connect with Bill on twitter (@codingoutloud) or leave a comment on this post. (He’s also warming up to Google Plus.)


—-

Quick: How many 9s are in your SLA?

I recently attended an event where one of the speakers was the CTO of a company built on top of Amazon cloud services, the most critical of these being the Simple Storage Service known as Amazon S3.

The S3 service runs “out there” (in the cloud) and provides a scalable repository for applications to store and manage data files. The service supports files of essentially any size, in any quantity – so you can put as much stuff up there as you want, and since it is a pay-as-you-go service, you pay only for what you use. The S3 service is very popular. One well-known customer, according to Wikipedia, is SmugMug:

Photo hosting service SmugMug has used S3 since April 2006. They experienced a number of initial outages and slowdowns, but after one year they described it as being “considerably more reliable than our own internal storage” and claimed to have saved almost $1 million in storage costs.

Good stuff.

Of course, Amazon isn’t the only cloud vendor with such an offering. Google offers Google Storage, and Microsoft offers Windows Azure Blob Storage; both offer features and capabilities very similar to those of S3. While Amazon was the first to market, all three services are now mature, and all three companies are experts at building internet-scale systems and high-volume data storage platforms.

As I mentioned above, S3 came up during a talk I attended. The speaker – CTO of a company built entirely on Amazon services – twice touted S3’s incredibly strong Service Level Agreement (SLA). He said this was both a competitive differentiator for his company, and also a competitive differentiator for Amazon versus other cloud vendors.

Pause and think for a moment – any idea? – What is the SLA for S3? How about Google Storage? How about Windows Azure Blob Storage?

Before I give away the answer, let me remind you that a Service Level Agreement (SLA) is a written policy offered by the service provider (Amazon, Google, and Microsoft in this case) that describes the level of service being offered, how it is measured, and the consequences if it is not met. Usually, the “level of service” part relates to uptime and is measured in “nines”, as in 99.9% (“three nines”) and so forth. More nines is better, in general – and Wikipedia offers a handy chart translating the number of nines into aggregate downtime/unavailability. (More generally, an SLA also deals with other factors – like refunds to customers if expectations are not met, what speed to expect, limitations, and more. I will focus only on the “nines” here.)

So… back to the question… For S3 and equivalent services from other vendors, how many nines are in the Amazon, Google, and Microsoft SLAs? The speaker at the talk said that S3 had an uptime SLA with 11 9s. Let me say that again – eleven nines – or 99.999999999% uptime. If you attempt to look this up in the chart mentioned above, you will find this number is literally “off the chart” – the chart doesn’t go past six nines! My back-of-the-envelope calculation says it amounts to – on average – roughly a third of a millisecond of downtime per year. A blink of your eye takes a few hundred times longer than that.
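Here is that back-of-the-envelope arithmetic as a small Python sketch, so you can see how quickly the allowed downtime shrinks as nines are added.

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def max_downtime_seconds(nines: int) -> float:
    """Yearly downtime allowed by an availability of the form 99.9...% with the given number of nines."""
    unavailability = 10 ** (-nines)
    return SECONDS_PER_YEAR * unavailability

for nines in (3, 4, 5, 11):
    print(f"{nines:>2} nines -> {max_downtime_seconds(nines):.6f} seconds of downtime per year")

# 3 nines  -> 31536 seconds (about 8.76 hours) per year
# 11 nines -> about 0.000315 seconds (roughly a third of a millisecond) per year
```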

This is an impressive number! If only it were true. It turns out the real SLA for Amazon S3 has exactly as many nines as the SLA for Windows Azure Blob Storage and the SLA for Google Storage: they are all 99.9%.

Storage SLAs for Amazon, Google, and Microsoft all have exactly the same number of nines: they are all 99.9%. That’s three nines.

I am not picking on the CTO I heard gushing about the (non-existent) eleven-nines SLA. (In fact, his or her identity is irrelevant to the overall discussion here.) The more interesting part to me is the impressive reality distortion field around Amazon and its platform’s capabilities. The CTO I heard speak got it wrong, but this is not the first time that number has been misinterpreted as an SLA, and it won’t be the last.

I tracked down the origin of the eleven nines. Amazon CTO Werner Vogels mentions in a blog post that the S3 service is “design[ed]” for “99.999999999% durability” – choosing his words carefully. Consistent with Vogels’ language is the following Amazon FAQ on the same topic:

Q: How durable is Amazon S3? Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

First of all, these mentions are a blog post and an item on an FAQ page; neither comes from the SLA. Second, they both speak to durability of objects – not uptime or availability. And third, also critically, they say “designed” for all those nines – but guarantee nothing of the sort. Even so, it is a bold statement. And good marketing.

It is nice that Amazon can have so much confidence in their S3 design. I did not find a comparable statement about confidence in the design of their compute infrastructure… The reality is that cloud services are about more than design and architecture – they are also about implementation, operations, management, and more. To have any hope, architecture and design need to be solid, of course, but alone they cannot prevent a general service outage which could take your site down with it (and even still lose data occasionally). Others on the interwebs are as skeptical as I am – not just of Amazon, but of anyone claiming too many nines.

How about the actual 99.9% “three-nines” SLA? Be careful in your expectations. As a wise man once told me, there’s a reason they are called Service Level Agreements, rather than Service Level Guarantees. There are no guarantees here.

This isn’t to pick on Amazon – other vendors have had – and will have – interruptions in service. For most companies, the cloud will still be the most cost-effective and reliable way to host your applications; few companies can compete with the big platform cloud vendors for expertise, focus, reliability, security, economies of scale, and efficiency. It is only a matter of time before you are there. Today, your competitors (known and unknown) are moving there already. As a wise man once told me (citing Crossing the Chasm), the innovators and early adopters are those companies willing to trade off risk for competitive advantage. You saw it here first: this Internet thing is going to stick around for a while. Yes, and cloud services will just make too much sense to ignore. You will be on the cloud; it is only a matter of where you’ll be on the curve.

Back to all those nines… Of course, Amazon has done nothing wrong here. I see nothing inaccurate or deceptive in their documentation. But those of us in the community need to pay closer attention to what is really being described.  So here’s a small favor I ask of this technology community I am part of: Let’s please do our homework so that when we discuss and compare the cloud platforms – on blogs, when giving talks, or chatting 1:1 – we can at least keep the discussions based on facts.