
Azure FAQ: How to Use .NET 4.5 with Windows Azure Cloud Services?

Microsoft released version 4.5 of its popular .NET Framework in August 2012. This framework can be installed independently on any compatible machine (check out the .NET Framework Deployment Guide for Administrators) and (for developers) comes along with Visual Studio 2012.

Windows Azure Web Sites also support .NET 4.5, but what is the easiest way to deploy a .NET 4.5 application to Windows Azure as a Cloud Service? This post shows how easy this is.

Assumption

This post assumes you have updated to the most recent Windows Azure Tools for Visual Studio and the latest SDK for .NET.

For any update to a new operating system or new SDK, consult the Windows Azure Guest OS Releases and SDK Compatibility Matrix to understand which versions of operating systems and Azure SDKs are intended to work together.

You can do this with the Web Platform Installer by installing Windows Azure SDK for .NET (VS 2012) – Latest (the best option) – or directly here (a second option, since that link will eventually become out of date).

Also pay close attention to the release notes, and don’t forget to Right-Click on your Cloud Service, hit Properties, and take advantage of some of the tooling support for the upgrade:

[Screenshot: Cloud Service properties dialog offering the Fall 2012 upgrade]

Creating New ASP.NET Web Role for .NET 4.5

Assuming you have up-to-date bits, a File | New from Visual Studio 2012 will look something like this:

[Screenshot: Visual Studio 2012 New Project dialog]

Select the Cloud project template, choose Windows Azure Cloud Service (currently the only choice), and be sure to specify .NET Framework 4.5. Then proceed as normal.

Updating Existing ASP.NET Web Role for .NET 4.5

If you wish to update an existing Web Role (or Worker Role), you need to make a couple of changes in your project.

First, update the Windows Azure operating system version to Windows Server 2012. This is done by opening your Cloud project (pageofphotos in the screen shot) and then opening ServiceConfiguration.Cloud.cscfg.

[Screenshot: ServiceConfiguration.Cloud.cscfg open in the editor, with the Cloud project in Solution Explorer]

Change the osFamily setting to be “3” to indicate Windows Server 2012.

   osFamily="3"

As of this writing, the other allowed values for osFamily are "1" and "2", indicating Windows Server 2008 SP2 and Windows Server 2008 R2 (or R2 SP1) respectively. The up-to-date settings are here.
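For reference, here is a minimal sketch of where that setting sits, on the root element of ServiceConfiguration.Cloud.cscfg. The role name and instance count are illustrative placeholders; osVersion="*" simply picks up the latest patched Guest OS within the chosen family.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- osFamily="3" selects a Windows Server 2012 Guest OS;
         osVersion="*" keeps it on the latest patch release -->
    <ServiceConfiguration serviceName="pageofphotos"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
        osFamily="3"
        osVersion="*">
      <Role name="WebRole1">
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>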

Now you are set for your operating system to include .NET 4.5, but none of your Visual Studio projects have yet been updated to take advantage of this. For each project that you intend to update to use .NET 4.5, you need to update the project settings accordingly.

[Screenshot: project Properties page with Target framework set to .NET Framework 4.5]

First, select the project in the Solution Explorer, right-click on it, and choose Properties from the pop-up menu. That will display the screen shown. Now simply select .NET Framework 4.5 from the available list of Target framework options.
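If you prefer to make (or verify) the same change directly in the project file, the target framework is recorded in the .csproj. A minimal sketch of the relevant element, which sits alongside the project's other properties:

    <!-- In the .csproj, inside the main PropertyGroup -->
    <PropertyGroup>
      <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
    </PropertyGroup>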

If you open an older solution with the newer Azure tools for Visual Studio, you might see a message something like the following. If that happens, just follow the instructions.

[Screenshot: Windows Azure Tools dialog explaining that the October 2012 tools are required for .NET 4.5]

That’s it!

Now when you deploy your Cloud Service to Windows Azure, your code can take advantage of .NET 4.5 features.

Troubleshooting

Be sure you get all the dependencies correct across projects. In one project I migrated, the warning below came up because I had a mix of projects: some needed to stay on .NET 4.0, while the parts deployed to the Windows Azure cloud could move to 4.5. If you don't get this quite right, you may get a compiler warning like the following:

Warning  The referenced project ‘CapsConfig’ is targeting a higher framework version (4.5) than this project’s current target framework version (4.0). This may lead to build failures if types from assemblies outside this project’s target framework are used by any project in the dependency chain.    SomeOtherProjectThatReferencesThisProject

The warning text is self-explanatory: the solution is to not migrate that particular project to .NET 4.5 from .NET 4.0. In my case, I was trying to take advantage of the new WIF features, and this project did not have anything to do with Identity, so there was no problem.

Sorting out Digital Certificates – Dec 2012 Boston Azure Meeting

At the December 13, 2012 meeting for Boston Azure Cloud User Group, I gave a short talk on how Digital Certificates work (cryptographically speaking).

The backstory is that Windows Azure uses certificates in a few different ways, and understanding those different uses is key to understanding why certificates are deployed and handled the way they are.

The slide deck is here:

Sorting Out Digital Certificates – 13-Dec-2012 – Bill Wilder – Boston Azure


Engineering for Disaster Recovery in the Cloud (Avoiding Data Loss)

Disaster Recovery, or DR, refers to your approach for recovering from an event that results in failure of your software system. Some examples of such events: hurricanes, earthquakes, and fires. The common thread with these events is that they were not your fault and they happened suddenly, usually at the most inconvenient of times.

[Image: storm clouds]

Clouds are not always inviting! Be prepared for storm clouds.

Damage from one of these events might be temporary: a prolonged power outage that is eventually restored. Damage might be permanent: servers immersed in water are unlikely to work after drying out.

Whether a one-person shop with all the customer data on a single laptop, or a large multi-national with its own data centers, any business that uses computers to manage data important to that business needs to consider DR.

The remainder of this article focuses on some useful DR approaches for avoiding loss of business data when engineering applications for the cloud. The detailed examples are specific to the Windows Azure Cloud Platform, but the concepts apply more broadly, such as with Amazon Web Services and other cloud platforms. Notably, this post does not discuss DR approaches as they apply to other parts of infrastructure, such as web server nodes or DNS routing.

Minimize Exposure

Your first line of defense is to minimize exposure. Consider a cloud application with business logic running on many compute nodes.

Terminology note: I will use the definition of node from page 2 of my Cloud Architecture Patterns book (and occasionally in other places in this post I will reference patterns and primers from the book where they add more information):

An application runs on multiple nodes, which have hardware resources. Application logic runs on compute nodes and data is stored on data nodes. There are other types of nodes, but these are the primary ones. A node might be part of a physical server (usually a virtual machine), a physical server, or even a cluster of servers, but the generic term node is useful when the underlying resource doesn’t matter. Usually it doesn’t matter.

In cloud-native Windows Azure applications, these compute nodes are Web Roles and Worker Roles. The thing to realize is that local storage on Web Roles and Worker Roles is not a safe place to keep important data long term. Well before getting to an event significant enough to be characterized as needing DR, small events such as a hard-disk failure can result in the loss of such data.

While not a DR issue per se due to the small scope, these applications should nevertheless apply the Node Failure Pattern (Chapter 10) to deal with this.

But the real solution is to not use local storage on compute nodes to store important business data. This is part of an overall strategy of using stateless nodes to enable your application to scale horizontally, which comes with many important benefits beyond just resilience to failure. Further details are described in the Horizontally Scaling Compute Pattern (Chapter 2).

Leverage Platform Services

In the United States, there are television commercials featuring “The Most Interesting Man in the World” who lives an amazing, fantastical life, and doesn’t always drink beer, but when he does he drinks DOS EQUIS.

[Image: Dos Equis "The Most Interesting Man in the World" advertisement]

In the cloud, our compute nodes do not always need to persist data long-term, but when they do, they use cloud platform services.

And the “DOS” in “DOS EQUIS” stands for neither Disk Operating System nor Denial of Service here, but rather is the number two in Spanish. But cloud platform services for data storage do better than dos, they have tres – as in three copies.

Windows Azure Storage and Windows Azure SQL Database both write three copies of each byte onto three independent servers on three independent disks. The hardware is commodity hardware – chosen for high value, not strictly for high availability – so it is expected to fail, and the failures are overcome by keeping multiple copies of every byte. If one of the three instances fails, a new third instance is created by making copies from the other two. The goal state is to continually have three copies of every byte.

Windows Azure Storage is always accessed through a REST interface, either directly or via an SDK that uses the REST interface under the hood. For any REST API call that modifies data, the API does not return until all three copies of the bytes are successfully stored.
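To make that concrete, here is a minimal sketch of a compute node persisting data through the .NET storage client library rather than to its local disk. The container name, blob name, and order data are illustrative; the connection string would come from your service configuration.

    // Requires a reference to the Windows Azure Storage client library (Microsoft.WindowsAzure.Storage)
    using System.IO;
    using System.Text;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class DurableStore
    {
        public static void SaveOrder(string connectionString, string orderJson)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = account.CreateCloudBlobClient();

            // Container and blob names here are hypothetical examples
            CloudBlobContainer container = client.GetContainerReference("orders");
            container.CreateIfNotExists();
            CloudBlockBlob blob = container.GetBlockBlobReference("order-123.json");

            // The upload call does not return until the write is durably committed
            // (the three local copies described above)
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(orderJson)))
            {
                blob.UploadFromStream(stream);
            }
        }
    }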

Windows Azure SQL Database is always accessed through TDS, which is the same TCP protocol as SQL Server. While your application is provided a single connection string, and you create a single TDS connection, behind the scenes there is a three-node cluster. For any operation that modifies data, the operation does not return until at least two copies of the update have been successfully applied on two of the nodes in this cluster; the third node is updated asynchronously.
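And because it really is plain TDS, the data access code is ordinary ADO.NET. A minimal sketch, with an illustrative table and a connection string obtained from the Windows Azure portal:

    using System.Data.SqlClient;

    public static class OrderWriter
    {
        public static void InsertOrder(string connectionString)
        {
            // The connection string comes from the Windows Azure portal and has the
            // same shape as any SQL Server connection string (server, database, credentials)
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // This INSERT does not complete until at least two of the three replicas
                // have applied it, per the quorum behavior described above
                using (var command = new SqlCommand(
                    "INSERT INTO Orders (OrderId, Total) VALUES (@id, @total)", connection))
                {
                    command.Parameters.AddWithValue("@id", 123);
                    command.Parameters.AddWithValue("@total", 9.99m);
                    command.ExecuteNonQuery();
                }
            }
        }
    }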

So if you have a Web Role or Worker Role in Windows Azure, and that node has to save data, it should use one of the persistent storage mechanisms just mentioned.

What about Windows Azure Virtual Machines?

Windows Azure also has a Virtual Machine node that you can deploy (Windows or Linux flavored), and the hard disks attached to those nodes are persistent. But how can that be? It turns out they are backed by Windows Azure Blob storage, so the model still holds: those nodes also have some storage that is truly local and can use it for caching sorts of functions, but any long-term data is persisted to blob storage, even though it is indistinguishable from a local disk drive from the point of view of any code running on the virtual machine.

But wait, there’s more!

In addition to this, Windows Azure Storage asynchronously geo-replicates blobs and tables to a sister data center. There are eight Azure data centers, and they are paired as follows: East US-West US, North Central US-South Central US, North Europe-West Europe, and East Asia-Southeast Asia. Note that the pairs are chosen to be in the same geo-political region to simplify regulatory compliance in many cases. So if you save data to a blob in East US, three copies will be synchronously written in East US, then three more copies will be asynchronously written to West US.

It is easy to overlook the immense value of having data stored in triplicate and transparently geo-replicated. While the feature comes across rather matter-of-factly, you get incredibly rich DR features without lifting a finger. Don’t let the ease of use mask the great value of this powerful feature.

All of the local and geo-replication mentioned so far happens for free: it is included as part of the listed at-rest storage costs, and no action is needed on your part to enable this capability (though you can turn it off).

Enable More as Needed

All the replication listed above will help DR. If a hardware failure takes out one of your three local copies, the system self-heals – you will never even know most types of failures happen. If a natural disaster takes out a whole data center, Microsoft decides when to reroute DNS traffic for Windows Azure Storage away from the disabled data center and over to its sister data center which has the geo-replicated copies.

Note that geo-replication is only out-of-the-box today for Windows Azure Storage (and not for queues – just for blobs and tables), and not for SQL Database. However, for SQL Database this can be enabled using the sync service available today – you decide how many copies, to which data centers, and at what frequency.

Note that there are additional costs associated with using the sync service for SQL Database, for the sync service itself and for data center egress bandwidth.

Regardless of the mechanism, there is always a time lag in asynchronous geo-replication, so if a primary data center were lost suddenly, the last few minutes' worth of updates may not have been fully replicated. Of course, you could choose to write synchronously to two data centers for super-extra safety, but please consult the Network Latency Primer (Chapter 11) before doing so.

This is all part of the overall Multisite Deployment Pattern (Chapter 15), though servicing a geo-distributed user base is another feature of this architecture pattern, beyond the DR features.
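As a rough illustration of the dual-write option mentioned above (a sketch, not a recommendation; the latency caveat applies), it might look like the following, assuming you created storage accounts in two paired data centers. The parameter names and container handling are illustrative.

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class DualWriter
    {
        // Writes the same blob to two storage accounts (e.g., one in East US, one in West US).
        // Both writes are synchronous, so the caller absorbs the cross-data-center latency.
        public static void SaveToBothDataCenters(
            string primaryConnectionString, string secondaryConnectionString,
            string containerName, string blobName, byte[] content)
        {
            foreach (var connectionString in new[] { primaryConnectionString, secondaryConnectionString })
            {
                CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
                CloudBlobContainer container = account.CreateCloudBlobClient()
                    .GetContainerReference(containerName);
                container.CreateIfNotExists();

                using (var stream = new MemoryStream(content))
                {
                    container.GetBlockBlobReference(blobName).UploadFromStream(stream);
                }
            }
        }
    }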

Where’s the Engineering?

The title of this blog post is “Engineering for Disaster Recovery in the Cloud” but where did all the engineering happen?

Much of what you need for DR is handled for you by cloud platform services, but not all of it. From time to time we alluded to some design patterns that your applications need to adhere to in order for these platform services to make sense. As one example, if your application is written to assume it is safe to use local storage on your web server as a good long-term home for business data, well… the awesomeness built into cloud platform services isn't going to help you.

There is an important assumption here if you want to leverage the full set of services available in the cloud: you need to build cloud-native applications. These are cloud applications architected to align with the architecture of the cloud.

I wrote an entire book explaining what it means to architect a cloud-native application and detailing specific cloud architecture patterns to enable that, so I won’t attempt to cover it in a blog post, except to point out that many of the architectural approaches of traditional software will not be optimal for applications deployed to the cloud.

Distinguish HE from DR

Finally, we need to distinguish DR from HE – Disaster Recovery from Human Error.

Consider how the DR features built into the cloud will not help with many classes of HE. If you modify or delete data, your changes will dutifully be replicated throughout the system. There is no magic “undo” in the cloud. This is why you usually will still want to take control of making back-ups of certain data.

So backups are still desirable. There are cloud platform services to help you with backups, and some great third-party tools as well. Details on which to choose warrant an entire blog post of their own, but hopefully this post at least clarifies the different needs driven by DR vs. HE.
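As one small example of the platform help available, blob storage supports snapshots, which give you a point-in-time copy that you control. A minimal sketch follows, with illustrative container and blob names; this is just one of many possible backup approaches, not a full backup strategy.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class BlobBackup
    {
        // Takes a read-only, point-in-time snapshot of a blob. If a later human error
        // overwrites or corrupts the live blob, the snapshot's contents can be copied back.
        public static DateTimeOffset? SnapshotBeforeRiskyChange(string connectionString)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlockBlob blob = account.CreateCloudBlobClient()
                .GetContainerReference("orders")           // illustrative container name
                .GetBlockBlobReference("order-123.json");  // illustrative blob name

            CloudBlockBlob snapshot = blob.CreateSnapshot();
            return snapshot.SnapshotTime;
        }
    }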

Is This Enough?

Maybe. It depends on your business needs. If your application is one of those rare applications that needs to be responsive 24×7 without exception, not even for a natural disaster, then no, this is not enough. If your application is a line-of-business application (even an important one), often it can withstand a rare outage under unusual circumstances, so this approach might be fine. Most applications are somewhere in between and you will need to exercise judgement in weighing the business value against the engineering investment and operational cost of a more resilient solution.

And while this post talked about how the combination of following some specific cloud architecture patterns to design cloud-native applications provides a great deal of out-of-the-box resilience in DR situations, it did not cover ongoing continuity, such as with computation, or immediate access to data from multiple data centers. If you rely entirely on the cloud platform to preserve your data, you may not have access to it for a while since (as mentioned earlier, and emphasized nicely in Neil’s comment) you don’t control all the failover mechanisms; you will need to wait until Microsoft decides to failover the DNS for Windows Azure Storage, for example. And remember that background geo-replication does not guarantee zero data loss: some changes may be lost due to the additional latency needed in moving data across data centers, and not all data is geo-replicated (such as queued messages and some other data not discussed).

The ITIL term for “how much data can I stand to lose” is known as the recovery point objective (RPO). The ITIL term for “how long can I be down” is known as the recovery time objective (RTO). The RPO and RTO are useful concepts for modeling DR.

So the DR capabilities built into cloud platform services are powerful, but somewhat short of all-encompassing. However, they do offer a toolbox providing you with unprecedented flexibility in making this happen.

Is This Specific to the Cloud?

The underlying need to understand RPO and RTO and use them to model for DR is not specific to the cloud. These are very real issues in on-premises systems as well. The approaches to addressing them may vary, however.

Generally speaking, while the cloud does not excuse you from thinking about these important characteristics, it does provide some handy capabilities that make it easier to overcome some of the more challenging data-loss threats. Hopefully this allows you to sleep better at night.

—-

Bill Wilder is the author of the book Cloud Architecture Patterns – Develop Cloud-Native Applications from O’Reilly. This post complements the content in the book. Feel free to connect with Bill on twitter (@codingoutloud) or leave a comment on this post. (He’s also warming up to Google Plus.)

[Image: Cloud Architecture Patterns book cover]

—-