[Ugh – editing 2nd week in a row after accidental early publishing.]
The original programming model for Windows Azure applications was to use Cloud Services (originally known as Hosted Services, but still the same thing). Of particular note, Cloud Services run on VMs with disks that are not persistent – you can write data locally (some pointers here), but any locally stored data is not guaranteed to stick around. This is a powerful model for some scenarios, especially highly scalable applications. Another feature of Cloud Services has always been that they come with an emulator you can run locally – "on your laptop at 30,000 feet" was a common way to hammer home the point. (Remember, Cloud Services were announced in 2008 – a long time before we had wifi on airplanes!) There are actually two emulators: Compute, which emulates the Cloud Service model by supporting the Web Role and Worker Role abstractions, and Storage, which emulates the Blob, Table, and [Storage] Queue Services. The rest of this post will focus specifically on the Storage Emulator.
Since their announcement in 2012, Windows Azure Web Sites and Virtual Machines have been taking on many of the common workloads that used to require Cloud Services. This diagram should, at least conceptually, capture the sense that the "when to use which model" decision has become blurred over time. This is good – with more choice comes the freedom to get started more simply. Often a Virtual Machine is an easier onramp for existing apps, and a Web Site can be a great onramp for a website built on one of the well-known programming stacks such as PHP, ASP.NET, Python, or Node.js. If you are a big success, consider upgrading to Cloud Services.
Notably absent from the diagram is the Storage Emulator. It should be in the middle of the diagram because while the local storage emulator is still useful for Cloud Services, you can also use it locally when developing applications targeting Windows Azure Web Sites or Virtual Machines.
This is awesome – it is, of course, popular to create applications destined for Windows Azure Web Sites or Virtual Machines that take advantage of the various Storage Services.
So that’s the trick – be sure to take advantage of the Storage Emulator, even when you are not targeting a Cloud Service. You need to know two things: how to turn it on, and how to address it.
Turning on the Storage Emulator
If you create a regular old Web Site and run it in Visual Studio, the Storage Emulator is not turned on. Visual Studio only turns on the Storage Emulator for you when you debug using a Cloud Service, which is not convenient if you are not building one. Here's how to start it yourself:
1. Find csrun.exe. In my case: "C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe"
2. Run csrun.exe with the parameter /devstore:start, which tells it to start up the Storage Emulator.
3. Done. Of course you might want this in a .bat file or as a PowerShell function.
Here's a PowerShell script that will turn it on:
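A minimal sketch of such a function (assuming the default SDK install path shown above – adjust for your SDK version):

function Start-StorageEmulator
{
    # Path to csrun.exe from the Windows Azure SDK (default install location assumed)
    $csrun = "C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe"

    # /devstore:start tells csrun.exe to start up the Storage Emulator
    & $csrun /devstore:start
}

Start-StorageEmulator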
The latest version of the Windows Azure Storage Emulator (v2.2.1) is in Preview. This release supports the "2013-08-15" version of Storage, which adds CORS and JSON support and still has all those features from years gone by…
A comparison of emulated and cloud storage services is also available. There are some differences.
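As for addressing the Storage Emulator from code, a minimal C# sketch with the Storage Client Library might look like the following (the container name is just an example; the shortcut connection string is the standard one for the emulator):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class EmulatorDemo
{
    static void Main()
    {
        // "UseDevelopmentStorage=true" points the storage client at the local
        // Storage Emulator (account devstoreaccount1 on 127.0.0.1) rather than
        // at a real cloud storage account.
        CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
        container.CreateIfNotExists();
    }
}

When you deploy, swap that connection string for one pointing at a real storage account and the rest of the code is unchanged.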
[Edit: I originally accidentally published an old draft. The draft went out to all email subscribers and was public for around 90 minutes. Fixed now.]
In the most recent Stupid Azure Trick installment, I explained how one could host a 1000 visitor-per-day web site for one penny per month. Since then I have also explained my choice to use CORS in that same application. Here I will dig specifically into using CORS with Windows Azure.
I also show how the curl command line tool can be helpful for examining the CORS properties in the HTTP headers returned by a blob service.
I will also briefly describe a simple tool I built that can quickly turn CORS on or off for a specified Blob service – the CORS Toggler. The CORS Toggler (in its current simple form) was useful to me because of two constraints that were true for my scenario:
I was only reading files from the Windows Azure Blob Service. When you are just reading, the pre-flight request doesn't matter. Simplification #1.
I didn't care whether the blob resource was publicly available rather than available only to my application, so the CORS policy could simply be open to any caller ('*'). Simplification #2.
These two simplifications meant that the toggler knew what it meant to enable CORS (open up for reading to all comers) and to disable it. (Though it is worth noting that opening up CORS to any caller is probably a common scenario. Also worth noting that the tool could easily be extended to support a whitelist of allowed domains or other features.)
First, here's how the code for the toggler is organized – there are three files:
Driver program (Console app in C#) – handles command line params and such and then calls into the …
Code to perform simple CORS manipulation (C# class)
The above two are driven (in my fast toggler) by the third file – a command line batch file that passes in the storage keys and storage account name for the service I was working with (the CORS piece is sketched below).
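The heart of it is #2, the CORS manipulation class. Here is a minimal sketch of what that looks like against the Storage Client Library – illustrative code matching the behavior described above, not the exact class from my toggler:

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

public static class CorsToggler
{
    // Enable wide-open, read-only CORS (any origin, GET only) on the Blob service
    public static void EnableCors(CloudStorageAccount account)
    {
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        ServiceProperties properties = blobClient.GetServiceProperties();

        properties.Cors.CorsRules.Clear();
        properties.Cors.CorsRules.Add(new CorsRule
        {
            AllowedOrigins = new List<string> { "*" },
            AllowedMethods = CorsHttpMethods.Get,
            AllowedHeaders = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" },
            MaxAgeInSeconds = 36000
        });

        blobClient.SetServiceProperties(properties);
    }

    // Disable CORS by removing all rules from the Blob service
    public static void DisableCors(CloudStorageAccount account)
    {
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        ServiceProperties properties = blobClient.GetServiceProperties();

        properties.Cors.CorsRules.Clear();
        blobClient.SetServiceProperties(properties);
    }
}

The driver just parses the command line arguments and calls EnableCors or DisableCors, and the batch file supplies the storage account name and key.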
One simple point to highlight – CORS properties are simply available on the Blob service object (and would be the same for the Table or Queue service within Storage):
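A minimal illustration (variable names are mine, continuing from the sketch above):

ServiceProperties properties = blobClient.GetServiceProperties();  // blobClient is a CloudBlobClient
CorsProperties cors = properties.Cors;                              // the same Cors property exists for Table and Queue service clients
IList<CorsRule> rules = cors.CorsRules;                             // inspect, add, or clear rules, then call SetServiceProperties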
Yes, this is a very simple API.
Showing the Service Object Contents
For those interested in the contents of these objects, here are a few ways to show the contents of the properties (in code) before turning CORS on and after. (The object views are created using the technique I described in my post on using JSON.NET as an object dumper that's Good Enough™.)
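The dump itself is essentially a one-liner along these lines (illustrative; blobClient as above):

Console.WriteLine("Current Properties:");
Console.WriteLine(Newtonsoft.Json.JsonConvert.SerializeObject(blobClient.GetServiceProperties()));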
DUMPING OBJECT BEFORE CORS ENABLED (just CORS properties):
DUMPING OBJECT BEFORE CORS ENABLED (but including ALL properties):
Current Properties:
{"Logging":{"Version":"1.0","LoggingOperations":0,"RetentionDays":null},
 "Metrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "HourMetrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "Cors":{"CorsRules":[]},
 "MinuteMetrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "DefaultServiceVersion":null}
DUMPING OBJECT AFTER CORS ENABLED (but including ALL properties):
Current Properties:
{"Logging":{"Version":"1.0","LoggingOperations":0,"RetentionDays":null},
 "Metrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "HourMetrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "Cors":{"CorsRules":[{"AllowedOrigins":["*"],"ExposedHeaders":["*"],"AllowedHeaders":["*"],"AllowedMethods":1,"MaxAgeInSeconds":36000}]},
 "MinuteMetrics":{"Version":"1.0","MetricsLevel":0,"RetentionDays":null},
 "DefaultServiceVersion":null}
Using curl To Examine CORS Data:
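CURL OUTPUT BEFORE CORS ENABLED:
D:\dev\github>curl -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X OPTIONS --verbose http://azuremap.blob.core.windows.net/maps/azuremap.geojson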
* Adding handle: conn: 0x805fa8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* – Conn 0 (0x805fa8) send_pipe: 1, recv_pipe: 0
* About to connect() to azuremap.blob.core.windows.net port 80 (#0)
* Trying 168.62.32.206…
* Connected to azuremap.blob.core.windows.net (168.62.32.206) port 80 (#0)
> OPTIONS /maps/azuremap.geojson HTTP/1.1
> User-Agent: curl/7.31.0
> Host: azuremap.blob.core.windows.net
> Accept: */*
> Origin: http://example.com
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
< HTTP/1.1 403 CORS not enabled or no matching rule found for this request.
< Content-Length: 316
< Content-Type: application/xml
* Server Blob Service Version 1.0 Microsoft-HTTPAPI/2.0 is not blacklisted
< Server: Blob Service Version 1.0 Microsoft-HTTPAPI/2.0
< x-ms-request-id: 04402242-d4a7-4d0c-bedc-ff553a1bc982
< Date: Sun, 26 Jan 2014 15:08:11 GMT
<
<?xml version="1.0" encoding="utf-8"?><Error><Code>CorsPreflightFailure</Code><Message>CORS not enabled or no matching rule found for this request.
RequestId:04402242-d4a7-4d0c-bedc-ff553a1bc982
Time:2014-01-26T15:08:12.0193649Z</Message><MessageDetails>No CORS rules matches this request</MessageDetails></Error>
* Connection #0 to host azuremap.blob.core.windows.net left intact
CURL OUTPUT AFTER CORS ENABLED:
D:\dev\github>curl -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X OPTIONS --verbose http://azuremap.blob.core.windows.net/maps/azuremap.geojson
* Adding handle: conn: 0x1f55fa8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* – Conn 0 (0x1f55fa8) send_pipe: 1, recv_pipe: 0
* About to connect() to azuremap.blob.core.windows.net port 80 (#0)
* Trying 168.62.32.206…
* Connected to azuremap.blob.core.windows.net (168.62.32.206) port 80 (#0)
> OPTIONS /maps/azuremap.geojson HTTP/1.1
> User-Agent: curl/7.31.0
> Host: azuremap.blob.core.windows.net
> Accept: */*
> Origin: http://example.com
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
* Server Blob Service Version 1.0 Microsoft-HTTPAPI/2.0 is not blacklisted
< Server: Blob Service Version 1.0 Microsoft-HTTPAPI/2.0
< x-ms-request-id: d4df8953-f8ae-441b-89fe-b69232579aa4
< Access-Control-Allow-Origin: http://example.com
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Headers: X-Requested-With
< Access-Control-Max-Age: 36000
< Access-Control-Allow-Credentials: true
< Date: Sun, 26 Jan 2014 16:02:25 GMT
<
* Connection #0 to host azuremap.blob.core.windows.net left intact
Resources
A new version of the Windows Azure Storage Emulator (v2.2.1) is now in Preview. This release supports the "2013-08-15" version of Storage, which includes CORS (and JSON and other) support.
Overall description of Azure Storage’s CORS Support:
I recently created a very simple client-only web application (no server-side code) that loads the data it needs dynamically. In order to access data from another storage location (in my case the data came from a Windows Azure Blob), the application needed to choose how to load that data.
It really came down to three choices:
1. Load the data synchronously as the page loads, using an inline script tag
2. Load the data asynchronously as part of the initial page load, using JSONP
3. Load the data asynchronously as part of the initial page load, using CORS
All 3 options effectively work within the Same Origin Policy (SOP) sandbox security measures that browsers implement. If access is not coming from a browser (but from, say, curl or a server application), SOP has no effect. SOP is there to protect end users from web sites that might not behave themselves.
Option 1 would be to basically have a hardcoded script tag load the data. One disadvantage of this is put perfectly by Douglas Crockford: "A <script src="url"></script> will block the downloading of other page components until the script has been fetched, compiled, and executed." This means that the page will block while the data is loaded, potentially making the initial load appear a bit more visually chaotic. Also, if this technique is the only mechanism for loading data, once the page is loaded, the data is never refreshed – a potentially severe limitation for some applications. In the very old days, the best we could do was periodically trigger a full-page refresh, but that's not state-of-the-art in 2014.
Option 2 would be to load the data asynchronously using JSONP. This is a fine solution from a user experience point of view: the page structure is loaded first, then populated once the data arrives. Under the covers the client makes the request by dynamically injecting a script tag (Ajax libraries typically hide this detail), since the browser will not allow a cross-origin XMLHttpRequest without CORS.
Option 3 would be to load the data asynchronously using CORS. This offers essentially the same user experience as option 2, but the client makes the request with the XMLHttpRequest object directly.
Options 1 and 2 require that the data be encapsulated in JavaScript code. For option 2, the JSONP convention is that the response wraps the JSON object in a call to a function (often named callback) that the client has defined; the data arrives when that function executes. Option 1 has slightly more flexibility and could simply be a data structure declared with a known name, like var mapData = ..., which the client can access directly.
Option 3 with CORS is able to return the data directly. In that regard it is a tiny bit more efficient since no bubble-wrap is needed – and it is a lot safer since you are not executing a JavaScript function returned by a potentially untrusted server.
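To make the packaging difference concrete, here is roughly what the same payload looks like in each case (illustrative snippets; mapData and callback are just example names):

Option 1 (inline script):  var mapData = { "type": "FeatureCollection", "features": [ /* ... */ ] };
Option 2 (JSONP):          callback({ "type": "FeatureCollection", "features": [ /* ... */ ] });
Option 3 (CORS):           { "type": "FeatureCollection", "features": [ /* ... */ ] }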
JSONP is not based on any official standard, but is common practice. CORS is a standard that is supported in modern browsers and comes with granular access policies. As an example, CORS policies can be set to allow access from a whitelist of domains (such as paying customers), while disallowing access from any other domain.
For all three options there needs to be coordination between the client and the server since they need to agree on how the data is packaged for transmission. For CORS, this also requires browser support (see chart below). All options require that JavaScript is enabled in the client browser.
Summarizing CORS, JSONP, Inline
The following summary compares key qualities.
Quality | Inline JavaScript | JSONP | CORS | Comments
--- | --- | --- | --- | ---
Synchronous or Async | Synchronous | Async | Async |
Granular domain-level security | no | no | yes | In any of the three, you could also implement an authorization scheme; this is above and beyond that.
Risk | no | yes | no | JSONP requires that you execute a JavaScript function to get at the data; neither of the other two approaches requires that. An extra degree of caution is needed for JSONP data sources outside of your control.
Efficiency on the wire | close | close | most efficient | Inline and JSONP both wrap your data in JavaScript constructs. These add a small amount of overhead which could add up depending on what you are doing, but it is minor.
Browser support | full | full | partial |
Server support | full | full | partial | Servers need to support the CORS handshake with browsers to (a) deny disallowed domains and (b) give browsers the information they need to honor restrictions.
Supported by a standard | no | no | yes |
Is it the future | no | no | yes | Safer. Granular security. Standardized. Max efficiency.
Lessons Learned Using CORS
Yes, my simple one-page map app (described here) ended up using CORS – in large part because it is mature and the browser support (see below) was sufficient.
Reloading Browser Pages: When debugging, CTRL-F5 is your friend in Chrome, Firefox, and IE if you want to clear the cache and reload the page you are on. I did this a lot as I was continually enabling and disabling CORS on the server to test out the effects.
Overriding CORS Logic in Chrome: It turns out that Chrome normally honors all CORS settings; this is what most users will see. Let's call this "civilian mode" for Chrome. But there's also a developer mode, which you enable by running Chrome with the chrome.exe --disable-web-security parameter. I was initially confused since it seemed Chrome's CORS support didn't work, but of course it did. This is one of the perils of living with a software nerd: my wife had used my computer and changed this a long time ago when she needed to build some CORS features, and I never knew until I ran into this perplexing issue.
Handling CORS Rejection: Your browser may not let your JavaScript code know directly that a remote call was rejected due to a CORS policy. Some browsers silently map a 404 to a status of 0 when the request is against a CORS-protected resource. You'll see this mentioned in the code for httpGetString.js (if you look at my sample code).
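A minimal sketch of the kind of check involved (illustrative only – not the actual httpGetString.js; url, handleData, and handleError are hypothetical):

var xhr = new XMLHttpRequest();
xhr.open("GET", url);
xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;            // wait until the request completes
    if (xhr.status === 200) {
        handleData(xhr.responseText);             // success – hand off the payload
    } else if (xhr.status === 0) {
        // No real HTTP status came back – commonly a CORS rejection
        handleError("request blocked (possibly by CORS policy)");
    } else {
        handleError("HTTP status " + xhr.status);
    }
};
xhr.send();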
Testing CORS from curl: Helped by a post on Stack Overflow, I found it very handy to look at CORS headers from the command line. Note that you need to provide SOME origin in the request for it to be valid CORS, but here's the command that worked for my cloud-hosted resource (you should also be able to run this same command):
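curl -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X OPTIONS --verbose http://azuremap.blob.core.windows.net/maps/azuremap.geojson

This is the same OPTIONS pre-flight request shown in the curl output earlier in the post.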
To understand where CORS support stands with web browsers, the fantastic site http://caniuse.com/cors offers a nice visual showing CORS support across today's browsers. A corresponding chart for JSONP is not needed since JSONP works within long-standing browser capabilities.