One of the fundamental changes in Cloud Computing is the cost transparency that comes with it: you know the cost of every CPU core you use, and of every byte you read, write, or transmit. This is an amazing transformation in how much we know about our operations. (Of course, it may still be challenging in many cases to compare cloud solution costs to what we are paying today on-prem, since usually we don’t really know the actual on-prem costs.)
While hybrid cloud models will surely be around for many companies for a long time – we won’t all move to the cloud overnight – the economics of moving to the cloud are too compelling to ignore. Many newer companies are heading directly into the cloud, never owning any infrastructure.
One of the costs in managing a hybrid cloud model – where some data is on-prem and some data is in the cloud – is the raw data transfer when you copy bits to or from the cloud. This can cost you real money: for example, in the USA and Europe, both the Windows Azure Platform and the Amazon S3 service charge $0.10 per GB to move data into the datacenter. If you have a huge amount of data, that cost can add up.
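To make "that cost can add up" concrete, here is a back-of-the-envelope calculation using the $0.10/GB inbound rate quoted above; the function name and the 5 TB dataset size are just illustrative.

```python
def inbound_transfer_cost(gigabytes, rate_per_gb=0.10):
    """Dollar cost of moving `gigabytes` of data into the datacenter
    at the (pre-July 2011) inbound rate of $0.10 per GB."""
    return gigabytes * rate_per_gb

# Moving a 5 TB dataset (5 * 1024 GB) into the cloud:
print(inbound_transfer_cost(5 * 1024))  # 512.0 dollars
```

At multi-terabyte scale, the inbound charge alone runs into hundreds of dollars per copy – which is exactly the line item that disappears with this announcement.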
Announced today on the Windows Azure blog, as of July 1, 2011 the Windows Azure datacenters will no longer have a data transfer charge for inbound data. What are the implications?
Here are a few I can think of:
- Overall cost savings can only help broader cloud adoption
- Backing up data from on-prem into the cloud just got more interesting (a good point I stole from maura)
- HPC applications that have a lot of data to move into the cloud for processing – but may never need that data to come back out (other than in a much smaller, digested form) – just became more appealing
- Use of Windows Azure as a collection point for disparate data sources from around the internet – for management, aggregation, or analysis – just became more attractive
- While experimentation in the cloud has always been cheaper than buying boxes, it is now even simpler and cheaper to try out something big, because you are an even smaller blip on the corporate cost radar – go ahead, upload that Big Data and run your experiment; you can always delete it when you are done
- There are cloud storage vendors who sit on top of the big cloud storage platforms, such as Azure and Amazon – if I were one of these vendors, I would be delighted; business just got a little easier
The backup, HPC, data-aggregation, and experimentation points above all deal with an asymmetric use of bandwidth, where the amount of data moving into the cloud far exceeds the amount of data leaving it. With backups, your hope is to NEVER need to pull that data back – but it is there in the event you need it. With HPC, in many cases you just want answers or insights – you may not care about all the raw data. With data aggregation, you probably just want some reports. With one-off experiments, when you are finished you just delete all the storage containers – so simple!
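A quick sketch of why these asymmetric workloads benefit most once inbound transfer is free. The outbound rate below is a hypothetical placeholder for illustration – it is not a price quoted anywhere in this post.

```python
def transfer_cost(inbound_gb, outbound_gb,
                  inbound_rate=0.0,     # free inbound as of July 1, 2011
                  outbound_rate=0.15):  # hypothetical outbound $/GB
    """Total transfer cost for a workload that pushes `inbound_gb` in
    and pulls `outbound_gb` back out."""
    return inbound_gb * inbound_rate + outbound_gb * outbound_rate

# An HPC-style job: push 10 TB of raw data in, pull a 1 GB summary out.
print(transfer_cost(inbound_gb=10 * 1024, outbound_gb=1))  # 0.15
```

With free inbound transfer, the cost of such a workload is driven almost entirely by the (tiny) digested output – the 10 TB of raw input contributes nothing.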
This is a big and interesting step towards accelerating cloud computing adoption generally, and Windows Azure specifically. This friction-reducing move brings us closer to a world where we don’t ask “should we be in the cloud?” but rather “why aren’t we in the cloud?”