Today I had the opportunity to speak at the Granite State Code Camp (#GSCC2021) in Manchester, NH. This was the first time I’ve given an in-person talk since the start of COVID and it was great to see so many smiling faces (even when partially obscured by a mask!).
Last year my focus was a more in-the-weeds talk called Running Azure Securely – which of these Azure security features are for me?. This year I stepped back a level and focused on compliance. In the session I discussed security vs. compliance and the shared responsibility model, and touched on a few other features, but spent a good bit of time on what I think of as the “Policy stack,” where one can gather lots of insight into a workload’s compliance with the technology controls indicated by various compliance standards – based on the Azure Policy capabilities, a pillar of governance, and rolled up and made available from Azure Security Center (now Microsoft Defender for Cloud).
Azure Security Center as a brand is no more – it is part of a rebranding to Microsoft Defender for Cloud. I assume this renaming, announced at Ignite, is because it is a feature set that can span beyond Azure – for example, keeping an eye on on-premises resources and resources in non-Azure clouds like AWS.
The session was interactive (as I prefer!) and many thanks to Kevin, Vishwas, and the nice lady whose name I didn’t catch (who I think worked for the college) for help in overcoming technical limitations in the room I was speaking from.
Tonight I had the opportunity to speak at #VirtualBostonAzure about raising the visibility of security signals in your environment by turning on your WAF. The demos used the WAF available in Azure Front Door.
Azure offers thousands of security features. Some of them are easy to use and others are complicated. Some are free to use and some look really, really expensive. Which ones should I be using for my applications?
In this talk we’ll look at some ways to reason about which security controls you might want to apply and why. We’ll consider groups of Azure security features through a pragmatic lens of security best practices and defense-in-depth/breadth, tempered by the reality that “more security” is not always the answer – rather, “what is the right security” for the situation. By the end of this talk you should have a better idea of the security feature set offered by Azure, why and when each might or might not be needed, and some ways to reason about which are relevant to you by thinking about how to assess them appropriately for multiple situations.
Do you have specific questions about the applicability of Azure security features already? Feel free to tweet your questions to Bill in advance at @codingoutloud and he’ll try to work the answers into the talk.
If you know your way around SQL Server, then you will find Azure SQL Database to be familiar territory. But some aspects are more familiar than others, which is especially true for security-related differences.
In this session we review the key differences around identity management and authentication (including multi-factor authentication), managing server credentials (or, even better, not needing to in some cases), how to audit logins (probably not what you expect), an overview of encryption and data masking options, and the supporting role of Azure Key Vault. We will also touch on compliance and disaster recovery to give the complete picture of powerful features you’ll definitely want to know about to protect your data.
This talk will cover relevant capabilities for both traditional Azure SQL Databases and the newer Azure SQL Managed Instances.
This talk assumes you are already familiar with SQL Server or another enterprise database.
On Tuesday, July 30, 2019 I had the opportunity to speak at North Boston Azure. The talk, part of a series on Running Azure Securely, was called Are all these Azure security features for me? and was not really a “talk” in that it was highly interactive. Those who attended will recall that we filled in some slides collaboratively, so they may not appear so polished to those of you who did not join live. Either way, please find the slides (“collaborative” and all) below.
This was an experimental approach for me and the feedback from the audience tells me it worked pretty well. The group at North Boston Azure was already knowledgeable and engaged, which hopefully made for an interesting experience for all involved (it was certainly fun for me).
Ever try to figure out how to track who logged into your Azure SQL database? You check all the usual ways you might handle that with a SQL Server database, but one by one find out they just don’t work. Here’s one way to do it.
To track who is logging into your Azure SQL database, enable auditing (here’s how to do that) with audit entries directed to an Azure storage blob. There are two ways to do this: at the database server level and at the individual database level. Either is fine, but the example that follows assumes auditing at the db server level. The example query can be adjusted to work with auditing at the database level, but one of the two auditing options definitely needs to be on!
Run this query to find all the principals (users) who have logged into your Azure SQL database so far today.
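A query along these lines can do the job. This is a sketch only – the storage account, server name, and container path below are placeholders to replace with your own values, and the exact path layout and columns should be verified against your own audit logs:

```sql
-- Sketch: list distinct principals that have logged in since midnight (UTC),
-- reading server-level audit logs directly from the blob container that
-- Azure SQL auditing writes to (typically named sqldbauditlogs).
-- <mystorageaccount> and <myserver> are placeholders.
SELECT DISTINCT server_principal_name
FROM sys.fn_get_audit_file(
    'https://<mystorageaccount>.blob.core.windows.net/sqldbauditlogs/<myserver>/',
    DEFAULT, DEFAULT)
WHERE event_time >= CAST(SYSUTCDATETIME() AS date)  -- today, in UTC
  AND succeeded = 1
ORDER BY server_principal_name;
```

Narrowing the URL path further (down to a specific database or date folder) reduces how much audit data the query has to traverse, which can matter a lot for run time.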
The query might take a while to run, depending on how much data you are traversing. In one of my test environments, it takes nearly 20 minutes. I am sure it is sensitive to the amount of data you are logging, database activity, and maybe the settings on your blob storage (not sure if premium storage is supported, but I’m not using it and didn’t test with it).
Note: There are other ways to accomplish this, but every way I know of requires use of Azure SQL auditing. In this post we pushed audit entries to blobs, but other destinations are available. For example, you could send them to Event Hubs for more on-the-fly tracking.
At the most recent Boston Azure meeting I gave (what turned out to be…) the first part of a multi-part talk on Running Azure Securely. Even though I did not cover all of this content, I’ve attached the whole PowerPoint deck below.
Last night’s Boston Azure meeting featured Marija Strazdas from @AlertLogic who spoke about the Shared Security Model for security in the cloud. I also showed in more detail some of the tools that Azure provides to help customers with their side of the responsibility model, including some for Azure SQL DB, Storage, Key Vault, and Azure Security Center. Here are the slides I presented (though I didn’t get through most of them).
EDIT: Here is the deck presented by Marija Strazdas from @AlertLogic who spoke about the Shared Security Model for security in the cloud:
For my part, I offered a Birds-of-a-Feather session on Securing Applications in the Cloud (with examples drawn from Windows Azure Platform). In this session, I reviewed both pros and cons of cloud deployments from a security point of view, and attempted to make the case that, ultimately, either your applications will simply be safer in the cloud, or at least if you want them to be sufficiently safe, it will be more cost-effective to let the specialists at Microsoft (or some other trusted cloud vendor) handle much of the dirty work.
Notes from Javed Ikbal’s talk (http://10domains.blogspot.com) are in regular type. My editorial comments and thoughts are in italics or bold italics – so don’t blame these on Javed. 🙂
Key take-away – going to the Cloud is waaaay more about Business Tradeoffs than it is about Technology.
“There are 2 kinds of companies – those which have had a [data security] breach, and those which are going to have a [data security] breach” -Javed
Centralization of data makes insider threat a bigger risk -Javed
“On premise does not mean people are doing the right thing” –Javed – right on! I bet the majority of the fortune five-million (as 37 Signals refers to the medium and small business market) have insufficient IT – they just don’t know it. Any stats?
Someone from the audience stated there are more breaches in on-premise data centers than in cloud, and therefore the cloud is safer. I don’t buy the logic. There could be so many more publicized breaches in on-premise systems simply because there are so many more on-premise data centers today. So this is easy to misinterpret, and we can’t tell either way from the data. My personal prediction: today, if there is a data breach for data stored in the cloud, people will not be able to believe you were reckless enough to store it in the cloud; 5 years from now, if there is a data breach for data stored on premise, people will not be able to believe you were reckless enough to store it locally instead of in the cloud, which everyone will then believe is the safest place.
Someone from audience commented that business value of losing data will be balanced against business cost of it being exposed. This comment did not account for the PROBABILITY of there being a breach – how do you calculate this risk? I bet it is easier to calculate this risk on the cloud than on premise (though *I* don’t know how to do this)
Comment from Stefan: We can’t expect all cloud services to be up all the time (we were chatting about Google and Amazon downtime, which has been well documented). I completely agree – And many businesses don’t have the data to fairly/accurately compare their own uptimes with those of the cloud vendors – and, further, if the cloud vendors did have 100% up-time, that may destroy the economies we are seeing on the cloud today (who cares if it is 100% reliable if it is 0% affordable – that’s too expensive to be interesting)
Off-premise security != in cloud – different security issues for different data – Javed In other words, treat SSN and Credit Card data differently than which books I bought last year. But I can think of LOTS of data that is seemingly innocuous, but that SOME PEOPLE will balk at having it classified as “non-sensitive” – might be my bookmarks, movie rentals, books purchased, travel plans/history, many more… not just those that support identity theft and/or direct monetary loss (bank account hacks). I think it would be a fine idea for data hosts to publicly declare their data classification scheme – shouldn’t we all have a right to know?
I think IT generally – and The Cloud specifically – could benefit from the kind of thinking that went into GoodGuide.com.
Raw Notes Follow
The rest of these notes are a bit rough – and may or may not make sense – but here they are anyway…
Pizza & drinks, some social (sat next to Stefan Schueller from TechDroid Systems and enjoyed chatting with him)
Went around the room introducing ourselves
People who were hiring / looking for work spoke up
Around 30 people in attendance
Meeting host: Aprigo – 460 Totten Pond rd, suite 660 – Waltham, MA 02451 – USA
Feisty audience! Lots of participation. This added to the meeting impact.
Twisted Storage talk
From Meetup description: Charles Wegrzyn – CTO at TwistedStorage Inc. (Chuck actually built an open source cloud storage system back in ’05)
TwistedStorage is open source software that converts multiple storage repositories, legacy or green-field, into a single petabyte-scale cloud for unstructured data, digital media storage, and archiving. The Twisted Storage Enterprise Storage Cloud provides federated search, electronic data discovery with lock-down, and policy-driven file management including indexing, retention, security, encryption, format conversion, information lifecycle management, and automatic business continuity.
History of Building Storage Management software
Been downloaded 75k times
Re-wrote – now version 4 – in Python
Common anti-pattern observed in real world:
Users storing “stuff” in Exchange since that was a convenient place to store it
Results in a LOT of email storage (and add’l capacity is easy to keep adding on)
Can’t find your data (too much to logically manage)
Complexity, complexity, complexity
The Twisted Storage Way
Federated storage silos w/ adaptors/agents
Provide enterprise capabilities spanning sites (access control, audits, search/indexing – including support for metadata, simplified administration and recovery)
ILM = Information Lifecycle Management
Work-flow (Python scripts, XML coming)
Policy-driven (“delete this after 2 years”, “encrypt me”) (Python scripts)
Twisted Storage Design Goals
Always available content (via replication)
No back-up or recovery needed (due to replication)
Linear scalability (scales out)
Able to trade off durability with performance
Supports old hardware
Minimal admin overhead
Support external storage systems and linkage
Portable – will run on Linux, Windows, (iPhone?) – due to portable Python implementation
Pricing: Enterprise Edition: $500 / TB up to 2 PB (annual), minimum $10k for first 20 TB (see web site for full story)
versus competition like Centera which charge $15k/Silo + Enterprise Edition
From Meetup description: Javed Ikbal (principal and co-founder of zSquad LLC)- will talk about: “Marketing, Uncertainty and Doubt: Information Security and Cloud Computing”
What is the minimum security due diligence that a company needs to do before putting its data in the cloud?
Since 2007, Amazon has been telling us they are “.. working with a public accounting firm to … attain certifications such as SAS70 Type II” but these have not happened in 2+ years.
On one side of the cloud security issue we have the marketing people, who hype up the existing security and gloss over the non-existing. On the other side we have security services vendors, who hawk their wares by hyping up the lack of security. The truth is, there is a class of data for every cloud out there, and there is also someone who will suffer a data breach because they did not secure it properly.
We will look at Amazon’s EC2, risk tolerance, and how to secure the data in the cloud.
Javed is a principal and co-founder of zSquad LLC, a Boston-based information security consulting practice.
March 7, 2009 from WSJ – Google disclosed that it exposed a “small number” of Google docs – users not supposed to be authorized were able to view them. Google estimated < 0.05% of all stored Google docs were impacted – BUT! – this is a LOT of documents. http://blogs.wsj.com/digits/2009/03/08/1214/
Google’s official policy for paid customers states “at your sole risk” and no guarantee it will be uninterrupted, timely, secure, or free from errors
Amazon states it is not responsible for “deletion, destruction, loss” etc. -Javed
Google will not allow customers to audit Google’s cloud storage claims
Amazon says PCI level 2 compliance is possible with AWS, level 1 not possible
SAS 70 Type II reports not meaningful unless you can see which controls were evaluated
“on premise does not mean people are doing the right thing” –Javed
Perception of more breaches in on-premise systems – but there are so many more of them, it is easy to misinterpret
Business value of losing data will be balanced against business cost of it being exposed – but this does not account for the PROBABILITY of there being a breach – how do you calculate this risk? I bet it is easier to calculate this risk on the cloud than on premise (though *I* don’t know how to do this)
We can’t expect all cloud services to be up all the time – right, and many businesses don’t have the data to fairly/accurately compare their own uptimes with those of the cloud vendors – and, further, if the cloud vendors did have 100% up-time, that may destroy the economies we are seeing on the cloud today (it may be 100% reliable, but too expensive to be interesting)
Off-premise security != in cloud – different security issues for different data
“There are 2 kinds of companies – those which have had a [data security] breach, and those which are going to have a [data security] breach” -Javed
Centralization of data makes insider threat a bigger risk
Customers should perform on-site inspections of cloud provider facilities (but rare?)
Ask SaaS vendor to see 3rd party audit reports – SalesForce has one, Amazon does not (Google neither? What about Microsoft – not yet?)
Providers need to be clear about what they will NOT support – e.g., Amazon took 2 years to provide an answer… Amazon/AWS disclaimers are excellent models
Providers need to understand they may be subject to legal/regulatory discovery due to something a customer did
Unisys has ISO 27001-certified data centers (high cost, effort)
Creating Secure Software
Devs care about deadlines and meeting the requirements
If security is not in the requirements, it will not get done
if devs don’t know how to code securely, it will not get done right (if at all)
Train your devs and archs: one day of training will help with 90% of issues!
Build security into your software dev life-cycle
Let security experts, not necessarily developers, write the security requirements
Secure code review can be expensive – bake an application security audit into your schedule, to be done before going live