Article by: Professor Avishai Wool, CTO and Co-founder at AlgoSec
With organisations having a seemingly insatiable appetite for the agility, scalability and flexibility offered by the cloud, it’s little surprise that one of the market’s largest providers, Amazon’s AWS, continues to go from strength to strength. In its latest earnings report, AWS reported 45% year-on-year revenue growth for Q4 2017.
However, AWS has also been in the news recently for the wrong reasons, following a number of breaches involving its S3 object storage service. Over the past 18 months, companies including Uber, Verizon and Dow Jones have had large volumes of data exposed via misconfigured S3 buckets. Between them, these firms inadvertently made public the digital identities of hundreds of millions of people.
Sub-par security practices
It’s important to note that these breaches were not caused by problems at Amazon itself. Instead, they were the result of users misconfiguring the Amazon S3 service and failing to ensure proper controls were set up when uploading sensitive data to it. In effect, data was placed in S3 buckets with weak access controls – or in some cases, none at all, leaving the contents readable by anyone on the internet.
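To illustrate how such an exposure can be spotted, here is a rough sketch using boto3, the AWS SDK for Python. It checks a bucket’s ACL for grants to the “everyone” groups; the bucket name is a placeholder and credentials are assumed to be configured in the environment.

```python
import boto3

# Grantee URIs that make a bucket readable by the whole world or by
# any authenticated AWS account
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-sensitive-data")  # hypothetical bucket

for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
        print(f"WARNING: bucket grants {grant['Permission']} to {grantee['URI']}")
```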
Amazon has made several tools available to make it easier for S3 customers to work out who can access their data and to help secure it. However, organisations still need to use access controls for S3 that go beyond passwords alone, such as two-factor authentication, to control who can log into their AWS administration console.
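One such control is S3’s Block Public Access feature. As a minimal sketch, again assuming a placeholder bucket name, it can be switched on for a bucket so that public ACLs and bucket policies are overridden:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a single bucket,
# overriding any public ACLs or bucket policies already in place
s3.put_public_access_block(
    Bucket="example-sensitive-data",  # hypothetical bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```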
But to understand why these basic mistakes are still being made by so many organisations, we need to look at the problem in the wider context of public cloud adoption in many enterprises. When speaking with IT managers who are putting data in the cloud, it is not uncommon to hear statements such as ‘there is no difference between on-premise and cloud servers’. In other words, all servers are seen as part of the enterprise IT infrastructure, and teams will use whichever environment best suits their needs, operationally and financially.
Old habits die hard
However, that statement overlooks one critical point: cloud servers are far more exposed than physical, on-premise servers. If you make a mistake when configuring the security of an on-premise server storing sensitive data, it is still protected by other security measures by default. The server’s IP address is likely to sit behind the corporate gateway, internal firewalls used to segment the network, and other security layers that stand between it and potential attackers.
In contrast, when you provision a server in the public cloud, it is accessible to any computer in the world. By default, anybody can ping it, try to connect and send packets to it, or try to browse it. Beyond a password, it has none of the extra protection an on-premise server inherits from its environment – which means you must put controls in place to change that.
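As a simple illustration of what such a control looks like in practice, the following boto3 sketch creates a security group that only accepts administrative SSH traffic from a corporate address range. The VPC ID is a placeholder, and the CIDR is a documentation range standing in for the corporate egress addresses.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group that only accepts SSH from the corporate
# network, instead of leaving the instance reachable from anywhere
sg = ec2.create_security_group(
    GroupName="restricted-admin-access",    # hypothetical name
    Description="Allow SSH from the corporate range only",
    VpcId="vpc-0123456789abcdef0",          # hypothetical VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "Corporate network"}],
    }],
)
```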
These are not issues that IT teams, who have grown comfortable with all the extra safeguards of the on-premise network, regularly have to think about when provisioning servers in the data centre. There is often an assumption that something or someone will secure the server, and that assumption carries over when putting servers in the cloud.
So when utilising the cloud, security teams need to step in: establish a perimeter, define policies, implement controls and put governance in place to ensure their data and servers are secured and managed effectively – just as they do with their on-premise network.
Security 101 for cloud data
This means you will still need to apply all the basics of on-premise network security when utilising the public cloud: access controls defined by administration rights or access requirements and governed by passwords, and filtering capabilities defined by which IP addresses need connectivity to and from one another.
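In AWS, access rights of this kind are typically expressed as IAM policies. A minimal least-privilege sketch, assuming a hypothetical bucket and policy name, might let one team read and write objects in a single bucket and nothing else:

```python
import json
import boto3

iam = boto3.client("iam")

# A least-privilege policy: read and write objects in one bucket only
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-sensitive-data/*",  # placeholder
    }],
}

iam.create_policy(
    PolicyName="example-team-s3-access",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```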
You still need to consider whether you should use data encryption, and whether you should segment the AWS environment into multiple virtual private clouds (VPCs). You will then need to define which VPCs can communicate with each other and place VPC gateways accordingly, with access controls in the form of security groups to manage and secure connectivity.
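For example, connectivity between two VPCs can be made explicit through a peering connection; any pair of VPCs without one (and matching routes) stays isolated. A hedged sketch with placeholder VPC IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Explicitly peer two VPCs that are allowed to communicate
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbbb2222c",      # e.g. the web tier VPC (placeholder)
    PeerVpcId="vpc-0ddd3333eeee4444f",  # e.g. the database VPC (placeholder)
)

ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)

# Traffic still only flows once route table entries are added on both
# sides and security groups permit the specific ports required
```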
You will also need controls over how to connect your AWS and on-premise environments, for example using a VPN. And you need a logging infrastructure that records actions for forensics and audit purposes, giving you a trail of who did what. None of these techniques are new, but they all have to be applied correctly to the AWS deployment to ensure it functions as expected.
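AWS’s CloudTrail service provides exactly this kind of audit trail. As a brief sketch with placeholder names (the destination bucket needs a policy that lets CloudTrail write to it):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record management API calls across all regions to an S3 bucket,
# producing an audit trail of who changed what
cloudtrail.create_trail(
    Name="example-audit-trail",        # hypothetical name
    S3BucketName="example-audit-logs", # hypothetical bucket
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="example-audit-trail")
```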
Extending network security to the cloud
In addition to these security basics, IT teams also need to look at how they should extend network security to the cloud. While some security functionality is built into cloud infrastructures, it is less sophisticated than the security offerings from specialist vendors.
As such, organisations that want to use the cloud to store and process sensitive information are well advised to augment the security functionality offered by AWS with virtualised security solutions. Deployed within the AWS environment, these bring the level of protection closer to what organisations are used to in their on-premise environments.
Many firewall vendors sell virtualised versions of their products customised for AWS. These come at a cost but, if you are serious about security, you need more than the measures that come as part of the AWS service. Ultimately you need to deploy additional web application firewalls and network firewalls, and implement encryption capabilities, to mitigate the risk of attack and data breach.
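One encryption capability that can be switched on directly is default server-side encryption for an S3 bucket, so every object written is encrypted at rest even if the uploader forgets to ask. A minimal boto3 sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Enforce server-side encryption by default for all new objects
s3.put_bucket_encryption(
    Bucket="example-sensitive-data",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-data-key",  # placeholder key
            },
        }],
    },
)
```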
All of this has the potential to add complexity to security management. However, a security policy management solution will greatly simplify it, giving security teams visibility of their entire estate and enabling them to enforce policies consistently across both AWS and the on-premise data centre, while providing a full audit trail of every change.