Using cloud services, security is your job too
Gábor Pék (Avatao CTO)
Being cloud-native won’t save you from external threats if you, as a user, are not aware of basic network security needs – cloud providers simply cannot do everything for you. At the same time, the heavy demand to scale our services creates an unexpected urgency to go cloud-native. This shift abstracts our infrastructure and network layers into the software-defined space of clouds. Simultaneously, traditional perimeter security issues move silently to the table of IaaS providers, yet certain control parameters remain in our hands. Due to this unclear shared responsibility, we tend to forget about key security problems, for example, the protection of our APIs, storage, inter-service communication, or the code of our services. Most of the time we don’t even ask the most essential question: who is the attacker and what are their capabilities?
In this blog, we give a summary of some of the key threat actors and some suggested countermeasures that you should consider introducing before going cloud-native.
Securing the Infrastructure
The cloud-native world promises high scalability, reliability, and minimal maintenance effort for customers, at the price of vendor lock-in.
The main difference between on-premise solutions and cloud-native networks is that in the latter case most of the security controls are owned by the provider. For example, traditional network attacks like VLAN hopping are still feasible if there is a vulnerability in the network stack; however, this is a risk users of public clouds have to accept.
At the same time, virtualized networks give a lot of benefits. We can define virtual networks that can be isolated completely so as to restrict what our services can access at the network level. For more granular control, network policies can be defined to create firewall-like rules. In this way, we can filter what ports and protocols are allowed between different services.
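Conceptually, such network policies boil down to default-deny filtering between named services. The sketch below illustrates the idea in plain Python; the service names and rules are made up for illustration and this is not any provider's actual API:

```python
# Illustrative firewall-like policy: which (protocol, port) pairs are
# allowed between which services. Everything not listed is denied.
ALLOWED = {
    ("frontend", "api"): {("tcp", 443)},
    ("api", "database"): {("tcp", 5432)},
}

def is_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule permits it."""
    return (proto, port) in ALLOWED.get((src, dst), set())

print(is_allowed("frontend", "api", "tcp", 443))        # True
print(is_allowed("frontend", "database", "tcp", 5432))  # False
```

The important design choice mirrored here is default-deny: the frontend can never reach the database directly, even on a port the database legitimately serves.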
Traditional physical servers are replaced by virtual machines or computing resources in an IaaS cloud. While traditional host security practices still apply (e.g., software patching, file and user permissions), we have to accept the risks of the virtual world such as multitenancy. As virtualization makes the software stack more complex, it introduces various new attack vectors as well.
Atop virtual machines, another level of virtualization gained high popularity in the last decade: containers. Today, containers permeate our entire technology stack, as they guarantee good isolation between processes and make horizontal scaling easy. Putting a service into a container doesn’t mean that it is secure, though. Containers can also be exploited by a capable attacker, who then gains access to everything the service had access to. Container escapes are also realistic scenarios, so different countermeasures should be applied, such as restricting system calls (e.g., with seccomp), dropping Linux capabilities, using user namespaces, and so on. One of the most popular containerization technologies today is Docker.
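To make the countermeasures above concrete, here is a sketch that composes a hardened `docker run` invocation. The flags are real Docker CLI options; the image name and the seccomp profile path are placeholders:

```python
# Sketch: building a hardened `docker run` command line.
# The image name and seccomp profile path are illustrative placeholders.
def hardened_run_args(image: str) -> list[str]:
    return [
        "docker", "run",
        "--cap-drop=ALL",                          # drop all Linux capabilities
        "--security-opt", "no-new-privileges",     # block privilege escalation
        "--security-opt", "seccomp=profile.json",  # restrict available system calls
        "--read-only",                             # read-only root filesystem
        "--user", "1000:1000",                     # don't run the process as root
        image,
    ]

args = hardened_run_args("myservice:latest")
```

Starting from `--cap-drop=ALL` and adding back only the capabilities a service truly needs is generally safer than dropping capabilities one by one.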
Secure IT Automation
In order to configure your virtual infrastructure automatically from templates, you should use IT automation tools (e.g., Ansible, Puppet, Chef, Terraform) to define your infrastructure as code. This way, your infrastructure becomes reproducible, maintainable, and scalable. However, it’s crucial to take network security into consideration. One of the most important questions is where to store your secrets (e.g., user logins, private keys). The simplest way is to push these secrets in encrypted form into your source code management repository; however, access control is non-obvious there. Better yet, use a dedicated key-value store designed for secrets (e.g., HashiCorp Vault).
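Whichever store you choose, the application side of the pattern is the same: secrets are injected into the runtime environment at deploy time and never hardcoded. A minimal sketch, with an illustrative variable name:

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret injected via the environment (e.g., by the deployment
    platform from a secret store) instead of hardcoding it in the repo."""
    value = os.environ.get(name)
    if not value:
        # fail loudly at startup rather than limping along without credentials
        raise RuntimeError(f"secret {name!r} is not set")
    return value

# In production the platform sets this; here we simulate it for the demo.
os.environ["DB_PASSWORD"] = "example-only"
password = get_secret("DB_PASSWORD")
```

Failing fast on a missing secret turns a subtle runtime authentication error into an obvious deployment error.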
Due to the complex nature of the access control parameters that cloud providers expose, a significant share of network security problems stems from misconfigurations or misunderstandings. The Million Dollar Instagram Bug is just one of the most thought-provoking examples. Security researcher Wes Wineberg gained access to Instagram’s AWS S3 buckets and leaked various security keys. He stated that “with the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member. While out of scope, I would have easily been able to gain full access to any user’s account, private pictures and data.” For more details about AWS S3 access controls, we suggest reading the corresponding blog post from the Detectify team.
To mitigate similar issues, several tools have been released over the years. Netflix’s Security Monkey monitors AWS and GCP policy changes and alerts on insecure configurations. Don’t forget: tools will not replace well-designed access controls and their careful maintenance.
While cloud providers offer handy tools for automated backups of cloud-native resources, it’s worth digging into the details a little more. First and foremost, backups should be handled with the same security mindset as the corresponding live data. We have to encrypt and sign the backups of sensitive data (e.g., secrets, keys) on the client side. While encryption guarantees confidentiality, the signature proves that the backup was created by us and wasn’t modified. Always test the automatic restoration process before relying on a solution in your production system.
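The integrity half of this advice can be sketched with the standard library alone. Below, an HMAC tag detects any tampering with a backup blob; encryption would be layered on top with an authenticated-encryption library, and the hardcoded key is purely illustrative (in practice it would come from a secret store):

```python
import hashlib
import hmac

KEY = b"backup-signing-key"  # illustrative; fetch from a secret store in practice

def sign(backup: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the backup contents."""
    return hmac.new(KEY, backup, hashlib.sha256).digest()

def verify(backup: bytes, signature: bytes) -> bool:
    """Constant-time comparison prevents timing side channels."""
    return hmac.compare_digest(sign(backup), signature)

blob = b"db-dump-contents"
tag = sign(blob)
print(verify(blob, tag))          # True
print(verify(blob + b"x", tag))   # False: tampering is detected
```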
Logging and monitoring
Even if we follow security best practices, it is important to monitor and understand what happens in and between our services. Central log collection helps keep an eye on your infrastructure via the logs of web servers, applications, operating systems, and API calls. Thus, potential security problems such as application failures can be pinpointed easily. Google Stackdriver and Amazon CloudWatch are two cloud-native examples of central logging and monitoring.
Collecting metrics helps to catch spikes in user activity and anomalies in your network traffic, so DoS attacks or malfunctioning services can be detected faster through well-defined hooks and alerts.
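A minimal sketch of such an alert hook: fire when the latest sample of a request-rate metric exceeds a multiple of its recent moving average. The window size and threshold factor are illustrative; real monitoring systems express this as alerting rules rather than application code:

```python
from collections import deque

class SpikeDetector:
    """Alert when a metric sample far exceeds its recent moving average."""

    def __init__(self, window: int = 5, factor: float = 3.0):
        self.samples = deque(maxlen=window)  # recent metric samples
        self.factor = factor                 # how far above average counts as a spike

    def observe(self, requests_per_min: float) -> bool:
        """Record a sample; return True if it looks like a spike."""
        spike = (
            len(self.samples) == self.samples.maxlen
            and requests_per_min
            > self.factor * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(requests_per_min)
        return spike

d = SpikeDetector()
normal = [d.observe(v) for v in [100, 110, 95, 105, 100]]  # steady traffic
alert = d.observe(900)  # sudden burst, e.g., a DoS attempt
```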
Securing your services
In the cloud-native world, we have to put extra emphasis on the security of our services. This is one of the very few places where we have full control over the security countermeasures.
Embed security into your CI/CD pipeline
Continuous integration and deployment (CI/CD) play a key role in releasing our product quickly. Problems arise when we ignore security in these fast iterations. That’s why we highly suggest embedding automatic security tests into your CI/CD pipeline, such as static code analyzers and vulnerability scanners for dependencies (e.g., Snyk), Docker images (e.g., Clair), and VM templates (e.g., CFRipper).
As cloud providers add fine-grained access controls, roles, and policies to protect our resources (e.g., computing instances, storage, databases, and so on) from unauthorized access, a considerable share of security issues shifts to our shoulders. Here is a minimal to-do list you should take into consideration.
The best practice to thwart SQL injection attacks is to use Object-Relational Mappers (ORMs) and prepared statements for your database queries. These templated queries encode strings properly and prevent malicious user input from being executed. There are exceptions, for example, when the ORM library itself contains security bugs, as pointed out by Snyk in their blog.
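The effect of a prepared statement is easy to demonstrate with Python's stdlib `sqlite3` module: the `?` placeholder binds user input as data, so a classic injection payload cannot alter the query's structure (the table and data here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "alice' OR '1'='1"  # attacker-controlled input
# The ? placeholder treats the whole payload as a single string value,
# so the OR clause never becomes part of the SQL statement.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
print(rows)  # []: the payload matched no user instead of dumping the table
```

Had the payload been concatenated into the SQL string instead, the same query would have returned every user's secret.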
Access controls should be applied in the backend code so as to mitigate Insecure Direct Object References and API abuse. For a more complete list, we suggest reading the OWASP Top 10 guide (PDF).
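The core of the mitigation is an ownership check on the server side before any object is returned, never trusting the client-supplied identifier alone. A minimal sketch with an illustrative data model:

```python
# Illustrative store: object id -> owner and contents.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's invoice"},
    2: {"owner": "bob", "body": "bob's invoice"},
}

def get_document(doc_id: int, current_user: str) -> str:
    """Return a document only if the authenticated user owns it."""
    doc = DOCUMENTS.get(doc_id)
    # Authorize in the backend: an attacker changing doc_id in the
    # request gains nothing without also being the owner.
    if doc is None or doc["owner"] != current_user:
        raise PermissionError("not authorized")
    return doc["body"]

body = get_document(1, "alice")
```

Note that a missing object and a forbidden object raise the same error, so the API does not leak which ids exist.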
A final suggestion here: please, never commit your secrets to source code repositories. To help prevent this, Skyscanner recently open-sourced a tool called Sonar Secrets.
The Serverless world or how to secure our functions?
Serverless functions (also known as Function as a Service, FaaS) were designed to remove the maintenance burden (e.g., OS and package updates) of virtual servers and containers so that developers can focus strictly on the functionality they need to implement. Another huge advantage is that serverless functions scale elastically even when peak load arrives at our endpoints.
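Because a serverless function is often wired directly to untrusted event sources, validating the event payload before using it is the first line of defense. Below is a sketch in the style of an AWS Lambda handler; the event shape and field names are illustrative:

```python
import json

def handler(event: dict, context=None) -> dict:
    """Validate event data before use, rejecting malformed input early."""
    try:
        payload = json.loads(event.get("body") or "{}")
        user_id = payload["user_id"]
        if not isinstance(user_id, int) or user_id <= 0:
            raise ValueError("user_id must be a positive integer")
    except (ValueError, KeyError, TypeError):
        # Reject with a generic message; don't echo internals back (see
        # "Improper Exception Handling & Verbose Error Messages" below).
        return {"statusCode": 400, "body": json.dumps({"error": "invalid request"})}
    return {"statusCode": 200, "body": json.dumps({"user_id": user_id})}

ok = handler({"body": '{"user_id": 42}'})
bad = handler({"body": '{"user_id": "42 OR 1=1"}'})  # injection-style payload
```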
New security concerns arise as well. According to a PureSec report, the ten most critical security risks for serverless architectures in 2018 are the following:
- Function Event Data Injection
- Broken Authentication
- Insecure Serverless Deployment Configuration
- Over-Privileged Function Permissions & Roles
- Inadequate Function Monitoring & Logging
- Insecure Third-Party Dependencies
- Insecure Application Secrets Storage
- Denial of Service & Financial Resource Exhaustion
- Serverless Function Execution Flow Manipulation
- Improper Exception Handling & Verbose Error Messages
Make sure to have a clear roadmap for mitigating the concerns above already in the design phase of your product.
We’d also love to hear your thoughts. Leave a comment below if you have any questions or feedback, or let us know what cybersecurity topic you’d like to read about next!