Cavirin Blog

It’s the week of Google Cloud NEXT and, as a Google Cloud Technology Partner, we are glad to see our efforts to bring Google Cloud Platform (GCP) into the Cavirin family of cloud security products come to fruition. The March 2017 release of Cavirin’s platform will include support for continuous security assessment of workloads on GCP, a major milestone in our company’s vision: to provide a consistent security solution across workloads running on multiple cloud providers’ platforms.

Regardless of the enterprise’s public cloud platform of choice, the fundamental problems remain the same, and manifest themselves in the following questions:

  • As a CIO or CISO of the enterprise, can I safely migrate my business-critical workloads to the public cloud, and still have the same level of security management built over years of operational experience within my private data center?
  • Knowing that security operations in the cloud are a totally different ball game, particularly given the “shared responsibility model” expected by the cloud providers, will I have a “security companion” to make migration less risky?
  • Once in the cloud, will I continue to be able to run my business-critical workloads securely, with the ability to monitor my risk posture quantitatively and report convincingly to management and the board about our security?

These are fair questions, and we expect them to arrive at a faster rate as the trend towards public cloud migration of enterprise workloads intensifies. This was confirmed by Diane Greene, the senior VP of Google Cloud, with this week’s announcement of major names using GCP, including Disney, Home Depot, Verizon, and Colgate-Palmolive.

We at Cavirin look at cloud security through a single prism: regardless of which cloud an enterprise may adopt, cloud security assessment and monitoring must be simple, canonical, and consistent across clouds. This seemingly simple objective, viewed against the multitude of differences in today’s cloud topologies and operational procedures, gains significant importance: it allows us to address cloud security concerns with a single, simple model.

Within Cavirin’s cloud security products, security orchestration is straightforward: with a few mouse clicks in our Control Plane user interface (or the invocation of a few REST APIs, if you are a DevOps or SecOps professional), you can discover your GCP infrastructure assets, identify the resources with comprehensive details, assess and harden the resources against security benchmarks (CIS and DISA), and do this automatically and continuously.
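
For the DevOps and SecOps crowd, an API-driven flow might look something like the sketch below. Note that the endpoint paths, parameter names, and response fields here are illustrative assumptions for the sake of the example, not Cavirin’s published API:

```python
import requests

# Hypothetical control-plane URL and token; the actual API paths,
# parameters, and response fields may differ -- this is only a sketch.
BASE = "https://controlplane.example.com/api"
HEADERS = {"Authorization": "Bearer <your-api-token>"}

# 1. Discover GCP infrastructure assets.
assets = requests.post(f"{BASE}/discovery",
                       json={"provider": "gcp", "project": "my-project"},
                       headers=HEADERS).json()["assets"]

# 2. Assess the discovered assets against a CIS benchmark.
job = requests.post(f"{BASE}/assessments",
                    json={"assetIds": [a["id"] for a in assets],
                          "benchmark": "cis"},
                    headers=HEADERS).json()

# 3. Fetch the results; a real integration would run this on a schedule
#    so that assessment stays continuous rather than one-off.
report = requests.get(f"{BASE}/assessments/{job['id']}",
                      headers=HEADERS).json()
print(report["summary"])
```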

The primary objective of this practice, assisted by Cavirin’s products, is to give you a “security companion” for your GCP infrastructure. Fortunately, Cavirin also has the most comprehensive set of OS hardening rules, capable of automatically testing any number of operating system versions that may be installed and operated on GCP running critical workloads. These rules and automated tests enable security assessment and continuous monitoring, and significantly reduce the attack surface of our customers’ infrastructure.

With increasing reliance on the cloud, and in many cases on a single cloud service provider, the probability of a widespread (though infrequent) outage grows. On Tuesday, AWS S3 storage experienced a major outage, taking down the back-ends of many sites, including Netflix, Slack, and HubSpot, two of which we use at Cavirin. Enterprises that were single-threaded just had to wait it out; though the actual outage lasted only 4 hours, it took many the remainder of the day to recover. To give you an idea of the magnitude of the impact, AWS S3 supports over 150K sites and upwards of three trillion data elements. Thousands of tweets questioned whether the Internet had gone down, just like last October with the Mirai outage. Compounding the problem, the storage service is shared across multiple AWS zones, and though an enterprise may distribute compute across geographies, for practical or cost reasons it may depend upon a single storage instance.

Despite immense amounts of automation, the human element may still be the weak link, as reported by USA Today: “The most common causes of this type of outage are software related,” said Lydia Leong, a cloud analyst with Gartner. “Either a bug in the code or human error. Right now, we don't know what it was.” The publication Slate took a more somber view: “At this point, we practically expect that whatever personal information we enter into websites will be stolen.”

So how do we combat these types of outages, as well as the human risk?

First off, the larger enterprises do in fact have a cloud DR strategy. For example, if AWS fails, the enterprise may have warm-standby capability on GCP, Microsoft Azure, or on-premises. Though most DR programs fail over into the cloud, nothing precludes a scenario where an enterprise has critical applications on-premises, less critical ones in the cloud, and an option to rehome them on-premises in times of emergency.

What this implies is that the enterprise must have a security compliance architecture that spans these multiple domains.

The success of any sound DR strategy depends on continuous replication of critical data, so that failover can guarantee business continuity once the disaster is rectified. In addition, the replicated systems must meet the same rigorous, continuous security monitoring and assessment requirements expected of live production systems. That way, when failover happens during an outage, the restored systems and services will not carry vulnerabilities. The scope of any security platform such as Cavirin must therefore include DR-replicated systems in addition to live production assets.

If enterprises have implemented AWS hardening benchmarks and their workloads move to GCP, they should ensure that the same protections are in place. This applies not only to conventional virtualized workloads but to containers as well. They need to ensure that the hardening applied to a given OS on one cloud provider is also available on another, and that compliance is agentless and continuous, to quickly build the baseline and identify any risk.
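
To make the idea concrete, here is a minimal sketch of an agentless check: a single CIS-style control evaluated over SSH against hosts on different providers. The host names are placeholders, and a real platform evaluates hundreds of benchmark rules, not one:

```python
import subprocess

# Hosts on different providers; agentless checking needs only SSH access.
HOSTS = ["admin@host-on-aws.example.com", "admin@host-on-gcp.example.com"]

# One CIS-style control: the kernel should not accept ICMP redirects
# (expected sysctl value: 0).
CHECK = "sysctl -n net.ipv4.conf.all.accept_redirects"

for host in HOSTS:
    result = subprocess.run(["ssh", host, CHECK],
                            capture_output=True, text=True)
    value = result.stdout.strip()
    status = "PASS" if value == "0" else "FAIL"
    print(f"{host}: accept_redirects={value or 'unreachable'} -> {status}")
```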

It is in times of outages that IT is stressed the most and likely to make mistakes. 

Here, automation of the security compliance process is critical. In the same way, if workloads move from the cloud to on-premises and vice versa, the same benchmarks, rules, and automation must span these different domains. Having to use one tool on one CSP and another in-house is yet another area of potential failure.

We will never be able to totally prevent outages, but by implementing best practices based on available security tools, the enterprise will be able to more effectively protect against negative customer impact or worse.


Snippet of AWS Eastern US status during outage.

As many noted, even accurate reporting of the outage was unavailable for a while, which harkens back to the Mirai US DNS outage last October.


This week yet another Linux vulnerability was discovered, CVE-2017-6074, which could be exploited to gain kernel code execution from an unprivileged process. The vulnerability is in the kernel’s implementation of the DCCP protocol.

Security benchmarks recommend disabling the DCCP protocol to reduce the attack surface.

The DISA RHEL 6 STIG reads: “Disabling DCCP protects the system against exploitation of any flaws in its implementation.”

The CIS Security Benchmark for Debian 8 reads: “The Datagram Congestion Control Protocol (DCCP) is a transport layer protocol that supports streaming media and telephony. DCCP provides a way to gain access to congestion control, without having to do it at the application layer, but does not provide in-sequence delivery. If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface.”
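
For reference, the remediation both benchmarks point to amounts to preventing the kernel module from loading at all. A minimal sketch follows; the file name is a convention, and any file under /etc/modprobe.d/ works:

```
# /etc/modprobe.d/dccp.conf -- prevent the DCCP kernel module from loading
install dccp /bin/true
```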

Cavirin’s solution automates the assessment of these security baselines across your hybrid cloud. It continuously protects you from vulnerabilities arising out of misconfiguration, and from zero-day vulnerabilities arising out of the default attack surface. Vulnerabilities such as this one do not really bother you if you have used the solution to detect the presence of such uncommon network protocols and have already reduced the attack surface by disabling them altogether when not in use. You cannot protect what you don’t see, and Cavirin’s solution gives you security evidence, audit reports, and operational procedures instead of verbal security assurances and recommendations.
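
As a simple illustration of “seeing” such protocols, the sketch below checks a host’s loaded kernel modules against the uncommon transport protocols that CIS benchmarks commonly call out. It is a simplified stand-in for what a full platform automates, and the module list here is an assumption mirroring typical benchmark recommendations:

```python
import subprocess

# Uncommon transport protocols that CIS benchmarks recommend disabling.
UNCOMMON = {"dccp", "sctp", "rds", "tipc"}

# lsmod output: first line is a header; first column is the module name.
lsmod = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
loaded = {line.split()[0] for line in lsmod.splitlines()[1:] if line.strip()}

for proto in sorted(UNCOMMON & loaded):
    print(f"WARNING: {proto} module is loaded; consider disabling it")
```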

Last night I had the pleasure of attending a panel hosted by the EC-Council on insider threats. Panelists included the CISO of San Francisco, the VP of Systems from Macy’s, and, most interestingly, Edward Snowden’s former boss at Booz Allen Hamilton. All three covered various aspects of the STRIDE model. For example, the crisis that SF ran into about 8 years back, where a single employee held the city network hostage by collecting router passwords, was a combination of disclosure and elevation. It took the mayor at the time to defuse the situation.

The NSA suffered the same with Snowden, who in the first week of his new assignment in Hawaii was requesting passwords from colleagues and spending off hours on-site. More damaging, it is rumored but not confirmed that the credentials from his former IT role were never revoked. This, married up with his new access to higher levels of classification, created an opportunity. And it is never a single issue: his CIA HR records, had they been shared with the NSA (they were not at the time), would have raised additional flags, and employees were not then subject to daily exit searches, giving him the opportunity to walk out with his USB drives.

Left to Right - 

Steven Bay, former boss of Edward Snowden

Joe Voje, CISO, City of San Francisco

Brian Phillips, VP, Macy’s Systems and Technology

And Macy’s was the first to admit that their procedures to address insider capture of PCI data were not up to snuff. They are now, and the company has recently taken a very aggressive approach to limiting data access as part of its announced store closures, since there is an interim period where employees have been notified but are still employed.

Net-net, the last decade has been a learning experience across both the commercial and government sectors, but with increased focus, awareness, and sharing of best practices, we’re making progress.


The CISO is under immense pressure, expected to manage a dozen or more vendors across perimeter, endpoint, network, application, and data security, not to mention having to be an expert on policy and operations.  Hackers in many cases have the upper hand, and the human element is still the weak link. 

Because of this, more and more enterprises are realizing that what we offer to automate some of this is no longer a nice-to-have; it is a must-have! At the same time, we’re able to clearly show our differentiation from the vulnerability assessment (VA) vendors, and we are more versatile than the cloud-only solutions. Look at it this way, as best articulated by one of our customers, Cepheid: VA will tell you how many windows and doors you have, and which are open. We take the next step and tell you how to close them. And, if you are so inclined, we’ll do the closing.

The API-first architecture of our new Pulsar platform was also a top topic of discussion, with potential ecosystem partners recognizing the need for a unified view of overall security compliance, be it server, endpoint, identity, or vulnerability, across all clouds and containers. If you missed it, check out our Pulsar General Availability press release. In all, a more than successful first day for Cavirin’s first RSA presence, based on both the quantity and, more importantly, the quality of discussions and demos.

(Breaches photo from SS8 shirt at RSA - thanks!)


First of a multi-part series on the CIS benchmarking process, by Pravin Goyal.

ON CIS BENCHMARKS

What are CIS Benchmarks?

The CIS Security Benchmarks program provides well-defined, unbiased, and consensus-based industry best practices to help organizations assess and improve their security. The Security Benchmarks program is recognized as a trusted, independent authority that facilitates the collaboration of public and private industry experts to achieve consensus on practical and actionable solutions. Because of this reputation, these benchmarks are recommended as industry-accepted system hardening standards and are used by organizations to meet various compliance requirements such as PCI and HIPAA.

What is the typical CIS benchmark development process?

CIS Benchmarks are created through a consensus review process carried out by subject matter experts. Consensus participants provide perspective from a diverse set of backgrounds such as consulting, software development, audit and compliance, security research, operations, government, and legal. Each CIS benchmark undergoes two phases of consensus review. The first phase occurs during initial benchmark development, when subject matter experts convene to discuss, create, and test working drafts of the benchmark. This discussion continues until consensus has been reached on the benchmark recommendations. The second phase begins after the benchmark has been published. During this phase, all feedback provided by the Internet community is reviewed by the consensus team for incorporation into future versions of the benchmark.

What does it take to develop a new benchmark?

It is easy to contribute to CIS benchmarks. Just write to the CIS community program managers with your proposal for an addition. The respective program manager will respond and schedule a call to understand your proposal and discuss timelines, the project announcement, and project marketing to attract community participants. With some internal approvals, the project is created in around two weeks.

How long does it usually take to develop a new benchmark?

It usually takes around 12-24 weeks, depending on the number of participants in the community and the size of the project.

Who else provides security benchmarks like CIS does?

I would say none. CIS provides the broadest set of benchmarks, covering both software and hardware: databases, operating systems, applications, mobile operating systems, firewalls, browsers, office applications, and almost anything else that touches IT. The only other agency that provides a subset of comparable benchmarks is DISA. Vendors also sometimes provide security documentation in benchmark format; for example, VMware publishes a vSphere hardening guide for securing vSphere deployments.

How can we contribute?

Join the existing CIS communities. It is exciting and challenging, and you will get to work with amazing people.

How do we implement CIS benchmarks in our product?

There are two ways to implement CIS benchmarks: the first leverages content directly from CIS; the second is to develop your own proprietary content that implements the benchmark.

Tell us a bit about the CIS Docker and CIS Android benchmarks.

Both the CIS Docker and CIS Android benchmarks have fascinating community members, and I had the privilege of working on both as an author. One interesting thing to note is that the CIS Docker benchmark has existed since Docker version 1.6. At that time, not many people knew Docker or Docker security, but the community did an amazing job by documenting 84 security recommendations! That is the power of community. I'll cover Docker and Android in more detail in a future segment.


Cavirin provides security management across physical, public, and hybrid clouds, supporting AWS, Microsoft Azure, Google Cloud Platform, VMware, KVM, and Docker.
