As enterprise cloud use has evolved, so have cloud security priorities. Organizations worry less about unauthorized consumer cloud applications and more about the security of sensitive data in strategic enterprise cloud services. Cybersecurity incidents involving sensitive data in the cloud are becoming less frequent but more damaging: the number of data breaches detected has trended down roughly 30% year over year since 2016,1 while the same global survey reported a 58% increase in average financial losses per breach over the same period.
When it comes to securing sensitive business data in sanctioned enterprise cloud applications, Insider Threat, the threat posed by an individual with authorized access to an organization’s IT systems, is a natural priority. Whether malicious or accidental, security incidents involving insiders can put large amounts of highly sensitive data at risk, even when IT security teams consider that data “secure” within sanctioned cloud applications.
Insiders are responsible for 43% of all data breaches,2 and there is a general consensus across the security industry that breaches attributed to insiders tend to be more detrimental to the organization.3,4 A large majority of insider breaches involve malicious intent; only 28% are accidental.3 This area of risk most frequently involves current and former employees, but contractors and consultants can also put data at risk. Given the high cost of incidents and the difficulty of detecting them, addressing Insider Threat is a key element of any organization’s cloud security strategy.
In this in-depth article, I will cover the risks, priorities, and emerging security tools around Insider Threat, as well as the unique way in which McAfee detects and prevents insider threats.
Detecting Insider Threats
Attention has shifted from the security of cloud service providers to the secure use of data in sanctioned cloud applications. With critical business data and infrastructure accessible in the cloud, a single rogue user can cause significant damage.
There is no single attack vector for insider threat. Insider attacks detected in previous years highlight the variation in the categories of perpetrators and their approaches. The sensitivity and scale of data available to users within organizations has led to some of the most notorious security failures. While current employees are the biggest perpetrators of insider attacks, accounting for 30% of all incidents, former employees can also cause damage.3
For example, in a lawsuit filed against Uber in February 2017, Google subsidiary Waymo alleged that a former Google employee downloaded 14,000 sensitive documents related to self-driving car technology before leaving the company. The former employee subsequently led Uber’s self-driving car project, and Uber’s technology bears a striking resemblance to components developed by Waymo.
Insiders can also aid external actors in stealing data, deliberately or accidentally. In 2015, a former Morgan Stanley financial advisor pleaded guilty to stealing 730,000 account records from 2011 to 2014 and saving them on a personal server at home. It is suspected that Russian hackers stole the data when it was on his home server. During this time, the employee was also in discussions with two other banks that compete with Morgan Stanley about potential employment.
Enforcing the secure usage of cloud services will be the central security challenge for organizations. Gartner predicts that through 2020, 95% of cloud security breaches will be due to inappropriate or negligent use,5 rather than a breach of the cloud providers’ security.
Insider Threat Variants
There are a number of ways, intentional and accidental, in which insiders can put corporate data at risk. Example scenarios include:
- A malicious employee downloads data from a corporate sanctioned cloud service, then either uploads it to an unsanctioned cloud storage service or walks away with the data on a physical device
- A careless employee downloads sensitive data from a corporate sanctioned cloud service, then shares it with a third-party such as a vendor or partner
- An unsuspecting employee downloads data onto a personal device and later loses the data to an external actor when the personal device is breached
- An employee with account credentials compromised in a phishing attack or with weak or shared passwords loses data
- Privileged users of a cloud service (such as administrators) change security configurations inappropriately or create a backdoor to access sensitive data with nefarious intent
- A disgruntled (or careless) employee deletes sensitive data or changes internal system settings, disrupting normal operations and/or causing data loss
- A former employee whose access to data and systems was not terminated breaks into the employer’s network with intent to steal data for personal financial gain
- Contractors or consultants employed by an organization access and store a copy of sensitive data, maliciously or unknowingly
Existing Approaches to Insider Threat Detection
Traditionally, enterprises have relied on security information and event management (SIEM) solutions to detect threats. SIEMs collect logs and events from multiple sources including routers, switches, firewalls, servers, applications, and more and analyze them to check for matches to pre-defined rules. If a match is found, an alarm is triggered.
There are several limitations of this approach as applied to cloud usage. With this approach, each pattern of behavior that can signal a threat must be added as a rule. A security expert must either continuously and manually add a large number of new correlational rules or manually correlate raw log data from different sources to identify a threat. If the security expert lacks subject matter expertise or doesn’t understand the signature of a threat enough to create a rule for it, the SIEM becomes ineffective in detecting these types of threats.
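To make this limitation concrete, here is a minimal Python sketch of SIEM-style rule matching. The event fields, rule names, and thresholds are hypothetical; the point is that an alarm fires only when an event matches a pre-defined rule, so a threat whose signature was never encoded goes undetected.

```python
# SIEM-style detection sketch: each rule is a named predicate over a
# single event, and an alarm fires only on an exact rule match.
# All event fields and thresholds here are hypothetical illustrations.

RULES = [
    ("excessive_download",
     lambda e: e["action"] == "download" and e["bytes"] > 10**9),
    ("nonadmin_config_change",
     lambda e: e["action"] == "config_change" and e["role"] != "admin"),
]

def match_rules(event):
    """Return the names of every pre-defined rule the event triggers."""
    return [name for name, predicate in RULES if predicate(event)]

# A threat whose signature was never written as a rule raises no alarm:
event = {"action": "share_external", "bytes": 0, "role": "user"}
assert match_rules(event) == []
```

Every new threat pattern requires a security expert to hand-write another predicate, which is exactly the maintenance burden described above.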
A newer approach to threat detection relies on static machine learning. A machine learning tool observes a system for a period of time to establish a baseline of normal behavior. Once the baseline has been established, any activity outside the observed pattern triggers an anomaly alert. The biggest flaw is that static machine learning becomes ineffective without sufficient training data, which creates challenges for less popular cloud services and for activities that a cloud service supports but that are rarely performed.
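The baseline approach, and its sparse-data weakness, can be sketched in a few lines of Python. The five-sample minimum and the z-score threshold are arbitrary choices for illustration only.

```python
import statistics

def build_baseline(training_counts):
    """Fit a simple static baseline (mean and stdev of daily activity
    counts). Returns None when there is too little training data --
    the weakness of static models for rarely used services."""
    if len(training_counts) < 5:   # arbitrary minimum for this sketch
        return None
    return statistics.mean(training_counts), statistics.stdev(training_counts)

def is_anomaly(baseline, count, z_threshold=3.0):
    """Flag activity that deviates from the baseline by > z_threshold."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > z_threshold
```

With a well-populated baseline this flags outliers reliably, but for a rarely used cloud service `build_baseline` never produces a model at all.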
User Behaviors That Tell-All
A core tenet of human behavior is that actions are driven by intent, so observed behavior carries a strong signal about the intent behind it. From an Insider Threat perspective, understanding the intent behind a user’s actions is key to identifying that user as malicious. Because intent is not directly observable, security teams can analyze behavior to infer it. User intent, malicious or not, falls into many categories, and the ability to differentiate between them directs an analyst’s attention to atypical intents. In other words, deviation from typical user behavior can indicate scenarios that may turn into insider threats.
Analyzing behavioral patterns to differentiate normal behavior from scenarios that need attention and intervention is not a new concept. Financial institutions analyze behaviors to detect fraudulent transactions, healthcare providers to suggest treatment protocols, and manufacturers to predict machine failures. Try as they may, credit card thieves have a difficult time perfectly mimicking normal transactions in granular detail.
Behavioral analysis can also aid in detecting external hackers. Hackers may be able to gain access to an organization’s systems when a user’s account credentials are compromised, but it is not easy to emulate the user’s normal behavior. Security teams have the opportunity to detect and respond to internal and external threats with behavioral analysis.
Insider Threat Behaviors
Risky insider behavior can take many forms, some of the key indicators include:
- Excessive data downloaded, copied, or accessed
- Accessing data that is irrelevant to the user’s role
- Accessing data and systems at odd times
- Accessing data and systems after a long recess
- Accessing data and systems from odd locations, networks, or devices
- Unlikely actions and activities in remote access sessions
- Unlikely sequences of actions
- New collaborations with external users and abnormal sharing patterns
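One simple way to operationalize indicators like these is to treat each as a boolean feature and weight their co-occurrence, so that several weak signals together outrank any single one. The indicator names and weights below are hypothetical illustrations, not McAfee’s actual model.

```python
# Hypothetical weighted-indicator sketch: each risky behavior above
# becomes a boolean feature; the risk score sums the weights of the
# indicators observed together for one user.
INDICATOR_WEIGHTS = {
    "excessive_download":      3,
    "irrelevant_data_access":  2,
    "odd_hours":               1,
    "dormant_account_return":  2,
    "unusual_location":        2,
    "abnormal_sharing":        3,
}

def risk_score(observed_indicators):
    """Sum the weights of the observed indicators; unknown names score 0."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed_indicators)
```

A single off-hours login scores low, but off-hours access plus an excessive download plus abnormal sharing accumulates quickly, which mirrors how analysts reason about co-occurring indicators.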
The Theory that Powers Insider Threat Detection—the McAfee Way
Consider all the cloud service usage information produced by a large organization. The data of interest is the different actions taken by enterprise users within the cloud service. The granularity of the data includes the level of individual actions taken by each of the users, along with meta information about both the user (geo-location, IP address, role, department) and the action (file & object names, shared links).
Depending on the cloud service, certain actions tend to occur more often than others and be performed by more users. For example, “Login (to O365)” is probably the most observed action taken by every enterprise user, whereas “Create User (in O365)” is observed only for a limited number of users with administrative privileges. ‘Update (Salesforce) Ticket’ should be expected to be used only by a subset of the enterprise users, but at a relatively high frequency.
Although behavioral models can be constructed for commonly observed actions and users, it is a challenge to capture predictability around infrequent actions and sparse usage. A full view spanning both sparse and dense behavioral modeling is a key differentiator in McAfee’s ability to detect insider threats.
User behavior may be defined as a composite of activity counts, activity category counts, files or objects touched, bytes downloaded or uploaded, number of times a service is accessed, rate of access, time of access, and more measured either across one service action, one cloud service provider, or a homogenous group of service actions and cloud services. In the context of one organization and a single cloud service, individual user behavior will likely vary across multiple dimensions such as time of use, rate of use, aggregate use, level of use, etc. The source of variation in use may arise as much from personal preferences as from enterprise enforced policies and practices. Without visibility into the corporate policies or an individual’s preferences, the only observable artifact is the actual usage. The observed usage is a manifestation of the hidden state of the user. The challenge is to model the user behavior as a combination of unobserved components and identify where the combination varies from one user to the other.
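As a rough illustration, the composite described above can be reduced to a fixed-length feature vector per user. The event schema (action names, `bytes`, and `hour` fields) is hypothetical.

```python
from collections import Counter

def behavior_vector(events, action_vocab):
    """Summarize a user's raw activity log into a fixed-length vector:
    per-action counts, total bytes moved, and distinct hours of access.
    Field names are hypothetical; a real model would track many more
    dimensions (locations, devices, shared objects, and so on)."""
    counts = Counter(e["action"] for e in events)
    vec = [counts.get(a, 0) for a in action_vocab]
    vec.append(sum(e.get("bytes", 0) for e in events))   # aggregate use
    vec.append(len({e["hour"] for e in events}))         # time-of-use spread
    return vec
```

Two users with identical job titles can still produce very different vectors, which is the observable manifestation of their hidden preferences and policies.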
UEBA for Insider Threat Detection
User and Entity Behavior Analysis (UEBA) entails the ability to build accurate behavior models for users across cloud services, continuously integrate additional data to refine the models, and create constantly evolving profiles for individual users and groups of users. UEBA relies heavily on machine learning to identify behavioral patterns from historical usage data. Every user is essentially assigned a distinct mathematical representation of their behavior.
There are five primary attributes of an effective Insider Threat detection framework. These principles collectively serve to increase the rate of accurately detecting threats while minimizing false positives:
- Reduction of usage data to a concise behavior
- Self-learning without manual input
- Grouping users based on behavior
- Awareness of distinct usage across time
- Visibility into cross-cloud threats
Reduction of usage data to a concise behavior
Consider a Salesforce user whose usage has been captured in months of activity logs containing a large number of data points. Comparing ongoing behavior against that volume of stored raw data is taxing and inefficient, even before accounting for temporal inconsistencies or usage biases.
It would be ideal to convert this raw historical usage data into a concise mathematical model, which greatly simplifies the process of comparing new activity against prior activity while still retaining an information-dense representation of user behavior (for example, using higher-order polynomials).
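A minimal sketch of this reduction, using simple per-action statistics in place of the richer models a production system would build: months of raw daily logs collapse into a small summary that new activity can be checked against directly, with no re-scan of the raw history.

```python
import statistics
from collections import defaultdict

def compress_history(daily_action_counts):
    """Reduce raw daily logs (a list of {action: count} dicts) to a
    compact per-action summary (mean, stdev). A real system would keep
    a far richer model; this is a minimal illustrative sketch."""
    per_action = defaultdict(list)
    for day in daily_action_counts:
        for action, n in day.items():
            per_action[action].append(n)
    return {a: (statistics.mean(v), statistics.pstdev(v))
            for a, v in per_action.items()}

def deviates(model, action, count, z=3.0):
    """Compare new activity against the compact model, not the raw logs."""
    if action not in model:
        return True          # a never-before-seen action is itself notable
    mean, sd = model[action]
    return abs(count - mean) > z * sd if sd else count != mean
```

The model is a few numbers per action instead of months of log lines, which is what makes continuous comparison cheap.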
Self-learning without manual input
Activity patterns in the cloud, even for a single user, are constantly evolving as the subject takes on new roles within the organization or changes habits. Once an environment has been observed, the behavioral models determined, and the baselines established, the threat detection framework should continue to evolve its models without excessive human guidance as it observes new and, often, dissimilar behavior. This characteristic sets UEBA apart from traditional heuristics and static models. These traditional approaches require a significant number of manual updates to ensure accurate threat detection, while UEBA-driven approaches evolve on their own.
Grouping users based on behavior
Take the example of four employees at an organization who take vacations on a regular basis every few months at different times. Upon returning from their vacations, all four upload a large amount of data in the form of vacation photos to personal folders in their corporate Box accounts. By automatically grouping these four users and their particular behavior together, a pattern can be drawn from their behavior that might not have been evident if their behavior was observed in isolation. Grouping users helps improve the accuracy of threat detection, especially when usage data is scarce. In cases where usage of a particular service or time period is scarce for a user, their activity can be compared to that of similar users to infer whether it is anomalous.
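A naive sketch of behavioral grouping, assuming each user has already been reduced to a numeric behavior vector: users whose vectors fall within a distance threshold of a group’s first member are grouped together. A production system would use proper clustering rather than this first-fit pass.

```python
import math

def group_users(user_vectors, threshold):
    """Naive behavioral grouping: a user joins the first existing group
    whose representative vector lies within `threshold` (Euclidean
    distance); otherwise the user starts a new group. Illustrative
    stand-in for real clustering."""
    groups = []  # list of (representative_vector, [user_ids])
    for user, vec in user_vectors.items():
        for rep, members in groups:
            if math.dist(rep, vec) <= threshold:
                members.append(user)
                break
        else:
            groups.append((vec, [user]))
    return [members for _, members in groups]
```

In the vacation example, the four photo-uploaders land in one group, so a burst of uploads from any one of them is judged against peers with the same habit rather than against the whole company.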
Awareness of distinct usage across time
Let’s go back to our imaginary Salesforce user. At the beginning of the quarter, she performs a lot of activity on the accounts she owns. Near the end of the quarter, she will again display a flurry of activity. But in between, she may exhibit little activity. If a system averages her activity over the whole quarter, it would infer a low level of usage and trigger numerous false positive alerts at the beginning and end of the quarter. Instead, if the system draws seasonal and cyclic patterns of behavior across time frames, then it will automatically correlate the amount of account activity to the period of time within a quarter to accurately differentiate normal behavior from a true threat.
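The idea can be sketched by bucketing activity by its phase within the quarter rather than averaging across the whole quarter. The week numbering and the simple ratio test below are illustrative simplifications.

```python
import statistics
from collections import defaultdict

def seasonal_baseline(weekly_activity):
    """Build a baseline per week-of-quarter instead of one quarter-wide
    average, so begin/end-of-quarter bursts are compared against the
    same phase of previous quarters.
    weekly_activity: list of (week_of_quarter, count) across quarters."""
    buckets = defaultdict(list)
    for week, count in weekly_activity:
        buckets[week].append(count)
    return {w: statistics.mean(v) for w, v in buckets.items()}

def unusual(baseline, week, count, ratio=3.0):
    """Flag activity far above or below what is normal for that week."""
    expected = baseline.get(week, 0) or 1
    return count / expected > ratio or expected / max(count, 1) > ratio
```

A quarter-wide average would flag every week-1 burst as anomalous; the phase-aware baseline flags only the burst that occurs where the user is normally quiet.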
Visibility into cross-cloud threats
Cloud threat protection can only be effective with visibility into activity and threats that span multiple cloud services. Evaluating user activities beyond an initial login to include user movement across cloud services and the context with which that movement occurs allows a solution to protect enterprise data, wherever it travels. For example, several failed login attempts within a single cloud service might not be cause for concern, but a user triggering failed logins across multiple cloud services could be a sign of a real threat. Another example would be a user downloading a large report from Salesforce and subsequently uploading data to another file-sharing service. Detecting this activity as a potential threat can only be done with a cross-cloud insider threat detection solution.
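The failed-login example above can be sketched as a cross-service correlation: flag users whose failures stay below the per-service alarm threshold but span suspiciously many distinct cloud services. Event fields and both thresholds here are hypothetical.

```python
from collections import defaultdict

def cross_cloud_failed_logins(events, per_service_limit=5, service_limit=3):
    """Flag users whose failed logins never trip a single-service alarm
    (< per_service_limit per service) yet spread across at least
    service_limit distinct cloud services -- invisible to any tool that
    watches one service in isolation."""
    fails = defaultdict(lambda: defaultdict(int))
    for e in events:
        if e["action"] == "failed_login":
            fails[e["user"]][e["service"]] += 1
    flagged = set()
    for user, per_service in fails.items():
        below_single_alarm = all(n < per_service_limit
                                 for n in per_service.values())
        if below_single_alarm and len(per_service) >= service_limit:
            flagged.add(user)
    return flagged
```

The same pattern generalizes to the download-then-upload example: only a view that joins activity across services sees the sequence as one event.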
Cloud Threat Protection Best Practices from the Trenches
Even the most advanced threat protection technology can be rendered ineffective when not properly implemented. Below are some of the proven best practices and must-haves when implementing a cloud threat protection solution.
- Focus on multidimensional threats and not simple anomalies: Imagine a user logs in from a new IP address, downloads a higher-than-average volume of data, or changes a security setting within an application. In isolation, these are anomalies and not necessarily indicative of a security threat. Focus detection on threats that combine multiple indicators and anomalies, providing strong evidence that an incident is in progress.
- Start with machine-defined models, then refine: Aside from accuracy limitations, it’s difficult to get started with threat protection by configuring detailed rules without the context of thresholds. Start with unsupervised machine learning which analyzes user behavior and automatically begins detecting threats. Augment with follow-on analyst feedback to fine-tune threat detection and reduce false positives.
- Monitor all cloud usage for shadow and sanctioned applications: Cloud activity within a single service might appear routine in isolation, because threats are often signaled only by activity spanning multiple services. Correlate activity across multiple applications, and a pattern will emerge if a threat is in progress. It is important to start with visibility into both sanctioned and unsanctioned cloud services to get the full picture of a user’s risk.
- Leverage your existing SIEM and SOC workflow: Events generated by a cloud threat protection solution should flow into your existing security operations center (SOC) and SIEM solutions in real time via a standard feed. This capability will allow security experts to both correlate cloud anomalies with on-premises anomalies and allow the integration of cloud threat incident response with existing incident response workflows.
- Correlate cloud usage with other data sources: Looking at a single data source to detect threats is inadequate. Additional information can add context to reduce false positives and false negatives. Contextual data can include whether the user is logging in using an anonymizing proxy or a TOR connection and whether their account credentials are for sale on the Darknet.
- Whitelist low-risk users and known events: A general rule of thumb is to allow a system to generate as many threat events as the security team has the bandwidth to investigate. Increasing thresholds is one way to adjust the system. Another is to whitelist events generated by low-risk, trusted users. This capability can protect your IT security team from being inundated with false positives.
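Whitelisting reduces to a simple filter in front of the analyst queue, sketched here with hypothetical event fields: events from trusted users or of known-benign types never reach the investigation backlog.

```python
def filter_threat_events(threat_events, whitelist_users, whitelist_types):
    """Suppress events generated by trusted low-risk users or matching
    known-benign event types, so analysts only see events worth
    investigating. Field names are hypothetical."""
    return [e for e in threat_events
            if e["user"] not in whitelist_users
            and e["type"] not in whitelist_types]
```

Tuning the whitelist (alongside detection thresholds) is how the event volume is matched to the security team’s investigation bandwidth.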