Insider threat

Insider threats within the cloud

November 23, 2017 by Frank Siemons

Contrary to common perception, reports show time and time again that the most significant security threats to an organization are the so-called insider threats. Research estimates hold these threats responsible for at least 40%, and potentially up to 75% or more, of all data breaches. News coverage is relatively limited because a story about a disgruntled employee is not as interesting as one about a nation-state attacker or a criminal organization. Security professionals, however, need to take these threats very seriously: detection capabilities are limited, and the potential impact of an insider breach can be far-reaching.

Means, Motive, and Opportunity

These three words are essential to many criminal convictions in the United States, but they are also invaluable when it comes to identifying insider threats and the associated risks.

To be classified as an "insider," an actor must have (or once have had) some level of legitimate access to a target system and some knowledge about it (opportunity and means).

The actor's motives can vary widely. A data leak or breach is not always intentional. A poorly designed system change could accidentally open the system up to attack, or a user could place data in a publicly accessible location without meaning to. A small mistake could lead to millions or even billions of dollars in damages and could destroy a carefully built-up reputation.

There could, however, be some darker motives at play. A disgruntled employee or ex-employee could do a lot of damage to their employer with a data breach. Think also of a salesperson leaving the business and taking the organization's customer database to their new employer.

In the case of corporate or state level espionage, the attacker has far more resources available.

Instead of attempting to gain external access to a system, a worker could be placed inside the organization, or an existing one persuaded (through blackmail, financial incentives, etc.), to obtain information beneficial to the interested party or to open a backdoor into the system.

Why is this of particular concern with a cloud platform?

One of the main benefits of operating a service within a cloud platform is its virtually unlimited accessibility. Unless specific sources are whitelisted, the services are accessible from anywhere, by anyone with appropriate credentials. This also presents a risk. An ex-employee whose account has not been properly removed, for instance, could still access these services over the internet. And what about that sales agent who left the business to work for a competitor, or an ex-systems administrator who created a since-forgotten backdoor account? A stolen account could also be used by anyone, without the need for physical network access. Account lifecycle management is critical to these online services.
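The account lifecycle review described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API; the account fields and the 90-day staleness threshold are assumptions chosen for the example.

```python
from datetime import datetime, timedelta

# Hypothetical account records; the field names are assumptions for illustration.
accounts = [
    {"user": "alice", "last_login": datetime(2017, 11, 1), "active_employee": True},
    {"user": "bob",   "last_login": datetime(2017, 3, 5),  "active_employee": False},
    {"user": "carol", "last_login": datetime(2017, 6, 20), "active_employee": True},
]

def flag_risky_accounts(accounts, now, stale_after=timedelta(days=90)):
    """Flag accounts belonging to ex-employees, or unused for too long."""
    flagged = []
    for acct in accounts:
        if not acct["active_employee"]:
            flagged.append((acct["user"], "ex-employee account still enabled"))
        elif now - acct["last_login"] > stale_after:
            flagged.append((acct["user"], "stale account (no recent logins)"))
    return flagged

for user, reason in flag_risky_accounts(accounts, datetime(2017, 11, 23)):
    print(user, "-", reason)
```

In practice this kind of check would be fed from the HR system and the identity provider, and run on a schedule, so that a departure automatically triggers account removal rather than relying on someone remembering to do it.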

This is especially the case because cloud services are often more business critical and often hold much more data. After all, a significant reason to move services to the cloud is the increased availability this provides.

Cloud services often communicate with each other or with local services via an Application Programming Interface (API) key. This key is not only used to identify its user; it is also often used to secure the communications, much like a very complex password. This means API keys should be stored and communicated securely. Changing an API key is not always an easy process: quite often it needs to be done simultaneously on all systems that use the key, in order to avoid temporary outages. This has led to unchanged API keys being used for extended periods of time, sometimes years. A systems administrator could have taken a copy of the key before leaving the business, providing full access to the service for as long as the key stays valid. As with password management, API key management is critical.
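The "simultaneous change" problem has a common workaround: let old and new keys overlap during a rotation window, so clients can migrate one at a time without an outage. The sketch below assumes a service that validates presented keys against a set of currently valid ones; the class and method names are illustrative, not from any real cloud provider.

```python
import hmac
import secrets

class ApiKeyStore:
    """Minimal sketch of overlapping API key rotation (illustrative only)."""

    def __init__(self):
        self.valid_keys = {secrets.token_hex(32)}  # initial key

    def rotate(self):
        """Issue a new key while keeping the old ones valid (overlap window)."""
        new_key = secrets.token_hex(32)
        self.valid_keys.add(new_key)
        return new_key

    def retire_all_but(self, keep):
        """Once every client uses the new key, revoke the old ones."""
        self.valid_keys = {keep}

    def is_valid(self, presented):
        # Constant-time comparison avoids leaking key material via timing.
        return any(hmac.compare_digest(presented, k) for k in self.valid_keys)
```

A rotation then becomes: call `rotate()`, roll the new key out to each consuming system, and only then call `retire_all_but()`. An ex-employee's copied key stops working as soon as the old key is retired, rather than years later.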

Detection

There are many detection methods covering insider threats, some more effective than others. Insider threat detection is mostly based on abnormal user behavior, because the traditional tell-tale signs of an attack, such as large numbers of failed logins or exploitation alerts, are absent; the user already has some form of (legitimate) access. Many products can monitor and baseline normal user behavior and alert when a user, for instance, downloads a large amount of customer data from the company shares or accesses many different systems and databases within a short period of time. Advances in machine learning have enabled a lot of progress in these anomaly detection systems in recent years. Virtually unlimited numbers of behavioral factors can be taken into account, such as whether the user is on holiday, logs in from an unusual location or has been online for more than 12 hours. Some vendors, such as Darktrace and Rapid7, have built specialized products in this field, called User Behavior Analytics.
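At its simplest, the baselining idea above amounts to comparing today's behavior against a user's own history. The toy sketch below uses a z-score over daily download volumes; the data and the threshold of three standard deviations are illustrative assumptions, and real products combine many such signals.

```python
import statistics

def is_anomalous(history_mb, today_mb, z_threshold=3.0):
    """Flag today's download volume if it deviates sharply from the baseline."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0  # avoid division by zero
    z = (today_mb - mean) / stdev
    return z > z_threshold

baseline = [12, 15, 9, 14, 11, 13, 10]  # MB downloaded per day in a normal week

print(is_anomalous(baseline, 14))   # an ordinary day -> False
print(is_anomalous(baseline, 400))  # bulk export of customer data -> True
```

A single threshold like this produces false positives (a legitimate bulk report run would also trip it), which is why commercial User Behavior Analytics products weigh many factors together before raising an alert.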

Another range of products falls into the Data Leak Protection and Prevention category. Files transferred to and from USB devices, shares, the internet and cloud services can be scanned for keywords, classification and contents. Based on configured policies, such a system could detect and potentially even block (if placed inline) data being exfiltrated from a protected system or network.

Prevention

As the saying goes, prevention is better than cure. Prevention is not always easy because of the unpredictability of human behavior. Regular account and permission reviews (including the API keys mentioned earlier), training to prevent human errors, and tested methods such as "separation of duties" and "least privilege" should already be best practice in any secure environment. They are especially critical when dealing with insider threats.
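A least-privilege review boils down to a set difference: what does each user hold beyond what their role requires? The role definitions and user records below are hypothetical, but the comparison itself is the core of the exercise.

```python
# Illustrative role model: which permissions each role actually needs.
ROLE_PERMISSIONS = {
    "sales": {"crm.read", "crm.write"},
    "sysadmin": {"server.admin", "crm.read"},
}

# Hypothetical user records: role plus the permissions currently granted.
users = {
    "dave": {"role": "sales", "granted": {"crm.read", "crm.write", "server.admin"}},
    "erin": {"role": "sysadmin", "granted": {"server.admin", "crm.read"}},
}

def excess_permissions(users, role_permissions):
    """Return, per user, any permissions held beyond their role's needs."""
    report = {}
    for name, info in users.items():
        excess = info["granted"] - role_permissions[info["role"]]
        if excess:
            report[name] = excess
    return report

print(excess_permissions(users, ROLE_PERMISSIONS))  # {'dave': {'server.admin'}}
```

Run periodically, a report like this surfaces exactly the leftover rights (a salesperson with server admin access, in this example) that make an insider breach both easier to commit and harder to trace.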

Conclusion

Considering the statistics mentioned earlier, insider threats need to be controlled as a priority. The risk of not addressing this issue is simply too great.

Technology only has a limited reach at the moment. Detection systems exist, and some can even prevent a breach, but the main issue is that people with the right access levels can be very creative. The User Behavior Analytics field is making progress, though. Time will tell whether new machine learning algorithms can outsmart the creativity and persistence of determined people.

Frank Siemons

Frank Siemons is an Australian security researcher at InfoSec Institute. His track record consists of many years of systems and security administration, both in Europe and in Australia.

He currently holds many certifications, such as CISSP, and has a Master's degree in InfoSys Security from Charles Sturt University. He has a true passion for anything related to pentesting and vulnerability assessment, and can be found on Twitter at @franksiemons.