Blog: Red Teaming

Red Teaming. Practice what you preach

PTP Red Team 24 Jun 2021

We carry out plenty of Red Teaming for customers. As a CBEST, STAR-FS and GBEST accredited supplier, our Red Team work with many large regulated organisations every day of the week.

We frequently remind our clients how a simulated attack can be one of the best ways to assess prevention, detection and response capabilities and identify weaknesses in people, process and technology.

We Red Team ourselves from time to time, from attacks targeted at a single individual, through to wider campaigns. It’s a really good way of honing our Red Team techniques, but even better for testing the response of the business and the people within it.

We practice what we preach, so recently ran a no-holds-barred simulated attack against ourselves, comprising multiple advanced campaigns.

Many of the Tactics, Techniques, and Procedures (TTPs) we use are developed in-house for specific client engagements so will not be covered in this post. The advice we provide will help mitigate the same attacks though.

Note: This post isn’t the whole story, there is more to come that we will post soon.

Need-to-know

Operating on a need-to-know basis is essential in Red Team simulations. If you want a facsimile of real responses and behaviours, no one outside a core team of staff can know what is being played out.

Without that, an organisation cannot truly test its real response to an attack. Even the smallest leak, from a board member for example, can compromise the whole simulation.

In some client engagements we are instructed that a key individual has the power to prevent incident escalation if they deem it unnecessary or not beneficial. The challenge is that this is not how real incidents unfold, and it makes it all too easy for word to spread and compromise the exercise.

In the recent Red Team against ourselves, we ensured that only a handful of the Red Team were aware of the test. None of the principals of the business were aware in advance.

So what actually happened?

Attackers will often target new starters. That’s what happened in one of the exercises we performed, yet one of our most recent starters was the first to flag suspicious content.

In other exercises, our strong laptop builds and a very robust password policy, along with internal staff training and mandatory password manager use certainly helped.

Within 4 minutes of the first alert from the new member of staff, an all-company communication was sent by our internal IT team. Just like everyone else they were not aware of the exercise. This effectively ‘burned’ the campaign. The whole company was now aware.

The individuals who had been targeted in the attack were each phoned individually to verify whether they had interacted with the campaign.

A key requirement of the response was to preserve anonymity: no blame was to be apportioned, so that nobody felt afraid to report that they had interacted with one of the campaigns.

Clean up actions were effective, though were curtailed once the incident was flagged as an exercise. This was primarily to ensure that unnecessary effort wasn’t expended into the evening.

What did we learn?

Our incident response plan worked; a response team was formed within 5 minutes and critical actions were taken to contain the incident.

Triage was effective. As with all incidents, a post-incident review was carried out and enhancements were made to the response plan. It needed improvement around the allocation of actions, and to ensure that it would still be effective if key personnel were unavailable and that the crisis team were empowered to make decisions quickly.

We also need to work to protect the response team and assist with internal communication. During an incident, everyone in the business needs to be kept informed, but also kept at a respectful distance to allow the team to respond without interference. Business principals are often (understandably!) the worst culprits.

That a new starter flagged the incident quickly suggests that our security awareness training and induction programme is effective. Our takeaway was to run even more refresher training with longer-serving members of the team.

Creating a positive culture of security is a critical requirement in any organisation. It is honed through open discussion, training and education packages that run continually, trust in staff, and non-judgemental behaviour when staff raise concerns – even rewarding them for highlighting potential attacks.

Advice

Our attack used lesser-known features of Microsoft products, leveraging techniques and weaknesses found by our in-house capability development team.

We will be going through a disclosure process with Microsoft to raise the issues found. Whilst these aren’t strictly vulnerabilities, they certainly facilitate Red Team attacks.

It is important to first gain insight into the level of threat your organisation faces. As a penetration testing firm, we found last year's news about the FireEye and SolarWinds hacks a stark reminder that everyone, including us, is a target.

Malicious actors obtain information in several ways, and not necessarily through hacking. OSINT and information scraping are core disciplines for simulated attack assessments. The methods used vary, but my colleague Tony has written a post that does a great job of detailing some common techniques.

Adversaries will often use tools like LinkedIn for information gathering. A campaign by the UK CPNI shows just how much of a problem this is.

As a reminder: creating a positive culture of security, keeping operating systems and applications up to date, implementing strong email security controls, and enforcing good password management with a policy that rejects re-used, simple, default or blank passwords will all help mitigate many attacks.
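The password-policy rules above can be sketched in a few lines. This is a minimal, illustrative check only; the denylist and previous-password store are hypothetical placeholders (a real deployment would compare hashes against a large breach corpus, not store plaintext history):

```python
# Minimal sketch of a password-policy check: rejects blank, short,
# common/default, and previously used passwords.
# DENYLIST is a tiny illustrative sample, not a real breach corpus.
DENYLIST = {"password", "letmein", "123456", "qwerty", "admin"}

def password_acceptable(candidate: str, previous: set, min_length: int = 12) -> bool:
    """Return True only if the candidate passes every policy rule."""
    if not candidate:                   # blank passwords
        return False
    if len(candidate) < min_length:     # too short / too simple
        return False
    if candidate.lower() in DENYLIST:   # common or default passwords
        return False
    if candidate in previous:           # re-used passwords
        return False
    return True
```

In practice, the re-use and denylist checks would be done against hashed values, for example via the k-anonymity interface of a breached-password service, rather than a local plaintext set.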

Alongside that, in-depth configuration reviews of cloud environments such as O365 and Azure are crucial. On many engagements, the Red Team have observed that an obscure, seemingly insignificant misconfiguration in a cloud environment can facilitate attacks or lead to serious consequences.
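One way to make such reviews repeatable is to diff exported tenant settings against a list of known-risky values. The sketch below is purely illustrative: the setting names are hypothetical stand-ins, not actual O365/Azure property names, and a real review would pull settings via the provider's admin APIs:

```python
# Hypothetical sketch of an automated cloud-config review.
# Setting names and risky values are illustrative examples only.
RISKY_DEFAULTS = {
    "legacy_auth_enabled": True,          # legacy protocols can bypass MFA
    "external_forwarding_allowed": True,  # common mailbox exfiltration path
    "mfa_enforced": False,                # no second factor required
}

def find_misconfigurations(config: dict) -> list:
    """Return the names of settings whose value matches a risky default."""
    return [name for name, risky in RISKY_DEFAULTS.items()
            if config.get(name) == risky]
```

Running this over a tenant export gives a short list of findings to triage by hand, which is where the "obscure, seemingly insignificant" misconfigurations tend to surface.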

Our Red Team

Our Red Team is composed of dedicated operational and development teams that work together to develop and deliver sophisticated adversarial simulations.

Our teams regularly carry out regulated threat intelligence-led exercises using frameworks such as CBEST, GBEST, GCASE, TBEST, TIBER, iCAST, STAR and STAR-FS, to name a few, as well as a vast range of custom assessments. There’s more about our Red Teaming here.