Most large organizations and enterprises at least try to take security seriously. This means that the front door is usually not only locked, it is fortified and reinforced, which makes it hard for the bad guys to get in. So, do they give up? Of course not! Instead, they look around back and start rattling the doorknobs on the shed, the cellar, and the servants' entrance, trying to work their way in that way.
High-value targets are usually locked down and secured fairly well, but the same is not always true of lower-value targets. Once compromised, these lower-value targets can provide a useful platform from which to attack other systems. For example, while traffic from the Internet to internal hosts may be tightly limited, in many cases traffic between machines in the DMZ is not as well regulated. Thus, if you can own one machine in the DMZ, it can be easier to compromise other systems from that foothold.
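The defensive counterpoint is to regulate that east-west DMZ traffic at the firewall, not just the perimeter. A minimal sketch using Linux iptables, assuming a single DMZ segment on `eth1`; the interface name, addresses, and the web-to-database flow are illustrative assumptions, not a prescription:

```shell
# Hypothetical sketch: restricting east-west traffic inside a DMZ segment.
# eth1, 192.0.2.10 (web tier), and 192.0.2.20 (database tier) are placeholders.

# Let replies for already-approved sessions continue to flow
iptables -A FORWARD -i eth1 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Permit only the flows the DMZ actually needs, e.g. web server to database
iptables -A FORWARD -i eth1 -o eth1 -s 192.0.2.10 -d 192.0.2.20 \
         -p tcp --dport 3306 -j ACCEPT

# Log and drop everything else between DMZ hosts, so one owned box
# cannot freely probe its neighbors
iptables -A FORWARD -i eth1 -o eth1 -j LOG --log-prefix "DMZ east-west drop: "
iptables -A FORWARD -i eth1 -o eth1 -j DROP
```

With a default-drop policy like this, a compromised DMZ host loses most of its value as a pivot: the only reachable neighbors are the ones on the explicit allow list, and every other attempt leaves a log entry.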
Four senior members of the Cisco IPS signature team recently collaborated on the public release of a guide to writing custom signatures for Cisco IPS, the #1 IPS platform on the Internet. The idea behind this move is to give our customers an easier way to develop their own signatures, allowing them to more easily discover and block unwanted traffic in their networks. At the same time, the guide helps customers understand the existing signatures written by members of the IPS signature team.
Tell me if this sounds familiar… you are asked to perform a penetration test on a customer’s network to determine the security posture of their assets, and the first thing they do is give you a list of assets that you are NOT allowed to test, because those are critical systems to the business. Ironic, isn’t it? This is exactly the difficulty you can expect when performing penetration testing in the cloud, but multiplied by ten.
There is a lot to think about and plan for when you want to perform a penetration test in a cloud service provider (CSP) network. Before we get into the technical details, we need to start with the basics.
With the Black Hat and DEF CON security conferences last week in Las Vegas, two topics are top of mind for me and those in my organization: best practices for securing the network and the importance of applying software security updates. An event like Black Hat or DEF CON certainly raises awareness, but what’s really important is to take that awareness and embed it into daily management of the network. For the most part, those practices are followed on endpoints and applications. Unfortunately, our data indicates that patching in the infrastructure is much less consistent. This gap usually stems from network complexity and the uptime demands placed on the infrastructure. Events like Black Hat give my teams an opportunity to deliver training on implementing network-based mitigations and defenses. In many cases, participants in these events are simply unaware of what is available in newer versions of our products.
In many exploit scenarios, an attacker finds a target and, if possible, establishes remote control over the system through known or unknown exploits. Whether the attacker exploits a buffer overflow, abuses an insecure configuration, phishes for credentials, or steals cookies, the goal is clear: get a remote shell and gain complete control. Then what?
It is this post-exploitation environment that has interested me at Black Hat 2011. Several talks and trainings discussed post-exploitation techniques, and I’d like to share them in the interest of research – and defense.