These days botnets are all over the news. Often we hear them described in vague, ominous terms designed to grab people’s attention. In simple terms, a botnet is a group of computers networked together running a piece of malicious software that allows them to be controlled by a remote attacker, better known as a botmaster. Too often, I think, writers do their readers a disservice by over-hyping these threats. I would like to take a more reasonable approach here.
Our team has a lab dedicated to running malicious software that we refer to as our malware lab. We use the lab to ensure our security products work against various real-world threats. Basically, we do things like intentionally leaving hosts unpatched behind security devices and purposefully infecting and attacking boxes protected by those devices. This helps to ensure that in a worst-case scenario we know our products work. To that end, I periodically track down new samples of malware. Recently, I came across a sample that could be used to create your own botnet.
I will explain exactly what this bot does; I’ll even show you some of the code. This is a very simple and generic example of a bot and is very likely no threat to your network. It’s designed as a kit to be distributed to inexperienced botmasters. It’s the Easy-Bake Oven of botnets, but the concepts I will cover extend to the most complex botnets.
This will be the first in a series of posts exploring a bot written in the Java programming language. Because Java code is easier to read than most, throughout this series we will explore the actual code behind the bot’s more interesting features.
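Before we dig into the kit itself, it may help to see the basic pattern almost every bot shares: it receives text commands from the botmaster over some channel and dispatches each one to a handler. The sketch below is a hypothetical, minimal illustration of that dispatch loop in Java; the class name, commands, and handlers are my own invention for explanatory purposes, not code from the actual kit.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a bot's command dispatcher. A real bot would
// read these lines from a command-and-control channel (often IRC or HTTP);
// here we just feed it strings directly.
public class BotCommandDispatcher {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    public BotCommandDispatcher() {
        // Each command the botmaster can issue maps to a handler.
        handlers.put("PING", arg -> System.out.println("PONG " + arg));
        handlers.put("SLEEP", arg -> System.out.println("sleeping for " + arg + "s"));
    }

    /** Splits an incoming line into "COMMAND argument" and dispatches it.
     *  Returns false for unrecognized commands, which are silently ignored. */
    public boolean dispatch(String line) {
        String[] parts = line.trim().split("\\s+", 2);
        Consumer<String> handler = handlers.get(parts[0].toUpperCase());
        if (handler == null) {
            return false;
        }
        handler.accept(parts.length > 1 ? parts[1] : "");
        return true;
    }

    public static void main(String[] args) {
        BotCommandDispatcher bot = new BotCommandDispatcher();
        bot.dispatch("ping hello");  // prints "PONG hello"
        bot.dispatch("destroy");     // unknown command, ignored
    }
}
```

The real kit we will examine is more elaborate, but at its core it does exactly this: wait for a command, look it up, run the matching payload.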
I’ve covered the proliferation of digital traces, as well as how those footprints can be combined to de-anonymize data, eroding the privacy of users. This week, we see another chapter emerge in this storyline, with a report from Computerworld about tools for mining social networks and other open sources. In this week’s Cyber Risk Report, we talked about the risks posed by these tools to organizations, and I’d like to expand on that, as well as some benefits, here in this post.
Today we are releasing the 2009 Cisco Annual Security Report, which pulls together a full year’s worth of cyber security-focused collaboration from across the entire Cisco Security Intelligence Operations team. The 2009 Cisco Annual Security Report is a comprehensive look back at the year’s highlights with an eye towards what we can expect to see going forward.
Throughout 2009 we saw a host of new threat developments, gripping front line skirmishes, inspirational cases of the White Hats locking arms to combat evil, and alarming new levels of cybercrime audacity and sophistication.
With this report the Cisco Security team is introducing new tools to help customers better assess the evolving threat landscape. To address IT security professionals’ demands for better insights on the threat pipeline, we are introducing the Cisco Cybercrime Return on Investment (CROI) Matrix, a framework for quickly assessing the techniques and business models criminals will be investing in and divesting from. Uniquely, the CROI Matrix is built from the perspective of cybercriminals and how they rate their portfolio of scams and techniques from an investment standpoint. We believe this approach is fundamental to understanding how the threat landscape will look in the coming year.
To date, a major gap exists in vulnerability standardization: there is no standard framework for the creation of vulnerability report documentation. The computer security collective has done a bang-up job in several other areas, including categorizing and ranking the severity of vulnerabilities in information systems through the widespread adoption of the Common Vulnerabilities and Exposures (CVE) dictionary and the Common Vulnerability Scoring System (CVSS). Yet this lack of standardization is evident in every vulnerability report, best practice document, or security bulletin released by any vendor or coordinator. This blog post explores a nascent standard to close this gap.
Lack of Standard Promotes Chaos
Conventionally, the documentation of vulnerabilities is an ad hoc, producer-specific, and overtly non-standard process. Each vendor compiles, collates, and produces its own version of a vulnerability document that may or may not be similar to comparable reports by other vendors. To see examples of this, consider the 2008 multi-vendor “outpost24 TCP” vulnerability report from major producers such as Cisco, Microsoft, or CERT. Because each producer employs a unique and non-cooperative document structure, users must manually parse individual reports to find the information that is germane to their environments. Additionally, the documents are typically flat and do not facilitate or support any sort of automated processing.
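To make the contrast with flat documents concrete, here is a hypothetical sketch of what a machine-readable advisory fragment might look like. The element names below are purely illustrative, not taken from any actual schema; the point is that clearly delimited fields let tooling extract severity, affected products, and remediation automatically instead of forcing a human to read prose.

```xml
<!-- Hypothetical advisory structure; element names are illustrative only -->
<advisory id="VENDOR-2008-0001">
  <title>TCP State-Table Resource Exhaustion</title>
  <severity cvss="7.8">High</severity>
  <affected>
    <product name="ExampleOS" versions="1.0-2.3"/>
  </affected>
  <remediation>Apply patch 2.3.1 or rate-limit inbound TCP connections.</remediation>
</advisory>
```

With a common structure like this, a user could filter hundreds of advisories down to the handful that name products actually deployed in their environment.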
In this week’s CRR, we continued to follow an interesting roller coaster of events that has surrounded electrical companies in Brazil over the past few weeks. There have been reports that recent power failures were the result of computer hacking, a rebuttal that the failures were not caused by hacking, and finally reports that power company websites were hacked into (though without any accompanying power failures). This has resulted in a flurry of media reports, fear mongering about “cyber attacks,” and general uncertainty about what is and is not possible.