Today we released the Cisco 2010 Midyear Security Report, a report that provides a high-level and thought-provoking discussion of the technological, economic, and demographic shifts bearing down on IT security. As you’ll see in the report, the first half of 2010 has been a very interesting time. ScanSafe has always had an unparalleled view of the Web threat landscape, thanks to the tens of billions of Web requests it processes in real time. Now, thanks to Cisco’s acquisition of ScanSafe, we can extend our threat data analysis even further.
As part of our efforts to improve what we do at Cisco Security Intelligence Operations, next week — just in time for Black Hat — we are introducing a project to merge threat analysis across all Cisco security teams. The first product of this effort is the Cisco 2Q10 Global Threat Report, which merges threat analysis from Cisco IPS, Cisco IronPort, and Cisco ScanSafe data. Not only can we now report the who, what, when, and where of Web threats, but we can share our bird’s-eye view of what types of attacks are happening on enterprise networks — including how they can sometimes correlate with attack outbreaks on the Web. And we’re going to do this every quarter.
Read More »
Credit card thieves have taken their efforts to collect card information to the next level, as shown in recent reports of card skimming devices uncovered in Utah and Florida. In the past, ATMs were the primary target, prompting banks to tighten security around their machines; because the stolen card data was collected on storage media inside the machine, the thieves also had to return to each device to retrieve it, increasing the risk they had to take to profit from their schemes. Now, as the fraud arms race escalates, the card skimming criminals have embedded Bluetooth or cell phone transmitters inside targeted machines so that the stolen information can be relayed to them without necessarily visiting each machine. We covered some practical suggestions for gas stations, but now let’s look at the details and how this could guide us in defending our borderless networks.
Read More »
Malware authors use a variety of obfuscation techniques to foil researchers and operate as covertly as possible on a user’s system. To that end, some of these techniques, such as frequent (possibly daily) changes to the executable, are designed to obstruct basic detection. Often, given a specific piece of executable code, it is not trivial to determine whether it is a piece of malware or just an ordinary piece of software. Fortunately, there are a variety of techniques to help determine whether a piece of code is malicious.
Many of these techniques come, at least in part, from the forensics and malware reverse engineering disciplines. Most of them will work on all types of malicious files, although packer detection and entropy work best on executable files. A previous blog post titled “A Brief History of Malware Obfuscation” by Mike Schiffman provides background information on malware obfuscation. Below, I’ll highlight several of the techniques and give a brief discussion of the pros and cons of each.
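To make the entropy technique mentioned above concrete, here is a minimal sketch of Shannon entropy measured in bits per byte. Packed or encrypted executables tend to score close to the 8.0 maximum, while plain text and ordinary compiled code sit noticeably lower; the threshold and the sample data below are illustrative assumptions, not part of any particular tool.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# English text has limited byte variety, so its entropy is well below 8.
text = b"The quick brown fox jumps over the lazy dog. " * 100

# Random bytes stand in for a packed/encrypted section here.
packed_like = os.urandom(65536)

print(f"text:   {shannon_entropy(text):.2f} bits/byte")
print(f"packed: {shannon_entropy(packed_like):.2f} bits/byte")
```

In practice this check is usually run per section of a PE or ELF file rather than over the whole file, since a packed payload can hide inside an otherwise low-entropy binary.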
Read More »
As individual datasets appear on the public Internet, they add to the ability of interested parties to identify individuals through correlation with other datasets. As more and more information becomes accessible, anonymity quickly degrades and actionable intelligence about an individual increases. But correlating this information is a major challenge, and one that data brokers are quickly moving to solve for their customers. As our culture dives deeper into social media and how it can enrich user experiences, the value of this correlation effort increases.
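The correlation described above can be sketched in a few lines. This toy example uses entirely hypothetical records: neither dataset alone names the patient, but a naive join on shared quasi-identifiers (ZIP code and birth year) re-identifies her.

```python
# Hypothetical data for illustration only.
voter_roll = [
    {"name": "Alice Smith", "zip": "84101", "birth_year": 1975},
    {"name": "Bob Jones", "zip": "32801", "birth_year": 1982},
]
anonymized_health = [
    {"zip": "84101", "birth_year": 1975, "diagnosis": "redacted"},
]

def correlate(left, right, keys):
    """Naive inner join on the given quasi-identifier fields."""
    return [
        {**a, **b}
        for a in left
        for b in right
        if all(a.get(k) == b.get(k) for k in keys)
    ]

matches = correlate(voter_roll, anonymized_health, ["zip", "birth_year"])
print(matches)  # the "anonymous" health record now carries a name
```

Real data brokers do this at scale across many more datasets and fuzzier keys, which is exactly why anonymity degrades so quickly as new datasets appear.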
Can the law keep up with the personal information that is aggregated at sites like Spokeo.com? This is one glaring side effect of the ability to extract intelligence from such a dataset. Technology has outpaced existing laws and failure to balance legal protection with technological advancement could do harm to both consumers and those who seek to use this information to make better decisions.
Read More »
DNS Security Extensions, or DNSSEC for short, is something most people working with DNS have heard about. The first working documents were posted in the IETF in September 1994, and now, almost 16 years later, the root zone has finally been signed. In fact, it is being signed today, July 15, 2010. This marks the end of a process that started on January 27, 2010, when the first key material was made available in the root zone.
But what does “signing the root zone” imply? And what is this DNSSEC anyway? Most people have heard about PKI, or Public Key Infrastructure. It is a special kind of system using asymmetric keys — asymmetric because one party encrypts with one key and another party decrypts with the other key in the pair. What is special is that the public keys (or rather, a hash of them) are all signed with the key of a parent node in a strict hierarchy, except for the key in the root node. That root key is where all trust is bootstrapped from, and it is known and trusted by everyone. Because of the strict hierarchy of signatures on the keys, it is possible, starting from the trusted root key, to derive trust in any other key in the hierarchy.
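The chain of trust described above can be sketched with a toy model. In DNSSEC, a parent zone publishes a hash of its child zone’s key (the DS record) and signs it; a validator that trusts only the root key can then walk down the hierarchy. The zone names and key bytes below are invented for illustration, and the sketch checks only the published digests — real DNSSEC validation also verifies the RRSIG signatures at every step.

```python
import hashlib

def ds_digest(pubkey: bytes) -> str:
    """Hash of a child zone's public key, as published in its parent
    (loosely analogous to a DS record)."""
    return hashlib.sha256(pubkey).hexdigest()

# Hypothetical zone keys, root to leaf.
keys = {
    ".": b"root-zone-key",
    "com.": b"com-zone-key",
    "example.com.": b"example.com-zone-key",
}

# Each parent publishes (and, in real DNSSEC, signs) its child's key digest.
published_ds = {
    "com.": ds_digest(keys["com."]),
    "example.com.": ds_digest(keys["example.com."]),
}

def validate(zone_chain, trust_anchor_key):
    """Walk from the configured trust anchor (the root key) down to the
    leaf, checking at each step that the child's key matches the digest
    published by its parent."""
    # The root key itself is trusted out of band: it is the trust anchor.
    if keys[zone_chain[0]] != trust_anchor_key:
        return False
    for child in zone_chain[1:]:
        if ds_digest(keys[child]) != published_ds.get(child):
            return False
    return True

print(validate([".", "com.", "example.com."], b"root-zone-key"))  # True
print(validate([".", "com.", "example.com."], b"wrong-key"))      # False
```

The key point the sketch shows is that only one key, the root’s, needs to be distributed out of band; everything below it is vouched for by its parent.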
Many PKIs have been deployed in the world, and the best known are the keys used for SSL or TLS, most commonly for web access — that is, to secure the HTTP protocol. But all of the initiatives so far have had the problem that people have not really been able to select one root; instead, they have had to choose from a list of many root keys. If you look at the list of trusted CAs in a web browser, for example, you will see that it is a very long list. To secure one’s website, it is necessary to obtain a certificate issued by a CA on this list.
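You can see how long that list of trusted roots is on your own machine. As a rough sketch, Python’s standard `ssl` module can load the platform’s default trust store and report what it contains; the exact count varies by operating system and certificate bundle, and on some platforms the list may appear empty if certificates are loaded through OS-specific APIs.

```python
import ssl

# Load the platform's default trust store.
ctx = ssl.create_default_context()

# Each entry is a dict describing one trusted root CA certificate.
roots = ctx.get_ca_certs()
print(f"Trusted root CAs in the default store: {len(roots)}")

for cert in roots[:3]:
    # Entries carry fields such as 'subject' and 'notAfter'.
    print(cert.get("subject"))
```

This many independent trust anchors is precisely the contrast with DNSSEC, which bootstraps all trust from a single signed root.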
Read More »