As the Nexus platform has become a staple of the data center, securing that environment begins with the Nexus Operating System (NX-OS). The recently published Cisco NX-OS Hardening Guide provides information to help administrators and engineers secure NX-OS devices, inherently increasing the overall security of the network environment. With the ever-increasing opportunity for exploits and vulnerabilities to prevail, it is imperative that organizations adopt and apply best practices to harden their infrastructure devices. An environment is only as strong as its weakest link, so every effort should be made to ensure that each device is hardened.
One of the more (in)famous examples of malware is the banking Trojan Zeus. We have covered Zeus before (Seth Hanford’s post, Zeus: Getting a Taste of its Own Medicine), but like William Shatner, it is one of those things that never seems to get old. Zeus is interesting because it was one of the more successful commercial or productized forms of malware, but more than that, it was a financial crimeware solution.
Zeus was sold as a kit, available in versions ranging from freeware to packages costing several thousand dollars or more. The kit allowed you to build malware that would help you steal banking and identity information. The malware has an initial configuration baked in during the build process, but once it goes live on a host it phones home for a dynamic configuration, which includes where to upload stolen data, hosts file entries, and so on.
What is CVSS (the Common Vulnerability Scoring System)? How can it help me manage risk, and why is it an important step forward in security research? In this short video, Gavin Reid, CVSS Program Chair, shares his perspective on the vulnerability scoring standard.
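As a concrete illustration of what the standard does, here is a sketch of the CVSS version 2 base-score equation in Python. The weights and formula come from the CVSS v2 specification; this snippet is illustrative only and is not taken from the video.

```python
# Sketch of the CVSS v2 base-score equation.
# Metric weights (av, ac, au, c, i, a) are the numeric values the CVSS v2
# specification assigns to each metric choice, e.g. AccessVector:Network = 1.0,
# AccessComplexity:Low = 0.71, Authentication:None = 0.704, C/I/A:Partial = 0.275.
def cvss2_base_score(av, ac, au, c, i, a):
    """Compute a CVSS v2 base score from metric weights."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# Vector AV:N/AC:L/Au:N/C:P/I:P/A:P -- remotely exploitable, low complexity,
# no authentication, partial impact on confidentiality/integrity/availability.
print(cvss2_base_score(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.275, a=0.275))  # 7.5
```

A vector with complete impact on all three dimensions (C:C/I:C/A:C) scores the maximum 10.0 under the same formula, which is why those vulnerabilities land at the top of most patching queues.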
Sometimes it is interesting to take a look at darknet data and see what you come across. If you are not familiar with the term “darknet,” I am using the definition used by some in the service provider community: a darknet is a set of address space that contains no real hosts. That means no client workstations to initiate conversations with servers on the Internet. It also means no advertised services from those ranges, such as a web server, a DNS server, or a database server. There is really no reason to see any traffic destined for addresses within those ranges. From a network point of view, it should be as desolate and deserted as the town of Pripyat in Ukraine, inside the evacuation zone created by the Chernobyl disaster in 1986. In practice, however, you do see traffic to those address ranges, which is what makes that traffic somewhat interesting. Traffic destined to those ranges could be the result of malware attempting to locate machines to infect, part of a research project, or something as simple as a misconfiguration or a typographical error. One example of traffic resulting from a typo would be attempting to ping a host and typing in the wrong address. However, it would be hard to believe that all of the traffic seen in a darknet is the result of a mistake.
Setting up a darknet does not have to be hard to do. If your organization has address space that is not being used, then all that you need to do is advertise a route for those addresses and leave them unused. In our case, we have advertised several ranges, and we collect NetFlow data for the traffic destined to them from a nearby Cisco router. That NetFlow data is exported to a collector, such as nfcapd, where it is aggregated for further analysis.
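Once the flows are collected, the analysis step can be as simple as counting what the darknet-bound traffic is knocking on. The sketch below assumes flow records have already been dumped to CSV (for example with nfdump); the column layout and addresses shown are hypothetical, so adjust the field indices to match your exporter's actual output.

```python
# Sketch: summarizing darknet-bound flow records dumped to CSV.
# Assumed (hypothetical) column layout per line:
#   src_ip, dst_ip, proto, src_port, dst_port, packets
from collections import Counter

def top_darknet_ports(csv_lines, n=3):
    """Return the n destination ports receiving the most packets."""
    ports = Counter()
    for line in csv_lines:
        fields = line.strip().split(",")
        dst_port, packets = fields[4], int(fields[5])
        ports[dst_port] += packets
    return ports.most_common(n)

# Example records: 192.0.2.0/24 plays the unused (darknet) range.
sample = [
    "203.0.113.7,192.0.2.15,tcp,51514,445,3",
    "198.51.100.9,192.0.2.200,tcp,40211,445,1",
    "203.0.113.8,192.0.2.77,udp,53123,1434,2",
]
print(top_darknet_ports(sample))  # [('445', 4), ('1434', 2)]
```

A result dominated by ports like 445 or 1434 would be consistent with worms scanning for victims rather than stray typos, which is exactly the kind of signal a darknet is good at surfacing.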
As I travel the world, I ask my customers two simple questions:
First, are you virtualizing your data center? (Universally the answer is yes.)
Second, have you deployed any virtual security solution? (Universally the answer is no.)
Wow. How can this be? Does a virtual data center not need security? Not a chance. It needs security more than ever. Most customers are confining their virtualized infrastructure into secure zones, or virtual local area networks (VLANs). That’s useful for a first phase, but excessive VLAN segmentation holds us back from achieving the efficiencies of the utility computing model—and it also gets really complicated really quickly.