Cisco Blogs


Dynamic Detection of Malicious DDNS


This post was co-authored by Andrew Tsonchev.

Two weeks ago we briefly discussed the role of dynamic DNS (DDNS) in a Fiesta exploit pack campaign. Today we take a deeper look at the role of DDNS in the proliferation of cyber attacks, and we make the case for adding an operational play to the incident response and threat intelligence playbook to detect attack precursors and attacks in progress.


Can You Guess Your ROI on Your Secure Access?

No need to guess now!

Cisco commissioned Forrester Consulting to examine the business value and potential return on investment (ROI) enterprises may realize by implementing Cisco Identity Services Engine (ISE), a leading secure access solution. The findings are available in the recently published Forrester Total Economic Impact (TEI) study. Forrester interviewed four customers for the study, covering policy-governed, unified access across the following scenarios: guest services; BYOD; full access across wired, wireless, and VPN; and policy networking. The calculation was based on a composite organization of 10,000 employees reflecting the four interviewed customers, drawn from the higher education, utilities, and financial services markets.

Benefits included a 75 percent reduction in support calls related to network issues, as well as improved compliance that reduces data exposure, breaches, and potential regulatory and remediation costs that can add up to hundreds of thousands or even millions of dollars. Most recently, the Ponemon Institute's Live Threat Intelligence Impact Report 2013 indicated that organizations spent an average of US$10 million over the past 12 months to resolve the impact of exploits. The benefits of secure access cannot be taken lightly.



Don’t Miss: [Webinar] Preparing K-12 Networks for Common Core Feb 5

If you’ve worked on a K-12 wireless network, you know that one of the main customer concerns is adapting to Common Core Standards. Online testing and BYOD place even higher demands on a high-quality, high-performing network. What exactly needs to be taken into consideration when designing these networks?

Join us tomorrow, Wednesday, February 5, for an informative webinar packed with tips and tricks on how to design K-12 networks optimized for Common Core. If you work in education IT, or are a partner or network consultant who handles many K-12 school district deployments, this is the webcast for you. We start at 10 a.m. PST and will run for about 45-60 minutes, with a chance for you to ask questions directly of Cisco engineers.

Register here today, or read the full article: Is Your Network Ready for Common Core Standards?


Taking Complexity Out of Network Security – Simplifying Firewall Rules with TrustSec

Bruce Schneier, the security technologist and author, famously said, “Complexity is the worst enemy of security.”

We have been working with some customers who agree strongly with this sentiment because they have been struggling with increasing complexity in their access control lists and firewall rules.

Typical indicators of operational complexity include:

  • The time it can take to update rules to allow access to new services or applications, because of the risk of misconfiguring rules. For some customers, the biggest issue is the number of hours spent defining and actually configuring changes; for others, it is the number of days needed to work through change control processes before a new application is in production.
  • The number of people who need to be involved in rule changes when trouble tickets requiring such changes arrive in high volumes.

Virtualization tends to result in larger numbers of application servers being defined in rule sets. In addition, we are seeing that some customers need to define new policies to distinguish between BYOD and managed endpoint users as part of their data center access controls. At the same time, in many environments, it is rare to find that rules are efficiently removed because administrators find it difficult to ascertain that those rules are no longer required. The end result is that rule tables only increase in size.

TrustSec is a solution developed within Cisco that describes assets and resources on the network using higher-level business identifiers, which we refer to as Security Group Tags, instead of IP addresses and subnets.

Those of us working at Cisco on our TrustSec technology have been looking at two particular aspects of how this technology may help remove complexity in security operations:

  • Using logical groupings to define protected assets like servers in order to simplify rule bases and make them more manageable.
  • Dynamically updating membership of these logical groups to avoid rule changes being required when assets move or new virtual workloads are provisioned.
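The idea behind those two points can be sketched in a few lines of Python (a toy model, not TrustSec itself; all group names and addresses are hypothetical):

```python
# Toy model: policy rules reference security groups, not IP addresses.
# When a workload moves or a new VM is provisioned, only the group
# membership map changes -- the rule table itself is untouched.

# Dynamic group membership: tag -> set of current member IPs
groups = {
    "Web_Servers": {"10.1.1.10", "10.1.1.11"},
    "Employees":   {"10.2.0.5"},
}

# Static rule table, written in terms of tags
rules = [
    {"src": "Employees", "dst": "Web_Servers", "port": 443, "action": "permit"},
]

def is_permitted(src_ip, dst_ip, port):
    """Resolve IPs to group tags at enforcement time, then match rules."""
    for rule in rules:
        if (src_ip in groups[rule["src"]]
                and dst_ip in groups[rule["dst"]]
                and port == rule["port"]):
            return rule["action"] == "permit"
    return False

# A new web server is provisioned: update membership only, not the rules.
groups["Web_Servers"].add("10.1.2.20")

print(is_permitted("10.2.0.5", "10.1.2.20", 443))  # True, with zero rule changes
```

The point of the sketch is the separation of concerns: membership churns constantly, but the rule table, the part that needs review and change control, stays put.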

While originally conceived as a method to provide role-based access control for user devices and to accelerate access control list processing, the technology is proving to be of much broader benefit, not least in simplifying firewall rule sets.

For example, this is how we can use Security Group Tags to define access policies in our ASA platforms:
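As a rough sketch of what such a rule can look like in ASA CLI (the group names are hypothetical, and exact syntax varies by ASA release, so treat this as illustrative rather than a copy-paste configuration):

```
! Match traffic by Security Group Tag instead of IP address.
! Permit HTTPS from hosts tagged "Employees" to hosts tagged
! "Production_Web" -- no IP addresses appear in the rule at all.
access-list INSIDE_IN extended permit tcp security-group name Employees any security-group name Production_Web any eq https
access-group INSIDE_IN in interface inside
```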


Being able to describe systems by their business role, instead of where they are on the network, means that servers as well as users can move around the network but still retain the same privileges.

In typical rule sets that we have analyzed, we discovered that we can reduce the size of rule tables by as much as 60-80% when we use Security Group Tags to describe protected assets. That alone may be helpful, but further simplification benefits arise from looking at the actual policies themselves and how platforms such as the Cisco Adaptive Security Appliance (ASA) can use these security groups.
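The arithmetic behind that kind of reduction is easy to sketch (Python; the counts below are illustrative and not taken from the analyses mentioned above):

```python
# When a destination tier of N servers is addressed by one group tag
# instead of N IP addresses, each policy needs one entry instead of N.

permits = 4   # distinct (source, service) policies toward the tier
members = 5   # servers in the protected tier

ip_rules = permits * members   # one entry per policy per server IP
tag_rules = permits            # one entry per policy; destination = tag

reduction = 1 - tag_rules / ip_rules
print(f"{ip_rules} entries by IP vs {tag_rules} by tag: {reduction:.0%} smaller")
# → 20 entries by IP vs 4 by tag: 80% smaller
```

The multiplier grows with tier size, which is why the savings are largest in virtualized environments with many servers per role.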

  • Security policies defined on the ASA can now be written in terms of application server roles, categories of BYOD endpoints, or the business roles of users, making them much easier to understand.
  • When virtual workloads are added to an existing security group, no rule changes may be needed to grant access to those workloads.
  • When workloads move, even if IP addresses change, the ASA does not require a rule change if the role is determined by a Security Group Tag.
  • Logs can now indicate the roles of the systems involved, simplifying analysis and troubleshooting.
  • Decisions to apply additional security services, such as IPS or Cloud Web Security, to flows can now be made based on security group tags.
  • Rules written using group tags instead of IP addresses also leave much less scope for misconfiguration.

In terms of incident response and analysis, customers are also finding value in the ability to administratively change the Security Group Tag assigned to specific hosts, in order to invoke additional security analysis or processing in the network.

By removing the need for complex rule changes to be made when server moves take place or network changes occur, we are hoping that customers can save time and effort and more effectively meet their compliance goals.

For more information please refer to

Follow @CiscoSecurity on Twitter for more security news and announcements.


Back to the Future: Do Androids Dream of Electric Sheep?

As information consumers who depend so heavily on the network and the cloud, we sometimes wonder what will happen when we really begin to feel the combined effects of Moore’s Law and Nielsen’s Law at the edges: the amount of data, and our ability to consume it (let alone stream it to the edge), is simply too much for our minds to process. We have already begun to experience this today: how much information can you actually consume each day from your so-called “smart” devices, your social networks, and other networked services, and how much more data is left behind? The same holds for machine-to-machine: a jet engine produces terabytes of performance data in just a few minutes; it would be impossible to send all of that data to some remote computer or network and still act on the engine locally. We already know Big Data is not just growing; it is exploding!

The conclusion is simple: one day we will no longer be able to cope unless information is consumed differently, locally. Our brains may no longer be enough; we hope for help; artificial intelligence comes to the rescue; M2M takes off. But the new system must be highly decentralized in order to stay robust, or else it will crash like some dystopian event out of H2G2. Is it any wonder that even today a large portion, if not the majority, of the world’s Internet traffic is already P2P, and that the majority of the world’s downloaded software is open source, distributed P2P? Just think of Bitcoin and how it captures the imagination of the best or bravest developers and investors (and how exposed one of those groups could be, not realizing its potential current flaw, to the supreme delight of its developers, who will undoubtedly develop the fix; but that is the subject of another blog).

Consequently, centralized, high-bandwidth compute will break down at the bleeding edge; the cloud as we know it won’t scale; and a new form of computing emerges: fog computing, a direct consequence of Moore’s and Nielsen’s Laws combined. Fighting this trend equates to fighting the laws of physics; I don’t think I can say it more simply than that.

Thus the compute model has already begun to shift: we want our Big Data analyzed, visualized, private, secure, and ready when we are, and we are finally beginning to realize how vital it has become. Can you live without your network, data, connection, friends, or social network for more than a few minutes? Hours? Days? And when you rejoin it, how does it feel? And if you can’t live without it, are you convinced that one day you must be in control of your own persona and your personal data? Granted, while we shouldn’t worry too much about a Blade Runner dystopia or the Krikkit story in Life, the Universe and Everything, there are some interesting things one could be doing beyond merely asking, as Philip K. Dick once did, whether androids dream of electric sheep.

To enable this new beginning, we started in open source, looking to incubate a project or two. The first, in Eclipse M2M, is one of a dozen or so dots we’d like to connect in the days and months to come; we call it krikkit. The possibilities afforded by this new compute model are endless. One of them could be the ability to put us back in control of our own local and personal data, rather than leaving it in some central place, service, or bot currently sold as a matter of convenience, fashion, or scale. I hope that with the release of these new projects we will begin to solve that together. What better way to collaborate than in the open? Perhaps this is what the Internet of Everything and data in motion should be about.
