No need to guess now!
Cisco commissioned Forrester Consulting to examine the business value and potential return on investment (ROI) that enterprises may realize by implementing Cisco Identity Services Engine (ISE), a leading secure access solution. The findings are available in the recently published Forrester Total Economic Impact (TEI) study. Four customers were interviewed, covering policy-governed, unified access across the following scenarios: guest services; BYOD; full access across wired, wireless, and VPN; and policy networking. The calculation was based on a composite organization of 10,000 employees reflecting the four interviewed customers, drawn from the higher education, utilities, and financial services markets.
Benefits included a 75 percent reduction in support calls related to network issues, and improved compliance that reduced data exposure, breaches, and potential regulatory and remediation costs that could add up to hundreds of thousands or even millions of dollars. Most recently, the Ponemon Institute's Live Threat Intelligence Impact Report 2013 indicated that US$10 million is the average amount organizations spent in the past 12 months to resolve the impact of exploits. The benefit of secure access cannot be taken lightly.
Read More »
Tags: byod, ISE, ROI, secure access, security
If you’ve worked on a K-12 wireless network, you’ll know that one of the main customer concerns is adapting to Common Core Standards. Online testing and BYOD place even higher demands on a high-quality, high-performing network. What exactly needs to be taken into consideration when designing these networks?
Join us tomorrow, Wednesday, February 5, for a great, informative webinar packed with tips and tricks on how to design K-12 networks optimized for Common Core. If you work in education IT, or are a partner or network consultant who handles lots of K-12 school district deployments, this is the webcast for you. We’re starting at 10am PST and will run for about 45-60 minutes, and there’ll be a chance for you to ask questions directly of Cisco engineers.
Register here today, or read the full article: Is Your Network Ready for Common Core Standards?
Tags: bandwidth, byod, common core, computer based assessment, computer-based, district IT, educate, education, endpoints, high density, IT, K-12, K12, learn, mandated online testing, mobile, mobility, network, online testing, school, security, standards, state standards, technology, wi-fi, wifi, wired, wireless, wlan
Bruce Schneier, the security technologist and author, famously said, “Complexity is the worst enemy of security.”
We have been working with some customers who agree strongly with this sentiment because they have been struggling with increasing complexity in their access control lists and firewall rules.
Typical indicators of operational complexity have been:
- The time it can take for some organizations to update rules to allow access to new services or applications, because of the risks of misconfiguring rules. For some customers, the number of hours spent defining and actually configuring changes may be the issue; for others, the biggest issue may be the number of days it takes to work through change control processes before a new application is actually in production.
- The number of people who may need to be involved when there are high volumes of trouble tickets requiring rule changes.
Virtualization tends to result in larger numbers of application servers being defined in rule sets. In addition, we are seeing that some customers need to define new policies to distinguish between BYOD and managed endpoint users as part of their data center access controls. At the same time, in many environments, it is rare to find that rules are efficiently removed because administrators find it difficult to ascertain that those rules are no longer required. The end result is that rule tables only increase in size.
TrustSec is a solution developed within Cisco that describes assets and resources on the network using higher-layer business identifiers, which we refer to as Security Group Tags, instead of describing assets by IP addresses and subnets.
Those of us working at Cisco on our TrustSec technology have been looking at two particular aspects of how this technology may help remove complexity in security operations:
- Using logical groupings to define protected assets like servers in order to simplify rule bases and make them more manageable.
- Dynamically updating membership of these logical groups to avoid rule changes being required when assets move or new virtual workloads are provisioned.
While originally conceived as a method to provide role-based access control for user devices or accelerate access control list processing, the technology is proving of much broader benefit, not least for simplifying firewall rule sets.
For example, this is how we can use Security Group Tags to define access policies in our ASA platforms:
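A minimal sketch of what such a policy can look like (the group names here are invented for illustration, and the exact syntax may vary by ASA software release, so check it against your platform documentation):

```
! Permit BYOD endpoints to reach the web server tier over HTTPS only,
! regardless of which subnets the endpoints or servers happen to be on.
access-list DC_IN extended permit tcp security-group name BYOD_Users any security-group name Web_Servers any eq 443

! Deny all other traffic from BYOD endpoints into the data center.
access-list DC_IN extended deny ip security-group name BYOD_Users any any
```

Note that neither rule mentions an IP address or subnet: the match is on the Security Group Tag carried with the traffic, so the same two lines keep working as endpoints and servers move.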
Being able to describe systems by their business role, instead of where they are on the network, means that servers as well as users can move around the network but still retain the same privileges.
In typical rule sets that we have analyzed, we discovered that we can reduce the size of rule tables by as much as 60-80% when we use Security Group Tags to describe protected assets. That alone may be helpful, but further simplification benefits arise from looking at the actual policies themselves and how platforms such as the Cisco Adaptive Security Appliance (ASA) can use these security groups.
- Security policies defined for the ASA can now be written in terms of application server roles, categories of BYOD endpoints, or the business roles of users, becoming much easier to understand.
- When virtual workloads are added to an existing security group, we may not need any rule changes to be applied to get access to those workloads.
- When workloads move, even if IP addresses change, the ASA will not require a rule change if the role is being determined by a Security Group Tag.
- Logs can now indicate the roles of the systems involved, to simplify analysis and troubleshooting.
- Decisions to apply additional security services, such as IPS or Cloud Web Security, to flows can now be made based upon the security group tags.
- Rules written using group tags instead of IP addresses also leave much less scope for misconfiguration.
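Security groups can also be bundled into reusable objects, so a single rule covers several server roles at once. A hypothetical sketch (group and object names invented for illustration; verify the syntax against your ASA release):

```
! Bundle the server roles that make up the production web tier.
object-group security PROD_WEB_TIER
 security-group name Web_Servers
 security-group name App_Servers

! One rule now covers every current and future member of the tier;
! adding a new virtual workload to a group requires no ACL change.
access-list DC_IN extended permit tcp any object-group-security PROD_WEB_TIER any eq 443
```

This is where the rule-table reduction described above comes from: one group-based rule replaces the many address-based rules that would otherwise enumerate each server individually.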
In terms of incident response and analysis, customers are also finding value in the ability to administratively change the Security Group Tag assigned to specific hosts, in order to invoke additional security analysis or processing in the network.
By removing the need for complex rule changes to be made when server moves take place or network changes occur, we are hoping that customers can save time and effort and more effectively meet their compliance goals.
For more information please refer to www.cisco.com/go/trustsec.
Follow @CiscoSecurity on Twitter for more security news and announcements.
Tags: ASA, byod, security, Security Group tags, TrustSec
As information consumers who depend so much on the network or cloud, we sometimes indulge in thinking about what will happen when we really begin to feel the combined effects of Moore’s Law and Nielsen’s Law at the edges: the amount of data, and our ability to consume it (let alone stream it to the edge), is simply too much for our minds to process. We have already begun to experience this today: how much information can you consume on a daily basis from the collective of your so-called “smart” devices, your social networks, and other networked services, and how much more data is left behind? The same goes for machine to machine: a jet engine produces terabytes of data about its performance in just a few minutes; it would be impossible to send that data to some remote computer or network and still act on the engine locally in time. We already know Big Data is not just growing, it is exploding!
The conclusion is simple: one day we will no longer be able to cope unless the information is consumed differently, locally. Our brains may no longer be enough; we hope to get help. Artificial intelligence comes to the rescue and M2M takes off, but the new system must be highly decentralized in order to stay robust, or else it will crash like some kind of dystopian event from H2G2. Is it any wonder that even today a large portion, if not the majority, of the world’s Internet traffic is in fact already P2P, and the majority of the world’s software downloads are open source, delivered via P2P? Just think of Bitcoin and how it captures the imagination of the best or bravest developers and investors (and how ridiculous one of those categories could be, not realizing its potential current flaw, to the supreme delight of its developers, who will undoubtedly develop the fix, but that’s the subject of another blog).
Consequently, the centralized, high-bandwidth style of computing will break down at the bleeding edge; the cloud as we know it won’t scale, and a new form of computing emerges: fog computing, a direct consequence of Moore’s and Nielsen’s Laws combined. Fighting this trend equates to fighting the laws of physics; I don’t think I can say it more simply than that.
Thus the compute model has already begun to shift: we will want our Big Data analyzed, visualized, private, secure, and ready when we are, and we finally begin to realize how vital it has become. Can you live without your network, data, connection, friends, or social network for more than a few minutes? Hours? Days? And when you rejoin it, how does it feel? And if you can’t, are you convinced that one day you must be in control of your own persona, your personal data, or else? Granted, while we shouldn’t worry too much about a Blade Runner dystopia or the H2G2 Krikkit story in Life, the Universe and Everything, there are some interesting things one could be doing, and more than just asking, as Philip K. Dick once did, do androids dream of electric sheep?
To enable this new beginning, we started in open source, looking to incubate a project or two. The first, which we call krikkit, is in Eclipse M2M, among a dozen or so dots we’d like to connect in the days and months to come. The possibilities afforded by this new compute model are endless. One of them could be the ability to put us back in control of our own local and personal data, rather than some central place, service, or bot currently sold as a matter of convenience, fashion, or scale. I hope that with the release of these new projects, we will begin to solve that together. What better way to collaborate than in the open? Perhaps this is what the Internet of Everything and data in motion should be about.
Tags: ai, Android, artificial intelligence, Big Data, BitCoin, Blade Runner, cloud, Do Androids Dream of Electric Sheep, Fog, Fog computing, H2G2, Internet of Everything, internet of things, IoE, IoT, krikkit, M2M, Moore Law, Nielsen Law, open source, p2p, privacy, security
In my last blog, “Has Hybrid Cloud Arrived? Part 1: And How Will it Shape the Role of IT Going Forward?” we looked at the business drivers of hybrid cloud and previewed the key requirements. In this blog, we will look at Cisco InterCloud, a hybrid cloud solution we announced this week at Cisco Live! Milan to address the hybrid cloud needs of enterprise and service provider customers.
Business leaders today are heavily growth-oriented and are looking at new ways of deploying applications to obtain greater agility. That is where we see hybrid cloud becoming mainstream as it frees businesses to run applications on-demand and where it’s most cost-effective. Cisco InterCloud was announced to address this opportunity and facilitate optimal hybrid cloud deployments.
Cisco InterCloud comes with unique capabilities that enable enterprises to connect their private cloud to heterogeneous public clouds. It creates the notion of a single scalable hybrid cloud for all physical, virtual and cloud workloads – an infinite datacenter where the public cloud is treated as a virtual extension of the data center. Cisco InterCloud is designed with these tenets:
Open: Customers are excited about Cisco InterCloud because it is an open solution that gives them the freedom to choose their hypervisor for the private cloud and select their public cloud from a rich ecosystem of cloud providers. Service providers like InterCloud because it is based on open APIs, integrates with multiple cloud platforms (e.g., CloudStack, vCloud, and OpenStack), and enables them to rapidly offer a hybrid cloud solution while reducing the effort to onboard enterprise customers. Cisco InterCloud thus provides a multi-cloud, multi-hypervisor cloud experience.
Secure: Another key factor in hybrid cloud adoption is the need to address the security and compliance concerns of public cloud deployment. Cisco InterCloud provides end-to-end secure connectivity by encrypting traffic between the enterprise private cloud and the service provider cloud. It also ensures workload security by encrypting all data in motion within the shared, multi-tenant public cloud. Additionally, customers can deploy network services such as zone-based virtual firewalls and edge firewalls for further workload security within the public cloud.
Flexible: Customers demand bi-directional workload portability across private and public clouds. With Cisco InterCloud, customers can not only provision workloads from a self-service portal but also, with a click, migrate workloads to the public cloud and back. All of this happens behind the scenes as InterCloud converts workloads to the right VM format, such as VMware VMDK to AWS AMI, or to CloudStack format for providers such as BT. Workload portability is easier because applications do not need to be re-architected: IP addresses are retained upon migration and enterprise VLANs are extended into the cloud.
I believe that lines of business and developers are leading the journey to hybrid cloud adoption. IT has realized that it needs to shift away from its role as gatekeeper to being a partner to lines of business, but it faces certain challenges in doing so. IT has to deal with the overhead of integrating with each cloud provider and find ways to do so in a secure manner. Cisco InterCloud enables IT to act as a cloud broker on behalf of lines of business, providing unified hybrid cloud management through a built-in IT admin portal and an extensible northbound API layer. It also allows IT to enforce consistent network security, L4-7 services, and workload policies throughout the hybrid cloud.
This week’s Cisco InterCloud announcement demonstrates our continued commitment to customers. We envision a future where customers have an array of cloud options and can pick the best fit based on workload needs, performance, cost, and location requirements. We are going into beta next quarter, with general availability to follow soon afterwards. As 2014 dawns, we see a shift towards mainstream hybrid cloud adoption; hybrid cloud is finally here for real.
Tags: Cisco cloud, cisco intercloud, cloud, data center, Hybrid Cloud, security, virtualization