If you’ve worked on a K-12 wireless network, you’ll know that one of the main customer concerns is adapting to Common Core Standards. Online testing and BYOD place even higher demands on a high-quality, high-performing network. What exactly needs to be taken into consideration when designing these networks?
Join us tomorrow, Wednesday, February 5, for an informative webinar packed with tips and tricks on how to design K-12 networks optimized for Common Core. If you work in education IT, or are a partner or network consultant who handles lots of K-12 school district deployments, this is the webcast for you. We’re starting at 10am PST and will run for about 45-60 minutes, and there’ll be a chance for you to ask questions directly of Cisco engineers.
Register here today, or read the full article: Is Your Network Ready for Common Core Standards?
Tags: bandwidth, byod, common core, computer based assessment, computer-based, district IT, educate, education, endpoints, high density, IT, K-12, K12, learn, mandated online testing, mobile, mobility, network, online testing, school, security, standards, state standards, technology, wi-fi, wifi, wired, wireless, wlan
Bruce Schneier, the security technologist and author, famously said, “Complexity is the worst enemy of security.”
We have been working with some customers who agree strongly with this sentiment because they have been struggling with increasing complexity in their access control lists and firewall rules.
Typical indicators of operational complexity include:
- The time it can take some organizations to update rules to allow access to new services or applications, because of the risk of misconfiguring rules. For some customers, the issue is the number of hours spent defining and configuring changes; for others, it is the number of days it takes to work through change control processes before a new application reaches production.
- The number of people who need to be involved in rule changes when high volumes of trouble tickets require them.
Virtualization tends to result in larger numbers of application servers being defined in rule sets. In addition, we are seeing that some customers need to define new policies to distinguish between BYOD and managed-endpoint users as part of their data center access controls. At the same time, in many environments it is rare to find rules being removed, because administrators find it difficult to ascertain that a rule is no longer required. The end result is that rule tables only ever grow in size.
TrustSec is a solution developed within Cisco that describes assets and resources on the network by higher-layer business identifiers, which we refer to as Security Group Tags, instead of by IP addresses and subnets.
Those of us working at Cisco on our TrustSec technology have been looking at two particular aspects of how this technology may help remove complexity in security operations:
- Using logical groupings to define protected assets like servers in order to simplify rule bases and make them more manageable.
- Dynamically updating membership of these logical groups to avoid rule changes being required when assets move or new virtual workloads are provisioned.
While originally conceived as a method to provide role-based access control for user devices and to accelerate access control list processing, the technology is proving to be of much broader benefit, not least for simplifying firewall rule sets.
For example, Security Group Tags can be used to define access policies on our ASA platforms.
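As an illustrative sketch, a rule base written against tags rather than addresses might look like the following. The ACL name and group names (Employees, BYOD, Prod-Web) are hypothetical, and the syntax follows the ASA 9.x security-group extensions to extended access lists:

```
access-list DC_IN remark Allow employees HTTPS access to production web servers
access-list DC_IN extended permit tcp security-group name Employees any security-group name Prod-Web any eq 443
access-list DC_IN remark Block unmanaged BYOD endpoints from production servers
access-list DC_IN extended deny ip security-group name BYOD any security-group name Prod-Web any
```

Because these rules match on Security Group Tags rather than on source or destination addresses, a server that moves, or a newly provisioned VM that is classified into the Prod-Web group, is covered without any ACL edit.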
Being able to describe systems by their business role, instead of where they are on the network, means that servers as well as users can move around the network but still retain the same privileges.
In typical rule sets that we have analyzed, we discovered that we can reduce the size of rule tables by as much as 60-80% when we use Security Group Tags to describe protected assets. That alone may be helpful, but further simplification benefits arise from looking at the actual policies themselves and how platforms such as the Cisco Adaptive Security Appliance (ASA) can use these security groups.
- Security policies defined for the ASA can now be written in terms of application server roles, categories of BYOD endpoints, or the business roles of users, which makes them much easier to understand.
- When virtual workloads are added to an existing security group, we may not need to apply any rule changes for those workloads to become accessible.
- When workloads move, even if IP addresses change, the ASA will not require a rule change if the role is being determined by a Security Group Tag.
- Logs can now indicate the roles of the systems involved, to simplify analysis and troubleshooting.
- Decisions to apply additional security services, such as IPS or Cloud Web Security, to flows can now be made based upon the security group tags.
- Rules written using group tags instead of IP addresses also leave much less scope for misconfiguration.
In terms of incident response and analysis, customers are also finding value in the ability to administratively change the Security Group Tag assigned to specific hosts, in order to invoke additional security analysis or processing in the network.
By removing the need for complex rule changes to be made when server moves take place or network changes occur, we are hoping that customers can save time and effort and more effectively meet their compliance goals.
For more information please refer to www.cisco.com/go/trustsec.
Follow @CiscoSecurity on Twitter for more security news and announcements.
Tags: ASA, byod, security, Security Group tags, TrustSec
As information consumers who depend so much on the Network or Cloud, we sometimes indulge in thinking about what will happen when we really begin to feel the combined effects of Moore’s Law and Nielsen’s Law at the edges: the amount of data, and our ability to consume it (let alone stream it to the edge), is simply too much for our minds to process. We have already begun to experience this today: how much information can you consume on a daily basis from the collective of your so-called “smart” devices, your social networks, and other networked services, and how much more data is left behind? The same is true machine to machine: a jet engine produces terabytes of data about its performance in just a few minutes; it would be impossible to send that data to some remote computer or network in time to act on the engine locally. We already know Big Data is not just growing, it is exploding!
The conclusion is simple: one day we will no longer be able to cope unless the information is consumed differently, locally. Our brains may no longer be enough; we hope to get help. Artificial intelligence comes to the rescue, M2M takes off, but the new system must be highly decentralized in order to stay robust, or else it will crash like some kind of dystopian event from H2G2. Is it any wonder that even today a large portion, if not the majority, of the world’s Internet traffic is in fact already P2P, and that much of the world’s software is downloaded via open source P2P? Just think of BitCoin and how it captures the imagination of the best or bravest developers and investors (and how ridiculous one of those categories could look, not realizing its potential current flaw, to the supreme delight of its developers, who will undoubtedly develop the fix, but that’s the subject of another blog).
Consequently, centralized high-bandwidth compute will break down at the bleeding edge, the cloud as we know it won’t scale, and a new form of computing emerges: fog computing, a direct consequence of Moore’s and Nielsen’s Laws combined. Fighting this trend equates to fighting the laws of physics; I don’t think I can say it more simply than that.
Thus the compute model has already begun to shift: we will want our Big Data analyzed, visualized, private, secure, and ready when we are, and finally we begin to realize how vital it has become. Can you live without your network, data, connection, friends, or social network for more than a few minutes? Hours? Days? And when you rejoin it, how does it feel? And if you can’t, are you convinced that one day you must be in control of your own persona, your personal data, or else? Granted, while we shouldn’t worry too much about a Blade Runner dystopia or the Krikkit story in H2G2’s Life, the Universe and Everything, there are some interesting things one could be doing, and more than just asking, as Philip K. Dick once did, do androids dream of electric sheep?
To enable this new beginning, we started in open source, looking to incubate a project or two. The first, in Eclipse M2M, is one of a dozen or so dots we’d like to connect in the days and months to come; we call it krikkit. The possibilities afforded by this new compute model are endless. One of them could be the ability to put us back in control of our own local and personal data, rather than some central place, service, or bot currently sold as a matter of convenience, fashion, or scale. I hope that with the release of these new projects we will begin to solve that together. What better way to collaborate than in the open? Perhaps this is what the Internet of Everything and data in motion should be about.
Tags: ai, Android, artificial intelligence, Big Data, BitCoin, Blade Runner, cloud, Do Androids Dream of Electric Sheep, Fog, Fog computing, H2G2, Internet of Everything, internet of things, IoE, IoT, krikkit, M2M, Moore Law, Nielsen Law, open source, p2p, privacy, security
In my last blog, “Has Hybrid Cloud Arrived? Part 1: And How Will it Shape the Role of IT Going Forward?” we looked at the business drivers of hybrid cloud and previewed the key requirements. In this blog, we will look at Cisco InterCloud, a hybrid cloud solution we announced this week at Cisco Live! Milan to address the hybrid cloud needs of enterprise and service provider customers.
Business leaders today are heavily growth-oriented and are looking at new ways of deploying applications to obtain greater agility. That is where we see hybrid cloud becoming mainstream as it frees businesses to run applications on-demand and where it’s most cost-effective. Cisco InterCloud was announced to address this opportunity and facilitate optimal hybrid cloud deployments.
Cisco InterCloud comes with unique capabilities that enable enterprises to connect their private cloud to heterogeneous public clouds. It creates the notion of a single scalable hybrid cloud for all physical, virtual, and cloud workloads: an infinite data center where the public cloud is treated as a virtual extension of the data center. Cisco InterCloud is designed with these tenets:
Open: Customers are excited about Cisco InterCloud because it is an open solution that gives them the freedom to choose their hypervisor on the private cloud and to select their public cloud from a rich ecosystem of cloud providers. Service providers like InterCloud because it is based on open APIs, integrates with multiple cloud platforms (e.g., CloudStack, vCloud, and OpenStack), and enables them to rapidly offer a hybrid cloud solution while reducing the effort to onboard enterprise customers. Cisco InterCloud thus provides a multi-cloud, multi-hypervisor cloud experience.
Secure: Another key factor in hybrid cloud adoption is the need to address the security and compliance concerns of public cloud deployment. Cisco InterCloud provides end-to-end secure connectivity by encrypting traffic between the enterprise private cloud and the service provider cloud. It also ensures workload security by encrypting all data in motion within the shared multi-tenant public cloud. Additionally, customers can deploy network services such as zone-based virtual firewalls and edge firewalls for further workload security within the public cloud.
Flexible: Customers demand bi-directional workload portability across private and public clouds. With Cisco InterCloud, customers can not only provision workloads from a self-service portal but also, with a click, migrate workloads to the public cloud and back. All of this happens behind the scenes as InterCloud converts workloads to the right VM format, such as VMware VMDK to AWS AMI, or to CloudStack format for providers such as BT. Workload portability is easier because applications don’t need to be re-architected: IP addresses are retained upon migration and enterprise VLANs are extended into the cloud.
I believe that lines of business and developers are leading the journey to hybrid cloud adoption. IT has realized that it needs to shift away from its role as gatekeeper to being a partner to lines of business, but it faces certain challenges in doing so: it has to deal with the overhead of integrating with each cloud provider and find ways to do so in a secure manner. Cisco InterCloud enables IT to act as a cloud broker on behalf of lines of business. It provides unified hybrid cloud management through a built-in IT admin portal and an extensible northbound API layer, and it allows IT to enforce consistent network security, L4-7 services, and workload policies throughout the hybrid cloud.
This week’s Cisco InterCloud announcement demonstrates our continued commitment to customers. We envision a future where customers have an array of cloud options and can pick the ‘best fit’ based on workload needs, performance, cost, and location requirements. We are going into beta next quarter, with general availability to follow soon afterwards. As 2014 dawns, we see a shift toward mainstream hybrid cloud adoption: hybrid cloud is finally here for real.
Tags: Cisco cloud, cisco intercloud, cloud, data center, Hybrid Cloud, security, virtualization
About two years ago, I went into a customer workshop on private cloud. As we were introducing ourselves around the table, the CIO turned to me with a pained expression and said, “Bob, I have a different problem. My CFO and CEO just asked me if I knew how many of our users were accessing cloud services. They asked me if I knew how much we were spending or if there were any risks.” He said, “I don’t know the answers, and I don’t have a plan.”
In the months that followed, I would have countless other conversations with CIOs that highlighted an emerging challenge: shadow IT. Shadow IT turns up when business groups implement a public cloud service without the knowledge of IT. In working with our customers, we have found that there are typically 5-10 times more cloud services in use than IT is aware of.
The conversations I had with customers highlighted that shadow IT was creating several challenges, from monitoring cloud costs to managing service providers. One of the most significant is risk to the business. Specifically, we have seen five categories of risk arise:
#1 Data Security Risks
Company information being shared externally due to a cloud service breach is among our customers’ worst nightmares. Cloud vendors work hard to protect customers’ data. However, it falls to the business to know where their information lives and to protect it.
A security officer of a global non-profit organization recently shared with me that his organization wanted to use cloud services to help connect with donors and manage operations. However, they weren’t set up to govern providers and had no idea how donor information was being shared with cloud vendors. Many of our customers tell us they don’t have strong processes to manage cloud vendors, can’t track how their information is being shared, and often don’t know how vendors are keeping their information safe.
#2 Brand Risks
Brand risk goes hand-in-hand with a potential data security breach. If company information is stolen or shared inappropriately, the consequences to an organization’s brand are immeasurable. Not only can a breach lead to negative press and customer backlash, but it can also result in financial damages.
#3 Compliance Risks
Globally, organizations face evolving and expanding regulations that require them to retain information, maintain privacy, give people the ‘right to be forgotten,’ and more. As cloud services are used across all business functions, companies face the risk of falling out of compliance. Our customers tell us that violations are becoming more frequent as those responsible for enforcing compliance become less aware of what services are being used. Also, employees often don’t understand when using a cloud service can trigger compliance issues.
#4 Business Continuity Risks
Businesses need to ensure that the cloud vendors they use have strong business fundamentals, or they risk losing valuable corporate information if a vendor goes out of business or is acquired. Last year, the cloud storage provider Nirvanix went out of business and gave customers less than one month to move their data or risk losing it forever. Such abrupt changes can lead to significant challenges in maintaining business continuity.
#5 Financial Risks
Recently, we helped a global equipment manufacturer discover that their employees were using over 630 cloud services, 90 percent of which were unknown to IT. These unknown services cost them nearly a million dollars annually. Costs are spiraling as businesses unknowingly purchase duplicate cloud services and lose their power to negotiate bulk contracts.
Identifying Cloud Risks With Cisco Cloud Consumption Services
The first step to managing the risks of shadow IT is to identify where you might face exposure. To help customers with this challenge, Cisco has introduced a new service designed to identify the business risks and costs resulting from shadow IT.
With Cisco Cloud Consumption Services, customers can know which public cloud services are being used in their business, become more agile, reduce risks, and optimize public cloud costs.
Using collection tools in the network, we help customers find out what cloud services are being used by employees across their entire organization. Our cloud experts then help customers identify and manage cloud security risks and compliance issues. Using a proprietary database of cloud vendors, we help companies identify the risk profile of services they are using and provide recommendations for managing these risks with stronger cloud service provider governance. The service also helps customers determine what they are really spending on cloud and find ways to save money.
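To make the discovery idea concrete, here is a minimal sketch, in Python, of the kind of analysis such collection tools perform: tallying known cloud-service domains seen in web proxy logs. The log format, domain catalog, and function name here are invented for illustration; this is not Cisco’s actual tooling.

```python
# Toy illustration of cloud-service discovery from web proxy logs.
# The service catalog and log format below are hypothetical.
from collections import Counter
from urllib.parse import urlparse

# Hypothetical catalog mapping domains to known cloud services; a real
# service would use a large, curated vendor database.
KNOWN_CLOUD_SERVICES = {
    "dropbox.com": "Dropbox",
    "app.box.com": "Box",
    "drive.google.com": "Google Drive",
}

def discover_cloud_usage(proxy_log_lines):
    """Count requests per known cloud service seen in proxy logs."""
    usage = Counter()
    for line in proxy_log_lines:
        # Each simplified log line: "<client_ip> <url>"
        _, url = line.split(maxsplit=1)
        host = urlparse(url).netloc
        for domain, service in KNOWN_CLOUD_SERVICES.items():
            # Match the domain itself or any subdomain of it.
            if host == domain or host.endswith("." + domain):
                usage[service] += 1
    return usage

logs = [
    "10.0.0.5 https://www.dropbox.com/home",
    "10.0.0.7 https://drive.google.com/file/d/abc",
    "10.0.0.5 https://www.dropbox.com/upload",
]
print(discover_cloud_usage(logs))
```

In practice the interesting output is not the request counts but the list of services IT did not know about, which is what drives the risk and cost conversations described above.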
Additionally, Cisco Cloud Consumption Services helps companies develop new processes for managing cloud vendors, from onboarding to termination. We help customers to proactively manage risks and deliver new services faster by establishing stronger cloud service management practices.
You can learn more about how we can help you understand your cloud usage and identify risks to your business at www.cisco.com/go/cloudconsumption.
Many leaders I speak with feel they do not have a shadow IT problem, citing security protocols set up to protect them. Think this is you? Think again! Recently we worked with a provincial government and discovered over 650 public cloud services in use across their organization, despite their blocking 90 percent of internet traffic. Simply put, if your employees have access to the internet, you have a shadow IT challenge.
I’d be interested to hear from you as to whether you feel you have challenges with shadow IT and what the risks could be. I look forward to your comments!
Tags: Cisco Cloud Consumption Services, Cisco Live! 2014, risk, security, Shadow IT