Perhaps you’ve seen the shirts. Maybe you’ve joined in or listened to an episode of Cisco Champion Radio. Or maybe you cannot resist learning new things and having access to experts in your area of technical expertise.
Join us--submit your Cisco Champion for Data Center nomination today!
No matter the reason, if you are curious about the Cisco Champion program, now is the time to nominate yourself or a colleague for consideration for 2015!
- October 1: Open call for nominations
- October 31: Deadline to submit nominations
- November 25: Cisco Champion Class of 2015 announced
Act now! It’s a great opportunity to participate in everything from blogger briefings to podcasts, and to get to know your industry and your peers better. We need your voice.
Tags: cisco champion, Cisco Champions, cloud, data center
This is part 1 of the “Your Business Powered By Cisco Customer Solutions Architecture (CSA)” blog series.
Many IT organizations are challenged to take advantage of the new technologies enabled by virtualization, cloud, analytics, and IoT. Applications enabled by these new technologies must be protected from unauthorized use but remain accessible, in a secure manner, from any device in any location throughout the world. With a vast array of new technology choices and a substantial installed infrastructure base, it is important to have a place to start -- a solutions architecture -- that provides a framework for using these technologies to drive business outcomes.
The CSA is a transformational customer-facing blueprint that delivers IT-based services for enterprises and service providers to achieve their business outcomes. To be relevant for our customers, the CSA was developed based on disruptive examples that Cisco engineers observed in the industry during their discussions with both enterprise and service provider customers worldwide.
Some of these disruptive examples include the use of new technologies such as analytics, cloud, Internet of Things (IoT), Internet of Everything (IoE), and cybersecurity. It should also be noted that the front end for IT blueprint consulting is Cisco Consulting Services; the CSA is representative of the sets of abstractions that describe the actual functions.
In all IT environments, both enterprise and service provider, Cisco sees two common trends.
Tags: cisco csa, cloud, customer solutions architecture, IoE, IoT, security, Service Provider, virtualization
What kind of a world will you live in three years from now? How about five? Will your personal robot pour you a drink after your self-driving car delivers you home? That’s where we’re headed, and it’s a pretty quick trip: self-driving cars are already on public roads and you’ll soon be able to buy that humanoid robot.
Cisco’s Collaboration team thinks a lot about the future—not just about how we’ll get around and get our drinks, but about how we’ll connect and collaborate. We’re passionate about the future of collaboration, about giving the world collaboration tools that are every bit as smart as those self-driving cars and whiskey-pouring robots.
Where we’re at: today’s challenges
Before we talk more about the future, let’s talk about where the industry is right now. Over the years, various vendors have given us audio conferencing, web conferencing, and video conferencing. Each of these technologies was introduced at a different time and has matured at a different pace—with audio the tried-and-true veteran, video conferencing the relative newcomer, and web conferencing somewhere in between.
Herein lies the problem.
Tags: audio, Cisco, cloud, collaboration, conferencing, video, virtual, web, WebEx
That is the approximate number of cloud services that Ken Hankoff, Manager of Cisco IT Risk Management’s Cloud and Application Service Provider Remediation (CASPR) Program, believes Cisco’s 70,000 employees use. For the last 14 years, this program has assessed and remediated the risks associated with using cloud-hosted services.
An assessment process for new cloud services is a vital step toward reducing the risk of using externally hosted services. Many customers I speak with struggle to rapidly assess cloud services and integrate them into their IT organization. As part of my blog series on governing cloud service adoption, I asked Ken to share some of his ‘lessons learned’ in assessing the risks of cloud services and bringing them into Cisco IT’s fold.
How do you ensure that teams wanting to use new cloud services work with your team?
Our team is not in the business of sourcing cloud vendors. That responsibility lies with the individual business units and their architecture teams who are seeking to use the service, often in partnership with IT. Once a vendor is selected, there are two primary ways in which my team gets engaged: first, through the Global Contracts team, which has made Cloud Service Provider assessment part of the contracting process; and second, when a new service is being integrated within IT.
How do you evaluate whether a new cloud service is risky to the business?
We look at seven risk factors to create a formula for risk—business criticality, financial viability, security, resiliency, architectural alignment, regulatory compliance, and assessment status.
We establish the business criticality of the service to determine how Cisco would be impacted or disrupted in the event the capability provided by the vendor would go away, and whether we could react or compensate.
We then look at the financial viability of the vendor to give us comfort that they will remain in business. To evaluate vendors, we leverage Dun & Bradstreet’s Predictive Scores & Ratings.

We rely heavily on Cisco’s Information Security (InfoSec) organization to provide us with a Security Composite Risk score. Depending on the parameters of the cloud provider engagement, InfoSec will look at the vendor’s application development process, infrastructure, data handling security, system-to-system interoperability, and other areas.

For resiliency, we focus on how they meet our standards for business continuity and disaster recovery, to ensure that our business data will be there when needed, regardless of what happens.
We also need to ensure that we stay compliant with regulations. A vendor that has to comply with HIPAA, SOX, or other regulatory/privacy requirements poses a higher risk than one that doesn’t. For this reason, we look into whether regulatory compliance is a factor, and if so, that it is addressed appropriately.
Finally, we also assess if the vendor aligns to the broader architecture that Cisco IT is investing in to support the business. Vendors are deemed higher investment risk if they do not align to the business and operational roadmap that Cisco is pursuing.
We reassess vendors on a periodic basis according to their overall risk score. If a service is overdue for a reassessment, that in itself increases the risk of doing business with the provider, so we factor it in.
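The seven factors above can be combined into a single composite score. As a sketch only: the factor names come from the interview, but the weights, the 1–5 scale, and the function itself are illustrative assumptions, not the CASPR program’s actual formula.

```python
# Illustrative weighted composite risk score for a cloud vendor.
# Factor names are from the interview; the weights and the 1-5
# scale (1 = low risk, 5 = high risk) are hypothetical.

FACTORS = {
    "business_criticality": 0.20,
    "financial_viability": 0.15,
    "security": 0.20,
    "resiliency": 0.15,
    "architectural_alignment": 0.10,
    "regulatory_compliance": 0.15,
    "assessment_status": 0.05,  # an overdue reassessment raises risk
}

def composite_risk(scores: dict) -> float:
    """Weighted average of per-factor scores."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

# A hypothetical vendor: business-critical, HIPAA-regulated,
# but financially solid and recently reassessed.
vendor = {
    "business_criticality": 4, "financial_viability": 2, "security": 3,
    "resiliency": 2, "architectural_alignment": 3,
    "regulatory_compliance": 5, "assessment_status": 1,
}
print(round(composite_risk(vendor), 2))  # → 3.1
```

A real program would also translate score bands into actions, for example scheduling more frequent reassessments for vendors above a threshold.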
In your opinion, what are the three most important things to manage the business risks of cloud services?
First, I would suggest establishing ownership and governance of cloud services through a centralized PMO at the enterprise level, not just within IT. This ownership needs to go beyond assessing vendors for security risk and focus on establishing company-wide policies for overseeing cloud services.
Second, provide visibility into existing services and how they are being used. This enables a catalog of assessed and approved vendors for people to access. Consolidating on fewer vendors reduces your risk.
Third, continually monitor services across the board to know what risks you might be facing, and to ensure that service providers are meeting their SLAs. This also helps ensure that investments aren’t being wasted. There is a natural CSP application lifecycle -- selection, implementation, adoption, and eventual decline in usage -- and without a lifecycle approach to phasing out services, you may end up supporting something that has very few users.
What is your biggest lesson learned in assessing new cloud services?
I wish the program had collected more metrics earlier. What we are finding is that there are a significant number of services being contracted all over the company. By collecting better metrics, we might have been more effective in showing executives which services are being used, who is using them, and how. We are making good progress on this now, but I wish we had started earlier.
How are you monitoring cloud services and gathering this intelligence?
Our professional services team has helped us a great deal. With Cisco Cloud Consumption Services, we have begun to capture an enterprise view of which cloud services are being used and by whom, and we now have a dashboard of metrics we can use to inform Cisco executives. Before we were using the software, I never imagined that we had nearly 2,000 cloud services in use, but with Cisco Cloud Consumption we now know and can monitor activity.
Learn more about how Cisco can help monitor and manage cloud providers at http://www.cisco.com/go/cloudconsumption.
Tags: Cisco IT, cisco on cisco, cloud, cloud governance, cloud risks, cloud security
This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. The MDS 9710’s capabilities are field-proven, with wide adoption and a steep ramp within the first year of its introduction. Some customer use cases for the MDS 9710 are detailed here. Furthermore, Cisco has not only established itself as a strong player in the SAN space with industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports, introduced over the last 12 years, but also holds the leading market share in SAN.
Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:
- The design should be flexible enough to scale up (increase performance) or scale out (add more ports)
- The process should not disrupt the current installation through recabling, performance impact, or downtime
- Design principles such as oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) shouldn’t be compromised at the port level or the fabric level
Let’s take a scale-out example, where a customer wants to add 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8 Gbps and 640 hosts at 16 Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of host connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to targets using 128 target ports running at 16 Gbps in each direction. The picture below shows the connectivity.
Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if a customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.
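The arithmetic behind this scale-out example can be checked with a short script. The port counts and the 8:1 ratio come from the text above; the helper function and its name are mine, for illustration only.

```python
# Sanity-check the core-edge scale-out numbers from the example above.

def edge_bandwidth_gbps(ports_by_speed: dict) -> int:
    """Total host-facing bandwidth, given {port_speed_gbps: port_count}."""
    return sum(speed * count for speed, count in ports_by_speed.items())

# Initial design: 768 hosts at 8G and 640 hosts at 16G on 4 edge directors.
edge = edge_bandwidth_gbps({8: 768, 16: 640})
print(edge)       # → 16384 (Gbps, i.e. ~16 Tbps)

# 8:1 oversubscription from edge to core.
core = edge / 8
print(core)       # → 2048.0 (Gbps, i.e. 2 Tbps)
print(core / 16)  # → 128.0 core-facing 16G ports in each direction

# Adding 188 more 16G ports later:
extra = edge_bandwidth_gbps({16: 188})          # 3008 Gbps of new edge bandwidth
links = -(-extra // 8 // 16)                    # ceil((extra / 8) / 16)
print(links)      # → 24 additional 16G edge-to-core links
```

The ceiling division shows why 24 links (rather than 23.5) are needed to carry the extra 376 Gbps of oversubscribed traffic on 16G connections.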
Let’s look at another example, where the customer wants to scale up (i.e., increase the performance of the connections), using an edge-core-edge design. There are 6,144 hosts running at 8 Gbps distributed over 10 edge MDS 9710s, resulting in roughly 49 Tbps of total edge bandwidth. Let’s assume that this data center uses an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the data center with 2 core switches of 192 ports each, providing the required 3 Tbps of edge-to-core bandwidth. Let’s assume that in the initial design the customer connected 768 storage ports running at 8G.
A few years down the road, the customer may want to add an additional 6,144 8G ports while keeping the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (either in throughput or in latency) and without any constraints on protocol, optics, or connectivity. In this scenario the host edge connectivity doubles to roughly 98 Tbps and the required edge-to-core bandwidth increases to 6 Tbps. The data center admin has multiple options for providing the additional core bandwidth: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for host-edge-to-core and core-to-target-edge connections on the same chassis. The admin could just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.
Alternatively, the customer may want to upgrade the hosts to 16G connectivity while keeping the same oversubscription ratios. With 16G connectivity the host edge bandwidth increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
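The two scale-up scenarios can be checked the same way. The host counts, speeds, and 16:1 ratio are from the text; the script itself is just illustrative arithmetic.

```python
# Edge-core-edge scale-up arithmetic from the examples above.

HOSTS = 6144
RATIO = 16  # edge-to-core oversubscription

# Initial design: 6,144 hosts at 8G.
edge = HOSTS * 8
print(edge)          # → 49152 (Gbps, i.e. ~49 Tbps of edge bandwidth)
print(edge / RATIO)  # → 3072.0 (Gbps, i.e. 3 Tbps into the core)

# Scenario 1: double the host count, still at 8G.
edge2 = 2 * HOSTS * 8
print(edge2 / RATIO)                  # → 6144.0 (Gbps, i.e. 6 Tbps core bandwidth)
print((edge2 - edge) // RATIO // 16)  # → 192 additional 16G core ports needed

# Scenario 2: keep 6,144 hosts but upgrade them to 16G.
edge3 = HOSTS * 16
print(edge3)         # → 98304 (Gbps, i.e. ~98 Tbps; same 6 Tbps core need at 16:1)
```

Note that both scenarios land on the same 6 Tbps core requirement, which is why the admin’s options (more 16G ports, 32G cards, or 40G FCoE) apply equally to either path.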
With either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up; in those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.
As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.
“Good design doesn’t date. Bad design does.”
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization