Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting a tremendous amount of strain on existing infrastructure and adding complexity. The MDS 9710 is designed to surpass these requirements for the decade ahead without a forklift upgrade.
MDS 9700 provides unprecedented:
- Performance -- 24 Tbps switching capacity
- Reliability -- redundancy for every critical component in the chassis, including the fabric cards
- Flexibility -- speed, protocol, and data center architecture
In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection to customers.
In this series of blogs I plan to focus on the design requirements of the next-generation data center with the MDS 9710; we will review one aspect of those requirements in each post. Let us look at performance today. Many customers ask how the MDS 9710 delivers the highest performance. The performance that an application delivers depends
Read More »
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, nexus, NX-OS, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
This is the first of a four-part series on the convergence of IT and OT (Operational Technologies).
Part 2 will cover the impact of the transition to IP on Physical Security and the convergence of Physical and Cyber Security.
Part 3 will discuss the convergence of IT and OT -- Operational Technology of all types outside the traditional realm of Information Processing.
Part 4 will look at how to actually make the transition to a converged IT/OT infrastructure and tips on overcoming the challenges.
Those of us in the Energy Industry know that the utilities segment is in transition. The network architecture, in particular, is undergoing change -- change that will bring challenges as well as opportunities for both Cisco and our customers.
Almost every communication application started as point-to-point serial — including computer communications. But the simple geometry problem of how many lines are needed to connect every vertex (node) of a polygon to every other vertex [ n(n-1)/2 in total, if you’re curious ] shows that as the number of nodes grows, connecting each one to every other one quickly becomes infeasible.
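The geometry above is easy to check directly. A minimal sketch (the function names `mesh_links` and `polygon_diagonals` are mine, chosen for illustration): connecting every pair of n nodes takes n(n-1)/2 lines, of which n(n-3)/2 are the polygon's diagonals and the remaining n are its sides.

```python
def mesh_links(n: int) -> int:
    """Lines needed so every one of n nodes connects directly to every
    other node: one line per unordered pair of nodes."""
    return n * (n - 1) // 2

def polygon_diagonals(n: int) -> int:
    """Diagonals of an n-gon: all pairwise links minus the n sides."""
    return n * (n - 3) // 2

# The quadratic growth is what makes full-mesh serial wiring infeasible.
for n in (4, 10, 50, 1000):
    print(f"{n:>5} nodes -> {mesh_links(n):>7} lines")
```

Even a modest 50-node system would need 1,225 dedicated lines, which is why multi-drop and switched topologies won out.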
The need to interconnect more and more devices led to multi-drop (bus) topologies, the challenge of sorting out who gets to talk when, and the solutions of token passing, polling, and TDM.
Circuit switching was a big breakthrough, developed out of necessity as the number of telephone handsets exploded. Interestingly enough, look at the hierarchical topology of trunking and local switching and you may recognize an analog precursor to NAT.
Networking is often first applied by using Ethernet to replace serial communication with flat, Layer 2 networks that interconnect multiple nodes, with polling and TDM used exactly as they were in serial systems. That’s where most SCADA systems still live today, and it is why there are relatively few monitored points: the count is limited by how quickly the polling loop can be traversed. Imagine trying to run the Internet that way.
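To see why the polling loop caps the number of monitored points, here is a toy calculation. The per-poll time is a made-up illustrative figure, not drawn from any real SCADA deployment:

```python
def scan_interval(points: int, poll_ms: float = 50.0) -> float:
    """Seconds to complete one full polling cycle over all points,
    assuming a fixed (hypothetical) per-point poll time on a shared
    serial multi-drop link."""
    return points * poll_ms / 1000.0

# Each added point stretches the time between updates for every point.
for pts in (100, 1000, 10000):
    print(f"{pts:>6} points -> {scan_interval(pts):.0f} s per full scan")
```

At an assumed 50 ms per poll, 10,000 points would mean more than eight minutes between successive readings of any one point, which is why serial-era polling systems monitor so few.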
Fast forward and almost every industry and industrial application that started off as serial or circuit switched has migrated or is migrating to packet switched as IP packet technology has made astonishing progress along the price/performance curve.
High performance IP is now able to offer latency performance that used to require dedicated connections. Along with IP have come the tools to manage, diagnose, repair and secure the communication network. Relative to the billions of dollars invested by companies around the world in tools, security, management, etc. for IP, the investments being made in securing and improving serial or TDM are almost nonexistent.
Globally, Service Providers who built their industry on circuit switched analog and TDM are terminating those services as they move to complete their transition to IP.
Cisco continues to play a key role in transitioning serial/TDM technology to IP, helping customers get the full benefit of the robust performance and security capabilities and features IP offers. Customers who have received End of Service notices for Frame Relay are scrambling to find alternatives and, at the same time, achieve regulatory compliance.
As Operational Technology groups outside of IT increasingly use Information & Communication Technology (ICT), they need the same capabilities as IT.
What does this mean for Cisco and our customers?
Relationships with the business, including the operations side of the business, are key. Budget is increasingly in the hands of the business rather than IT. As a result, Cisco and our customers’ IT departments are increasingly collaborating with the operational side of the business -- especially the OT, or ‘Operational Technologies’, part of our customers’ organizations.
Cisco has specialized industry sales support teams in a group called CVA (Cisco Value Acceleration) Group, which I’m a part of, as well as Cisco Advanced Services and other Cisco Business Units (especially the IOTG, or Internet of Things Group) along with groups such as the Cisco Global Industries Center of Expertise (GICE) to understand the trends, business imperatives and compelling events creating opportunity with customers.
If you’d like to know more about these groups, Read More »
Tags: convergence, Energy, ip, network convergence, Operational Technologies, operational technology, OT, SCADA, utilities
Previously, we saw how Boeing’s BDS division and the University of Siegen deployed multi-hop FCoE and realized significant benefits. This blog highlights similar benefits achieved by the Engineering Shared Infrastructure Services (ESIS) department at NetApp.
NetApp’s ESIS department delivers and maintains end-to-end compute, storage, and network resources for internal Development and Quality Assurance engineers. These resources provide a platform for the innovation that creates storage systems and software, ultimately empowering NetApp customers around the world to store, manage, protect, and retain their data. The requirement was agility and versatility in providing storage connectivity between rack/blade Cisco UCS servers and NetApp clustered Data ONTAP storage arrays.
So, NetApp ESIS implemented an integrated model using Cisco Unified Fabric that supports FCoE from the UCS servers through the Nexus Series Switches all the way to the NetApp storage controllers.
This Unified Fabric architecture reduced the number of management points and provided easy scalability. The TCO benefits were quite significant -- NetApp saved $300,000 in hardware costs, more than $80,000 in implementation costs, and one-third of an FTE’s time Read More »
Tags: Cisco Unified Fabric, convergence, FCoE, Multihop, Storage
In one of my earlier blogs -- “How to get more SAN mileage….” -- I highlighted how one can deploy end-to-end FCoE using a converged Director-class platform, such as the Nexus 7000, connected directly to a converged access switch, such as the UCS FI, in order to get the utmost agility. Well, this is how ITOCHU Techno-Solutions Corporation (CTC), a cloud service provider, deployed its network to get significantly higher mileage.
CTC provides a wide range of IT services for business customers in Japan. The company’s Cloud Platform Group recently launched its innovative ElasticCUVIC shared private cloud service, which helps customers reduce infrastructure cost and management complexity. With large numbers of VMs, CTC wanted to simplify its data center architecture and IT management while optimizing scalability. The challenge was to deliver high-performance, easy-to-manage cloud services at scale.
The company evaluated several storage networking solutions and turned to Cisco for Fibre Channel over Ethernet (FCoE) solutions, which greatly simplify the infrastructure and management. CTC built its two newest data centers in Yokohama and Kobe with ultra-high performance and flexibility in mind. CTC implemented an End-to-End FCoE architecture using Cisco Nexus 7000 Series Switches, Cisco UCS servers, and FCoE connections between the switches, servers, and FCoE storage arrays.
With the converged FCoE architecture, ElasticCUVIC is enabling CTC customers to gain Read More »
Tags: cloud, convergence, FCoE, SAN, Storage, UCS, Unified Data Center
Image Credit: Wikispeed.org
Mileage (miles per gallon) is one of the most important criteria when buying an automobile, and once you have bought one, it is highly desirable to hit the maximum advertised mileage without significantly changing your driving habits or routes (highway vs. city mpg). Well, I have not been able to achieve that yet, so, being a geek, I focused my attention on a different form of mileage (throughput per switch port) that interests me at work. In this blog, I will explore a way to get more SAN mileage from the Cisco UCS FI (Fabric Interconnect) without significantly affecting the SAN admin’s day-to-day operations.
Just a bit of background before we delve into the details -- the I/O fabric between the UCS FI and the UCS Blade Server Chassis is a converged fabric running FCoE. The use of FCoE within the UCS fabric is completely transparent to the host operating system, and any Fibre Channel block storage traffic traverses this fabric as FCoE traffic. So, more than 20,000 UCS customers using block storage are already running FCoE at the access layer of the network.
Now, the key question is which technology, FC or FCoE, to use northbound on the FI uplink ports to connect to an upstream core switch for SAN connectivity. So, what are the uplink options? The FI has Unified Ports, so the choice is to use the same uplink port as either 8G FC or 10G FCoE. [Note that when using the FCoE uplink, a converged link is not required; one can still use a dedicated FCoE link to carry pure SAN traffic.]
1) Bandwidth for core links: This is a very important aspect for the core part of the network. It is interesting to note that 10G FCoE provides almost 50% more throughput than 8G FC. This is because FC uses a different bit encoding and clock rate than Ethernet: 8G FC yields about 6.8 Gbps of throughput, while 10G FCoE yields close to 10 Gbps (after 1-2% Ethernet frame overhead).
2) Consistent management model: FCoE is FC technology with the same management and security model, so moving from FC to FCoE is a seamless transition for a SAN admin, with minimal change in day-to-day operations. Moreover, this FCoE link carries dedicated SAN traffic without requiring any convergence with LAN traffic. In addition, if the UCS FI is running in NPV mode, then technically the FCoE link between the UCS FI and the upstream SAN switch does not constitute a multi-hop FCoE design, because the UCS FI does not consume a Domain ID. The bulk of the SAN configuration, such as zoning, needs to happen only on the core SAN switch, maintaining the same consistent SAN operational model as with plain FC.
3) Investment protection with multi-protocol flexibility: By choosing an FCoE uplink from the converged access layer, one can continue to use the upstream core SAN Director switch as-is, providing connectivity to existing FC storage arrays. Note that the Cisco MDS 9000 SAN Director offers multi-protocol flexibility, so one can interconnect FCoE SANs on the server side with FC SANs on the storage side.
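The bandwidth math in point 1 can be checked with a quick back-of-the-envelope calculation. The line rates and encodings below are the nominal published figures (8G FC: 8b/10b coding at 8.5 GBaud; 10G Ethernet: 64b/66b at 10.3125 GBaud); the exact frame-overhead percentage is an assumed value within the 1-2% range mentioned above:

```python
def fc_8g_throughput() -> float:
    """Effective payload rate of 8G FC in Gbps: 8.5 GBaud line rate
    with 8b/10b encoding (8 data bits per 10 line bits)."""
    return 8.5 * 8 / 10  # = 6.8 Gbps

def fcoe_10g_throughput(frame_overhead: float = 0.015) -> float:
    """Effective payload rate of 10G FCoE in Gbps: 10.3125 GBaud with
    64b/66b encoding, reduced by an assumed ~1.5% frame overhead."""
    raw = 10.3125 * 64 / 66  # = 10.0 Gbps
    return raw * (1 - frame_overhead)

fc, fcoe = fc_8g_throughput(), fcoe_10g_throughput()
print(f"8G FC   : {fc:.2f} Gbps")
print(f"10G FCoE: {fcoe:.2f} Gbps ({(fcoe / fc - 1) * 100:.0f}% more)")
```

With these figures the FCoE uplink comes out roughly 45% ahead per port, consistent with the "almost 50% more throughput" claim above.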
And, we have a winner… Read More »
Tags: convergence, Fabric Interconnect, FCoE, SAN, Storage, UCS, Unified Data Center