Previously, we saw how Boeing Defense, Space & Security (BDS) and the University of Siegen deployed Multi-hop FCoE and realized significant benefits. This blog highlights similar benefits achieved by the Engineering Shared Infrastructure Services (ESIS) department at NetApp.
NetApp’s ESIS department delivers and maintains end-to-end compute, storage, and network resources for internal Development and Quality Assurance engineers. These resources provide a platform for the innovation that creates storage systems and software, ultimately empowering NetApp customers around the world to store, manage, protect, and retain their data. The requirement was agility and versatility in providing storage connectivity between rack and blade Cisco UCS servers and NetApp clustered Data ONTAP storage arrays.
So, NetApp ESIS implemented an integrated model using Cisco Unified Fabric that supports FCoE from the UCS servers through the Nexus Series switches all the way to the NetApp storage controllers.
This Unified Fabric architecture reduced the number of management points and provided easy scalability. The TCO benefits were significant: NetApp saved $300,000 in hardware costs, more than $80,000 in implementation costs, and one-third of an FTE’s time. Read More »
Recently, Cisco made significant efforts around open sourcing our H.264 implementation, including covering the MPEG-LA licensing costs for distribution and working with Mozilla to add support for H.264. However, despite this attempt to break the logjam that has occurred in the standards bodies, the Internet Engineering Task Force (IETF) failed to reach consensus on the selection of a common video codec.
Cisco’s Jonathan Rosenberg explored this topic more in a recent Collaboration blog post. Read on to find out how we’re planning to move forward and why this conversation is definitely not over!
This is the first of a series of blogs that I plan to publish to start a dialog with our partner community. In these blogs, I’ll discuss the huge industry disruption now taking place, how Cisco Services is transforming itself to respond to that disruption, and how current and prospective partners can profit from the lucrative opportunities this disruption is creating.
In our industry, we see major disruptions every 20-25 years. Inflection points occur, platforms shift, and customer needs change dramatically. Today, we find ourselves well into the next major market evolution, one of unprecedented scale. To learn more, click here to view an online seminar where I discuss these trends with Chris Barnard, IDC AVP EMEA Network Life Cycle Services, and Leslie Rosenberg, IDC Research Manager, Worldwide Network Life Cycle Services.
Together, we are addressing the challenges, and tremendous opportunities, related to cloud, virtualization, big data, programmable networks, new consumption models, and changing buying centers. Some of our existing partners are executing on these opportunities and evolving their practices to compete, win, and ultimately enable innovative business solutions for all our customers. At the same time, we are attracting new partners into our ecosystem: ISVs, industry vertical players, and consulting firms, to name a few. Read More »
As the day draws to a close, and especially during the early morning, users become far more likely to click on links that lead to malware. Those responsible for network security need to ensure that users’ awareness of information security continues after work hours, so that users “don’t click tired.” Read More »
I am attending South Korea’s Big Data Forum in Seoul, and one question here is, “How big is Big Data?” My friend and colleague Dave Evans has pointed out that by the end of this year, more data will be created every 10 minutes than in the entire history of the world up to 2008. Now, that’s big!
Much of this data is being created by billions of sensors that are embedded in everything from traffic lights and running shoes to medical devices and industrial machinery—the backbone of the Internet of Things (IoT). But the real value of all this data can be realized only when we look at it in the context of the Internet of Everything (IoE). While IoT enables automation through machine-to-machine (M2M) communication, IoE adds the elements of “people” and “process” to the “data” and “things” that make up IoT. Analytics is what brings intelligence to these connections, creating endless possibilities.
To understand why, let’s step back and look at the classic approach to Big Data and analytics. Traditionally, organizations have tended to store all the data they collect from various sources in centralized data centers. With this model, if a retailer wants to know something about the buying patterns of a certain store’s customers, it can analyze loyalty card purchases using data in the data warehouse. Collecting, cleansing, overlaying, and manipulating this data takes time. By the time the analysis is run, the customer has already left the store.
Big Data today is characterized by volume, variety, and velocity. This phenomenon is putting a tremendous strain on the centralized model, as it is no longer feasible to duplicate and store all that data in a centralized data warehouse. Decisions and actions need to take place at the edge, where and when the data is created; that is where the data and analysis need to be as well. That’s what Cisco calls “Data in Motion.” With sensors gaining more processing power and becoming more context-aware, it is now possible to bring intelligence and analytic algorithms close to the source of the data, at the edge of the network. Data in Motion stays where it is created, and presents insights in real time, prompting better, faster decisions.
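The Data in Motion idea described above can be illustrated with a minimal sketch: an edge node keeps a small rolling window of sensor readings, raises alerts immediately as data arrives, and forwards only a compact summary upstream instead of the raw stream. All names here (`EdgeAggregator`, the threshold value, the summary fields) are illustrative assumptions for this sketch, not a Cisco API.

```python
from collections import deque


class EdgeAggregator:
    """Sketch of edge analytics: act on data where it is created,
    forward only summaries and alerts upstream (illustrative only)."""

    def __init__(self, window_size=10, alert_threshold=100.0):
        self.window = deque(maxlen=window_size)  # rolling window of recent readings
        self.alert_threshold = alert_threshold

    def ingest(self, reading):
        """Add one raw reading; return an alert dict immediately if it
        crosses the threshold, so the decision happens at the edge."""
        self.window.append(reading)
        if reading > self.alert_threshold:
            return {"alert": True, "value": reading}
        return None

    def summary(self):
        """Compact summary to send upstream instead of every raw reading."""
        if not self.window:
            return None
        vals = list(self.window)
        return {
            "count": len(vals),
            "min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals),
        }


# Hypothetical temperature readings arriving at an edge node.
agg = EdgeAggregator(window_size=5, alert_threshold=90.0)
alerts = []
for r in [70.0, 75.0, 95.0, 80.0, 85.0]:
    a = agg.ingest(r)
    if a:
        alerts.append(a)

print(alerts)        # real-time alert for the out-of-range reading
print(agg.summary()) # one small summary replaces five raw readings
```

The point of the sketch is the bandwidth and latency trade-off: the centralized model ships every reading to a warehouse and analyzes later, while the edge model answers "is something wrong right now?" locally and sends only aggregates.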