In our previous big data blogs, my Cisco associates have focused on the topic of building the best infrastructure for long-term success with big data. I’d like to start a new chapter in the series, focusing on building the right data strategy and analytics solutions.
Today, people, process, data and things function together through a combination of machine-to-machine, person-to-machine and person-to-person connections. We call this the Internet of Everything (IoE). While the IoE is making us all smarter, it is also creating more data, more types of data and in more places.
This wealth of data comes with major challenges but also holds the potential for amazing opportunities. At Cisco, we’re all about helping our customers turn these challenges into opportunities. The first step is proper management of the massive amounts and types of data spread across multiple locations. From a solutions perspective, that first step is our agile data integration software, Cisco Data Virtualization. It abstracts the data users need from many different sources and brings it together into a single, unified, business-friendly view.
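To make the idea concrete, here is a toy sketch of what a unified view over multiple physical sources looks like in principle. This is a conceptual illustration only; the source names and rows are invented, and this is not how Cisco Data Virtualization is actually implemented.

```python
# Toy illustration of the data-virtualization idea: expose rows from
# several physical sources through one logical "view" without copying
# the data into a central store. All names and data here are invented.
warehouse = [{"customer": "Acme", "orders": 120}]    # e.g., an EDW table
hadoop_store = [{"customer": "Globex", "orders": 45}]  # e.g., a Hadoop dataset

def unified_view():
    """Yield rows from every backing source as one logical table."""
    for source in (warehouse, hadoop_store):
        yield from source

print([row["customer"] for row in unified_view()])  # ['Acme', 'Globex']
```

A consumer of `unified_view()` never needs to know which physical system a row came from, which is the essence of the abstraction described above.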
By leveraging this technology with additional solutions, our customers can access data across the IoE and use that data to respond quickly to change, make better decisions and gain a competitive advantage. Driven by the massive amounts of data in today’s IT environment, customers are facing huge expenses to add capacity to their existing enterprise data warehouses (EDW), the place where data is traditionally stored.
We help customers tackle the challenge of increasing enterprise data warehouse costs with Cisco Big Data Warehouse Expansion (BDWE). BDWE identifies infrequently used data and provides a methodology and tools to offload the data onto Hadoop, avoiding additional capacity costs and extending the life of the data warehouse.
I spoke with a customer recently who shared that one terabyte (TB) of data in an EDW costs $100,000 per year to maintain. The same amount of data held for the same period in Hadoop costs only about $1,000 to maintain. This is a significant difference. By implementing an ongoing strategy to offload data from the primary system to Hadoop, our solution frees up resources to be used in more strategic ways. Additionally, we deploy Data Virtualization to act as a ‘virtual database’ that can access data regardless of whether it resides in the original warehouse or the new Hadoop data store. So not only does BDWE significantly lower costs, but the historical data remains easily accessible.
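Because both costs are per terabyte per year, the savings from offloading scale linearly with the volume moved. A quick back-of-the-envelope calculation, using the customer’s quoted figures (these are one customer’s estimates, not official pricing):

```python
# Back-of-the-envelope comparison of keeping "cold" data in an
# enterprise data warehouse (EDW) versus offloading it to Hadoop.
# The per-TB figures are the customer's estimates quoted in the post.
EDW_COST_PER_TB = 100_000   # USD per TB per year in the EDW
HADOOP_COST_PER_TB = 1_000  # USD per TB per year in Hadoop

def annual_savings(offloaded_tb: float) -> float:
    """Yearly savings from moving `offloaded_tb` of cold data to Hadoop."""
    return offloaded_tb * (EDW_COST_PER_TB - HADOOP_COST_PER_TB)

print(annual_savings(10))  # offloading 10 TB saves $990,000 per year
```

Even modest offload volumes add up quickly at a 100-to-1 cost ratio.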
Our customers gain the business insights and outcomes they seek with a complete suite of software, hardware and services solutions that access and analyze data, no matter where it is stored on the network. After all, the power of data is not just in the ability to access it but to use it to change behavior or the way you run your business.
Not only do we connect more people, processes, data, and things than any other company, we can also bring analytics to data wherever it is—no matter how remote—to turn information into insights almost instantly. More to come in my next blog about Cisco’s analytics portfolio and how it’s helping tackle the next major IoE challenge: extracting valuable insight from your data.
To learn more about the benefits of Cisco analytics solutions and the power of our integrated infrastructure for big data, please join us for a webcast at 9 AM Pacific time on October 21st entitled ‘Unlock Your Competitive Edge with Cisco Big Data and Analytics Solutions.’ #UnlockBigData
To learn more about Cisco Data Virtualization, check out our page.
With nearly 500 attendees joining together at the Waldorf Astoria in New York City, the fifth annual Data Virtualization Day on October 1st, 2014 was the largest ever, 50% bigger than 2013’s record-setting event. From kickoff to closing reception, the vanguard of data virtualization gathered to explore the latest trends, meet fellow innovators and drive data virtualization adoption forward.
The Importance of Data Virtualization
The hot topic of the day was the use of data virtualization to connect the increasingly distributed data created by the acceleration of big data, the cloud and the Internet of Everything (IoE), as organizations seek to gain advantage from these game-changing technologies. Cisco Data Virtualization is critical infrastructure, accelerating new capabilities, experiences, and opportunities by connecting device data, big data, data in the cloud, and traditional enterprise data in new and extraordinary ways.
Cisco’s Mike Flannagan, General Manager of the Data Analytics Business Group, kicked off the day highlighting the explosion of connected devices in the IoE. With 50 billion connected devices expected by 2020, Mike noted that the business opportunities and data integration challenges are unprecedented.
Supporting the business’s unquenchable thirst for data, CIS 7.0 Business Directory is the first data virtualization offering designed exclusively for business self-service.
To respond to the expanding data technology universe, CIS 7.0 Data Source SDK will speed development of high-performance data virtualization adapters for emerging and industry-specific data sources.
And CIS 7.0 Deployment Manager responds to accelerating data distribution, which in turn leads to mega-scale data virtualization deployments.
Jim also foreshadowed Cisco’s continued innovation agenda, including plans to:
Deliver data virtualization at Intercloud scale
Provide directory, abstraction, federation, security, lineage and more to create more mature Hadoop environments
Address edge-to-center data challenges resulting from the integration of data in motion and data at rest in a world of 50 billion connected devices
Customer Successes Highlighted
Alasdair P. Anderson, SVP Engineering at HSBC, led off the customer case studies by describing the bank’s expansive future-state data architecture based on Hadoop and data virtualization. Across 65 petabytes of active data, 80 countries, 60 million customers and 7,000 systems, data virtualization lowers total cost of ownership, improves agility, and enables greater business self-service.
John Wrenn, VP Information Technology, Enterprise Applications at Flextronics, next discussed how Flextronics uses data virtualization to provide data as a service for a global supply chain that spans 40 distribution centers, 200 manufacturing centers and 20 design centers. In just over one year, John’s team has used Cisco Data Virtualization to integrate more than 500 sources, allowing IT to match the pace of business.
Data Virtualization Leadership Award Winners Announced
Each year, the Data Virtualization Leadership Awards are announced at Data Virtualization Day. Past winners from Barclays, Compassion International and Pfizer joined Cisco on stage to recognize this year’s winners including:
Data Virtualization Champion Awards: Paul Dzacko, Lead Architect, Risk Systems, BMO and James Evans, Architect & Project Manager, Client Portal, HSBC in recognition of their leadership in consistently achieving and promoting data virtualization’s value across their organizations and the broader data integration market.
High Impact Award: Victor Campbell, Principal Architect, Long Island Power Authority (PSEG), in recognition of his data virtualization leadership in an environment where the result was high impact and critical to the business. See the story here.
Agility Award: Pratima Botcha, Sr. Technical Architect, Information Technology, AT&T Services for her work in enhancing business agility through use of data virtualization technology and methods, rapidly establishing a path for high value across the organization.
Curious about OpenStack? Don’t miss this episode of Engineers Unplugged, where Kenneth Hui (@hui_kenneth) and Gabriel Chapman (@bacon_is_king) explain the difference between OpenStack the project, product, and service using bacon as an analogy. Don’t watch hungry.
Cue the unicorns.
Want to be Internet Famous? Act now! Join us for our next shoot: VMworld Barcelona. Tweet me @CommsNinja!
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Perhaps you’ve seen the shirts. Maybe you’ve joined in or listened to an episode of Cisco Champion Radio. Or maybe you simply cannot resist learning new things and having access to experts in your area of technical expertise.
Join us: submit your Cisco Champion for Data Center nomination today!
This will probably date me, but when I started in telecommunications a few years back, 10-Mbps thin-net Ethernet was the cool new technology. I used to think, “Who would ever need that much bandwidth?” Since then, IT technology has changed dramatically, with applications continually demanding more and more bandwidth. Ethernet switching capacity has advanced by leaps and bounds to keep pace with demand, ratcheting up connectivity speeds from 10 Mbps to 100 Gbps. For most data centers today, 1- and 10-Gbps connectivity is commonplace. But now 100 Gbps is quickly gaining traction in the vertical markets that require the highest performance. If history is any indication, 100 Gbps will be commonplace in most data centers in the not too distant future.
To meet the high-performance demands of today’s service providers, research labs, and large enterprises, Cisco started shipping a 12-port 100-Gbps module for the Cisco Nexus® 7700 platform switches about a year ago.
Nexus 7700 100G, 12 port module
The module is based on the Cisco® F3 chip, which offers the industry’s most comprehensive data center feature set for the core and the data center interconnect, including multicast, Multiprotocol Label Switching (MPLS), Virtual Extensible LAN (VXLAN), Cisco Overlay Transport Virtualization (OTV), and Cisco Locator/ID Separation Protocol (LISP). The module was designed to deliver line-rate performance with a total switching capacity of 1.2 terabits per second (Tbps). So, theoretically, a fully loaded Cisco Nexus 7700 18-Slot Switch chassis with 192 100-Gbps ports could deliver up to 38 Tbps of bidirectional throughput. No matter how you slice and dice it, that’s a lot of throughput—enough to meet the demands of any network.
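The headline numbers follow directly from the module math. A quick sanity check of the figures (assuming the 18-slot chassis dedicates 2 slots to supervisors, leaving 16 payload slots; that slot split is my assumption, not stated in the post):

```python
# Sanity-checking the Nexus 7700 throughput figures quoted above.
# Assumption (not stated in the post): 2 of the 18 chassis slots hold
# supervisors, leaving 16 slots for 12-port 100-Gbps modules.
PORTS_PER_MODULE = 12
PORT_SPEED_GBPS = 100
IO_SLOTS = 16

ports = IO_SLOTS * PORTS_PER_MODULE                           # 192 ports
per_module_tbps = PORTS_PER_MODULE * PORT_SPEED_GBPS / 1000   # 1.2 Tbps
bidirectional_tbps = ports * PORT_SPEED_GBPS * 2 / 1000       # 38.4 Tbps

print(ports, per_module_tbps, bidirectional_tbps)
```

The 38.4 Tbps bidirectional figure rounds to the “up to 38 Tbps” quoted in the post.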
As a matter of fact, 192 100-Gbps ports with 38-Tbps throughput would make the Cisco Nexus 7700 18-Slot Switch the industry’s highest-density 100-Gbps switch, with the industry’s highest throughput rate. To verify this industry leadership, we put the switch to the test.
Cisco commissioned Miercom to conduct an independent performance test on a fully loaded Cisco Nexus 7700 18-Slot Switch with 192 100-Gbps ports. One of the first challenges Miercom faced was how to generate 38 Tbps of traffic to test line-rate performance. Miercom called upon Ixia’s world-renowned labs to help. The solution called for multiple Ixia Xcellon modules in Ixia’s iSimCity lab to conduct the tests (Figure 2).
Figure 2. Ixia test lab setup
Along with the raw throughput testing, Miercom also tested critical Cisco Nexus 7700 platform features to see how they would perform under full load. Tested features included MPLS, IPv4/IPv6 multicast, and hitless In Service Software Upgrade (ISSU).
After the testing was complete, the Cisco Nexus 7700 18-Slot Switch proved that it offers the industry’s highest 100-Gbps density and performance with line-rate services and exceptional availability. This level of scale provides customers with many years of investment protection as they transition from 1 and 10 Gigabit Ethernet to 40 and 100 Gigabit Ethernet architectures in the future.
Robert Smithers, CEO of Miercom, summed up the test results nicely: “Miercom independently exercised and evaluated Cisco Systems Nexus 7718 and was frankly stunned by the incredible power and throughput of this system, coupled with consistent low latency and latency variation, as well as solid MPLS support and high-availability features. The first switch we have independently tested with 192 x 100GE ports, the Cisco Nexus 7718 is awarded Miercom Performance Verified in our ongoing Data-Center-Class 100GE Switch Study.”
For the full details of the test, check out the comprehensive Miercom test report and accompanying test/results video. Also, here is what Miercom and Ixia had to say in their press releases.
Here’s a quick summary of the Miercom test results:
Testing found the Cisco Nexus 7718 can forward at line rate on all 192 of its 100GE ports – delivering over 38 Terabits/s of bidirectional traffic
Testing confirmed the Cisco 7718 can distribute real-world IPv4 and IPv6 multicast traffic at wire speed, with each of 191 receiver ports handling 1,250 IGMPv2 groups
The Cisco 7718 can process real-world MPLS traffic at line rate on all 192 of its 100GE ports, with no loss and low latency
Testing confirmed that an active Supervisor module or fabric module can be replaced with no packet loss, with IPv4 and v6 traffic running at high capacity on all of its 100GE ports
The Cisco 7718 executed an in-service software upgrade, with IPv4 and v6 traffic running at high capacity on all 192 of its 100GE ports, with no loss