
Data Virtualization: Live at Cisco Live! San Francisco

It has been a great year for Data Virtualization at Cisco Live! Milan, Melbourne, and Toronto were fantastic opportunities to introduce Data Virtualization to Cisco customer and partner audiences. And we have saved the best for last, with multiple activities at Cisco Live! San Francisco.

We kick things off on Monday, May 18, with a by-invitation program for Cisco Data Virtualization customers and prospects. We start the day at 3:00 with a special pass to John Chambers’ keynote address. This is followed by a reception, a data virtualization demo, and a tour in the World of Solutions hall. And we close the evening with dinner at one of San Francisco’s finest restaurants. Participants in this program return on Wednesday night for a special performance by Lenny Kravitz. If you would like to join us, please contact Paul Torrento at ptorrent@cisco.com.

For those of you attending the full event, Data Virtualization is also featured in two sessions, both entitled Driving Business Outcomes for Big Data Environment. I will lead a quick summary session on Thursday at 11:15am, with Jim Green providing a deeper-dive technical session from 11:30-12:30 that day. In these sessions we will address one of the major issues organizations face as a consequence of exponential data growth: the huge expense of upgrading capacity in their enterprise data warehouses. To avoid this spend, customers are looking for lower-cost alternatives, such as offloading infrequently used data to Hadoop. In these sessions you will find out about Cisco’s complete solution, combining Unified Computing System hardware, Data Virtualization software, and a Services methodology.

Please also stop by the Data Virtualization booth in the Cisco Services pavilion where we can chat about your business outcome objectives and how data virtualization can help.

And if you can’t make it to Cisco Live! San Francisco, no worries. Just check out the recording of my colleague Peter Tran’s session, Utilizing Data Virtualization to Create More Business Agility and Better Decision-Making, from Cisco Live! Milan. It’s a great crash-course introduction to data virtualization.


As Technology Changes ‘Everything,’ Don’t Forget About People

In a constantly changing world, getting the right talent focused on the most pressing challenges is essential — not just for companies, but for service providers, cities, and countries.

Today, the key driver of that rapid change is technology, particularly the explosion in connectivity known as the Internet of Everything (IoE). Cisco predicts that IoE will have connected 50 billion “things” by 2020, compared to 10 billion today. But for all the talk of things, IoE is not just about embedding sensors in shoes, jet engines, refrigerators, and shopping carts. The true opportunity arises when people, process, data, and things are connected in startling new ways.

In such an environment, collaboration is critical. Indeed, IoE-related innovations have the potential to improve and transform our world in profound ways. But no one company can solve these challenges. They will require partnerships and the open sharing of ideas and talent.

Technology companies, in particular, will need to change the ways in which they utilize their talent. For many decades, there was one way to access talent — by hiring it. Today, workforces are flexible and may be spread across time zones and continents. Knowledge workers still contribute as employees on company payrolls, of course. But increasingly, they are just as likely to collaborate on a specific project as partners or as subject-matter experts sharing knowledge within cross-functional or cross-industry groups.

That is why I feel so strongly about a recent out-of-court settlement in Silicon Valley regarding the free flow of talent from one organization to another. Apple, Google, Intel, and Adobe agreed to pay more than $300 million to 64,000 engineers who claimed that the companies’ hiring policies were hindering their career paths and access to higher salaries.



Hortonworks Data Platform with Cisco UCS, and a note on the incredible performance of Hive 13

Undoubtedly, Big Data is becoming an integral part of the enterprise IT ecosystem across major industry verticals, and Apache Hadoop is emerging as almost synonymous with it, serving as the foundation of the next-generation data management platform. Sometimes referred to as a Data Lake, this platform serves as the primary landing zone for data from a wide variety of data sources. Traditional and several new application software vendors have been building the plumbing -- in software terms, data connectors and data movers -- to extract data from it for further processing. New to Apache Hadoop is YARN, which is essentially an operating system for Big Data, enabling multiple workloads -- batch, interactive, streaming, and real-time -- to coexist on a single cluster.
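As a concrete, hedged illustration of that multi-workload view, the sketch below polls the YARN ResourceManager's REST API and tallies running applications by type. The ResourceManager hostname is a placeholder for your own cluster; 8088 is the ResourceManager's default web port.

```python
# Minimal sketch: summarize the workload mix on a YARN cluster via the
# ResourceManager REST API (GET /ws/v1/cluster/apps). The hostname is a
# placeholder; 8088 is the ResourceManager's default web UI/REST port.
import json
from collections import Counter
from urllib.request import urlopen

RM_APPS_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/apps"

with urlopen(RM_APPS_URL) as resp:
    payload = json.load(resp)

# "apps" is null when the cluster is running no applications yet.
apps = (payload.get("apps") or {}).get("app", [])

# applicationType distinguishes MAPREDUCE, TEZ, and other frameworks --
# batch and interactive workloads sharing one cluster under YARN.
mix = Counter(app.get("applicationType", "UNKNOWN") for app in apps)
for app_type, count in mix.most_common():
    print(f"{app_type:<12} {count}")
```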

The Hortonworks Data Platform combines the most useful and stable versions of Apache Hadoop and its related projects into a single tested and certified package. Cisco has been partnering with Hortonworks to provide an industry-leading platform for enterprise Hadoop deployments. The Cisco UCS solution for the Hortonworks Data Platform is based on the Cisco UCS Common Platform Architecture (CPA) Version 2 for Big Data -- a popular platform for Data Lakes, widely adopted across major industry verticals. It features single-connect architecture, unified management, advanced monitoring capabilities, and seamless management and data integration (plumbing) with other enterprise application systems based on Oracle, Microsoft, SAS, SAP and others.

We are excited to see several joint wins with Hortonworks in the service provider, insurance, retail, healthcare and other sectors. The joint solution is available in three reference architectures -- Performance-Capacity Balanced, Capacity Optimized, and Capacity Optimized with Flash -- all supporting up to 10 racks of 16 servers each without additional switches. Scaling beyond 10 racks (160 servers) can be achieved by interconnecting domains using Cisco Nexus 6000/7000/9000 Series switches, scaling to thousands of servers and hundreds of petabytes of storage, all managed from a single pane using Cisco UCS Central.

New to this partnership is Hortonworks Data Platform 2.1, which includes Apache Hive 13, a significantly faster release than the previous-generation Hive 12. We have jointly conducted extensive performance benchmarking using 20 queries derived from the TPC-DS Benchmark -- an industry-standard benchmark for decision support systems from the Transaction Processing Performance Council (TPC). The tests were conducted on a 16-node Cisco UCS CPA v2 Performance-Capacity Balanced cluster using a 30TB dataset. We observed about a 300% performance acceleration for some queries with Hive 13 compared to Hive 12. See Figure 1.

Additional performance improvements are expected with the GA release. What does this mean? (i) First of all, Hive brings SQL-like capabilities -- SQL being the most common and expressive language for analytics -- to petabyte-scale datasets in an economical manner. (ii) Hadoop becomes friendlier for SQL developers and SQL-based business analytics platforms. (iii) Such performance improvements (from Hive 12 to Hive 13) make migrations from proprietary systems to Hadoop even more compelling. More is coming. Stay tuned!

Figure 1: Hive 13 vs. Hive 12


Disclaimer: The queries listed here are derived from the TPC-DS Benchmark. These results cannot be compared with published TPC-DS Benchmark results. For more information, visit www.tpc.org.
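For readers who want to experiment with these gains themselves: much of the Hive 13 speedup comes from the Stinger initiative work that shipped in Hive 0.13, chiefly the Apache Tez execution engine and vectorized query execution over ORC data. Below is a minimal, hedged sketch of turning those features on for a session. The `hive` CLI on the path, the ORC-backed store_sales table, and the query itself are assumptions about your environment, not the benchmark queries above.

```python
# Minimal sketch: enable the Hive 0.13 ("Stinger") features behind much of
# the speedup -- the Tez execution engine and vectorized execution -- then
# run a simple aggregation. Assumes the `hive` CLI is on PATH and that
# store_sales exists as an ORC-format table; both are placeholders.
import subprocess

QUERY = """
-- Run on Tez instead of classic MapReduce (new in Hive 0.13)
SET hive.execution.engine=tez;
-- Vectorized execution processes ORC rows in batches rather than one at a time
SET hive.vectorized.execution.enabled=true;
SELECT ss_store_sk, SUM(ss_net_paid) AS net_paid
FROM store_sales
GROUP BY ss_store_sk;
"""

subprocess.run(["hive", "-e", QUERY], check=True)
```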


Mining Copper Ore — and Digital Insights — in the Internet of Everything Economy

The Internet of Everything (IoE) is a juggernaut of change, transforming organizations in profound ways. It sows disruption, and it grants enormous opportunities. But this sweeping wave of change is not reserved for what we normally think of as “technology companies.” In the IoE economy, even seemingly “analog” endeavors must gain network connectivity, no matter how venerable a company’s roots or how old its traditions.

In a world where Everyone Is a Tech Company, there are some great examples of older companies that are heeding this new reality. Retail, manufacturing, transportation, and education are just a few of the places where people, process, data, and things are being connected in startling new ways. Companies that are ahead of the IoE transformation curve will ensure their competitiveness in marketplaces that are ever more vulnerable to disruption.

Dundee Precious Metals provides a great example of a company that is embracing change. For example, this far-flung global organization runs Europe’s largest mine in Chelopech, Bulgaria, from which it ships gold-rich copper ore to a smelter in Namibia. Yet through IoE-related technologies, executives at the company’s headquarters in Toronto, Canada, have gained unprecedented visibility into all aspects of their operations.

The end result? A boon to safety, efficiency, and productivity.



How Data Virtualization Helps Data Scientists

By now it is clear that big data analytics opens the door to unprecedented analytic opportunities for business innovation, customer retention and profit growth. However, a shortage of data scientists is creating a bottleneck as organizations move from early big data experiments into larger scale adoption. This constraint limits big data analytics and the positive business outcomes that could be achieved.

Click on the photo to hear from Comcast’s Jason Hull, Data Integration Specialist, about how his team uses data virtualization to get what they need done, faster.

It’s All About the Data

As every data scientist will tell you, the key to analytics is data. The more data the better, including big data as well as the myriad other data sources both in the enterprise and across the cloud. But accessing and massaging this data, in advance of data modeling and statistical analysis, typically consumes 50% or more of any new analytic development effort.

• What would happen if we could simplify the data aspect of the work?
• Would that free up data scientists to spend more time on analysis?
• Would it open the door for non-data scientists to contribute to analytic projects?

SQL is the key. Because of its ease and power, it has been the predominant method for accessing and massaging data for the past 30 years. Nearly all non-data scientists in IT can use SQL to access and massage data, but very few know MapReduce, the traditional programming model used to access data from Hadoop sources.
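To make that contrast concrete, here is an illustrative sketch using a hypothetical page_views table: the aggregation is one line of SQL for anyone who knows it, while the Hadoop Streaming equivalent requires a hand-written mapper and reducer.

```python
# Illustrative contrast (table and columns are hypothetical). The SQL version:
#
#   SELECT user_id, COUNT(*) FROM page_views GROUP BY user_id;
#
# The MapReduce equivalent below is written for Hadoop Streaming, which
# pipes tab-separated records through these functions on stdin/stdout.
import sys

def mapper():
    # Emit "user_id<TAB>1" for each record; user_id is the first field.
    for line in sys.stdin:
        user_id = line.split("\t")[0]
        print(f"{user_id}\t1")

def reducer():
    # Streaming delivers mapper output sorted by key, so all counts for a
    # given user_id arrive together and can be summed in a single pass.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # e.g. hadoop streaming: -mapper "views.py map" -reducer "views.py reduce"
    mapper() if sys.argv[1] == "map" else reducer()
```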

How Data Virtualization Helps

“We have a multitude of users…from BI to operational reporting, they are constantly coming to us requesting access to one server or another…we now have that one central place to say ‘you already have access to it’ and they immediately have access rather than having to grant access outside of the tool” -Jason Hull, Comcast

Data virtualization offerings, like Cisco’s, can help organizations bridge this gap and accelerate their big data analytics efforts. Cisco was the first data virtualization vendor to support Hadoop integration with its June 2011 release. This standardized SQL approach augments specialized MapReduce coding of Hadoop queries. By simplifying access to Hadoop data, organizations could for the first time use SQL to include big data sources, as well as enterprise, cloud and other data sources, in their analytics.

In February 2012, Cisco became the first data virtualization vendor to enable MapReduce programs to easily query virtualized data sources, on-demand with high performance. This allowed enterprises to extend MapReduce analyses beyond Hadoop stores to include diverse enterprise data previously integrated by the Cisco Information Server.

In 2013, Cisco maintained its big data integration leadership by updating its support for Hive access to the leading Hadoop distributions, including Apache Hadoop, Cloudera Distribution (CDH) and Hortonworks Data Platform (HDP). In addition, Cisco now also supports access to Hadoop through HiveServer2, and to Cloudera CDH through Impala.
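To give a feel for the HiveServer2 route, here is a minimal sketch using the open-source PyHive client. The host, username, and web_logs table are hypothetical, and this illustrates plain SQL access to Hadoop generally rather than any Cisco Information Server API.

```python
# Minimal sketch: plain SQL against Hadoop via HiveServer2, using the
# open-source PyHive client (pip install pyhive). Host, username, and the
# web_logs table are placeholders -- not a Cisco-specific API.
from pyhive import hive

# 10000 is HiveServer2's default Thrift port.
conn = hive.Connection(host="hiveserver2.example.com", port=10000,
                       username="analyst")
cursor = conn.cursor()
cursor.execute(
    "SELECT page, COUNT(*) AS views FROM web_logs GROUP BY page LIMIT 10"
)
for page, views in cursor.fetchall():
    print(page, views)
cursor.close()
conn.close()
```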

Others, beyond Cisco, recognize this beneficial trend. In fact, Rick van der Lans, noted Data Virtualization expert and author, recently blogged on future developments in this area in Convergence of Data Virtualization and SQL-on-Hadoop Engines.

So if your organization’s big data efforts are slowed by a shortage of data scientists, consider data virtualization as a way to break the bottleneck.
