
Cisco Data Virtualization

It is an exciting day for Cisco Data Virtualization, our data integration software that connects all kinds of data from across the network and makes it appear as if it were in one place, in one consolidated view. To see it in action, check out this video on how we replaced Denodo with our own data virtualization technology at Cisco.
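
For readers new to the concept, the pattern behind data virtualization is easy to sketch: answer a query by reaching out to the live sources and joining the results in flight, rather than copying everything into one store first. Here is a minimal, illustrative Python sketch of that pattern (the sources, schemas and names below are invented for illustration; this is not the CIS API):

    import csv
    import io
    import sqlite3

    # Source 1: a relational "CRM" system (simulated with in-memory SQLite).
    crm = sqlite3.connect(":memory:")
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
    crm.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                    [(1, "Acme", "EMEA"), (2, "Globex", "AMER")])

    # Source 2: a flat-file order feed (simulated with an in-memory CSV).
    feed = io.StringIO("customer_id,order_total\n1,1200\n2,340\n1,75\n")
    orders = list(csv.DictReader(feed))

    # The "virtual view": join the two sources on demand, without ever
    # persisting a combined copy anywhere.
    def revenue_by_customer():
        totals = {}
        for row in orders:
            cid = int(row["customer_id"])
            totals[cid] = totals.get(cid, 0.0) + float(row["order_total"])
        for cid, name, region in crm.execute("SELECT id, name, region FROM customers"):
            yield {"customer": name, "region": region, "revenue": totals.get(cid, 0.0)}

    for rec in revenue_by_customer():
        print(rec)

Commercial platforms layer query optimization, caching, and security on top of this basic federation pattern, but the core idea is the same: the data stays where it lives.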

Today at Data Virtualization Day in New York City, I will be joined by customers, partners and industry experts as we launch a major update to our flagship data virtualization platform, Cisco Information Server (CIS). CIS 7.0 will enable IT departments to deliver self-service data access and business agility like never before.

My favorite part of Data Virtualization Day is the time I get to spend with our customers and partners, talking about shared successes and upcoming product enhancements. Since Composite Software joined Cisco through our July 2013 acquisition, data virtualization has been a key piece of our portfolio and a vital solution to the challenges our customers face from the Internet of Everything (IoE), Cloud and Big Data trends.

Data is exploding now more than ever before. The majority of it is generated automatically by connected devices, with up to 50 billion devices expected by 2020. This explosion is the result of the IoE: the hyperconnection of people, process, data, and things that will create new capabilities, richer experiences, and unprecedented economic opportunities for the businesses, individuals and countries that have ‘IoE Ready’ strategies, infrastructure and technical capabilities in place.

Cisco Data Virtualization is a key part of being ‘IoE Ready’, connecting device data, big data, cloud data and traditional enterprise data in new and extraordinary ways. Organizations that tap into this data pool will be able to use it strategically: monitoring customer sentiment and behaviors, identifying market and competitive changes, and anticipating market transitions, all while optimizing the performance of assets and operations and achieving the utmost business agility. That ability will separate the market leaders from the rest of the pack and turn the challenges of the IoE, Cloud and Big Data into amazing opportunities.

Many organizations are shifting traditional data center environments to cloud data environments in order to optimize their data center investment, leading to more hybrid IT environments. Cisco Data Virtualization truly enables a hybrid IT model by helping our customers live in a “world of many clouds”, connecting people, communities and organizations with intelligent networking capabilities that unify resources within and between data centers and across clouds. Our customers can now deploy any hybrid IT mix they desire while retaining the access and insights they require, free from the constraints of traditional data center operations and economics.

With the pace of worldwide data growth accelerating, organizations that use innovative methods for storing, accessing and analyzing data will thrive against their competition. There has never been a more exciting time in the history of technology, and data virtualization is at the heart of how our customers are gaining a business advantage from all of the new data at their fingertips.

Happy Data Virtualization Day!

 

Learn More

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow @CiscoDataVirt #DVDNYC


Data Driven Decision Making at Pfizer – A Case Study in Data Virtualization

Finding a molecule with the potential to become a new drug is complicated. It’s time-consuming. Fewer than 10 percent of the molecules or compounds discovered are promising enough to enter the development pipeline, and fewer still ever come to market. At Pfizer, without data virtualization, it would be even more challenging.

Years of Data, Thousands of Decisions

The pipeline from discovery to licensing unfolds in phases over 15-20 years, and few compounds complete the journey. The initial study phase represents a multimillion-dollar investment decision. Each succeeding phase (proof-of-concept study, dose range study, and large-scale population study) represents an order-of-magnitude larger investment and risk than the one before.

Senior management and portfolio managers need to know:

  • Which projects should the company fund?
  • Which compounds are meeting Pfizer’s high standards for efficacy and safety?
  • What are scientists discovering in clinical trials?

Portfolio and project managers routinely make complex tactical decisions such as:

  • How to allocate scarce R&D resources across different projects?
  • How to prioritize multiple development scenarios?
  • What is the impact of a clinical trial result on downstream manufacturing?

Before Pfizer adopted Cisco Data Virtualization, getting useful data to answer these questions took weeks or months. Why so long? The problem has several dimensions. First, each phase of development generates massive amounts of data and requires extensive analysis to provide an accurate picture. Second, data comes from Pfizer research scientists all over the world, from physicians, clinical trials, product owners and managers, and marketing teams, and from hundreds of different back-end systems. Third, the scientific method is based on trial and error, with unpredictable results; no two decisions are alike, so the specific data required for each decision is unique.

Data Virtualization Provides the Solution

To support their decision-making needs, Pfizer needed a solution that would allow them to pull all this diverse information together in an agile, ad hoc way. Cisco Data Virtualization, agile data integration software that makes it easy to access and gather relevant data no matter where the data sources reside, provided the solution.

With Cisco Data Virtualization, Pfizer’s research and portfolio data resides in one virtual place, providing “one version of the truth” that everyone can use to address the myriad decisions that arise. Further, because the data is virtualized rather than consolidated, infrastructure costs are reduced as well.
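
As a rough illustration of virtualization versus consolidation, the sketch below keeps two “source systems” separate and exposes a single view across them, so every consumer queries the same definition while no combined copy is ever built. It is plain Python and SQLite with invented compound names; it does not depict Pfizer’s actual systems or the CIS product:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # Simulate two live source systems as attached databases; in a real
    # deployment these would remain separate physical systems.
    con.execute("ATTACH DATABASE ':memory:' AS research")
    con.execute("ATTACH DATABASE ':memory:' AS portfolio")

    con.execute("CREATE TABLE research.trials (compound TEXT, phase INTEGER, outcome TEXT)")
    con.execute("CREATE TABLE portfolio.projects (compound TEXT, budget_musd REAL)")
    con.executemany("INSERT INTO research.trials VALUES (?, ?, ?)",
                    [("CMPD-001", 2, "promising"), ("CMPD-002", 1, "halted")])
    con.executemany("INSERT INTO portfolio.projects VALUES (?, ?)",
                    [("CMPD-001", 45.0), ("CMPD-002", 12.5)])

    # The virtual layer: one view that everyone queries -- "one version
    # of the truth" -- while the data stays in its source systems.
    # (A TEMP view may reference tables in any attached database.)
    con.execute("""
        CREATE TEMP VIEW compound_status AS
        SELECT p.compound, p.budget_musd, t.phase, t.outcome
        FROM portfolio.projects AS p
        JOIN research.trials AS t ON t.compound = p.compound
    """)

    for row in con.execute("SELECT * FROM compound_status"):
        print(row)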

According to Pfizer, “data virtualization is far less expensive than building specialized data marts to answer questions. With Cisco Data Virtualization, our portfolio teams get answers in hours or days for about one-tenth the cost.”

This data virtualization progress has not gone unnoticed. At Data Virtualization Day 2012, Pfizer was awarded the “Data Virtualization Champion” award for consistently achieving and promoting data virtualization value within the organization and across the industry.

Learn from other leaders in the industry and see who wins this year’s Data Virtualization Leadership Awards at Data Virtualization Day 2014 on October 1. Register now!

 

Learn More

To read more about this Pfizer case study, click here.

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow us @CiscoDataVirt #DVDNYC


Unlock The Value of Big Data with Cisco Unified Computing System

Big Data is not just about gathering tons of data: the digital exhaust from the internet, social media, and customer records. The real value is in being able to analyze the data to achieve a desired business outcome.

Those of us who follow the Big Data market closely never lack for something new to talk about. There is always a story about how a business is using Big Data in a different way, or about some new breakthrough that has been achieved in the expansive big data ecosystem. The good news for all of us is that we have clearly only scratched the surface of the Big Data opportunity!

With the increasing momentum of the Internet of Everything (IoE) market transition, there will be 50 billion devices connected to the Internet by 2020, just five years from now. As billions of new people, processes, and things become connected, each connection will become a source of potentially powerful data for businesses and the public sector. Organizations that can unlock the intelligence in this data can create new sources of competitive advantage, not just from more data but from better access to better data.

What we haven’t heard about yet are examples of enterprises applying the power of this data pervasively across their organizations, giving them a competitive edge in marketing, supply chain, manufacturing, human resources, customer support, and many more departments. An enterprise that can apply the power of Big Data throughout its organization can create multiple, simultaneous sources of ongoing innovation, each one a constantly renewable or perpetual competitive edge. Looking forward, the companies that can accomplish this will be the ones setting the pace for the competition to follow.

Cisco has been working on making this vision of pervasive Big Data use within enterprises a reality. We’d like to share this vision with you in an upcoming blog series and executive webcast entitled ‘Unlock Your Competitive Edge with Cisco Big Data Solutions’, which will air on October 21st at 9:00 AM PT.

Register Now


I have the honor of kicking off the multi-part blog series today. Each blog will focus on a specific Cisco solution that customers can use to unlock the power of their big data, enterprise-wide, and gain a competitive edge. I’m going to start the discussion by highlighting the infrastructure implications of Big Data in the Internet of Everything (IoE) era, focusing initially on the Cisco Unified Computing System.

Enterprises that want to make strategic use of data throughout their organizations will need to take advantage of the power of all types of data. As the IoE increasingly takes root, organizations will be able to access data from virtually anywhere in their value chain. No longer restricted to small sets of structured, historical data, they’ll have more comprehensive and even real-time data, including video surveillance feeds, social media output, and sensor data, that allows them to monitor behavior, performance, and preferences. These are just a few examples, but they underscore the fact that not all data is created equal. Real-time data coming in from a sensor may be valuable for only minutes, or even seconds, so it is critical to be able to act on that intelligence as quickly as possible. From an infrastructure standpoint, that means enterprises must be able to place computing resources as close as possible to the many sources and users of data. At the same time, historical data will continue to be critical to Big Data analytics.

Cisco UCS Common Platform Architecture for Big Data

Cisco encourages our customers to take a long-term view and select a Big Data infrastructure that is distributed and designed for high scalability, management automation, outstanding performance, low TCO, and the comprehensive security approach needed for the IoE era. And that infrastructure must be open, because there is tremendous innovation going on in this industry, and enterprises will want to be able to take full advantage of it.

One of the foundational elements of our Big Data infrastructure is the Cisco Unified Computing System (UCS). UCS integrated infrastructure uniquely combines server, network and storage access, and has recently claimed the #1 x86 blade server market share position in the Americas. The same innovation that propelled us to the leading blade market share position is now being applied directly to Big Data workloads. With its highly efficient infrastructure, UCS lets enterprises manage up to 10,000 UCS servers as if they were a single pool of resources, so they can support the largest data clusters.

UCS Mini

Because enterprises will ultimately need to capture intelligence both from data at rest in the data center and from data at the edge of the network, Cisco’s broad portfolio of UCS systems gives our customers the flexibility to process data where it makes the most sense. For instance, our UCS C240 rack server has been extremely popular for Hadoop-based Big Data deployments at the data center core, while Cisco’s recently introduced UCS Mini is designed to process data at the edge of the network.

Because the entire UCS portfolio utilizes the same unified architecture, enterprises can choose the right compute configuration for each workload, with the advantage of using the same powerful management and orchestration tools to speed deployment, maximize availability, and significantly lower operating expenses. The ability to leverage UCS Manager and Service Profiles, Unified Fabric and SingleConnect technology, our virtual interface card technology, and industry-leading performance really sets Cisco apart from the competition.

So, please consider this just an introduction to the first component of Cisco’s “bigger” big data story. To hear more, please make plans to attend our upcoming webcast, ‘Unlock Your Competitive Edge With Cisco Big Data Solutions’, on October 21st.

Register Now

Every Tuesday and Thursday from now until October 21st, we’ll post another blog in the series to provide you with additional details of Cisco’s full line of products, solutions and services.

View additional blogs in the series:

     9/25:    Unlock Big Data with Breakthroughs in Management Automation

     9/30:    Turbocharging New Hadoop Workloads with Application Centric Infrastructure

Please let me know if you have any comments or questions below, or reach me on Twitter at @CicconeScott.


Data Vault and Data Virtualization: Double Agility

Rick van der Lans is data virtualization’s leading independent analyst. So when he writes a new white paper, any enterprise struggling to connect all its data (which is pretty much every enterprise) would be wise to check it out.

Rick’s latest is Data Vault and Data Virtualization: Double Agility. In a nutshell, the paper addresses how enterprises can craftily combine the Data Vault approach to modeling enterprise data warehouses with the data virtualization approach to connecting and delivering data. The result is what Rick calls double agility, as each approach accelerates time to solution in complex data environments.

Data Vault Pros and Cons

Adding new data sources, such as big data and cloud, to an existing data warehouse is difficult. The Data Vault approach provides the extensibility required. This is the first agility.

Unfortunately, from a query and reporting point of view, developing reports directly against a Data Vault-based data warehouse results in complex SQL statements that almost always lead to poor reporting performance. The reason is that Data Vault models distribute data over a large number of tables.
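
To see why, consider a toy Data Vault fragment (the schema below is invented for illustration): hubs hold business keys, links hold relationships, and satellites hold descriptive attributes, so even a trivial “orders per customer” report has to stitch them all back together.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE hub_customer (customer_hk INTEGER PRIMARY KEY, customer_bk TEXT);
    CREATE TABLE hub_order    (order_hk    INTEGER PRIMARY KEY, order_bk    TEXT);
    CREATE TABLE link_customer_order (customer_hk INTEGER, order_hk INTEGER);
    CREATE TABLE sat_customer (customer_hk INTEGER, name TEXT, load_dts TEXT);
    CREATE TABLE sat_order    (order_hk INTEGER, total REAL, load_dts TEXT);

    INSERT INTO hub_customer VALUES (1, 'C001');
    INSERT INTO sat_customer VALUES (1, 'Acme', '2014-01-01');
    INSERT INTO hub_order    VALUES (10, 'O100');
    INSERT INTO sat_order    VALUES (10, 250.0, '2014-01-01');
    INSERT INTO link_customer_order VALUES (1, 10);
    """)

    # Even this trivial report needs four joins; real Data Vault models
    # spread data over far more hubs, links and satellites.
    report_sql = """
    SELECT sc.name, COUNT(ho.order_hk) AS orders, SUM(so.total) AS revenue
    FROM hub_customer hc
    JOIN sat_customer sc         ON sc.customer_hk = hc.customer_hk
    JOIN link_customer_order lco ON lco.customer_hk = hc.customer_hk
    JOIN hub_order ho            ON ho.order_hk = lco.order_hk
    JOIN sat_order so            ON so.order_hk = ho.order_hk
    GROUP BY sc.name
    """
    print(con.execute(report_sql).fetchall())   # [('Acme', 1, 250.0)]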

Losing Agility Due to Data Mart Proliferation

To solve the performance problems with Data Vault, many enterprises have built physical data marts that reorganize the data for faster queries.

Unfortunately, valuable time must be spent designing, optimizing, loading, and managing all these data marts. And any new extension to the enterprise data warehouse must be re-implemented across the impacted marts.

Data Virtualization Returns the Agility

To avoid the data mart workload, yet retain agile warehouse extensibility, Rick has worked with the Netherlands-based system integrator Centennium and Cisco to provide a better, double-agility alternative.

In this new solution, Cisco Data Virtualization, together with a Centennium-defined data modeling technique called SuperNova, replaces all the physical data marts. No valuable time has to be spent designing, optimizing, loading, managing and updating these derived data marts. Data warehouse extensibility is retained, and because the reporting models are virtual rather than physical, they are very easy to create and maintain.
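
The SuperNova technique itself is described in Rick’s paper; purely as a generic illustration of the idea, the sketch below (invented schema) replaces a physical mart with a view that flattens vault tables into a report-friendly shape:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE hub_customer (customer_hk INTEGER PRIMARY KEY, customer_bk TEXT);
    CREATE TABLE sat_customer (customer_hk INTEGER, name TEXT, region TEXT);

    -- The "virtual data mart": a flat, report-friendly shape defined as
    -- a view over the vault tables. There are no loading jobs and no
    -- physical copy; when the vault is extended, only the view changes.
    CREATE VIEW dm_customer AS
    SELECT hc.customer_bk AS customer_id, sc.name, sc.region
    FROM hub_customer hc
    JOIN sat_customer sc ON sc.customer_hk = hc.customer_hk;

    INSERT INTO hub_customer VALUES (1, 'C001');
    INSERT INTO sat_customer VALUES (1, 'Acme', 'EMEA');
    """)

    print(con.execute("SELECT * FROM dm_customer").fetchall())  # [('C001', 'Acme', 'EMEA')]

Because the mart is just a definition, extending it when the vault grows is a metadata change rather than a redesign-and-reload project.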

Meet Rick van der Lans at Data Virtualization Day

To learn more about this innovative solution, as well as data virtualization in general, come to Data Virtualization Day 2014 in New York City on October 1. Rick, along with the equally sharp Barry Devlin, will join me on stage for the Analyst Roundtable. I hope to see you there.

 

Learn More

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow us @CiscoDataVirt #DVDNYC


Innovation Distinguishes Between a Leader and a Follower

Steve Jobs is arguably the most amazing innovator of our times. I recently read some of his thoughts on innovation. His statement “Innovation distinguishes between a leader and a follower” caused me to reflect upon my eight-year association with data virtualization and to consider who in the IT analyst community have been the innovative leaders.

Since 2006, I have worked with over one hundred IT analysts to define and advance the data virtualization market. I even teamed up with one, Judith Davis, to co-author the first book on data virtualization, Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility.

Others, such as Rick van der Lans, author of data virtualization’s second book, Data Virtualization for Business Intelligence Systems: Revolutionizing Data Integration for Data Warehouses, and of the seminal article, The Network is the Database, have contributed mightily to the market’s understanding of data virtualization’s capabilities, advantages and benefits.

The roll call of top analysts doing innovative work continues with Noel Yuhanna of Forrester, who wrote the analyst community’s first research paper on data virtualization in January 2006, Information Fabric: Enterprise Data Virtualization.

Gartner’s Ted Friedman and Mark A. Beyer, and more recently Merv Adrian, Roxane Edjlali, Mei Selvage, Svetlana Sicular and Eric Thoo, have been both descriptive and prescriptive about the use of data virtualization as a data integration delivery method, a data service enabler and a key component in what Gartner calls the Logical Data Warehouse.

Dave Wells, author of TDWI’s data virtualization course, Data Virtualization: Solving Complex Data Integration Challenges, helped bring data virtualization into the mainstream. So did Boulder BI Brain Trust members Claudia Imhoff, Colin White, John O’Brien, Ralph Hughes, John Myers and more, whom I recently wrote about in Rocky Mountains High On Data Virtualization.

Further, myriad analysts have made amazing contributions:

  • The learned trio of Dr. Barry Devlin, Dr. Robin Bloor, and Dr. Richard Hackathorn have pushed the art of the possible.
  • Analyst/practitioners such as Jill Dyche, Mike Ferguson, Rick Sherman, Steve Dine, Evan Levy, David Loshin and William McKnight have, via their hands-on client work, “kept data virtualization grounded on reality street,” to quote Mike Ferguson.
  • And let’s not forget the Massachusetts Waynes: Wayne Eckerson, formerly of TDWI, and Wayne Kernochan, author of the Thoughts From a Software IT Analyst blog. Their voices and insights have proven invaluable.

To quote Gene Roddenberry, “It isn’t all over; everything has not been invented; the human adventure is just beginning.” The same is true for data virtualization. So I look forward to more great insights from these innovators, as well as from a new generation led by Puni Rajah of Canalys and Vernon Turner of IDC.

To see Rick van der Lans and Barry Devlin on stage and gain even more insights from the 2014 Data Virtualization Leadership Award winners, join us at Data Virtualization Day 2014 on October 1 in New York City.

Watch for a sneak peek of Data Virtualization Day 2014.

Learn More

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow us @CiscoDataVirt #DVDNYC
