
Transparently Offloading Data Warehouse Data to Hadoop using Data Virtualization

More data allows for better and more expansive analysis. And better analysis is a critical success factor for businesses today.

But most data warehouses use the once-in-never-out principle when storing data. So whenever new business activities occur, new data is added without removing old data to make room. New data sources, such as data from social media networks, open data sources, and public web services further expand the warehouse. Unfortunately, all this growth comes at a cost.

Is there a way you can have your cake and eat it too?

With Hadoop and Cisco Big Data Warehouse Expansion, you can.

Disadvantages of More Data

While everyone understands the business advantage that can be derived from analyzing more data, not everyone understands the disadvantages, which include:

  • Expensive data storage: Data warehouse costs include hardware costs, management costs, and database server license fees, all of which grow with scale.
  • Poor query performance: The bigger the database tables, the slower the queries.
  • Poor loading performance: As tables grow, loading new data also slows down.
  • Slow backup/recovery: The larger the database, the longer the backup and restore process.
  • Expensive database administration: Larger databases require more database administration including tuning and optimizing the database server, the tables, the buffer, and so on.

Three Options to Control Costs

The easiest way to control data warehouse costs is to simply remove data, especially the less-frequently used or older data. But then this data can no longer be analyzed.

Another option is to move the lesser-used data to tape. This option provides cost savings, and in an emergency, the data can be reloaded from tape. But analysis of that data has now become extremely difficult.

The third option is to offload lesser-used data to cheaper online data storage, with Hadoop the obvious choice. This can provide roughly a 10x cost savings over traditional databases, while retaining the online access required for analysis.

This is the “have your cake and eat it too” option.
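The key to this third option is that the offload stays invisible to analysts: a virtual layer presents one logical table and routes each query to the warehouse for recent data and to Hadoop for offloaded history. The sketch below illustrates that routing idea only; it is not Cisco's implementation, and the table names, rows, and cutoff year are all assumptions made up for illustration.

```python
# Illustrative sketch of transparent offloading (not Cisco's implementation).
# A virtual "sales" table spans two physical stores: the warehouse holds
# recent rows, Hadoop holds offloaded history. Callers see one interface.

OFFLOAD_CUTOFF_YEAR = 2013  # assumption: rows older than this were offloaded

warehouse_sales = [  # hypothetical recent rows kept in the data warehouse
    {"year": 2014, "region": "EMEA", "amount": 120},
    {"year": 2015, "region": "AMER", "amount": 200},
]

hadoop_sales = [  # hypothetical older rows offloaded to Hadoop
    {"year": 2011, "region": "EMEA", "amount": 80},
    {"year": 2012, "region": "AMER", "amount": 95},
]

def query_sales(min_year, max_year):
    """Virtual view: one logical table; routing to stores is transparent."""
    rows = []
    if min_year < OFFLOAD_CUTOFF_YEAR:      # touch Hadoop only when needed
        rows += [r for r in hadoop_sales if min_year <= r["year"] <= max_year]
    if max_year >= OFFLOAD_CUTOFF_YEAR:     # touch the warehouse only when needed
        rows += [r for r in warehouse_sales if min_year <= r["year"] <= max_year]
    return rows

# A query spanning the cutoff reads from both stores, invisibly to the caller.
print(len(query_sales(2012, 2015)))  # prints 3
```

A query confined to recent years never touches Hadoop at all, which is what keeps the hot-path performance of the slimmed-down warehouse intact.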

The Fast Path to Transparent Offloading

Cisco provides a packaged solution called Cisco Big Data Warehouse Expansion, which includes the data virtualization software, hardware, and services required to accelerate all the activities involved in offloading data from a data warehouse to Hadoop.

And to help you understand how it works, Rick van der Lans, data virtualization’s leading independent analyst, recently wrote a step-by-step white paper, Transparently Offloading Data Warehouse Data to Hadoop using Data Virtualization, that explains everything you need to do.

Read The White Paper

Download Transparently Offloading Data Warehouse Data to Hadoop using Data Virtualization here.

Learn More

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow us @CiscoDataVirt.


Riding the Big Data Wave: Service Providers Can Accelerate Big Data Evolution to Unlock New Value

By Bill Gerhardt, Director, IBSG Service Provider

Big Data is the new oil! It has the power to transform economies, make businesses more efficient, and improve our daily interactions as consumers. However, like oil, data is not truly valuable until it has been refined, until it is analyzed and some valuable action is extracted from it. Although it has been the subject of much discussion, Big Data is really in its infancy, which raises the questions, “How will Big Data evolve? And what are the opportunities for service providers to create value in Big Data?”

Catching the Waves of Big Data Evolution

The term Big Data generally refers to the growing scope of data analytics in terms of the variety, velocity, or volume of data involved. Cisco’s Internet Business Solutions Group (IBSG) sees Big Data evolving along three waves and across the three dimensions of data, control, and consumer (see Figure 1). Today, most companies find themselves in the first evolutionary wave, where data and analytics are siloed within specific business processes.


Virtualizing Microsoft SQL on Cisco UCS: The Usual Suspects of Why People Don’t Virtualize SQL Server



Delivering More from Microsoft’s SQL Server 2008 with Cisco’s UCS Server Family

Guest Post by Raghunath Nambiar (UCS Performance Architect) and Frank Cicalese (UCS Systems Engineer)

Data is at the heart of all business applications. Whether it is real-time transaction processing or an enterprise decision support system, we know that data is driving the show.

Microsoft’s SQL Server is the database platform that many enterprises have adopted, as it provides a scalable architecture and attractive price points, supports a multitude of use cases such as OLTP and Data Warehouse configurations, and offers attractive extensions for Business Intelligence modules.

Over the past several months we’ve seen the Cisco UCS server family support a range of SQL Server use cases, resulting in improved performance and cost savings for our customers. The UCS architecture provides key features that can help improve the quality of the SQL Server services you deliver: our extended memory feature and virtualization capabilities are two areas that help improve database performance and raise your SQL Server consolidation ratios.

We have a couple of upcoming webinars on Cisco UCS and Microsoft SQL Server that you should attend. We’ll cover the topics mentioned here and more, such as OLTP and Data Warehouse workloads. They take place on Tuesday, June 28th at 7:00am PDT and 10:00am PDT. Registration is at http://www.cisco.com/go/semreg/urls/44768/1
