Companies lose market opportunities daily because a critical insight or fact sits just out of plain view. Sales can’t connect with the latest marketing programs. Marketing can’t connect with sales data. Shipping can’t connect with product inventory information. Manufacturing can’t connect with sourcing data. And so on.
Enterprises typically have databases that contain the answers to questions like these, yet critical operating information is often siloed in ways that keep stakeholders from the knowledge vital to their tasks, and from the big picture of how the business is performing.
Hyper-Distributed Data: Information Overload
Yet overlooking one dot of data can make the difference between winning and losing new customers. A Forrester research analyst expresses the problem succinctly: “More data management needs equate to more data integration to deliver the knowledge to meet executive demands.”
Data is growing not only in volume but also in velocity, driven by dramatic increases in computing bandwidth and power, from the edge of the network to the cloud. Static, centrally stored data warehouses are giving way to dynamic, transient data that is captured on the fly, then analyzed and acted upon in real time. Data sources have become hyper-distributed. With the advent of the Internet of Things (IoT), data is being created in more ways, by more devices, in more places than ever before, and it is growing exponentially.
Out With the Old. In With the New.
Traditional approaches to database management are falling short. Historically, businesses would design an entire data warehouse schema, including how to extract and transform data, along with a process to load it into the warehouse. Users then would view the stored data through an application. While appropriate for an older world where markets and technology moved more deliberately, this approach is too slow to develop, too dependent on replicated data, and too manually intensive for today’s fast-changing business environment. Data that is out of sight and out of date can be a competitive downfall.
Enterprises are asking, “How can we cope with the rate and volume of data creation? How can we get critical data to the people who need it most? How do we protect our legacy data while investing in the future? How do we liberate the immense library of data trapped inside our company, so that it’s available to everyone?”
Liberation day is here, thanks to data virtualization. Put simply, data virtualization is integration software that makes it easy to access enterprise data, no matter where it resides. Users can query the data as if it existed in a single place. Common virtualization applications include big data or cloud data integration, creation of an enterprise-wide data virtualization layer, extension of an existing data warehouse, and federation of business intelligence (BI) data. The needs will vary by where an enterprise stands in its data management journey, and the flexibility of data virtualization offers a choice of places to start. Above all, with data virtualization a company can leverage all of its enterprise data, eliminating data islands and giving employees access to the information they need, when, where, and how they want it.
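The core idea, querying many sources as if they were one, can be sketched in a few lines. The sketch below is purely illustrative: the class and source names are invented, and this is not the Cisco Data Virtualization API. A virtual layer registers live sources and federates a query across them at request time, so no data is copied into a central store.

```python
# Minimal, hypothetical sketch of a data virtualization layer.
# Sources stay where they are; the layer federates queries on the fly.

class VirtualLayer:
    """Routes one query to every registered source and merges results."""

    def __init__(self):
        self._sources = {}

    def register(self, name, fetch_fn):
        # fetch_fn returns rows (dicts) when called; data stays at the source
        self._sources[name] = fetch_fn

    def query(self, predicate):
        # Federation: pull matching rows from each source at query time
        for name, fetch in self._sources.items():
            for row in fetch():
                if predicate(row):
                    yield {"source": name, **row}

# Two siloed "systems" stand in for a CRM and an inventory database
crm = lambda: [{"customer": "Acme", "region": "West"}]
inventory = lambda: [{"sku": "X-100", "region": "West", "stock": 12}]

layer = VirtualLayer()
layer.register("crm", crm)
layer.register("inventory", inventory)

# One query spans both silos, as if the data lived in a single place
west = list(layer.query(lambda r: r.get("region") == "West"))
```

The key design point is that the layer holds connections, not copies: each `fetch_fn` is invoked at query time, so results always reflect the live source.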
From the results perspective, data virtualization helps businesses respond faster to constantly changing business intelligence needs. In fact, some of our customers using Cisco Data Virtualization have reported that users are realizing business insights 5 to 10 times faster than with traditional data integration. Equally important, IT management has seen time savings of more than 50 percent compared to their old data management methods, which required constant data replication and consolidation. Virtualization’s ability to reduce data management complexity also means that the IT organization can increase utilization of its existing storage and server investments, helping contain hardware expense and easing governance.
Be Prepared—to Succeed: The First Step for Better Data Outcomes.
Connecting the data dots—or data integration—is critically important in gaining business insights. But often overlooked is the need to get the data “right” first—before it is integrated and analyzed. Quality data helps ensure quality analysis and contributes to consistent business outcomes.
With data acquisition accelerating and the volume of collected information multiplying, poor data preparation can dramatically slow business momentum. Aggregating structured and unstructured data from internal and external sources, correcting misspelled words, creating context, eliminating duplicate records and blank fields, and reshaping columns are all essential to preparing data for analytics you can depend on. Even companies that recognize this know that proper data preparation takes considerable time and resources. Customers tell us that every moment spent on data preparation is a moment not spent interpreting analytics and acting on business insights.
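A few of the preparation steps named above, normalizing text, filling blank fields, and eliminating duplicates, can be sketched as follows. This is a hypothetical illustration with invented field names, not code from any data preparation product.

```python
# Illustrative data preparation sketch: normalize, fill blanks, dedupe.
# Field names ("name", "city") are hypothetical.

def prepare(rows):
    seen = set()
    cleaned = []
    for row in rows:
        # Normalize: trim whitespace, unify casing on text fields
        name = row.get("name", "").strip().title()
        city = row.get("city", "").strip().title() or "Unknown"  # fill blanks
        key = (name, city)
        if not name or key in seen:  # drop unusable records and duplicates
            continue
        seen.add(key)
        cleaned.append({"name": name, "city": city})
    return cleaned

raw = [
    {"name": " alice ", "city": "boston"},
    {"name": "Alice",   "city": "Boston"},  # duplicate after normalization
    {"name": "Bob",     "city": ""},        # blank field to fill
    {"name": "",        "city": "Austin"},  # unusable record to drop
]
clean = prepare(raw)
```

Even this toy version shows why ordering matters: deduplication only works reliably after normalization, since " alice "/"Boston" and "Alice"/"boston" are the same record in different clothing.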
That’s why we introduced Cisco Data Preparation earlier this year. It helps business analytics teams boost their data preparation productivity while reducing the risk of poor data quality. The self-service platform can be deployed either in the cloud or on customer premises.
Working with flat files, structured relational databases, or business applications like Salesforce.com, analysts can explore the data with visual, interactive data tools to quickly understand and identify their business requirements. Data can be scrubbed and modified in the moment using natural language processing without the need for coding, SQL or scripting.
Analysts often need to pull multiple data sets together. This is typically a difficult process, as analysts try to determine the best data fields to merge. Using Cisco Data Preparation, the ideal connections are automatically recommended, saving analysts considerable time and effort. Just as important, teams can share and reuse data through a centralized library. With built-in authentication and auditing capabilities, team members always know that they are operating in a secure environment where trustworthy data is being created together for a shared outcome.
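One simple way a tool could recommend merge fields is to score every pair of columns across two data sets by how much their values overlap and suggest the best-scoring pair as the join key. The sketch below is a toy heuristic of my own, using invented data set names; it is not the algorithm Cisco Data Preparation actually uses.

```python
# Toy merge-field recommender: rank column pairs by value overlap
# (Jaccard similarity) and suggest the best pair as the join key.
# A hypothetical illustration, not a vendor algorithm.

def suggest_join(rows_a, rows_b):
    def columns(rows):
        cols = {}
        for row in rows:
            for k, v in row.items():
                cols.setdefault(k, set()).add(v)
        return cols

    cols_a, cols_b = columns(rows_a), columns(rows_b)
    best, best_score = None, 0.0
    for ka, va in cols_a.items():
        for kb, vb in cols_b.items():
            # Jaccard similarity of the two columns' value sets
            score = len(va & vb) / len(va | vb)
            if score > best_score:
                best, best_score = (ka, kb), score
    return best, best_score

orders = [{"order_id": 1, "cust": "C-1"},
          {"order_id": 2, "cust": "C-2"}]
customers = [{"customer_id": "C-1", "name": "Acme"},
             {"customer_id": "C-2", "name": "Globex"}]

key, score = suggest_join(orders, customers)
```

Here the `cust` and `customer_id` columns share every value, so the heuristic pairs them even though their names differ, which is exactly the case where a human analyst wastes time hunting for the right fields.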
If you haven’t heard of “data preparation,” don’t be surprised. The category has only taken off in the last few years, but it is quickly gaining altitude. Stewart Bond, Director of Data Integration and Access Software at IDC, reflecting the viewpoints of other leading analyst firms, describes what is driving the growth: “Solutions are emerging in response to demand from today’s tech-savvy business users wanting more access to their data and IT’s desire to empower business users with more data access while still maintaining control.”
Are you confident that the employees in your company have the access they need to connect all the dots of data for critical business insights? If not, do you think data virtualization and preparation could help you take a major step toward creating a bigger and better picture for them? Do you envision any obstacles? I would love to hear from you on these or other questions you may have. Thanks for your interest!