
Cisco UCS is now Hortonworks “Operations Ready”

Cisco has been working closely with Hortonworks to deliver turnkey Big Data solutions that expedite time to market for our joint Big Data Hadoop customers. Cisco's industry-leading UCS Integrated Infrastructure for Big Data is designed to deliver performance at scale for a wide variety of Big Data workloads. We are working with Hortonworks to integrate Cisco UCS Director Express for Big Data with Apache Ambari, providing a fully automated solution for deploying and managing Hadoop hardware, networking, and the Hortonworks Data Platform. The solution is built on the solid foundation of the highly successful UCS management platform and the award-winning UCS Director orchestration engine.

Today, we are excited that Cisco UCS is HDP certified and Operations Ready. The integration with Apache Ambari allows customers to deploy and manage their Hadoop clusters in a reliable and consistent manner. The Operations Ready designation is a new certification introduced by Hortonworks to provide additional assurance that a tool has been integrated with the Apache Ambari APIs. Cisco UCS with Hortonworks delivers a fully validated solution and reduces the complexity of managing Hadoop clusters. Cisco is committed to bringing industry-leading solutions for Big Data to market, in partnership with Hortonworks and other ecosystem partners.


Connected Analytics: Capturing the Value of the Internet of Everything

Ten large oil refineries produce about 10 terabytes of data each day, which equates to the entire printed collection of the U.S. Library of Congress.

One modernized city the size of Singapore can generate about 2.5 petabytes of data every day, roughly the equivalent of all U.S. academic research libraries combined.

And with more than 14 billion data-transmitting devices connected to the Internet today, a figure expected to grow to 50 billion by 2020, it is little wonder that most of us are overwhelmed by this mind-boggling explosion of data.


Turning this flood of raw data into useful information, and even wisdom, for better business decisions and quality-of-life experiences is what the Internet of Everything (IoE) is all about. This is a daunting task. According to IDC Research, just 0.5% of all data is used or analyzed, and online data volumes are doubling every two years, driven by a combination of mobile devices, video, sensors, M2M, social media, applications and much more.

Connected Analytics Portfolio

Last Thursday, however, Cisco unveiled our Connected Analytics portfolio for the Internet of Everything, a unique approach that includes software packages to bring analytics to the data, regardless of its location or whether it is in motion or at rest. This new generation of analytics tools for IoE can convert more and more data into valuable intelligence, from the Intercloud to the data center to the network's edge.



#EngineersUnplugged S7|Ep1: SAP HANA 101

In this episode of Engineers Unplugged, Tony Harvey (@tonyknowspower) and Craig Sullivan (@craigsullivan70) discuss the role of storage in SAP HANA. How does big data impact you? Watch and learn.

 

https://www.youtube.com/watch?v=TmZwq1UCo8w

Thinking about unicorns.

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


Building an Enterprise Data Hub, Evaluating Infrastructure

Cisco, MapR, Informatica

 

Want to get the most out of your big data? Build an enterprise data hub (EDH).

Big data is rapidly getting bigger. That in itself isn’t a problem. The issue is what Gartner analyst Doug Laney describes as the three Vs of Big Data: volume, velocity, and variety.

 

Gartner

Volume refers to the ever-growing amount of data being collected. Velocity is the speed at which the data is being produced and moved through the enterprise information systems. Variety refers to the fact that we’re gathering information from multiple data sources such as sensors, enterprise resource planning (ERP) systems, e-commerce transactions, log files, supply chain info, social media feeds, and the list goes on.

 

Data warehouses weren’t made to handle this fast-flowing stream of wildly dissimilar data. Using them for this purpose drains resources and leads to sluggish response times as workers attempt to perform numerous extract, load, and transform (ELT) functions to make stored data accessible and usable for the task at hand.

Constructing Your Hub

An EDH addresses this problem. It serves as a central platform that enables organizations to collect structured, unstructured, and semi-structured data from slews of sources, process it quickly, and make it available throughout the enterprise.

Building an EDH begins with selecting the right technology in three key areas: infrastructure, a foundational system to drive EDH applications, and the data integration platform. Obviously, you want to choose solutions that fit your needs today and allow for future growth. You’ll also want to ensure they are tested and validated to work well together and with your existing technology ecosystem. In this post, we’ll focus on selecting the right hardware.

Cisco UCS Big Data Domain

 

The Infrastructure Component

Big data deployments must be able to handle continued growth, from both a data and user load perspective. Therefore, the underlying hardware must be architected to run efficiently as a scalable cluster. Important features such as the integration of compute and network, unified management, and fast provisioning all contribute to an elastic, cloud-like infrastructure that’s required for big data workloads. No longer is it satisfactory to stand up independent new applications that result in new silos. Instead, you should plan for a common and consistent architecture to meet all of your workload requirements.

 

Big data workloads represent a relatively new model for most data centers, but that doesn’t mean best practices must change. Handling a big data workload should be viewed through the same lens as deployments of traditional enterprise applications. As always, you want to standardize on reference architectures, optimize your spending, provision new servers quickly and consistently, and meet the performance requirements of your end users.

 

Cisco Unified Computing System to Run Your EDH

Cisco UCS for Big Data

The Cisco Unified Computing System™ (Cisco UCS®) Integrated Infrastructure for Big Data delivers a highly scalable platform that is proven for enterprise applications like Oracle, SAP, and Microsoft. It also provides the same required enterprise-class capabilities (performance, advanced monitoring, simplified management, and QoS guarantees) for big data workloads. With lower switch and cabling infrastructure costs, lower power consumption, and lower cooling requirements, you can realize a 30 percent reduction in total cost of ownership. In addition, with its service profiles, you get fast and consistent time to value by leveraging provisioning templates to instantly set up a new cluster or add many new nodes to an existing cluster.

 

And when deploying an EDH, the MapR Distribution including Apache Hadoop® is especially well suited to take advantage of the compute and I/O bandwidth of Cisco UCS. Cisco and MapR have been working together for the past two years and have developed Cisco Validated Design guides to provide customers the most value for their IT expenditures.

 

Cisco UCS for Big Data comes in optimized power/performance-based configurations, all of which are tested with the leading big data software distributions. You can customize these configurations further or use the system as is. Utilizing one of Cisco UCS for Big Data’s pre-configured options goes a long way toward ensuring a stress-free deployment. All Cisco UCS solutions also provide a single point of control for managing all computing, networking, and storage resources, for any fine-tuning you may do before deployment or as your hub evolves in the future.

 

I encourage you to check out the latest Gartner video to hear Satinder Sethi, our VP of Data Center Solutions Engineering and UCS Product Management, share his perspective on infrastructure as an important component of building an enterprise data hub.

 

Gartner Video


In addition, you can read the MapR blog post, Building an Enterprise Data Hub, Choosing the Foundational Software.

Let me know if you have any comments or questions below, or reach me on Twitter at @CicconeScott.


Step-by-Step Setup of ELK for NetFlow Analytics


Intro

 

The ELK stack is a set of analytics tools whose initials stand for Elasticsearch, Logstash and Kibana. Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Logstash is a tool for receiving, processing and outputting logs of many kinds, such as system logs, web server logs, error logs and application logs. Kibana is an open source (Apache-licensed), browser-based analytics and search dashboard for Elasticsearch.

ELK is a useful and efficient open source analytics platform, and we wanted to use it to consume flow analytics from a network. We chose ELK because it handles large volumes of data efficiently, it is open source, and it is highly customizable to the user’s needs. The flows were exported by various hardware and virtual infrastructure devices in NetFlow v5 format. Logstash was responsible for processing them and storing them in Elasticsearch, and Kibana, in turn, was responsible for reporting on the data. Given that there were no complete guides on how to use NetFlow with ELK, below we present a step-by-step guide on how to set up ELK from scratch and enable it to consume and display NetFlow v5 information. Readers should note that ELK includes additional tools, such as Shield and Marvel, that are used for security and Elasticsearch monitoring, but their use falls outside the scope of this guide.
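To make the pipeline concrete, here is a minimal sketch of the kind of Logstash configuration this setup relies on: a UDP input with the NetFlow codec feeding an Elasticsearch output. The UDP port, output host and protocol shown here are illustrative assumptions; adjust them to the addresses and ports used in your own environment and in the full walkthrough.

  input {
    udp {
      port  => 2055                  # UDP port your devices export NetFlow v5 to (assumed)
      codec => netflow {
        versions => [5]              # parse only NetFlow v5 records
      }
    }
  }
  output {
    elasticsearch {
      host     => "localhost"        # Elasticsearch node that stores the flows (assumed)
      protocol => "http"             # talk to Elasticsearch over HTTP (assumed; adjust to your deployment)
    }
  }

With a pipeline along these lines in place, Kibana can be pointed at the same Elasticsearch instance to build dashboards over the indexed flow records.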

In our setup, we used:

  • Elasticsearch 1.3.4
  • Logstash 1.4.2
  • Kibana 3.1.1

For the purposes of this example, we deployed a single-node Elasticsearch cluster, with that one node responsible for both collecting and indexing data. Experienced users could leverage Kibana to consume data from multiple Elasticsearch nodes. Elasticsearch, Logstash and Kibana were all running on our Ubuntu 14.04 server with IP address 10.0.1.33. For more information on clusters, nodes and shards, refer to the Elasticsearch guide.
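For reference, here is a minimal sketch of the elasticsearch.yml settings worth checking for a single-node install like this one; the cluster and node names are illustrative assumptions, while the address matches the server described above.

  # elasticsearch.yml (Elasticsearch 1.3.4) -- minimal single-node settings
  cluster.name: elk-netflow        # illustrative cluster name (assumption)
  node.name: "node-1"              # illustrative node name (assumption)
  network.host: 10.0.1.33          # bind to the server address used in our setup

Kibana 3 is served as static files, so its config.js only needs the elasticsearch setting pointed at http://10.0.1.33:9200 for the dashboard to query the same node.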

