Cisco Blogs



Big Data in Security – Part III: Graph Analytics

Following part two of our Big Data in Security series on the University of California, Berkeley's AMPLab stack, I caught up with talented data scientists Michael Howe and Preetham Raghunanda to discuss their exciting graph analytics work.

Where did graph databases originate and what problems are they trying to solve?

Michael: Disparate data types have a lot of connections between them and not just the types of connections that have been well represented in relational databases. The actual graph database technology is fairly nascent, really becoming prominent in the last decade. It’s been driven by the cheaper costs of storage and computational capacity and especially the rise of Big Data.

There have been a number of players driving development in this market, specifically research communities and businesses like Google, Facebook, and Twitter. These organizations are looking at large volumes of data with lots of inter-related attributes from multiple sources. They need to be able to view their data in a much cleaner fashion so that the people analyzing it don’t need to have in-depth knowledge of the storage technology or every particular aspect of the data. There are a number of open source and proprietary graph database solutions to address these growing needs and the field continues to grow.
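
To make the idea concrete, here is a minimal sketch, not TRAC's actual tooling, of modeling disparate security entities and their relationships as a property graph in Python with the open source networkx library. The entity names, attributes, and relationship labels are hypothetical examples.

```python
# A minimal property-graph sketch: nodes of different types, edges labeled
# with the relationship between them. All names here are made up.
import networkx as nx

g = nx.MultiDiGraph()

# Nodes of different types, each carrying its own attributes.
g.add_node("198.51.100.7", kind="ip")
g.add_node("badsite.example", kind="domain")
g.add_node("d41d8cd98f00b204e9800998ecf8427e", kind="file_hash")

# Edges capture relationships that are awkward to express in a relational schema.
g.add_edge("badsite.example", "198.51.100.7", relation="resolves_to")
g.add_edge("d41d8cd98f00b204e9800998ecf8427e", "badsite.example", relation="contacts")

# A simple traversal: everything reachable within two hops of a suspicious hash.
hash_node = "d41d8cd98f00b204e9800998ecf8427e"
related = nx.single_source_shortest_path_length(g, hash_node, cutoff=2)
for node, distance in related.items():
    print(node, g.nodes[node]["kind"], distance)
```

The point of the sketch is that a traversal like this crosses entity types naturally, without requiring the analyst to know the storage schema or write the joins a relational database would demand.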


Big Data in Security – Part II: The AMPLab Stack


Following part one of our Big Data in Security series on TRAC tools, I caught up with talented data scientist Mahdi Namazifar to discuss TRAC’s work with the Berkeley AMPLab Big Data stack.

Researchers at the University of California, Berkeley's AMPLab built the open source Berkeley Data Analytics Stack (BDAS). Starting at the bottom of the stack, what is Mesos?

AMPLab is looking at the big data problem from a slightly different perspective, a novel perspective that includes a number of different components. When you look at the stack at the lowest level, you see Mesos, which is a resource management tool for cluster computing. Suppose you have a cluster that you are using for running Hadoop MapReduce jobs, MPI jobs, and multi-threaded jobs. Mesos manages the available computing resources and assigns them to the different kinds of jobs running on the cluster in an efficient way. In a traditional Hadoop cluster, only one MapReduce job is running at any given time and that job blocks all the cluster resources. Mesos, on the other hand, sits on top of a cluster and manages the resources for all the different types of computation that might be running on it. Mesos is similar to Apache YARN, which is another cluster resource management tool. TRAC doesn't currently use Mesos.
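
To illustrate the resource-sharing idea in the abstract, here is a toy Python simulation, not the real Mesos API: the manager offers free CPU and memory to each framework in turn, and each framework launches as many of its tasks as fit. The framework names, task sizes, and cluster size are made up.

```python
# Toy illustration of resource offers: several frameworks draw tasks from one
# shared pool of cluster resources instead of one job blocking the cluster.
from dataclasses import dataclass

@dataclass
class Offer:
    cpus: float
    mem_gb: float

@dataclass
class Framework:
    name: str
    task_cpus: float
    task_mem_gb: float
    tasks_wanted: int

    def accept(self, offer):
        # Launch as many of the wanted tasks as fit in the offered resources.
        fits = int(min(offer.cpus // self.task_cpus, offer.mem_gb // self.task_mem_gb))
        launched = min(self.tasks_wanted, fits)
        remaining = Offer(offer.cpus - launched * self.task_cpus,
                          offer.mem_gb - launched * self.task_mem_gb)
        return launched, remaining

# Hypothetical frameworks sharing one cluster: a MapReduce job and an MPI job.
frameworks = [Framework("mapreduce", task_cpus=2, task_mem_gb=4, tasks_wanted=8),
              Framework("mpi", task_cpus=4, task_mem_gb=8, tasks_wanted=4)]

free = Offer(cpus=32, mem_gb=64)  # resources the cluster currently has free

for fw in frameworks:
    launched, free = fw.accept(free)
    print(f"{fw.name}: launched {launched} tasks; "
          f"{free.cpus} CPUs / {free.mem_gb} GB still free")
```

In real Mesos the frameworks' own schedulers make these decisions over the network, but the sharing model is the same: many job types draw from one resource pool rather than a single MapReduce job monopolizing the cluster.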

 

The AMPLab Stack
Source: https://amplab.cs.berkeley.edu/software/


Big Data in Security – Part I: TRAC Tools

Recently I had an opportunity to sit down with the talented data scientists from Cisco's Threat Research, Analysis, and Communications (TRAC) team to discuss Big Data security challenges, tools, and methodologies. The following is part one of five in this series, where Jisheng Wang, John Conley, and Preetham Raghunanda share how TRAC is tackling Big Data.

Given the hype surrounding “Big Data,” what does that term actually mean?

John:  First of all, because of overuse, the “Big Data” term has become almost meaningless. For us and for SIO (Security Intelligence and Operations) it means a combination of infrastructure, tools, and data sources all coming together to make it possible to have unified repositories of data that can address problems that we never thought we could solve before. It really means taking advantage of new technologies, tools, and new ways of thinking about problems.


Building on Success: Cisco and Intel Expand Partnership to Big Data

This has been an exciting week. Further expanding its Big Data portfolio, Cisco has announced a collaboration with Intel, its long-term partner, on the next generation of its open platform for data management and analytics. The joint solution combines the Intel® Distribution for Apache Hadoop Software with Cisco's Common Platform Architecture (CPA) to deliver performance, capacity, and security for enterprise-class Hadoop deployments.

As described in my blog posting, the CPA is a highly scalable architecture designed to meet a variety of scale-out application demands. It includes compute, storage, connectivity, and unified management, and is already being deployed in a range of industries including finance, retail, service provider, content management, and government. Unique to this architecture are the seamless data integration and management integration capabilities between big data applications and enterprise applications such as Oracle Database, Microsoft SQL Server, SAP, and others, as shown below:
CPA data and management integration
The current version of the CPA offers two options depending on use case: performance optimized, which balances compute power and I/O bandwidth for the best price/performance, and capacity optimized, which targets the lowest cost per terabyte. The Intel® Distribution is supported on both options and is available at single-rack and multiple-rack scale.

The Intel® Distribution is a controlled distribution based on Apache Hadoop, with feature enhancements, performance optimizations, and security options that give the solution its enterprise quality. The combination of the Intel® Distribution and Cisco UCS joins the power of big data with a dependable deployment model that can be implemented rapidly and scaled to meet the performance and capacity needs of demanding workloads. Enterprise-class services from Cisco and Intel can help with design, deployment, and testing, and organizations can continue to rely on these services through controlled and supported releases.

A performance-optimized CPA rack running the Intel® Distribution will be demonstrated at the Intel booth at the O'Reilly Strata Conference 2013 this week.

CPA at Strata 2013

References:

1. Cisco UCS with the Intel Distribution for Apache Hadoop -- Solution Brief
2. Cisco’s Common Platform Architecture (CPA) for Big Data
3. Paul Perez and Boyd Davis on Cisco and Intel Partnership on Big Data (Video)
4. Cisco and Intel Announcement -- blog by Didier Rombaut
5. Intel Guest Blog

 


Introducing Cisco UCS Common Platform Architecture (CPA) for Big Data

Updated: 10/01/2013

You may have heard that the digital universe is measured in petabytes and that global IP traffic is in the hundreds of exabytes. These are mind-bogglingly large metrics. Big data analytics can play a crucial role in making datasets at this scale usable, with benefits ranging from operational efficiency to customer experience to prediction accuracy. Cisco is the global leader in networking (did you know that 85% of the estimated 500 exabytes of global IP traffic in 2012 will pass through Cisco devices?), and the company also builds an innovative family of unified computing products. This enables Cisco to provide a complete infrastructure solution for big data applications, including compute, storage, connectivity, and unified management, that reduces complexity, improves agility, and radically improves cost of ownership.

To meet a variety of big data platform demands (Hadoop, NoSQL databases, massively parallel processing databases, etc.), Cisco offers a comprehensive solution stack: the Cisco UCS Common Platform Architecture (CPA) for Big Data, which includes compute, storage, connectivity, and unified management. Unique to this architecture are the seamless data integration and management integration capabilities with the enterprise application ecosystem, including Oracle RDBMS/RAC, Microsoft SQL Server, SAP, and others. See Figure 1.

Figure 1: Cisco UCS Common Platform Architecture (CPA) for Big Data

The Cisco UCS CPA for Big Data is built using the following components:

  • Cisco UCS 6200 Series Fabric Interconnects provide high-speed, low-latency connectivity for servers and centralized management for all connected devices with UCS Manager. Deployed in redundant pairs, they offer full redundancy, active-active performance, and exceptional scalability for the large number of nodes typical in big data clusters. UCS Manager enables rapid and consistent server integration using service profiles, ongoing system maintenance activities such as firmware updates across the entire cluster as a single operation, advanced monitoring, and the option to raise alarms and send notifications about the health of the entire cluster.
  • Cisco UCS 2200 Series Fabric Extenders act as remote line cards for the Fabric Interconnects, providing highly scalable and extremely cost-effective connectivity for a large number of nodes.
  • Cisco UCS C240 M3 Rack-Mount Servers are 2-RU servers designed for a wide range of compute, I/O, and storage capacity demands. Each server is powered by two Intel Xeon E5-2600 series processors and supports up to 768 GB of main memory (typically 128 GB or 256 GB for big data applications) and up to 24 SFF disk drives in the performance-optimized option or 12 LFF disk drives in the capacity-optimized option. Each server also features a Cisco UCS VNIC optimized for high-bandwidth, low-latency cluster connectivity, with support for up to 256 virtual devices. A rough raw-capacity sketch based on these drive counts follows this list.
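
As a back-of-the-envelope illustration of the performance-optimized and capacity-optimized options, the following Python sketch estimates raw storage per rack from the drive counts quoted above. The 16-servers-per-rack figure and the per-drive capacities are assumptions for illustration only, not published specifications.

```python
# Back-of-the-envelope raw capacity per rack for the two CPA options.
# Drive sizes and the 16-servers-per-rack count are ASSUMPTIONS for
# illustration only, not published Cisco specifications.
def raw_capacity_tb(servers: int, drives_per_server: int, drive_tb: float) -> float:
    return servers * drives_per_server * drive_tb

# Performance optimized: up to 24 SFF drives per C240 M3 (assumed 1 TB each).
performance = raw_capacity_tb(servers=16, drives_per_server=24, drive_tb=1.0)

# Capacity optimized: 12 LFF drives per C240 M3 (assumed 4 TB each).
capacity = raw_capacity_tb(servers=16, drives_per_server=12, drive_tb=4.0)

print(f"Performance-optimized rack (assumed): {performance:.0f} TB raw")
print(f"Capacity-optimized rack (assumed):    {capacity:.0f} TB raw")
```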
