The ELK stack is a set of analytics tools whose initials stand for Elasticsearch, Logstash and Kibana. Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Logstash is a tool for receiving, processing and outputting logs, like system logs, webserver logs, error logs, application logs and many more. Kibana is an open source (Apache-licensed), browser-based analytics and search dashboard for Elasticsearch.
ELK is a useful and efficient open source analytics platform, and we wanted to use it to consume flow analytics from a network. We chose ELK because it can efficiently handle large volumes of data, it is open source, and it is highly customizable to the user’s needs. The flows were exported by various hardware and virtual infrastructure devices in NetFlow v5 format. Logstash was responsible for processing them and storing them in Elasticsearch; Kibana, in turn, was responsible for reporting on the data. Given that there were no complete guides on how to use NetFlow with ELK, below we present a step-by-step guide on how to set up ELK from scratch and enable it to consume and display NetFlow v5 information. Readers should note that the ELK ecosystem includes more tools, like Shield and Marvel, which are used for security and for Elasticsearch monitoring, but their use falls outside the scope of this guide.
In our setup, we used:
- Elasticsearch 1.3.4
- Logstash 1.4.2
- Kibana 3.1.1
For the purposes of this example, we deployed a single-node Elasticsearch cluster, with one node responsible for both collecting and indexing data. Experienced users could leverage Kibana to consume data from multiple Elasticsearch nodes. Elasticsearch, Logstash and Kibana were all running on our Ubuntu 14.04 server with IP address 10.0.1.33. For more information on clusters, nodes and shards, refer to the Elasticsearch guide.
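To give a flavor of the Logstash side of this setup, the sketch below shows a minimal pipeline that listens for NetFlow v5 on a UDP port and forwards decoded flows to Elasticsearch. The port number and index name are illustrative choices, not values mandated by the guide; check that the netflow codec is available in your Logstash install.

```conf
# netflow.conf – minimal illustrative Logstash pipeline (port and index are assumptions)
input {
  udp {
    port  => 9995              # port the devices export NetFlow v5 to
    codec => netflow {
      versions => [5]          # we only expect NetFlow v5 in this setup
    }
  }
}
output {
  elasticsearch {
    host  => "10.0.1.33"       # our single-node Elasticsearch instance
    index => "logstash-netflow-%{+YYYY.MM.dd}"
  }
}
```

With this in place, Kibana can be pointed at the daily `logstash-netflow-*` indices to build flow dashboards.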
Connecting Dark Assets: An ongoing series on how the Internet of Everything is transforming the ways in which we live, work, play, and learn.
Racing down the wide, open highway on a beautifully crafted motorcycle is one of life’s most exhilarating rushes. At least I used to think so, before my wife talked me into taking up safer pastimes.
But Internet of Everything (IoE) technologies may be offering me a new lease on motorcycling. A new product called the Skully AR-1 is being billed as “The World’s Smartest Motorcycle Helmet.” And who am I to argue?
According to the Breach Level Index, between July and September of this year an average of 23 data records were lost or stolen every second – close to two million records every day.¹ This data loss will continue as attackers become increasingly sophisticated in their attacks. Given this stark reality, we can no longer rely on traditional means of threat detection. Technically advanced attackers often leave clues behind, but uncovering them usually involves filtering through mountains of logs and telemetry. Applying big data analytics to this problem has become a necessity.
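The two figures quoted above are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# 23 records lost or stolen per second, sustained over a full day
records_per_day = 23 * 60 * 60 * 24
print(records_per_day)  # 1,987,200 – close to two million records every day
```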
To help organizations leverage big data in their security strategy, we are announcing the availability of an open source security analytics framework: OpenSOC. The OpenSOC framework helps organizations make big data part of their technical security strategy by providing a platform for the application of anomaly detection and incident forensics to the data loss problem. By integrating numerous elements of the Hadoop ecosystem such as Storm, Kafka, and Elasticsearch, OpenSOC provides a scalable platform incorporating capabilities such as full-packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search, and telemetry aggregation. It also provides a centralized platform to effectively enable security analysts to rapidly detect and respond to advanced security threats.
The OpenSOC framework provides three key elements for security analytics:
- A mechanism to capture, store, and normalize any type of security telemetry at extremely high rates. OpenSOC ingests data and pushes it to various processing units for advanced computation and analytics, providing the necessary context for security protection and efficient information storage. It provides the visibility and information required for successful investigation, remediation, and forensic work.
- Real-time processing and application of enrichments, such as threat intelligence, geolocation, and DNS information, to collected telemetry. The immediate application of this information to incoming telemetry provides the greater context and situational awareness critical for detailed and timely investigations.
- A centralized interface that presents alert summaries, with the threat intelligence and enrichment data specific to an alert, on a single page. Advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.
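The enrichment idea in the second element can be illustrated with a small, self-contained sketch. The lookup tables and field names below are invented for illustration; in OpenSOC this work happens inside streaming topologies against real GeoIP and DNS data stores, at far higher rates.

```python
# Illustrative enrichment of one telemetry event with geo and reverse-DNS context.
# The dictionaries are hard-coded stand-ins for real GeoIP and passive-DNS stores.

GEO_DB = {"198.51.100.7": {"country": "US", "city": "Austin"}}
DNS_DB = {"198.51.100.7": "mail.example.net"}

def enrich(event):
    """Return a copy of the event with geo/DNS context attached to its source IP."""
    ip = event.get("src_ip")
    enriched = dict(event)
    enriched["geo"] = GEO_DB.get(ip, {})          # empty dict when the IP is unknown
    enriched["reverse_dns"] = DNS_DB.get(ip)      # None when no record exists
    return enriched

event = {"src_ip": "198.51.100.7", "dst_port": 443, "bytes": 1832}
print(enrich(event)["geo"]["city"])  # -> Austin
```

Because the context is attached as the event arrives, an analyst sees the enriched record immediately instead of running the lookups by hand during an investigation.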
During a breach, sensitive customer information and intellectual property are compromised, putting the company’s reputation and resources at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:
- Review reports from a Security Incident and Event Manager (SIEM) and run batch queries on other telemetry sources for additional context.
- Research external threat intelligence sources to uncover proactive warnings of potential attacks.
- Consult a network forensics tool with full packet capture and historical records in order to determine context.
Apart from having to access several tools and information sets, searching and analyzing the volume of data collected can take minutes to hours using traditional techniques.
When we built OpenSOC, one of our goals was to bring all of these pieces together into a single platform. Analysts can use a single tool to navigate data with narrowed focus instead of wasting precious time trying to make sense of mountains of unstructured data.
No two networks are alike. Telemetry sources differ in every organization. The amount of telemetry that must be collected and stored to provide enough historical context also depends on the amount of data flowing through the network. Furthermore, the relevant threat intelligence differs for each individual organization.
As an open source solution, OpenSOC opens the door for any organization to create an incident detection tool specific to their needs. The framework is highly extensible: any organization can customize their incident investigation process. It can be tailored to ingest and view any type of telemetry, whether it is for specialized medical equipment or custom-built point of sale devices. By leveraging Hadoop, OpenSOC also has the foundational building blocks to horizontally scale the amount of data it collects, stores, and analyzes based on the needs of the network. OpenSOC will continually evolve and innovate, vastly improving organizations’ ability to handle security incident response.
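One way to picture this kind of extensibility is a pluggable parser registry, where each telemetry source, from syslog to a custom point-of-sale device, registers a function that maps its raw records onto a common schema. This is a hypothetical sketch, not the OpenSOC API; all names and formats here are invented.

```python
# Hypothetical pluggable-parser registry: each telemetry source contributes a
# parser that normalizes its raw records into a common event schema.

PARSERS = {}

def parser(source_type):
    """Decorator that registers a parse function for one telemetry source."""
    def register(fn):
        PARSERS[source_type] = fn
        return fn
    return register

@parser("syslog")
def parse_syslog(raw):
    host, _, message = raw.partition(" ")
    return {"source": "syslog", "host": host, "message": message}

@parser("pos_device")
def parse_pos(raw):
    terminal, amount = raw.split(",")
    return {"source": "pos_device", "terminal": terminal, "amount": float(amount)}

def ingest(source_type, raw):
    """Normalize one raw record using the parser registered for its source."""
    return PARSERS[source_type](raw)

print(ingest("pos_device", "T-042,19.99"))
```

Adding support for a new device type then means writing one parse function, while the downstream storage, enrichment, and search layers stay unchanged.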
We look forward to seeing the OpenSOC framework evolve in the open source community. For more information, and to contribute to the OpenSOC community, please visit the community website at http://opensoc.github.io/.
The Internet of Everything continues to gain momentum and every new connection is creating new data. Cisco UCS Integrated Infrastructure for Big Data is helping customers convert that data into powerful intelligence, and we’re working with a number of new partners to bring exciting new solutions to our customers.
Today, I want to spotlight Elasticsearch, Inc. and welcome them to the Cisco Solution Partner Program.
Elasticsearch excels at providing real-time insight into data, whether structured or unstructured, human- or machine-generated, by bringing a search-based architecture to data analytics. By combining the ELK stack with Cisco UCS, organizations benefit from a turnkey infrastructure solution that provides real-time search and analytics for a variety of applications, from log analysis, to structured, semi-structured, or unstructured searches, to web back ends for custom applications that use search-based analytics as core functionality.
Mozilla is just one of the companies already benefiting from the joint solution, with real-time search and analysis of data powering its defense platform, MozDef. The ELK stack leverages Cisco UCS’ fast connectivity for query, indexing and replication traffic, while Elasticsearch handles event storage, archiving, indexing and searching of the data logs at full scale. The ELK stack and Cisco UCS also help protect Mozilla’s network, services, systems, and audit data from hackers.
Partners like Elasticsearch are just one reason that Cisco UCS Integrated Infrastructure can help your company capitalize on the IoE data avalanche and deliver powerful and cost-effective analytics solutions throughout your enterprise.
Find out more at www.cisco.com/go/bigdata, or register for a webinar entitled, “Learn How Mozilla Tackles their Security Logs with Elasticsearch and Cisco”.
Thursday, November 13th
9:00 AM PST / 12:00 PM EST / 5:00 PM GMT
Are you interested in learning how to build enterprise applications on top of Elasticsearch and Cisco’s Unified Computing System (UCS) infrastructure? We’re holding a webinar to delve more deeply into how to optimize ELK on Cisco UCS infrastructure.
Cisco UCS unites compute, network, and storage access into a single cohesive system. By combining the ELK stack with Cisco UCS, businesses benefit by having a turnkey hardware-software solution for their search and analytics applications. In this webinar you’ll learn about the various UCS hardware profiles you should consider when deploying ELK and how Mozilla built MozDef, their custom SIEM application, using ELK on Cisco UCS.
- Introduction – Jobi George, Elasticsearch (5 minutes)
- Overview of UCS + ELK reference architectures – Raghunath Nambiar, Distinguished Engineer, Data Center Business Group, Cisco (10 minutes)
- How Mozilla Built MozDef on ELK and Cisco UCS – Jeff Bryner, Security Assurance, Mozilla (25 minutes)
- Q&A – Jobi George, Elasticsearch (~20 minutes)
Last month, I had the honor of presenting at the Internet of Things (IoT) World Forum in Chicago. The event gave me the opportunity to do one of my favorite things: collaborating and networking with peers who are doing creative work within the world of IoT. One of my colleagues, Kowsalya Arunprakash, Lead Architect of Virtual Data Integration Services for Time Warner Cable, co-presented and shared a use case in which Time Warner is utilizing Cisco Data Virtualization to enhance its customer experience with analytics. In today’s blog, I’d like to share more details about this use case, because I think it’s a great example of how organizations are leveraging IoT solutions to better serve their customers and separate themselves from their competition.
Time Warner Cable IntelligentHome is a home security and energy management system which users can control from their smartphone, tablet or computer to do things like view live video, arm/disarm their system, turn their home lights on/off or adjust the temperature of their thermostat. As you can imagine, each one of these pieces of equipment creates a fair amount of data through radio-frequency identification (RFID) and sensors. On top of this, as consumers generally do, users are going to social media to share their experience.
Utilizing Cisco Data Virtualization, Time Warner is able to couple this data with sales, marketing and historical customer data to get a full 360-degree view of operational analytics. By operational, I am referring to intelligence like customer trends, sales analytics and resource allocation management.
This combination is an extremely powerful tool to understand the customer in order to create and adapt products and services that cater to their wants and needs. It’s an opportunity to see how the product is being used, via RFIDs and sensors, coupled with customers’ feedback and experience shared on social media as well as their long-term history of usage and preferences.
During her presentation, Kowsalya shared that by leveraging these insights Time Warner is able to improve sales, reduce customer churn and even work with local law enforcement and emergency services to respond faster to current events. To hear more, I encourage you to watch the video interview of Kowsalya and learn about how Time Warner is using Data Virtualization to derive value for their customers.
To learn more about Cisco Data and Analytics, check out our page.
Join the Conversation
Follow @MikeFlannagan and @CiscoAnalytics.