Cisco Blogs



Dridex Is Back, Then It’s Gone Again

This post was authored by Armin Pelkmann and Earl Carter.

The Talos Security Intelligence and Research Group has noticed a reappearance of several Dridex email campaigns, starting last week and continuing into this week. Dridex is, in a nutshell, malware designed to steal your financial account information. The attack attempts to get the user to install the malicious software through an until recently rarely exploited attack vector: Microsoft Office macros. Lately, we have noticed a resurgence of macro abuse. If macros are not enabled, social engineering techniques are used to persuade the user to enable them. Once the malware is installed, it is designed to steal your online banking credentials when you access your banking site from the infected system.

Talos analyzed three separate campaigns in recent days, all distinguishable by their subject lines.


Microsoft Patch Tuesday for December 2014: Light Month, Some Changes

This post was authored by Yves Younan.

Today, Microsoft is releasing its final Update Tuesday of 2014. Last year’s end-of-year update was relatively large; this time it is relatively light, with a total of seven bulletins covering 24 CVEs. Three of those bulletins are rated critical and four are considered important. Microsoft has made a few changes to the way it reports its bulletins. It has dropped the deployment priority (DP) rating, which was very much environment-specific and might not be all that useful for non-default installations. Instead, it now provides an exploitability index (XI), which ranges from zero to three, with zero denoting active exploitation and three denoting that exploitation is unlikely. Another change is clearer reporting on how a vulnerability was disclosed: was Microsoft notified via coordinated vulnerability disclosure, or was the vulnerability publicly known before the bulletin was released?


Understanding and Addressing the Challenges of Managing Information Security – A More Responsive Approach

Just like bad weather conditions found in nature, such as typhoons, hurricanes, or snowstorms, technology system defects and vulnerabilities are inherent characteristics of a cyber system environment.

Whether or not it’s a fair comparison, weather changes are part of the natural environment that we have little direct control over, whereas the cyber environment is fundamentally a human creation. Despite this difference, the choices we make have direct implications even when they are not obvious. Take, for example, the use of leaded or diesel fuel in vehicles, or controlled burns to clear forest land for agricultural use: both have negative effects on air quality. The same is true for information technology developers, whose design decisions may unknowingly introduce software bugs or security risks through interactions with untested, insecure network systems and cyber environments.



Exciting Fourth Edition of SecCon-X Bangalore

The week of November 10 was filled with learning and excitement for security technology enthusiasts at Cisco’s Bangalore campus as people gathered for SecCon-X 2014, Cisco’s largest annual cross-company security conference. The event scaled up in scope and content compared to last year, starting with a dedicated customer engagement event followed by two days of conference activities, including 21 presentations and 2 panel discussions by a varied mix of speakers and panelists from industry, academia, and Cisco. All the sessions were packed, with 250+ participants and 350+ IP TV viewers each day, which was proof of how much the Cisco community in Bangalore relished the event. The huge buzz around the vendor expo booths and the poster walls was heartening to see.

What was new this year?

  • 11 boot camp and training sessions on a wide range of security technology topics.
  • The Customer Engagement Event was a huge success with 20+ customers participating in the event, which enabled Cisco to communicate our vision, demonstrate our solutions, and hear from customers on the challenges they faced in the evolving threat landscape.
  • Events like Hack Your Device (7 teams filed security defects on various products), Capture The Flag (116 participated and 10 captured all the flags), and a Lunch & Learn session for Cisco Women in Cyber Security were well arranged and much appreciated by all attendees.


Step-by-Step Setup of ELK for NetFlow Analytics

Intro

The ELK stack is a set of analytics tools; its initials stand for Elasticsearch, Logstash, and Kibana. Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Logstash is a tool for receiving, processing, and outputting logs, such as system logs, web server logs, error logs, and application logs. Kibana is an open source (Apache-licensed), browser-based analytics and search dashboard for Elasticsearch.

ELK is a useful, efficient, open source analytics platform, and we wanted to use it to consume flow analytics from a network. We chose ELK because it can efficiently handle large amounts of data, and it is open source and highly customizable to the user’s needs. The flows were exported by various hardware and virtual infrastructure devices in NetFlow v5 format. Logstash was responsible for processing the flows and storing them in Elasticsearch, and Kibana, in turn, was responsible for reporting on the data. Given that there were no complete guides on how to use NetFlow with ELK, below we present a step-by-step guide on how to set up ELK from scratch and enable it to consume and display NetFlow v5 information. Readers should note that the ELK ecosystem includes more tools, like Shield and Marvel, which are used for security and Elasticsearch monitoring, but their use falls outside the scope of this guide.
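
The pipeline described above (NetFlow v5 in, Elasticsearch out) can be sketched as a minimal Logstash configuration. This is an illustrative sketch, not the guide’s exact file: the UDP port (9995) is an assumption, so use whatever port your devices actually export NetFlow v5 to, and the Elasticsearch address matches the single node used later in this guide.

```conf
# logstash-netflow.conf -- minimal sketch (port 9995 is an assumption)
input {
  udp {
    port  => 9995              # port the NetFlow exporters send to
    codec => netflow {
      versions => [5]          # we only expect NetFlow v5 in this setup
    }
  }
}
output {
  elasticsearch {
    host => "10.0.1.33"        # the single Elasticsearch node in this guide
  }
}
```

With a file like this in place, Logstash is started with it as its config (`bin/logstash -f logstash-netflow.conf`), decodes each incoming NetFlow v5 datagram into a structured event, and indexes it into Elasticsearch, where Kibana can query it.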

In our setup, we used:

  • Elasticsearch 1.3.4
  • Logstash 1.4.2
  • Kibana 3.1.1

For the purposes of our example, we deployed only a single node responsible for both collecting and indexing data; we did not build a multi-node Elasticsearch cluster. Experienced users could leverage Kibana to consume data from multiple Elasticsearch nodes. Elasticsearch, Logstash, and Kibana were all running on our Ubuntu 14.04 server with IP address 10.0.1.33. For more information on clusters, nodes, and shards, refer to the Elasticsearch guide.
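
For a single-node deployment like this one, a few settings in `elasticsearch.yml` are worth adjusting. The values below are a sketch under stated assumptions: the cluster and node names are hypothetical placeholders of our choosing, while the address matches the server used in this guide.

```conf
# elasticsearch.yml -- minimal single-node sketch (names are illustrative)
cluster.name: netflow-demo           # any unique name; keeps this node from joining other clusters
node.name: elk-node-1                # hypothetical node name
network.host: 10.0.1.33              # the server address used in this guide
index.number_of_replicas: 0          # replicas can never be allocated on a single node
```

Setting the replica count to zero matters on a single node: with the default of one replica, the replica shards have no second node to live on and the cluster health stays yellow.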

