
It’s our pleasure to announce the public availability of GOSINT – the open source intelligence gathering and processing framework. GOSINT allows a security analyst to collect and standardize structured and unstructured threat intelligence. Applying threat intelligence to security operations enriches alert data with additional confidence, context, and co-occurrence. This means that you are applying research from third parties to your event data to identify similar, or identical, indicators of malicious behavior.

There is already so much open source [threat] intelligence (OSINT) available on the web, but no easy way to collect and filter through it to find useful info. GOSINT aggregates, validates, and sanitizes indicators for consumption by other tools like CRITs, MISP, or directly into log management systems or SIEM. While the threat intelligence sharing community matures, GOSINT will adapt to support additional export formats and indicator sharing protocols.

GOSINT Indicator Sources

You can think of GOSINT as a transfer station for threat indicators. The software allows threat intelligence analysts to judge whether an indicator is worthy of tracking or should be rejected. This decision-making step is crucial in managing any set of threat indicators. Vetting by both a human analyst and GOSINT itself improves indicator quality and threat detection efficacy. There is no limit to the number of indicator sources you can add.

We have realized a lot of value from many open source feeds, as well as from Twitter via its API, along with Cisco Umbrella, VirusTotal, and others.

GOSINT Automatic Indicators

As part of the vetting process, currently GOSINT can take several actions to provide additional context to indicators in the pre-processing phase. An analyst can run indicators through Cisco Umbrella, ThreatCrowd, VirusTotal, and other sources. The information returned from these services can help an analyst reach a verdict on the value of the indicator, as well as tag the indicator with additional context that might be used later on in the analysis pipeline.

There is also a “Recipe Manager” that allows you to perform multiple operations on threat indicators from various sources. Say, for example, you always want to compare SHA256 hash values from a favorite Twitter feed against the VirusTotal API and, if there are more than three detections, add the hash indicators to production (a rough sketch of that check follows below). The manager offers several configurable options to allow analysts to speed up their indicator processing and enrichment.
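
To make the idea concrete, here is a minimal Python sketch of that kind of recipe check, written outside of GOSINT itself. The VirusTotal v2 file-report endpoint is real, but the API key, hash list, and three-detection threshold are placeholders, and GOSINT’s own recipe implementation may differ.

```python
import requests

VT_URL = "https://www.virustotal.com/vtapi/v2/file/report"
API_KEY = "YOUR_VT_API_KEY"   # placeholder API key
THRESHOLD = 3                 # promote only if more than 3 engines flag the sample

def promote_if_detected(sha256_hashes):
    """Return the subset of hashes worth promoting to production."""
    promoted = []
    for h in sha256_hashes:
        report = requests.get(VT_URL, params={"apikey": API_KEY, "resource": h}).json()
        # 'positives' is the number of antivirus engines that detected the sample
        if report.get("response_code") == 1 and report.get("positives", 0) > THRESHOLD:
            promoted.append(h)
    return promoted
```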

GOSINT Recipe Manager

GOSINT also has another useful feature in its “Ad Hoc Input” option. This allows an analyst to point GOSINT at a URL and fetch any or all indicators available. For example, if an analyst reads a blog about a particular malware campaign or malware analysis, GOSINT can crawl the blog for indicators and import them for pre-processing, as the sketch below illustrates. This ad hoc method allows analysts to quickly import indicators from content that cannot be automatically subscribed to, or that only has intermittent data available.
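
As an illustration of what this kind of ad hoc scraping involves, here is a small Python sketch (not GOSINT’s actual implementation) that fetches a page and pulls candidate SHA256 hashes, IPv4 addresses, and domains out of its text with simple regular expressions. The example URL is hypothetical.

```python
import re
import requests

IOC_PATTERNS = {
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|info|biz|ru|cn)\b", re.I),
}

def extract_iocs(url):
    """Fetch a page and return candidate indicators found in its text."""
    text = requests.get(url, timeout=10).text
    return {name: sorted(set(p.findall(text))) for name, p in IOC_PATTERNS.items()}

# Example usage against a malware-analysis write-up (hypothetical URL):
# print(extract_iocs("https://example.com/malware-campaign-writeup"))
```

Anything extracted this way still goes through the same pre-processing and analyst vetting described above.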

GOSINT Ad-Hoc Indicators

If you work with threat indicators on a regular basis you should check out GOSINT today. Our CSIRT team has already seen value in importing these additional feeds into our analysis pipeline, and GOSINT has helped us pick up the very latest threat indicators before they go stale.

Authors

Jeff Bollinger

CSIRT Manager

Infosec CSIRT


Typically, Talos has the luxury of time when conducting research. We can carefully draft a report that clearly lays out the evidence and leads the reader to a clear understanding of our well supported findings. A great deal of time is spent ensuring that the correct words and logical paths are used so that we are both absolutely clear and absolutely correct. Frequently, the goal is to inform and educate readers about specific threats or techniques.

There are times, however, when we are documenting our research in something very close to real-time. The recent WannaCry and Nyetya events are excellent examples of this. Our goal changes here, as does our process. Here we are racing the clock to get accurate, impactful, and actionable information to help customers react even while new information is coming in.

In these situations, and in certain other kinds of investigations, it is necessary for us to talk about something when we aren’t 100% certain we are correct.

Read More

Authors

Talos Group

Talos Security Intelligence & Research Group


Like many of you out there, Vikram Hosakote was one of those kids who was always taking apart the remote controls and other electronic gadgets in his home to see how they worked.

By the time he was 11 he had gotten his hands on an x86 box, and soon he was learning COBOL and Fortran in support of the mainframes and batch processing at his school. Today Vikram works on Cisco’s Metacloud team, where he focuses on OpenStack and on deploying and developing cloud products for Kolla customers. He’s also a core reviewer on the OpenStack Kolla and Kolla-Ansible projects. All of which means he has terrific first-hand insight into cloud technology, container technology, and the efforts to make them work together seamlessly. In this episode, Vikram specifically speaks about:

  • What it’s like to grow up in India
  • How he got involved in open source
  • The current status of OpenDaylight
  • What the OpenStack Kolla project does
  • Use cases for containers
  • Why Kubernetes is so popular
  • The difference between what enterprises and service providers want in a cloud

See the video podcast on our YouTube page, or listen to the audio version on iTunes. And if you like what you hear, we invite you to subscribe to our channel so you don’t miss any of the other exciting podcasts we have scheduled over the next several months.

Authors

Ali Amagasu

Marketing Communications Manager


By John Chapman, Cisco Fellow and CTO, Cable Access, Cisco

For the first time in the United States, we’ll be hosting our Full Duplex (FDX) DOCSIS proof of concept with Intel at the CableLabs Summer Conference this week. We unveiled the demo this past May at ANGA.COM.

FDX: Groundbreaking Technology

So why is FDX so important?

An all-DOCSIS 3.1 downstream plant would allow operators to reach speeds of 10 Gigabits per second in the downstream. With FDX DOCSIS, the upstream would be extended to 5 Gbps. And if a node contains one forward path and two reverse paths, then that optical node would have 10 Gbps capacity in both the downstream and upstream, matching the 10 Gbps wavelength used to feed the digital optical node. So with DOCSIS FDX, you’d have the equivalent of a fiber wavelength – without actually running fiber.

About the FDX Demonstration

The proof of concept demonstration is only 96 MHz symmetrical, because we are relying on current technology, and today’s cable modems “top out” at an upstream spectral location of 204 MHz. The 96 MHz shared spectrum in the demo will have a downstream rate of 890 Mbps (with 4K QAM) and an upstream rate of 680 Mbps (with 1K QAM).
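
As a rough back-of-envelope check on those numbers (an approximation that ignores OFDM framing, pilots, and FEC overhead): 4K QAM carries 12 bits per symbol and 1K QAM carries 10, so 96 MHz of spectrum has a raw ceiling of about 1,152 Mbps down and 960 Mbps up, which the quoted 890/680 Mbps usable rates sit comfortably under once overhead is accounted for.

```python
import math

def raw_capacity_mbps(bandwidth_mhz, qam_order):
    """Ideal capacity: occupied bandwidth times bits per symbol, no overhead."""
    return bandwidth_mhz * math.log2(qam_order)

print(raw_capacity_mbps(96, 4096))  # ~1152 Mbps raw vs. 890 Mbps quoted downstream
print(raw_capacity_mbps(96, 1024))  # ~960 Mbps raw vs. 680 Mbps quoted upstream
```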

Here’s what you’ll see at the CableLabs conference:

A 96 MHz chunk of spectrum, located between 108 MHz and 204 MHz, with traffic moving simultaneously up and downstream. We’ll show a node transmitting and receiving at the same time, on the same spectrum.

Watch the Cisco and Intel ANGA.COM demo on periscope.

As you can see, the capabilities presented in our FDX demo have expanded significantly. At last year’s CableLabs conference, we gave you a taste of what we were working on with FDX and demonstrated the echo canceller functionality – no packets, just raw bits. This year’s demo is the real deal.

Intel’s contribution to the demonstration is its FPGA (Field Programmable Gate Array) silicon, used in the remote PHY device (RPD), and the Intel® Puma™ 7 SoC, used in the cable modem devices. The Intel FPGA is valuable for its role in rapidly responding to marketplace developments – like FDX. In the demonstration it serves as a fully programmable DOCSIS 3.1 Remote PHY system on chip (SoC) platform running OpenRPD open source software. The Intel Puma™ 7 SoC provides robustness against interference, enabling downstream traffic to be received from the CMTS while another modem is sending upstream data in the same spectrum.

The magic in the FPGA silicon is our (substantial) FDX work on echo cancellation. One way to envision this is to think about the noise-canceling headphones people tend to wear on airplanes. The headphones listen to the ambient noise of the plane, and create an inverse signal that cancels it out. In the case of FDX DOCSIS, what one modem transmits, the other sees as noise. An FDX echo canceller, like we’re demonstrating at CableLabs Summer Conference, removes multiples of such “echoes.”

How is this possible? Ultimately, echo cancellation is an example of what can be achieved when you have a whole lot of gates and a whole lot of DSP horsepower.

What FDX Means to the Cable Industry

We believe FDX ultimately means that DOCSIS will live on as an industry standard, because FDX creates what is essentially a fiber equivalent, throughput-wise, over coax. Moving 10 Gbps downstream, toward nodes, is already within reach; two upstream node ports running FDX yield 10 Gbps upstream.

See you in Keystone, CO – August 6-9

Make sure you stop by and see us at the @CableLabs Summer Conference in Keystone, #Colorado, on August 6. Have questions or comments? Tweet us @CiscoSP360.

Authors

Daniel Etman

Product Marketing Director

Cisco's Cable Access Business


CLI programming skills are still relevant in IOS XE, but adding a Linux container with Python natively embedded in release 16.5 takes programmable networks to the next level.

With another Cisco Live fading in the rear-view mirror, I find myself reflecting on some of the really interesting events that happened. I’m not talking about the customer appreciation event or partner parties, but rather what did I see that was different from previous years? What are the trends that I see emerging that might not have been as important last year?

The DevNet team with Chuck Robbins at Cisco Live 2017 in Las Vegas

One of the trends that has presented itself over the past several Cisco Lives, and just continues to create more and more “buzz,” is the addition of programming and the capabilities showcased inside the DEVNET area at Cisco Live.

For me, each time I attend a Live, I seem to spend more and more of that time at DEVNET, and this year was no exception. It’s not just the newer, more interactive ways content is being shared with our customers, such as workbenches and self-paced labs; it’s the energy from all of our customers who are seeing the value in programming the network.

Traditional network engineers look to supercharge their networks with Python on IOS-XE

For a traditional network engineer, who has spent his or her career living in CLI and building out networks one way, some of these concepts and technologies can be a bit overwhelming. What exactly does Ansible do? What are the differences between RESTCONF and NETCONF and why should I care? How does YANG fit into this? Why has Python become so important? But with the anxiety around all of these changes, there is also a tremendous amount of excitement. I witnessed and participated in many conversations around what would be the best thing to learn first and which programming technology would be the most impactful immediately.

In the midst of all of this, I was running a session on its maiden voyage at DEVNET — “Supercharge Your Network with Python on IOS-XE.” There was a lot of interest in this one from the traditional network engineers, the same types of folks I pointed out earlier, who had been feeling a bit overwhelmed. This session demonstrated some of the key capabilities added to IOS-XE in release 16.5, centered on a Linux container with Python natively embedded. Aside from learning some basics of Python, what my audience found most exciting was how we have not only added a container that can run Python, but how we have integrated that container into some of the core IOS services that our customers have come to know and love.

The two key integration points that caused so much conversation were the built-in library that permits the Python container to run native IOS commands as part of a script, and the ability to leverage Embedded Event Manager (EEM) inside IOS to trigger an on-box Python script. These two really provided the “Eureka” moment for the traditional network engineers as they began to understand why putting a Python container on the box would be useful.

Now the vision of an automated network can be seen with more clarity. Using something like EEM to create a trigger in response to changing network conditions has been possible for years, but what could be done in response to that trigger has always been limited to more static IOS commands. Using a programming language like Python makes the changes that can be pushed out far more dynamic and impactful; a minimal sketch of the pattern follows below.
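
Here is that minimal sketch, assuming the guest shell is enabled and the script has been copied to flash; the interface name and the EEM applet wiring shown in the comments are placeholders, not a prescribed configuration.

```python
# remediate.py - runs inside the IOS-XE guest shell.
# An EEM applet could invoke it on a syslog trigger, for example:
#   event manager applet IF_DOWN
#    event syslog pattern "LINEPROTO-5-UPDOWN.*GigabitEthernet1/0/1.*down"
#    action 1.0 cli command "guestshell run python /flash/remediate.py"
from cli import cli, configure

def remediate(interface="GigabitEthernet1/0/1"):  # placeholder interface
    status = cli("show interface {} | include line protocol".format(interface))
    if "down" in status:
        # Example corrective action: bounce the interface, then note it in the log
        configure(["interface " + interface, "shutdown", "no shutdown"])
        cli("send log remediate.py bounced " + interface)

if __name__ == "__main__":
    remediate()
```

The point is not the specific action but that the response to an EEM trigger can now be arbitrary Python logic rather than a fixed set of IOS commands.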

CLI programming skills still relevant in IOS XE

I think one of the most valuable lessons customers came away with from these new programming capabilities in IOS-XE was that knowledge of IOS programming through the CLI has not become irrelevant. On the contrary, it is only through that CLI knowledge that capabilities like on-box Python or configuring via NETCONF can be fully impactful. Details of the workbench session I ran can be found on my GitHub site.

You can read about the new wave of programming network devices running IOS-XE in more detail here. On this page, we discuss application hosting on IOS-XE, running Python, using NETCONF and RESTCONF, and Day 0 provisioning. Topics on this page are the cornerstone of how IOS-XE is becoming an open and programmable operating system for running a network. I also encourage you to try DevNet’s Learning Track on Network Programmability for Network Engineers. It’s a set of self-paced learning labs that takes you from learning the foundational elements of programming to working with device-level interfaces and developing with network controllers like APIC-EM.

It’s a new day for operating and running a network, and I am very excited to see where these technologies lead. As a new session at Cisco Live this year, I am thrilled with how much interest it generated, and I am eagerly awaiting what the next year, and the next Cisco Live, holds.


We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!

Twitter @CiscoDevNet | Facebook | LinkedIn

Visit the new Developer Video Channel

Authors

Ryan Shoemaker

Technical Solutions Architect

Sales


What can you do when a growing cloud bill is evidence of your success?

Congratulations! You have pushed and pulled to get your IT department to embrace a more agile service delivery model. You have added public cloud to your on-premises IT service delivery portfolio. But as a result, your CIO is now having a conversation with your CFO about that budget line item called “monthly cloud bill.”

CIO: “Based on your input we moved to a usage-based IT service model.”

CFO: “Yes, but our cloud costs are up 50% over last year.”

CIO: “We’ve worked hard to shift how IT sources and delivers services. It’s working.”

CFO: “OK. But, you need to find a way to cut this cloud bill.”

Cloud cost cutting is different

A focus on cost-cutting is not new for IT operations. But optimizing your cloud bill is different from trimming operating costs and wringing efficiencies from on-premises operations. It’s a hard cost. Someone cuts a check to pay your cloud vendor every month. And it is pay-per-use.

Quite simply, the goal with cost optimization in a pay-per-use environment is to avoid consumption that doesn’t add value.

If you have one or more IaaS clouds, or even a software-defined datacenter on-premises, you can use your cloud management and orchestration tools to optimize consumption and cut big pieces out of your cloud bill.

These simple lifehacks use automation and API-driven orchestration to drive efficiencies:

  1. Standardize instance size – If you give a techie a credit card and say, “Go provision what you need,” they often reach for more than they need just in case. But “just in case” can mean “twice as much.” For example, if you study the Microsoft Azure pricing calculator, each step up in instance size doubles the price. (e.g., D1 = $0.14 per hour. D2 = $0.28 per hour etc.) Conversely, each step down is half the price.

Lifehack: Figure out the optimal instance size for each tier of frequently deployed workloads. That starts with determining the point at which paying for a larger instance no longer improves performance. Then, get everyone who deploys that workload to deploy with the optimal configuration; a rough sketch of that selection step follows below.
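
A hedged sketch of that selection step, with hypothetical benchmark numbers (the prices simply follow the doubling pattern from the Azure calculator example above):

```python
# Hypothetical requests-per-second measured for one workload tier at each size
benchmarks = {"D1": 450, "D2": 900, "D3": 1150, "D4": 1200}
hourly_cost = {"D1": 0.14, "D2": 0.28, "D3": 0.56, "D4": 1.12}  # each step doubles
target_rps = 850  # service-level target for this tier

def optimal_size(benchmarks, hourly_cost, target):
    """Cheapest instance size that still meets the performance target."""
    meeting = [size for size, rps in benchmarks.items() if rps >= target]
    return min(meeting, key=lambda size: hourly_cost[size]) if meeting else None

print(optimal_size(benchmarks, hourly_cost, target_rps))  # -> D2
```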

  2. Turn it off when you are done – Cloud bills are littered with zombie workloads. As we move to a more agile and continuous software lifecycle, more workloads are deployed for shorter periods of time. But people get busy and forget.

Lifehack: Automate the deletion of short-lived workloads after a preset period of time. Set a policy to delete workloads for developers after two weeks. Or QA and test after two days. Or you can do this categorically for certain types of temporary workloads. Or even for anything related to an underfunded project.
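
CloudCenter can enforce retention policies like this directly; purely as a generic illustration of the idea, here is a sketch using the AWS boto3 SDK that terminates tagged developer instances past a cutoff age. The tag key/value and the two-week window are assumptions.

```python
from datetime import datetime, timedelta, timezone
import boto3

MAX_AGE = timedelta(days=14)  # e.g., two weeks for developer workloads

def reap_expired_instances(tag_key="lifecycle", tag_value="dev-temporary"):
    """Terminate running instances with the temporary tag that are older than MAX_AGE."""
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:" + tag_key, "Values": [tag_value]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    expired = [i["InstanceId"]
               for r in reservations for i in r["Instances"]
               if i["LaunchTime"] < cutoff]
    if expired:
        ec2.terminate_instances(InstanceIds=expired)
    return expired
```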

  3. Suspend it when you are not using it – Developers don’t actually work 24/7. So why not automatically suspend their cloud workloads at night? That can cut that portion of your cloud bill roughly in half.

Lifehack: Automate suspension of workloads for specific users on a pre-set schedule. Don’t forget to unsuspend on schedule if you want users to actually participate in this effort. Additionally, give them an easy way to click an exception if they get inspired and want to work in the middle of the night.
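
Here is a sketch of the nightly half of that schedule, again as a generic boto3 illustration rather than a CloudCenter feature (the tag name is an assumption): stop tagged instances in the evening and start them again in the morning from whatever scheduler you already run.

```python
import boto3

def set_scheduled_instances(action, tag_value="dev-nightly-suspend"):
    """Stop or start every instance carrying the suspend tag; action is 'stop' or 'start'."""
    ec2 = boto3.client("ec2")
    current_state = "running" if action == "stop" else "stopped"
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:schedule", "Values": [tag_value]},
                 {"Name": "instance-state-name", "Values": [current_state]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids and action == "stop":
        ec2.stop_instances(InstanceIds=ids)
    elif ids:
        ec2.start_instances(InstanceIds=ids)
    return ids

# Evening job: set_scheduled_instances("stop"); morning job: set_scheduled_instances("start")
```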

  4. Scale it when you need it – With APIs and load balancers, there is no reason to provision cloud resources that sit idle waiting for peak workload. Scale out and deploy more application instances when needed. Then scale back when not needed. That way, you won’t pay for what you don’t use.

Lifehack: Use scaling policies to automatically deploy more instances as usage grows. You can trigger scaling based on simple infrastructure metrics like CPU utilization. But it is even better to use more powerful APM tools like AppDynamics to instrument your entire application ecosystem. You can detect and respond to emerging user metrics or business process metrics to ensure quality of service where it counts most.
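
For the simple infrastructure-metric case, a target-tracking policy is often enough. The sketch below uses boto3 with a hypothetical autoscaling group name and a 60% average-CPU target; an APM-driven approach would feed application or business metrics into the same kind of policy instead.

```python
import boto3

def attach_cpu_scaling_policy(group_name="web-tier-asg", cpu_target=60.0):
    """Scale the group out and back in to hold average CPU near the target."""
    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=group_name,
        PolicyName="keep-cpu-near-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": cpu_target,
        },
    )
```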

  5. Have a plan – If you have hard cost limits for projects, teams, or even individuals, remember that people respond well when they know the limits and can easily regulate their own usage.

Lifehack: Set a budget. Let people know someone is monitoring cloud usage and costs against that budget. Then communicate limits in a way that matters to these individuals. Make it easy for those individuals to see how their behavior is affecting tracked cost and usage metrics.

  6. Negotiate a discount – Yes, you can negotiate a discount with your cloud vendor. If you are spending approximately $1 million over three years, you can ask for and get single-digit price reductions. You need to be at a tens-of-millions spend rate to get a custom discount. But you won’t get it if you don’t ask.

Lifehack: Consolidate your cloud accounts. If everyone consumes cloud services centrally, you can standardize services, reduce shadow IT, and take advantage of a whole range of other benefits. But you can also get a rolled-up, aggregated view of usage and spending. You need that before you reach out and ask for a discount.

Cisco CloudCenter when it counts

Cisco CloudCenter is an application-centric cloud management platform. As you might expect, it can help you implement all the lifehacks I have shared above.

I’ll post a follow-up blog on how much cost reduction is possible using these techniques. Obviously, your mileage may vary. But I have a simple worksheet that I’ll share.

Watch this 30-minute webcast to see how CloudCenter can help you implement these lifehacks to cut your cloud bill.

Authors

Kurt Milne

Marketing Manager, US

CloudCenter Marketing


When I examined President Trump’s cybersecurity executive order, I focused on two things: what it really means and what its most important aspects are.

Formally titled the Presidential Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, the order was issued May 11.

In my first post on this subject I focused on what immediately struck me as a new demand for executive and leadership accountability. This is a topic that this president is fairly unique in emphasizing, and I think he has clearly demonstrated that he’s willing to hold people accountable in his administration.

Today’s cyber defender must have specialized skills.

But there is another important part buried deep within the order (Sec. 3 (d)), one that cyber experts have largely overlooked: the cyber workforce, which the order addresses in a requirement for workforce development. The provision calls for several specific measures to assess the state of the cyber workforce, and to add to its ranks.

As a nation, we face a massive skills shortage for defenders in cyberspace. The role has evolved from the basic skills of a “firewall guy” to one that requires the operator to possess detailed technical capabilities. Today’s cyber defender must be able to perform advanced security analysis and be equipped to take definitive actions within the network to address security incidents as they occur.

This is a unique skillset and one that is created over many years of training. But here is the real challenge – we really don’t have many people with those skills. Estimates vary, but we need somewhere between 500,000 and 1 million such people just to fill today’s gaps.

The challenge is even sharper for the federal government because in most cases, agencies cannot hire foreign nationals for those jobs. Many such federal positions also require the employee to have an active security clearance.

So what do we do here?

Deep technical training of skilled personnel is clearly part of the answer, but so is trustworthy automation, and a lot of it. But the details really matter here, and this is the heart of the issue.

Today, many of these training environments have been built using open source toolkits, including many of the federal government’s own defensive cyber operations toolsets. They build virtual networks for training and use inexpensive off-the-shelf or open source components. These systems are inexpensive, but they do not accurately replicate real networks and circumstances for the operators, and they provide very limited support options for a large training environment.

It will be absolutely critical for the US Government to partner with industry on adapting commercial toolsets and capabilities, and on building real-world virtual environments based on the commercial products in use today, to adequately prepare students for what they will see and use in the real-world cyber jungle.

We must build extremely robust cyber defense classrooms that are rooted in large, individualized virtual environments (virtual networks) on a per student basis. And these classrooms must provide each student their own virtualized cyber workspace with a host of commercial products that are in common use today.

But training our next generation of cyber defenders on open source routers, switches, firewalls, IPS, and VPN technologies is a disservice to the student and gives the government a false sense of security. These operators will be completely unprepared to face real-world challenges when they emerge with their newly minted “Cyber Defender” badges.

So how do we go about solving this? It’s actually not all that complicated.

The federal government must focus its acquisition strategy on commercially available virtualized offerings. This means virtual routers, switches, firewalls, IDS, VPN Gateways, email and web security. Most importantly, the government should use commercial threat analytics running within these virtual training environments.

A simple example shown below is a cyber threat response clinic that Cisco runs to help train our own people and partners on defensive tactics.

Diagram of Cisco cyber threat response clinic.

Finally, the government must focus on building partnerships within the cyber defense industry – focusing on the products and technologies that are actually deployed today. That will help ensure that training environments closely mirror the types of network landscapes and threats that the trainees will one day face in real life.

Authors

Andrew Benhase

Federal Architect

US Federal


Today, Talos is publishing a glimpse into the most prevalent threats we’ve observed between July 28 and August 4. As with previous round-ups, this post isn’t meant to be an in-depth analysis. Instead, this post will summarize the threats we’ve observed by highlighting key behavior characteristics, indicators of compromise, and how our customers are automatically protected from these threats.

As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of date of publication. Detection and coverage for the following threats is subject to updates pending additional threat or vulnerability analysis. For the most current information, please refer to your FireSIGHT Management Center, Snort.org, or ClamAV.net.

Read more »

Authors

Talos Group

Talos Security Intelligence & Research Group


Vulnerabilities discovered by Aleksandar Nikolic and Tyler Bohan of Cisco Talos.

Today, Talos is disclosing multiple vulnerabilities that have been identified in the Kakadu JPEG 2000 SDK. The vulnerabilities manifest in a way that could be exploited if a user opens a specially crafted JPEG 2000 file. Talos has coordinated with Kakadu to ensure relevant details regarding the vulnerabilities have been shared. In addition, Talos has developed Snort rules that can detect attempts to exploit these flaws.

Read More

Authors

Talos Group

Talos Security Intelligence & Research Group