In the past few months, Talos has observed an uptick in the number of Chinese websites offering online DDoS services. Many of these websites have a nearly identical layout and design, offering a simple interface in which the user selects a target’s host, port, attack method, and duration of attack. In addition, the majority of these sites have been registered within the past six months. However, the websites operate under different group names and have different registrants. In addition, Talos has observed administrators of these websites launching attacks on one another. Talos sought to research the actors responsible for creating these platforms and analyze why they have become more prevalent lately.
In this blog post, we will begin by looking at the DDoS industry in China and charting the shift toward online DDoS platforms. Then we will examine the types of DDoS platforms created recently, noting their similarities and differences. Finally, we will look into the source code likely responsible for the recent increase in these nearly identical DDoS websites.
What if every time you wanted to roll out an app, you didn’t need to figure out what new computer/server you were going to put it on? What if I told you that there were compute resources built into your network? And, what if I told you they even had access to serial ports for IoT devices? Shall we talk?
On June 20th, Cisco announced “The New Network. Intuitive.” The announcement included new programmability capabilities inside of Cisco switches and routers, providing a strong “YES” to the questions above – Yes, compute resources are in the network; Yes, they have access to the serial ports. So yes … let’s talk about how you might move some of your software closer to the devices at the edge of your network.
When we have devices at the edge of the network, we are starting to delve into the world of IoT. And, in IoT, we almost always have four things:
A thing
Some data
A network
An algorithm (that’s where your software comes in)
By using the capabilities in the New Network, I’m going to show you how to better manage item 2 on that list: data. There’s more, but this is a key benefit of doing things the new way instead of the old way.
If you want to see some other related blogs, you may want to warm up with a blog by Jeff McLaughlin that describes some of the new things you can do with the new network. In another blog, Hank Preston discusses what may be the easiest way for a network engineer to get started using programmability features in the network.
Why host apps on a router?
Use Case #1
Imagine you want to run a small program at a branch office. What have you done in the past? I think what most folks have done is to install some sort of compute appliance. That network diagram would look something like this.
Figure 1 – Compute Devices at Branch Offices
The problem with this methodology is that you now have a compute appliance installed, which needs to be maintained. It consumes space and electricity. What if you could eliminate the device completely and get that same functionality out of the network router, which you need anyway?
Use Case #2
Another example of old school is this IoT implementation. In the past, it was common to use a terminal server to connect multiple serial devices to the network, and then push that traffic back to a centralized compute resource. Below is a picture of what that would look like.
Figure 2 – Old School Terminal Server for IoT Connectivity
The problem with this method is that if the network is down, the IoT devices are not supported by compute. Also, this method transmits every byte that goes into and out of the IoT devices across the network. And finally, the terminal server itself needs to be maintained.
What if you could eliminate the device completely and get that same functionality out of the network router, which you need anyway?
The New World Solution – Solves both use cases!
In the below diagram, you will notice that we have eliminated a fair bit of hardware and headache from the diagram! Let’s talk about just a couple of the advantages of this method.
Figure 3 – Edge Compute in a Router
First, you have entirely eliminated a device at the edge of the network, reducing time and effort of management at the edge. Second, you can run the code at the edge instead of the middle of the network. What if the application was super simple, like a heat sensor on some machinery?
Here’s some pseudocode to make the point:

    if read(IoT Device 1) > 180 then {
        set(IoT Device 2) := "OFF"
        msg("IoT Device 2 status [off],[heat]", "central alarm app")
    }
So, what’s nice about this? How about the fact that the only time traffic goes over the network is when it actually needs to? How about the reliability being dramatically increased because the compute resource is directly connected to the IoT devices?
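The pseudocode above can be sketched as runnable Python. The device I/O helpers here (`read_sensor`, `set_device`, `send_alert`) are hypothetical stand-ins for whatever serial or API access your container actually has to the devices; the point is simply that nothing crosses the network until the threshold trips.

```python
HEAT_LIMIT = 180  # threshold from the pseudocode above

def check_and_act(read_sensor, set_device, send_alert):
    """Poll the heat sensor once; shut the machine off and alert only if too hot."""
    temperature = read_sensor("IoT Device 1")
    if temperature > HEAT_LIMIT:
        set_device("IoT Device 2", "OFF")
        # Only now does anything cross the network:
        send_alert("IoT Device 2 status [off],[heat]", "central alarm app")
        return True   # action taken
    return False      # nothing to report, no network traffic

if __name__ == "__main__":
    # Stub I/O for demonstration: the sensor reads 200, so the device is shut off.
    actions = []
    took_action = check_and_act(
        read_sensor=lambda dev: 200,
        set_device=lambda dev, state: actions.append((dev, state)),
        send_alert=lambda text, dest: actions.append(("alert", dest)),
    )
    print(took_action, actions)
```

Swap the stub lambdas for real serial reads and writes and the same logic runs unchanged at the edge.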
But wait, there’s more. What if you wanted to increase the reliability? For the price and effort we had before, we can add another router and increase the reliability of the whole setup at the edge as shown in the diagram below. And for the record, this is almost exactly what one of our customers is doing to improve public safety.
What does app hosting on a router look like?
Cisco has several different methods for putting an app on a router. For today, I’m going to talk about how it looks when you do it on IOS XE, since we just announced some new functionality in that area. This operating system makes it possible to run both virtual machines (KVM) and Linux Containers (LXC) in order to host applications right on the routers.
If you would like to learn about the details of what is working today and how this all works, you can visit the Cisco DevNet IOS XE page. We have a fair bit of information for you to get started. Another great place to learn about this from a slightly different perspective is our IOX page. In the meantime, I’ll provide a little bit more detail on the router configuration part – just to show you what it looks like from a Cisco IOS perspective. But, I highly recommend hitting one of the links above to really get the best description and details.
How does this really work?
On a Cisco IOS XE router, there are five major steps to get an application running using a KVM. Below is some sample code just to give you an idea of what it might look like.
NOTE: I am intentionally leaving out details to make this readable in a blog. You need to go here in order to get the details and exact commands.
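To give you a flavor of those five steps, here is a rough sketch of the virtual-service flow from the IOS XE CLI. The service and package names below are invented, and the exact commands vary by platform and software release, so treat this as a sketch rather than a copy-paste recipe – the links above have the real details.

```
! 1. Copy the packaged VM to the router
copy tftp://10.0.0.1/my_app.ova bootflash:

! 2. Install the virtual service from the package
virtual-service install name MY_APP package bootflash:my_app.ova

! 3. Configure and activate it
configure terminal
 virtual-service MY_APP
  activate
 end

! 4. Verify it is running
show virtual-service list

! 5. Attach to the app's console if needed
virtual-service connect name MY_APP console
```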
Conclusion
In this blog, I wanted to provide a short introduction to running an application on a Cisco device running IOS XE. I hope that the “Why” section was compelling. Having seen what some of our customers are doing with IoT and this functionality, I hope you will explore it. The “What” section was strictly designed to cement the picture: “You can run containers right on the box!” And finally, in the “How” section, I showed you a sampling of the commands it would take to put a KVM package onto a router and get it running. Clearly, the “How” section is incomplete. But, hopefully, you get an idea that it’s not really all that hard. And, I did leave out how to create the package. That would be too much detail for a blog. But, you can get all the details you want if you simply go to Cisco DevNet’s IOS XE, or IOX, page to learn more.
We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!
Email continues to be both the number one way businesspeople across the globe communicate and the number one threat vector that can endanger the very thing it is trying to enable: getting business done. However, in our global economy, senders and receivers of email can now, more than ever, be anywhere in the world. Email policies, controls, and requirements need to be easily configurable and controllable to meet the demands of today’s global business.
These requirements include the ability to control email from senders based on geographical location, without restricting things so tightly that an unknown sender’s critical email from a previously unknown region is automatically blocked. They also include the ability to make intelligent decisions about email from certain geographies, allowing valid emails through even when most email from a geography is considered suspect. Controls, content filters, and detailed reporting and message tracking are critical to a secure global email security strategy.
Email Controls Based On Your Sender’s Geographical Location
First, companies need to control email from senders based on their geographical location. If an organization doesn’t have a business requirement to communicate with senders from a region, the administrator needs full control over how email from that region is received. Manually setting up processes for each country’s requirements is time consuming and difficult, and it doesn’t let the organization adapt easily as the business changes.
New combined configuration options in Cisco Email Security give administrators full control, as well as the ability to set more flexible policies for their entire organization. Profiles can be created, and senders assigned to them, to control aspects such as message size, recipients per hour, messages per hour, or when to enforce SPF, DKIM, and DMARC. This gives companies the flexibility to engage in new markets while still complying with email security requirements.
Balance Enabling Known and Unknown Senders Based on Geographic Location
Second, there needs to be a balance between enabling communication with unknown senders who have good intentions and restricting potentially malicious email from that same geography. Since senders change frequently, it is very difficult for an administrator to create and maintain static relationships for authorized senders. Whitelists and blacklists are not sufficient given this fast turnover of senders; what is needed instead is granular control, specifying that certain internal groups are allowed to communicate while others are not.
Many organizations need to take this control to a more granular level. The policies that are created on a global basis for an organization can be further tuned at the “mail from” and/or “recipient to” pairings. Based on business policies, administrators can now interrogate all aspects of an email in conjunction with geographical IP information to enforce granular requirements. In general, content filters define:
conditions that determine when the appliance uses a content filter to scan a message
actions that the appliance takes on a message
action variables that the appliance can add to a message when modifying it
Now, with the integration of Geo IP into content filters, administrators can look deeper into the body and attachments of an email to decide how to process it based on its geographical origin. Actions such as subject line modification, BCC’ing another recipient, attachment analysis, and quarantining can now be enforced based on business policies around geographical characteristics.
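To make the condition/action pattern concrete, here is an illustrative Python sketch of a geo-aware filter decision. This is not the appliance’s actual filter language; the country codes, dispositions, and the `secops@example.com` address are all invented for illustration.

```python
def apply_geo_filter(message, quarantine_countries, tag_countries):
    """Return (disposition, actions) for a message based on sender geography.

    - Senders from quarantine_countries are held for review.
    - Senders from tag_countries are delivered, but tagged and BCC'd to security.
    - Everyone else is delivered unmodified.
    """
    actions = []
    country = message["sender_country"]
    if country in quarantine_countries:
        return "quarantine", actions
    if country in tag_countries:
        # Condition matched: modify the subject and BCC the security team.
        message["subject"] = "[GEO:%s] %s" % (country, message["subject"])
        actions.append(("modify_subject", message["subject"]))
        actions.append(("bcc", "secops@example.com"))
    return "deliver", actions

# Example: a message from a "tag" geography is delivered but flagged.
msg = {"sender_country": "YY", "subject": "Quote request"}
print(apply_geo_filter(msg, quarantine_countries={"XX"}, tag_countries={"YY"}))
```

The real appliance expresses the same idea declaratively (conditions, actions, action variables), but the decision flow is the same shape.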
Sender Visibility – Correlating Profile and Email Together From Geographic Locations
Lastly, administrators in a global business need better visibility into senders by geography, correlating inbound email with sender profiles from geographically diverse sources. Geographic Distribution reports give deep information on all TCP connections as well as the email messages processed by the gateway. Administrators can immediately see which geographies send the most valid “clean” email and which show more malicious intent. Taking this one step further into message tracking, administrators or help desk staff can now use the geographical source of email in their tracking queries.
Businesses are global, and policies must be configured to align with email flow and threat exposure potential without hampering business productivity. Administrators must be able to combine very complex business requirements in a simple, intuitive interface, and email controls must be granular enough to get down to the smallest increment of sender/recipient pairings while powerful enough to control all inbound email.
It’s hard to imagine where we’d be as a civilization without the dreamers and doers who aren’t afraid to ask “Why?” or “How?” Just the discoveries and advances in science and medicine and technology in my lifetime are mind-boggling: ice on Mars and new planets outside our solar system, developments in genome sequencing and stem cell therapy, and personal computing and cell phone technology—even Cisco’s launch of a new era of networking with The Network. Intuitive.
As someone who frequently wonders, “Why didn’t I think of that?” I’m amazed by the capacity and creativity of those who come up with these big ideas. What quality makes a person decide that something might be true and then leads them to persevere until they prove themselves right—or wrong? And then, what makes them try again?
I don’t know the answers to these questions, but I’m grateful that there are those who are so willing to question and try. They’re the reason that we’ll someday have a cure for cancer or people on Mars—or maybe even houses that clean themselves.
What can be done to help make new discoveries possible? One thing is to equip researchers with superior compute and storage resources, which enable them to calculate, manipulate, evaluate, and then store the mountains of data that they gather on the way to any major discovery. High-performance computing (HPC) solutions from Cisco provide researchers—including at the world’s top research universities—with the capacity and capability to answer critical research challenges. Check out this webinar for a great introduction to HPC.
HPC solutions from Cisco are primarily Ethernet based (though InfiniBand is also supported), which means the technology is easy to understand and use and can interoperate seamlessly with the other Ethernet-based systems typically found in a research environment. Importantly—especially in the case of cost-conscious colleges and universities—the use of Ethernet also means that as HPC solutions are replaced in the research environment, they can easily be repurposed for use in other places on campus. In addition, Cisco HPC solutions are scalable and can adapt as workloads evolve, with support for everything from small messages, such as MPI traffic, to jumbo frames for storage reads and writes.
Finally, HPC solutions from Cisco are part of the company’s Connected Research offering, which combines the comprehensive security needed to protect intellectual property with the technology required to foster innovation and collaboration, with research colleagues in the next office or around the globe.
One adopter of high-performance computing solutions from Cisco, Wake Forest University in Winston-Salem, North Carolina, has used high-performance computing for more than a decade, and today, university researchers, faculty members, and students are working toward new breakthroughs in physics, chemistry, biology, computer science, and economics. In physics, for example, researchers are studying energy amplification and identifying new ways to store hydrogen, which might ultimately help cars use fuel cells; in economics, researchers are using analytics to understand market trends with real-world data from program partners.
And I’m looking forward to the reveal of the next Big Thing.
I’m an inventor. I never dreamed I would be, but I am. Thanks to Cisco.
I got the call from Cisco to interview back in 2010, after moving back to India from living and working on the paradise island country of Curacao. I’d always wanted to work for Cisco!
I moved from a laid-back lifestyle to an enthusiastic, charged-up environment with people who want to create change for the world. The Cisco culture influenced me so much, and taught me the most important inventor lesson – it’s never too late, for anything!
I started the journey to inventor/patent holder when I got the chance to work for the (super) engineering team known as SDU (Systems Development Unit), which changed names several times and is now called IPG (Industries Product Group).
We were working to solve challenges that hadn’t been seen at Cisco, and my manager gave me my second inventor lesson. Do something out of the box.
My team started working on connected manufacturing, specifically on PROFINET (Industrial Protocol), and we got support from Cisco to go through a three-day, instructor-led training on PROFINET.
On the last day of training, my co-inventor and I were having tea, and started a friendly debate about the lack of security of this industrial protocol. As many inventors do, we put our thoughts down on a napkin and within two months of discussion with the Patent Team, we got the approval. Three years later, I stand here with my first Patent Plaque in my hands! That’s my third inventor lesson. Work with great people!
The biggest reason I love working here is the chance to work on different teams (currently the Digital Transformation Office (DTO) – India Sales), with different technologies, and with colleagues who have become friends for life. My advice to anyone: never think of any work as small or big, and if things are not working, reach out to senior members or management without hesitation. That’s my fourth inventor lesson. Be bold about failing and starting all over again.
Over the last seven years at Cisco I’ve achieved a lot. (I’ve also become a father, which has taught me patience!) I keep myself ever-engaged with new technologies by earning certifications like CCIE R&S, CCIE DC, CCNA, CCNP, VCP, PROFINET… the list is ever growing.
I leave you with a piece of advice: if I can, then you can too!
Want to join a company that encourages you to pursue your passions?
Let me just start by saying this…10 minutes before we even started the webinar, the questions started rolling in. The first question, of course, being “Do I need to understand programming to be a network engineer?” Something we’re hearing over and over these days. If you’re asking my honest answer on that, I would say…it couldn’t hurt.
In this TechWiseTV workshop we cover a lot, from YANG data modeling, to encoding, to transport protocols…and then we even dive into Python, specifically for the Catalyst 9000 switches just recently announced. Jeff McLaughlin and Fabrizio Maccioni are the experts on this subject. Check out the corresponding TechWiseTV episode with these guys.
I did want to spend a little time talking about YANG data modeling and things like NETCONF and XML, JSON. If you’ve only really heard these words before but aren’t entirely sure how they work together, or even what they have to do with programmability, that’s definitely okay! You can still start diving into programmability and learn along the way. I highly recommend the Learning Labs on DevNet.
Here’s a quick primer, though:
Network Management Protocols and Encoding
There are three fairly common network management protocols: NETCONF, RESTCONF, and gRPC. Due to its lack of scalability, I’m leaving old SNMP out here. Besides, everyone is probably pretty familiar with it already. These three protocols are really about transport…how we’re delivering or communicating the information.
NETCONF
Uses XML encoding
Goes over SSH (typically port 830)
RESTCONF
Uses XML or JSON encoding
Goes over HTTP(S)
Uses the same tools as REST, but is not the same as, say, a REST API
gRPC
Uses Google Protocol Buffers (GPB encoding)
Goes over HTTP2
Allows you to create your own CRUD methods
The encoding is really the format in which the machine can read the calls we are sending it (or some other machine or service is sending it). Machines have trouble just reading human language so we have to send it a format it expects. XML is a bit older and is not very easily read by humans so many people prefer JSON. JSON is still formatted for a machine, but it’s a lot easier for the human mind to comprehend. GPB is really just numbers, no strings. So, obviously not very easily read by a human, but there are benefits to using number codes as well.
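To see the readability difference for yourself, here is a small Python example encoding the same record both ways with only the standard library. The interface data is a generic illustration, not an actual YANG-modeled payload.

```python
import json
import xml.etree.ElementTree as ET

# The same interface data, encoded two ways.
config = {"interface": {"name": "GigabitEthernet1/0/1", "speed": "10000"}}

# JSON: one call, and the result is easy for a human to scan.
as_json = json.dumps(config)

# XML: build the same structure as nested elements.
root = ET.Element("interface")
for key, value in config["interface"].items():
    ET.SubElement(root, key).text = value
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```

Both strings carry identical information; the machine is equally happy with either, which is exactly why the choice mostly comes down to human (and tooling) preference.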
Data Modeling Language
YANG (Yet Another Next Generation) has become a standard data modeling language, at least in the world of network devices. What it actually reminds me of is a class in an object-oriented programming language. The data model tells us what the machine is looking for; say we want to change the speed of a port programmatically. The YANG data model provides the template for what can be configured for a port. Then we can use something like XML or JSON to specify that the port speed should be 10Gb.
As YANG becomes a standard data modeling language used to structure data for multiple devices, we can more easily integrate these devices and utilize the same kinds of programmability skills to interact with them.
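Putting the pieces together, here is a sketch of what a NETCONF `<edit-config>` payload for that port-speed change might look like, built and checked in Python. The `urn:example:interfaces` namespace and leaf names are hypothetical placeholders; a real payload would follow the namespaces of the actual YANG model on your device, and you would send it over SSH with a library such as ncclient.

```python
import xml.etree.ElementTree as ET

# Hypothetical NETCONF <edit-config> payload: set a port's speed to 10Gb.
EDIT_CONFIG = """
<edit-config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <target><running/></target>
  <config>
    <interfaces xmlns="urn:example:interfaces">
      <interface>
        <name>GigabitEthernet1/0/1</name>
        <speed>10000</speed>
      </interface>
    </interfaces>
  </config>
</edit-config>
"""

root = ET.fromstring(EDIT_CONFIG)
# Namespaced lookup for the leaf the (hypothetical) YANG model exposes.
speed = root.find(".//{urn:example:interfaces}speed")
print(speed.text)
```

The YANG model defines which elements are allowed here; the XML is just the encoding of an instance of that model, which is the whole model/encoding/transport split in one picture.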
Still confused? Make sure to watch the workshop and head on over to DevNet for more information. Or maybe we answered some of your questions during the live Q&A. Here’s a sample, but you can find the rest on SlideShare:
Q. Do the Cisco Catalyst 9500 Series Switches support Bidirectional Forwarding Detection (BFD)?
A. Yes. Capabilities are at parity with existing Catalyst platforms, plus additional capabilities.
Q. Will the Cisco Catalyst 9000 Series Switches support VXLAN?
A. Cisco Catalyst 9000 Series Switches support VXLAN encapsulation (data plane). Software-Defined Access (SDA) uses VXLAN encapsulation.
Q. Is there an ordering guide for the Cisco Catalyst 9000 Series Switches yet? Can you provide the link?
A. The ordering guide is not available yet but the datasheets are posted. Please visit: http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9300-series-switches/datasheet-c78738977.html
Q. Is a virtual switching system (VSS) or any Multi-Chassis Link Aggregation Group (MC-LAG) feature available?
A. Cisco StackWise Virtual technology, which is equivalent to VSS for redundancy, is supported on the Cisco Catalyst 9500.
Q. Are these Cisco Catalyst 9000 Switches the same platform as the software-defined data center Nexus 9000 switches?
A. These are new Cisco Catalyst switches designed for the campus, unlike the Nexus 9000 switches positioned for the data center.
Q. Can you repeat the comparable previous generation models?
A. Cisco Catalyst 9300 is a replacement for the Catalyst 3850. The Catalyst 9400 is a replacement for the Catalyst 4500. The Catalyst 9500 is a replacement for the Catalyst 4500-X.
Q. Can you post the part numbers for mGig versions of Cisco Catalyst 9300?
A. C9300-48UXM-E/A
Q. Are there Cisco Small Form-Factor Pluggable (SFP) models planned?
A. There are SFP models in the Cisco Catalyst 9500 Series, including 40G and 10G models.
It’s no secret that manufacturers are concerned about security. Recent events have shown that manufacturing is a rich target for threats that cause physical damage, facility downtime, and breaches of customer data and intellectual property.
The reason for these increasing security threats is that the manufacturing industry offers unique exploitation opportunities, such as:
Legacy equipment or industrial IoT devices that were built with minimal or no security in mind.
Recently, Cisco published the 2017 Midyear Cybersecurity Report (MCR), which reflects not only these areas of concern for manufacturers, but also the changing security landscape for many industries. The 2017 study was fielded from July through September 2016 and included 2,912 respondents from 13 countries across multiple industries.
Some important cybersecurity findings for the manufacturing industry:
28% of manufacturing organizations reported a loss of revenue due to attack(s) in the past year—the average lost revenue was 14%.
46% of manufacturing organizations use six or more vendors, with 20% using more than ten. 63% use six or more products, with 30% using more than ten products.
Nearly 60% of manufacturing organizations report having fewer than 30 employees dedicated to security, while 25% consider a lack of trained personnel a major obstacle to adopting advanced security processes and technology.
The cybersecurity report covers technology trends, impact to businesses, adversary tactics, vulnerabilities, opportunities to better defend against risk, and how to communicate with management.
Since its public disclosure in April 2017, CVE-2017-0199 has been frequently used within malicious Office documents. The vulnerability allows attackers to include Ole2Link objects within RTF documents that launch remote code execution when an HTA application is fetched and parsed by Microsoft Word.
In this recent campaign, attackers combined CVE-2017-0199 exploitation with an earlier exploit, CVE-2012-0158, possibly in an attempt to evade user prompts by Word, or to achieve code execution via a different mechanism. Potentially, this was just a trial run for a new concept. In any case, the attackers made mistakes that rendered the attack far less effective than it could have been.
Analysis of the payload highlights the potential for the Ole2Link exploit to launch other document types, and also demonstrates a lack of rigorous testing procedures by at least one threat actor.
Attackers are obviously trying to find a way around known warning mechanisms alerting users about potential security issues with opened documents. In this blog post we analyse what happens when an attack attempts to combine these two exploits in a single infection chain and fails.
Although this attack was unsuccessful, it has shown a level of experimentation by attackers seeking to use CVE-2017-0199 as a means to launch additional weaponized file types and avoid user prompts. It may have been an experiment that didn’t quite work out, or it may be an indication of future attacks yet to materialise.
By now you’ve probably heard that Cisco’s new, intent-driven network brings you the capability to monitor encrypted network traffic for malware, without decrypting it. But maybe you aren’t clear on just what that means or why it matters.
Simon Blissett, writing on Cisco’s Financial Services blog channel, explains it very well in non-techie language:
Think of data packets like your suitcases when you fly around the world. If you don’t want anyone else to see in your case, you lock it with a big padlock. However, airline security needs to know what is in the case to keep everyone safe but doesn’t want to open every locked case. So it uses scanners to check that the suitcase doesn’t contain guns, liquids etc. without having to open the case. [Encrypted Traffic Analytics] does the same with encrypted network traffic – analyzes it for malware without opening the packet.
…[T]his is a major step forward. Customers and third parties are increasingly interacting with us using encrypted messages – especially as everyone is worried about privacy and data security. Now customers and third parties can continue to act in this way while the organization can ensure that this increased privacy and security does not come at the expense of their network integrity or cyber security – no hidden malware sneaking in under the cover of encrypted traffic.
For more, read Simon’s full blog post by clicking here, visit our ETA page, and watch the video below.