
Tim Harmon is a Cisco Champion, an elite group of technical experts who are passionate about IT and enjoy sharing their knowledge, expertise, and thoughts across the social web and with Cisco. The program has been running for over four years and has twice earned industry awards as a best practice. Learn more about the program at http://cs.co/ciscochampion.


Welcome to Part 2 of the Cyber Security Capture the Flag (CTF) series. Part 1 discussed the importance of planning and how to effectively design the CTF event. Once planning and design are complete, it is time to start developing the CTF. In this phase, we will discuss what needs to be done to implement the event: securing a venue, acquiring the equipment (software and hardware), and setting everything up.

The first and most important task in this phase is securing the venue. The venue can be an office, a classroom, or even a gymnasium, as long as it has Internet and electrical access. For example, some hackathons (2- to 5-day programming competitions) such as LAHack and NYHack are held in gymnasiums. The area needs to be big enough for the CTF event you are planning. For example, a Jeopardy-style CTF with 10 teams of 4 to 5 people needs a large room with a minimum of 12 tables (10 for the teams, 1 for the servers, and 1 for sign-in and miscellaneous use). Once the plan and design have been laid out, the team should get a contract signed with the company providing the venue for the CTF event.

Securing a venue was the hardest part of our event; as the date approached, we were still unsure of where we would hold it. Luckily, one of our professors at National University, who was also the Dean of Technology at Coleman University, talked with us about our situation and secured his lab room for us to use. With the lab settled, we were able to concentrate on getting the software and challenges ready to load onto the equipment. After securing the venue, we needed to check out the lab and make sure it had everything we needed for the event. One thing we did not expect: we ran into problems when we tried to attack the defending machines from the attacking machines. Because the workstations belonged to the school, the IT department had installed anti-virus software on them for other lab usage, and we could not do what we wanted to do. This forced us to run a Jeopardy-style CTF instead of a hybrid (Jeopardy-style plus Attack/Defend). The figure below shows the different network diagrams we developed for the CTF (Hybrid, then Jeopardy).

The team then needs to ensure that all equipment is at the venue and working properly. You may need to swap out some hardware, or even reimage a workstation or server, as things may have changed since the venue and equipment were secured. Once everything is working properly and nothing needs to be replaced or repaired, the CTF software can be installed on the workstations. The software used for the CTF can include Windows, Linux (Ubuntu, Red Hat, Fedora, CentOS), and Kali Linux.

The scoreboard for the CTF can be as simple as a Microsoft Excel spreadsheet with a timer, with the team manually entering scores (if running attack/defend), or as complex as an existing scoreboard framework. My team used the PTCoreSec scoreboard (https://github.com/PTCoreSec/CTF-Scoreboard), as it was simple to implement yet more capable than an Excel spreadsheet. Some CTF events use the iCTF framework from the UC Santa Barbara International Capture The Flag (iCTF) Competition (https://ictf.cs.ucsb.edu), and a few use Facebook’s CTF platform (https://github.com/facebook/fbctf). Your team can choose whichever scoreboard meets the requirements of your CTF event. The figure below shows what my team’s PTCoreSec scoreboard looked like. If you have ever participated in the National Cyber League (NCL, https://www.nationalcyberleague.org), you may notice that its CTF is based on the PTCoreSec scoreboard, though much more polished.
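To make the moving parts concrete, here is a minimal Python sketch of what a Jeopardy-style scoreboard does under the hood. The challenge names, flags, and point values are purely illustrative, and a real framework like CTF-Scoreboard adds a web UI, authentication, and persistence on top of this core logic:

```python
import hashlib

# Minimal Jeopardy-style scoreboard sketch. Flags are stored only as hashes
# so the scoreboard itself never holds plaintext flags. All names/values here
# are made up for illustration.
CHALLENGES = {
    "crypto-101": {"points": 100, "flag_sha256": hashlib.sha256(b"flag{caesar}").hexdigest()},
    "forensics-201": {"points": 250, "flag_sha256": hashlib.sha256(b"flag{exif}").hexdigest()},
}

scores = {}     # team name -> total points
solved = set()  # (team, challenge) pairs, to block duplicate submissions

def submit(team, challenge, flag):
    """Check a submitted flag; award points on the first correct solve."""
    entry = CHALLENGES[challenge]
    if (team, challenge) in solved:
        return "already solved"
    if hashlib.sha256(flag.encode()).hexdigest() != entry["flag_sha256"]:
        return "wrong flag"
    solved.add((team, challenge))
    scores[team] = scores.get(team, 0) + entry["points"]
    return f"correct! {team} now has {scores[team]} points"
```

During development this is also a convenient harness for the pre-event testing described later: every challenge answer can be run through `submit` to confirm it actually scores.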

The next component of developing the CTF event is coming up with the challenges. There should be different levels of challenges, ranging from easy to hard, and it is wise to tell participants which tools they may use and which are not allowed. For example, they may use Nmap to scan the IP address range you give them for a challenge, but they may not use Shodan, which scans the entire Internet and can attract unwanted attention from government officials. There are many free tools that can be used to complete the challenges; they should be preinstalled on the machines participants will use (if the hosting team is providing the equipment), or participants should download the tools onto the laptops they bring to the event. Below are a few examples of challenges that were used in my team’s CTF.
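To illustrate the idea, here is a hedged Python sketch of how an easy-tier encoding challenge might be generated and checked. The flag text and single-byte XOR key are invented for illustration and were not part of our actual event:

```python
import base64

# Example easy-tier challenge: organizers hand out the encoded blob and
# participants must recover the flag. Single-byte XOR plus base64 is solvable
# with any scripting language, which makes it a good "easy" bracket problem.
def make_challenge(flag: str, key: int = 0x42) -> str:
    xored = bytes(b ^ key for b in flag.encode())
    return base64.b64encode(xored).decode()

def solve(blob: str, key: int = 0x42) -> str:
    # The intended solution path: base64-decode, then undo the XOR.
    return bytes(b ^ key for b in base64.b64decode(blob)).decode()
```

A harder variant could leave the key unknown, forcing participants to brute-force all 256 single-byte keys and look for readable output.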

After the challenges are created, they need to be entered into the scoreboard, and the team needs to ensure all of the tools are on the appropriate devices for the event. Once that is done, the team needs to test every challenge and confirm that the answers can be entered into the scoreboard. If any test fails, the challenge needs to be tweaked or replaced with a different one. There should be a final test run the day before the CTF event, as something could change, and the system should be tested again an hour before the event starts to ensure it is responding properly. There should also be a documented process for handling any issues that arise during the event. The next phase is implementation, where the team runs the CTF event. Be on the lookout for Part 3: Implementing in the Cyber Security CTF Blog Series.

Authors

Tim Harmon

Cyber Security & Network Professional

Cisco Champion


Practitioners in the telecommunications space are almost universally aware of the ISO/OSI network management models. These models, along with work done in the International Telecommunication Union, have long defined the dominant model for fault, configuration, accounting, performance, and security management, or FCAPS.

Over the last two to three decades, FCAPS has become a proven approach to network management that works very well in a centralized, single-provider environment. Increasingly though, many companies today have distributed workloads that may run across multiple cloud networks. In this case, what metrics are there to ensure that a workload, and indeed an end-to-end workflow, is being executed correctly? If there is a fault, how do you know which cloud provider is responsible? If there is a compromise to your workflow, how do you know where, to what degree, and how long it went undetected?

Blockchain technology presents both a challenge and an opportunity in this space. In an enterprise blockchain operating environment, credentialed participants are part of a self-managed ecosystem facilitating the movement and validation of high-value transactions. Providing transparency into operational integrity that covers FCAPS requirements across the entire ecosystem could be one of the largest barriers to mainstream enterprise blockchain adoption. That’s the challenge.

The opportunity is that blockchain technology can also facilitate distributing FCAPS capabilities across multiple administrative domains.  It is commonly understood, but hardly realized or practiced at this nascent stage, that enterprise blockchain solutions must be implemented with the rigor of critical infrastructure systems.  While FCAPS traditionally covers the telecommunication industry well, the methodology used to implement and execute FCAPS requirements can also be useful for developing blockchain critical systems.  Traversing administrative domains is where blockchain technology may be able to deliver value while at the same time knocking down potential barriers to broader adoption.

Blockchain technology is uniquely able to apply management tools across multiple heterogeneous networks.

Consider some of the ways blockchain can help decentralize FCAPS capabilities:

Fault management can be supported by creating hashed snapshots—or digital representations—of the state of each vendor’s network and sharing them across the entire multi-vendor blockchain ecosystem. If and when faults occur, an ecosystem-wide data log is immediately available and with new fault recovery tools, the appropriate response can be executed.
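As a rough sketch of the snapshot-chaining idea, the following Python fragment hashes each provider's network-state snapshot and links it to the previous hash, producing a shared tamper-evident log. The state fields and hashing scheme are illustrative assumptions, not a description of any particular product:

```python
import hashlib
import json

# Each provider periodically hashes a snapshot of its network state and
# chains it to the previous hash, so the multi-vendor ecosystem shares a
# tamper-evident fault log. Field names are invented for illustration.
def snapshot_hash(state: dict, prev_hash: str) -> str:
    payload = json.dumps(state, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(snapshots):
    chain, prev = [], "0" * 64  # genesis value
    for state in snapshots:
        prev = snapshot_hash(state, prev)
        chain.append(prev)
    return chain

def verify(snapshots, chain):
    # Any retroactive edit to a snapshot changes every later hash,
    # so the whole ecosystem can detect tampering with the fault record.
    return chain == build_chain(snapshots)
```

The same pattern underlies the configuration and performance records discussed below: the blockchain's value is that no single operator can quietly rewrite the shared history.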

A blockchain network could be used to plan and record configuration management for workflows that span multiple operators and are highly sensitive to operational changes. Having visibility, or at least an agreed upon level of operational transparency, could yield a competitive advantage for infrastructure providers seeking to push transparency as a key differentiator.

For accounting management, and for that matter administration management for non-billing networks, blockchain technology is uniquely suited to record and track the participation of ecosystem operational teams and their coordination of workflows or machine-to-machine peering payments between infrastructure providers. Blockchain adoption for this function may spawn a new era of high-efficiency just-in-time business enablement or a new model for Internet exchange monetization.

Performance management in the blockchain space is an area that, outside of delivering high performance blockchains, has seen little to no activity. However, as a tool for supporting network performance compliance, blockchain technology could be leveraged to provide an immutable record of “by transaction” network performance. Again, when used as a common framework across a collection of providers, this could be a competitive advantage for an ecosystem.

And finally, security management is an area where blockchain technology may be very well suited. As complex workflows span multiple providers and servicers, a single view to an ecosystem’s overall security posture may be challenging to deliver without blockchain technology. Enabling classical threat prevention, threat monitoring, and threat remediation with blockchain could deliver on one of the many promises that threat intelligence collaboration tools have yet to fully deliver, all the while maintaining confidentiality and providing an immutable record of cybersecurity health.

Naturally, some FCAPS concepts may see blockchain adoption before others, and some aspects may never see blockchain adoption. However, the ability for FCAPS to be implemented in a fully decentralized fashion supporting increasingly heterogeneous networks and resources may yield both competitive advantages and some significant monetization opportunities.  That said, it is still too early to know the full potential, as there is still much work to be done to truly understand the impact of blockchain technology in this space.

As part of the Trusted IoT Alliance, Cisco and other companies are working to define common models and methods that may be used to implement some of these decentralized FCAPS concepts.  Specifically focusing on the smart contract layer, the first place to start is with registration.  Registering a device, be it an NFC tag, a video sensor, or even a network appliance, is a keystone function in any network and in a blockchain network, this is no exception.  Alliance founders have been working together on early experiments demonstrating that device registration in a blockchain-agnostic approach is indeed possible to achieve.  While this early success is a good indicator of what might be possible, there are new challenges being discovered and there is much work to be done over the coming months to refine and harden the models and methods being developed. Look out for some great work ahead.

 

Authors


Vulnerability Discovered by Aleksandar Nikolic

Overview

Talos is disclosing TALOS-2017-0274/CVE-2017-2784, a code execution vulnerability in ARM MbedTLS. This vulnerability is specifically related to how MbedTLS handles x509 certificates. MbedTLS is an SSL/TLS implementation aimed specifically at embedded devices that was previously known as PolarSSL.

 

The vulnerability exists in the part of the code responsible for handling elliptic curve cryptography keys. An attacker can trigger it by presenting a specially crafted x509 certificate to the target application, which performs a series of checks on the certificate. While performing these checks, the application fails to properly parse the public key, resulting in an invalid free of a stack pointer. A mitigating factor is that the memory pointed to is zeroed out shortly before the vulnerability is triggered. However, since mbed TLS is designed for embedded platforms that may lack modern heap exploitation mitigations, it may still be possible to achieve code execution in certain circumstances. Full details of the vulnerability are available in our advisory.


Authors

Talos Group

Talos Security Intelligence & Research Group


No matter the industry, organizations are challenged to find a comprehensive IT strategy that fits their unique needs. As an increasing number of these businesses are forced to go digital, they are also looking for a cloud solution that is user-friendly, secure, and compatible with their existing tools and devices. The same goes for federal agencies – but they require more.

What’s the “more”? Compliance. Federal agencies are not just looking for a secure and agile solution that is simple for employees to use. They need all of that plus a “cloud first” solution that complies with strict government security regulations, and understandably so. The platform must check all the boxes before the government can trust it to secure the nation’s most secret and sensitive information.

Here’s the twist: this extra requirement means limited options for federal agencies to consider; and one of the only solutions in the market right now forces agencies to go to a fully-digital IT platform while only being compatible with certain devices. Not only does this make the solution more expensive, but it adds many layers of complexity when transitioning employees to the new platform.

What agencies need is a cloud platform that meets government security regulations, allows them to ease into the digital world, and use the devices that make the most sense for their budget and their employees. Cisco’s new, hosted collaboration solution for government (HCS-G) gives agencies a better option that is both tailored and flexible. It’s a transitional cloud solution that does more than check the boxes … it enables the government to protect and serve the nation with the following in mind:

  • Comfort – Cisco’s HCS-G is compatible with the tools and devices agencies are already familiar with, so the transition to digital IT offers a better user experience both in the office and on a mobile device.
  • Collaboration – HCS-G includes Cisco’s collaboration tools, such as voice conferencing, messaging, and video – creating a differentiated experience for users across the country.
  • Cloud – HCS-G is a cloud solution, meeting the government’s “cloud first” mandate within budget.
  • Confidence – All tools (video, messaging, conferencing, etc.) are on one platform, so you can have confidence in the ease of accessibility and management.
  • Compliance – HCS-G is FedRAMP® authorized, meaning it meets all government security regulations and policies for protecting data and privacy.

Adhere to the government’s “cloud first” strategy mandate, implement a certified and secure platform, and introduce a user-friendly solution … all within budget. It’s a significant decision, especially for federal agencies, but it no longer has to be as challenging.

 

Continue to follow us here and at @CiscoGovt for exciting news and updates. And for more information, check out our Cisco.com pages for government and HCS for Government.

Authors

Larry Payne

Vice President, Sales

US Public Sector


On Feb 3, Lori guest blogged about why programmability is the future of NetOps. Lori is a frequent guest blogger, and you can check out her other blogs here. The Cisco Insieme Business Unit and F5 have been innovating on the DevOps front for several years now, and in this blog Lori takes us through recent automation and orchestration trends.

Containers. Even though the technology itself has existed for more years than many IT professionals have been out of college, you can hardly venture out onto the Internet today without seeing an ad, article, or tweet about the latest technology darling.

They are one answer to the increasingly frustrating challenge of portability faced by organizations adopting a multi-cloud strategy, which, according to every survey in existence, means most of them. Containers also provide an almost perfect complement to development organizations’ adoption of Agile and DevOps, as well as their growing affinity for microservices-based architectures.

But containers alone are little more than a deployment packaging strategy. The secret sauce that makes containers so highly valued requires automation and orchestration of both the containers themselves as well as their supporting infrastructure. Load balancers, application routers, registries, and container orchestration are all requirements to achieving the simplicity of scale at the speeds required by today’s developers and business.

Growth is inevitable, and the speed at which container-based systems grow once they’ve become ensconced in production is astonishing. Per last year’s Datadog HQ survey on the technology, “Docker adopters approximately quintuple the average number of running containers they have in production between their first and tenth month of usage. This phenomenal internal-usage growth rate is quite linear, and shows no signs of tapering off after the tenth month.”

Imagine managing that kind of growth in production manually, without a matching increase in headcount. It boggles the mind.

Even if you can imagine it, consider that the same survey found that “at companies that adopt Docker, containers have an average lifespan of 2.5 days, while across all companies, traditional and cloud-based VMs have an average lifespan of almost 15 days.” So managing “more” means not only more in number, but also more frequent change. Such volatility is mind-boggling in a manual world, where configurations and networks must be updated by people every time the application or its composition changes.

NetOps never envisioned such a dynamic environment in production, where the bulk of business “gets done” and reliability remains king. Balancing reliability with change has always been a difficult task, made exponentially more troublesome with the advent of micro-environments with micro-lifecycles.

Therefore, it’s imperative for NetOps to not just adopt but fully embrace both the technical aspects of DevOps (automation and monitoring) and its cultural aspects. Communication becomes critical across the traditionally siloed domains of network and application operations to effectively manage the transition from manual to automated methods of keeping up with the higher frequency of change inherent in container-based applications. You can’t automate what isn’t known, and the only way to know the intimate details of the apps NetOps is tasked with delivering is to talk to the people who designed and developed them.

No matter how heavy the drum beats on those siloes, it’s unrealistic to expect those deeply entrenched walls to break down amid their digital transformation. Developers aren’t going to be programmatically manipulating production network devices, nor are network engineers going to be digging around in developers’ code. But communication must be open between them, as the rate of change increases and puts pressure on both groups to successfully deploy new technologies in support of the faster pace of business and the digital economy. The walls must become at least transparent, so the two groups can see the grimaces of pain when something breaks. Through that communication and shared experience comes empathy; empathy that’s necessary to bring both groups together in a meaningful way to design and architect full-stack solutions that incorporate both network and app services as well as the entire application architecture.

Scripts are easy. Shared responsibility and effort are not. But it is the latter that is becoming an imperative for designing networks that can support emerging architectures at the speed (of change) required. The technical side of DevOps is not unfamiliar territory for NetOps, though its skills will no doubt require sharpening in the coming sea storm of change. The cultural side of DevOps is less familiar (and more uncomfortable – trust me, I get that, really) for all involved. But only by collaborating and communicating will these highly automated, dynamic, and rapidly cycling technologies succeed, allowing businesses to reap the rewards of adoption.


Related Link: http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/f5-devops-innovation-wp.pdf

 

 

Authors

Ravi Balakrishnan

Senior Product Marketing Manager

Datacenter Solutions


On the heels of announcing the strategic alliance with Docker, today at DockerCon, Cisco announced that we are joining Docker in the Modernize Traditional Applications (MTA) program. MTA is a turnkey program consisting of products and consulting services from Docker and Cisco.

Most organizations strive for agility in developing and delivering business applications while increasing data-center efficiency. These are exactly the promises of cloud and containerization. However, according to IDC, only 3% of organizations have achieved optimized cloud deployments.

Until now, many cloud and container solutions focused exclusively on greenfield, cloud-native projects. IT organizations were left seeking solutions for their existing applications. Re-architecting everything is not an option, as IT is already resource constrained. Designed for IT operations teams, the MTA Program modernizes traditional applications without modifying source code or re-architecting the application.

The MTA program accelerates application modernization efforts for our joint customers and puts their container adoption in the fast lane.

Announcing General Availability of Contiv 1.0 and Cisco Advanced Services Offers for Contiv

We got an overwhelming response when we announced the early availability of Contiv at Cisco Live! EMEA in Berlin. Today, we are thrilled to announce the general availability of Contiv 1.0.

Since March, we have worked with many early customers to make Contiv deployments more resilient, more scalable, and more secure. In other words, it is production-grade. This GA release offers a rich set of capabilities to meet the most demanding container networking requirements. Customers using the previous 1.0 EA release can seamlessly upgrade to the GA version.

Over the last two months, many of our customers have asked, “How do I get started with Contiv as quickly as possible without waiting to build a new talent pipeline?” They understood the “what” and “why” of Contiv, but they were eager to take action and get started with the “how.”

Today, to help our customers accelerate container adoption, we are announcing the expansion of Cisco Services to include advise, implement, and optimize services for container networking.

These services will help our customers bridge the skillset gap, avoid organization silos, as well as provide them with the best-practices that we have developed working with many other customers.

Available now, the Container Networking Services offers include:

  1. Container Networking Strategy Workshop

This one-day, on-site workshop with Cisco or partner subject matter experts will help explore IT requirements for container networking, identify solution use cases to be deployed and a roadmap for completion. For example, Contiv supports multiple networking backends such as overlay, Layer 2, Layer 3 and Cisco ACI mode. This workshop will help identify which mode(s) works best for your workloads.

  2. Container Networking Implementation Service

This service deploys production-grade container network solutions across heterogeneous hybrid deployments – bare metal, virtual machines, and private or public clouds. It also maps application security requirements to Contiv’s network security policy model, ensuring traffic isolation and application micro-segmentation.

  3. AIM Optimization Service

This comprehensive service, offered as a subscription, optimizes your Contiv deployments for scale and performance. It provides networking automation with security considerations built in from the ground up. The service also includes post-implementation optimization and an on-site support option.

Today’s announcements are the next big step to accelerate container adoption in organizations just like yours. We are excited to embark on the journey to modernize applications with you.

If you are at DockerCon, don’t forget to visit Cisco booth G13 in the expo hall to learn more. Additionally, we are co-hosting a hands-on workshop with Docker to get you started with Contiv. Register early for Docker Networking with Contiv, as space is very limited.

Please follow @projectcontiv, @ciscocloud and @ciscodc to get the latest updates from DockerCon.

Enjoy your time at DockerCon, we look forward to engaging with you!

Learn More:

  1. Docker press release on the MTA program
  2. Read more about the MTA program from Docker
  3. Getting started with Contiv has never been easier; start today using step-by-step tutorials.
  4. Learn about validated container solutions for Cisco UCS and FlexPod

Authors

Amit Sharma

Product Marketing Manager


When organizations begin their search for an advanced, next-generation endpoint security solution to protect PCs, Macs, servers, and mobile devices, they have a lot of different vendors to choose from and a lot of questions. Can it prevent attacks? What kind of malware can it protect against? What if malware gets in, can it still help me? How do I deploy it? Is management of the tool easy? Will it protect my endpoints on and off the corporate network?

Whether I’m attending a cybersecurity conference, a customer forum, or just in my day-to-day interactions with security practitioners, I get asked these questions. I think any endpoint security solution should provide all of the following “must-haves”:

1. Cloud or on-premises deployment options, across multiple operating systems

Cloud deployment of a next-gen endpoint security solution ensures flexibility, easier management, scalability, and real-time threat intelligence delivery. But sometimes organizations require an on-premises deployment to satisfy stringent privacy requirements dictated by their industry, like in government or finance.  Your next-gen endpoint security solution should offer both deployment options.

Furthermore, every endpoint in the enterprise should be protected, whether it’s a Windows PC, Mac, Linux system running on a server, or a mobile device. No endpoint is immune to an advanced cyberattack. Ensure that the technology provides coverage for all of the different types of endpoints used throughout the organization.

2. Prevention Capabilities

Prevention is your first line of defense. Preventing cyberattacks and blocking malware at point-of-entry in real time is essential. To ensure the best possible prevention, make sure your next-gen endpoint security solution provides the following:

  • Global Threat Intelligence – a team of threat hunters detecting the newest threats and uncovering zero-days to keep you protected 24/7
  • AV Detection – let your Next-Gen Endpoint Security solution do all the AV heavy lifting and consolidate protection onto one lightweight agent
  • Proactive Protection – identify and patch vulnerabilities, and analyze and stop suspicious low-prevalence executables fast

3. Integrated Sandboxing Capabilities

Sandboxing is essential for static and dynamic analysis of unknown files. Don’t settle for a third-party sandboxing product that must work alongside your endpoint security solution. Sandboxing should be built-into, and fully integrated with, your next-gen endpoint security solution. Submitting suspicious files to the sandbox should be easy and seamless, and not require multiple management systems.

4. Continuous Monitoring and Recording

No prevention method will ever be 100% effective. Advanced malware can get into your endpoints, and if you have no visibility into what files are doing on your endpoints, you’ll be blind to the presence of a potential compromise.

Therefore, your endpoint security solution must watch everything on all of your endpoints (on and off the corporate network) at all times so you can quickly spot malicious intrusions and stop them. It must provide continuous monitoring of all files on every endpoint, regardless of file disposition, and record the activity of those files so you can access their recorded history and quickly scope a compromise from start to finish. This continuous monitoring provides the ability to spot malicious behavior when it happens and gives you visibility into where malware came from, where it’s been, what it’s doing, and how to stop it – before damage can be done.
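One way to picture continuous monitoring and recording is as an append-only activity log keyed by file hash. The Python sketch below is a toy illustration of the concept, with all names invented, not a description of how any vendor's product is implemented:

```python
import hashlib
import time
from collections import defaultdict

# Toy model of "record everything, judge later": keep an append-only activity
# log per file hash, so a file later declared malicious can be traced
# retrospectively across endpoints. All names here are hypothetical.
activity_log = defaultdict(list)  # file sha256 -> list of event records

def record_event(endpoint, path, content: bytes, action):
    """Log one file event (created, executed, copied, ...) and return its hash."""
    digest = hashlib.sha256(content).hexdigest()
    activity_log[digest].append(
        {"ts": time.time(), "endpoint": endpoint, "path": path, "action": action}
    )
    return digest

def trajectory(digest):
    """Where has this file been and what did it do, in chronological order?"""
    return sorted(activity_log[digest], key=lambda e: e["ts"])
```

The key design point is that events are recorded regardless of the file's disposition at the time; when a disposition changes from unknown to malicious, the full history is already there to scope the compromise.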

5. Rapid Time to Detection

The industry average to detect a breach after it occurs is 100 days. That’s insane. It’s plenty of time for malware to infiltrate your organization and exfiltrate confidential information. Your endpoint security solution should be able to speed up your time to detection and spot threats in hours or minutes, not days, weeks or months.

6. Agentless Detection

Sometimes an organization cannot install an endpoint agent on every single endpoint throughout the enterprise, or it wants visibility into devices whose operating systems cannot support an agent. Also, some malware is file-less and might not be visible to an endpoint agent. Therefore, your endpoint security solution should provide agentless detection. Make sure it can uncover file-less or memory-only malware, catch malware before it compromises the OS, and get visibility into devices where no agent is installed.

7. Easy, streamlined management interface for efficient decision-making

Organizations face a myriad of attacks each day, often more than they can triage efficiently or effectively. Many security teams are simply buried in security alerts each day. They need security solutions that are easy to use and help them make fast and informed decisions.

Look for a next-gen endpoint security solution with an easy-to-use management interface that even a tier 1 analyst can use. Make sure that the interface allows you to quickly assess the health and state of your security deployment at both a macro and micro level. Make sure that the workflow to address a malware intrusion is seamless, intuitive and flexible, allowing you to triage, manage, and respond to possible breaches fast and effectively.

8. Simple, Automated Response

Responding to a cyberattack can be difficult and time-consuming. After a breach, many security teams might not have the tools to rapidly respond and remediate. Some reach out to costly third parties to do the work for them.

Your next-gen endpoint security solution should enable you to respond and remediate threats quickly and comprehensively, without the need to engage with an outside vendor. Make sure the solution can accelerate investigations and reduce management complexity by searching across all endpoints for IoCs and malware artifacts; easily connecting the dots on a malware compromise, from start to finish, across all endpoints and the network; and systemically responding to and remediating malware across PCs, Macs, Linux, and mobile devices – automatically or with just a few clicks.
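The "search across all endpoints for IoCs" step can be pictured as a simple retrospective sweep over each endpoint's file inventory. This Python sketch is a conceptual illustration with invented data structures, not an actual product API:

```python
# Conceptual IoC sweep: given hashes later flagged as malicious, search each
# endpoint's file inventory for matches. Inventories here are plain dicts;
# a real product would query its recorded telemetry instead.
def sweep(endpoints: dict, ioc_hashes: set):
    """Return (endpoint, path, digest) for every inventory entry matching an IoC."""
    hits = []
    for endpoint, inventory in endpoints.items():
        for path, digest in inventory.items():
            if digest in ioc_hashes:
                hits.append((endpoint, path, digest))
    return hits
```

Each hit becomes a starting point for the start-to-finish trajectory described above, which is why sweep and recording belong in one integrated system rather than two siloed tools.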

9. Not just a siloed point product but rather part of a larger integrated security architecture

Many vendors offer endpoint security products that are just that – point-products. These products are not integrated with other security tools, and when deployed, simply add to the mixed bag of security products from multiple vendors used throughout the enterprise. Many organizations use upwards of 60 different security tools. That’s a nightmare. Each product has its own management system and displays information in different ways. This requires more people to operate and makes it harder to decipher threat information, connect the dots to understand the full scope of an attack, and respond quickly. Juggling all of these siloed tools will slow you down.

Instead, you should deploy an integrated threat defense, whereby every security tool in your arsenal can work together to fight threats systemically. Make sure that your next-gen endpoint security solution can be deployed as part of an integrated system of security technologies that can work together to close security gaps and detect threats faster across your entire security ecosystem – from endpoint to network, email, and web. Threat information and event data should be shared and correlated across all security tools, and communicated to the security team in common formats.
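As a small sketch of what "shared and correlated in common formats" means in practice (tool names and field names here are invented for illustration), alerts from different products can be normalized into one schema and then correlated by a shared artifact such as a file hash:

```python
# Hedged sketch: map each tool's alert fields onto a shared record format,
# then group records by file hash so one compromise can be followed
# across tools (e.g., from email gateway to endpoint agent).

def normalize(tool, raw):
    """Translate a tool-specific alert into the common schema."""
    if tool == "email_gateway":
        return {"source": tool, "sha256": raw["attachment_sha256"], "where": raw["recipient"]}
    if tool == "endpoint_agent":
        return {"source": tool, "sha256": raw["file_hash"], "where": raw["hostname"]}
    raise ValueError(f"unknown tool: {tool}")

alerts = [
    normalize("email_gateway", {"attachment_sha256": "abc123", "recipient": "bob@example.com"}),
    normalize("endpoint_agent", {"file_hash": "abc123", "hostname": "bob-laptop"}),
]

# Correlate: both alerts describe the same artifact seen by two tools.
by_hash = {}
for a in alerts:
    by_hash.setdefault(a["sha256"], []).append(a)

print(by_hash["abc123"])
```

Standards such as STIX exist for exactly this kind of common threat-information format; the point is that correlation only works once every tool speaks the same schema.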

We know that you have choices out there when it comes to endpoint security tools. Make sure your endpoint security solution has these 9 “must-have” capabilities to ensure the best protection for your organization. And make sure Cisco AMP for Endpoints is on your short list, as it provides all of these capabilities. In a recent study, IDC looked at 11 different endpoint security tools and named Cisco AMP for Endpoints a leader in the industry. We took a deeper look at a few of the top contenders in that report and compared them with Cisco AMP for Endpoints in this comparative table.

Authors

John Dominguez

Product Marketing

Cisco Security Business Group

When I ask the question of Healthcare CIOs and CISOs “What keeps you up at night?” one of the most common answers I receive – after the usual jokes about indigestion, or the snoring spouse, is the problem of what to do about securing medical devices in our hospitals. Most healthcare executives are acutely aware of the problem (to some degree at least), but very few have an effective or scalable solution at hand to address this ever-growing risk.

[This is a two-part story. The first part can be read here.]

I recently met with the CIO and CISO of a large US healthcare system to chat about how the system was going about securing its 350,000 network-attached medical devices. They were busy assessing and profiling all of the disparate devices, from a multitude of different vendors, that the pre-merger independent hospitals had purchased over the past twenty years or so. The health system had multiple teams of third-party vendors from many of the big names in bio-engineering, working with its own IT team to review configurations, firmware, and OS/application versions, and to make updates where necessary in order to improve the security posture of these devices.

The CIO, however, was greatly concerned by the number of, and churn in, these endpoints, given warranty replacement units and new devices arriving at hospitals seemingly on a weekly basis. He doubted whether his team would ever get ahead of their hardening project, and whether reconfiguration and lock-down would ever really secure these network-attached systems at the end of the day.

After listening carefully to his plan and all the activities he and his CISO had sanctioned, I cautiously suggested that perhaps the health system was on the wrong path. My argument was that they would never be able to keep up with and manage 350,000 disparate biomedical devices, growing by twenty percent per annum, using a strategy essentially designed to manage PCs and workstations, where domain-level tools can patch and configure the vast majority of endpoints. The manpower requirements alone, I suggested, would consume his entire IT team's bandwidth and budget at some point, if not very soon.

I suggested that he abandon entirely the idea of securing individual endpoints by locally hardening devices and disabling services such as TFTP, FTP, Telnet, and SSH that many of his medical devices had left the factory with enabled. Instead, he should look at other control points to secure those devices (compensating security controls) that would enable much higher levels of automation and reduce the margin for human error that a manual process would inevitably introduce.

I suggested that he use his network as the control point rather than attempt to manage so many individual endpoints. By enabling TrustSec (a built-in access control capability in his newer Cisco switches and routers), he could lock down each endpoint device, whether attached to the network by wire or wirelessly, and control in a uniform manner exactly which ports and protocols each device could communicate on, which users could administer each device, and which other devices each medical device could communicate with – i.e., specifically authorized canister, gateway, or clinical information systems only… and nothing else!
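A toy model may help illustrate the idea. The sketch below is not actual TrustSec or SGACL syntax (device names, group names, and the policy are all invented); it simply captures the approach of tagging devices into groups once and enforcing a default-deny, group-to-group policy matrix instead of per-device ACLs:

```python
# Illustrative model only. Each device is assigned a group tag; access is
# decided by a group-to-group policy matrix, so one rule covers every
# device in a group, wherever it attaches to the network.

DEVICE_GROUPS = {
    "infusion-pump-117": "MedDevice",
    "clinical-info-sys": "ClinicalSystem",
    "guest-laptop-42": "Guest",
}

# (source group, destination group) -> allowed TCP ports.
# Any pair not listed is denied by default.
POLICY = {
    ("MedDevice", "ClinicalSystem"): {443},  # pumps report to clinical systems
    ("ClinicalSystem", "MedDevice"): {443},
}

def is_allowed(src_device, dst_device, port):
    src = DEVICE_GROUPS.get(src_device, "Unknown")
    dst = DEVICE_GROUPS.get(dst_device, "Unknown")
    return port in POLICY.get((src, dst), set())

print(is_allowed("infusion-pump-117", "clinical-info-sys", 443))  # True
print(is_allowed("guest-laptop-42", "infusion-pump-117", 443))    # False
```

The payoff is in the arithmetic: the policy matrix grows with the number of device groups, not with 350,000 individual endpoints.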

By employing ISE (Cisco Identity Services Engine) to set access policy, which would then be enforced by TrustSec (something that was already being used to manage guest wireless access), the health system could create uniform enterprise policy implementation across all sites and locations, and avoid the need for possibly hundreds of firewall engineers to write and update access control lists in switches, routers, and firewalls. What's more, rules written in ISE could be expressed in easy-to-understand business language, rather than complex access control syntax entered directly into infrastructure devices by firewall and network engineers.

Furthermore, ISE could be used to survey and profile each model of medical device, such that a profile could be developed and assigned once for each model, and applied globally across the entire enterprise of 350,000+ medical devices, thus automating security for the almost un-securable!
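The "profile once, apply everywhere" idea can be sketched in a few lines. This is a hypothetical illustration, not ISE's actual profiling logic (real profiling combines many attributes such as DHCP fingerprints and CDP/LLDP data; the MAC OUI values and profile names below are made up):

```python
# Hypothetical sketch: a device profile is keyed by an attribute shared by
# every unit of a model - here, the MAC OUI (vendor prefix) - so a profile
# built once applies to every matching device, enterprise-wide.

PROFILES = {
    # MAC OUI -> (profile name, group tag); values invented for illustration
    "00:1B:63": ("AcmePump-3000", "MedDevice"),
}

def classify(mac_address):
    """Return (profile, group) for a device; unknowns are quarantined."""
    oui = mac_address.upper()[:8]
    return PROFILES.get(oui, ("Unknown", "Quarantine"))

# Two units of the same model, in different hospitals, get the same profile:
print(classify("00:1b:63:aa:bb:cc"))  # ('AcmePump-3000', 'MedDevice')
print(classify("00:1b:63:11:22:33"))  # ('AcmePump-3000', 'MedDevice')
print(classify("d4:3d:7e:00:00:01"))  # ('Unknown', 'Quarantine')
```

Note the default: a device that matches no profile lands in a quarantine group rather than on the open network, which is what makes the scheme safe as new, unrecognized hardware arrives weekly.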

I continued, “What’s more, the same profile you assign to a medical device in one hospital is used for a similar device in another hospital, so long as it’s all part of the same ISE domain. Thus you can more effectively manage your medical device asset inventory across hospitals, quickly assigning medical devices when and where needed rather than tying up money in potentially hundreds of unused assets at each location.”

“In other words,” I explained, “using ISE and TrustSec, you can provide your users with dynamic segmentation capabilities, such that you can take a medical device (or a truckload of medical devices) from one site to another site in need of those devices (for, perhaps, local disaster management), and have those devices immediately recognized by the network and assigned the right access permissions as soon as they are plugged in or otherwise connected to the network. No need to engage a firewall or network engineer to add MAC addresses to an ACL (access control list) at 2 a.m. – just plug it in and it will work!”

Essentially, you will have an enterprise-wide, dynamic, automated user and device access system, driven by enterprise policy written in easy-to-understand language (versus firewall and switch syntax), that will actually save your biomed team money because they can run a minimal asset inventory across the entire health system. What's more, in doing so, you are actually securing the un-securable: protecting medical devices from attack, and protecting the main hospital business network from being attacked through an easily compromised medical device.

A large number of leading US healthcare delivery organizations are already using ISE and TrustSec to secure their medical devices, research and intellectual property, PHI, PII, and other confidential information through security segmentation of their networks and IT systems. Many are working toward micro-segmentation at the individual device level. Many more are using the same segmentation approach and technology to isolate their PCI payment systems and their guest and contractor network access, and to quarantine and run posture assessments on laptops and mobile devices re-attaching to the network after being used to treat patients in the community.

For more information on this approach, read Cisco’s Segmentation Framework and the Software-Defined Segmentation Design Guide.

For information about how Cisco’s Security Advisory Services can assist you in designing secure segmentation for your environment, please review Cisco’s Security Segmentation Service or contact your Cisco sales team.

Authors

Richard Staynings

Cybersecurity Healthcare Leader

Cisco Security