Nobody loves door-to-door salesmen. Unless, that is, they’re offering fiber. Then, they’re more popular than the ice cream truck.
If you’re a cable operator, that’s a big problem. Your competitors—both telcos and new entrants like Google—make fiber sound better than chocolate. They’re rolling it out to the node, sometimes all the way to the home. Your subscribers don’t care how it works. They just know it’s fast.
Here’s the thing: consumers may say they want fiber. But what they really want is faster Internet speeds. Fiber is one way to give it to them, but it’s not the only way. Let’s look at the options.
Fiber vs. Coax with DOCSIS 3.1
You could gear up for the fiber race yourself. After all, once you have fiber running to every home, you have almost infinite bandwidth. Gigabit speeds today, 10-gigabit tomorrow. Capacity for all-IP delivery of Internet, broadcast, and 4K video on demand. Once you go fiber, you’re good.
The downside is that you’re building out an entirely new network. You’re digging trenches and laying conduits. It takes years. And it’s a huge capital expense that you won’t recoup for a long time.
Now, take coax. It’s sitting in the ground right now. You don’t have to invest in digging up everybody’s yard to build out a new network. And with DOCSIS 3.1 technology, you can achieve gigabit speeds over that existing plant by just swapping out equipment in the home and pedestal. For pretty much all of today’s applications, you can offer experiences just as good as fiber to the home. And at a fraction of the cost.
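For a rough sense of why gigabit-class speeds are plausible over coax, here's a back-of-envelope sketch in Python. The channel width (192 MHz) and top modulation (4096-QAM, 12 bits per symbol) are published DOCSIS 3.1 downstream parameters; the 20 percent overhead figure is an illustrative assumption, not a measured number.

```python
# Back-of-envelope DOCSIS 3.1 downstream capacity estimate. Channel width
# and modulation are published spec parameters; the overhead factor is an
# illustrative assumption, not a measured figure.
CHANNEL_WIDTH_HZ = 192e6   # widest DOCSIS 3.1 OFDM downstream channel
BITS_PER_SYMBOL = 12       # 4096-QAM carries 12 bits per symbol
OVERHEAD = 0.20            # assumed FEC/signaling/guard overhead

raw_bps = CHANNEL_WIDTH_HZ * BITS_PER_SYMBOL   # ~2.3 Gbps ideal
usable_bps = raw_bps * (1 - OVERHEAD)          # ~1.8 Gbps usable

print(f"Raw:    {raw_bps / 1e9:.2f} Gbps")
print(f"Usable: {usable_bps / 1e9:.2f} Gbps")
```

Even one such channel clears a gigabit with room to spare, and the spec allows multiple OFDM channels to be bonded for multi-gigabit downstream rates.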
The downside is that coax is still an older medium. It was designed for a different time and a different set of (almost exclusively broadcast) services. DOCSIS lets you do amazing things with coax, but down the road, it won’t give you the same capacity or longevity.
So which one should you choose? The good news is: you don’t have to.
Running Fiber in Phases
For greenfield sites such as new-construction neighborhoods and apartment buildings, fiber makes sense. The installation costs are comparable. But for an existing coax footprint, DOCSIS 3.1 gets you to gigabit speeds and beyond for a lot less. So why not do both?
Start by adopting fiber-deep architecture models. By pushing fiber deeper into the network, closer to your subscribers, you eliminate layers of equipment—lowering maintenance costs and power use. You can use new remote PHY technologies to move cable modem termination elements out of the headend and closer to subscribers. And you can transition to fiber and IP where the installation costs are lower—right up to the pedestal. Then, for new construction, go fiber the rest of the way. For existing coax plant, go DOCSIS 3.1.
Now you’re incrementally evolving to fiber in phases. It makes business sense. And you’re extending the life of your multi-billion-dollar coax investment while you do it.
Weighing Decision Points
Right now, coax with DOCSIS 3.1 meets subscriber demand for a lot less than a full fiber overhaul. But that won’t be the case forever. And it still takes investment to transition to fiber deep architectures and DOCSIS 3.1. If the goal is to one day have an all-fiber network, at some point you’ll want to make the switch. What DOCSIS 3.1 buys you is time.
It might take 10 years or more to achieve the vision of fiber everywhere and all services running over IP. But for now, you can get gigabit speeds over your existing coax plant, even as you build out fiber elsewhere in your network.
Even better, you can compete with the slickest fiber competitors on the market today, without having to match their capital investments. So when the fiber salesman knocks, your subscribers will be too busy enjoying their lightning-fast services to answer.
Find out more
To discover how you could offer super-fast internet to your customers through a phased rollout, head to Cisco’s Cable Access Solutions today.
This blog is co-authored by Jeff Bollinger and Gavin Reid.
Are You Too Confident in Your Incident Response?
When Charles Darwin stated “Ignorance more frequently begets confidence than does knowledge,” civilization’s evolution from Industrial Age to Information Age was nearly a century away. Yet, when it comes to many aspects of IT, he nailed it. In today’s world, a measured, consistent, and creative approach to incident response and security monitoring delivers the most effective and efficient results for your organization. This blended approach makes human analysts the most critical component of any security operations center (SOC). SOCs utilize security analysts of varying skill and experience levels, so maintaining a consistent level of response can be difficult. Plus, cognitive biases can arise throughout any type of analysis or investigation which can lead to false conclusions or other errors. So if your goal is to understand the source, root cause, and impact of a threat (security incident), the analysts you rely on must be able to understand and avoid cognitive biases in their work.
A common cognitive bias exhibited by human analysts, known as the Dunning–Kruger (DK) effect, can sneak in and create problems with accuracy and consistency. The DK effect suggests that relatively unskilled people may suffer an illusion of superiority, believing their abilities are much higher than they really are, because they are unable to recognize their own limits and accurately evaluate their own abilities.
“When researchers Dunning and Kruger asked participants to perform specific tasks (such as solving logic problems, analyzing grammar questions and determining whether jokes were funny), they took it one step further. After completing the task, they asked the participants to evaluate their own perceived performance in comparison to other participants.
Dunning and Kruger then divided the results into four groups, depending on actual performance, and found that all four felt their performance was above average. This meant that the lowest-scoring group (the bottom 25%) showed a very large illusory superiority bias. The two researchers attributed this to the fact that the individuals who were worst at performing the tasks were also worst at recognizing the skill needed to perform the task. This was later supported by the fact that, given training, the worst subjects improved their estimate of their rank as well as getting better at the tasks.” [1]
How Cognitive Bias Impacts Your Security
Falling victim to the DK effect isn’t always a sign of lacking intelligence. In incident response, it may simply mean an analyst is over-confident when in fact they are unprepared. As a result, they think they are capable of completing an investigation when in reality they lack the resources and knowledge needed to make the right observations. Basically, “not knowing what you do not know” may be the challenge many incident response teams need to overcome, rather than any outside influence. So, is there a way your team can be sure they are fully prepared and informed, and not suffering from the DK effect?
Fortunately, yes – there is. It has always been easy for incident responders to jump to conclusions based on the first piece of relevant data they discover. After all, analysts are human too and want to solve security problems quickly. But combined with a poor understanding of their organization’s own capabilities, this haste can lead analysts to wrong or incomplete conclusions. This is expected behavior, so plan for it. No matter how hard you try, you can’t make all people the same – and you really wouldn’t want to. That is why it is so important to have a well-thought-out, documented playbook that ensures a consistent approach regardless of skill level. Remember that no two analysts will behave the same way on a consistent basis, so build that reality into your SOC playbook; it will give you and your team a realistic, documented approach to investigation and response.
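To make that concrete, here's a minimal sketch, in Python with entirely hypothetical names, of what a documented playbook entry might look like, so a case can only be closed when every step is accounted for:

```python
# A minimal sketch of a documented playbook entry. Structure and step
# names are hypothetical; the point is that a case can only close when
# every step is accounted for, whatever the analyst's experience level.
from dataclasses import dataclass, field

@dataclass
class PlaybookEntry:
    name: str
    objective: str
    steps: list[str] = field(default_factory=list)
    completed: set[int] = field(default_factory=set)

    def complete(self, step_index: int) -> None:
        self.completed.add(step_index)

    def is_done(self) -> bool:
        # Guards against closing a case on the first promising finding.
        return self.completed == set(range(len(self.steps)))

phishing_dropper = PlaybookEntry(
    name="phishing-dropper",
    objective="Identify source, root cause, and full impact of a mailed dropper",
    steps=[
        "Detonate the sample in a sandbox and record network indicators",
        "Search DNS/proxy logs for other hosts contacting those indicators",
        "Check the sample for fallback infrastructure (backup domains)",
        "Identify which recipients executed the attachment",
        "Block all indicators and document the response",
    ],
)
```

Even a structure this simple forces the question “what haven’t we checked yet?” before a case is closed.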
A Real World Scenario
Let’s take a quick look at a scenario to help illustrate this:
Your organization receives a phishing email with a zip archive containing a Nemucod variant (a JavaScript malware/ransomware dropper)
The JavaScript file is obfuscated to mask any indicators
The person receiving it reports the threat to the CSIRT, which retrieves the malicious sample and investigates for any collateral activity
Junior Analyst #1 detonates the JavaScript file in a malware sandbox; the file executes under wscript.exe, downloads a malicious payload, and the payload then runs in the analysis environment
Analyst #1 uses a response policy zone (RPZ) to cut off access to the domain serving the payload, redirects the domain to an internal sinkhole server, and proceeds to the remediation phase, closing out the case
Up to this point, a solid analysis approach has been chosen and executed, but can you be certain that the problem is resolved? Shouldn’t your analysts also be asking:
How many other hosts might have reached out to these domains? (One way to check is sketched just after this list.)
Are there any other domains associated with this campaign?
Who in your organization actually executed the attachment, versus simply deleting it?
What other indicators can we pick up to monitor for, at a host or network level?
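To make that first question concrete, here's a minimal sketch of one way to hunt for other affected hosts in DNS query logs. The log format (tab-separated client IP and query name) and the file path are assumptions; adapt them to whatever your resolver actually exports.

```python
# Minimal sketch: scan DNS query logs for other hosts that resolved the
# malicious domains. The log format (tab-separated client IP and query
# name) and the file path are assumptions; adapt to your resolver's
# actual export format.
import csv

MALICIOUS_DOMAINS = {"payload-server.example"}  # indicators from the sandbox run

def hosts_that_resolved(log_path: str) -> set[str]:
    hits: set[str] = set()
    with open(log_path, newline="") as log:
        for row in csv.reader(log, delimiter="\t"):
            client_ip, qname = row[0], row[1]
            if qname.lower().rstrip(".") in MALICIOUS_DOMAINS:
                hits.add(client_ip)
    return hits

if __name__ == "__main__":
    affected = hosts_that_resolved("dns_queries.tsv")
    print(f"{len(affected)} host(s) resolved known-bad domains: {sorted(affected)}")
```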
Now suppose a few weeks later a similar incident is detected when a separate host attempts to resolve a domain redirected to your RPZ sinkhole:
Senior Analyst #2 re-analyzes the original sample and digs deeper, identifying logic in the dropper for backup domains, and continues the analysis until all indicators are identified and blocked
Analyst #2 determines that Analyst #1 did not realize the JavaScript code itself implements backup domains in its configuration section for retrieving the payload if the initial attempt fails
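For illustration, here's a hedged sketch of the kind of check Analyst #2 performed: pulling fallback domains out of a deobfuscated dropper's configuration. The embedded config shown is hypothetical; real Nemucod variants differ, but many droppers carry an array of backup URLs in exactly this spirit.

```python
# Sketch: extract fallback download domains from a deobfuscated dropper.
# The embedded config below is hypothetical; real Nemucod variants vary,
# but many droppers carry an array of backup URLs in this spirit.
import re

deobfuscated_js = """
var urls = ["hxxp://first-stage.example/counter.php",
            "hxxp://backup-one.example/counter.php",
            "hxxp://backup-two.example/counter.php"];
"""

# Accept both live ('http') and defanged ('hxxp') URL forms.
pattern = re.compile(r"h(?:tt|xx)ps?://([^/\"']+)")
backup_domains = sorted(set(pattern.findall(deobfuscated_js)))
print(backup_domains)  # every domain here belongs in the RPZ sinkhole
```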
As we see, the original issue resurfaced because Analyst #1 fell victim to the DK effect. Their confidence in the output, based on the information they processed, was too high, and their lack of experience with thorough investigations led them to believe they had enough information to be confident in their findings. Fortunately, Analyst #2’s more thorough analysis revealed additional indicators and led to a better overall response. In keeping with the Dunning–Kruger findings, Analyst #2 judged that more analysis of host-based indicators was necessary to determine the true impact on infected hosts. Basically, the seasoned analyst, prepared by experience, did not feel confident the analysis was complete, while the junior analyst, less prepared, became confident too early in the process.
Can You Dump Dunning-Kruger?
Quality measurements can be subjective, but there may be quantitative ways to compare the work of analysts. Decision trees, flow charts, and other organizational aids can all help increase consistency in the response approach for all analysts. Thorough documentation that provides clear and direct instructions on how to perform analysis tasks will also help your organization dump the Dunning–Kruger effect. But in the end, it is a step-by-step methodology outlining the investigative process – one that standardizes approaches while introducing checks and balances – that will help your organization deploy the right approach in the right situation. By doing so, you can prepare your analysts for the increasing threats that lie ahead and prevent over-confidence from destroying your organization’s confidence in its incident response.
With all that has been written and said about data preparation, you are probably convinced your enterprise is ready to take advantage of its ability to rapidly transform inconsistent, incomplete, and inaccurate data into the clean, complete, ready-to-use data your analytics require. This cleaned-up data can inform business decisions and help meet business and IT leaders’ efficiency, cost, and risk mandates.
I’m sure you would agree data preparation is a powerful and strategic resource with multiple benefits. So now the only question is who buys the data preparation solution for your enterprise?
The Ideal Buying Team
While there’s not a 100 percent right answer to the question above, it seems the most successful adopters select and purchase their data preparation tools collaboratively with representatives from both the business and IT sides of their organizations.
Teaming ensures that everyone’s voice is heard and collective objectives are met. It avoids the false starts that can occur when someone goes it alone. Nothing is as frustrating as finding out after the fact that the selected tool lacks key capabilities, is too IT-oriented for business analysts to use, or doesn’t scale to meet your data volume and performance requirements.
Round Up, then Skill Up Your Team
Who will be on your data preparation buying team? Give this some serious thought. A simple table can help you identify and select ideal team members from across your company. For your convenience, I’ve added a downloadable buyer’s template you can use to simplify this process.
Once you’ve determined who you need on your buying team, it’s time to bring them on board. To skill them up, revisit my earlier blogs; they provide direct insights as well as links to rich data preparation content sources.
Having spent most of my career in product marketing, and having participated in Gartner Magic Quadrant evaluations across multiple technology categories, I can tell you that these things are a lot of work. You might think that Gartner just waves a magic wand and drops some dots on a chart, but you’d be very wrong.
I’ll admit that Cisco is particularly strong in communicating with our key analysts. And I believe that goes a long way. Our response to this year’s Gartner Magic Quadrant for Unified Communications survey was 59 pages of excruciating detail that covered everything from product features and functions to marketing and sales strategies.
As part of the evaluation, Gartner also asks for a list of customers to gain feedback. They also schedule a dedicated briefing with us to review our response to their survey. Gartner rolls up this information into its assessment, adding accumulated knowledge from briefings and customer conversations throughout the year. Combined, all this ultimately determines the placement of that magic dot.
I’m proud to say that Gartner has positioned Cisco as a leader in its Magic Quadrant for Unified Communications for the ninth year. Gartner places us highest on the “ability to execute” axis and furthest on the “completeness of vision” axis in the evaluation.
https://youtu.be/61wRDVKhpys
We’re proud of the product vision we’ve been delivering in the collaboration group. And we’re excited about the journey we’re taking with you to digitize your business. Collaboration is a rapidly evolving space. We don’t rest on past success but continue to innovate — and disrupt where necessary. Our mission is to bring you the best possible collaboration experience now — and into the future. We believe that the latest report reflects an understanding of that vision and deep insight into our execution.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Cisco. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. Gartner, Inc., Gartner Magic Quadrant for Unified Communications 2016, Bern Elliot, Steve Blood, Megan Marek Fernandez, July 14, 2016.
An issue I can relate to – the lack of women in cybersecurity. While women represented 25% of computing professionals in 2015, they made up only 10% of the information security profession.
Cisco gives us time to volunteer to make a difference in the world where we have a personal stake. I put my hours to use to change this statistic, and went back to summer camp!
Dr. Tony Coulson, director of CSUSB Cybersecurity Center, begins the week-long camp with a selfie.
A GenCyber summer camp, that is. A joint venture between the National Security Agency (NSA) and the National Science Foundation (NSF), this camp tries to address the tremendous shortage of cybersecurity professionals in the United States by creating interest at the K-12 level. The specific camp I attended was a collaboration between California State University, San Bernardino (CSUSB) and the Girl Scouts of San Gorgonio Council.
I’ve been involved with K-12 STEM education events for 10 years, and as a researcher for Cisco’s Security and Trust organization, no cause was a better fit for my time.
Make no mistake, the 5-day event that hosted 250 middle school girls on the CSUSB campus was no ordinary summer camp. These girls learned to fly drones and program an Atari Breakout-like game on a Raspberry Pi, a tiny computer they got to take home after camp ended.
They created skits that demonstrated good cyber hygiene principles (such as “update software regularly” and “be cautious with free Wi-Fi”) using their imagination and a box of random props.
I recruited four of my Cisco colleagues from the Austin, TX site to donate their time as well, and two went above and beyond by spending 60+ hours preparing and leading a web security workshop.
(L to R) David, Aaron, Alicia, Sadaf and me excited to begin the week before the opening ceremony.
I spent the entire week as a blue group co-leader, trying to impart my technical knowledge to the 50 girls assigned to us. I was amazed how fast the campers picked up concepts typically taught at the high school or collegiate level, such as TCP/IP, navigating the Linux command line, and forensic analysis of USB thumb drives.
As this was my first time working with such a large group of pre-teens, I took many notes as my co-leads (a combination of Girl Scout troop leaders, licensed K-12 teachers, and counselors) stressed invaluable life skills such as teamwork, empathy, self-esteem, and taking responsibility.
By the end of the camp, I’m not sure who learned more, myself or the 250 brilliant girls who left ready to take on the world.
I do know I was grateful to go home with a box of Thin Mints and my first Girl Scout Patch in over 20 years.
The good news this time is that we don’t have a conflict like the one back in Berlin on February 18, 2016, when AnsibleFest and Cisco Live landed in the same week.
At ChefConf, attendees asked where to find more info on the demo of installing Chef on Nexus that we showed during the event. Here it is: http://bit.ly/29tTe1X. More links and assets can be found in my previous blog.
You may wonder why Cisco has been actively participating in these types of events. The answer is simple: automation. As customers embrace the DevOps model in their environments, they want automation in their networks to simplify and accelerate application deployment. NX-OS enables integration of DevOps tools like Ansible, Puppet, and Chef to program and automate networks.
We’ve highlighted different use cases and capabilities using these tools in two white papers: a high-level overview and a deep dive.
With the Ansible 2.1 release, we have support across Cisco’s operating systems: NX-OS, IOS, and IOS XR. This means bringing configuration simplicity and automation to both Nexus and Catalyst switches. We’ll show several demos at AnsibleFest, like configuring a VXLAN EVPN fabric, ACLs, and more.
We’ll have our experts on the floor in both the Networking Hub and main lobby that can answer your questions and demonstrate what you can do with Nexus and Catalyst.
Stay up to date on the latest versions of the NX-OS Cisco-Ansible module and the IOS Cisco-Ansible module. Also, visit the Cisco Marketplace and the Ansible-Cisco page. The Ansible modules are developed in partnership with Cisco and are open source, helping network administrators manage Cisco network elements using Ansible.
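If you'd like a feel for the programmability these integrations build on before the event, here's a minimal sketch that talks to NX-OS directly through the NX-API JSON-RPC interface from Python. The switch address and credentials are placeholders, and NX-API must first be enabled on the switch; Ansible, Puppet, and Chef drive this same underlying programmability for you.

```python
# Minimal sketch of querying NX-OS through the NX-API JSON-RPC endpoint.
# The switch URL and credentials are placeholders; NX-API must be enabled
# on the switch first ('feature nxapi' in the configuration).
import requests

SWITCH_URL = "https://nexus-switch.example.com/ins"  # placeholder address
AUTH = ("admin", "password")                         # placeholder credentials

payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}]

response = requests.post(
    SWITCH_URL,
    json=payload,
    auth=AUTH,
    headers={"content-type": "application/json-rpc"},
    verify=False,  # lab-only shortcut; use proper certificates in production
)
response.raise_for_status()
print(response.json())  # structured output for scripts, not screen-scraping
```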
Cisco Live Las Vegas saw thousands of people come together to talk tech and get the inside track on the industry. Here’s a look at the top five video trends to emerge from this year’s event. But what does it all mean for Service Providers?
Now is the time for cloud TV
Cloud TV was an ever-present topic at the show. The tech underpins all the advances shaping the industry. It’s now seen as a key platform for TV and video across all services. So what’s driving this change?
The short answer is virtualization. A virtualized infrastructure comes with a raft of benefits. Chief among these are greater agility and lower costs. But it’s also the foundation for cloud-based services.
Agile, app-based modes of delivery, soon to be the norm for video, depend on the cloud. So if you haven’t virtualized yet, it’s time to start exploring.
Tightly linked to the call for bespoke, multi-device content is the rise of the skinny bundle. Consumers expect to choose how they pay for content. They want to pick when and how they view it. The industry is moving away from a one-size-fits-all payment plan. Instead, skinny bundles now put content choice in the hands of the viewer.
NOW TV is a great example. It’s contract-free and available on more than 60 devices. So it works for a family watching at home, and it works for a student going off to college. Give them a NOW TV box with IP access and they can decide what they want to watch.
The market is fragmenting
All this choice does have its downsides. The huge range of options can be too much for users. While one bill may seem expensive, it’s nice and easy to manage. Keeping track of multiple devices and bills can be a pain.
As a Service Provider, you need developers who can support these devices, platforms and applications. Again, the answer lies with virtualization. It’s agile. It’s scalable. And it gives you a low-risk environment for development and testing.
Speed is more important than ever
With on-demand consumption across multiple devices soon to be the norm, cherry-picking platforms just isn’t an option. You need to make sure your service is on all screens from the day of launch.
Your developers most likely work to a 12-month cycle for a set-top box. For apps, that comes down to a nine-day release cycle, and a single release could serve 50 different devices. The challenge multiplies fast. The success of the likes of Netflix is down to the cloud.
Security means protecting the value of your business
You need to hit these platforms quickly but you need to do it securely, too. We tend to talk about protecting content. But software security is all about using better development to protect the value of your business. Digital rights management isn’t just about encrypting at one end and decrypting at the other. It’s about building the right business models for each platform and shaping them to different demographics.
The security challenges will of course vary by platform. But it’s vital that you offer a consistent pricing structure across them. No one wants to pay for content on Android that they could watch for free on a Kindle Fire. A clear, coherent approach is vital to customer loyalty. It’s key to making all that freedom of choice feel manageable, and liberating, for users.
Find out more
Watch exclusive content from Cisco Live and find out how you could learn from Cisco’s expertise by attending the next event here.
The pace of business is accelerating. To keep up, companies need to be able to move fast and seize new opportunities. That means keeping employees in touch with the people, tools, and information they require, wherever and whenever they need them. If you can help them do this securely, the rewards can be huge. So how can it be done?
Today’s workers are more mobile. They keep working on the road, at home, and at customer sites. According to the Cisco VNI Global Mobile Data Traffic Forecast, there will be 5.5 billion mobile users worldwide by 2020. That’s up from 4.8 billion in 2015. For road warriors, every missed call or email is a lost business opportunity.
Mobile connections are getting more complex too. And employees on the go have a host of options. An employee might call into a WebEx conference on their smartphone on their way to work using a cell connection. But when they get to the office, their phone will need to automatically switch to their company’s Wi-Fi network to get the best signal.
So how can you help companies deliver the secure, non-stop mobile experience they expect, regardless of access method? Virtual private networks (VPNs) are a great way to support enterprise mobility, and the latest technology can help your enterprise customers make the most of it.
Mobile connections that are good to go
For example, security is top of mind for every enterprise with a mobile workforce. The threat landscape is changing, and a new breed of hackers is taking aim at mobile devices to get their hands on sensitive business and personal data.
An Evolved Packet Data Gateway (ePDG) can help service providers ensure every user’s mobile experience remains secure, regardless of where they connect. An ePDG acts as a gateway between untrusted networks, such as public Wi-Fi, and the cellular core. When a user moves to an untrusted network for data, voice, or video services, the ePDG uses Internet Key Exchange version 2 (IKEv2) to establish IPsec tunnels that protect the traffic with strong encryption.
But what about keeping the VPN connection alive while users roam across networks and in and out of wireless coverage? With a mobile IP VPN, employees can stay connected even when they are on the move. Instead of tying the tunnel endpoint to the device’s current physical IP address, each tunnel is tied to a logical IP address assigned to the device. So connections won’t drop just because a user switches networks. It’s the key to seamless, safer mobility.
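To illustrate the idea, here's a purely conceptual sketch, with hypothetical names, of a tunnel registry keyed to a stable device identity rather than the physical network address. In real deployments, IKEv2's mobility extension (MOBIKE, RFC 4555) is what lets a tunnel endpoint be updated in place.

```python
# Conceptual sketch of tunnels keyed to a stable device identity instead
# of the physical network address, so a tunnel survives a Wi-Fi/cellular
# handoff. Names are hypothetical; real deployments use IKEv2 mobility
# (MOBIKE, RFC 4555) to update tunnel endpoints in place.
class TunnelRegistry:
    def __init__(self) -> None:
        self._tunnels: dict[str, str] = {}  # device_id -> current physical address

    def establish(self, device_id: str, physical_addr: str) -> None:
        self._tunnels[device_id] = physical_addr
        print(f"tunnel up for {device_id} via {physical_addr}")

    def handoff(self, device_id: str, new_addr: str) -> None:
        # The tunnel identity is the device, so a network switch is just
        # an endpoint update, not a teardown and re-authentication.
        self._tunnels[device_id] = new_addr
        print(f"{device_id} moved to {new_addr}; session preserved")

registry = TunnelRegistry()
registry.establish("laptop-42", "198.51.100.7")  # office Wi-Fi
registry.handoff("laptop-42", "203.0.113.88")    # switched to cellular
```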
The right network makes it possible
It’s clear that VPNs have huge potential for businesses that need to support a more mobile workforce. Those businesses will look for a service provider that can deliver the steady performance and security their employees need, whether they are in the office, at home, or on the go. This is your chance to offer that service. Grab it now.
Virtual reality is getting a lot of play at the moment. It’s the hot topic in the gaming world and is now gaining traction in video. So how can you roll out the VR features consumers demand – quickly and at low cost?
Virtual reality is enjoying rapid growth, and it looks set to have a transformative impact on a host of industries. In fact, Computerworld reports that companies are already demonstrating VR applications for the military, healthcare, retail and other sectors.
There has been a wide range of responses to VR as a market opportunity. Some broadcasters and service providers are jumping in with confidence. Others aren’t so sure.
Why is VR gaining traction so fast?
VR is going to be a big part of the entertainment landscape over the next few years for several reasons:
User devices are affordable. You’ll increasingly see VR headsets bundled and discounted as an add-on. Gaming platforms, mobile phones, tablets and other devices will offer them.
It’s an immersive experience. This makes it attractive for advertisers or anyone who wants to capitalize on a highly focused audience.
It’s easy to integrate. Content providers can use existing tools to provide a 360-degree experience and blend VR with existing content.
It’s fun. VR provides a great experience and it’s so easy to use.
You can move into VR quickly and with confidence
If you’re among the cautious, waiting to see if VR succeeds, you need to get into the market now. Once users migrate to another platform or service, you’ll have to work harder and invest more to win them back.
There are three requirements for market participation:
Great video content.
Consistent, high-quality delivery of video.
Support for a wide range of end-user headsets.
With cloud-based services, you gain:
Agility: Delivering VR from the cloud means you can be agile and adapt to the dizzying speed of change in the VR marketplace. This includes support for the steady stream of new headsets from an ever-expanding field of companies. You can leverage the cloud’s rapid update and release cycles to stay on top of new technology, competitive moves, and consumer-consumption models.
Great scalability: Launch services for new VR hardware faster. Extend your footprint and reach new customers while maintaining the quality that VR demands.
Time to market: Deliver more, higher-quality video content across multiple networks. The scale and openness of the cloud mean you can roll out the new formats VR requires without long testing procedures.
With such a clear path to VR success in your sights, it’s time to make this lucrative new service a reality. To make it great, deliver it through the cloud.
Find out more
To learn how the move to the cloud can help you accelerate your VR strategy, visit cisco.com/go/infinitevideo.