A shift in global economic power towards Asia has forced developed countries in particular to re-evaluate their own economic futures. A premium is now placed on the speed of innovation and on countries' ability to transition away from traditional manufacturing and resource extraction. The impact of digital technology on business models, supply chains and customer expectations is well publicized, but perhaps the most profound aspect for Vocational Education and Training (VET) institutions relates to changing requirements for skills. In Australia, we know that up to 44% of jobs – 5.1 million – are at risk from digital disruption.
These economic and educational shifts have forced Australia’s education leaders to address the changing skill requirements and adapt their institutions accordingly. To help our customers learn from the successful digitization and curriculum change already taking place in Asia, we founded the Australian TAFE Study Tour.
The 2016 Australian TAFE Study Tour brought 10 TAFE Institute leaders from Victoria, Queensland, South Australia and the Australian Capital Territory to Singapore and South Korea to learn more about the impact of digital disruption and emerging trends in vocational education and training.
The group spent two days in Singapore at Singapore Polytechnic and Temasek Polytechnic, one day in Songdo to examine smart city initiatives and South Korea's approach to start-up incubation, and one day in Seoul to see the role of education in meeting the needs of the economy.
The overwhelming message TAFEs took away from the Asian immersion was the importance of getting started. The institutes visited on the tour indicated that they didn't wait for perfect information or ideal implementation conditions. One of the most challenging aspects of embracing innovation, they reported, was grappling with ambiguity. Australian institutes are under intense pressure (funding, mounting 'performance pressure' and rising expectations), and embracing risk and uncertainty can be challenging. However, getting started on Digital Campus projects need not be overly risky, resource-intensive or complex.
Check out this infographic that outlines some of the key insights from the Study Tour, and read the full report here.
We’re always on the lookout for the next big application to drive Internet traffic. When we see a type of traffic almost doubling in a year, we put it on our watch list. Below are five that topped our watch list based on the most recent Cisco Visual Networking Index (VNI) Complete Forecast.
VIRTUAL REALITY (VR) TRAFFIC QUADRUPLED IN 2015
Virtual reality has come of age. Head-mounted displays are now affordable and provide compelling immersive experiences to consumers. Though content availability is still limited, global VR traffic more than quadrupled in 2015, from 4 petabytes per month in 2014 to 18 petabytes per month in 2015.
Will VR become more than a niche application? We believe it will, but we do not expect VR to be mainstream before 2020. Even with VR traffic multiplying 61-fold by 2020, it will still remain less than 1% of total Internet traffic. Content availability will be one barrier to growth. VR is very different from linear media, and it will take the entertainment industry time to transform. Gaming is the industry most ready to develop content for VR hardware, and will be the first to yield substantial traffic.
VIDEO SURVEILLANCE TRAFFIC GREW 90%
Home video surveillance traffic took off last year, increasing 90%. The combination of new cameras and cloud-based services allows users to access live streaming video of their home from anywhere. Internet video surveillance traffic nearly doubled in 2015, from 272 petabytes per month in 2014 to 516 petabytes per month in 2015.
The implications for Internet access providers are profound. Home video surveillance increases a home's upstream traffic dramatically, and the volumes are high because video is uploaded to the cloud continuously, regardless of how frequently or infrequently a user actually looks at the feed. Service providers are concerned about this type of "vampire bandwidth" effect, analogous to vampire electricity. At high resolution, a single camera can upload as much as 60 GB per month, more than the average household's total Internet traffic today (49 GB/mo).
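A quick back-of-the-envelope check in Python shows how fast a continuous stream adds up; the 185 kbps bitrate is our own illustrative assumption, not a figure from the VNI report.

```python
# Rough check (not from the VNI report): how a single camera streaming
# continuously adds up to ~60 GB per month. The bitrate is hypothetical;
# real cameras vary widely with resolution and codec.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds

def monthly_upload_gb(bitrate_kbps: float) -> float:
    """Gigabytes uploaded per month at a constant bitrate."""
    bits = bitrate_kbps * 1_000 * SECONDS_PER_MONTH
    return bits / 8 / 1e9  # bits -> bytes -> GB

# A continuous ~185 kbps stream already exceeds the 49 GB/mo
# average-household figure cited above:
print(f"{monthly_upload_gb(185):.0f} GB/month")  # ~60 GB/month
```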
Internet video surveillance traffic will increase 10-fold between 2015 and 2020. Globally, 4 percent of all Internet video traffic will be due to video surveillance by 2020, up from 1.5 percent in 2015.
SMARTPHONE TRAFFIC GREW 86%
Smartphones accounted for only 8 percent of total IP traffic in 2015, but experienced a growth rate much higher than most other device categories. Not all this traffic crossed mobile networks – in fact, the majority of smartphone traffic is offloaded to Wi-Fi. We expect smartphones to continue this fast pace of growth, with smartphone traffic exceeding PC traffic by 2020. Such a conclusion would have been unimaginable 10 years ago.
HOUSEHOLDS EXCEEDING 500 GB PER MONTH GREW 76%
The number of Internet households exceeding 500 GB grew from 5.8 million to over 10.2 million in 2015. The increasing availability of online video content has made it possible for “cord-cutting” households to opt to receive video content exclusively through online video providers rather than through cable or satellite. These households consume a large volume of data each month, and we believe cord-cutting is one reason behind the growth of households exceeding the 500 GB benchmark. For the first time, we’ve seen significant numbers of users encountering fair-usage limits on the fixed side, and in response we’ve seen a number of service providers increase those limits. In general, fixed limits are still more generous relative to average usage than mobile limits.
INTERNET GAMING TRAFFIC GREW 67%
Now that gaming consoles ship with ample on-board storage, gamers are downloading game files rather than using discs. Driven by these file downloads, Internet gaming traffic grew 67% in 2015 and will grow 7-fold from 2015 to 2020. Globally, Internet gaming traffic will be 4 percent of consumer Internet traffic in 2020, up from 2 percent in 2015.
The download of game files often occurs during peak hours. If this continues to be the case, perhaps we will see partnerships between content providers and network providers to manage download times and avoid congestion.
A LOOK AT THE FUTURE
Will these applications rise beyond “watch list” status and transform the composition of Internet traffic? We hope you will let us know what you think as you review the complete VNI study. Also, please join us for our webcast on June 14!
Defense in depth is a well understood and widely implemented approach that can better secure your organization’s network. It works by placing multiple layers of defense throughout the network to create a series of overlapping and redundant defenses. If one layer fails, there will still be other defenses that remain intact. However, a lesser known yet equally important approach is the concept of detection in depth.
While defense has been and always will be a cornerstone of cybersecurity, your organization also needs the ability to detect and respond to attacks. That's where detection in depth comes in, providing a similarly redundant, overlapping approach. Fortunately, most organizations today have an arsenal of security detection and response tools that can accomplish this. In practice, detection in depth means interoperability between your intrusion prevention system, your flow monitoring tools, your advanced malware solution, and your domain reputation system.
Take IPS as an example: it may alert on an attack originating from, or directed at, your organization's assets. When that alert fires, an incident responder will typically query other sensor systems and available historical reputation data to better understand the attack and decide what steps, if any, to take. This is where detection in depth begins. Building on that example, if your organization has added depth and capability with tools like DNS logging, host-based IPS logs, advanced malware solutions, and NetFlow, the incident responder will have a much more accurate and complete understanding of the alert. Similar data from different tools or sources can help confirm the activity, while differing data may reveal components of the attack that were not visible from the first source.
With detection in depth, it is important to understand that it is the quality of your data sources – not the quantity – that drives better understanding. While four high-quality sources will always be better than one, three high-fidelity sources can be much better than five poor ones. Equally important is the interoperability of your data: how these detection capabilities complement each other and work together is critical to the success of your investigations. Assembling the context that different sources bring has to be automated. In our earlier example (an IPS alert), if the incident responder has ten different attack data sources available, the responder will log into one or two until they find some relevant data and then move on, often leaving masses of data untouched. Merging those capabilities and sources and presenting them to the responder in a unified view reduces response time, helps eliminate guesswork, and leaves no available stone unturned.
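The sketch below illustrates that unified-view idea in Python: an alert is automatically enriched with whatever related records each configured source holds, so the responder starts from the merged picture instead of querying tools one by one. All source names, fields, and records here are hypothetical.

```python
# Minimal sketch of automated alert enrichment across detection sources.
# In practice each "source" would be a query against DNS logs, a NetFlow
# collector, a malware sandbox, etc.; here they are simple lookup tables.
from dataclasses import dataclass, field

@dataclass
class Alert:
    src_ip: str
    dst_ip: str
    signature: str
    context: dict = field(default_factory=dict)

SOURCES = {
    "dns_logs":   {"203.0.113.7": ["evil-domain.example resolved at 14:02"]},
    "netflow":    {"203.0.113.7": ["10.0.0.5 -> 203.0.113.7:443, 2.1 MB"]},
    "reputation": {"203.0.113.7": ["listed on internal blocklist"]},
}

def enrich(alert: Alert) -> Alert:
    """Attach every source's records for the alert's external IP."""
    for name, table in SOURCES.items():
        hits = table.get(alert.src_ip, [])
        if hits:
            alert.context[name] = hits
    return alert

alert = enrich(Alert("203.0.113.7", "10.0.0.5", "exploit-kit landing page"))
print(alert.context)  # the responder sees all corroborating data at once
```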
The Achilles heel of detection in depth is gathering all the relevant sensor data, combining it, adding the needed context from historical attack data, and then producing consolidated alerts that capture as much of the organization's understanding of the attack as possible. To overcome this, you must have a foundational detection and response framework built on interoperability. One way to gauge how mature your organization is with detection in depth is to look at how much effort your incident response team spends getting data contextualized and compared. Is your process automated enough that they can spend all of their time working on the results? Or do they spend most of their time gathering data before they can even start?
In cybersecurity creating a series of overlapping and redundant defenses is critical to success. By adding a similarly constructed approach for detection and response throughout your network that unifies all available threat information, your organization can gain even more security and peace of mind.
This vulnerability was discovered by Aleksandar Nikolic of Cisco Talos.
PDFium is the default PDF reader included in the Google Chrome web browser. Talos has identified an exploitable heap buffer overflow vulnerability in the PDFium PDF reader. Simply viewing a PDF document that includes an embedded JPEG 2000 image can allow an attacker to achieve arbitrary code execution on the victim's system. The most effective attack vector is for the threat actor to place a malicious PDF file on a website and then direct victims to it using either phishing emails or malvertising.
This vulnerability was discovered by Dave McDaniel, Senior Research Engineer.
Summary
iPerf is a network testing application that is typically deployed in a client/server configuration and is used to measure the available network bandwidth between two systems by creating TCP and/or UDP connections. For each connection, iPerf reports maximum bandwidth, loss, and other performance-related metrics. It is commonly used to evaluate and quantify the impact of network optimizations and to obtain baseline metrics for network performance.
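As a usage illustration, here is a minimal Python sketch that drives an iPerf3 TCP test and reads its JSON output (which iPerf3 produces via the forked cJSON library discussed below). The server hostname is hypothetical, and the exact JSON field names should be verified against your iPerf3 version.

```python
# Run an iperf3 TCP test and parse the JSON report (-J flag).
import json
import subprocess

def run_iperf3(server: str, seconds: int = 10) -> dict:
    """Run a TCP test against an iperf3 server and return the parsed JSON."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

result = run_iperf3("iperf.example.net")  # hypothetical server
sent = result["end"]["sum_sent"]["bits_per_second"] / 1e6
recv = result["end"]["sum_received"]["bits_per_second"] / 1e6
print(f"sent: {sent:.1f} Mbps, received: {recv:.1f} Mbps")
```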
iPerf3, developed by ESnet and Lawrence Berkeley National Laboratory, is a complete redesign of the original iPerf application and uses a forked cJSON library. Cisco Talos recently discovered that the forked version of the cJSON library contains a vulnerability that can lead to Remote Code Execution (RCE) on systems running the iPerf3 server daemon. The vulnerability lies in the way the forked cJSON library parses UTF-8/16 strings. Several public iPerf3 servers accessible from the Internet may be susceptible to remote exploitation using this vulnerability. While the authors of the upstream cJSON library have since released a patch that resolves it, the version of cJSON shipped with iPerf3 3.1-1 is vulnerable. The updated version of the iPerf3 application can be obtained here.
As companies digitize and take advantage of technological innovations to fundamentally change the way they do business, IT and business people need to stay on top of emerging technology trends and prepare to adopt those that will enable new business models and contribute to competitive advantage.
Software will play a major role in the next wave of IT development. Think about the pivotal role of software in making cloud, mobility, and SaaS realities. Even the web itself couldn't take off until the browser came along.
Enabling New Business Models
As I meet with customers and partners, the conversation often turns to innovation. Everyone wants to know where the next Uber, Tesla, or Airbnb will come from and what technology will drive it.
An example of an emerging technology that could eventually affect every enterprise and every government is blockchain. If you’re not familiar with it, blockchain is the technology that makes Bitcoin and other cryptocurrencies possible. And it’s getting a lot of attention.
“The technology most likely to change the next decade of business is not the social web, big data, the cloud, robotics, or even artificial intelligence. It’s the blockchain, the technology behind digital currencies like Bitcoin.
Blockchain technology is complex, but the idea is simple. At its most basic, blockchain is a vast, global distributed ledger or database running on millions of devices and open to anyone, where not just information but anything of value – money, titles, deeds, music, art, scientific discoveries, intellectual property, and even votes – can be moved and stored securely and privately.” – Harvard Business Review
Don Tapscott, the author of Wikinomics, and his son Alex Tapscott have just published Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World. Alex thinks blockchain has the power to change the world.
“Today we are caught in the grip of a troubling prosperity paradox. The economy is growing but fewer people are benefiting. Youth unemployment is stubbornly high, median incomes are slipping and new business formation is hitting multi-decade lows in the developed world. The rise of the Internet has done little to alleviate the bureaucratic bloat and inefficiencies globally, stranding trillions of dollars of dead money in the dark economy. With blockchain technology, a world of possibilities opens up to begin to address these trends.” – Alex Tapscott
While it’s too early to know whether blockchain will have the profound effect Tapscott and many others hope for, it has the potential to bring about big changes in business models.
With blockchain, we could see more discrete, specialized microservices working frictionlessly together with near-zero transaction costs, which would open up all kinds of microtransaction applications that were not previously profitable. And because the blockchain is distributed, and each block is accessed only when it is needed, there could be significant changes to the way information is stored, backed up, bought, and sold.
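To make the ledger idea concrete, here is a toy hash chain in Python. It demonstrates only the core property the quote above alludes to: each block commits to its predecessor's hash, so altering any past record is detectable. It deliberately omits proof-of-work, peer-to-peer distribution, and signatures; it is a teaching sketch, not Bitcoin's actual data format.

```python
# A toy hash-chained ledger: tampering with any block breaks the chain.
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Build a block whose hash covers both its payload and its predecessor."""
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block("0" * 64, {"note": "genesis"})]
chain.append(make_block(chain[-1]["hash"], {"from": "alice", "to": "bob", "amount": 5}))
chain.append(make_block(chain[-1]["hash"], {"from": "bob", "to": "carol", "amount": 2}))

# Verifying the links detects any rewrite of history:
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev"] == prev["hash"], "chain broken: history was altered"
print("ledger intact:", len(chain), "blocks")
```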
Software as a Key Innovation Ingredient
There are many considerations for companies looking to take advantage of these emerging technologies.
Software provides needed flexibility on the innovation journey, allowing you to incorporate updates and new functionality transparently, and without disruption to ongoing IT operations. Software innovation bridges the gap between your existing IT capabilities and new capabilities, like advanced security, mobility, and collaboration, opening the door to the creation of new business models.
New software must be simple to buy, install, and maintain. Overly complex software will slow implementation and impede innovation. Consumption models will need to align closely with a company’s needs, IT budget and timetable for adoption of new technology.
So, as you evaluate emerging IT trends and advances in software, and the new business models they can enable, think carefully about how you can capitalize on these new opportunities when they arise. You can be sure your competitors are.
In order for you to adopt video conferencing pervasively, we know we have to make it as easy to deploy and support as it is to use. On the user-experience front, many of you have told us that we have the best in the industry.
On the deployment front, we’ve significantly accelerated our pace. In March, we delivered Cisco Spark registration on the SX10 QuickSet. And we promised to get the rest of our desk and room systems cloud connected by the end of the year.
We’re way ahead of schedule. We now expect Spark registration this summer for our DX70 and DX80 desktop endpoints, MX Series, and the remainder of our SX Series room-based video systems.
Our engineering teams have been working hard to push the pace. We all know cloud registration is the Holy Grail for easy and broad video adoption since it doesn’t require infrastructure investment.
Scheduled for July release, Collaboration Endpoint 8.2 (CE 8.2) software will support Cisco Spark registration on the additional endpoints.
Endpoint Software Extends to the Desktop

Many of you have asked us to bring the delightful experience and core features from our room-based video endpoints to the desktop.
The DX70 and DX80 along with the rest of our room-based systems will standardize on Collaboration Endpoint software. Beginning with CE 8.2, DX Series endpoints will offer a simplified set-up process as well as a compelling user experience consistent with our room-based systems, particularly the SX10.
In the weeks ahead, be sure to get your video endpoints on the new Collaboration Endpoint 8.2 software and let me know what you think of the new seamless experience!
If you're in Vegas for Cisco Live next month, please stop by our booth to see live demos of the Spark-registered MX200 and DX70.
DOCSIS 3.1 has finally arrived! After years of talk and development, we are beginning to see the initial roll-outs of this next-generation technology. A few years have passed since the benefits of DOCSIS 3.1 were first touted. Are those benefits still relevant today? It is worth revisiting them.
To fully understand the benefits of DOCSIS 3.1, it is necessary to understand the boundaries of DOCSIS 3.0. DOCSIS 3.0 was a transformational technology in its own right and time. It provides the capacity to deliver up to 1 Gbps of data to a service group, the ability to offer high classes of service, and many features that help operators with customer management, reporting, and reliability.
Over time, the strengths of DOCSIS 3.0 have become its weaknesses. The ability to achieve 1 Gbps downstream with up to 32 bonded QAM channels has become a limitation. Long-term bandwidth projections predict that DOCSIS 3.0 will begin to reach maximum capacity as soon as 2019 (without continuing to scale down service group sizes). In addition, competition driving 1 Gbps classes of service has accelerated the need for something beyond DOCSIS 3.0. The once-high service group capacities of 3.0 platforms are no longer enough. As service groups migrate to smaller and smaller groups of homes passed to manage bandwidth availability, more and more ports are required. The continual scaling of chassis, optics, and other equipment to accommodate this growth becomes unsustainable.
To put this scale into perspective, some operators have said they will need to split nodes to 4 to 10 times their current count over the next 10 years, and that is with the full capacity of their 3.0 chassis in use. The result would be up to 10x the CMTS chassis, 10x the optics, and 10x the nodes. Facilities, rack space and power requirements cannot scale with this growth.
For a time, these inevitabilities were pushing many operators to consider a wholesale infrastructure transition to FTTH and PON technologies. The challenge was the complete overhaul of the entire network, from video to data provisioning to OSP cabling and equipment to CPE. The cost, the change in technology and knowledge, and the disruption to customers (and roadsides) made this a very unattractive option.
Enter DOCSIS 3.1. The first problem it solves is the 32-channel limitation. DOCSIS 3.1 can bond much larger blocks of spectrum together to provide a true 1 Gbps class of service and beyond. This also eases the scaling problem: where node segmentation was often required once a service group hit the 32-channel limit, the ability to use the full spectrum for data removes that requirement.
DOCSIS 3.1 also delivers better spectral efficiency. As a rough calculation, consider that a DOCSIS 3.0 256-QAM channel provides approximately 40 Mbps of throughput. DOCSIS 3.1 uses Orthogonal Frequency Division Multiplexing (OFDM), which allows modulation orders of 1024-QAM, 2048-QAM, 4096-QAM and beyond. A 1024-QAM channel provides approximately 50 Mbps of throughput, a 25% increase in the same amount of spectrum. When combined with distributed access architectures (DAA), we see further improvement, enabling 4096-QAM modulation and beyond. DOCSIS 3.1 therefore provides more 'bang for the buck' from the same bandwidth.
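The arithmetic behind that 25% figure is simple: an M-point QAM constellation carries log2(M) bits per symbol. The short Python check below works it out for the modulation orders mentioned above; note that real DOCSIS throughput also depends on FEC and framing overhead, which this ignores.

```python
# Spectral-efficiency gain of higher QAM orders relative to 256-QAM.
from math import log2

def bits_per_symbol(m: int) -> float:
    """An M-point QAM constellation carries log2(M) bits per symbol."""
    return log2(m)

base = bits_per_symbol(256)  # 8 bits/symbol
for m in (1024, 2048, 4096):
    gain = (bits_per_symbol(m) - base) / base
    print(f"{m}-QAM: {bits_per_symbol(m):.0f} bits/symbol, "
          f"{gain:+.0%} vs 256-QAM")
# 1024-QAM: +25%, 2048-QAM: +38%, 4096-QAM: +50%
```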
High-level comparison of features and capabilities of next-gen 3.1 platforms vs. legacy 3.0. *Numbers may vary slightly by vendor chassis.
Originally, the new DOCSIS 3.1 and DAA technologies were designed with smaller and smaller cascades in mind. However, testing has shown that improvements can be made even over some of the longer cascades that exist today. For example, it is possible to achieve 1024-QAM where 256-QAM runs today, and performance continues to improve as cascades get shorter.
Addressing the Upstream
As data rates increase, the upstream becomes more and more of a choke point. Studies suggest that upstream capacity should be about 10% of the highest class of service offered; for a 1 Gbps service to be fully functional, approximately 100 Mbps of upstream throughput is required. As larger and larger data pipes are brought to each service group, the upstream limits will be pushed. DOCSIS 3.0 allows for a 5-85 MHz upstream split, leaving some room for growth. DOCSIS 3.1 pushes the split to 5-204 MHz, which theoretically allows HFC systems to achieve a symmetrical gigabit service.
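As a rough illustration of why the wider split matters, the sketch below estimates upstream capacity for the two splits. The assumed spectral efficiency of about 7 bits/s/Hz is our own illustrative figure for OFDMA with high-order modulation after overhead; real numbers depend heavily on plant quality.

```python
# Approximate upstream capacity under the two upstream splits.
EFFICIENCY_BPS_PER_HZ = 7.0  # assumed for illustration; varies by plant

def upstream_capacity_mbps(split_low_mhz: float, split_high_mhz: float) -> float:
    """Usable spectrum times assumed spectral efficiency, in Mbps."""
    usable_hz = (split_high_mhz - split_low_mhz) * 1e6
    return usable_hz * EFFICIENCY_BPS_PER_HZ / 1e6

print(f"5-85 MHz split:  ~{upstream_capacity_mbps(5, 85):,.0f} Mbps")
print(f"5-204 MHz split: ~{upstream_capacity_mbps(5, 204):,.0f} Mbps")
# The wider split is what brings a symmetrical gigabit within reach.
```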
The Importance of DAA
DAA architectures such as Remote PHY or Remote MAC/PHY are inseparable from DOCSIS 3.1 when discussing the benefits of next-gen DOCSIS platforms. While 3.1 chassis do offer higher port density than their predecessors, the chassis itself still becomes the limitation: next-generation CCAP platforms have more throughput potential than their physical RF outputs can take advantage of. DAA is extremely valuable because it removes that limitation by providing a digital link to the node itself, eliminating the constraint of physical RF ports. It also provides better link performance, which complements the ability to achieve higher orders of modulation (better throughput across the same amount of spectrum).
Improvement in throughput (in Mbps) per unit of spectrum from the higher orders of modulation made possible by DOCSIS 3.1 and DAA.
Perhaps the greatest benefit of DOCSIS 3.1 is that it dramatically extends the life of the HFC network and physical architecture. By extending the life of the physical infrastructure, it extends the life of all the assets of the network—from the video platform, existing CMTS chassis and provisioning systems, optical infrastructure, OSP, and CPE.
The New Urgency of a Long-Term Plan
In many ways DOCSIS 3.1 did swoop in and save the day, but it also brings to light a mistake that cannot be made again. For nearly a decade, many cable operators were trapped in operational mode without a long-term strategy. Had 3.1 not come along, the push to get to FTTH would be exploding at a rate that the supply chain could not support. 3.1 has brought new life to existing infrastructure and has allowed for a more graceful migration to fiber deep, higher bandwidth capacities, system upgrades, service migration and virtualization. All of these solutions need to be executed with an eye on the longer-term future, to ensure that what we do today complements the needs of tomorrow instead of simply extending the limits of the past.
For more information on DOCSIS 3.1 or to discuss your network’s options, contact CCI Systems or follow us on social media.
Micro-segmentation has become a buzzword in the networking industry. Leaving the term and the marketing aside, it is easy to understand why customers are interested in the benefits that Micro-segmentation provides.
The primary advantage of micro-segmentation is that it reduces the attack surface by minimizing the opportunities for lateral movement in the event of a security breach. With traditional networking technologies this is very hard to accomplish. SDN technologies, however, enable a new approach, allowing degrees of flexibility and automation not possible with traditional network management and operations and making micro-segmentation practical.
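As a conceptual sketch (not the ACI API, and with hypothetical group and port names), micro-segmentation can be thought of as a default-deny policy between small workload groups, where traffic flows only if an explicit contract allows it:

```python
# Default-deny policy between workload groups: traffic is permitted only
# where a contract explicitly allows it, limiting lateral movement.
ALLOWED = {
    ("web", "app"): {443},   # web tier may reach the app tier on 443 only
    ("app", "db"):  {5432},  # app tier may reach the database
}

def permitted(src_group: str, dst_group: str, port: int) -> bool:
    """Allow a flow only if a contract exists for this group pair and port."""
    return port in ALLOWED.get((src_group, dst_group), set())

print(permitted("web", "app", 443))  # True: allowed by contract
print(permitted("web", "db", 5432))  # False: no direct web->db path
print(permitted("app", "app", 22))   # False: even intra-tier is denied
```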
I describe ACI micro-segmentation capabilities in this short presentation I did at Network Field Day during Cisco Live Berlin.