
For as long as anyone can remember, making a car trip has involved the risk of hitting a traffic jam. Your journey might grind to a standstill for any number of reasons: bank holiday traffic, unexpected roadworks – or a local street party you never knew about…

But new developments in traffic management are starting to end this frustration by harnessing data from sensors on roads and vehicles, as well as apps and more. The time may come sooner than you think when motorways will use this information to manage your speed and satnavs will automatically route you away from problem spots. And that’s before we get on to the topic of driverless cars, seamlessly flowing through intersections without stopping.

This vision is coming ever closer to reality, thanks to rapidly developing network technology. Last year, Cisco partnered with the city of Hamburg in Germany to open Europe’s first ‘smart road’. It’s part of a project that aims to use a variety of sensors to help manage aspects of the city, like traffic flow, more intelligently. Timely and accurate data could help the city open moving bridges at the quietest times, for example, and reroute traffic around them when this happens.

Telemetry: feeding your network’s ‘brain’

The mind boggles when thinking about how to extend an operation like this across a city – not just for traffic but also lighting, port logistics and the environment. It’s a hugely complex undertaking that is beyond the capacity of any human brain to grasp at once. But when you have the right data, it becomes a real possibility.

The way modern networks function, managing masses of digital traffic moving at high speed, is pretty similar to the way cars travel through a city. Effectively, a programmable network uses a virtual ‘brain’ to make the ‘muscles’ of your existing infrastructure work harder and direct data packets more efficiently. But even a digital brain needs information to make decisions. This is the job of telemetry.

From pull to push

While data is required to manage any network, the difference with telemetry is that it provides large amounts in real time. Traditionally, network managers have polled operational data from their networks for information on what’s going on. But increasing complexity is making this approach inadequate, even when it’s carried out at regular intervals.

Telemetry turns this approach on its head by pushing data from the network automatically rather than waiting to be asked. It provides a constant flow of information in real time that can be used to manage performance effectively. The data can be combined with programmable capabilities to spot problems in the network and automate diversions, or simply to route packets in the most efficient manner.
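The contrast between polling and push-based telemetry can be sketched in a few lines of Python. This is an illustrative toy only – the `Device` class and its method names are invented for the example, not drawn from any real Cisco API:

```python
import random
import time
from typing import Callable

class Device:
    """Stands in for a network element exposing an interface counter."""
    def read_counter(self) -> int:
        return random.randint(0, 10_000)

def poll(device: Device, handle: Callable[[int], None],
         interval_s: float, rounds: int) -> None:
    """Pull model: the manager asks at fixed intervals.
    Whatever happens between polls goes unseen."""
    for _ in range(rounds):
        handle(device.read_counter())
        time.sleep(interval_s)

def subscribe(device: Device, handle: Callable[[int], None], rounds: int) -> None:
    """Push model: the device streams samples as they are produced,
    so the collector sees a continuous flow without asking."""
    for _ in range(rounds):  # in reality, driven by the device itself
        handle(device.read_counter())

polled, pushed = [], []
poll(Device(), polled.append, interval_s=0.01, rounds=3)
subscribe(Device(), pushed.append, rounds=5)
print(len(polled), len(pushed))  # 3 5
```

The point of the sketch is the inversion of control: in the pull model the collector drives the schedule, while in the push model the device does, so the sampling rate can match the rate at which the data actually changes.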

Of course, you don’t want any old data. That’s why Cisco’s telemetry technology uses intelligent models to structure it in a useful way. We create our models in YANG because it has emerged as an industry standard and can be easily integrated with the monitoring tools you already have – or those you want to develop in future.

The data provided by our telemetry technology is also compatible with standard big data tools, enabling you to carry out further analysis. We’re committed to an open approach that encourages collaboration and flexibility, and enables future growth.

Route to the future

The way our telemetry technology enables digital traffic to take an intelligent route through a network is similar to how smart traffic management irons out physical bottlenecks. Whether the result is more uptime or happier drivers, data helps these systems operate with greater efficiency.

It’s highly appropriate that our programmable networks powered by telemetry are being used to develop smart systems in places such as Hamburg – and that big hitters like Netflix rely on telemetry to run their systems smoothly.

Doubtless the technology has many more possibilities yet to be imagined. What’s certain is that this is the route to the future – not just for cars, but for everything else, too.

KEY TAKEAWAYS

  • New developments in traffic management are starting to harness data from sensors to route traffic efficiently. This vision is coming ever closer to reality, thanks to rapidly developing network technology.
  • The way modern networks function, managing masses of digital traffic moving at high speed, is similar to the way cars travel through a city.
  • Effectively, a programmable network uses a virtual ‘brain’ to make the ‘muscles’ of your infrastructure work harder and direct data packets more efficiently. But even a digital brain needs information to make decisions. This is the job of telemetry.
  • Traditionally, network managers have polled operational data from their networks for information on what’s going on. Telemetry turns this approach on its head by pushing data from the network automatically rather than waiting to be asked.
  • Of course, you don’t want any old data. That’s why Cisco’s telemetry technology uses intelligent models to structure it in a useful way that’s compatible with your existing monitoring tools.
  • To learn more, visit our Evolved Programmable Network page.

Learn more about innovation in Programmable Networks here or watch our expert interview about Programmable Networks.


Authors

Christian Thomas

Head of Pre-sales engineering

Global Service Provider, Cisco France


Recently, I was fortunate enough to be invited to present at the NSW GovDC conference. The fourth industrial revolution, described by Dr Klaus Schwab in his book launched at the 2016 World Economic Forum, has not just begun; it’s in full swing. This is not simply a trend or a fad. The fourth industrial revolution is a fusion of technologies across the physical, digital and biological worlds, creating entirely new capabilities and dramatic impacts on political, social and economic systems.

Cloud computing has become the foundation and engine room for most digital initiatives in this next industrial revolution. Industry has moved beyond the initial formative and turbulent phases of cloud and is now entering the third phase – consolidation – a realistic and practical acceptance of hybrid cloud, in which some apps will run in private cloud, some in public clouds and some in both. Most established organisations, such as government agencies, are actively shifting their focus from the technology itself to the business outcomes it enables.

However, the technology has never been better positioned to enable this. Monolithic apps on bare-metal compute infrastructure have been virtualized over the past decade, enabling greater asset efficiency. The next decade will be characterized by apps constructed as one or more microservices within containers. This will create greater opportunity to focus on business outcomes, faster delivery, greater scale and more reliable services. Importantly, ‘cloud native’ apps will be able to run on private cloud, public cloud or both – whichever is most appropriate.

This raises the question of governance and management of these apps, given that they can now run on-premises as well as in the public cloud. How does an organisation take advantage of the benefits enabled by new app constructs and capabilities while retaining control and avoiding ‘lock-in’ to any one particular public cloud?

The new keyword in cloud is “policy”. In much the same way as public policy ensures that citizens or organisations operate in a certain prescribed way, app policy ensures that the app behaves in the way the organisation wants it to behave.
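As an illustration of the idea, an application policy can be thought of as structured data that travels with the app and is checked before every placement decision. The fields and the checker below are entirely hypothetical, invented for this sketch rather than taken from any real policy model:

```python
# Hypothetical sketch: these policy fields and the checker are illustrative only.
app_policy = {
    "app": "citizen-portal",
    "allowed_clouds": ["private-dc", "public-a", "public-b"],  # no lock-in to one provider
    "data_residency": "AU",   # data must stay onshore
    "min_replicas": 2,        # availability floor
    "encrypt_at_rest": True,
}

def placement_allowed(policy: dict, cloud: str, region: str) -> bool:
    """The policy travels with the app: any candidate cloud is checked
    against the same rules before the app is deployed there."""
    return cloud in policy["allowed_clouds"] and region.startswith(policy["data_residency"])

print(placement_allowed(app_policy, "public-a", "AU-southeast"))  # True
print(placement_allowed(app_policy, "public-c", "AU-southeast"))  # False: unlisted cloud
```

Because the rules are attached to the app rather than to any one cloud, the same checks apply wherever the app runs – which is precisely what keeps the organisation in control.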

Cisco CloudCenter (or ‘CliQr’), born out of Cisco’s recent acquisition of CliQr, provides this policy management, whereby a policy can be applied to the application itself. CliQr is an application-defined cloud orchestration platform that enables customers to easily model, deploy and manage new and existing applications in any cloud and data center environment. In turn, it provides superior scalability, faster deployment (from weeks to minutes) and support for rigorous security requirements. CliQr makes it simpler for customers to automate and manage application policies across the entire data stack. It already interoperates with ServiceNow and is integrated into Cisco’s Application Centric Infrastructure (ACI) and Unified Computing System (UCS) solutions.

Although we haven’t focused on data in this blog, in considering future trends I’d like to add one more consideration. At Cisco, we believe that as much as forty percent of future data will never reach the data centre, either because the latency would be too high or because there is simply no need for high backhaul bandwidth. In such situations, the data can’t move to the query hosted in the data centre; instead, the query has to be handled at the edge of the network, where the data is. We refer to this as the fog. There are already several use cases in sectors such as resources (remote oil and gas rigs) and transport (congestion management), and we expect more to emerge as more sensors are deployed.

The fourth industrial revolution is here. Cloud computing and application development are evolving rapidly to enable organisations to become part of this transition and take advantage of the many opportunities it provides.

To see the full talk:

https://youtu.be/AmZ6c2CA8E4

Authors

Kevin Bloch

Chief Technology Officer (CTO)

Cisco


Last week, I discussed the economics of on-premises storage using the S3260 Storage Server vs. the cloud. Net net, the S3260 is 56% less expensive.

Today, I want to talk about consolidating multiple 2RU rackmount servers onto an S3260. Yes, I’m going to try and sell you fewer servers.

2RU rackmount servers like the Cisco UCS C240 M4 support both large form factor (LFF) and small form factor (SFF) drives, and the capacity per drive varies greatly between the two. We’ll be focusing on LFF drives, which are available in capacities up to 10TB. With 12 drive bays, this gives us 120TB raw in 2RU.

The S3260 supports up to 60 LFF 10TB drives when using one server node or 600TB raw in 4RU.

S3260 vs. C240 M4 LFF



Now let’s look at an example of consolidating five C240 M4 LFF servers onto an S3260.

If I’m targeting 600TB, I would load up five C240 M4 servers with 12 drives each, and an S3260 with a single server node and 60 drives. What are the results?

S3260 vs. C240 M4 LFF


There are a lot of ways to configure CPU, memory, and IO, so the CapEx and power savings vary depending upon the configuration. The space, cabling, and management savings remain the same.
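The capacity and rack-space arithmetic behind this comparison is easy to sanity-check with a quick back-of-envelope script, using the drive counts and capacities given in the post:

```python
# Back-of-envelope check of the consolidation example.
DRIVE_TB = 10  # LFF drives of up to 10TB

# Five C240 M4 LFF servers: 12 drives each, 2RU each
c240_count, c240_drives, c240_ru = 5, 12, 2
c240_capacity = c240_count * c240_drives * DRIVE_TB  # raw TB
c240_space = c240_count * c240_ru                    # total RU

# One S3260 with a single server node: 60 drives in 4RU
s3260_capacity = 60 * DRIVE_TB
s3260_ru = 4

print(c240_capacity, s3260_capacity)  # 600 600 – same raw capacity
print(c240_space - s3260_ru)          # 6 RU of rack space saved
```

Same 600TB raw either way, but the S3260 delivers it in 4RU instead of 10RU – before even counting the reduction in cabling and managed endpoints.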

If you need more compute power in your S3260, you can sacrifice four hard drives and replace them with a second compute node.

The best part of the S3260 is that it is managed with UCS Manager right alongside B-Series blade servers and C-Series rack servers: one tool, one set of processes and procedures to manage your environment, regardless of form factor.

Hopefully I’ve piqued your curiosity enough to reach out to your Cisco Account Team or Partner to learn more about the S3260 Storage Server.
https://www.youtube.com/watch?v=dYMknAsXgH4&feature=youtu.be

Authors

Bill Shields

Senior Product Manager

UCS Solutions Product Management


For the past year Americans have read about polling data almost daily, so the casual observer may be forgiven for thinking there are mountains and mountains of data being collected to help make sense of this year’s unusual election. The truth is, though, that polling data is not big data, either in volume, velocity, or variety – the three ways in which data can be “big.” It might be useful to do a few back-of-the-envelope calculations to show just how tiny this data is.

The polls listed in the HuffPost Pollster (the database used by many election prognosticators) amount to only about 2.5 gigabytes of data, by our estimates. (The Pollster database lists 372 polls with an average of 40 questions posed to an average 3,000 respondents per poll, and the data from each respondent averages a little over 2KB.) Additional polls outside the Pollster database and exit polling data on election day do not add more than a few gigabytes more, so the total amount of unique polling data gathered for this election cycle is likely under 5 gigabytes, well under the conventional “big data” threshold of 100 terabytes.

An impressive figure, this isn’t. Most of us have more than 5 gigabytes of data stored on our phones. But polling data is of higher value to the world than the hundreds of family photos and videos most of us are carrying around with us. Polling data can move markets and impact international relations. Its value and importance are high enough that it is likely to have been downloaded and replicated many thousands of times.

Replicated how many times? Perhaps as many as 10,000. We can assume that most of the 2,500 colleges and universities in the US are tracking polling data during this election, and we can assume a few thousand international observers as well. Add the number of media organizations, large financial organizations, and other interested groups (about 500 of them) and we’re probably looking at the 5 gigabytes having been stored 10,000 times, amounting to 50 terabytes. In addition, some groups run election simulations based on the models built with polling data, and this may take up significant space. Simulated election results would likely amount to 8 gigabytes per simulation, with several hundred simulations run per study, amounting to a few terabytes per study. Twenty-five such studies would add 50 terabytes to our 50 terabytes from polling data, for a grand total of 100 terabytes. Even at 100 terabytes, this still does not make polling data big data, because the threshold applies to a single instance of the data, not to all instances stored in many different locations.
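These back-of-envelope estimates are easy to reproduce. The script below simply restates the post's own assumptions as arithmetic:

```python
# Reproducing the post's back-of-envelope estimates.
polls, respondents, kb_per_respondent = 372, 3000, 2.2
poll_gb = polls * respondents * kb_per_respondent / 1e6  # unique polling data
print(round(poll_gb, 1))  # 2.5

unique_gb = 5        # rounding up for polls outside the Pollster database
copies = 10_000      # colleges, media, financial firms, observers
replicated_tb = unique_gb * copies / 1000  # 50 TB of stored copies

simulation_tb = 25 * 2  # 25 studies at roughly 2 TB each
total_tb = replicated_tb + simulation_tb
print(total_tb)  # 100.0

dc_total_tb = 171e6  # 171 exabytes stored in data centers globally
share_pct = total_tb / dc_total_tb * 100
print(f"{share_pct:.5f}%")  # 0.00006%
```

Even counting every copy and every simulation, the whole election-forecasting enterprise is a rounding error against global data-center storage.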


If the total of 100 terabytes were all stored in data centers (and it’s not – much of it is simply stored on laptops), how would that compare to the total data volumes found in data centers? Our upcoming Global Cloud Index, to be released November 10, will contain data on the storage capacity of, and data stored in, data centers. Data stored in data centers currently totals 171 exabytes globally, or 171,000,000 terabytes, which means that data associated with US election polling and forecasting represents less than a measly 0.0001% of the total data stored in data centers around the world. The smallness of polling data is a nice illustration that data need not be “big” to be important.

What makes up the bulk of the data stored? Of overall data stored in data centers, 32 percent is associated with web and cloud services (AWS, Google Cloud, Dropbox, YouTube, Google Search), 15 percent is data stored by government, and the manufacturing, healthcare, and transportation verticals account for 9, 8, and 7 percent, respectively. Basic science logs an impressive 5 percent (or over 8 exabytes), thanks to the large amounts of data created by bioinformatics and experiments such as CERN’s Large Hadron Collider.

Want to learn more about cloud, data, and the resulting traffic? Stay tuned on SP360 for far more detail in our upcoming Global Cloud Index, released tomorrow. You may also register for the Global Cloud Index forecast update presentation on November 15, 2016 (Americas and EMEAR) or November 29, 2016 (APJ).

Join our conversation on Twitter through #CiscoGCI.

Authors

Arielle Sumits

Senior Analyst

Service Provider Marketing


The Digital Hospital Design group (Health Informatics Society of Australia) held an expanded roundtable discussion as part of the HIC 2016 conference. The focus of the discussion was on three large hospital innovation projects in Australia and the common lessons that could be learned about the drivers of innovation at these facilities.

The three projects selected were:

  • The construction of Box Hill Hospital, a new 600-bed facility in Melbourne
  • The construction of Bendigo Hospital, a new 400-bed regional Victorian facility
  • The EMR roll-out at Princess Alexandra Hospital in Queensland, an existing 1,000-bed facility in Brisbane.

Each of these sites represented a very different environment for innovation. Seven lessons were distilled from the discussions:

  • The importance of a transformational mindset
  • Thinking in terms of a clinical change and not an ICT project
  • Patient safety needs to take primacy
  • Simulation of workflows de-risks “go live”
  • Clear benefits need to underpin a successful business case
  • You must get the underlying infrastructure right
  • Creative communications are important in getting through to the clinicians

From these discussions, three issues emerged as prerequisites for success:

  1. Major investments in, and a sophisticated approach to, change management are critical. Clinical leadership, not just clinical buy-in, is a prerequisite for success, particularly in large-scale implementations.
  2. Benefits need to be clearly understood if they are to be realized. While there will generally be unintended consequences, a clear view of the intended benefits needs to be established early so that their realisation can be planned for.
  3. The underlying infrastructure needs to be robust and scalable. Gaps in the infrastructure platform are a common source of failure for initial implementations.

The issue of how to drive innovation in our healthcare system is critical for its future sustainability. These discussions further emphasized the importance of the two balancing forces of innovation: the technology, and the people who are enabled by it. Engaging, educating and enthusing hospital staff is a critical element of what needs to be a planned and explicit innovation process.

Further details on the discussion can be found in the paper “HIC Digital Hospital Design and Implementation: Good Innovation Practice in Australian Healthcare”.

Authors

Brendan Lovelock

Health Practice Lead

Cisco Australia


Today, Microsoft has released their monthly set of security bulletins designed to address security vulnerabilities within their products. For a detailed explanation of each of the categories listed below, please go to https://technet.microsoft.com/en-us/security/gg309177.aspx.

This month’s release is packed full of goodies, but you won’t want to wait until Thanksgiving dinner to review them, as there are 14 unique bulletins addressing multiple vulnerabilities.


Authors

Talos Group

Talos Security Intelligence & Research Group


#CiscoChampion Radio is a podcast series by Cisco Champions as technologists. Today we’re discussing new routers and awesomeness.

Get the Podcast

  • Listen to this episode
  • Download this episode (right-click on the episode’s download button)
  • View this episode in iTunes

Cisco Guest
Dax Choksi (@daxesh_choksi), Product Manager, Enterprise Routing

Cisco Champion Hosts
Enda Cahill (@endacahill10), Technical Director
Eric Perkins (@perk_zilla), Data Center Solutions Samurai
Tim Miller (@broadcaststorm), Network Engineer

Moderators
Lauren Friedman (@lauren)

Continue reading “#CiscoChampion Radio, S3|Ep. 29: New Routers and Awesomeness”


Holidays seem to come earlier every year in the U.S.: one could see Christmas decorations up inside major retailers even before Halloween. Very soon, a series of American and ethnic holidays will be upon us, and ’tis the season of giving. This holiday season, as you search for a gift that will keep on giving back to your organization, check out a new member of our “Swiss Army Knife” Cisco ISR 4000 collection: the ISR 4221.

Not Just Any Swiss Army Knife

Often described by customers as the “Swiss Army Knife” of branch networking, the Cisco ISR 4000 Series is praised for its versatility and adaptability, all in one box – very much like the pocket utility knife. Its performance, scale and rich feature sets are best in class. For these reasons, the Cisco ISR 4000 has also earned the title of Rolls-Royce of all-in-one branch networking – see Three-time Award Winner. The ISR 4221 is no different: small, but mighty.

A Gift that Keeps on Giving

What’s new? The ISR 4221 is mightier than its ISR G2 counterparts, the ISR 1921 and 1941. It is a desktop-style, industrially designed 1RU box priced cost-effectively for mid-market customers.

ISR 4221 Key Highlights:

  1. It meets all digital network requirements – see Figure 1


Figure 1: Benefits of Migrating to ISR 4000. Learn more.

  2. With respect to performance, the ISR 4221 kicks it up a notch, with throughput starting at 35 Mbps and upgradeable to 75 Mbps.
  3. Despite its 1RU form factor, this economical “Swiss Army Knife” offers unmatched value in its class, for a few reasons:

  a. It has multi-service support for SD-WAN and is programmable and automated with the APIC-EM controller and IWAN app for consistent, business-driven, policy-based operations across network domains.
  b. It is an application-aware platform complete with intelligent path control (PfR), application visibility (NBAR2) and network contention control (H/QoS).
  c. It comes with Cisco IOS built-in security features for complete branch threat defense, such as Zone-Based Firewall, FirePOWER Threat Defense, Network Address Translation (NAT), and IPsec VPN.
  d. It is a standards-based Linux virtualization platform. With Linux Containers (LXC), a signed network service like Snort IPS can be spun up as a virtual machine at any time.
  e. It is equipped with up to 2×8 integrated switch ports – a true all-in-one platform – for pop-up or micro branches where the average floor space is as little as 350 square feet.
  f. It supports IPv6 and legacy WAN connectivity, such as 3G/4G Cat4 LTE, for IoT use cases such as ATMs/kiosks or industrial environments without dedicated MPLS transport.

For in-depth details, check out the ISR 4000 Model Comparison page.

Priced competitively, the ISR 4221 is orderable beginning mid-November and will ship with the latest Cisco IOS-XE release. Learn more via the following resources:

  • Tune in to Cisco Champion Radio to hear Dax Choksi and Lauren Friedman discuss key highlights about the ISR 4221.
  • Watch a replay of Cisco Live Cancun new product introduction PSO session (PSOCRS-2221) with Shankarnarayan Dharmarajan and also see a live demo of the new WebUI.
  • Cisco.com – for your perusal, view a collection of collaterals about ISR 4221: At-a-Glance, Data Sheet, and FAQ

Authors

Anna Duong

Products & Solutions Marketing

Enterprise Network and Cloud


With the political election season finally drawing to a close, one silver lining has been the renewed interest in, and focus on, the country’s manufacturing sector. To truly grasp the impact of this industry on the U.S. economy, consider these insights from the National Institute of Standards and Technology (NIST):

Every $1 spent in manufacturing creates $1.40 for the U.S. economy, and U.S. manufacturing on its own would rank as the 9th largest economy globally.

To make our manufacturing sector as competitive as possible, we need to address the skills and knowledge gap, and emphasize the need for more flexible training and e-learning options. In particular, training around industrial security will be key in the new economy, which includes protecting intellectual property and keeping factories humming without fear of cyber and physical attacks.


For those interested in taking their network security knowledge a step further, we’re hosting a webinar on November 17: Industrial Security: Understanding How IT and OT Meet at the Firewall. This is powered by Industrial IP Advantage, our e-learning coalition with our partners Rockwell Automation and Panduit.

Robert Albach, one of my colleagues at Cisco, will present on best practices for implementing firewalls across a converged, IoT-ready network while addressing the security priorities for both IT and OT.

I truly believe that successful convergence of information technology and operational technology can bring important business benefits to your company’s strategy and competitiveness. When both functions are better aligned, you will have a more disciplined, multi-pronged approach to addressing security, and your operations can better focus on meeting company strategies.

Register for the webinar here. We hope you can join us!


Authors

Douglas Bellin

Global Lead, Industries

Manufacturing and Energy