I was fascinated recently when I read about the first-ever detection of gravitational waves. “With this discovery,” the article states, “invisible objects in the universe may soon become visible.” That’s the exact analogy I use when I describe Cisco’s implementation of Fog Computing, or IOx as Cisco refers to it. IOx opens up a new level of visibility in networking infrastructure and decision-making by pushing application logic and data analytics to the network edge. And, just as LIGO (the instrument used to detect the gravitational waves) changes how we observe gravitational forces, IOx changes how we process data.
Why do we need intelligence at the edge? Cisco forecasts that there will be 50 billion connected “things” or “devices” in the Internet of Things world by 2020. Most of these devices are at the edge of the network and can generate zettabytes of data. So a traditional two-layer IoT architecture that connects the devices and things directly to the cloud, where the IoT application logic resides, presents two big challenges. First, these applications have strict latency requirements; the devices, often geographically distributed, are connected through low-bandwidth uplinks, and moving large volumes of data to and from the cloud adversely impacts latency. Second, sending and receiving that much data to and from the cloud is costly.
It is, therefore, better to apply a combination of cloud computing and edge computing to industrial IoT solutions. The application logic residing in the data center or cloud can focus on long-term data processing, central aggregation and historical analysis of data. The application logic residing at the edge, closer to the IoT devices, can then focus on performing analytics at the edge itself, making real-time decisions and optimizing bandwidth by intelligently reducing the data before it is sent to the cloud. The compute capability inherent in the edge network devices can itself be used for this edge processing and analytics.
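As an illustrative sketch (not Cisco code), the edge-side data reduction described above might look like the following, assuming a hypothetical edge node that forwards only anomalies and a compact summary upstream:

```python
from statistics import mean

def edge_reduce(readings, threshold):
    """Reduce raw sensor readings at the edge: keep only out-of-range
    values as alerts, and summarize the rest for the cloud."""
    alerts = [r for r in readings if r > threshold]
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }
    return alerts, summary

# A batch of pressure samples from a field sensor (hypothetical values):
samples = [101.2, 101.3, 150.9, 101.1, 100.8]
alerts, summary = edge_reduce(samples, threshold=120.0)
# Only the single out-of-range alert and a three-field summary need to
# cross the WAN, instead of every raw sample.
```

The exact filtering and summarization logic would of course be application-specific; the point is that the edge sends decisions and digests, not raw streams.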
In order to provide a comprehensive and customer-focused experience, the engineering team engaged with multiple customers spanning several industries and geographies and derived many forward-looking requirements, including ease of remote manageability, security, edge analytics, multi-tenancy and the need for a policy-based application life cycle. Cisco understands that the IoT market’s growth is predicated on enabling verticals to develop their own applications in their domains that ultimately help end customers. With IOx, we provide a software platform that helps organizations develop, manage and deploy applications to the edge. The platform supports device orchestration and analytics on the edge devices. It hides network heterogeneity and complexity from developers, essentially eliminating the mismatch between a developer’s preferred language and the IOx development environment by providing a web-like development experience.
Just as the recently detected gravitational waves are opening up the cosmos in a whole new way, the most exciting aspect of data processing at the network edge is that it opens up a world of new possibilities that simply couldn’t be achieved if all of the raw data had to make a trip to the cloud. The promise of IOx is resulting in a new class of edge applications being built for use cases as diverse as city governance, retail, heavy industry, mining and transportation, to name a few.
The newly announced Cisco Digital Network Architecture, or DNA as we like to refer to it, provides an open, extensible and software-centric platform for all things digital, including IoT. A central element of Cisco DNA is Evolved Cisco IOS XE, which among other things supports application hosting and edge computing, so that IOx-based micro-services can be hosted virtually anywhere in the network, on virtually any platform. IOx accelerates innovation in IoT by enabling BYOI (Bring Your Own Interface) and BYOA (Bring Your Own Application), creating the ability for apps to become “network aware”. Similar to the notion of big data platforms, which brought compute to storage, processing IoT data at the edge is about bringing compute to the network. The edge is pushing the limits of networking excellence and we invite you to join our journey.
#CiscoDNA #digitaltransformation #IOx #IOT
Excellent article highlighting the critical need for distributed FOG computing closer to the source of data to make IoT data analytics more effective…
Yes. We need an intelligent edge.
IoT and IOx can bring amazing use cases to life. We see examples of local ‘intelligence’ which in turn connects to cloud ‘intelligence’ in some current solutions:
Smart Licensing Satellite – Which provides local intelligence for licensing and in turn connects to Cisco Cloud.
Plug-and-Play – Which has a local service for day-zero deployment automation and in turn connects to a Cisco Cloud Service.
We could evolve these and other solutions to create ‘Smart Connected Devices’ by leveraging the ideas from Fog Computing.
Agree Hari. We have many such interesting use cases.
Great article. Enabling data analytics in the IWAN solution will definitely take IWAN to the next level of intelligence.
Before putting data onto the WAN link/cloud, IWAN makes real-time decisions based on the intelligence collected, and only relevant data goes onto the WAN/cloud. With Cisco WAAS added on for WAN optimization of the data going over the WAN, we will be providing another layer of optimization, and the end result will be a better user experience and application performance.
Cisco WAAS + IWAN + IOx = Cisco Rocks
Rohan – when we combine the IWAN and IOx use cases we can address many interesting use cases for our customers.
Excellent Article !
Thanks
Great read! Loved it. It becomes a case of small-data analysis, which in turn improves response time to events. Possibly, we could even respond to some critical or known events locally.
Yes, responding locally is the right way to think. All possible by having an intelligent edge.
Good blog. IOx will surely unleash the next wave of innovations, enabling our customers and many different use cases. It aligns well with the overall IoT market, which is a rising market.
Yes, many use cases. With an open ecosystem we can enable useful applications to be built on top.
Like the analogy used. More intelligence at edge == more Cisco relevance and value add at edge == more business growth opportunities. WIN-WIN-WIN situation!
yep.
Nicely captured the essence of Edge computing and Cisco’s vision.
Thanks. The power of #CiscoDNA
Great blog and happy to see the results of North Star architecture with Evolved Cisco IOS XE and its value for customers.
The power of #CiscoDNA
Great write-up, compact and precise. Keep them coming :).
Nice blog on Fog and Cisco DNA from Anand. I think intelligence at the edge is going to help drive the evolution to responsive networks as part of Cisco DNA. I’m very happy to see us moving in this direction. Taking advantage of the visibility and context that is available at the edge of the network should let us produce better network ROI and a better customer experience. I think this is just the start.
Yes Peter. As data continues to grow faster than bandwidth, we need an intelligent edge for distributed processing.
So, more compute and storage at the edge? Makes sense!
Playing the devil’s advocate here 🙂
IoT data sources typically tend to be low power devices which are not really expected to transmit high throughput data. Further the trend in analytics is to operate on large volumes of data for the algorithms to perform better data analysis. Filtering out some data at the network edge may adversely affect these systems’ ability to perform deep analysis of the data to extract more interesting patterns in the data.
How many of the IoT applications today depend on latency sensitive data? I assume we are not talking about video cameras uploading videos, which cannot be hosted at the network edge due to storage constraints. If it is sensor data, is there a real requirement for handling msec level latencies to mandate application footprint at the network edge? Note that these applications will be hosted in an environment that may not be optimal for their deployment and therefore latencies could be introduced at that hop as well, negating any benefit of deploying something close to the sensor.
I ask these questions not as a naysayer, but hoping to get answers with the belief that these would have been analyzed in depth as we are embarking on the DNA journey.
Srikar – Thank you for the comments.
It’s important to understand that data is growing 10x faster than bandwidth. It is not possible to send all the data to the cloud. Secondly, the majority of the data generated is not of much value if not acted upon within a given timeframe.
So, let’s take a simple example. A Cisco 819 router is monitoring pressure via a sensor in a remote oil and gas location. The sensor periodically sends the pressure reading, which can be processed locally; if it exceeds the specified range, it is sent over the WAN link for processing in the cloud/DC. This may result in an action like shutting the valve.
Agree with you that data is growing at 10x the bandwidth and we will only see the proliferation of IoT devices from here on. However, is the trend in the direction of an individual sensor sending a higher volume of data per second, or is it in the direction of relatively intelligent devices that can themselves be programmed to filter and upload sampled data to a cloud analytics application?
In the example you mention, the application hosted on the Cisco 819 is 3rd party and may itself introduce latency that may negate the RTT delay should the application be deployed in the cloud. Further, why are we assuming that sensors themselves will not evolve to perform minimal processing so that the sampling of data happens right there, instead of in an application running on the router? Thanks.
The use case determines what to monitor in the sensor, and this will not always be the same. This needs to be programmable and can change depending on the weather/situation. There is a fundamental difference between the requirements for traditional client/server computing in the web world and the needs of the IoT world.
Regarding your comment on latency and RTT – distributed processing enables this to happen in real time. Do note that in many cases the uplink from a remote location is very, very slow.
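The threshold logic in the oil-and-gas example discussed above can be sketched roughly as follows (the thresholds, action names and return fields are all hypothetical, for illustration only):

```python
def process_pressure(reading, low=80.0, high=120.0):
    """Hypothetical edge logic for the oil-and-gas example: act
    locally and forward to the cloud only when out of range."""
    if reading < low or reading > high:
        # Local real-time action, then escalate over the (slow) WAN uplink.
        return {"action": "close_valve", "forward_to_cloud": True}
    # In-range readings stay local; nothing crosses the WAN.
    return {"action": None, "forward_to_cloud": False}
```

Because the decision is taken on the router itself, the valve can be closed without waiting on a round trip over a slow uplink; the cloud is notified after the fact.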
Nice article. It’s high time for fog computing to turn the corner, with compute now easily available in edge devices. It also comes with the added advantage of intelligent traffic optimization.
This will be available across all edge nodes.
Great article! Network automation, virtualization and cloud services enabled by DNA provide a powerful foundation for IoT.
Thanks
Great article, Anand. Looks like we are moving from core to edge computing via limited success with the cloud. Hope edge computing will be more than a buzzword.
I am sure Cisco DNA will have many use cases with commercialization or monetization in mind. Good luck!
Bringing IoT data processing and localized decision making to the edge makes sense from many angles, like location awareness, data pruning, reduced latency, and leveraging converged infrastructure intelligence (for wired and wireless), to name a few.
But it will be interesting to see how Cisco can help to standardize and simplify the space in terms of protocol, security, management etc. which are some of the key challenges.
This is built on an open ecosystem allowing 3rd-party apps.
Fog opens up so many new possibilities: we could perhaps put IoT sensors on rooftops to predict the kind of roofs we need on our homes, build sensors for earthquake-prone areas to collect data, build IoT for areas in the Himalayas where we lose hundreds of precious lives every year, and improve human safety in oil and natural gas fields and coal mines by understanding the performance of shafts. Your article was very thought-provoking, with truly endless possibilities. I think we can give back to the community by improving human lives with Fog.
Yes Balaji, agree. It’s a huge market and we will find many cases to improve the world.
Nicely written Anand – the comparison to gravitational waves is spot on. This makes the network front and centre of the IoT architecture. Fog computing will be a key component of Cisco’s DNA, and will change the way applications are built and deployed moving forward
Verizon is planning to move away from the wired business. By 2020 all the devices on Verizon’s network will be wireless (mostly 4G and 5G). Wireless RF bandwidth is always a costly resource to deal with. Fog computing with intelligence at the edge will be a must to sustain the large amount of wireless deployment. Very nice blog, Anand, thanks.
Agree – as we face a data tsunami, having an intelligent edge will help.
Compute at the edge could be a real enabler for machine learning applications. Individual data sources no longer burden a central entity. Instead, immediate compute capability turns each one into a learning source. The learning created at one place can then be exported to others.
Hi Anand,
Wholeheartedly agree that with the exponential growth in IoT and edge devices, distributed, low-power, low-cost compute and storage servers serving as intermediate aggregation points will be required. These systems would need to operate intelligently on their own, with the ability to self-upgrade and self-repair (imagine them running in a very remote place). The opportunity is great and I’m glad Cisco is thinking about this.
At NodePrime, we also develop intelligent distributed data stores which can dedupe, aggregate and filter metrics, inventory blobs and logs, and process them locally with very low entry and operational cost. Each can auto-discover other data stores in the cluster, self-upgrade and scale to thousands of devices and millions of metrics per second per data store.
Great article!
Tom Vo
Nice write-up, Anand.
I do have a question though: what does the network infrastructure have to do here, other than allow third-party software to run on it? Considering the fact that we are looking at physical devices, there isn’t a lot of ‘network orientation’ – if I may call it that – necessary here.
Probably a few real-world examples might clarify these things. It’s probably me, but even in the manufacturing example that is cited, I cannot see any network-specific function that cannot be done by an application.
You are going to have a network device there for many reasons – downstream wireless connectivity, upstream routing over Ethernet/3G/4G – and there is also a level of protocol translation that may come into play, from old protocols over serial to IP.
Great article on Fog Computing. Analytics is the future of networking, especially as we align more towards cloud architecture. With this, the majority of application-level issues and use cases will be addressed better, reducing complexity. It also helps us explore many new use cases and fields (e.g., IoT smart power-flow analytics for reducing energy consumption).
Great article – looking forward to being part of this exciting journey!
Great Article Anand. It clears the fog around cloud and fog computing 🙂
The combination of cloud and fog makes very effective use of both technologies and would make IoT very cost effective. It could also suit many other applications and help avoid overloading the cloud and the network.
Thanks for the informative delineation of the distributed computing ecosystem. I have a few queries about the projected adaptation of cloud and fog computing worldwide.
Do you think the projection of the anticipated scale of the “Internet of Things” to the tune of 50 billion devices by 2020 is more than a little ambitious considering:
1) The adoption of IPv6 is so slow. A full twenty years after the idea was first mooted in RFC 1883, IPv6 adoption worldwide has reached just over 10%. Of these, only North America has over 25% of connected users using native IPv6.
We would need all devices to support native IPv6 in order to support this scale of IP enabled devices. So possibly we should also focus our efforts on developing products and utilities that make native IPv6 administration as automated and painless as possible. [1]
2) In order to ensure end-to-end security, practically all network elements would need to support encryption. Would low power, low cost and low memory/CPU end devices have the ability for encryption using TLS/DTLS?
3) This large number of devices would cause a lot more interference on the limited 2.4 GHz spectrum that is freely available, unless more spectrum is made available in the 5 GHz or even in the 28-39 GHz and higher bands (which have serious line-of-sight limitations).
4) While we provide the ecosystem, the end user devices as well as the cloud analytics apps are third-party. How does Cisco find a foothold in the applications space that would be the real revenue generator? This is considering that data-centers, cloud based computing and storage etc. are being heavily commoditized with the likes of Amazon, Google, Rackspace, Microsoft, Facebook etc. driving the prices of their hosted cloud services lower and lower everyday.
That being said, there is definitely a lot of scope for harnessing the power of edge devices for stratified data analytics. The Mobility Express setup, for example, could be used as a distributed computing setup. Since load balancing between APs is dependent on the physical location of APs, there would be some APs that would not have any clients attached and could not be used for load balancing due to physical distance from clients. Even so, the setup could be used to distribute RRM calculations and analytics for loaded APs (with large numbers of active clients) amongst a set of unloaded APs (without many/any active clients), calculations being done in parallel at different idle APs and the results aggregated at the controller AP.
[1] https://www.google.com/intl/en/ipv6/statistics.html
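As a rough sketch of the fan-out/aggregate idea in the Mobility Express suggestion above, the idle APs can be modeled as local workers and the controller AP as the aggregator (all names, the thread-pool stand-in and the toy "RRM calculation" are hypothetical illustrations, not actual AP code):

```python
from concurrent.futures import ThreadPoolExecutor

def rrm_calc(ap_samples):
    """Stand-in for an RRM calculation for one loaded AP: pick the
    least-utilized channel from that AP's channel samples."""
    return min(ap_samples, key=lambda s: s["utilization"])["channel"]

def aggregate_at_controller(per_ap_samples, idle_ap_count=4):
    """Fan the per-AP calculations out to idle APs (modeled here as a
    thread pool) and gather the results at the controller AP."""
    with ThreadPoolExecutor(max_workers=idle_ap_count) as idle_aps:
        return list(idle_aps.map(rrm_calc, per_ap_samples))

# Channel samples for two loaded APs (hypothetical utilization figures):
samples = [
    [{"channel": 1, "utilization": 0.7}, {"channel": 6, "utilization": 0.2}],
    [{"channel": 11, "utilization": 0.1}, {"channel": 6, "utilization": 0.5}],
]
# aggregate_at_controller(samples) -> [6, 11]
```

The same pattern generalizes: any per-AP computation that is independent across APs can be distributed to idle neighbors and merged centrally.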
Siddhartha,
Thank you for your comment:
1) 50B new devices by 2020 will happen. The US is already out of IPv4 addresses; see this article:
http://arstechnica.com/information-technology/2015/07/us-exhausts-new-ipv4-addresses-waitlist-begins/
Many customers are planning to turn off IPv4.
2) Ensuring we have secure communication with endpoints is crucial. There are a variety of methods for this. Also keep in mind that these sensors may not always be “on”, to conserve power and battery life.
3) Our open ecosystem will allow 3rd parties to write apps as well.
Thanks
Anand
Great article – the power of IoT enabled by DNA!