Cisco Blogs



IoT, from Cloud to Fog Computing

Introduction

Billions of devices are now connected to the Internet. Recent advances in electronics and telecommunications have produced a wide range of powerful devices with communication and networking capabilities, and industries have adopted these technologies into their daily business to increase efficiency. Beyond the industrial sector, areas such as assisted living and public services also have a large demand for information and communication technology. This has created the need for a new paradigm in M2M communication that connects “Things” to the global Internet. This paradigm is known as the Internet of Things (IoT).

IoT is the network of physical objects, or “Things,” embedded with electronics, software, sensors, and connectivity that enables them to create value and deliver services by exchanging data with the manufacturer, operator, and/or other connected devices through advanced communication protocols, without human intervention. IoT technology has evolved along with information and communication technology and social infrastructure, and we need to understand how it will continue to evolve in the future.

Connecting billions or even trillions of devices to the Internet enables a wide range of applications for industry, government, the public, and more. For example, an Intelligent Transport System (ITS) application monitors city traffic through wireless sensors or video surveillance and, with the help of Global Positioning System (GPS) transceivers, sends that information to users on their mobile devices so they can avoid traffic jams and prevent accidents. This is just one example among many others, such as smart home and e-Health applications. Massive amounts of data are generated by billions of connected devices and transferred throughout the network to the Internet.

Here is where integrating IoT with the cloud pays off. One obvious benefit of this integration is the flexibility users gain in accessing the services offered by the cloud provider through a web interface. It also gives M2M service providers the flexibility to offer their services to more customers. So, what is cloud computing?

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It lets clients rent storage infrastructure, computing services, business processes, and entire applications, simplifying their computing jobs by renting resources and services rather than owning them.

Cloud systems are located within the Internet, a large heterogeneous network of varied speeds, technologies, topologies, and media with no central control. Because of this non-homogeneous, loosely controlled nature, many issues, especially quality-of-service issues, remain unresolved. One issue that severely affects quality of service is network latency: real-time applications with which users directly interact are badly affected by the delay and delay jitter that network latency introduces.

The other major issue confronting cloud computing is security and privacy. Since cloud systems are located within the Internet, user requests, data transmissions, and system responses must traverse a large number of intermediate networks, depending on the distance between users and systems. When customer data sits in a public cloud, there is a risk of its integrity and confidentiality being compromised. The deeper the data resides inside the Internet, the higher the risk, as the data must travel a long distance between the user’s computer and the cloud system, even if it is encrypted. The availability of cloud systems can likewise be attacked in various ways. Cloud systems thus face a variety of security threats due to the very nature of their implementation within the Internet, coupled with their location independence.

The cloud computing model is an efficient alternative to owning and managing private data centers, and it frees enterprises and customers from many operational details. That abstraction, however, creates a problem for latency-sensitive applications, which need nearby nodes to meet their delay requirements. IoT also requires mobility support and wide geo-distribution, in addition to location awareness and low latency. A new platform is therefore needed to meet all these requirements: one that delivers a new set of web applications and services to end users by extending the cloud platform. This new platform is called Fog Computing, also known as Fogging.

What is Fog Computing?

The term “Fog Computing” was introduced by Cisco Systems as a new model to ease wireless data transfer to distributed devices in the Internet of Things (IoT) network paradigm. Cisco defines Fog Computing as a paradigm that extends Cloud computing and services to the edge of the network. Like the Cloud, Fog provides data, compute, storage, and application services to end users. The distinguishing Fog characteristics are its proximity to end users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even on end devices such as set-top boxes or access points. By doing so, Fog reduces service latency and improves QoS, resulting in a superior user experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution, the Fog paradigm is well positioned for real-time big data and real-time analytics. Fog supports densely distributed data collection points, adding a fourth axis to the often-mentioned Big Data dimensions (volume, variety, and velocity). (Read more: http://cisco.re/1KUnXCX)

Why Fogging?

The Fog model provides benefits in advertising, computing, entertainment, and other applications, and it is well positioned for data analytics and distributed data collection. Edge services hosted on devices such as set-top boxes and access points are easy to deploy with fogging. It improves QoS and reduces latency. The main task of fogging is to position information near the user, at the network edge.

Fogging Advantages:

  1. A significant reduction in data movement across the network, resulting in reduced congestion, cost, and latency; elimination of the bottlenecks that centralized computing systems create; improved security, since encrypted data stays closer to the end user and is less exposed to hostile elements; and improved scalability arising from virtualized systems.
  2. Reduces dependence on the core computing environment, removing a major bottleneck and a single point of failure.
  3. Improves security, as data is encrypted as it moves toward the network edge.
  4. Edge computing, in addition to providing sub-second response to end users, also provides high levels of scalability, reliability, and fault tolerance.
  5. Consumes less bandwidth.
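The latency advantage listed above can be sketched with a toy model. The hop counts and per-hop delays below are illustrative assumptions, not measurements; the point is only that round-trip time grows with the number of network hops between the user and the compute resource, which is why a nearby fog node beats a distant cloud data center for real-time work:

```python
import random

random.seed(42)

# Toy model (assumed numbers, not measurements): each network hop adds a
# fixed delay plus some random jitter; a round trip crosses every hop twice.
def round_trip_ms(hops: int, per_hop_ms: float = 2.0, jitter_ms: float = 1.0) -> float:
    return sum(per_hop_ms + random.uniform(0, jitter_ms) for _ in range(2 * hops))

# Assume a fog node 3 hops away vs. a cloud data center 15 hops away.
fog = [round_trip_ms(hops=3) for _ in range(1000)]
cloud = [round_trip_ms(hops=15) for _ in range(1000)]

print(f"fog   avg RTT: {sum(fog) / len(fog):.1f} ms")
print(f"cloud avg RTT: {sum(cloud) / len(cloud):.1f} ms")
```

Under these assumptions the fog round trip averages roughly one fifth of the cloud round trip, and the jitter spread shrinks proportionally as well.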

Fogging Disadvantages:

Fogging constrains the choice of technology platforms, web applications, and other services.

Cloud Computing vs Fog Computing

From Table 1 and Table 2, it can be seen that cloud computing has severe limitations with respect to the quality of service demanded by real-time applications that require almost immediate action by the server.

Table 1

Table 2

Please check the following video, where Michael Enescu, CTO of Open Source Initiatives, Cisco discusses the shift from cloud to fog computing and the Internet of Things.

IoT Applications and Fog Computing

See Table 3 to find out the vital role fogging plays in IoT.

Table 3

Conclusion

Fog computing outperforms cloud computing in meeting the demands of these emerging paradigms, but it cannot totally replace cloud computing, which will still be preferred for the high-end batch processing jobs that are common in the business world. We can therefore conclude that fog computing and cloud computing will complement each other, each with its own advantages and disadvantages. Edge computing plays a crucial role in the Internet of Things (IoT). Security, confidentiality, and system reliability in the fog computing platform remain open research topics that have yet to be explored. Fog computing will grow to support the emerging network paradigms that require fast processing with little delay and delay jitter, while cloud computing will continue to serve the business community’s high-end computing demands at lower cost, based on a utility pricing model.


A Summary of Cisco VXLAN Control Planes: Multicast, Unicast, MP-BGP EVPN

With the adoption of overlay networks as the standard deployment for multi-tenant networks, Layer 2 over Layer 3 protocols have become the favorites among network engineers. One of the Layer 2 over Layer 3 (or Layer 2 over UDP) protocols adopted by the industry is VXLAN. As with any other overlay network protocol, its scalability is tied to how well it can handle Broadcast, Unknown unicast, and Multicast (BUM) traffic. That is where the evolution of the VXLAN control plane comes into play.

The standard does not define a “standard” control plane for VXLAN. There are several drafts describing the use of different control planes. The most commonly used VXLAN control plane is multicast. It is implemented and supported by multiple vendors, and it is even natively supported in server operating systems such as Linux (in the kernel).

This post summarizes the three (3) control planes currently supported by some Cisco NX-OS/IOS-XR platforms. My focus is on the Nexus 7k, Nexus 9k, Nexus 1k, and CSR1000v.

Each control plane may have a series of caveats of its own, but those are not covered in this blog entry. Let’s start with some VXLAN definitions:

(1) VXLAN Tunnel Endpoint (VTEP): maps tenants’ end devices to VXLAN segments and performs VXLAN encapsulation/de-encapsulation.
(2) Virtual Network Identifier (VNI): identifies a VXLAN segment. It is a 24-bit field, theoretically giving us 2^24 = 16,777,216 segments (valid VNI values are 4096 to 16,777,215). Each segment can transport 802.1Q-encapsulated packets, theoretically giving us 2^12 = 4096 VLANs over a single VNI.
(3) Network Virtualization Endpoint or Network Virtualization Edge (NVE): the overlay interface configured on Cisco devices to define a VTEP.
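The 24-bit VNI arithmetic above can be checked with a short sketch. This packs the 8-byte VXLAN header laid out in RFC 7348 (one flags byte with the I bit set, three reserved bytes, a 3-byte VNI, one reserved byte); the function names are my own for illustration, not part of any Cisco or kernel tooling:

```python
import struct

VXLAN_I_FLAG = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags byte + 24 reserved bits; word 2: 24-bit VNI + 8 reserved bits.
    return struct.pack("!II", VXLAN_I_FLAG << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from the second word of the header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

print(2 ** 24)                                # 16777216 theoretical segments
print(unpack_vni(pack_vxlan_header(4096)))    # 4096, the first valid VNI
```

Note that the 4096-VLANs-per-VNI figure comes from the inner 802.1Q tag (a separate 12-bit field inside the encapsulated frame), not from anything in this outer header.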

VXLAN with Multicast Control Plane



Why Network Design in the Modern Era Has Kept Me from Retirement

In youth-oriented Silicon Valley, it’s risky to mention this, but I’ve been around for a long time. In fact, in theory I could retire! I already moved to a small town in the Pacific Northwest where the cost of living is low, and I could spend my days hiking in the mountains.

But actually I can’t retire. Why? The networking field is too interesting! In addition, modern networking, with its emphasis on design, applications, policies, and users, focuses on the same concepts that have interested me from the beginning. Not only that, but I firmly believe that with today’s network design tools, we are positioned to build networks that are faster, larger, and even more user-friendly than ever. How could I retire when that’s the case?

In the Beginning

I started my career as a software developer. This was long before agile software development became popular, but nonetheless there was a focus on agility and flexibility. The goal was to develop software that could be used in multiple ways to support a broad range of users. The focus was on user behavior, application modeling, systems analysis, and structured design.


Plan to Be Secure; Secure to Your Plan

The routine goes something like this. First, a breach of security occurs somewhere in the enterprise; it could be something as small as a single infected computer or as large as a massive data loss. That seems like a wide range of events, but the reaction in an enterprise is often the same. The IT executives hold a meeting to determine fault, and then the analysts and engineers are given the task of making sure that particular incident never happens again. The analysts and engineers reply with budget requests for new software and hardware from their favorite vendors. Unfortunately, the end result is generally that money is spent and security is only moderately improved, if at all.

In the midst of reacting, everyone forgets that technology doesn’t configure itself and that the weakest link is the people. Instead of ramming in the latest and greatest technology, we should be leading our companies to review, create (if necessary), and rewrite our security policies. Without a policy, security tools are like unguided missiles that we hope hit their target.


How Cisco Certifications Landed Me the Coolest Job in South Florida; Literally

When I started with my first Cisco router back in 1995, I never would have imagined I would someday be the technology lead for an ice arena of an NHL team. I also would never have predicted the impact that having a Cisco certification would have on being recruited to that position.

Most of my career up until now was spent working in the small and medium business space, primarily in the ISP and telecom space, working with voice and networks, with some software and infrastructure design in the middle. Cisco was a large part of everything I did, from routing and switching to voice over Frame Relay and then voice over IP, with a large emphasis on small-bandwidth efficiency and signalling. I’m even the lead inventor on an issued patent relating to intelligent rerouting of fax traffic on VoIP systems.

I never thought much about certifications. I have a BA in Economics, which has served me well as a business owner, and I largely found my work via word of mouth. There were not a lot of people who understood VoIP payload and signalling tuning, from the MC3810 up through the AS5300/AS5800 series. This was primarily international carrier / wholesale VoIP traffic and engineering.

As VoIP became more of a commodity and the cost of equipment came down, this market dried up. In hindsight, I should have paid more attention when Cisco exited that market, which proved to be a good decision on their part. As my clients and partners moved on to other ventures, I was forced to begin prospecting.

Suddenly, here I was with 30 years since I’d written my first program and roughly 20 years of internet and Cisco experience, and I was struggling. I had a lot of experience, but didn’t have a portfolio of work that included any big names, mostly small businesses that no one had heard of. I needed a way to give new clients the confidence to call me. I knew that once I started the conversation, I could close the deal. Before that, however, I needed to actually get that call or email.
