Cisco Blogs



Wi-Fi & Taxes: Digging into the 802.11b Penalty

It’s that time of year again in the US – Tax Time! That time of year when we review the previous year’s bounty, calculate what’s due, and re-evaluate our strategies to see if we can keep more of what we worked for. Things change: rules, the economy, time to retirement. Before you know it, you find yourself working through alternatives and making some new decisions.

Anyway, as I was working through the schedules and rule sheets, my mind wandered and I started to think about Wi-Fi and the taxes associated with it. In my day job, I often play the role of forensic accountant. Like a tax accountant, I’m always looking for a way to get more, or to understand why there isn’t more already. So along those lines, let’s talk about a little-known tax that you may well be paying needlessly. I’m talking, of course, about the dreaded 802.11b Penalty.

Wi-Fi protocols like 802.11b are named for the IEEE task group that develops them. In the 2.4 GHz spectrum there are 802.11b, 802.11g, and 802.11n. Ratified by the IEEE in 1999, 802.11b was the first modern Wi-Fi protocol, and it allowed transmissions of 11 Mbps, a major jump forward from the 2 Mbps that was possible with the original 802.11 standard of 1997.


After 802.11b came 802.11a, and then 802.11g. Both of these protocols were a radical departure from the simpler 802.11b design and employed orthogonal frequency-division multiplexing (OFDM) modulation (now standard in every 802.11 protocol created since). OFDM allowed for Read More »
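To put a rough number on that penalty, here is a minimal back-of-the-envelope sketch in Python. The frame size and data rates are illustrative assumptions, and real airtime also includes preambles, ACKs, contention and the protection overhead that 802.11b clients impose on their OFDM neighbors; this shows only the raw payload-time gap.

```python
# Illustrative airtime comparison for a single 1500-byte frame.
# Real 802.11 airtime also includes preambles, ACKs, contention and
# protection overhead; this sketch shows only the raw payload time.

FRAME_BITS = 1500 * 8  # one typical Ethernet-sized payload

rates_mbps = {
    "802.11 (DSSS)": 2,
    "802.11b": 11,
    "802.11g (OFDM)": 54,
}

for name, rate in rates_mbps.items():
    airtime_us = FRAME_BITS / rate  # bits / (Mbit/s) == microseconds
    print(f"{name:15s} {rate:3d} Mbps -> {airtime_us:7.1f} µs per frame")
```

For the same payload, an 802.11b station occupies the medium roughly five times longer than an 802.11g station, and that airtime is a tax every faster client on the channel ends up paying.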


The Three Mega Trends in Cloud and IoT

A consequence of the Moore-Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for the smaller applications to come to it. Consider this: it took mankind over 2,000 years, through 2012, to produce 2 exabytes (2×10^18 bytes) of data; now we produce that much in a day! The rate will only go up from here. With data production far exceeding the capacity of the network, particularly at the Edge, there is only one way to cope, which I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT, or, as some say, Fog computing:

  1. Dramatic growth in applications specialized and optimized for analytics at the Edge: Big data is hard to move around (data gravity), and we cannot move it fast enough to the analytics, so we need to move the analytics to the data. This will cause dramatic growth in applications specialized and optimized for analytics at the edge. Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress except to make the devices smarter, safer and, of course, better connected.
  2. Dramatic growth in the computational complexity to ETL (extract, transform, load) essential data from the Edge to be data-warehoused at the Core: Currently, most open standards and open source efforts are buying us some time, helping us squeeze as much information as possible, in as little time as possible, through limited connection paths to billions of devices; soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 terabytes of data for an hour of flight (a back-of-the-envelope sketch of what that means for the network follows this list). Imagine the computational complexity we already have that boils that down to routing and maintenance decisions in such complex machines. Imagine the consequences of ignoring such capability, which can already be made available at rather trivial cost.
  3. The drive to instrument the data to be “open” rather than “closed”, with all the information we create, and all of its associated ownership and security concerns, addressed: Open Data challenges have already surfaced, and there comes a time when we realize that an Open Data interface, with guarantees about its availability and privacy, needs to be defined and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each). With the ever-growing amount of data at the Edge, the questions of who “owns” it and how access to it is “controlled” become ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should be no different from the very same rules that govern open source or open standards.
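To make the data-gravity arithmetic in item 2 concrete, here is a rough Python sketch. The link speeds are illustrative assumptions; only the 20-terabytes-per-hour jet engine figure comes from the text above.

```python
# Back-of-the-envelope data gravity: how long would it take to move one
# hour of jet-engine telemetry (~20 TB, per the text) over various links?
# The link speeds below are illustrative assumptions.

DATA_TB = 20
DATA_BITS = DATA_TB * 8e12  # 1 TB = 10^12 bytes

links_mbps = {
    "narrow uplink (10 Mbps)": 10,
    "good broadband (100 Mbps)": 100,
    "datacenter link (10 Gbps)": 10_000,
}

for name, mbps in links_mbps.items():
    hours = DATA_BITS / (mbps * 1e6) / 3600  # seconds -> hours
    print(f"{name:27s} -> {hours:8.1f} hours per flight-hour of data")
```

At anything short of datacenter-class links, the data is produced far faster than it can be shipped, which is exactly why the analytics must move to the data.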

Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and elsewhere, plus, of course, industry colleagues from all the popular commercial web farms of today. I was pleasantly surprised to see not just that the first two trends are already top of mind, but that the third has emerged and is actually recognized. We have just started to sense the importance of this third wave, with huge implications for Cloud computing. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for their outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project, and most importantly, having just wrapped up EclipseCon 2014 this week, I am very happy to see we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most of the Cloud turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…


Open Source is just the other side, the wild side!

March is a rather event-laden month for Open Source and Open Standards in networking: the 89th IETF, EclipseCon 2014, RSA 2014, the Open Networking Summit, the IEEE International Conference on Cloud (where I’ll be talking about the role of Open Source as we morph the Cloud down to Fog computing) and my favorite, the one and only Open Source Think Tank, where this year we dive into the not-so-small world (there is plenty of room at the bottom!) of machine-to-machine (M2M) and Open Source, which some call the Internet of Everything.

There is a lot more to March Madness, of course. In the case of Open Source, it is a good time to celebrate the first anniversary of “Meet Me on the Equinox”, the fleeting moment where daylight conquers the night: the day that project Daylight became OpenDaylight. As I reflect on how quickly it started and grew from the hearts and minds of folks more interested in writing code than talking about standards, I think about how much the Network, previously dominated, as it should be, by Open Standards, is now beginning to run on Open Source, as it should. We captured that dialog with our partners and friends at the Linux Foundation in this webcast, which I hope you’ll enjoy. I also hope you’ll join us this month in one of these neat places.

As Open Source has become dominant in just about everything (Virtualization, Cloud, Mobility, Security, Social Networking, Big Data, the Internet of Things, the Internet of Everything, you name it), we get asked how to get the balance right. How does one reconcile the rigidity of Open Standards with the fluidity of Open Source, particularly in the Network? There is only one answer: think of the Yang of Open Standards and the Yin of Open Source. They need each other; they cannot function without each other, particularly in the Network. Open Source is just the other side, the wild side!


HDX Blog Series #3: 802.11ac Beamforming At Its Best: ClientLink 3.0

Editor’s Note: This is the third of a four-part deep dive series into High Density Experience (HDX), Cisco’s latest solution suite designed for high-density environments and next-generation wireless technologies. For more on Cisco HDX, visit www.cisco.com/go/80211ac. Read part 1 here. Read part 2 here.

The 802.11ac wireless networking standard is the most recent introduced by the IEEE (now ratified), and it is rapidly becoming an accepted and reliable industry standard. The good news is that client and vendor adoption of 802.11ac is growing at a much faster pace than when 802.11n was introduced back in 2009. There has been accelerated growth in mobile and laptop devices entering the wireless market with an embedded 802.11ac Wi-Fi chipset. Unlike in the past, laptop, smartphone and tablet manufacturers are now acknowledging that staying up to date with the latest Wi-Fi standards is as important to bandwidth-hungry users as a better camera or a higher-resolution display.

With the launch of the new 802.11ac AP 3700, Cisco introduces Cisco HDX (High Density Experience) Technology. Cisco HDX is a suite of solutions aimed at augmenting the higher performance, greater speed and better client connectivity that the 802.11ac standard delivers today.

ClientLink 3.0 is an integral part of Cisco HDX technology, designed to resolve the complexities that come along with the BYOD trend driving the proliferation of 802.11ac-capable devices.

So what is ClientLink 3.0 technology and how does it work?

ClientLink 3.0 is a Cisco-patented 802.11ac/n/a/g beamforming technology Read More »
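ClientLink’s internals are Cisco-proprietary, so the sketch below is not Cisco’s algorithm. It is a generic NumPy illustration of the core idea behind transmit beamforming (here, simple conjugate or maximum-ratio weighting): the AP weights the signal on each antenna so the copies add in phase at the client, improving SNR with no client-side changes.

```python
# NOT Cisco's ClientLink algorithm: a generic sketch of transmit
# beamforming, where per-antenna weights phase-align the signal so
# it combines constructively at the client.
import numpy as np

rng = np.random.default_rng(0)

n_tx = 4  # AP transmit antennas (e.g. a 4-antenna 802.11ac AP)
h = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)  # channel to client

# Maximum-ratio transmission: conjugate weights, unit total power.
w = np.conj(h) / np.linalg.norm(h)

s = 1.0 + 0j                 # one transmitted symbol
y_beamformed = h @ (w * s)   # all antennas, coherently combined
y_single = h[0] * s          # one antenna at the same total power

print(f"beamforming power gain: {abs(y_beamformed)**2 / abs(y_single)**2:.1f}x")
```

Because the weighting happens entirely at the AP, the same principle works for legacy 802.11a/g/n clients as well as 802.11ac ones, which is the kind of standard-agnostic gain a scheme like ClientLink aims for.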


Congratulations to 2013 IEEE-SA International Award Recipient Andrew Myles

Earlier this week, the IEEE Standards Association (IEEE-SA) announced the winners of the 2013 IEEE-SA Awards, which honor standards development contributions. We are pleased to announce that Andrew Myles, Engineering Technical Lead at Cisco, has been awarded the IEEE-SA International Award for his extraordinary contribution to establishing IEEE-SA as a world-class leader in standardization. Andrew has long been involved in IEEE-SA and led a long-term initiative (2005–2013) in IEEE 802 to defend and promote IEEE 802 standards globally.

We want to congratulate Andrew on this tremendous recognition. The work of Andrew and other contributors develops and promotes high-quality, efficient and effective IEEE standards. This enables the Internet and its supporting network components to be the premier platforms for innovation and borderless commerce they are today. These standards in turn are reflected in our products and solutions for our customers. As we develop technological innovation for our customers, we continue, in parallel, to drive global standards deployment. The result is the best innovative solutions for our customers’ network environments. Read More »
