Everybody’s talking about 802.11ac, but we’ve sensed some confusion about next steps: how should CIOs and IT organizations approach the new standard?
Should I move to 802.11ac?
You’re probably thinking: Chris, you’re a leader at Cisco, of course you want me to migrate to 802.11ac. That, my friends, is where you are wrong. There is no simple answer to the question of whether you should move your network to 802.11ac. Here’s my simple rule of thumb:
There is no price premium for 802.11ac from Cisco. If you are deploying new access points today, you should be buying 802.11ac. If you’re not buying, you are probably satisfied with your network and with how it will handle the growth of more and more clients associating with it and the bandwidth demands those clients bring. If you feel you have a plan to handle this demand, then you are one of the few who can pass on 802.11ac.
That said, there is a strong ramp-up for Cisco 802.11ac products in the market: the AP3700 is the fastest-ramping access point in our history, and we have yet to see whether the AP2700 will claim that crown in the coming months. ABI Research estimates that 50% of new device introductions are currently 802.11ac enabled, a figure expected to reach 75% by the end of 2015. That is proof enough of the overwhelming interest in bringing the benefits of 11ac to networks. Let’s take a step back and consider the basics of why people are moving to the new standard.
Today, everything is about getting what we want, when we want it. Instant gratification. It’s not just the millennials; we’ve all been conditioned to expect things within seconds. Could you imagine, in the pre-Internet days, having on-demand movies?
Tags: 11ac, 11n, 802.11, 802.11ac, 802.11n, access point, AP, bandwidth, battery life, CIO, Cisco, client, consumer, dell'oro, deployment, device, education, End User, GHz, gigabit, HD, HDX, high density, IEEE, IT, laptop, macbook, mbps, Mhz, migrate, migration, network, networking, optimization, performance, retail, rf, Scalability, scalable, smartphone, spectral optimization, spectrum, standard, technology, university, visibility, wi-fi, wifi, wireless, wlan
The 2014 IEEE PES Transmission & Distribution Conference & Exposition is in the Windy City, bringing a half-century of industry innovation to the biggest and most exciting conference yet!
Check out Cisco’s presence at the IEEE show at McCormick Place, West Hall, Level 3, 2301 S. Lake Shore Drive, Chicago, IL 60616, and learn more about what Cisco is showing!
Here’s a rundown of the demos you can see:
Field Area Network showcases how you can address multiple use cases such as Advanced Metering Infrastructure (AMI), Distribution Automation (DA), and Remote Workforce Management over a single, multi-service IP network platform. The latest additions to the Connected Grid product portfolio include the IR 500 Series Distribution Automation Gateway, PLC NAN modules and WiMAX WAN modules for the Cisco 1000 Series Connected Grid Routers, and the Connected Grid Network Management System. The Connected Grid Network Management solution allows you to securely manage multi-vendor, multi-technology, multi-service utility communication networks that can scale to millions of endpoints.
IOx showcases the best in networking operating systems, Cisco Internetworking Operating System (IOS), and the best in open source Linux working together to enable Fog computing. IOx allows data collection to move closer to the source, sensors and systems of origin. It reduces the cost of data collection by eliminating a separate server to run the interface or application and supports demanding utility and industry environments requiring hardened devices.
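To make the fog-computing idea concrete, here is a minimal sketch of the pattern IOx enables: aggregating sensor readings at the edge so that only a compact summary has to cross a constrained uplink. The function and data are hypothetical illustrations, not part of the IOx API.

```python
# Illustrative sketch of edge (fog) aggregation: reduce raw sensor
# samples locally so only a small summary is sent upstream.
# The function name and the sample data are hypothetical.

def summarize_window(readings):
    """Collapse a window of raw samples into a compact summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# Sixty raw samples (say, one temperature reading per second)
# become a single four-field record per minute.
raw = [20.0 + 0.1 * i for i in range(60)]
summary = summarize_window(raw)
print(summary)
```

The point of the sketch is the ratio: sixty floats in, four numbers out, with the decision-relevant information preserved and the uplink traffic cut by an order of magnitude.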
Substation Automation showcases how you can address mission-critical grid operations as well as infrastructure support use cases over a converged network infrastructure.
You will see IEC-61850 GOOSE transport over the Ethernet station bus, partner product integrations for Visualization/Control (HMI), serial DNP3/Modbus SCADA and ANSI 87L Line Current Differential Teleprotection transport over an MPLS WAN, along with video surveillance and access control for substation physical security. The latest additions to the Connected Grid product portfolio include the IE-2000U small-form-factor Industrial Ethernet Series Switches for the process bus with PRP red-box and high-precision 1588 Power Profile functionality, the ASR-903 MPLS substation router with Async/Sync Serial Interface Modules, and the Prime Carrier MPLS wide-area network management system.
Cyber Security showcases how Cisco’s Agile Security Process can significantly reduce the risk of cyber threats. By giving you visibility into your network, seeing all the network traffic, learning what should and should not be there, and identifying which attacks are relevant, the Cisco Security Suite can adapt to your environment and remediate based on real threats. This not only saves you time and money, but also lets you focus on real-world security issues by reducing false positives and false negatives.
Cisco Developer Network (CDN) program enables partners to work with Cisco to develop products and solutions for the utility industry. The CDN program supports development, integration with Cisco solutions, and certification of IP-enabled grid endpoints using radio-frequency (RF) and power-line communications (PLC) technologies, distributed intelligence applications and third-party communication modules for IOx-based field area routers, transmission and distribution technologies, as well as grid security and management software.
So be there or be square! Meet up with Cisco specialists, hear about the latest trends, and see how Cisco is more relevant to the utilities sector than ever before!
For those of you looking for a handy “where to go” map, here it is below:
Tags: chicago, Energy, FAN, field area networks, IEEE, substation automation, utilities
It is with great pleasure I introduce you to a senior thought-leader from Cisco, Rick Geiger.
As some of you will know, Rick Geiger has been with Cisco for 7 years and was formerly Director of Engineering in Cisco’s Physical Security Business Unit (now part of the IOTG, or Internet of Things Group).
Prior to joining Cisco, Rick was VP of Engineering for GE Security, covering physical security, video surveillance, and access control, so he knows a thing or two about security. Rick has in-depth experience in the global utility market, with more than 10 years as VP of Engineering and Chief Technical Officer for Itron. Itron, as many of you will know, is a key partner of Cisco.
Presently Rick is Executive Director for Utilities and Smart Grid on Cisco’s Value Acceleration team (formerly the Business Transformation Team). Rick and the Smart Grid Vertical Team serve the Americas utility markets with secure, resilient, and scalable network solutions for smart grid, advanced metering, distribution automation, utility telemetry, and energy management. Despite his US remit, Rick is often asked to represent Cisco internationally.
Rick is also on the Gridwise Alliance Board of Directors and an IEEE Senior Member and Member of the Power and Energy Society. You can view Rick’s Biography here, along with his posts, including those on the Internet of Everything Blog: The Impact of Distributed Generation. Feel free to contact Rick by commenting on his future posts and engaging in the conversation!
Tags: cva, Energy, gridwise, IEEE, IoT, rick geiger, SmartGrid, utilities
It’s that time of year again in the US: tax time! That time of year when we review the previous year’s bounty, calculate what’s due, and re-evaluate our strategies to see if we can keep more of what we worked for. Things change: rules, the economy, time to retirement. Before you know it, you find yourself working through alternatives and making some new decisions.
Anyway, as I was working through the schedules and rule sheets, my mind wandered and I started to think about Wi-Fi and the taxes associated with it. In my day job, I often play the role of forensic accountant. Like a tax accountant, I’m always looking for a way to get more, or to understand why there isn’t more already. So along those lines, let’s talk about a little-known tax that you may well be paying needlessly. I’m talking, of course, about the dreaded 802.11b Penalty.
Wi-Fi protocols like 802.11b are named for the IEEE task group that develops them. In the 2.4 GHz spectrum, there is 802.11b, 802.11g, and 802.11n. Ratified by the IEEE in 1999, 802.11b was the first widely adopted modern Wi-Fi protocol, and it allowed transmissions of 11 Mbps, a major jump forward from the 2 Mbps that was possible with the original 802.11 standard of 1997.
After 802.11b came 802.11a, and then 802.11g. Both of these protocols were a radical departure from the simplistic 802.11b structure and employed orthogonal frequency-division multiplexing (OFDM) modulation (now standard in every 802.11 protocol created since). OFDM allowed for much higher data rates.
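The “penalty” is easiest to see as airtime. A quick back-of-the-envelope sketch, ignoring preambles and protection overhead (which only make 802.11b look worse), shows how long the shared medium stays busy sending the same frame at legacy rates versus OFDM rates:

```python
# Airtime for a 1500-byte frame at legacy 802.11 rates, payload only.
# One slow 802.11b client occupies the shared medium roughly 5x longer
# per frame at 11 Mbps than a client at 54 Mbps -- that is the "tax"
# everyone else in the cell pays.

FRAME_BYTES = 1500

def airtime_us(rate_mbps):
    """Microseconds the medium is busy sending the payload at a given rate."""
    return FRAME_BYTES * 8 / rate_mbps  # bits / (Mbit/s) = microseconds

for rate in (1, 2, 11, 54):
    print(f"{rate:>2} Mbps: {airtime_us(rate):8.1f} us")
```

At 11 Mbps the payload alone costs about 1,091 µs of airtime versus about 222 µs at 54 Mbps, and real 802.11b frames also carry long DSSS preambles and may force protection mechanisms on the OFDM clients, so the true penalty is larger still.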
Tags: 802.11, access point, airtime, AP, battery, behavior, channel, client, data, data rate, device, efficiency, efficient, GHz, IEEE, mbps, mobile, mobility, native protocol, network, Packet, portable, protection mechanism, protocol, specification, spectrum, SSID, standard, tax, traffic, utilization, WFA, wi-fi, wifi, wireless
A consequence of the Moore-Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for the smaller applications to come to it. Consider this: it took mankind over 2000 years, up to 2012, to produce 2 exabytes (2×10^18 bytes) of data; now we produce that much in a day! And the rate will only go up from here. With data production far exceeding the capacity of the Network, particularly at the Edge, there is only one way to cope, which I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT, or as some say, Fog computing:
- Dramatic growth in applications specialized and optimized for analytics at the Edge: Big Data is hard to move around (data gravity), and we cannot move data fast enough to the analytics, therefore we need to move the analytics to the data. This will cause a dramatic growth in applications specialized and optimized for analytics at the edge. Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress but to make the devices smarter, safer and, of course, better connected.
- Dramatic growth in the computational complexity to ETL (extract-transform-load) essential data from the Edge to be data-warehoused at the Core: Currently, most open standards and open source efforts are buying us some time to squeeze as much information in as little time as possible through limited connection paths to billions of devices; soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 terabytes of data for an hour of flight. Imagine the computational complexity we already have that boils that down to routing and maintenance decisions in such complex machines. Imagine the consequences of ignoring such capability, which can already be made available at rather trivial cost.
- The drive to instrument the data to be “open” rather than “closed”, with all the information we create, and all of its associated ownership and security concerns addressed: Open Data challenges have already surfaced, and there comes a time when we begin to realize that an Open Data interface, with guarantees about its availability and privacy, needs to be defined and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each), and with the ever-growing amount of data at the Edge, the issue of who “owns” it and how access to it is “controlled” becomes ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should not be any different than the very same rules that govern open source or open standards.
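The jet-engine figure above makes the data-gravity argument almost arithmetic. A rough sketch (the 20 TB/hour is from the text; the 1 MB/hour summary size is an illustrative assumption) compares the uplink needed to ship the raw data off-board against sending only edge-computed decisions:

```python
# Rough arithmetic behind "move the analytics to the data".
# 20 TB/hour is the jet-engine figure from the text; the 1 MB/hour
# summary size is an illustrative assumption, not a measured value.

TB = 1e12  # bytes

data_per_hour = 20 * TB                    # raw engine telemetry per flight hour
required_bps = data_per_hour * 8 / 3600    # sustained bits/sec to move it live
print(f"raw uplink:    {required_bps / 1e9:.1f} Gbit/s")   # ~44.4 Gbit/s per engine

# Boiling it down at the edge to ~1 MB/hour of routing and
# maintenance decisions needs only a few kbit/s.
summary_bps = 1e6 * 8 / 3600
print(f"edge summary:  {summary_bps / 1e3:.1f} kbit/s")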
Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and other researchers, plus of course, industry colleagues and all the popular, commercial web farms today. I was pleasantly surprised to see not just that the first two are top-of-mind already, but that the third one has emerged and is actually recognized. We have just started to sense the importance of this third wave, with huge implications in Cloud compute. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for the outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project, and most importantly, as we just wrapped up EclipseCon 2014 this week, very happy to see we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most Cloud turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…
Tags: Big Data, core, Data Gravity, Eclipse, edge, Enescu, ETL, Fog computing, IEEE, internet of things, IoT, krikkit, M2M, Moore, Nielsen, Open data, open source, virtualization