Cisco’s newest 802.11ac product, the Aironet 3700 Series Access Point, is now orderable and will ship in the next few weeks. The AP 3700 features an integrated 802.11ac radio with a 4x4 architecture and Cisco’s High-Density Experience (HDX) Technology. HDX is a suite of features, specific to the AP 3700, that delivers the best possible user experience, especially in high-client-density networks. HDX is enabled by a combination of hardware and software features on the AP 3700, including:
CleanAir 80 MHz – Interference detection and mitigation
ClientLink 3.0 – RF link quality
Smart Roam – Intelligent roaming handoff
Turbo Performance – Performance with high client density
Aruba recently launched their 802.11ac access point, the AP-220 series, featuring a 3x3 design.
Miercom recently published a third-party evaluation comparing the performance of the AP 3702i and the AP-225. The report covers a diverse range of test cases meant to gauge real-world performance of the access points, including multi-client performance, single-client rate vs. range, performance in the presence of interference, and performance at reduced power. Here are some of the highlights from the report.
The AP 3700 performed very well in the multi-client performance test, thanks in part to HDX Turbo Performance. With 60 clients, the AP 3702i had a 6x performance advantage over the AP-225. The AP-225 struggled to serve all the clients, mustering only 40 Mbps in total. The AP 3702i was able to transmit a healthy 236 Mbps while maintaining fair throughput to each client.
The test consisted of 60 802.11ac clients, all associated to the 5 GHz radio. The clients were 10 Dell E6430 laptops with Broadcom 4360 three-spatial-stream chips, 20 two-spatial-stream Apple MacBook Air laptops, and 30 Dell E6430 laptops with Intel 7260 two-spatial-stream chips. Clients were set up in an open office environment surrounding the AP, at distances ranging from 10’ to 50’.
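The aggregate numbers above translate into a large per-client gap. A quick back-of-the-envelope calculation, using only the throughput figures reported in the Miercom test (everything else here is illustrative):

```python
# Back-of-the-envelope per-client throughput from the Miercom multi-client test.
# The figures (236 Mbps for the AP 3702i, 40 Mbps for the AP-225, 60 clients)
# come from the report; this script only divides them out.

CLIENTS = 60

aggregate_mbps = {
    "AP 3702i": 236,
    "AP-225": 40,
}

for ap, total in aggregate_mbps.items():
    per_client = total / CLIENTS
    print(f"{ap}: {total} Mbps aggregate, about {per_client:.1f} Mbps per client")

# Ratio of the two aggregates, which is where the roughly 6x claim comes from.
advantage = aggregate_mbps["AP 3702i"] / aggregate_mbps["AP-225"]
print(f"Aggregate advantage: {advantage:.1f}x")
```

Even divided evenly across all 60 clients, the AP 3702i leaves each client with several times the throughput the AP-225 can offer.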
By now, given all the launch and blogging activity over the past week or so, I am sure your understanding of and interest in Application Centric Infrastructure (ACI) have grown. Many of you will be asking, “How do I get started as quickly as possible?” and “How can I free up some time and resources to investigate?” You understand the “what”; now, as I blogged recently on SDN, it’s time to understand more about the “why” and take action on the “how.” So how do you get off that start line as quickly as possible?
Get Set To Go With ACI
As with many things in life, it helps if you get help from someone who has “been there” and “done that”. And that’s where Cisco Services comes in, as Scott Clark, the VP for our Data Center Services team, introduced last week. So let’s talk about why Cisco Services should be your partner in this application centric world, and what services can help you.
Recently, Cisco made significant efforts to open-source our H.264 implementation, including covering the MPEG-LA licensing costs for distribution and working with Mozilla to add support for H.264. However, even with this attempt to unstick the logjam in the standards bodies, the Internet Engineering Task Force (IETF) failed to reach consensus on the selection of a common video codec.
Cisco’s Jonathan Rosenberg explored this topic more in a recent Collaboration blog post. Read on to find out how we’re planning to move forward and why this conversation is definitely not over!
This is the first of a series of blogs that I plan to publish to start a dialog with our partner community. In these blogs, I’ll discuss the huge industry disruption now taking place, how Cisco Services is transforming itself to respond to that disruption, and how current and prospective partners can profit from the lucrative opportunities this disruption is creating.
In our industry, we see major disruptions every 20-25 years. Inflection points occur, platforms shift, and customer needs change dramatically. Today, we find ourselves well into the next major market evolution, one of unprecedented scale. To learn more, click here to view an online seminar where I discuss these trends with Chris Barnard, IDC AVP EMEA Network Life Cycle Services, and Leslie Rosenberg, IDC Research Manager, Worldwide Network Life Cycle Services.
Together, we are addressing the challenges, and tremendous opportunities, related to cloud, virtualization, big data, programmable networks, new consumption models, and changing buying centers. Some of our existing partners are executing on these opportunities and evolving their practices to compete, win, and ultimately enable innovative business solutions for all our customers. At the same time, we are attracting new partners into our ecosystem: ISVs, industry vertical players, and consulting firms, to name a few.
I am attending South Korea’s Big Data Forum in Seoul, and one question here is, “How big is Big Data?” My friend and colleague Dave Evans has pointed out that by the end of this year, more data will be created every 10 minutes than in the entire history of the world up to 2008. Now, that’s big!
Much of this data is being created by billions of sensors that are embedded in everything from traffic lights and running shoes to medical devices and industrial machinery—the backbone of the Internet of Things (IoT). But the real value of all this data can be realized only when we look at it in the context of the Internet of Everything (IoE). While IoT enables automation through machine-to-machine (M2M) communication, IoE adds the elements of “people” and “process” to the “data” and “things” that make up IoT. Analytics is what brings intelligence to these connections, creating endless possibilities.
To understand why, let’s step back and take a look at the classic approach to Big Data and analytics. Traditionally, organizations have tended to store all the data they collect from various sources in centralized data centers. With this model, if a retailer wants to know something about the buying patterns of a certain store’s customers, it can create an analysis of loyalty card purchases based on data in the data warehouse. Collecting, cleansing, overlaying, and manipulating this data takes time. By the time the analysis is run, the customer has already left the store.
Big Data today is characterized by volume, variety, and velocity. This phenomenon is putting a tremendous strain on the centralized model, as it is no longer feasible to duplicate and store all that data in a centralized data warehouse. Decisions and actions need to take place at the edge, where and when the data is created; that is where the data and analysis need to be as well. That’s what Cisco calls “Data in Motion.” With sensors gaining more processing power and becoming more context-aware, it is now possible to bring intelligence and analytic algorithms close to the source of the data, at the edge of the network. Data in Motion stays where it is created, and presents insights in real time, prompting better, faster decisions.
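To make the contrast with the centralized model concrete, here is a minimal, hypothetical sketch of the “Data in Motion” idea: instead of shipping every raw sensor reading to a central data warehouse, an edge node keeps a short rolling window of readings, computes a summary locally, and forwards only the actionable insight. All class names, window sizes, and thresholds below are illustrative assumptions, not a Cisco API.

```python
from collections import deque

# Hypothetical edge-analytics sketch: process sensor readings where they are
# created, and forward only summaries or alerts rather than the raw stream.
class EdgeAnalyzer:
    def __init__(self, window=5, alert_threshold=80.0):
        self.window = deque(maxlen=window)   # short rolling window kept at the edge
        self.alert_threshold = alert_threshold

    def ingest(self, reading):
        """Consume one raw reading; return an insight only when warranted."""
        self.window.append(reading)
        avg = sum(self.window) / len(self.window)
        if avg > self.alert_threshold:
            # Only the actionable insight leaves the edge, not the raw data.
            return {"event": "threshold_exceeded", "rolling_avg": round(avg, 1)}
        return None  # nothing worth sending upstream

# Example: a temperature sensor whose rolling average drifts past a limit.
edge = EdgeAnalyzer(window=3, alert_threshold=75.0)
for value in [70, 72, 74, 82, 90]:
    insight = edge.ingest(value)
    if insight:
        print(insight)
```

The point of the sketch is the shape of the decision, not the arithmetic: the raw readings never leave the edge, and the central system receives only the real-time insight it can act on while the event is still happening.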