Editor’s Note: This is the second of a four-part deep dive series into High Density Experience (HDX), Cisco’s latest solution suite designed for high-density environments and next-generation wireless technologies. For more on Cisco HDX, visit www.cisco.com/go/80211ac. Read part 1 here.
The 802.11ac wireless networking standard is the most recent introduced by the IEEE (now ratified), and it is rapidly becoming an accepted and reliable industry standard. The good news is that client and vendor adoption of 802.11ac is growing at a much faster pace than when 802.11n was introduced back in 2009. There has been accelerated growth in mobile and laptop devices entering the wireless market with embedded 802.11ac Wi-Fi chipsets. Unlike in the past, laptop, smartphone and tablet manufacturers now acknowledge that staying up to date with the latest Wi-Fi standards is as important to bandwidth-hungry users as having a better camera or a higher-resolution display.
With the launch of the new 802.11ac AP 3700, Cisco introduces Cisco HDX (High Density Experience) technology. Cisco HDX is a suite of solutions aimed at augmenting the higher performance, greater speed and better client connectivity that the 802.11ac standard delivers today.
ClientLink 3.0 is an integral part of Cisco HDX technology, designed to resolve the complexities that come with the BYOD trend driving the rapid proliferation of 802.11ac-capable devices.
So what is ClientLink 3.0 technology and how does it work?
ClientLink 3.0 is a Cisco-patented 802.11ac/n/a/g beamforming technology. Read More »
Tags: 802.11, access point, antenna, AP, beamforming, cell size, Cisco, client, client connectivity, ClientLink, device, downlink, hardware, HD, HDX, high density, IEEE, Industry Standard, LAN, mobile, mobility, network, rf, smartphone, software, solution, tablet, technology, wi-fi, wifi, wireless, wlan
Industry standard benchmarks have played, and continue to play, a crucial role in the advancement of the computing industry. Demand for them has existed since buyers were first confronted with the choice between purchasing one system over another. Over the years, industry standard benchmarks have proven critical to both buyers and vendors: buyers use benchmark results when evaluating new systems in terms of performance, price/performance and energy efficiency, while vendors use benchmarks to demonstrate the competitiveness of their products and to monitor release-to-release progress of their products under development. Historically, we have seen that industry standard benchmarks enable healthy competition that results in product improvements and the evolution of brand new technologies.
Over the past quarter-century, industry standard bodies like the Transaction Processing Performance Council (TPC) and the Standard Performance Evaluation Corporation (SPEC) have developed several industry standards for performance benchmarking, which have been a significant driving force behind the development of faster, less expensive, and/or more energy efficient system configurations.
The world has been in the midst of an extraordinary information explosion over the past decade, punctuated by rapid growth in the use of the Internet and the number of connected devices worldwide. Today, we’re seeing a rate of change faster than at any point throughout history, and both enterprise application data and machine generated data, known as Big Data, continue to grow exponentially, challenging industry experts and researchers to develop new innovative techniques to evaluate and benchmark hardware and software technologies and products.
I am co-chairing a workshop with my distinguished colleagues Chaitanya Baru, Meikel Poess, Milind Bhandarkar, Tilmann Rabl and others, entitled the Workshop on Big Data Benchmarking (WBDB 2012), supported by the National Science Foundation (NSF.gov). This is a crucial initial step towards the development of an industry standard benchmark for providing objective measures of the effectiveness of hardware and software systems dealing with Big Data. Several industry experts and researchers have been invited to present and debate their vision on benchmarking big data platforms.
A report from this workshop will be presented at the just-announced 4th International Conference on Performance Evaluation Benchmarking (TPCTC 2012), organized by the TPC, which will be collocated with the 38th International Conference on Very Large Data Bases (VLDB 2012), a premier forum for data management and database researchers, vendors and users. With this conference, we encourage industry experts and researchers to submit ideas and methodologies in performance evaluation, measurement and characterization in areas including, but not limited to: big data, cloud computing, business intelligence, energy and space efficiency, hardware and software innovations, and lessons learned in practice using TPC and other benchmark workloads.
Cisco has been an active member of the TPC since 2010 and the SPEC since 2009.
R. Nambiar, N. Wakou, P. Thawley, A. Masland, M. Lanken, M. Majdalany, F. Carman: Shaping the Landscape of Industry Standard Benchmarks: Contributions of the Transaction Processing Performance Council. Springer, 2011
Workshop on Big Data Benchmarking: http://clds.ucsd.edu/wbdb2012/
TPC Press Release: http://finance.yahoo.com/news/transaction-processing-performance-council-announces-150000511.html
TPCTC 2012 Call for Papers: http://www.tpc.org/tpctc2012/
Tags: Big Data, Big Data Benchmarks, Industry Standard, TPCTC 2012, WBDB 2012