Australian mobile data traffic will increase sixfold between 2012 and 2017, reaching 0.075 exabytes. Meanwhile, New Zealand mobile data traffic will increase eightfold in the same period.
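Those multiples imply steep annual growth rates. As a quick sketch (the five-year window and the sixfold/eightfold multiples come from the forecast above; the `cagr` helper name is mine, not part of the VNI methodology):

```python
def cagr(growth_multiple: float, years: int) -> float:
    """Compound annual growth rate implied by an overall growth multiple."""
    return growth_multiple ** (1 / years) - 1

# Sixfold Australian growth over 2012-2017 (5 years):
au = cagr(6, 5)   # roughly 43% per year
# Eightfold New Zealand growth over the same window:
nz = cagr(8, 5)   # roughly 52% per year
print(f"AU: {au:.1%}, NZ: {nz:.1%}")
```

In other words, a sixfold increase over five years means traffic grows by more than 40% every year, compounding.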
These were among the key findings from our latest Global Mobile Visual Networking Index (VNI), an ongoing initiative to forecast global traffic growth. The findings were presented last week at Cisco Live in Melbourne by Dr. Robert Pepper, Cisco’s vice president for global technology policy (see slide presentation here).
Dr. Pepper identified four trends driving this data consumption growth: more users, more devices per user, faster network speeds and more media-rich content.
Anyone who has been involved with compliance knows that simplifying complexity is the key to maintaining a secure, compliant organization. Sustaining compliance is a marathon, and the journey must be travelled with vigilance: it is not an endpoint or a one-time task that, once accomplished, can be shelved and forgotten. That is why it is so helpful for merchants who wish to become compliant, or to stay compliant, to purchase solutions that are “certified.”
The fact that you are purchasing a product that’s already been validated as secure and “capable” of being compliant reduces the complexity and uncertainty associated with big-ticket items. Adding new credit card readers or a payment application in your stores is expensive, and knowing that these products are validated by the Payment Card Industry (PCI) Council gives merchants confidence that they’re making a wise and secure decision.
It’s been four years since Cisco officially introduced the Unified Computing System (UCS) and began the challenge of convincing the industry – and the channel – that we were serious about being a server player and serious about making a difference in the data center.
We all heard the naysayers at first, and especially our competitors, saying: “Cisco? Servers? What does a networking company know about servers?” We had our work cut out for us.
Today we’re proud to have earned a number of industry accolades for the data center strategy behind UCS, and a continued and growing partnership with a range of channel allies building ecosystem-based solutions around UCS. Just look at how much this community has grown: 3,000 partners actively selling UCS solutions and more than 20,000 end customers. Partners are building software solutions using the UCS developer tools we’ve provided, and major strategic allies like VCE, NetApp and Hitachi are winning in the data center with our combined technology and solutions.
In this week’s episode of Engineers Unplugged, WWT’s Dave Kinsman (@virtualizethis) and Chris Gebhardt (@chrisgeb) take on the current buzz in the end-user computing space. Listen in on all things VDI, from storage to flash:
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
This was the test I most eagerly anticipated because of the lack of information on the web regarding running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the few blades in the industry capable of supporting both the top-bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact of using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz at two DIMMs per channel (16 DIMMs) and a slower speed at three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.
Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:
Join us for a free webinar on March 27 discussing this blog series. Register here.
The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
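The speed step-down can be put in bandwidth terms. Here's a minimal sketch assuming the standard DDR3 figures (a 64-bit channel, with the quoted MHz taken as the transfer rate in MT/s); the function and the speed table are my own illustration, not a Cisco sizing tool:

```python
# Peak theoretical memory bandwidth per E5-2600 socket:
# 4 channels x 8 bytes per transfer x transfer rate.
CHANNELS = 4
BYTES_PER_TRANSFER = 8  # 64-bit DDR3 channel

# Bus speed by DIMMs per channel, per the text above
# (pre-2.0(5a)/2.1(1b) firmware, where 3 DIMMs/channel drops to 1066).
SPEED_BY_DPC = {1: 1600, 2: 1600, 3: 1066}  # MT/s

def peak_bandwidth_gbps(dimms_per_channel: int) -> float:
    """Peak per-socket bandwidth in GB/s for a given DIMM population."""
    mts = SPEED_BY_DPC[dimms_per_channel]
    return CHANNELS * BYTES_PER_TRANSFER * mts / 1000  # MB/s -> GB/s

for dpc in (1, 2, 3):
    total_dimms = dpc * CHANNELS * 2  # two sockets on the blade
    print(f"{total_dimms} DIMMs ({dpc}/channel): "
          f"{peak_bandwidth_gbps(dpc):.1f} GB/s per socket")
```

So fully populating the blade trades roughly 51.2 GB/s of peak per-socket bandwidth for about 34.1 GB/s, in exchange for 50% more memory capacity.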
As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only 4% greater, a difference that is effectively negligible given that Login VSI is designed to randomize the load in these tests.
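To put that 4% in context: the bus clock dropped by about a third, yet desktop density barely moved, which suggests this workload was not memory-bandwidth bound. A rough comparison (the 4% delta is the observed result above; the variable names are mine):

```python
bus_fast, bus_slow = 1600, 1066   # MT/s, the two tested configurations
density_delta = 0.04              # 4% density advantage observed at 1600MHz

bus_drop = 1 - bus_slow / bus_fast
print(f"Memory bus slowed by {bus_drop:.1%}, "
      f"but density fell only {density_delta:.0%}")
```

A roughly 33% reduction in memory clock costing only 4% in density is the kind of asymmetry that makes the 24-DIMM configuration attractive when capacity matters more than raw bandwidth.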