When I think about IT security, I don’t immediately start thinking about threats, hackers and countermeasures, but begin with what is happening to IT in general. Right now, the three big megatrends in IT can be summed up in three words: virtualization, collaboration, and mobility. Unfortunately, it’s become something of a Newtonian principle that any action driving information technology forward generates an equal or greater counteraction by hackers to corrupt and exploit the new technology. I also find it disconcerting that at any given time, the most aggressively marketed “solutions” to IT security problems represent a trailing indicator of what cyber criminals are actually doing to raise hell.
Reclaiming Mobile Cloud Services from OTTs: Seven Actions Service Providers Can Take to Capture a $60 Billion Opportunity
A rapidly expanding, tech-savvy middle class is driving an explosion of connected mobile devices, with close to a billion smartphones and tablets in the world today. These users are looking for new cloud-based “Connected Life” experiences from their mobile devices, creating tremendous opportunities for service providers (SPs). The key is in mobile cloud. The Cisco® Internet Business Solutions Group (IBSG) projects a direct worldwide mobile-cloud service opportunity of more than $60 billion by 2016, with an additional cloud pull-through market of $335 billion.
But so far, service providers have not taken the lead in offering cloud-based Connected Life services. That distinction belongs to over-the-top (OTT) application developers, content providers, and device manufacturers, such as Google and Apple, which have moved quickly to take the high ground in this market.
OTTs Have First-Mover Advantage
This first-mover advantage has Read More »
The so-called “data deluge” shows no signs of abating anytime soon. Facebook, for example, has more than 2.5 billion pieces of content and ingests more than 500 terabytes of new content daily. Mobile devices are driving this growth of data. The global proliferation of devices is estimated to reach 10 billion by 2017—or 1.4 times the number of people on the planet. As a result, mobile-data traffic is exploding. The recently released Cisco Visual Networking Index (VNI) predicts that global mobile-data traffic will increase 13-fold from 2012 to 2017, reaching 11.2 exabytes per month.
But along with the challenges inherent to this tsunami of data, opportunities abound for monetizing and optimizing information. All of those new mobile consumers—in developed and emerging markets alike—will demand enhanced Connected Life experiences that will be newer, better, and more personalized. Data is the “new oil” that will fuel this opportunity. Networks and the Internet have a critical role to play in the future of Big Data. First, they are the collectors and disseminators of data, gathering it from the millions of Internet-enabled devices, applications, and sensors, then storing it in the right place for analysis and further action. Second, they are creators of critical information on location, presence, device type, application, and more.
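As a quick sanity check on the VNI figures quoted above, a little arithmetic recovers the implied 2012 baseline and annual growth rate. This is a back-of-envelope sketch, not part of the VNI report itself:

```python
# Cisco VNI projection quoted above: 13-fold growth from 2012 to 2017,
# reaching 11.2 exabytes/month by 2017.
monthly_2017_eb = 11.2
growth_factor = 13
years = 5

# Implied 2012 baseline: 11.2 / 13, roughly 0.86 EB/month.
monthly_2012_eb = monthly_2017_eb / growth_factor
print(f"Implied 2012 baseline: {monthly_2012_eb:.2f} EB/month")

# 13-fold over 5 years corresponds to a compound annual growth rate
# of 13**(1/5) - 1, roughly 67% per year.
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR, 2012-2017: {cagr:.0%}")
```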
Salem Health is a network of hospitals, rehab facilities, and physician clinics located in the Pacific Northwest. The organization was experiencing major performance issues with its IT infrastructure, which was hurting business processes. And in the healthcare industry, that’s not something to be overlooked. Clinicians were experiencing too much downtime to provide top-of-the-line patient care, not to mention the lack of support and control the organization had over its existing environment.
In dire need of change, Salem Health overhauled its data center with the Cisco® Unified Computing System™ (UCS), powered by Intel® Xeon® processors. The transition was cost-effective, saving the business 68 percent in IT expenses, and provided enhanced data replication, decreased downtime, and greatly improved network access. As a result, Salem Health is able to deliver much better care to patients, and the business is back on track.
In the first few posts in this series, we have hopefully shown that not all cores are created equal and that not all GHz are created equal. This creates challenges when comparing two CPUs within a processor family, and even greater challenges when comparing CPUs from different processor families. If you read a blog or a study that showed 175 desktops on a blade with dual E7-2870 processors, how many desktops can you expect from the E7-2803 processor? Or an E5 processor? Our assertion is that SPECint is a reasonable metric for predicting VDI density, and in this blog I intend to show you how much SPECint is enough [for the workload we tested].
You are here. As a quick recap, this is a series of blogs covering the topic of VDI, and here are the posts in this series:
- Introduction – VDI – The Questions you didn’t ask (but really should)
- VDI “The Missing Questions” #1: Core Count vs. Core Speed
- VDI “The Missing Questions” #2: Core Speed Scaling (Burst)
- VDI “The Missing Questions” #3: Realistic Virtual Desktop limits
- VDI “The Missing Questions” #4: How much SPECint is enough (you’re already reading this!)
- VDI “The Missing Questions” #5: How does 1 vCPU scale compared to 2 vCPUs?
- VDI “The Missing Questions” #6: What do you really gain from a 2vCPU virtual desktop?
- VDI “The Missing Questions” #7: How memory bus speed affects scale
- VDI “The Missing Questions” #8: How does memory density affect VDI scalability?
- VDI “The Missing Questions” #9: How many storage IOPs?
Addition and subtraction versus multiplication and division. Shawn already explained the concept of SPEC in question 2, so I won’t repeat it. You’ve probably noticed that Shawn talked about “blended” SPEC whereas I’m covering SPECint (integer). As it turns out, the majority of task workers really exercise the integer portion of a processor rather than the floating point portion of a processor. Therefore, I’ll focus on SPECint in this post. If you know more about your users’ workload, you can skew your emphasis more or less towards SPECint or SPECfp and create your own blend.
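The blend described above can be sketched as a simple weighted average. Note that the 80/20 weighting here is an illustrative assumption for integer-heavy task workers, not a figure from the post; adjust it to match your own users’ workload:

```python
def blended_spec(specint, specfp, int_weight=0.8):
    """Blend SPECint and SPECfp scores into a single figure of merit.

    int_weight is the fraction of the workload assumed to be integer work.
    The 0.8 default is a hypothetical weighting for task workers whose
    applications mostly exercise the integer units; it is not from the post.
    """
    return int_weight * specint + (1 - int_weight) * specfp

# Example: a processor scoring 305 on SPECint and 250 on SPECfp,
# blended for a mostly integer workload.
print(blended_spec(305, 250, int_weight=0.8))
```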
The method to the madness. Let me take you on a short mathematical journey using the figure below. Starting at the top, we know each E5-2665 processor has a SPECint score of 305. It doesn’t matter how many cores it has or how fast those cores are clocked. It has a SPECint score of 305 (compared to 187.5 for the E5-2643 processor). Continuing down the figure, each blade we tested had two processors, so the E5-2665-based blade has a SPECint of 2 x 305… or 610. That gives the E5-2665 blade a much higher SPECint of 610, versus just 375 for the E5-2643 blade. And it produced many more desktops, as you can see from the graph embedded in the figure (the graph should look familiar to you from the first “question” in this series).
And now comes the simple math to get the SPECint requirement for each virtual desktop in each test system:
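That per-desktop calculation (blade SPECint divided by the number of desktops the blade supported) can be sketched as follows. The SPECint scores are the ones quoted above; the desktop counts are placeholders, since the actual counts live in the figure’s embedded graph:

```python
# Per-processor SPECint scores from the post; each test blade has two sockets.
# The "desktops" values are PLACEHOLDER counts for illustration only --
# the real measured counts are in the graph referenced in the post.
blades = {
    "E5-2665": {"specint_per_cpu": 305.0, "desktops": 130},  # placeholder count
    "E5-2643": {"specint_per_cpu": 187.5, "desktops": 80},   # placeholder count
}

for name, b in blades.items():
    blade_specint = 2 * b["specint_per_cpu"]          # two sockets per blade
    per_desktop = blade_specint / b["desktops"]       # SPECint "cost" per desktop
    print(f"{name}: blade SPECint {blade_specint:.0f}, "
          f"~{per_desktop:.2f} SPECint per desktop")
```

Swapping in your own measured desktop counts turns this into a quick way to estimate the SPECint budget each virtual desktop consumes on a given blade.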