Can you see it? The end is nigh! The end of this blog series, that is – not necessarily “the end” as in AMC’s The Walking Dead sort of end. Are you a zombie stumbling across this blog from a random Google search? Here is a table of contents to help you on your journey as we once again delve into the depths and address another question on our quest to answer… the VDI questions you didn’t ask, but really should have.
Got RAM? VDI is an interesting beast, both from a physical perspective and in terms of its care and feeding. One thing this beast certainly does like is RAM (and braaaiiiins). Just in case I am still being stalked by that tech writer, RAM stands for Random Access Memory. I spoke a bit about operating systems in the fifth question in this series, and this builds upon that with regard to the amount of memory you should use. Microsoft says Windows 7 needs 1 gigabyte (GB) of RAM (32-bit) or 2 GB of RAM (64-bit). For the purpose of our testing, we went smack in the middle with 1.5 GB of RAM. Does it really matter what we used for this testing? It does a little – first, we need sufficient resources for the desktop to perform the functions of the workload test, and second, we need to pre-establish some boundaries to measure from.
Calculating overhead. To properly account for memory usage, we need to take into account the overhead of certain things in the hypervisor. If you want to learn more about calculating overhead, click here. Here are a couple of the items we are figuring into overhead:
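To make the sizing concrete, here is a back-of-the-envelope sketch of how configured RAM plus hypervisor overhead adds up at the host level. The 1.5 GB per desktop matches the test configuration above; the desktop count and the per-VM overhead figure are illustrative assumptions, not measured values – real overhead varies with vCPU count, devices, and hypervisor version.

```python
# Hypothetical VDI host memory sizing sketch.
desktops = 100                # assumed number of virtual desktops per host
configured_mb = 1536          # 1.5 GB configured per desktop VM (as in our testing)
overhead_mb = 120             # assumed per-VM hypervisor overhead (placeholder value)

per_vm_mb = configured_mb + overhead_mb
host_ram_gb = desktops * per_vm_mb / 1024
print(per_vm_mb)                 # 1656 MB of host RAM actually consumed per VM
print(round(host_ram_gb, 1))     # ~161.7 GB needed for 100 desktops, before the hypervisor itself
```

The point of the exercise: overhead that looks trivial per VM becomes double-digit gigabytes at VDI densities, which is why we pre-establish these boundaries before measuring.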
Boot Camp: Connect, Discover, Learn with Cisco Monday, February 25, 8:30 a.m.–5:30 p.m. Session ID: SPO2400
The Cisco Boot Camp is dedicated to educating and enabling partners to sell and deploy Cisco solutions successfully.
Breakout Session: Cisco Unified Data Center: From Server to Network
Wednesday, February 27, 12:30–1:30 p.m.
Speaker: Satinder Sethi, VP, Server Product Management and Data Center Solutions, Cisco
Demos: Cisco Booth 1015!
VDI: Cisco UCS with VMware View
Cisco Servers: Cisco Unified Computing System with VMware
Cisco Nexus 1000V Family
Cisco Unified Management
Branch Office Consolidation with Cisco E-Series Server
EMC VSPEX Proven Infrastructure
Also in Cisco Booth 1015, we’ll be shooting multiple episodes of Engineers Unplugged! Drop by to see some of the superstars of IT in full whiteboard action. Topics range from automation to virtualization to SDN. Send me a Tweet @CommsNinja if you’d like to participate!
In the last fiscal quarter, Cisco UCS reached another milestone: 20,000 customers (87% year-over-year growth). The (no longer) new data center paradigm of fabric-based computing must be delivering unique customer benefits, hence the market traction. Gartner defines fabric-based computing as follows:
Fabric-based computing (FBC) is a modular form of computing in which a system can be aggregated from separate (or disaggregated) building-block modules connected over a fabric or switched backplane. Fabric-based infrastructure (FBI) differs from FBC by enabling existing technology elements to be grouped and packaged in a fabric-enabled environment, while the technology elements of an FBC solution will be designed solely around the fabric implementation model.
I will dive deeper into why customers see benefits with the Cisco Unified Computing System. So let’s start with the term “fabric”. A Lippis report helps us understand the data center fabric, and in this TechTarget article by Michael Brandenburg we get some more background:
Legacy three-tiered data center architecture was designed to service the heavy north-south traffic of client-server applications, while enabling network administrators to manage the flow of traffic. Engineers adopted spanning tree protocol (STP) in these architectures to optimize the path from the client to server and allow for link redundancy. STP worked well to support client-server applications and its traffic flows, but proved inefficient for server-to-server or east-west communications associated with distributed application architecture.
…Server virtualization compounds the problem with spanning tree and the three-tiered architecture.
… data center fabric, a network where traffic from any port can reach any other node with as few latency-inducing hops as possible.
This is eye opening for those of us who live in the server and application world. Bottom line – the data center fabric will result in fewer hops and lower latency for servers communicating with each other in the data center.
So how is this achieved within the Cisco Unified Computing System? This is done with the Fabric Interconnect, which is the I/O hub and the very soul of the system. The Fabric interconnect consolidates three separate networks: LANs, SANs, and high-performance computing networks. The Fabric Interconnect provides consolidated access to both SAN storage and network attached storage (NAS) over the fabric. This means the Cisco Unified Computing System servers can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. It also lowers costs by reducing the number of network adapters, switches, and cables.
The Cisco UCS Manager, the embedded device manager software in the Fabric Interconnect, gives users the ability to slice and dice the system’s big chunk of physical network capacity into much smaller subunits, flexibly and with the ability to change those decisions through software configuration. With Cisco UCS, IT organizations can now deliver dynamic network infrastructure or network services across all types of applications – from applications like Oracle, SAP, three-tier J2EE, and Microsoft to virtualized applications from VMware, Microsoft, and Citrix.
In his blog, John McCool, Cisco SVP and CTO, defines fabric as “… a highly available, high performance shared infrastructure built with integrated, intelligent compute, storage and network nodes that can be rapidly and simply organized around the requirements of a given workload.” In part 2 of this blog I will detail the automation and management of the fabric-based compute nodes (up to 160) connected to a single pair of UCS Fabric Interconnects.
Lynn University is a 50-year-old private, coeducational institution located in Boca Raton, Florida. So how was this fairly small and quiet school selected to host the final 2012 presidential debate? It’s booming with technological innovation.
The school has long held the belief that student collaboration and knowledge sharing are vital to the learning process, but realized over time that it needed to increase student support through technology. Moving to a 1-to-1 program entailed giving each student an iPad and overhauling the network environment. In late 2011, as this transformation was underway, Lynn discovered that it would also soon become the youngest school ever to host a presidential debate.
This meant the school had less than a year to undergo a complete technical refresh, so Lynn turned to Cisco for help. University CIO Chris Boniforti summed up his decision to select Cisco by saying “All of our diverse technical requirements, for both the debate and the university, could be done under one umbrella, with one vendor, and that was Cisco.”
This umbrella of technology included Cisco wireless solutions, Cisco Unified Computing System and Cisco security, voice and IP communications. Cisco joined forces with longtime partner Modcomp to deliver a solution the university could use well beyond the presidential debate. The result: A successful implementation that resulted in a “technically smooth” debate.
It’s important to note this project didn’t shut down once the debate was over. Today, the school is committed to providing a mobile platform for its entire faculty and student body by the time the newest crop of freshmen arrives this fall. The new business school will include lecture capture and resource-sharing tools, including video – capabilities, now embedded in the teaching environment, that would not have been possible without Lynn’s new Cisco network.
I’m personally impressed with the university’s commitment to technology. They are a great example for other small schools looking for cost-effective innovation. What do you think? Is your school ready for this kind of transformation?
In a recent interview, the Director of IT Operations at a New York-based enterprise said that one of the biggest problems he was facing was maintaining customer satisfaction with performance as the data deluge grew unabated. According to a 2012 IDC report, “..Data creation is taking place at an unprecedented rate and is currently growing at over 60% per year. IDC’s Digital Universe Study predicts that between 2009 and 2020, digital data will grow 44-fold to 35ZB per year..”. One ZB, or zettabyte, is 1000 billion gigabytes… you get the picture.
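Those units are easy to mangle, so here is a quick sanity check of the figure quoted above, assuming the SI definition of a zettabyte (10^21 bytes):

```python
# Sanity-checking the quoted unit conversion: 1 ZB = 10**21 bytes.
ZB = 10**21   # one zettabyte, in bytes
GB = 10**9    # one gigabyte, in bytes

print(ZB // GB)  # 1000000000000 GB, i.e. 1000 billion gigabytes per zettabyte
```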
The implications are that more data will be stored and processed on servers. Data could sit on local disks, or in large storage arrays connected to the server by a network. It may be pre-processed and stored in a database for faster analysis. The server and its applications must now quickly access the partially processed or raw data, which could be structured, as in ERP solutions, or unstructured and handled by scale-out Big Data applications. Either way, data will have to flow back and forth through the network connecting servers and storage. Additionally, as client virtualization gains traction, data center servers will need to access large files located on storage devices most likely connected through networks. These use cases are addressed by the Cisco UCS and Fusion-io partnership, which is why the June 2012 announcement generated so much interest. In a recent interview at Cisco Live London, Cisco executive Paul Perez reiterated the importance of the collaboration and the benefits to Cisco UCS customers.
So how does Fusion-io ioDrive2 accelerate data access? It optimizes the use of existing network bandwidth for data i/o intensive workloads with a low