Want to get the most out of your big data? Build an enterprise data hub (EDH).
Big data is rapidly getting bigger. That in itself isn’t a problem. The issue is what Gartner analyst Doug Laney describes as the three Vs of Big Data: volume, velocity, and variety.
Volume refers to the ever-growing amount of data being collected. Velocity is the speed at which the data is being produced and moved through the enterprise information systems. Variety refers to the fact that we’re gathering information from multiple data sources such as sensors, enterprise resource planning (ERP) systems, e-commerce transactions, log files, supply chain info, social media feeds, and the list goes on.
Data warehouses weren’t made to handle this fast-flowing stream of wildly dissimilar data. Using them for this purpose has led to resource drain and sluggish response times as workers perform numerous extract, load, and transform (ELT) operations to make stored data accessible and usable for the task at hand.
Constructing Your Hub
An EDH addresses this problem. It serves as a central platform that enables organizations to collect structured, unstructured, and semi-structured data from slews of sources, process it quickly, and make it available throughout the enterprise.
Building an EDH begins with selecting the right technology in three key areas: infrastructure, a foundational system to drive EDH applications, and the data integration platform. Obviously, you want to choose solutions that fit your needs today and allow for future growth. You’ll also want to ensure they are tested and validated to work well together and with your existing technology ecosystem. In this post, we’ll focus on selecting the right hardware.
The Infrastructure Component
Big data deployments must be able to handle continued growth, from both a data and user load perspective. Therefore, the underlying hardware must be architected to run efficiently as a scalable cluster. Important features such as the integration of compute and network, unified management, and fast provisioning all contribute to an elastic, cloud-like infrastructure that’s required for big data workloads. No longer is it satisfactory to stand up independent new applications that result in new silos. Instead, you should plan for a common and consistent architecture to meet all of your workload requirements.
Big data workloads represent a relatively new model for most data centers, but that doesn’t mean best practices must change. Handling a big data workload should be viewed from the same lens as deployments of traditional enterprise applications. As always, you want to standardize on reference architectures, optimize your spending, provision new servers quickly and consistently, and meet the performance requirements of your end users.
Cisco Unified Computing System to Run Your EDH
The Cisco Unified Computing System™ (Cisco UCS®) Integrated Infrastructure for Big Data delivers a highly scalable platform that is proven for enterprise applications like Oracle, SAP, and Microsoft. It also brings the same required enterprise-class capabilities (performance, advanced monitoring, simplified management, and QoS guarantees) to big data workloads. With lower switch and cabling infrastructure costs, lower power consumption, and lower cooling requirements, you can realize a 30 percent reduction in total cost of ownership. In addition, its service profiles give you fast, consistent time-to-value: provisioning templates let you set up a new cluster or add many new nodes to an existing cluster almost instantly.
And when deploying an EDH, the MapR Distribution including Apache™ Hadoop® is especially well suited to take advantage of the compute and I/O bandwidth of Cisco UCS. Cisco and MapR have been working together for the past two years and have developed Cisco validated design guides to provide customers the most value for their IT expenditures.
Cisco UCS for Big Data comes in optimized power/performance-based configurations, all of which are tested with the leading big data software distributions. You can customize these configurations further, or use the system as is. Utilizing one of Cisco UCS for Big Data’s pre-configured options goes a long way to ensuring a stress-free deployment. All Cisco UCS solutions also provide a single point of control for managing all computing, networking, and storage resources, for any fine tuning you may do before deployment or as your hub evolves in the future.
I encourage you to check out the latest Gartner video to hear Satinder Sethi, our VP of Data Center Solutions Engineering and UCS Product Management, share his perspective on how powering your infrastructure is an important component of building an enterprise data hub.
In addition, you can read the MapR Blog, Building an Enterprise Data Hub, Choosing the Foundational Software.
Let me know if you have any comments or questions below, or reach me on Twitter at @CicconeScott.
Tags: Big Data, blade server, blades servers, C240 M3 Rack Server, Cisco UCS, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Enterprise Data Hub, Gartner, Hadoop, MapR, rack server, UCS Central, UCS service profiles
The Guild of St Luke in Florence, where Leonardo da Vinci qualified as a master at only 20 years old, counted both artists and doctors of medicine among its members. While someone today might wonder what those two vocations have in common, someone from the Italian Renaissance would not. Michelangelo and Leonardo da Vinci were the most famous artists of their day, but they were also remarkably skilled engineers, designers, architects, and experts on human anatomy.
Like the guild members of Renaissance Europe, NVIDIA graphics cards serve multiple disciplines. They can deliver 2D/3D graphics performance to CAD/CAM engineers doing design work, medical technicians and doctors examining MRI/CT scans or tumor reconstructions, scientists performing data modeling, and a variety of graphics professionals.
Although their huge thirst for computing power and their immense appetite for data have kept them at the leading edge of computer design, graphics applications have been unable to take advantage of the revolution in virtualization. Narrow network bandwidths and localized rendering engines made it impractical. And companies were sensitive about the security of their intellectual property.
However, Cisco and Citrix have developed a solution in which graphics applications can run in virtual environments with as much performance and security as if they were running locally on high-powered graphics workstations. The solution allows graphics professionals to reap the benefits of virtualization: data remains protected in the data center, desktops are centrally provisioned, and users in different locations can remotely access the same large graphics files on a variety of devices—even on remote workstations, laptops, and tablets.
This blog describes the solution, its major components, and what’s involved in configuring it inside your data center. You can get the details about the full solution in this Cisco White Paper.
The Cisco and Citrix Solution for Virtualizing Graphics Applications
Four elements are key to the Cisco-Citrix solution:
- A combination of Citrix XenDesktop 7.5 and Cisco UCS C240 M3 rack servers that enables up to 64 VMs per server to run rich 2D/3D applications accelerated by NVIDIA GRID technology.
- Compute, network, and storage efficiency that gives each desktop (or other device) virtual GPU performance comparable to locally executing applications.
- The flexibility of the Citrix XenDesktop to run the NVIDIA GRID cards in both pass-through and vGPU modes, to configure different vGPU types, and to balance the number of vGPUs to match requirements.
- Comprehensive and centralized management of the entire system and its components via the Cisco UCS Management suite.
Major System Components
The major components of this system are:
- Cisco UCS C240 M3 Rack Servers
- Citrix XenDesktop 7.5
- NVIDIA GRID K1 or K2 Graphics Cards
- Citrix XenServer 6.2 Service Pack 1
Cisco UCS C240 M3 Rack Servers
The Cisco UCS C240 M3 rack server is part of the Cisco Unified Computing System (UCS) family, a data center platform that unites compute, network, and storage access. The platform is optimized for virtual environments and uses open industry-standard technologies to reduce total cost of ownership. It integrates a 10 Gigabit Ethernet network fabric with enterprise-class, x86-architecture servers.
The Cisco UCS C240 M3 servers feature breakthrough compute power for demanding workloads and are rack-mountable with a compact 2RU form factor. These servers use the same stateless, streamlined provisioning and operations model as their blade server counterparts, the Cisco UCS B-Series Servers. The Cisco UCS C240 M3 servers can support either SAS, SATA, or SSD drives internally, or they can interface with third-party shared storage to meet cost, performance, and capacity requirements.
The Cisco UCS C240 M3 servers also include:
- Cisco UCS 6248UP 48-port Fabric Interconnects that supply 10-Gigabit Ethernet, Fibre Channel, and FCoE (Fibre Channel over Ethernet) connectivity
- The Cisco UCS Virtual Interface Card, a PCI Express (PCIe) adapter optimized to handle virtualization workloads of the Cisco UCS C-Series rack servers
- Cisco UCS Manager, which can be accessed through a GUI, a CLI, or an XML API to control multiple chassis and thousands of virtual machines. Administrators can use the same interfaces to manage these servers along with all other Cisco servers in the enterprise.
Citrix XenDesktop 7.5
Citrix XenDesktop 7.5 delivers Windows operating systems and high performance applications to a variety of device types with a native user experience. This XenDesktop release includes HDX enhancements (including HDX 3D Pro) to optimize virtualized application delivery on mobile devices and across limited network bandwidths. HDX 3D Pro provides GPU acceleration for Windows Desktop OS machines (provisioned as VDI desktops), and Windows Server OS machines (that use RDS). It enables an optimal user experience on wide area network (WAN) connections as low as 1.5 Mbps as well as local area network (LAN) connections.
NVIDIA GRID K1 and K2 Cards
The NVIDIA GRID K1 and K2 cards let multiple users simultaneously share GPUs that provide ultra-fast graphics displays with no lag, making a remote data center feel like it’s next door. Because the cards use the same graphics drivers that are deployed in non-virtualized environments, you can run the exact same application both locally and virtualized. The software stack—including GPU virtualization, remoting, and session-management libraries—enables efficient compression, fast streaming, and low-latency display of high-performance 2D and 3D enterprise applications.
Citrix XenServer 6.2 Service Pack 1
Citrix XenServer is an open-source virtualization platform for managing server and desktop virtualization environments. XenServer 6.2 enables GPU sharing between multiple virtual machines. As a result, each physical GPU on the NVIDIA card can support multiple virtual GPU devices (vGPUs).
As shown in the illustration below, the NVIDIA Virtual GPU Manager running in XenServer dom0 controls the vGPUs, which are assigned directly to guest VMs:
Guest VMs use NVIDIA GRID virtual GPUs in the same manner as a physical GPU that has been passed through by the hypervisor. An NVIDIA driver loaded in the guest VM provides direct access to the GPU for performance-critical operations. Lower-performance management operations use a paravirtualized interface to the NVIDIA GRID Virtual GPU Manager.
Because resource requirements can vary, the maximum number of vGPUs that can be created on a physical GPU depends on the vGPU type, as shown in this table:
(Table: supported vGPU types for the Power User/Designer use case, listing intended use case, frame buffer in megabytes, virtual display heads, and maximum resolution per display head.)
For example, an NVIDIA GRID K2 physical GPU can support up to four K240Q vGPUs on each of its two physical GPUs, for a total of eight vGPUs. However, the same card can support only two K260Q vGPUs, for a total of four vGPUs.
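The arithmetic above can be sketched in a few lines of Python. The per-GPU counts for the K240Q and K260Q profiles come from the text; the dictionary and function names are our illustration, not an exhaustive list of vGPU types:

```python
# vGPUs supported per physical GPU, per vGPU type (K240Q and K260Q
# figures are quoted in the text; other profiles are omitted here).
VGPUS_PER_PHYSICAL_GPU = {
    "K240Q": 4,
    "K260Q": 2,
}

GRID_K2_PHYSICAL_GPUS = 2  # the GRID K2 card carries two physical GPUs

def total_vgpus(card_gpus: int, vgpu_type: str) -> int:
    """Total vGPUs for a card = physical GPUs x vGPUs per GPU for the type."""
    return card_gpus * VGPUS_PER_PHYSICAL_GPU[vgpu_type]

print(total_vgpus(GRID_K2_PHYSICAL_GPUS, "K240Q"))  # 8
print(total_vgpus(GRID_K2_PHYSICAL_GPUS, "K260Q"))  # 4
```

The same card therefore hosts twice as many K240Q vGPUs as K260Q vGPUs, because each K260Q claims a larger share of a physical GPU.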
Configuring the Cisco-Citrix System – An Overview
These are the major steps required to configure a single VM to use the NVIDIA GRID vGPU:
- Install an NVIDIA GRID GPU card in a Cisco C240 M3 UCS server.
- Perform the base Cisco UCS configuration and, if required, upgrade the GPU firmware.
- Enable virtual machines for pass-through support by installing the pass-through GPU driver and the Citrix XenDesktop HDX 3D Pro Virtual Desktop Agent.
- Install XenServer 6.2.0 and Service Pack 1, and install the NVIDIA GRID vGPU Manager.
- Create a virtual machine and configure it with the NVIDIA vGPU type. For graphics-intensive applications, be sure to configure virtual machines running Citrix HDX 3D Pro Graphics with at least four virtual CPUs.
- Install and configure the vGPU driver on the VM guest operating system.
- Verify that the graphics applications are ready to use the vGPU.
The detailed configuration steps are provided in the full white paper.
For advanced configurations, note that on the C240 M3, riser 1 is associated with the first CPU socket and riser 2 with the second. Refer to this white paper for information about vCPU pinning and GPU locality configurations.
Cisco, Citrix, and NVIDIA have teamed up to bring the benefits of virtualization to the users of graphics-intensive applications and the IT organizations that deploy and manage them. Combined breakthrough technologies allow graphics professionals to benefit from the remote access, data sharing, and low overhead of virtualization while experiencing the performance they demand for their graphics-intensive workloads.
For more information, see:
Tags: C240 M3 Rack Server, Citrix XenDesktop, desktop virtualization, Graphics Applications, NVIDIA GRID cards, VDI architectures
With the World Cup recently finished, I’m reminded of how soccer has swept across the U.S. in the last few years. Kids often start quite young; there are leagues even for five- and six-year-olds! One element that helps younger kids enjoy their first soccer experience is that the balls are sized down in line with their height, making them easier to kick and control. It’s an everyday example of how a tool that is well matched to “entry-level” requirements produces better results.
Deploying an entry-level desktop virtualization solution follows similar logic. For a deployment to be successful, there must be a balance between the solution, its cost, and its ease of implementation, especially when the number of users is small. For large corporate environments with a few thousand users, it’s much easier to defray CAPEX costs across a large number of users, realize a low cost-per-seat, and rely on IT administrative staff to deploy and manage the solution. For smaller environments like branch offices or SMBs, deploying and managing a comprehensive desktop virtualization solution has generally been too complex and cost-prohibitive — until now.
Cisco and Citrix have collaborated on a new reference architecture that removes the barriers to smaller deployments, making it easy to deliver Microsoft Windows apps and desktops to a variety of client and mobile devices. Based on Cisco and Citrix technologies, the architecture creates a self-contained, easy-to-deploy, and centrally managed solution that supports 500 seats cost-effectively. Designed for fault-tolerant deployments of fewer than 1,000 users, this new Cisco and Citrix solution opens the door to new desktop virtualization opportunities in branch offices, SMBs, pilot projects, and test and development environments.
Citrix and Cisco test engineers validated the reference architecture and conducted a series of sizing tests using Login VSI. The testing demonstrated how the architecture can support up to 500 Medium/Knowledge Workers or 600 Light/Task Workers while delivering an outstanding end user experience. This blog gives a brief synopsis of the architecture, its benefits, the testing we conducted, and the test results. For more details, you can read the full reference architecture paper and test report here.
Figure 1 shows key solution components. Three Cisco UCS C240 M3 Rack Servers combine industry-standard, x86 servers with networking and storage access into a single converged system. The C-Series servers are part of the Cisco Unified Computing System (UCS) family of products. They have a compact 2RU form factor and use the same stateless, streamlined provisioning and operations model as Cisco UCS B-Series Blade Servers. Cisco UCS 6248UP 48-Port Fabric Interconnects supply the 10 Gigabit Ethernet, Cisco Data Center Ethernet, Fibre Channel, and FCoE connectivity needed for the solution.
Figure 1. 500-User Architecture for Citrix XenApp 7.5 on Cisco UCS C240 M3 Rack Servers
The Citrix XenApp 7.5 release delivers a Windows OS and applications to mobile devices (including laptops, tablets, and smartphones) with a native-touch experience and high performance. In this architecture, the XenApp software delivers 500 Hosted Shared Desktop (HSD) sessions using Remote Desktop Services (RDS). Citrix XenServer 6.2 is the hypervisor that supports virtual machines (VMs) running Microsoft Windows 2012 Server for XenApp and infrastructure services.
Using local storage is essential to achieving an entry-level price point. To make that possible with just twelve 10,000-RPM SAS drives, each server includes an LSI Nytro MegaRAID card containing two 100-GB flash memory modules for caching I/O operations. Using the LSI Nytro flash cache in conjunction with local storage is a key differentiator for this solution, allowing it to deliver responsive performance while holding down cost.
Why the Buzz?
The reference architecture is an exciting breakthrough for these reasons:
- Self-contained, all-in-one solution. The architecture defines an entirely self-contained “in-a-box” solution with all of the infrastructure elements required for a XenApp 7.5 deployment, including Active Directory, DNS, SQL Server, and more. This takes the complexity out of deploying a desktop virtualization solution especially for small standalone environments.
- Fault-tolerant architecture. The architecture locates redundant infrastructure virtual machines across two Cisco UCS C-Series servers to optimize availability. The solution also configures N+1 XenApp servers to maintain service levels even if a XenApp server failure occurs. In addition, Microsoft Distributed File System services are used across multiple servers to protect user data on local storage.
- Easy to build, deploy, grow, and maintain. The compact design of Cisco UCS C-Series Rack Servers keeps the footprint small, making the solution easy to deploy in a small business or branch office setting. Since the C-Series servers are part of the Cisco UCS product family, they can be managed as standalone systems or alongside existing blade and rack servers using Cisco UCS Manager.
By adding Cisco UCS Central Software to the solution, companies can extend Cisco UCS Manager capabilities, allowing administrators to manage multiple Cisco UCS domains (such as domains for satellite offices) in conjunction with centrally defined policies. Both the C-Series Rack Servers and B-Series Blade Servers can be managed using the same set of management tools.
- Low cost per seat. The architecture avoids expensive flash drives, instead caching IOPS in flash memory on the LSI Nytro cards. The choice of less expensive SAS drives helps rein in solution costs while providing an excellent end user experience.
Figure 2 shows the virtual machines deployed across the three physical servers in the test configuration. Infrastructure VMs were hosted on two of the Cisco UCS C240 M3 Servers, and each server also hosted eight XenApp 7.5 HSD VMs. The redundancy across physical servers yields a highly available design.
Figure 2. Test Configuration
Table 1 lists specific components in the test configuration.
| Hardware | Software |
|---|---|
| 3 x Cisco UCS C240-M3 Rack Servers (dual Intel Xeon E5-2697v2 Processors @ 2.7 GHz, 256GB of memory, one Cisco VIC1225 network adapter) | Cisco UCS Manager 2.2(1d) |
| 1 x LSI Nytro MegaRAID Controller NMR 8110-4i card per server | Citrix XenApp 7.5 |
| 12 x 600-GB 10,000 RPM hot-swappable hard disk drives | XenServer 6.2 Hypervisors and XenCenter 6.2 |
| 2 x Cisco 6248UP 48-port Fabric Interconnects | Microsoft Windows Server 2012 R2, 64-bit Remote Desktop Services (5vCPU, 24GB of memory per VM) |
Local storage was organized into drive groups to create RAID 5 and RAID 10 volumes for the hypervisor, infrastructure services, and XenApp VMs. The XenApp 7.5 VMs were provisioned with Machine Creation Services (MCS) differencing disks. MCS differencing disks are virtual hard disks that store desktop changes during Hosted Shared Desktop sessions, and they incur a high number of IOPS. The LSI Nytro cards are specifically configured to accelerate IOPS for the I/O-intensive volumes that contain the MCS differencing disks.
To generate load, we used the Login VSI 3.7 software to simulate multiple users accessing the XenApp 7.5 environment and executing a typical end user workflow. Login VSI 3.7 tracks user experience statistics, looping through specific operations and measuring response times at regular intervals. Collected response times determine VSImax, the maximum number of users the test environment can support before performance degrades consistently. Because baseline response times can vary depending on the virtualization technology used, using a dynamically calculated threshold provides greater accuracy for cross-vendor comparisons. For this reason, Login VSI also reports VSImax Dynamic.
At the start of the testing, we executed performance monitoring scripts to record resource consumption for the hypervisor, virtual desktop, storage, and load generation software. At the beginning of each test run, we took the desktops out of maintenance mode, started the virtual machines, and waited for them to register. The Login VSI launchers then initiated the desktop sessions and began user logins (the ramp-up phase). Once all users were logged in, the steady state portion of the test began in which Login VSI executed the application workload, running applications like Microsoft Office, Internet Explorer (including a Flash video applet), printing, and Adobe Acrobat Reader.
The testing captured resource metrics during the entire workload lifecycle — XenApp virtual machine boot, user logon and desktop acquisition (ramp-up), user workload execution (steady state), and user logoff. Each test cycle was not considered passing unless all test users completed the ramp-up and steady state phases and all metrics were within permissible thresholds.
Two test phases were conducted:
- Finding the recommended maximum density for a single physical server. This phase validated single-server scalability under a maximum recommended density with the RDS load. The maximum recommended load for a single server occurs when CPU or memory utilization peaks at 90-95% and the end user response times remain below 4000ms. This phase was used to determine the server N+1 count for the solution.
- Validating the solution at full scale. This phase validated multiple server scalability using the full test configuration.
The first phase was executed under the Login VSI Medium workload and then the Light workload to identify VSImax for each workload type. The validation phase was executed using the Medium workload only.
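As a rough sketch, the single-server acceptance criteria described above can be expressed as a simple check. The 90-95% utilization band and the 4000 ms response ceiling come from the text; the function name and exact comparisons are our illustration:

```python
def within_recommended_max(cpu_peak_pct: float,
                           mem_peak_pct: float,
                           worst_response_ms: float) -> bool:
    """Return True if a single-server run stays inside the recommended
    maximum density: CPU and memory peaking no higher than the 90-95%
    band, with end user response times below 4000 ms."""
    return (cpu_peak_pct <= 95.0
            and mem_peak_pct <= 95.0
            and worst_response_ms < 4000.0)

# A run peaking at 93% CPU, 78% memory, 3.1 s worst response passes:
print(within_recommended_max(93.0, 78.0, 3100.0))  # True
# A run that breaches the response-time ceiling fails:
print(within_recommended_max(88.0, 70.0, 4500.0))  # False
```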
Phase 1: Single Server Recommended Maximum Density
We first tested different combinations of XenApp 7.5 server VMs and virtual CPU (vCPU) combinations, finding that the best performance was achieved when the number of vCPUs assigned to the VMs did not exceed the number of hyper-threaded cores available on the server. (In other words, not overcommitting CPU resources provides the best user experience.) For the Intel E5-2697v2 processors, 24 cores with hyper-threading equates to 48 vCPUs. The highest density was observed at eight XenApp VMs per physical server, with each VM configured with five vCPUs and 24GB RAM.
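The no-overcommit rule above is easy to sanity-check in code. The core counts and VM sizing are taken from the text; the helper names are ours:

```python
def hyperthreaded_vcpus(physical_cores: int) -> int:
    """With hyper-threading, each physical core exposes two logical CPUs."""
    return physical_cores * 2

def is_overcommitted(vm_count: int, vcpus_per_vm: int,
                     physical_cores: int) -> bool:
    """True if total assigned vCPUs exceed the server's hyper-threaded count."""
    return vm_count * vcpus_per_vm > hyperthreaded_vcpus(physical_cores)

# Dual E5-2697v2 processors give 24 cores -> 48 vCPUs with hyper-threading.
# Eight XenApp VMs at five vCPUs each use 40 of the 48 available, so
# CPU resources are not overcommitted:
print(hyperthreaded_vcpus(24))     # 48
print(is_overcommitted(8, 5, 24))  # False
```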
The first test sequence determined VSImax for each workload on a single server, indicating the density a single server can support before the end user experience degrades. Based on this value, we added one additional server to the total number of physical servers needed, so that the full-scale configuration achieves optimal performance under normal operating conditions and provides N+1 server fault tolerance.
Medium Workload: Single Server Recommended Maximum Density
For the single server Medium Workload, guided by VSImax scores, we determined that 250 user sessions per host gave us optimal end user experience and good resource utilization. Figures 3 and 4 show end user response times and CPU utilization metrics for the Medium workload.
Figure 3. Single Server, Medium Workload, End User Response Times at 250 Sessions
Figure 4. Single Server, Medium Workload, CPU Utilization
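Combining the 250-session single-server density with the N+1 approach described earlier yields the three-server count used in this solution. A minimal sketch (the function name is ours):

```python
import math

def servers_needed(total_sessions: int, sessions_per_server: int) -> int:
    """Servers required to host the load, plus one spare for N+1 tolerance."""
    return math.ceil(total_sessions / sessions_per_server) + 1

# 500 Medium-workload sessions at 250 per server -> 2 active + 1 spare = 3.
print(servers_needed(500, 250))  # 3
```

The same formula also covers the Light workload: 600 sessions at 325 per host still needs two active servers plus the N+1 spare.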
Light Workload: Single Server Recommended Maximum Density
For the single server Light Workload, we determined that 325 user sessions per host gave us optimal end user experience and good server utilization metrics. Figures 5 and 6 show end user response times and CPU utilization metrics for the Light workload.
Figure 5. Single Server, Light Workload, End User Response Times at 325 Sessions
Figure 6. Single Server, Light Workload, CPU Utilization
Phase 2: Full-Scale Configuration Testing
Using all three Cisco UCS C240 M3 Rack Servers, we performed 500-session Login VSI Medium Workload tests to validate the solution at scale, which provided excellent results. The Login VSI Index Average and Average Response times tracked well below 2 seconds throughout the run (Figure 7), indicating an outstanding end user experience throughout the test.
Figure 7. Full-Scale Configuration, Medium Workload, End User Response Times at 500 Sessions
Figures 8 through 13 show performance data for one of the three Cisco UCS C240 M3 servers in the full configuration test. The graphs are representative of data collected for all servers in the three-server test.
Figure 8. Full-Scale Configuration, Medium Workload, CPU Utilization
Figure 9. Full-Scale Configuration, Medium Workload, IOPS
Figure 10. Full-Scale Configuration, Medium Workload, IO Throughput (Mbps)
Figure 11. Full-Scale Configuration, Medium Workload, IO Wait
Figure 12. Full-Scale Configuration, Medium Workload, IO Latency
Figure 13. Full-Scale Configuration, Medium Workload, IO Ave. Queue Length
What about XenDesktop?
Given the same hardware configuration, are you curious how well XenDesktop performs with Windows 7 virtual desktops? Or perhaps a 500-seat deployment is initially too much and you just want to “kick the tires” with a single UCS server. In either case, here’s a 200-seat XenDesktop reference architecture that uses the same server specifications and configuration as the 500-seat XenApp configuration discussed above: Deploy 200 Citrix XenDesktop 7.1 Hosted Virtual Desktops on Cisco UCS C240 M3 Rack Server with LSI Nytro MegaRAID and SAS Drives.
Desktop virtualization is an efficient way to deliver the latest Microsoft Windows OS and applications not only to traditional client PCs, but also to the user’s choice of mobile device types. At the same time, desktop virtualization centralizes and protects corporate data and intellectual property, simplifying desktop and OS management. Until now, it’s been difficult for small to medium-sized organizations to realize these advantages because of the complexity and up-front costs associated with building out a pilot or entry-level configuration.
Because this low-cost configuration enables a 100% self-contained solution, it overcomes previous obstacles to deploying desktop virtualization in small business or branch office settings. The architecture provides an extremely easy-to-deploy, fault tolerant, Cisco UCS-managed infrastructure for Citrix XenApp 7.5 hosted shared desktops. For many, the solution greatly simplifies the entry point into desktop virtualization, making it easier to build out and manage a 500-seat standalone deployment.
To read more about the 500-seat XenApp 7.5 reference architecture and the validation testing, see the full white paper: Reference Architecture for 500-Seat Citrix XenApp 7.5 Deployment on Cisco UCS C240-M3 Rack Servers with On-Board SAS Storage and LSI Nytro MegaRAID Controller.
— Frank Anderson, Senior Solutions Architect, Cisco Systems, Inc.
Tags: 500 Seats, C240 M3 Rack Server, Citrix XenApp, Citrix XenDesktop, desktop virtualization solution, reference architecture, UCS, VDI architectures
Security threats evolve every day as cyber attackers continue to exploit gaps in basic security controls. In fact, the federal government alone has experienced a 680% increase in cybersecurity breaches in the past six years, and cybersecurity attacks against the U.S. average 117 per day. Globally, the estimated annual cost of cybercrime is over $100 billion. Often, even when security breaches are identified, it can be extremely difficult to figure out how they happened or who is responsible.
One company working hard to prevent these threats is Solutionary, a managed security services provider (MSSP) that actively monitors their customers’ technology systems in order to identify and thwart security events before any negative impacts occur.
To provide real-time analytics of client traffic and user activity, Solutionary, a wholly owned subsidiary of NTT Group, developed the patented Solutionary ActiveGuard® Security and Compliance Platform, which correlates data across global threats and trends to quickly identify security events and provide clients with actionable alerts.
The patented, cloud-based ActiveGuard® Security and Compliance Platform is the technology behind Solutionary Managed Security Services
To keep up with growing data volumes, the need for fast security analytics, and their expanding client base, Solutionary needed a way to scale their infrastructure quickly, because their traditional server infrastructure could not easily scale and support in-depth analysis. Their challenge was to figure out how to:
1) Increase their data analytics capabilities and improve their clients’ security
2) Cost-effectively scale as their clients/data volume grows
When a security threat occurred in the past, the legacy systems could only be used to analyze log data; they couldn’t see the big picture. Thus, when an event happened, it would sometimes take weeks of forensics work to figure out what had occurred. To meet these challenges, Solutionary turned to the MapR Distribution for Hadoop running on the Cisco Unified Computing System™. With Hadoop, Solutionary was able to analyze both structured and unstructured data on a single data infrastructure, instead of relying on a costly traditional database solution that couldn’t pull both types of data into a single platform for analysis.
Cisco UCS Common Platform Architecture for Big Data
Specifically, the Cisco/MapR environment consists of two MapR clusters of 16 Cisco UCS C240 M3 Rack Servers. Solutionary uses the Cisco UCS Manager to provision and control their servers and network resources, while the Cisco UCS 6200 Series Fabric Interconnects provide high-bandwidth connections to servers, and act as centralized management points for the Cisco infrastructure, eliminating the need to manage each element in the environment separately. Because of the environment’s high scalability, it’s easy for the fabric interconnects to support the large number of nodes needed for MapR clusters. Scalability is improved even further by using the Cisco UCS 2200 Series Fabric Extenders to extend the network into each rack.
Cisco UCS Components
With MapR and the Cisco UCS CPA for Big Data environment, Solutionary can now access a much greater amount of data analysis and contextual data, giving them a more informed picture of behavior patterns, anomalous activities, and attack indicators. By quickly identifying global patterns, Solutionary can identify new security threats and put them into context for their clients.
Let me know if you have any comments or questions below, or reach me on Twitter at @CicconeScott.
Tags: Big Data, blade server, blades servers, C240 M3 Rack Server, Cisco UCS, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Hadoop, MapR, rack server, Solutionary, UCS Central, UCS service profiles