Guest post from Dan Swart
Dan Swart is a Senior Manager in Cisco Technical Services Product Management, leading the team responsible for Enterprise and Data Center Solution Support services. He has also been heavily involved in Data Center Alliance programs and Converged Infrastructures. Dan holds Bachelor of Science degrees in Zoology and Electrical Engineering from North Carolina State University.
We want what we want when we want it. That has never been truer than today, with a global marketplace of technology vendors vying to deliver now practically required solutions like enterprise cloud.
While it’s practically impossible today to deploy an enterprise cloud using products created by a single vendor, would we want it any other way? Yes, there are major component manufacturers that can sell most of the products needed to build an enterprise cloud, but the restrictions inherent in those offerings, and the margin stacking required to single-source all needed hardware and software from one manufacturer, may limit the attractiveness of those options.
Most of the customers we work with want to build their enterprise cloud using products that are “best for my needs” rather than products that are what a single manufacturer offers. Along with that, enterprise license agreements, volume purchase agreements and other factors make it difficult to purchase a cloud infrastructure from a single source. For those reasons and others, most enterprise cloud deployments are inherently multivendor.
So great, you get exactly what you want and need. What could go wrong? Famous last words.
Tags: cloud, enterprise cloud, Multiservice Data Center, private cloud, Solution Support
Over the past few weeks, I’ve shared how we are helping our customers address one of their toughest challenges brought on by the Internet of Everything (IoE), Big Data, and hybrid IT environments: effectively managing massive amounts of data, of many types, spread across many locations. With solutions like Data Virtualization, Big Data Warehouse Expansion, and Cisco Tidal Enterprise Scheduler, we give our customers the tools to address this challenge head on.
Once you have access to all of your data…what next? The second challenge is to extract valuable information from data in real time in order to make better business decisions. As I’ve said before, more data is only a good thing if you use that data to better respond to opportunities and potential threats. Our customers certainly understand this and, in a recent Cisco study, 40% of surveyed companies identified effectively capturing, storing, and analyzing data generated by connected “things” (e.g., machines, devices, equipment) as the biggest challenge to realizing the value of IoT.
The majority of data analysis has historically been performed after moving all data into a centralized repository, but digital enterprises will have so many connections creating so much widely distributed data that moving it all to a central place for analysis will no longer be the optimal approach. For insights needed in real-time, or data sets that are too large to move, the ability to perform analytics at the edge will be a new capability that must be incorporated into any comprehensive analytics strategy.
Analytics 1.0 was all about structured data, in centralized data repositories. Analytics 2.0 added unstructured data and gave rise to Big Data. Analytics 3.0 will require all of those existing capabilities but will also require data management and analytics capabilities closer to where the data is created…at the edge of the network.
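The edge-analytics idea above can be illustrated with a minimal sketch. This is a purely illustrative example (the function, threshold, and data are invented for this post, not part of any Cisco product): instead of shipping every raw reading to a central repository, an edge node pre-aggregates locally and forwards only a compact summary plus the anomalies that need immediate attention.

```python
from statistics import mean

def summarize_at_edge(readings, threshold):
    """Pre-aggregate raw sensor readings at the edge of the network.

    Rather than moving every reading to a central store, forward a
    compact summary plus only the anomalous values worth escalating.
    """
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),          # how many raw readings we saw
        "mean": mean(readings),          # local aggregate
        "max": max(readings),
        "anomalies": anomalies,          # raw values that need attention now
    }

# A batch of six temperature readings from one hypothetical edge sensor
batch = [20.1, 20.3, 19.8, 35.2, 20.0, 20.2]
result = summarize_at_edge(batch, threshold=30.0)
# Only the summary (and the single anomaly) crosses the network,
# not all six raw readings.
```

The point of the sketch is the bandwidth asymmetry: the central system still gets the insight it needs, but the volume that moves shrinks from the full data set to a fixed-size summary.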
With this new approach in mind, today we announced Connected Analytics for IoE: a portfolio of packaged, network-enriched analytics offerings that leverage Cisco technologies and data to extract valuable information in real time. The portfolio includes:
- Optimize the fan experience – Connected Analytics for Events monitors Wi-Fi, device and application usage along with social media to deliver insights on fan engagement and business operations.
- Improve store operations and customer service – Connected Analytics for Retail supports analysis of metrics, including customer and operational data in retail environments, to help stores take new steps to assure customer satisfaction and store performance.
- Enhance service quality and customer experience, and unveil opportunities for new business – Connected Analytics for Service Providers provides near real-time operational and customer intelligence from patterns in network, operations, and customer system data.
- Understand how to get the most out of your IT assets – Connected Analytics for IT provides advanced data management, data governance, business intelligence and insights to help align and get the most out of IT capabilities and services.
- Reveal hidden patterns impacting network deployment and optimization – Connected Analytics for Network Deployment analyzes devices, software, and features for inconsistencies that disrupt network operations and provides visualizations and actionable recommendations to prioritize network planning and optimization activities.
- Understand customer patterns in order to meet quality expectations and uncover monetization strategies – Connected Analytics for Mobility analyzes mobile networks to provide network, operations, and business insights for proactive governance to Wi-Fi solution customers.
- Gain a holistic view of customers across data silos – Cisco Connected Analytics for Contact Center delivers actionable customer intelligence to impact behaviors and outcomes during the critical window of customer decision making. Having the right offer at the right time will drive market leadership.
- Measure the impact of collaboration in comparison with best practices – Cisco Connected Analytics for Collaboration measures the adoption of collaboration technologies internally. It leverages data collection using the Unified Communications Audit Tool, from sources such as WebEx, IP Phones, Video, Email and Jabber.
The portfolio also includes Cisco Connected Streaming Analytics, a scalable, real-time platform that combines quick and easy network data collection from a variety of sources with one of the fastest streaming analytics engines in the industry.
In the world of IoE, data is massive, messy, and everywhere, spanning many sources – cloud, data warehouses, devices – and formats – video, voice, text, and images. The power of an intelligent infrastructure is what brings all of this data together, regardless of its location or type. That is the Cisco difference.
Join the Conversation
Follow @MikeFlannagan and @CiscoAnalytics.
Learn More from My Colleagues
Check out the blogs of Mala Anand, Bob Eve and Nicola Villa to learn more.
Tags: analytics, connected analytics, data, IoE, IoT
Read today’s data center news and it’s all about software innovation, Cloud, SDN, the Internet of Everything, Big Data, applications, and so on. One would think the days of hardware innovation are long gone. That’s far from the truth! Software and cloud may be the water-cooler topics of today, but they depend on a highly reliable, high-performance, and efficient hardware infrastructure to run on. So make no mistake: the pace of hardware innovation is alive and well, and it is just as important a topic in today’s conversation.
Just over a year ago, Cisco introduced the Nexus 9000 Series switches. Its industry-leading performance and highest-in-class densities, along with several other industry-leading features, were well publicized. However, you might have missed a key industry-first design feature in the Nexus 9500 that will change how modular chassis are designed in the future. Here’s a short video (50 seconds) that gets you right to the heart of the innovation.
Revolutionizing Modular Switch Design
In most modular switch designs, a backplane or midplane provides connectivity between the line cards and fabric modules. The Nexus 9500 Series is the industry-first switching platform that eliminates the need for a midplane in a modular chassis design (figure 1).
Figure 1. Nexus 9500 Midplane-free Chassis Design
With a precise alignment mechanism, the Nexus 9500 Series line cards and fabric modules attach directly to each other with connecting pins. Line cards and fabric modules have orthogonal orientations (they connect at right angles) in the chassis, so that each fabric module is connected to all line cards and vice versa.
Eliminating the need for a midplane provides several advantages over modular platforms with a midplane:
Power and Cooling Efficiency: A midplane obstructs the front-to-back cooling airflow, requiring cut-outs in the midplane or airflow redirection and reducing cooling efficiency. Without a midplane blocking the airflow path, the Nexus 9500 chassis design delivers up to 15% higher cooling efficiency, allowing fewer or smaller fans. This also enables higher-density, more compact chassis designs.
Increased MTBF: Without a midplane, the switch has fewer components that can fail, increasing the overall mean time between failures (MTBF). Also, with a midplane design, if you bend a pin on the midplane connector while inserting a module, the entire switch must be taken out of service to replace the midplane, or the chassis must be swapped out entirely. With the Nexus 9500, if a fabric connector pin gets bent, the damaged module can be replaced without taking the chassis out of service.
Unrestricted Scale: Midplane chassis designs typically carry an inherent performance limit because they are built with the technology available at design time, which caps future scale. Midplanes are generally designed to support only a couple of next-generation module and fabric iterations, beyond which a chassis upgrade is required. By eliminating the midplane, the Cisco Nexus 9500 Series removes that restriction, allowing the chassis to scale across multiple generations of modules and fabrics, saving capital expense and avoiding data center disruption.
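The MTBF argument above follows from the standard series-reliability model: for components in series, failure rates (1/MTBF) add, so removing a component from the failure path raises system MTBF. A quick sketch with made-up figures (purely illustrative, not actual Nexus 9500 reliability data):

```python
def system_mtbf(component_mtbfs):
    """MTBF of components in series: failure rates (1/MTBF) add,
    so system MTBF is the reciprocal of the summed rates."""
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

# Hypothetical per-component MTBF figures in hours.
with_midplane = [200_000, 150_000, 300_000, 500_000]  # last entry: midplane
without_midplane = [200_000, 150_000, 300_000]        # midplane removed

mtbf_with = system_mtbf(with_midplane)
mtbf_without = system_mtbf(without_midplane)
# Dropping the midplane's failure rate from the sum raises the
# overall system MTBF, even when the midplane itself is reliable.
```

Even a component with a high individual MTBF still adds to the summed failure rate, which is why eliminating it outright improves the number rather than just holding it steady.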
For more about the Nexus 9000 Series innovations and benefits, please visit www.cisco.com/go/nexus9000
Tags: midplane, modular switch, nexus 9500
In this episode of Engineers Unplugged, Tony Harvey (@tonyknowspower) and Craig Sullivan (@craigsullivan70) discuss the role of storage in SAP HANA. How does big data impact you? Watch and learn.
Thinking about unicorns.
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
- Episodes will publish weekly (or as close to it as we can manage)
- Subscribe to the podcast here: engineersunplugged.com
- Follow the #engineersunplugged conversation on Twitter
- Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
- Practice drawing unicorns
Go behind the scenes by liking Engineers Unplugged on Facebook.
Tags: Big Data, netapp, SAP HANA, Storage, UCS
Want to get the most out of your big data? Build an enterprise data hub (EDH).
Big data is rapidly getting bigger. That in itself isn’t a problem. The issue is what Gartner analyst Doug Laney describes as the three Vs of Big Data: volume, velocity, and variety.
Volume refers to the ever-growing amount of data being collected. Velocity is the speed at which the data is being produced and moved through the enterprise information systems. Variety refers to the fact that we’re gathering information from multiple data sources such as sensors, enterprise resource planning (ERP) systems, e-commerce transactions, log files, supply chain info, social media feeds, and the list goes on.
Data warehouses weren’t made to handle this fast-flowing stream of wildly dissimilar data. Using them for this purpose has led to resource-draining, sluggish response times as workers attempt to perform numerous extract, load, and transform (ELT) functions to make stored data accessible and usable for the task at hand.
Constructing Your Hub
An EDH addresses this problem. It serves as a central platform that enables organizations to collect structured, unstructured, and semi-structured data from slews of sources, process it quickly, and make it available throughout the enterprise.
Building an EDH begins with selecting the right technology in three key areas: infrastructure, a foundational system to drive EDH applications, and the data integration platform. Obviously, you want to choose solutions that fit your needs today and allow for future growth. You’ll also want to ensure they are tested and validated to work well together and with your existing technology ecosystem. In this post, we’ll focus on selecting the right hardware.
The Infrastructure Component
Big data deployments must be able to handle continued growth, from both a data and user load perspective. Therefore, the underlying hardware must be architected to run efficiently as a scalable cluster. Important features such as the integration of compute and network, unified management, and fast provisioning all contribute to an elastic, cloud-like infrastructure that’s required for big data workloads. No longer is it satisfactory to stand up independent new applications that result in new silos. Instead, you should plan for a common and consistent architecture to meet all of your workload requirements.
Big data workloads represent a relatively new model for most data centers, but that doesn’t mean best practices must change. Handling a big data workload should be viewed from the same lens as deployments of traditional enterprise applications. As always, you want to standardize on reference architectures, optimize your spending, provision new servers quickly and consistently, and meet the performance requirements of your end users.
Cisco Unified Computing System to Run Your EDH
The Cisco Unified Computing System™ (Cisco UCS®) Integrated Infrastructure for Big Data delivers a highly scalable platform that is proven for enterprise applications like Oracle, SAP, and Microsoft. It also brings the same required enterprise-class capabilities (performance, advanced monitoring, simplified management, QoS guarantees) to big data workloads. With lower switch and cabling infrastructure costs, lower power consumption, and lower cooling requirements, you can realize a 30 percent reduction in total cost of ownership. In addition, with its service profiles, you get fast and consistent time to value by leveraging provisioning templates to quickly set up a new cluster or add many new nodes to an existing cluster.
And when deploying an EDH, the MapR Distribution including Apache™ Hadoop® is especially well suited to take advantage of the compute and I/O bandwidth of Cisco UCS. Cisco and MapR have been working together for the past two years and have developed Cisco Validated Design guides to provide customers the most value for their IT expenditures.
Cisco UCS for Big Data comes in optimized power/performance-based configurations, all of which are tested with the leading big data software distributions. You can customize these configurations further, or use the system as is. Utilizing one of Cisco UCS for Big Data’s pre-configured options goes a long way to ensuring a stress-free deployment. All Cisco UCS solutions also provide a single point of control for managing all computing, networking, and storage resources, for any fine tuning you may do before deployment or as your hub evolves in the future.
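The value of template-driven provisioning can be sketched generically. This is an illustration of the idea behind service profiles, not the actual UCS Manager API: a single template is stamped out into N identical node profiles, so every node a cluster adds gets exactly the same configuration. All names and settings below are invented for the example.

```python
import copy

# Hypothetical node template, in the spirit of a service profile.
NODE_TEMPLATE = {
    "boot_order": ["disk", "pxe"],
    "vnics": [{"name": "eth0", "vlan": 100}, {"name": "eth1", "vlan": 200}],
    "bios": {"numa": True, "cstates_disabled": True},
}

def provision_cluster(prefix, count, template=NODE_TEMPLATE):
    """Stamp one template into N identical node profiles, so every
    cluster node gets the same, consistent configuration."""
    nodes = []
    for i in range(1, count + 1):
        profile = copy.deepcopy(template)   # each node owns its settings
        profile["name"] = f"{prefix}-{i:02d}"
        nodes.append(profile)
    return nodes

# Grow a hypothetical Hadoop cluster by four identical nodes.
cluster = provision_cluster("hadoop-node", 4)
```

The deep copy is the important design choice: each node's profile is independent, so tuning one node later cannot silently change another, while initial provisioning stays consistent across the whole cluster.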
I encourage you to check out the latest Gartner video to hear Satinder Sethi, our VP of Data Center Solutions Engineering and UCS Product Management, share his perspective on how powering your infrastructure is an important component of building an enterprise data hub.
In addition, you can read the MapR Blog, Building an Enterprise Data Hub, Choosing the Foundational Software.
Let me know if you have any comments or questions, here or via Twitter at @CicconeScott.
Tags: Big Data, blade server, blades servers, C240 M3 Rack Server, Cisco UCS, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Enterprise Data Hub, Gartner, Hadoop, MapR, rack server, UCS Central, UCS service profiles