Each week, we’ll highlight the most important Cisco Partner Ecosystem news and stories, and point you to Cisco-related partner content you may have missed along the way. Here’s a look at this week:
Off the Top
The Channel Company Presents Annual Best of Breed Event in Orlando
Our own David Durham was feeling nostalgic and decided to stop by the Partner blog this week. The reason? He was covering the Best of Breed Conference in Orlando.
Cisco CEO Chuck Robbins was there to deliver a keynote, field questions, and chat with partners.
There was plenty more to keep David busy throughout the three-day event that kicked off Monday, but don’t take my word for it: check out David’s highlights, and be sure to say hi in the comments.
The Power Of Partnering: Microsoft Recognizes FlexPod Partner Innovation
We always love having new bloggers step in and contribute. This week it was Ed Cho’s turn to make his debut. In his post, he talks about five years of partnering with NetApp, and some important updates around Microsoft Private Cloud with FlexPod and SQL Server 2005 migration.
He also takes a second to look back on some memorable Microsoft Partner of the Year Awards. Make sure you check out his blog and drop him a welcome note or provide feedback in the comments.
As always, let us know what you think of the blog. Feedback from partners, especially around partner programs, is vital for Cisco to keep producing programs that work for all of us.
Tags: #Bob15, chuck robbins, David Durham, FlexPod, Microsoft, netapp, rick snyder, SQL
Next in our Why I Love Big Data series is Bruce from MapR. Together, Cisco and MapR are working on a very cool solution for keeping data local while still accessing it very quickly. Also, come by the Connected Banking stand in the Cisco Live World of Solutions and DevNet area to see a demo of the distributed system. You will see how Cisco and MapR can provide security solutions that help prevent theft of customers’ personal data and financial information.
Bruce Penn, Principal Solution Architect, MapR Technologies
Bruce is a Principal Solution Architect with MapR Technologies. He has over 22 years of Information Technology experience that includes Data Warehousing, Business Intelligence, Enterprise Architecture, Systems Design, Project Management and Application Programming. Prior to MapR, Bruce spent 8.5 years at Oracle and was instrumental in helping grow the Oracle Exadata Database Machine business through extensive collaboration with several large enterprise customers. Bruce was the first Solution Architect to join MapR’s Sales Engineering team and has been solely focused on the MapR Distribution for Hadoop and associated Apache Hadoop ecosystem technologies ever since. Bruce holds a Bachelor’s Degree in Electrical Engineering from Michigan State University.
Cisco and MapR have long been partners in the big data market, and with enterprises embracing the Internet of Everything (IoE) and moving towards a truly distributed data center environment, the combination of UCS and MapR provides unique capabilities to simplify this architecture.
Cisco UCS servers provide a powerful foundation for running distributed big data/Hadoop MapR clusters with unparalleled performance, availability, and manageability at the hardware level. The MapR Distribution including Apache Hadoop provides similar robustness at the software level, creating a rock-solid distributed platform for many flavors of IoE applications.
With the advent of IoE applications, data often originates at the “edge” of the network: routers, switches, and other devices in each data center generate log data locally, creating silos of log data. Applications built around this log data need to access it as quickly as possible in order to react in real time, and they often need to aggregate data across data centers to make decisions quickly, while still keeping the data local to the originating data center. Keeping data local can be important for legal and regulatory reasons, as well as for efficient local queries. With Cisco UCS servers, MapR Data Placement Control, and Apache Drill, this becomes a simple task.
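The store-locally, aggregate-globally pattern described above can be sketched in a few lines of plain Python. This is only an illustration of the idea; the data center names and log events are invented, and it does not use MapR’s or Drill’s actual APIs:

```python
from collections import Counter

# Hypothetical per-data-center log stores. In the real architecture, each
# list would live only in its originating data center (MapR Data Placement
# Control pins the raw data to local nodes).
dc_logs = {
    "dc-east": ["login_fail", "login_ok", "login_fail"],
    "dc-west": ["login_ok", "login_ok", "login_fail"],
}

def local_summary(events):
    """Runs inside one data center: raw logs never leave the site."""
    return Counter(events)

def global_view(per_dc_summaries):
    """Runs centrally: only small summaries cross data-center links."""
    total = Counter()
    for summary in per_dc_summaries.values():
        total += summary
    return total

summaries = {dc: local_summary(events) for dc, events in dc_logs.items()}
print(global_view(summaries))  # aggregate view without moving raw log data
```

The design point is that only compact per-site summaries travel between data centers, which is what lets an application react quickly while the raw logs stay where they were generated.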
Tags: analytics, BigData, Cisco, CiscoUCS, MapR, SQL, WhyILoveBigData
SQL Server 2005 reaches end of support on April 12, 2016. Many of our customers agree that it’s time to think about migrating or upgrading to something better and faster. If you are still using SQL Server 2005, here are some points to consider.
SQL Server 2014 New Features This is a major upgrade packed with new features: SQL Server 2014 includes In-Memory online transaction processing (OLTP), updatable columnstore indexes, and AlwaysOn Availability Groups.
Cisco Unified Computing System (UCS) So why should you consider Cisco UCS to take advantage of these SQL Server 2014 features? Let’s start with performance.
Cisco UCS C460 M4 running Microsoft SQL Server 2014 outperformed:
- Fujitsu SPARC M10-4S by 80 percent
- Dell PowerEdge R820 by 31 percent
- IBM x3859 by 13 percent
Why are we so good?
- Optimize OLTP workloads
The new OLTP engine delivers high-performance, low-latency data access, and Cisco UCS offers the high memory capacities you need to take full advantage of SQL Server’s In-Memory OLTP engine. Progressive Insurance used Cisco UCS with In-Memory OLTP and saw a 4x performance gain: a 320 percent increase in processing rate, from 5,000 transactions per second to 21,000.
- Optimize BI and Data Warehousing workloads
Columnstore indexes provide a significant performance improvement for data warehouse queries, and because they are now updatable, you no longer need to drop and recreate an index when making changes. We have seen up to 10x better performance, and such workloads are increasing tenfold every five years. With their high memory support, Cisco UCS Blade Servers provide up to 3 TB of RAM and Cisco UCS Rack Servers up to 6 TB, giving these growing workloads room to run in memory.
- Maximizing Availability
Cisco UCS service profiles and the stateless architecture allow a SQL Server workload to be brought back into production in 5 to 7 minutes, whether it is virtualized or bare metal. Children’s Hospital Colorado leveraged service profiles to provision 15 servers in one day; without them, configuring new servers can take weeks.
SQL Server’s AlwaysOn Availability Groups protect against unplanned downtime, and during a failover, Cisco Unified Fabric ensures the connection between the primary and secondary replicas gets the bandwidth it needs.
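The columnstore advantage mentioned above comes from the storage layout itself. A toy Python sketch (purely illustrative, with invented data, not SQL Server internals) shows why an aggregate query that touches one column does less work against a columnar layout:

```python
# The same table stored row-wise and column-wise. A warehouse query like
# SUM(amount) must read every field of every row in a row store, but only
# one compact array in a column store -- the core reason columnstore
# indexes speed up scan-heavy queries.
rows = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "AMER", "amount": 75.5},
    {"order_id": 3, "region": "APAC", "amount": 230.0},
]

# Columnar layout: one array per column.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "region":   [r["region"] for r in rows],
    "amount":   [r["amount"] for r in rows],
}

row_store_total = sum(r["amount"] for r in rows)  # touches whole rows
column_store_total = sum(columns["amount"])       # touches one column only
assert row_store_total == column_store_total == 425.5
```

Real columnstore indexes add compression and batch-mode execution on top of this layout, but the scan-only-what-you-need idea is the same.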
Tags: #SQLServer, Cisco, CiscoUCS, Microsoft, SQL
Of the more than 300 SQLSaturday events held around the world, I am lucky enough to represent Cisco at the one in Barcelona on October 25. If you’re attending TechEd Europe, we encourage you to also join us at this free one-day event for IT professionals to learn more about SQL Server and the Cisco Unified Computing System (UCS).
If my experience at a recent SQL event in San Diego is any indication, it is going to be a great event. I was amazed that, even after UCS was recognized as the #1 x86 blade server in the Americas, many database administrators still came to our table and asked, “What is Cisco doing here at a SQL Saturday event?” The good news is that these same people left understanding how UCS differs from our competitors and how it can help simplify, standardize, and optimize SQL Server deployments.
In San Diego, 80 people showed up for Cisco consulting systems engineer Aaron Rigney’s presentation, “Optimize Your SQL Server 2014 Workloads with Cisco UCS,” which covered the following key benefits:
- Quickly and easily deploy new SQL Servers in minutes
- Proactively manage and automate your SQL Server workloads with Microsoft System Center 2012 R2 and PowerShell integrations with UCS Manager
- Achieve the highest levels of consolidation and performance for both virtual and bare-metal implementations
Tags: @ciscoDC, Cisco, CiscoUCS, Integrated infrastructure, Microsoft SQL Server, Microsoft SQL Server2014, SQL, SQL Server, ucs integrated infrastructure
By now it is clear that big data analytics opens the door to unprecedented opportunities for business innovation, customer retention, and profit growth. However, a shortage of data scientists is creating a bottleneck as organizations move from early big data experiments into larger-scale adoption. This constraint limits big data analytics and the positive business outcomes that could be achieved.
Click on the photo to hear Comcast data integration specialist Jason Hull explain how his team uses data virtualization to get what they need done, faster.
It’s All About the Data
As every data scientist will tell you, the key to analytics is data. The more data the better, including big data as well as the myriad other data sources both in the enterprise and across the cloud. But accessing and massaging this data, in advance of data modeling and statistical analysis, typically consumes 50% or more of any new analytic development effort.
• What would happen if we could simplify the data aspect of the work?
• Would that free up data scientists to spend more time on analysis?
• Would it open the door for non-data scientists to contribute to analytic projects?
SQL is the key. Because of its ease and power, it has been the predominant method of accessing and massaging data for the past 30 years. Nearly all non-data scientists in IT can use SQL for this work, but very few know MapReduce, the programming model traditionally used to access data in Hadoop.
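To make the contrast concrete, here is the same aggregation written once as a single declarative SQL statement and once as hand-coded map, shuffle, and reduce steps. This is a minimal sketch using Python’s built-in sqlite3 module as a stand-in SQL engine (not Hive or Drill), with invented data:

```python
import sqlite3
from collections import defaultdict

clicks = [("alice", 3), ("bob", 1), ("alice", 2), ("bob", 4)]

# SQL version: one declarative statement does the whole job.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clicks (user TEXT, n INTEGER)")
db.executemany("INSERT INTO clicks VALUES (?, ?)", clicks)
sql_result = dict(db.execute("SELECT user, SUM(n) FROM clicks GROUP BY user"))

# MapReduce version: the same query, spelled out step by step.
mapped = [(user, n) for user, n in clicks]   # map: emit (key, value) pairs
groups = defaultdict(list)
for user, n in mapped:                       # shuffle: group values by key
    groups[user].append(n)
mr_result = {user: sum(ns) for user, ns in groups.items()}  # reduce

assert sql_result == mr_result == {"alice": 5, "bob": 5}
```

Both paths produce the same answer, but the SQL form is one line that nearly any IT professional can read and write, which is exactly the gap data virtualization closes over Hadoop sources.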
How Data Virtualization Helps
“We have a multitude of users…from BI to operational reporting, they are constantly coming to us requesting access to one server or another…we now have that one central place to say ‘you already have access to it’ and they immediately have access rather than having to grant access outside of the tool” -Jason Hull, Comcast
Data virtualization offerings, like Cisco’s, can help organizations bridge this gap and accelerate their big data analytics efforts. Cisco was the first data virtualization vendor to support Hadoop integration with its June 2011 release. This standardized SQL approach augments specialized MapReduce coding of Hadoop queries. By simplifying access to Hadoop data, organizations could for the first time use SQL to include big data sources, as well as enterprise, cloud and other data sources, in their analytics.
In February 2012, Cisco became the first data virtualization vendor to enable MapReduce programs to easily query virtualized data sources, on-demand with high performance. This allowed enterprises to extend MapReduce analyses beyond Hadoop stores to include diverse enterprise data previously integrated by the Cisco Information Server.
In 2013, Cisco maintained its big data integration leadership by updating its support for Hive access to the leading Hadoop distributions, including Apache Hadoop, Cloudera (CDH), and Hortonworks (HDP). In addition, Cisco now supports access to Hadoop through HiveServer2 and to Cloudera CDH through Impala.
Others, beyond Cisco, recognize this beneficial trend. In fact, Rick van der Lans, noted Data Virtualization expert and author, recently blogged on future developments in this area in Convergence of Data Virtualization and SQL-on-Hadoop Engines.
So if your organization’s big data efforts are slowed by a shortage of data scientists, consider data virtualization as a way to break the bottleneck.
Tags: apache, Big Data, Cisco Data Center, Cisco Data virtualization, Cloudera, Composite Software, data integration, data virtualization, Hadoop, HiveServer2, Hortonworks, mapreduce, query, SQL, video