
Case Study: Atos Enables Self-Service IT with Cisco Prime Service Catalog

I recently wrote a blog post about Steria, one of our Cisco Prime Service Catalog customers (you can also read their case study here). Today, I’m proud to introduce another case study on an IT service provider customer using Cisco Prime Service Catalog: Atos.

Atos SE (Societas Europaea) is a global leader in IT services with 77,000 employees in 52 countries. Cisco has a strong partnership with Atos in several areas, including data center, cloud, and collaboration, and Atos is a customer of multiple Cisco solutions.

In particular, one division of Atos provides managed services for North American companies. This division offers a broad range of services to its enterprise customers, including new employee onboarding, provisioning of smartphones and tablets, requests for Cisco WebEx accounts, provisioning of physical servers and virtual machines for data center operations, and more.

To meet the IT service needs of their large customer base, Atos needed to speed up the service delivery process and serve more customers without adding IT staff. According to Atos’ manager of process automation, Kert Gilpin, “We measure success by how much we can reduce service requests by email or phone and how quickly we can fulfill requests. To continue growing, we needed to automate IT service requests. We wanted to deliver IT as a Service.”

Customer Case Study: Atos Automates Fulfillment of Service Requests from Cisco Data Center

Now, thanks to Cisco Prime Service Catalog, Atos is serving more customers, faster, with the same-size IT staff. Cisco Prime Service Catalog provides the one-stop shop for Atos customers to request a broad range of IT services (with more than 1,700 service options and configurations). From 2010 through 2013, Atos used the service catalog to process more than 1.5 million IT service requests from its customers – including more than 250,000 approvals for more than 260,000 users.

On the front end, employees at each customer can log into Cisco Prime Service Catalog’s web-based portal for self-service access to their organization’s available services. On the back end, Cisco Prime Service Catalog is integrated with the customer’s existing systems to automate provisioning for each service request. Some of the most commonly requested services in the Atos catalog include:

  • Server setup or decommissioning: Cisco Prime Service Catalog can be integrated with the customers’ data center infrastructure automation tools to enable self-service provisioning. “Before, multiple people had to perform a manual task to provision a physical or virtual server,” Gilpin said. “Now we use Cisco Prime Service Catalog to automate approximately 50 tasks in the workflow, taking different actions depending on the conditions.” (A simplified sketch of this kind of conditional workflow appears after this list.)
  • Distribution of Windows software updates and patches: For this popular service, Atos integrates Cisco Prime Service Catalog with the customer’s Microsoft System Center Configuration Manager (SCCM) server. Employees receive an automated notification when software application upgrades are available. Then they just click to install the upgrade or patch.
  • Employee onboarding services: Through integration between Cisco Prime Service Catalog and their customers’ Oracle and PeopleSoft HR systems, Atos has automated new hire onboarding, transfers, terminations, leaves of absence, name changes, and changes between contractor and employee status.
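To make the first item above more concrete, here is a minimal sketch of what a catalog-driven provisioning workflow with conditional tasks can look like. It is purely illustrative and assumes nothing about Cisco Prime Service Catalog’s internals: the task names and helper functions are hypothetical stubs, and a real workflow would call out to the customer’s automation and ticketing tools rather than local functions.

```python
# Illustrative only: a toy workflow runner that mimics catalog-driven
# provisioning with conditional tasks. Task names and helpers are
# hypothetical, not Cisco Prime Service Catalog APIs.

def provision_server(request):
    """Run an ordered list of tasks, skipping those whose condition is false."""
    tasks = [
        ("validate request",      lambda r: True,                    validate),
        ("reserve IP address",    lambda r: True,                    reserve_ip),
        ("create virtual machine", lambda r: r["type"] == "virtual", create_vm),
        ("image physical host",   lambda r: r["type"] == "physical", image_host),
        ("join Windows domain",   lambda r: r["os"] == "windows",    join_domain),
        ("notify requester",      lambda r: True,                    notify),
    ]
    for name, condition, action in tasks:
        if condition(request):
            print(f"running: {name}")
            action(request)
        else:
            print(f"skipped: {name}")

# Stub actions so the sketch runs end to end.
def validate(r):    pass
def reserve_ip(r):  pass
def create_vm(r):   pass
def image_host(r):  pass
def join_domain(r): pass
def notify(r):      pass

if __name__ == "__main__":
    provision_server({"type": "virtual", "os": "windows"})
```

The point is simply that each step carries its own condition, so a single workflow definition can take different actions for physical versus virtual servers, Windows versus Linux images, and so on, without manual hand-offs.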

This combination of self-service ordering and automation is powerful – with real and tangible benefits.  “Automation means customer requests are fulfilled more quickly,” Gilpin said. “The request is generally complete in minutes, compared to days or weeks when we manually provisioned services. And our IT team now has more time for activities that provide value to our customers.”

You can also read the full Atos case study here, and learn more about Cisco Prime Service Catalog at cisco.com/go/service-catalog.


Finally, a hybrid cloud that makes both users and IT happy!

Two years back, I disparaged hybrid clouds in my blog “Why Hybrid Clouds Look Like my Grandma’s Network”. Since then, the pain of managing many clouds in the business environment, and the necessity of doing so, have become acute. I see a great similarity between hybrid clouds and the Bring Your Own Device (BYOD) phenomenon that is now well accepted in organizations. IT tried to resist it initially, but the consumer movement proliferated into the workplace and was hard to control, so IT had no choice but to follow along.

A similar movement is emerging in the cloud. After Amazon Web Services (AWS) made it simple for application developers to swipe a credit card, buy compute, and get up and running in a jiffy, the addiction has been hard to stop. Enterprise stakeholders are consuming cloud infrastructure by the hour, running up total costs for their organizations and leaving gaping holes in security and compliance. But this time around, IT has an opportunity to get ahead of the phenomenon.


Challenges with existing hybrid cloud approaches:

Vendor lock-in: It is hard to argue against the flexibility offered by public clouds. However, few realize that this flexibility comes at the cost of vendor lock-in: public cloud APIs are typically proprietary, and moving workloads back out is almost impossible.

Skyrocketing costs: Granted, public cloud vendors have been driving down prices. Still, using public cloud for routine application deployments is like relying on a rental car for the long term. If you need a car temporarily, say during a vacation, it makes sense to rent by the day; for the everyday commute back home, a rental will run up costs. This is what enterprises run into as public cloud charges for resources and bandwidth add up, and it is hard to get out once you are locked into the operational practices and workload customization of your favorite cloud.

Security and compliance holes: Security, what security? When you don’t even know which workloads are running in public clouds, and you have no control over who accesses them or how, the size of the security and compliance hole speaks for itself.

The Solution: Embrace Bring Your Own Cloud (BYOC), build hybrid clouds with Intercloud Fabric

Now that we agree that there’s no way around folks bringing their own clouds, IT needs to provide choice to users while driving consistency, control, and compliance for its own sake. Here’s how Intercloud Fabric makes this possible:

Choice: Intercloud Fabric enables IT to support a number of clouds, from the giant public clouds (Amazon, Azure) to the customer’s preferred cloud provider, including Cisco Powered clouds.

Consistency: Although users get a choice of clouds, IT can maintain consistency in networking, security, and operations. This is made possible by seamless workload portability across clouds, say from vSphere to AWS, while maintaining enterprise IP addressing and security profiles.

Compliance: Because public clouds appear as an extension of the enterprise data center, existing compliance requirements such as logging, change control, and access restrictions continue to be enforced.

Control: IT controls the cloud, in a good way. It doesn’t have to say “no” to end users who want to consume diverse clouds, yet it can still manage those clouds from a single console and move workloads back and forth.

Seem too good to be true?

See how cloud providers and business customers are getting ready to do it in the replay of our recent webcast, Securely Moving Workloads Between Clouds with Cisco InterCloud Fabric.

Also, if you are at Gigaom Structure in San Francisco this week, you can see the solution in action and get further insights in our workshop on Intercloud Fabric.

 


Cisco’s Data Center and Cloud Management Software at Cisco Live San Francisco #CLUS

We’re excited to showcase new innovations in our data center infrastructure automation and cloud management software at Cisco Live San Francisco this week!


This past Friday, Cisco announced the new Cisco UCS Director version 5.0 – including Application Centric Infrastructure (ACI) support and integration with the Cisco Application Policy Infrastructure Controller (APIC). Cisco customers, prospective customers, and partners in the data center market won’t want to miss these two Cisco Live breakout sessions that will showcase this major new release on Thursday:

Cisco UCS Director 5.0
PSODCT-1004
Thursday, May 22
8:30 to 9:30am

Management and Automation of Application Centric Infrastructure (ACI) with Cisco UCS Director
BRKACI-2410
Thursday, May 22
12:30 to 2:00pm

In addition, we’ll be featuring live demos of UCS Director 5.0 all week in the Cisco Live World of Solutions expo. Here are some of the Cisco UCS Director + ACI demos you don’t want to miss:

Read More »


How Data Virtualization Helps Data Scientists

By now it is clear that big data analytics opens the door to unprecedented analytic opportunities for business innovation, customer retention and profit growth. However, a shortage of data scientists is creating a bottleneck as organizations move from early big data experiments into larger scale adoption. This constraint limits big data analytics and the positive business outcomes that could be achieved.

Click on the photo to hear from Comcast’s Jason Hull, Data Integration Specialist, about how his team uses data virtualization to get what they need done, faster.

It’s All About the Data

As every data scientist will tell you, the key to analytics is data. The more data the better, including big data as well as the myriad other data sources both in the enterprise and across the cloud. But accessing and massaging this data, in advance of data modeling and statistical analysis, typically consumes 50% or more of any new analytic development effort.

• What would happen if we could simplify the data aspect of the work?
• Would that free up data scientists to spend more time on analysis?
• Would it open the door for non-data scientists to contribute to analytic projects?

SQL is the key. Because of its ease and power, it has been the predominant method for accessing and massaging data for the past 30 years. Nearly all non-data scientists in IT can use SQL to access and massage data, but very few know MapReduce, the programming model traditionally used to access data in Hadoop.
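To make the contrast concrete, here is a minimal sketch of the kind of query a SQL-literate analyst can run against Hadoop, assuming a HiveServer2 endpoint and the open-source PyHive client; the host, database, and table names are made up, and this is not Cisco-specific code. The same aggregation written as a raw MapReduce job would need a mapper, a reducer, job configuration, and a compile-and-deploy cycle.

```python
# A minimal sketch: a familiar SQL aggregation run over Hadoop via
# HiveServer2, using the open-source PyHive client. Host, database,
# and table names are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hadoop-gateway.example.com", port=10000,
                       database="clickstream")
cursor = conn.cursor()

# One declarative statement replaces a hand-written MapReduce job
# (mapper emitting (page, 1), reducer summing counts, plus job setup).
cursor.execute("""
    SELECT page, COUNT(*) AS views
    FROM   web_events
    WHERE  event_date >= '2014-01-01'
    GROUP  BY page
    ORDER  BY views DESC
    LIMIT  10
""")

for page, views in cursor.fetchall():
    print(page, views)
```

Data virtualization takes this a step further by letting the same SQL also join Hadoop data with enterprise and cloud sources, as described below.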

How Data Virtualization Helps

“We have a multitude of users…from BI to operational reporting, they are constantly coming to us requesting access to one server or another…we now have that one central place to say ‘you already have access to it’ and they immediately have access rather than having to grant access outside of the tool” -Jason Hull, Comcast

Data virtualization offerings, like Cisco’s, can help organizations bridge this gap and accelerate their big data analytics efforts. Cisco was the first data virtualization vendor to support Hadoop integration with its June 2011 release. This standardized SQL approach augments specialized MapReduce coding of Hadoop queries. By simplifying access to Hadoop data, organizations could for the first time use SQL to include big data sources, as well as enterprise, cloud and other data sources, in their analytics.

In February 2012, Cisco became the first data virtualization vendor to enable MapReduce programs to easily query virtualized data sources, on-demand with high performance. This allowed enterprises to extend MapReduce analyses beyond Hadoop stores to include diverse enterprise data previously integrated by the Cisco Information Server.

In 2013, Cisco maintained its big data integration leadership by updating its support for Hive access to the leading Hadoop distributions, including Apache Hadoop, Cloudera CDH, and the Hortonworks Data Platform (HDP). In addition, Cisco now also supports access to Hadoop through HiveServer2 and to Cloudera CDH through Impala.

Others, beyond Cisco, recognize this beneficial trend. In fact, Rick van der Lans, noted Data Virtualization expert and author, recently blogged on future developments in this area in Convergence of Data Virtualization and SQL-on-Hadoop Engines.

So if your organization’s big data efforts are slowed by a shortage of data scientists, consider data virtualization as a way to break the bottleneck.


New UCS Servers deliver innovative scaling options and record-breaking power

February 18, 2014 at 10:31 am PST

This week we’re announcing new systems at the upper end of the UCS server product line: some heavy-duty iron for heavy-duty times.   These are important new tools for our UCS customers:  the digital age is accelerating, IT needs more horsepower to keep up, and there is a lot at stake.

Cisco UCS Servers with Intel® Xeon® Processor E7 V2 Deliver Unmatched Customer Benefits from Cisco Data Center

Consider this: less than 10 years ago, some of the largest mainframes scaled up to half a terabyte (TB) of main memory.  What if I were to tell you that these latest generation UCS blade servers will scale to 3TB?   Sound like a lot?  It is.  And that’s just the two-processor version.   Connect two UCS B260 M4 blades with an expansion connector and they become a UCS B460 M4, a four socket server that will scale to 6TB.  Putting that into perspective: a spiffy new laptop might ship today with 8GB of memory.   Multiply that by 750 and you have 6TB.

Not too long ago, all the content of Wikipedia would fit in this type of footprint (in 2010 it was just under 6TB with media). Here is a fun illustration of what this scale of data would look like on paper (just the ~10GB of text, not the images). Now remember, we’re not talking about fitting all that data on the local disks of the server – we’re talking about fitting it in main memory. This is becoming crucially important in the field of data analytics, where “in-memory” is the key to speed and competitiveness. Applications like SAP HANA are at the forefront of this trend. Today, at Intel’s launch event in San Francisco, Dan Morales (Vice President of Enabling Functions at eBay) joined us to talk about how they’re betting on this type of analytic technology to help them make the eBay Marketplace work better for buyers and sellers (and eBay shareholders). I’ll post a video clip of that soon; his description of the challenges and opportunities, at eBay scale, is worth a watch.

We’ve talked about memory scaling, and Bruno Messina has a nice post with more on the scalability of these systems and of UCS at large. But dominating performance is the name of the game: behemoth processing performance is what we look for at this end of the server spectrum, and Intel has not disappointed with this round of new technology. The next generation of the Intel Xeon E7 family packs up to 15 cores per processor and delivers, on average, a 2x performance increase over the previous generation. Gains will be even higher on specific workloads, for example up to 3x on database and even more for virtualization. Cisco’s implementation of this technology has once again set the standard for system performance. In today’s launch, Intel cited Cisco with 6 industry-leading results on key workloads. As of this posting, the closest anyone else came to that achievement was Dell with 4; HP ProLiant posted 1. So hats off, once again, to the engineering team in Cisco’s Computing Systems Product Group. Girish Kulkarni has a great summary of the performance news here.

 

New UCS Mission Critical Servers from Cisco Data Center

 

Our collaboration with Intel is one of the best technology combinations in the industry today.  Consider what we both bring to the party.  Intel: innovation in processor technology that drives Moore’s Law.  Cisco: innovation in connecting things across the data center and around the world.  UCS is an outcome of two blue-chip tech powerhouses investing in real innovation and the results have changed the industry.

In 1991, Stewart Alsop famously wrote:  “I predict that the last mainframe will be unplugged on 15 March 1996.”  He just as famously had to eat his words.  He munched on those twelve years ago, and while mainframes and RISC-based systems remain, there is an inexorable trend as the heaviest analytic workloads continue to shift to the type of scale-up x86-based systems we’re talking about today.   It only makes sense.  So while this will garner me plenty of comments from the architectural purists out there, I say “go ahead and plug a mainframe back in.”  It will fit right in your UCS B-Series blade chassis…

 
