Atos Societas Europaea (SE) is a global leader in IT services, with 77,000 employees in 52 countries. Cisco has a strong partnership with Atos in several areas, including data center, cloud, and collaboration – and Atos is also a customer of multiple Cisco solutions.
One division of Atos provides managed services for North American companies. This division offers its enterprise customers a broad range of services, including new-employee onboarding, provisioning of smartphones and tablets, requests for Cisco WebEx accounts, provisioning of physical servers and virtual machines for data center operations, and more.
To meet the IT service needs of their large customer base, Atos needed to speed up the service delivery process and serve more customers without adding additional IT staff. According to Atos’ manager of process automation, Kert Gilpin, “We measure success by how much we can reduce service requests by email or phone and how quickly we can fulfill requests. To continue growing, we needed to automate IT service requests. We wanted to deliver IT as a Service.”
Now, thanks to Cisco Prime Service Catalog, Atos is serving more customers, faster, with the same size IT staff. Cisco Prime Service Catalog provides the one-stop shop for Atos customers to request a broad range of IT services (with more than 1,700 service options and configurations). From 2010 through 2013, Atos used the service catalog to process more than 1.5 million IT service requests from its customers – including more than 250,000 approvals for more than 260,000 users.
On the front end, employees at each customer can log into Cisco Prime Service Catalog’s web-based portal for self-service access to their organization’s available services. On the back end, Cisco Prime Service Catalog is integrated with the customer’s existing systems to automate provisioning for each service request. Some of the most commonly requested services in the Atos catalog include:
Server setup or decommissioning: Cisco Prime Service Catalog can be integrated with the customers’ data center infrastructure automation tools to enable self-service provisioning. “Before, multiple people had to perform a manual task to provision a physical or virtual server,” Gilpin said. “Now we use Cisco Prime Service Catalog to automate approximately 50 tasks in the workflow, taking different actions depending on the conditions.”
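Atos has not published its actual workflow logic, but the kind of condition-driven task automation Gilpin describes can be sketched in a few lines of Python (the task names and conditions below are purely illustrative, not Atos’ real catalog):

```python
# Hypothetical sketch of a condition-driven provisioning workflow.
# Task names and branching conditions are invented for illustration.

def provision_server(request):
    """Build an ordered task list, branching on request attributes."""
    tasks = ["validate_request", "check_capacity"]
    if request["type"] == "virtual":
        tasks.append("clone_vm_template")      # virtual-server path
    else:
        tasks.append("allocate_blade")         # physical-server path
        tasks.append("install_os_image")
    if request.get("backup"):
        tasks.append("register_backup_agent")  # optional add-on service
    tasks.append("configure_network")
    tasks.append("notify_requester")
    return tasks

# A virtual-server request with backup follows one branch of the workflow:
print(provision_server({"type": "virtual", "backup": True}))
```

A real catalog workflow would dispatch each task to an orchestration tool rather than collect names in a list, but the same branch-on-condition structure scales naturally to the roughly 50 tasks Gilpin mentions.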
Distribution of Windows software updates and patches: For this popular service, Atos integrates Cisco Prime Service Catalog with the customer’s Microsoft System Center Configuration Manager (SCCM) server. Employees receive an automated notification when software application upgrades are available. Then they just click to install the upgrade or patch.
Employee onboarding services: Through integration between Cisco Prime Service Catalog and their customers’ Oracle and PeopleSoft HR systems, Atos has automated new hire onboarding, transfers, terminations, leaves of absence, name changes, and changes between contractor and employee status.
This combination of self-service ordering and automation is powerful – with real and tangible benefits. “Automation means customer requests are fulfilled more quickly,” Gilpin said. “The request is generally complete in minutes, compared to days or weeks when we manually provisioned services. And our IT team now has more time for activities that provide value to our customers.”
Two years back, I disparaged hybrid clouds in my blog post “Why Hybrid Clouds Look Like My Grandma’s Network”. Since then, the pain – and the necessity – of running many clouds in a business environment has become acute. I see a great similarity between hybrid clouds and the Bring Your Own Device (BYOD) phenomenon that has become well accepted in today’s organizations. IT tried to resist BYOD initially, but the consumer movement proliferated into the workplace and was hard to control, so IT had no choice but to follow along.
A similar movement is emerging in Cloud. After Amazon Web Services (AWS) made it simple for application developers to swipe credit cards to buy compute and get up and running in a jiffy, the addiction has been hard to stop. Enterprise stakeholders are consuming cloud infrastructure by the hour and in the process running up total costs for their organizations and leaving gaping holes in security and compliance. But this time around, IT has an opportunity to get ahead of the phenomenon.
Challenges with existing hybrid cloud approaches:
Vendor lock-in: It is hard to argue against the flexibility offered by public clouds. Few realize, however, that this flexibility comes at the cost of vendor lock-in: public cloud APIs are typically proprietary, and moving a workload back out is almost impossible.
Skyrocketing costs: Granted that public cloud vendors have been driving down costs. However, using public cloud for regular application deployments is like using a rental car for long-term use. If you need a car temporarily, say during a vacation, it makes sense to rent it by the day. However, when you are back at home and need a car for everyday commute, using a rental car will run up costs. This is what enterprises are running into when public cloud charges for resources and bandwidth start to add up. However, it is hard to get out once you are locked into operational practices and workload customization in your favorite cloud.
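The rental-car analogy can be made concrete with some back-of-the-envelope arithmetic. The rates below are invented purely for illustration, not actual cloud or data center pricing:

```python
# Illustrative only: both rates are hypothetical, not real pricing.
hourly_cloud_rate = 0.50        # $/hour for a rented cloud instance
owned_server_monthly = 150.00   # amortized monthly cost of owned capacity

hours_per_month = 24 * 30
cloud_monthly = hourly_cloud_rate * hours_per_month

# Renting by the hour wins for short bursts, but an always-on workload
# pays the "rental car" premium every single month.
print(f"always-on cloud: ${cloud_monthly:.2f}/mo vs owned: ${owned_server_monthly:.2f}/mo")
```

With any plausible numbers, the crossover point arrives quickly: hourly pricing is attractive for bursty or temporary workloads, while steady 24×7 workloads accumulate charges that exceed owned capacity.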
Security & Compliance holes: Security, what security? When you don’t even know what workloads are running in public clouds, and you have no control over who accesses them and how, the size of the security and compliance hole speaks for itself.
The Solution: Embrace Bring Your Own Cloud (BYOC), build hybrid clouds with Intercloud Fabric
Now that we agree there’s no way around people bringing their own clouds, IT needs to offer users choice while driving consistency, control, and compliance for its own sake. Here’s how Intercloud Fabric makes this possible:
Choice: Intercloud Fabric enables IT to support a number of clouds, from the giant public clouds (Amazon, Azure) to a favorite cloud provider, including Cisco Powered clouds.
Consistency: Although users get a choice of clouds, IT can maintain consistency in networking, security, and operations. This is made possible by seamless workload portability across clouds – say, from vSphere to AWS – while maintaining enterprise IP addressing and security profiles.
Compliance: Because public clouds appear as an extension of the enterprise data center, existing compliance requirements such as logging, change control, and access restrictions continue to be enforced.
Control: IT controls the cloud in a good way. It doesn’t have to say “No” to end users who want to consume diverse clouds, yet it can still manage those clouds from a single console and move workloads back and forth.
By now it is clear that big data analytics opens the door to unprecedented analytic opportunities for business innovation, customer retention and profit growth. However, a shortage of data scientists is creating a bottleneck as organizations move from early big data experiments into larger scale adoption. This constraint limits big data analytics and the positive business outcomes that could be achieved.
Click the photo to hear Comcast’s Jason Hull, Data Integration Specialist, describe how his team uses data virtualization to get their work done faster.
It’s All About the Data
As every data scientist will tell you, the key to analytics is data. The more data the better, including big data as well as the myriad other data sources both in the enterprise and across the cloud. But accessing and massaging this data, in advance of data modeling and statistical analysis, typically consumes 50% or more of any new analytic development effort.
• What would happen if we could simplify the data aspect of the work?
• Would that free up data scientists to spend more time on analysis?
• Would it open the door for non-data scientists to contribute to analytic projects?
SQL is the key. Because of its ease and power, it has been the predominant method for accessing and massaging data for the past 30 years. Nearly all non-data scientists in IT can use SQL to access and massage data, but very few know MapReduce, the programming model traditionally used to access data in Hadoop.
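To see why SQL skills transfer so much more widely, compare a declarative SQL aggregation with the equivalent hand-written map/shuffle/reduce over the same records. This minimal sketch uses Python’s built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3
from collections import defaultdict

rows = [("web", 120), ("db", 300), ("web", 80), ("cache", 40)]

# SQL: one declarative statement does the grouping and summing.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE usage (service TEXT, mb INTEGER)")
con.executemany("INSERT INTO usage VALUES (?, ?)", rows)
sql_totals = dict(con.execute(
    "SELECT service, SUM(mb) FROM usage GROUP BY service"))

# MapReduce style: emit key/value pairs, group by key, then reduce.
mapped = [(service, mb) for service, mb in rows]           # map phase
groups = defaultdict(list)
for key, value in mapped:                                  # shuffle phase
    groups[key].append(value)
mr_totals = {key: sum(vals) for key, vals in groups.items()}  # reduce phase

print(sql_totals == mr_totals)  # both compute the same per-service totals
```

Both paths produce identical results, but the SQL version is a single statement that any SQL-literate analyst can read, while the map/shuffle/reduce version requires thinking in phases – the essence of the skills gap described above.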
How Data Virtualization Helps
“We have a multitude of users…from BI to operational reporting, they are constantly coming to us requesting access to one server or another…we now have that one central place to say ‘you already have access to it’ and they immediately have access rather than having to grant access outside of the tool” -Jason Hull, Comcast
Data virtualization offerings, like Cisco’s, can help organizations bridge this gap and accelerate their big data analytics efforts. Cisco was the first data virtualization vendor to support Hadoop integration with its June 2011 release. This standardized SQL approach augments specialized MapReduce coding of Hadoop queries. By simplifying access to Hadoop data, organizations could for the first time use SQL to include big data sources, as well as enterprise, cloud and other data sources, in their analytics.
In February 2012, Cisco became the first data virtualization vendor to enable MapReduce programs to easily query virtualized data sources, on-demand with high performance. This allowed enterprises to extend MapReduce analyses beyond Hadoop stores to include diverse enterprise data previously integrated by the Cisco Information Server.
In 2013, Cisco maintained its big data integration leadership by updating its support for Hive access to the leading Hadoop distributions, including Apache Hadoop, Cloudera’s Distribution including Apache Hadoop (CDH), and the Hortonworks Data Platform (HDP). In addition, Cisco now also supports access to Hadoop through HiveServer2, and to Cloudera CDH through Impala.
As a Cloud Architect, I’ve had the privilege to work with CTOs and CIOs across the globe to uncover the key factors driving Business Continuity and Workload Mobility across their cloud infrastructures. We’ve worked with enterprises, large and small, and service providers to answer their top five concerns in our new Business Continuity and Workload Mobility solution for the Private Cloud.
1) Can you provide business continuity, workload mobility, and disaster recovery for my unique mix of applications, with lower infrastructure costs and less complexity for my operations teams? Yes.
2) Can you provide a multi-site design that reduces business outages and costly downtime, allowing my critical applications to be more secure and available? Yes.
3) Can my operations teams perform live migrations of applications across sites while maintaining user connections, security, and stateful services? Yes.
4) Does your multi-site solution allow me to utilize idle standby capacity during “normal” operations, and reclaim that capacity as needed during an outage event? Yes.
5) Can your Cisco Validated Design greatly reduce my deployment risks and simplify my design process, saving my business significant time, money, and resources? Yes.
A Proven Multi-site Design, Built on the Most Widely Deployed Cloud Infrastructure
We addressed each of these pain points as we designed, built, and validated our new multi-site business continuity and workload mobility solution. Our multi-site solution is built upon Cisco’s cloud foundation, the Virtualized Multiservice Data Center (VMDC), which has been deployed at hundreds of the world’s top enterprises and service providers. In our latest VMDC release, we’ve extended our cloud design to support multi-site topologies and critical use cases for private cloud customers. This validated design connects regional and long-distance data centers within your private cloud to address several critical IT functions, including:
application business continuity across data center sites;
stateful workload mobility across data center sites, while maintaining user connections and security;
application disaster recovery and avoidance across data center sites; and
application geo-clustering and load balancing across data center sites.
Choose the Cloud Infrastructure that Fits Your Unique Business Needs
The VMDC Business Continuity and Workload Mobility solution (CVD Design Guide) is grounded in the reality of today’s cloud environment, providing different design choices that match your applications’ needs. We realize there is no “one size fits all” cloud design; that’s why we support both physical and virtual resources, multiple hypervisors and storage choices, and security-compliant designs with industry certifications like FISMA, PCI, and HIPAA.