The IT admin standpoint: What’s in it for me?

In the first part of this post we saw that you can build a service catalog with all the enterprise features you need: multitenancy, role-based access control, reporting, chargeback, approvals, etc.

But you can also offer (secured) access to the API that launches the workflows, giving your consumers a degree of autonomy. And with a resource quota you can prevent everyone from creating dozens of VMs every hour if the capacity of the system can’t sustain it.

The IT Admin patrolling the infrastructure


If you allow your internal clients to self-serve, you will:

  • get fewer requests for trivial tasks that consume time and are not really that… interesting to do (let them play with it)
  • be the hero of the productivity increase (no requests pending in a queue)
  • spend your time and skills designing the architectural blueprints that will be offered as a service to your clients (so everybody plays by your rules)
  • use policy-based provisioning, so that you can define the rules just once and map them to tenants and environments so every deployment inherits them
  • maintain control over resource consumption and system capacity, and hence over costs and budget
  • increase your relevance: they will come to you to discuss their needs, propose new services and collaborate in governance

The discussion above is valid for the entire infrastructure in the Data Center.
Now, I’d like to provide a couple of examples.


Example 1: provisioning a new server farm

A common example of an automation workflow is the creation of a four-hypervisor server farm.

A single workflow starts from the SAN storage, creating a volume and 4 LUNs where the hypervisors will be installed to enable remote boot for the servers.

Then a network is created (or the existing management network is used) and 4 Service Profiles (the definition of a server in Cisco UCS) are created from a template, with an individual IP address and MAC address for each network interface.

Then, zoning and masking are executed to map every new server to a specific LUN, and the service profiles are associated to 4 available servers (either blades or rack-mount servers). The hypervisors are installed via PXE boot, writing directly to the remote storage, then configured and customized, and finally added to a (new) cluster in the hypervisor manager (e.g. vCenter).

The whole process takes less than one hour: you could launch it and go to lunch, and when you’re back you’ll find the cluster up and running. Compare it to a manual provisioning of the same server farm, possibly performed by a number of different teams (see the picture above): it would take days, sometimes weeks.
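The sequence above can be sketched as a single orchestrated run. This is only an illustration of the ordering of the steps: every function name below is hypothetical, since in UCS Director each step would be a task picked from the automation library rather than hand-written code.

```python
# Sketch of the server-farm workflow sequence described above.
# Each "step" just records itself; real tasks would call storage,
# UCS and vCenter APIs.

NUM_SERVERS = 4

def provision_server_farm(log):
    log.append("create SAN volume")
    for i in range(1, NUM_SERVERS + 1):
        log.append(f"create boot LUN {i}")          # one boot LUN per hypervisor
    log.append("create (or reuse) management network")
    for i in range(1, NUM_SERVERS + 1):
        # Service Profile = the definition of a server in Cisco UCS
        log.append(f"create service profile {i} from template (IP + MAC)")
        log.append(f"zone/mask LUN {i} to server {i}")
        log.append(f"associate service profile {i} to an available server")
        log.append(f"PXE-install hypervisor on server {i}")
    log.append("add the 4 hosts to a cluster in the hypervisor manager")
    return log

if __name__ == "__main__":
    for step in provision_server_farm([]):
        print(step)
```

Note that the per-server steps can run in parallel in a real orchestrator; they are serialized here only for readability.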


Example 2: network provisioning

This is the story of a customer that implemented automation specifically for networking.

They were influenced by the SDN trend and initially fell into the marketing trap of “SDN means software-implemented networking, hence overlays”. Then they realized the advantage provided by the ACI architecture and selected it as their SDN platform (it is software-defined, thanks to its software controller and powerful policy model).

Developers and the Architecture department asked for access to the API exposed by the controller to self-provision what they needed for new projects, but this was seen as trespassing on the networking team’s property.

It would have worked, but it implied a transfer of knowledge and a delegation of responsibility for a critical asset. At the end of the day, if developers and software designers had deep networking knowledge, specialists would not exist.

So the networking team built a number of workflows in UCS Director, using the hundreds of tasks offered by the automation library, to implement use cases ranging from basic ones (allow this VM to be reached from the DMZ) to more complex scenarios (e.g. create, with a single request, a new environment for a multi-tier application, including load balancer and firewall configuration plus access from the monitoring tools).

Blueprint designed in collaboration with Security and Software Architects


Graphical Editor for the workflow


These workflows were offered in a web portal (provided by UCSD out of the box) and also through the REST API exposed by UCSD. Sample calls were provided to consumers as Python clients, PowerShell clients and Postman collections, so that the higher-level orchestration tool maintained by the Architecture department could invoke the workflows immediately, inserting them into the business process automation that was already in place.

Example of python client running a UCSD workflow

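A client of this kind can be very small. The sketch below builds a call to UCSD’s northbound API operation `userAPISubmitWorkflowServiceRequest`, authenticated with the `X-Cloupia-Request-Key` header; the host, API key, workflow name and input names are placeholders, and the exact `opData` schema should be verified against the API guide for your UCSD version.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

def build_workflow_request(host, api_key, workflow, inputs):
    """Return a urllib Request that submits a UCSD workflow."""
    op_data = {
        "param0": workflow,  # workflow name exactly as shown in UCSD
        "param1": {"list": [{"name": k, "value": v} for k, v in inputs.items()]},
        "param2": -1,        # parent Service Request id (-1 = none)
    }
    query = urlencode({
        "formatType": "json",
        "opName": "userAPISubmitWorkflowServiceRequest",
        "opData": json.dumps(op_data),
    })
    url = f"https://{host}/app/api/rest?{query}"
    # UCSD authenticates API calls with a per-user key in this header
    return Request(url, headers={"X-Cloupia-Request-Key": api_key})

if __name__ == "__main__":
    from urllib.request import urlopen
    req = build_workflow_request("ucsd.example.com", "MY-API-KEY",
                                 "Provision DMZ Access",
                                 {"vmName": "web01"})
    print(urlopen(req).read())  # response carries the new Service Request id
```

The response includes the id of the Service Request created for the run, which the caller can keep to track the execution later.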

All the executions of the workflows – launched through the self-service catalog or through the REST API – are tracked in the system, and the administrator can inspect the requests and their outcome:

The Service Requests are audited and can be inspected and rolled back


Any run of the workflow can be inspected in full detail; look at the tabs in the window:

The Admin has full control (see the tabs in the window)
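The same tracking is reachable programmatically: a caller that submitted a workflow can poll its Service Request through the `userAPIGetWorkflowStatus` operation. The sketch below is an assumption-laden illustration (host, key and the meaning of the status codes should all be checked against your UCSD version’s API reference).

```python
import json
import time
from urllib.parse import urlencode

def status_url(host, sr_id):
    """Build the UCSD northbound API URL that queries a Service Request."""
    query = urlencode({
        "formatType": "json",
        "opName": "userAPIGetWorkflowStatus",
        "opData": json.dumps({"param0": sr_id}),  # param0 = Service Request id
    })
    return f"https://{host}/app/api/rest?{query}"

if __name__ == "__main__":
    from urllib.request import Request, urlopen
    url = status_url("ucsd.example.com", 1234)   # placeholder host and SR id
    req = Request(url, headers={"X-Cloupia-Request-Key": "MY-API-KEY"})
    while True:
        # UCSD wraps results in a JSON envelope with a "serviceResult" field
        status = json.loads(urlopen(req).read())["serviceResult"]
        print("workflow status code:", status)
        if status not in (1, 2):   # assumed "scheduled/in progress" codes: verify
            break
        time.sleep(10)
```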

I believe it’s worth spending some time (it does not take that much) creating the automation: it will return big value to both the organization and the individuals offering it.


Cisco UCS Director
Cisco ACI
ACI for Simple Minds
ACI for (Smarter) Simple Minds
Invoking UCS Director Workflows via the Northbound API


Luca Relandini

Principal Architect

Data Center and Cloud - EMEAR