In our previous blog, we looked at how public clouds have set the pace and standards for satisfying the technology needs of data scientists, and how on-premises offerings have become increasingly attractive due to innovations such as Kubernetes and Kubeflow.

Nevertheless, delivering ML platforms on-premises is still not easy. Replicating the public cloud ML experience requires enthusiasm and persistence in the face of potential frustration. To address this challenge, the Cisco community has developed an open source tool named MLAnywhere to help IT teams learn and master the new technology stacks that ML projects require. MLAnywhere provides an actual, usable outcome: a deployed Kubeflow workflow (pipeline) with sample ML applications running on Kubernetes, all through a clean and intuitive interface. Along with serving as an educational resource for IT teams, MLAnywhere also speeds up and automates the deployment of a Kubeflow environment.

How MLAnywhere works

MLAnywhere is a simple microservice built using container technologies. It’s designed to be installed, maintained, and evolved easily. The fundamental goal of this open source project is to help IT teams understand what it takes to configure ML environments, while providing data scientists with the automated capabilities and tooling they need. It also includes real-world examples of ML code, built into the tool as Jupyter Notebook samples.
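To give a feel for what such a notebook sample looks like from a data scientist's seat, here is a minimal, illustrative training loop in plain Python. This is not taken from the MLAnywhere notebooks (which use real ML frameworks); it is only a sketch of the shape of a runnable sample:

```python
# Illustrative sketch only: a tiny "notebook-style" training loop.
# Fits y = w*x + b by gradient descent on mean squared error.

def train_linear_model(xs, ys, lr=0.01, epochs=2000):
    """Learn slope w and intercept b from paired samples."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    # Synthetic data drawn from y = 3x + 1
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [3 * x + 1 for x in xs]
    w, b = train_linear_model(xs, ys)
    print(round(w, 2), round(b, 2))
```

The bundled samples do the same thing at a higher level: load data, train, evaluate, all inside a notebook that the deployed Kubeflow environment serves.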

To install, simply download the project files from the Cisco DevNet repository and follow the instructions to build a container using a Dockerfile. After that, you launch the resulting container on an existing Kubernetes cluster.
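As a rough sketch, those steps look something like the following. The image tag and manifest name here are placeholders, not the project's documented commands; follow the repository README for the authoritative instructions:

```shell
# Sketch only: exact file names and options may differ; see the
# MLAnywhere README for the authoritative steps.

# 1. Download the project files from the Cisco DevNet repository
git clone https://github.com/CiscoDevNet/MLAnywhere.git
cd MLAnywhere

# 2. Build a container image from the provided Dockerfile
#    ("mlanywhere:latest" is a placeholder tag)
docker build -t mlanywhere:latest .

# 3. Launch the resulting container on an existing Kubernetes cluster
#    (the manifest name here is hypothetical)
kubectl apply -f deployment.yaml
```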

Image 1: MLA Installation Process

MLAnywhere layers on top of a Kubernetes infrastructure that a container platform solution can deploy and manage, such as Cisco Container Platform (CCP). CCP greatly simplifies both day-one deployment and day-two operations of Kubernetes in a turnkey, secure, production-grade, end-to-end, Cisco-supported product that works with any server hardware.

If GPUs are required for ML workloads, MLAnywhere seamlessly consumes them from the underlying servers via the container platform APIs and exposes them to the targeted Kubernetes clusters. From there they can be used directly in the Kubeflow framework, as the container platform (in this case CCP) manages the alignment of GPU drivers and software.
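Under Kubernetes, GPU exposure ultimately comes down to scheduling pods against an extended resource. A training pod typically requests a GPU along these lines; this is a generic Kubernetes fragment using the standard NVIDIA device-plugin resource name, not MLAnywhere-specific configuration:

```yaml
# Generic Kubernetes pod fragment (illustrative, not from MLAnywhere).
# The container platform is what installs the matching drivers and
# device plugin on the nodes so this request can be satisfied.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-example   # hypothetical name
spec:
  containers:
    - name: trainer
      image: tensorflow/tensorflow:latest-gpu   # example image
      resources:
        limits:
          nvidia.com/gpu: 1   # schedules the pod onto a GPU-backed node
```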

What’s in it for IT Operations?

MLAnywhere provides clear, descriptive steps to educate the user on what is happening under the surface, and what it takes within the underlying Kubernetes platform to prepare, deploy, and run the Kubeflow tooling. These steps are presented in the MLAnywhere interface while the relevant elements are deployed automatically, including the all-important logging information.

Image 2: MLAnywhere driven Kubeflow deployment

Container technology and data scientists

Data scientists, many of whom have worked with traditional methodologies in the ML space, will recognize the benefits container technology brings: dependency management, environment-variable management, and GPU driver deployment, to name but a few. Most importantly, they benefit from the scale and speed Kubernetes delivers, all while continuing to use well-known frameworks such as TensorFlow and PyTorch.
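The dependency-management benefit is easy to see in practice: a training image pins its framework and libraries once, and every run then uses the identical stack regardless of the host it lands on. A minimal, purely illustrative Dockerfile (the script name and versions are hypothetical) might look like:

```dockerfile
# Illustrative only: pins a framework version so every container run
# uses the same ML stack on any node.
FROM python:3.10-slim
RUN pip install --no-cache-dir tensorflow==2.15.0
WORKDIR /app
COPY train.py .   # train.py is a hypothetical training script
CMD ["python", "train.py"]
```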

ML engineers and data scientists are generally more interested in the actual dashboards and tools than in the underlying plumbing, so MLAnywhere provides links to the Kubeflow interface as the environments are built out on demand.

Image 3: Kubeflow Interface

The future of ML platform automation

MLAnywhere brings immediate value to the various teams involved in the ML process, with a focus on helping data scientists and IT operations quickly set up an ML stack on-premises.

In the future, we’ll focus on adding even more value to MLAnywhere. We intend to merge this project with another Cisco initiative around Kubeflow, called “The Cisco Kubeflow Starter Pack”, bringing the best parts of both together into a single, compelling open source project.

To view the MLAnywhere code repository and installation instructions to get started now, visit: https://github.com/CiscoDevNet/MLAnywhere.