Movies have always created a powerful mystique about artificial intelligence. For example, 2001: A Space Odyssey had the computer, HAL 9000, that recognized astronauts, spoke to them, and even locked the door to prevent an astronaut from entering the spacecraft. In the Terminator movies, Skynet was a self-aware computer set on destroying humans. The awesome computer capabilities depicted in these and other movies are very entertaining to be sure, but they also create a mysticism about computers being omniscient, omnipresent, and even omnipotent. Parts of these fictional computer superpowers are actually reality in our pockets. For many of us, the smartphone can recognize our voices and faces, talk to us in different languages, and even lock doors. Despite how deeply artificial intelligence and machine learning are embedded into our lives, the mystical powers of fictional computers still give many the impression that using artificial intelligence or machine learning capabilities in business requires the wizardry of Merlin, the intellect of Einstein, and the national effort of a moon landing.
The reality is that machine learning has advanced to the point where it is no longer in the realm of rocket science. To take advantage of machine learning today, one does NOT need to know all the internal details of a machine learning algorithm. One only has to be able to use software packages such as scikit-learn, TensorFlow, PyTorch, and many others. Rather than the mysticism of rocket science, the true technical barrier to entry for machine learning has been lowered to that of a software problem. Does having a data scientist who understands the machine learning algorithmic details help? Absolutely. However, data scientists do not need to know all the details of a machine learning algorithm to mine value out of data. This situation is very similar to that of a C programmer who may not understand the details of assembly language but can still develop sophisticated programs.
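To make this concrete, here is a minimal sketch of what "just using a software package" looks like in scikit-learn. The dataset and model choice (the classic iris dataset and logistic regression) are illustrative assumptions, not part of any particular workflow; the point is that the algorithm's internal math is handled entirely by the library.

```python
# A minimal sketch: training and evaluating a classifier with
# scikit-learn, without touching the algorithm's internals.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small labeled dataset and split it into train/test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The optimization details live inside the library.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
```

A handful of lines like these replace what would otherwise require implementing the optimizer and the model mathematics by hand.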
Is machine learning different from traditional programming? Yes. Historically, humans had to write software that takes input data and generates output data. For example, a programmer could try to write code that recognizes photos of cats and dogs by describing all the characteristics of cats and dogs (e.g., nose, ears, tails). Unfortunately, this is an exceptionally daunting problem because of the myriad variations among cats, dogs, and their respective breeds. Instead of writing such detailed instructions to recognize a pet, with supervised machine learning, you feed the machine learning algorithm lots of labeled examples, such as photos that are properly identified as cats and dogs. Then, the machine learning algorithm can create a program, also known as a model, that can recognize cats and dogs with amazing accuracy. With this ability to recognize patterns in data, machine learning can be used in a variety of tasks, not just academic examples such as dog recognition. The once complex pattern recognition problem has become as simple as managing the labeled data and using the machine learning algorithms.
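The supervised flow described above can be sketched in a few lines: labeled examples in, a predictive model out. The features here (weight in kilograms, ear length in centimeters) and the tiny hand-made dataset are invented purely for illustration; real image recognition would learn from pixel data, typically with a deep learning library. The shape of the workflow, however, is the same.

```python
# A toy sketch of supervised learning: the algorithm builds the
# "recognizer" from labeled examples instead of hand-written rules.
from sklearn.tree import DecisionTreeClassifier

# Each row is one labeled example: [weight_kg, ear_length_cm].
# These numbers are made up for illustration only.
examples = [
    [4.0, 5.0], [3.5, 4.5], [5.0, 6.0],       # labeled "cat"
    [20.0, 10.0], [30.0, 12.0], [8.0, 9.0],   # labeled "dog"
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# The algorithm creates the program (the model) from the data.
model = DecisionTreeClassifier().fit(examples, labels)

# The model now classifies animals it has never seen.
print(model.predict([[4.2, 5.2], [25.0, 11.0]]))
```

Notice that nowhere did we write rules about noses, ears, or tails; the model inferred the distinction from the labeled data, which is exactly the shift from traditional programming that the paragraph above describes.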
In Part 2 of this blog, I will go into the details of how Cisco is helping customers adopt machine learning. I hope that your journey with machine learning is a smooth one based on software engineering rather than magical ruby slippers on a yellow brick road.
Are you attending GTC in Munich, Israel, or China? Stop by the Cisco booth to find out more. Keep the conversation going. Feel free to reach out to me at @hanyang1234 to discuss AI/ML on UCS.