Intelligent machines have helped humans achieve great things. Artificial Intelligence (AI), combined with human experience, has delivered quick wins for stakeholders across industries, with use cases ranging from finance to healthcare to marketing to operations and more. There is no denying that AI has accelerated product innovation and enriched the user experience. A few of these use cases include context-aware marketing, sales forecasting, conversational analytics, fraud detection, credit scoring, drug testing, pregnancy monitoring, and self-driving cars; the list keeps growing.
But the very idea of developing smart machines (AI-powered systems) raises numerous ethical concerns. What is the probability that these smart machines won't harm humans or other morally relevant beings? Matthew Hutson, a research scientist from the Massachusetts Institute of Technology, noted in one of his studies that AI algorithms embedded in digital and social media technologies can reinforce societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and impair mental well-being. Earlier discussions of "data and AI ethics" were largely confined to non-profit organizations and academic institutions. But with the industry landscape changing rapidly, global tech giants are assembling fast-growing teams to handle the ethics of AI. And as these companies have invested more due diligence in the challenge, they have discovered that the majority of these ethical issues arise across the data lifecycle, resulting from the widespread collection and processing of data to train AI models.
Before jumping to the question "How could we create an outline for ethical AI development?", let's pause and understand how current approaches are failing to address ethical AI. We as humans learn by reading or from past experiences. Similarly, machines need to learn from historical events and datasets. Machine learning (ML) and AI models are first trained on a set of training data points and are then expected to perform with considerable accuracy in new production (live) scenarios. What if these systems are fooled in production? What if people can overpower them for their own benefit? Imagine an autonomous car with broken brakes speeding towards a grandmother and a child. By swerving slightly, one of the two pedestrians can be saved. What is the right choice in such a situation? Machines need to be trained in a way that enables them to make ethical decisions. Type "greatest leaders of all time" into your favorite search engine and you will probably see a list of the world's prominent male personalities. How many women do you count? An ethical framework for AI needs to mitigate such gender biases.
Having understood the need for a framework to ensure ethical AI, let's look at the solutions that can be leveraged to address these challenges. Organizations that leverage AI models for the development and delivery of goods and services need an outline covering key concerns such as privacy, bias, manipulation, opacity, environmental impact, and machine ethics. While most organizations tend to follow such guidelines, the guidelines do not clearly define the ethical boundaries of intelligent applications. The major problem with this strategy lies in the difficulty of implementing these ethical frameworks in real life. The guidance is often high level and cannot be enforced by technical professionals without increasing the complexity and specificity of the AI system. A more complex AI system may hinder its real efficiency, but a system with no constraints on its power can easily drift into ethical danger.
The answer lies in a broader approach to implementation alongside actual project development. Here is a five-point checklist that, if leveraged, I believe will help us build more ethically aware AI systems.
- Defining success metrics – AI-powered systems need a clear definition of success metrics that, once development is complete, can be used to measure the performance of the model. These metrics help set realistic targets and incorporate ethical values into the model design itself (see the first sketch after this list).
- Ensuring data privacy – Data breaches have often led to identity theft. Privacy concerns call for frameworks such as differential privacy, where aggregate patterns of groups in a dataset can be shared publicly while information about the individuals in the dataset is held back (sketched below).
- Incubating fairness testing – Most AI systems are tested for accuracy and precision, but fairness is an aspect that is often missing from the unit test cases. Having fairness test cases during the development phase builds in an ethical orientation right from the initial stages (an example test follows the list).
- Periodic performance evaluation – AI-powered systems need to be monitored periodically, not just for raw model performance but for performance that is humane in nature and imbued with ethical values. Consider the earlier example of the autonomous car with broken brakes. To handle such scenarios, machine performance needs to be evaluated periodically, and the gathered data can be used to retrain the model for future instances (illustrated below).
- Feedback mechanism – Every machine ultimately learns from historical datasets and performs accordingly. But what if we had a continuous feedback mechanism for these automated systems? This continuous stream of feedback would help the systems learn and evolve over time and would also guide them in the event of physical adversities (a minimal feedback-loop sketch closes the list below).
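To make the checklist more concrete, here is a minimal sketch of the first point: agreeing on success metrics, both predictive and ethical, before development starts, and then checking a trained model against them. The data, metric names, and thresholds below are purely illustrative assumptions, not prescriptions.

```python
# Minimal sketch: fix predictive AND ethical success metrics up front, then check a model against them.
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred, group):
    # Gap between the highest and lowest positive-prediction rates across groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Targets agreed at design time, not after seeing results (illustrative values).
TARGETS = {"accuracy": 0.85, "demographic_parity_gap": 0.10}

# Toy labels and predictions for two groups "A"/"B"; in practice these come from the model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

results = {
    "accuracy": accuracy(y_true, y_pred),
    "demographic_parity_gap": demographic_parity_gap(y_pred, group),
}
print(results)
print("meets targets:",
      results["accuracy"] >= TARGETS["accuracy"]
      and results["demographic_parity_gap"] <= TARGETS["demographic_parity_gap"])
```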
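For the data-privacy point, the sketch below shows the Laplace mechanism, one standard building block of differential privacy: an aggregate statistic (here, a count) is released with calibrated noise so that any single individual's presence changes the output only slightly. The epsilon value and the dataset are hypothetical, and this is a teaching sketch rather than a production-ready implementation.

```python
# Minimal sketch of the Laplace mechanism for an epsilon-differentially-private count.
import numpy as np

rng = np.random.default_rng(0)

def private_count(values, predicate, epsilon=0.5):
    """Noisy count of records matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes it by
    at most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical ages of individuals in a dataset.
ages = [23, 35, 41, 29, 52, 47, 31, 38]
print("noisy count of people over 40:", private_count(ages, lambda a: a > 40))
```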
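For fairness testing, a fairness check can sit alongside the usual accuracy tests in the development pipeline. The pytest-style sketch below asserts that the gap in positive-prediction rates between two groups stays under an assumed 10-point threshold; the model stub, fixture, and threshold are hypothetical stand-ins for a real project's interfaces.

```python
# Minimal fairness unit test (run with pytest); the model stub and threshold are illustrative.
import numpy as np

def model_predict(x):
    # Stand-in for the trained model; replace with the real predictor in a project.
    return (x[:, 0] > 0.5).astype(int)

def positive_rate(preds, group, g):
    return preds[group == g].mean()

def test_demographic_parity():
    # Tiny fixed fixture: one feature, a protected group label, and the model's predictions.
    x = np.array([[0.9], [0.2], [0.7], [0.1], [0.8], [0.3], [0.6], [0.4]])
    group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    preds = model_predict(x)
    gap = abs(positive_rate(preds, group, "A") - positive_rate(preds, group, "B"))
    assert gap <= 0.10, f"demographic parity gap {gap:.2f} exceeds 0.10"
```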
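For periodic performance evaluation, the sketch below scores the live model on a fresh labelled sample and raises an alert when accuracy drifts below the baseline agreed in the success metrics. The fetch function, baseline, and tolerance are placeholder assumptions; in production this check would run on a schedule (for example, a daily job).

```python
# Minimal sketch of a scheduled drift check against a pre-agreed baseline.
import numpy as np

BASELINE_ACCURACY = 0.85   # level signed off at deployment (illustrative)
ALERT_MARGIN = 0.05        # drift tolerated before triggering review/retraining

def fetch_recent_labelled_sample():
    # Placeholder: in practice, pull recently labelled production records.
    rng = np.random.default_rng()
    y_true = rng.integers(0, 2, size=100)
    y_pred = np.where(rng.random(100) < 0.9, y_true, 1 - y_true)  # ~90% correct stand-in
    return y_true, y_pred

def periodic_evaluation():
    y_true, y_pred = fetch_recent_labelled_sample()
    acc = float(np.mean(y_true == y_pred))
    if acc < BASELINE_ACCURACY - ALERT_MARGIN:
        print(f"ALERT: accuracy {acc:.2f} drifted below baseline; schedule retraining and review")
    else:
        print(f"OK: accuracy {acc:.2f} within tolerance")

periodic_evaluation()  # would be invoked by a scheduler (e.g. cron) rather than run once
```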
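Finally, for the feedback mechanism, the sketch below simulates a continuous feedback loop in which newly labelled feedback (for example, corrections from users or reviewers) is folded back into the model through incremental updates using scikit-learn's `partial_fit`. The data stream is simulated and the model choice is an assumption; the point is the loop, not the specific estimator.

```python
# Minimal sketch of a continuous feedback loop with incremental model updates.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])

# Initial model trained on (toy) historical data.
X_hist = rng.random((500, 4))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 1.0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_hist, y_hist, classes=classes)

# Simulated feedback batches arriving over time.
for step in range(5):
    X_fb = rng.random((50, 4))
    y_fb = (X_fb[:, 0] + X_fb[:, 1] > 1.0).astype(int)  # ground-truth labels from feedback
    acc = model.score(X_fb, y_fb)                        # performance before the update
    model.partial_fit(X_fb, y_fb)                        # fold the feedback back into the model
    print(f"feedback batch {step}: accuracy before update = {acc:.2f}")
```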
Leveraging these frameworks, we can address the discrimination that powerful AI systems might cause. Organizations therefore need a well-structured ethical framework for their AI-powered systems to ensure that these smart machines don't harm the well-being of humans or other morally relevant beings. Such frameworks not only help mitigate machine biases but also ensure improved data privacy in the long run. As the industry advances the way it handles the ethics of AI, the promise of the technology can come to fruition. Data is generated during all of our digital interactions, but the foundation of that data is very human. Ethically handling human data and interactions requires that organizations establish more consistent, ethical data-related practices, starting today.