This blog post comes from Dr Catriona Wallace, Founder and Executive Director of Artificial Intelligence company Flamingo Ai, a provider of Machine Learning based technologies. Flamingo Ai is only the second woman-led business ever to list on the Australian Stock Exchange. To hear more from Dr Wallace, join us at the upcoming Women Rock IT event on Thursday, April 23.

Prepare for Artificial Intelligence (AI) to expand thanks to recent world changes. It will be particularly important for women to understand AI, as it will affect them uniquely, both in business and personally.

John McCarthy first coined the term ‘artificial intelligence’ in 1956, when he invited a group of researchers from a variety of disciplines, including language simulation, neuron nets and complexity theory, to meet and discuss software that mimics human intelligence. Artificial intelligence (AI) may be described as the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Machine Learning is a type of artificial intelligence that allows a computer program to learn of its own accord, without being explicitly programmed or coded – the software learns each time it performs a task.
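To give a feel for what "learning rather than explicit programming" means, here is a minimal sketch (the data and labels are invented for illustration, not from any real system): a tiny nearest-neighbour classifier whose behaviour comes entirely from the examples it is given, not from hand-written rules.

```python
# A minimal sketch of "learning from examples" rather than explicit rules.
# All data and labels below are invented for illustration.

def nearest_neighbour(train, query):
    """Predict the label of `query` by copying the label of the
    closest training example."""
    _closest_value, closest_label = min(
        train, key=lambda pair: abs(pair[0] - query)
    )
    return closest_label

# Each time we add an example, the program's behaviour changes.
# No one writes an explicit rule for the new case.
examples = [(1.0, "small"), (2.0, "small"), (8.0, "large")]
print(nearest_neighbour(examples, 7.0))  # closest example is 8.0 -> "large"
```

Adding a new labelled example to `examples` changes future predictions without touching the code, which is the essence of the distinction drawn above.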

AI is the fastest-growing tech sector in the world: US$38B was invested in the last 12 months, a figure due to grow 12-fold in the next five years.

The major challenges with AI stem from the lack of regulation, legislation and guidelines to monitor its development and use; there are few laws governing this sector at the moment. As a result, there are risks of AI being biased in its decision making and potentially causing harm. Examples include the Apple Card incident and the Amazon recruitment program.

Bias in AI comes from two primary sources. First, less than 10 percent of coders in the AI sector are women, which means conscious or unconscious bias can be built into the algorithms. Second, the historical data used to train the algorithms often reflects society’s past norms. For example, I am a professor, I am female and I regularly wear a dress. If you Google ‘Professor Style’, around 90 percent of the images are of men in tweed suits. Two factors influence this result: first, the coder who tagged the images may believe that professors are men in tweed suits; second, the images provided to the coder were most likely predominantly of men, because this was the historical norm. The result is that the AI learns that men in tweed suits are professors, and returns this as the predominant answer.
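The second source of bias can be sketched in a few lines (the data here is invented and deliberately skewed, standing in for the tagged images described above): a model trained on skewed historical examples simply reproduces the skew.

```python
from collections import Counter

# Invented, deliberately skewed "historical" training data:
# 9 of 10 labelled images of professors depict men.
training_labels = ["man"] * 9 + ["woman"] * 1

def majority_predictor(labels):
    """'Learn' by memorising the most common label in the training data."""
    most_common_label, _count = Counter(labels).most_common(1)[0]
    return most_common_label

# The model's answer to "what does a professor look like?"
# simply echoes the past norm baked into its training data.
print(majority_predictor(training_labels))  # "man"
```

Nothing in the code is prejudiced; the skew lives entirely in the data, which is why curating representative training data matters as much as the algorithm itself.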

In November 2019, the Minister for Industry, Science and Technology, Karen Andrews, launched Australia’s first AI Ethics framework. The framework sets out eight key principles:

  1. Human, social and environmental wellbeing
  2. Human-centred values
  3. Fairness
  4. Privacy protection and security
  5. Reliability and safety
  6. Transparency and explainability
  7. Contestability
  8. Accountability

AI technology is about five years ahead of the legal system, so in the interim we need to rely on frameworks, guidelines and ethical leadership, such as the framework created under Minister Andrews.

AI has the potential to greatly improve the human condition. In particular, AI will drive advances in biotech, medical, climate and agricultural technologies, among others; these changes will be hugely beneficial to humans. There is also great scope for AI to support people with disabilities.

At the same time there is huge potential for AI to be used to harm humans, be it through mass manipulation of people during elections, the powering of harmful social media, or the coming of autonomous weapons. This is why Elon Musk noted, “Inviting AI into the world is like summoning the demon.” We, however, are very hopeful that AI will free humans from the mundane to focus on activities that make us more human, including creativity, a return to basic science, improved relationships and health, a focus on art, and better care of the environment. And on the back of the 2019-20 bushfires that swept Australia, we would like to build fire-fighting robots: there is no reason a human should stand between their house and a 30-foot inferno, and a robot would be happy to do it.

During April, I will launch a new business, Ethical AI Advisory, a consultancy set up to help organisations develop ethical approaches to artificial intelligence. This will be my contribution to making sure AI is used for good and that women and minorities are not discriminated against, so that it is fair for all.


Jennifer Boynton

Corporate Social Responsibility Content Strategist

Corporate Affairs