In an era where artificial intelligence (AI) is rapidly transforming industry and society, collaboration between the public and private sectors has never been more critical. Trust and safety are ultimately on the line.
Cisco is a proud signatory and supporter of the EU AI Pact, which outlines shared commitments around implementing appropriate governance, mapping organizations’ high-risk use cases, and promoting AI literacy and safety for workers. Each of these measures plays an important role in fostering innovation while mitigating risk. They also align closely with Cisco’s longstanding approach to responsible business practices.
Advancing our approach to AI governance
In 2018, Cisco published its commitment to proactively respect human rights in the design, development, and use of AI. We formalized this commitment in 2022 with Cisco’s Responsible AI Principles, which we operationalize through our Responsible AI Framework. And in 2023, as the use of generative AI became more widespread, we used our Principles and Framework as a foundation to build a robust AI Impact Assessment process to review potential AI use cases, both in product development and in internal operations.
Cisco is an active participant in the development of frameworks and standards around the world, and in turn, we continue to refine and adapt our approach to governance. Cisco’s CEO Chuck Robbins signed the Rome Call for AI Ethics, confirming our commitment to the principles of transparency, inclusion, accountability, impartiality, reliability, security and privacy. We have also closely followed the G7 Hiroshima Process and align with the International Guiding Principles for Advanced AI Systems. Europe is a first mover in AI regulation, addressing risks to fundamental rights and safety through the EU AI Act, and we welcome the opportunity to join the AI Pact as a first step in its implementation.
Understanding and mitigating high-risk use cases
Cisco fully supports a risk-based approach to AI governance. As organizations begin to develop and deploy AI across their products and systems, it is critical to map the potential uses and mitigation approaches.
At Cisco, this important step is enabled through our AI Impact Assessment process. These assessments look at various aspects of AI and product development, including underlying models, use of third-party technologies and vendors, training data, fine-tuning, prompts, privacy practices, and testing methodologies. The ultimate goal is to identify, understand and mitigate any issues related to Cisco’s Responsible AI Principles: transparency, fairness, accountability, reliability, security and privacy.
Investing in AI literacy and the workforce of the future
We know AI is changing the way work gets done. In turn, organizations have an opportunity and a responsibility to help employees build the skills and capabilities necessary to succeed in the AI era. At Cisco, we are taking a multi-pronged approach. We have developed mandatory training on safe and trustworthy AI use for global employees and have created several AI learning pathways for our teams, tailored to their skill sets and roles.
But we want to think beyond our own workforce. Through the Cisco Networking Academy, we have committed to train 25 million people around the world in digital skills, including AI, by 2032. We are also leading the AI-Enabled ICT Workforce Consortium, in partnership with our industry peers, to provide organizations with knowledge about the impact of AI on the workforce and to equip workers with relevant skills.
Looking ahead to the future
We are still in the early days of AI. And while there are many unknowns, one thing remains clear: our ability to build an inclusive future for all will depend on a shared commitment to safe and trustworthy AI across the public and private sectors. Cisco is proud to join the AI Pact and to continue demonstrating our strong commitment to responsible AI globally.