This year we’ll see a movement for responsible, ethical use of AI that begins with clear AI governance frameworks that respect human rights and values.

In 2024, we’re at a breathtaking crossroads.

Artificial intelligence (AI) has raised expectations of improving lives and driving business forward in ways that were unimaginable only a few years ago. But it also brings complicated challenges around individual autonomy, self-determination, and privacy.

Our capacity to trust organizations and governments with our opinions, skills, and fundamental aspects of our identities is at stake. AI creates and perpetuates a growing digital asymmetry, where companies have access to the personal details, biases, and pressure points of customers, whether those customers are individuals or other businesses. AI-driven algorithmic personalization has added a new level of disempowerment and vulnerability.

This year, the world will convene a conversation about the protections needed so that every person and organization can use AI with confidence, while preserving space for innovation. Respecting fundamental human rights and values will require a careful balance between technical coherence and digital policy objectives that do not impede business.

It’s against this backdrop that the Cisco AI Readiness Index reveals that 76% of organizations don’t have comprehensive AI policies in place. In her annual tech trends and predictions, Liz Centoni, Chief Strategy Officer and GM of Applications, pointed out that while there is general agreement that we need regulations, policies, and industry self-policing and governance to mitigate the risks from AI, that alone is not enough.

“We need to get more nuanced, for example, in areas like IP infringement, where bits of existing works of original art are scraped to generate new digital art. This area needs regulation,” she said.

Speaking at the World Economic Forum a few days ago, Centoni offered a wide-angle view: it’s about the data that feeds AI models. She couldn’t be more right. The data and context used to customize AI models are what create differentiation, and AI needs large amounts of quality data to produce accurate, reliable, insightful output.

Some of the work that’s needed to make data trustworthy includes cataloging, cleaning, normalizing, and securing it. It’s underway, and AI is making it easier to unlock vast data potential. For example, Cisco already has access to massive volumes of telemetry from the normal operations of business – more than anyone on the planet. We’re helping our customers achieve unrivaled AI-driven insights across devices, applications, security, the network, and the internet.
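The cataloging, cleaning, and normalizing work described above can be sketched in a few lines. This is a minimal, illustrative example, not Cisco's pipeline: the field names (device_id, metric, value) and cleaning rules are assumptions chosen to show the idea of turning messy raw records into consistent, model-ready data.

```python
# Illustrative sketch: catalog, clean, and normalize raw telemetry records
# before they feed an AI model. Field names and rules are assumptions.

def clean_record(raw):
    """Normalize one record: consistent keys, trimmed strings, no placeholders."""
    cleaned = {}
    for key, value in raw.items():
        key = key.strip().lower()           # catalog fields under consistent names
        if isinstance(value, str):
            value = value.strip()
        if value in ("", None, "N/A"):      # treat common placeholders as missing
            continue
        cleaned[key] = value
    return cleaned

def prepare(records):
    """Clean every record and keep only those with the required fields."""
    required = {"device_id", "metric", "value"}
    out = []
    for raw in records:
        rec = clean_record(raw)
        if required <= rec.keys():
            rec["value"] = float(rec["value"])  # normalize to one numeric type
            out.append(rec)
    return out
```

A record with a malformed key like `"Device_ID "` is cataloged as `device_id`, while one whose value is `"N/A"` is dropped for missing a required field; only validated, consistently shaped records survive.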

That includes more than 500 million connected devices across our platforms such as Meraki, Catalyst, IoT, and Control Center. We are already analyzing more than 625 billion daily web requests to stop millions of cyber-attacks with our threat intelligence. And 63 billion daily observability metrics provide proactive visibility and blaze a path to faster mean time to resolution.

Data is the backbone and differentiator

AI has been and will continue to be front-page news in the year to come, and that means data will also be in the spotlight. Data is the backbone and the differentiator for AI, and it is also the area where readiness is the weakest.

The AI Readiness Index reveals that 81% of all organizations claim some degree of siloed or fragmented data. This poses a critical challenge due to the complexity of integrating data held in different repositories.

While siloed data has long been understood as a barrier to information sharing, collaboration, and holistic insight and decision making in the enterprise, the AI quotient adds a new dimension. With the rise in data complexity, it can be difficult to coordinate workflows and enable better synchronization and efficiency. Leveraging data across silos will require data lineage tracking, as well, so that only the approved and relevant data is used, and AI model output can be explained and tracked to training data.
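The lineage-tracking idea above can be made concrete with a small sketch. This is an assumption-laden illustration, not a real lineage product: it fingerprints each training dataset and records an approval flag, so that a model's output can later be traced back to the approved data that produced it.

```python
# Illustrative sketch of data lineage tracking: fingerprint each dataset
# and record whether it is approved, so training inputs stay traceable.
import hashlib
import json

class LineageLog:
    def __init__(self):
        self.entries = []

    def record(self, dataset_name, rows, approved=True):
        """Fingerprint a dataset and note whether it is approved for training."""
        digest = hashlib.sha256(
            json.dumps(rows, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(
            {"dataset": dataset_name, "sha256": digest, "approved": approved}
        )
        return digest

    def approved_datasets(self):
        """Only approved, relevant data should flow into model training."""
        return [e["dataset"] for e in self.entries if e["approved"]]
```

Because the fingerprint is deterministic, any later question about what a model was trained on can be answered by comparing dataset hashes against the log, which is the explainability property the paragraph above calls for.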

To address this issue, businesses will turn more and more to AI in the coming year as they look to unite siloed data, improve productivity, and streamline operations. In fact, we’ll look back a year from now and see 2024 as the beginning of the end of data silos.

Emerging regulations and harmonization of rules on fair access to and use of data, such as the EU Data Act, which becomes fully applicable next year, are the beginning of another facet of the AI revolution that will pick up steam this year. Unlocking massive economic potential and significantly contributing to a new market for data itself, these mandates will benefit both ordinary citizens and businesses, who will be able to access and reuse the data generated by their use of products and services.

According to the World Economic Forum, the amount of data generated globally in 2025 is predicted to be 463 exabytes per day, every day. The sheer amount of business-critical data being created around the world is outpacing our ability to process it.

It may seem counterintuitive, but as AI systems continue to consume more and more data, available public data will soon hit a ceiling; by some estimates, high-quality language data will likely be exhausted by 2026. It is already evident that organizations will need to move toward ingesting private and synthetic data. Like any data that is not validated, both private and synthetic data can introduce bias into AI systems.

This comes with the risk of unintended access and usage as organizations face the challenges of responsibly and securely collecting and maintaining data. Misuse of private data can have serious consequences, such as identity theft, financial loss, and reputational damage. Synthetic data, while artificially generated, can also create privacy risks if not produced or used properly.
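The synthetic-data path described above can be sketched naively: draw artificial values from a distribution fitted to a small real sample. This toy example, with made-up numbers, also shows why the bias warning matters, as the synthetic data inherits whatever skew the seed data carries.

```python
# Naive synthetic data sketch: sample artificial values from a normal
# distribution fitted to real seed data. Inherits the seed data's bias.
import random
import statistics

def synthesize(seed_values, n, rng=None):
    """Draw n synthetic values from a normal fit of the seed data."""
    rng = rng or random.Random(0)       # fixed seed for reproducibility
    mu = statistics.mean(seed_values)
    sigma = statistics.stdev(seed_values)
    return [rng.gauss(mu, sigma) for _ in range(n)]
```

Real synthetic-data systems add privacy safeguards (such as differential privacy) on top of this basic idea; without them, synthetic records can still leak properties of the individuals in the seed data, which is exactly the risk the paragraph above describes.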

Organizations must ensure they have data governance policies, procedures, and guidelines in place, aligned with AI responsibility frameworks, to guard against these threats. “Leaders must commit to transparency and trustworthiness around the development, use, and outcomes of AI systems. For instance, in reliability, addressing false content and unanticipated outcomes should be driven by organizations with responsible AI assessments, robust training of large language models to reduce the chance of hallucinations, sentiment analysis and output shaping,” said Centoni.

Recognizing the urgency that AI brings to the equation, the processes and structures that facilitate data sharing among companies, society, and the public sector will be under intense scrutiny. In 2024, we’ll see companies of every size and sector formally outline responsible AI governance frameworks to guide the development, application, and use of AI with the goal of achieving shared prosperity, security, and wellbeing.


With AI as both catalyst and canvas for innovation, this is one of a series of blogs exploring Cisco Executive Vice President Liz Centoni’s tech predictions for 2024. Her complete tech trend predictions can be found in The Year of AI Readiness, Adoption and Tech Integration ebook.

Catch the other blogs in the 2024 Tech Trends series.



Vijoy Pandey

Senior Vice President

Outshift by Cisco