
As businesses move further into their digital transformation journey, the complexities of cloud security will continue to evolve. Traditional security practices, with their complex and layered rules, have long been the foundation of security systems. However, advances in Artificial Intelligence (AI) are shifting how we interact with, and what we expect from, our security solutions. Let’s explore how these developments will streamline the implementation of security policies, and what they mean for managing AI-generated content with modern SSE and SASE solutions.

I. Unifying the Private Access, Internet Access, VPN Access, and ZTNA Experience in SSE

To set the stage, let’s take a common example. A company needs a security policy that allows an executive to access public internet websites from their office laptop but restricts their access to the Jira dashboard hosted within the company’s private data center.

Traditionally, the Admin would need to create a multifaceted policy to meet this requirement. First, the Admin must determine whether the policy involves ZTNA-based access, VPN-based access, or public internet-based app access, confirm the user’s group, location, and device, and then create policies to grant or restrict access accordingly. Second, the Admin must also meticulously configure sub-policies for the security controls, such as the firewall, IPS, SWG, or DNS, that apply along each selected access path, as sketched below. This multi-step process places an unnecessary cognitive burden on the Admin, and even a slight misconfiguration could pose a security risk or degrade the user experience. There is, however, a more streamlined approach: this is where intent-based security with unified management steps in.
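To make that layered configuration concrete, here is a minimal, hypothetical sketch of the kind of rule set an Admin might assemble by hand for the executive example above. The schema and every field name are invented for illustration and do not represent an actual Cisco Secure Access configuration.

```python
# Hypothetical, hand-assembled policy set for the executive example.
# Field names and values are illustrative only, not a real product schema.

executive_policy = {
    "user_group": "executives",
    "device_posture": "managed-laptop",
    "location": "office",
    "access_rules": [
        {   # Path 1: public internet access via the secure web gateway
            "access_method": "internet",
            "action": "allow",
            "sub_policies": {
                "swg": {"category_filtering": "default"},
                "dns": {"block_malicious_domains": True},
                "firewall": {"outbound_ports": [80, 443]},
                "ips": {"profile": "balanced"},
            },
        },
        {   # Path 2: private app access via ZTNA to the data center
            "access_method": "ztna",
            "destination_app": "jira-dashboard",
            "action": "deny",
        },
    ],
}

# Each access path and each sub-policy is typically configured in a
# separate workflow, which is where misconfiguration risk creeps in.
for rule in executive_policy["access_rules"]:
    print(rule["access_method"], "->", rule["action"])
```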

In an intent-based security system, the Admin simply needs to define the intent: “executives should be able to access public websites but not the Jira dashboard.”

The system analyzes and interprets this intent, generating the necessary underlying configurations to enforce it.

This approach abstracts away the complexity of configuring the underlying access and security controls. It also offers a single point of configuration, regardless of whether the policy is being set up via a user interface, API, or command-line interface. The emphasis is on the intent, not the specific security controls or the access method. In fact, instead of working through a configuration UI, the intent could be stated in a plain sentence, letting the system understand and implement it.

By applying Generative AI techniques in tandem with the principles of few-shot learning, a system can efficiently transform these intent-based security policies into actionable policy directives.
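As an illustration of what that transformation might look like, here is a minimal sketch. It assumes a hypothetical call_llm() helper standing in for whichever model endpoint is available, and the few-shot examples and policy schema are invented for this sketch rather than drawn from a Cisco Secure Access API.

```python
import json

# Few-shot examples pairing a plain-language intent with a structured
# policy directive. The schema here is illustrative, not a product API.
FEW_SHOT_EXAMPLES = [
    {
        "intent": "Contractors may reach the public internet but not the "
                  "finance share in the data center.",
        "policy": {
            "subject": {"group": "contractors"},
            "allow": [{"access": "internet"}],
            "deny": [{"access": "private-app", "app": "finance-share"}],
        },
    },
]

def build_prompt(intent: str) -> str:
    """Assemble a few-shot prompt asking the model to emit policy JSON."""
    parts = ["Translate each intent into a JSON policy directive.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Intent: {ex['intent']}\nPolicy: {json.dumps(ex['policy'])}\n")
    parts.append(f"Intent: {intent}\nPolicy:")
    return "\n".join(parts)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint is available; returns JSON text."""
    raise NotImplementedError("wire this to your model of choice")

def intent_to_policy(intent: str) -> dict:
    """Turn a plain-language intent into a structured, enforceable directive."""
    raw = call_llm(build_prompt(intent))
    return json.loads(raw)
```

In practice, the generated directive would be validated against the real policy schema before being pushed to the enforcement points.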

II. Addressing the Challenge of AI-Generated Content with AI-Assisted DLP

As workplaces increasingly adopt tools like ChatGPT and other Generative AI (GenAI) platforms, interesting challenges for data protection are emerging. Care must be taken when handling sensitive data within GenAI tools, as unintentional data leaks could occur. Leading Firewall and Data Loss Prevention (DLP) vendors, such as Cisco, have introduced functionality to prevent sensitive data from being inadvertently shared with these AI applications. 

But let’s flip the scenario:

What if someone uses a content-generating AI tool to create a document or source code that finds its way into the company’s legal documents or product? The potential legal ramifications of such actions could be severe, and cases have already been reported where AI was used inappropriately, leading to potential sanctions. Furthermore, there needs to be a mechanism to detect deliberate variations of such documents and source code that may have been copied and pasted into the company’s product.

Owing to the rich internal representations of text in large language models (LLMs), these DLP use cases can be supported with high accuracy.
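How such detection works inside a commercial product is proprietary; as a generic sketch of the underlying idea, the example below uses the open-source sentence-transformers library and an example embedding model to flag text whose embedding sits close to known AI-generated content. The model choice, threshold, and tracked corpus are all assumptions made for illustration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # open-source stand-in

# Known AI-generated snippets the organization wants to track (illustrative).
KNOWN_GENERATED = [
    "def add_numbers(a, b):\n    return a + b",
    "This agreement shall be governed by the laws of the State of Delaware.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice
known_vecs = model.encode(KNOWN_GENERATED, normalize_embeddings=True)

def looks_like_known_variant(text: str, threshold: float = 0.85) -> bool:
    """Flag text whose embedding is close to any tracked AI-generated item.

    A lightly edited (copied and tweaked) variant usually stays close to the
    original in embedding space even when exact-match DLP rules miss it.
    """
    vec = model.encode([text], normalize_embeddings=True)[0]
    similarity = float(np.max(known_vecs @ vec))  # cosine, since vectors are unit-norm
    return similarity >= threshold

print(looks_like_known_variant("def add_numbers(x, y):\n    return x + y"))
```

In a production DLP pipeline, a similarity check like this would be combined with classifiers tuned to recognize AI-generated text and with the who, when, and where context mentioned below.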

Cisco Secure Access includes a Security Assistant, currently in beta, that uses LLMs not only to create policies based on intent but also to detect ChatGPT and other AI-generated source code, including its variants, while providing sufficient context around who generated the content, when, and from where.

In summary: the next-gen cybersecurity landscape, with its unified management and intent-based security policies, is here. It’s poised to revolutionize how we implement and manage security, even as we grapple with new challenges posed by AI-generated content.

For more information on Cisco Secure Access, check out:

1. Introducing Cisco Secure Access: Better for users, easier for IT, safer for everyone

2. Protect your hybrid workforce with cloud-agile security


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!




Authors

Prabhat Singh

Vice President of Engineering

Cloud and Network Security