Demystifying Generative AI Security

nGenerative AI technology has the potential to improve, simplify and automatenmany things. AsnDevdatta Mulgund, CISSP, CCSP explains, these potential benefits do comenwith a cybersecurity overhead that can be more complex to deal with thannit seems.n

n

nA world using Generative AI technology has the potential to unlock manynbenefits. At the same time, it introduces significant security challenges,nso organizations and users must do proper due diligence before implementingnthis technology.n

n

nThe list of major security and privacy concerns when an organization adoptsnGenerative AI technology is extensive, and includes:n

n
    n
  • n Sensitive Information Disclosuren
  • n
  • n Data storagen
  • n
  • n Compliancen
  • n
  • n Information Leakagen
  • n
  • n Model Securityn
  • n
  • n Vulnerabilities in Generative AI toolsn
  • n
  • n Bias and fairnessn
  • n
  • n Transparencyn
  • n
  • n Trustn
  • n
  • n Ethicsn
  • n
  • n Infringement of intellectual property and copyright lawsn
  • n
  • n Deepfakesn
  • n
  • n Hallucinations (nonsensical or inaccurate)n
  • n
  • n Malicious attacksn
  • n
n

nThere is plenty of information available on these concerns and on thenproactive measures that organizations can take to address them. Typically,nthese measures include the creation of organizational policies, datananonymization, the principle of minimum privilege, threat modelling,npreventing data leaks, secure deployment and security audits.n

n

nHere, I want to share with you how we addressed these issues in mynorganization.n

n

nA Holistic Approach with Specific Measuresn

n

nWe adopted a holistic approach to Generative AI security. This encompassesnthe entire AI lifecycle, including data collection and handling, modelndevelopment and training, and model inference and use. Simultaneously, wensecured the infrastructure on which the AI model was built and run. Finally,nwe established an AI governance process in the organization. We achievednthis through the following, practical measures:n

n

nAcceptable Usage Policy (AUP).nWe established an AUP for Generative AI tools, outlining the principles thatnemployees of our organization must follow when using Generative AI tools.nThe purpose of the policy is to ensure that employees use Generative AInsystems in a manner that is consistent with our organization’s values andnethical standards. For example, our policy states that employee should notnsubmit any PII data or copyrighted materials to Generative AI tools. As partnof this initiative, we delivered employee awareness training and educationnto educate the employees to ensure they understand these policies and knownhow to follow them.n

n

nData security. To secure the data at rest and in transit,nwe started with a data discovery and classification process to establish thensensitivity of data and determine which data should goes into Generative AInmodel. To protect sensitive data in the training data sets, we anonymizednsensitive data in the training data sets, we encrypt the data at rest and inntransit with strong encryption algorithms to ensure the confidentiality ofndata, and we restrict access to AI training data sets and the underlying ITninfrastructure by implementing access controls within the enterprisenenvironment. The risks of data leakage lie primarily at the applicationnlayer, rather than the chat LLM layer, OpenAI. so, we built a customnfront-end application that replaces the ChatGPT interface and leverages thenChat LLM APIs (OpenAI) directly that way we bypassed the ChatGPT applicationnand mitigated the risk of losing the sensitive data. We prioritize thensecure data capture & storage at the application level to mitigate risknassociated with data breaches & privacy violations.n

n

nTo ensure sensitive data remains under direct control, we created a sandboxnto isolate the sensitive and non-sensitive data. The sandbox acts as thengateway for the consumption of LLM services, and other filters are added tonsafeguard data and reduce bias.n

n

nWe conducted security risk assessments to evaluate security risk associatednwith Generative AI applications. To manage the risk associated with shadownIT, we ran the Cloud Discovery scan to see what was happening in ournorganization’s network.n

n

nSecuring our AI model. Supply chain attacks are commonnduring the development of Generative AI systems, due to the heavy use ofnpretrained, open-source ML models readily available on the Internet to speednup the development efforts. Application programming interface (API) attacksnare another concern. Organizations rely on APIs to consume the capabilitiesnof prepackaged, pretrained models due to the limitations of the resources ornexpertise to build their own large language models (LLMs). Attackersnrecognize this will be a major consumption model for LLMs and will look tontarget the API interfaces to access and exploit data being transportednacross the APIs.n

n

nData is at the core of large language models (LLMs) and using models thatnwere partially trained on bad data can destroy your results and reputation.nThe outputs of Generative AI systems are only as unbiased and valuable asnthe data they were trained on. Inadvertent plagiarism, copyrightninfringement, bias, and deliberate manipulation are several obvious examplesnof bad training data.n

n

nTo address this concern, we take the following proactive measures to ensurenthe security of our AI model:n

n
    n
  • n We continuously scanning for vulnerabilities, malware, and corruption acrossn the AI/ML pipelinen
  • n
  • n We review and harden all API and plug-in integrations to third-party modelsn
  • n
  • n We configure enforcement policies, controls and RBAC around ML models,n artifacts and data sets. This ensures that no one person or thing has accessn to all the data or all of the model functionsn
  • n
  • n We are always open and upfront about the data used to train the model ton provide much needed clarity across the businessn
  • n
  • n We created guidelines around bias, privacy, IP rights, provenance, andn transparency to give direction to employees as they make decisions aboutn when and how to use Generative AIn
  • n
  • n We used Reinforcement Learning with Humann Feedback (RLHF) to fine tune the model and secure it from harmn
  • n
n

nWe also need to address possible attempts by hackers to use maliciousnprompts to jailbreak models and get unwarranted access, steal sensitivendata, or introduce bias into outputs. one example is a “prompt injection”nattack, where a model is instructed to deliver a false or bad response fornnefarious ends. For instance, including words like “ignore all previousndirections” in a prompt could bypass controls that developers have added tonthe system, another concern involves model denial of service, wherenattackers overwhelm the LLM with inputs that degrade the quality of servicenand incur high resource costs. We also consider and secure against modelntheft, in which attackers craft inputs to harvest model outputs and train ansurrogate model that mimics the behavior of the target model. To ensure thensecurity of our Generative AI tools from different types of attacks, wenintroduced and take the following steps:n

n
    n
  • n Monitoring for malicious inputs such as prompt injections and outputsn containing sensitive data or inappropriate contentn
  • n
  • n Implemented new defenses such as machine learning detection and responsen (MLDR), which can detect and respond to AI-specific attacks such as datan poisoning, model evasion and model extractionn
  • n
  • n Configured alerts and integrated with our Security Information and Eventn Management (SIEM)
  • n
  • Focus on our network infrastructure security. This is important inn Generative AI security, because LLMs are built and run on the top ofn infrastructure. To secure the infrastructure we deployed the AI systems on an dedicated network segment. Using a separate network segment with restrictedn access to host AI tools to enhance both security and availability. wen hardened the OS, databases, network devices to ensure the security aroundn the Generative AI toolsn
  • n
  • n Conduct regular penetration testing exercises against Generative AI tools,n aiming to identify security vulnerabilities before deployment inton production environmentn
  • n
  • n We established a governance process around Generative AI tools, whichn includes everything from policies, designated roles and responsibilities,n security controls, and security awareness training. All relevantn documentation is readily available for employees when necessaryn
  • n
n

nGenerative AI is an emerging technology that makes our lives easy and maynpositively impact our everyday lives. But every technology comes withninherent security risks that business leaders must understand; so, beforenimplementing Generative AI solutions, organizations should do proper duendiligence. You must strike a proper balance between agility and security fornthe benefit of your organization.n

n

nThere are ever evolving models, frameworks, and technologies available tonhelp guide AI programs forward with trust, security and privacy throughout.nFocusing on trustworthy AI strategies, trust by design, trusted AIncollaboration and continuous monitoring helps build and operate successfulnsystems.n

n

nDevdattanMulgundnn, CISSP, CCSP, has over a decade of experience in banking, financialnservices, insurance and telecoms. Mulgund currently holds a seniornsecurity consultant role, with responsibility for application securityntesting, DevSecOps integration, penetration testing, cloud security,nsecurity architecture review, container security and Kubernetesnsecurity.

n
    n
  • View previous webinar on this research “AI in Cyber: Are We Ready?”
  • n
  • ISC2 is holding a series of global strategic and operational AI workshops. Find one near you
  • n
  • Watch our webinar on “Five Ways AI Improves Cybersecurity Defenses Today”
  • n
  • Replay our two-part webinar series on the impact of AI on the cybersecurity industry: Part 1 and Part 2
  • n
]]>

Leave a Comment

Your email address will not be published. Required fields are marked *