Where do today’s privacy lawyers see problems ahead in a world full of generative AI chatbots? At ISC2 Security Congress in Nashville, a panel of legal experts shared their views: Scott Giordano (attorney in AI, privacy, and cybersecurity, Giordano AI Law), David Patariu (privacy attorney, Venable LLP), John Bates (senior manager in the Cybersecurity Program Transformation capability, EY), and John Bandler (founder & principal, Bandler Law PLLC).
Today’s cybersecurity world faces a wide range of security challenges caused by the sudden adoption of generative artificial intelligence (AI) chatbots such as ChatGPT and Bard. One view is that these tools are still immature, are not being widely used, and perhaps shouldn’t be widely used until they have been thoroughly proven. In the view of a distinguished panel of legal experts at the ISC2 Security Congress session What ChatGPT Means for Your Infosec Program, not only are the tools already being widely used, but that use will quickly overwhelm anyone who tries to hold it back with old-world policies.
The threats that generative AI tools pose to information security are as numerous as they are unfamiliar. Nonetheless, they can be broken down into three broad areas: the training data fed to the machine learning (ML) system (including how it was collected and whether it is reliable), the ML model itself and how it was trained, and finally the chatbot prompt itself and how it might be manipulated to reveal something undesirable or simply used naively.
Today, chatbot risk is often understood in terms of adversarial attacks on the ML systems or their data or software. However, it also extends to generating false content (e.g., deepfakes), misusing systems (creating polymorphic malware), conducting social engineering attacks based on different forms of impersonation, finding and exploiting novel vulnerabilities, and conducting massive data scraping of sensitive content.
These possibilities are not entirely hypothetical: surveys of cybersecurity professionals have recently recorded a rise in detected incidents that were aided in some way by chatbots. Indeed, the growing prevalence of deepfakes and social engineering points to a coming crisis in authentication and identity – how we know someone is who they say they are – that the world seems under-prepared for.
Walking into trouble
“You can’t tell people not to do this,” said EY’s John Bates. “That is not the answer, emphatically. That equals shadow IT.” In fact, people are using these tools – particularly in marketing and sales – which means the cybersecurity team is often the last to find out. According to Bates, the best model for approaching these risks is to reuse the template from cloud security a decade ago.
An important issue is that the licensing terms governing how these tools are used vary; ChatGPT’s, for example, are not the same as Google Bard’s. This requires organizations to do more thorough due diligence on their technical as well as legal risks within a governance framework. Bates suggested that organizations might need to consider forming a dedicated generative AI steering committee, drawn from legal as well as engineering and cybersecurity teams, to deal with these issues. “How do we find an efficient, easy way to use this [generative AI] that doesn’t blow us up?” Organizations already have the tools and processes to cope with this, such as business continuity, third-party risk management, data governance, and data protection, he said. What is required is vigilance without trying to ignore or downplay the issue.
Privacy and hallucination
All four speakers drew attention to the fact that case law affecting the use of generative AI is now expanding rapidly, with a slew of cases relating to privacy in particular. The challenge for ML systems is their need for data, but that need must accommodate the rights of end users, including copyright. At the extreme end of this is the tendency of generative AI systems to hallucinate facts (including about real people) in ways that can be inaccurate and legally risky. Much of this risk is difficult to see, exposing organizations that use generative AI to unknown liabilities across a swathe of legal situations.
The first privacy risk concerned the company and personal data used to train the models.
Scott Giordano of Giordano AI Law described this as “open season.” Right now, AI companies are still hiding behind the technical defense that what they are doing does not violate copyright because generative AI is based on a mathematical representation rather than copyrighted data.
“To build an LLM, you have to gather lots of data. The problem is that lots of that data is our data. Right now, we are in a place where unauthorized web scraping is being fought in court. At the moment, it’s a losing battle and it may require Congress to step in and say you can’t do this,” said Giordano.
Venable LLP attorney David Patariu agreed that the legal situation around company privacy is still unclear. Compounding this are more subtle second-order effects, for example AI companies using customer data to improve their models and services, something typically written into platform agreements. Equally, those improvements could end up aiding other customers, including a company’s rivals.
“I get that my data is my data. But what about the learning from my data and putting it into your model?”
John Bandler of Bandler Law PLLC drew the audience’s attention to the wild inaccuracy of chatbots, something which has already caught out lawyers. As a tool, generative AI saves time, but at the expense of inventing facts and events.
“A lawyer went to ChatGPT looking for case law and it spat out a fictitious case that never happened. When they were called on it by the judge, who thought it wasn’t right, they doubled down.”
Ultimately, cybersecurity practitioners should not assume that the risk of generative AI is purely about the systems themselves (their software, ML models, or data) being compromised. Just as important are the legal risks connected to privacy, copyright, IP, and the abuse of hidden flaws in systems that are otherwise being legitimately accessed (e.g., APIs, data scraping). The same principles also apply to internal ML projects – the system is not a black box that can be taken as read. Someone has to check the data and models for inadvertent bias that could lead to additional risk.
- ISC2 Security Congress took place October 25-27, 2023 in Nashville, TN and virtually. On-demand replays of the sessions are available now.
- ISC2 SECURE Washington, DC takes place in-person on December 1, 2023 at the Ronald Reagan Building and International Trade Center. The agenda and registration details are here.
- ISC2 SECURE Asia Pacific takes place in-person on December 6-7, 2023 at the Marina Bay Sands Convention Centre in Singapore. Find out more and register here.
- Register your interest in ISC2 Security Congress 2024 in Las Vegas here.