Cybersecurity professionals will have a critical role to play as their organizations develop and deploy AI, a panel of legal experts told attendees at ISC2 Security Congress in Nashville, Tennessee this week, given that infosecurity disciplines and tasks are woven throughout the NIST AI Risk Management Framework (RMF).
The NIST AI RMF was unveiled in January 2023, coincidentally shortly after the launch of ChatGPT. It aims to help organizations manage the risks around designing, developing, deploying, or using AI.
It spells out potential harms to individuals, organizations and ecosystems, and the characteristics of trustworthy AI – including that it be “secure and resilient” and “privacy-enhanced”. It also lays out “core functions” for achieving this, with governance at its heart.
Legal standards
Adam Cohen of Baker Hostetler said, “Standards in law, when it comes to cybersecurity, come from industry best practices.” With a paucity of case law in this area, he continued, “When you’re looking for an anchor to explain why what you’ve done is reasonable, these kinds of frameworks are what we turn to.”
As with other security frameworks, it’s not going to be mandatory, he said. “But this will help you in looking at these issues and having a structured way to do that – by showing that you align with a standard that can support your justification for how you did things or a legal defensibility.”
More practically, the NIST AI RMF Playbook sets out “suggested actions for achieving the outcomes” described in the framework. Infosec professionals should expect to be involved in virtually every aspect of the seven AI system lifecycle stages laid out in the playbook, the panel said. They highlighted the framework’s focus on ensuring the “resilience” of AI systems.
The framework spells out responsibilities around the planning and designing of AI systems, including how to build in security from the outset, and the obvious confidentiality and security implications around the collection and processing of training data.
There are also specific infosec-related aspects to the build and use, verification and validation, deployment and use, and use or impacted stages, ranging from detecting hidden functionality and red teaming to vulnerability disclosure and bug bounties.
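To make that mapping concrete, here is a minimal, illustrative sketch in Python of how a security team might inventory its responsibilities across the lifecycle stages. The stage names follow the playbook, but the tasks attached to each stage are assumptions for illustration, drawn from the panel’s discussion rather than the playbook’s own text.

```python
# Illustrative sketch only: mapping the AI RMF playbook's lifecycle stages
# to example infosec tasks. The stage names follow the playbook; the task
# assignments are assumptions for illustration, not NIST's own mapping.
LIFECYCLE_SECURITY_TASKS: dict[str, list[str]] = {
    "Plan and Design":          ["threat modeling", "security requirements"],
    "Collect and Process Data": ["data confidentiality", "access controls"],
    "Build and Use Model":      ["detecting hidden functionality", "supply-chain review"],
    "Verify and Validate":      ["red teaming", "adversarial testing"],
    "Deploy and Use":           ["vulnerability disclosure", "bug bounty program"],
    "Operate and Monitor":      ["logging and monitoring", "patch management"],
    "Use or Impacted By":       ["abuse and incident reporting channels"],
}

if __name__ == "__main__":
    # Print a simple responsibility checklist, one lifecycle stage per line.
    for stage, tasks in LIFECYCLE_SECURITY_TASKS.items():
        print(f"{stage}: {', '.join(tasks)}")
```

Even this toy inventory makes the panel’s point visible: security tasks appear at every stage, not just at deployment.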
Cohen noted that the framework was not focused solely on information security risk. Nor, he added, should cybersecurity professionals be under the illusion that any of this detracts from or supplants their existing range of duties.
“It doesn’t mean that you’re going to use this instead of all the other ways or frameworks or organizing principles, or elements of your security program that you use to approach other kinds of applications,” said Cohen. “You’re still going to have to think about vulnerability and patch management, you’re going to have to think about logging and monitoring.”
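As a concrete illustration of that point, here is a minimal sketch of what applying an ordinary logging-and-monitoring discipline to an AI application might look like. The `call_model` function is a hypothetical stand-in for whatever inference API an organization actually uses; nothing here is specific to any vendor or framework.

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-app")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real inference API call."""
    return f"(model response to: {prompt[:40]})"

def logged_inference(prompt: str, user_id: str) -> str:
    """Wrap a model call with the same logging discipline used for any
    other application component: request IDs, latency, and failures."""
    request_id = uuid.uuid4().hex
    start = time.monotonic()
    log.info("model_request id=%s user=%s prompt_chars=%d",
             request_id, user_id, len(prompt))
    try:
        response = call_model(prompt)
    except Exception:
        log.exception("model_error id=%s user=%s", request_id, user_id)
        raise
    elapsed_ms = (time.monotonic() - start) * 1000
    log.info("model_response id=%s latency_ms=%.1f response_chars=%d",
             request_id, elapsed_ms, len(response))
    return response

if __name__ == "__main__":
    print(logged_inference("Summarize the NIST AI RMF core functions.",
                           user_id="u-123"))
```

Note that the sketch logs request IDs, latency and character counts rather than raw prompt text – keeping the audit trail useful without turning the logs themselves into a store of sensitive user or training data.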
Understanding the legalities of AI
While AI tools might have some unique characteristics and risks, Cohen said, “that doesn’t mean you take a completely different approach in dealing with it from a security point of view.”
David Patariu of Venable LLP added that when it came to AI, there were many “fuzzy” areas. “AI’s new, so it’s a little unknown [as to] what is reasonable.”
Nevertheless, said Patariu, the framework gave a solid foundation and “a lot of tools and ways to think about how to assess risk, how to get the right process in place.”
However, security pros need to be aware of the broader legal context too, the panelists said. While the current state of the US Congress means it’s unlikely there will be federal legislation any time soon, as with privacy, state legislation can fill the gap. And the EU’s AI Act will effectively become the “law of the land”, at least for organizations operating across borders.
Ultimately, when considering how AI affects security at your organization, Cohen said, professionals had to “go back to fundamentals of your security programs and apply them to these kind of applications. Don’t think this is new and unique… if you’re not doing these fundamentals, you’re lost.”