ISC2 Congress: Former Twitter AI ethics chief calls for legal protection for algorithmic auditors

Responsible AI is “the younger cousin of privacy and security”, Twitter’s former head of AI ethics told ISC2 Congress in a keynote session in Nashville, Tennessee this week, as she laid out a vision for how society can audit algorithms and ensure accountability in the post-ChatGPT era.

Dr Rumman Chowdhury, now chief scientist at Parity Consulting, told the audience that the conversations around AI today look much like the ones she was having in 2017, when she was heading up Accenture’s responsible AI practice.

“Will AI come alive and blow up the planet? Will I have my job because of AI?” At the time, she said, “a lot of us worked to actually shift that narrative towards the things that matter to us in our everyday lives.”

Generative AI changing the conversation

While Chowdhury noted that generative AI had initiated a whole new wave of thinking about AI, the questions were essentially the same. She added that what is different is “the accessibility of these systems” and the breadth and depth of the data used to train them.

At the same time, there had been advances in thinking about governance: “A lot of companies have very established teams, even have established governance practices… So we’re smarter about the impacts of AI systems.”

Policy and regulatory efforts have also progressed, with the likes of the EU’s Digital Services Act, the upcoming AI Act, and NIST’s AI Risk Management Framework. The DSA, for instance, says that companies must prove their algorithmic systems don’t violate human rights or introduce gender-based violence.

While government interest and regulation are necessary, Chowdhury said, this should not come at the expense of collaboration and open access, or end up stifling innovation. “There’s a scary idea of some sort of malicious agent that’s getting this code and they’re doing terrible things. And bad actors exist. … But it ignores all the good things that happen when code is shared freely, data is shared freely.”

But, Chowdhury continued, while there was a “robust civil society” around AI, with people who are adept in legal and societal impacts, “what’s missing in our little world is the cultivation of independent third parties that are technical in nature.”

The road to legislating AI

According to Chowdhury, legislation could open the path for algorithmic evaluation, while APIs make it easier to interact with models. “My passion, and what I’m working on now, is creating this robust, individual, independent third-party ecosystem to provide external review and accountability.”

It’s not entirely clear what algorithmic auditing should look like, and appropriate certifications and standards have yet to emerge, she said. “Anybody can decide to go on LinkedIn and say I’m an auditor; there are plenty of people who’ve popped up training on algorithmic auditing. But we don’t actually have a way of certifying who is an auditor.”

Perhaps more concerning is the uncertainty around the legal framework for testing models or exposing vulnerabilities.

“Because let’s say you do find a problem with a model, where do you go?” she said. “Who is the authoritative body to decide whether or not this is truly a vulnerability? What is the company’s responsibility of responding to that vulnerability that you found? Again, we have none of these institutions at the moment.”

Chowdhury highlighted that she had told a congressional hearing last June that “we actually need support for AI model access to enable independent research and auditing capabilities… people actually do this work, but they do it in a very ad hoc fashion.”

And, crucially, she said, “and I learned this from the security community,” there is a need for “legal protections for independent red teaming and ethical hacking.”

Chowdhury said her non-profit organization, Humane Intelligence, had sponsored a competition at DEF CON to uncover harms in AI models, and was working to expand alternative pathways into this world. “The point we are making is that you don’t actually have to have a PhD in order to audit an algorithm.”

The organization has a public policy paper, produced in conjunction with NIST, scheduled for early 2024, along with a series of research papers. It also aims to launch an open-source evaluation platform and offer for-profit services in red-teaming and model testing.

Ultimately, Chowdhury said, the aim is to build an open access community “where everybody can be part of evaluating and auditing AI models. This is important. And it’s critically important to do this in a way that still protects security and privacy.”

n ]]>
