The Real-World Impact of AI on Cybersecurity Professionals

"AI in Cyber 2024: Is the Cybersecurity Profession Ready?", a survey of ISC2 members, revealed that 88% are already seeing AI impact their existing roles, with most seeing positives in the form of improved efficiency despite concerns over the redundancy of human tasks.


After decades of being a construct of science fiction, artificial intelligence (AI), or at least advanced machine learning (ML), has made the leap from fiction to reality. As a concept and technology, it has enjoyed an exceptional acceleration in development and capability in the last decade.


That progress has particularly manifested itself in public-facing large language models (LLMs) such as ChatGPT, Google PaLM and Gemini, Meta’s LLaMA and more, all of which are forms of extensive generative AI that can be leveraged to do almost anything, from creating a document on historic baseball wins to correctly writing and targeting a phishing email in the language of a criminal’s choosing. AI is everywhere and, while the cybersecurity industry was quick to adopt AI and ML as part of its latest generation of defensive and monitoring technologies, so too have the bad actors, who are leaning on the same technology to elevate the sophistication, speed and accuracy of their own cybercrime activities.


ISC2 conducted a survey titled “AI in Cyber 2024: Is the Cybersecurity Profession Ready?” of 1,123 members who work, or have worked, in roles with security responsibilities to understand the realities of how AI is impacting everyday cybersecurity roles and tasks, as opposed to the perception of how its use intersects with the roles of professionals.


It’s Already Here


A combined 54% of respondents note that there has been a substantial increase in cyber threats over the last six months. That’s made up of 13% who are confident they can directly link that increase to AI-generated threats, and 41% who are unable to make a definitive connection. Arguably, if an AI LLM is working well, you won’t know the difference between automated and human-based attacks. The clues will be more nuanced, such as a speed and repetition of attack that appear implausibly fast for a human (or a room full of humans) to conduct.


Furthermore, 35% of those surveyed stated that AI is already impacting their daily job function. This is not a positive or negative measure, just recognition of the fact that AI plays a role, be that dealing with AI-driven attacks or working with AI-based tools such as automated monitoring applications and AI-driven heuristic scanning. Add to the above those who believe that AI will impact their job in the near future, and we see that more than eight in 10 (88%) expect AI to significantly impact their jobs over the next couple of years.


AI is Perceived as a Positive


How will that impact actually manifest: as positive or negative input or interference? The survey respondents are highly positive about the potential for AI. Overall, 82% agree that AI will improve job efficiency for them as cybersecurity professionals. That is countered by 56% also noting that AI will make some parts of their job obsolete. Again, the obsolescence of job functions isn’t necessarily a negative, but rather a recognition of the evolving nature of the role of people in cybersecurity in the face of rapidly evolving and autonomous software solutions, particularly those charged with carrying out repetitive and time-consuming cybersecurity tasks.


When we asked respondents which job roles are being impacted by AI and ML, it quickly became clear that it’s the time-consuming and lower-value functions – precisely the area where organizations would rather not have skilled people tied up. For example:

  • 81% noted the scope for AI and ML to support analyzing user behavior patterns
  • This was followed by 75% who mentioned automating repetitive tasks
  • 71% who see it being used to monitor network traffic for signs of malware
  • A joint 62% who see it being used for predicting areas of weakness in the IT estate (also known as testing the fences) as well as automatically detecting and blocking threats

These functions all point to efficiencies rather than reductions in cybersecurity roles. With a global cybersecurity workforce of 5,452,732 according to the latest ISC2 Cybersecurity Workforce Study, combined with a global workforce gap of 3,999,964, it is unlikely that AI is going to make major inroads into closing the supply and demand divide, but it will play a meaningful role in allowing those 5,452,732 professionals to focus on more complex, high-value and critical tasks, perhaps alleviating some of the workforce pressure.


AI as a Threat


Aside from how AI has become a key tool for the cybersecurity industry itself, we must not overlook the role it is playing within the criminal community to support attacks and other malicious activities. Three quarters of those surveyed were concerned that AI is, or will be, used as a means for launching cyberattacks and other malicious criminal acts.


With an appreciation that the technology is already serving both sides, it was noted that right now, the current level of AI and ML maturity is still more likely to benefit cybersecurity professionals than it is to help the criminals. But it is a slim margin: only 28% agreed, while 37% disagreed, leaving nearly a third (32%) still unsure whether AI is more of a help to cybersecurity than a hindrance.


What are the threats being launched off the back of AI technology? We asked respondents to list the ones that most concern them.


There was a very clear bias towards misinformation-based attacks and activities at the top of the list:

  • Deepfakes (76%)
  • Disinformation campaigns (70%)
  • Social engineering (64%)

This year is set to be one of the biggest for democratic elections in history, with leadership elections taking place in seven of the world’s 10 most populous nations:

  • Bangladesh
  • India
  • U.S.
  • Indonesia
  • Pakistan
  • Russia
  • Mexico

Alongside these, major elections are taking place in key economic and resource-rich nations and blocs including the U.K., E.U., Iceland, South Africa, South Korea, Azerbaijan, Republic of Ireland and Venezuela.


These are alongside regional and mayoral elections that in total will see 73 countries (plus all 27 members of the European Union) go to the polls at least once in 2024. The U.K. will be the busiest at the ballot box, with 11 elections in total: 10 major regional elections, plus a general election for a new government and Prime Minister.


The fact that cybersecurity professionals are pointing to these types of information and deception attacks as their biggest concern is understandably a great worry for organizations, governments and citizens alike in this highly political year.


Other key AI-driven concerns are not attack-based, but more regulatory and best-practice driven. These include:

  • The current lack of regulation (59%)
  • Ethical concerns (57%)
  • Privacy invasion (55%)
  • The risk of data poisoning – intentional or accidental (52%)

Attack-based concerns do start to re-emerge further down the list of responses, with adversarial attacks (47%), IP theft (41%) and unauthorized access (35%) all noted by respondents as concerns.


Strangely, at the bottom of the list was enhanced threat detection: 16% of respondents are concerned that AI-based systems might be too good at what they do, potentially creating a myriad of issues for human operators, including false positives or overstepping into user privacy.


How Can We Be Better Prepared?


By their own admission, survey respondents noted there’s room for improvement, but their preparedness for an influx of AI technology and AI-based threats is not all negative:

  • Some 60% say they could confidently lead the rollout of AI in their organization
  • Meanwhile, only a quarter (26%) said they were not prepared to deal with AI-driven security issues
  • More concerning was that four in 10 (41%) said they have little or no experience in AI or ML, while a fifth (21%) said they don’t know enough about AI to mitigate concerns

The conclusion is clear – education on AI use, best practice and efficiency is a crucial requirement for all organizations today.


Creating Effective AI Policy


Despite their recent acceleration in technological capability and advancement, AI and ML are both still in their technological infancy. This is brought into stark relief by the lack of organizational policy (as well as government regulation) regarding the acquisition, use, access to data and trust placed in AI technology:

  • Of the cybersecurity professionals we surveyed, only a quarter (27%) said their organizations had a formal policy in place on the safe and ethical use of AI
  • Only 15% said they had a policy that covered securing and deploying AI technology today
  • On the other side of the two questions, we find nearly 40% of respondents are still trying to develop a policy and a position. That’s 39% on ethics and 38% on safe and secure deployment
  • Meanwhile, almost a fifth (18%) said their organizations have no plans to create a formal policy on AI in the near future
  • Nearly one in five organizations surveyed are not ready for, or preparing for, AI technology either in or interfacing with their operations

Arguably, this is a cause for concern, particularly given that government regulation in many major economies has yet to catch up with the use of AI technology. That said, four out of five respondents do see a clear need for comprehensive and specific regulations governing the safe and ethical use of AI:

  • Respondents were clear that governments need to take more of a lead if organizational policy is to catch up, even though 72% agreed that different types of AI will need their own tailored regulations
  • Regardless, 63% said regulation of AI should come from collaborative government efforts (ensuring standardization across borders), with 54% wanting national governments to take the lead
  • In addition, 61% would also like to see AI experts coming together to support the regulation effort, while 28% favor private sector self-regulation
  • Only 3% want to retain the current unregulated environment

Given the lack of policy and regulation, how are organizations governing who should have access to AI technology? The fact is, there is no common approach:

  • The survey revealed that 12% of respondents said their organizations had blocked all access to generative AI tools in the workplace. So, no using ChatGPT to write that difficult customer letter any time soon!
  • However, 17% are not even discussing the issue, while around a third each are allowing access to some (32%) or all (29%) generative AI tools in the workplace
  • A further 10% simply don’t know what their organization is doing in this regard

What Does This Mean for You?


Ultimately, this study and the feedback of ISC2 members working on the front line of cybersecurity highlight that we are in an environment where threats are rising, at least partly due to the use of AI, at a time when the workforce is struggling to grow to meet demand.


There are no standards on how organizations are approaching internal AI regulation, with little government-driven legal regulation at this stage. Cybersecurity professionals want meaningful regulations governing the safe and ethical use of AI to serve as a catalyst for formalizing operational policy.


There is no doubt that AI is going to change and redefine many cybersecurity roles, removing people from certain high-speed and repetitive tasks, but it is less likely to eliminate human cybersecurity jobs completely. However, as the technology is still in its infancy, education is paramount. Education will play a critical role in ensuring that today’s and tomorrow’s cybersecurity professionals are able to adapt to and fully utilize the efficiency and operational benefits of AI technology, maintain clear ethical boundaries and respond to AI-driven threats. Seeking out education opportunities is essential for cybersecurity professionals to continue playing a leading role in the evolution of AI in the sector.

