Artificial intelligence (AI) has been all over the media lately, especially since the rise of OpenAI’s ChatGPT. It has people in every industry asking: will it take over my job?
Specifically, if you’re a cybersecurity professional, you might be wondering: will AI take over cybersecurity? Will AI replace cybersecurity jobs?
Generative AI like ChatGPT has proven itself capable of producing misinformation, spam, and custom-tailored social engineering attacks at a volume never seen before.
A study from SlashNext shows a 1,265% increase in phishing emails since ChatGPT launched. With generative AI, phishing attacks can be more targeted and sophisticated than ever, especially in a world where everyone’s social media profile is just a search away.
These AI-powered cyber attacks threaten not just individuals but also enterprises. The world needs cybersecurity experts and analysts now more than ever to detect and respond to this massive new volume of threats.
In this post, I’ll go over why I think demand for creative human minds in the cybersecurity space will be at an all-time high, and what the future of cybersecurity holds in a world polluted with generative AI threats.
You’ll find yourself asking a different question: how many cybersecurity jobs will AI create?
First, as mundane as it may sound, we need to understand the exact nature of what a cybersecurity professional does.
Cybersecurity professionals are analysts and engineers concerned with mitigating and responding to cyberthreats.
They can do anything from managing and monitoring a business’s hardware and software, to conducting internal security audits. In large organizations, you’ll find at least one cybersecurity professional (and likely entire teams of security experts) working to maintain the business’s digital integrity.
Cybersecurity professionals often focus on two types of cybersecurity: offensive security (called the “red team”) and defensive security (called the “blue team”). Offensive security focuses on identifying and exposing vulnerabilities in a business’s infrastructure, while defensive security focuses on adding proactive security measures to prevent breaches.
A lot of the job is collaborative and grounded in logic and analysis. For example, a security engineer could identify (using software like CrowdStrike) that an attacker has infiltrated a business’s network. Their next steps would be to analyze which endpoints (subnets, computers, servers) have been compromised, determine how to safely quarantine the threat, and close the underlying vulnerability altogether. Remediating a breach like that might require a cybersecurity analyst to reach out to several different teams (engineering, human resources, marketing) and vendors to fully address the situation.
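To make that triage step a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the alert fields, the severity threshold, and the quarantine step are assumptions invented for this example, not the API of CrowdStrike or any other real platform.

```python
# Hypothetical example: the alert format and threshold are invented for
# illustration; no real vendor API is used here.
from dataclasses import dataclass


@dataclass
class Alert:
    host: str       # endpoint that raised the alert
    severity: int   # 1 (informational) through 10 (critical)
    indicator: str  # e.g., a file hash or domain that matched threat intel


def hosts_to_quarantine(alerts: list[Alert], threshold: int = 7) -> set[str]:
    """Collect every endpoint with at least one high-severity alert."""
    return {a.host for a in alerts if a.severity >= threshold}


alerts = [
    Alert("web-01", 9, "evil.example.com"),
    Alert("hr-laptop-3", 4, "adware.example.net"),
    Alert("db-02", 8, "badhash-abc123"),
]

for host in sorted(hosts_to_quarantine(alerts)):
    # In practice, this is where the analyst coordinates with the team
    # that owns the machine before isolating it from the network.
    print(f"Flag {host} for isolation and notify its owning team")
```

The code is trivial by design; the hard parts are exactly the ones it leaves as comments: choosing the threshold, deciding whom to notify, and confirming that isolating a host won’t take down production. Those are judgment calls, and they are why the job is collaborative.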
With that in mind, let’s move on to what people actually mean when they ask about AI, because the term is vaguer than you might think.
Nowadays, AI is a very loose term. From Alan Turing first writing of “machines that can think” to today, the meaning of AI has evolved greatly. It can now refer to anything from programs that automatically generate text captions to the software behind “self-driving” cars.
But within the last two years (since 2022), when people refer to AI they are typically talking about generative AI like ChatGPT, because that’s the latest advancement to make headlines.
Large language models (LLMs) and generative AI have made such an impact in the media because of their ability to produce text with human-like characteristics.
Models like ChatGPT can “talk like” certain people or imitate different formats and styles of writing (such as poems, movie scripts, or a particular author’s voice).
This naturally causes an existential crisis for people who see a machine produce convincing, human-like text. Hence all the uproar we have seen lately around AI.
However, there’s a lot more to AI than the latest buzz around generative AI. AI in the form of machine learning has already been used for decades to accomplish tasks once thought impossible for computers.
Machine learning has made it easier than ever to recognize patterns in all kinds of data (including images and video) and to detect abnormalities. It plays a critical role in a wide array of industries, including medicine, retail, and of course big tech like Amazon and Google.
This is just the start of what AI really is; there are plenty more variants and applications in use today. But with all the hype around AI right now, I typically assume people are talking about generative AI when they ask questions like “will AI replace my job?” After all, if machine learning were going to replace your job, it would have done so decades ago.
With an understanding of both what AI means, and the nature of a cybersecurity professional’s job, let’s get into why AI won’t replace cyber security jobs.
Cybersecurity is a creative process, one that requires the kind of logical reasoning that generative AI like ChatGPT lacks. There is currently no sign that cybersecurity jobs are at risk of being automated away by AI.
In fact, the opposite is likely. Given the volume of spam and social engineering attacks that can be created with generative AI, it should create more cybersecurity jobs than it eliminates.
The U.S. Bureau of Labor Statistics even projects that employment of information security analysts will grow 32% from 2022 to 2032.
That is not to say that no aspects of a cybersecurity analyst’s job will be automated by AI. There are plenty of useful applications of AI, like captioning, image-to-text, facial recognition, and pattern matching.
The core responsibilities of a cybersecurity analyst are to detect and respond to cyber attacks, and generative AI simply isn’t equipped to take over those responsibilities.
This is especially true when you consider all the variables in cybersecurity. Each enterprise uses different security software and hardware, has different teams to communicate with, and may even run custom services and code.
Moreover, each cyber attack can vary greatly. Responding to an attack could mean anything from isolating compromised service accounts to fixing a SQL injection bug found in a service.
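To make the second case concrete, here is a minimal sketch of a SQL injection bug and its standard fix (a parameterized query), using Python’s built-in sqlite3 module. The table and the payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is spliced directly into the SQL string,
# so the payload rewrites the query and matches every row.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Fixed: a parameterized query treats the input as a literal value,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice', 'admin')] -- rows leaked by the injection
print(safe)    # [] -- the payload is treated as just an odd username
```

The fix is one line, but finding every vulnerable query in a real codebase, working out whether the bug was actually exploited, and rotating whatever credentials were exposed is the part that takes a human.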
Even if you could feed an enterprise’s entire technology stack and all of its security events through ChatGPT (far more data than today’s models can ingest), there is no evidence that large language models (LLMs) would have any chance of understanding where a cyberattack originated, let alone exactly how to respond.
A report from Zscaler shows that phishing attacks increased by over 47% between 2021 and 2022, which, as they note, happens to be the same period in which generative AI models like ChatGPT became publicly available.
It is not hard to see why hackers would use ChatGPT to mass-produce social engineering attacks. It can generate believable phishing emails quickly and without the telltale spelling and grammar mistakes, and the companies that operate these AI models find it difficult to stop people from using them for nefarious purposes.
The future looks even less bright, because as generative AI advances, it will become easier to create more convincing fake content, like videos with realistic audio.
That future is closer than it might sound: today, hackers can mimic not only how someone writes, but also their voice, image, and likeness on video. This capability has already been used in countless scams, including robocalls that impersonated U.S. President Joe Biden’s voice.
Businesses will need to address this immense threat, and they will likely want to hire even more cybersecurity professionals to detect and prevent these social engineering attacks, and to handle their aftermath.
But you may have just had a brilliant idea: what if you used generative AI to detect phishing attacks? Then you might not need to hire cybersecurity engineers at all. Unfortunately, even OpenAI has confirmed that ChatGPT itself cannot reliably detect whether text is AI generated. Whichever way you look at it, you’re going to need more human intelligence to combat the endless flow of AI-based attacks.
As the laws surrounding generative AI develop, we may see additional regulations and compliance requirements. Since generative AI is trained on data found on the internet, anyone’s text (including this very blog post) can end up in a model’s training data.
This is already causing issues with copyrighted content, and it could change forever how data is made available online (e.g., more paywalled content, exclusive content, restrictions on automated access, and limits on data collection).
Upholding compliance laws like the GDPR, HIPAA, and the CCPA already gives cybersecurity professionals plenty to worry about. There are even security products dedicated to helping businesses get compliant (OneTrust, for example).
Existing compliance laws are already a lot of work to implement, and it looks like AI will only bring more rules and regulations, driving even greater demand for cybersecurity professionals to implement, maintain, and monitor controls around user data.
New laws surrounding AI appear inevitable. U.S. President Joe Biden has already issued an executive order on the use of AI that stresses the importance of weighing AI’s risks and clearly states: “Use of new technologies, such as AI, does not excuse organizations from their legal obligations”.
And who is responsible for implementing and upholding the data security and privacy laws? Ultimately, it will be up to cybersecurity professionals.
Well, in a way, it already has. But it has less to do with generative AI like ChatGPT and much more to do with machine learning.
Security platforms like CrowdStrike have been using AI to power their detection and response systems for years. These platforms monitor enterprise networks and endpoints to detect anomalies, report on security incidents, and recommend (or even automatically apply) remediation.
The use of machine learning for cybersecurity is nothing new, and it is a clear benefit to both cybersecurity analysts (it makes sifting through data easier) and businesses (which get an immediate response to cyber attacks).
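For a sense of what that looks like in miniature, here is a hedged sketch of anomaly detection using scikit-learn’s IsolationForest. The features and numbers are invented, and real platforms operate at vastly larger scale, but the principle is the same: learn what normal looks like, then flag what deviates from it.

```python
# Toy anomaly detection: the account-activity features below are made up
# purely for illustration.
from sklearn.ensemble import IsolationForest

# Each row describes one hour of activity for a service account:
# [logins_per_hour, megabytes_downloaded_per_hour]
normal_activity = [[3, 20], [5, 35], [4, 25], [2, 15], [6, 40], [3, 30]]

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A burst of logins plus a huge download looks like a compromised account.
print(model.predict([[120, 900]]))  # expected: [-1], flagged as an anomaly
print(model.predict([[4, 28]]))     # expected: [1], consistent with normal use
```

A production system adds everything the sketch skips: feature engineering over real telemetry, retraining as behavior drifts, and an analyst to decide what a flag actually means.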
But if you’re concerned about AI taking over your cybersecurity job, that likely won’t be the case. Most of the concern about AI replacing jobs is fueled by the apparent success of large language models (LLMs) like ChatGPT.
The truth is, generative AI like ChatGPT isn’t much help in doing a cybersecurity analyst’s job. It is, however, very helpful to hackers generating believable phishing attacks, which only underscores the importance of cybersecurity professionals.
The problem with cybersecurity jobs is not that AI has created a shortage of them. The reality is that there are not enough qualified individuals to fill the jobs that exist.
According to CyberSeek.org (a project funded by the U.S. Department of Commerce), as of 2024 there are over half a million cybersecurity job openings in the US alone, and over one million people employed in cybersecurity jobs. Visit CyberSeek’s website to see the latest data.
That sounds like a job market in dire need of more cybersecurity experts than ever before, not one showing signs of being replaced by AI.
And anyway, if AI were intelligent enough to fill the role of a cybersecurity professional, businesses would adopt it rather than dedicate resources to sifting through applications for a qualified candidate.
It is clear from looking at the data (whether it be from CyberSeek or the U.S. Bureau of Labor Statistics) that there is currently no shortage of cybersecurity jobs. In fact, it looks like there’s only going to be growth in the cybersecurity space.
As we approach the second anniversary of ChatGPT’s public release, there hasn’t been any decline in demand for cybersecurity professionals. But one thing is certain: there has been a rise in cyberthreats, especially social engineering attacks.
There’s no evidence that we should be concerned about generative AI like ChatGPT taking over cybersecurity jobs in the future either.
Cybersecurity is a creative job that requires collaborating across teams, interfacing with a wide variety of software and hardware, and addressing unique security threats. There’s just no indication that generative AI could take over a cybersecurity job completely.
There’s no doubt that AI in the form of machine learning has been a great benefit to the realm of cybersecurity, but that technology is nothing new nowadays. It is the standard, and it is here to stay.
While generative AI can look a lot like human intelligence, it is a convincing facade. It can resemble the sci-fi idea of “artificial general intelligence”, a machine capable of true human reasoning, when in reality it is just very good at predicting which words (or pixels) plausibly come next. It isn’t doing the sort of abstract thinking required of a cybersecurity expert.
Being concerned about generative AI taking your cybersecurity job today is like an airplane pilot being afraid of losing their job to the teleportation seen in sci-fi movies. While you can’t say it will never happen, you can be confident it won’t happen in your lifetime.