As artificial intelligence (AI) continues to evolve at a rapid pace, the capabilities of large language models (LLMs) and generative AI are transforming industries around the globe. While these technologies bring vast benefits, they also pose new challenges, particularly in the realm of cybersecurity. The increasing adoption of AI tools such as ChatGPT has led to their exploitation by malicious actors, highlighting the dual-use nature of these technologies—capable of both legitimate and harmful applications.

AI’s Growing Influence in Cyber Threats
One of the most concerning aspects of AI’s influence in cybersecurity is its potential for misuse. Cybercriminals are increasingly leveraging AI to execute sophisticated attacks that traditional defense mechanisms struggle to keep pace with. These developments cut both ways, as AI has become a tool for attackers and defenders alike.
Phishing and Scams at Scale
One of the most immediate threats posed by generative AI is its ability to scale phishing attacks. Cybercriminals are using AI tools to create highly convincing, multilingual phishing emails, fake reviews, SMS scams, and fraudulent social media content. These scams are harder to detect than traditional phishing attempts because AI eliminates typical red flags like awkward phrasing or poor grammar. With generative AI, scammers can tailor their lures to a wider audience and adapt quickly to evade detection.
The ChatGPT brand in particular has been misused in an array of scams, with cybercriminals exploiting its popularity by incorporating its name into fraudulent schemes. These include fake YouTube videos, malicious ads, and typosquatting, in which malicious Android apps use slightly altered versions of the ChatGPT name to distribute malware. The aim is to gain user trust by exploiting the reputation of a trusted AI tool, often leading to financial losses or data theft.
For instance, fake ads on platforms like Facebook use the ChatGPT name to redirect users to fraudulent investment portals. Similarly, scammers are creating fake videos featuring prominent figures like Elon Musk to promote cryptocurrency scams. Additionally, malicious browser extensions masquerading as ChatGPT are being used to steal data, distribute adware, and trick users into subscribing to unwanted services.
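To illustrate how this kind of brand impersonation can be caught on the defender's side, here is a minimal Python sketch that flags app or domain names suspiciously similar to a trusted brand. The similarity threshold and the candidate names are illustrative assumptions, not values from any real detector.

```python
# Minimal sketch: flag names that closely imitate a trusted brand such as
# "chatgpt". The threshold and candidate list are illustrative assumptions.
from difflib import SequenceMatcher

BRAND = "chatgpt"

def similarity(name: str, brand: str = BRAND) -> float:
    """Return a 0..1 similarity ratio between a candidate name and the brand."""
    return SequenceMatcher(None, name.lower(), brand).ratio()

def looks_like_typosquat(name: str, threshold: float = 0.8) -> bool:
    """Flag names suspiciously close to, but not exactly matching, the brand."""
    return name.lower() != BRAND and similarity(name) >= threshold

if __name__ == "__main__":
    candidates = ["chatgpt", "chatgtp", "chat-gpt-app", "openchat", "chatgpt4"]
    for c in candidates:
        print(f"{c:15} similarity={similarity(c):.2f} "
              f"suspicious={looks_like_typosquat(c)}")
```

Real typosquatting detectors combine such string-distance checks with signals like registration date and publisher reputation; the edit-distance idea above is only the starting point.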
AI-Generated Malware: Current Limitations
While AI models like ChatGPT show potential for generating malware, their limitations prevent them from replacing traditional methods of malware creation, at least for now. Producing fully functional, complex malware with an LLM still requires significant technical knowledge and effort, and although proof-of-concept examples of AI-generated malware exist, attackers continue to rely on established methods that are faster and more reliable. The main obstacles are the safety filters built into mainstream models and the difficulty of producing code that reliably bypasses traditional defenses. Nonetheless, the emergence of AI models without safety filters, such as WormGPT, signals a growing threat on the horizon.
AI as a Tool for Security Researchers
On the flip side, generative AI is also becoming an invaluable tool for security researchers. These tools can assist analysts in detecting malware, understanding malicious code, and writing detection rules (e.g., YARA, Suricata, Sigma). For example, ChatGPT can help deobfuscate or beautify scripts, which accelerates the analysis of simpler malicious code.
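As a concrete illustration of the kind of detection rule an analyst might draft with LLM assistance, here is a minimal sketch using the third-party yara-python package. The rule logic and the sample bytes are toy assumptions for demonstration only.

```python
# Minimal sketch: compile and run a toy YARA rule with yara-python.
# The rule and the sample below are illustrative assumptions.
import yara

RULE_SOURCE = r"""
rule Fake_ChatGPT_Stealer_Demo
{
    meta:
        description = "Toy rule: strings from a hypothetical fake-ChatGPT app"
    strings:
        $s1 = "chatgpt" nocase
        $s2 = "upload_credentials" ascii
    condition:
        all of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

# A fabricated sample that should trigger the rule.
sample = b"...ChatGPT assistant...upload_credentials()..."
matches = rules.match(data=sample)
print(matches)  # [Fake_ChatGPT_Stealer_Demo] when both strings are present
```

An LLM can speed up drafting such rules, but analysts still need to validate them against clean files to avoid false positives before deploying them.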
Moreover, AI is facilitating the development of security-focused assistant tools. Platforms like Gepetto, VulChatGPT, and security copilots from companies like Microsoft and Google are helping analysts reverse-engineer malware, conduct vulnerability analysis, and improve incident response times. These tools leverage AI’s ability to sift through large amounts of data, identify patterns, and make suggestions, thus aiding researchers in identifying threats more quickly and efficiently.
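The pattern behind assistants like Gepetto can be sketched in a few lines: send decompiled code to an LLM and ask for a plain-English summary. The snippet below uses the official openai Python package; the model name and prompt wording are assumptions, and a valid OPENAI_API_KEY must be set in the environment.

```python
# Minimal sketch of the LLM-assisted reverse-engineering pattern:
# submit a decompiled function and request a concise summary.
# Model name and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DECOMPILED = """
int sub_401000(char *a1) {
    char buf[64];
    strcpy(buf, a1);        // no bounds check
    return strlen(buf);
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[
        {"role": "system",
         "content": "You are a reverse-engineering assistant. Be concise."},
        {"role": "user",
         "content": "Summarize what this decompiled function does and flag "
                    "any obvious vulnerabilities:\n" + DECOMPILED},
    ],
)
print(response.choices[0].message.content)
```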
However, while these AI tools offer significant promise, they have limitations. Their capabilities in highly specialized areas remain limited, and privacy concerns arise because sensitive data submitted to cloud-based AI tools may be retained or used for further model training. Additionally, the cloud-based nature of these tools can make them expensive to run, raising accessibility concerns for smaller organizations.
The Road Ahead: Rapid Development and Emerging Threats
Looking forward, the rapid development of generative AI is likely to bring both opportunities and challenges. With AI tools becoming more sophisticated and open-source models gaining traction, the capabilities of these systems will only improve. This rapid advancement could enable new threats, such as deepfakes: highly realistic AI-generated videos and audio used for scams, reputation damage, or political manipulation.
As the technology matures, attackers are likely to find more innovative ways to exploit AI. The battle between cybercriminals and security researchers will intensify as malicious actors experiment with AI-driven attacks while researchers develop countermeasures and detection tools.
Conclusion
Generative AI, particularly large language models like ChatGPT, represents both a revolution and a challenge in the cybersecurity landscape. While these technologies offer significant benefits in legitimate applications, their misuse in phishing, scams, malware generation, and deepfakes cannot be overlooked. As AI evolves, cybersecurity professionals must remain vigilant and adapt to new threats, utilizing AI’s capabilities to improve defenses while staying ahead of malicious actors.