In the ever-evolving landscape of cybersecurity, the rise of artificial intelligence (AI) has been both a blessing and a curse. While AI has empowered defenders with advanced tools to detect and neutralize cyber threats, it has also armed cybercriminals with more sophisticated methods to launch attacks. As the digital battlefield becomes increasingly complex, cybersecurity experts like Yusuf are at the forefront of developing AI-driven defenses to outsmart these evolving threats. But can AI truly keep us safe, or does it also open new doors for hackers to exploit?
AI has revolutionized the way we approach cybersecurity. Large Language Models (LLMs) like OpenAI’s GPT-4 and customized AI tools such as CyberGPT have shown remarkable proficiency in detecting phishing emails, a common and pervasive threat. In a recent study, GPT-4 and CyberGPT achieved accuracy rates of 97.22% and 97.46%, respectively, in identifying phishing attempts. These models leverage their ability to understand context, analyze email content, and detect subtle phishing indicators such as suspicious links, urgent language, and requests for sensitive information.
However, the same AI technologies that protect us are also being weaponized by cybercriminals. Generative AI, for instance, can now write malware, create deep fakes, and even exploit zero-day vulnerabilities with alarming efficiency. A study revealed that a popular generative AI chatbot could generate exploit code 87% of the time when provided with a description of a vulnerability. This means that even novice hackers can now launch sophisticated attacks without needing advanced coding skills. To learn more about this, read my research paper titled (Can AI Keep You Safe? A Study of Large Language Models for Phishing Detection) link: https://ieeexplore.ieee.org/document/10427626
One of the emerging challenges in cybersecurity is the proliferation of shadow AI—unauthorized AI deployments within organizations. Employees may download AI models from the cloud or use AI-powered tools on their mobile devices, often without proper oversight. These shadow AI systems can become sources of data leakage or misinformation, creating new vulnerabilities for organizations.
Deep fakes, another AI-driven threat, are becoming increasingly sophisticated. Cybercriminals are using AI-generated audio and video to impersonate executives, government officials, or even heads of state. New data from Regula’s Deepfake Trends 2024 report found this combination of increased tools at fraudsters’ disposal and underestimation of vulnerability to deepfakes is causing a problem. Nearly all (92%) of businesses experienced financial loss due to a deepfake. As deep fake technology improves, the potential for financial fraud, misinformation, and even legal disputes grows exponentially. How can we trust the authenticity of digital evidence when AI can create convincing fakes?
As simple as it sounds, your live selfie can authorize access to your data, including your bank, credit card, and SSN credentials. It’s unreal—I tried it, and it was shocking how easily a simple live photo selfie can grant access to your information.
My advice: for the sake of deepfake risks, avoid using Face ID unless necessary.
As AI becomes more integrated into our digital infrastructure, it also becomes a target for cyberattacks. Attackers can exploit vulnerabilities in AI systems to poison training data, manipulate outputs, or extract sensitive information. Prompt injection attacks, where hackers manipulate AI models by crafting specific inputs, are a growing concern. These attacks can bypass the guardrails of AI systems, leading to unintended and potentially harmful outcomes.
The Open Worldwide Application Security Project (OWASP) has identified prompt injection as the number one threat to large language models. As AI systems become more autonomous, the risk of these attacks increases, highlighting the need for robust defenses and continuous monitoring. To learn more about this, read my research paper titled ” Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks” The paper is available at: https://arxiv.org/abs/2408.12806
Despite these challenges, AI also unlocks significant opportunities to strengthen cybersecurity. AI-driven tools can process vast amounts of data in real-time, detect anomalies, and provide actionable insights to security teams. For instance, our customized CyberGPT goes beyond simply identifying phishing emails—it explains its reasoning, assigns confidence scores, and helps security professionals make informed decisions.
AI also enhances threat response by offering expert recommendations and prioritizing actions for human analysts. Although fully autonomous responses remain risky due to potential AI “hallucinations,” this collaborative approach ensures faster, more accurate threat mitigation while keeping human oversight in the loop.
The integration of artificial intelligence (AI) into autonomous vehicles (AVs) has revolutionized the automotive industry, offering unprecedented advancements in safety, efficiency, and convenience. However, this reliance on AI also introduces significant cybersecurity risks. As AVs become more connected and reliant on AI-driven systems, they become vulnerable to sophisticated cyberattacks, such as CAN bus manipulation, Bluetooth exploits, and key fob hacking. These vulnerabilities, if exploited, could lead to catastrophic consequences, including loss of vehicle control, unauthorized access to sensitive data, or even physical harm to passengers. The dual-edged nature of AI in AVs underscores the urgent need for robust cybersecurity measures, ethical AI development, and continuous innovation to ensure that the benefits of autonomous driving are not overshadowed by the risks posed by malicious actors.
In our research, we explored how Large Language Models (LLMs) can be weaponized to generate malicious code for AV attacks. By fine-tuning a custom LLM, dubbed HackerGPT, we demonstrated how AI can be used to craft sophisticated attack scripts targeting critical vehicle systems. For instance, HackerGPT generated payloads for CAN message injection attacks, enabling us to manipulate a vehicle’s acceleration and braking systems. Similarly, it produced scripts for Bluetooth-based exploits, allowing unauthorized access to a car’s infotainment system, and key fob hacking, which could remotely unlock and start vehicles without physical keys. These experiments highlight how LLMs can lower the barrier to entry for cybercriminals, enabling even those with minimal technical expertise to execute complex attacks. This alarming capability emphasizes the need for ethical safeguards in AI development, as well as proactive cybersecurity strategies to protect AVs from emerging AI-driven threats.
To learn more about this, read my research paper titled (The Dark Side of AI: Large Language Models as Tools for Cyber Attacks on Vehicle Systems) link: https://ieeexplore.ieee.org/abstract/document/10754676
As AI continues to reshape the cybersecurity landscape, the battle between defenders and attackers will only intensify. While AI offers powerful tools to detect and neutralize threats, it also introduces new vulnerabilities that cybercriminals are eager to exploit. The key to staying ahead lies in continuous innovation, collaboration, and education.
We are leading the charge, developing AI-driven defenses that can outsmart even the most sophisticated threats. However, the cybersecurity community must remain vigilant, addressing emerging challenges such as shadow AI, deep fakes, and quantum computing. By leveraging AI responsibly and proactively, we can build a more secure digital future—one where the benefits of AI outweigh the risks.
Yusuf Usman (Member, IEEE and ASEE) is a Graduate Research Assistant in Cybersecurity at Quinnipiac University, Hamden, CT, USA. His research focuses on cybersecurity, leveraging AI and ML techniques for phishing detection, automated attack and defense strategies, malware detection, and autonomous vehicle security. Additionally, Yusuf has contributed to NASA-funded research on 6G millimeter-wave Massive MIMO wireless communication, networks, and emerging technologies, including 5G, 6G, and beyond. He has authored and co-authored several research articles on these topics.
Author’s Bio : YUSUF USMAN
Yusuf holds several professional certifications, including: Certified Information Security Manager (CISM) (ISACA), AWS Certified Security – Specialty, Applied Healthcare Cyber Risk Management, Ethical Hacking Penetration Testing IV, CompTIA SecurityX, CompTIA PenTest+, AWS Academy Cloud Security Foundations, and Cloud Foundations Certificate.
Cybersecurity has rapidly evolved from a back-office technical concern to a boardroom imperative. As digital…
Ransomware has evolved into one of the most formidable threats to organizations worldwide, and 2025…
Third-party vendors are indispensable to modern enterprises, offering specialized services, cost efficiencies, and scalability. However,…
A critical vulnerability in the FastCGI library could allow attackers to execute arbitrary code on…
Significant security flaws have been discovered in React Router, a widely-used routing library for React…
In an era where cyber threats are growing in sophistication and frequency, Chief Information Security…