Abstract
In an era where digital threats are increasingly sophisticated, the
intersection of Artificial Intelligence and cybersecurity presents both
promising defenses and potent dangers. This paper examines the escalating
threat posed by the misuse of AI, specifically through Large Language Models
(LLMs). It details techniques such as the switch method and the character
play method, which cybercriminals can exploit to generate and automate
cyber attacks. Through a series of controlled
experiments, the paper demonstrates how these models can be manipulated to
bypass ethical and privacy safeguards to effectively generate cyber attacks
such as social engineering, malicious code, payload generation, and spyware. By
testing these AI-generated attacks on live systems, the study assesses their
effectiveness and the vulnerabilities they exploit, offering a practical
perspective on the risks AI poses to critical infrastructure. We also introduce
Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and
execute cyber attacks. This specialized AI-driven tool is adept at crafting
steps and generating executable code for a variety of cyber threats, including
phishing, malware injection, and system exploitation. The results underscore
the urgency for ethical AI practices, robust cybersecurity measures, and
regulatory oversight to mitigate AI-related threats. This paper aims to raise
awareness within the cybersecurity community about the evolving digital threat
landscape, advocating for proactive defense strategies and responsible AI
development to protect against emerging cyber threats.