As with any new technology, there is always the risk that cybercriminals could exploit it for nefarious purposes. In the case of ChatGPT, this could include learning how to craft attacks and write ransomware. The vast volumes of data the chatbot was trained on, together with its natural language capabilities, make it a potentially attractive tool. Here, then, we take a closer look at cybercrime and ChatGPT.
Cybercrime and ChatGPT – the security risks
The risks associated with ChatGPT fall into four general categories:
- Data theft: The stealing of private information for illicit ends such as fraud and identity theft.
- Phishing: Fraudulent emails or messages that appear to come from trusted sources, crafted to trick users into revealing sensitive information such as credit card numbers and passwords, or into downloading malware (see the sketch after this list).
- Malware: Malicious software used to penetrate computers, steal sensitive information and perform other nefarious tasks. Attackers may also lure unsuspecting victims to it through fake social media profiles or websites.
- Botnets: Networks of compromised computers used to carry out distributed denial-of-service (DDoS) attacks, which can disrupt operations and take websites offline.
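To make the phishing category more concrete, here is a minimal, purely illustrative Python sketch of rule-based email triage. The indicators, weights and example message are our own assumptions rather than any production detection method, and convincingly written AI-generated phishing is precisely the kind of message such simple rules can miss:

```python
# A minimal, hypothetical sketch of rule-based phishing triage: scoring an
# email against a few common red flags. The indicators and weights are
# illustrative assumptions, not a production detection method.
import re

INDICATORS = [
    (r"verify your (account|password|identity)", 2),  # credential-harvesting language
    (r"urgent|immediately|suspended", 1),             # pressure tactics
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),           # raw-IP links instead of domains
]

def phishing_score(email_text: str) -> int:
    """Sum the weights of every indicator found in the message body."""
    return sum(weight for pattern, weight in INDICATORS
               if re.search(pattern, email_text, re.IGNORECASE))

msg = "URGENT: your account is suspended. Verify your password at http://192.168.4.7/login"
print(phishing_score(msg))  # 6 -- worth flagging for review
```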
Past cyber-attacks can inform the present
Cyber operations, whether espionage or attack, instruct computer systems or machines to operate in ways they were never intended to. For example, in 2007 the Idaho National Lab carried out a demonstration called Aurora, in which 30 lines of code damaged an electric generator beyond repair.
Bad actors can use the same tactics against AI systems. They can find weaknesses in models and exploit them or, as former Wilson Center Global Fellow Ben Buchanan highlights in a recent Georgetown Center for Security and Emerging Technology (CSET) report, compromise “the data upon which they depend.” Two methods of data poisoning could include modifying “input data while in transit or while stored on servers”, or changing “what data the system sees during training and therefore changing how it behaves.”
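The second method, poisoning the training data itself, is easy to demonstrate on a toy model. In the following Python sketch, the classifier, dataset and 30% flip rate are all illustrative assumptions rather than anything from the CSET report; it simply shows how silently flipping training labels degrades a model without touching its code:

```python
# A minimal sketch of data poisoning via label flipping, using a toy
# two-class dataset and an off-the-shelf scikit-learn classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset: 2 features, two well-separated classes (0 = benign, 1 = malicious)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression().fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean accuracy:   ", train_and_score(y_train))

# The attacker silently flips 30% of the training labels -- "changing what
# data the system sees during training and therefore changing how it behaves".
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

print("poisoned accuracy:", train_and_score(poisoned))
```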
Examples of weaknesses in AI systems abound: a robot vacuum cleaner that ejects collected dust back onto a space it has just cleaned so it can collect even more dust, or a racing boat in a video game looping in place to collect points instead of pursuing the main objective of winning the race. Researchers call this “specification gaming”: the system optimises the objective it was given rather than the one its designers intended. While these examples may seem trivial, the same techniques can be used to far more serious effect. Remember, AI systems have been deployed in support of a wide range of functions, including aircraft collision avoidance, healthcare, loans and credit scoring, and facial recognition technologies.
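As a purely hypothetical illustration of specification gaming, the toy Python sketch below rewards an agent for points collected rather than for winning; the scoring function, paths and reward values are all invented for this example:

```python
# A toy illustration of "specification gaming": the agent is rewarded for
# points collected, not for finishing, so looping over a respawning point
# beats heading for the goal. Entirely hypothetical -- it only shows how an
# objective can be satisfied while the designer's intent is not.
def score(path, points={2: 1}, goal=10, goal_reward=5):
    """Reward = points picked up along the path, plus a bonus for the goal."""
    total = 0
    for cell in path:
        total += points.get(cell, 0)  # the point respawns on every visit
        if cell == goal:
            total += goal_reward
    return total

intended = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # go straight to the goal
gaming   = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]       # loop on the respawning point

print(score(intended))  # 6  -- one point plus the goal bonus
print(score(gaming))    # 10 -- looping wins under the mis-specified reward
```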
In Conclusion
Whichever way you look at them, both cybercrime and ChatGPT are here to stay. Whilst CRIBB and other cyber security specialists focus mainly on the former, ChatGPT is there to be enjoyed. AI is growing in our everyday lives and, if it can be leveraged for the greater good, then we are all for it. However, the downsides must also be kept in mind, as they should with anything. Perhaps the last word here should go to the late, great Stephen Hawking, who warned of the dangers of developing advanced artificial intelligence, stating that it could become a threat to humanity if not properly controlled. He said: “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.” Hawking also emphasized the importance of developing AI safely and responsibly, in order to prevent negative consequences.