
GPT-4: Security experts warn of risks
The newest large language model (LLM), GPT-4, released on Tuesday by artificial intelligence (AI) research company OpenAI, has prompted cyber security specialists to highlight a number of risks. These stem from GPT-4’s improved reasoning and language comprehension and its ability to produce long-form text, capabilities that could be utilised to write more sophisticated code for malicious software programmes.
OpenAI’s generative AI chatbot, ChatGPT, gained enormous popularity after being made available to the general public in November of last year, but its widespread use also allowed cybercriminals to utilise the tool to produce harmful code.
According to a research report released on Thursday by the Israeli cyber security company Check Point Research, GPT-4 still runs the risk of being used maliciously by online criminals despite improvements in its safety measures. Abuses demonstrated by the researchers include building C++ malware that gathers private Portable Document Format (PDF) files and sends them to remote servers via a covert file transfer mechanism.
In a demonstration, the LLM, which is currently accessible through ChatGPT Plus, a paid subscription tier of ChatGPT, initially refused to produce the code because the word “malware” appeared in the query, but failed to identify its harmful intent once that word was removed.
Additional abuses demonstrated by Check Point’s researchers include the “PHP reverse shell” technique, which hackers use to access a device and its data remotely; Java code that downloads and runs malware remotely; and phishing drafts impersonating bank and employee emails.
Some security experts argue that GPT-4 will present a still wider range of issues, including an expansion in the type and scope of cybercrimes that more hackers can now use to target both individuals and organisations.
Tools like GPT-4-based chatbots “will continue to open the door for potentially more danger, as it lowers the threshold in reference to cybercriminals, hacktivists, and state-sponsored attackers,” according to Mark Thurmond, global chief operations officer at US cybersecurity firm Tenable.