Malwarebytes survey reveals 81% of people are concerned about the security risks posed by ChatGPT and generative AI, while just 7% think these tools will improve internet security.

A new Malwarebytes survey has revealed that 81% of people are concerned about the security risks posed by ChatGPT and generative AI. The cybersecurity vendor collected 1,449 responses to a survey in late May: 51% of those polled questioned whether AI tools can improve internet safety, 63% distrusted information produced by ChatGPT, and 52% wanted ChatGPT development paused so regulations can catch up. Just 7% of respondents agreed that ChatGPT and other AI tools will improve internet safety.

In March, a raft of tech luminaries signed a letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, to allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” The letter cited the “profound risks” posed by AI systems with “human-competitive intelligence.”

The potential security risks surrounding business use of generative AI are well documented, as are the vulnerabilities known to affect the large language model (LLM) applications it relies on. Meanwhile, malicious actors can use generative AI and LLMs to enhance their attacks. Despite this, there are use cases for the technology in cybersecurity: generative AI- and LLM-enhanced security threat detection and response is a prevalent trend in the cybersecurity market as vendors attempt to make their products smarter, quicker, and more concise.

ChatGPT, generative AI “not accurate or trustworthy”

In Malwarebytes’ survey, only 12% of respondents agreed with the statement, “The information produced by ChatGPT is accurate,” while 55% disagreed, a significant discrepancy, the vendor wrote.
Furthermore, only 10% agreed with the statement, “I trust the information produced by ChatGPT.”

A key concern about the output of generative AI platforms is the risk of “hallucination,” whereby machine learning models produce untruths. This becomes a serious issue for organizations if AI-generated content is relied upon heavily to make decisions, particularly those relating to threat detection and response. Rik Turner, a senior principal analyst for cybersecurity at Omdia, discussed this concept with CSO earlier this month.

“LLMs are notorious for making things up,” he said. “If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?”