IBM: Treat generative AI like a burning platform and secure it now

Many enterprises prioritise innovation without adequately addressing the security risks posed by generative AI, IBM warns.

In the rush to deploy generative AI, many organisations are sacrificing security in favour of innovation, IBM warns.

Among 200 executives surveyed by IBM, 94% said it’s important to secure generative AI applications and services before deployment. Yet only 24% of respondents’ generative AI projects will include a cybersecurity component within the next six months.

In addition, 69% said innovation takes precedence over security for generative AI, according to the IBM Institute for Business Value’s report, The CEO's guide to generative AI: Cybersecurity.

Business leaders appear to be prioritising development of new capabilities without addressing new security risks – even though 96% say adopting generative AI makes a security breach likely in their organisation within the next three years, IBM stated.

“As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon,” IBM stated.

For network and security teams, challenges could include battling the large volumes of spam and phishing email generative AI can create; watching for denial-of-service attacks driven by those large traffic volumes; and looking for new malware that is more difficult to detect and remove than traditional malware.

“When considering both likelihood and potential impact, autonomous attacks launched in mass volume stand out as the greatest risk. However, executives expect hackers faking or impersonating trusted users to have the greatest impact on the business, followed closely by the creation of malicious code,” IBM stated.

There’s a disconnect between organisations’ understanding of generative AI cybersecurity needs and their implementation of cybersecurity measures, IBM found. “To prevent expensive—and unnecessary—consequences, CEOs need to address data cybersecurity and data provenance issues head-on by investing in data protection measures, such as encryption and anonymisation, as well as data tracking and provenance systems that can better protect the integrity of data used in generative AI models,” IBM stated.
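One of the data-protection measures the report names, anonymisation, can be sketched in a few lines. This is a minimal illustration, not IBM's recommended implementation: the field names, the salt handling, and the hash truncation are all assumptions made for the example.

```python
# Minimal sketch of pseudonymisation: replace direct identifiers with
# salted hashes before data is used to train or tune a model.
import hashlib

SALT = b"rotate-me-per-dataset"  # illustrative; manage via a secrets store


def pseudonymise(value: str) -> str:
    """Replace an identifier with a deterministic salted hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]


record = {"email": "alice@example.com", "purchase_total": 42.50}
# Non-identifying fields pass through untouched; the identifier is replaced.
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Because the hash is deterministic, the same identifier always maps to the same pseudonym, so records can still be joined for analysis without exposing the raw value.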

To that end, organisations are anticipating significant growth in spending on AI-related security. By 2025, AI security budgets are expected to be 116% greater than in 2021, IBM found. Roughly 84% of respondents said they will prioritise GenAI security solutions over conventional ones.

On the skills front, 92% of surveyed executives said it’s more likely their security workforce will be augmented or elevated to focus on higher-value work rather than replaced.

Cybersecurity leaders need to act with urgency in responding to generative AI’s immediate risks, IBM warned. Here are a few of its recommendations for corporate execs:

  • Convene cybersecurity, technology, data, and operations leaders for a board-level discussion on evolving risks, including how generative AI can be exploited to expose sensitive data and allow unauthorised access to systems. Get everyone up to speed on emerging “adversarial” AI – nearly imperceptible changes introduced to a core data set that cause malicious outcomes.
  • Focus on securing and encrypting the data used to train and tune AI models. Continuously scan for vulnerabilities, malware and corruption during model development, and monitor for AI-specific attacks after the model has been deployed.
  • Invest in new defenses specifically designed to secure AI. While existing security controls and expertise can be extended to secure the infrastructure and data that support AI systems, detecting and stopping adversarial attacks on AI models requires new methods.
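The data-set manipulation described in the first recommendation can be shown with a toy sketch. Nothing here comes from the report: the nearest-centroid classifier, the point values, and the deliberately exaggerated poisoning magnitude are all illustrative assumptions chosen to make the effect visible in a few lines.

```python
# Toy illustration of data poisoning: corrupting a few training points
# changes what a simple classifier learns about a benign input.
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.3, size=(20, 2))  # clean class-0 cluster
X1 = rng.normal([3.0, 3.0], 0.3, size=(20, 2))  # clean class-1 cluster


def centroid_predict(class0, class1, x):
    """Nearest-centroid classifier: 1 if x is closer to class1's mean."""
    return int(np.linalg.norm(x - class1.mean(axis=0))
               < np.linalg.norm(x - class0.mean(axis=0)))


probe = np.array([1.0, 1.0])                 # clearly closer to class 0
clean_pred = centroid_predict(X0, X1, probe)  # -> 0

# Poison a handful of class-0 training points: the class-0 centroid is
# dragged away, and the trained model now misclassifies the benign probe.
X0_poisoned = X0.copy()
X0_poisoned[:5] = [-8.0, -8.0]
poisoned_pred = centroid_predict(X0_poisoned, X1, probe)  # -> 1
```

In a real attack the perturbations would be far smaller and spread across the data set, which is why the report stresses scanning for corruption during model development rather than relying on it being obvious.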

EMA: Security concerns dog AI/ML-driven network management

Security is also a key concern for enterprises that are considering AI/ML-driven network management solutions, according to a recent study by Enterprise Management Associates (EMA).

EMA surveyed 250 IT professionals about their experience with AI/ML-driven network management solutions and found that nearly 39% are struggling with the security risk associated with sharing network data with AI/ML systems.

“Many vendors offer AI-driven networking solutions as cloud-based offerings. IT teams must send their network data into the cloud for analysis. Some industries, like financial services, are averse to sending network data into the cloud. They’d rather keep it in-house with an on-premises tool. Unfortunately, many network vendors won’t support an on-premises version of their AI data lake because they need cloud scalability to make it work,” EMA stated in its report, AI-Driven Networks: Leveling up Network Management.

