IT organisations that apply artificial intelligence and machine learning (AI/ML) technology to network management are finding that AI/ML can make mistakes. Even so, most organisations believe that AI-driven network management will improve their network operations.
To realise these benefits, network managers must find a way to trust these AI solutions despite their foibles. Explainable AI tools could hold the key.
A survey finds network engineers are sceptical.
In an Enterprise Management Associates (EMA) survey of 250 IT professionals who use AI/ML technology for network management, 96% said those solutions have produced false or mistaken insights and recommendations.
Nearly 65% described these mistakes as somewhat to very rare, according to the recent EMA report “AI-Driven Networks: Leveling Up Network Management.” Overall, 44% of respondents said they have strong trust in their AI-driven network-management tools, and another 42% slightly trust these tools.
But members of network-engineering teams reported more scepticism than other groups—IT tool engineers, cloud engineers, or members of CIO suites—suggesting that people with the deepest networking expertise were the least convinced.
In fact, 20% of respondents said that cultural resistance and distrust from the network team was one of the biggest roadblocks to successful use of AI-driven networking. Respondents who work within a network engineering team were twice as likely (40%) to cite this challenge.
Given the prevalence of errors and the lukewarm acceptance from high-level networking experts, how are organisations building trust in these solutions?
What is explainable AI, and how can it help?
Explainable AI is an academic concept embraced by a growing number of providers of commercial AI solutions. It’s a subdiscipline of AI research that emphasises the development of tools that spell out how AI/ML technology makes decisions and discovers insights.
Researchers argue that explainable AI tools pave the way for human acceptance of AI technology. They can also address concerns about ethics and compliance.
EMA’s research validated this notion. More than 50% of research participants said explainable AI tools are very important to building trust in AI/ML technology they apply to network management. Another 41% said they were somewhat important.
Majorities of participants pointed to three explainable AI tools and techniques that best help with building trust:
- Visualisations of how insights were discovered (72%): Some vendors embed visual elements that guide humans through the paths AI/ML algorithms take to develop insights. These include decision trees, branching visual elements that display how the technology works with and interprets network data.
- Natural language explanations (66%): These explanations can be static phrases pinned to outputs from an AI/ML tool and can also come in the form of a chatbot or virtual assistant that provides a conversational interface. Users with varying levels of technical expertise can understand these explanations.
- Probability scores (57%): Some AI/ML solutions present insights without context about how confident they are in their own conclusions. A probability score takes a different tack, pairing each insight or recommendation with a score that tells how confident the system is in its output. This helps the user determine whether to act on the information, take a wait-and-see approach, or ignore it altogether.
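As an illustration, the probability-score approach described above can be sketched in a few lines of Python: each insight is paired with a confidence value, and a simple policy maps that value to an operator response (act, wait and see, or ignore). The function name, thresholds, and network details below are hypothetical, not drawn from any particular vendor's product:

```python
# Minimal sketch of pairing AI-driven networking insights with a
# confidence (probability) score. All names, thresholds, and device
# identifiers are illustrative assumptions, not a real product's API.

def triage(insight: str, confidence: float) -> str:
    """Map a confidence score to a suggested operator response."""
    if confidence >= 0.90:
        # High confidence: recommend acting on the insight.
        return f"ACT: {insight} (confidence {confidence:.0%})"
    if confidence >= 0.60:
        # Moderate confidence: take a wait-and-see approach.
        return f"WATCH: {insight} (confidence {confidence:.0%})"
    # Low confidence: safe to ignore for now.
    return f"IGNORE: {insight} (confidence {confidence:.0%})"

print(triage("Link saturation predicted on core-sw-01 uplink", 0.93))
print(triage("Possible DNS misconfiguration on VLAN 220", 0.71))
print(triage("Anomalous ARP traffic from host 10.0.4.17", 0.35))
```

The point of the sketch is the pairing itself: because every output carries an explicit confidence value, the operator—not the opaque model—makes the final call on whether to act.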
Respondents who reported the most overall success with AI-driven networking solutions were more likely to see value in all three of these capabilities.
There may be other ways to build trust in AI-driven networking, but explainable AI may be one of the most effective and efficient. It offers some transparency into AI/ML systems that might otherwise be opaque. When evaluating AI-driven networking, IT buyers should ask vendors how they use explainable AI to help operators develop trust in these systems.