How companies are putting embedded genAI to good use

Enterprise software vendors of all stripes are beginning to add generative AI features to their applications. Here’s how Stream Financial, NFP, and Thomson Reuters are (cautiously) taking advantage of the technology.

ChatGPT, Claude, Bard, and other public-facing generative AI chatbots based on large language models (LLMs) are nice enough, but they’re general-purpose and not well integrated into enterprise workflows.

Employees either have to go to a separate app, or companies have to spend time and effort adding the functionality to their applications via application programming interfaces (APIs). Plus, to use ChatGPT and other genAI chatbots well, employees have to learn prompt engineering.

Embedded generative AI, by comparison, promises to put the new AI functionality right where employees need it most — into their existing word processing applications, spreadsheets, email clients, and other enterprise productivity software — without any work on the part of their employers. If it’s done right, the new AI functionality should be seamless and intuitive to users, allowing them to get all the benefits without genAI’s steep learning curve.

Based on a recent survey of technology decision-makers in North America and the UK, Forrester predicts that by 2025, nearly all enterprises will be using generative AI for communications support, including writing and editing. In fact, 70% of the survey respondents said they were already using generative AI for most or all of their writing or editing.

But according to Forrester, standalone genAI tools — like ChatGPT — can’t support cross-functional collaboration and don’t work where employees work. “That means that for many use cases, genAI will be more beneficial as an embedded functionality than as a standalone app,” the firm said in the survey report.

Manish Goyal, global AI and analytics leader for IBM Consulting, agrees. “You can have all the best AI, but if it’s not in the workflow where people use it, it’s not going to get adoption,” he said.

The biggest buzz in embedded genAI has been around Microsoft 365 Copilot, a generative AI assistant being built into apps across the Microsoft 365 productivity suite. Although some genAI capabilities have been rolled out to Teams and other Microsoft 365 apps, Copilot itself is not yet generally available, with only 600 companies allowed early access for testing purposes.

David McCurdy, chief enterprise architect and CTO at solutions integrator Insight, is eagerly awaiting the general release of Microsoft 365 Copilot. “For people who’ve seen the demos, the integration of generative AI is going to completely change how back-office work is done,” he said.

In the meantime, some enterprises are adding generative AI to their apps themselves, via API calls to OpenAI or locally run LLMs like Llama 2. Insight, for example, embedded generative AI into Microsoft Excel via APIs. “But we don’t want to do too much development, because Office 365 is going to have it,” McCurdy said.
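
As a rough sketch of the API-call approach McCurdy describes (not Insight's actual integration), a Python function like the one below could send spreadsheet text to a hosted model and return a summary. The openai package (v1.x), the model name, and the prompts are illustrative assumptions; a locally run model such as Llama 2 could sit behind the same interface.

# Minimal sketch: send a cell's worth of text to a hosted LLM and get a summary back.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative, not any vendor's actual integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(text: str) -> str:
    """Ask the model for a two-sentence summary of the supplied text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarise the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # keep summaries fairly deterministic
    )
    return response.choices[0].message.content.strip()

In a real spreadsheet add-in, a call like this would sit behind a button or a custom function, and the hosted endpoint could be swapped for a self-hosted model without changing the surrounding workflow.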

Even companies without the time or people needed to create their own AI embeds can start using generative AI within their productivity tools today, because many vendors have already added various genAI features to their apps. For example, several online meeting platforms, including Zoom and Microsoft Teams, are now offering built-in or add-on AI-powered summarisation features.

“It completely changed how meetings are summarised,” McCurdy said. “We can say with certainty that the summarisation features are second to none. There are plenty of things that aren’t going great, but for summarisation and creating lists, it kicks butt.”

Here's a look at how some companies are using generative AI in their workflows, where it works well, and how they're embracing the technology safely.

Stream Financial: genAI in emails, text documents, code

“We use an email client called Spark, and it has generative AI built in for summarising emails — because people can be long-winded — and for composing emails,” said Jowanza Joseph, head of engineering at Stream Financial Technology, a Salt Lake City-based fintech firm.

“For me, and for most of the leadership team, the view is positive,” he said. “The only negative that we see is that it can be a façade for someone’s lack of understanding. Sometimes you read something, and it doesn’t make sense and you can tell the AI wrote it.”

Another generative AI tool that his company uses is Grammarly, which works inside of Microsoft Word, Google Docs, and many other applications. Grammarly was originally just a grammar checker but recently added AI-powered text creation, rewording, and summarisation.

“We have a corporate Grammarly subscription,” Joseph said. “It works inside of Google Docs, and we can ask it to write paragraphs or summarise. It’s good at filler and summaries, but if you want to get in depth on a technical subject, it just can’t do it.”

The problem with Grammarly’s AI, Joseph said, is that each interaction stands on its own. You can’t have the kind of back-and-forth that you can with ChatGPT or Claude in order to fine-tune the output. “We don’t have a good way to prompt it to tell it what to do.”

The company is careful not to use public AIs for confidential data or customer information, Joseph noted.

Long term, he said, he’s optimistic about using generative AI to generate content, such as marketing text. “Maybe writing different copy to A/B test. Or creating different iterations of the same copy.”

But the biggest impact Joseph sees genAI making is with code generation. “We’re paying $19 a month per person for GitHub Copilot,” he said, “and we’d easily pay five times more.” The tool has saved the company countless hours and has a good depth of knowledge, he said.

“That’s really where generative AI has proven its value,” Joseph said. “Everything else is still to be determined. There’s still a lot of work to be done.”

NFP: genAI for marketing copy, meeting summaries

Insurance broker NFP has been using AI to write text for two years. Long before ChatGPT came out publicly, the company was using Jasper AI to create marketing copy, said Kyle Healy, the company’s SVP of sales enablement.

Launched in February 2021 and initially based on OpenAI's GPT-3 large language model, Jasper claims more than 100,000 enterprises as customers, including Pfizer, Sports Illustrated, HubSpot, and SentinelOne.

Today, Jasper AI uses GPT-4, the model that powers the most advanced version of ChatGPT. But it adds business-friendly functionality such as fine-tuning and additional models designed to create marketing or business content tailored to specific use cases. In addition, it can be trained on a company's own brand voice and used via extensions and APIs inside company workflows.

“Some of our concerns were about security, and Jasper is a closed system,” Healy said. “Some of our marketing people are still using it for some of our copywriting.”

But recently, the company has mostly moved on from Jasper to the generative AI tools embedded in Salesloft, its sales engagement platform. This spring, Salesloft added generative AI capabilities allowing fast creation of emails.

“We’ve also started using generative AI in coaching and in directional guidance in some of our CRM systems,” Healy said. “And we’re now using GPT in more of our programs, as it starts to get connected into everything,” he said. “It’s totally embedded across the Microsoft ecosystem.”

Healy himself is a major genAI user. “I wrote my entire 2023 business plan with AI,” he said. “We were trying to convince some people internally how effective it could be, and someone senior to me said it wasn’t there yet. So — initially as a spoof — I had it write my business plan and turned it in. I got nothing but great reviews and said, ‘Gotcha! A robot wrote that!’

“We also had a senior member of our sales team who worked in private equity and started to use it to write their leads and construct contracts. We had a lot of people come to use it early this year — leadership, sales, everything. They’re saying, ‘We have got to get on this. We have to. Have to. Have to.’”

Another way the company uses generative AI is in Microsoft Teams, where it automatically generates meeting notes.

“What’s important for everyone to understand is that it’s amplifying and augmenting what we’re already doing, not replacing it,” Healy said. “It enables us to do more with the same amount of people.”

The next step, he said, is to use the generative AI capabilities in Salesforce to turn raw data into commentary. “It’s something that we’re working on now,” Healy said. “It will allow executives to absorb information in a way that’s natural to them. I’m not a numbers person. I get stuck in numbers all the time.”

When a company is deciding whether to use a generative AI tool, the accuracy of its training data is important, as are privacy, security, and usability.

“With sales and marketing copy, we’re not dealing with anything proprietary,” Healy said. “But we do a lot of acquisitions. Can our legal teams use AI to create contracts quicker or find nuances and details? We’d have to explore privacy and security and closed-source models.”

For now, he said, he’s working on getting salespeople up and running with the new AI tools. “It’s about simplicity and ease of use,” he said. “Does it feel natural, or is it something that they have to go and learn?”

ChatGPT, for example, was not easy to use, Healy said. “You almost had to take courses on how to talk to it. It was almost like programming, to some extent. And we’re not a tech company — we sell insurance.”

Generative AI that’s embedded into the tools that employees already use can overcome those usability obstacles.

“Salesloft, for example, is iterative and organic,” Healy said. “Generative AI is a natural extension of the tool. We don’t have to go and learn something new, which is important to us. We can give it to a 35-year insurance veteran, and they can use it. Or we can give it to a 25-year-old who’s super native in the whole thing.”

Thomson Reuters: genAI to draft communications, answer employee questions

Thomson Reuters has been using AI in its products and workplace for decades, said Mary Alice Vuicic, the news organisation’s chief people officer. “It’s embedded into so much of what we already have,” she said. “It’s about augmentation, about freeing up humans to do higher-value work.”

AI doesn’t replace people, she said. “It replaces tasks.”

Now that generative AI has become widely available, the company has been proactive in reaching out to its suppliers to understand their road maps for the technology. A lot has already been announced, Vuicic said. Salesforce and Workday, for example, both have generative AI strategies. “And there’s [Microsoft] Copilot, and a myriad of other tools,” she said.

Many generative AI tools are already being used at Thomson Reuters, she said. For example, communication teams are using AI to write first drafts.

“In fact, we’re seeing that opportunity across the board — for the first drafts of work products,” she said. “We can’t rely on it for accuracy, though, and that’s been part of our training. So the first drafts are then edited by humans, so we’re applying human expertise to deliver a higher quality product.”

In human resources, experiments are under way to use generative AI to answer employee questions. “We’ve been really pleased with the early results,” Vuicic said. “The accuracy has been at 95% so far, and it frees up time for people who had been answering those questions to devote to value-added work.”

A three-pronged approach to genAI adoption

To stay ahead of the possibilities, Thomson Reuters has launched an enterprise-wide initiative to accelerate the adoption of AI, Vuicic said.

The first step is developing AI standards and ethics, which is something that the company has been working on for years. “We’ve been using AI for over three decades,” said Vuicic. “Even before generative AI, we were working to become leaders in this.”

The next step, she said, is workforce education. “This is absolutely essential.”

In April, the company held an all-employee global day of learning dedicated to AI. Nearly 7,000 participants — out of a total workforce of 27,000 — attended the event live. “And thousands more are leveraging the training asynchronously,” she added.

The vast majority of attendees said that they were already able to apply generative AI to their work, according to Vuicic.

“This is the most important innovation that will happen in the careers of most of our people,” she said. “And we need to be at the forefront of this. We have a responsibility as a company to help educate, provide training, development, upskilling, and reskilling. But every individual has the responsibility to lean in and evolve.”

The third leg of the company’s AI strategy is to provide a safe and secure place to experiment with all the tools. In addition to the generative AI embedded inside productivity software, Thomson Reuters is working with a number of large language models, she said, including models from OpenAI, Anthropic, and Google, as well as open-source models like Llama 2.

“It’s changing so rapidly,” she said. “We want to make sure we’re leveraging the best technology for our customer use cases.”

Finally, the company is working to reinforce the culture of learning and experimentation. “The organisations that learn the fastest and experiment will win in this,” Vuicic said.

Consider the risks and choose wisely

Some enterprise technology vendors are offering their customers not only generative AI features but also flexibility in how the models are trained and which specific models are used.

“IBM uses Salesforce,” said Goyal at IBM Consulting. “Our Salesforce administrator can choose the right models and configure it so that end users like me can just see it.” Salesforce offers customers a choice of embedded AI models, he added.

IBM consultants are now working with enterprises, he said, helping them think about what’s going to be possible with Microsoft 365 and beyond, and also about the legal and security implications of generative AI.

“Our stance at IBM with Watsonx is very clear,” he said. “Your data is your data. We never use client data to train our models. I work with Azure, AWS, and Google and each of them, when it comes to generative AI, has been very clear on this. And in our partner calls and with our clients, it’s the same thing — nothing you upload or use the service against is used to train the model. No enterprise would ever use it otherwise.”

One example of an enterprise vendor that ran afoul of this principle is Zoom, which originally said that it would use meeting transcripts to train its AI — then quickly backtracked after public outcry.

But when it comes to the initial training data, vendors are less transparent about where it comes from, Goyal admitted. “With Watsonx, we provide full data lineage for training data,” he said. “But that can’t be said for all vendors. There’s been a lot of caginess.”

And several AI vendors are currently being sued by artists and writers concerned that their copyrighted works were used without permission to train the AIs.

“AI has always had risks,” said Goyal. With large language models, some of these risks are old risks that are now amplified — and some risks are new. Enterprises need guardrails in place when working with generative AI, he said.

But, in the end, it’s important for enterprises to be looking at how they can use these new capabilities. “The ones who figure it out earliest are going to be the winners,” he said. “The hype is justified.”

