7 key questions IT leaders need to answer before committing to generative AI

IT leaders must be able to answer key questions about how they’ll harness the power of generative AI before investing in it.


Some companies use generative AI to write code and some use it to create marketing text or fuel chatbots. And then there are others, like SmileDirectClub, that create images to answer the question of how to better serve their customers.

SmileDirectClub, the Nashville-based teledentistry company, uses generative AI to create teeth. Or, more specifically, to help people understand how their teeth can be corrected.

“We have a platform called the SmileMaker platform,” says CIO Justin Skinner. “We take a photo of your teeth with your phone and we generate a 3D model representation and we can project with AI what a straightening plan would look like, how long it would take, and what it would look like when we’re done.”

Existing generative AI platforms like OpenAI’s ChatGPT, Google Bard, or Stable Diffusion aren’t trained on 3D images of teeth. Not that any of these were even available when SmileDirectClub started.

SmileDirectClub built its own generative AI, using its own data set, on its own servers, in compliance with HIPAA, GDPR, and other regulations.

The company started the project three years ago with an external partner. Then, when that didn’t work, it hired its own team to build the proprietary models it needed.

“There’s nothing like this out there to the level of accuracy we need,” Skinner says. “Teeth are very tricky. There aren’t a lot of distinguishing marks, so getting an accurate 3D model from your phone is a difficult task.”

The first generation of the tool went live in November last year in Australia, and in May this year in the US, and around 100,000 people have used it so far. The next release will include a photorealistic projection of what the new teeth will look like.

Today, the tool only offers a draft treatment plan for customers, Skinner says. They still need to see a dentist or use an impression kit at home for the high-definition impression. This may also change in the future as the technology improves.

But that’s not the only way SmileDirectClub looks to take advantage of generative AI.

“We’re exploring—for cost reduction and efficiency reasons—leveraging tools like ChatGPT and Bard, and we look forward to playing around with Microsoft Copilot,” Skinner says.

His company isn’t alone.

According to a recent poll of senior executives conducted by The Harris Poll on behalf of Insight Enterprises, 39% of companies have already established policies or strategies around generative AI and 42% are in the process of developing them. Another 17% plan to but haven’t started yet. Only 1% of companies have no plans to develop plans for generative AI.

In addition to the question SmileDirectClub answered about how to better serve its customers, here are seven more that CIOs need to answer as they formulate their generative AI strategies and policies.

Where is the business value?

According to the Harris Poll, 72% of executives say they plan to adopt generative AI technologies in the next three years in order to improve employee productivity. And 66% say they plan to use it to improve customer service. In addition, 53% say it will help them with research and development, and 50% with automating software development or testing.

And that’s just the tip of the iceberg as far as enterprise use cases of generative AI are concerned—and the landscape is changing quickly.

CIOs have to work hard to stay on top of developments, says Skinner. More importantly, CIOs have to understand how the possibilities of generative AI apply specifically to their business.

“That’s the first question,” he says. “Do I really understand these things? And do I deeply understand how to apply it to my business to get value?”

Given the fast pace of change, understanding generative AI means experimenting with it—and doing so at scale.

That’s the approach that Insight Enterprises is taking. The Tempe-based solutions integrator currently has 10,000 employees using generative AI tools and sharing their experiences so the company can figure out the good as well as the bad.

“It’s one of the largest deployments of generative AI that I know of,” says David McCurdy, Insight’s chief enterprise architect and CTO. “I’m on a mission to understand what the model does well and what the model doesn’t do well.”

The novelty of generative AI might be cool, he says, but it isn’t particularly useful.

“But we sat down and fed it contracts and asked it nuanced questions about them: where are the liabilities, where are the risks,” he says. “This is real meat and bones, tearing the contract apart, and it was 100% effective. This will be a use case all over the world.”
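
A setup like the one McCurdy describes takes little more than a well-framed prompt against a chat-completion API. Here’s a minimal sketch, assuming OpenAI’s v1 Python client and an API key in the environment; the model choice and prompts are illustrative, not Insight’s actual implementation.

```python
# Illustrative contract Q&A with an LLM; not Insight's actual setup.
# Assumes the openai package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask_about_contract(contract_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a contract analyst. Answer only from the contract provided."},
            {"role": "user",
             "content": f"Contract:\n{contract_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# e.g. ask_about_contract(text, "Where are the liabilities and risks in this agreement?")
```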

Another employee, a warehouse worker, came up with the idea of using generative AI to help him write scripts for SAP.

“He didn’t have to open a ticket or ask anyone how to do it,” McCurdy says. “That’s the kind of stuff I’m after, and it’s incredible.”

The number one question every CIO should ask themselves is how their company plans to use generative AI over the next one or two years, he says.

“The ones who say it’s not on the table, that’s a bad mistake,” he adds. “Some people feel they’re going to wait and see but they’re going to lose productivity. Their boards of directors, their CEOs are going to ask, ‘Why are other companies loving this tech? Why are we not?'”

But finding opportunities where generative AI can provide business value at the level of accuracy it’s capable of delivering today is just one small part of the picture.

What is our deployment strategy?

Companies looking to get into the generative AI game have a wide variety of ways to do it.

They can fine-tune and run their own models, for example (a brief code sketch of this route follows below). Every week, new open-source models become available, each more capable than the last. And data and AI vendors are offering commercial alternatives that can run on premises or in private clouds.

Then, traditional SaaS vendors like Salesforce and, of course, Microsoft and Google, are embedding generative AI into all their services. These models will be customised for specific business use cases and maintained by vendors who already know how to manage privacy and risk.

Finally, there are the public models, like ChatGPT, which smaller companies can access directly via their public-facing interfaces, and larger companies can use via secured private clouds. Insight, for example, runs OpenAI’s GPT-3.5 Turbo and GPT-4 hosted in a private Azure cloud.
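
As for the self-hosted route mentioned above, pulling an open-source model from Hugging Face and running it locally takes only a few lines. This is a minimal sketch, assuming the transformers and accelerate packages; the model name is just one example among many.

```python
# Minimal sketch of running an open-source LLM on your own hardware.
# Assumes transformers plus accelerate (for device_map); model name is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",  # one of many open models; swap as needed
    device_map="auto",                           # spread layers across available GPUs/CPU
)

result = generator(
    "Summarise our returns policy for a customer:",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```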

Another option for companies with very particular requirements but no interest in training their own models is to use something like ChatGPT and then give it access to company data via a vector database.

“The value is using existing models and staging your own data beside it,” McCurdy says. “That’s really where innovation and productivity are going to be.”

This is functionally equivalent to pasting documents into ChatGPT for it to analyse before asking your questions, except the documents don’t have to be pasted in every time. For example, Insight has taken all the white papers it’s ever written and all its interview transcripts, and loaded them into a vector database for the generative AI to refer to.
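
Here’s a minimal sketch of that pattern, often called retrieval-augmented generation: embed the documents once, index the vectors, then retrieve the closest passages to ground each question. It assumes OpenAI’s embedding and chat APIs with a FAISS index; every name and model choice is illustrative.

```python
# Sketch of "staging your own data beside the model" with a vector index.
# Assumes openai (v1+), faiss-cpu, and numpy; all names are illustrative.
import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

docs = ["White paper A ...", "Interview transcript B ..."]  # your corpus, pre-chunked
vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])  # exact nearest-neighbour search
index.add(vectors)

def answer(question: str) -> str:
    _, ids = index.search(embed([question]), 2)  # retrieve the two closest chunks
    context = "\n\n".join(docs[i] for i in ids[0])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```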

Can we keep our data, customers, and employees safe?

According to a May PricewaterhouseCoopers report, nearly all business leaders say their company is prioritising at least one initiative related to AI systems in the near term.

But only 35% of executives say their company will focus on improving the governance of AI systems over the next 12 months, and only 32% of risk professionals say they’re now involved in the planning and strategy stage of applications of generative AI.

A similar survey of senior executives, released by KPMG in April, showed that only 6% of organisations have a dedicated team in place to evaluate the risk of generative AI and implement risk mitigation strategies.

And only 5% have a mature responsible AI governance program in place, though 19% are working on one and nearly half say they plan to create one.

This is particularly important for companies using external generative AI platforms rather than building their own from scratch.

For example, SmileDirectClub’s Skinner is also looking at platforms like ChatGPT for the potential productivity benefits but is worried about the data and privacy risks.

“It’s important to understand how the data is protected before jumping in head first,” he says.

The company is about to launch an internal communication and education campaign to help employees understand what’s going on, and the benefits and limitations of generative AI.

“You have to make sure you’re setting up security policies in your company and that your team members know what the policies are,” he says. “Right now, our policy is that you can’t upload customer data to these platforms.”
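
A policy like that is easier to uphold when it’s backed by a technical check. Purely as an illustration, a pre-submission filter can refuse prompts that look like they contain customer identifiers; the patterns below are simplistic placeholders, not a complete PII detector.

```python
# Illustrative guardrail: block prompts that appear to contain customer data
# before they reach an external AI platform. Patterns are simplistic examples.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
    re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b"),  # SSN-like numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-number-like digit runs
]

def check_prompt(prompt: str) -> None:
    """Raise if the prompt appears to contain customer data."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain customer data; blocked by policy.")

check_prompt("Draft a press release about our new aligner line.")  # passes
```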

The company is also waiting to see what enterprise-grade options will come online.

“Microsoft Copilot, because of integration with Office 365, will probably be leveraged first at scale,” he says.

According to Matt Barrington, emerging technologies leader at Ernst & Young Americas, about half of the companies he talks to are worried enough about the potential risks that they’re taking a full-stop approach to ChatGPT and similar platforms.

“Until we can understand it, we’re blocking it,” he says.

The other half are looking to see how they can build the right framework to train and enable people.

“You have to be cautious but you have to enable,” he says.

Plus, even at the 50% of companies that have put the brakes on ChatGPT, people still use it, he adds. “The train has left the station,” he says. “The power of this tool is so big that it’s hard to control. It’s like the early days of cloud computing.”

How do we guard against bias?

Dealing with bias is hard enough with traditional machine learning systems, where a company is working with a clearly defined data set. With large foundational models, however, like those used for code, text, or image generation, this training data set might be completely unknown.

In addition, the ways the models learn are extremely opaque—even the researchers who developed them don’t yet fully understand how it all happens. This is something that regulators in particular are very concerned about.

“The European Union is leading the way,” says EY’s Barrington. “They’ve got an AI Act they’re proposing, and OpenAI’s Sam Altman is calling for hard-core regulations. There’s a lot yet to come.”

And Altman’s not the only one. According to a June Boston Consulting Group survey of nearly 13,000 business leaders, managers, and frontline employees, 79% support AI regulation.

The more sensitive the data a company collects, the more cautious it has to be, he says.

“We’re optimistic about the impact AI will have on business, but equally cautious about having a responsible and ethical implementation,” he says. “One of the things we’ll heavily lean in on is the responsible use of AI.”

If a company takes the lead in learning how to not only leverage generative AI effectively, but also to ensure accuracy, control, and responsible use, it will have a leg up, he says, even as the technology and regulations continue to change.

This is why transcription company Rev is taking its time before adding generative AI to the suite of tools it offers.

The company, which has been in business for nearly 12 years, started out by offering human-powered transcription services and has gradually added AI tools to augment its human workers.

Now the company is exploring the use of generative AI to automatically create meeting summaries.

“We’re taking a little bit of time to do due diligence and make sure these things work the way we want them to work,” says Migüel Jetté, Rev’s head of R&D and AI.

Summaries aren’t as risky as other applications of generative AI, he adds. “It’s a well-defined problem space and it’s easy to make sure the model behaves. It’s not a completely open-ended thing like generating any kind of image from a prompt, but you still need guardrails.”

That includes making sure the model is fair, unbiased, explainable, accountable, and complies with privacy requirements, he says.

“We also have pretty rigorous alpha testing with a few of our biggest users to make sure our product is behaving the way we anticipated,” he says. “The use that we have right now is quite constrained, to the point where I’m not too worried about the generative model misbehaving.”

Who can we partner with?

For most companies, the most effective way to deploy generative AI will be by relying on trusted partners, says Forrester Research analyst Michele Goetz.

“That’s the easiest way,” she says. “It’s built in.”

It will probably be at least three years before companies start rolling out their own generative AI capabilities, she says. Until then, companies will be playing around with the technology in safe zones, experimenting, while relying on existing vendor partners for immediate deployments.

But enterprises will still have to do their due diligence, she says.

“The vendors say they’re running the AI as a service and it’s walled off,” she says. “But it still might be training the model, and there might still be knowledge and intellectual property going to the foundational model.”

For example, if an employee uploads a sensitive document for proofreading, and the AI is then trained on that interaction, it might then learn the content of that document, and use that knowledge to answer questions from users at other companies, leaking the sensitive information.

There are also other questions CIOs might want to ask their vendors, she says: where the original training data comes from, how it’s validated and governed, how the model is updated, and how data sources are managed over time.

“CIOs have to trust that the vendor is doing the right thing,” she says. “And this is why you have a lot of organisations that are not yet ready to allow the newer generative AI into their organisations in areas that they can’t control effectively.” That is particularly the case in heavily-regulated areas, she says.

How much will it cost?

The costs of embedded AI are relatively straightforward. Enterprise software companies adding generative AI to their tool sets—companies like Microsoft, Google, Adobe, and Salesforce—make the pricing relatively clear. However, when companies start building their own generative AI, the situation gets a lot more complicated.

In all the excitement about generative AI, companies can sometimes lose track of the fact that large language models can have very high compute requirements.

“People want to get going and see results but haven’t thought through the implications of doing it at scale,” says Ruben Schaubroeck, senior partner at McKinsey & Company. “They don’t want to use public ChatGPT because of privacy, security, and other reasons. And they want to use their own data and make it queryable by ChatGPT-like interfaces. And we’re seeing organisations develop large language models on their own data.”

Meanwhile, smaller language models are quickly emerging and evolving. “The pace of change is massive here,” says Schaubroeck. Companies are starting to run proofs of concept, but there isn’t as much talk yet about total cost of ownership, he says. “That’s a question we don’t hear a lot but you shouldn’t be naive about it.”
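
Even back-of-the-envelope arithmetic makes the point. Every number below is a hypothetical placeholder, but the shape of the calculation is what matters: per-token prices look tiny until they’re multiplied by enterprise volumes.

```python
# Hypothetical TCO sketch for API-based inference; every number is a placeholder.
price_per_1k_tokens = 0.002  # assumed blended price, USD
tokens_per_request = 1_500   # prompt + completion, assumed average
requests_per_day = 50_000    # assumed enterprise-wide volume

daily_cost = requests_per_day * tokens_per_request / 1_000 * price_per_1k_tokens
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 365:,.0f}/year, "
      "before staffing, hosting, evaluation, and monitoring costs")
```

And that covers inference alone; data preparation, fine-tuning, and the people to run it all come on top.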

Is your data infrastructure ready for generative AI?

Embedded generative AI is easy for companies to deploy because the vendor is adding the AI right next to the data it needs to function.

For example, Adobe is adding generative AI fill to Photoshop, and the source image it needs to work with is right there. When Google adds generative AI to Gmail, or Microsoft adds it to Office 365, all the documents needed will be readily available. However, more complex enterprise deployments require a solid data foundation, and that’s something that many companies are still working toward.

“A lot of companies are still not ready,” says Nick Amabile, CEO at DAS42, a data and analytics consulting firm. Data has to be centralised and optimised for AI applications, he says. For example, a company might have data spread between different back-end systems, and getting the most value out of AI will require pulling in and correlating that data.

“The big advantage of AI is that it’s able to analyse or synthesise data at a scale humans aren’t capable of,” he says.

When it comes to AI, data is fuel, confirms Sreekanth Menon, VP and global leader for AI/ML services at Genpact.

That makes it more urgent than ever to enable the enterprise for AI with the right data, cleansed data, tools, data governance, and guardrails, he says, adding, “And is my current data pipeline enough for my generative AI to be successful?”

That’s just the start of what it’s going to take to get an enterprise ready for generative AI, he says. For example, companies will want to make sure that their generative AI is explainable, transparent, and ethical. That will require observability platforms, he says, and these platforms are only starting to appear for large language models.

These platforms need to be able to track not just the accuracy of results, but also cost, latency, transparency, bias, safety, and prompt monitoring. Then, models typically need consistent oversight to make sure they’re not decaying over time.
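
Observability at that level can start as simply as wrapping every model call to record latency, token usage, and estimated cost alongside the result. This sketch assumes OpenAI’s v1 Python client; the cost rate is a placeholder, and a production version would also log prompts, responses, and safety flags.

```python
# Minimal LLM observability wrapper: logs latency, token usage, and rough cost.
# Assumes the openai package (v1+); COST_PER_1K_TOKENS is a placeholder rate.
import logging
import time

from openai import OpenAI

logging.basicConfig(level=logging.INFO)
client = OpenAI()
COST_PER_1K_TOKENS = 0.002  # hypothetical blended USD rate

def monitored_chat(messages: list[dict], model: str = "gpt-3.5-turbo") -> str:
    start = time.perf_counter()
    resp = client.chat.completions.create(model=model, messages=messages)
    latency = time.perf_counter() - start
    tokens = resp.usage.total_tokens
    logging.info("model=%s latency=%.2fs tokens=%d est_cost=$%.4f",
                 model, latency, tokens, tokens / 1000 * COST_PER_1K_TOKENS)
    return resp.choices[0].message.content
```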

“Right now, you need to be putting guardrails and guiding principles in place,” he says. Then companies can start incubating generative AIs and, once they reach maturity, democratise them to the entire enterprise.

