TORONTO -- People love to talk about artificial intelligence tools' potential to make content management and CRM work easier, better and more efficient.
What industry insiders and artificial intelligence (AI) product marketing don't highlight is that, sure, technology can effect beautiful automation that eliminates low-skill jobs and creates cost efficiencies CFOs only dream of. But AI can't deliver those benefits on its own.
AI projects require continuous human monitoring, or their results degrade into something at best irrelevant to the business and at worst detrimental to business goals and harmful to customers.
Experts attending OpenText's Enterprise World 2017 user conference here cautioned that companies considering AI projects must also factor in the cost of employing humans -- data scientists, specifically -- to keep their hands on the AI tiller and reel the systems in when necessary. And they will need reeling in.
You'll need employees to continually tune the algorithms that AI systems use to spider an organization's data stores and return actionable, credible insights that drive business objectives. In sales and marketing, those insights might personalize CRM engagements or identify sales opportunities. In service settings, AI might detect customer equipment failures before they shut down a manufacturing line. In content management systems, AI can police network activity in both sales and service data stores for potential fraud, reporting suspicious customer activity to experts who can take a closer look.
Size does matter when it comes to data stores
AI isn't plug and play -- far from it. Tuning algorithms so they can create actionable insights that actually help employees and customers extends beyond just having a data scientist watch the output of an AI system, said Alan Lepofsky, vice president and principal analyst at Constellation Research Inc.
The algorithm needs a critical mass of data to crawl in the first place, and some businesses might not have the sheer volume of data needed to even begin taming an unruly algorithm that returns off-topic or inaccurate results. That's why vendors offer -- or will offer -- huge data warehouses of anonymized data that customers can use for this purpose, he said.
"For AI to become accurate, it needs a large set of training data," Lepofsky said. "For things like cat pictures, that's easy. For making business decisions, especially personal ones, it's harder.
"Analyzing 100 emails won't yield much insight, but what about 10,000 or a million, all anonymized but combined into a single corpus? Same for business decisions around supply chain, HR, marketing," Lepofsky added. "AI can't learn enough from an individual; it needs data from the team, division, company or even industry."
And for AI to be of any value, the data has to be at least somewhat standardized. If data is stored in mismatched formats across silos, the AI results won't be useable. As the old tech aphorism says, "garbage in, garbage out."
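As a minimal illustration of that point, here is a sketch -- with hypothetical silo names and date formats, not drawn from any vendor's product -- of the kind of normalization pass data has to go through before records from different silos can join a single training corpus:

```python
from datetime import datetime

# Hypothetical example: the same order date stored three different
# ways across three silos -- unusable as-is for a single corpus.
raw_records = [
    {"source": "crm",       "order_date": "07/03/2017"},
    {"source": "billing",   "order_date": "2017-07-03"},
    {"source": "warehouse", "order_date": "Jul 3, 2017"},
]

# Candidate formats tried in order; every match is rewritten to ISO 8601.
FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%b %d, %Y"]

def normalize_date(value: str) -> str:
    """Return the date in ISO 8601 form, or raise if no format matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {value!r}")

clean = [{**r, "order_date": normalize_date(r["order_date"])}
         for r in raw_records]
# Every record now carries the same value: "2017-07-03"
```

Skipping a pass like this doesn't make the AI fail loudly; it quietly trains on records that look distinct but aren't, which is exactly the garbage-in, garbage-out trap.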
AI's freedom isn't free
Company leaders impressed by new AI packages such as SAP's Leonardo, IBM's Watson, Salesforce's Einstein and OpenText's Magellan -- as well as others on the market or coming soon -- need to keep in mind these hidden costs of AI projects, which don't appear on sales brochures.
There's one more piece of homework in an AI implementation that requires human bandwidth: developing an accurate model of your business before experimenting with algorithms in the first place.
"That's where the hard part is," said Thomas Dong, OpenText's vice president of product marketing, acknowledging that product marketers sometimes do simplify AI project requirements because the processes around advanced analytics aren't always easy to explain. It's very easy for a company to be sold the notion of, "We know the data input structure, we've got this algorithm, let's dump it in and get some results."
"But therein lies great danger," Dong said. "Any data scientist knows there's an art to modeling."
That's because AI algorithms, smart as they may seem in a conference demo, are blind to even subtle differences in how data is presented and cannot find those insights on their own. AI will need a sherpa to become an effective performer in your enterprise, running free on the network to crunch data at speeds humans can't approach.
"Maybe the way you define an order is different from what you expect that output to be; maybe I'm not forecasting my demand in a way the algorithm expects," Dong said. "You need a data scientist to understand exactly what that algorithm does with the input data."
AI projects take longer than you think
Dirk Seckler, global head of sales at Knorr-Bremse Group, said it can take years for humans to tune up an algorithm. His company, which sells train brakes, uses AI for condition-based maintenance, helping predict when parts need to be replaced before the brakes fail.
"To build up reliable algorithms, we are not talking two or three months, we are talking two to three years," Seckler said, stressing that even after an algorithm becomes reliable it needs monitoring and upgrading. "You've started this algorithm, but it's not a static thing. It will change. [In our case,] you have the weather, the track, different influences."
While minding the algorithms driving analytics and AI technology may be a lengthy, painstaking process, Seckler said AI can deliver success over the long haul. He encourages companies to invest in AI despite it being harder than vendors make it out to be. Why? Because other companies will use it to their advantage. Companies that avoid AI because it's too complicated or its payoff is too long-tailed will be at a disadvantage, he said.
"You can be in the game, or out of the game," Seckler said.