Artificial intelligence (AI) has been a hot topic among IT and business leaders as it promises to be the biggest driver of change in human history. The way we work, live, learn and play will never be the same once AI is infused into all of our devices, cars, appliances and everything else we interact with. CIOs are well aware of this and are looking to use AI as part of their digital transformation strategy.
One of the challenges is that people often overestimate what AI can do and expect perfection. If there are any mistakes at all, it’s back to the drawing board to refine the algorithms or spend more time in the learning phase. With self-driving cars, for example, when an accident occurs, people freak out and act like the car is the long-lost cousin of a T-600 Terminator that purposefully had the accident to wipe out a human. The fact is, self-driving cars don’t need to be accident-free; they just need to be better than human drivers to benefit society. That bar is achievable today.
This means, broadly, that AI systems merely need to be assistive (i.e., helpful to the people using them) to be put into production. Can one make a doctor work smarter? Can it classify images faster than people? Can it predict outages faster than an engineer? Once that threshold is met, roll it out and reap the benefits.

Aim for minimum viable intelligence
Last week I attended an AI event in San Francisco hosted by Cambridge Consultants, NVIDIA and NetApp where this very topic was discussed. During his keynote, Tim Ensor, director of AI at Cambridge Consultants, explained that when his company works with customers, AI initiatives are launched once they achieve “minimum viable intelligence” (MVI).
The threshold for what “minimum viable” means will vary by use case. For example, an AI-based robot that assembles customer orders for a retailer needs to be near perfect, as errors there can cost companies big bucks in returns. For other applications, though, the bar isn’t nearly as high.