It’s easy to get excited about AI projects, especially when you hear about all the impressive things being done with AI, from conversational and natural language processing (NLP) systems to image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. However, in that excitement, people tend to overlook some major red flags. And it’s those red flags that cause over 80% of AI projects to fail.
One of the biggest reasons AI projects fail is that companies don’t justify the use of AI from a return on investment (ROI) standpoint. Simply put, many projects are not worth the time and expense given the cost, complexity, and difficulty of implementing AI systems.
Organizations are rushing past the exploration phase of AI adoption, jumping from simple proof-of-concept “demos” to production without first assessing whether the solution will deliver any positive returns. A major reason for this is that measuring AI project ROI can be more difficult than first anticipated. Far too often, teams are pressured by senior management, colleagues, or remote teams to just get on with their AI efforts, and projects move forward without a clear answer to what problem they’re actually trying to solve or what ROI they expect to see. When companies struggle to develop a clear understanding of what to expect from AI ROI, misaligned expectations are the inevitable result.
Missing and misaligned ROI expectations
So, what happens when the ROI of an AI project is not aligned with management expectations? One of the most common reasons AI projects fail is that the return does not justify the investment of money, resources, and time. If you’re going to spend your time, effort, staff, and money implementing an AI system, you’ll want a clear positive return.
Worse than misaligned ROI is the fact that many organizations don’t measure or quantify ROI at all. ROI can be measured in various ways: as a financial return, such as generating revenue or reducing costs, but also as a return on time, the shifting or reallocation of critical resources, improved reliability, reduced errors and better quality control, or improved security and compliance. For example, if you spend a hundred thousand dollars on an AI project that eliminates two million dollars in potential costs or liability, it is worth every dollar spent. But you will only see that ROI if you plan for it in advance and actively manage it.
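The arithmetic behind that example is simple but worth making explicit. The sketch below uses the hypothetical figures from the paragraph above; the `roi` helper is purely illustrative, not part of any framework.

```python
# Simple ROI arithmetic for the example above: a $100,000 AI project
# that eliminates $2,000,000 in potential costs or liability.
# All figures are illustrative.

def roi(gain: float, cost: float) -> float:
    """Return on investment as a ratio: (gain - cost) / cost."""
    return (gain - cost) / cost

project_cost = 100_000          # money spent on the AI project
avoided_liability = 2_000_000   # potential costs/liability eliminated

print(f"ROI: {roi(avoided_liability, project_cost):.0%}")  # ROI: 1900%
```

The point of writing it down, even this simply, is that the gain side of the equation must be estimated and tracked deliberately; it never appears on its own.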
Management guru Peter Drucker once said, “You can’t manage what you don’t measure.” The act of measuring and managing AI ROI distinguishes those who see positive value from AI from those who end up canceling their projects after years of effort and millions of dollars spent.
Boil the ocean and bite off more than you can chew
Another big reason companies aren’t seeing the ROI they expect is that projects try to bite off way too much at once. Iterative, agile best practices, especially those employed by AI-specific methodologies such as CPMAI, clearly advise project owners to “Think Big. Start small. Repeat often.” Unfortunately, many failed AI implementations have taken the opposite approach: thinking big, starting big, and not repeating often. An example of this is Walmart’s investment in AI-powered inventory management robots. In 2017, Walmart invested in robots to scan store shelves, and by 2022 it was pulling them out of stores.
Walmart clearly had plenty of resources and smart people, so you can’t blame the failure on bad people or bad technology. The main problem was a poor fit between the solution and the problem: Walmart realized that it was simply cheaper and easier to have the human employees already working in the stores perform the same tasks the robots were supposed to do. Another example of a project that did not deliver the expected results can be found in the various deployments of the Pepper robot in supermarkets, museums, and tourist areas. Better people or better technology would not have solved these problems; a better approach to managing and evaluating AI projects would have. Methodology matters as much as people.
Taking a step-by-step approach to running AI and machine learning projects
Were these companies caught up in the hype of the technology? Did they just want a robot roaming the aisles for the “cool” factor? Being cool doesn’t solve real business problems or address a pain point. Don’t do AI for AI’s sake; if you do, don’t be surprised when you don’t get a positive ROI.
So, what can companies do to ensure a positive ROI for their projects? First, stop implementing AI projects for the sake of AI. Successful companies take a step-by-step approach to running AI and machine learning projects. As mentioned, methodology is often the missing secret sauce for successful AI projects. Organizations are now seeing benefits from approaches such as Cognitive Project Management for AI (CPMAI), which builds on decades-old data-centric project approaches such as CRISP-DM and integrates established agile best practices to run projects in short, iterative sprints.
These approaches all start with the business user and requirements in mind. The very first step of CRISP-DM, CPMAI, and even agile is to figure out whether you should go ahead with an AI project at all. These methodologies recognize that alternative approaches, such as automation, straightforward programming, or simply more people, may be better suited to solving the problem.
The “AI Go No Go” Analysis
If AI is the right solution, make sure you can answer “yes” to several questions before starting your AI project. This set of questions, called the “AI Go No Go” analysis, is part of the very first phase of the CPMAI methodology. It asks nine questions in three general categories. For an AI project to actually go ahead, you need three things in line: business feasibility, data feasibility, and implementation feasibility. The first category covers business feasibility: whether there is a clear problem definition, whether the organization is actually willing to invest in the change once it is built, and whether there is sufficient ROI or impact.
These may seem like very basic questions, but far too often they are skipped. The second set of questions is about data, including considerations of data quality, data quantity, and data access. The third set of questions is about implementation, including whether you have the right team and skills, whether you can run the model as needed, and whether the model can be used where planned.
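The structure of the analysis, nine yes/no questions in three categories, where any “no” blocks the project, can be sketched as a simple checklist. This is a hypothetical illustration of the logic described above, not code prescribed by CPMAI, and the question wording is paraphrased.

```python
# Hypothetical sketch of an "AI Go No Go" checklist: nine questions in
# three categories, as described in the text. Any honest "no" means the
# project is not ready to proceed.

GO_NO_GO_QUESTIONS = {
    "business": [
        "Is there a clear problem definition?",
        "Is the organization willing to invest in the change once built?",
        "Is there sufficient ROI or impact?",
    ],
    "data": [
        "Is the data quality sufficient?",
        "Is there enough data?",
        "Do we have access to the data?",
    ],
    "implementation": [
        "Do we have the right team and skills?",
        "Can we run the model as needed?",
        "Can the model be used where planned?",
    ],
}

def go_no_go(answers: dict[str, list[bool]]) -> bool:
    """A project is a 'go' only if every answer in every category is yes."""
    return all(all(category) for category in answers.values())

# Example: a single honest "no" (here, on data quantity) blocks the project.
answers = {
    "business": [True, True, True],
    "data": [True, False, True],
    "implementation": [True, True, True],
}
print("GO" if go_no_go(answers) else "NO GO")  # prints "NO GO"
```

The all-or-nothing rule is the point: the analysis is a gate, not a scorecard, so a strong business case cannot compensate for missing data or an unready team.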
The hardest part of asking these questions is being honest with the answers. If you answer “no” to any of them, you’re either not ready to move forward yet or you shouldn’t move forward at all. Don’t just push through and do it anyway; if you do, don’t be surprised when you’ve wasted a lot of time, energy, and resources without getting the ROI you were hoping for.