Big data and Artificial Intelligence (AI) technologies have been described as major drivers of performance improvement, and much research has quantified their impact. However, these technologies can paradoxically have a negative effect on corporate performance. In which situations do they decrease performance? What are the strategic implications?
Big data and Artificial Intelligence (AI) have undoubtedly revolutionized the business landscape, promising improved performance, revenue generation, cost reduction, and business model transformation. Countless industry reports have extolled their virtues. However, beneath the sheen of success stories lie nuanced realities. In this article, we delve into the lesser-known aspect of AI: when its application can actually hinder corporate performance. We explore three critical situations where AI’s potential is compromised and offer strategic implications to navigate these challenges effectively.
Three situations in which using AI decreases performance
When AI contributes to making bad decisions
In their research, Rana et al. surveyed 355 executives about the unintended consequences of using AI for decision-making and their influence on performance. They identified three interacting components that contribute to a negative effect on performance: poor data quality, lack of governance, and inefficient training. The authors group these components under the term “AI opacity,” and this opacity has both direct and indirect effects on performance. Direct effects occur, for example, when an AI system is fed inaccurate or inconsistent data over time, leading it to produce useless or harmful recommendations. Likewise, inadequate staff training on the systems results in underutilization of the technology, lower motivation, and a subsequent decline in performance. Indirect effects of this opacity take place when it increases companies’ exposure to risks and suboptimal decisions.
According to this research, AI systems might penalize performance because they lead to bad business decisions when systems are poorly designed, governed or implemented. The authors use the examples of Amazon’s recruitment system which was unfair to female applicants, Facebook’s gender-biased career advertisements and Uber’s racially biased dynamic pricing decisions.
When users don’t trust the results
In healthcare, AI systems for diagnosis have blossomed and in the U.S., the Food and Drug Administration (FDA) authorized many new programs that use artificial intelligence. However, doctors are skeptical that the tools really improve care or are backed by solid research. According to a story in the New York Times, “Doctors are raising more questions as they attempt to deploy the roughly 350 software tools that the F.D.A. has cleared to help detect clots, tumors or a hole in the lung. They have found few answers to basic questions: How was the program built? How many people was it tested on? Is it likely to identify something a typical doctor would miss? The lack of publicly available information is causing doctors to hang back, wary that technology that sounds exciting can lead patients down a path to more biopsies, higher medical bills and toxic drugs without significantly improving care.”
Similarly, a recent experimental study reveals surprising results. The experiment gave professional radiologists varying access to AI assistance and contextual information in order to study the effectiveness of human-AI collaboration and how to optimize it. The findings reveal that although AI alone was more accurate in its diagnoses than two-thirds of the radiologists, collaboration with human experts yielded no improvement because some “radiologists partially underweight the AI’s information relative to their own and do not account for the correlation between their own information and AI predictions.” The authors therefore conclude that “the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI.”
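The allocation rule the authors describe can be pictured as a simple triage: each case goes entirely to the AI or entirely to a human, never to a blended human-plus-AI reading. Here is a minimal, hypothetical sketch of that idea; the confidence threshold and function names are illustrative assumptions, not the study’s actual method.

```python
# Hypothetical triage: route each case entirely to the AI or entirely
# to a human reader, mirroring the "humans or AI, rarely both" finding.

def route_case(ai_confidence: float, threshold: float = 0.9) -> str:
    """Assign a case to 'ai' when the model is highly confident,
    otherwise to 'human'. The 0.9 threshold is an arbitrary
    illustration, not a value from the paper."""
    return "ai" if ai_confidence >= threshold else "human"

cases = [0.97, 0.55, 0.91, 0.42]
assignments = [route_case(c) for c in cases]
print(assignments)  # ['ai', 'human', 'ai', 'human']
```

In practice, such a policy would also need a calibrated confidence signal and an audit trail, but the point stands: the design question is "who decides this case," not "how do we blend both opinions."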
When AI systems are used for the wrong tasks
In other situations, the systems are well designed and users trust the results, and paradoxically that is the source of the problem. This is the conclusion of a recent study involving several hundred BCG consultants. Among the study’s results, two are particularly interesting when it comes to setting the boundaries of AI-assisted decision-making:
- “First, for a task selected to be outside the scope of relevance for AI-assisted tasks, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI. Even more, on business problem-solving tasks, those given training on how to use LLMs performed worse than those just given access to LLMs, suggesting the training made them overconfident in LLM results.
- Second, the findings indicate that while subjects using AI produce ideas of higher quality, there is a marked reduction in the variability of these ideas compared to those not using AI. This suggests that while GPT-4 aids in generating superior content, it might lead to more homogenized outputs.”
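The second finding, reduced variability across AI-assisted ideas, can be made concrete with a toy homogeneity metric: average pairwise Jaccard similarity of the ideas’ word sets. This is purely an illustrative measure I am introducing here, not the study’s methodology.

```python
# Toy metric for idea homogeneity: average pairwise Jaccard similarity
# of word sets. Higher score = more similar (more homogenized) ideas.
from itertools import combinations

def avg_pairwise_jaccard(ideas):
    sets = [set(i.lower().split()) for i in ideas]
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Illustrative idea batches (invented examples):
homogeneous = ["launch a loyalty app", "launch a loyalty program", "launch a loyalty card"]
diverse = ["launch a loyalty app", "partner with local gyms", "sell refurbished units"]
print(avg_pairwise_jaccard(homogeneous) > avg_pairwise_jaccard(diverse))  # True
```

A real evaluation would use semantic similarity rather than word overlap, but even this crude measure shows how a batch of superficially different ideas can score as near-duplicates.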
These studies (and many others) paint a contrasting picture of the reality of value creation with data and AI. They also call for a balanced view of these technologies’ impact on performance: they are no silver bullet and might even prove more damaging than useful. What implications can we derive for value creation? How can we maximize value creation while limiting the risk of deteriorating performance with AI? I propose four strategic implications:
- Data is an asset and should be treated as one. In order to make good automated business decisions, data quality should be assessed and maintained over time, and data should be governed within the company.
- Investment in technology should be balanced with investments in human resources. Securing data and infrastructure is not enough when managers are not trained on the AI systems and on how to use the results they produce.
- Significant effort should be put into ensuring that users understand the system design. Transparency about the data used, the models implemented, the limits of the results, and the scope of relevance fosters adoption.
- Using AI is a strategic decision and should therefore be connected with the global strategy of the company. A policy clarifying when AI should be used alone, when it should be used with human supervision, and when it should not be used at all can help achieve the company’s strategic objectives.
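The first implication, treating data as an asset, implies recurring, automated quality checks rather than one-off cleanups. The sketch below shows one minimal form such a check could take; the thresholds, field names, and sample records are invented for illustration.

```python
# Minimal sketch of a recurring data-quality check supporting the
# "data is an asset" implication. Thresholds and fields are assumptions.

def quality_report(records, required_fields, max_missing_ratio=0.05):
    """Return the fields whose missing-value ratio exceeds a threshold,
    mapped to that ratio (rounded to 2 decimals)."""
    issues = {}
    total = len(records)
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / total if total else 1.0
        if ratio > max_missing_ratio:
            issues[field] = round(ratio, 2)
    return issues

# Invented sample: a tiny customer table with gaps.
customers = [
    {"id": 1, "email": "a@example.com", "segment": "B2B"},
    {"id": 2, "email": "", "segment": "B2C"},
    {"id": 3, "email": None, "segment": ""},
]
print(quality_report(customers, ["email", "segment"]))
# {'email': 0.67, 'segment': 0.33}
```

Run on a schedule and wired to alerts, even a check this simple makes degrading data visible before it silently feeds bad recommendations, the direct effect of "AI opacity" described above.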
In the realm of business, the allure of AI and big data is undeniable, promising untold opportunities for growth and efficiency. However, as our exploration has revealed, the road to success is not without its potholes. From AI opacity leading to bad decisions to a lack of trust in AI results and the risk of misapplication, it’s clear that a nuanced approach is required.
To navigate this complex landscape successfully, we must recognize that data is an invaluable asset, requiring careful management and governance. Investments in technology should be balanced with investments in human resources, ensuring that employees are equipped to harness AI’s power. Transparency and understanding of AI systems’ design are crucial for fostering trust and adoption. Finally, using AI should be a strategic decision aligned with a company’s broader goals.
In conclusion, AI is a double-edged sword, capable of both enhancing and diminishing performance. By embracing the strategic implications outlined here—treating data as an asset, balancing technology with human expertise, promoting transparency, and aligning AI with overall strategy—we can harness its potential for value creation while minimizing the risks. In this rapidly evolving landscape, it is adaptability, informed decision-making, and a commitment to ethical and effective AI use that will determine which companies thrive in the age of automation.