Who’s Afraid of a Big Black Box?

June 23, 2025
By Chris Caplice, Ph.D.

You can’t escape it.

Artificial intelligence (AI) is a hot topic at every conference you attend and with every supplier you meet. Heck, even your co-workers are talking about it. People use AI to make recipes, write songs, synthesize reports, chat with customers, and everything in between.

Yet no one talks about AI the same way. Some use it as an umbrella term for any technology that mimics human intelligence. Others go a step further, diving into machine learning (ML) or generative AI. Some software suppliers define it as using math of any kind!

Regardless, AI shows up in all aspects of business, including logistics and transportation procurement. Supply chain managers use it to analyze and predict pricing, conduct real-time mini-bids or dynamic procurement, and facilitate faster and more accurate RFP decisions, among other things.

Most professionals fall into two camps when it comes to AI. One side welcomes it with open arms and sees opportunities everywhere to make everything better. The other has visions of an unchecked takeover, “The Matrix”-style, leading to massive layoffs and the destruction of entire industries.

If these two groups have one thing in common, it’s a C-suite dictating an “AI-first” approach to new systems and projects. Bosses today expect their teams to try an AI-enabled solution before anything else. While this might seem constraining, it has a silver lining: With managers often desperate for C-suite buy-in on projects, AI initiatives come with executive blessing built in.

Successful AI development and implementation, of course, is far more nuanced. As with any new method or model, there are trade-offs to automating decisions and tasks. Let’s talk about three.

Accuracy Versus Explainability

This trade-off goes to the heart of the hesitancy many professionals feel about adopting AI models: They are black boxes.

Understanding how a model arrived at a result is just as important as the result’s accuracy. While AI/ML models have been shown to provide highly accurate results, they are exceptionally hard to explain.

Traditional statistical methods, such as regression, enable users to understand why the model provided a particular answer. It’s relatively easy to explain and even quantify the impact each exogenous variable has on the decision being made. Being able to grasp the “why” helps a non-technical decision-maker trust a model’s capabilities.

More opaque models like XGBoost or neural networks are more difficult to explain. But this by itself is no reason to eschew them. We use black boxes every day without giving them a second thought. Do most people really understand how a microwave oven works, and does that stop them from making popcorn?
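
To make the contrast concrete, here is a minimal sketch in Python (scikit-learn assumed; the lane-rate variables and coefficients are invented for illustration). A linear regression hands back one coefficient per input, while a boosted-tree model fit to the same data, standing in here for XGBoost, offers no comparable per-variable number to show a decision-maker.

```python
# A minimal, illustrative sketch: hypothetical lane-rate data, scikit-learn assumed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # made-up inputs: distance, fuel index, market tightness
y = 2.0 + 1.5 * X[:, 0] + 0.4 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.1, 200)

# The regression exposes one coefficient per input: how much the predicted
# rate moves per unit change in that variable.
linear = LinearRegression().fit(X, y)
print(dict(zip(["distance", "fuel", "tightness"], linear.coef_.round(2).tolist())))

# A boosted-tree ensemble (standing in for XGBoost) is often more accurate on
# messy data, but its hundreds of trees offer no comparable per-variable number
# to hand a decision-maker.
boosted = GradientBoostingRegressor().fit(X, y)
```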

To be honest, most users of standard optimization tools can’t really explain how they work. They just trust that they do.

The only way for users to gain trust in a black (or blackish) box is through consistently successful interactions — just as people who have popped a lot of popcorn are comfortable with a microwave oven.

The challenge with black-box AI models is that the decision-maker doesn't always have the time or willingness to go through the repeated interactions needed to build that trust. In these cases, it might be better to use a more explainable model.

Automation Versus Insight

The second trade-off is whether the model is designed to automate a task or provide deeper insights into decisions.

It’s often said that a new AI model will automate the straightforward tasks so you can spend your time focusing on “more important things.” Conversely, AI models can also be used to connect the dots between seemingly disparate data points or observations, helping you make a more informed decision.

However, these approaches are fundamentally opposed. As a task becomes automated, the time spent thinking about its details decreases. Conversely, using an exploratory AI model to uncover new connections naturally slows down the decision-making process. A system cannot do both simultaneously, at least not effectively.

Empowering Versus Replacing

The third trade-off is between empowering existing workers through the AI model versus simply replacing them.

This is related to the automation-insight trade-off. AI may not replace a job entirely, but it can automate many of the tasks that are part of the overall job. In practice, the trade-off is a matter of degrees.

We expect an AI tool to improve a person’s productivity. And productivity is simply the ratio of output to input — for example, dispatches per hour or contracts processed per day. As AI tools continue to advance, they will naturally perform more tasks and allow people to spend time on other, perhaps less transactional, work.
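
A back-of-the-envelope sketch of that ratio, in Python with invented dispatcher numbers (nothing here is measured data):

```python
# Purely illustrative, hypothetical numbers.
dispatches_before, hours = 48, 8   # 6 dispatches per hour today
dispatches_after = 72              # AI handles routine tenders: 9 per hour

productivity_before = dispatches_before / hours
productivity_after = dispatches_after / hours
gain_pct = (productivity_after / productivity_before - 1) * 100
print(f"{productivity_before:.0f}/hr -> {productivity_after:.0f}/hr (+{gain_pct:.0f}%)")
```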

***

AI’s potential is real. However, it’s important to have a North Star: a clear understanding of what you want AI to accomplish. That way, you’ll better understand the trade-offs you’ll need to make in terms of accuracy versus explainability, automation versus insight, and empowering versus replacing.


About the Author

Chris Caplice, Ph.D., is chief scientist at DAT Freight & Analytics, a Denver-based freight exchange service and transportation information provider, and senior research scientist at the Massachusetts Institute of Technology (MIT) Center for Transportation and Logistics in Cambridge, Massachusetts. He is founder and co-director of the MIT FreightLab research initiative.