Thoughts about the evolution of Supply Chain Management, AI and ML

As AI is revolutionizing the world and bold statements like “Supply chain management will be fundamentally changed by AI” are everywhere, we will reflect and speculate about the future of supply chain management under the realm of AI. To aim at a comprehensive consideration of the subject, the evolution of AI itself has to be regarded as well. There will be an informal consideration of three key areas:

State of AI/ML:

A short characterization of where AI stands. This is an informal collection of facts that are relevant to getting an idea of the role AI and machine learning play in supply chain management.

AI/ML in SCM:

A brief review of where AI is used, and could be used, in supply chain management.

Digital Twins:

We are trying to figure out the role of digital twins in supply chain management. At least in recent discussions, digital twins are supposed to be the upcoming decision support systems, not only for product development, but also for entire networks that give comprehensive feedback about their real-time state.

A short characterization of AI evolution

This is a rough, incomplete description of AI evolution, just to set the scene for our thinking about supply chain evolution (the timestamps are illustrative, not precise):

20 years ago:

Computing capacity was expensive and neural networks could not be fed with sufficient data to show great benefits. Traditional machine learning algorithms dominated the field with some satisfactory results, but in general far away from what we are getting today, especially for non-tabular data like images.

10 years ago:

Compute capacity was affordable. Big data was a buzzword. Data-hungry neural networks took over and, given the right architecture, were very performant for a lot of image recognition and NLP tasks. The saying was that everything a human can do, the machine does faster and more reliably.

Now:

The transformer architecture takes over. The replication and combination of conventional wisdom is impressive. Some people are hoping that we are on the brink of universal learning.

So, this is all great. Still, AI shines when mastering a specific problem (teach a robot to do a backflip, and it still cannot walk). Large language models need a lot of data and then they excellently inform you about conventional wisdom. If you ask for a standard algorithm that is everywhere on the web, it just works great. If you ask about a rare problem that has only a few or no web citations, it will be completely lost, or even worse, tell you something which is just not right. To be fair, the newer versions will say “I don’t know.”, which is OK.

This will all improve, but I have a hard time imagining that the fundamental problem will go away: how should the system know things for which it has no data (interpolation vs. extrapolation)? So, surprise me. Even the praised Euclidean proofs fall under this category: this is a well-defined, strict field, where the evaluation of existing combinations can lead to new knowledge. The new alchemy is that the models will create their own data and cascading models will learn from each other. Sounds like the derivative-of-derivatives stuff that was AAA-rated and led to the financial crisis. Again, surprise me.

If you think about supply chain management, systems can either produce solutions that are widely accepted and just adopted, or they provide decision support, where the supply chain manager still makes the final decision. When a system proposes an action, this is similar to reinforcement learning: find the optimal decision to get a maximum reward.

The rough state of reinforcement learning is as follows: impressive results when playing Atari games. Can be in danger of only finding local optima. Might need a lot of data and long learning times. Is subject to a lot of tweaks that are supposed to enable quicker learning and convergence to at least the best local optimum. These adjustments include (among others): reward shaping, model-based approaches, or reward learning. Long story short: the narrative is that the agent learns by itself, but in reality you model a lot.
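To make the “you model a lot” point concrete, here is a minimal sketch of tabular Q-learning on a toy single-item inventory problem. Every number in it (stock levels, order quantity, demand range, reward weights) is an illustrative assumption, not something from real supply chain data; notice how much of the “learning” is really modeling choices we made up front.

```python
import numpy as np

# Toy inventory problem (all parameters are illustrative assumptions):
# state  = stock on hand (0..5)
# action = 0 (order nothing) or 1 (order 3 units)
# reward = revenue - holding cost - stockout penalty
rng = np.random.default_rng(0)
MAX_STOCK, ORDER_QTY = 5, 3
ALPHA, GAMMA, EPSILON, STEPS = 0.1, 0.9, 0.1, 20_000

Q = np.zeros((MAX_STOCK + 1, 2))  # one Q-value per (state, action)
state = 0
for _ in range(STEPS):
    # epsilon-greedy exploration
    action = int(rng.integers(2)) if rng.random() < EPSILON else int(Q[state].argmax())
    stock = min(state + action * ORDER_QTY, MAX_STOCK)
    demand = int(rng.integers(0, 3))          # demand of 0, 1 or 2 units
    sold = min(stock, demand)
    reward = 2.0 * sold - 0.2 * (stock - sold) - 1.0 * (demand - sold)
    next_state = stock - sold
    # standard Q-learning update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)
print("learned policy (order? per stock level):", policy)
```

Even this five-line environment required hand-crafted reward weights, a discount factor, and an exploration schedule, which is exactly the modeling effort hidden behind the “agent learns by itself” narrative.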

Decision making under uncertainty

If you try to recognize a cat in a picture, the state space is very high-dimensional, but you find decision boundaries that are very clear in most cases. There might be raccoons or blurry pictures, but for a clearly identifiable cat you will get a high score from the model that this is a cat. If you have decisions in the supply chain that are cat-like, then you can just automate them, and perhaps leave the supply chain raccoon to human judgement. The rest of the decisions are all different: you get a certain confidence level for a decision, and these confidences are far away from 0 and 1 (1 implying full confidence). These systems will suggest a decision with a certain confidence level.

If you check the literature, there are a lot of discussions about probabilities in machine learning. The numbers between 0 and 1 that are produced by sigmoid and softmax are only proxies for probability measures. Alternative approaches assume a probability distribution and learn its parameters.
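The cat/raccoon routing idea can be sketched as a simple confidence threshold. This is a minimal illustration with made-up logits and a made-up threshold of 0.9; the softmax scores are, as noted above, proxies for probabilities rather than calibrated ones.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))  # subtract the max for numerical stability
    return z / z.sum()

def route_decision(logits, threshold=0.9):
    """Automate only when the model's top softmax score clears the threshold;
    otherwise escalate to a human planner. Returns (route, predicted class)."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(probs.argmax())
    return ("automate", top) if probs[top] >= threshold else ("escalate", top)

# A clear-cut "cat-like" case vs. an ambiguous "raccoon-like" case:
print(route_decision([8.0, 1.0, 0.5]))   # one class dominates -> automate
print(route_decision([2.0, 1.8, 1.6]))   # scores close together -> escalate
```

In practice the threshold itself is a business decision (the cost of a wrong automated action vs. the cost of human review), not something the model provides.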

We will examine below how these areas fit into supply chain management.

AI and ML in Supply Chain Management

Supply chain automation:

this is clearly the area where a system is faster and less error-prone than a human being (e.g. image recognition and classification of products, extraction of information from documents, …).
The technologies for these tasks are available. It is a question of organizational capabilities and economic efficiency whether the deployment of such technologies makes sense.

Decision support:

in the current discussion, one can get the impression that AI is a Swiss Army knife that automatically solves problems that could not be solved before. One should not forget decades of operations research, where solutions, heuristics and approximations for typical business problems have been found. These solutions are excellent benchmarks for ML solutions. In general, all planning problems can be approached with ML techniques, but there might be traditional methods that do equally well or better. And never forget: there are a lot of methods that stem from applied statistics, and it is highly debatable whether these count as machine learning or not. Demand forecasting on every supply chain level is a good candidate for big progress: short-term demand sensing is already used, and automated forecasting for thousands of SKUs with the right reinforcement learning mechanisms sounds promising.
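The benchmark point is easy to demonstrate. Below is a toy comparison on synthetic weekly-seasonal demand (all numbers assumed for illustration): a seasonal-naive forecast (repeat the value from one week earlier) against a plain moving average, scored by mean absolute error. Any ML forecaster should at least beat such cheap classical baselines before it earns its keep.

```python
import numpy as np

# Synthetic demand with a weekly pattern plus noise (illustrative numbers only)
rng = np.random.default_rng(42)
season = np.array([20, 22, 25, 30, 45, 60, 35], dtype=float)  # Mon..Sun pattern
demand = np.tile(season, 12) + rng.normal(0, 2, 84)           # 12 weeks of data

test = demand[70:]                                            # last 2 weeks held out

# Baseline 1: seasonal naive -- forecast = value from exactly one week earlier
seasonal_naive = demand[70 - 7:84 - 7]
# Baseline 2: moving average over the preceding 7 days
moving_avg = np.array([demand[t - 7:t].mean() for t in range(70, 84)])

mae = lambda forecast: float(np.mean(np.abs(test - forecast)))
print("MAE seasonal-naive:", round(mae(seasonal_naive), 2))
print("MAE moving average:", round(mae(moving_avg), 2))
```

On strongly seasonal demand the seasonal-naive baseline wins easily here, because the moving average smooths the weekly peak away; this is precisely the kind of sanity check a proposed ML forecaster has to survive.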

Supply chain redesign:

the idea is simple: you get the state of your supply chain in real time, and the system uses this information to suggest modifications of the supply chain architecture, e.g. changes in the BTO/BTS cross-over, the number of warehouses, transportation modes, you name it. This seems far away. Geometric deep learning, which exploits symmetries of data, goes in that direction where network architectures are concerned. It seems to require a more abstract understanding of supply chain architectures in order to uncover attributes that are abstractly transformable. Having said that: if we keep the scope small and consider a clearly defined set of alternatives for a specific characteristic, we can have system support for a decision about an isolated redesign choice. How to re-examine the entire SC network is a whole different story.

Digital Twins

Now we are ready for digital twins. First, we give a definition and then we examine how AI/ML fits in. The working assumption is that the digital twin is the future central decision support tool for a supply chain, responsible for decisions from the operational through the tactical to the strategic level.

Digital twin technology is the process of using real-time data to create a digital representation of a real-world object. In the literature this is distinguished from traditional simulation: the digital twin operates on real-time data, the user can interact with the model, and changes can be dynamically incorporated. The idealized picture is that the real object changes the model and the model changes the object in a continuous improvement cycle.

A network twin models not a single physical object, but a network of objects.

So, digital twins are simulations on steroids (just to be simplistic). Ever tried to simulate a retail warehouse with 10,000 SKUs of the most diverse form factors, transportation modes and demand characteristics? Perhaps it is a central warehouse that delivers to the whole of Europe (EU and other countries), perhaps big retailers have specific packaging requirements, perhaps electronic devices have to be specially labeled, perhaps you have to run launch campaigns that make it necessary to bundle certain products. The list is endless.

Even for greater minds than mine, this is an ambitious task. Basically, you either start a project to make a network twin of the entire warehouse, and you will fail, or you start small: take goods-out only, or the fine-picking area only, and have a controlled start. Let’s not forget that the idea stems from product development, maintenance and improvement, where there is one product that lives in the IoT world, and all of its characteristics have been carefully designed. Why would anything less be sufficient for our retail warehouse twin?

Are digital twins a specific AI or Machine Learning topic?

It is a topic of operations management, operations improvement, and operations research; AI/ML are just part of that.
As elaborated above, there are AI/ML methods that can be used, but the modeling exercise is huge and cannot be covered by AI/ML methods alone. In many cases the traditional planning algorithm already does the job, and it is not clear that an AI algorithm will get better results. The same holds for traditional forecasting: if you screen the literature, there is no evidence that machine learning performs better across the board. It remains to be seen whether transformers with self-attention mechanisms will provide the breakthrough. The modeling exercise is not going away; it has been done over decades, and the only criterion is the performance of the model. So, up to now, no revolution. But the next quantum leap in AI will change this.

Where is this going?

In my view, the combination of human and machine will be the most promising development. The transformer models already incorporate human choices in their learning (human-in-the-loop). In a supply chain decision process, the approval or adaptation of a machine-generated plan would be such a signal. In the same spirit, existing models that represent existing knowledge will be increasingly combined with AI models. The digital twin development, in the idealized sense, goes in both directions: the real world changes the model, and the model influences the real world. So, these will increasingly dovetail. AI/ML tackles every business process, and the development is highly dynamic. Exciting times. However, the machines are not taking over. Not yet.
This is all speculation, but sometimes it is fun just to speculate.
