BY YOSSI SHEFFI
Director of the MIT Center for Transportation and Logistics (CTL)
The societal and economic spasms of the early 2020s highlighted the crucial role of world-spanning supply chains in the modern global economy, as well as the growing role of digital technology — including artificial intelligence (AI) and automation — now and in the future. In this adaptation from his latest book, The Magic Conveyor Belt: Supply Chains, A.I. and the Future of Work, MIT Professor Yossi Sheffi considers how automation, robotics and AI are changing and augmenting jobs, and what tools will help humans be productive and provide value as they manage increasingly complex supply chains.
People, teams and organisations will need new tools to be as productive as possible with the new automation/AI technologies. Such tools need to enable workers, teams and managers to collaborate with each other and with the technology. After discussing where humans fit in the flow of work, the sections that follow describe four categories of tools that will help people sense, analyse and recommend courses of action in intricately connected global supply chains.
A spectrum of human-in-the-loop models
Which tools people will need to work with machines will depend on humans’ roles in the future economy and how they can best collaborate with AI and automation. In a Harvard Business Review article, two Accenture executives outlined five principles that can help companies optimise collaboration between humans and AI. These high-level principles are: re-imagining business processes, embracing experimentation/employee involvement, actively directing the AI strategy, responsibly collecting data and redesigning work to incorporate AI while cultivating related employee skills.
Re-imagining business processes, redesigning work to incorporate AI and cultivating related employee skills require thinking about the natural flow of activities and tasks in the organisation. Different theorists have developed different frameworks for people and companies to effectively carry out tasks and processes. Many of them involve some sort of sequence and iteration — or a loop — of steps that include gathering information about the situation, developing decisions or plans, taking action and gathering more information about the outcome.
In the context of AI and automation, an important question is what role humans and machines should play in these loops of controlled activities. At one extreme, a person might be fully in the loop, in that they must execute one or more essential steps every time the task must be done. Or, a machine might automatically process most of the routine instances of the task and only send the exceptional, anomalous, or complex instances of the task to a person who is on a side branch of the loop. Such a process might be able to run 24/7 for most activities, with only a fraction of cases delayed until normal business hours.
In even more advanced examples of automation, the person might only watch the loop through a dashboard; only when a problem arises would the person investigate and potentially intervene. Finally, human involvement might only be at a higher level, such as designing a machine’s fully autonomous system that operates continuously with workers rarely intervening during operations.
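The exception-routing version of this loop can be sketched in a few lines. The order fields, the anomaly threshold and the escalation rule below are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    value: float
    anomaly_score: float   # e.g. produced by an upstream ML model

def route(order: Order, threshold: float = 0.8) -> str:
    """Let the machine handle routine instances 24/7; escalate
    exceptional or high-stakes instances to a person."""
    if order.anomaly_score >= threshold or order.value > 100_000:
        return "human_review"    # the side branch of the loop
    return "auto_approve"        # fully automated path

orders = [Order("A1", 500.0, 0.05),
          Order("A2", 250_000.0, 0.10),
          Order("A3", 900.0, 0.95)]
decisions = {o.order_id: route(o) for o in orders}
```

Only the anomalous order and the unusually large one reach a person; the routine one is processed without delay.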
Illuminating the AI black box
Many machine learning systems act like inscrutable black boxes; they provide answers without any explanation of why the system chose that answer. An AI’s lack of explanations is one barrier to both the adoption and reliable use of deep learning systems because explanations play three key roles in any decision-making process. First, explanations are needed to convince stakeholders that the AI’s answer is correct. Second, explanations are required to cross-check or validate the AI’s answer: Is the AI using dubious data or logic? Third, they are useful to help people learn from the AI by seeing not only the answer but also its rationale.
To address this problem of “black box” AI, researchers and engineers are working on a new class of machine learning systems known as Explainable AI, or XAI. XAI machine learning systems output both answers and some form of explanation. The research required for XAI involves changes to the machine learning models themselves as well as psychological studies to determine what kinds of explanation humans need or want in order to make the best use of the system.
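One simple form of the XAI output pattern is an additive model that reports, alongside its answer, how much each input contributed to it. The risk features and weights below are invented purely for illustration:

```python
# Sketch of the XAI output pattern: return an answer together with an
# explanation of why. Feature names and weights are illustrative only.
WEIGHTS = {"late_shipments": 0.6, "demand_spike": 0.3, "port_congestion": 0.1}

def predict_with_explanation(features: dict) -> tuple[str, list[str]]:
    contributions = {f: WEIGHTS[f] * features.get(f, 0.0) for f in WEIGHTS}
    risk = sum(contributions.values())
    answer = "high_risk" if risk >= 0.5 else "low_risk"
    # Explanation: each feature's contribution, largest first.
    explanation = [f"{f} contributed {c:.2f}"
                   for f, c in sorted(contributions.items(),
                                      key=lambda kv: -kv[1]) if c > 0]
    return answer, explanation

answer, why = predict_with_explanation(
    {"late_shipments": 1.0, "port_congestion": 1.0})
```

The explanation lets a stakeholder cross-check the answer (is the top contributor a dubious signal?) and learn which factors the model treats as important.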
Digital twins for management and simulation
As the business environment, supply chains and technology become more complex, people need more tools to help them both understand the existing system and safely experiment with proposed decisions, tactics and strategies. One technology that aids people in doing this is digital twins. A digital twin is a detailed, realistic, digital replica of a physical system, such as a piece of equipment, a conveyance, a factory, a warehouse, a company or even an entire supply chain. However, a digital twin is more than just a computer representation of an asset: the physical asset is connected to its digital replica and continuously updates it with its actual condition.
Digital-twin technology enables the use of a type of AI known as reinforcement learning, which learns by trial and error
Digital twins can be used to visualise and monitor the performance of the physical system. They can also be employed to train people in basic operations or in handling problems. Companies can make multiple copies of a digital twin, and the copies can serve to simulate and compare the effects of volatility, scenarios, contingencies or proposed changes to the object or to how it is used.
Digital-twin technology also enables the use of a type of AI known as reinforcement learning, which learns by trial and error; that is, it tries various actions and is “rewarded” or “punished” for the resulting outcomes. Copies of a digital twin can provide a realistic simulated environment for these trial-and-error learning systems.
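A minimal sketch of this idea, assuming a toy single-product stock point as the simulated “twin” and tabular Q-learning as the reinforcement learner (demand distribution, rewards and all parameters are illustrative):

```python
import random

random.seed(0)

def twin_step(stock: int, order_qty: int) -> tuple[int, float]:
    """Toy simulated 'digital twin' of a stock point: random demand
    arrives, and the reward penalises stockouts more than holding."""
    demand = random.randint(0, 3)
    available = stock + order_qty
    shortfall = max(0, demand - available)
    stock = max(0, available - demand)
    reward = -2.0 * shortfall - 0.5 * stock
    return stock, reward

ACTIONS = [0, 1, 2, 3]              # possible order quantities
STATES = range(7)                   # on-hand stock, capped at 6
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration

state = 0
for _ in range(20_000):             # trial and error against the twin
    if random.random() < eps:
        a = random.choice(ACTIONS)                      # explore
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])   # exploit
    nxt, r = twin_step(state, a)
    nxt = min(nxt, max(STATES))
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

# The learned policy: how much to order at each stock level.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The “reward” and “punishment” come entirely from the simulated twin, so the learner can make thousands of mistakes at no cost to the physical operation.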
Better interfaces and collaboration tools
Interfaces between people and machines are an essential element of collaboration between humans and computers. Advances in very high-speed, low-power, low-cost mobile computers, displays and cameras are enabling innovative computer interfaces that provide augmented reality (AR) and virtual reality (VR).
With augmented reality, the user wears a headset or smart glasses. The user can also employ a handheld device that overlays digital data wherever the user looks with the headset or points the device. AR visually connects a physical object and the digital data associated with it in two ways. First, the AR overlays digital data on the user’s physical environment. For instance, a person might look at a piece of equipment and get overlays of performance trends on that equipment, error messages, instruction manuals, usage schedules and so forth. Second, many AR systems record the physical space and the objects in it (e.g., locations of items, quantities in bins) as well as any actions taken (e.g., picking an object, completing a maintenance task). These two aspects of AR ensure that the object and its digital twin are in sync.
With AR, the user wears a headset or smart glasses, or employs a handheld device, which overlays digital data wherever the user looks or points the device
VR, by contrast, entirely replaces the user’s field of vision with an immersive, computer-generated view of a virtual or digital world. The technology typically creates a wholly synthetic world or uses copies of a digital twin for immersive simulations for applications in engineering, training, customer experience and what-if exploration. VR also allows for remote work or telepresence in which immersive displays relay live video-camera data from the remote location. In another application, multi-user VR can bring collaborative functionality to remote workers and distant stakeholders. Such virtual interfaces can be useful in a global supply chain context or for remote workplaces where gathering all the expertise or stakeholders in the same geographic location is too costly or time-consuming.
Democratising tool development
A fundamental trend in computers and high technology is the “de-skilling” of automation and AI-based systems — making more and more aspects of computer use accessible to more and more people. This enables workers to create automation and AI that help them transition toward software-assisted jobs that will be in greater demand in the future.
One category of these user-friendly tools helps workers create their own robotic process automation systems without having to write code themselves. The worker performs a menial task on the computer while the tool records the sequence of activity. The tool can then create a robotic process that can repeat those actions for future instances of that task.
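The record-and-replay idea behind such tools can be sketched as follows; the recorder and the form-filling “actions” are hypothetical stand-ins for the mouse clicks and keystrokes a real RPA tool would capture:

```python
class TaskRecorder:
    """Record a worker performing a task once, then replay the
    recorded steps as an automated process for future instances."""

    def __init__(self):
        self.steps = []                   # (action, kwargs) pairs

    def record(self, action, **kwargs):
        self.steps.append((action, kwargs))
        return action(**kwargs)           # perform it while recording

    def replay(self):
        return [action(**kwargs) for action, kwargs in self.steps]

# Stand-ins for the clicks and form fills a real RPA tool would capture.
def open_form(name: str) -> str:
    return f"opened {name}"

def fill_field(field: str, value: str) -> str:
    return f"{field}={value}"

rec = TaskRecorder()
rec.record(open_form, name="invoice")
rec.record(fill_field, field="total", value="100")
bot_run = rec.replay()   # the 'bot' repeats the recorded task
```

The worker never writes code; the tool captures the sequence once and turns it into a repeatable process.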
Another category encompasses the so-called low-code or no-code development platforms. These allow non-programmers to create software such as websites, applications and mobile apps. The platforms use graphical design tools, a set of templates and modular building blocks to help the user construct the software without needing to learn a traditional programming language.
Code-development platforms can use machine learning applied to the vast amounts of existing software to help people write code. Generative AI can create some code from a simple text description of what the code is supposed to do. With such systems, non-programmers can write a description of what they want and the AI will create the code that matches that description.
While such generative AI systems may be de-skilling software development, leading to a loss of work for programmers and software engineers, they provide a gain for domain experts, who will be able to build a product themselves. In 2021, the technology analyst firm Gartner predicted that by 2024, 80 percent of technology products and services would be built by those who were not technology professionals. It is possible, then, that the “85 million jobs lost, 97 million jobs gained” narrative (about how automation will affect employment over time) may say more about changes in how existing employees spend their time than it does about whether they have a job or not.
- Wilson, H. James, and Paul R. Daugherty. "Collaborative Intelligence: Humans and AI Are Joining Forces." Harvard Business Review, 4 April 2019.
- Turek, Matt. "Explainable Artificial Intelligence." DARPA, 2018.
- Gartner. "Gartner Says the Majority of Technology Products and Services Will Be Built by Professionals Outside of IT by 2024." Press release, 2021.
Dr. Yossi Sheffi is the Elisha Gray II Professor of Engineering Systems at the Massachusetts Institute of Technology, where he serves as Director of the MIT Center for Transportation and Logistics (CTL).
Adapted from The Magic Conveyor Belt: Supply Chains, A.I. and the Future of Work, published by MIT CTL Media, copyright 2023.