Many organizations are undergoing digital transformations with specific goals, strategies, and key performance indicators (KPIs) in mind. However, the challenge lies in executing these strategies successfully and pragmatically. This article explores this issue, drawing on use cases from various companies, including examples from the Benelux region, and suggesting a continuous lifecycle and DevOps approach for systems that extend beyond software.
Digital Transformation in Diverse Industries
Our perspective on digital transformation is based on our experience serving and collaborating with numerous companies from a broad range of industries. In the aerospace/defense and automotive industries, the trend is towards increasingly software-defined and autonomous systems developed using a mix of model-based and data-driven approaches. Other industries, such as railway systems, energy production, process industries, and industrial machinery manufacturers, are focusing on integrating platforms, becoming more data-centric to enhance performance, enable predictive maintenance, and generate new value streams through innovative customer services.
To do this, they strive to integrate Big Data, agile workflows, and DevOps approaches, for which MathWorks has built experience through our work with software and internet companies, along with financial service providers. Collaboration between various roles in today’s organizations requires aligning different technical and cultural approaches, such as those found in the scientific, bioengineering, and informatics teams in the medical, biotech, and pharmaceutical fields. Defining the abstraction between hardware and software – while achieving efficient integration in implementation – can involve approaches pioneered in electronics and semiconductor companies. And today, many systems developers are learning how to leverage wireless communications to provide enhanced system capabilities and services, following the lead of communications sector participants such as handset manufacturers, ground stations, and carriers.
The Future of Mass Production
In the factory of the future, one key theme is mass customization: being able to combine the flexibility and personalization of custom-made products with the low unit prices of mass production. Rainer Brehm, CEO Factory Automation at Siemens, described that as “producing what matters, […] reduce waste by producing what the market actually wants and need[s].” This type of mass customization requires production that is connected, flexible and autonomous to be able to adjust to the needs of the individual customer (figure 2). To achieve that, the production systems themselves need to be reconfigurable through software and able to perform different kinds of operations; in other words, they have to be connected, flexible and autonomous, too.
However, progress towards the smart factory vision is hindered by significant challenges. An Aberdeen Group study highlights the rising design complexity of autonomous systems (figure 1, left), while a McKinsey survey uncovers a growing gap between software complexity and development productivity (figure 1, right): productivity lags behind the demands of this complexity.
Weaving the Digital Thread: Moving from Models to Model-Based Design
Establishing a digital thread is a key step in managing complexity throughout the system lifecycle. The digital thread links and traces digital twin representations of a product and its data, making them accessible to the various stakeholders throughout the product's life. Through Model-Based Design, companies in various industries repurpose design models as digital twins, enabling functions such as predictive maintenance. Pragmatic digital transformation, focused on custom digital solutions, relies heavily on digital twins: virtual replicas of physical assets or processes. These virtual counterparts offer real-time insights and enhance collaboration, playing a crucial role in achieving operational efficiency, cost reduction, and strategic goals in digital transformation. For approaches to initiating digital twin projects, including insights from industry representatives in the Benelux region, see the earlier publication, How to Get Started with Digital Twinning, in Link Magazine.
In the system lifecycle (figure 2), models are often used at two specific stages: system-level conceptualization and subsystem design. But digital transformation demands more than that. It comes from the systematic use of models throughout the lifecycle, where simulatable models serve as an authoritative source of truth. These models accelerate key workflows such as rapid refinement of designs, selection of the best design options, and assessment of component reuse.
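To make the idea of model-driven design selection concrete, the sketch below uses a deliberately simple, hypothetical actuator model as the simulatable source of truth: each candidate design is simulated and the option with the shortest settling time wins. The model, parameter values, and option names are illustrative assumptions, not any specific company's design.

```python
# Illustrative sketch: comparing design options against a simulatable model.
# The second-order actuator model and all parameter values are hypothetical.

def simulate_step_response(gain, damping, dt=0.001, t_end=5.0):
    """Simulate a simple second-order actuator (hypothetical) and return the
    time after which the response stays within 2% of the setpoint."""
    pos, vel, setpoint = 0.0, 0.0, 1.0
    settled_since = None
    t = 0.0
    while t < t_end:
        acc = gain * (setpoint - pos) - damping * vel  # spring-damper dynamics
        vel += acc * dt
        pos += vel * dt
        t += dt
        if abs(pos - setpoint) <= 0.02:
            if settled_since is None:
                settled_since = t  # entered the 2% band
        else:
            settled_since = None   # left the band, reset
    return settled_since if settled_since is not None else float("inf")

# Candidate design options (hypothetical gain/damping pairs)
candidates = {"option_A": (20.0, 2.0), "option_B": (20.0, 9.0), "option_C": (5.0, 1.0)}

# Select the design with the shortest settling time
best = min(candidates, key=lambda name: simulate_step_response(*candidates[name]))
print(best)
```

In a real Model-Based Design workflow the toy simulation above would be replaced by a full system model, but the pattern is the same: every option is evaluated against the one authoritative model rather than against ad hoc prototypes.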
These models ensure the system performs correctly and create a digital thread of traceability. They are also used to automatically generate software for the production system, suitable for various platforms. This code generation supports various programming languages: C/C++, CUDA, and IEC 61131-3 Structured Text, making it suitable for microcontrollers, GPUs, FPGAs, and PLCs; the generated code can also be packaged for deployment on edge systems or in the cloud.
DevOps for Physical Systems
Traditionally, DevOps has been employed for software-only systems, for example by IT groups implementing IT software. Can it also apply to systems whose functionality is largely defined by software, but that also have physical components? Yes, by systematically using models, simulation, and data. In figure 3, the development activities on the left integrate into the agile workflow.
Archived data (green) from tests, experiments, and databases enters this workflow to inform development (figure 3, blue arrow). The digital twin monitors the operating asset and streams operational data (red) back into the development cycle, where it is used to update components and model parameters.
The same verification and validation steps performed interactively during design can also be automated on high-performance computing systems or in the cloud. This includes regression testing and confirming system behavior after software changes or hardware replacements, all managed by a continuous integration system such as Jenkins. In this DevOps approach, models and data are systematically utilized on both ends, with data streaming back from the operating asset.
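The kind of regression check such a CI job would run can be sketched as follows: re-simulate the model after a change and compare key outputs against a stored baseline within a tolerance. The toy compressor model, baseline values, and tolerance are hypothetical; in practice the baseline would come from the previously released, verified software version.

```python
# Illustrative sketch of a regression check a CI system (e.g., Jenkins) could
# run after a software change: re-simulate and compare against a baseline.
# Model, baseline values, and tolerance are hypothetical.

def simulate_pressure_rise(rpm):
    """Toy compressor model (hypothetical): outlet pressure ratio vs. speed."""
    return 1.0 + 2.5e-4 * rpm - 1.0e-8 * rpm ** 2

# Baseline results recorded from the previously released software version
baseline = {3000: 1.66, 6000: 2.14, 9000: 2.44}
tolerance = 0.01

def regression_test():
    """Return the operating points whose results deviate from the baseline."""
    return [rpm for rpm, expected in baseline.items()
            if abs(simulate_pressure_rise(rpm) - expected) > tolerance]

# CI marks the build failed if any operating point deviates
print("PASS" if not regression_test() else "FAIL")
```

Because the check is just a script over the simulatable model, the same test runs unchanged on a developer's desktop, on an HPC cluster, or in a cloud-hosted CI pipeline.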
A case study illustrates the real-world impact: Atlas Copco, a Belgium-based manufacturer of advanced air compressor systems, utilizes Model-Based Design to integrate models of mechanical, electrical, algorithmic, and software subsystems. These integrated models serve as a digital twin for system design, fostering collaboration among engineering teams. Given that compressed air applications consume more than 10% of global energy production, this approach enhances Atlas Copco’s products and services throughout the machine’s entire life cycle.
“We use a digital twin, powered by MATLAB® and Simulink®, as the single source of truth and then build applications on top so that everyone has access to the same data and information.” Carl Wouters, VP, Engineering, Atlas Copco.
These applications enable sales engineers to conduct personalized simulations, providing customized product offerings to customers. Current Atlas Copco compressor models are equipped with hundreds of sensors, and real-time data from over 250,000 compressors worldwide informs customer-specific maintenance strategies, yielding valuable insights and, importantly, generating new revenue streams.
Pragmatic digital transformation involves using simulatable models and data throughout the lifecycle to develop complex products, understand their current status, optimize their performance, and create added-value services. This shift requires adopting a DevOps approach that extends beyond software and encompasses the asset’s broader context.
Organizations need to develop their own vision of how to seize these opportunities. The question to ask is: what capabilities, processes, and skills would give them the capacity to create Perpetually Upgradeable Systems? And once they determine that, what value can they generate with that capacity?