Engineers are finding new ways to develop more effective AI models. MathWorks’ Seth DeLand explores how simulation and artificial intelligence combine to solve challenges of time, model reliability and data quality.
In recent years, Industry 4.0 has led to significant interest in AI methods as a way to gain a competitive advantage in existing engineering applications. However, engineers still have reservations about these methods and can find it difficult to apply and integrate them into their systems. Multidisciplinary collaboration plays a key role in this process. The data used to develop an AI model is crucial to its performance – if that data is insufficient, inaccurate or biased, the model’s predictions will be too. Furthermore, applying domain-specific knowledge and data engineering methods is essential to enable the model to learn the relevant correlations.
At a high level, there are three key ways AI and simulation are intersecting. The first has to do with addressing the challenge of insufficient data, as simulation models can be used to synthesize data that might be difficult or expensive to collect. The second is the use of AI models as approximations for complex high-fidelity simulations that are computationally expensive, also referred to as reduced-order modeling. The third is the use of AI models in embedded systems for applications such as controls, signal processing and embedded vision, where simulation has become a key part of the design process.
Challenge 1: Data for training and validating AI models
The process of collecting real-world data and turning it into good, clean and cataloged data is difficult and time consuming. Engineers must also be mindful that while most AI models are static (they run using fixed parameter values), they are constantly exposed to new data, and that data might not be captured in the training set. Projects are more likely to fail without robust data to help train a model, making data preparation a crucial step in the AI workflow. ‘Bad’ data can leave an engineer spending hours trying to determine why the model isn’t working, with no promise of insightful results.
Simulation can help engineers overcome these challenges. In recent years, data-centric artificial intelligence has brought the AI community’s focus to the importance of training data. Rather than spending all a project’s time tweaking the AI model’s architecture and parameters, it has been shown that time spent improving the training data can often yield larger improvements in accuracy. The use of simulation to augment existing training data has multiple benefits: computational simulation is generally much less costly than physical experiments; the engineer has full control over the environment and can simulate scenarios that are difficult or too dangerous to create in the real world; and the simulation gives access to internal states that might not be measured in an experimental setup, which can be very useful when debugging why an AI model doesn’t perform well in certain situations.
With a model’s performance so dependent on the quality of the data it’s being trained with, engineers can improve outcomes with an iterative process of simulating data, updating an AI model, observing what conditions it can’t predict well and collecting more simulated data for those conditions. Using industry tools such as Simulink and Simscape, they can generate simulated data that mirrors real-world scenarios. The combination of Simulink and MATLAB enables them to simulate their data in the same environment in which they build their AI models, meaning they can automate more of the process and not have to worry about switching toolchains.
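A minimal sketch of that iterative loop, in Python rather than a real Simulink workflow: the `simulate` function and the per-bin average "model" below are illustrative stand-ins, but the pattern – train, find the worst-predicted operating region, simulate more data there, retrain – is the one described above.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a Simulink/Simscape model that maps an
# operating condition x to a response y.
def simulate(x):
    return math.sin(x) + 0.1 * x

N_BINS, LO, HI = 8, 0.0, 6.0

def bin_of(x):
    return min(int((x - LO) / (HI - LO) * N_BINS), N_BINS - 1)

# A deliberately simple "AI model": the average response per operating bin.
def fit(samples):
    sums, counts = [0.0] * N_BINS, [0] * N_BINS
    for x, y in samples:
        sums[bin_of(x)] += y
        counts[bin_of(x)] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def bin_errors(model, points):
    errs, counts = [0.0] * N_BINS, [0] * N_BINS
    for x, y in points:
        b = bin_of(x)
        errs[b] += abs(model[b] - y)
        counts[b] += 1
    return [e / c if c else 0.0 for e, c in zip(errs, counts)]

# Step 1: initial training data only covers part of the operating range,
# mimicking real measurements that miss some conditions.
train = [(x, simulate(x)) for x in (random.uniform(LO, 3.0) for _ in range(40))]
test = [(x / 10.0, simulate(x / 10.0)) for x in range(60)]

errs = bin_errors(fit(train), test)

# Step 2: find the operating region the model predicts worst.
worst = errs.index(max(errs))

# Step 3: simulate extra data for that region and retrain.
width = (HI - LO) / N_BINS
for _ in range(40):
    x = random.uniform(LO + worst * width, LO + (worst + 1) * width)
    train.append((x, simulate(x)))

new_errs = bin_errors(fit(train), test)
```

After the targeted round of simulated data, the error in the previously worst region drops sharply, while the rest of the model is unchanged.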
Challenge 2: Approximating complex systems with AI
When designing algorithms that interact with physical systems, such as an algorithm to control a hydraulic valve, simulation-based modeling of the system is key to enabling rapid design iteration. In the controls field, this is called the “plant model”; in the wireless area, the “channel model”; in reinforcement learning, the “environment model.” Whatever it’s called, the idea is the same: create a simulation-based model that recreates, with the necessary accuracy, the physical system the algorithms interact with.
The problem with this approach is that to achieve the “necessary accuracy,” engineers have historically created high-fidelity models from first principles. For complex systems, these models can take a long time to both build and simulate. Long-running simulations allow fewer design iterations, so there may not be enough time to evaluate potentially better design alternatives.
This is where AI comes in: engineers can take the high-fidelity model of the physical system they’ve built and approximate it with an AI model (a reduced-order model). In other situations, they might train the AI model directly from experimental data, completely bypassing the creation of a physics-based model. The benefit is that the reduced-order model is much less computationally expensive than the first-principles model, meaning that engineers can perform more exploration of the design space. And, if a physics-based model of the system does exist, they can always use that model later in the process to validate the design determined using the AI model.
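The idea can be sketched in a few lines. Here the “high-fidelity model” is a stand-in (a finely stepped integration of a simple first-order system, where the small time step is what makes each run expensive), and the reduced-order model is a quadratic interpolant through a handful of expensive runs; in practice the surrogate would be a trained AI model, but the trade – a few expensive runs up front in exchange for cheap queries during design exploration – is the same.

```python
import math

# "High-fidelity" stand-in: integrate dy/dt = u - k*y to t_end with a very
# small time step, so each run is comparatively expensive.
def high_fidelity(k, u=1.0, t_end=5.0, dt=1e-4):
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (u - k * y)
    return y

# Reduced-order model: interpolate three expensive runs, then answer new
# design queries without re-running the simulation.
k_samples = [0.8, 1.0, 1.2]
y_samples = [high_fidelity(k) for k in k_samples]

def reduced_order(k):
    # Lagrange interpolation through the sampled (k, y) pairs
    total = 0.0
    for i, (ki, yi) in enumerate(zip(k_samples, y_samples)):
        w = 1.0
        for j, kj in enumerate(k_samples):
            if j != i:
                w *= (k - kj) / (ki - kj)
        total += yi * w
    return total
```

Querying `reduced_order` at a new design point costs a few multiplications instead of tens of thousands of integration steps, while staying close to the high-fidelity answer inside the sampled range.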
Recent advances in artificial intelligence, such as neural ODEs (ordinary differential equations), combine AI training techniques with models that have physics-based principles embedded within them. Such models can be useful when there are certain aspects of the physical system that the engineer wishes to retain, while approximating the rest of the system with a more data-centric approach.
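A toy sketch of that hybrid idea: the model below keeps a known physics term (a nominal damping of −y) inside the differential equation and learns only a correction term from data. For a dependency-free example, the “network” is a single coefficient fitted by brute-force search; a real neural ODE would use a neural network as the correction and backpropagate through the solver.

```python
import math

# True system (unknown to the model): dy/dt = -2*y.
def observed(t, y0=1.0):
    return y0 * math.exp(-2.0 * t)

# Hybrid right-hand side: known physics (-y) plus a learned correction.
def dydt(y, theta):
    return -y + theta * y

# Forward pass: integrate the hybrid ODE with Euler steps.
def integrate(theta, y0=1.0, t_end=1.0, dt=0.01):
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * dydt(y, theta)
    return y

# Training data: the state observed at t = 1.0.
target = observed(1.0)

def loss(theta):
    return (integrate(theta) - target) ** 2

# Fit theta by scanning candidates (a stand-in for gradient-based training).
theta = min((t / 100.0 for t in range(-200, 101)), key=loss)
```

The fitted correction comes out close to −1, recovering the missing half of the true decay rate while the known −y term is retained untouched.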
Challenge 3: AI for algorithm development
Engineers in applications like control systems have come to rely more and more on simulations when designing their algorithms. In many cases, they’re developing virtual sensors, observers that attempt to calculate a value that isn’t directly measured from the available sensors. A variety of approaches are used, including linear models and Kalman filters.
But the ability of these methods to capture the nonlinear behavior present in many real-world systems is limited, so engineers are turning to AI-based approaches, which have the flexibility to model the complexities. They use data (either measured or simulated) to train an AI model that can predict the unobserved state from the observed states and then integrate that AI model with the system.
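As a sketch of a virtual sensor, the snippet below trains a small nonlinear model (k-nearest-neighbour regression) to estimate an unmeasured state from two measured signals. The plant, signal names and the temperature relationship are all illustrative stand-ins, not a real system; in practice the training data would come from instrumented test rigs or simulation.

```python
import math
import random

random.seed(1)

# Illustrative plant: measured signals are motor current i and speed w;
# the unmeasured state is rotor temperature, a nonlinear function of both.
def rotor_temp(i, w):
    return 25.0 + 8.0 * i**2 + 0.5 * math.sqrt(w)

# Training data: measured signals plus the (normally unmeasured) state,
# recorded with a little sensor noise.
train = []
for _ in range(500):
    i, w = random.uniform(0.0, 3.0), random.uniform(0.0, 400.0)
    train.append((i, w, rotor_temp(i, w) + random.gauss(0.0, 0.2)))

# Virtual sensor: average the k nearest training samples in signal space.
def estimate_temp(i, w, k=5):
    # scale speed so both inputs contribute comparably to the distance
    near = sorted(train, key=lambda s: (s[0] - i) ** 2 + ((s[1] - w) / 130.0) ** 2)
    return sum(s[2] for s in near[:k]) / k
```

Once trained, `estimate_temp` stands in for the missing physical sensor, tracking the nonlinear relationship to within a few degrees across the operating range.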
In this case, the AI model is included as part of the controls algorithm that ends up on the physical hardware, which has performance/memory limitations and typically needs to be programmed in a lower-level language like C/C++. These requirements can impose restrictions on the types of machine learning models that are appropriate for such applications, so the engineers may need to try multiple models and compare tradeoffs in accuracy and on-device performance.
At the forefront of research in this area, reinforcement learning takes this approach a step further. Rather than learning just the estimator, reinforcement learning learns the entire control strategy. This has been shown to be a powerful technique in some challenging applications such as robotics and autonomous systems, but training such a controller requires an accurate model of the environment, which may not be readily available, as well as the computational power to run a large number of simulations.
In addition to virtual sensors and reinforcement learning, AI algorithms are increasingly used in embedded vision, audio and signal processing and wireless applications. For example, in a car with automated driving capabilities, an AI algorithm can detect lane markings on the road to help keep the vehicle centered in the lane. In a hearing aid device, AI algorithms can help enhance speech and suppress noise. In a wireless application, AI algorithms can apply digital pre-distortion to offset the effects of nonlinearities in a power amplifier. In all these applications, AI algorithms are part of the larger system. Simulation is used for integration testing to ensure the overall design meets the requirements.
The future of AI for simulation
Overall, as models grow in size and complexity to serve increasingly complex applications, AI and simulation will become even more essential tools in the engineer’s toolbox. Industry tools like Simulink and MATLAB have empowered engineers to optimize their workflows and cut their development time by incorporating techniques such as synthetic data generation, reduced-order modeling and embedded AI algorithms for controls, signal processing, embedded vision and wireless applications.
With the ability to develop, test and validate models accurately and affordably, before hardware is introduced, these methodologies will only continue to grow in use.