Mohamed Anas is Mathworks’ regional engineering manager. Paola Jaramillo is a data scientist and Johanna Pingel and David Willingham are product marketing managers for deep learning at Mathworks.

21 October 2021

Building AI applications isn’t just about modeling; it’s a complete set of steps that includes data preparation, modeling, simulation, testing and deployment. With the right tools and support, engineers and scientists can achieve success without having to become data scientists or AI experts.

With the increased availability of ‘big industrial data’, compute power and scalable software tools, it’s easier than ever to use artificial intelligence (AI) in engineering applications. AI methods ‘learn’ information directly from data, without relying on a predetermined equation as a model, which makes them particularly suitable for today’s complex systems.

Driven by analytics based on industrial data, engineers and scientists are using AI to improve their technologies. Analytics modeling is the ability to describe and predict a system’s behavior from historical data, using domain-specific techniques for data preparation and feature engineering, together with AI models trained through machine and/or deep learning. Combining these capabilities with automatic code generation, targeting anything from edge to cloud, enables reuse while automating actions and decisions.

Mathworks AI workflow
Four steps engineers should consider for a complete, AI-driven workflow.

Engineers using machine learning and deep learning often expect to spend a large percentage of their time developing and fine-tuning AI models. Yes, modeling is an important step in the workflow, but the model isn’t the end of the journey. In fact, building AI applications isn’t just about modeling but a complete set of steps that includes data preparation, modeling, simulation, testing and deployment. The key to success in practical AI implementation is uncovering any issues early on and knowing which aspects of the workflow to focus time and resources on for the best results – and these aren’t always the most obvious steps.

Step 1: data preparation

Data preparation is arguably the most important step in the AI workflow. Without robust and accurate data as input to train a model, projects are more likely to fail. If you give the model ‘bad’ data, you won’t get insightful results – and will likely spend many hours trying to figure out why the model isn’t working.


To train a model, you should begin with as much clean, labeled data as you can gather. This may also be one of the most time-consuming steps of the workflow. When deep-learning models don’t work as expected, engineers often focus on how to make the model better – tweaking parameters, fine-tuning it and doing multiple training iterations. However, they’d be better served focusing on the input data: pre-processing it and ensuring it’s labeled correctly, to confirm that the data can be understood by the model.
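To make this concrete, here’s a minimal pre-processing sketch in Matlab. The file and variable names are hypothetical; the lines merely illustrate the kind of cleaning involved, assuming tabular sensor data with a label column.

% Illustrative pre-processing of a hypothetical sensor log.
data = readtable('sensor_log.csv');              % raw measurements
data = rmmissing(data);                          % drop incomplete rows
data.Temp = filloutliers(data.Temp, 'linear');   % interpolate over outliers
data.Temp = normalize(data.Temp);                % zero mean, unit variance
data.Label = categorical(data.Label);            % explicit class labels for training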

Step 2: AI modeling

Once the data is clean and properly labeled, it’s time to move on to the modeling stage of the workflow, where the data serves as input and the model learns from it. The goal of a successful modeling stage is a robust, accurate model that can make intelligent decisions based on the data. This is also where deep learning, machine learning or a combination of the two comes into the workflow, as engineers decide which approach delivers the most accurate, robust result.

At this stage, regardless of whether you go for deep learning (neural networks) or machine learning models (SVMs, decision trees), it’s important to have direct access to the many algorithms used in AI workflows, such as classification, prediction and regression. You may also want to use a variety of prebuilt models developed by the broader community, as a starting point or for comparison.
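As a minimal sketch of what that access looks like in practice, the following Matlab lines fit two of the classical models mentioned above on the hypothetical labeled table from the pre-processing example; they’re illustrative, not a prescribed recipe.

% Fit two classical classifiers on the same labeled table (illustrative).
svmModel  = fitcsvm(data, 'Label');    % support vector machine, assuming a binary label
treeModel = fitctree(data, 'Label');   % decision tree
% Compare generalization error via 10-fold cross-validation.
svmLoss  = kfoldLoss(crossval(svmModel));
treeLoss = kfoldLoss(crossval(treeModel));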

While algorithms and prebuilt models are a good start, they’re not the complete picture. Engineers learn how to use these algorithms and find the best approach for their specific problem by using examples. A tool such as Matlab provides hundreds of examples for building AI models across multiple domains.

AI modeling is an iterative step within the complete workflow, and engineers must track the changes they’re making to the model throughout. Tracking changes and recording training iterations, with tools like Experiment Manager, is crucial, as it helps explain which parameters lead to the most accurate model and makes the results reproducible.
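Even without a dedicated app, the underlying discipline can be sketched as a loop that logs every hyperparameter combination together with its resulting loss; the parameter names and values below are made up for illustration.

% Log each training configuration and its loss (illustrative values).
results = table('Size', [0 3], ...
    'VariableTypes', {'double', 'double', 'double'}, ...
    'VariableNames', {'MaxSplits', 'MinLeaf', 'Loss'});
for maxSplits = [10 50 200]
    for minLeaf = [1 5 20]
        mdl = fitctree(data, 'Label', ...
            'MaxNumSplits', maxSplits, 'MinLeafSize', minLeaf);
        results(end+1, :) = {maxSplits, minLeaf, kfoldLoss(crossval(mdl))};
    end
end
sortrows(results, 'Loss')   % best configuration first, and fully reproducible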

Step 3: simulation and test

AI models exist within a larger system and must work with all the other pieces in that system. Consider an automated-driving scenario: not only do you have a perception system for detecting objects (pedestrians, cars, stop signs), but this has to integrate with other systems for localization, path planning, controls and more. Simulation and testing for accuracy are key to validating that the AI model works properly and that everything plays well with the other systems, before the model is deployed into the real world.

To build this level of accuracy and robustness prior to deployment, engineers must ensure that the model will respond the way it’s supposed to, no matter the situation. Questions you should ask at this stage include: what’s the overall accuracy of the model? Does the model perform as expected in each scenario? Does it cover all edge cases?

Trust is achieved once you’ve successfully simulated and tested all cases you expect the model to see and can verify that the model performs on target. By using tools like Simulink, engineers can verify that the model works as desired for all the anticipated use cases, avoiding redesigns that are costly both in money and time.
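A minimal sketch of such an accuracy check in Matlab, again using the hypothetical labeled table from the earlier examples: hold out part of the data, train on the rest and inspect where the model goes wrong.

% Hold out 20 percent of the data for an unbiased test (illustrative).
cv = cvpartition(data.Label, 'HoldOut', 0.2);
mdl = fitctree(data(training(cv), :), 'Label');
pred = predict(mdl, data(test(cv), :));
truth = data.Label(test(cv));
accuracy = mean(pred == truth)       % overall accuracy of the model
confusionmat(truth, pred)            % which classes get confused, incl. edge cases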

Step 4: deployment

Once you’re ready to deploy, the target hardware comes into view, meaning you need to prepare the model in the final language in which it will be implemented. This step typically requires design engineers to share an implementation-ready model that can be fitted into the designated hardware environment. From desktop to cloud to FPGAs, tools like Matlab can generate the final code for all these scenarios, giving engineers the leeway to deploy their model across a variety of environments without having to rewrite the original code.

Take the example of deploying a model directly to a GPU. Automatic code generation eliminates coding errors that could be introduced through manual translation and provides highly optimized Cuda code that will run efficiently on the GPU.
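A minimal sketch of what that looks like with GPU Coder, assuming a user-written entry-point function myPredict that wraps the trained model and takes a single-precision image:

% Generate Cuda code for the hypothetical entry point myPredict.
cfg = coder.gpuConfig('mex');   % build a Cuda MEX target for local verification
codegen -config cfg myPredict -args {ones(224, 224, 3, 'single')}

The same entry point can later be retargeted to standalone source code or a library, without touching the original Matlab implementation.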

Engineers don’t have to become data scientists or even AI experts to apply artificial intelligence. The right tools, functions and apps to integrate AI into their workflow, plus access to experts who can answer questions about AI integration, are crucial resources for setting them – and their AI models – up for success. Ultimately, engineers are at their best when they can focus on what they do best and build on it with the right resources to help them bring AI into the picture.

Edited by Nieke Roos