Artificial intelligence promises mountains of gold. As smart algorithms spread like wildfire through the high-tech industry, engineers are running into all kinds of obstacles. AI specialist Albert van Breemen explains what tech companies have to take into account if they want to master the technology.
Because it can be applied so broadly, artificial intelligence enables radical changes in many industries. AI has developed rapidly over the last eight years – even faster than experts predicted. This is mainly due to deep learning, an AI technique that requires a huge amount of data to work properly. The first wave of commercial AI applications emerged at companies that already had lots of data, such as Amazon, Facebook, Google and Uber. Currently, we’re seeing a second wave in which AI algorithms are trained on sensor data. AI is entering the engineering world, and traditional machine builders are faced with the question of whether and how to invest in it.
Designing advanced systems is getting ever more challenging. With each generation, their complexity grows, and there’s an increasing demand for performance, intelligence and interoperability. Engineers regularly run into the limits of their current toolbox, in which modeling and analysis from so-called “first principles” are the standard. Adding AI to that toolbox gives them the means to work with large datasets (big data) and smart algorithms – two ingredients for finding new solutions to these challenges.
In practice, however, companies are finding it difficult to embrace artificial intelligence. A good case study could convince management to invest more in AI, but the people who have potential cases within a company often have no experience with the technology and fail to recognize the opportunities. This makes it difficult to demonstrate the benefits and secure an innovation budget. In addition to these business barriers, we also see many operational obstacles in practice: the complexity of the AI software and hardware, the great diversity of algorithms (when to use which technique), the computing power needed to train models and the scarcity of talent.
Barrier 1: the AI technology stack
The technology stack for artificial intelligence is a complex set of hardware and software components. Model training requires an enormous amount of computing power because it relies on computationally intensive gradient descent and backpropagation techniques. This calls for special hardware, such as GPUs, tensor processing units (i.e. AI-specific computing cores), FPGAs or ASICs. Applying a trained model on an embedded system requires different hardware again, designed with energy consumption and system resources in mind.
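To illustrate why training is so compute hungry: a single gradient descent step already involves a forward pass over a batch of data and a backward pass (backpropagation) to compute the weight updates, and training repeats this millions of times. A minimal sketch in plain Python/NumPy – the model, data and settings are made up purely for illustration:

```python
import numpy as np

# Toy linear model trained with gradient descent and backpropagation.
# All sizes and data are illustrative; real networks repeat this step
# millions of times over far larger models and datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 100))        # one batch of 256 inputs with 100 features
y = rng.normal(size=(256, 10))         # target outputs
W = 0.01 * rng.normal(size=(100, 10))  # model weights
lr = 0.01                              # learning rate

for step in range(1000):
    pred = X @ W                       # forward pass
    err = pred - y
    loss = (err ** 2).mean()           # mean squared error
    grad = X.T @ err / len(X)          # backward pass: gradient of the loss w.r.t. W
    W -= lr * grad                     # gradient descent update
```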
The AI software stack consists of a series of modules, most of which are developed independently of each other. Updating one of them often breaks the smooth interplay with the rest. In addition, server-side software libraries such as TensorFlow aren’t always compatible with the libraries used on embedded systems.
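A concrete example of that gap: a model trained with server-side TensorFlow usually has to be converted to a leaner format, such as TensorFlow Lite, before it can run on an embedded target, and whether that conversion succeeds depends on the library versions and the operators the model uses. A rough sketch, with a made-up toy model standing in for a real trained one:

```python
import tensorflow as tf

# Tiny stand-in model; in practice this would be a network trained on real data.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the server-side model to TensorFlow Lite for an embedded runtime.
# Not every TensorFlow operator is supported by the Lite runtime, which is
# one source of the compatibility issues mentioned above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```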
Barrier 2: algorithm diversity
Deep learning algorithms can be subdivided into a number of categories, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs) and reinforcement learning. Within each category, there are many more specific algorithms. Which one to choose isn’t determined by the application alone – object classification versus object detection with a CNN, for example. It also depends on the software stack and hardware used, the required model size and accuracy, and the type of input data. Optimizing a model is a time-consuming process.
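To make that diversity a bit more tangible: the categories differ in the kind of data they’re built for, and within a category every architectural choice affects model size, accuracy and the hardware the model will eventually fit on. A small sketch contrasting a convolutional network (spatial data such as camera images) with a recurrent network (sequential data such as sensor logs); all layer sizes here are arbitrary:

```python
import tensorflow as tf

# CNN: suited to spatial data such as camera images, e.g. for object classification.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),        # height x width x color channels
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 object classes
])

# RNN (here an LSTM): suited to sequential data such as a sensor time series.
rnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 8)),           # 100 time steps, 8 sensor channels
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                        # e.g. one predicted value per sequence
])
```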
Barrier 3: computing power
The demand for computing power to train modern AI algorithms doubles every three to four months. Training an algorithm like DeepMind’s AlphaGo Zero requires more than a million teraflops of computing power. To put that into perspective: a modern GPU delivers about fifteen teraflops. According to a conservative estimate, training an AI algorithm for the StarCraft game will soon cost several million euros. Most companies lack the resources for this. Fortunately, many other useful AI algorithms can be trained with less computing power, but even then the standard IT infrastructure of most companies often falls short.
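To give a feel for that doubling rate: a demand that doubles every three to four months grows by roughly a factor of ten per year. A quick back-of-the-envelope calculation (the 3.5-month doubling time is simply the midpoint of the range above):

```python
# Back-of-the-envelope: growth implied by a 3-4 month doubling time.
doubling_months = 3.5  # assumed midpoint of the 3-4 month range
for years in range(1, 4):
    growth = 2 ** (12 * years / doubling_months)
    print(f"after {years} year(s): ~{growth:,.0f}x the compute demand")
# Prints roughly 11x after one year, 116x after two and 1,248x after three.
```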
Barrier 4: talent
Many machine builders have only recently been confronted with artificial intelligence in their company. They have some catching up to do, because they’re often unaware of the value that data and AI can bring to their products. In addition to the challenge of bridging the gap between engineers and AI experts, they’re in fierce competition with larger, well-known AI companies to attract AI specialists. Universities are working hard to train new talent. Eindhoven University of Technology’s Eindhoven Artificial Intelligence Systems Institute (EAISI), for example, is starting a number of new master’s programs this year that close the gap between engineering and AI.
Companies at bat
Current developments in artificial intelligence will have a major impact on products and industries. You occasionally hear people say that we’re too late and that everything is happening in the US and China. But that’s not true. The success of an AI application in engineering strongly depends on domain knowledge, and that’s exactly where Dutch companies excel, for example in mechatronics and human-machine interaction.
Knowledge institutions and organizations such as the High Tech Systems Center and EAISI are working hard to remove these barriers. Collaboration with companies is key to achieving this and to making a new voice heard: AI and engineering take place in Brainport.