Yesterday I came across a video of a small ‘industrial’ robot trying to serve a hotdog to a customer in a sort of vending machine. Because the bun slides sideways, the robot fails to push the sausage in. It continues anyway and places the bare sausage at a second station, where it tries to pull a small paper bag over it. The bag is too short and falls off, which the robot also ignores. As a result, it ends up serving a bare sausage instead of a bagged hotdog.
The poster sarcastically commented that he couldn’t wait for AI to take over the world. Of course I understand the sarcasm and the joke, but reading the discussion underneath the post, I conclude that many people still have a lot to learn. What bugs me about the whole story is that the video seems to plant several misconceptions in people’s minds. Two of them I want to point out explicitly.
First of all, we’re looking at an automated (control) system here, not an artificial-intelligence solution. Robots have been running production lines for decades, without what we now call AI. This robot can be controlled using a simple closed loop of sensors and actuators, without any need for complex data analysis of the kind done in neural networks and other learning systems. A simple camera or sensor suffices to tell the control software whether or not a bun is present in the correct position, or whether the bun is covered by a bag in step two of the process. In short: what’s missing isn’t AI, but the normal closed loop of a control system.
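To make that closed loop concrete, here is a minimal sketch of what such a check-before-continue control flow could look like. The sensor callables are hypothetical stand-ins for real hardware I/O (a camera or presence sensor); the point is only that each step is verified before the next one starts, instead of running open loop.

```python
def serve_hotdog(bun_in_position, bag_fitted):
    """Run the two assembly steps, verifying each with a sensor check.

    `bun_in_position` and `bag_fitted` are hypothetical callables standing
    in for camera/sensor reads; each returns True when its step succeeded.
    """
    steps = [
        ("insert sausage", bun_in_position),
        ("slide on bag", bag_fitted),
    ]
    for name, check in steps:
        if not check():
            # Closed loop: stop and flag the fault instead of blindly continuing.
            return f"abort: {name} failed sensor check"
    return "served"
```

With such a loop in place, the robot in the video would have stopped after the bun slid away, e.g. `serve_hotdog(lambda: False, lambda: True)` returns `"abort: insert sausage failed sensor check"` rather than proceeding to bag a bare sausage.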
Second, and more importantly, this isn’t an AI or software problem at all. Yes, the robot is likely controlled by software, but the failures here are first of all hardware flaws. If a sausage is to be pushed into a bun in a system like this, the bun should be properly supported by guides so that it can’t fall sideways when the sausage hits it. The same goes for the ‘bagging station’: the slide-on mechanism looks nice, but fails because the bags are too short relative to the support under the bun. A ‘drop the bun into the bag’ solution would be far more robust: it would drop the bun into a bag held underneath, even if someone loaded the system with bags that are too small.
Blaming the software indicates a lack of systems thinking: the system consists of hardware (robot, supports), software (control), and consumables (the food and the bags). Here it is mainly the hardware design that is wrong. The mindset of many machine-building companies is that “the software will fill the gaps.” That is impossible in this case, so the whole system needs to be redesigned.
I hope the video shows an intentionally failing system; no company in its right mind would release a product like this. The comments certainly show that, despite more than fifty years of designing systems that combine hardware and software, many still find it hard to see the whole system rather than its (discipline-defined) parts. The post also makes painfully clear how widespread the misconception of what AI is has become – a robot controlled by software isn’t AI, and assuming it is in this case sells short both control-systems engineering and AI.