July 12, 2019

How to implement AI systems


A common problem in Artificial Intelligence projects is that it is unclear how to start building a robot. It is a question of using the right tools, and in the case of AI these tools are hard to define. A naive approach is to treat computer hardware as the tool, but after the latest Nvidia GPU has been put into the computer, the AI problem is still not solved. Of course, some kind of CPU is needed before the software can run, but that does not answer the question of how to program the AI itself.
A better idea is to follow a workflow that starts with a teleoperated robot, moves on to a plan recognition system, and ends with a fully autonomous robot. Every automation project starts with a human-controlled system. Somebody who wants to build a new generation of agriculture robots will first need a joystick that serves as a remote control.
This step is important because a system that cannot be controlled with a human in the loop is not useful at all. A remote control makes sure that the hardware of the robot works: the device has a motor and performs a useful task. The only slightly problematic part is the joystick, but replacing the joystick with software can wait.
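The joystick-in-the-loop stage can be sketched in a few lines. The following is a minimal, hypothetical example (the function and class names are assumptions, not from any real robot API): two joystick axes are mixed into left/right wheel speeds for a differential-drive robot.

```python
from dataclasses import dataclass

@dataclass
class MotorCommand:
    left: float   # left wheel speed, -1.0 .. 1.0
    right: float  # right wheel speed, -1.0 .. 1.0

def joystick_to_motors(forward: float, turn: float) -> MotorCommand:
    """Differential-drive mixing: forward/turn joystick axes to wheel speeds."""
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return MotorCommand(left, right)

# Example: full forward stick with a slight right turn
cmd = joystick_to_motors(1.0, 0.2)
```

Nothing here is intelligent; the point is that the hardware already does useful work while the human supplies all decisions.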
In the second step of the overall pipeline, the actions of the human operator are annotated by software. For example, the human has built an RC-controlled car, drives it in a circle, and the computer prints to the screen: "it's a circle". It is not necessary in this step that the AI can drive the shape by itself; it is enough if the AI detects what the human is doing. Such a requirement is easier to realize in software and allows the project to grow slowly.
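A very simple version of such an annotator can be written as a geometric test. This sketch (a heuristic of my own, not the article's method) labels a recorded 2D trajectory as a circle when all points lie at roughly the same distance from their centroid:

```python
import math

def looks_like_circle(points, tolerance=0.1):
    """Heuristic plan recognition: do the 2D points lie on a circle?
    Checks that distances to the centroid have a low relative spread."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    return (max(radii) - min(radii)) / mean_r < tolerance

# Simulated operator trajectories
circle = [(5 * math.cos(t / 10), 5 * math.sin(t / 10)) for t in range(63)]
line = [(t, 2 * t) for t in range(20)]
print(looks_like_circle(circle))  # True
print(looks_like_circle(line))    # False
```

Recognizing the shape is much cheaper than generating it: the program only watches and labels, which is exactly what makes step two tractable.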
In step three, the transition from a plan recognition system towards a fully autonomous robot is made. The existing domain model, which is able to recognize human actions, is used in reverse mode (a solver is the perfect choice), and then the robot works on its own. This third and last step is an advanced AI project in itself; it cannot be realized at the beginning, only at the end. For most robotics projects this last step is never reached, because the two steps before it are too complicated and are not completed. It makes sense to postpone the goal of a fully autonomous system.
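The idea of running a model "in reverse mode" can be illustrated with a toy example. Here the same forward model that could label observed moves is handed to a solver, which searches for an action sequence that reaches a goal. The grid world and action set are illustrative assumptions, not part of the article:

```python
from collections import deque

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def forward_model(state, action):
    """Predict the next state; in step two this model only labels moves."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def solve(start, goal):
    """Breadth-first search over the forward model: generate a plan."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for action in ACTIONS:
            nxt = forward_model(state, action)
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [action]))
    return None

plan = solve((0, 0), (2, 1))
```

The point of the sketch: once the domain model exists, autonomy is "only" a search problem on top of it, which is why the first two steps carry most of the project risk.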
1. teleoperation
2. plan recognition
3. fully autonomous robot
Most real robot projects are located between step 1 and step 2. That means the teleoperation works reasonably well, and the engineers are trying to build some kind of plan annotation into the system. A realistic self-description of the project's status helps to identify which subproblems still have to be solved. If the problem of teleoperation has not been solved yet, it is time to focus the energy in that direction.
Let us describe a robot system which is in the early first step. In an agriculture robot project, the robot should harvest apples from a tree. The first step is called teleoperation: the robot consists of a robot arm plus a robot hand. The human operator wears a dataglove which allows him to pick the apples from the tree. Without the teleoperation the system cannot do anything. The reason why such introduction projects are important is that they test the hardware. Creating a robot arm with a gripper at the end and connecting the device to a dataglove for realtime control is, from a technical perspective, not easy. It is not directly an AI project but an important pre-step before starting one.
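The dataglove side of such a setup is, at its core, a calibration and mapping problem. A minimal sketch, assuming a 10-bit ADC flex sensor per finger (the sensor range and joint limits below are made-up calibration values, not from any real glove):

```python
# Calibrated raw readings for one flex sensor (hypothetical values)
SENSOR_MIN, SENSOR_MAX = 80, 940
JOINT_MIN_DEG, JOINT_MAX_DEG = 0.0, 90.0

def flex_to_angle(raw: int) -> float:
    """Linearly map one flex-sensor reading to a gripper joint angle,
    clamping values outside the calibrated range."""
    t = (raw - SENSOR_MIN) / (SENSOR_MAX - SENSOR_MIN)
    t = max(0.0, min(1.0, t))
    return JOINT_MIN_DEG + t * (JOINT_MAX_DEG - JOINT_MIN_DEG)

# Open hand, half closed, fully closed, out-of-range reading
angles = [flex_to_angle(r) for r in (80, 510, 940, 1023)]
```

In a real system this mapping runs in a realtime loop and feeds the robot hand's servo controllers; the hard engineering is in latency and calibration, not in the arithmetic.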
To reduce the difficulty further, the pipeline can be realized in a virtual environment only, which means not in reality but in a simulator with simpler physics and simpler image recognition problems.
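Such a simulator does not need a physics engine to be useful. A point-robot sketch like the following (my own minimal example, not a reference to any particular simulator) is already enough to test the whole teleoperation-to-annotation pipeline in software:

```python
import math

class SimRobot:
    """Minimal virtual environment: a 2D point robot with simple
    kinematics. Position is updated from speed and heading only."""

    def __init__(self):
        self.x = self.y = 0.0
        self.heading = 0.0  # radians

    def step(self, speed: float, turn_rate: float, dt: float = 0.1):
        """Advance the simulation by one time step."""
        self.heading += turn_rate * dt
        self.x += speed * math.cos(self.heading) * dt
        self.y += speed * math.sin(self.heading) * dt

robot = SimRobot()
for _ in range(10):  # drive straight for one simulated second
    robot.step(speed=1.0, turn_rate=0.0)
```

The joystick commands from step one and the trajectory annotator from step two can both be plugged into such a loop before any hardware exists.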