February 16, 2026

The information layer in the DIKW pyramid

The lowest layer in the DIKW pyramid is the data layer, which is easy to describe: it contains raw sensor data such as distance, temperature, and GPS coordinates, stored in a numerical format. The next layer in the pyramid, the information layer, is harder to describe. A working thesis is that the information layer consists of [tags].

For the example of a warehouse robot, the tag cloud would be: [roomA, roomB, roomC, shelfNorth, shelfSouth, shelf1, shelf2, obstacle, battery, chargingstation, barcode, path, left, right, speed, direction, batteryempty, order]
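As a minimal sketch, this vocabulary could be stored as a plain set in Python. The tag names are taken from the list above; the variable names and the activation set are assumptions for illustration only:

    # Tag vocabulary of the warehouse robot (names taken from the example above).
    TAGS = {
        "roomA", "roomB", "roomC",
        "shelfNorth", "shelfSouth", "shelf1", "shelf2",
        "obstacle", "battery", "chargingstation", "barcode",
        "path", "left", "right", "speed", "direction",
        "batteryempty", "order",
    }

    # Each tag is either activated or not, like tags on a blog post.
    active_tags = {"roomB", "left"}    # hypothetical current annotation
    assert active_tags <= TAGS         # every active tag belongs to the vocabulary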

Of course the tag list is not complete; additional tags are available, but for reasons of simplification this might be a starting point. These tags provide context because selecting one tag leaves the possible alternative tags deactivated. For example, the goal for the robot might be [roomB] but not [roomA, roomC]. The robot might rotate to the [left] but not to the [right]. So the context of a tag is always the set of tags which would have been possible but are not activated at the moment.
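One way to make this notion of context concrete is to group tags into sets of mutually exclusive alternatives. Only the examples [roomA, roomB, roomC] and [left, right] come from the text; the grouping and the helper function below are assumptions:

    # Hypothetical groups of mutually exclusive tags.
    TAG_GROUPS = {
        "goal_room": {"roomA", "roomB", "roomC"},
        "rotation":  {"left", "right"},
    }

    def context_of(tag):
        """Return the alternative tags that would be possible but are not activated."""
        for group in TAG_GROUPS.values():
            if tag in group:
                return group - {tag}
        return set()

    print(context_of("roomB"))  # {'roomA', 'roomC'}
    print(context_of("left"))   # {'right'}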

Together, the tags form a semantic network. In contrast to a full-blown ontology or AI frames, tag-based information is more minimalist. Every tag can either be activated or not, similar to the tags on a blog post that annotate a document.

The interesting situation is that there is an intersection between the low-level sensor data and the mid-level tag cloud. For example:

- GPS sensor -> [roomA]
- GPS sensor -> [direction]
- distance sensor -> [obstacle]
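A minimal sketch of such a mapping, assuming rectangular room boundaries and a fixed obstacle threshold; the coordinates and the threshold are invented for illustration, only the sensor-to-tag pairs above come from the text:

    # Hypothetical mapping from raw sensor values to tags.
    ROOMS = {
        "roomA": ((0.0, 0.0), (10.0, 10.0)),    # (x_min, y_min), (x_max, y_max)
        "roomB": ((10.0, 0.0), (20.0, 10.0)),
        "roomC": ((20.0, 0.0), (30.0, 10.0)),
    }
    OBSTACLE_THRESHOLD = 0.5  # meters, assumed value

    def room_tag(x, y):
        """Translate GPS-like x/y coordinates into a room tag."""
        for tag, ((x0, y0), (x1, y1)) in ROOMS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return tag
        return None

    def obstacle_tag(distance):
        """Translate a distance reading into the obstacle tag."""
        return "obstacle" if distance < OBSTACLE_THRESHOLD else None

    print(room_tag(12.3, 4.5))   # 'roomB'
    print(obstacle_tag(0.3))     # 'obstacle'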

For describing the robot's behavior, both layers (data and information) are important. The robot needs to log the numerical raw sensor data, and it also needs to annotate the current sensory perception with semantic information.
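A possible log entry that keeps both layers side by side; the field names and values are assumptions, not taken from the text:

    # Hypothetical log entry: numerical raw sensor data next to the derived tags.
    log_entry = {
        "timestamp": 1424.7,                              # seconds since start
        "raw": {"x": 12.3, "y": 4.5, "distance": 0.3},    # data layer
        "tags": ["roomB", "obstacle", "left"],            # information layer
    }

The tags in such an entry could be produced by a mapping like the one sketched after the sensor list above.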

What we can say for sure is that tagging information doesn't belong to the lowest data layer. A sensor like a GPS sensor has no built-in tagging mechanism. The sensor doesn't know the position of a certain shelf, nor does it know whether the robot is in roomA or in roomB. What the GPS sensor knows instead are precise x/y coordinates, because that is the kind of data the sensor hardware is able to generate. It's up to a higher instance in the DIKW pyramid to process these data.
