Philosopher Thomas Nagel famously asked "What is it like to be a bat?" in his 1974 paper of the same name. Nagel argued that we can try to imagine what being a bat would be like, but that without inhabiting its body and experiencing the world through echolocation as it does, we can never really know what it is like to be a bat.
Baltic German biologist Jakob von Uexküll developed the term umwelt to capture how the world is experienced by a particular organism; the German word is often glossed as "self-centered world". For a dog, the world is dominated by smell, whereas for most humans the world is experienced primarily through vision.
Here we will explore a robot's umwelt and try to imagine what it is like to be a robot. We will use a robot equipped with both range sensors and a camera.
%pip install aitk --quiet
from aitk.robots import World, Scribbler, RangeSensor, Camera
Let's create a world with several uniquely colored rooms along a long corridor for our robot to explore.
world = World(width=300, height=200)
# Each wall is a colored rectangle given by two opposite corners (x1, y1, x2, y2)
world.add_wall("orange", 50, 75, 150, 85)
world.add_wall("yellow", 150, 75, 250, 85)
world.add_wall("orange", 145, 0, 150, 75)
world.add_wall("yellow", 150, 0, 155, 75)
world.add_wall("red", 0, 125, 165, 135)
world.add_wall("red", 220, 125, 225, 200)
world.add_wall("blue", 225, 125, 230, 200)
world.add_wall("pink", 155, 0, 185, 30)
robot = Scribbler(x=30, y=30, a=-100, max_trace_length=600)
robot.add_device(RangeSensor(position=(6, -6), width=57.3, max=20, a=0, name="left-ir"))
robot.add_device(RangeSensor(position=(6, 6), width=57.3, max=20, a=0, name="right-ir"))
robot.add_device(Camera(width=128, height=64))
world.add_robot(robot)
Random seed set to: 5131871
Let's watch the world from a bird's-eye view as the robot moves around. This gives us a distal perspective on the world. We are not experiencing the world as the robot does, but instead have a top-down global view of what is happening.
Notice that there is a pink box located in the yellow room. Later we will be trying to find this box.
world.watch()
At the same time, let's watch how the robot is experiencing the world through its camera. This gives us a proximal perspective on the world, from the agent's point of view.
robot["camera"].watch(width="500px")
Below is a simple controller that tries to keep moving forward while avoiding any obstacles that it encounters. It is only using the robot's range sensors to make navigation decisions.
robot.state["timer"] = 0

def avoid(robot):
    left = robot[0].get_distance()
    right = robot[1].get_distance()
    if left == robot[0].get_max() and right == robot[1].get_max() and \
            robot.state["timer"] == 0:
        # no obstacles in range and not mid-turn: go straight ahead
        robot.move(0.5, 0)
    elif 0 < robot.state["timer"] < 5:
        # committed to a turn: keep turning for a few more steps
        robot.state["timer"] += 1
    elif left < robot[0].get_max():
        # obstacle detected on the left: slow down and rotate away from it
        robot.move(0.1, -0.3)
        robot.state["timer"] = 1
    elif right < robot[1].get_max():
        # obstacle detected on the right: slow down and rotate away from it
        robot.move(0.1, 0.3)
        robot.state["timer"] = 1
    else:
        # turn complete: reset the timer
        robot.state["timer"] = 0
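To trace the controller's decision branches without running the full simulator, here is a minimal sketch that exercises the same logic against stub sensor objects. The stub classes and the specific readings below are illustrative inventions for this sketch; they are not part of aitk.robots.

```python
class StubSensor:
    """Stand-in for a RangeSensor: returns a fixed distance reading."""
    def __init__(self, distance, max_range=20):
        self._distance = distance
        self._max = max_range
    def get_distance(self):
        return self._distance
    def get_max(self):
        return self._max

class StubRobot:
    """Stand-in for the robot: two IR sensors, a state dict, and a move log."""
    def __init__(self, left, right):
        self.sensors = [StubSensor(left), StubSensor(right)]
        self.state = {"timer": 0}
        self.last_move = None
    def __getitem__(self, index):
        return self.sensors[index]
    def move(self, translate, rotate):
        self.last_move = (translate, rotate)

def avoid(robot):
    left = robot[0].get_distance()
    right = robot[1].get_distance()
    if left == robot[0].get_max() and right == robot[1].get_max() and \
            robot.state["timer"] == 0:
        robot.move(0.5, 0)          # path clear: full speed ahead
    elif 0 < robot.state["timer"] < 5:
        robot.state["timer"] += 1   # mid-turn: keep turning a few more steps
    elif left < robot[0].get_max():
        robot.move(0.1, -0.3)       # obstacle on the left: start a turn
        robot.state["timer"] = 1
    elif right < robot[1].get_max():
        robot.move(0.1, 0.3)        # obstacle on the right: start a turn
        robot.state["timer"] = 1
    else:
        robot.state["timer"] = 0    # turn finished: reset the timer

clear = StubRobot(left=20, right=20)
avoid(clear)
print(clear.last_move)          # (0.5, 0): both sensors at max, go straight

blocked_left = StubRobot(left=5, right=20)
avoid(blocked_left)
print(blocked_left.last_move)   # (0.1, -0.3): turning away, timer started
```

Note how the `timer` entry in `robot.state` gives the controller a small amount of memory: once a turn starts, the robot commits to it for several steps rather than re-deciding every step, which helps it avoid oscillating in corners.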
Now let's watch the world from both the global, top-down view and the local, robot-based view at the same time.
world.reset()
world.seconds(10, [avoid])
Using random seed: 5131871
Simulation stopped at: 00:00:10.00; speed 0.97 x real time
Seeing the world through the robot's camera is very different from seeing it from the bird's-eye view. Let's try to really take the robot's perspective.
Hide the top-down view of the world.
Now you will try to control the robot using only the robot's sensors to guide you. Your goal is to traverse the hallway to the yellow room and approach the pink box there.
We will create a dashboard where you can see all of the robot's sensor readings and control the robot's movements by pressing buttons.
First, we define a couple of functions that will advance the world when you press a button.
from ipywidgets import Output

def set_time(world):
    # update the dashboard's clock label (time is defined below)
    time.value = "Time: " + world.get_time()

def move(translate, rotate):
    robot.imove(translate, -rotate)
    # advance the simulation, suppressing its printed output;
    # seconds and realtime are dashboard widgets defined below
    with Output():
        world.seconds(seconds.value, real_time=realtime.value, callback=set_time)
Next, we hook up a Joystick-like control pad to the control function.
from aitk.utils import JoyPad
joypad = JoyPad(scale=[.4, .4], function=move)
Finally, we construct a dashboard to lay out the controls and the view of the world.
from ipywidgets import HBox, VBox, Layout, FloatSlider, Label, Checkbox
seconds = FloatSlider(description="Seconds:", min=0.1, max=5, value=0.5)
realtime = Checkbox(description="Real time", value=True)
layout = Layout(width="760px")
time = Label(value="Time: " + world.get_time())
VBox(children=[
    time,
    HBox(children=[
        joypad.get_widget(),
        robot["camera"].get_widget(),
    ]),
    VBox(children=[
        seconds, realtime,
        robot.get_widget(show_robot=False, attributes=["stalled"]),
        robot["left-ir"].get_widget(title="Left IR", attributes=["reading"]),
        robot["right-ir"].get_widget(title="Right IR", attributes=["reading"]),
    ])
], layout=layout)
Execute the cell below when you are ready to start controlling the robot via the dashboard. Click the different arrow buttons to move the robot in different directions. The "Stalled" sensor is True when the robot is stuck and unable to move in the current direction. If this happens, try reversing the direction of movement. The "IR" sensors detect obstacles on the robot's left and right: the smaller the value, the closer the obstacle.
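As a rough guide while driving, you can think of each IR reading in bands. The helper below is only an illustration: the max range of 20 matches the sensors created above, but the "near" threshold of 10 is an arbitrary choice for this sketch, not something defined by aitk.

```python
def describe_ir(reading, max_range=20, near=10):
    """Translate a raw IR distance reading into a rough verbal category."""
    if reading >= max_range:
        return "clear"           # nothing within sensor range
    elif reading >= near:
        return "obstacle far"    # something detected, but not close yet
    else:
        return "obstacle near"   # time to turn or reverse

print(describe_ir(20))   # clear
print(describe_ir(12))   # obstacle far
print(describe_ir(3))    # obstacle near
```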
If you get stuck at some point, go ahead and look at the top-down view of the world again. Then try to get unstuck and continue using the buttons to reach the goal.
Go back to the top-down world view to see how you did. Did you crash into any walls? Did traversing the hallway take longer than you expected?
At the top of the dashboard you can see the time it took for you to reach the pink box. Start again from the top of this notebook and see if you can reach the pink box faster this time. By practicing, you should get better at seeing the world from the robot's perspective.
Some cognitive scientists believe that a key to understanding cognition is embracing the fact that organisms are embedded in environments. Brains evolved to control bodies, and it is the interplay between the brain, the body, and the environment from which cognition emerges. Hopefully this experience of taking the robot's proximal perspective has given you a taste of what it is like to be a robot.