UEXKULL_ANIMAL

PART 1

[Image: img-01]

Code developed as part of a residency at Eastern Bloc (Montreal).

This is a step-by-step tutorial that will be spread across a few posts. This chapter contains some explanation of the theoretical inspiration as well as the initial installation. An in-depth code explanation post will follow shortly.

The basic idea behind the project is to provide accessible machine reinforcement learning code that is easy to understand and modify, and that allows for interfacing with real-world sensors and motors.

The code uses the Python PyBrain library; for hardware, the project will use a Raspberry Pi. Where possible I will aim to link relevant sources, further reading, or explanations*.

The inspiration

This section is purely for people interested in disjointed theory quotations; for project-specific information, scroll down to The basic setup.

“Beside the selection of stimuli which the receptors let through, and the arrangement of muscles which enables the effectors to function in certain ways, the most decisive factor for the course of any action is the number and arrangement of receptor cells which, with the aid of their receptor signs, furnish the objects of the Umwelt with receptor cues, and the number and arrangement of effector cells which, by means of their effector signs, supply the same objects with effector cues. The object participates in the action only to the extent that it must possess certain qualities that can serve as perceptual cue-bearers on the one hand and as functional cue-bearers on the other; and these must be linked by a connecting counterstructure. The relations between subject and object are best shown by the diagram of the functional cycle (Fig. 3). This illustrates how the subject and the object are dovetailed into one another, to constitute a systematic whole. If we further consider that a subject is related to the same or to different objects by several functional cycles, we shall gain insight into the first principle of Umwelt theory: all animals, from the simplest to the most complex, are fitted into their unique worlds with equal completeness. A simple world corresponds to a simple animal, a well-articulated world to a complex one.”
Jakob von Uexküll, A Stroll Through the Worlds of Animals and Men: A Picture Book of Invisible Worlds (an old translation of the whole essay can be found here), page 6

In this project we will, of course, look at building simple "animals" engaging in their own unique little worlds. Uexküll claims the tick has three distinct stimulus perceptions and a corresponding number of effector actions. In the first iteration of this project, we'll aim to construct a single sense/effect relation.

[Image: Uexküll's functional cycle]

Fig. 3.

Umwelt theory resembles a typically framed reinforcement learning scenario. It may be that Uexküll's theories filtered down through the fields of biosemiotics and philosophy into programming concepts; equally possibly, it is a very vague resemblance. Either way, let's get to a reinforcement learning explanation at the level of one art major talking to another, or thereabouts (*).

[Image: reinforcement learning diagram]

Reinforcement Learning

Reinforcement learning is, alongside supervised and unsupervised learning, a common machine learning strategy. It is based on theoretical models of dopamine-based learning in mammalian brains (concerning especially the basal ganglia). For some ideas on how it works in us humans, there is an interesting blog about reinforcement learning and gaming to check out – here.

In machine reinforcement learning, an agent (i.e. a robot or algorithm) undertakes actions within a defined environment. The results of those actions are evaluated by an external task/interpreter, and the agent is either rewarded or not rewarded for its action. The reward and the new state of the environment are fed back to the agent, iteration after iteration, until a firm knowledge base of how to maximise rewards is established. The rewards, although only numerical, are just as effective in shaping a robot's behaviour as offering cookies is to humans. But each to their own.
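To make this cycle concrete, here is a minimal sketch of how it is typically wired together in PyBrain, using a toy one-sensor, two-action world. The class names (ToyEnvironment, ToyTask) and the reward rule are illustrative only; they are not the project's actual files, which come later in this post.

from pybrain.rl.environments.environment import Environment
from pybrain.rl.environments.task import Task
from pybrain.rl.learners.valuebased import ActionValueTable
from pybrain.rl.agents import LearningAgent
from pybrain.rl.learners import Q
from pybrain.rl.experiments import Experiment

class ToyEnvironment(Environment):
    # The world: one binary sensor (0 = dark, 1 = light), two actions.
    def __init__(self):
        self.state = 0

    def getSensors(self):
        return [self.state]               # what the agent perceives

    def performAction(self, action):
        self.state = int(action[0]) % 2   # action 1 turns the light on, action 0 off

class ToyTask(Task):
    # The interpreter: watches the environment and hands out rewards.
    def getReward(self):
        return 1.0 if self.env.state == 1 else 0.0   # reward 'light'

environment = ToyEnvironment()
task = ToyTask(environment)
controller = ActionValueTable(2, 2)      # 2 states x 2 actions
controller.initialize(0.)
agent = LearningAgent(controller, Q())   # tabular Q-learning
experiment = Experiment(task, agent)

for _ in range(100):
    experiment.doInteractions(10)        # sense -> act -> receive reward, ten steps
    agent.learn()                        # update the Q-table from the episode
    agent.reset()

Iteration by iteration, the Q-table in the controller comes to favour whichever action earns the reward; swap the toy sensor and reward rule for a photoresistor reading and you have the skeleton of this project.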

You can read a more in-depth explanation on the Simon Technical Blog, whose math and code explanations were the initial basis for my first reinforcement learning code in PyBrain and for this project's code.

There are many great further-reading lists on GitHub, including this comprehensive one.

The basic setup

Hardware:

The basic setup requires a Raspberry Pi 3 Model B+ running Raspbian.
Most of the dependency installation is done with pip and git through the command line.

Programming

The latest code can be downloaded from:
https://github.com/noi01/uexkull_animal.git

Install libraries

Python – PyBrain (http://pybrain.org/)

Please look at the PyBrain dependency list on GitHub (here) for installation instructions. As this is a Raspberry Pi based project, the installation of dependencies may be long and painful. But it did work on my Pi, and here are some tried and tested notes:

1. The project uses Python 3.

2. I have heard that not all NumPy versions work with PyBrain, so here are the dependency versions I used:
– matplotlib==2.2.2
– numpy==1.15.1
– PyBrain==0.3.3
– scipy==0.18.1

To install everything through the terminal, you can use the following commands:

pip3 install numpy scipy https://github.com/pybrain/pybrain/archive/0.3.3.zip

sudo apt-get install libatlas-base-dev

sudo apt-get install python3-matplotlib

3. We can now clone the GitHub repository into a folder designated for the code:

git clone https://github.com/noi01/uexkull_animal.git

4. When running the code, inevitable errors will surface. These need to be patched by modifying the PyBrain library files.

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

This comment provides a way to fix the issue:
https://github.com/pybrain/pybrain/issues/211#issuecomment-280288045

In the file pybrain/rl/learners/valuebased/interface.py, modify line 53 to:
values = self.params.reshape(self.numRows, self.numColumns)[int(state), :].flatten() #state appears to be float
In the file pybrain/datasets/sequential.py, modify line 46 to:
return self.getField(field)[int(seq[index]) : ] #appears to be float

We can now test whether everything was installed correctly.
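A quick sanity check is to import the libraries and build a small value table; if the following runs without errors, the patched stack is usable. (The file name quick_check.py is my suggestion, not part of the repository.)

# quick_check.py – minimal install sanity check (hypothetical file name)
import numpy
import scipy
from pybrain.rl.learners.valuebased import ActionValueTable

table = ActionValueTable(2, 2)   # 2 states x 2 actions
table.initialize(0.)
print("numpy", numpy.__version__, "| scipy", scipy.__version__, "| PyBrain OK")

Run it with python3 quick_check.py.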

The Raspberry Pi sensor setup

To run the code it's not required to have a robot, even though the code is written (in this iteration) for a robot using either a servo or a DC motor. But to check the installation, we will at least run a simple photoresistor input (on pin 7 in the code) into the Raspberry Pi.

One thing to keep in mind is that the Raspberry Pi does not accept analog input. To get around this, we will construct a simple circuit that allows an analog sensor – the photoresistor – to provide digital data to a Raspberry Pi pin. To do that, we add a capacitor and measure how quickly it charges and discharges within an increment of time.

For this you will need:
– a tiny breadboard (not needed, but it makes things easier)
– a 1 µF capacitor
– a photoresistor

[Image: photosensor wired to the Raspberry Pi]

Overview

We are following https://learn.adafruit.com/basic-resistor-sensor-reading-on-raspberry-pi/overview. Connect the 3V3 power (pin 1) to the photoresistor, then between the photoresistor and the capacitor add a wire that feeds into pin 7 (our sensing input). Connect the other leg of the capacitor to ground (pin 6). A sketch of the software read follows the circuit diagram below.

[Image: photoresistor circuit diagram]

Photoresistor Circuit
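In software, the reading is done by RC timing: drive the pin low to drain the capacitor, switch the pin to input, and count how long the charging capacitor takes to pull the pin high. The brighter the light, the lower the photoresistor's resistance and the faster the charge, so smaller counts mean more light. Below is a minimal stand-alone sketch of that technique using the RPi.GPIO library with physical (board) pin numbering to match the wiring above; it follows the Adafruit approach and is not the project's actual sensor code.

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)   # physical pin numbers, so pin 7 as wired above
SENSOR_PIN = 7

def rc_time(pin):
    # Return a count proportional to the capacitor's charge time.
    count = 0
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)   # discharge the capacitor
    time.sleep(0.1)
    GPIO.setup(pin, GPIO.IN)     # let it charge through the photoresistor
    while GPIO.input(pin) == GPIO.LOW:
        count += 1               # count loops until the pin reads high
    return count

try:
    while True:
        print(rc_time(SENSOR_PIN))
        time.sleep(0.5)
finally:
    GPIO.cleanup()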

The code setup

So, let's clone https://github.com/noi01/uexkull_animal.git to your Raspberry Pi. I set up a folder called uexkull_animal on the Raspberry Pi desktop.

The code consists of three separate files: task_01.py, environment_01.py and Agent.py. To start the code, run in a terminal or similar:

python3 Agent.py

With the light sensor connected, this will produce feedback in the terminal similar to:

Sensor read
[421.0]
Reward
1
Action performed: [0.]
I don't walk

So this is the first step; in the next post we'll look into possible integration with a robot.

[Image: img-02]

*I am not a programmer and this is not programming advice. Please refer to the links for, hopefully, real programming advice.
