PART 1

Code developed as part of a residency at Eastern Bloc (Montreal).

This is a step-by-step tutorial that will be spread across a few posts. This chapter contains some explanation of the theoretical inspiration as well as the initial installation. An in-depth code explanation post will follow shortly.

The basic idea behind the project is to provide accessible, easy-to-understand and easy-to-modify machine reinforcement learning code that allows for interfacing with real-world sensors or motors.

The code uses the Python PyBrain library; for hardware the project will use a Raspberry Pi. Where possible I will aim to link relevant sources, further reading or explanations*.

The inspiration

This section is purely for people interested in disjointed theory quotations; for project-specific information scroll down to The basic setup.

“Beside the selection of stimuli which the receptors let through, and the arrangement of muscles which enables the effectors to function in certain ways, the most decisive factor for the course of any action is the number and arrangement of receptor cells which, with the aid of their receptor signs, furnish the objects of the Umwelt with receptor cues, and the number and arrangement of effector cells which, by means of their effector signs, supply the same objects with effector cues. The object participates in the action only to the extent that it must possess certain qualities that can serve as perceptual cue-bearers on the one hand and as functional cue-bearers on the other; and these must be linked by a connecting counterstructure. The relations between subject and object are best shown by the diagram of the functional cycle (Fig. 3). This illustrates how the subject and the object are dovetailed into one another, to constitute a systematic whole. If we further consider that a subject is related to the same or to different objects by several functional cycles, we shall gain insight into the first principle of Umwelt theory: all animals, from the simplest to the most complex, are fitted into their unique worlds with equal completeness. A simple world corresponds to a simple animal, a well-articulated world to a complex one.”
Jakob von Uexküll / A stroll through the worlds of animals and men: A picture book of invisible worlds (an old translation of the whole essay can be found here), page 6

 

In this project we will of course look at building simple “animals” engaging in their own unique little simple worlds. Uexküll describes the tick as having 3 distinct stimulus perceptions and a corresponding number of effector actions. In the first iteration of this project we’ll aim to construct one effect/sense relation.


Fig.3.

Umwelt theory resembles a typically framed reinforcement learning scenario. It may be that Uexküll’s theories filtered down through the fields of biosemiotics and philosophy into concepts in programming; equally possibly it is a very vague resemblance. Either way, let’s get to a reinforcement learning explanation at the level of one art major talking to another, or thereof (*).


Reinforcement Learning

Reinforcement learning is, alongside supervised and unsupervised learning, a common artificial neural network learning strategy. It is based on theoretical models of dopamine-based learning in mammalian brains (concerning especially the basal ganglia). For some ideas on how it works in us humans, there is an interesting blog post about reinforcement learning and gaming to check out – here.

In machine reinforcement learning an agent (i.e. a robot or algorithm) undertakes actions within a defined environment. The results of those actions are evaluated by an external task/interpreter, and the agent is either rewarded or not rewarded for its action. The reward and the new state of the environment are fed back to the agent, iteration after iteration, until a firm knowledge base of how to maximise rewards is established. The rewards, although only numerical, are just as effective in shaping the robot’s behaviour as offering cookies is with humans. But each to their own.
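As an illustration only (this is a toy sketch, not the project’s PyBrain code – the names and the reward rule are my assumptions), the whole cycle can be compressed into a few lines of Python: an agent with one state and two actions, and an environment that pays a reward for one of them.

```python
import random

def train(episodes=500, epsilon=0.1, alpha=0.5, seed=0):
    """Toy reinforcement learning loop with one state and two actions.
    The 'environment' pays a reward of 1 for action 1 and 0 for
    action 0; the agent learns an estimated value for each action."""
    random.seed(seed)
    q = [0.0, 0.0]  # the agent's value estimate per action
    for _ in range(episodes):
        # agent: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: q[a])
        # task/interpreter: evaluate the action and hand back a reward
        reward = 1 if action == 1 else 0
        # feedback: nudge the chosen action's value toward the reward
        q[action] += alpha * (reward - q[action])
    return q

print(train())  # action 1 ends up valued near 1.0, action 0 near 0.0
```

Iteration after iteration, the rewarded action’s estimated value climbs towards 1, which is exactly the “firm knowledge base” described above, just in miniature.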

You can read a more in-depth explanation on Simon’s Technical Blog, which was also the basis for my first reinforcement learning code in PyBrain; its math and code explanations served as the initial basis for the project code.

There are many great further reading lists on Github, including this comprehensive list.

The basic setup

Hardware:

The basic setup requires a Raspberry Pi 3 Model B+ with Raspbian.
Most of the dependency installation is done with pip and git through the command line.

Programming

The latest code can be downloaded from:
https://github.com/noi01/uexkull_animal.git

Install libraries

Python - Pybrain (http://pybrain.org/)

Please look at the PyBrain dependency list on GitHub (here) for installation instructions. As this is a Raspberry Pi based project, the installation of dependencies may be long and painful. But it did work on my Pi, and here are some tried and tested notes:

1. The project uses Python3.

2. I have heard that not all NumPy versions work with PyBrain; these dependency versions worked for me:
– matplotlib==2.2.2
– numpy==1.15.1
– PyBrain==0.3.3
– scipy==0.18.1

To install everything through the terminal, you can use the following commands:

pip3 install numpy scipy https://github.com/pybrain/pybrain/archive/0.3.3.zip

sudo apt-get install libatlas-base-dev

sudo apt-get install python3-matplotlib

3. We can now proceed to clone github repository to folder designated for the code:

git clone https://github.com/noi01/uexkull_animal.git

4. When running the code, some errors will inevitably surface. These need to be patched by modifying PyBrain library files. For example:

IndexError: only integers, slices (`:`), ellipsis (`…`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

This comment provides a way to fix the issue:
https://github.com/pybrain/pybrain/issues/211#issuecomment-280288045

In the file interface.py modify line 53 to:
values = self.params.reshape(self.numRows, self.numColumns)[int(state), :].flatten() #state appears to be float
In the file /pybrain/datasets/sequential.py modify line 46 to:
return self.getField(field)[int(seq[index]) : ] #appears to be float

We can now proceed to testing whether everything was installed correctly.

The Raspberry Pi sensor setup

To run the code it’s not required to have a robot, even though the code is written (in this iteration) for a robot using either a servo or a DC motor. But to check the installation, we will at least run a simple photoresistor input (on pin 7 in the code) into the Raspberry Pi.

One thing to keep in mind is that the Raspberry Pi does not accept analog input. We will therefore need to construct an easy circuit that allows an analog sensor – a photoresistor – to provide digital data to a Raspberry Pi pin. To do that we will add a capacitor and measure how quickly it charges and discharges within an increment of time.

For this you will need:
– Tiny breadboard (not needed, but makes things easier)
– 1uF capacitor
– Photoresistor


Overview

We are following https://learn.adafruit.com/basic-resistor-sensor-reading-on-raspberry-pi/overview . Connect the 3v3 power (pin 1) to the photoresistor, then between the photoresistor and the capacitor add a cable that feeds into pin 7 (our sensing input). Connect the other leg of the capacitor to ground (pin 6).


Photoresistor Circuit
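Following the Adafruit approach, the “reading” is simply a count of how long the capacitor takes to charge past the pin’s HIGH threshold. Here is a sketch of that timing logic (my own simplified illustration, not the project’s exact code): the pin reader is passed in as a callable, so the same counting loop can run against RPi.GPIO on the Pi or against a simulated pin for testing.

```python
def rc_time(read_pin, timeout=100000):
    """Count polling iterations until the pin first reads HIGH,
    i.e. until the capacitor has charged past the input threshold.
    Brighter light -> lower photoresistor resistance -> faster
    charging -> lower count."""
    count = 0
    while read_pin() == 0 and count < timeout:
        count += 1
    return count

# Simulated pin for illustration: reads LOW 420 times, then HIGH.
readings = iter([0] * 420 + [1])
print(rc_time(lambda: next(readings)))  # prints 420
```

On the Pi itself you would first discharge the capacitor by configuring the pin as an output driven LOW, then reconfigure it as an input and pass something like `lambda: GPIO.input(7)` as `read_pin`.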

The code setup

So, let’s clone https://github.com/noi01/uexkull_animal.git to your Raspberry Pi. I set up a folder uexkull_animal on the desktop of my Raspberry Pi.

The code consists of 3 separate files: task_01.py, environment_01.py and Agent.py. To start the code, run in a terminal or similar:

python3 Agent.py

With the light sensor connected, this will produce feedback in the terminal similar to:

Sensor read
[421.0]
Reward
1
Action performed: [0.]
I don't walk
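The shape of the loop behind this output can be sketched roughly as follows. Note that `read_sensor`, `choose_action` and the threshold of 400 are hypothetical stand-ins for illustration, not the actual contents of the three files:

```python
def step(read_sensor, choose_action, threshold=400):
    """One iteration of the sense -> reward -> act cycle, simplified.
    `read_sensor` stands in for the environment, `choose_action`
    for the agent; the threshold of 400 is an assumed example value."""
    observation = [read_sensor()]
    print("Sensor read")
    print(observation)
    # task/interpreter: score the observation
    reward = 1 if observation[0] > threshold else 0
    print("Reward")
    print(reward)
    # agent: receive the reward and pick the next action
    action = choose_action(observation, reward)
    print("Action performed:", action)
    return observation, reward, action

# A fake bright-light sensor and an agent that always picks action 0.0
step(lambda: 421.0, lambda obs, r: [0.0])
```

In the real code the agent keeps learning across iterations, so over time the chosen action starts to depend on the rewards received.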

So this is the first step; in the next post we’ll look into possible integration with a robot.


*I am not a programmer and this is not programming advice. Please refer to the links for, hopefully, real programming advice.

Read More

OBELISK BETA

Continuity was writing a book. Robin Lanier had told her about it. She’d asked what it was about. It wasn’t like that, he’d said. It looped back into itself and constantly mutated; Continuity was always writing it. She asked why. But Robin had already lost interest: because Continuity was an AI, and AIs did things like that.

Mona Lisa Overdrive / W. Gibson

Obelisk Beta is an Artificial Neural Network (ANN) that plays with the algorithmic generation of language. The basis of the work is the theory of the meme and its transmission within culture by imitable media, in this case written storytelling.

All cultures develop complex explanations of observable phenomena, weaving motifs which are still prevalent in modern forms. In Obelisk Beta the ANN iterates and constructs its own narrative about reality by piecing together ancient myths and their analogous counterparts – conspiracy theories – learning and becoming a generative code that, in theory, could continue even without external intervention to serve as an ongoing practitioner of human ontological storytelling practices.

Obelisk Beta is a generative algorithm that, once set up, develops its language knowledge to construct narrative myth-making over a pre-set time.

CODE: Python / Recurrent Neural Network; Quartz Composer projection

OBELISK_BETA from SCNCF lab on Vimeo.

Read More


I was luckily chosen by the academy to represent them in the First Best Media Art Diploma Work Competition and went to WRO to eat, relax and assemble the ugliest diploma piece that ever came out of my academy. Cheers to those of my teachers who believe in the subtle sensuality of cables and code.

View B-612 on WRO2015 website

Read More


Visiting Wrocław / 16th Media Art Biennale WRO 2015 to represent my academy in the first diploma competition. I think this is my first time being not the programmer but the artist at such an event – so relaxing. The installation took 1 day instead of 3, so I had loads of time to tag along with various members of the WRO team and bum coffee off them.

Read More


Somewhere between the tax submission date and my trip to WRO, I had the pleasure of giving a little talk at the first KOS conference. The event was organised by my former tutors and friends from the pathway – Katedra Obszarów Sztuki. With more than 20 speakers, panels and concerts in the evenings, it sure was a cool place to be during the 6th and 7th of May.

More information/photos can be found HERE.

Read More

Visual programming languages

Good sites for Quartz Composer
http://kineme.net/
https://1024d.wordpress.com/
http://v002.info/

Good sites
http://www.sciencedirect.com/
http://www.ncbi.nlm.nih.gov/pubmed/
http://www.instructables.com/

http://www.creativeapplications.net/
http://www.haque.co.uk/

Read More


B-612 is almost exclusively a thought experiment on the idea of new relationships between technology and living organisms. It is highly probable that the hybridisation of reality will be a catalyst for such developments.

In the experiment an ANN is set up with a reinforcement learning algorithm and trained on over 300 examples of flower cultivation patterns. The patterns are devised to show the cause and effect of different water distribution among plants over a set period of time.

After a period of learning, the ANN is asked to predict an optimal pattern for watering the plant at a given point in the sequence.

The reinforcement learning rewards the following actions:

– giving optimal water dose to the plant
– keeping the full amount of water for itself

Penalties:

– starving the plant of water for several days
– letting the plant die, which results in penalty only
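A reward scheme like this can be condensed into a single scoring function. The following is an illustrative sketch with assumed names, values and thresholds, not the piece’s actual code:

```python
def score(water_given, optimal_dose, days_dry, plant_alive):
    """Illustrative reward function for the watering agent.
    Names, values and thresholds are assumptions for this sketch."""
    if not plant_alive:
        return -10                    # letting the plant die: penalty only
    reward = 0
    if water_given == optimal_dose:
        reward += 1                   # gave the optimal water dose
    elif water_given == 0 and days_dry == 0:
        reward += 1                   # kept the full amount of water for itself
    if days_dry >= 3:
        reward -= 1                   # starved the plant for several days
    return reward

print(score(water_given=2, optimal_dose=2, days_dry=0, plant_alive=True))  # prints 1
```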

Read More


Particle optical flow magic



Read More


Space magic in 3D



Read More



The new adventure has begun as a side project for the upcoming months.



Read More


Written for Aesthetic, MA year 1, Intermedia; Tutor Michal Ostrowicki

Introduction

The idea of human creation coming to life and having a will of its own has accompanied man for a long time – in myths and tales, from Pygmalion and Pinocchio to the frightful visions of Frankenstein and Metropolis. What was once fiction, a fantasy of man wanting to create a creature of his own, is now becoming an idea whose realization is within our grasp. These man-made beings, made using the newest technology, force us to reconsider and rename what we call, or more appropriately want to call, ‘authentic living’ in our modern world.

In post-tsunami Japan, 85-year-old Satsuko Yatsuzaka hugs a white robotic baby seal – Paro.
“If I hold onto this (Paro), it doesn’t matter if there’s a typhoon outside, I still feel like I’m safe,” she says to the camera. (NTDTV)
Paro was given to her residential home in Fukushima to help the elderly with trauma after the tsunami of 2011.

We view robots as capable of mechanical, repetitive work, but can robots really accompany us in our lives, and what moral dilemmas do we face if we relate to our own creations?

Emotions in the digital mind

The main moral concern of HRI (human-robot interaction) revolves around the supposedly deceptive factor of robots that display emotions: humans project onto them their own understanding of the purposes and intentions of behaviors.

„(…) Newer technology has created computational creatures that evoke the sense of mutual relating.” (Turkle, page 2)

Paro’s „state of mind’’, as for all social robots, is dependent on cues from the environment.
Its manufacturer states: „Paro is an autonomous robot, so it can express its feelings, such as surprise and happiness, voluntarily by blinking its eyes and moving its head and legs. This behavior can be perceived as if Paro has feelings.” (http://paro.jp/english/)

„The simulated thinking may be thinking; simulated feeling is never feeling. Simulated love is never love.” (Turkle, page 2)

Social robots display feelings and act as if they were aware of their surroundings. They are made to evoke feelings, programmed for a desired interaction with humans. Machines are programmable and efficient at their tasks; their mind is produced in a stream of numbers and functions. This makes them unfit to deal with concepts as abstract and biological as emotions. Emotions are the domain of humans.

The long-standing opposition between rationality and emotionality is a cliché, common like few others. It is the origin of popular culture’s image of emotionless, efficient androids, like the Terminator. Contrary to this popular belief, emotions are not irrational; studies of people who have lost them show that they are crucial and indispensable mechanisms. One of the best documented cases is that of Phineas Gage, whose brain was damaged in an accident. The damaged area was located in the left frontal lobe, one of whose main functions is processing emotional states. After his miraculous recovery, Gage did not become an extraordinarily efficient worker, nor did he become inhuman. The damage to his brain affected his capacity for decision making: he could no longer pursue goals he set himself or define the hierarchy of importance of his actions.

Studies of such accidents allow us to see the real point of emotions. They are neither remnants of our distant animal past nor the sole feature making us human. Their main function is to structure our life, to let us set and pursue goals.

„Without emotions to drive us, we would do nothing at all” (Bloom, 11. Evolution, Emotion, and Reason: Emotions)

From simple survival instincts to elaborate human interactions, emotions are a practical evolutionary development. We see emotions as a mystical part of our minds, yet our view of the world is built on their chemical and electrical properties. They have shaped our outlook from the beginnings of evolution, where they were employed mostly by the olfactory sense, the limbic system and the hippocampus – the last of which plays a key role in motivation and memory, that is, in assigning values to our actions and to the stimuli we receive from our surroundings. Today they are as indispensable to humans as they were thousands of years ago.

Looking at emotions from a scientific perspective, it becomes vital to consider the possibility and authenticity of robot emotions. It is obvious that autonomous machines will require ways to structure their aims and needs. A system that does this efficiently will in many ways be analogous to the emotional hierarchies of animals. It is not only probable but inevitable that such machines will develop emotional (understood as based on internal drives) responses.

„Paro feels happy when you stroke and hold it softly. Paro feels angry when you hit it. When Paro’s whiskers are touched, it will be very shy and cry or turn its head because it does not like to be touched. You will be happy and relieved through interacting with Paro.” (http://paro.jp/english/)

Paro, a therapeutic robot, was made and tailored to be a human companion. As such it should inspire people to interact with it, to look after it and to bond with it. As Turkle points out in her studies of HRI, „with nurturance comes the fantasy of reciprocation. (…) (People) wanted the creatures to care about them in return” (Turkle, page 2). In this statement and later claims, Turkle denies the possibility that a robot could genuinely reciprocate the need to interact. The statement is true for pre-programmed robots, but it is hard to determine its viability when an artificial intelligence is involved.

Read More