Archive
MA RESEARCH

OBELISK_BETA_NB

Continuity was writing a book. Robin Lanier had told her about it. She’d asked what it was about. It wasn’t like that, he’d said. It looped back into itself and constantly mutated; Continuity was always writing it. She asked why. But Robin had already lost interest: because Continuity was an AI, and AIs did things like that.

Mona Lisa Overdrive / W. Gibson

Obelisk Beta is an Artificial Neural Network (ANN) that plays with the algorithmic generation of language. The basis of the work is the theory of memes and their transmission within culture through imitable media, in this case written storytelling.

All cultures develop complex explanations of observable phenomena, weaving motifs that are still prevalent in modern forms. In Obelisk Beta the ANN iterates and constructs its own narrative about reality by piecing together ancient myths and their analogous modern counterparts – conspiracy theories. It learns and becomes a generative code that could, in theory, continue without external intervention as an ongoing practitioner of human ontological storytelling practices.

Obelisk Beta is a generative algorithm that, once set up, develops its knowledge of language to construct narrative myth-making over a pre-set period of time.

CODE: Python / Recurrent Neural Network; Quartz Composer projection
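The project code itself is not reproduced here, but a minimal character-level recurrent network of the kind named above might look like the sketch below, assuming TensorFlow/Keras and a plain-text corpus of myths and conspiracy theories saved as corpus.txt; the file name and network sizes are illustrative assumptions, not the actual Obelisk Beta code.

```python
# A minimal character-level RNN text generator (a sketch, not the project code).
import numpy as np
import tensorflow as tf

text = open("corpus.txt", encoding="utf-8").read()   # hypothetical corpus file
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
# Cut the corpus into fixed-length sequences and their next characters.
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([char_to_idx[text[i + seq_len]]
              for i in range(len(text) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(128),                                # recurrent memory of the sequence
    tf.keras.layers.Dense(len(chars), activation="softmax"),  # probability of the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=10)

# Generation: the network is fed its own output, character by character.
seed = text[:seq_len]
for _ in range(200):
    x = np.array([[char_to_idx[c] for c in seed[-seq_len:]]])
    probs = model.predict(x, verbose=0)[0].astype("float64")
    probs /= probs.sum()
    seed += chars[np.random.choice(len(chars), p=probs)]
print(seed)
```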

OBELISK_BETA from SCNCF lab on Vimeo.

Read More

topicsb

B-612 is, first and foremost, a thought experiment on new kinds of relationships between technology and living organisms. It is highly probable that the hybridisation of reality will be a catalyst for such developments.

In the experiment an ANN is set up with a reinforcement learning algorithm and trained on over 300 examples of flower cultivation patterns. The patterns are devised to show the cause and effect of different water distributions among plants over a set period of time.

After a period of learning, the ANN is asked to predict an optimal pattern for watering the plant at a given point in the sequence.

The reinforcement learning rewards the following actions (a rough sketch of the scheme follows below):

– giving the optimal water dose to the plant
– leaving the full amount of water to itself

Penalties:

– starving the plant of water for several days
– letting the plant die, which results in receiving only a penalty
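As referenced above, the reward scheme might be sketched as follows; the function name, argument names and numeric values are my own illustrative assumptions, not the actual B-612 code.

```python
# An illustrative sketch of the reward/penalty scheme listed above.
def watering_reward(water_given, optimal_dose, kept_all_water,
                    days_without_water, plant_alive):
    """Reinforcement signal for one step of the watering sequence."""
    if not plant_alive:
        return -10.0                                   # letting the plant die: penalty only
    reward = 0.0
    if abs(water_given - optimal_dose) <= 0.1 * optimal_dose:
        reward += 1.0                                  # giving the optimal water dose
    if kept_all_water:
        reward += 0.5                                  # leaving the full amount of water to itself
    if days_without_water >= 3:
        reward -= 1.0                                  # starving the plant for several days
    return reward
```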

Read More

m6

Written for Aesthetic, MA year 1, Intermedia; Tutor Michal Ostrowicki

Introduction

The idea of a human creation coming to life and having a will of its own has accompanied us for a long time, in myths and tales from Pygmalion and Pinocchio to the frightful visions of Frankenstein and Metropolis. What was once fiction, a fantasy of man wanting to create a creature of his own, is now becoming an idea whose realization is within our grasp. These man-made beings, built with the newest technology, force us to reconsider and rename what we call, or more appropriately want to call, ‘authentic living’ in our modern world.

In post-tsunami Japan, 85-year-old Satsuko Yatsuzaka hugs a white, robotic baby seal – Paro.
“If I hold onto this (Paro), it doesn’t matter if there’s a typhoon outside, I still feel like I’m safe,” she says to the camera. (NTDTV)
Paro was given to her residential home in Fukushima to help the elderly cope with trauma after the tsunami of 2011.

We view robots as capable of mechanical, repetitive work, but can robots really accompany us in our lives, and what moral dilemmas do we face if we form relationships with our own creations?

Emotions in the digital mind

The main moral concern of human-robot interaction (HRI) revolves around the supposedly deceptive nature of robots that display emotions. Humans project onto them their own understanding of the purposes and intentions behind behaviours.

„(…) Newer technology has created computational creatures that evoke the sense of mutual relating.” (Turkle, page 2)

Paro’s „state of mind”, like that of all social robots, depends on cues from its environment.
Its manufacturer states: „Paro is an autonomous robot, so it can express its feelings, such as surprise and happiness, voluntarily by blinking its eyes and moving its head and legs. This behavior can be perceived as if Paro has feelings.” (http://paro.jp/english/)

„The simulated thinking may be thinking; simulated feeling is never feeling. Simulated love is never love.” (Turkle, page 2)

Social robots display feelings and act as if they were aware of their surroundings. They are made to evoke feelings, programmed for the desired interaction with a human. Machines, the common view holds, are programmable and efficient at their tasks; their mind is produced in a stream of numbers and functions. This supposedly makes them unfit to deal with concepts as abstract and biological as emotions: emotions are the domain of humans.

The long-standing opposition between rationality and emotionality is a cliché, common like few others. It is the origin of popular culture’s image of the emotionless, efficient android, like the Terminator. Contrary to this popular belief, emotions are not irrational; studies of people who have lost them show that they are crucial and indispensable mechanisms. One of the best documented cases is that of Phineas Gage, whose brain was damaged in an accident. The damaged area was located in the left frontal lobe, one of whose main functions is processing emotional states. After his miraculous recovery, Gage did not become an extraordinarily efficient worker, nor did he become inhuman. The damage to his brain affected his capacity for decision-making: he could no longer pursue the goals he set himself or define a hierarchy of importance among his actions.

Studies of such accidents allow us to see the real point of emotions. They are neither remnants of our distant animal past nor the sole feature that makes us human. Their main function is to structure our lives, to let us set and pursue goals.

„Without emotions to drive us, we would do nothing at all” (Bloom, 11. Evolution, Emotion, and Reason: Emotions)

From simple survival instincts to elaborate human interactions, emotions are a practical evolutionary development. We see emotions as a mystical part of our minds because our whole view of the world is built on their chemical and electrical workings. They have shaped our outlook from the beginnings of evolution, when they were employed mostly by the olfactory sense, the limbic system and the hippocampus, the last of which plays a key role in motivation and memory – that is, in assigning value to our actions and to the stimuli we receive from our surroundings. Today they are as indispensable to humans as they were thousands of years ago.

Looking at emotions from a scientific perspective can be vital for assessing the possibility and authenticity of robot emotions. It is obvious that autonomous machines will require ways to structure their aims and needs. A system that does this efficiently will in many ways be analogous to the emotional hierarchies of animals. It is not only probable but imminent that such machines will develop emotional responses, understood as responses based on internal drives.
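Purely as an illustration of “emotions understood as internal drives” – my own toy example, not anything proposed in the essay – such a structuring system could be as simple as a few decaying drive variables that determine what the agent does next:

```python
# A toy agent whose "emotions" are internal drives that structure its aims.
import random

class DriveAgent:
    def __init__(self):
        # Internal drives, each between 0 (satisfied) and 1 (urgent).
        self.drives = {"energy": 0.2, "social_contact": 0.5, "safety": 0.1}

    def step(self):
        # Needs grow over time, like hunger or loneliness.
        for name in self.drives:
            self.drives[name] = min(1.0, self.drives[name] + random.uniform(0.0, 0.1))
        # The most urgent drive sets the current goal -- a crude emotional hierarchy.
        goal = max(self.drives, key=self.drives.get)
        self.drives[goal] = 0.0            # acting on the goal satisfies the drive
        return goal

agent = DriveAgent()
print([agent.step() for _ in range(5)])
```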

„Paro feels happy when you stroke and hold it softly. Paro feels angry when you hit it. When Paro’s whiskers are touched, it will be very shy and cry or turn its head because it does not like to be touched. You will be happy and relieved through interacting with Paro.” (http://paro.jp/english/)

Paro, a therapeutic robot, was made and tailored to be a human companion. As such it should inspire people to interact with it, to look after it and to bond with it. As Turkle points out in her studies on HRI, „with nurturance comes the fantasy of reciprocation. (…) (People) wanted the creatures to care about them in return” (Turkle, page 2). In this statement and her later claims, Turkle denies the possibility that a robot could itself have a need to interact with humans. The statement is true for pre-programmed robots, but it is hard to determine its validity when an artificial intelligence is involved.

Read More

m5

This week I left my steady 9-to-5 job as a graphic designer to become a recluse from society and pursue my MA final-year project on embodying feelings in an ANN.

I am still unsure how I feel about this decision.

Read More

ma2

There is a strong conviction among humans that flies are like machines. Yet these, and other insects, may experience faint feelings, qualia (…). How many neurons does a brain actually need to produce consciousness? Ten thousand, a million or a billion? We do not know this at the moment.

Christof Koch, The Quest for Consciousness: A Neurobiological Approach, chapter 11, translated by me

Humans display an autonoetic consciousness, which allows them to build an image of themselves as well as to form a link between their past and future actions. But consciousness itself is not bound by such requirements; we know that from cases of people with severe amnesia or Korsakoff’s syndrome. A good description of adapting to life with Korsakoff’s syndrome can be found in chapter 2 of Oliver Sacks’s The Man Who Mistook His Wife for a Hat. There is no doubt that Jimmy, like Sacks’s other patients, consciously experiences the world around him. Reports of his interest in the arts and in spiritual life leave no doubt that qualia still exist in his mind.

Read More

ma1

Going beyond the idea of primary perception*, what do we know about the consciousness of plants?

We know plants have different kinds of memory: immune, term and transgenerational.

We know that some of plants’ memory is based on epigenetics.

We know plants communicate, with each other and with different parts of themselves, in various ways.

What kind of consciousness, if any, do plants have, with their rich sensuous life so different from ours? Did they develop a different kind of qualia, ones that we cannot even imagine?

Further watching:
Stefano Mancuso: The roots of plant intelligence
Prof. Ariel Novoplansky: Learning Plant Learning

Further reading:
Do Plants Think? Scientist Daniel Chamovitz unveils the surprising world of plants that see, feel, smell and remember

* the thesis popularized by Cleve Backster that plants are sentient and respond to human thoughts

Read More

note7

Artificial neural networks are shaped by their environment, which they get to know through data. An ANN, unlike some programs, exists in relation to its surroundings.

We can influence our ANN by the choice of input.

This is similar to how living organisms are shaped by their surroundings.

What can ANN add to the ongoing discussion on nature versus nurture?

We can grant an ANN the ability to learn a task, e.g. recognising MNIST digits, by providing a sufficient number of neurons, a suitable learning algorithm and a softmax layer. But without the necessary data to learn from, an ANN is incapable of doing the task.
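As a concrete illustration (assuming TensorFlow/Keras is available), the sketch below has the neurons, the learning algorithm and the softmax layer needed to recognise MNIST digits – yet it only becomes a digit recogniser once it is given the data to learn from.

```python
# A minimal MNIST classifier: architecture alone is not enough, it needs the data.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 input pixels
    tf.keras.layers.Dense(128, activation="relu"),    # hidden neurons
    tf.keras.layers.Dense(10, activation="softmax"),  # softmax output layer
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Without this step -- without data to learn from -- the same architecture
# cannot recognise a single digit.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```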

Read More

note6

In social robotics there exists the term neglect tolerance: the time a robot can operate without interaction with a human. Robots with more autonomy seem to cope better with such loneliness.

Read More

The question some ask is: why do we want to design autonomous robots?

Autonomy is one of the key issues in designing social robots. Autonomy allows a robot to find new and creative solutions to problems, and thus to adapt to its environment.

“… Dreyfus (1972) has suggested that computers would need to have bodies in order to gain the experience necessary to become truly creative, while others have suggested that human creativity is a social phenomenon and would be as impossible for isolated mind as it is present for an isolated computer.”

“Structure of Psychology; An Introductory Text”, (1981), C.I.Howarth, W.E.C. Gillham, ‘Theories of machines’ p.180

As we see today, the gap is slowly closing; autonomy and creativity are necessary for smooth interaction between machines and humans in most social scenarios.

Today, robot designs differ in the amount of autonomy they grant. Robots with no autonomy exist only in teleoperation or Wizard of Oz scenarios, where a human manipulates a mechanical skeleton. This kind of arrangement may be useful, but it renders the robot useless without human presence.

The LOA (level of autonomy) scale, as in the works of Tom Sheridan, distinguishes several different degrees of autonomy, from the direct control of teleoperation to full autonomy in human-robot collaboration scenarios.

The description of the scale:

1. Computer offers no assistance; human does it all.
2. Computer offers a complete set of action alternatives.
3. Computer narrows the selection down to a few choices.
4. Computer suggests a single action.
5. Computer executes that action if human approves.
6. Computer allows the human limited time to veto before automatic execution.
7. Computer executes automatically then necessarily informs the human.
8. Computer informs human after automatic execution only if human asks.
9. Computer informs human after automatic execution only if it decides to.
10. Computer decides everything and acts autonomously, ignoring the human.

As cited in M. A. Goodrich and A. C. Schultz, Foundations and Trends in Human–Computer Interaction, Vol. 1, No. 3 (2007)

 

Read More

notes3

Artificial neural networks are based, as the name suggests, on living neural networks. The neurons that build an ANN are not exact models of living neurons; they are an idealized (simplified) version of them.

Let us compare the two mechanisms.

In a living nervous system, such as the one humans have, nerve cells transfer signals by means of neurotransmitters released in response to electrical stimuli. An artificial neuron is a much simpler structure, with an activation function inside that passes signals forward.

A biological neuron

possesses a cell body (often called the soma), dendrites, and an axon. Dendrites are thin structures that arise from the cell body, often extending for hundreds of micrometres and branching multiple times, giving rise to a complex “dendritic tree”. An axon is a special cellular extension that arises from the cell body at a site called the axon hillock and travels for a distance, (…). The cell body of a neuron frequently gives rise to multiple dendrites, but never to more than one axon, although the axon may branch hundreds of times before it terminates. At the majority of synapses, signals are sent from the axon of one neuron to a dendrite of another.

From Wikipedia

Artificial neurons are connected to each other by weights. It is the weights that are updated by the learning algorithms, so the information processed by the artificial neurons shapes the connections of the network. Each neuron receives inputs from other neurons, and its effect on each subsequent neuron is controlled by the weights.
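A single artificial neuron can be written down in a few lines; the sketch below (using NumPy, with a sigmoid chosen as the activation function purely for illustration) shows the weighted inputs and the activation that passes the signal forward.

```python
# A minimal artificial neuron: the "knowledge" lives in its weights and bias,
# which a learning algorithm would update.
import numpy as np

def sigmoid(z):
    """Activation function: squashes the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of the inputs from other neurons, passed through an activation."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Example: three incoming signals, each scaled by its connection weight.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.4, -0.6, 0.2])
print(artificial_neuron(x, w, bias=0.1))
```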

Read More

notes2

 “… dream’s evanescence, the way in which, on awakening, our thoughts thrust it aside as something bizarre, and our reminiscences mutilating or rejecting it — all these and many other problems have for many hundred years demanded answers which up till now could never have been satisfactory.”

Sigmund Freud, The Interpretation of Dreams

Sigmoid Belief Nets are networks which form their answers as beliefs (probabilities). To train these complicated networks, the wake-sleep algorithm was developed. In the ‘wake’ part of the process, information is stored and the weights learn. The problem which arises in Sigmoid Belief Nets is that

“it’s hard to infer the posterior distribution over hidden configurations when given a datavector.”

(Neural Networks for Machine Learning, Coursera, Geoffrey Hinton and collaborators, lecture 13)

The need therefore arises to un-learn some of the structures acquired in the wake phase, in the stage named ‘sleep’.

This process may still lead to incorrect model averaging.

(Neural Networks for Machine Learning, Coursera, Geoffrey Hinton and collaborators, lecture 13)

When first proposed, the wake-sleep algorithm was seen as a possible algorithm by which the human brain works. In this scenario, the brain would store data during the day; then, while sleeping, it would go through the information and ‘correct’ the wrongly formed connections. Is this the true nature of our dreaming?
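A toy, single-hidden-layer sketch of the wake-sleep idea (a minimal Helmholtz machine written with NumPy) is given below; it is an illustration of the two phases described above, not Hinton’s implementation.

```python
# Wake-sleep on a toy model: the wake phase teaches the generative weights on
# real data, the sleep phase teaches the recognition weights on "dreamed" data.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.05

W_gen = np.zeros((n_visible, n_hidden))  # generative weights  (hidden -> visible)
b_vis = np.zeros(n_visible)              # generative visible bias
b_hid = np.zeros(n_hidden)               # generative hidden bias
W_rec = np.zeros((n_hidden, n_visible))  # recognition weights (visible -> hidden)
c_hid = np.zeros(n_hidden)               # recognition hidden bias

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

def wake_step(x):
    """'Awake': recognise the data, then teach the generative weights to reproduce it."""
    global W_gen, b_vis, b_hid
    h = sample(sigmoid(W_rec @ x + c_hid))        # recognition pass on real data
    p_x = sigmoid(W_gen @ h + b_vis)              # generative reconstruction
    W_gen += lr * np.outer(x - p_x, h)            # delta rule on generative weights
    b_vis += lr * (x - p_x)
    b_hid += lr * (h - sigmoid(b_hid))

def sleep_step():
    """'Sleep': dream up a fantasy, then teach the recognition weights to explain it."""
    global W_rec, c_hid
    h = sample(sigmoid(b_hid))                    # fantasy hidden state
    x = sample(sigmoid(W_gen @ h + b_vis))        # fantasy ("dreamed") data
    q_h = sigmoid(W_rec @ x + c_hid)
    W_rec += lr * np.outer(h - q_h, x)            # delta rule on recognition weights
    c_hid += lr * (h - q_h)

data = sample(np.full((200, n_visible), 0.3))     # toy binary "datavectors"
for epoch in range(20):
    for x in data:
        wake_step(x)
        sleep_step()
```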

Read More

notes1

In psychology, Memory is the process by which information is encoded, stored, and retrieved. Encoding allows information that is from the outside world to reach our senses in the forms of chemical and physical stimuli. In this first stage we must change the information so that we may put the memory into the encoding process. Storage is the second memory stage or process. This entails that we maintain information over periods of time. Finally the third process is the retrieval of information that we have stored. We must locate it and return it to our consciousness. Some retrieval attempts may be effortless due to the type of information.

From Wikipedia

Memory process:

  • encoding information (forming a memory)
  • storing information (retaining the memory)
  • retrieving information (recalling memory)

The Autoencoders

The reconstruction of data from partial information is a very human thing. We do not need to remember every detail to find the information we stored useful; what we do not remember, our brain substitutes with generic data. Autoencoders, a kind of neural network, use a similar device. In the process of storage they perform something akin to principal component analysis, encoding data points along the directions of greatest variance, which allows for more compact storage. In this storing process we lose information about the remaining directions. In the recalling process, autoencoders reconstruct the data from that partial information.
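A minimal autoencoder of this kind might look like the sketch below (assuming TensorFlow/Keras and MNIST as a stand-in dataset): the encoder “stores” each 784-pixel digit as a 32-number code, losing the less important directions of variation, and the decoder then “recalls” the image from that partial information.

```python
# A minimal autoencoder: compress (encode/store), then reconstruct (recall).
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(784,)),  # encoder: compress to 32 numbers
    tf.keras.layers.Dense(784, activation="sigmoid"),                  # decoder: reconstruct the image
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256)

reconstructed = autoencoder.predict(x_test)   # recall from the compressed code
```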

Read More