Disappearance of Tangible Interfaces

Estimated reading time: Less than 6 minutes


So here’s a post on a topic I have been pondering for quite some time now. I had a brief discussion with my brother a couple of days back, and I am writing this down because I think it will make things clearer to me. Inputs, criticisms and thoughts are most welcome.

For a start: when Bret Victor wrote his rant on the future of interaction design and questioned the visible trend towards a “picture under glass” vision of interaction design, I had my own thoughts on the importance of fingers in interaction design. My judgement about the importance of fingers (not one single finger) still stands; the rant, however, asked:

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?

The conversation between the voices in my head started with that rant, and it has now come far enough that I am going to talk about some on-the-verge technologies which, I believe, could seriously change how human-computer interaction progresses in the seemingly far-off future.

Consider Miguel Nicolelis’s monkey-and-joystick experiment, in which Aurora, a monkey, was made to play a game using a joystick while her neural activity was recorded. Later the joystick was removed, and the researchers were able to reproduce the joystick movements from Aurora’s brain signals alone – to the point that she realized she could play the game without the joystick. Here’s the TED talk on the experiment.
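
To make the decoding step concrete, here is a toy sketch of the idea – emphatically not Nicolelis’s actual pipeline, which decoded populations of hundreds of neurons in real time – in which a linear decoder is fitted while the joystick is still in use and is then driven by brain activity alone. All names, dimensions and data below are invented for illustration:

```python
import numpy as np

# Toy linear decoder: map neural firing rates to 2D joystick/cursor velocity.
# Purely illustrative -- the data and dimensions below are invented.

rng = np.random.default_rng(0)
n_samples, n_neurons = 1000, 96                 # assumed recording size
true_map = rng.normal(size=(n_neurons, 2))      # hidden neurons-to-velocity mapping

# Phase 1: the monkey uses the joystick; we record firing rates alongside
# the joystick velocities they accompany.
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(scale=0.1, size=(n_samples, 2))

# Fit the decoder by least squares on the paired recordings.
w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Phase 2: the joystick is removed; velocity is decoded from brain activity alone.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ w_hat)[0]
print(f"decoded velocity: ({vx:.2f}, {vy:.2f})")
```

The point of the sketch is the property the experiment demonstrates: once the decoder is fitted, the joystick never has to be touched again.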

One observation I had is about how the interaction happened. When we talk about human-computer interaction, we typically have all these tangible input devices: the mouse, the keyboard, the joystick and so on. In the case of this experiment, the first thing to do is to make Aurora, the monkey, interface with the joystick – because whatever is going to happen happens through the input signals the joystick provides. After the joystick is removed, I am not certain how the brain signal fluctuates, but as Nicolelis has mentioned, the monkey is thinking and playing the game while her limbs are free to do whatever they want – like scratching her back.


Now I am interested in understanding what Aurora is thinking. Is she thinking about
1) the physical movement of the joystick; or
2) the physical movement of the dots on the screen?

These are definitely two different activities, and the signals from the brain cannot be the same. In the first, the brain is working out how to move the joystick so that the dots behave a certain way – move the joystick left, move the joystick right. This directly converts into motor-neuron transmissions that make her limbs behave accordingly. In the second, Aurora is looking at the screen and thinking “move the red dot left/right, or up/down”. This requires no motor-neuron transmissions; these are just thoughts.

I am not a neuroscientist, but my understanding was that even if Aurora is not holding a joystick, to produce the same signal patterns her limbs should be moving the same way they do when a joystick is present; only then could those signals be decoded into mechanical movements of the robotic arm. This got me thinking: when Aurora is scratching her back while playing the game, does that not distort the signals from her brain, since her limbs are clearly moving differently than when working a joystick?

My conjecture: she is probably thinking something like “move the dot left” or “move the dot up”, rather than “move my limb (or the joystick) left/right”.

The interesting implication is that this leads to a completely intangible interaction model.

Let’s take an example.

Let us say you want to type the word “google”. Roughly broken down, the steps look something like:
1) The brain decides which letters – here ‘g’, ‘o’, ‘o’, ‘g’, ‘l’ and ‘e’ in sequence;
2) The brain sends commands through the motor-neuron system to control the muscles that press the corresponding keys on the keyboard in the proper sequence.

What happened in Nicolelis’s experiment seems to be that the signals tapped correspond only to the former, not the latter. This leaves room for the idea of an intangible, thought-only input interface. The point is, even though it seems Aurora is playing the game with her mind alone, there is still a tangible robotic arm simulating the joystick movement through which the game is played – which means it is not entirely joystick-less.
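
To make the thought-only-typing idea concrete, here is a minimal sketch of what step 1 without step 2 could look like: a classifier that maps a neural feature vector straight to a letter, with no motor stage in between. Nothing here reflects a real system – the features, centroids and noise level are all invented:

```python
import numpy as np

# Hypothetical "thought-only typing": map a neural feature vector straight
# to a letter, skipping the motor stage entirely. All data here is invented.

rng = np.random.default_rng(1)
letters = "abcdefghijklmnopqrstuvwxyz"

# Pretend each letter-thought produces features clustered around a centroid
# learned in a (hypothetical) calibration session.
centroids = {ch: rng.normal(size=16) for ch in letters}

def decode_letter(features: np.ndarray) -> str:
    """Nearest-centroid classification of a single 'letter thought'."""
    return min(centroids, key=lambda ch: np.linalg.norm(features - centroids[ch]))

# Simulate thinking "google": noisy samples around each letter's centroid.
thoughts = [centroids[ch] + rng.normal(scale=0.3, size=16) for ch in "google"]
print("".join(decode_letter(f) for f in thoughts))  # -> google
```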

Now let us have a look at another TED talk by Tan Le entitled ‘A headset that reads your brainwaves’.

Here, the subject is shown a 3D render of a cube on the screen and is asked to think that he is pulling the object towards him. The brain signals are recorded. Once they are recorded, the next time he thinks of pulling the object, the system detects the signal pattern, identifies the pull action, and the object is manipulated to simulate a pull.


Thus each pattern of brain signals is mapped to a certain action – “push”, “pull”, “disappear” and so on. Here, the subject is no longer thinking “pull the joystick to pull the cube” or “press the button to make it disappear”. The almost subconscious thought of sending commands to control body movement is absent. He is just thinking “pull the object”. The bottom line is that the physical motor action is avoided.
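
A minimal sketch of that train-then-detect loop might look like the following. To be clear, this is not Emotiv’s actual API; the nearest-pattern matching and the threshold are stand-ins I made up:

```python
import numpy as np

# Sketch of the train-then-detect loop described above (not Emotiv's actual
# API): record one signal pattern per mental command during training, then
# match live signals to the nearest trained pattern and fire its action.

rng = np.random.default_rng(2)

def record_pattern() -> np.ndarray:
    """Stand-in for capturing averaged EEG features during a training trial."""
    return rng.normal(size=32)

# Training phase: the subject rehearses each mental command once.
trained = {action: record_pattern() for action in ("push", "pull", "disappear")}

def classify(signal: np.ndarray, threshold: float = 6.0) -> str | None:
    """Return the action whose trained pattern is closest, or None (neutral)."""
    action = min(trained, key=lambda a: np.linalg.norm(signal - trained[a]))
    distance = np.linalg.norm(signal - trained[action])
    return action if distance < threshold else None

# Runtime: the subject thinks "pull"; simulate features near that pattern.
live = trained["pull"] + rng.normal(scale=0.5, size=32)
print(classify(live))  # -> pull
```

A real system would have to cope with far noisier signals, but the shape of the interaction – train a pattern once, then detect it and dispatch the mapped action – is what the demo describes.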

With this in mind, let us get back to Bret Victor’s rant. The trend in human-computer interaction so far has always been toward simplifying interaction and making it convenient for us: reducing tension, cognitive load and physical stress, among other things. If an interaction designer tries to reduce cognitive load and physical stress, that means reducing the physical motor activities that tire users out. Setting aside the ethical debate about the responsibility we as interaction designers bear – that we should keep our species from ending up like the humans in Wall-E – the radical conclusion points towards a future of interaction design based on minimal motor activity.

Answering Bret Victor’s question – “With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?” – I believe yes, it is indeed the fingers; plus our neural network.

Fingers – we have ten of them, and we don’t get tired typing thousands of words in one go. Arm movements, on the other hand, are exhausting, and body and torso movements are similarly demanding.


Projecting a bit further, tangible interfaces as we know them today, like keyboards, will become invisible in the sense that they are embedded on our bodies as basic wearables that play no visible part in the interaction. They will communicate directly with our brains and free our bodies from physical exhaustion, depending instead on our thoughts and on signals like skin conductivity that vary with our mood – all demanding minimal motor activity.

Hmmm… this is a cyborg we are talking about.

Topmost Photo: http://worrydream.com/
