jetpack story mini

for several big global problems (humane elderly care, regenerative agriculture, cleaning up the garbage planet) we need really cool robots that are cheap. for this they have to be able to learn properly. this is difficult but we (as a community) know how to do it. everything we do is for the plan to build cool self-learning robots that you really want to hang out with and that don't need help all the time :)

🧠 Russian teddies and Martian cats

Turns out, we are huge fans of Simon Stålenhag and his highly imaginative, gripping visual and narrative universe. The most recent addition to our collection is “Things from the Flood”, an illustrated novel from 2016, set after the closing of the Loop, the advanced research facility at the center of the preceding book “Tales from the Loop”.

One particular page struck me as highly relevant to what we’re trying to achieve with our work at Jetpack Cognition Lab. It is reproduced below as a testimony, a mini-episode titled “The Russian Teddy”.

The episode contains a few elements that are real and worth a closer look. First, there is the notion of trashy AI-ware, which sets up the expectation of a simulated personality rather than a true robot character. Second, there is the asymmetry of regulation across regions and markets, which drives an inflow of dark imports from less regulated domains into more regulated ones. Market physics, yes. Third, there is the human kid’s desire to push things to the end and find out whether the robot’s behavior is really only simulated, or whether there is something underneath the overt behavior, something closer to the singular existential experience that we, and all other biological life, claim to have, and which comes out regularly when we are threatened with death by an external agent.

To us, as the roboticists and general life-embracing creatives we are, this is precisely antithetic: a negative example, a scenario that we would like to avoid, and that we think we know how to avoid. It is the reason why we insist that robots need to be strongly grounded, from the most basic perceptions upwards, in everything they do, and be functionally honest. That means robots shouldn’t pretend to have spoken language competence, for example, when they lack the many more basic audio-motor skills that humans and animals do have. The basic skills in this example would be awareness of sound sources and their locations; the fundamental characteristics of a sound source, like something dangling in the wind, a machine whirring, or an animal or human doing some activity; up to distinguishing between a non-word utterance and spoken language proper. This is just to name a few. All of our own perceptions, especially those that enter the conscious mind, are based on literally thousands of subordinate cues. This is not a bug, this is a feature. It is what makes our perceptions so incredibly robust, for all practical purposes (fapp).

Our hypothesis is that machines built in such a way, and only such machines, will get close enough to appropriate behavior, because there will be plenty of micro-cues and preceding evidence for how a situation develops, allowing the machine to understand and change its behavior long before any irreversible escalation.
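To make one of those basic audio skills concrete, here is a toy sketch, entirely our own illustration and not a description of any Jetpack Cognition Lab system: estimating the direction of a sound source from the time difference of arrival between two microphones. The microphone spacing and sample rate are made-up example values.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def estimate_azimuth(left, right, sample_rate, mic_distance=0.15):
    """Estimate a sound source's azimuth (radians) from two microphone
    signals via cross-correlation. Positive angle: source on the side
    of the left microphone (it hears the sound first)."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)  # samples the right mic lags
    delay = lag / sample_rate                     # lag in seconds
    max_delay = mic_distance / SPEED_OF_SOUND     # physically possible maximum
    ratio = float(np.clip(delay / max_delay, -1.0, 1.0))
    return float(np.arcsin(ratio))                # sin(azimuth) = delay / max_delay

# Example: broadband noise that reaches the left mic 5 samples earlier.
rng = np.random.default_rng(0)
left = rng.standard_normal(4096)
right = np.concatenate([np.zeros(5), left[:-5]])
angle = estimate_azimuth(left, right, sample_rate=16000)
print(angle)  # ≈ 0.8 rad, i.e. well off to the left
```

The cross-correlation peak recovers the inter-microphone delay; simple geometry turns it into an angle. It is exactly this kind of cheap, low-level cue that a grounded robot can stack by the thousands.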

The “Russian Teddies” were cuddly toys equipped with simple AI chips and a voice module. They were supposed to be able to talk to you, and they were supposed to at least appear to have a personality. In Sweden, AI chips were banned for commercial use, and most AI electronics were smuggled in from Russia, which apparently had a different view of artificial intelligence and artificial life.

Simon Stålenhag, Things from the Flood, translated with deepl.com from the German edition, page 22

Die »russischen Teddys« waren mit simplen AI-Chips und Sprachmodul ausgestattete Kuscheltiere. Man sollte sich mit ihnen unterhalten können, und sie sollten zumindest den Anschein erwecken, eine Persönlichkeit zu haben. In Schweden waren AI-Chips für den kommerziellen Gebrauch verboten, und die meiste AI-Elektronik wurde aus Russland eingeschleust, wo man offenbar eine andere Meinung zu künstlicher Intelligenz und künstlichem Leben vertrat.

Simon Stålenhag, Things from the Flood, German edition, page 22

⁂ Computational ethology – Who we are and what we do

We have been building autonomous machines for over twenty years and have become scientists on a mission to communicate. Our disciplines are computer science, psychology, biology, neuroscience, physics, art. Our subject is behavior.

What is behavior? Everything we do is behavior. We, as in humans, people, robots, and so on. Everything every animal ever does is behavior. Finding your way through the city is behavior, lifting a glass to drink is behavior, washing, talking, scratching yourself, building a house, etc. You get the picture.

How does behavior come about? How is behavior changed and adapted when the circumstances change? How do animals learn when there is no textbook? How can self-learning robots benefit from this knowledge? Which behavior is more intelligent than another behavior?

Then, what is artificial intelligence, AI? Well, the A is trivial and we are stuck with “what is I?”.

Intelligence depends on the context. The deep sea is a different context than urban sprawl. A spaceship’s atmosphere (like Earth’s) with little CO2 in it is a different context than one with a lot of CO2 in it. Depending on the context, the same behavior is more or less intelligent.

Intelligence includes the capacity to change behavior. This is itself a behavior: first finding out in which way behavior should be changed, and then, yeah … changing it! For this you need motivation, curiosity, playfulness, exploration, creativity, problem solving.
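As a toy illustration of what “changing behavior when the context changes” can mean computationally (our sketch, not a Jetpack Cognition Lab algorithm): an agent that keeps a little exploration going and lets old value estimates fade can re-adapt when the world flips its payoffs. All numbers here are made up for the example.

```python
import random

def adapt_fraction(steps=4000, epsilon=0.1, step_size=0.1, seed=1):
    """Epsilon-greedy agent on a two-armed bandit whose best arm flips
    halfway through. A constant step size makes old value estimates
    fade, so the agent changes its behavior after the flip. Returns the
    fraction of the last 500 choices that pick the newly best arm 1."""
    rng = random.Random(seed)
    values = [0.0, 0.0]  # running reward estimate per arm
    late_arm1 = 0
    for t in range(steps):
        best = 0 if t < steps // 2 else 1        # the world changes here
        if rng.random() < epsilon:
            arm = rng.randrange(2)               # explore (curiosity)
        else:
            arm = values.index(max(values))      # exploit current belief
        p = 0.8 if arm == best else 0.2          # reward probability
        reward = 1.0 if rng.random() < p else 0.0
        values[arm] += step_size * (reward - values[arm])
        if t >= steps - 500:
            late_arm1 += int(arm == 1)
    return late_arm1 / 500.0
```

Without the exploration term the agent would never notice the flip; without the forgetting built into the constant step size it would notice far too late. Both ingredients together are a minimal form of the capacity to change behavior.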

There are two big socio-economic questions. The first one is: what is our relationship with nature? The other one is: what is our relationship with our self? The science of intelligence can contribute here with insights based on quantitative methods. Because, hurray, it turns out that not a single one of our decisions comes about in the way we historically thought it does. The introspective perception available to everyone’s conscious experience is largely wrong, or misleading.

To really get ahead, we need to get more people onto the science of adaptive behavior.

For this we choose approachability on purpose, in contrast to almost any other style chosen by our competition and the established institutions in the field. This is realized through simplicity, absurdity, and softness: pet-like robots and synthetic animals.

This provides the perfect playground for fully embodied learning on a given body with all its individual peculiarities. Get behavior grounded in self-perception. Honest machines have more fun.

Looking forward to meeting you on the way.

People, planets, robots

Just back from watching David Attenborough: A Life on Our Planet.

If you haven’t, watch it. It’s on Netflix.

The movie makes a case for biodiversity and highlights how biodiversity is connected in straightforward ways with our own health and prosperity. Remember that all of #Covid19 is itself only one part of an ongoing, severe biodiversity crisis.

Planet Earth as seen from orbit. This one is stunning. Picture credits: By NASA/Bill Anders – http://www.hq.nasa.gov/office/pao/History/alsj/a410/AS8-14-2383HR.jpg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=306267

The view of Earth shown in the picture above was not generally known before the late 1960s, except to a few imaginative visionaries. It is not about saving the planet, though. It is about saving our own asses. Right here, right now. Anything else is procrastination, neglect, or worse. Anyway, let’s not get stalled.

You ask, what does this have to do with Jetpack Cognition Lab? The answer is, just about everything. Read on to find out.

In whichever way the sustainability turn is realized exactly, there are going to be autonomous mobile robots in it.

Frame capture from “A life on our planet” at 1:13:13, Netflix, Fair use
Frame capture from “A life on our planet” at 1:13:40, Netflix, Fair use

These are likely scenarios, but far from complete. Drones need support on the ground and up in the trees. They also need to go beneath the foliage, because this is where it happens. That is all quite complicated stuff. The autonomous mobility of the robots we currently have is still a far cry from anything practically usable, and that includes being affordable.

Mainstream promises have not materialized anywhere, despite trillions in sunk budgets, private and public. Remote death operations are as good as it gets.

The developments we are observing are too slow by some orders of magnitude. Since many more people need to be able to work on all of this, the approach clearly needs to be massively diversified.

How is this done? You are right: with self-learning robots, of course, and with education and inspiration. This is a massive challenge, and we need to take steps that might appear weird from the outside.

Where are you in all this? We are very curious to learn, so let us know anytime. We are out here with the mobile Jetpack Cognition Lab, and we need your support in every possible way.

Next up: the flatcat crowdfunding campaign.

flatcat is the next step towards creating robots that are built for learning all the way up. Does that scare you? Good, because this is how you and everyone else works, on the inside that is. Time to get to grips with yourself.