Jetpack Cognition Lab is a science-driven robotics startup from Berlin. The lab was founded in 2019 by Oswald Berthold and Matthias Kubisch, two roboticists who met at Humboldt University of Berlin. The mission is science communication, and the chosen means is consumer robots. The idea is to enable a direct experience of cutting-edge research as it drives the motion of what you hold in your hands. Concretely, this involves product development and storytelling for a new kind of robotics.

A wave of social robots is coming at society, and it is being developed right now. Jetpack wants to be part of this development and push towards favorable outcomes. Social robots, by definition, need to step up robot skills in interacting with humans at eye level. The major skill required here is the ability to adapt quickly to unforeseen situations. The science inside is that of developmental learning, which powers the next breakthrough in robotics and intelligent machines. Because true intelligence is always about adaptivity, a fancy word for the ability to learn.

In 2021 the team launched flatcat – a pet-like robot that responds to touch. It has the feel of a furry animal, yet it does not resemble any existing animal. This is a conscious choice in the design of the robot, because resemblance raises expectations, and compared with any real animal, current robots can only fail. flatcat is like a flat caterpillar consisting of three joints. These joints are smart in that they are not only able to move, but also to register their own motion and any resistance they meet while moving. A device that combines sensor and motor capabilities like this is called a sensorimotor. Three sensorimotors connected in a row via the flatcat chassis are all it takes to create the entire behavioral spectrum of the robot.
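The sensorimotor idea, sensing and actuation combined in one device, can be sketched as a minimal interface. All class and method names below are illustrative assumptions, not flatcat's actual API:

```python
# Minimal sketch of a "sensorimotor" joint: one device that both
# drives motion and reads back its own position and the resistance
# it meets. Names and dynamics are illustrative, not flatcat's real API.

class Sensorimotor:
    def __init__(self):
        self.position = 0.0      # current joint angle (radians)
        self.target = 0.0        # commanded joint angle

    def command(self, target):
        """Motor side: set a desired position."""
        self.target = target

    def step(self, external_load=0.0, gain=0.5):
        """Advance one control tick; an external load resists the motion."""
        error = self.target - self.position
        self.position += gain * error - 0.1 * external_load
        # Sensor side: report own motion and the felt resistance.
        return {"position": self.position, "resistance": external_load}

# A chain of three such joints, as in flatcat's chassis.
chain = [Sensorimotor() for _ in range(3)]
readings = [joint.step() for joint in chain]
```

The point of the sketch is the dual role: `step()` both moves the joint and returns what the joint feels, so behavior and perception come out of the same loop.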

Watch Jetpack at work on their website or any of their social media channels.



📖 A review of A Thousand Brains by Jeff Hawkins

Brains and nervous systems are the most exciting and truly weird topics to discover.

They can be approached through neuroscience, ethology, psychology, cognitive science, arts, brain-inspired computing, and bio-inspired robotics.

Or in other ways still. This is easy, because our own brain and CNS are involved in all our perception and acquisition of knowledge. The brain is involved in everything we are and do. It is us, and it is bodily (somatic) all the way down. This is called embodiment.

If the 20th century saw the emergence of non-classical theory in physics, it also saw the beginnings of non-classical theory in, ultimately, biology: biology as the basis and substrate of the majority of intelligent behavior we observe, anywhere.

Anyway, bla bla. Just finished reading “A Thousand Brains” by Jeff Hawkins, whom you might know from Numenta, their papers, or from the earlier book “On Intelligence”, which I haven’t read.

The work presented and discussed in the book is about the human neocortex, its computational mechanics, and its principles of organization.

What the brain does, in general, is create models of the world, which it then uses to make predictions and to find ways (sequences of actions) to get to goals (usually related to survival, in the broad sense of the complicated lives of contemporary humans). Finding the actions that lead to a goal is called inverting a model.
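The inversion idea can be illustrated with a toy example (the number-line world and all names here are my own illustration, not from the book): a forward model predicts the outcome of an action, and inversion searches over action sequences for one whose predicted outcome hits the goal.

```python
# Toy illustration of "inverting a model": the forward model predicts
# the next state given an action; inversion searches action sequences
# for one that reaches the goal. Purely illustrative.
from itertools import product

def forward(state, action):
    """Forward model: predict the next position on a number line."""
    return state + {"left": -1, "right": +1}[action]

def invert(start, goal, horizon=4):
    """Search action sequences whose predicted outcome reaches the goal."""
    for seq in product(["left", "right"], repeat=horizon):
        state = start
        for a in seq:
            state = forward(state, a)   # roll the model forward
        if state == goal:
            return seq
    return None

plan = invert(start=0, goal=2)   # a sequence with two net steps right
```

Brains presumably do something far smarter than brute-force search, but the structure is the same: prediction run forward, action selection run backward.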

In the book, the idea that these internal models don’t come singly, but rather as a massively large bundle, a huge flock of models, is expounded and illustrated in clear and fun prose, including some pictures.

One of the weirdest things about the brain is the modelling decomposition. Sorry. I just love that word so much.

What is decomposition? I don’t mean the degenerative one. It is meant in the mathematical sense of decomposing something complex into a set of simpler things, together with an explanation of their interactions, so that the overall story will yield the original phenomenon.
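A tiny concrete example of decomposition in this sense (my illustration, not from the book) is prime factorization: the simpler parts are primes, the "interaction" is multiplication, and combining the parts recovers the original.

```python
# Decomposition in the mathematical sense: break something complex into
# simpler parts plus a rule for combining them, such that the combination
# yields the original. Prime factorization is the classic toy case.

def decompose(n):
    """Split an integer into its prime factors (the simple parts)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def recompose(factors):
    """The interaction rule: multiply the parts back together."""
    out = 1
    for f in factors:
        out *= f
    return out

parts = decompose(60)            # [2, 2, 3, 5]
assert recompose(parts) == 60    # the overall story yields the original
```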

Most of us will have an acquired and consciously accessible decomposition of the world in our heads, called a mindset. Usually that’s objects, persons, and domains; interactions come via force and gravity, light, sound and touch, inner focus and sociality, and so on.

So the cool thing the brain does is a) to decompose the world into a soup of models, and b) that this decomposition is mostly unconscious and completely different from, even utterly alien to, our own introspective thinking. It just doesn’t align. No, it doesn’t. The objects of conscious introspective thought are just the tip of the iceberg of all the unconscious mental and neural activity that is not available to introspection.

One of the reasons this is so is somatics: the properties of a physical body that needs to compute in a physical universe, governed by energy equations, metabolics, and distribution networks. Limitation as a resource. Work that.

The story of the relationship between the subjective, introspective experience of feeling and living and the objective neural mechanics is one of the most pressing issues in science communication.

Why? Because understanding our own behavior and decision mechanics is essential for our civilization to survive the 21st century. Period.

Hawkins’ book throws a lot of stepping stones out into our path through a foggy toxic lava swamp. Highly appreciated and recommended.

Go check out the book home page.

Posted originally on dynatropes – mission log from climbing mount improbable. where you at?

🐛 the friendliest robot in the world, pt1

We have built the friendliest robot in the world, and we need your support to make more of them.

In all the craze and frenzy, take a minute just for fun and google “friendliest robot in the world”. I don’t know what you will be seeing, but for me the results are remarkable, both the gems and the dire. Let’s review this. What got me here is that I think with flatcat we have created

1/ the friendliest robot on the planet (frop)

It might be creepy to some, but it is friendly nonetheless, no matter what. To celebrate, let’s come up with an appropriate retronym for flatcat, for example

2/ flatcat, friendliest live adaptive technology cuddly auto telic

Either way, I needed to research existing claims in the direction of “friendliest robot”. And what I get is essentially this: various lists of “top 12” and “most advanced” hard-shell social or humanoid robots, except for the revered Paro; direct references to Blue, Pepper, and Kuri; and one IEEE Spectrum article.

What is friendliness and affection without touch as a mode of communication? From all I can see, none of these robots is able to actually touch a human person. And if they are, no one wants to be touched by them. So they might be “able” to touch a human, but that might not feel so good for the human. flatcat communicates with people only by touch, nothing else.

What caught my eye immediately, on the other hand, is Gakutensoku, 學天則, which is Japanese for “learning from the laws of nature” or, when read as Chinese, “Learning the Rules of Heaven”. Aha 🙂

3/ Gakutensoku

Gakutensoku was a very early robot design that considered friendliness on a fundamental level. It was built by biologist Makoto Nishimura, who was motivated by his shock at seeing Karel Čapek’s theater play “Rossum’s Universal Robots”. As a side effect, Gakutensoku appears to have been Japan’s first functional robot ever.

The robot he wanted to build would celebrate nature and humanity, and rather than a slave, it would be a friend, and even an inspirational model, to people.

To summarize these results: friendly robots are few because somehow no one is incentivized to make them. Where they are made, friendly companionship is somewhat misunderstood, through the severe disconnection of mainstream engineering from simple facts of human psychology. And there is a very early precedent, which came from clearly bio-inspired thinking.

For us, friendly robots are just the answer. We do think that friendly adaptive technology makes a difference for people now, and will do so even more down the road if the robots are wild and friendly. To continue this mission, we need your support and are looking for team members and funding. Give us a shout, spread the word!

Learning the rules of heaven.


📖 Jetpack story Mini

We have been making up, writing, and expounding our story over the months and years. Spontaneously, here is a very short version of the bigger arc in very simple words.

for several big global problems (humane elderly care, regenerative agriculture, cleaning up the garbage planet) we need really cool robots that are cheap. for this they have to be able to learn properly. this is difficult but we (as a community) know how to do it. everything we do is for the plan to build cool self-learning robots that you really want to hang out with and that don't need help all the time :)

🧠 Russian teddies and Martian cats

Turns out, we are huge fans of Simon Stålenhag for his highly imaginative and gripping visual and narrative universe. The most recent addition to our collection is “Things from the Flood”, an illustrated novel from 2016, set after the closing of the Loop, the advanced research facility that plays the lead in the preceding book “Tales from the Loop”.

One particular page struck me as highly relevant to what we’re trying to achieve with our work at Jetpack Cognition Lab. It is reproduced below as a testimony, a mini-episode titled “The Russian Teddy”.

The episode contains a few elements that are real and worth a closer look. There is the notion of trashy AI-ware, which leads to an overall expectation of a simulation of a personality rather than a true robot character. There is the aspect of asymmetry in regulations across different regions and markets, leading to an inflow of dark imports from less regulated domains into more regulated ones. Market physics, yes. Thirdly, there is the desire of the human kid to go all the way and find out whether the robot’s behavior is really only simulated, or if there is something underneath the overt behavior, something closer to the singular existential experience that we, and all other biological life, claim to have, and that comes out regularly when we are threatened with death by an external agent.

To us, as the roboticists and general life-embracing creatives we are, this is precisely antithetic: a negative example, a scenario that we would like to avoid, and that we think we know how to avoid. It is the reason why we insist that robots need to be strongly grounded, from the most basic perceptions upwards, in everything they do, and be functionally honest. That means that robots shouldn’t pretend to have spoken-language competence, for example, when they lack the many more basic audio-motor skills that humans and animals do have. The basic skills in this example would be awareness of sound sources, their locations, and the fundamental characteristics of a sound source, like something dangling in the wind, a machine whirring, or an animal or human doing some activity, up to distinguishing between a non-word utterance and spoken language proper. This is just to name a few. All of our own perceptions, especially those that enter the conscious mind, are always based on literally thousands of subordinate cues. This is not a bug, it is a feature. It is what makes our perceptions so incredibly robust, for all practical purposes (fapp).
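The robustness-from-many-cues point can be made quantitative with a toy simulation (my illustration, not from the post): a percept built by pooling many independent noisy cues has a far smaller error than one built from a single cue, shrinking roughly with the square root of the number of cues.

```python
# Toy demonstration: a percept pooled from many independent noisy cues
# is far more robust than one built from a single cue.
import random

def perceive(true_value, n_cues, noise=1.0, rng=random):
    """Pool n noisy cues about the same quantity into one estimate."""
    cues = [true_value + rng.gauss(0.0, noise) for _ in range(n_cues)]
    return sum(cues) / n_cues

random.seed(0)
trials = 200
err_1    = sum(abs(perceive(5.0, 1)    - 5.0) for _ in range(trials)) / trials
err_1000 = sum(abs(perceive(5.0, 1000) - 5.0) for _ in range(trials)) / trials
# err_1000 comes out roughly sqrt(1000) ≈ 30x smaller than err_1
```

The same logic works in the machine's favor: with thousands of subordinate cues, losing or corrupting a few barely moves the percept.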

Our hypothesis here is that machines built in such a way, and only such machines, will get close enough to appropriate behavior, because there will be plenty of micro-cues and preceding evidence for the development of a situation, allowing the machine to understand and change its behavior long before any irreversible escalation.

The “Russian Teddies” were cuddly toys equipped with simple AI chips and a voice module. They were supposed to be able to talk to you, and they were supposed to at least appear to have a personality. In Sweden, AI chips were banned for commercial use, and most AI electronics were smuggled in from Russia, which apparently had a different view of artificial intelligence and artificial life.

Simon Stålenhag, Things from the Flood, translated from the German edition, page 22


⁂ Computational ethology – Who we are and what we do

We have been building autonomous machines for over twenty years and have become scientists on a mission to communicate. Our disciplines are computer science, psychology, biology, neuroscience, physics, art. Our subject is behavior.

What is behavior? Everything we do is behavior. We, as in humans, people, robots, and so on. Everything every animal ever does is behavior. Finding your way through the city is behavior, lifting a glass to drink is behavior, washing, talking, scratching yourself, building a house, etc. You get the picture.

How does behavior come about? How is behavior changed and adapted when the circumstances change? How do animals learn when there is no text book? How can self-learning robots benefit from this knowledge? Which behavior is more intelligent than another behavior?

Then, what is artificial intelligence, AI? Well, the A is trivial and we are stuck with “what is I?”.

Intelligence depends on the context. The deep sea is a different context than urban sprawl. A spaceship atmosphere with little CO2 in it (like Earth’s) is a different context than one with a lot of CO2 in it. Depending on the premise, the same behavior is more or less intelligent.

Intelligence includes the capacity to change behavior. This is itself a behavior: first finding out in which way behavior should be changed, and then, yeah, changing it! For this you need motivation, curiosity, playfulness, exploration, creativity, problem solving.
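Changing behavior through exploration has a well-known minimal form in reinforcement learning, the epsilon-greedy rule; the sketch below is a generic illustration of that idea, not Jetpack's actual learning method:

```python
# Minimal sketch of behavior change through exploration: an
# epsilon-greedy agent sometimes tries actions at random (curiosity),
# and otherwise repeats whatever has worked best so far.
import random

def run(payoffs, steps=1000, epsilon=0.1, rng=random):
    """Learn reward estimates for each action by trial and error."""
    estimates = [0.0] * len(payoffs)
    counts = [0] * len(payoffs)
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore: try something new
            a = rng.randrange(len(payoffs))
        else:                                      # exploit: repeat what works
            a = max(range(len(payoffs)), key=lambda i: estimates[i])
        reward = payoffs[a] + rng.gauss(0.0, 0.1)  # noisy outcome
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
    return estimates

random.seed(1)
est = run([0.2, 0.8, 0.5])   # the agent should come to prefer action 1
```

Without the explore branch, the agent would lock onto the first action that ever paid off; the occasional random try is what makes behavior changeable at all.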

There are two big socio-economic questions. The first one is: what is our relationship with nature? The other one is: what is our relationship with our self? The science of intelligence can contribute here with insights based on quantitative methods. Because, hurray, it turns out that not a single one of our decisions comes about in the way we historically thought it does. The introspective perception available to everyone’s conscious experience is largely wrong.

To really get ahead, we need to get more people onto the science of adaptive behavior.

For this we choose approachability on purpose, much in contrast to almost any other style chosen by our competition and by established institutions in the field. This is realized through simplicity, absurdity, and softness. Pet-like robots and synthetic animals.

This provides the perfect playground for fully embodied learning on a given body with all its individual peculiarities. Get behavior grounded in self-perception. Honest machines have more fun.

Looking forward to meeting you on the way.

People, planets, robots

Just back from watching David Attenborough: A life on our planet.

If you haven’t, watch it. It’s on Netflix.

The movie makes a case for biodiversity and highlights how biodiversity is connected in straightforward ways with our own health and prosperity. Remember that all of #Covid19 is itself only one part of an ongoing, severe biodiversity crisis.

Planet Earth as seen from orbit. This one is stunning. Picture credit: NASA/Bill Anders, public domain.

The view of Earth shown in the picture above was not generally known to people before the late 1960s, except to a few imaginative visionaries. It is not about saving the planet, though. It is about saving our own asses. Right here, right now. Anything else is procrastination, neglect, or worse. Anyway, let’s not get stalled.

You ask, what does this have to do with Jetpack Cognition Lab? The answer is, just about everything. Read on to find out.

In whichever way exactly the sustainability turn is realized, there are going to be autonomous mobile robots in it.

Frame capture from “A life on our planet” at 1:13:13, Netflix, Fair use
Frame capture from “A life on our planet” at 1:13:40, Netflix, Fair use

These are likely scenarios, but far from complete. Drones need support on the ground and up in the trees. They also need to go beneath the foliage, because this is where it happens. That is all quite complicated stuff. The autonomous mobility of the robots we currently have is still a far cry from anything practically usable, and that includes being affordable.

Mainstream promises have not materialized anywhere, despite trillions in sunk budgets, private and public. Remote death operations are as good as it gets.

The developments we are observing are not fast enough by some orders of magnitude. Since more people need to be able to work on all of this, the approach clearly needs to be massively diversified.

How is this done? You are right: with self-learning robots, of course, and with education and inspiration. This is a massive challenge, and we need to take steps that might appear weird from the outside.

Where are you in this? We are very curious to learn, so let us know anytime. We are out here with the mobile Jetpack Cognition Lab, and we need your support in every possible way.

Next up: the flatcat crowdfunding campaign.

flatcat is the next step towards creating robots that are built for learning all the way up. Does that scare you? Good, because this is how you and everyone else works, on the inside that is. Time to get to grips with yourself.