📖 Jetpack story Mini

We have been inventing, writing, and retelling our story over the months and years. Off the cuff, this is a very short version of the bigger arc, in very simple words.

For several big global problems (humane elderly care, regenerative agriculture, cleaning up the garbage planet) we need really cool robots that are cheap. For this they have to be able to learn properly. This is difficult, but we (as a community) know how to do it. Everything we do serves the plan to build cool self-learning robots that you actually want to hang out with and that don't need help all the time :)

🧠 Russian teddies and Martian cats

Turns out, we are huge fans of Simon Stålenhag and his highly imaginative, gripping visual and narrative universe. The most recent addition to our collection is “Things from the Flood”, an illustrated novel from 2016, set after the closing of the Loop, the advanced research facility at the center of the preceding book “Tales from the Loop”.

One particular page struck me as highly relevant to what we’re trying to achieve with our work at Jetpack Cognition Lab. It is reproduced below as a testimony, a mini-episode titled “The Russian Teddy”.

The episode contains a few elements that are real and worth a closer look. First, there is the notion of trashy AI-ware, which leads to the expectation of a simulated personality rather than a robot with genuine character. Second, there is the asymmetry in regulation across regions and markets, which drives an inflow of dark imports from less regulated domains into more regulated ones. Market physics, yes. Third, there is the desire of the human kid to go all the way and find out whether the robot’s behavior is really only simulated, or whether there is something underneath the overt behavior, something closer to the singular existential experience that we, and all other biological life, claim to have, and that surfaces reliably when we are threatened with death by an external agent.

To us, as the roboticists and general life-embracing creatives we are, this is precisely the antithesis: a negative example, a scenario we would like to avoid, and one we think we know how to avoid. It is the reason why we insist that robots need to be grounded, from the most basic perceptions upwards, in everything they do, and that they be functionally honest. That means robots shouldn’t pretend to have spoken language competence, for example, when they lack the many more basic audio-motor skills that humans and animals do have. The basic skills in this example would be awareness of sound sources and their locations, of the fundamental characteristics of a sound source, like something dangling in the wind, a machine whirring, or an animal or human doing some activity, up to distinguishing a non-word utterance from spoken language proper, just to name a few. All of our own perceptions, especially those that enter the conscious mind, are always based on literally thousands of subordinate cues. This is not a bug, it is a feature. It is what makes our perceptions so incredibly robust, for all practical purposes (fapp).

Our hypothesis here is that machines built in such a way, and only such machines, will get close enough to appropriate behavior, because there will be plenty of micro-cues and preceding evidence about how a situation is developing, allowing them to understand it and change their behavior long before any irreversible escalation.

The “Russian Teddies” were cuddly toys equipped with simple AI chips and a voice module. They were supposed to be able to talk to you, and they were supposed to at least appear to have a personality. In Sweden, AI chips were banned for commercial use, and most AI electronics were smuggled in from Russia, which apparently had a different view of artificial intelligence and artificial life.

Simon Stålenhag, Things from the Flood, German edition, page 22 (translated from the German with deepl.com)

⁂ Computational ethology – Who we are and what we do

We have been building autonomous machines for over twenty years and have become scientists on a mission to communicate. Our disciplines are computer science, psychology, biology, neuroscience, physics, art. Our subject is behavior.

What is behavior? Everything we do is behavior. We, as in humans, people, robots, and so on. Everything every animal ever does is behavior. Finding your way through the city is behavior, lifting a glass to drink is behavior, washing, talking, scratching yourself, building a house, etc. You get the picture.

How does behavior come about? How is behavior changed and adapted when circumstances change? How do animals learn when there is no textbook? How can self-learning robots benefit from this knowledge? Which behavior is more intelligent than another?

Then, what is artificial intelligence, AI? Well, the A is trivial and we are stuck with “what is I?”.

Intelligence depends on context. The deep sea is a different context than urban sprawl. A spaceship’s atmosphere (like Earth’s) with little CO2 in it is a different context than one with a lot of CO2 in it. Depending on the premise, the same behavior is more or less intelligent.

Intelligence includes the capacity to change behavior. This is itself a behavior. First finding out in which way it should be changed, and then, yeah ... changing it! For this you need motivation, curiosity, playfulness, exploration, creativity, and problem solving.

There are two big socio-economic questions. The first is: what is our relationship with nature? The other is: what is our relationship with our self? The science of intelligence can contribute here with insights based on quantitative methods. Because, hurray, it turns out that not a single one of our decisions comes about the way we historically thought it does. The introspective perception available to everyone’s conscious experience is largely wrong, or misleading.

To really get ahead, we need to get more people onto the science of adaptive behavior.

For this we deliberately choose approachability, much in contrast to almost any other style chosen by our competition and by the established institutions in the field. This is realized through simplicity, absurdity, and softness: pet-like robots and synthetic animals.

This provides the perfect playground for fully embodied learning on a given body with all its individual peculiarities. Get behavior grounded in self-perception. Honest machines have more fun.

Looking forward to meeting you on the way.

flatcat. development white paper.

your next robot is a desktop research robot

Why flatcat?

We believe that modern robotics/AI needs a boost from the bottom: to take two to three steps back and take the time to close the sensorimotor gap and look intensively at learning procedures before moving on to higher functions such as assistance for humans.

That’s why we developed flatcat as an open, extremely simplified platform from the ground up to enable people from different scientific and research fields, as well as developers and engineers, to develop their own applications or solve research questions with a real robot. flatcat is designed to be inexpensive and highly simplified, making it accessible to many.

How flat?

flatcat is open source in hardware, firmware, and software. The circuits are just as studiable and expandable as the software. The mechanics are made of 3D-printed parts and can therefore be repaired and adapted over and over again. We use readily available off-the-shelf components as much as possible to ensure parts availability. The design is modular and expandable, e.g. more joints, different motors, new controllers, more sensors. The mechanics are portable and lightweight, i.e. results can be presented live and mobile use is possible. flatcat moves in two dimensions, and this reduced complexity is initially very convenient for many problems; the robot can be maintained and operated by one person (a desktop research robot).

Spec flat

flatcat provides a Python/C++ interface for the development of applications. The host platform is a Raspberry Pi (Zero/W) with a WiFi module, which drives the motors via a balanced, i.e. noise-insensitive, data bus (RS-485, 1 MBaud).
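As a rough sketch of how such an interface could be used, here is a minimal Python example. The module and method names (flatcat_interface, FlatcatBus, read_state, set_target_position) are hypothetical placeholders for illustration, not the actual flatcat API.

import time

from flatcat_interface import FlatcatBus  # hypothetical module and class names

# RS-485 link at 1 MBaud as described above; the port name is an example
bus = FlatcatBus(port="/dev/ttyAMA0", baudrate=1_000_000)

try:
    for step in range(100):
        state = bus.read_state(joint_id=0)                 # position, current, voltage, temperature
        print(step, state.position, state.current)
        bus.set_target_position(joint_id=0, position=0.5)  # normalized target position
        time.sleep(0.02)                                   # ~50 Hz application loop
finally:
    bus.close()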

The power supply can be 6–12 V; the internal battery is a 2-cell lithium polymer battery with 7.4 V nominal voltage (6.6–8.4 V, empty to full). A stationary power supply can also be connected.

The motor controllers are a Jetpack open-source development called Sensorimotor. They provide rich sensory feedback (bidirectional current, supply voltage, temperature, 300° position, and speed) and various control modes such as position PID and Cognitive Sensorimotor Loops (CSL). The motors can also be used for simple sound generation (beta).
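For reference, the position PID mode computes a motor command from the position error. Below is a generic, textbook-style PID step in Python; it is an illustrative sketch, not the Sensorimotor firmware, and the gains and loop rate are arbitrary example values.

# Generic PID position controller step, for illustration only.
# Gains and loop timing are arbitrary examples, not Sensorimotor firmware values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # control output, e.g. a normalized motor command
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.01)     # 100 Hz example loop
command = controller.step(target=90.0, measured=75.0)  # angles in degrees within the 300° range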

People, planets, robots

Just back from watching David Attenborough: A Life on Our Planet.

If you haven’t, watch it. It’s on Netflix.

The movie makes a case for biodiversity and highlights how biodiversity is connected in straightforward ways with our own health and prosperity. Remember that all of #Covid19 is itself only one part of an ongoing, severe biodiversity crisis.

Planet Earth as seen from orbit. This one is stunning. Picture credits: NASA/Bill Anders – http://www.hq.nasa.gov/office/pao/History/alsj/a410/AS8-14-2383HR.jpg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=306267

The view of Earth shown in the picture above was not generally known to people before the late 1960s, except to a few imaginative visionaries. It is not about saving the planet, though. It is about saving our own asses. Right here, right now. Anything else is procrastination, neglect, or worse. Anyway, let’s not get stalled.

You ask, what does this have to do with Jetpack Cognition Lab? The answer is, just about everything. Read on to find out.

However exactly the sustainability turn is realized, there are going to be autonomous mobile robots in it.

Frame capture from “A life on our planet” at 1:13:13, Netflix, Fair use
Frame capture from “A life on our planet” at 1:13:40, Netflix, Fair use

These are likely scenarios but far from complete. Drones need support on the ground and up in the trees. They also need to go beneath the foliage, because this is where it happens. That is all quite complicated stuff. The autonomous mobility of the robots we currently have is still a far cry from anything practically usable, and that includes affordable.

Mainstream promises have not materialized anywhere, despite trillions in sunk budgets, private and public. Remote death operations are as good as it gets.

The developments we are observing are too slow by some orders of magnitude. Since more people need to be able to work on all of this, the approach clearly needs to be massively diversified.

How is this done? You are right: with self-learning robots, of course, and with education and inspiration. This is a massive challenge, and we need to take steps that might appear weird from the outside.

Where are you in this? We are very curious to learn, so let us know anytime. We are out here with the mobile Jetpack Cognition Lab and we need your support in every possible way.

Next, flatcat crowdfunding campaign.

flatcat is the next step towards creating robots that are built for learning all the way up. Does that scare you? Good, because this is how you and everyone else works, on the inside that is. Time to get to grips with yourself.

Researchers find new KPI

Researchers find previously unknown Key Performance Indicator (KPI). This is the story of their discovery.

Observing a bit and reading some public domain texts, we define a new Key Performance Indicator (#kpi), the Jetpack Cognition Indicator (JCI, jizzy):

jizzy := headway / energy

headway: as in the urban/relationship sense. Since the planet is spherical, with a finite surface and so on, we cannot easily quit the relationship and need to build on headway.

energy: all the funds you are using to make the claimed headway, plus all other resources claimed in the meantime.
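As a toy illustration of the definition, here is a minimal Python helper; the example numbers are made up.

def jizzy(headway, energy):
    # Jetpack Cognition Indicator: headway achieved per unit of energy spent
    # (funds plus all other resources claimed in the meantime)
    return headway / energy

print(jizzy(headway=3.0, energy=1.5))  # made-up example values, in whatever units you measure them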

In the interest of transparency, we do not hide that we are proposing this in our own favor.

Switching topics: watch closely for our upcoming campaign and its level, Meet flatcat.

Our mission

Our mission is to do everything we can to support current and upcoming generations with reusable tools and modular wisdom to grow and tend their futures in peace and prosperity.

Of course, we are beginning to learn a little in the behavioral sciences regarding how little we know about children and the educational processes. We had assumed the child to be an empty brain receptacle into which we could inject our methodically-gained wisdom until that child, too, became educated. In the light of modern behavioral science experiments that was not a good working assumption.

Operating Manual for Spaceship Earth, R. Buckminster Fuller, 1969

Taking the A out of AI

Taking the A out of AI means thinking more about natural intelligence and what might be natural to a robot. Yes, “embodiment!” comes screaming from the back rows. So what is embodiment, and why is it important?

As is well known, the symbolic approach to AI is interesting but incomplete with regard to real-world intelligence. Traversing a doorstep or a flipped carpet edge, picking up a slippery piece of food or crumbs from the floor: unsolved, all of it. So don’t tell me about autonomous cars and healthcare robots.

At Jetpack we are working to transfer the research insights of our discipline, developmental robotics, into society through consumer products. Our robots. What needs to be addressed first for sustainable robotics in the 21st century is the sensorimotor gap present in all AI-branded products out there. It may look small and insignificant, but do not let yourself be fooled. This is the layer that connects the silicon brain with the outside world, and contrary to common belief, this layer is not trivial to navigate: sensation and perception are extremely noisy, incomplete, and contradictory. But it can be done. This is called embodied intelligence, and the adaptive sensorimotor layer provides a natural grounding for any embodied intelligence.

Jetpack – pet-like robots that react to touch. Because people are at the center of our approach, our mindset, and our world view. Cheers.

Sustainability

This is a huge topic and rightly so. At Jetpack we are experienced environmentalists and care very deeply about providing a livable future that doesn’t consume itself before it gets there. We are passionate about contributing creatively and analytically. This is one of the reasons why Jetpack exists at all, to raise the stakes on our commitment.

To set some strategy pillars, we focus on open design, the spread of knowledge, and the sensorimotor lessons on human psychology and social affairs. Sustainable health includes mental health and requires simple, clear ethics for an enjoyable social experience for everyone. Interestingly, much inspiration for this can be drawn from the study of self-learning robots. We need to rewind and get some simple things right first, which only increases the entertainment factor.

Energy is as huge a topic for robots as it is for everyone else. At the end of the day, the calories must be there, on average. We design our products with rigorous accounting of energy and resource use, down to the single controller cycle. [tbc]
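To make “down to the single controller cycle” concrete, here is a back-of-the-envelope sketch in Python; the average power draw and controller rate are assumed example values, not measured flatcat figures.

# Back-of-the-envelope energy accounting per controller cycle.
# Both numbers are assumed examples, not measured flatcat figures.
average_power_w = 5.0       # assumed average power draw of the whole robot, in watts
controller_rate_hz = 100.0  # assumed controller loop rate

energy_per_cycle_j = average_power_w / controller_rate_hz
print(round(energy_per_cycle_j * 1000, 1), "mJ per controller cycle")  # 50.0 mJ with these assumptions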

Menu ornament

While preparing an update for the people section, we introduced ornamental pictograms to the labels of the site menu. They are 𝌬 ⨳ 厂 👥 ♾.

Each ornament is chosen with an association in mind. For the home item we use 𝌬, Tetragram for Residence (U+1D32C); the products item stretches it a bit by using the ⨳ Smash Product (U+2A33) mathematical operator; workshop uses the Chinese ideograph for factory, workshop, 厂 (U+5382); people is accompanied by the Busts in Silhouette emoji 👥 (U+1F465); and the log is put together with the so-called Permanent Paper Sign ♾ (U+267E).

Thanks and let us know if you like it.

Update 2021-01-21: the ornaments break the navigation, so they have been removed from the titles.