Overview

The Ohmni Developer Edition is built with a powerful Docker containerization layer that makes it possible to run virtually any version of Ubuntu inside Ohmni.

The default command is dockerenv, which you can run from any adb or ssh shell. It loads the ohmnilabs/ohmnidev image from our Docker Hub repository, an Ubuntu 18.04 base image with some extra tools installed.

If you want to run your own images, e.g. ROS, you can use the docker-ohmnirun command, which simply runs the given image with our common flags to enable full privileges and volume mounting.
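
Under the hood, docker-ohmnirun behaves roughly like a plain docker run with those flags applied. The sketch below is only an approximation (the exact flags and mount list are assumptions based on the privilege and volume behavior described in this doc), not the actual script:

# Rough shape of what docker-ohmnirun does for you (an approximation, not the real script)
docker run -it --rm \
    --privileged \
    -v /var/dockerhome:/home/ohmnidev \
    -v /data/data/com.ohmnilabs.telebot_rtc/files:/app \
    -v /dev:/dev \
    ubuntu:18.04 bash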

Getting started

To play around with Docker, simply ssh or adb shell to the unit and then run dockerenv (as root). You'll see the Android shell change to a more typical Ubuntu shell:

ohmni_up:/ # su
ohmni_up:/ # dockerenv
root@localhost:/home/ohmnidev#

Congrats! You're now running full Ubuntu within Android :) Our dockerenv command launches a Docker image based on Ubuntu 18.04 with a bunch of dev tools preinstalled (gcc, Node.js, Python, TensorFlow, etc.):

root@localhost:/home/ohmnidev# g++ --version
g++ (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

root@localhost:/home/ohmnidev# node --version
v10.15.3
root@localhost:/home/ohmnidev# npm --version
6.4.1
root@localhost:/home/ohmnidev# python2 --version
Python 2.7.15rc1
root@localhost:/home/ohmnidev# python3 --version
Python 3.6.7

For text editors, we have both vim and mg installed by default. You can install any other editor you like with apt install.

For those of you unfamiliar with Docker, we highly encourage you to read more about how Docker images/containers work and how to ultimately build your own images using Dockerfiles. It's super powerful and will help you scale your code out easily to a fleet of 1000+ Ohmni robots.
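
For instance, a custom image could start from our dev image and bake in your own dependencies. The sketch below is just an illustration (the image name myorg/my-ohmni-app is hypothetical, and the exact ohmnilabs/ohmnidev tag you base on may differ); build it wherever you have a Docker daemon available, then run it with docker-ohmnirun:

# Sketch of a minimal custom image on top of the dev image (names and tags are hypothetical)
cat > Dockerfile <<'EOF'
FROM ohmnilabs/ohmnidev
RUN apt-get update && apt-get install -y cmake
EOF

docker build -t myorg/my-ohmni-app .
docker-ohmnirun myorg/my-ohmni-app bash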

The key with Docker is that changes you make while inside the running container (e.g. apt install -y cmake or something) will NOT persist between container runs. This is by design, to provide isolation and a clean starting point each time.

To make persistent changes, we have mapped several volumes from the Android system into the container (a quick persistence check is shown after this list):

  • /home/ohmnidev in Docker maps to /var/dockerhome in Android
    • Use this for any general work/development you do
  • /app in Docker maps to /data/data/com.ohmnilabs.telebot_rtc/files in Android, our app's install directory
    • Access files here such as the bot shell UNIX socket, etc.
  • /dev in Docker maps to /dev in Android
    • Exposes all devices for you to access from Docker, e.g. new depth cameras, USB devices via libusb, etc.
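
A quick way to convince yourself the mapping works (just a sketch): create a file under /home/ohmnidev from inside dockerenv, then look for it under /var/dockerhome from the Android shell.

root@localhost:/home/ohmnidev# echo hello > /home/ohmnidev/test.txt
root@localhost:/home/ohmnidev# exit
ohmni_up:/ # cat /var/dockerhome/test.txt
hello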

Below we go into some examples of things you can do with Docker.

Example - Run your own Ubuntu flavor

Docker makes it super easy to install and run any flavor of Ubuntu. Note that you can run multiple versions at the same time! This is useful if some code only compiles properly under 14.04 while other code needs to run on 19.04. You can take your pick from the commands below:

docker-ohmnirun ubuntu:14.04 bash
docker-ohmnirun ubuntu:16.04 bash
docker-ohmnirun ubuntu:18.04 bash
docker-ohmnirun ubuntu:19.04 bash

Example - Running ROS

Compiling ROS from scratch is often painful. Docker makes things easy with one-liner installs of whichever version you like:

docker-ohmnirun ros:indigo bash
docker-ohmnirun ros:kinetic bash
docker-ohmnirun ros:lunar bash
docker-ohmnirun ros:melodic bash

Example - OhmniLabs TensorFlow integration

See here for the source: https://gitlab.com/ohmni-sdk/demo-tpufacetracking

We have also compiled and pre-installed TensorFlow as part of the dockerenv image (which pulls from the ohmnilabs/ohmnidev Docker Hub repository).

To test, you can save the following Keras test file as /home/ohmnidev/keras.py from within dockerenv OR as /var/dockerhome/keras.py in Android (they map to the same location).

import tensorflow as tf
from tensorflow.keras import layers

print(tf.VERSION)
print(tf.keras.__version__)

model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
             loss='categorical_crossentropy',
             metrics=['accuracy'])

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

model.fit(data, labels, epochs=10, batch_size=32)

Inside dockerenv, simply run:

python keras.py

The result is as follows:

root@localhost:/home/ohmnidev# python keras.py
1.12.0
2.1.6-tf
Epoch 1/10
1000/1000 [==============================] - 1s 1ms/step - loss: 11.5188 - acc: 0.0990
Epoch 2/10
1000/1000 [==============================] - 0s 173us/step - loss: 11.4893 - acc: 0.1040
Epoch 3/10
1000/1000 [==============================] - 0s 181us/step - loss: 11.4804 - acc: 0.1140
Epoch 4/10
1000/1000 [==============================] - 0s 182us/step - loss: 11.4742 - acc: 0.1190
Epoch 5/10
1000/1000 [==============================] - 0s 188us/step - loss: 11.4703 - acc: 0.1140
Epoch 6/10
1000/1000 [==============================] - 0s 198us/step - loss: 11.4661 - acc: 0.1140
Epoch 7/10
1000/1000 [==============================] - 0s 184us/step - loss: 11.4627 - acc: 0.1360
Epoch 8/10
1000/1000 [==============================] - 0s 189us/step - loss: 11.4592 - acc: 0.1290
Epoch 9/10
1000/1000 [==============================] - 0s 194us/step - loss: 11.4542 - acc: 0.1360
Epoch 10/10
1000/1000 [==============================] - 0s 180us/step - loss: 11.4503 - acc: 0.1390

Example - Processing camera frames

See here for the source: https://gitlab.com/ohmni-sdk/demo-tpufacetracking

Here is some reference Python code for reading frames from the camera. The way it works is that our Android camera driver taps off frames for computer vision algorithms to process while the camera is running. You can run this with python inside dockerenv.

So if any application has the camera open (e.g. an Ohmni telepresence call, WebRTC running in the browser, or the OpenCamera app installed on the bot), then our Android HAL driver will send a copy of each frame to the UNIX DGRAM socket at /dev/libcamera_stream.

Also in the example below, we open a botshell connection, similar to running node bot_shell_client.js. You can send newline-terminated text commands over this connection to trigger whatever motion or behavior you want.

NOTE: currently we only export the grayscale image, as that's what we use for our current computer vision work. We will extend this soon to allow the full YUV or RGB frame to be exported.

import argparse
import platform
import subprocess
from PIL import Image
from PIL import ImageDraw

import socket
import os, os.path
import time
from enum import Enum
from struct import unpack

# Open connection to bot shell and send some commands
botshell = socket.socket( socket.AF_UNIX, socket.SOCK_STREAM )
botshell.connect("/app/bot_shell.sock")
botshell.sendall(b"say hello\n")
botshell.sendall(b"wake_head\n")

# Remove any stale socket file before binding
if os.path.exists("/dev/libcamera_stream"):
  os.remove("/dev/libcamera_stream")

print("Opening socket...")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind("/dev/libcamera_stream")
# Give the Android camera HAL (uid/gid 1047) permission to write to the socket
os.chown("/dev/libcamera_stream", 1047, 1047)

class SockState(Enum):
  SEARCHING = 1
  FILLING = 2

def main():

  state = SockState.SEARCHING
  imgdata = None
  framewidth = 0
  frameheight = 0
  frameformat = 0
  framesize = 0

  print("Listening...")
  while True:

    datagram = server.recv( 65536 )
    if not datagram:
      break

    # Dump contents for view here
    #print("-" * 20)
    #print(datagram)
    #print(len(datagram))

    # Handle based on state machine
    if state == SockState.SEARCHING:

      # Check for non-control packets
      if len(datagram) < 12 or len(datagram) > 64:
        continue

      # Check for magic
      if not datagram.startswith(b'OHMNICAM'):
        continue

      # Unpack the bytes here now for the message type
      msgtype = unpack("I", datagram[8:12])
      if msgtype[0] == 1:
        params = unpack("IIII", datagram[12:28])
        #print("Got frame start msg:", params)

        state = SockState.FILLING
        imgdata = bytearray()

        framewidth = params[0]
        frameheight = params[1]
        frameformat = params[2]
        framesize = params[3]

      #elif msgtype[0] == 2:
        # END FRAME - for now no-op
        #print("Got end frame.")

      #else:
        # No op for other
        #print("Got other msgtype.")

    # Filling image buffer now
    elif state == SockState.FILLING:

      # Append to buffer here
      imgdata.extend(datagram)

      # Check size
      if len(imgdata) < framesize:
        continue

      # Resize and submit
      imgbytes = bytes(imgdata)
      newim = Image.frombytes("L", (framewidth, frameheight), imgbytes, "raw", "L")
      rgbim = newim.convert("RGB")

      # ADD YOUR LOGIC HERE TO PROCESS newim/rgbim

      # Go back to initial state
      state = SockState.SEARCHING
      #print("Got complete frame")

  print("-" * 20)
  print("Shutting down...")
  server.close()

  os.remove( "/dev/libcamera_stream" )
  print("Done")

if __name__ == '__main__':
  main()

From the Android shell, the easiest way to start OpenCamera (which will show a live view of the camera on the screen and trigger our driver to process frames) is to run:

monkey -p net.sourceforge.opencamera -c android.intent.category.LAUNCHER 1

Then, to stop the camera remotely, you can either hit the home button on the screen or run:

input keyevent KEYCODE_HOME

Example - Running telebot_node in Linux

See here for the source: https://gitlab.com/ohmni-sdk/tbnode-docker

As telebot_node is written in JavaScript, you can easily run it on Linux instead of OhmniOS. We provide a Linux port of telebot_node which you can run inside a Linux Docker container or even directly on a Linux machine.

Note that the sample source is only an example; we do not support porting of any Ohmni sources to Linux at the moment.

Example - Sending data from Docker to in-call and standalone overlays

See the in-call mode documentation to write an in-call overlay, or the standalone mode documentation to write a standalone overlay.

If you want to write a process in Docker and send data to the in-call overlay for visualization, such as the bounding boxes of objects or human poses the bot detected, you can easily achieve this by calling a bot shell command: send_to_in_call_api [json].

import socket
import json
import os

# Open connection to bot shell
botshell = socket.socket( socket.AF_UNIX, socket.SOCK_STREAM )
botshell.connect("/app/bot_shell.sock")

jsonData = {"data": "your data"}
command = "send_to_in_call_api " + json.dumps(jsonData) + "\n"
botshell.sendall(command.encode('UTF-8'))
botshell.close()

Similarly, to send data to the standalone page, just follow the above example and change the bot shell command to send_to_stand_alone_api [json].
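
In Python, that just means changing the command string in the example above. For quick manual testing from a dockerenv shell, you can also pipe the command straight to the bot shell socket; the sketch below assumes socat is available in your container (e.g. via apt install -y socat):

# Sketch: send a standalone-overlay payload over the bot shell socket with socat
# (assumes socat has been installed inside dockerenv, e.g. apt install -y socat)
echo 'send_to_stand_alone_api {"data": "your data"}' | socat - UNIX-CONNECT:/app/bot_shell.sock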

Note: in order to receive data in the overlay, you need to listen for the cdata event.

Ohmni.on('cdata', data => {
  // your custom code here
});