Overview

Underneath the Ohmni Android app that runs the video and communications, we have NativeJS: a Node.js-based control infrastructure that makes it easy for you to modify the native behavior of Ohmni.

The NativeJS files are under the directory:

/data/data/com.ohmnilabs.telebot_rtc/files/assets/node-files

You can go to this directory in the shell and look around.

NOTE: when the bot starts, we re-normalize the files under assets/node-files to ensure proper running of the system. If you want your filesystem changes to be persistent, create the following empty files:

cd /data/data/com.ohmnilabs.telebot_rtc/files/
touch .systemdev; touch .isunpack

This will prevent Ohmni’s OTA updates, so remember to remove the files and copy your changes out of Ohmni before doing OTA updates.

The entry point when running is app.js. This is what's automatically run by the system when Ohmni boots up, and it runs forever in the background and communicates with our Android application via a local socket.

BotShell

The main API for NativeJS is provided via the BotShell class - code in bot_shell.js.

Essentially, NativeJS runs UNIX, TCP, and WebSocket servers that all speak the same simple, shell-like, newline-terminated protocol. All functions in bot_shell.js prefixed with cmd_ are automatically callable.

The easiest way to try out sending commands is to go to /data/data/com.ohmnilabs.telebot_rtc/files/assets/node-files and then run:

./node bot_shell_client.js

You can format newline-terminated text commands and send them over to trigger whatever motion or behavior you want.

You can also connect to bot_shell.sock directly and send commands over the socket.

const net = require('net');

// Path to bot_shell.sock, relative to assets/node-files
const botShellSockPath = "../../../bot_shell.sock";

const socket = net.createConnection({ path: botShellSockPath }, () => {
  console.log("Connected to the bot shell socket");
  socket.write("torque 3 on\n");
  socket.end();
});

NOTE: Use only the bot_shell commands documented below; using any other commands can brick your unit or put it into a bad state.

API Reference

SID values for motors, servos, and the LED

  • 3: neck
  • 0 | 1: left wheel or right wheel
  • 20: LED

- init

Enable the neck servo and wheels

- sleep

Disable the neck servo and wheels

- reboot [sid]

Restart a component

  • sid: id of the component

Example:

reboot 1

- torque [sid] [state]

Set a component to be enabled or disabled

  • sid: id of the component
  • state: the state of the component on or off

Example:

torque 3 on

- pos [sid] [position] [time]

Set neck position for the bot

  • sid: id of the component. See the list of SID values above.
  • position: neck position, in the range 300 (looking down 90°) to 650 (looking up)
  • time: time to move the neck, in milliseconds

Example:

pos 3 400 200

- wake_head

Turn on the neck motor.

- rest_head

Turn off the neck motor.

- light_color [sid] [h] [s] [v]

Set the HSV color for the LED.

  • sid: id of the component
  • h, s, v: value in HSV color space. Range of h, s, v is (0, 255)

Example:

light_color 20 104 197 85

- light_boot_hue [sid] [h]

Set the h value for the LED component in the bot.

  • sid: id of the component
  • h: h value in HSV color space. Range of h is (0, 255).

Example:

light_boot_hue 20 100

- light_boot_anim [sid]

Turn the LED component in the bot off, then on again.

  • sid: id of the component.

Example:

light_boot_anim 20

- rot [sid] [val]

Rotate a wheel servo

  • sid: id of the wheel.
  • val: the distance to rotate.

Example:

rot 1 -1000

- pre_rot [angle] [speed]

Rotate the bot in place by a given angle.

  • angle: number of degrees to rotate. angle can be positive or negative (positive for turning right, negative for turning left).
  • speed: the speed of the bot. Range of speed is (0, 20)

Example:

pre_rot 360 4

- turn_center

Center wheels for calibration.

- pre_drive [distance] [speed]

Move the bot forward or backward.

  • distance: distance to move in millimeters. distance can be positive or negative (positive for moving forward, negative for moving backward).
  • speed: the speed of the bot. Range of speed is (0, 20)

Example:

pre_drive 300 4

- manual_move [lspeed] [rspeed]

Move the bot based on speed of the wheels. lspeed, rspeed can be positive or negative. Note that this function does not auto-stop. Call manual_move 0 0 to stop this function.

  • lspeed: speed of left wheel.
  • rspeed: speed of right wheel.

Example:

manual_move 1000 -1000

- pre_pattern [speed]

Run a test pattern that moves and rotates the bot.

  • speed: the speed of the bot. Range of speed is (0, 20)

Example:

pre_pattern 5

- battery

Query current battery and docked state.

- battery_query

Query advanced battery state.

- voltage [sid]

Show the voltage of a component.

  • sid: id of the component.

Example:

voltage 3

- version_hash [sid]

Read version hash of the component.

  • sid: id of the component.

Example:

version_hash 0

- version_date [sid]

Read version date of the component

  • sid: id of the component.

Example:

version_date 0

- discover

Show all available sids in the bot.

- apos [sid]

Show the status of the component.

  • sid: id of the component

Example:

apos 3

- scan_lidar_device

(Not available on all Ohmni) Scan lidar device in the bot.

Example:

scan_lidar_device

- lidar_scan

(Not available on all Ohmni) Call lidar device to start scanning data.

Example:

lidar_scan

- lidar_stop

(Not available on all Ohmni) Call lidar device to stop scanning data.

Example:

lidar_stop

- lidar_set_pwm [speed]

(Not available on all Ohmni) Set the speed for lidar motor.

  • speed: set speed for lidar motor. Range of speed is (0, 1023)

Example:

lidar_set_pwm 600

- lidar_release

(Not available on all Ohmni) Release serial port of the lidar device.

Example:

lidar_release

- start_collision_detection

(Not available on all Ohmni) Start auto stop on collision for the bot.

Example:

start_collision_detection

- stop_collision_detection

(Not available on all Ohmni) Start collision detection for the bot.

Example:

stop_collision_detection

- say [string]

Text to speech.

  • string: string to speak.

Example:

say hello

- autodock_ll

Run autodock if the camera is open (in a call, or with the camera app open). If the camera is not open, do nothing.

Example:

autodock_ll

- autodock

Open the camera app, then do autodock_ll.

Example:

autodock

- autodock_calibrate_ll

Run the old autodock calibration if the camera is open (in a call, or with the camera app open). If the camera is not open, do nothing.

Example:

autodock_calibrate_ll

- autodock_calibrate

Open the camera app, then do autodock_calibrate_ll.

Example:

autodock_calibrate

- vb_autodock_calibrate_ll

Run vision-based autodock calibration if the camera is open (in a call, or with the camera app open). If the camera is not open, do nothing.

Example:

vb_autodock_calibrate_ll

- vb_autodock_calibrate

Open the camera app, then do vb_autodock_calibrate_ll.

Example:

vb_autodock_calibrate

- send_to_in_call_api [json]

Send JSON message to the in-call overlay. See In-call mode to write an in-call overlay.

  • json: json data to send.

Example:

send_to_in_call_api {"name": "alex", "age": 20}

Note: in the in-call page, you need to call the function below to receive the data:

Ohmni.on('cdata', data => {
  // your custom code here
});

- send_to_stand_alone_api [json]

Send JSON message to the standalone overlay. See Standalone mode to write a standalone overlay.

  • json: json data to send.

Example:

send_to_stand_alone_api {"name": "alex", "age": 20}

Note: in the standalone page, you need to call the function below to receive the data:

Ohmni.on('cdata', data => {
  // your custom code here
});

Extending NativeJS

Ohmni NativeJS has a plugin system built in, allowing you to add custom logic and code at the JS level. To use the plugin system, simply create the plugins directory as follows:

mkdir -p /data/data/com.ohmnilabs.telebot_rtc/files/plugins

Now you can add .js files to this directory and they will be loaded at startup. For example, here's a simple plugin that I wrote as /data/data/com.ohmnilabs.telebot_rtc/files/plugins/sample_plugin.js:

// Load support for logging straight to logcat here
const LogCatter = require('logcatter');
const Log = new LogCatter('SampleOhmniJSPlugin');

class SampleOhmniJSPlugin {

  constructor(botnode) {
    this._botnode = botnode;
    Log.i("  -> SampleOhmniJSPlugin loaded!");

    // Bind some extra bot shell commands
    this._botnode._botshell.cmd_test_plugin_cmd = function(parts, rl) {
      this.log(rl, "Got test plugin call with: " + JSON.stringify(parts));
    }

  }

  hello() {
    console.log("Hello from plugin!");
  }

}

// IMPORTANT - do not forget this line to export the class
module.exports = SampleOhmniJSPlugin;

In this example, note that the key thing is you define and export a class via module.exports. The class name can be anything - you don't need to match the .js file name or anything.

The NativeJS core code will require() the file and instantiate the exported class. The constructor of your class is given a pointer to the core botnode object and you can do whatever you want with it.

In the above example, I have also shown how to add extra bot_shell commands, which is a useful way to query data or send custom commands. Here's an example of how I test calling that plugin command:

./node bot_shell_client.js
Connected to ohmni bot_shell.
test_plugin_cmd aha
Got test plugin call with: ["aha"]

You can of course write any other code you want here, i.e. running a network server, loading various node modules, etc.

As you make changes, you can stop and restart app.js by using the shell commands:

setprop ctl.restart tb-node (to restart it)
setprop ctl.start tb-node (to start it)
setprop ctl.stop tb-node (to stop it)

Extra notes

Send custom JSON to In-call Web API:

You can send a custom JSON message to the in-call overlay using the function sendCustomJsonMsg(json).

Example:

this._botnode.sendCustomJsonMsg(customJson);

On the overlay, you can retrieve the JSON with:

Ohmni.on('capi', function(msg) {
  console.log(msg);
})
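
Putting this together with the plugin system, here is a minimal plugin sketch (the class name CapiDemoPlugin and the command name capi_demo are illustrative) that exposes a bot shell command which forwards text to the in-call overlay via sendCustomJsonMsg:

```javascript
// Minimal plugin sketch: forwards text from a custom bot shell
// command to the in-call overlay, where Ohmni.on('capi', ...) sees it.
class CapiDemoPlugin {

  constructor(botnode) {
    this._botnode = botnode;
    // Bind a custom command: `capi_demo hello world`
    this._botnode._botshell.cmd_capi_demo = (parts, rl) => {
      this._botnode.sendCustomJsonMsg(this.buildMessage(parts));
    };
  }

  // Build the JSON string sent to the overlay.
  buildMessage(parts) {
    return JSON.stringify({ text: parts.join(' ') });
  }

}

module.exports = CapiDemoPlugin;
```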

How to run a process upon startup

If you want to launch a custom process at startup, you can write a plugin that execs the startup script of that process.

Here is an example showing how to start the most recent Docker container when the bot starts.


const LogCatter = require('logcatter');
const Log = new LogCatter('SampleOhmniJSPlugin');
const { exec } = require('child_process');

class StartupPlugin {

  constructor(botnode) {
    this._botnode = botnode;
    Log.i("  -> StartupPlugin loaded!");
    // Start the most recently created Docker container
    exec("docker start $(docker ps -lq)");
  }
}

module.exports = StartupPlugin;

Note: for details on writing a plugin, take a look at the Extending NativeJS section above.

Using npm

npm is fully installed on the latest developer edition update, and both node and npm are globally accessible via /system/bin/node and /system/bin/npm.

We recommend running npm to install modules from the /data/data/com.ohmnilabs.telebot_rtc/files/plugins directory. If you install or change the plugins under /data/data/com.ohmnilabs.telebot_rtc/files/assets/node-files directory, it may interfere with the update process.

Note: npm cannot install compiled modules at the moment. If you have compiled modules you need to run, we suggest compiling and running them on node.js inside Docker (see Layer 3).