Overview

The Ohmni WebAPI provides an easy, sandboxed way for you to develop rich, interactive web applications that run on the robot.

The API gives you the ability to use any web application as the UI shown on the robot with full control over the display and audio. You can stream videos or music with HTML5, play YouTube videos, or run networked web games, etc.

Your web application can also make special calls that trigger the robot's physical interfaces - driving around, tilting the neck up and down, controlling the lights, and using the text-to-speech and speech interfaces.

There are two modes of operation: Standalone and In-call.

Standalone mode is a way to load and run your web app directly on the display of the robot when no user is connected. Essentially, your URL is loaded into a sandboxed WebView with special hooks so that you can call lower level functions.

In-call mode gives you access to the same suite of calls and controls, but the HTML5/CSS/JS logic runs in the calling user's browser, during the telepresence call. This gives you massive flexibility to add backend integrations, additional UI and interactive HTML5/CSS/JS, etc. all from within the call. The API commands are then RPC'ed over the telepresence connection and run on the bot.

Standalone mode

Write any standard web page in whatever framework you choose. It can be static pages, Rails, Express, etc. Make it as interactive as you like - it will appear on the robot's screen and people with the robot can click on buttons, etc.

Use our Ohmni WebAPI calls (documented below) to control the motion and behavior of the robot. It's as easy as that! You can trigger them on a JavaScript timer, on button presses, or in response to web requests - whatever you want.

Now, from the Ohmni web app (https://app.ohmnilabs.com), log in and click the gear icon to bring up additional settings for the robot that you want to run the code on. This expands additional fields, as shown below.

Simply enter the URL of the target page of your web app into the "Load URL" box and click the "Load" button:

[Screenshot: the "Load URL" field in the robot settings panel]

You should see the page load on your Ohmni's screen. If you are in a call, the page will cover the video view on the robot. To clear the page, click the "Clear" button. You can also make this URL persistent on Ohmni's screen by entering it into the "Home screen" box; whenever Ohmni starts up or is idle, it will show this page.

[Screenshot: the "Home screen" field in the robot settings panel]

Add ohmni-standalone.js to your app

Now, to get started with robot control calls in your web application, include ohmni-standalone.js in the header of your web app:

<script src="https://api.ohmnilabs.com/ohmni-api/Ohmni-standalone.js"></script>

This will allow you to use calls in the Ohmni.x namespace wherever you want to in your web application. For example, you could bind a button onclick handler to call the function as follows:

<button onclick="tiltdown()">Tilt neck down</button>
<script>
function tiltdown() {
    Ohmni.setNeckPosition(650, 100);
}
</script>
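
API calls can also be driven from a plain JavaScript timer instead of a button. Below is a minimal sketch (the nod() helper, the neck positions, and the 15-second interval are illustrative, not part of the API) that makes the robot glance down and back up periodically:

<script>
// Illustrative helper: tilt the neck down, then return to straight ahead.
function nod() {
  Ohmni.setNeckTorqueEnabled(1);
  Ohmni.setNeckPosition(450, 150);   // look down a bit
  setTimeout(function() {
    Ohmni.setNeckPosition(512, 150); // back to straight ahead
  }, 1500);
}

// Trigger the nod every 15 seconds on a JavaScript timer.
setInterval(nod, 15000);
</script>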

Sample Code

https://api.ohmnilabs.com/ohmni-api/test-standalone.html

In-call mode

The in-call API is very useful when you want to extend a telepresence session to include additional UI, information, interaction, etc. To spark your imagination, here are some cool applications people have built with it:

  • Robot arm teleoperation
    • OhmniLabs R&D has built some cool extensions to use Vive VR controllers with a Chrome browser extension to pipe the data over the call and teleoperate an arm added to Ohmni.
  • Patient data and caregiver survey
    • Some health care partners add in backend data integrations that allow the remote caregiver to log data about the patient during their visit, which goes straight to their own backend.

Version Requirements

Update to the latest Ohmni app version from the robot's settings menu. This API is only available on Gen12 hardware and above. To check:

  1. Log into the "Your Bots" page - http://app.ohmnilabs.com
  2. Hover over the robot name and click on the gear icon to the right of the name
  3. The version must be 4.0 or higher

Getting Started

Add ohmni-incall.js to your web app

<script src="https://api.ohmnilabs.com/ohmni-api/Ohmni-incall.js"></script>

Supported Ohmni WebAPI calls

Ohmni.move()
Ohmni.setLightColor()
Ohmni.setNeckPosition()
Ohmni.setNeckTorqueEnabled()
Ohmni.showPageOnBot("url") (version >= 4.0.2, in-call mode only)
Ohmni.hidePageOnBot() (version >= 4.0.2, in-call mode only)
Ohmni.requestBotInfo() (version >= 4.0.8.2)
Ohmni.captureVideo(resolutionWidth, resolutionHeight) (in-call mode only)
Ohmni.captureAudio() (in-call mode only)
Ohmni.stopCaptureAudio() (in-call mode only)
Ohmni.setBotVolume(value)
Ohmni.getBotVolume()
Ohmni.on('cdata', callback)
Ohmni.sendBotshellCmd()

For any clickable DOM element, add the data-external="true" attribute to avoid conflicts with the underlying video stream DOM.
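
For example, a minimal overlay button might look like the sketch below (the tiltUp() handler is illustrative, not part of the API):

<!-- data-external="true" lets this button receive clicks instead of the video stream DOM -->
<button data-external="true" onclick="tiltUp()">Tilt neck up</button>
<script>
function tiltUp() {
  Ohmni.setNeckPosition(600, 120);
}
</script>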

Set your app as an overlay

Host the app and set its URL in the "Overlay HTML" field of the Ohmni settings panel.

NOTE: If you do not see the "Overlay HTML" field, contact the OhmniLabs Help Desk to have the capability enabled.

[Screenshot: the "Overlay HTML" field in the robot settings panel]

Click "Save" and refresh the page for this to take effect. Connect to the robot and the app will be overlaid on the video stream.

Demo app

Paste the URL below into the “Overlay HTML” field (as above) to see a simple example. Feel free to look at the source to see how things work.

https://api.ohmnilabs.com/ohmni-api/test-incall.html

API Reference

Ohmni.move(lspeed, rspeed, time)

Sets the speed of the left and right wheels of the robot for a specified time in milliseconds.

This gives you low-level access to the motion of the base. Wheel speeds are most commonly in the -2000 to 2000 range, but start with slower speeds in the -500 to 500 range when testing. A positive speed rotates the wheel counterclockwise; a negative speed rotates it clockwise.

The time parameter is capped at 10000 ms (10 seconds). This bound is for safety: each command implicitly lasts at most 10 seconds, which prevents a single stray command from accidentally sending the robot driving off.

If you want continuous motion for longer than 10 seconds, you can call this function again while the previous movement is still running.

Examples:

Ohmni.move(-500, 500, 2000);   // Go backwards for 2 seconds
Ohmni.move(700, -700, 1500);   // Go forwards for 1.5 seconds
Ohmni.move(-200, -200, 5000);  // Rotate in place for 5 seconds
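
If you need motion for longer than 10 seconds, a minimal sketch (the speeds and the 2-second interval are illustrative) is to re-issue the command on a timer shorter than the command duration:

// Re-issue a 3-second forward command every 2 seconds so motion never lapses.
let driveInterval = setInterval(function() {
  Ohmni.move(500, -500, 3000);
}, 2000);

// Stop renewing the command later; the last move expires on its own.
// clearInterval(driveInterval);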

Ohmni.setNeckTorqueEnabled(en)

Turns on or off neck torque.

With torque off, the neck can be moved freely by hand; with torque on, the neck holds its position and can be driven by the Ohmni.setNeckPosition call.

CAUTION: DO NOT TRY TO MOVE THE NECK BY HAND WHEN TORQUE IS ON! This can permanently damage the neck servo. When the neck is not in use, it is good practice to turn torque off.

Examples:

Ohmni.setNeckTorqueEnabled(1);
Ohmni.setNeckPosition(650, 100);
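
Following the caution above, here is a minimal sketch that moves the neck and then releases torque once the motion has had time to finish (the 2-second delay is an assumption, not a documented guarantee):

Ohmni.setNeckTorqueEnabled(1);
Ohmni.setNeckPosition(650, 100);
setTimeout(function() {
  // Release torque after the motion has presumably completed.
  Ohmni.setNeckTorqueEnabled(0);
}, 2000);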

Ohmni.setNeckPosition(pos, ival)

Sets the position of the neck servo.

Provide a position in the range of 300 (looking down 90 degrees) to 650 (looking up about 45 degrees). 512 looks straight forward.

The ival parameter controls how long the move takes, i.e. larger values make the neck move more slowly to the desired position. Typical values are in the range of 80 (fast motion) to 220 (slow motion).

Examples:

Ohmni.setNeckTorqueEnabled(1);
Ohmni.setNeckPosition(350, 220);

Ohmni.setLightColor(h, s, v)

Sets the color of the underbody lighting in HSV space.

Call this to set the color of the LEDs beneath the robot. HSV values are all 0-255.

Example:

Ohmni.setLightColor(30,230,100);
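
As an illustrative sketch, you can animate the lights by stepping the hue on a timer (the step size and 100 ms interval are arbitrary):

// Cycle the hue through the full 0-255 range, keeping saturation and value fixed.
let hue = 0;
setInterval(function() {
  Ohmni.setLightColor(hue, 230, 100);
  hue = (hue + 5) % 256;
}, 100);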

Ohmni.setSpeechLanguage(locale_string)

Set the text to speech language of the robot.

Provide a locale string to set the text to speech language. Some examples: "en-US" for English and "zh-TW" for Traditional Chinese.

Call this before calling Ohmni.say with the appropriate string. You can switch between different languages at runtime.

Example:

Ohmni.setSpeechLanguage("zh-TW");
Ohmni.say("你好");
Ohmni.setSpeechLanguage("en-US");
Ohmni.say("Good morning!");

Ohmni.say(string_to_speak, [optional] callback)

Speaks a string using the current language settings. Triggers a callback when complete to make scripting easier.

This speaks the given string. Use UTF-8 encoded characters for Chinese or other languages. The callback is triggered when speaking is complete, so you can trigger other events (UI changes, robot motion or lights, etc.) on the callback.

Example:

Ohmni.say("Good morning!", function() {
  Ohmni.setLightColor(30, 230, 100);
});

Ohmni.requestBotInfo()

Returns metadata for the current robot.
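
This document does not name the callback that receives the metadata; the sketch below assumes, by analogy with the other callbacks in this API, a hypothetical requestBotInfoCb function:

// Hypothetical callback name, following the *Cb pattern used elsewhere in this API.
function requestBotInfoCb(info) {
  console.log(info);
}

Ohmni.requestBotInfo();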

Ohmni.captureVideo(resolutionWidth, resolutionHeight)

Retrieves the current frame from the video call and passes the data (a base64-encoded image) to the callback function captureVideoCb.

Examples:

// Define a function like the one below to capture roughly 30 frames per second.
let intervalGetVideo;
function captureVideo() {
  intervalGetVideo = setInterval(function() {
    // To change the resolution of the response, pass the parameters to
    // Ohmni.captureVideo(resolutionWidth, resolutionHeight).
    // The current default resolution is 300 x 150.
    Ohmni.captureVideo();
  }, 33);
}

// Callback invoked after Ohmni.captureVideo retrieves the current frame.
function captureVideoCb(imageBase64) {
}
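
A minimal usage sketch for displaying the captured frame (assumptions: your overlay contains an <img id="botFrame"> element, imageBase64 is a bare base64 string without a data URI prefix, and the image format is JPEG):

// Render the latest frame into an image element in the overlay.
function captureVideoCb(imageBase64) {
  document.getElementById('botFrame').src = 'data:image/jpeg;base64,' + imageBase64;
}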

Ohmni.captureAudio()

Retrieves the audio stream coming from the bot and passes the data (a Uint8Array) to the callback function captureAudioCb.

Examples:

// Run the line below to start getting the audio stream from the bot.
Ohmni.captureAudio();

// Define the callback function captureAudioCb(data); each chunk of data it receives
// from the audio stream is a Uint8Array(4096).
// Define a global array like the one below to collect the chunks so they can later be
// written to an audio file. recordAudio[0] will hold the header of the audio stream.
var recordAudio = [''];
function captureAudioCb(data) {
  if (data.length < 4096) {
    // Set the header of the audio file
    recordAudio[0] = data;
  } else {
    recordAudio.push(data);
  }
}

// When the call stops (or at any point during it), the code below writes the audio
// stream into a file by generating a URL of the form blob:https://api.ohmnilabs.com/xxx-xxx-xxxx,
// which can be downloaded or played back.
const blob = new Blob(recordAudio, { type: 'audio/wav' });
const url = URL.createObjectURL(blob);
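
Once the blob URL has been created, you can play the recording back or offer it as a download with standard browser APIs (the filename below is arbitrary):

// Play the captured audio back directly...
const player = new Audio(url);
player.play();

// ...or offer it as a download.
const link = document.createElement('a');
link.href = url;
link.download = 'ohmni-audio.wav';
link.click();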

Ohmni.stopCaptureAudio()

Stops retrieving the audio stream from the bot after a previous call to Ohmni.captureAudio().

Ohmni.setBotVolume(value)

Sets the bot volume. The value must be in the range 0-11.
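
Example (the value here is arbitrary):

Ohmni.setBotVolume(7);  // set the speaker volume to 7 on the 0-11 scale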

Ohmni.getBotVolume()

Retrieves the current bot volume and passes the value (a number) to the callback function getBotVolumeCb.

Examples:

// Define a callback function that will be called automatically after
// Ohmni.getBotVolume retrieves the current bot volume.
function getBotVolumeCb(value) {
}

Ohmni.on('cdata', callback)

Receives JSON data sent from the bot through the botshell API.

Examples:

Ohmni.on('cdata', data => {
  // your custom code here
});

NOTE: To send JSON data to the in-call or standalone overlay, see the botshell APIs send_to_in_call_api/send_to_stand_alone_api in the NativeJS API.

Ohmni.sendBotshellCmd({commandName, parameterArray})

Sends a NativeJS command to the robot. See the NativeJS APIs for the API reference. You can also create your own NativeJS API; see Extending NativeJS.

Examples:

// send custom botshell command
Ohmni.sendBotshellCmd({ commandName: "test_plugin_cmd", parameterArray: ["Call plugin cmd", 20, true] });
// send native botshell command
Ohmni.sendBotshellCmd({ commandName: "rest_head", parameterArray: [] });
setTimeout(() => {
  Ohmni.sendBotshellCmd({ commandName: "wake_head", parameterArray: [] });  
}, 2000);

NOTE: Only supported on version 4.1.9.1-devedition or later.

Ohmni.bindSpeechHandler(string_to_match, callback_id)

(currently disabled) Binds a particular string to be matched by speech recognition and triggers the associated callback when the string is heard.

Extra notes

Enable the Remote Debugger to debug the web application running on the Ohmni robot

When developing your web application, you might wish to remote-debug the web application running on the bot. Chrome DevTools lets you do this. Here are the steps to set up and use it with the Ohmni bot:

  1. Enable developer mode for the bot
  2. Connect to the bot with adb (make sure your PC and your bot are on the same network): $ adb connect <bot-ip>
  3. Open Chrome on your PC and go to Developer Tools > More tools > Remote devices
  4. You will see the bot and the page loaded on it
  5. Click on Inspect to start debugging the page

Reference: https://developers.google.com/web/tools/chrome-devtools/remote-debugging/

Update WebView component of Android OS

If your web application uses media-rich components (video playback or WebGL), you might need to update the WebView component to a later version.

(1) Download the following APK files to your computer

(2) Push those files to the robot with adb push (in the sample below, the files are pushed to the /sdcard/Download/ folder):

$ adb push <your-local-path>/com.google.android.webview_88.0.4324.181.com.apk /sdcard/Download/
$ adb push <your-local-path>/framework-res.apk /sdcard/Download/

(3) Copy framework-res.apk to the /system/framework folder (run this inside adb shell):

$ cp /sdcard/Download/framework-res.apk /system/framework/framework-res.apk

(4) Restart the robot

(5) Install the WebView APK (inside adb shell):

$ pm install -r -d /path-to-file/com.google.android.webview_88.0.4324.181.com.apk

Note:

  • A newer version of WebView has not been tested and might cause unknown behavior, so we recommend using this 88.0.4324.181 version.
  • The pm command does not terminate when it completes. You can open another console to query the WebView version; once the version is updated correctly, terminate the pm command with Ctrl+C:

$ dumpsys package com.google.android.webview | grep version 
    versionCode=432418106 minSdk=21 targetSdk=30
    versionName=88.0.4324.181

(6) Switch the WebView implementation to this new Android WebView through the Android Developer Settings:

  • Scroll down the screen and tap the Settings button to open Android Settings
  • Scroll down and tap on About Tablet
  • Tap on Build number 7 times to enable Developer mode (you will see a notice)
  • Go back to Android Settings; "Developer options" is now visible
  • Tap on "WebView implementation"
  • Select "Android Webview Play"

(7) Restart the robot

(8) Verify the webview implementation has been changed by:

$ settings list global | grep webview_provider
webview_provider=com.google.android.webview

(9) Test the new Android WebView by loading this URL from the Robot Settings page in app.ohmnilabs.com. This is the WebGL/three.js sample that can't be played properly in the old WebView implementation.