Ohmni provides four layers of development APIs for maximum flexibility. The layers are as follows:

  1. Native JS
    1. Node.js-based control, device interface logic, and IPC running on the robot (a minimal IPC sketch follows this list)
  2. Web API
    1. Our cloud robotics framework, designed to make writing rich, interactive applications on the robot as easy as writing web apps
  3. Ohmni Docker Layer
    1. Full Ubuntu compatibility to develop and run low-level code such as computer vision or audio processing algorithms, custom sensor integrations, ROS, etc.
  4. Ohmni Kernel
    1. Full access to Ohmni kernel sources so you can recompile and add any drivers you need
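
To make the layering concrete, here is a minimal sketch of the kind of IPC glue the Native JS layer handles, using a plain Node.js Unix domain socket. The socket path, the newline-delimited JSON protocol, and the `status` command are illustrative assumptions, not part of the actual OhmniAPI:

```js
// Hypothetical example: a Native JS (Node.js) process exchanging
// newline-delimited JSON messages with a lower-layer process over a
// Unix domain socket. The path and message schema are assumptions.
const net = require('net');

const SOCKET_PATH = '/tmp/ohmni-demo.sock'; // hypothetical socket path

const client = net.createConnection({ path: SOCKET_PATH }, () => {
  // Ask the lower-layer process for its current status.
  client.write(JSON.stringify({ cmd: 'status' }) + '\n');
});

let buffer = '';
client.on('data', (chunk) => {
  buffer += chunk.toString();
  let idx;
  // Parse each complete newline-delimited JSON message as it arrives.
  while ((idx = buffer.indexOf('\n')) >= 0) {
    const msg = JSON.parse(buffer.slice(0, idx));
    buffer = buffer.slice(idx + 1);
    console.log('lower layer replied:', msg);
  }
});

client.on('error', (err) => console.error('IPC error:', err.message));
```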

Our goal is to put as much power as possible into the upper layers so that you only need to use the lower layers when there's a specific need.

More complex changes, such as integrating a LIDAR and adding autonomous navigation, may be best handled with a few changes throughout the stack. Using this as an example, we might imagine the following kinds of modifications:

  • Ohmni Kernel
    • Recompile the kernel to support a particular depth camera
  • Ohmni Docker Layer
    • Write low-level C/C++ code to read from the depth camera and process frames using OpenCV, ORB-SLAM, or similar
    • Run TensorFlow on some image data
    • Run roscore and some ROS nodes if needed
  • Native JS
    • Add logic to talk to the SLAM code in the native layer, get map and odometry data, and execute higher-level control (see the sketch after this list)
    • Report data to OhmniAPI (to display on the robot's screen) and to the cloud
  • Web API
    • Add a custom HTML/CSS UI that displays mapping and odometry data, using the real-time updates from the lower layers to refresh the map
    • Add speech commands and triggers so users can query or set the destination of the robot
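
As a sketch of the Native JS glue described above, the snippet below reads odometry from a hypothetical SLAM process in the Docker layer and relays it to the Web API layer over a WebSocket. The ports, the message schema, and the use of the `ws` package are assumptions for illustration; a real bridge would use Ohmni's own IPC paths and OhmniAPI calls:

```js
// Hypothetical sketch: relay odometry from a SLAM process (Docker layer)
// to the browser UI (Web API layer). Ports and schema are assumptions.
const net = require('net');
const WebSocket = require('ws'); // npm install ws

// The Web API page would open a WebSocket to this port for live updates.
const wss = new WebSocket.Server({ port: 8090 });

function broadcast(msg) {
  const data = JSON.stringify(msg);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  }
}

// Assume the SLAM process emits newline-delimited JSON poses such as
// {"x": 1.2, "y": 0.4, "theta": 0.1} on a local TCP port.
const slam = net.createConnection({ port: 9090, host: '127.0.0.1' });

let buffer = '';
slam.on('data', (chunk) => {
  buffer += chunk.toString();
  let idx;
  while ((idx = buffer.indexOf('\n')) >= 0) {
    const pose = JSON.parse(buffer.slice(0, idx));
    buffer = buffer.slice(idx + 1);
    broadcast({ type: 'odometry', pose }); // push each pose to the UI
  }
});

slam.on('error', (err) => console.error('SLAM link error:', err.message));
```

On the Web API side, the page would open `new WebSocket('ws://localhost:8090')` and redraw its map overlay each time an `odometry` message arrives.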

We'll go through each of these layers in separate sections next, as there is a lot of material to cover in each.