Realsense T265 tracking camera

One of the key points in our project so far has been to keep it simple. At the same time we have experimented with some custom hardware, like an Inertial Measurement Unit (IMU), rotary encoders and sonars. The IMU measures rotation and acceleration, the rotary encoders measure speed and the sonars detect obstacles in front of the car. All of these are pretty old and battle-proven technologies, but still quite low level. Meanwhile, there is a new wave of modern hardware becoming cheaper and more accessible.

When we saw someone in the Donkeycar Slack talking about the upcoming release of the Intel Realsense T265, we got interested. After we saw the list of features for the price, we knew it had to be tried. Even if it wouldn’t give us the speedup we’re looking for, it would be nice to have something to say about it.

So what is the T265?

The Intel store states that “T265 is a new class of stand-alone Simultaneous Localization and Mapping device, for use in robotics, drones and more”. Nice. But what is the T265 actually? It is a small USB device that knows its own position and movement, using only 1.5 watts of power. It includes a stereo camera with wide-angle lenses, an IMU and an Intel Movidius Myriad 2 vision processing unit (VPU). So if someone asks what it is, we can say it’s a cheap two-eyed edge AI, for maximum hype.

The device can give you the following data (in relation to where it was started):

  • B&W camera frames for both cameras
  • Position in three axes
  • Rotation as a quaternion (see the heading sketch right after this list)
  • Velocity in three axes
  • Acceleration in three axes
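
Since the rotation comes as a quaternion, it isn’t very human-readable on its own. Below is a minimal sketch of turning it into a compass-style heading; the helper name is ours, and it assumes the T265’s documented coordinate frame where X points right, Y up and Z backward:

import math

def heading_from_quaternion(w, x, y, z):
    # Rotation about the Y (up) axis in radians: zero when facing
    # straight ahead (-Z), positive when turning left.
    return math.atan2(2.0 * (x * z + w * y),
                      1.0 - 2.0 * (x * x + y * y))

With the pose data from the Python example further down, this would be called as heading_from_quaternion(pose_data.rotation.w, pose_data.rotation.x, pose_data.rotation.y, pose_data.rotation.z).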

Back to the reasoning: why did we buy one? We simply thought that having access to the exact location of our car would be highly beneficial for the upcoming races and events. As a bonus, we also get the velocity without adding moving parts or other extra hardware.

T265 fresh from the oven

Unboxing

In these modern times of unboxing experiences, one simply has to describe the unboxing. It was a quite pleasant experience. The actual device felt cold and smooth in its metal shell; we opened it right after the courier brought it in from the cold Finnish outdoors, hence the coldness. It looked like a solid piece of hardware and had a decent weight.

It just had to be tested immediately, so the Realsense SDK was installed on an Ubuntu 16.04 laptop, and as soon as it began showing data, the device was strapped onto the shared longboard at our office. Pushing the longboard around the office showed pretty quickly that the tracking worked as intended and that the accuracy was quite awesome.

After tinkering for a while with the built-in GUI application called realsense-viewer, it was time to go deeper. It pretty soon became obvious that the documentation wasn’t yet on par with the other Realsense devices. There was pretty much no documentation and no examples apart from a single Python script in the repository. This wasn’t a big surprise, as the hardware was still in a preorder state. Still, it raised some concerns about actual usage.

Conclusion

We’ll jump to a midterm conclusion before going to the deep end. We haven’t tried the T265 in actual use yet, but we have some idea of how it works. In short: it’s really awesome and accurate when it works. It’s just that it sometimes has its weak moments and gets confused. And because the positional tracking is incremental, even the slightest error affects everything after it. We’ve seen some weird misbehaviour where the camera feed looks fine to a human, but the position still doesn’t change. Maybe these issues will be fixed in future firmware updates, or maybe we are doing something wrong. Nevertheless, we’re eagerly waiting for our printed track to arrive so we can test the camera in a real setting.

It’s also quite a limitation that the origin of the position data is always the spot where the camera was started. This is not an issue with the camera itself, but it’s still something to consider when building on top of it. We’ve thought about adding some known symbol or QR code to our printed track to calibrate the position (sketched below), but our environment is quite forgiving compared to any real use. With all these unknowns, we are still really excited to see what the community will build with these.
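
To make the calibration idea a bit more concrete, here is a hypothetical sketch (the names and the plain 2D math are ours, nothing T265-specific): once the car’s true pose on the track is known at a single moment, for example right after detecting a known marker, every later camera position can be rotated and shifted into track coordinates.

import math

def make_track_transform(cam_x, cam_z, cam_heading,
                         track_x, track_z, track_heading):
    # The camera pose (cam_*) and the known true pose (track_*) at the
    # calibration moment define a fixed 2D rotation + translation.
    dtheta = track_heading - cam_heading

    def to_track(x, z):
        # Rotate around the calibration point, then translate.
        dx, dz = x - cam_x, z - cam_z
        return (track_x + dx * math.cos(dtheta) - dz * math.sin(dtheta),
                track_z + dx * math.sin(dtheta) + dz * math.cos(dtheta))

    return to_track

Every later (x, z) position from the camera would then be passed through to_track before being used for anything track-related.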

Raw stereo camera stream. FOV is a bit reduced because of our 3D-printed case

Left: Camera frame with OpenCV distortion removal. Right: 2D position plotted on surface

Compile

Using the T265 requires RealSense SDK 2.0, which supports a variety of operating systems. When it comes to Linux, though, only Ubuntu 16.04 and 18.04 are supported, with no mention of Raspbian. Still, communities and clever individuals don’t let such facts slow them down.

Luckily, a few days after we received our T265, a member of the Donkeycar community named Doug LaRue told us that he had gotten his unit working on Raspbian. He was nice enough to share his sorcery, and we got the Realsense SDK compiled on Raspbian. Thanks a lot, Doug!

After initially writing these instructions, I got a tip that the repository contains official instructions for Realsense on Raspbian. You can find those here. They include stuff like OpenCV, which you don’t need on your headless Raspbian-running robot, so you might as well go with our simpler instructions as long as they work.

Here are the steps required to get it going:

1. Ensure your SD card partition is expanded

This may be an obvious step, but I’ll mention it anyway. If you’re using an OS image someone else made, like the Donkeycar image, odds are that it isn’t using all the space available on your SD card. The partition can be expanded easily using sudo raspi-config (Advanced Options -> Expand Filesystem).

2. Increase your swap size

Compilation requires plenty of memory, and the Raspberry Pi does not have enough RAM, so a sufficiently large swap file is needed. We therefore have to increase the default swap size.

# Toggle swap off
sudo dphys-swapfile swapoff

# Edit the config file and increase the swap size to 2048
# by editing variable CONF_SWAPSIZE=2048
sudo nano /etc/dphys-swapfile

# Toggle swap back on
sudo dphys-swapfile swapon

# Reboot your Raspberry Pi
sudo reboot

# After reboot, check that swap size changed
free

# Should show something like Swap: 2097148
3. Compile the SDK

The actual compilation of the library, simplified from Doug’s example by skipping the examples and GUI stuff.

# At first update everything
sudo apt update
sudo apt upgrade -y

# Install dependencies
sudo apt install git libssl-dev libusb-1.0-0-dev pkg-config -y
sudo apt install cmake python3-dev raspberrypi-kernel-headers -y

# Clone the repository under home
cd ~
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense

# Install udev rules
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && udevadm trigger

# Create the destination directory
mkdir build
cd build

# Remove stale files if this is not your first run
xargs sudo rm < install_manifest.txt
rm CMakeCache.txt

export CC=/usr/bin/gcc-6
export CXX=/usr/bin/g++-6
cmake -D CMAKE_BUILD_TYPE="Release" \
     -D FORCE_LIBUVC=ON \
     -D BUILD_PYTHON_BINDINGS=ON \
     -D BUILD_EXAMPLES=OFF ..

make -j4
sudo make install
sudo ldconfig
4. Use the compiled Python wrapper

The pyrealsense2 Python wrapper can be used by copying the compiled library file next to the script that imports it. The file can be found in /home/pi/librealsense/build/wrappers/python.

# Test the python library
mkdir ~/test
cd ~/test

cp /home/pi/librealsense/build/wrappers/python/pyrealsense2.cpython-35m-arm-linux-gnueabihf.so .

# Start the Python 3 REPL (the wrapper was built for Python 3)
python3

# Check that importing the library works
>>> import pyrealsense2 as rs
>>> print(rs)
<module 'pyrealsense2' from '...'>
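
While prototyping, an alternative to copying the .so next to every script is to point Python at the build output directly. This is just the same path from step 3 added to the module search path:

# Hypothetical alternative to the copy step above
import sys
sys.path.append('/home/pi/librealsense/build/wrappers/python')

import pyrealsense2 as rs
print(rs)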

HSP 94186, the first proud host of our T265

Using Realsense with Python

As stated above, the Realsense SDK includes a Python wrapper named pyrealsense2. The following example is far from perfect, but it gets you going.

# This assumes .so file is found on the same directory
import pyrealsense2 as rs

# Prettier prints for reverse-engineering
from pprint import pprint
import numpy as np

# Get realsense pipeline handle
pipe = rs.pipeline()

# Configure the pipeline
cfg = rs.config()

# Prints a list of available streams, not all are supported by each device
print('Available streams:')
pprint(dir(rs.stream))

# Enable streams you are interested in
cfg.enable_stream(rs.stream.pose) # Positional data (translation, rotation, velocity etc)
cfg.enable_stream(rs.stream.fisheye, 1) # Left camera
cfg.enable_stream(rs.stream.fisheye, 2) # Right camera

# Start the configured pipeline
pipe.start(cfg)

try:
    for _ in range(10):
        frames = pipe.wait_for_frames()

        # Left fisheye camera frame
        left = frames.get_fisheye_frame(1)
        left_data = np.asanyarray(left.get_data())

        # Right fisheye camera frame
        right = frames.get_fisheye_frame(2)
        right_data = np.asanyarray(right.get_data())

        print('Left frame', left_data.shape)
        print('Right frame', right_data.shape)

        # Positional data frame
        pose = frames.get_pose_frame()
        if pose:
            pose_data = pose.get_pose_data()
            print('\nFrame number: ', pose.frame_number)
            print('Position: ', pose_data.translation)
            print('Velocity: ', pose_data.velocity)
            print('Acceleration: ', pose_data.acceleration)
            print('Rotation: ', pose_data.rotation)
finally:
    pipe.stop()
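
One of the captions above mentions removing the fisheye distortion with OpenCV. Here is a hedged sketch of one way to do that, assuming the T265 reports a Kannala-Brandt distortion model whose first four coefficients map onto OpenCV’s cv2.fisheye module (the helper name is ours):

import cv2
import numpy as np

def undistort_fisheye(frame):
    # Build the camera matrix from the intrinsics the SDK reports
    # for this fisheye stream.
    intr = frame.profile.as_video_stream_profile().get_intrinsics()
    K = np.array([[intr.fx, 0.0, intr.ppx],
                  [0.0, intr.fy, intr.ppy],
                  [0.0, 0.0, 1.0]])
    D = np.array(intr.coeffs[:4])  # Kannala-Brandt k1..k4

    # In real use, compute these maps once and cache them instead of
    # rebuilding them for every frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (intr.width, intr.height), cv2.CV_16SC2)
    return cv2.remap(np.asanyarray(frame.get_data()), map1, map2,
                     interpolation=cv2.INTER_LINEAR)

Inside the loop above, undistort_fisheye(left) would then give a more rectilinear-looking frame at the cost of some field of view.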

What next?

Now you know how to get the T265 working on your Raspberry Pi robot, so it’s time to start doing something more useful with it. We have started to share some examples in our repository, but it’s still quite sparse. Hopefully we can share the Donkeycar part soon as well.