
ROS Basics – Using ROS Indigo/Jade with a Webcam by the uvc_camera (USB Video Class) package

There are several ways to use ROS Indigo/Jade with a webcam. One that works on most computers is the ROS package uvc_camera, created by Ken Tossell. UVC stands for USB Video Class, a standard that covers almost all consumer webcams.

Unfortunately, there is currently no step-by-step tutorial on how to use the package, which is why I created this page. Since the available package is outdated and does not contain any launch files, you will need a local catkin workspace, as we created in another post.

Step by Step Guide

We start by cloning the files into our workspace ./src directory, resolving the dependencies with rosdep and finally building the workspace with catkin_make:

cd ~/catkin_ws/src/ #change directory to your source folder
git clone https://github.com/ktossell/camera_umd.git #clone the package from its repo
rosdep install camera_umd uvc_camera jpeg_streamer #resolve the package dependencies
cd .. #go one dir up to catkin_ws
catkin_make #build the workspace

Before I could build the workspace with the newly cloned files, I also had to install the video4linux support libraries in their development version:

sudo apt-get install libv4l-dev

After catkin_make has finished, you can launch the uvc_camera node:

roscd uvc_camera/launch/
roslaunch ./camera_node.launch

With a roscore running, the camera_node.launch file should give you output similar to the following:

opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
pixfmt 1 = 'MJPG' desc = 'MJPEG'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
  int (Brightness, 0, id = 980900): 0 to 255 (1)
  int (Contrast, 0, id = 980901): 0 to 255 (1)
  int (Saturation, 0, id = 980902): 0 to 255 (1)
  bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
  int (Gain, 0, id = 980913): 0 to 255 (1)
  menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
    0: Disabled
    1: 50 Hz
    2: 60 Hz
  int (White Balance Temperature, 16, id = 98091a): 0 to 10000 (10)
  int (Sharpness, 0, id = 98091b): 0 to 255 (1)
  int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
  menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
  int (Exposure (Absolute), 16, id = 9a0902): 1 to 10000 (1)
  bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
Setting auto_focus is not supported
Setting focus_absolute is not supported

where you can see the supported modes, which you can now configure in your custom launch file:

<launch>
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera" output="screen">
    <param name="width" type="int" value="640" /> 
    <!-- we raised the value by the factor 2, as it is supported by previous output -->
    <param name="height" type="int" value="480" /> 
    <!-- we raised the value by the factor 2 -->
    <param name="fps" type="int" value="30" />
    <param name="frame" type="string" value="wide_stereo" />

    <param name="auto_focus" type="bool" value="False" />
    <param name="focus_absolute" type="int" value="0" />
    <!-- other supported params: auto_exposure, exposure_absolute, brightness, power_line_frequency -->
    <!-- in case you want to use a different video input device, change the value below -->
    <param name="device" type="string" value="/dev/video0" /> 
    <param name="camera_info_url" type="string" value="file://$(find uvc_camera)/example.yaml" />
  </node>
</launch>
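
Before opening a viewer, it is worth checking that the node actually publishes images. A minimal sketch, assuming the default topic names of the uvc_camera node:

rostopic list | grep image #should list /image_raw among others
rostopic hz /image_raw #should report roughly the configured fps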

You can now start rqt and its plugin Visualization > Image View, choose e.g. the /image_raw topic, and in case you have a Club-Mate bottle and a copy of The Hitchhiker's Guide to the Galaxy by Douglas Adams around, you will get the following output:

RQT Image View UVC camera

 

 

ROS Basics – depthimage_to_laserscan with low cost depth sensors Asus Xtion or Microsoft Kinect

Most of my work depended on the efficient combination of the [amazon &title=Asus Xtion&text=Asus Xtion] and the [amazon &title=CubieTruck&text=CubieTruck] as a low-cost laser scanner. As the [amazon &title=Asus Xtion&text=Asus Xtion] usually delivers 3D sensor_msgs/PointCloud data and most SLAM algorithms need 2D sensor_msgs/LaserScan messages to work properly, we need a solution to this issue: depthimage_to_laserscan.
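
In case the driver and the conversion package are not installed yet, they are available from the Indigo repositories. A sketch, assuming the standard Indigo package names on Ubuntu 14.04:

sudo apt-get install ros-indigo-openni2-camera ros-indigo-openni2-launch ros-indigo-depthimage-to-laserscan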

If you already managed to use the ros-indigo-openni2-camera and ros-indigo-openni2-launch packages, you can use the following launch file:

<!-- this code originates from https://github.com/turtlebot/turtlebot/blob/hydro/turtlebot_bringup/launch/3dsensor.launch -->
<launch>
  <!-- "camera" should uniquely identify the device. All topics are pushed down
       into the "camera" namespace, and it is prepended to tf frame ids. -->
  <arg name="camera"      default="camera"/>
  <arg name="publish_tf"  default="true"/>

  <!-- Factory-calibrated depth registration -->
  <arg name="depth_registration"              default="true"/>
  <arg     if="$(arg depth_registration)" name="depth" value="depth_registered" />
  <arg unless="$(arg depth_registration)" name="depth" value="depth" />

  <!-- Processing Modules -->
  <arg name="rgb_processing"                  default="true"/>
  <arg name="ir_processing"                   default="true"/>
  <arg name="depth_processing"                default="true"/>
  <arg name="depth_registered_processing"     default="true"/>
  <arg name="disparity_processing"            default="true"/>
  <arg name="disparity_registered_processing" default="true"/>
  <arg name="scan_processing"                 default="true"/>

  <!-- Worker threads for the nodelet manager -->
  <arg name="num_worker_threads" default="4" />

  <!-- Laserscan topic -->
  <arg name="scan_topic" default="scan"/>

  <include file="$(find openni2_launch)/launch/openni2.launch">
    <arg name="camera"                          value="$(arg camera)"/>
    <arg name="publish_tf"                      value="$(arg publish_tf)"/>
    <arg name="depth_registration"              value="$(arg depth_registration)"/>
    <arg name="num_worker_threads"              value="$(arg num_worker_threads)" />

    <!-- Processing Modules -->
    <arg name="rgb_processing"                  value="$(arg rgb_processing)"/>
    <arg name="ir_processing"                   value="$(arg ir_processing)"/>
    <arg name="depth_processing"                value="$(arg depth_processing)"/>
    <arg name="depth_registered_processing"     value="$(arg depth_registered_processing)"/>
    <arg name="disparity_processing"            value="$(arg disparity_processing)"/>
    <arg name="disparity_registered_processing" value="$(arg disparity_registered_processing)"/>
  </include>

   <!--                        Laserscan 
     This uses lazy subscribing, so will not activate until scan is requested.
   -->
  <group if="$(arg scan_processing)">
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <!-- Pixel rows to use to generate the laserscan. For each column, the scan will
           return the minimum value for those pixels centered vertically in the image. -->
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>

      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
    
  </group>
</launch>

As you can see, depthimage_to_laserscan is loaded as a nodelet into the camera's nodelet manager.

Nodelets are designed to provide a way to run multiple algorithms on a single machine, in a single process, without incurring copy costs when passing messages intraprocess. (quote from the ROS wiki)

Nodelets hugely improve the performance of 3D point cloud processing and allow significantly higher publishing rates.
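
Once the launch file above is running, a quick way to confirm that laser scans arrive and to measure the actual publishing rate; a small sketch, using the default scan topic set by the scan_topic argument:

rostopic hz /scan #actual scan rate
rostopic echo -n 1 /scan #print a single sensor_msgs/LaserScan message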

An important property of a robot is its data rate. A low rate affects most higher-level algorithms and leads to incorrect results. The depth sensors in particular are required to publish enough data to create detailed maps. The [amazon &title=Asus Xtion&text=Asus Xtion] Pro driver OpenNI2 and the ROS package openni2_camera offer multiple run modes, which can be set via dynamic_reconfigure (I suggest using it in combination with rqt). Another essential option influencing performance is the data_skip parameter, which allows the system to skip a certain number of frames the hardware produces before loading them into memory, and thereby remarkably reduces computational load. It can be set to an integer value between zero, which means not skipping any frames at all, and ten, which means every eleventh frame is processed.
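
For example, data_skip can be changed at runtime via dynamic_reconfigure. A sketch, assuming the default camera namespace of openni2.launch, so the driver nodelet is reachable as /camera/driver:

rosrun dynamic_reconfigure dynparam set /camera/driver data_skip 4 #process only every 5th frame
rosrun rqt_reconfigure rqt_reconfigure #or browse all parameters graphically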

Performance Check

The different combinations of resolution, maximum frequency and the data_skip parameter that ran on the aMoSeRo (my low-cost [amazon &title=CubieTruck&text=CubieTruck] robot) are illustrated in the table below. As can be seen, the number of frames that has to be processed per second in particular has a strong influence on the whole system.

 

In conclusion, the depthimage_to_laserscan package is really useful when working with low-cost depth sensors like the [amazon &title=Asus Xtion&text=Asus Xtion] or the [amazon &title=Kinect&text=Microsoft Kinect]. Furthermore, it is essential when interfacing with SLAM algorithms.

ROS Basics – Step by step guide to a working ROS Indigo Ubuntu 14.04 Laptop/PC

We are beginning with a blank Xubuntu 14.04 Trusty x86 on a [amazon asin=B004URCE4O&text=Lenovo Thinkpad T520]. Any other working Ubuntu 14.04 x86 installation should be compatible with this tutorial.

Setup Ubuntu environment:

If you are a complete beginner with Linux and Ubuntu, I would advise you to install several tools that are necessary, or at least helpful, while working with ROS. To install them, use the following command and allow sudo to run with administrative permissions by entering your password when asked:

sudo apt-get install fail2ban ufw terminator git

In short: fail2ban scans log files and bans IPs that show signs of brute-force attacks; ufw is a 'human readable interface' to iptables and allows easy organisation of firewall rules; terminator is a terminal emulator that provides multiple terminals at once without leaving the keyboard; and git is a source code versioning system.

There are more tools that are helpful, but can be considered as optional:

sudo apt-get install vim vnstat htop bmon chromium-browser

Setup ROS desktop environment Ubuntu 14.04 Trusty:

To install ROS itself we can easily follow the well written tutorials provided by their wiki:  http://wiki.ros.org/indigo/Installation/Ubuntu .

In short the commands are like shown below:

  • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
  • wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install ros-indigo-desktop-full
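
The wiki additionally recommends initializing rosdep afterwards, which we will need later anyway to resolve package dependencies:

sudo rosdep init
rosdep update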

Setup ~/.bashrc part I:

In order to work correctly, ROS requires several bash environment variables that are not very well documented in the install tutorial. You can either enter the following commands every time you start a new bash, or add them to ~/.bashrc, the script that gets executed every time you start a bash.

The most important command:

source /opt/ros/indigo/setup.bash

enables bash to provide all ROS related commands like roscore and rostopic.

In order to work in a network environment (see the ROS wiki), ROS also requires three more variables, namely:

export ROS_MASTER_URI="http://127.0.0.1:11311"
export ROS_HOSTNAME="127.0.0.1"
export ROS_IP="127.0.0.1"

ROS_MASTER_URI defines the IP address and port of the roscore, while the other two define the IP of the local instance. As you can see, in the example all IPs are the localhost address 127.0.0.1, and they need to be changed accordingly in order to work across a network.
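
As an illustration, here is a sketch for a setup with the roscore running on a robot at 192.168.1.10 and a laptop at 192.168.1.20 (both addresses are placeholders):

#on the robot (runs the roscore):
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.10"
export ROS_IP="192.168.1.10"
#on the laptop:
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.20"
export ROS_IP="192.168.1.20"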

To simplify the IP settings, I suggest modifying the commands like this:

export ROS_MASTER_URI="http://`ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"

This sets the IPs to the address of the local wlan0 adapter.

Create catkin workspace:

To use non-packaged versions of ROS packages, or the latest versions that have not yet been released to the repositories, you'll need a local catkin workspace. Catkin is the ROS build tool that is required to build packages from source. It allows multiple programming languages per package and handles linking dependencies. To create a local workspace you can follow the ROS wiki tutorial: http://wiki.ros.org/catkin/Tutorials/create_a_workspace.

In short, you can also follow these commands:

  • mkdir -p ~/catkin_ws/src
  • cd ~/catkin_ws/src
  • catkin_init_workspace

We will now build the empty work space as a first test:

  • cd ~/catkin_ws/
  • catkin_make
  • source devel/setup.bash

Setup ~/.bashrc part II:

We also need to reference the newly created local workspace in our .bashrc. Without doing that, tools like roslaunch and rosrun would not be able to find our custom packages.

source /home/insert-your-username/catkin_ws/devel/setup.bash
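
To avoid typing the source commands in every new terminal, you can append them to your ~/.bashrc once, for example:

echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc #reload the current shell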

You can now build, clone or fork your custom packages and can therefore call your PC a working ROS Indigo environment! 🙂

ROS Tools you can try:

Low cost Hector_mapping with Xtion, 9DRazor IMU and no hardware odometry

This weekend I had the chance to do indoor SLAM by simply walking through my flat with an [amazon asin=B005UHB8EK&text=Asus Xtion] (150 EUR), a 9DRazor IMU (plus a 3.3 V FTDI adapter and cable, around 100 EUR) and a common [amazon asin=B004URCE4O&text=Laptop].

By setting up ROS Indigo and using existing software, I can now create a simple 2D map of my flat. Thanks to the software from the PhD programme Heterogeneous Cooperating Teams of Robots (Hector) at TU Darmstadt, which I slightly modified to fit the low-cost setup, the results are quite impressive.

The Xtion is not capable of delivering a 360 degree view, which is why I needed to walk slowly. By changing the setup from a weak ARM board to a powerful Intel i5, data rate and size were way better than what the aMoSeRo was capable of:

For the ROS-interested folks, here are some ROS-related graphs:

It has been a weekend project, therefore the source code and some semantic things are not beautiful, but they work. For example, the TF tree statically imitates the setup suggested in the Hector wiki.
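
For illustration, the static part of such a TF tree can be published with static_transform_publisher. A rough sketch; the frame names follow the Hector tutorial and the zero offsets are assumptions for my setup:

rosrun tf static_transform_publisher 0 0 0 0 0 0 base_footprint base_stabilized 100 &
rosrun tf static_transform_publisher 0 0 0 0 0 0 base_stabilized base_link 100 &
rosrun tf static_transform_publisher 0 0 0.1 0 0 0 base_link laser 100 &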

Maybe we could profit from using two Xtions and merging their /scan topics. That way we would get a roughly 300 EUR replacement for a 2D laser scanner costing at least 1000 EUR, and we would still be able to capture 3D point clouds of everything later.

Thesis passed.

I finished my thesis and passed. As a consequence, I will soon publish my work on this website. Unfortunately, German copyright law forbids using pictures taken by others without the corresponding licenses or paying for them. I am currently thinking of replacing certain pictures with slightly worse, but free, versions before uploading it. I may also divide the 50 pages into semantic parts and publish them as a static part of the website soon.

Colloquium

I am going to present the complete thesis and demonstrate the aMoSeRo on:

Monday 20th, October 2014 – 10.00 am
URZ-3409 – Universitätsrechenzentrum
Bernhard-von-Cotta-Straße 1
09599 Freiberg


Synchronize the time in ROS offline environments without chrony

As our [amazon &title=CubieTruck&text=CubieTruck] faced strange issues when using chrony, and internet access is not a general prerequisite for ROS setups, I needed to figure out a new way to synchronize the time when no internet NTP server is available. For some reason even my local NTP was broken, which is why I set the time on all clients according to the ROS master with this simple bash command:

ntpdate `echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`

It simply extracts the IPv4 part of the $ROS_MASTER_URI environment variable and uses ntpdate to set the time on the executing client system.

In case you only want to know the exact time deviation, consider using the ntpdate parameter -q, which only queries the server without setting the clock.
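
For example, combining it with the extraction from above (query only, nothing is changed):

ntpdate -q `echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"` #prints offset and delay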

aMoSeRo – 3D Cheese!

I have implemented a 3D photo function that is not very computationally intensive. It is triggered by pressing a key whenever a snapshot is wanted. This makes it possible to add a big point cloud around the map, as you can see here:

Now the writing needs to be finished, and the code needs to be commented and cleaned up. The last two weeks have already begun.

aMoSeRo – mapping the reality

Today is the day of the first accurate aMoSeRo map. I tried several SLAM algorithms, e.g. the hector_mapping package, but the data was too bad. So after reviewing nearly all my code and fixing a lot of unit issues and publishing rates, today the first map has been created, and it really is a map of the place where I live!

Using Chrony on CubieTruck

Don’t.

Unless you really know what you are doing.

To synchronize the clock and fix a minimal time shift I was detecting, I followed the TurtleBot2's idea of using chrony. Chrony is a little daemon that connects to your Linux clock or hwclock and detects shifts. For some reason this led to total chaos on the aMoSeRo.
I suppose chrony hasn't been built for multicore, dynamically clocked processors like the A20, which is why the drift was erratic and up to 2 seconds per minute.

sudo apt-get remove chrony

Removing it fixed all timing errors on the [amazon &title=CubieTruck&text=CubieTruck]. It is also a bit disturbing how small changes can affect complex setups.