
ROS Basics – A Short Introduction to ROS

The Robot Operating System (ROS) is an open-source meta-operating system which provides essential features, namely hardware abstraction, low-level device control, implementation of commonly needed functionality such as visualisation, simulation and testing, and message passing between concurrently running processes [O’K13]. Furthermore, it offers implementations of widely used functionality in installable packages, which even cover complex algorithms like Simultaneous Localization and Mapping (SLAM) and Visual Object Recognition (VOR).

ROS moreover contains tools and libraries for obtaining, building, writing and running code across multiple heterogeneous computers and therefore includes language- and platform-independent tools. For example, ROS supports multiple client libraries, namely roscpp for C++, rospy for Python, roslisp for Lisp and many others. It is also possible to link application-related code against external libraries like OpenCV for computer vision or Eigen3 for efficient linear algebra computation. Furthermore, ROS can successfully be wrapped around other frameworks like the Player Project.

ROS is largely free of licence costs and has been developed as open-source software under the BSD licence, which offers a variety of advantages for a low-cost robot.

Unsurprisingly, the high complexity of ROS comes with one of the steepest learning curves of all robotic frameworks. Moreover, because core characteristics changed rapidly between major versions, nearly all books and most tutorials on the internet have become unreliable, which is often very confusing for a beginner. Once past the top of the curve, however, a lot of things are self-explanatory and complex features can be implemented very quickly.

Unfortunately, another point to mention and one of the main disadvantages of ROS is its dependency on the ROS host and its Operating System (OS). If you do not develop on an x86 32-bit system, a lot of the automation does not work and requires patience to fix. In particular, the support of packages on armhf, the ARM release repository, is not very usable yet. Additionally, despite the importance of reactivity and low latency, ROS is – like all comparable frameworks – not a realtime OS.

ROS general terminology

ROS is a message-based, concurrently running, heterogeneous peer-to-peer network application. Its structure can be imagined as a mostly undirected graph with an obligatory central process node, called roscore. Broadly speaking, this one master node tracks every other part of the robotic network, including running processes and their interfaces. The centralised design plays to its strengths by offering global debugging and error logging. The master further mediates direct connections between any two graph nodes on request. This becomes very useful in cases like image processing, where routing traffic over the central node would burden the whole system with additional network usage and processing load.
Still simplifying, the other parts of the graph are organised namespaces, called rosnodes, which in turn contain further rosnodes or process edges that are called, depending on their function, rostopics or rosservices. A rosnode in a ROS environment can therefore be a robot, a processing server for navigation or even a human interaction device like a laptop. Usually rosnodes do not physically cross the border of a single computing system, but a single system often runs multiple namespaces. Also, rosnodes can profit from zero-copy shared-memory handling between their topics by using the ROS nodelet manager, which significantly reduces memory consumption. Every rosnode offers at least one rostopic, a multi-peer subscribable message provider, or a rosservice, a bidirectional one-to-one connection between peers carrying parameters.
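
With a roscore running, these concepts can be explored directly from the command line. A minimal sketch – it assumes the standard ros-tutorials packages are installed, which provide the sample talker node:

roscore &                        # start the obligatory master node
rosrun rospy_tutorials talker &  # a sample rosnode that publishes a rostopic
rosnode list                     # list all rosnodes known to the master
rostopic list                    # list all rostopics, e.g. /chatter
rostopic echo /chatter           # subscribe to a topic and print its messages
rosservice list                  # list all offered rosservices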

ROS history

In 2007, the first robot running a version of ROS was the STanford Artificial Intelligence Robot (STAIR), developed by the Stanford Artificial Intelligence Laboratory (SAIL). At that time ROS was called switchyard but already followed its main principles, such as inter-process communication, concurrency and heterogeneous environments. After that, Willow Garage primarily developed ROS until February 2013, by which time ROS had reached the critical mass every open-source project needs to survive without being driven mainly by external funding. Since then, stewardship of ROS has moved to the Open Source Robotics Foundation, leaving Willow Garage.
Major versions of ROS are called distributions and are named using adjectives that start with successive letters of the alphabet: Box Turtle, C Turtle, Diamondback, Electric, Fuerte, Groovy, Hydro and finally Indigo, which has been available since May 2014.

ROS Basics – Using ROS Indigo/Jade with a Webcam and the uvc_camera (USB Video Class) package

There are several ways to use ROS Indigo/Jade with a webcam. The approach that works on most computers is a ROS package called uvc_camera, created by Ken Tossell. UVC in this context stands for USB Video Class, a standard that covers almost all consumer webcams.

Unfortunately, there is currently no step-by-step tutorial on how to use the package, which is why I created this page. In order to run the package, you will need a local catkin workspace as we created it in another post, because the available packaged version is outdated and does not contain any launch files, so we build it from source.
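
Before building anything, it is worth checking that Linux detects your webcam as a video device at all – a quick sanity check (the device name may differ on your system):

lsusb           # the webcam should appear in the USB device list
ls /dev/video*  # a UVC camera typically shows up as /dev/video0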

Step by Step Guide

We start by cloning the files into our workspace's ./src directory, solving the dependencies with rosdep, and finally building the workspace with catkin_make:

cd ~/catkin_ws/src/ #change directory to your source folder
git clone https://github.com/ktossell/camera_umd.git #clone the package from its repo
rosdep install camera_umd uvc_camera jpeg_streamer
cd .. #go one dir up to catkin_ws
catkin_make #build the workspace

Before I could build my workspace with the newly cloned files, I also had to install the video4linux support libraries in their development version:

sudo apt-get install libv4l-dev

After catkin_make has finished, you can launch the uvc_camera node:

roscd uvc_camera/launch/
roslaunch ./camera_node.launch

With a roscore running, the camera_node.launch file should give you output like the following:

opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
pixfmt 1 = 'MJPG' desc = 'MJPEG'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
  int (Brightness, 0, id = 980900): 0 to 255 (1)
  int (Contrast, 0, id = 980901): 0 to 255 (1)
  int (Saturation, 0, id = 980902): 0 to 255 (1)
  bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
  int (Gain, 0, id = 980913): 0 to 255 (1)
  menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
    0: Disabled
    1: 50 Hz
    2: 60 Hz
  int (White Balance Temperature, 16, id = 98091a): 0 to 10000 (10)
  int (Sharpness, 0, id = 98091b): 0 to 255 (1)
  int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
  menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
  int (Exposure (Absolute), 16, id = 9a0902): 1 to 10000 (1)
  bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
Setting auto_focus is not supported
Setting focus_absolute is not supported

Here you can see the possible run modes, which you can now configure in your custom launch file:

<launch>
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera" output="screen">
    <param name="width" type="int" value="640" /> 
    <!-- we raised the value by the factor 2, as it is supported by previous output -->
    <param name="height" type="int" value="480" /> 
    <!-- we raised the value by the factor 2 -->
    <param name="fps" type="int" value="30" />
    <param name="frame" type="string" value="wide_stereo" />

    <param name="auto_focus" type="bool" value="False" />
    <param name="focus_absolute" type="int" value="0" />
    <!-- other supported params: auto_exposure, exposure_absolute, brightness, power_line_frequency -->
    <!-- in case you want to use a different video input device, change the value below -->
    <param name="device" type="string" value="/dev/video0" /> 
    <param name="camera_info_url" type="string" value="file://$(find uvc_camera)/example.yaml" />
  </node>
</launch>
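
To try this configuration, save it under a name of your choice – my_camera.launch below is just a hypothetical example – and launch it the same way as the packaged file:

roscd uvc_camera/launch/
cp camera_node.launch my_camera.launch #then edit the parameters as shown above
roslaunch ./my_camera.launch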

You can now start rqt and its plugin Visualization > Image View, choose e.g. the /image_raw topic, and in case you have a bottle of Club-Mate and The Hitchhiker’s Guide to the Galaxy by Douglas Adams around, you’ll get the following output:

RQT Image View UVC camera
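
If you do not have rqt at hand, a quick command line check also confirms that frames are arriving (the topic name follows from the launch file above):

rostopic hz /image_raw #prints the measured frame rate, ideally close to the configured 30 fps
rostopic bw /image_raw #prints the bandwidth the raw images consume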

ROS Basics – Step by step guide to a working ROS Indigo Ubuntu 14.04 Laptop/PC

We begin with a blank Xubuntu 14.04 Trusty x86 on a Lenovo ThinkPad T520. Any other working Ubuntu 14.04 x86 installation should be compatible with this tutorial.

Setup Ubuntu environment:

If you are a complete beginner with Linux and Ubuntu, I would advise you to install several tools that are necessary, or at least helpful, while working with ROS. To install them, use the following command and allow sudo to run with administrative permissions by entering your password when asked:

sudo apt-get install fail2ban ufw terminator git

In short, fail2ban is an intrusion-prevention tool that protects you from brute-force attacks, and ufw is a ‘human-readable interface’ to iptables that allows easy organisation of firewall rules. Next, terminator is a terminal emulator that provides multiple terminals at once without your hands leaving the keyboard. Another essential tool is git, a source code version control system.

There are more tools that are helpful but can be considered optional:

sudo apt-get install vim vnstat htop bmon chromium-browser

Setup ROS desktop environment Ubuntu 14.04 Trusty:

To install ROS itself, we can simply follow the well-written tutorial provided by the ROS wiki: http://wiki.ros.org/indigo/Installation/Ubuntu.

In short, the commands are shown below:

  • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
  • wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install ros-indigo-desktop-full

Setup ~/.bashrc part I:

In order to work correctly, ROS requires several bash environment variables that are not very well documented in the install tutorial. You can either enter the following commands every time you start a new bash, or add them to ~/.bashrc, the script that is executed every time a bash starts.

The most important command:

source /opt/ros/indigo/setup.bash

enables bash to provide all ROS-related commands like roscore and rostopic.

In order to work in a network environment (see the ROS wiki), ROS also requires three more variables, namely:

export ROS_MASTER_URI="http://127.0.0.1:11311"
export ROS_HOSTNAME="127.0.0.1"
export ROS_IP="127.0.0.1"

ROS_MASTER_URI defines the IP location of the roscore, while the other two define the IP of the local instance. As you can see, in the example all IPs are the localhost IP 127.0.0.1 and need to be changed accordingly in order to work in a real network.

To simplify the IP settings, I suggest modifying the commands like this:

export ROS_MASTER_URI="http://`ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"

This sets the IPs to the address of the local wlan0 adapter.

Create catkin workspace:

To use non-packaged ROS packages, or the latest versions that have not yet been compiled into the repository, you’ll need a local catkin workspace. Catkin is the ROS build system that is required to build packages from source; it allows multiple programming languages per package and handles linking dependencies. To create a local workspace you can follow the ROS wiki tutorial: http://wiki.ros.org/catkin/Tutorials/create_a_workspace.

In short, you can also follow these commands:

  • mkdir -p ~/catkin_ws/src
  • cd ~/catkin_ws/src
  • catkin_init_workspace

We will now build the empty workspace as a first test:

  • cd ~/catkin_ws/
  • catkin_make
  • source devel/setup.bash
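
To make sure the overlay works, you can inspect ROS_PACKAGE_PATH, which should now list your workspace in front of the system installation:

echo $ROS_PACKAGE_PATH
#expected output (path assumed): /home/you/catkin_ws/src:/opt/ros/indigo/share:/opt/ros/indigo/stacks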

Setup ~/.bashrc part II:

We also need to reference the newly created local workspace in our ~/.bashrc. Without this, tools like roslaunch and rosrun would not be able to find your custom packages.

source /home/insert-your-username/catkin_ws/devel/setup.bash
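
Putting part I and part II together, the ROS-related block of your ~/.bashrc could look like this sketch (assuming the wlan0 setup from above – adjust the interface and paths to your system):

#ROS part I: base installation and network variables
source /opt/ros/indigo/setup.bash
export ROS_MASTER_URI="http://`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
#ROS part II: local catkin workspace overlay
source ~/catkin_ws/devel/setup.bash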

You can now build, clone or fork custom packages and can therefore call your PC a working ROS Indigo environment! 🙂


Synchronize the time in ROS offline environments without chrony

As our CubieTruck faces strange issues when using chrony, and internet access is not a general prerequisite for ROS setups, I needed to figure out another way to synchronize the time when no internet NTP server is available. For some reason even my local NTP was broken, which is why I set the time on all clients according to the ROS master using this simple bash command:

ntpdate `echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`

It simply extracts the IPv4 part of the $ROS_MASTER_URI environment variable and uses ntpdate to set the time on the executing client system.

In case you only want to know the exact time deviation, consider using the ntpdate parameter -q, which only queries the server instead of setting the clock.
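
As a minimal sketch, the whole procedure could be wrapped like this (it assumes $ROS_MASTER_URI is set and ntpdate is installed):

#extract the IPv4 address of the ROS master
MASTER_IP=`echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`
ntpdate -q $MASTER_IP   #query only: print the offset without touching the clock
sudo ntpdate $MASTER_IP #actually set the system time (requires root)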

LeapMotion and ROS

Today I got the chance to get my hands on a Leap Motion. As it uses depth information to track hands at short range from the device, and as a ROS driver package exists for it, I hoped to get a 3D PointCloud out of it. It costs about 80€ and could have been a cheap replacement for the Asus Xtion.

Unfortunately it’s not possible (yet?) – here is a very nice post explaining why.

But it is fun anyways to get both hands tracked:

Youtube Video

The ROS driver only exposes one hand to ROS – but we could do something like shown below to control the aMoSeRo:

Youtube Video

Later it would be a nice way to control a robot arm – but for now we leave that nice little device aside, as there is a lot of other stuff to be done.

aMoSeRo – first Simultaneous Planning Localization and Mapping (SPLAM)

Far from done, but right on the way – the aMoSeRo did its first 2D planning today. There are still a lot of adjustments needed for the mapping to work properly, but it’s already impressive to see ROS working.

first SPLAM – first navigation through a map

The node graph is still growing and will need some changes when used with multiple robots, but the organising goes on 🙂

first SPLAM - it is getting even worse, more topics, more nodes

aMoSeRo – very first mapping view available

After days with LaTeX and struggling with all the sensor data a mobile robot needs, today is the first day ROS showed me a small map view. It’s anything but stable and I can’t claim to understand everything yet – but because I haven’t had anything to report for some time, here is a small demonstration:

First Mapping Experience

Screenshot: map view

Screenshot: topics overview – aMoSeRo, a distributed system still far from optimal

Because my IMU doesn’t do its job as it should, I used a WiiMote with Motion Plus and ran it via a common ROS driver over Bluetooth.

Yeeeha 🙂

ROS Hydro / Indigo PointCloud to Laserscan for 2D Navigation

UPDATE: see my step by step guide for depthimage_to_laserscan here.

PointClouds cause a lot of processing and network traffic load. For 2D navigation, LaserScans are a good option to decrease these loads – in my case to below 30% of the original.
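
You can measure the difference on your own setup with rostopic – the point cloud topic name below assumes a standard openni2_launch configuration:

rostopic bw /camera/depth/points #bandwidth of the full 3D point cloud
rostopic bw /scan                #bandwidth of the derived 2D laser scan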

So after starting my customised openni2_launch (the launcher for openni_camera and both ROS drivers) with a custom OpenNI2 driver directory:

OPENNI2_DRIVERS_PATH=/usr/lib/OpenNI2/Drivers/ roslaunch /home/user/catkin_ws/src/robot_bringup/launch/openni2.launch

(Setting OPENNI2_DRIVERS_PATH fixes all issues with the camera driver nodelet.)

it is possible to process this down to a /scan topic by:

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

or directly in a launch file using the handy nodelet manager:

  <group>
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>

      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
  </group>
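
Before starting rviz, a quick check confirms that the conversion actually produces data (using the scan topic name as remapped above):

rostopic hz /scan #should report messages at roughly the frame rate of the depth camera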

After that it is possible to visualize the new topic using rviz, getting something like this:

Of course the robot is still able to produce laser scans with the Xtion (shown in rainbow colours indicating the z axis of the camera).

ROS is about: simulation, simulation and simulation …

ROS needs to know everything about the physics of a robot. It starts with the dimensions, to avoid collisions – both with the outside world and with the robot itself (e.g. if it uses two robot arms at once). Further, it is relevant where the sensors are – or in my case, where the Asus Xtion is located in relation to the robot’s base. Another interesting piece of information is the robot’s joints, which are needed to drive the wheels and rotate the camera. For all that, a detailed description and representation of the robot in a format that a computer understands is essential.
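
In ROS this description is written in URDF (Unified Robot Description Format). As a hedged sketch – the package and file names are hypothetical – such a model can be validated and displayed like this:

check_urdf ~/catkin_ws/src/amosero_description/urdf/amosero.urdf #print the link/joint tree (check_urdf ships with urdfdom)
roslaunch urdf_tutorial display.launch model:=$HOME/catkin_ws/src/amosero_description/urdf/amosero.urdf #visualize in rviz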

Today I made a huge step in the simulation field, so struggling with the motors in the real world for the last few days doesn’t feel too bad – at least I can generate some nice pictures now:

For me an interesting journey has started, with a lot of ups and downs – currently I am really excited about where we will be in 18 weeks, because 2 weeks of my thesis have already passed.