
ROS Basics – depthimage_to_laserscan with low-cost depth sensors Asus Xtion or Microsoft Kinect

Most of my work depends on the efficient connection between the Asus Xtion and the CubieTruck as a low-cost laser scanner. As the Asus Xtion usually delivers 3D sensor_msgs/PointCloud2 data, while most SLAM algorithms need 2D sensor_msgs/LaserScan messages to work properly, we need a bridge between the two: depthimage_to_laserscan.

If you already managed to get ros-indigo-openni2-camera and ros-indigo-openni2-launch running, you can use a launch file like the following:
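A minimal sketch of such a launch file is shown below; the topic name /camera/depth/image_raw, the frame id and the parameter values are assumptions that depend on your camera setup:

<launch>
  <!-- start the Xtion driver (publishes depth images under /camera/depth/...) -->
  <include file="$(find openni2_launch)/launch/openni2.launch"/>

  <!-- separate nodelet manager for the conversion -->
  <node pkg="nodelet" type="nodelet" name="laserscan_nodelet_manager"
        args="manager" output="screen"/>

  <!-- convert the depth image into a 2D sensor_msgs/LaserScan -->
  <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan"
        args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet laserscan_nodelet_manager">
    <remap from="image" to="/camera/depth/image_raw"/>
    <param name="scan_height" value="10"/>
    <param name="range_min" value="0.45"/>
    <param name="output_frame_id" value="camera_depth_frame"/>
  </node>
</launch>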

As you can see, the depthimage_to_laserscan nodelet gets loaded into a separate nodelet manager.

Nodelets are designed to provide a way to run multiple algorithms on a single machine, in a single process, without incurring copy costs when passing messages intraprocess. (quote from the ROS wiki)

They hugely improve the performance of our 3D point cloud processing and allow significantly higher publishing rates.

An important property of a robot is the rate at which it produces data. A low rate influences most higher-level algorithms and leads to incorrect results. In most cases it is especially the depth sensors that are required to publish enough material to create detailed maps. The Asus Xtion Pro driver OpenNI2 and the ROS package openni2_camera offer multiple run modes, which can be set via dynamic_reconfigure (I suggest using it in combination with rqt). Another essential option influencing performance is the data_skip parameter, which allows the system to skip a certain number of the frames the hardware produces before loading them into memory, and thereby remarkably reduces computational load. It can be set to an integer value between zero, which means not to skip any frames at all, and ten, which leads to every eleventh frame being processed.
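For example, data_skip can be set interactively via rosrun rqt_reconfigure rqt_reconfigure, or from a launch file using dynamic_reconfigure's dynparam tool. Here is a sketch; /camera/driver is the node name used by openni2.launch, and the value 2 is just an illustration:

<launch>
  <!-- skip 2 of every 3 frames (data_skip = 2) to reduce CPU load -->
  <node pkg="dynamic_reconfigure" type="dynparam" name="set_data_skip"
        args="set /camera/driver data_skip 2"/>
</launch>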

Performance Check

The different combinations of resolution, maximum frequency and the data_skip parameter that ran on the aMoSeRo (my low-cost CubieTruck robot) are illustrated in the table below. As can be seen, it is especially the number of frames that have to be processed per second that highly influences the complete system.
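The effective rate of a given configuration can be verified with rostopic hz; for convenience this can even be started from the same launch file. A sketch, again assuming the default openni2_launch topic name:

<launch>
  <!-- print the measured publishing rate of the depth image topic -->
  <node pkg="rostopic" type="rostopic" name="hz_depth"
        args="hz /camera/depth/image_raw" output="screen"/>
</launch>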


In conclusion, the depthimage_to_laserscan package is really useful when working with low-cost depth sensors like the Asus Xtion or the Microsoft Kinect. Furthermore, it is essential when interfacing with SLAM algorithms.

First presentation of aMoSeRo at the BHT in Freiberg, Germany

Today the BHT, a mining research forum in Freiberg, Germany, took place. As the aMoSeRo is one day supposed to run as a support robot in mining, this has been a great chance to show off for the first time what we've got so far. After 4 weeks: from zero to robot:

So we were able to demonstrate the Asus Xtion's features, like a live IR image and an RGB depth cloud visualized in RViz at about 1 fps, as well as driving around, including turning on the spot.

The metal cookie box we used had a negative effect on the WLAN reception, which we need to address soon, e.g. by changing the material or moving the antenna outside the box.

It has been a nice experience showing that little low-cost ROS robot to the public, and I am still very excited about where the journey leads in the remaining 4 months of my thesis.

Virtual rearrangement of aMoSeRo One

It's still not easy to find the right combination and arrangement of all the robot's parts. As mentioned in the previous post, SketchUp is a nice tool for easy 3D visualization using real physical dimensions. So I spent some time with it again:

So tomorrow I am going to try to buy the planned box and the new motors, hopefully posting real-world photos soon.

Sketch him up!

Today I am trying to set up all the parts of the coming robot. Because assembling and disassembling would take hours until the right configuration is found, and saving the steps in between would be impossible, I thought of a better way. Using my shopworn SketchUp skills, and way more time than I expected, there finally is an imperfect but practicable model with the most important parts that are going to be installed. All of them already have the correct physical dimensions, which also means that it would be possible to deploy the robot in RViz later, or at least parts of it.

Here are some early-stage impressions:

CubiBot1

concept phase: iso view

The wireframe boxes mark the space required by USB plugs or power jacks. These need to stay accessible and can't be blocked by anything else.

This robot is not ready yet; everything needs to be rearranged and boxed soon. Some parts are still missing, and no cables are shown, so everything will be more packed than it looks here.

ROS is about: simulation, simulation and simulation …

ROS needs to know everything about the physics of a robot. It starts with the dimensions, needed to avoid collisions, both with the outside world and with the robot itself (e.g. if it is using two robot arms at once). Further, it is relevant where the sensors are, or in my case where the Asus Xtion is located in relation to the robot's base. Another interesting piece of information is the robot's joints: they are needed to drive the wheels and to rotate the camera. For all that, a detailed description and representation of the robot in a format that a computer understands is essential.
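In ROS, that format is URDF (Unified Robot Description Format), an XML description of links and joints. Below is a minimal sketch of what such a file could look like for a robot of this kind; all names, dimensions and mounting positions are placeholders I made up, not the aMoSeRo's real measurements:

<?xml version="1.0"?>
<robot name="amosero">
  <!-- chassis; the box size is a placeholder, not a real measurement -->
  <link name="base_link">
    <visual>
      <geometry><box size="0.25 0.18 0.10"/></geometry>
    </visual>
  </link>

  <!-- depth camera, mounted rigidly relative to the base -->
  <link name="camera_link"/>
  <joint name="camera_joint" type="fixed">
    <parent link="base_link"/>
    <child link="camera_link"/>
    <origin xyz="0.10 0 0.15" rpy="0 0 0"/>  <!-- assumed mounting position -->
  </joint>

  <!-- one drive wheel; a continuous joint can rotate without limits -->
  <link name="left_wheel">
    <visual>
      <geometry><cylinder radius="0.04" length="0.02"/></geometry>
    </visual>
  </link>
  <joint name="left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="left_wheel"/>
    <origin xyz="0 0.11 -0.03" rpy="-1.5708 0 0"/>
    <axis xyz="0 0 1"/>
  </joint>
</robot>

With the urdf_tutorial package installed, such a file can be inspected in RViz with something like roslaunch urdf_tutorial display.launch model:=amosero.urdf.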

Today I've made a huge step in the simulation field, so struggling with the motors in the real world for the last few days doesn't feel too bad. At least I can generate some nice pictures now:

For me, an interesting journey has started, with a lot of ups and downs. Currently I am really excited about where we will be in 18 weeks, because 2 weeks of my thesis have already passed.