Tag Archives: PointCloud

ROS is about: simulation, simulation and simulation …

ROS needs to know everything about the physics of a robot. It starts with the dimensions, which are needed to avoid collisions – both with the outside world and with the robot itself (e.g. if it is using two robot arms at once). It is also relevant where the sensors are – or in my case, where the [amazon &title=Asus Xtion&text=Asus Xtion] is located relative to the robot's base. Another interesting piece of information is the robot's joints: they are needed to drive the wheels and to rotate the camera. For all that, a detailed description and representation of the robot in a format that a computer understands is essential.
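In ROS this description usually lives in a URDF file. The following is only a minimal sketch of what such a file could look like for a setup like mine – a box-shaped base with the Xtion mounted on top and one drive wheel; all names, dimensions and offsets here are made up for illustration and are not taken from my actual robot:

```xml
<?xml version="1.0"?>
<robot name="my_robot">

  <!-- base of the robot: a simple box used for visualisation and collision checking -->
  <link name="base_link">
    <visual>
      <geometry>
        <box size="0.4 0.3 0.15"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <box size="0.4 0.3 0.15"/>
      </geometry>
    </collision>
  </link>

  <!-- the Asus Xtion, modelled as its own link -->
  <link name="camera_link">
    <visual>
      <geometry>
        <box size="0.04 0.18 0.03"/>
      </geometry>
    </visual>
  </link>

  <!-- where the camera sits relative to the base; a continuous joint so it can be rotated -->
  <joint name="camera_joint" type="continuous">
    <parent link="base_link"/>
    <child link="camera_link"/>
    <origin xyz="0.1 0 0.3" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
  </joint>

  <!-- one drive wheel, attached with a continuous joint so it can spin -->
  <link name="wheel_left_link">
    <visual>
      <geometry>
        <cylinder radius="0.06" length="0.03"/>
      </geometry>
    </visual>
  </link>
  <joint name="wheel_left_joint" type="continuous">
    <parent link="base_link"/>
    <child link="wheel_left_link"/>
    <origin xyz="0 0.17 -0.05" rpy="-1.5708 0 0"/>
    <axis xyz="0 0 1"/>
  </joint>

</robot>
```

From such a file ROS can derive the collision model, the sensor frames for tf and the joints that the simulation and the drivers have to move.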

Today I’ve made a huge step in the simulation field, so struggling with the motors in the real world for the last few days doesn’t feel too bad – at least I can generate some nice pictures now:

For me an interesting journey has started, with a lot of ups and downs – I am really excited to see where we will be in 18 weeks, because 2 weeks of my thesis have already passed.

ROS DepthCloud processing distributed

Today I achieved the following setup by dividing my openni2_launch files into two separate launchers executed on two different machines: one for processing (nodelet managing) running on a powerful server, and one for streaming the [amazon &title=Xtion&text=Asus Xtion] image data from the [amazon &title=CubieTruck&text=CubieTruck] to the /camera topic namespace. After that I could visualize what my laptop wasn’t able to do before: a 3D DepthCloud with RGB coloring in rviz. It has a native resolution of 640×480 and looks like this:

my first ROS 3D DepthCloud
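To give an idea of how such a split can look, here is a rough sketch of the two launch files. It assumes the driver nodelet exported by the openni2_camera package and the standard depth_image_proc processing nodelets; the manager names and namespaces are mine, the rectification steps that openni2_launch/rgbd_launch normally insert are left out for brevity, and the exact plugin names should be checked against the installed packages:

```xml
<!-- camera_streaming.launch: runs on the CubieTruck, only the camera driver -->
<launch>
  <group ns="camera">
    <!-- small nodelet manager just for the driver -->
    <node pkg="nodelet" type="nodelet" name="camera_nodelet_manager"
          args="manager" output="screen"/>

    <!-- OpenNI2 driver for the Asus Xtion; plugin name assumed from openni2_camera -->
    <node pkg="nodelet" type="nodelet" name="driver"
          args="load openni2_camera/OpenNI2DriverNodelet camera_nodelet_manager"
          output="screen"/>
  </group>
</launch>
```

```xml
<!-- depth_processing.launch: runs on the powerful server, the CPU-heavy part -->
<launch>
  <group ns="camera">
    <node pkg="nodelet" type="nodelet" name="processing_nodelet_manager"
          args="manager" output="screen"/>

    <!-- register the depth image into the RGB frame -->
    <node pkg="nodelet" type="nodelet" name="register_depth"
          args="load depth_image_proc/register processing_nodelet_manager"/>

    <!-- build the coloured point cloud that rviz displays -->
    <node pkg="nodelet" type="nodelet" name="points_xyzrgb"
          args="load depth_image_proc/point_cloud_xyzrgb processing_nodelet_manager"/>
  </group>
</launch>
```

Each launcher is started separately on its machine against the same ROS master (ROS_MASTER_URI pointing to wherever roscore runs), so the image topics travel over the network while the point cloud assembly stays on the server.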

I can’t say yet how efficiently the load is balanced, because I am still optimizing.