Our objective for this iteration is to find a way to clear Turtlebot's costmap, so that long-gone obstacles do not stay on the map, while Turtlebot still avoids transient obstacles such as a person or a chair.
We are using the `roslaunch cr_ros campus_rover.launch` command to bring up Turtlebot. This launch file launches amcl with a configuration similar to the `amcl_demo.launch` file in the `turtlebot_navigation` package. Then we run rviz to visualize the static floor plan map and the costmap.
In previous demos, we found that Turtlebot would successfully mark transient obstacles, such as a person passing by, on its costmap and avoid them. But it failed to unmark them even after they were gone. These marks of long-gone obstacles would cause the path planner to avoid them, and eventually Turtlebot would get stuck because it could not find a valid path to its goal.
We found a possible pattern and cause for this problem. In this post thread, someone mentions that:
"Costmap2D seems not to "clear" the space around the robot if the laser scan range is max."
We tested this claim. Indeed, when the obstacle in front of Turtlebot is beyond the max range of its camera sensor, someone who passes through the empty space between that obstacle and Turtlebot's camera will be marked permanently on the costmap. However, if the obstacle is within the camera sensor's max range, someone passing through will be marked and then unmarked immediately once that person is gone.
The above post thread also mentions an explanation for this behavior:
"Its actually not the costmap that is ignoring max_scan_range values from the laser, its the laser projector that takes a laser scan and turns it into a point cloud. The reason for this is that there is no guarantee that max_scan_range actually corresponds to the laser not seeing anything. It could be due to min_range, a dark object, or another error condition... all the laser knows is that it didn't get a return for one reason or another. "
Based on our experiment and this explanation, a possible solution to the max_range problem could be setting up a scan filter chain. Theoretically, when a scan value is "max_range", we could replace it with a big number such as 100 (Turtlebot's scan range is 10 meters). However, we could not make this work this week, so we will run more experiments in the coming week.
The `campus_rover.launch` file includes another launch file, `move_base.launch.xml`, from the `turtlebot_navigation` package. In `move_base.launch.xml`, a `move_base` node is launched with a bunch of parameters stored in yaml files. This node basically runs the navigation stack for Turtlebot and also includes all the costmap drawing/clearing behaviors.
What we found was that in `turtlebot_navigation/param/move_base_params.yaml`, all the parameters for recovery behaviors were commented out. According to the ROS documentation on expected robot behaviors, recovery behaviors are an essential part of a robot's navigation: when the robot perceives itself as stuck (unable to find a valid path to its goal), it should perform recovery behaviors to clear its costmap and then replan a path.
Therefore, we brought back a recovery behavior with the code:
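(A sketch of the relevant block in `move_base_params.yaml`, assuming the standard clear_costmap_recovery plugin; the behavior name `aggressive_reset` is illustrative and may differ from our repo.)

```yaml
recovery_behaviors:
  - name: 'aggressive_reset'
    type: 'clear_costmap_recovery/ClearCostmapRecovery'

aggressive_reset:
  # clear everything outside a 0.0 m radius, i.e. the whole costmap
  reset_distance: 0.0
```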
`reset_distance: 0.0` means Turtlebot clears its costmap outside of a 0.0 m radius, so it will clear the whole costmap when it perceives itself as stuck. Based on our experiments and previous experience, Turtlebot is pretty good at dodging obstacles that are not on its costmap, so this "aggressive" reset is safe for most occasions. The exception is when Turtlebot is physically surrounded by obstacles that are very close to it, but in that extreme circumstance a "conservative" costmap clearing would also be useless, because clearing only the costmap several meters away would not be enough to unstick it.
We also specified the costmap layer name, since the `obstacle_layer` is the one that contains all the marks of dynamic obstacles:
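(Again a sketch: `clear_costmap_recovery/ClearCostmapRecovery` takes a `layer_names` parameter, and the surrounding names are the same placeholders as above.)

```yaml
aggressive_reset:
  reset_distance: 0.0
  # only clear the layer that holds the dynamic obstacle marks
  layer_names: ["obstacle_layer"]
```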
These changes ensure the clearing of the costmap when Turtlebot perceives itself as stuck, so it will no longer get stuck on marks of long-gone obstacles.
We will look more into implementing the scan filter, so that Turtlebot will immediately unmark a gone obstacle even if the scan is out of max range.
During navigation I've run into a lot of different errors and warnings. I copied some of the frequent ones here:
What these problems have in common is that they all involve information loss in the update cycles of different parts of the navigation stack. This can be caused by the computer not having enough processing power for the desired update frequency, which is actually not high at all, like the 1.0000Hz and 5.0000Hz in the warning messages above.
I then found that both of the laptop's CPUs sit at nearly 100% usage during navigation. I checked each node one by one. Rviz is very CPU-hungry: when running navigation and rviz together, the CPUs are at nearly 100% usage. But we can avoid using rviz once we have fiducials working. Besides rviz, several of our custom nodes are also very CPU-hungry.
For now, we are using a Dell 11-inch laptop as the onboard computer for Turtlebot. The situation might not be the same if a more powerful device is used. However, generally speaking, in our custom nodes we should avoid publishing/updating at a frequency that is too high.
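For example, a custom publisher loop can be throttled with rospy.Rate; the topic and the 5 Hz figure here are only illustrative:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

rospy.init_node('example_publisher')
pub = rospy.Publisher('/chatter', String, queue_size=1)

# 5 Hz is plenty for most status-style topics; higher rates just burn CPU
# on a small onboard laptop without adding useful information.
rate = rospy.Rate(5)
while not rospy.is_shutdown():
    pub.publish(String(data='status update'))
    rate.sleep()  # sleeps just long enough to hold roughly 5 Hz
```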
Also please remember to check CPU usage if you find these errors and warnings again during navigation.
This week, we built a node to handle the problem of fiducial detection - namely, the question of what the robot does when it doesn't know where it is. This would happen in two cases:
Bringup: When the robot is initially started up, it won't know where it is unless it happens to have a fiducial in view already.
The "kidnapped robot problem": When someone picks up the robot and moves it, its localization won't recognize the new position, so the robot needs to identify that it's been moved.
In both of these cases, the robot must act on its state of being lost by semi-intelligently wandering around the area and avoiding obstacles until it sees a fiducial and can re-localize.
To solve this problem, we first added two new states to `state.py`: FLYING and LOST. These states will be used to identify when the robot is being picked up and when the robot doesn't know where it is, respectively.
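In code this is just two new members on the state enum (a sketch; the existing member names and values in state.py are assumptions):

```python
from enum import Enum

class States(Enum):
    NAVIGATING = 1   # existing states such as this are assumed
    LOCALIZING = 2
    FLYING = 3       # new: the robot has been picked up (wheels dropped)
    LOST = 4         # new: the robot was set down somewhere unknown
```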
The `lost_and_found` node

Becoming lost:
The node includes a subscriber to the `/mobile_base/events/wheel_drop` topic, which publishes a message every time the robot's wheels move up or down. As the wheels' natural state is to be pushed up while the body of the TurtleBot rests on top of them, a message that the wheels have dropped down indicates that the robot has been picked up, triggering a change into the FLYING state. Similarly, once the robot is flying, a message that the wheels are pushed up again indicates that the robot has been set down, so the robot is now LOST and able to start looking for fiducials.
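A minimal sketch of that subscriber, assuming the Kobuki base's kobuki_msgs/WheelDropEvent message and the set_state()/get_state() helpers around our state.py:

```python
# inside the lost_and_found node, after rospy.init_node(...)
import rospy
from kobuki_msgs.msg import WheelDropEvent

def wheel_drop_cb(msg):
    # DROPPED: the wheel extended downward, i.e. the robot was lifted off the ground
    if msg.state == WheelDropEvent.DROPPED:
        set_state(States.FLYING)
    # RAISED: the wheel was pushed back up, i.e. the robot was set down somewhere
    elif msg.state == WheelDropEvent.RAISED and get_state() == States.FLYING:
        set_state(States.LOST)

rospy.Subscriber('/mobile_base/events/wheel_drop', WheelDropEvent, wheel_drop_cb)
```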
Becoming found:
The while loop of our wandering is controlled by an if statement designed to catch `state != States.LOST`. Ideally, the fiducial detection will trigger the localization of the TurtleBot, which will change the state of the TurtleBot to LOCALIZING. Once the state changes, the while loop will break and the node will stop making the TurtleBot wander until LOST is detected again.
We ensure maximum camera coverage, for the best odds of finding a fiducial, by having the robot drive in a rectangular outward spiral away from where it had been:
The robot starts by spinning 360º, then driving a set distance to its right
The robot then spins 360º, turns 90º to the left, and drives that same distance
Until a fiducial is found:
The robot spins 360º
The robot turns left and drives for a slightly further distance than last time
The robot spins 360º
The robot turns left and drives the same distance as the previous step
Repeat, increasing the distance to be driven for the next two turns.
Implementation:
In order to ensure the best possible obstacle avoidance in this algorithm, rather than implement the driving ourselves, we send the movements described above to the robot as a series of AMCL goals using the following algorithm:
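The original code isn't shown here, so below is only a sketch of the idea, using actionlib to send move_base goals; the map frame, the 0.5 m step, and the get_state()/States.LOST helpers are assumptions about our state.py and node code.

```python
import math
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

def make_goal(x, y, yaw, frame='map'):
    """Build a MoveBaseGoal at (x, y) facing yaw in the given frame."""
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    q = quaternion_from_euler(0.0, 0.0, yaw)
    (goal.target_pose.pose.orientation.x,
     goal.target_pose.pose.orientation.y,
     goal.target_pose.pose.orientation.z,
     goal.target_pose.pose.orientation.w) = q
    return goal

def wander_spiral(start_x, start_y, step=0.5):
    """Rectangular outward spiral: drive a leg, turn 90 degrees left,
    and grow the leg length every two legs, until the robot is found."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    x, y, yaw, leg = start_x, start_y, 0.0, step
    legs_at_this_length = 0
    while get_state() == States.LOST and not rospy.is_shutdown():
        # (spin 360 degrees here, e.g. via cmd_vel, to scan for fiducials)
        x += leg * math.cos(yaw)
        y += leg * math.sin(yaw)
        client.send_goal(make_goal(x, y, yaw))
        client.wait_for_result()
        yaw += math.pi / 2.0            # the next leg turns 90 degrees to the left
        legs_at_this_length += 1
        if legs_at_this_length == 2:    # after two legs, widen the spiral
            leg += step
            legs_at_this_length = 0
```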
One potential bug arising from AMCL-based wandering is that the robot would forget any AMCL goal it had been working towards when it was kidnapped. To fix this, we have included a `/move_base_simple/goal` subscriber. Whenever it receives a message, indicating a new AMCL goal, it saves that goal in this node as `current_goal`.
In our `flying_or_lost` method, which recognizes wheel drops as described above, we have included a check for whether the robot's state at the moment of kidnapping was FLYING. If the state was NAVIGATING, such that the robot was in the middle of AMCL, we set `lock_current_goal` to True, which acts as a flag indicating that our node should stop saving new incoming goals because our AMCL-based wandering is about to start.
Finally, our `if get_state() != States.LOST` block, which is responsible for resetting the node once wandering is complete, includes a check for `lock_current_goal`. If `lock_current_goal` is True, then the robot must have been working towards an AMCL goal prior to the kidnapping, so our node re-publishes that goal with an updated timestamp and the robot can continue its journey.
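A sketch of that goal bookkeeping (a rough reconstruction, not our exact node; the function names beyond current_goal and lock_current_goal are made up):

```python
import rospy
from geometry_msgs.msg import PoseStamped

current_goal = None
lock_current_goal = False

def goal_cb(msg):
    """Remember the latest AMCL goal, unless our own wandering goals are active."""
    global current_goal
    if not lock_current_goal:
        current_goal = msg

goal_sub = rospy.Subscriber('/move_base_simple/goal', PoseStamped, goal_cb)
goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)

def resume_previous_goal():
    """After re-localizing, re-publish the pre-kidnapping goal with a fresh timestamp."""
    global lock_current_goal
    if lock_current_goal and current_goal is not None:
        current_goal.header.stamp = rospy.Time.now()
        goal_pub.publish(current_goal)
    lock_current_goal = False
```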
I investigated further into the max_range of the camera and found that it was indeed 10 meters. When the camera is more than 10 meters away from an obstacle, the range readings in the `/scan` topic corresponding to the angle of the obstacle will be `nan`. Also, when an obstacle is within the minimum range of the camera, or the surface of the obstacle does not reflect any laser, the readings will be `nan`. These `nan` readings make move_base think there's something wrong with the laser, so it will not unmark an obstacle once it's gone... I wrote a filter node called `scan_filter.py` which replaces the `nan` readings with 9.9 (a number slightly smaller than max_range) and publishes to a new topic called `/scan_filtered`. Then I passed an argument to move_base in our launch file to make the costmap in move_base subscribe to `/scan_filtered`. However, amcl should still subscribe to the original `/scan` topic because localization relies on unfiltered readings.
At first I changed all the `nan` readings to 9.90, but later Alex helped me notice that the `nan` readings at the beginning and end of the array should not be changed, because they correspond to the laser being blocked by the robot's body. Therefore I chose not to change those `nan` readings.
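A minimal sketch of what scan_filter.py does, assuming the leading/trailing nan runs are how the body blockage shows up (our actual node may detect those edges differently):

```python
#!/usr/bin/env python
import math
import rospy
from sensor_msgs.msg import LaserScan

REPLACEMENT = 9.9  # slightly smaller than the camera's 10 m max range

def scan_cb(scan):
    ranges = list(scan.ranges)
    # Indices of real readings; leading and trailing nans are left alone because
    # they correspond to the laser being blocked by the robot's body.
    valid = [i for i, r in enumerate(ranges) if not math.isnan(r)]
    if valid:
        first, last = valid[0], valid[-1]
        for i in range(first, last + 1):
            if math.isnan(ranges[i]):
                ranges[i] = REPLACEMENT  # treat "no return" as "nothing there"
    scan.ranges = ranges  # keep the original header and angle metadata
    pub.publish(scan)

rospy.init_node('scan_filter')
pub = rospy.Publisher('/scan_filtered', LaserScan, queue_size=1)
rospy.Subscriber('/scan', LaserScan, scan_cb)
rospy.spin()
```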
Now the robot will immediately unmark an obstacle on the costmap once it is gone, even when the obstacle is beyond the camera's max range.
Fiducials are an attempt to localize a robot without any prior knowledge about location. They use unique images which are easily recognizable by a camera. The precise size and orientation of a fiducial tag in an image can uniquely localize a camera with respect to the tag. By measuring the location of the tag ahead of time, the location of the camera with respect to the global frame can be found.
Aruco is a standard for fiducial tags. There are many variants, but they generally look like this:
TF publishing

The transform from camera to fiducial can be combined with a transform between the map and fiducial to find the pose of the camera. This is best done using ROS's extensive `tf` tooling. Each fiducial location can be published as a static transform, and a `TransformListener` can find the total transform from map to camera.
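For example, once the static transforms are being published, the camera pose in the map frame can be looked up roughly like this (frame names are borrowed from elsewhere in these notes; error handling omitted):

```python
import rospy
import tf

rospy.init_node('fiducial_pose_lookup')
listener = tf.TransformListener()
listener.waitForTransform('map', 'rotated_fiducial_camera',
                          rospy.Time(0), rospy.Duration(4.0))
# trans is (x, y, z) and rot is a quaternion (x, y, z, w) of the camera in the map frame
trans, rot = listener.lookupTransform('map', 'rotated_fiducial_camera', rospy.Time(0))
```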
New tags can be placed on the map and published as static transforms from within the `bringup.launch` file. To find the `x`, `y`, and `z` position, use a tape measure.
The three Euler angles describing the orientation of the tag are more difficult to determine. To find the first rotation parameter, x, consider the orientation of the fiducial relative to the map: if the fiducial faces north, x = 0; if west, x = π/2; if south, x = π; if east, x = 3π/2. The second (y) component accounts for the fiducial leaning left or right on the vertical wall. If positioned straight up, it should be set to π, which is approximately 3. The third (z) component describes how far forward or back the fiducial is leaning. If the wall is vertical, z = 0. If leaning forward, 0 < z < π/2. If leaning backwards, 3π/2 < z < 2π.
x is red, y is green, z is blue
Add a new tag to the bringup. The static publisher accepts x, y, z for position and either yaw, pitch, and roll or a quaternion for rotation, but for your own sake please choose the former.
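For example, a tag entry in bringup.launch might look like this (the tag id, coordinates, and angles are made-up placeholders; the argument order is x y z yaw pitch roll parent child period_in_ms):

```xml
<!-- hypothetical tag 101, measured with a tape measure -->
<node pkg="tf" type="static_transform_publisher" name="fiducial_101_tf"
      args="3.20 1.45 1.10  1.5708 3.1416 0  map fiducial_101 100" />
```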
Fiducials were very challenging to implement and it's worth spending some time discussing the reasons why and how we can learn from the process.
To start, fiducial localization involves complex coordinate transforms from image space to tag-relative space and then to map-relative space. By itself, this would have been fine. It is easy for a computer to calculate the matrix multiplication at the heart of these transforms, but it is hard for a developer to debug. More clarity and better documentation about the output of the tooling would have helped. In addition, the transforms are implemented in quaternions and other complex formats which are difficult to understand in any form and took some time to get used to.
A few tools exist which solve "fiducial slam" and originally we tried to implement one of those to compute the transforms from image space to map-relative space. These tools are, in fact, not built to solve that problem, but to assist in mapping the fiducials in the first place - a problem less challenging in our use case.
The biggest breakthrough came when I began using the built-in `tf` tooling. This allowed me to work with a robust set of tools, including `rviz` for easy debugging. Through this process I was able to see that the `y` and `z` axes needed to be swapped and that an inversion of the transform was needed. These were not clear when using other tools, but were at the heart of the strange results we were seeing early on.
More familiarity with ROS would have brought me to `tf` sooner, but now I have that familiarity for next time. Altogether, I'm not sure what lesson there is to take away from this. Sometimes hardcore debugging is required.
Siyuan Chen, December 2018, sychen1996@brandeis.edu
Measure the static obstacles, such as walls, doors, and corners, that will never change. The measurements should be in centimeters.
Use keynote to draw the space.
It is better to have two people measure the walls, because a wall could be longer than 10 meters (1000 centimeters), which is very hard for one person to measure alone.
Using a mouse to draw the map is going to save you a lot of time, because you can easily drag the lines in Keynote.
Be patient. Each map is going to take you about four hours.
Ubiquity Robotics' module accurately finds aruco tags in camera images. It relies on a compressed image stream from the robot camera, with the camera's specs published as well. `aruco_detect` publishes a number of outputs, but crucially it sends a `geometry_msgs` transform relating the fiducial to the camera on the topic `/fiducial_transforms`. This needs to be inverted to get the transform from camera to tag.
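A sketch of that inversion using tf's transformations helpers, assuming the translation and quaternion have already been pulled out of the /fiducial_transforms message:

```python
import tf.transformations as tft

def invert_transform(translation, rotation):
    """Invert a (translation xyz, quaternion xyzw) transform,
    e.g. turn fiducial->camera into camera->fiducial."""
    mat = tft.concatenate_matrices(
        tft.translation_matrix(translation),
        tft.quaternion_matrix(rotation))
    inv = tft.inverse_matrix(mat)
    return tft.translation_from_matrix(inv), tft.quaternion_from_matrix(inv)
```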
`aruco_detect` is brought up as part of the `cr_ros` `bringup.launch` file, but can be independently launched with `roslaunch aruco_detect aruco_detect.launch` if the proper topics are supplied in the launch file. Another node in `cr_ros`, also started from the bringup file, takes the output of `aruco_detect` and publishes pose messages. Static `tf` publishers are also needed for tag positions; see "adding a new tag" above. One additional static publisher is needed to correct a 90° offset. This should relate the frame `fiducial_camera` to `rotated_fiducial_camera` with a transform of π/2 in the yaw (`rosrun tf static_transform_publisher 0 0 0 1.57079632679 0 0 /fiducial_camera /rotated_fiducial_camera 100`).