Architecture diagram here (outdated; Gen 3 will provide an updated one 😉)
First, take the laptop inside Mark-I out and turn it on. Then SSH into the Mark-I laptop from your own device with ssh turtlebot@129.64.243.64. Next, attach the Mark-I laptop to her mothership and bring her up by entering roslaunch cr_ros campus_rover.launch in the SSH terminal. (Note that this bringup starts roscore automatically, so we don't need to run roscore separately.) From past experience, wait for the "beep beep beep" sound effect from the Kobuki base and check for the "Odom received" message in the SSH terminal.
Then, on your own device, open another terminal and enter roslaunch cr_ros offboard_launch.launch. This runs the off-board bringup and starts Rviz so Mark-I can navigate. Once Mark-I has localized herself correctly, you can minimize the Rviz window or even close it without any harm.
The final step of booting Mark-I is the web server. Assuming your device already has Flask installed and the cr_web repo cloned, open another terminal window, cd into the cr_web directory, and enter flask run --no-reload --host=0.0.0.0 --with-threads. This starts the local web server; open a browser, go to localhost:5000/, and Mark-I should be ready for you to rock on the web interface! 🦄
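For convenience, here is the whole boot sequence as a terminal sketch (the IP address and launch/package names come from the text above; the cr_web path is an assumption and should match wherever you cloned the repo):

```bash
# From your own device: SSH into the Mark-I laptop
ssh turtlebot@129.64.243.64

# In the SSH terminal: onboard bringup (starts roscore automatically)
roslaunch cr_ros campus_rover.launch
# wait for the Kobuki beeps and the "Odom received" message

# In a new terminal on your own device: offboard bringup (starts Rviz)
roslaunch cr_ros offboard_launch.launch

# In another terminal on your own device: start the web server
cd ~/cr_web            # assumed clone location; adjust as needed
flask run --no-reload --host=0.0.0.0 --with-threads
# then browse to http://localhost:5000/
```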
The navigation algorithm we use is the most complex software controlling the rover. It must efficiently process sensor data, plan routes over long-range maps, and avoid short-range obstacles. A full-fledged campus rover must also handle complex obstacles like doors, elevators, and road crossings, each of which would require special navigation decision making. A fully functioning rover would also incorporate a unique localization algorithm to combine sensor data from fiducials, GPS, wifi signals, camera/lidar inputs, etc.
Currently, reliability is a more pressing concern in building a rover to meet the mark 1 requirements. While a solution which provides more control in navigation would allow us to better expand our solution beyond Volen 1, it is not feasible at this stage. The turtlebot navigation package has been well tested and is proven to carefully plan routes and avoid obstacles. It also incorporates a powerful adaptive Monte Carlo localization algorithm which draws on data points from a single-beam lidar or depth camera.
A better solution for the mark 1 rover is to make use of the robust turtlebot navigation code and supplement the AMCL localization with a separate fiducial localization system which will supply poses to the existing algorithm by publishing pose estimates. A navigation_controller will also wrap the standard navigation and localization nodes to provide additional control and feedback where possible.
It is also beneficial to wrap the cmd_vel outputs of navigation, which allows both for distinct wandering behavior to refine an initial pose and for teleop commands. We use the /cmd_vel_mux/inputs topics. This allows teleop commands to override navigation commands, which in turn override localization (wandering) commands.
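To illustrate the priority scheme, a cmd_vel_mux configuration might look roughly like the following (a sketch in the yocs_cmd_vel_mux YAML style; the "wander" topic, priority values, and output topic are illustrative assumptions, not the actual configuration):

```yaml
subscribers:
  - name:     "Teleop"
    topic:    "/cmd_vel_mux/input/teleop"
    timeout:  0.5
    priority: 2        # highest: teleop blocks everything else
  - name:     "Navigation"
    topic:    "/cmd_vel_mux/input/navi"
    timeout:  0.5
    priority: 1        # navigation blocks localization wandering
  - name:     "Localization wander"
    topic:    "/cmd_vel_mux/input/wander"
    timeout:  0.5
    priority: 0        # lowest priority
publisher: "/mobile_base/commands/velocity"
```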
At the top level, a central control node will manage core tasks like interacting with the tablet controls, publishing a map stored on disk, and deciding when the robot must return to recharge. Keeping these simple high-level tasks combined allows for rapid development, but they could be spun off into independent ROS nodes if the required functionality grows in complexity.
process_fiducial_transforms acts primarily as a translator from the camera-relative pose produced by the built-in aruco_detect node to a map-relative pose, based on its knowledge of the locations of fiducials in the map. The node is always running, but it only publishes a cur_pose message to assist AMCL localization when it actually sees a fiducial.
Takes button presses from the UI and sends cmd_vel messages. (However, there is a bug in this node: after the user teleops the rover, it cannot respond to new commands, such as returning to the charging station or going to a particular office. Gen 3 will fix this bug.)
Uses a move_base action to navigate. Subscribes to /web/destination, which parses JSON input, and /destination, which takes a PoseStamped.
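A minimal sketch of that pattern, assuming /destination carries a geometry_msgs/PoseStamped already expressed in the map frame (node and variable names here are illustrative, not the actual cr_ros code):

```python
#!/usr/bin/env python
import rospy
import actionlib
from geometry_msgs.msg import PoseStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def destination_cb(pose_stamped):
    # Wrap the incoming PoseStamped in a move_base goal and send it
    goal = MoveBaseGoal()
    goal.target_pose = pose_stamped
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state %d", client.get_state())

if __name__ == '__main__':
    rospy.init_node('navigation_controller_sketch')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    rospy.Subscriber('/destination', PoseStamped, destination_cb)
    rospy.spin()
```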
The campus rover 4 prototype is based on the "chefbot" from the textbook "Learning Robotics Using Python" by Lentin Joseph. Rather than using an OpenCR board like a TurtleBot3, it uses a Tiva C Launchpad. To write code for the Tiva C, we need two things:
The Energia IDE to write our code, compile it and upload it to the board.
rosserial_tivac to allow the Tiva C to act like a ROS node.
Go to the Energia download page to get the IDE. Then choose where to extract the files. You can put them wherever you like; I chose to keep them in my Downloads folder, so for me the path to the IDE and its related files/folders is ~/Downloads/energia-1.8.10E23-linux64/energia-1.8.10E23, because my version of Energia is 1.8.10E23. In the future, your version may be different.
To make it easy to open the Energia IDE, I added this alias to my .bashrc:
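For example (assuming the extraction path above and that the launcher script inside that folder is named energia; adjust the version number to match yours):

```bash
# in ~/.bashrc
alias energia='~/Downloads/energia-1.8.10E23-linux64/energia-1.8.10E23/energia'
```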
We need to download, build and install the rosserial_tivac package.
Run these five commands in your terminal:
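The exact five commands were not preserved in this copy of the notes, but a plausible equivalent (cloning rosserial_tivac into a catkin workspace and building it; the workspace path and repository URL are assumptions) looks like:

```bash
cd ~/catkin_ws/src
git clone https://github.com/vmatos/rosserial_tivac.git   # assumed upstream repo
cd ~/catkin_ws
catkin_make
catkin_make install
```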
Add this line to your .bashrc (preferably near similar lines that were automatically added when ROS was installed):
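The line in question is most likely the workspace overlay source line (shown here assuming a workspace at ~/catkin_ws):

```bash
source ~/catkin_ws/devel/setup.bash
```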
In the terminal, cd to where energia is. So, because my Energia is in the Downloads directory, I input:
Run ls to see the files and subdirectories in the folder. There should be a libraries directory. Run this command:
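The command referenced here is presumably the rosserial_tivac library generator for Energia, run from inside the libraries directory; if your version of rosserial_tivac names the script differently, check the package documentation for the exact name:

```bash
cd ~/Downloads/energia-1.8.10E23-linux64/energia-1.8.10E23/libraries
rosrun rosserial_tivac make_libraries_energia .
```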
If that command completes without error, you should find the directory ros_lib inside libraries. Congratulations! You can now turn your Tiva C Launchpad into a ROS node.
This guide up to this point was adapted from here
Open Energia. From the menu at the top, select Tools, then Boards, then Boards Manager.
In the Boards Manager, scroll down to Energia TivaC boards and install it.
Under Tools/Boards again, select LaunchPad (Tiva C) tm4c123 (80MHz).
Now, plug the Tiva C into your PC via USB. Under Tools you should now be able to select Ports, and the only option should be /dev/ttyACM0. Select it.
You are now configured to compile and upload your code to the Tiva C board by using the Upload button (it looks like an arrow pointing to the right).
Download the Ti udev rules
In a terminal, cd to where you downloaded the udev rules. Then move them with this command: sudo mv 71-ti-permissions.rules /etc/udev/rules.d/
Restart the udev service with this command: sudo service udev restart
PRO TIP: Follow the same three steps above on the robot's Raspberry Pi to give rosserial access to communicate with the Tiva C. (A restart may be required.)
For the node on your Tiva C to communicate with ROS, the computer it is attached to must be running the serial_node from the rosserial_python package. This node is installed by default along with ROS. Run it independently with rosrun, or add it to your project's launch file.
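For example (the port matches the /dev/ttyACM0 device selected earlier; the baud rate shown is rosserial's usual default and may need adjusting to match your firmware):

```bash
# run the rosserial bridge on the computer the Tiva C is plugged into
rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=57600
```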
This section of the Lab Notebook contains pages related to the ongoing campus rover project, specifically the cr_ros, cr_web, cr_ros_2, cr_ros_3 and rover_4_core repositories.
cr_ros and cr_web were the original pair of repos that gens 1 and 2 created to use with a TurtleBot2 (Spring 2018, Fall 2018)
cr_ros_2 is a port of cr_ros to work with TurtleBot3. It includes a number of unstable features from external repos that were created that semester. It works with the mutant branch of cr_web. (Spring 2019)
cr_ros_3 is a more stable version of cr_ros_2, which also works on TB3, with a number of tweaks under the hood that help the package run smoothly and be easy to pick up and develop on. (Fall 2019)
rover_4_core is low-level code intended to run on a new custom model of robot hardware, but it can be interfaced with to produce the same information as a TB3. A companion package to cr_ros_3. Development was cut short. (Spring 2020)
Pito Salas, November 2018, pitosalas@brandeis.edu
Basement space is set up like an office
We have to update the map to include fake hallways and offices
Lines will be taped to the floor to indicate the walls
There will be four "offices", two "hallways", one "lab", and one "reception"
We will move furniture into the "rooms", but navigation will be based on the map
Map is updated to match the tapes on the floor
Locations are added to the location table
A package has been delivered to reception
The robot is sitting in its charging station
Using a computer sitting on his desk, the receptionist summons the robot over
When the robot arrives at the receptionist it says: "hello Jo, did you call?"
The receptionist gives the robot the package and uses the laptop to send the robot to Jane's office
There are some people chatting and in the way
As the robot navigates around them it says, "Excuse me I am delivering a package to Jane's office"
Jane accepts the package and the robot starts driving back to its charging station
An evil genius picks the robot up and starts walking it to the Evil Genius office.
When placed back on the floor the robot realizes that it was kidnapped
It recovers by going back into the hall and finding the nearby fiducial
It then continues back to the charging station, awaiting its next mission
Along the way the robot sees Alex and says "Hello Alex, how's it going?"
The robot comes into the charging station and relaxes until the next mission
This section contains guides and notes from the campus rover 4 team.
There are currently two versions of the mutant in existence. They both have the same footprint, that of a Turtlebot 3 Waffle.
One of those robots has large green wheels but operates as a standard TurtleBot3 with ROS etc. The only difference is in the OpenCR firmware, where the wheel diameter, robot radius, and similar fields are modified to account for the different chassis and wheels. It is fully operational.
The other robot has blue wheels of similar size to the standard TB3 wheels. This model is where new motors and motor controllers are tested. The current motors are Uxcell 600 RPM encoder gear motors. When installing/wiring new motors, ALWAYS GO OFF OF THE PCB (printed circuit board); that is, follow whatever is actually written on the back of the motor. The motors are plugged into and controlled by a RoboClaw motor controller. For our purposes, it communicates over packet serial with an Arduino, which sends it serial commands. The RoboClaw is well documented and has a functional Arduino library that is used for most of its operation. It also comes with a Windows software interface that allows for simple initial programming and auto-tuning of the PID control.

The robot is equipped with a latching E-stop button. The robot does not currently drive particularly straight, but that does not seem to be the result of the speed control (the encoder counts stay similar); rather, it is likely a result of the alignment of the wheels. To attach the larger wheels to these motors, an adapter needs to be made or found from the smaller hex of the drive pin to a larger size. The RoboClaw controller is top of the line across the board. To interface with ROS, the best approaches would be to go through an Arduino using rosserial, or directly over micro-USB using the existing ROS-RoboClaw libraries.
Wiring diagrams and pictures:
Basically all the steps we think are needed to get the robot to work. This was gathered by the team and then experimented with and revised further by Pito during the post-semester break.
SSH into the robot’s onboard laptop (turtlebot@129.64.243.64)
Start a roscore
Do a bringup for the onboard laptop: “roslaunch cr_ros campus_rover.launch”
Wait until the bringup finishes. Look for “Odom received” message
Run roslaunch cr_ros offboard_launch.launch
Wait until the bringup finishes. Look for “Odom received” message
Install Flask
Clone cr_web repo
cd into cr_web
Make sure to bring up TB2 (with cr_ros), or run a local roscore
export ROS_MASTER_URI=_
export FLASK_APP=rover_app
flask run --no-reload --host=0.0.0.0 --with-threads
Note: --host=0.0.0.0 and --with-threads are only needed if other client machines need to access the hosted app.
Go to localhost:5000/, or replace localhost with the other machine's IP if you're accessing it from a client machine.
aruco_detect (for fiducial_msgs import error)
pip install face_recognition
Kobuki docking might be part of the general TB2 install or may need to be separate
sudo apt-get install ros-kinetic-moveit-ros-visualization
To improve upon version 2 of the code base, focusing on changes to:
Ease of expansion - building infrastructure that makes adding functional modules to the code base easy
Reclaiming functionality of some nodes from version 1 that went unused in version 2
Improving upon existing nodes
Cleaning up the code base by throwing away unused files
Continue reading for details on each aspect of this project.
Rather than lumping all nodes into either "onboard" or "offboard" launch files, we have created a series of launch files that can be added to either main launch file. Furthermore, these launch modules can be disabled in the command line. Here is how this was achieved:
Here is the launch module for Alexa voice integration:
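The module's exact contents were not preserved in this copy of the notes, but its shape is roughly this (the file name and node names below are illustrative assumptions; see the cr_ros_3 launch directory for the real module):

```xml
<!-- voice.launch: launch module for Alexa voice integration (illustrative sketch) -->
<launch>
  <node pkg="cr_ros_3" type="alexa_webhook.py" name="alexa_webhook" output="screen"/>
  <node pkg="cr_ros_3" type="voice_intents.py" name="voice_intents" output="screen"/>
</launch>
```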
It is added to the offboard launch like so:
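Inclusion follows the standard roslaunch arg/include pattern, roughly like this (file and arg names are assumptions):

```xml
<!-- in the offboard launch file -->
<arg name="voice" default="true"/>
<include file="$(find cr_ros_3)/launch/voice.launch" if="$(arg voice)"/>
```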
So if you wish to launch offboard without voice, use this command:
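Assuming the arg is named voice as in the sketch above, that command would look like:

```bash
roslaunch cr_ros_3 offboard.launch voice:=false
```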
Going forward, all new features should be added to their own launch module, where the module contains all the nodes required for the feature to work properly. The only exception is if a new feature is added to the core of the package and is required for the package to function properly.
A handful of changes have been made to the state manager interface. They include
get_state now communicates via the state query service, rather than directly accessing the current_state field.
get_state and change_state now have error handling for the event that the state services are unavailable.
New functions have been added to the state_tools file which should make interfacing with the state manager easier.
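A rough sketch of what such a helper might look like (the /state_query service name and the StateQuery service type are illustrative assumptions; see state_tools in cr_ros_3 for the real interface):

```python
import rospy
from cr_ros_3.srv import StateQuery   # assumed service type

def get_state():
    """Query the state manager; return None if the service is unavailable."""
    try:
        rospy.wait_for_service('/state_query', timeout=1.0)
        query = rospy.ServiceProxy('/state_query', StateQuery)
        return query().state
    except (rospy.ROSException, rospy.ServiceException) as e:
        rospy.logwarn('State service unavailable: %s', e)
        return None
```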
To make it easier for new users to begin working with the existing campus rover platform, we have created a set of install scripts that will make the setup time significantly faster, and remove all the guesswork. Refer to the cr_ros_3 package readme for installation instructions.
Prior to cr_ros_3, the campus rover platform only worked on robots running ROS Kinetic. Now the platform also runs on robots running ROS Melodic.
Formerly known as "message_switch", the talking queue was a node that did not receive much attention until now, as other nodes bypassed it by interfacing directly with the talk service. As the code base grows and more features may wish to use the on-board text-to-speech feature, a queue for all talking requests is a necessary feature.
Using the talk queue from another node is easy. First, import the following:
Set up a publisher to the /things_to_say topic
Whenever your node wishes to produce speech, utilize a line of code similar to this:
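Putting those three steps together (a minimal sketch; the node name and spoken text are just examples):

```python
import rospy
from std_msgs.msg import String

rospy.init_node('my_talking_node')
say_pub = rospy.Publisher('/things_to_say', String, queue_size=10)

# ... later, whenever the node wants to speak:
say_pub.publish(String(data='Package delivered!'))
```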
The location narrator is a novelty feature that has been reclaimed by:
introducing a new set of waypoints in files/waypoints.json that correspond to the basement demo area
modifying the existing script to narrate whenever the nearest waypoint or the state changes.
Inaccuracies in costmaps can lead to difficulty in navigation. As such, we have implemented costmap clearing under the following circumstances:
Navigation returns the 'aborted' code
The robot exits the flying state and enters the lost state
Here's why:
When navigation fails it is often because the costmap is muddled in some way that prevents a new route from being plotted. If the costmap is cleared and the goal is re-sent, then navigation may become successful.
A human lifting the robot creates a significant amount of lidar noise that is inaccurate because the robot is not on the ground. Therefore, when the robot is placed back on the ground it deserves a clean slate.
Here's how:
First, the Empty service type must be imported
Then establish a service proxy
Finally, any time a node requires the costmap cleared, utilize this line:
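Putting those steps together (move_base exposes costmap clearing as the standard /move_base/clear_costmaps service of type std_srvs/Empty):

```python
import rospy
from std_srvs.srv import Empty

# establish a service proxy to move_base's costmap-clearing service
clear_costmaps = rospy.ServiceProxy('/move_base/clear_costmaps', Empty)

# any time a node requires the costmap cleared:
clear_costmaps()
```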
The Raspberry Pi camera on Mutant is mounted such that, by default, it is upside down. Fortunately, Ubiquity Robotics' raspicam_node that we utilize supports dynamic reconfiguration. This can be done manually using the following shell command:
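That command is the standard dynamic reconfigure GUI, also referenced elsewhere in this notebook:

```bash
rosrun rqt_reconfigure rqt_reconfigure
```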
This will bring up a GUI which will allow various aspects of the camera configuration to be manipulated. It also allows for a given configuration to be saved in a yaml file. Unfortunately, changes made through dynamic reconfiguration are temporary, and using a GUI to reconfigure every time is tedium that we strive to eliminate. The solution lies in camera_reconfigure.sh. We saved our desired configuration to camera.yaml from rqt_reconfigure. camera_reconfigure.sh uses that yaml file in this crucial line:
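The crucial line is presumably a dynparam load call along these lines (the node name and yaml path are assumptions based on the description above):

```bash
rosrun dynamic_reconfigure dynparam load /raspicam_node camera.yaml
```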
Thus, automatic reconfiguration is achieved.
Transforms are tricky. Version 2 never quite had accurate fiducial localizations due to bad static fiducial transforms. Through trial and error, the static fiducial transforms have been edited and the pose estimates have become more accurate as a result. We found it useful to refer to the static transform documentation to help remember the order of the arguments supplied to the static transform publisher:
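For reference, the tf static_transform_publisher (Euler-angle form) takes its arguments in this order:

```bash
static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms
```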
If Rviz is not supplied the right settings file, it can produce a lot of frustrating errors. This was the case with our old settings file. We have generated a new settings file. How? Here are the steps:
launch default turtlebot navigation
save the settings from Rviz as a new file
set Rviz in mutant_navigation.launch to use the new settings file
make the necessary changes to the left bar to make Rviz visualize navigation properly (ex. using scan_filter as the scan topic rather than scan)
save your changes to your settings file
We have added ngrok_launcher.sh which will bringup an ngrok tunnel to the alexa webhook to allow the alexa voice nodes to function.
Minor updates have been made to cr_web to accommodate a few changes to the code base. They are:
localhost port changed from 5000 (the default) to 3000, to accommodate the cr web app and the ngrok webhook running on the same machine
Updated the map file to match current demo area layout.
Files related to the following were removed:
gen2 facial recognition
gen2 package delivery
gen3 facial recognition
gen3 hand detection
Fortunately, all these files still exist in version 1 and/or version 2 of the code base, so if they wish to be salvaged, they could be.
The campus rover code base is arguably in the best state it has ever been in. We eagerly look forward to how it will continue to grow in the future.
Make sure that ROS_MASTER_URI is set to wherever roscore is running.
Due to the novel coronavirus COVID-19, the efforts of the CR4 team in spring 2020 were cut short. In about two months we were able to complete quite a bit, but we were unable to get the new rover to a stable and usable state for future students. This document lists what was finished and elaborates on what remains to be done.
The biggest endeavor of this project was to use the diff_drive_controller to produce stable and reliable motor movement (in tandem with a PID loop on the Tiva C board) and odometry.
The embedded ROS node on the Tiva C publishes IMU and sonar topics, but their data is unreliable.
a basic urdf was constructed
basic launch files created for robot bringup
depth camera and lidar fusion as a point cloud
PID gains need to be more finely tuned. Refer to motor.h for the defines of the gains
IMU should be calibrated more finely. Perhaps DMP should be used?
diff_drive_controller publishes NaN in odom by default; perhaps some of the arrays in hw_interface.cpp need to be initialized with 0's? (See the sketch after this list.)
attach a more reliable lidar and prove that SLAM works with diff_drive
finish tuning navigation params
create a more detailed robot model and URDF (perhaps using xacro) in a CAD or other modeling software (Blender?)
build a simple system that can detect when the battery is at a low level and indicate it to the user in some way (a beeping noise, a topic, a light, turning off the motors, etc.)
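As a pointer for the NaN-odometry item above, here is a minimal ros_control-style hardware interface sketch with the joint arrays explicitly zero-initialized. This is illustrative only (the joint names are assumptions), not the actual hw_interface.cpp:

```cpp
#include <hardware_interface/joint_command_interface.h>
#include <hardware_interface/joint_state_interface.h>
#include <hardware_interface/robot_hw.h>

class RoverHW : public hardware_interface::RobotHW {
public:
  RoverHW() {
    // Register the two wheel joints; uninitialized pos/vel arrays here are one
    // plausible source of NaN odometry from diff_drive_controller.
    const char* names[2] = {"left_wheel_joint", "right_wheel_joint"};
    for (int i = 0; i < 2; ++i) {
      hardware_interface::JointStateHandle state(names[i], &pos_[i], &vel_[i], &eff_[i]);
      state_iface_.registerHandle(state);
      vel_iface_.registerHandle(hardware_interface::JointHandle(state, &cmd_[i]));
    }
    registerInterface(&state_iface_);
    registerInterface(&vel_iface_);
  }

private:
  hardware_interface::JointStateInterface state_iface_;
  hardware_interface::VelocityJointInterface vel_iface_;
  // zero-initialized so the controller never reads garbage on the first update
  double pos_[2] = {0, 0}, vel_[2] = {0, 0}, eff_[2] = {0, 0}, cmd_[2] = {0, 0};
};
```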
We hope that this is a substantial base with a clear direction to move forward with, once the time is right. When resuming work, please go to the hardware_interface branch of rover_4_core, which is the latest branch.
During navigation I've run into a lot of different errors and warnings. I copied some of the frequent ones here:
What these problems have in common is that they all involve information loss in the update cycles of different parts of navigation. This could be caused by the computer not having enough processing power for the desired update frequency, which is actually not high at all (e.g., the 1.0000 Hz and 5.0000 Hz in the warning messages above).
Then I found that both of the laptop's CPUs are at nearly 100% usage during navigation. I checked each node one by one. Rviz is very CPU-hungry; when running navigation and Rviz together, the CPUs reach nearly 100% usage. But we can avoid using Rviz once we have fiducials working. Besides Rviz, several of our custom nodes are also very CPU-hungry.
For now, we are using a Dell 11-inch laptop as the onboard computer for the TurtleBot. The situation might differ if more powerful devices are used. Generally speaking, though, our custom nodes should avoid publishing/updating at too high a frequency.
Also please remember to check CPU usage if you find these errors and warnings again during navigation.
This Robotis emanual page describes how to setup the Raspberry Pi camera to be used with Turtlebot3.
Here is a streamlined guide to quickly get a raspi camera working with a TB3. This entire process may take a few minutes - worst case, if you have to fix apt-get errors, upwards of 30 minutes.
Run sudo raspi-config in the TB3's terminal. Navigate to option 3: Interfacing Options. The first option in the sub-menu is Camera; select it, then select yes when prompted to enable camera interfacing. Then navigate to Finish and reboot the robot for the change to take effect.
Do a sudo apt-get update and sudo apt-get upgrade to make sure there are no errors. If update throws a missing pubkey error, record the pubkey and use this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <PUBKEY>, where <PUBKEY> is the pubkey that you recorded. Once the pubkey is added, update & upgrade again. If there are no errors, continue.
Run the following two commands to add Ubiquity Robotics' repos to apt:
update & upgrade again.
sudo apt-get install ros-kinetic-raspicam-node
Run catkin_make. If catkin_make fails due to missing diagnostics, install this: sudo apt-get install ros-kinetic-diagnostic-updater
roslaunch turtlebot3_bringup turtlebot3_rpicamera.launch will launch the camera alone at a resolution of 640x480. Alternatively, you can use roslaunch raspicam_node camerav1_1280x720.launch to launch at a higher resolution. To include the camera in a custom launch file, consider something like this:
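For example, an include along these lines (the launch file name matches the command above; the exact path inside raspicam_node may differ between versions):

```xml
<!-- bring up the Pi camera as part of a larger launch file -->
<include file="$(find raspicam_node)/launch/camerav1_1280x720.launch"/>
```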
The following parameters can be edited in a launch file that launches the Raspi cam to alter its performance:
enable_raw: allows the camera to publish a topic ~/image of type sensor_msgs/Image if set to true. If not true, only ~/image/compressed will be published (of type sensor_msgs/CompressedImage).
height and width: change the resolution of the image.
framerate: changes the rate at which the camera publishes images (maximum 90 fps). Max FPS is also affected by the resolution (higher resolution -> lower max fps).
rqt_image_view: opens a GUI where you can select an image topic currently being published and view it from your remote PC.
rosrun rqt_reconfigure rqt_reconfigure: opens a GUI which can edit various raspicam settings, such as vertical/horizontal flipping, image stabilization, and other sliders for various settings.
Our objective for this iteration is to find a way to clear the Turtlebot's costmap, so that long-gone obstacles do not stay on the map, while the Turtlebot still avoids transient obstacles like a person or a chair.
We are using the roslaunch cr_ros campus_rover.launch command to bring up the Turtlebot. This launch file launches amcl with a configuration similar to the amcl_demo.launch file in the turtlebot_navigation package. Then we run rviz to visualize the static floor plan map and the costmap.
In previous demos, we found that the Turtlebot would successfully mark transient obstacles, such as a person passing by, on its costmap and avoid them. But it failed to unmark them even after they were gone. These marks of long-gone obstacles would cause the path planner to avoid them. Eventually the Turtlebot would get stuck because it could not find a valid path to its goal.
We found a possible pattern and cause for this problem. In this post thread, someone mentions that:
"Costmap2D seems not to "clear" the space around the robot if the laser scan range is max."
We tested this claim. Indeed, when an obstacle in front of the Turtlebot is beyond the max range of its camera sensor, someone who passes through the empty space between the obstacle and the camera is marked permanently on the costmap. However, if an obstacle is within the camera sensor's max range, someone passing through is marked and then unmarked immediately once they are gone.
The above post thread also mentions an explanation for this behavior:
"Its actually not the costmap that is ignoring max_scan_range values from the laser, its the laser projector that takes a laser scan and turns it into a point cloud. The reason for this is that there is no guarantee that max_scan_range actually corresponds to the laser not seeing anything. It could be due to min_range, a dark object, or another error condition... all the laser knows is that it didn't get a return for one reason or another. "
Based on our experiment and this explanation, a possible solution for the max_range problem could be setting up a scan filter chain. Theoretically when a scan value is "max_range", we could replace it with a big number such as 100 (Turtlebot's scan range is 10 meters). However we could not make it work this week, so we will do more experiments in the coming week.
The campus_rover.launch file includes another launch file, move_base.launch.xml, from the turtlebot_navigation package. In move_base.launch.xml, a move_base node is launched with a bunch of parameters stored in yaml files. This node basically runs the navigation stack for the Turtlebot and also includes all the costmap drawing/clearing behaviors.
What we found was that in turtlebot_navigation/param/move_base_params.yaml, all the parameters for recovery behaviors were commented out. According to the ROS documentation on expected robot behaviors, recovery behaviors are an essential part of a robot's navigation. When the robot perceives itself as stuck (unable to find a valid path to its goal), it should perform recovery behaviors to clear its costmap and then replan a path.
Therefore, we brought back a recovery behavior with the following configuration:
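The original snippet was lost from this copy of the notes; a reconstruction using the standard clear_costmap_recovery plugin (the behavior name here is arbitrary) looks like:

```yaml
recovery_behaviors:
  - name: 'aggressive_reset'
    type: 'clear_costmap_recovery/ClearCostmapRecovery'

aggressive_reset:
  reset_distance: 0.0
```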
reset_distance: 0.0 means the Turtlebot clears its costmap outside of a 0.0 m radius, i.e., it clears the entire costmap when it perceives itself as stuck. Based on our experiments and previous experience, the Turtlebot is pretty good at dodging obstacles that are not on its costmap, so this "aggressive" reset is safe for most occasions, unless the Turtlebot is physically surrounded by obstacles that are very close to it. In that extreme circumstance, however, "conservative" costmap clearing would also be useless, because clearing costmaps several meters away would not be enough to unstick it.
We also specified the costmap layer name, since the obstacle layer is the one that contains all the marks of dynamic obstacles:
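That is, under the same recovery behavior's parameters (the exact parameter and layer name should match your navigation version and costmap configuration):

```yaml
aggressive_reset:
  reset_distance: 0.0
  layer_names: ["obstacle_layer"]
```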
These changes ensure the costmap is cleared when the Turtlebot perceives itself as stuck, so it will no longer be trapped by marks of long-gone obstacles.
We will look more into implementing the scan filter, so the Turtlebot will immediately unmark a gone obstacle even if the scan is out of max range.
To launch Mutant, follow these steps:
Ensure that mutant has the most recent version of cr_ros_2. This can be accomplished by running roscd cr_ros_2 and then gp.
SSH into the mutant and run bu-mutant. This will launch the mutant onboard bringup.
On your local machine (again, after making sure that you have the most recent version of cr_ros_2), run roslaunch cr_ros_2 mtnt_onb_rpicam.launch. This command will start the web app, and you can proceed from there.
This section should be updated as problems arise and their solutions are discovered
We've found that forcing power cycles with the switch on the OpenCR board can be detrimental to the robot's ability to SSH. We recommend running sudo poweroff every time you want to power cycle, if possible. Give the robot ample time to fully shut down before turning it back on. Usually, when turning the robot back on, waiting for the Echo Dot to fully turn on is a good indicator of when the robot will be ready to SSH, provided the Echo is powered through the Raspberry Pi board. Another indicator: if the Echo is set up to communicate with the robot via Bluetooth, it will say "now connected to mutant", which means the robot is ready. (Please note that even though the Echo says it is connected via Bluetooth, the Raspberry Pi does not default to using the Echo as an audio sink, so it will not play audio from the robot.)
Another solution we have found is to disable DNS in the SSH settings on the robot. Go to /etc/ssh and open the config file with sudo nano sshd_config. If there is a line that says UseDNS yes, change the yes to no. If UseDNS is not present in the file, add the line UseDNS no to the bottom of the file.
This is probably because the volume was turned up too high, and the raspberry pi cannot supply enough power. Plug the Echo into a wall outlet, turn the volume down, and then plug it back into the robot. Rule of thumb: keep the echo's volume no greater than 70%.
Turn the robot all the way off (this means powering off the Raspberry Pi, then switching the OpenCR board off as well), then turn it back on. If the wheel continues not to spin after this, consult the lab's resident roboticist, Charlie.
Shut down and restart roscore.
Make sure everything is namespaced correctly (on your local machine) - see above for namespacing troubleshooting.
Check that the battery is fully charged.
Your node may have crashed right off the bat - check rqt_graph and verify that all nodes that are supposed to be communicating with each other are actually communicating.
This is usually caused by interference from another robot. Even with namespacing, another robot running on the roscore can cause interference specifically on rviz. We have determined that the likely cause of this is because the odom tf (transform) is not namespaced.
Type rosrun rqt_reconfigure rqt_reconfigure into the command line.
Click the boxes to flip the image horizontally/vertically.
To check whether the image is flipped correctly, just run rqt_image_view.
For further details on the raspicam, look at Hardware - Raspberry Pi Camera page in the lab notebook.
The best way to debug topic communication is with rqt_graph in the terminal. This will create a visual representation of all nodes and topics currently in use. There are two settings to toggle:
First, in the upper left, select Nodes/Topics (all) (the default setting is Nodes only)
Next, in the check box bar, make sure it is grouped by namespace and that dead sinks, leaf topics and unreachable topics are un-hidden.
Note that nodes appear on the graph as ovals and topics appear as boxes. Hovering over a node or topic will highlight it and everything connected to it. This helps show whether a node is subscribing to and publishing all the topics it is expected to.
Based on the rqt graph information, update your topic names in your nodes and launch files to match what you expect.
Some nodes automatically namespace their published topics, but some don't. This can be an annoyance when you want a multi-robot setup. Fear not, it is possible to namespace topics that aren't auto-namespaced. There are two ways to do this, and both require some launch file trickery. One way is to declare a topic name, then remap the default topic name to your new one. The example below is used in the file move_base_mutant.launch.
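The pattern from move_base_mutant.launch is roughly the following (a reconstructed sketch; the arg and topic names here are illustrative):

```xml
<!-- declare the namespaced topic name somewhere in the launch file -->
<arg name="cmd_vel_topic" default="/mutant/cmd_vel"/>

<!-- the remap must sit inside the tag of the node that publishes the topic -->
<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <remap from="cmd_vel" to="$(arg cmd_vel_topic)"/>
</node>
```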
The new topic name argument can be declared anywhere in the launch file. The remapping must occur nested within the node tag of the node which publishes the topic you want renamed. This method has been used and approved by the students of gen3.
The other way is to use a group tag, and declare a namespace for the entire group. gen3 has not tested this method, but there does exist some documentation about it on the ROS wiki.
Gen3's preferred method of namespacing nodes that we have written ourselves is as follows: Before initializing any publishers or subscribers, make sure this line of code is present: ns = rospy.get_namespace()
Then, use python string formatting in all your publishers and subscribers to namespace your topics: cmd_pub = rospy.Publisher('{}cmd_vel'.format(ns), . . .)
ns will contain both slashes needed for the topic to conform to ROS's topic naming guidelines; e.g., the publisher above will publish to the topic /namespace/cmd_vel.
/fiducial_transforms (from the node aruco_detect)
/diagnostics (from turtlebot3_core)
/cmd_vel (from move_base)
Any and all transforms
Many components of Turtlebot3 software depend on knowing the size of wheels of the robot. Some examples include Odometry, cmd_vel (the turtlebot core), and move_base. By default, turtlebot3 wheels are 6.6cm in diameter. Mutant has 10cm diameter wheels. If you use larger wheels, but the software believes it has smaller wheels, then movement behavior will not be as expected.
On your remote pc, follow steps 4.1.1 through 4.1.6 of this Robotis e-manual guide to install the arduino IDE and configure it to work with the OpenCR board. Please note that as of May 2019, the latest version of the Arduino IDE is 1.8.9, but the guide displays version 1.6.4.
In the IDE, go to File --> Examples --> TurtleBot3 --> turtlebot3_waffle (or turtlebot3_burger if that better fits the form factor of your mutant TB3) --> turtlebot3_core. This will open three files: turtlebot3_core, turtlebot3_core_config.h and turtlebot3_waffle.h. Go to turtlebot3_waffle.h. You will see that this file defines a number of characteristics of the robot, including wheel radius! Edit your wheel radius variable, then save your work - two windows will pop up. In the first one, click "ok" and in the second, click "save" - don't edit anything!
Now it's time to upload the edited firmware to the OpenCR board. This is actually not too difficult - first, unplug the usb cable that connects the OpenCR board to the Raspberry Pi (or whatever SBC your mutant is using), and plug it into your remote PC. You may have noticed that you weren't able to select a port in step 4.1.5.3 of the emanual instructions - now that the OpenCR board is connected, you should be able to select the port. Once you've done that, click the upload button at the top of the IDE to upload the firmware to your robot. Once the upload is complete, your firmware should be updated and your robot should behave as expected in movement-based tasks. To test, make the robot move forward at 0.1 m/s for 10 seconds - it should travel about a meter. If not, your firmware may not have been updated properly or your wheel measurements may be incorrect.
Restoring .bashrc is not very difficult.
First, type /bin/cp /etc/skel/.bashrc ~/ into your terminal. This will replace a corrupt .bashrc file with the default .bashrc. Then, try ls -a in your user directory to view all files, including hidden ones such as .bashrc. If a file such as .bashrc.swp appears, delete it with rm .bashrc.swp. Continue removing files that look related to .bashrc until nano ~/.bashrc properly opens the file without giving an error. Now you'll have to restore all the lines at the bottom of .bashrc that were added when ROS was installed. Fortunately, there are many computers in the lab and they all have almost identical .bashrc files! SSH from one of those computers to the one that you are restoring, copy the lines that need to be re-added, and then edit them to fit the computer that the .bashrc file is located on (things to consider: namespace, IP address, aliases that are useful on the given machine). Don't forget to source ~/.bashrc after you are done editing the file, so the changes you made take effect in the terminal!
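For reference, the ROS-related lines at the bottom of a lab machine's .bashrc typically look something like this (the distro, workspace path, and IP addresses are examples; copy the real values from a working machine):

```bash
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash
export ROS_MASTER_URI=http://129.64.243.64:11311   # wherever roscore is running
export ROS_IP=129.64.243.xxx                       # this machine's IP address
```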
Now don't make the same mistake twice.
I investigated further with the max range of the camera and found that it was indeed 10 meters. When the camera is more than 10 meters away from an obstacle, the range readings in the /scan topic corresponding to the angle of the obstacle will be nan. Also, when an obstacle is within the minimum range of the camera, or the surface of the obstacle does not reflect any laser, the readings will be nan. These nan readings make move_base think there's something wrong with the laser, and it will not unmark an obstacle once it's gone. I wrote a filter node called scan_filter.py which replaces the nan readings with 9.9 (a number slightly smaller than max range) and publishes to a new topic called /scan_filtered. Then I passed an argument to move_base in our launch file to make the costmap in move_base subscribe to /scan_filtered. However, amcl should still subscribe to the original /scan topic, because localization relies on unfiltered readings.
At first I changed all the nan readings to 9.90, but later Alex helped me notice that the nan readings at the beginning and end of the array should not be changed, because they correspond to the laser being blocked by the robot's body. Therefore I chose not to change those nan readings.
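A minimal sketch of such a filter node (the real scan_filter.py lives in cr_ros; the number of edge readings left untouched is an assumption here):

```python
#!/usr/bin/env python
import math
import rospy
from sensor_msgs.msg import LaserScan

EDGE = 10          # assumed: readings at each end are blocked by the robot's body
REPLACEMENT = 9.9  # slightly smaller than the camera's 10 m max range

def scan_cb(scan):
    ranges = list(scan.ranges)
    # replace nan readings with a large-but-valid range, except at the edges
    for i in range(EDGE, len(ranges) - EDGE):
        if math.isnan(ranges[i]):
            ranges[i] = REPLACEMENT
    scan.ranges = ranges
    pub.publish(scan)

rospy.init_node('scan_filter')
pub = rospy.Publisher('/scan_filtered', LaserScan, queue_size=10)
rospy.Subscriber('/scan', LaserScan, scan_cb)
rospy.spin()
```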
Now the robot will immediately unmark an obstacle on cost map once it is gone even the camera is out of range.
In the event that a new mutant ever has to be set up, or software problems require mutant to be reset, here are the necessary tips to installing everything that is needed to operate using the most recent campus rover ros package.
The Robotis Emanual serves as an adequate guide to getting 95% of the way to setting up ROS on a mutant turtlebot. There are a few divergences from their instructions, though:
In part 3 of section 6.2.1.1, you could easily miss the note about installing what is needed to use a raspi camera. Either do not miss it (it is right below the first block of terminal commands) or check out our labnotebook page on configuring the raspberry pi camera.
Just below the camera hint, the emanual instructs you to use these commands to remove unneeded packages:
However, slam and navigation are actually useful to us, so use these commands instead:
Once you have finished the emanual SBC setup, you still need to install a few dependencies that Robotis assumes you will only use on a remote pc. For your convenience, run this command:
If you are curious as to where this came from, it is an edited version of the first command of emanual section 6.1.3, under PC setup.
you will need:
the fiducial package sudo apt-get install ros-kinetic-fiducials
cr_ros_2 github repo on branch mutant_transfer
cr_web github repo on branch mutant
the turtlebot3_navigation package, which should come with turtlebot3.
google chrome (for the web app - you can use another browser, but that would require editing a shell script in the cr_web package)
flask module for python
Google how to get Chrome and flask. cr_ros_2 and cr_web are repos in the campus rover github community.
If this class is your first time using github - don't worry! Though it may seem mildly confusing and daunting at first, it will eventually become your best friend. There's a pretty good guide called git - the simple guide - no deep shit! which can walk you through the basics of using git in the terminal. Here's a bit of a short guide to the commands we have used most in gen3:
git clone is how you will initially pull a repository off of GitHub.
In a repository's main directory in your terminal, gs (or git status) is a super useful command that will show you all the files you have edited.
git add --> git commit -m --> git push is how you will update your repos on GitHub. Pro tip: if you've edited multiple files in a subdirectory, for example in src, then you can type git add src to add all modified files in src, rather than typing each file individually.
always do a pull before a push if you think someone else has made edits to your repo.
If you've made changes locally that you don't want to keep, git reset --hard will revert back to your last pull or clone.
package_handler.py
package_sender.py
recording_sender.py
The package delivery stack is comprised of one main node, package_handler.py, and two secondary nodes, package_sender.py and recording_sender.py. Each secondary node corresponds to a type of package that can be handled by the handler (the current implementation supports physical packages on top of the robot, and voice recordings recorded on the on-board laptop). Each secondary node (when pinged on its respective topic) generates a file with a specific suffix (.package and .wav respectively) that is processed accordingly by the package handler. Each file type has an associated package release protocol, which removes the package filename from the packages queue (maintained by the handler node).
Run recording_sender.py on-board (so the computer microphone is accessed)
Run package_handler.py and package_sender.py
After navigating to the package sender (wait for WAITING state) place package on the robot
After being prompted, hold down B0, B1, or B2 on the robot's base to record a message
Send the robot a delivery navigation goal
Once the robot has reached the goal (wait for WAITING state) pick up the package. The robot will also play back the audio recording
After the
This week, we built a node to handle the problem of fiducial detection - namely, the question of what the robot does when it doesn't know where it is. This would happen in two cases:
Bringup: When the robot is initially started up, it won't know where it is unless it happens to have a fiducial in view already.
The "kidnapped robot problem": When someone picks up the robot and moves it, its localization won't recognize the new position, so the robot needs to identify that it's been moved.
In both of these cases, the robot must act on its state of being lost by semi-intelligently wandering around the area and avoiding obstacles until it sees a fiducial and can re-localize.
To solve this problem, we first added two new states to state.py
: FLYING and LOST. These states will be used to identify when the robot is being picked up and when the robot doesn't know where it is, respectively.
The lost_and_found node
Becoming lost:
The node includes a subscriber to the /mobile_base/events/wheel_drop topic, which publishes a message every time the robot's wheels move up or down. As the wheels' natural state is to be pushed up while the body of the TurtleBot rests on top of them, a message that the wheels have moved down indicates that the robot has been picked up, triggering a change into the FLYING state. Similarly, once the robot is flying, a message that the wheels are up again indicates that the robot has been set down, such that the robot is now LOST and able to start looking for fiducials.
Becoming found:
The while loop of our wandering is controlled by an if statement designed to catch state != States.LOST. Ideally, the fiducial detection will trigger the localization of the TurtleBot, which will change its state to LOCALIZING. Once the state changes, the while loop breaks and the node stops making the TurtleBot wander until LOST is detected again.
We ensure maximum camera coverage, for the best odds of finding a fiducial, by having the robot drive in a rectangular outward spiral away from where it had been:
The robot starts by spinning 360º, then driving a set distance to its right
The robot then spins 360º, turns 90º to the left, and drives that same distance
Until a fiducial is found:
The robot spins 360º
The robot turns left and drives for a slightly further distance than last time
The robot spins 360º
The robot turns left and drives the same distance as the previous step
Repeat, increasing the distance to be driven for the next two turns.
Implementation:
In order to ensure the best possible obstacle avoidance in this algorithm, rather than implement the driving ourselves, we send the movements described above to the robot as a series of AMCL goals using the following algorithm:
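The algorithm, roughly, as a Python sketch (not the actual lost_and_found code: the leg length, its increment, and the still_lost/spin_360 helpers are placeholders for the node's real state check and spin routine, and the real node waits for each goal to finish before sending the next):

```python
import math
import rospy
from geometry_msgs.msg import PoseStamped
from tf.transformations import quaternion_from_euler

def relative_goal(x, y, yaw):
    """Build a goal at (x, y, yaw) relative to the robot's current base frame."""
    goal = PoseStamped()
    goal.header.frame_id = 'base_link'
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x = x
    goal.pose.position.y = y
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
    goal.pose.orientation.x = qx
    goal.pose.orientation.y = qy
    goal.pose.orientation.z = qz
    goal.pose.orientation.w = qw
    return goal

def wander(still_lost, spin_360, step=0.5):
    """Rectangular outward spiral until a fiducial re-localizes the robot."""
    goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
    legs = 0
    while still_lost():
        spin_360()
        # drive one leg of the spiral, ending up facing 90 degrees to the left
        goal_pub.publish(relative_goal(step, 0.0, math.pi / 2))
        legs += 1
        if legs % 2 == 0:
            step += 0.5   # lengthen the spiral after every two legs (assumed increment)
```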
One potential bug arising from AMCL-based wandering is that the robot would forget any AMCL goal it had been working towards when it was kidnapped. To fix this, we have included a /move_base_simple/goal subscriber. Whenever it receives a message, indicating a new AMCL goal, it saves that goal in this node as current_goal.
In our flying_or_lost method, which recognizes wheel drops as described above, we have included a check of the robot's state at the moment of kidnapping. If the state was NAVIGATING, meaning the robot was in the middle of AMCL navigation, we set lock_current_goal to True, which acts as a flag indicating that our node should stop saving new incoming goals because our AMCL-based wandering is about to start.
Finally, our if get_state() != States.LOST block, which is responsible for resetting the node once wandering is complete, includes a check of lock_current_goal. If lock_current_goal is True, then the robot must have been working towards an AMCL goal prior to the kidnapping, so our node re-publishes that goal with an updated timestamp and the robot can continue its journey.
This week, we built a ROS node that subscribes to a new topic, /things_to_say, and uses a text-to-speech service to make the computer speak messages of type std_msgs/String.
If you receive an error when you try to run this node, the computer likely does not have pyttsx installed. To install it, simply run pip install pyttsx --user.
If that command fails, the computer likely does not have pip installed. To install it, run sudo apt-get install python-pip and then attempt the pyttsx install again.
To run the node, run rosrun cr_ros talk.py
Once the node is launched, publish a String to the /things_to_say topic from any device connected to the same ROS core as the node.
Sample standalone command: rostopic pub /things_to_say std_msgs/String 'Kill all humans'
Sample Python script:
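Something along these lines (a minimal standalone sketch, using the same example string as the command above):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

rospy.init_node('say_something')
pub = rospy.Publisher('/things_to_say', String, queue_size=1)
rospy.sleep(1.0)                         # give the publisher time to connect
pub.publish(String(data='Kill all humans'))
```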
Updated May 2019 with progress following gen3 and mutant mark 1.
Dormant
Converts the Pose messages it receives from its subscription to PoseWithCovarianceStamped messages and passes them on via its publication
Publications
/initialpose
Subscriptions
/fid_pose
Defunct
Updates the robot's state to reflect whether it is currently being charged at its dock based on charging data from its subscription
Now defunct - mutant does not dock, because it is not based on the kobuki base.
Subscriptions
/mobile_base/sensors/core_throttle
Current
Publishes CPU usage data and prints it to the warning log if it is high or otherwise to the debug log based on data from process and system utilities
Publications
/laptop_cpu_usage
Dormant
Uses facial recognition to detect and recognize known faces in the camera feed based on provided data and greets them appropriately by name via a vocal service
Subscriptions
/camera/rgb/image_raw/compressed_throttle
Current
Uses pickup detector data to determine whether the robot is flying or not. Handles localization recovery upon returning to the ground.
Publications
/initialpose
/cmd_vel
/destination
Subscriptions
/airborne
/destination
Dormant
Organizes speech messages chronologically and feeds them to the speech service at appropriate times
Subscriptions
/things_to_say
Dormant
Publishes speech messages narrating the robot's current behavior and proximate location, based on its state and on data from its subscription
Publications
/things_to_say
Subscriptions
/nearest_waypoint
Publications
/cmd_vel_mux/input/navi
Subscriptions
/amcl_pose
Dormant
Detects the presence of a physical package via its publications and converses with a user to determine goals and to communicate successes and errors while updating its goals to respond to expected and unexpected changes.
Currently not in use due to the lack of a sensor to detect packages on gen3's mutant.
Publications
/release_package
/record_start
/record_stop
/physical_package
/destination
Subscriptions
/release_package
/receive_package
/mobile_base/events/button
/mobile_base/events/digital_input
/destination
Dormant
Publishes filename of appropriate prerecorded message for the robot to play based on data from its subscription
Dormant for same reason as package_handler
Publications
/receive_package
Subscriptions
/physical_package
Current
Provides scripts for automatically converting from different pose types
Current
Uses fiducial data from its subscription to determine and publish the robot's position relative to the map
Publications
initialpose
Subscriptions
fiducial_transforms
Current
Records short audio clips featuring user instructions to a file and publishes its name
Publications
/receive_package
Subscriptions
/record_start
/record_stop
Current
Controls the robot and its state with respect to a wide range of input sources and publishes a wide range of data for other nodes to use
Publications
temp_pose
/teleop_keypress
/destination
/web/camera
/web/state
/web/map
/cmd_vel
Subscriptions
/raspicam_node/image/compressed
/web/teleop
/web/destination
/destination
Current
applies a filter to scan data to ignore the structural posts of the mutant
Publications
/scan_filter
Subscriptions
scan
Current
Handles and validates requested state changes for legality and publishes relevant information accordingly
Publications
/move_base_simple/goal
/initialpose
/goal_pose_for_fids
/state
Current
Uses text to speech to turn strings into audio output
Current
Cancels existing robot goals and allows for manual control of the robot via teleoperation
Publications
/cmd_vel_mux/input/teleop
Subscriptions
/web/teleop
initialpose
Dormant
Publishes the name of the nearest waypoint when it changes based on data from its subscription
Publications
/nearest_waypoint
Subscriptions
/amcl_pose
Current
Uses IMU accelerometer data to decide whether the robot has been lifted, and when it has been placed on the ground.
Publications
/airborne
Subscriptions
/imu
Current
Takes information from the Alexa webhook and, if it involves going to a destination, publishes the goal pose of the specified destination.
Publications
/destination
Subscriptions
/voice_intents
Current
Only slightly usable in demo. Pauses navigation for ten seconds if it receives a signal that a hand is in view of the camera.
Publications
/destination
Subscriptions
/destination
/hand_command
Current
Only slightly usable in demo. Spins, searching for a recognized person, then stops.
Publications
/destination
/cmd_vel
Subscriptions
/odom
/face_detection
/has_package
Fiducials are an attempt to localize a robot without any prior knowledge about location. They use unique images which are easily recognizable by a camera. The precise size and orientation of a fiducial tag in an image can uniquely localize a camera with respect to the tag. By measuring the location of the tag ahead of time, the location of the camera with respect to the global frame can be found.
Aruco is a standard for fiducial tags. There are many variants, but they generally look like this:
TF publishing
The transform from camera to fiducial can be combined with a transform between the map and the fiducial to find the pose of the camera. This is best done using ROS's extensive tf tooling. Each fiducial location can be published as a static transform, and a TransformListener can find the total transform from map to camera.
New tags can be placed on the map and published as static transforms from within the bringup.launch file. To find the x, y, and z position, use a tape measure.
The three Euler angles describing the orientation of the tag are more difficult to determine. To find the first rotation parameter, x, consider the orientation of the fiducial relative to the map: if the fiducial faces north, x = 0; if west, x = π/2; if south, x = π; if east, x = 3π/2. The second (y) component accounts for the fiducial leaning left or right on the vertical wall. If positioned straight up, it should be set to π, which is approximately 3. The third (z) component describes how far forward or back the fiducial is tilted. If the wall is vertical, z = 0. If leaning forward, 0 < z < π/2. If leaning backwards, 2π > z > 3π/2.
x is red, y is green, z is blue
Add a new tag to the bringup. The static publisher accepts x, y, z for position and either yaw, pitch, and roll or a quaternion for rotation; for your own sake, please choose the former.
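Concretely, a new tag entry in bringup.launch looks something like this (the tag id, frame name, coordinates, and angles are placeholders to be replaced with your own measurements):

```xml
<node pkg="tf" type="static_transform_publisher" name="fid_110_tf"
      args="3.2 0.8 0.4  0 3.14 0  map fiducial_110 100"/>
```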
Fiducials were very challenging to implement and it's worth spending some time discussing the reasons why and how we can learn from the process.
To start, fiducial localization involves complex coordinate transforms from image space to tag-relative space and then to map-relative space. By itself, this would have been fine. It is easy for a computer to calculate the matrix multiplication at the heart of these transforms, but it is hard for a developer to debug. More clarity and better documentation about the output of the tooling would have helped. In addition, the transforms are implemented in quaternions and other complex formats which are difficult to understand in any form and took some time to get used to.
A few tools exist which solve "fiducial slam" and originally we tried to implement one of those to compute the transforms from image space to map-relative space. These tools are, in fact, not built to solve that problem, but to assist in mapping the fiducials in the first place - a problem less challenging in our use case.
The biggest breakthrough came when I began using the built-in tf tooling. This allowed me to work with a robust set of tools, including rviz, for easy debugging. Through this process I was able to see that the y and z axes needed to be swapped and that an inversion of the transform was needed. These issues were not clear when using other tools, but were at the heart of the strange results we were seeing early on.
More familiarity with ROS would have brought me to tf sooner, but now I have that familiarity for next time. All together, I'm not sure what lesson there is to take away from this. Sometimes hardcore debugging is required.
Siyuan Chen, December 2018, sychen1996@brandeis.edu
Measure the static obstacles, such as walls, doors, and corners, that will never change. The measurements should be in centimeters.
Use Keynote to draw the space.
It is better to have two people measure the walls, because a wall could be longer than 10 meters (1000 centimeters), which is very hard for one person to measure alone.
Using a mouse to draw the map will save you a lot of time, because you can easily drag the lines in Keynote.
Be patient. Each map is going to take you about four hours.
Defunct
All functionality was moved to
Ubiquity Robotics' module accurately finds aruco tags in camera images. It relies on a compressed image stream from the robot camera, with the camera's specs published as well. aruco_detect publishes a number of outputs, but crucially it sends a geometry_msgs transform relating the fiducial to the camera on the topic /fiducial_transforms. This needs to be inverted to get the transform from camera to tag.
aruco_detect is brought up as part of the cr_ros bringup.launch file, but it can be launched independently with roslaunch aruco_detect aruco_detect.launch if the proper topics are supplied in the launch file. process_fiducial_transforms takes the output of aruco_detect and publishes pose messages; it is in cr_ros and is also in the bringup file. Static tf publishers are also needed for tag positions; see "adding a new tag" below. One additional static publisher is needed to correct a 90° offset: it relates the frame fiducial_camera to rotated_fiducial_camera with a transform of π/2 in the yaw (rosrun tf static_transform_publisher 0 0 0 1.57079632679 0 0 /fiducial_camera /rotated_fiducial_camera 100).
Our objective is to initialize ROS nodes within a Flask app, enabling publishing and subscribing to ROS topics via API calls or UI interactions.
For example, when run/served, the Flask app would do the following (see the sketch after this list):
Initialize a node and a publisher to a teleop topic.
Define a Flask route as an endpoint for receiving POST requests containing data representing teleop commands (e.g. 'forward', 'back', 'left', 'right', 'stop').
Handle POST requests to the endpoint by executing a Python function in which the ROS publisher publishes the appropriate cmd_vel message to a teleop topic.
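A minimal sketch of that pattern is below, assuming rospy and Flask on the same machine. The route name (/teleop), the topic (/cmd_vel_mux/input/teleop), the JSON payload shape, and the velocity values are illustrative choices, not the exact cr_web implementation.

```python
# Sketch: a Flask route that publishes cmd_vel messages for simple teleop.
import rospy
from flask import Flask, request, jsonify
from geometry_msgs.msg import Twist

app = Flask(__name__)
# disable_signals avoids trouble when Flask runs this outside the main thread
# (see the threading discussion below)
rospy.init_node('web_teleop', disable_signals=True)
pub = rospy.Publisher('/cmd_vel_mux/input/teleop', Twist, queue_size=1)

# Map command strings to (linear x, angular z) velocities
COMMANDS = {
    'forward': (0.2, 0.0),
    'back':    (-0.2, 0.0),
    'left':    (0.0, 0.5),
    'right':   (0.0, -0.5),
    'stop':    (0.0, 0.0),
}

@app.route('/teleop', methods=['POST'])
def teleop():
    command = request.get_json().get('command', 'stop')
    linear, angular = COMMANDS.get(command, (0.0, 0.0))
    twist = Twist()
    twist.linear.x = linear
    twist.angular.z = angular
    pub.publish(twist)
    return jsonify({'sent': command})
```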
It's important to note that this differs from a more common approach to Web-ROS interfacing, which involves the following:
Establish a websocket connection between the web client and the machine running ROS, often via a 3rd party Python package called "rosbridge."
Within the web client's JavaScript, import a 3rd party library called "roslibjs," which provides ROS-like classes and actions for subscribing and publishing to ROS topics.
Unlike publishers and subscribers implemented in ROS, roslibjs sends JSON to rosbridge which, in turn, publishes and subscribes to actual ROS messages. In short, rosbridge is required as an intermediary between a web client and a machine running ROS.
This has the advantage of providing a standard way for any web client to interface with ROS via JSON. However, this not only makes running rosbridge a necessity, but it also requires ROS developers to implement ROS-like programming in JavaScript. Flask, on the other hand, seems to offer a way to implement ROS patterns purely in Python on both client and server, without rosbridge and roslibjs as dependencies.
There is an apparent obstacle to implementing ROS within Flask, though. It seems to involve the way Flask serves an app and the way ROS nodes need to be initialized. More specifically, the issue might arise from initializing a ROS node in a thread other than the main thread, which seems to be the case for some of the ways Flask apps can be run/served. Others in the ROS community seem to have encountered this issue:
Note that the third example proposes a solution: their Flask app's main thread starts a new thread in which the ROS node is initialized.
However, to actually serve the app, they call Flask.run().
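The general shape of that workaround looks roughly like the sketch below (the original snippet is not reproduced here). Note that rospy.init_node() normally registers signal handlers, which only works in the main thread, so disable_signals=True is the usual escape hatch; the node and route names are placeholders.

```python
# Sketch: start a ROS node in a background thread, then serve Flask with run().
import threading
import rospy
from flask import Flask

app = Flask(__name__)

def ros_thread():
    # Signal handlers can only be registered from the main thread,
    # so signals are disabled when initializing here.
    rospy.init_node('flask_ros_node', disable_signals=True)
    rospy.spin()

threading.Thread(target=ros_thread, daemon=True).start()

@app.route('/')
def index():
    return 'ROS node running alongside Flask'

if __name__ == '__main__':
    app.run()   # Flask.run(), as in the referenced example
```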
Flask's documentation on Flask.run() advises against using it in a production environment:
"Do not use run() in a production setting. It is not intended to meet security and performance requirements for a production server. Instead, see Deployment Options for WSGI server recommendations."
"It is not recommended to use this function for development with automatic reloading as this is badly supported. Instead you should be using the flask command line script’s run support."
"The alternative way to start the application is through the Flask.run() method. This will immediately launch a local server exactly the same way the flask script does. This works well for the common case but it does not work well for development which is why from Flask 0.11 onwards the flask method is recommended. The reason for this is that due to how the reload mechanism works there are some bizarre side-effects (like executing certain code twice, sometimes crashing without message or dying when a syntax or import error happens). It is however still a perfectly valid method for invoking a non automatic reloading application."
Instead of using Flask.run() within a Flask app's main method/script, we've had success serving the app through Flask's command-line interface (flask run) with the --no-reload flag.
Without the --no-reload argument, the lines in which your ROS node is initialized will be executed twice, resulting in a ROS error stating that the node was shut down because another node with the same name was initialized.
The objective is to implement a 2D map in the CR_Web application that depicts:
The floorplan Campus Rover is currently using to navigate
Campus Rover's "real-time" location as it navigates
The goal destination, toward which Campus Rover is navigating
Our first implementation was based on a tutorial that relied on a websocket connection between the robot and web client, and had the following dependencies on 3rd party libraries:
This initial implementation (repo here) was successful, but presented several issues:
Building upon 3rd party dependencies risked future breaks and maintenance.
As discussed here, it entailed "ROS-like" programming in JavaScript instead of Python.
The implementation described in the tutorial generates a 2D map image from an amcl occupancy grid. This is unnecessary for our purposes, because Campus Rover uses a pre-generated floorplan image; re-generating it is redundant and thus computationally wasteful.
Generating the map and loading the 4 JavaScript libraries mentioned above on every page load created noticeable performance issues, limiting any additional page content.
The current iteration resolves the issues identified through the first iteration and enables additional map features:
Instead of generating a map image from an occupancy grid, an existing floorplan image file is rendered.
Instead of using 3rd-party JavaScript libraries, the map is rendered using HTML5's Canvas element.
Instead of writing "ROS-like" JavaScript in the front end as before, all ROS code is implemented with regular ROS Python programming in the Flask layer of the application.
Unlike the initial iteration, the current map includes the option to "track" the robot as it traverses the map, automatically scrolling to keep up with the robot as it moves.
The current iteration now displays the robot's goal location, too.
Support for:
Multiple floorplans/maps
Switching between different floorplans
Adjusting the size and scale of a map (for zooming in/out, resizing, etc.)
Brad Nesbitt 11/18/2018
After several preceding iterations of "live" 2D maps, it became clear that a single abstraction for such mapping would be appropriate. An instance of the LiveMap class maps waypoints, the robot's current pose, and its goal poses onto a 2D floorplan for display within a web application.
The static directory in rover_app now contains map_files, which holds the local files needed to generate a given map, including a JSON file with parameters specific to each map. For example:
all_maps.json
The JSON object for a map includes references to local files comprising the map's floorplan .png file, a JSON file of the map's waypoint data, and a copy of the yaml parameters used for amcl navigation of the .png-based map.
live_map.py
Initializing a LiveMap object requires 2 parameters:
The name/String corresponding to a map in all_maps.json, such as "Gerstenzang Basement"
The desired centimeters per pixel ratio to be used when displaying the map.
An optional parameter is the centimeter diameter of the robot, which is the Turtlebot2's spec of 35.4 by default.
For example, live_map = LiveMap("Gerstenzang Basement", 2) initializes a LiveMap object of the Gerstenzang Basement floorplan with a 2 cm/pixel scale. The object maintains the following abstraction representing the state of the map, including the robot's current place within it and its goal destination:
Note that a nested dictionary of ROS subscribers continually updates the scaled pixel value equivalents of the current and goal poses.
Implementing 2D mapping in this way aims to achieve two main advantages:
The LiveMap class allows the initialization of multiple, differing maps with custom scales in the web application. For instance, a small "thumbnail" map could be implemented on one page, while a large map could be displayed somewhere else. This also makes switching between maps possible.
Representing a map_state as a Python dictionary (shown above) makes it easy to send the data needed to work with a live 2D map as JSON. For instance, a map route or endpoint could be implemented to return a map_state JSON object which could, in turn, be used to render or update a map in the UI.
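As a sketch of that idea, the route below returns a LiveMap's state as JSON. The import path, the map_state attribute name, and the route name are assumptions for illustration; the real cr_web code may differ.

```python
# Sketch: expose a LiveMap's state to the front end as JSON.
from flask import Flask, jsonify
from live_map import LiveMap   # assumed import path

app = Flask(__name__)
live_map = LiveMap("Gerstenzang Basement", 2)   # 2 cm per pixel

@app.route('/map_state')
def map_state():
    # Assumes the dictionary described above is available as live_map.map_state
    # and contains only JSON-serializable values (poses, pixel coordinates, etc.)
    return jsonify(live_map.map_state)
```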
This week, we built an enum, all_states.py, to represent the varying states of the robot, and two ROS services: StateChange manages state changes, as well as any associated publishes, and StateQuery reports the current state of the robot. Both services are hosted in the state.py node.
While the robot's states aren't too complicated yet, our goal was to create a modular architecture to make it easy to add new states as the robot's functionality expands.
StateQuery is a service that takes no arguments and returns the state of the robot as the string value of the appropriate enum.
all_states.py serves two purposes: it defines a series of enums for the various states of the robot, and it defines methods that act as the service clients. In any node that needs to request a state change or get the current state of the robot, add from all_states import * to gain access to the enums (type State) and the two methods.
get_state acts as a client method for the StateQuery service, taking no parameters and returning the string value of the robot's state.
change_state acts as a client method for the StateChange service, taking up to three parameters and returning a boolean value indicating whether or not the state change was legal (a sketch of both client helpers follows the parameter list below):
new_state: The string value of the desired new state
to_say: A message to be said by talk.py upon the state change
pose_to_pub: A Pose to be published, if the new state is either NAVIGATING or LOCALIZING
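Here is a rough sketch of what those two client helpers might look like, assuming the StateChange and StateQuery service definitions live in the cr_ros package and that their responses expose state and success fields; the service names and fields are assumptions, not verified against the repo.

```python
# Sketch: client helpers for the StateQuery and StateChange services.
import rospy
from geometry_msgs.msg import Pose
from cr_ros.srv import StateChange, StateQuery   # assumed package/srv names

def get_state():
    """Return the robot's current state as a string."""
    rospy.wait_for_service('state_query')
    query = rospy.ServiceProxy('state_query', StateQuery)
    return query().state                 # assumed response field

def change_state(new_state, to_say='', pose_to_pub=None):
    """Request a state change; True if the transition was legal."""
    rospy.wait_for_service('state_change')
    change = rospy.ServiceProxy('state_change', StateChange)
    # An empty Pose stands in when no pose needs to be published
    return change(new_state, to_say, pose_to_pub or Pose()).success
```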
state.py contains the code needed to keep track of the robot's current state and to facilitate state changes. It includes the is_legal method, which contains a dict mapping every state to an array of the states that may legally follow it.
If an illegal state change is requested, the current state of the robot is set to States.ILLEGAL_STATE_CHANGE.
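The transition table amounts to something like the sketch below. The State members and the allowed transitions shown here are illustrative; all_states.py and state.py define the authoritative enum and dict.

```python
# Sketch: a state enum plus a dict of legal transitions, as in state.py.
from enum import Enum

class State(Enum):
    WAITING = 'waiting'
    NAVIGATING = 'navigating'
    LOCALIZING = 'localizing'
    ILLEGAL_STATE_CHANGE = 'illegal_state_change'

LEGAL_TRANSITIONS = {
    State.WAITING:              [State.NAVIGATING, State.LOCALIZING],
    State.LOCALIZING:           [State.WAITING, State.NAVIGATING],
    State.NAVIGATING:           [State.WAITING, State.LOCALIZING],
    State.ILLEGAL_STATE_CHANGE: list(State),
}

def is_legal(current, requested):
    """True if moving from `current` to `requested` is an allowed transition."""
    return requested in LEGAL_TRANSITIONS.get(current, [])
```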
The campusrover robot relies on an on-board Amazon Echo to verbally communicate with the user. The robot, via the Echo, is expected to understand a variety of commands, such as motion, rotation, halting, item delivery, and navigation to specific locations.
Architecturally, spoken dialog is handled by Amazon Alexa in the cloud and by a Robot Operating System (ROS) node called voice_webhook running on the main computer alongside roscore1. The Alexa skill may be accessed and edited by logging into the Alexa Developers Console with the campusrover email (deis.campusrover@gmail.com) and the standard password (ask Pito if you don't know it). The voice_webhook node may be found in the webhooks package on the main computer, in the catkin workspace ('catkin_ws') source code ('src').
Assuming you successfully made it to the Alexa Developers Console and logged in (see above), you should see a heading titled Alexa Skills, under which you should see only one skill, Campusrover. The term 'skill' is a bit misleading — you may prefer to think of it as an agent rather than as a command (each particular command is an 'intent' within the skill). This is the skill we use to communicate with the robot. Click on the Campusrover skill to access it.
On the left sidebar, you should first see a button titled 'Invocation'. This basically determines what will trigger your skill. It is currently set to 'campusrover', and so may be triggered by saying to the echo, "echo tell campusrover to ...".
Next you should see a heading titled 'Intents', under which the various intents are listed. Each intent consists of sample phrases for the natural language processor to understand. Notice that many of these have embedded variables, termed 'slot types', so that the parser learns to recognize, for instance, both "echo tell campusrover to move forward 5 feet" and "echo tell campusrover to move back 7 meters".
These slot types may appropriately be found under the 'Slot Types' heading below 'Intents' on the left sidebar. You may want to look over the intents and slot types to understand how they work, how they interact, and how they have been set up for campusrover.
Under this on the left sidebar, you should eventually see a button titled 'Endpoint'. This is where the skill connects to the ROS node. On this page, the 'HTTPS' radio button should be selected (as opposed to 'AWS Lambda ARN'). Next to the 'Default Region' label, you should see a URL of the format 'https://[Random String].ngrok.io/alexa_webhook'. This is the ngrok webhook Alexa uses to communicate with the ROS node (more on this later). If you ever have to change this URL, you must rebuild your entire model by clicking on the 'Build' tab at the top of the page and then clicking on the '3. Build Model' button on the right side of the page, under the 'Skill builder checklist' heading. Furthermore, if you decide to use an ngrok URL from another computer (which I don't recommend), you must also upload a new certificate in the endpoint (you can download this by typing your ngrok URL into Chrome, clicking on the little lock icon to the left of the URL text box, and downloading the certificate).
Ngrok is basically a service that creates a public URL for your computer's localhost. In our case, it allows Alexa to communicate with a ROS node. To run ngrok, first ensure it is downloaded onto the appropriate computer (it should already be on the main computer with roscore1, where I recommend you run both it and the node). Go to the terminal, change directory (cd) to wherever ngrok is located (the home directory in the case of the main computer), and run the command ./ngrok http 5000. You should see a URL of the format 'https://[Random String].ngrok.io/' (make sure to choose the one that begins with https). This URL, with '/alexa_webhook' appended to it, is the URL Alexa will use to connect to the webhook node (more on this later). I recommend having ngrok constantly running alongside roscore1 and the voice webhook node on the main computer, to avoid having to keep updating the URL in the Alexa skill endpoint (closing and reopening ngrok will change the URL).
The voice_webhook node is a Flask app (and thus requires that Flask be installed on the computer it runs on, as it should be on the main computer). Furthermore, since it relies on ngrok to make itself available to Alexa, it must run on the same computer as ngrok. Look over the node's Python file to get an understanding of how it works. Alexa sends over a rather large and convoluted JSON string representing a user intent every time the user commands the skill to do something. This node parses that JSON string, produces a new, simplified JSON string, publishes it to the voice_intents topic (in most cases) for other nodes to execute, and returns a reply for the Echo to say to the user.
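The overall flow of the node is roughly the sketch below: receive Alexa's JSON at /alexa_webhook, publish a simplified intent on voice_intents, and return something for the Echo to say. The message type, the slot handling, and the reply text are simplified placeholders; real Alexa requests and responses use a richer JSON schema than shown here.

```python
# Sketch: a Flask webhook that turns Alexa intents into ROS messages.
import json
import rospy
from flask import Flask, request, jsonify
from std_msgs.msg import String

app = Flask(__name__)
rospy.init_node('voice_webhook', disable_signals=True)
intent_pub = rospy.Publisher('voice_intents', String, queue_size=1)

@app.route('/alexa_webhook', methods=['POST'])
def alexa_webhook():
    alexa_request = request.get_json()
    # Assumes an IntentRequest; pull out the intent name and slot values
    intent = alexa_request['request']['intent']
    simplified = {
        'intent': intent['name'],
        'slots': {k: v.get('value') for k, v in intent.get('slots', {}).items()},
    }
    intent_pub.publish(String(data=json.dumps(simplified)))
    # Minimal Alexa-style reply telling the Echo what to say back
    return jsonify({
        'version': '1.0',
        'response': {'outputSpeech': {'type': 'PlainText', 'text': 'Okay, on it.'}},
    })
```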
mkdir MyNewFlaskApp
cd MyNewFlaskApp
python3 -m venv venv
. venv/bin/activate
pip install Flask
mkdir flaskr
touch flaskr/__init__.py
Adding an __init__.py file to a directory tells Python to treat that directory as a package. Though it can be left empty, it usually contains code used to initialize the package. We can use it to house our main application code.
The application factory (__init__.py): see the full Flask tutorial here.
Make sure these commands are run from the project's directory, not from within the flaskr directory.
A Flask app is an instance of the Flask class. However, instead of using just one global Flask object, a Flask app is best implemented by defining a function (e.g. create_app()) that creates and returns an instance of the Flask class whenever called. This function is often referred to as the "Application Factory." See the Flask tutorial for further details.
Note that the create_app() function takes the name of a configuration file, which contains the names and values of environment variables to be used by the Flask application.
Using a Python virtual environment in a project is a way to ensure that all of the project's dependencies (e.g. Python version, Python packages) "accompany" it and are met wherever it happens to be run.
Here's the Flask documentation on virtual environments.
To add a virtual environment to a project, cd into the project's directory and run python3 -m venv venv
Whenever you work on your project, activate its virtual environment first, by running . venv/bin/activate
SQLite is a serverless relational database. Simply put, it allows you to implement a database in your project without having to run/connect to a separate database server.
It's intended for light use, ideal for a development environment (or in production with light activity).
Python also has built-in support for SQLite3, so there's no need to install it. Adding it to a project is as simple as import sqlite3. Further Flask-specific documentation is available here.
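For a feel of how lightweight this is, here is a tiny, Flask-independent example; the database filename and table are made up for illustration.

```python
# Sketch: create a table, insert a row, and read it back with sqlite3.
import sqlite3

conn = sqlite3.connect('rover.db')   # creates the file if it doesn't exist
conn.execute('CREATE TABLE IF NOT EXISTS waypoints (name TEXT, x REAL, y REAL)')
conn.execute('INSERT INTO waypoints VALUES (?, ?, ?)', ('lab_door', 1.2, 3.4))
conn.commit()
print(conn.execute('SELECT * FROM waypoints').fetchall())
conn.close()
```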
A tutorial on using Flask with SQLAlchemy to interface with a SQLite database in a more object-oriented way can be found here.
Bootstrap provides a large selection of pre-made UI components.
To enable using Bootstrap via CDN: 1. Paste this into your HTML document's header, before any other links to css: <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
2. Paste these, in order, near the very end of your HTML document, right before the </body> closing tag:
The full set-up instructions can also be found here.
Most Bootstrap elements are added by creating a <div> with the class attribute set to one of Bootstrap's predefined classes. For example, adding a Bootstrap alert element consists of the following: <div class="alert alert-primary" role="alert">A simple primary alert—check it out!</div>
Bootstrap uses the role attribute to ensure accessibility.
To enable use of free FontAwesome icons via CDN, add the following link tag to the header of your HTML document: <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.4.1/css/all.css" integrity="sha384-5sAR7xN1Nv6T6+dT2mhtzEpVJvfS3NScPQTrOxhwjIuvcA67KV2R5Jz6kr4abQsz" crossorigin="anonymous">
To add a specific icon, pick the one you want from the FontAwesome gallery, then simply copy its HTML tag (e.g. <i class="fas fa-arrow-alt-circle-up"></i>) and paste it into the desired section of your HTML document.