All sorts of small topics are covered here
1. Git clone this repository to your computer: campusrover/labnotebook
2. In the directory structure, find the folder called `faq`. In it you can create a `.md` file for your contribution.
3. Write your contribution using Markdown. You can include images as well as links. There are many tutorials on Markdown, but you most likely already know it.
4. If you want the title to be something other than the file name, then add it as the first few lines of the file.
5. Git add, commit and push your changes.

Brand-new files (as opposed to edits) will not appear in the directory until the site is rebuilt by rpsalas@brandeis.edu, so send them an email!
by Kirsten Tapalla, Spring 2023

This is a quick guide to connecting to ChatGPT and getting it to generate a message that you can publish to a topic for use in other nodes.
`import requests`: used to send the HTTP request to the ChatGPT URL
`import rospy`: used to publish the response from ChatGPT to a topic
`from std_msgs.msg import String`: used to publish the response as a ROS String message
Since you want to publish the message to a topic, you will have to initialize a rospy node within your file using `rospy.init_node('ENTER-NODE-NAME-HERE')`. You will also want to initialize a rospy Publisher with the name of the topic you would like to publish the data to, for example: `text_pub = rospy.Publisher('/chatgpt_text', String, queue_size=10)`.
If you would like to be able to change the prompt you are passing when running the node, read the prompt from a ROS parameter, for example: `input_string = rospy.get_param('~chatgpt_prompt')`. When setting up your launch file later on, you will want to include a line to handle this argument. This can be done by including `<arg name="chatgpt_prompt" default="ENTER-YOUR-DEFAULT-PROMPT-HERE"/>` in your launch file. You can set the default to whatever prompt you would like passed when you are not giving it a specific one.
You will also want to add this line to your code to specify the URL used to connect to ChatGPT: `url = 'https://api.openai.com/v1/completions'`. Since the chat/text completions model is what we are using to get the output responses, the URL ends in 'completions'.
To be able to access ChatGPT, you will need to include the following information in your 'header' dictionary: Content-Type and Authorization. Below is an example of what yours might look like:
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer INSERT-YOUR-OPENAI-API-KEY-HERE',
}
This dictionary holds the information you will pass to ChatGPT. The only required field is the model you want to use, but since you are passing in a prompt, you will want to include that as well. You can specify the maximum number of tokens you want ChatGPT to generate; the best way to get the full output message is to enter the maximum for the model you are using. For example, as you can see below, I am using the 'text-davinci-003' model, and the maximum number of tokens that this model can generate is 2048. Furthermore, you can adjust the sampling temperature, which determines how creative the output of ChatGPT will be. The range is 0-2: higher values make the output more random, while lower values make it more focused and deterministic. An example of a request body is shown below:
data = {
'model': 'text-davinci-003',
'prompt': input_string,
'max_tokens': 2048,
'temperature': 0.5,
}
To get the response for your input message from ChatGPT, include the following line in your code: `response = requests.post(url, headers=headers, json=data)`. Note that if you are using different names for your variables, you will want to pass those names in place of `headers` and `data`.
To publish your output, you will want to make sure that your request went through. If it did, you will be able to get the output from the json file that was returned in the response variable. An example of how to do this is shown below:
if response.status_code == 200:
generated_text = response.json()['choices'][0]['text']
text_pub.publish(generated_text)
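Putting the pieces together, here is a minimal sketch of the whole node. The node, topic, and parameter names are the examples used above; treat the structure as a starting point rather than the exact original:

```python
#!/usr/bin/env python3
import requests
import rospy
from std_msgs.msg import String

rospy.init_node('chatgpt_node')
text_pub = rospy.Publisher('/chatgpt_text', String, queue_size=10)
input_string = rospy.get_param('~chatgpt_prompt')

url = 'https://api.openai.com/v1/completions'
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer INSERT-YOUR-OPENAI-API-KEY-HERE',
}
data = {
    'model': 'text-davinci-003',
    'prompt': input_string,
    'max_tokens': 2048,
    'temperature': 0.5,
}

rospy.sleep(1)  # give the publisher time to register with the master
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    generated_text = response.json()['choices'][0]['text']
    text_pub.publish(generated_text)
```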
By doing all of the steps above, you will be able to connect to ChatGPT, pass it a prompt, and publish its response to that prompt to a topic in ROS.
How to calibrate a camera before using computer vision algorithms (e.g. fiducial detection or VSLAM)
I want to run a computer vision algorithm on my robot, but I'm told that I need to calibrate my camera(s) first. What is camera calibration, and how can I do it?
Camera calibration is the act of determining the intrinsic parameters of your camera. Roughly speaking, the intrinsic parameters of your camera are constants that, in a mathematical model of your camera, describe how your camera (via its interior mechanisms) converts a 3D point in the world coordinate frame to a 2D point on its image plane.
Intrinsic parameters are distinct from extrinsic parameters, which describe where your camera is in the world frame.
So, since calibration deals with the intrinsic parameters of your camera, it practically doesn't matter where you place your camera during calibration.
To hear more about the basics of camera calibration, watch the following 5-minute videos by Cyrill Stachniss in order:
This note describes two ways you can calibrate your camera. The first is by using the `camera_calibration` ROS package. This is the easier approach, since it does almost all of the work for you. The second is by using OpenCV's library directly and writing your own calibration code (or using one in circulation).

The `camera_calibration` Package

First, let's install the package:
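On a Noetic system, the apt package name would follow the usual ros-noetic-* pattern:

```bash
sudo apt install ros-noetic-camera-calibration
```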
Third, tape the corners of the paper to a firm, flat surface, like the surface of a piece of cardboard.
Fourth, measure a side of a single square, convert your measurement to millimeters, and divide the result by 1000. Let's call your result `RESULT`.

Now, let the number of rows of your checkerboard be `M` and its number of columns `N`. Finally, let's say your camera node's name is `CAM`, such that, when you connect it with ROS, it publishes the `/CAM/camera_info` and `/CAM/image_raw` topics. Now, after ensuring that these two topics are being published, execute:
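The command follows the package's usual form, something like the sketch below. Note that `--size` takes the number of interior corners, which is one less than the number of squares in each direction, and `--square` takes the side length in meters (your `RESULT`):

```bash
rosrun camera_calibration cameracalibrator.py --size NxM --square RESULT \
    image:=/CAM/image_raw camera:=/CAM
```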
WARNING The two sections stated above are the only ones you actually want to follow in the official tutorial. Much of the rest of the material there is outdated or misleading.
Sometimes, you might want to use object detection or use certain algorithms that require a camera such as VSLAM. These algorithms usually require a very good calibration of the camera to work properly. The calibration fixes things like distortion by determining the camera’s true parameters such as focal length, format size, principal point, and lens distortion. If you see lines that are curved but are supposed to be straight, then you should probably calibrate your camera.
Ideally, print the checkerboard on a large, matte, sturdy piece of paper so that the checkerboard is completely flat and no reflections can be seen on it. However, it's okay to just print it on a normal piece of paper and put it on a flat surface. Then, take at least ten photos with your camera from a variety of angles and positions so that the checkerboard reaches all corners of the photos. Make sure the whole checkerboard is visible in each picture. Save those photos in an easy-to-find place and use the following to get your intrinsic calibration matrix.
Step by step:
print out checkerboard pattern
take at least 10 photos of the checkerboard at a variety of angles and positions (see image 1 for examples) and save in an easy to access place
download/copy the OpenCV calibration code and run it after changing the folder path (a sketch follows below)
get the intrinsic matrix and distortion and enter it into whatever you need
Image 1: some examples of having the checkerboard cover all corners of the image
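The calibration code itself follows the standard OpenCV chessboard flow; here is a minimal sketch, where the folder path, image extension, and board dimensions are assumptions to adjust:

```python
import glob
import cv2
import numpy as np

pattern = (6, 8)        # interior corners of a 7x9-square board
square_size = 0.02      # 20 mm squares, in meters

# 3D corner positions in the board frame (z = 0)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob('calib_images/*.jpg'):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the intrinsic matrix K and the distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('Intrinsic matrix:\n', K)
print('Distortion coefficients:', dist.ravel())
```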
Potential intrinsic matrix:
[[688.00030953 0. 307.66412893]
[ 0. 689.47629485 274.053018 ]
[ 0. 0. 1. ]]
Pastable into python:
fx, fy, cx, cy = (688.00030953, 689.47629485, 307.66412893, 274.053018)
Distortion coefficients:
[9.39260444e-03, 4.90535732e-01, 1.48962564e-02, 4.68503188e-04, -1.77954077e+00]
A longer lecture, also by Cyrill Stachniss, gives a deep dive into Zhang's method, which is what the `camera_calibration` package discussed above uses under the hood.
This guide assumes you've already got your camera working on ROS, and that you're able to publish `camera_info` and `image_raw` topics for the camera. If you need to set up a new usb camera, see the guide in our lab notebook.
Second, print out the checkerboard on a letter-sized piece of paper.
Next, follow the instructions under section 4, "Moving the Checkerboard," and section 5, "Calibration Results," of the official tutorial for an idea of what a successful calibration process might look like.
Usually this is done with some kind of checkerboard pattern. This can be a normal checkerboard or a ChArUco/ArUco board, which has some patterns that look like fiducials or QR codes on it to further help with calibration. In this tutorial, we'll be using a 7x9 checkerboard with 20x20 mm squares.
The code I used was this calibration script. It also has more notes and information about what the values you are getting mean.
How to use the claw
This FAQ will guide you through the process of using the claw in our color sorting robot project. The applications for the claw are endless, and this guide will allow you to easily write code for use of the claw on a robot. The instructions below are written for ROS and Python.
Make sure you are connected to a robot with a claw. As of now, the only robots with a claw are the platform robots in the lab.
Import 'Bool' from 'std_msgs.msg':
Create a publisher that publishes commands to the claw:
This code creates a publisher called `servo_pub` that publishes to the `/servo` topic and sends a Bool value.
Write code to open or close the claw:
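Here is a minimal sketch, assuming that publishing `True` opens the claw and `False` closes it (flip the values if your claw is wired the other way):

```python
import rospy
from std_msgs.msg import Bool

rospy.init_node('claw_demo')
servo_pub = rospy.Publisher('/servo', Bool, queue_size=1)
rospy.sleep(1)            # let the publisher connect

servo_pub.publish(True)   # open the claw (assumed polarity)
rospy.sleep(2)
servo_pub.publish(False)  # close the claw
```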
Q: Can I control the speed of the claw?
A: The code provided does not control the speed of the claw. You will need to modify the code and use a different message type to control the speed.

Q: Which robots have a claw?
A: There are two robots with the claw attachment as of right now; both are platform robots. One of the claws is big while the other is smaller. Both can be used for different applications, and in both cases the above code should work.

Q: When can I open or close the claw?
A: As long as you have a publisher, you can publish a command to open or close the claw at any time during the main loop of your program. You can have multiple lines of code that open or close the claw throughout a program, or you can just write code to have the claw open once. It's up to you.
For advanced users who are trying to fix weird problems
Ensure that the camera is enabled and there is enough GPU memory:
Add the following lines to the `/boot/firmware/config.txt` file:
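Based on the surrounding text (camera enabled, 256 MB of GPU memory), the lines would be along these lines:

```
start_x=1
gpu_mem=256
```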
And then reboot. Note that config.txt says you should not modify it and should instead make the changes in a file called userconfig.txt. However, I found that userconfig.txt is not invoked, so I added the lines above directly to config.txt.
Check the amount of GPU memory:
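On a Raspberry Pi this can be checked with:

```bash
vcgencmd get_mem gpu
```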
It should show 256 or whatever number you put there.
For advanced users who are trying to fix weird problems
You really need to know what you are doing when you are dealing at this level.
It turns out that it is safe to delete the build and devel subdirectories in ROS.
Sometimes this helps:
First: clean your build by running `catkin_make clean` in the root of your workspace.
Second: remake your project with `catkin_make`.
Third: re-source the `devel/setup.bash` in your workspace.
Plug in the battery and turn on the robot with the power switch; give it a moment and wait for the lidar to start spinning.
Run `tailscale status | grep <name>` to find the robot's IP address, replacing `<name>` with the name of the robot you are trying to connect to.
Go to .bashrc in the my_ros_data folder and get it to look like this, with your robot's name and IP address instead:
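The exact contents are lab-specific, but the pattern is the standard ROS networking setup, roughly:

```bash
# physical robot (comment these out when simulating)
export ROS_MASTER_URI=http://<robot-ip>:11311
export ROS_IP=<your-machine-ip>

# simulation mode (uncomment to use the simulator)
# export ROS_MASTER_URI=http://localhost:11311
# export ROS_IP=127.0.0.1
```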
Open a new terminal and you should see:
You can then ssh into the robot with `ssh ubuntu@100.117.252.97` (using your robot's IP) and enter the password that is in the lab.
Once onboard the robot, enter the command `bringup`, which starts roscore and the Turtlebot's basic functionalities. For the Platform robots, run this instead: `roslaunch platform full_bringup.launch`
After that, open a new terminal (you’ll be in real mode again) and run your program!
To go back to simulation mode, go back to .bashrc, uncomment the settings for simulation mode, and comment out the settings for a physical robot; or type the command `sim` in the terminal. You will need to do this in every terminal that you open. To switch back to real mode, type the command `real`.
When you're done, run `sudo shutdown now` onboard the robot (where you ran bringup) and then turn off the robot with the power switch.
Very handy command in vscode that makes this really easy!
How do I download files from my online VSCode to my local computer?
While the online VSCode does not offer a way to download entire folders and their contents with one click, there is a non-obvious functionality that allows you to download individual files directly from VSCode.
Right click the desired file and select 'Download' from the menu. The file will download to your default download location on your computer.
You will be able to now access the files from your local system and submit them to Gradescope. To submit packages like this, you can set up a Git repository and push the files there, or you can individually download each file in the package using this method, arrange them properly, and submit it on Gradescope.
YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks. It is a powerful model that can be used to detect multiple objects in an image.
It has been wrapped into a user-friendly Python package, Ultralytics (https://docs.ultralytics.com/). To detect objects of interest, the pre-trained YOLOv8 model can be used, or one can customize YOLOv8 by training it with provided training image data. The website Roboflow (https://roboflow.com/) has a variety of object datasets, e.g. a traffic sign dataset (https://universe.roboflow.com/usmanchaudhry622-gmail-com/traffic-and-road-signs/model/1). Once the dataset is downloaded and the Ultralytics package is installed, the YOLOv8 model can be trained easily:
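With the Ultralytics CLI, the training command would look something like this (the model variant and epoch count are illustrative):

```bash
yolo detect train data=traffic_sign.yaml model=yolov8n.pt epochs=100 imgsz=640
```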
Here traffic_sign.yaml is the path to the yaml file inside your downloaded dataset from Roboflow. You have the option to use various YOLOv8 models; see the Detection Docs for usage examples with these models.
The larger the model, the higher the latency in the ROS node that holds it, so the smallest model that works for the objects of interest should be used.
This is a quick and simple way to train a customized model that is powerful for object detection in robot vision.
The detection of edges in image processing is very important when needing to find straight lines in pictures using a camera. One of the most popular ways to do so is using an algorithm called a Canny Edge Detector. This algorithm was developed by John F. Canny in 1986 and there are 5 main steps to using it. Below are examples of the algorithm in use:
The second image displays the result of using canny edge detection on the first image.
Find the intensity gradients of the image
Apply gradient magnitude thresholding or lower-bound cut-off suppression to remove spurious edge responses
Track edges by hysteresis: suppress weak edges so only strong ones appear
The below function demonstrates how to use this algorithm:
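The original function is not reproduced here; this is a minimal sketch of such a node, where the thresholds are assumptions to tune and the topic is written `canny_mask` since ROS topic names cannot contain spaces:

```python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
canny_pub = rospy.Publisher('canny_mask', Image, queue_size=1)

def image_cb(msg):
    # Convert the ROS image into an OpenCV array, then grayscale it
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Blur to suppress noise before edge detection
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Run Canny with the chosen lower/upper thresholds and publish the mask
    edges = cv2.Canny(blurred, 50, 150)
    canny_pub.publish(bridge.cv2_to_imgmsg(edges, encoding='mono8'))
```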
First, the image is converted into something usable by OpenCV. It is then grayscaled and the intensity gradient for the kernel is found.
The image is then blurred for canny preparation.
The lower and upper bounds are decided and the Canny algorithm is run on the image. In the case of this function, the new image is then published to a topic called "canny mask" for use by another node.
The above code was created for use in a project completed by myself and fellow student Adam Ring
by Sampada Pokharel
This is a quick guide to finding the HSV values for any color that you may need for your project. We found this particularly helpful for following a line.
A robot with a camera
VNC
Once your robot is connected, open VNC and run the cvexample.py file in the terminal.
In a separate terminal, run rqt.
In the rqt window, click Plugins -> Dynamic Reconfigure.
Click cvexample and the HSV sliders should pop up.
Adjust the sliders to find the HSV values for your desired colors.
Optionally, in a separate terminal, run rqt_image_view to get a larger frame of the camera image.
Apply a Gaussian blur to smooth the image.
Connect your robot; you can find the guide in the lab notebook.
| Model | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
|---------|-----|------|-------|------|------|-------|
| YOLOv8n | 640 | 37.3 | 80.4  | 0.99 | 3.2  | 8.7   |
| YOLOv8s | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6  |
| YOLOv8m | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9  |
| YOLOv8l | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| YOLOv8x | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
by Helen Lin edited by Jeremy Huey
This guide shows you how to create launch files for your code so that you can launch multiple nodes at once rather than running each node individually.
Open a new terminal window and navigate to a package you want to create a launch file in. Create a folder called 'launch'.
mkdir launch
Navigate to the launch directory and create a new launch file ending in .launch. Replace 'name' with the name of your launch file.
touch name.launch
Add the following code and fill in the parameters within the double quotes.
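A minimal skeleton of the file looks like this (the original may differ slightly); duplicate the `<node>` tag for each node you want to launch:

```xml
<launch>
  <node pkg="" type="" name="" output="screen"/>
</launch>
```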
Here is an example of what the node will look like filled in, using code from the Mini Scouter project. The pkg name can be found in the package.xml.
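A hypothetical filled-in node (the values here are illustrative, not the project's exact ones) might read:

```xml
<node pkg="mini_scouter" type="teleop.py" name="teleop" output="screen"/>
```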
Make sure you have run the following command on all of the files used in the launch file so they can all be found by ROS launch. Replace 'name' with the name of the python file.
chmod +x name.py
Change the permissions of the launch file as well by going to the launch directory and running the following command. Replace 'name' with the name of the launch file.
chmod +x name.launch
Open a new terminal window and run the following command. Replace 'package_name' with the name of the package and 'name' with the name of the launch file.
roslaunch package_name name.launch
For example, to run the Mini Scouter launch file:
roslaunch mini_scouter teleop.launch
All of the nodes you specified in the launch file should now be running.
To have a node launch and open in a new window, such as to run things like key_publisher.py, you can modify the line to include this: <node pkg="object_sorter" type="key_publisher.py" name="key" output="screen" launch-prefix="xterm -e"/>
You must then run in terminal:
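Presumably this installs xterm, which the launch prefix requires:

```bash
sudo apt install xterm
```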
To continue to get more information on launch files, go to here: labnotebook/faq/launch-files.md
A line follower may work well and be easy to build in Gazebo, because the color is preset and you don't need to consider real-life effects. However, if you ever try this in the lab, you'll find that many factors influence the color you detect.
Light: the color of the tape reflects at different angles and rates at different times of day, depending on the weather that day.
Shadow: shadows on the tape can cause color-recognition errors.
Type of tape: paper tape is very unstable for line following; its color will not be recognized correctly. Duct tape solves most of these problems, since the color the camera recognizes is not influenced much by the light and weather that day. It is also tougher and easier to clean than other tape.
Color in other objects: in real life there are not only the lines you put on the floor but also other objects. Sometimes robots will love the color of the floor, since it is a bright white that is easy to include in the range. The size of the color range is a trade-off: if the range is too small, color recognition will not be stable, but if it is too big, the robot will recognize other colors too. If you are running multiple robots, it might be a good idea to cover the red battery wire with electrical tape to avoid recognizing a robot as a red line.
Color pickers we find online usually report HSV with H in [0, 360) and S, V in [0, 100], while OpenCV uses H in [0, 179] and S, V in [0, 255]. So if we use a color picker we find online, we may need to rescale its values to OpenCV's scale.
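A small helper for that rescaling (the function name is illustrative):

```python
def picker_to_opencv(h, s, v):
    # picker scale: H in [0, 360), S and V in [0, 100]
    # OpenCV scale: H in [0, 179], S and V in [0, 255]
    return int(h / 2), int(s * 255 / 100), int(v * 255 / 100)
```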
An overview of the utilization of the iPhone for more accurate GPS data
The utilization of GPS data on a robot is a common requirement within projects. However, a majority of hardware components that can be configured with the robot produce lackluster results. The iPhone uses sophisticated technology to produce more accurate GPS data, which makes it a prime candidate for situations in which a robot needs accurate information. The application GPS2IP uses the technology of the iPhone and communicates it over the internet. Through this application and the iPhone's technology, an accurate source of GPS data is obtained.
The application is solely available on iOS. No research was conducted on applications on Android that produce similar functionality. There are two versions of the application on the App Store: GPS2IP ($8.99) and GPS2IP Lite (Free). The free version only allows transmission for 4 minutes before automatically turning off. The paid version has no restrictions.
Settings > Display & Brightness > Auto-Lock > Never
Open GPS2IP > Toggle On "Enable GPS2IP" Switch
Open GPS2IP > Settings > NMEA Messages to Send > Only Toggle On "GLL" Switch
About troubles that you may run into if you are trying to connect to Bluetooth using Linux or a Raspberry Pi
There are some troubles that you may run into if you are trying to connect to bluetooth using linux or raspberry pi, so this is a guide to try and overcome those difficulties. Hopefully, it is helpful.
Run the following commands to install Pipewire and disable PulseAudio.
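On Ubuntu the commands would be along these lines (package names can vary by release):

```bash
sudo apt install pipewire
systemctl --user --now disable pulseaudio.service pulseaudio.socket
systemctl --user --now enable pipewire pipewire-pulse
```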
To check that Pipewire is properly installed, run
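A common check is:

```bash
pactl info | grep 'Server Name'
# should report something like: PulseAudio (on PipeWire 0.3.x)
```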
If this doesn't work, try restarting Pipewire or your computer:
If you get the error: Connection failure: Connection refused
Check the status of your bluetooth:
To connect your bluetooth device, run the following commands:
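Typically this is done with bluetoothctl (using the MAC address from the card name below as an example):

```bash
bluetoothctl
# inside the bluetoothctl prompt:
scan on
pair 84:6B:45:98:FD:8E
connect 84:6B:45:98:FD:8E
```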
After this, run:
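Listing the sound cards matches the output described next:

```bash
pactl list cards short
```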
You'll get a list of devices and the bluetooth device will be in the form of bluez_card.84_6B_45_98_FD_8E
From what I understand, most Bluetooth headsets have two different profiles: a2dp and headset-head-unit. To use the microphone, you will need to set the card profile of your Bluetooth device to headset-head-unit.
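Using the card name from above:

```bash
pactl set-card-profile bluez_card.84_6B_45_98_FD_8E headset-head-unit
```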
Then, test whether or not the device is recording and playing properly:
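A simple test pair is:

```bash
parecord test.wav   # record a few seconds, then press Ctrl-C
paplay test.wav     # play the recording back
```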
You can set the default input and output devices using the following commands.
First check what sources are available:
Then set the default source and sink devices:
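With pactl this looks like:

```bash
pactl list sources short
pactl list sinks short
pactl set-default-source <source-name>
pactl set-default-sink <sink-name>
```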
The Interbotix PincherX 100 can rotate, move up and down, and extend. To get the arm to go to a specific point in space given (x, y, z) coordinates, the x and y components must be converted to polar coordinates.
Since the arm can move out/back and up/down to specific points without conversion, consider only a top-down view of the arm (x-y axis). To get the rotation and extension of the arm at a specific point, consider the triangle shown above. For rotation θ: $θ = \arctan(x/y)$. For extension r: $r = y/\cos(θ)$. The up-down movement of the arm remains the same as the given z coordinate.
The arm can be moved to the desired point using set_ee_cartesian_trajectory. In this method, the extension and up/down motion are relative, so the current extension and vertical position of the arm must be known as current_r and current_z; a sketch follows below.
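A rough sketch with the Interbotix Python API; the import path and method names follow the interbotix_xs_modules package, but treat the details as assumptions to verify:

```python
import math
from interbotix_xs_modules.arm import InterbotixManipulatorXS

bot = InterbotixManipulatorXS('px100')

def move_to(x, y, z, current_r, current_z):
    theta = math.atan(x / y)   # rotation, as derived above
    r = y / math.cos(theta)    # extension, as derived above
    bot.arm.set_single_joint_position('waist', theta)
    # set_ee_cartesian_trajectory moves relative to the current pose,
    # hence the subtraction of the current extension and height
    bot.arm.set_ee_cartesian_trajectory(x=r - current_r, z=z - current_z)
```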
Amazon Web Services RoboMaker is an online service which allows users to develop and test robotic applications online. Robomaker has a number of advanced features, but this notebook page will focus on developing nodes and simulating them in Gazebo. To use RoboMaker, you will also need to use AWS S3 and Cloud9.
Create an AWS root account if you do not already have one. A free tier account will suffice for getting started, though make note that under a free tier membership you will be limited to 25 Simulation Units (hours) for the first twelve months.
Going forward, you should be logged in with your IAM account. Log into AWS with your IAM, then proceed.
From the AWS Management Console, type “S3” into the “find services” field and click on S3 in the autofill list below the entry box. From the S3 Management Console, click “Create Bucket”
On the first page, enter a name for your bucket. Under region, make sure that it is US East (N Virginia), NOT US East (Ohio), as RoboMaker does not work in the Ohio region.
Skip step 2, “configure options”
In step 3, “Set Permissions”, uncheck all four boxes
Click “Create Bucket”
This bucket is where your bundled robotic applications will be stored.
Back at the AWS Management Console, type "robomaker" into the same entry field you used for S3 earlier to go to the RoboMaker Management Console. In the left-hand menu, under Development, select "Development environments", then click "Create Environment".
Give your environment a name
Keep the instance type as its default, m4.large
Under networking, select the default VPC and any subnet, then click “create” to finish creating your environment
At the end, you will have both a robot workspace and a simulation workspace. The robot workspace (`robot_ws`) contains source files which are to be run on the robot. The simulation workspace (`simulation_ws`) contains the launch files and the world files needed to run Gazebo simulations. Going forward, this guide assumes that you have directories set up exactly as described in the walkthrough linked above, especially that the folders `robot_app` and `simulation_app` exist inside the `src` folders of `robot_ws` and `simulation_ws`, respectively.
robot_ws

Python ROS node files should be stored in the scripts folder inside `robot_app`. When you add a new node, you must also add it to the `CMakeLists.txt` inside `robot_app`. In the section labelled "install" you will see `scripts/rotate.py`; below that line is where you should list all file names that you add (with `scripts/` preceding the name).
When creating your directories, two files were put inside `simulation_app/launch`: `example.launch` and `spawn_turtlebot.launch`.

Inside `spawn_turtlebot.launch`, you will see a line that looks like this (it should be on line 3): `<arg name="model" default="$(optenv TURTLEBOT3_MODEL waffle_pi)" doc="model type [burger, waffle, waffle_pi]"/>`. In the section `default="$(optenv TURTLEBOT3_MODEL waffle_pi)"` you can replace `waffle_pi` with `burger` or `waffle` to change the model of turtlebot3 used in the simulation.

Inside `example.launch` you will see this line (it should be on line 8): `<arg name="world_name" value="$(find simulation_app)/worlds/example.world"/>`. You can replace `example.world` with the name of another world file to change the world used for the simulation. Note that the world file must be present in the folder `simulation_app/worlds`. You can copy world files from the folder `catkin_ws/src/turtlebot3_simulations/turtlebot3_gazebo/worlds` (which is on your personal machine if you have installed ROS Kinetic).
In order to use your applications in simulation, they must first be built and bundled to a format that RoboMaker likes to use. At the top of the IDE, click: RoboMaker Run → Add or Edit configurations, then click Colcon Build.
Give your configuration a name; I suggest "robot" or "simulation". For the working directory, select the path to either `robot_ws` or `simulation_ws` (you will have to do this twice, once for each workspace).
Do the same, but this time for Colcon Bundle.
You now have shortcuts to build and bundle your robot and simulation applications.
Next, in the configuration window that you have been using, select workflow to create a shortcut which will build and bundle both applications with one click. Give your workflow a name, then put your actions in order. It is important that the builds go before the bundles.
Once you’ve made your workflow, go to: RoboMaker Run → workflow → your workflow to build and bundle your applications. This will create a bundle folder inside both robot_ws
and simulation_ws
. Inside bundle
, there is a file called output.tar.gz
. You can rename it if you like, but remember where it is.
Finally, we will go back to the configuration window to configure a simulation launcher.
Give it a name
Give it a duration (in seconds)
Select “fail” for failure behavior
Select an IAM role - it doesn’t necessarily matter which one, but AWSServiceRoleForRoboMaker is recommended
Skip to the robot application section
Give it a name
Bundle path is the path to `output.tar.gz` (or whatever you renamed it) inside `robot_ws/bundle`
S3 bucket should be the bucket you created at the beginning of this guide
Launch package name is robot_app
Launch file is the launch file you wish to use
NOTE: the name of the robot application and the launch file should be related in some way. The simulation application section is much the same, except everything that was "robot" should be replaced with "simulation."
OPTIONAL: Once your simulation launch configuration has been saved, you can add it as the final action of the workflow you made earlier.
These are the steps that have been found to work when you want to run a simulation in RoboMaker. They are kludgy, and perhaps a more elegant solution exists, but for now this is what has been found:
Make sure your applications have been built and bundled. Then from the IDE, go to RoboMaker Run → Simulation Launch → your simulation config. This will upload your application bundles to the S3 bucket you specified, then try to start the simulation. IT WILL PROBABLY FAIL. This is okay, the main goal of this step was to upload the bundles.
Go back to the RoboMaker Management Console, and in the left menu Select Simulations → Simulation Jobs, then click “Create simulation job”
Now we will configure the simulation job again:
Set the failure behavior to fail
For IAM role, select “create new role”, then give your role a name. Each simulation job will have its own IAM role, so make the name related to the simulation job.
Click next
For robot application, select the name you gave when you configured the simulation in the IDE. The launch package name and launch file will be the same too, but you must type those in manually.
Click next, the next page is for configuring the simulation application, the same rules apply here as the robot application
Click next, review the configuration then click create.
RoboMaker will begin preparing to launch your simulation, in a few minutes it will be ready and start automatically. Once it is running, you will be able to monitor it through gazebo, rqt, rviz and the terminal.
Be sure that once you are done with your simulation (if it is before the duration expires) to cancel the simulation job under the action menu in the upper right of the simulation job management screen.
Inputs: `posex1` is a float that represents the x-axis coordinate of your robot, and `posey1` is a float that represents your robot's y-axis coordinate. `posex2` and `posey2` are floats that represent the x and y of your robot's goal. Lastly, `theta` represents your robot's current pose angle.
Firstly, the `angle_goal` from your robot's coordinate (not taking its current angle into account) is calculated by finding the arc tangent of the difference between the goal coordinates and the robot's coordinates: `angle_goal = atan2(posey2 - posey1, posex2 - posex1)`.
In order to decide whether your robot should go left or right, we must determine where the `angle_goal` is relative to its current rotational direction. If the `angle_goal` is in the robot's left rotational hemisphere, the robot should rotate left; otherwise it should rotate right. Since we are working in radians, π is equivalent to 180 degrees. To check whether the `angle_goal` is within the left hemisphere of the robot, we add π to `theta` (the robot's current direction) to get the upper bound of the range of values the target may fall in. If the `angle_goal` is between `theta` and that upper bound, then the robot must turn left to most efficiently reach its goal.
If your robot is at (0,0), its rotational direction is 0, and its target is at (2,2), then its `angle_goal` would equal 0.785. First we check whether the current angle's deviation from the `angle_goal` is significant by finding the difference and seeing if it is larger than 0.1. If the difference between the angles is insignificant, the robot should go straight towards its goal. In this case, however, `angle_goal` - `theta` (0.785 - 0) is greater than 0.1, so we know that we must turn left or right to near our `angle_goal`. To find out whether this angle is to the left or the right of the robot's current angle, we add π to the current angle to find the boundary between its left and right hemispheres. In this case, since the `angle_goal` is between `theta` and the upper bound of 3.14 (0 + π), we know that the robot must turn left to reach its goal.
However, if `theta` (your robot's current direction) + π is greater than 2π (the maximum radians in a circle), then the left hemisphere of your robot partially wraps across the 0-radian point of the circle. To account for that case, we calculate how far the goal range wraps around the circle past the origin. If there is a remainder, we check whether the `angle_goal` is between `theta` and 2π, or whether the `angle_goal` falls within the remainder of the range that wraps past the origin. If either of these conditions is met, then your robot should turn left to most efficiently arrive at its goal; otherwise it should turn right.
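The logic above, collected into one function; the variable names follow the description and the 0.1 threshold is the one used in the example:

```python
import math

def turn_direction(posex1, posey1, posex2, posey2, theta):
    # Angle from the robot to the goal, normalized to [0, 2*pi)
    angle_goal = math.atan2(posey2 - posey1, posex2 - posex1) % (2 * math.pi)
    if abs(angle_goal - theta) < 0.1:
        return 'straight'
    upper = theta + math.pi   # upper bound of the left hemisphere
    if upper <= 2 * math.pi:
        return 'left' if theta < angle_goal <= upper else 'right'
    # The left hemisphere wraps past 0: check both segments
    wrap = upper % (2 * math.pi)
    if angle_goal > theta or angle_goal <= wrap:
        return 'left'
    return 'right'
```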
Moving the arm to point (x, y), top-down view
Once you have an AWS account, you should create an IAM user for your account; AWS recommends not using your root user account when using services like RoboMaker. Remember the username and password of the account you create. Additionally, save the link where you can log in with those credentials.
RoboMaker has a limited documentation set that can help you use the software. The "Getting Started" section can help you familiarize yourself with the software by working with a sample application.
In your Cloud9 environment, use the bash command line at the bottom of the screen and follow these instructions to create the directories needed to work with ROS.
ImageNet file xml format to Darknet text format.
Full Repo here: https://github.com/campusrover/Robotics_Computer_Vision/tree/master/utils/xml_2_txt
Input xml file.
Output text file.
I used Darknet for real-time object detection and classification. Sometimes you need to collect your own training dataset to train your model. I collected training dataset images and found an awesome tool for labeling images, but it generates xml files, so I needed to implement a tool which translates ImageNet xml format to Darknet text format.
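The repo linked above has the real implementation; the core of such a conversion looks roughly like this, assuming Pascal VOC/ImageNet-style annotations:

```python
import xml.etree.ElementTree as ET

def voc_xml_to_darknet(xml_path, class_names):
    # Darknet format: <class_id> <x_center> <y_center> <width> <height>,
    # all normalized to the image size
    root = ET.parse(xml_path).getroot()
    w = float(root.find('size/width').text)
    h = float(root.find('size/height').text)
    lines = []
    for obj in root.findall('object'):
        cls = class_names.index(obj.find('name').text)
        box = obj.find('bndbox')
        xmin, ymin = float(box.find('xmin').text), float(box.find('ymin').text)
        xmax, ymax = float(box.find('xmax').text), float(box.find('ymax').text)
        lines.append('%d %.6f %.6f %.6f %.6f' % (
            cls, (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h,
            (xmax - xmin) / w, (ymax - ymin) / h))
    return lines
```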
Step by step tutorial for creating a gazebo world
This tutorial is largely based on what I have learnt here: Building a world. Please refer to this official tutorial if you need more details.
First, open Gazebo: either search for gazebo in the Unity Launcher GUI or simply type `gazebo` in the terminal. Click on Edit --> Building Editor and you should see the following page. Note there are three areas:
Palette: you can choose models that you wish to add to the map here.
2D View: the only place you make changes to the map.
3D View: view only.
You may create a scene from scratch, or use an existing image as a template to trace over. On the Palette, click on import, select a 2D map plan image in the prompt that appears, and click next. To make sure the walls you trace over the image come up at the correct scale, you must set the image's resolution in pixels per meter (px/m). To do so, click/release on one end of a wall; as you move the mouse, an orange line will appear as shown below. Click/release at the other end of the wall to complete the line. Once you successfully set the resolution, click Ok, and the 2D map plan image you selected should show up in the 2D View area.
Select Wall from the Palette.
On the 2D View, click/release anywhere to start the wall. As you move the mouse, the wall's length is displayed.
Click again to end the current wall and start an adjacent wall.
Double-click to finish a wall without starting a new one.
Double-clicking on an existing wall allows you to modify it.
You can manipulate other models likewise. For more detailed instructions, please refer to http://gazebosim.org/tutorials?tut=build_world.
You need to create a package for your Gazebo world so that you can use `roslaunch` to launch it later.
Go to your catkin workspace
$ cd ~/catkin_ws/src
Create a package using the following command.
$ catkin_create_pkg ${your_package_name}
Go to your package and create three folders: launch, worlds and models.
Once you finish editing the map, give a name to your model at the top of the Palette and click File -> Save As to save the model you just created into `../${your_package_name}/models`. Click File -> Exit Building Editor to exit. Please note that once you exit the editor, you are no longer able to make changes to the model. Then click File -> Save World As and save the world into `../${your_package_name}/worlds`. I will refer to this world file as `${your_world_file_name}.world` from now on.
Go to `../${your_package_name}/launch` and make a new file `${your_launch_file}`. Copy and paste the following code into your launch file and substitute `${your_package_name}` and `${your_world_file_name}` with their actual names.
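A minimal version that loads a custom world through gazebo_ros would be as follows (spawning a turtlebot needs an additional include, omitted here):

```xml
<launch>
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find ${your_package_name})/worlds/${your_world_file_name}.world"/>
  </include>
</launch>
```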
Go to the workspace where your new package was created, e.g. `cd ~/catkin_ws`, run `catkin_make`, and then `roslaunch ${your_package_name} ${your_launch_file}`. You should see the Gazebo map you just created along with a turtlebot loaded.
The building editor is a faster, easier-to-use tool than the model editor, as it can create a map in mere minutes. With the model editor, you have more technical control over the world, with the trade-off being a more tedious process. The model editor can help make more detailed worlds, as you can import .obj files that can be found on the internet or made in 3D modeling software such as Blender. For the purposes of this class, USE THE BUILDING EDITOR. For your own recreational robotic experimentation purposes, of course, do whatever you like.
If you do wish to use the model editor, here are two tips that will help you to get started making basic, serviceable worlds.
The basic shapes that Gazebo has are a greyish-black by default, which is difficult to see on Gazebo's greyish-black background. To change the color, follow these steps:
1. Right click on the model
2. Select "open link inspector"
3. Go to the "visual" tab
4. Scroll down to "material" and open that section
5. Use the RGB values labeled "ambient" to alter the color - set them all to 1 to make it white.
Use the shortcut s to open the scaling tool and grab the three axes to stretch the shape; hold ctrl to snap it to the grid. Use the shortcut t to switch to the translation tool, which moves the model around; hold ctrl to snap it to the grid. Use the shortcut r to open the rotation tool and grab the rings to rotate the object.
If an object isn't static, it will fall over and obey the laws of physics if the robot collides with it. To avoid this, click the object in the left-hand menu and set the is_static field.
Does the model editor seem like a hassle already? Then just use the building editor.
Another tutorial for creating gazebo worlds
Building a Gazebo world might be a little daunting of a task when you are getting started. One might want to edit existing Gazebo worlds, but I will save you the trouble and state that it's not going to work.
Open a VNC terminal and start Gazebo. Once you are in, bring your cursor to the top left-hand corner and find the Building Editor in the Edit tab.
Once in the building editor, click on Wall to create the boundaries in the top half of the editor. Left click to get out of the building mode. If you would like to create walls without standard increments, press shift while dragging the wall. If you would like to increase or decrease the thickness of a wall, click on the walls you would like to change and a modal with options to change them will open.
After you are satisfied with the boundaries, save them as a model into the models folder within your project. When the model has been saved, you will be brought back to Gazebo with the model that you have built. Change the pose of the model if need be, according to your needs, then save the world.
Upload your new model into the corresponding part of the launch file. If you find that your robot is not in the right place, open the launch file, make the changes to the model accordingly, and save it as the world again. That's how you can build a world from scratch; hope this helped.
How to use the TKInter package for Ros Tools
Brendon Lu and Benjamin Blinder
Make sure you have the following packages imported: `tkinter` and `rospy`. The tkinter module is a basic and simple, yet effective way of implementing a usable GUI.
Because tkinter has a hard time recognizing functions created at strange times, you should next create any functions you want to use for your node. For this example, I recommend standard functions to publish very simple movement commands.
You still need to initialize the ros node, as well as any publishers and subscribers, which should be the next step in your code.
Now it is time to create a basic window for your GUI. First, write `[window name] = tk.Tk()` to create a window, then set the title and size of the window. Not declaring a window size will create a window that adapts automatically to the size of whatever widgets you create on the window.
The next step is to populate your window with the actual GUI elements, which tkinter calls "widgets". Here we will be making two basic buttons, but there are other common widget types such as the canvas, entry, label, and frame widgets.
And now that you have created widgets, you will notice that if you run your code, it is still blank. This is because the widgets need to be added to the window. You can use "grid", "place", or "pack" to put the widget on the screen, each of which have their own strengths and weaknesses. For this example, I will be using "pack".
And now finally, you are going to run the tkinter mainloop. Please note that you cannot run a tkinter loop and the rospy loop in the same node, as they will conflict with each other.
To run the node we have created here, you should have your robot already running either in the simulator or in real life, and then simply use rosrun to run your node. Here is the code for the example tkinter node I created, with some more notes on what the different parts of the code do.
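The original file is not included here; the following is a minimal reconstruction of such a node (the topics and buttons are illustrative):

```python
import tkinter as tk
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('tk_teleop')
cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

def go_forward():
    t = Twist()
    t.linear.x = 0.2   # m/s
    cmd_pub.publish(t)

def stop():
    cmd_pub.publish(Twist())  # all zeros stops the robot

window = tk.Tk()
window.title('Robot Control')
tk.Button(window, text='Forward', command=go_forward).pack()
tk.Button(window, text='Stop', command=stop).pack()

# tkinter's mainloop takes the place of rospy.spin(); don't run both
window.mainloop()
```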
Although this code does technically move the robot and with some serious work it could run a much more advanced node, I do not recommend doing this. I would recommend that you create two nodes: A GUI node and a robot node. In the GUI node, create a custom publisher such as command_pub=rospy.Publisher('command', Twist, queue_size=1)
and use this to send messages for movement to the robot node. This way, the robot node can handle things like LiDAR or odometry without issues, since the tkinter update loop will not handle those kinds of messages very efficiently.
Overall, tkinter is an industry staple for creating simple GUIs in Python, being fast, easy to implement, versatile, and flexible, all with an intuitive syntax. For more information, check out the links below.
If you're having trouble getting a file from your code virtual machine onto your actual computer to submit it onto Gradescope, never fear, an easy solution is here:
Right click on the desired file from the file explorer (normally on the left panel) on your code viewer.
Select 'Download' and the file will download to your browser's default download location (typically your 'Downloads' folder).
Voila! You have successfully moved a file from your online code to your machine.
By Adam Ring
Using a pre-trained deep learning model from a framework such as Pytorch has myriad applications in robotics, from computer vision to speech recognition, and many places in between. Sometimes you have a model that you want to train on another system with more powerful hardware, and then deploy the model elsewhere on a less powerful system. For this task, it is extremely useful to be able to transfer the weights of your trained model onto another system, such as a virtual machine running Ubuntu 18.04. These methods for model transfer will also work on any machine with Pytorch installed.
It is strongly discouraged to mix versions of Pytorch between training and deployment. If you train your model on Pytorch 1.8.0 and then try to load it using Pytorch 1.4.0, you may encounter errors due to differences in the modules between versions. For this reason, you should load your Pytorch model using the same version that it was trained on.
Let's assume that you have your model fully trained and loaded with all of the necessary weights.
model = MyModel()
model.train()
For instructions on how to train a machine learning model, see this section on training a model in the lab notebook. There are multiple ways to save this model, and I will be covering just a few in this tutorial.
Saving the state_dict

This is recommended as the best way to save the weights of your model, as its `state_dict`; however, it does require some dependencies to work. Once you have your model, you must specify a `PATH` to the directory in which you want to save your model. This is where you can name the file used to store your model.
PATH = "path/to/directory/my_model_state_dict.pt"
or
PATH = "path/to/directory/my_model_state_dict.pth"
You can either specify that the `state_dict` be saved using the `.pt` or the `.pth` format.
Then, to save the model to a path, simply call this line of code.
torch.save(model.state_dict(), PATH)
Loading the state_dict

Download the `my_model_state_dict.pt/pth` file into the environment in which you plan on deploying it, and note the path that the state dict is placed in. In order to load the model weights from the `state_dict` file, you must first initialize an untrained instance of your model.
loaded_model = MyModel()
Keep in mind that this step requires you to have your model architecture defined in the environment in which you are deploying your model.
Next, you can simply load your model weights from the state dict using this line of code.
loaded_model.load_state_dict(torch.load("path/to/state/dict/my_model_state_dict.pt/pth"))
The trained weights of the model are now loaded into the untrained model, and you are ready to use the model as if it is pre-trained.
TorchScript is a framework built into Pytorch which is used for model deployment in many different types of environments without having the model defined in the deployment environment. The effect of this is that you can save a model using tracing and load it from a file generated by tracing it.
What tracing does is follow the operations performed on an input tensor that is run through your model. Note that if your model has conditionals such as `if` statements or external dependencies, the tracing will not record these. Your model must also operate only on tensors.
In order to trace your trained model and save the trace to a file, you may run the following lines of code.
PATH = "path/to/traced/model/traced_model.pt/pth"
dummy_input = torch.ones(typical_input_size, dtype=dtype_of_typical_input)
traced_model = torch.jit.trace(model, dummy_input)
torch.jit.save(traced_model, PATH)
The `dummy_input` can simply be a bare tensor that is the same size as a typical input for your model. You may also use one of the training or test inputs. The content of the dummy input does not matter, as long as it is the correct size.
In order to load the trace of a model, you must download the traced model `.pt` or `.pth` file into your deployment environment and note the path to it.
All you need to do to load a traced model for deployment in Pytorch is use the following line of code.
loaded_model = torch.jit.load("path/to/traced/model/traced_model.pt/pth")
Keep in mind that the traced version of your model will only work for torch tensors, and will not mimic the behavior of any conditional statements that you may have in your model.
Please see the full tutorial in the repo: https://github.com/campusrover/Robotics_Computer_Vision/tree/master/utils/labelImg
AprilTags are an alternative to ArUco tags.
Setting up apriltags to work with your program is pretty simple.
You can run the following two lines to install apriltags and its ros version onto your ubuntu:
sudo apt install ros-noetic-apriltag
sudo apt install ros-noetic-apriltag-ros
Afterwards, connect your camera and run:
roslaunch usb_cam usb_cam-test.launch
Change the video_device to /dev/video2 (or whichever video device you figure out it is) to take input from the usb cam. Create a head_camera.yaml file from this example and add the calibration information.
roslaunch apriltag_ros continuous_detection.launch publish_tag_detections_image:=true
You can remap input topics to what you need:
image_rect:=usb_cam/image_raw
camera_info:=usb_cam/camera_info
This guide will show how to set up an external camera by connecting it to a computer running ROS. This guide assumes your computer is running ROS on Ubuntu natively, not via the VNC, in order to access the computer's USB port. (The lab has computers with Ubuntu preinstalled if you need one.)
Installation
Install the ros package usb_cam: sudo apt install ros-noetic-usb-cam
Install guvcview for the setup: sudo apt install guvcview
Edit the launch file, usb_cam-test.launch
Find the location of the file inside the usb_cam package: roscd usb_cam
Set video_device
parameter to the port of your camera
Check which ports are in use: ls /dev
and look for video0, video1, etc.
Check the output of each port: guvcview /dev/<port>
If you unplug the camera between uses or restart your computer, the camera may be on a different port. Check every time!
Run the node! roslaunch usb_cam usb_cam-test.launch
If you don't know how to read a BLDC motor's spec sheet, check out this section.
Authors: Julian Ho, Cass Wang
BLDC motor stands for brushless DC motor; as their name implies, brushless DC motors do not use brushes. With brushed motors, the brushes deliver current through the commutator into the coils on the rotor.
With a BLDC motor, it is the permanent magnet that rotates; rotation is achieved by changing the direction of the magnetic fields generated by the surrounding stationary coils. To control the rotation, you adjust the magnitude and direction of the current into these coils.
By switching each pair of stator coils on and off very quickly, the BLDC motor can achieve a high rotational speed.
This is a simple table comparing a brushed DC motor, AC induction motor, and a BLDC motor:
BLDC motors are commonly found in drones, electric cars, even robots!
There are a couple of different types of BLDC motors on the market for different applications. Here are some examples:
<150g weight
<5cm diameter
11.1-22.2v operational voltage
~0.3 NM torque
application: small drones
400-1000g weight
5-10cm diameter
22.2-44.4v operational voltage
~4 NM torque
application: RC cars, electric skateboard, robot actuator
~400g weight
5-8cm diameter
11.1-22.2v operational voltage
~0.9 NM torque
application: 3D printers, servos
When shopping for a BLDC motor, there are a couple motor specific terms to consider.
Max RPM (KV - RPM per volt): 2200 KV @ 10 V: KV x V = 22,000 RPM max speed
Max Torque (NM - newton-meters): 1 NM = able to lift a 1 kg weight attached to the end of a 1-meter-long stick
Max Power (W - watts): 835 W @ 10 V: W/V = 83.5 A max current draw
Motor Efficiency (%): 90% efficiency = 90% of theoretical power
Input Volt (S - li-ion cells): 6S: S x 3.7 V = 22.2 V
Max Current (A - amps): 50 A @ 10 V: A x V = 500 W max power
Motor poles (N-P): 24N40P = 24 stator poles, 40 permanent magnet poles
Outrunner/Inrunner: outrunner = the motor body spins with the output shaft; inrunner = only the output shaft spins, the body is stationary
Motor numbering: 6355 = 63 mm diameter, 55 mm length
To drive a BLDC motor, you need a dedicated speed controller (ESC) to control it. Here are different types of ESC for different applications. These ESCs (like the motors above) are considered hobbyist-use, but they are quite sufficient for building small/mid-size robots.
very light ~9g
very small footprint (size of a dollar coin)
1-6S input voltage
~40A max continuous current
cheap
application: small drone, small fighter robot, RC helicopter
downside: cannot handle big motors, heat up very quickly, only simple motor control algorithms available
3-12S input voltage
~50A max continuous current
can handle medium size motors
have active cooling
affordable
application: RC car, RC boat, electric skateboard
downside: limited control protocol (PWM only), only simple motor control algorithms available
commonly used in robotic arm, actuator control
more expensive
~120A max continuous current
can handle large motors
offer fine positional control of motor
offer programmatic control (serial/USB/CANbus)
application: robot, robotic arm
downside: quite pricey, not plug-and-play, need to tune the motor manually before use
These are the two most common motor control algorithms used in hobbyist ESCs:
Sensorless BLDC Motor Control
Advantage: No need for dedicated encoder on the motor
Downside: weak low-speed control; at lower speeds there is less torque
Field Oriented Control (FOC)
Advantage: Full torque at any speed
Downside: Require fine motor tuning (PID), and dedicated encoder
Detailed steps for using the Coral TPU
The Google Coral TPU is a USB machine learning accelerator which can be plugged into a computer to increase machine learning performance on Tensorflow Lite models. The following documentation details how to install the relevant libraries and use the provided PyCoral library in Python to make use of the Coral TPU.
Linux Debian 10, or a derivative thereof (such as Ubuntu 18.04)
A system architecture of either x86-64, Armv7 (32-bit), or Armv8 (64-bit), Raspberry Pi 3b+ or later
One available USB port (for the best performance, use a USB 3.0 port)
Python 3.6-3.9
Note: For use with ROS, you will want to have ROS Noetic installed on Ubuntu 20.04.
Follow these steps to get your environment configured for running models on the Coral TPU.
Open up a terminal window and run the following commands:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
Plug in your Coral TPU USB Accelerator into a USB 3.0 port on your computer. If you already have it plugged in, unplug it and then plug it back in.
Install one of the following Edge TPU Runtime libraries:
To install the reduced clock-speed library, run the following command:
sudo apt-get install libedgetpu1-std
Or run this command to install the maximum clock-speed library:
sudo apt-get install libedgetpu1-max
Note: If you choose to install the maximum clock-speed library, an excessive-heat warning message will display in your terminal. To close this window, simply use the down arrow to select OK, press enter, and your installation will continue.
In order to switch runtime libraries, just run the command corresponding to the runtime library that you wish to install. Your previous runtime installation will be deleted automatically.
Install the PyCoral Python library with the following command:
sudo apt-get install python3-pycoral
You are now ready to begin using the Coral TPU to run TensorFlow Lite models.
The following section details how to download and execute a TFLite model that has been compiled for the Edge TPU.
Once you have downloaded your model, place it into a folder within your workspace.
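As a sketch of what inference then looks like, here is a minimal PyCoral classification example; the file names (model_edgetpu.tflite, test.jpg) are placeholders for your own compiled model and input image.

```python
# Minimal PyCoral classification sketch; file names are placeholders.
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify
from PIL import Image

interpreter = make_interpreter('model_edgetpu.tflite')  # Edge TPU compiled model
interpreter.allocate_tensors()

# resize the input image to whatever size the model expects
image = Image.open('test.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# print the top 3 classes with their scores
for c in classify.get_classes(interpreter, top_k=3):
    print(c.id, c.score)
```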
A guide to installing the Astra Pro Depth Camera onto a robot
by Veronika Belkina
This is a guide to installing the Astra Pro Depth Camera onto a robot and the various problems and workarounds that were experienced along the way.
If this goes well, then you're pretty much all set and should skip down to the usb_cam section. If this doesn't go well, then keep reading to see if any of the errors that you received can be resolved here.
make sure to run sudo apt update
on the robot
if you are getting an error that mentions a lock when you are installing dependencies, try to reboot the robot: sudo reboot now
If you have tried to install the dependencies using this:
and this is not working, then here is an alternative to try:
If this is successful, then within ~/catkin_ws/src, git clone https://github.com/orbbec/ros_astra_camera
and run the following commands:
After this, run catkin_make
on the robot. This might freeze on you. Restart the robot and run catkin_make -j1
to run on a single core. This will be slower, but will finish.
At this point, hopefully, all the errors have been resolved and you are all set with the main astra_camera package installation.
There is one more step that needs to be done.
Test it by running rosrun usb_cam usb_cam_node
Afterwards, when you need to run the depth camera and need the rgb stream as well, you will need to run the following instructions onboard the robot:
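The exact commands were omitted here; based on the surrounding steps, they are most likely the astra launch file plus the usb_cam node, each in its own ssh session:

```bash
# hedged guess at the two onboard commands
roslaunch astra_camera astra.launch
rosrun usb_cam usb_cam_node
```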
If this is something that you will be needing often, then it might be worth it to add the usb_cam node into the astra launch file as you do need to ssh onto the robot for each instruction. The usb_cam_node publishes to the topic /usb_cam/image_raw
. You can check rostopic list
to see which one suits your needs.
If you want to just explore the depth camera part of the camera, then just run the astra.launch file.
Then there will be a few topics that the camera publishes:
If you open rviz, click add, go to the by topic tab and open the PointCloud2 topic under /camera/depth/points:
If you’re working with just the camera, you might need to set the fixed frame; for now, any frame other than map will do. A point cloud of what’s in front of you should appear:
If you're looking through rqt, then you might see something like this:
From there, you could use colour detection, object detection, or whatever other detector you want to get the pixels of your object and connect them with the pixels in the point cloud. It should output a distance from the camera to the object. However, I can’t speak for how reliable it is. It can’t see objects right in front of it - for example when I tried to wave my hand in front of it, it wouldn’t detect it until it was around 40 or so cm away.
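If you go that route, here is a hedged sketch of looking up the 3D point behind a detected pixel (the topic matches the list above; the pixel coordinates are placeholders from your detector):

```python
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def cloud_cb(cloud):
    u, v = 320, 240  # placeholder pixel from your color/object detector
    # read the (x, y, z) point behind that pixel; NaNs mean no depth there
    for x, y, z in point_cloud2.read_points(
            cloud, field_names=("x", "y", "z"), skip_nans=False, uvs=[(u, v)]):
        rospy.loginfo("point at pixel: x=%.2f y=%.2f z=%.2f", x, y, z)

rospy.init_node('depth_lookup')
rospy.Subscriber('/camera/depth/points', PointCloud2, cloud_cb, queue_size=1)
rospy.spin()
```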
Before you use this tutorial, consult with the Campus Rover Packages which outline setting up ngrok with flask and getting to the Alexa Developer Console.
The Flask Alexa Skills Kit module allows you to define advanced voice functionality with your robotic application.
In your VNC terminal, install the flask-ask package (pip install flask-ask). Then, in your rospy node file, declare your Flask application and connect it to Flask-ASK. So, if your ngrok subdomain is campusrover and your Alexa endpoint is /commands, your code would look like the sketch below.
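A minimal sketch, assuming flask-ask is installed and that the port matches whatever you pointed ngrok at:

```python
import rospy
from flask import Flask
from flask_ask import Ask, question

app = Flask(__name__)        # declare your Flask application
ask = Ask(app, '/commands')  # connect it to Flask-ASK at your Alexa endpoint

@ask.launch
def launch():
    # spoken when the user opens the skill
    return question("Hello, what do you need?")

if __name__ == '__main__':
    rospy.init_node('alexa_voice_node')  # node name is a placeholder
    app.run(host='0.0.0.0', port=5000)   # port must match your ngrok tunnel
```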
On the Alexa Developer Console, define your intents and determine which slots your intents will use. Think of intents as functions, and slots as parameters you need for that function. If a slot can contain multiple items, mark that in the developer console. You do not need to provide the Alexa responses on the Developer Console because you will be writing them with Flask-ASK. The advantage of doing this is that you can write responses to the user that take into account the robot's state, and publish information to other nodes as you receive input with Alexa.
Let's assume we have an intent called 'bring_items' that can contain multiple items to fetch. Assume that the robot needs at least one item to fulfill this intent and that the robot has a weight capacity for how much it can carry. Let's also assume we have some kind of lookup table for these items which tells us their weight. With Flask-ASK we can quickly build this.
You are required to have a response for launching the skill, marked with the @ask.launch decorator. This intent is not called every single time you use the skill, but it's a good way to tell the user what the bot can do or what information the bot needs from them. There are a few different response types:
statement(str: response)
which returns a voice response and closes the session.
question(str: response)
which you can use to ask the user a question and keep the session open.
elicit_slot(str: slot_name, str: response)
which you can use to prompt the user for a specific type of information needed to fulfill the intent.
confirm_slot(str: slot_name, str: response)
which you can use to confirm with the user that Alexa heard the slot correctly.
It is important to note that to use elicit_slot and confirm_slot you must have a Dialog Model enabled on the Alexa Developer Console. The easiest way I have found to enable this is to create an intent on the console that requires confirmation. To avoid this intent being activated, set its activation phrase to gibberish like 'asdlkfjaslfh'. You can check a switch marking that this intent requires confirmation.
Now let's build out an intent in our flask application.
Start with a decorator for your skill response marking which intent you are programming a response for.
First, let's ensure that the user has provided the robot with some items. When you talk to Alexa, it essentially just POSTs a JSON dictionary to your endpoint, which your Flask app can read; the slots for this intent live inside that dictionary.
Let's call our items slot 'items'. To check that the user has provided at least one item, test for the 'value' key, as in the full sketch below.
This will check that there is an item and, if there is not, prompt the user for some items. The string 'value' is a key in the slot dictionary if and only if the user has provided the information for this slot, so this is the best way to check.
Let's assume that our robot has a carrying capacity of 15 pounds and has been asked to carry some items that weigh more than 15 pounds. Once we've checked that the items are too heavy for the robot, you can elicit the slot again with a different response, and the user can update the order. Returning elicit_slot responses keeps the Alexa session open with that intent, so even though each call is a return statement, you are essentially making a single function call and updating a single JSON dictionary.
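Putting this together, a hedged sketch of the whole intent; ITEM_WEIGHTS, the slot parsing, and the intent name are placeholders for your own lookup table and interaction model:

```python
from flask_ask import request, statement, elicit_slot

ITEM_WEIGHTS = {'hammer': 5, 'bricks': 20, 'tape': 1}  # pounds (placeholder)
CAPACITY = 15                                           # pounds

@ask.intent('bring_items')
def bring_items():
    # Alexa POSTs a JSON dictionary; the slots for this intent live here
    slots = request['intent']['slots']

    # 'value' is a key iff the user actually provided the slot
    if 'value' not in slots['items']:
        return elicit_slot('items', 'What items should I bring you?')

    # naive parsing, just for the sketch
    items = [i.strip() for i in slots['items']['value'].split('and')]
    total = sum(ITEM_WEIGHTS.get(i, 0) for i in items)
    if total > CAPACITY:
        return elicit_slot('items',
                           'That is more than I can carry. What else can I bring?')

    # here you could publish the approved order to another node with rospy
    return statement('OK, bringing ' + ', '.join(items))
```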
Once the robot is given a list of items that it can carry, you can use a rospy publisher to send a message to another node to execute whatever robotic behavior you've implemented.
You should also include a fallback intent (AMAZON.FallbackIntent). This is the intent used for any phrase that you have not assigned to one of your own intents. If you do not implement a response for it, you will very likely get error messages from Alexa.
Skills come prebuilt with some intents such as these. If a user activates one of these intents, and you don't define a response, you will get a skills response error. You should thoroughly test possible utterances a user might use to avoid errors. Skills response errors do not crash the application, but it makes for bad user-experience.
Another default intent you will likely need is AMAZON.CancelIntent, which handles negative responses from the user. An example of this:
User: Alexa, launch campus rover
Alexa: hello, what do you need
User: nothing
Alexa: [AMAZON.CancelIntent response]
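A hedged sketch of both default handlers:

```python
@ask.intent('AMAZON.FallbackIntent')
def fallback():
    # catches utterances that match none of your own intents
    return question("Sorry, I didn't catch that. What do you need?")

@ask.intent('AMAZON.CancelIntent')
def cancel():
    # handles 'nothing', 'never mind', and similar negative responses
    return statement("OK, goodbye.")
```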
I hope this tutorial has made clear the advantages of using the flask-ASK for your Alexa integration. It is a great way to rapidly develop different voice responses for your robot and quickly integrate those with robotic actions while avoiding the hassle of constantly rebuilding your Alexa skill in the Developer Console.
This FAQ section assumes understanding of creating a basic Gazebo world and how to manipulate an XML file. This tutorial relies on the assets native to the Gazebo ecosystem.
Setting up empty Gazebo world
By Nathan Cai
(If you have a preexisting Gazebo world in which you want to place an actor, you can skip this part.) Empty Gazebo worlds often lack a proper ground plane, so one must be added manually. You can paste ground-plane code directly into the world file.
Placing an actor in the world
TL:DR Quick Setup
Here is the quick setup of everything; you can simply copy and paste this code and change the values to suit your needs:
(If you do not have a Plugin for the model, please delete the Plugin section)
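A minimal sketch based on the Gazebo actor tutorial; walk.dae ships with Gazebo, and every other value is a placeholder to adjust:

```xml
<actor name="actor1">
  <pose>0 0 0 0 0 0</pose>
  <skin>
    <filename>walk.dae</filename>
    <scale>1.0</scale>
  </skin>
  <animation name="walking">
    <filename>walk.dae</filename>
    <scale>1.0</scale>
  </animation>
  <script>
    <loop>true</loop>
    <trajectory id="0" type="walking">
      <waypoint>
        <time>0</time>
        <pose>0 0 0 0 0 0</pose>
      </waypoint>
      <waypoint>
        <time>5</time>
        <pose>2 0 0 0 0 0</pose>
      </waypoint>
    </trajectory>
  </script>
  <!-- Plugin section: delete this if your model has no plugin -->
  <plugin name="some_plugin" filename="libSomePlugin.so"/>
</actor>
```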
Defining an actor
Human or animated models in Gazebo are called actors. The actor element contains all the information about an actor, which can include pose, skin, animation, or any plugins. Each actor needs a unique name. It uses the same structure as the quick-setup example above.
Change the actor pose
The pose of the actor is determined using the pose parameter of the actor. The values are (x_pos, y_pos, z_pos, x_rot, y_rot, z_rot).
Add in Skins
The skin is the mesh of the actor or model that you want to place into the world. It is placed in the actor group and takes a filename as input (see the skin block in the quick-setup example above).
The mesh scale is also adjustable by changing the scale parameter.
Add in Animations
Though the actor can operate without animations, it is preferable to add one, especially if the model is to move, as it makes the environment more interesting and realistic.
To add an animation to the actor, all it needs is a name for the animation and the file that contains the animation (see the animation block in the quick-setup example above). NOTE: THE FILE BECOMES BUGGY OR WILL NOT WORK IF THERE IS NO SKIN.
IMPORTANT: IN ORDER FOR THE ANIMATION TO WORK, THE SKELETON OF THE SKIN MUST BE COMPATIBLE WITH THE ANIMATION!
The animation can also be scaled.
Scripts
Scripts are tasks that you can assign to an actor; in this case, making the actor walk to specific points at specific times (see the script block in the quick-setup example above).
You can add as many waypoints as you want, so long as they are at different times. The actor will navigate directly to each point, arriving at the time given in <time>0</time> with the pose given in <pose>0 0 0 0 0 0</pose>.
Plugin addons
The actor can also take on plugins such as obstacle avoidance, random navigation, and potentially teleop. The parameters for each plugin may be different, but the general pattern is the plugin line shown in the quick-setup example above.
With all of this you should be able to place a human or any model of actor within any Gazebo world. For reference, you can refer to the Gazebo actor tutorial for demonstration material.
References
For details on how to set up for Windows or Mac OS, see the instructions on the Coral website.
Pretrained TFLite models that have been precompiled for the Coral TPU can be found on the Coral models page.
To start, try to follow the instructions given on the ros_astra_camera GitHub page.
If you are getting a publishing checksum error, try to update the firmware on the robot using the commands from the linked guide, or run these commands, line by line:
The Astra Pro camera doesn't have an RGB camera that's integrated with OpenNI2. Instead, it has a regular Video4Linux webcam. This means that, from ROS's point of view, there are two completely separate devices. To resolve this, you can install another package onto the robot called usb_cam, following these instructions:
Webviz is an advanced online visualization tool
I have spent many hours tracking down a tricky tf bug, and in doing so came across some new techniques for troubleshooting. In this FAQ I introduce using rosbag with Webviz.
The rosbag CLI from ROS monitors all topic publications and records their data in a timestamped file called a bag (or rosbag). This bag can be used for many purposes. For example, the recording can be "played back", which lets you reproduce the effects of a certain run of the robot. You can select which topics to record.
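For example (the topic names here are placeholders):

```bash
# record only the topics you care about into a named bag file
rosbag record /tf /odom /scan -O buggy_run.bag

# later, play the recording back into a live ROS graph
rosbag play buggy_run.bag
```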
One other use is to analyze the rosbag moment to moment.
Webviz is an online tool (https://webviz.io). You need to understand topics and messages to use it, and the UI is a little obscure, but it's easy enough. You supply your bag file (drag and drop) and then arrange a series of panes to visualize the topics in very fancy ways.
Junhao Wang
When using a Turtlebot in the lab, it only publishes CompressedImage messages on '/raspicam_node/image/compressed'. Here's how to read CompressedImage messages, and raw ones if you need them.
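A hedged sketch of decoding the compressed stream with OpenCV (no cv_bridge needed for JPEG data):

```python
import numpy as np
import cv2
import rospy
from sensor_msgs.msg import CompressedImage

def image_cb(msg):
    # msg.data is a JPEG byte stream; decode it into a BGR OpenCV image
    arr = np.frombuffer(msg.data, np.uint8)
    img = cv2.imdecode(arr, cv2.IMREAD_COLOR)
    # ... process img here ...

rospy.init_node('compressed_reader')
rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage,
                 image_cb, queue_size=1, buff_size=2**24)
rospy.spin()
```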
When the computation resources and the bandwidth of the robot are restricted, you may need to limit the frame rate.
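One hedged way to do this is topic_tools throttle, which republishes an existing topic at a capped rate:

```bash
# republish the camera stream at roughly 5 Hz on a _throttle topic
rosrun topic_tools throttle messages /raspicam_node/image/compressed 5.0
```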
Again, when you have multiple nodes that need images from the robot, you can republish the decoded stream once to save bandwidth.
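A hedged one-liner with image_transport: decode the compressed stream once, and let every node subscribe to the raw output:

```bash
rosrun image_transport republish compressed in:=/raspicam_node/image raw out:=/raspicam_node/image_raw
```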
A function that is sometimes suggested as an alternative to PID
To use a sigmoid function instead of a PID controller, you need to replace the PID control algorithm with a sigmoid-based control algorithm. Here is a step-by-step guide to do this:
Understand the sigmoid function: A sigmoid function is an S-shaped curve, mathematically defined as: f(x) = 1 / (1 + exp(-k * x))
where x is the input, k is the steepness factor, and exp() is the exponential function. The sigmoid function maps any input value to a range between 0 and 1.
Determine the error: Just like in a PID controller, you need to calculate the error between the desired setpoint and the current value (process variable). The error can be calculated as: error = setpoint - process_variable
Apply the sigmoid function: Use the sigmoid function to map the error to a value between 0 and 1. You can adjust the steepness factor (k) to control the responsiveness of the system: sigmoid_output = 1 / (1 + exp(-k * error))
Scale the output: Since the sigmoid function maps the error to a range between 0 and 1, you need to scale the output to match the actual range of your control signal (e.g., motor speed or actuator position). You can do this by multiplying the sigmoid_output by the maximum control signal value: control_signal = sigmoid_output * max_control_signal
Apply the control signal: Send the control_signal to your system (e.g., motor or actuator) to adjust its behavior based on the error.
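Putting the five steps together, a minimal sketch (the values of k and max_control_signal are illustrative; you would tune them for your system):

```python
import math

def sigmoid_control(setpoint, process_variable, k=1.0, max_control_signal=1.0):
    # step 2: error, exactly as in a PID controller
    error = setpoint - process_variable
    # step 3: sigmoid maps the error into (0, 1); k sets the steepness
    sigmoid_output = 1.0 / (1.0 + math.exp(-k * error))
    # step 4: scale to the actuator's range
    return sigmoid_output * max_control_signal

# step 5: send the result to your actuator, e.g. as a wheel speed
speed = sigmoid_control(setpoint=1.0, process_variable=0.4,
                        k=2.0, max_control_signal=0.22)
```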
Note that a sigmoid function-based controller may not provide the same level of performance as a well-tuned PID controller, especially in terms of overshoot and settling time. However, it can be useful in certain applications where a smooth, non-linear control response is desired.
Arguments and parameters are important tags for roslaunch files that are similar, but not quite the same.
Evalyn Berleant, Kelly Duan
Parameters are either set within a launch file or taken from the command line and passed to the launch file, and then used within scripts themselves.
Getting parameters
Parameters can be called inside their nodes by doing
Example from ROS parameter tutorial.
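The accessors from that tutorial:

```python
# global, relative, private, and defaulted parameter lookups
global_name = rospy.get_param("/global_name")
relative_name = rospy.get_param("relative_name")
private_param = rospy.get_param('~private_name')
default_param = rospy.get_param('default_param', 'default_value')
```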
Adding parameters to launch files
Setting Parameters
Parameters can be set inside nodes like such (python):
For instance, if you wanted to generate a random number for some parameter, you could do as follows:
which would generate a random position for the parameter.
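A hedged sketch (the parameter name and range are placeholders):

```python
import random
import rospy

# give the parameter a random x position between -3 and 3
rospy.set_param('x_pos', random.uniform(-3.0, 3.0))
```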
Be careful that if you are setting parameters in more than one place that they are set in order correctly, or one file may overwrite the parameter’s value set by another file. (See links in resources for more detail).
While parameters can pass values from a launch file into a node, arguments (that look like <arg name="name"/>
in the launch file) are passed from the terminal to the launch file, or from launch file to launch file. You can put arguments directly into the launch file like such and give it a value (or in this case a default value):
Or you can pass arguments into “included” files (launch files included in other launch files that will run):
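A hedged sketch of both forms (my_pkg and the file names are placeholders):

```xml
<!-- declaring an argument with a default value -->
<arg name="robot_name" default="roba"/>

<!-- passing it into an included launch file -->
<include file="$(find my_pkg)/launch/spawn.launch">
  <arg name="robot_name" value="$(arg robot_name)"/>
</include>
```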
Substitution args, recognized by the $
and parentheses surrounding the value, are used to pass values between arguments. Setting the value of a parameter or argument as value="$(arg argument_name)"
will get the value of argument_name in the same launch file. Using $(eval some_expression)
will set the value to what the python expression at some_expression
evaluates to. Using $(find pkg)
will get the location of a package recognized by the catkin workspace (very often used).
The if
attribute can be used on the group tag, node tag, or include tag and work like an if statement that will execute what is inside the tag if true. By using eval
and if
together, it is possible to create loops to run files recursively. For example, running a launch file an arbitrary number of times can be done by specifying the number of times to be run in the launch file, including the launch file within itself, and decrementing the number of times to be run for each recursive include
launch, stopping at some value checked by the if
attribute. Here is an example of a recursive launch file called follower.launch
to spawn in robots.
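A hedged reconstruction of follower.launch (the package name and the spawning details are placeholders):

```xml
<launch>
  <arg name="followers" default="2"/>

  <!-- spawn one robot per recursion level here,
       e.g. named follower_$(arg followers) -->

  <!-- recurse with followers - 1, stopping once the new value is below 0 -->
  <include file="$(find my_pkg)/launch/follower.launch"
           if="$(eval arg('followers') - 1 >= 0)">
    <arg name="followers" value="$(eval arg('followers') - 1)"/>
  </include>
</launch>
```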
followers
here will impact the number of times the launch file is recursively called. $(eval arg('followers') - 1)
will decrement the value of followers
inside each recursive launch, and the if
attribute
checks that once the new number is below 0, it will not call the launch file again.
Both arguments and parameters can make use of substitution args. However, arguments cannot be changed by nodes like parameters are with rospy.set_param()
. Because of the limits of substitution, you cannot take the value of a parameter and bring it to an argument. If you want to use the same value between two params that require generating a specific value with rospy.set_param()
then you should create another node that sets both parameters at once.
For example, a script can be called within a parameter using the command attribute.
The command attribute sets the value of the parameter to whatever is printed by stdout
in the script. In this case, the script generates a random number for x_pos
. In the same file, rospy.set_param
is called to set another parameter to the same value of x_pos
. In that way, both parameters can be set at once.
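A hedged sketch of this pattern (all names are placeholders; the script must be executable):

```xml
<!-- the script's stdout becomes the value of x_pos -->
<param name="x_pos" command="$(find my_pkg)/scripts/random_x.py"/>
```

```python
#!/usr/bin/env python
# random_x.py: print one value for the command= parameter above, and
# share the same value with a second parameter via set_param.
import random
import rospy

x = random.uniform(-3.0, 3.0)
rospy.set_param('other_x_pos', x)
print(x)
```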
If you have too many parameters and/or groups of parameters, not only is it inefficient to write them all into a launch file, it is also error-prone. That is where a rosparam file comes in handy: a rosparam file is a YAML file that stores parameters in an easier-to-read format. A good example of the utility of rosparam is the parameter set for move_base, which is loaded from a YAML file using the command attribute:
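A hedged example of the tag (the package and file names are placeholders; move_base configurations conventionally load YAML files like costmap_common_params.yaml this way):

```xml
<rosparam file="$(find my_nav_config)/param/costmap_common_params.yaml" command="load"/>
```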
Very common requirement
Current best resource on web: The fastest way to clone an SD card on macOS
You’ll need to find out which disk your SD card represents. You can run diskutil list
and should see an output like below:
From that output we can see that our SD card must be /dev/disk4
as our card is 32GB in size and has a fat32 and linux partition (standard for most raspberry pi images). You should add an r in front of disk4 so it looks like this /dev/rdisk4
. The r means when we’re copying, it will use the “raw” disk. For an operation like this, it is much more efficient.
Now you should run the following command, replacing 4 with whatever number you identified as your SD card (gdd is GNU dd, installed on macOS via Homebrew's coreutils package):
sudo gdd if=/dev/rdisk4 of=sd_backup.dmg status=progress bs=16M
Tip: you can experiment with different numbers for the block size by replacing bs=16M with larger or smaller numbers to see if it makes a difference to the speed. I’ve found 16M the best for my hardware.
You should see some progress feedback telling you the transfer speed. If you’d like to experiment with different block sizes, just type ctrl + c to cancel the command, then you can run it again.
Once the command has finished running, you’ll end up with a file in your home directory called sd_backup.dmg. If you’d like to backup multiple SD cards (or keep multiple backups!) simply replace sd_backup.dmg with a different file name. This will contain a complete disk image of your SD card. If you’d like to restore it, or clone it to another SD card, read on.
You’ll first need to unmount your SD card. Do not click the eject button in finder, but run this command, replacing 4 with whatever number you identified as your sd card
sudo diskutil unmountDisk /dev/disk4
Then to copy the image, run the following command:
sudo gdd of=/dev/rdisk4 if=sd_backup.dmg status=progress bs=16M
Tip: you can experiment with different numbers for the block size by replacing bs=16M with larger or smaller numbers to see if it makes a difference to the speed. I’ve found 16M the best for my hardware.
You should see some progress feedback telling you the transfer speed. If you’d like to experiment with different block sizes, just type ctrl + c to cancel the command, then you can run it again.
Once the command has finished running, your SD card should be an exact copy of the disk image you specified.
Author: Shuo Shi
This tutorial describes the details of an SDF model object.
SDF Models can range from simple shapes to complex robots. It refers to the SDF tag, and is essentially a collection of links, joints, collision objects, visuals, and plugins. Generating a model file can be difficult depending on the complexity of the desired model. This page will offer some tips on how to build your models.
Links: A link contains the physical properties of one body of the model. This can be a wheel, or a link in a joint chain. Each link may contain many collision and visual elements. Try to reduce the number of links in your models in order to improve performance and stability. For example, a table model could consist of 5 links (4 for the legs and 1 for the top) connected via joints. However, this is overly complex, especially since the joints will never move. Instead, create the table with 1 link and 5 collision elements.
Collision: A collision element encapsulates a geometry that is used for collision checking. This can be a simple shape (which is preferred), or a triangle mesh (which consumes greater resources). A link may contain many collision elements.
Visual: A visual element is used to visualize parts of a link. A link may contain 0 or more visual elements.
Inertial: The inertial element describes the dynamic properties of the link, such as mass and rotational inertia matrix.
Sensor: A sensor collects data from the world for use in plugins. A link may contain 0 or more sensors.
Light: A light element describes a light source attached to a link. A link may contain 0 or more lights.
Joints: A joint connects two links. A parent and child relationship is established along with other parameters such as axis of rotation, and joint limits.
Plugins: A plugin is a shared library created by a third party to control a model.
This step involves gathering all the necessary 3D mesh files that are required to build your model. Gazebo provides a set of simple shapes: box, sphere, and cylinder. If your model needs something more complex, then continue reading.
Meshes come from a number of places. Google's 3D warehouse is a good repository of 3D models. Alternatively, you may already have the necessary files. Finally, you can make your own meshes using a 3D modeler such as Blender or Sketchup.
Gazebo requires that mesh files be formatted as STL, Collada or OBJ, with Collada and OBJ being the preferred formats.
Start by creating an extremely simple model file, or copy an existing model file. The key here is to start with something that you know works, or can debug very easily.
Here is a very rudimentary minimum box model file with just a unit sized box shape as a collision geometry and the same unit box visual with unit inertias:
Create the box.sdf model file (e.g. gedit box.sdf) and copy the following contents into it:
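Since the contents were omitted above, here is a unit box consistent with the description (collision and visual unit boxes, unit inertias, origin raised to 0 0 0.5), matching the Gazebo model tutorial:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <model name="box">
    <pose>0 0 0.5 0 0 0</pose>
    <link name="link">
      <inertial>
        <mass>1.0</mass>
        <inertia>
          <ixx>1.0</ixx><ixy>0.0</ixy><ixz>0.0</ixz>
          <iyy>1.0</iyy><iyz>0.0</iyz>
          <izz>1.0</izz>
        </inertia>
      </inertial>
      <collision name="collision">
        <geometry>
          <box><size>1 1 1</size></box>
        </geometry>
      </collision>
      <visual name="visual">
        <geometry>
          <box><size>1 1 1</size></box>
        </geometry>
      </visual>
    </link>
  </model>
</sdf>
```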
Note that the origin of the Box-geometry is at the geometric center of the box, so in order to have the bottom of the box flush with the ground plane, an origin of 0 0 0.5 0 0 0 is added to raise the box above the ground plane.
With a working .sdf file, slowly start adding in more complexity.
Under the <geometry> tag, add your .stl model file.
The <geometry> tag can be added below the <collision> and <visual> tags.
In your .world file, import the SDF file using the <include> tag.
Then open a terminal and add the model path to the GAZEBO_MODEL_PATH environment variable.
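For example, assuming your models live in ~/catkin_ws/src/my_models:

```bash
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:~/catkin_ws/src/my_models
```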
Step 1: Switch the .bashrc to be running in sim mode.
Step 1.1: Go into the .bashrc file and uncomment the simulation mode as shown below.
Step 1.2: Comment out real mode/robot IP addresses. For example:
Step 2: Run roscore on the VNC. To do this, type "roscore" into the terminal.
Step 3: Now in the terminal do these steps:
Step 3.1: Get the VPN IP address: in the terminal, type "myvpnipaddress".
Step 3.2: Type "$(bru mode real)".
Step 3.3: Type "$(bru name robotname -m "vpn ip address from step 3.1")".
Step 3.4: Type "multibringup" in each robot terminal.
Step 4: Repeat step 3 in a second tab for the other robot(s).
Using machine learning models for object detection/classification might be harder than you expect; from our own group's experience, machine learning models are usually uninterpretable and yield stochastic results. Therefore, if you don't have a solid understanding of how to build/train a model, or if you're not confident you know what to do when the model JUST doesn't work as expected, I recommend you start from more interpretable and stable computer vision techniques to avoid confusion and frustration. Also try to simplify the object you want to detect/classify: make it easily differentiable from background objects by color and shape. For example, several handy OpenCV functions that I found useful and convenient include color masking (cv2.inRange), contour detection (cv2.findContours), and convex hull extraction (cv2.convexHull). These functions suffice for easy object detection (colored circles, colored balls, arrows, cans, etc.), and you can use cv2.imshow to easily see the transformation from each step -- this helps you debug faster and have something functional built first.
The algorithm for finding the tip might be a little complicated to understand (it uses the "convexity defects" of the "hull"). But the point I'm making here is that with these few lines of code, you can achieve probably more than you expected. So do start with these "seemingly easy" techniques before you reach for something more powerful but confusing.
Do you need to give information to your roscore that you can't transport with rosnodes?
You may have trouble running certain libraries or code in your VNC environment; a UDP connection lets you run them somewhere else and broadcast the results into your VNC. There are many reasons this could happen, and UDP sockets are the solution. In our project we used multiple roscores to broadcast the locations of our robots: we send the robot coordinates over a UDP socket that the other roscore can then pick up and use.
Here is an example of the most basic sender that you could use for your project. In this example the sender sends out a string to be picked up:
You need to run a receiver to pick up the information that your sender puts out; hedged sketches of both the sender and the receiver follow.
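Host, port, and payload below are placeholders:

```python
# sender.py: broadcast a string over UDP
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello from roscore A", ("192.168.1.50", 5005))
```

```python
# receiver.py: pick up whatever the sender put out
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))
while True:
    data, addr = sock.recvfrom(1024)  # buffer size in bytes
    print(addr, data.decode())
```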
Oftentimes you may want to create a sender node that takes information from your ROS environment and publishes it to the outside world. Here is an example of how we went about doing this, together with the receiver we created to handle our sender.
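A hedged sketch of the ROS-integrated pair (topic names, message format, host, and port are placeholders):

```python
# udp_pose_sender.py: forward this robot's odometry over UDP
import socket
import rospy
from nav_msgs.msg import Odometry

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def odom_cb(msg):
    p = msg.pose.pose.position
    sock.sendto("{:.3f},{:.3f}".format(p.x, p.y).encode(),
                ("192.168.1.50", 5005))

rospy.init_node('udp_pose_sender')
rospy.Subscriber('/odom', Odometry, odom_cb)
rospy.spin()
```

```python
# udp_pose_receiver.py: on the other roscore, re-publish whatever arrives
import socket
import rospy
from std_msgs.msg import String

rospy.init_node('udp_pose_receiver')
pub = rospy.Publisher('/friend_pose', String, queue_size=10)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)  # so the loop can notice rospy shutdown
sock.bind(("0.0.0.0", 5005))
while not rospy.is_shutdown():
    try:
        data, _ = sock.recvfrom(1024)
    except socket.timeout:
        continue
    pub.publish(data.decode())
```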
Overall, UDP sockets aren't very difficult to make. Ours simplified the complexity of our project and helped build modularity. This receiver and sender pair acts like another ROS publisher and subscriber: it serves the same function, but effectively builds a metaphorical overarching roscore spanning the two roscores that the sender and receiver live in.
Collection current knowledge we found in our GPS research
At a high level, GPS relies on several GPS satellites sending out radio signals which encode the time at which the signal was sent and the position of the satellite at that time. You should imagine this radio signal as a sphere of information expanding out at the speed of light, with the satellite that emitted the signal at the center.
If I were just looking at a single satellite, I would receive this radio signal and be able to calculate the difference in time between the moment I received it, and the time the signal left the satellite which again is encoded in the radio signal. Let’s say I calculate that the signal traveled 10,000 miles before I received it. That would indicate to me that I could be in any position in space exactly 10,000 miles away from the satellite which sent the signal. Notice that this is a giant sphere of radius 10,000 miles; I could be standing anywhere on this massive imaginary sphere. Thus, GPS is not very useful with a single satellite.
Now let’s say I receive a signal from 2 satellites, I know their positions when they sent their messages and the time it took each message to reach me. Each satellite defines a sphere on which I could be standing, however, with both spheres I now know I must be somewhere on the intersection of these two spheres. As you may be able to picture, the intersection of two spheres is a circle in space, this means with 2 satellites I could be standing anywhere on this circle in space, still not very useful.
Now if I manage to receive a signal from 3 satellites, I suddenly have three spheres of possible locations which all intersect. Because 3 spheres will intersect in a single point, I now have my exact point in space where I must be standing.
This is how GPS works. The name of the game is calculating how far I am from several satellites at once, and finding the intersection; luckily for us, people do all of these calculations for us.
This is a bit of a trick question since technically GPS refers specifically to the global positioning satellites which the United States have put in orbit. The general term here is Global Navigation Satellite System (GNSS) which encompasses all satellite constellations owned and operated by any country. This includes the US’s GPS, Russia's GLONASS, Europe's Galileo, and China's BeiDou. Any GPS sensor I reference here is actually using all of these satellites to localize itself, not just the GPS constellation.
There is a big problem with GPS. The problem lies in the fact that the radio signals are not traveling in a vacuum to us, they are passing through our atmosphere! The changing layers of our atmosphere will change the amount of time it takes the radio signal to travel to us. This is a huge problem when we rely on the time it took the signal to reach us to calculate our distance from each satellite. If we look at a really cheap GPS sensor (there is one in the lab, I will mention it later) you will find that our location usually has an error of up to 20 meters. This is why people will “correct” their GPS signals.
There are two parts to every GPS sensor: the antenna and the module. The GPS antenna is responsible for receiving the radio signals from the satellites and passing them to the module which will perform all of the complex calculations and output a position. High quality antennas and modules are almost always purchased separately and can be quite expensive. There are also cheap all in one sensors which combine the antenna and module into something as small as a USB drive.
Satellites transmit multiple radio frequencies to help the GPS module account for the timing errors created by our atmosphere. Multi-band GPS means that you are listening to multiple radio frequencies (L1 and L2 refer to these different radio frequencies emitted by a single satellite) which can improve accuracy overall, single-band GPS is not as good. Keep in mind that both the antenna and the module will need to be multi-band and both units will be significantly more expensive because of that.
When you plug any kind of GPS sensor into a linux computer using a USB cable you will see the device appear as a serial port. This can be located in the directory /dev
most of the time it will be called /dev/ttyACM0
or /dev/ttyACM1
. If you run $ cat /dev/ttyACM0
in your terminal you will see all of the raw information coming off of the GPS sensor. Usually this is just NMEA messages which are what show you the location of the GPS as well as lots of other information.
NMEA messages provide almost all of the information about our GPS that we could desire. There are several types of NMEA messages, all with 5-character names. They all begin GP or GN depending on whether they are using just the GPS constellation or all satellite constellations, respectively. Sometimes the first 2 characters are noted Gx for simplicity. The last 3 characters show you the type of message. Here is a complete list of their definitions.

The most important message for us is the GxGGA message, the complete definition of which you can view in the previous link. It will include the latitude, latitude direction (north or south), longitude, and longitude direction (east or west). There are several different ways of writing latitude and longitude values, but online converters can convert between any of them, and they can always be viewed on Google Maps for verification.

The other important piece of information in the GxGGA message is the "fix quality". This value tells you what mode your GPS is currently operating in: 0 indicates no valid position, and 1 indicates an uncorrected position (the standard GPS mode). 2, 4, and 5 are only present when you are correcting your GPS data; more on what this means later. You can use a Python script like the sketch below to read the NMEA messages and print relevant data to the console. Obviously this can be edited to do much more complex things with this data.
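A hedged sketch of such a script, using the pynmea2 library (the port name and baud rate are typical defaults, not guarantees):

```python
import serial
import pynmea2

with serial.Serial('/dev/ttyACM0', 9600, timeout=1) as port:
    while True:
        line = port.readline().decode('ascii', errors='replace').strip()
        if line.startswith(('$GPGGA', '$GNGGA')):
            msg = pynmea2.parse(line)
            # gps_qual: 0 = invalid, 1 = uncorrected GPS, 2/4/5 = corrected
            print(msg.latitude, msg.longitude, msg.gps_qual)
```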
For less than $20 you can buy a cheap plug-and-play GPS sensor which does not attempt to correct its GPS data in any way. We purchased the G72 G-Mouse USB GPS Dongle to take some preliminary results. Below we see 5 minutes of continuous GPS data (in blue) taken from a fixed GPS location (in red). I will note it was a cloudy, rainy day when the data was recorded, and the true GPS location was under a large concrete overhang outdoors near other buildings. This is a particularly difficult situation, which led to the larger-than-normal maximum inaccuracy of ~25 meters.
NOTE: no matter how fancy or expensive your GPS sensor is, if it is not being corrected by some kind of secondary remote device, you will not see good accuracy. This is confusing because a lot of GPS sensors tout their "centimeter accuracy in seconds" which would imply you could just plug it in and achieve that accuracy.
There are no shortcuts, for centimeter accuracy you need to correct your GPS data with an outside source.
The most common and most accurate way to correct GPS data is by utilizing two GPS sensors in a process called Differential GPS (DGPS). There are three ways to achieve a differential GPS system according to RACELOGIC:
SBAS – Correction messages are sent from geostationary satellites, for example EGNOS or WAAS.
RTCMv2 – Correction messages are sent from a static base station, giving 40 – 80 cm accuracy.
RTK – Correction messages are sent from a static base station signal giving <2cm accuracy on RTK enabled units.
We will ignore SBAS and RTCMv2 (see above source for more detail) and focus entirely on RTK correction because it is the most common and most accurate differential GPS correction method.
RTK stands for Real-Time Kinematic and provides ~1cm accuracy when it is set up properly and operated in reasonable conditions. This is our ticket to highly accurate GPS data.
RTK correction relies on two GPS sensors to provide our ~1cm accuracy. One sensor is part of the "base station" and the other is a "rover".
The base station consists of a GPS sensor in an accurately known, fixed location on earth which is continually reading in the radio signals from the satellites. The goal of the base station GPS is to compute, in real time, the error in the amount of time it takes the radio signal from each satellite to reach the base station. This is an incredibly complex calculation to figure out which timing errors each individual radio signal is experiencing. We cannot simply say that the measurement is off by 4 meters and assume that all nearby GPS sensors will experience the same 4 meter error vector. The base station computer must look at each satellite signal it is using to calculate location, look at the total error in location, and then reverse engineer the timing error that each radio signal exhibits. (Accurate GPS requires 3-4 different satellites to determine location, our calculation will thus produce at least 3-4 timing error values, one for each satellite).
The base station will then send these timing errors in the form of an RTCM message (this is the standard RTK error message) to the "rover" so that the rover can perform its own calculations based on which satellites it is currently using. This will ultimately achieve the ~1cm accuracy.
To summarize, RTK correction requires a fixed base station to determine the error in the amount of time it takes each radio signal from all satellites in view to reach the sensor. It then sends this list of errors to the rover GPS. The rover GPS will look at all of the radio signals it is using to calculate its position, adjust each time value by the error sent from the base station, and calculate a very accurate position.
There are two ways to acquire RTK corrections. You can either set up a local base station or you can utilize RTK corrections from various public or subscription based base stations around the country.
The subscription-based base stations are often quite expensive and difficult to find. The good news is that most states have state-owned public base stations you can receive RTK correction data from; here is the Massachusetts public base stations site, and here is a national Continuously Operating Reference Stations map. The problem is that these base stations are often old and not very high quality. They often use solely single-band antennas, which means that to have accurate RTK correction you need to be within 10 km of the public base station, and the correction values get drastically better the closer you are. If you set up your own base station you will be able to use multi-band signals for higher accuracy, and you will be much closer; this is where you will see ~1 cm accuracies. That being said, Olin College of Engineering uses public base stations for their work.
This will take you through the process of setting up a local Brandeis base station.
The GPS module is what processes all of the information read in from the antenna and gives you actual usable data. For cost and performance reasons we have selected the ZED-F9P module from u-blox. More specifically, we have selected the developer kit from SparkFun which is called the SparkFun GPS-RTK2 Board - ZED-F9P which is a board containing the ZED-F9P module and has convenient ports attached for everything you will need to plug in.
Important information about this board:
A GPS-RTK2 board and a GPS-RTK board are not the same! Do not mix RTK2 and RTK, it will not work.
The board must be connected to an antenna (more below) to receive the radio signals.
This is a multi-band module which will allow us to have much more accurate data even if our rover goes near buildings, under trees, or far away from the base station.
The board plugs into a Raspberry Pi with a USB cable and is powered and sends data through that single cable.
We will require two of these boards, one for the base station and one for the rover.
We need a quality multi-band antenna to receive the multi-band signal, these can get very fancy and very expensive, we will be using this ArduSimple antenna.
If you use the same antenna on your base station and your rover it will marginally improve accuracy since the noise characteristics will be very similar.
The GPS module will send data through USB to the Raspberry Pi and appear as a serial port. You can watch this video to see what these GPS communications look like to the Raspberry Pi and how to process them in a Python script.
All configuration of the GPS module can be done while connected to a Windows computer running u-center which is a u-blox application. Sadly u-center only runs on Windows. This configuration is important because it will establish whether your GPS module is a rover or a base station and will allow you to set the base station's known location etc.
The base station will need to have a very precise known location for the antenna. This should be as close to your rover on average as possible. To find the exact location, you can use the Survey-in mode on the GPS or use a fixed location determined by Google Maps, the configuration video will cover this.
Your base station will output RTCM messages which are the standard RTK correction messages which a rover type GPS module will be able to use to correct its own GPS data. These RTCM messages will be outputted over the serial port to the base station Raspberry Pi and you will need to set up some kind of messaging protocol to get these messages from the base station to the rover. I recommend using rtk2go.com to handle this message passing. More on this in the configuration video.
Once the rover u-blox module is continually receiving RTK corrections as RTCM messages, it will use these messages to perform calculations and in turn output over serial port the ~1cm accurate GPS data in the form of an NMEA message. These NMEA messages are simple to parse and will clearly provide latitude and longitude data as well as a lot more information for more complex applications. The Raspberry Pi will be able to read these messages (as described in the video above) and now you have incredibly accurate GPS data to do with as you wish.
For rtk2go.com, username: asoderberg@brandeis.edu password: [the standard lab password]
For macorsrtk.massdot.state.ma.us, username: asod614 password: mzrP8idxpiU9UWC
/home/rover/RTKLIB/app/str2str/gcc/str2str -in ntrip://rover:ROSlab134@rtk2go.com:2101/brandeis -out serial://ttyACM0:115200
https://youtu.be/qZ2at1xV8DY
This FAQ documents specific instructions to define a new Message
After creating a ROS package, the package consists of a src folder, a CMakeLists.txt file, and a package.xml file. We need to create a msg folder to hold all of our new msg files. Then, in your msg folder, create a new file <new_message>.msg which contains the fields you need for this message type. For each field in your msg file, define the name of the field and its type (usually from std_msgs/ or geometry_msgs/). For example, if you want your message to have a list of strings and an integer, your msg file could look like the sketch below.
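A hedged sketch (the field names here are placeholders):

```
std_msgs/String[] string_list
std_msgs/Int32 my_int
```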
There are some modifications you need to make to CMakeLists.txt and package.xml in order for the new message type to be recognized by ROS. For CMakeLists.txt:
Make sure message_generation is in find_package().
Uncomment add_message_files() and add your .msg file name to add_message_files().
Uncomment generate_messages()
Modify catkin_package() to include CATKIN_DEPENDS message_runtime
Uncomment include in include_directories()
For package.xml:
Uncomment <build_depend>message_generation</build_depend> on line 40
Uncomment <exec_depend>message_runtime</exec_depend> on line 46
How to import the message?
If your message type contains a list of a built-in message type, also make sure to import that built-in message type.
How to use the message? The publisher and subscriber syntax is the same as usual; just create a new topic name and make sure the new topic's message type is specified. For example, in my project I have a message type called see_intruder, used in the hedged sketch below.
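The package name my_package and the topic name are placeholders:

```python
import rospy
from my_package.msg import see_intruder
from std_msgs.msg import String  # built-in type used inside the message

pub = rospy.Publisher('/see_intruder_topic', see_intruder, queue_size=10)
```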
Since our new message depends on some built-in message types, when we try to access a field of our msg we need to do msg.<field_name>.data, given that the built-in message type has a field named data. So in the first example above, to access the string stored at index i of the list, I need to do msg.string_list[i].data.
Do a cm (catkin_make) in your VNC. If the message is successfully recognized by ROS, you will see the msg file being successfully generated.
In your vnc terminal, type the command
This should solve the error.
only needed to install the grippers
the screws would not go through initially - had to go from the other side with the screw first to ensure that the holes were completely open and the screw could be secured and then installed the grippers as the instructions said
plug in power supply first and then plug usb into computer and then plug microUSB into arm
after installing software and plugging arm in, scan for the arm in the dynamixel software to check that everything is working properly:
if any link is in 57600, then change to 1000000
make sure to disconnect before running ros
NOTE: you may not actually need to do this, but it may be good to do the first time you try to connect the arm to your computer to make sure everything is running correctly.
contains command line configs for launch file
contains details on methods that control the arm using their methods - can find basic overview at the bottom of this file.
Installation:
On Intel/AMD based processor:
Basic Commands:
move the arm manually:
roslaunch interbotix_xsarm_control xsarm_control.launch robot_model:=px100
disable torque:
rosservice call /px100/torque_enable "{cmd_type: 'group', name: 'all', enable: false}"
re-enable torque to hold a pose:
rosservice call /px100/torque_enable "{cmd_type: 'group', name: 'all', enable: true}"
run with moveit:
roslaunch interbotix_xsarm_moveit xsarm_moveit.launch robot_model:=px100 use_actual:=true dof:=4
roslaunch interbotix_xsarm_moveit xsarm_moveit.launch robot_model:=px100 use_gazebo:=true dof:=4
run using ROS-PYTHON API:
roslaunch interbotix_xsarm_control xsarm_control.launch robot_model:=px100 use_sim:=true
roslaunch interbotix_xsarm_control xsarm_control.launch robot_model:=px100 use_actual:=true
play with joints:
roslaunch interbotix_xsarm_descriptions xsarm_description.launch robot_model:=px100 use_joint_pub_gui:=true
publish static transforms between two frames:
rosrun tf static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period(milliseconds)
arm.set_ee_pose_components()
sets an absolute end-effector pose: the ee_gripper_link frame with respect to the base_link frame
arm.set_single_joint_position()
move the specified joint
usually used for the waist to turn the robot
arm.set_ee_cartesian_trajectory()
move the end effector the specified value in each direction relative to the current position
for a 4dof arm, the y and yaw values cannot be set through this
arm.go_to_sleep_position()
return the arm to the sleep position
arm.go_to_home_position()
return the arm to the home position
gripper.open()
open the gripper
gripper.close()
close the gripper
arm.set_trajectory_time()
moving_time - duration in seconds it should take for all joints in the arm to complete one move.
accel_time - duration in seconds it should take for all joints in the arm to accelerate/decelerate to/from max speed.
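A hedged sketch tying these methods together, assuming the Interbotix Python API module layout (interbotix_xs_modules) and the px100 model used in the commands above:

```python
from interbotix_xs_modules.arm import InterbotixManipulatorXS

bot = InterbotixManipulatorXS("px100")           # connect to the px100 arm
bot.arm.set_ee_pose_components(x=0.2, z=0.2)     # absolute end-effector pose
bot.arm.set_single_joint_position("waist", 1.0)  # turn the waist (radians)
bot.arm.set_ee_cartesian_trajectory(x=0.05)      # relative 5 cm move
bot.gripper.open()
bot.gripper.close()
bot.arm.go_to_home_position()
bot.arm.go_to_sleep_position()
```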
For one example (this is what our team used for traffic sign classification): using a few lines of OpenCV code like the hedged sketch below, you can detect the contour of the arrow, after which you can find the tip of the arrow and then determine the direction of the arrow.
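The sketch assumes OpenCV 4's findContours signature, and arrow.png is a placeholder image path:

```python
import cv2

img = cv2.imread('arrow.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)  # assume the arrow is the largest blob

# the convexity defects of the hull hint at the arrow's notch,
# from which the tip (and so the direction) can be inferred
hull = cv2.convexHull(cnt, returnPoints=False)
defects = cv2.convexityDefects(cnt, hull)
```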
In the Dynamixel software's scan options, select baudrates 57600 and 1000000.