Boundary generation occurs in the `Environment` class within `src/boundary_detection.py`. The class works by setting an ego agent (the Pupper), adding each detected fiducial as an obstacle, and, if a fiducial with id = 0 is detected, adding a goal object to the class as well. Obstacle locations are updated when a fiducial with an existing id is detected again, and obstacles can be cleared with `clear_obstacles`.
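As a self-contained toy version of that flow (the real class lives in `src/boundary_detection.py`, and only `clear_obstacles` is a documented method name, so the rest is illustrative):

```python
# Toy version of the Environment flow described above; method and field names
# other than clear_obstacles are illustrative, not the project's actual API.
class ToyEnvironment:
    def __init__(self):
        self.obstacles = {}      # fiducial id -> detected pose
        self.goal = None

    def add_detection(self, tag_id, pose):
        if tag_id == 0:
            self.goal = pose                 # fiducial with id = 0 marks the goal
        else:
            self.obstacles[tag_id] = pose    # an existing id gets its location updated

    def clear_obstacles(self):
        self.obstacles.clear()

env = ToyEnvironment()
env.add_detection(3, (1.0, 0.5))             # obstacle fiducial
env.add_detection(0, (2.0, 0.0))             # goal fiducial
env.add_detection(3, (1.1, 0.5))             # same id again -> location updated
print(env.obstacles, env.goal)
```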
Then a list of boundaries corresponding to each obstacle is created using the [Minkowski sum](https://en.wikipedia.org/wiki/Minkowski_addition) formula.
Configuration space is essentially a set of points that represent infeasible locations and rotations for a robot in an environment. We can use this, along with discretization, to generate a graph of all the feasible locations for the Pupper robot in an environment with obstacles, assuming we have models for the agent and the obstacles.
Below are two example configuration spaces (shown in red) for an ego agent (green) and an obstacle (blue); notice how the configuration space changes when the rotation of the agent changes.
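As a concrete illustration, here is a minimal, self-contained sketch of a point-set Minkowski sum for a fixed agent rotation; it is a toy example, not the project's implementation in `src/boundary_detection.py`.

```python
import numpy as np

def minkowski_sum(obstacle_pts, agent_pts):
    """Configuration-space obstacle for a fixed agent rotation:
    every obstacle point offset by every point of the reflected agent shape."""
    obstacle = np.asarray(obstacle_pts, dtype=float)   # shape (N, 2)
    agent = -np.asarray(agent_pts, dtype=float)        # reflect the agent about its center
    # Pairwise sums: each obstacle point combined with each reflected agent point.
    return (obstacle[:, None, :] + agent[None, :, :]).reshape(-1, 2)

# Toy example: a square obstacle and a small square agent footprint.
obstacle = [(2.0, 2.0), (3.0, 2.0), (3.0, 3.0), (2.0, 3.0)]
agent = [(-0.25, -0.25), (0.25, -0.25), (0.25, 0.25), (-0.25, 0.25)]
c_space_pts = minkowski_sum(obstacle, agent)
print(c_space_pts.shape)                               # (16, 2) configuration-space points
```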
The Pupper project uses the Stanford Pupper 2.0, which does not currently have public documentation, but information on the original Pupper project can be found at Stanford Pupper. More detailed documentation on the original Pupper can be found at Pupper Docs. This project explores topics related to fiducial detection, boundary generation, motion planning, controls, and hardware.
We tested with 16 printed 0.03 meter fiducials and a floor sign to hold the goal fiducial, along with:
- A Pupper 2.0 robot flashed with the low-level Teensy software for the motor controllers
- A successful calibration and correctly configured motor IDs
- The Raspberry Pi and Raspberry Pi camera running the latest software
Running the test is simple: once you have the calibrated Pupper robot operating, run `python3 /src/main.py`. A test for the boundary generation, discretization, and graph search can be run through `python3 /src/path_test.py`. This opens a pickled `Environment` object containing obstacle, agent, and goal data from a testing session and then performs a graph search using the `PathFinder` class.
We use a Raspberry Pi v1 camera whose full documentation, including hardware and software specs, can be found at Raspicam Docs. Additional information on how to install the Raspicam hardware can be found at Raspicam Installation.
We use the `pupil_apriltags` package to detect fiducials in the Raspicam image; full documentation for the AprilTags can be found at Pupil AprilTags. The package works by taking the camera parameters, fiducial size and family, and additional fiducial detection parameters and creating a `Detector` class, which contains a `detect` method that takes a camera frame as input and outputs a list of detected fiducials.
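As a rough illustration of how the package is used (the camera intrinsics and image file below are placeholder values; the real ones come from `params.yaml`):

```python
import cv2
from pupil_apriltags import Detector

# Placeholder intrinsics for a 640x480 frame; the actual values live in params.yaml.
fx, fy, cx, cy = 620.0, 620.0, 320.0, 240.0

detector = Detector(families="tag36h11")

frame = cv2.imread("frame.png")                      # a single Raspicam frame (placeholder file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # the detector works on grayscale images

detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy),                  # focal lengths and principal point
    tag_size=0.03,                                   # printed tag edge length in meters
)
for det in detections:
    print(det.tag_id, det.pose_t)                    # tag id and translation relative to the camera
```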
In order to print new AprilTags, follow the instructions at AprilTag Generation and Premade AprilTags.
The `params.yaml` file contains all of the fiducial vision parameters. The camera parameters used in `params.yaml` were found online in the Raspicam section. The size of the fiducial in meters can be adjusted by printing out new fiducials of a larger size.
`src/transforms.py` and `src/geometry.py` contain the methods used for transforming the fiducial detection results into easier-to-work-with translations and rotations. Parameters in `params.yaml` are also used to slightly adjust the detection results after transformation.
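A minimal sketch of how such parameters could be read with `pyyaml`; the key names shown are illustrative, not the actual schema of `params.yaml`:

```python
# Illustrative only: the key names below are assumptions, not the real params.yaml schema.
import yaml

with open("params.yaml") as f:
    params = yaml.safe_load(f)

tag_size = params["fiducial"]["size"]          # hypothetical key: tag edge length in meters
camera_params = (
    params["camera"]["fx"],                    # hypothetical keys: camera intrinsics
    params["camera"]["fy"],
    params["camera"]["cx"],
    params["camera"]["cy"],
)
```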
The discretization of the environment is done in `/src/path_finder.py` in the `PathFinder` class. The parameters for discretization are in the `params.yaml` file and affect the inflation of obstacles in the graph, the step size of the robot (distance from one vertex to its immediate neighbors), the length and width of the graph to generate, and whether the graph search should explore nodes diagonally. The discretization happens in the `explore` method, which uses two doubly nested lambda expressions in series to create a matrix of `Point` objects that is converted into a matrix of `Node` objects containing `point`, `distance`, `previous`, and `explored`, the fields required to perform a graph search on this data.
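A rough, self-contained sketch of the kind of grid this step builds; `Point` and `Node` here are stand-ins for the project's classes, and the parameter values are arbitrary:

```python
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Node:
    point: Point
    distance: float = math.inf      # best known cost from the start node
    previous: "Node" = None         # back-pointer for path reconstruction
    explored: bool = False          # whether the node has been popped from the frontier

def build_grid(length, width, step):
    # Matrix of Points covering a length x width area at the given step size...
    points = [[Point(i * step, j * step) for j in range(width)] for i in range(length)]
    # ...converted into a matrix of Nodes ready for a graph search.
    return [[Node(p) for p in row] for row in points]

grid = build_grid(length=20, width=20, step=0.05)   # a 1 m x 1 m area at 5 cm resolution
```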
The graph search happens in the `PathFinder` class in the `solve` method, which performs a Dijkstra shortest-path search from the node corresponding to the agent location to the node nearest to the goal location. The algorithm operates the same as a standard shortest-path search but has several optimizations built in to account for the limited hardware of the Raspberry Pi.
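For reference, here is a toy Dijkstra over a blocked grid that mirrors the shape of what `solve` does; it omits the project's Pi-specific optimizations and data structures:

```python
import heapq

def dijkstra(blocked, start, goal, width, height, diagonal=False):
    """blocked: set of (x, y) cells removed because they fall in configuration space."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if diagonal:
        steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist = {start: 0.0}
    previous = {}
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                                   # stale queue entry
        for dx, dy in steps:
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in blocked:
                continue
            nd = d + (dx * dx + dy * dy) ** 0.5        # 1 for cardinal moves, sqrt(2) for diagonal
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                previous[nxt] = node
                heapq.heappush(frontier, (nd, nxt))
    # Walk the back-pointers from the goal to the start to recover the path.
    path, node = [goal], goal
    while node != start:
        node = previous[node]
        path.append(node)
    return path[::-1]

print(dijkstra(blocked={(1, 1)}, start=(0, 0), goal=(3, 3), width=4, height=4))
```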
Path profiling is the process of converting the path, a list of `Point` objects, into a series of distances and headings for the controller to follow. The math for it is in `src/geometry.py`, which finds the angle and distance between two points.
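A minimal version of that geometry, assuming standard `atan2`-based headings (the function name is illustrative, not the project's):

```python
import math

def distance_and_heading(a, b):
    # Distance and heading from point a to point b, with headings measured
    # counter-clockwise from the +x axis in radians.
    dx, dy = b[0] - a[0], b[1] - a[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

path = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5)]
segments = [distance_and_heading(p, q) for p, q in zip(path, path[1:])]
print(segments)   # [(0.5, 0.0), (0.5, 1.5707...)] -> 0.5 m forward, then 0.5 m at 90 degrees
```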
Many potential improvements exist to boost the performance, accuracy, and resolution of the planning module. Some ideas are:
- Use dynamic programming to eliminate redundant `in_range_of_boundary_quick` checks for the same node
- Implement a gradient-based approach to converting configuration space into edge weights
- Use a better shortest-path search algorithm or a sampling-based approach so discretization is not necessary
The code can be found at Pupper GitHub Repository. The models can be found at `/models/`. The obstacle, agent, and goal models are of type `Shape`, which can be found within `src/boundary_detection.py`, and the parameters of the models can be found within `params.yaml`. The goal model is a shape containing only a single point. The obstacle and agent models are defined by 4 corner points which are then interpolated to create a series of points defining the boundary of the model. Each model also has a center, which is its location relative to the Pupper.
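A small sketch of how four corners can be interpolated into a boundary point set; the number of points per edge is an arbitrary choice here, not the project's value:

```python
import numpy as np

def interpolate_boundary(corners, points_per_edge=10):
    # Linearly interpolate along each edge of the polygon defined by the corners.
    corners = np.asarray(corners, dtype=float)
    edges = zip(corners, np.roll(corners, -1, axis=0))     # consecutive corner pairs, wrapping around
    boundary = [
        (1 - t) * a + t * b
        for a, b in edges
        for t in np.linspace(0.0, 1.0, points_per_edge, endpoint=False)
    ]
    return np.array(boundary)

square = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]   # a 20 cm model footprint
print(interpolate_boundary(square).shape)                       # (40, 2) boundary points
```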
The computer vision module can be found at `/src/fiducial_vision.py`. The fiducial and computer vision package requires numerous parameters to work correctly; these include fiducial tag size, lens size, and the center pixel of the image. The module itself contains the `Vision` class, which provides a `capture_continuous` method that returns a generator yielding the results of the fiducial detection module on frames from the Raspberry Pi camera.
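An illustrative sketch of how such a generator can be built on `picamera`'s `capture_continuous`; this is not the exact `Vision` implementation:

```python
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray
from pupil_apriltags import Detector

def detection_stream(resolution=(640, 480)):
    # Yields the AprilTag detections for each camera frame, indefinitely.
    camera = PiCamera(resolution=resolution)
    raw = PiRGBArray(camera, size=resolution)
    detector = Detector(families="tag36h11")
    for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
        gray = cv2.cvtColor(frame.array, cv2.COLOR_BGR2GRAY)
        yield detector.detect(gray)          # list of detections for this frame
        raw.truncate(0)                      # reuse the buffer for the next frame

for detections in detection_stream():
    print([d.tag_id for d in detections])
```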
The modules relevant to boundary generation and planning are `src/boundary_detection.py`, `src/path_finder.py`, `/src/path_profiler.py`, and `path_test.py`.
Boundary generation works by taking in the detected positions and rotations of fiducials and creating `obstacle` classes to represent each fiducial. Each `obstacle.points` of type `Point[]` is then transformed to its corresponding fiducial's location by the `src/transform.py` class. The Pupper robot model `models/agent.py`, which is at `(0, 0)`, and each `obstacle` are then used to calculate the configuration space for the robot using the [Minkowski sum](https://en.wikipedia.org/wiki/Minkowski_addition).
The `models/goal.py` class is used to represent the goal of the Pupper, which corresponds to the fiducial with id = 0.
Each point in the resulting configuration space is used to generate a graph of the area, where vertices of the graph close to points in the configuration space are removed so that when a shortest-path search is performed the resulting path only includes valid positions from the robot's current position to the goal.
Finally, the path is interpolated and converted into an array of distances to travel and the angles at which to travel them, which is then converted into command interface commands based on the velocity of the robot.
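A very rough sketch of that last step, turning (distance, heading) segments into timed walk commands; the walking speed, turn rate, sign conventions, and command format below are assumptions, not project values:

```python
# Assumed scaling: ly = 1.0 walks at WALK_SPEED, rx = 1.0 yaws at TURN_RATE.
WALK_SPEED = 0.2      # assumed forward speed in m/s
TURN_RATE = 1.0       # assumed yaw rate in rad/s

def segments_to_commands(segments):
    commands, current_heading = [], 0.0
    for dist, heading in segments:
        turn = heading - current_heading        # no angle wrapping in this toy version
        if abs(turn) > 1e-3:
            # Turn in place toward the segment heading, then walk its length.
            commands.append({"rx": 1.0 if turn > 0 else -1.0, "ly": 0.0,
                             "duration": abs(turn) / TURN_RATE})
        commands.append({"rx": 0.0, "ly": 1.0, "duration": dist / WALK_SPEED})
        current_heading = heading
    return commands

print(segments_to_commands([(0.5, 0.0), (0.5, 1.5708)]))
```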
The visualization class in `src/viz.py` uses `matplotlib` to generate a plot of the agent model, obstacles, obstacle boundaries, position of the goal, graph nodes, and the path of the robot.
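In the same spirit, a minimal `matplotlib` example with made-up coordinates (not the actual `viz.py` code):

```python
import matplotlib.pyplot as plt

# Made-up data standing in for the agent, an obstacle boundary, the path, and the goal.
agent = [(0.0, 0.0)]
obstacle_boundary = [(1.0, 1.0), (1.2, 1.0), (1.2, 1.2), (1.0, 1.2)]
path = [(0.0, 0.0), (0.5, 0.4), (0.9, 0.8)]
goal = (1.5, 1.5)

plt.scatter(*zip(*agent), c="green", label="agent")
plt.scatter(*zip(*obstacle_boundary), c="red", label="obstacle boundary")
plt.plot(*zip(*path), c="blue", label="path")
plt.scatter(*goal, c="gold", marker="*", s=200, label="goal")
plt.legend()
plt.axis("equal")
plt.show()
```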
The program can be run simply by navigating to the root of the repository and then running `python3 main.py`.
Instructions for the base software setup can be found in the . See the notes section for project-specific instructions.
3.c: `en1` might not exist on the Pi. To find out which network interface to use, run `sudo ifconfig` and look at the first interface it lists. For this project, it was `eth0`.

4.d.ii: Although it says `Mac/linux`, this is not always the case. If `ls /dev | grep tty.usbmodem` shows no results, try `ls /dev | grep ttyACM`. For this project, it was `ttyACM0`.
Follow instructions . The keyboard program mentioned can be found .
Install the dependencies using pip:
- `picamera`
- `pupil_apriltags`
- `tqdm`
- `scipy`
- `UDPComms`
- `pyyaml`
- `opencv` or `cv2`
- `argparse`
- `pickle`
- `matplotlib`
The full software can be run using `python main.py`, or the controller can be run separately using `python3 src/controller.py`.
`controller.py` provides an API to control the robot by emulating a PS4 controller, similar to . The main difference is that some keybinds are different, but it can also be called from code.
This is a breakdown of the fields of the dictionary that is sent through UDPComms to the robot.
Note: values `[-1,1]` means any value between -1 and 1, and values `1 or 0` are a toggle. This means that the first time a `1` is sent it will cause the value on the Pupper to change; a `0` needs to be sent before another `1` will cause a toggle.
It is a good idea to use some sort of smoothing for `lx`, `ly`, and `rx` to avoid abrupt stops.
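One way to do that is simple exponential smoothing before each send; the UDPComms port, smoothing factor, and values below are assumptions, not project settings:

```python
import time
from UDPComms import Publisher

ALPHA = 0.3                                   # smoothing factor: higher = more responsive
pub = Publisher(8830)                         # assumed command port
smoothed = {"lx": 0.0, "ly": 0.0, "rx": 0.0}

def smooth_and_send(command, message_rate=20):
    # Blend the requested stick values toward their targets instead of jumping.
    for axis in smoothed:
        smoothed[axis] += ALPHA * (command.get(axis, 0.0) - smoothed[axis])
    pub.send({**command, **smoothed, "message_rate": message_rate})
    time.sleep(1.0 / message_rate)

# Ramping ly down over several messages instead of jumping straight to 0.
for target in (1.0, 0.0, 0.0, 0.0):
    smooth_and_send({"ly": target})
```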
If running the controller manually, the controls are listed in the table further down. The robot must first be activated; this will also trigger calibration if the Pupper software was run with the `--zero` flag. Then it must be in trotting mode, and only then can it be controlled with the other functions.
The robot was donated by the . Due to time constraints and some delays, a prebuilt robot was delivered instead of a parts kit. The build instructions can be found on this . It also includes other relevant instructions that will be referenced later, such as motor calibration, software installation, and run instructions.
The robot consists of 12 C610 motor controllers, 12 M2006 motors, a Teensy board, cables and the chassis.
- Raspberry Pi
- Raspberry Pi Camera
- USB Battery Pack for Raspberry Pi
- Keyboard (for emergency stop)
The battery charger in the lab supports charging the 6S Lipo Battery. Settings can be found below:
The batteries took around 1 hour to charge and last around 2-3 hours per charge.
The USB battery pack used was a generic one that provided enough voltage for the Pi; these are usually battery packs that support fast-charge technology for smartphones. An appropriate cable is needed (e.g., USB 3.0 to Micro-USB).
Motor calibration instructions can be found in the . The robot's casing may have to be opened to access some of the motor controllers.
Instructions for leg calibration can be found . Best results were achieved by supporting the robot with a box and adding padding (made of paper towels and tape) between the legs and the box to get the desired motor angles.
| Key | Action |
| --- | --- |
| `space` | Toggle trot/rest |
| `v` | Toggle activation |
| `w`/`s` | Move forward/back |
| `a`/`d` | Turn left/right |
| `q`/`e` | Move left/right |
| `x` | Stop any sort of movement |
| Field | Values | Description |
| --- | --- | --- |
| `lx` | `[-1,1]` | Walk left or right |
| `ly` | `[-1,1]` | Walk forward or backward |
| `rx` | `[-1,1]` | Turn left or right (yaw) |
| `ry` | `[-1,1]` | Tilt up or down (pitch) |
| `L2` | `0` | Nothing |
| `R2` | `0` | Nothing |
| `R1` | `1 or 0` | Toggle trot/rest |
| `L1` | `1 or 0` | Toggle activation/deactivation |
| `dpadx` | `[-1,1]` | Move body up or down |
| `dpady` | `[-1,1]` | Roll left/right (roll) |
| `square` | `0` | Nothing |
| `circle` | `0` | Nothing |
| `triangle` | `0` | Nothing |
| `x` | `0` | Nothing |
| `message_rate` | `20` | Rate at which messages are sent |