Copyright (c) Brandeis University

fiducial_follows.md

UNDER DEVELOPMENT

This document does not explain fiducials or localization; it is meant to help you get the software up and running on your robot. We are working purely with ROS1-based robots, running on a Raspberry Pi with a Raspberry Pi camera, and purely with ArUco fiducials and the Ubiquity aruco_detect library. The many other variations are ignored here.

Building Blocks

There are several new components which come into play in order to use fiducials:

  1. Camera - which converts light into a published image topic

  2. Fiducial "signs" - which you print out and place within view of the robot

  3. aruco_detect package - which analyzes the images, locates fiducials in them, and publishes a tf for the relative position between the camera and each fiducial

Packages necessary

raspicam_node - Main Camera Node

This package, by Ubiquity Robotics, enables the Raspberry Pi camera. It should already be installed on your robot. Raspi cameras have quite a lot of configuration parameters, and setting them up as well as possible requires 'calibration'.

aruco_detect - Recognition software

This package, also from Ubiquity Robotics, contains the components that recognize the fiducials and work with them.

Fiducials

Fiducials, as you know, are black-and-white images. There is software that not only recognizes them but can also compute precisely the relative position between the camera and each fiducial.

Fiducial Dictionaries

Not all fiducials are created equal. Just because something looks like a black-and-white image doesn't mean it will work: the algorithm depends on the images coming from a standardized set of possibilities. An individual ArUco fiducial comes from one of a small set of predefined dictionaries and has a specific index therein. You need to know which dictionary you are dealing with, so make sure the fiducials you use match the software.
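To connect dictionaries to the number used later in this guide: the dictionary parameter passed to aruco_detect is an integer index into OpenCV's predefined ArUco dictionary enum. The sketch below hard-codes a few of those entries; treat the exact values as an assumption to verify against your OpenCV version.

```python
# A few entries of cv2.aruco's predefined dictionary enum
# (copied by hand; verify against your OpenCV version).
ARUCO_DICTIONARIES = {
    0: "DICT_4X4_50",           # 4x4-bit markers, 50 unique IDs
    4: "DICT_5X5_50",
    8: "DICT_6X6_50",
    16: "DICT_ARUCO_ORIGINAL",  # the "Original Aruco Dictionary"
}

def dictionary_name(index):
    """Return the dictionary name for an aruco_detect `dictionary` value."""
    return ARUCO_DICTIONARIES.get(index, "unknown (check the cv2.aruco docs)")

print(dictionary_name(16))  # the value used later in this guide
```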

Print out some fiducials

Running the software

Enable the camera

First you have to enable the camera. On the robot (onboard), run the following, either by itself or as part of another launch file.

roslaunch raspicam_node camerav2_410x308_30fps.launch

You can view the camera's image within rviz by subscribing to /raspicam_node/image, or with rqt_image_view. If the image is upside down, get help changing the VFlip default variable.

Other launch options

roslaunch raspicam_node camerav2_1280x960_10fps.launch # for a compressed image, a good default
roslaunch raspicam_node camerav2_1280x960_10fps.launch enable_raw:=true # for an uncompressed image
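Since the camera launch can also run as part of another launch file, here is a hedged sketch of a wrapper (the wrapper's file name is hypothetical; the included launch file and the enable_raw argument are the ones shown above):

```xml
<!-- Hypothetical my_camera.launch: wraps the raspicam_node launch file
     so it can be included from a larger bringup file. -->
<launch>
  <!-- enable_raw is optional; set true if you need uncompressed images -->
  <arg name="enable_raw" default="false"/>
  <include file="$(find raspicam_node)/launch/camerav2_1280x960_10fps.launch">
    <arg name="enable_raw" value="$(arg enable_raw)"/>
  </include>
</launch>
```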

Topics

You will see that when you enable the camera it will begin publishing on one of several topics, for example:

raspicam_node/image/compressed

Note that the last part of the topic (as you would see it in rostopic list) is not actually used when you subscribe to it. The topic you subscribe to is raspicam_node/image, and the /compressed suffix designates that the data is a compressed image. This is confusing and unusual.
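One way to picture this naming convention (a hedged illustration, not an exhaustive API reference; the suffix list is an assumption covering the common image_transport plugins):

```python
# The trailing element of a topic shown by `rostopic list` can name a
# transport rather than being part of the subscribable topic itself.
TRANSPORT_SUFFIXES = {"compressed", "theora", "compressedDepth"}

def base_topic(listed_topic):
    """Strip a transport suffix from a topic name seen in `rostopic list`."""
    head, _, tail = listed_topic.rpartition("/")
    if tail in TRANSPORT_SUFFIXES:
        return head
    return listed_topic

print(base_topic("/raspicam_node/image/compressed"))  # /raspicam_node/image
```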

Detect Fiducials

Basic

Make sure first that your camera is pointed at a fiducial that you printed earlier. Now run (on your VNC):

roslaunch aruco_detect aruco_detect.launch vis_msgs:=false dictionary:=16 fiducial_len:=0.10

If aruco_detect sees the tag and identifies it, you should see a large number of new topics (rostopic list). One that you can check is /fiducial_images. View it with rqt_image_view or rviz. If it is working and the fiducial is in view, you will see a colored outline around the fiducial. aruco_detect has numerous parameters that you can look at tweaking in the future.

Transforms

When you have just one fiducial, things are simpler: aruco_detect will create a tf between that fiducial and the robot itself. In theory that means that as the robot moves, as long as it can see the fiducial, you can use the tf to determine the robot's location relative to the fiducial.

You can see this by running rviz and looking at the tf tree. When you display the tf tree you will see both the fiducial's tf and the robot; how they are drawn depends on which of the two you make the rviz "fixed frame".
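The idea of recovering the robot's pose from the camera-fiducial tf can be illustrated with a little transform math. This is a hypothetical 2-D sketch, not code from aruco_detect: it inverts the fiducial's pose in the camera frame to get the camera's pose in the fiducial's frame.

```python
import math

def invert_transform(x, y, theta):
    """Given a fiducial's pose (x, y, theta) in the camera frame,
    return the camera's pose in the fiducial's frame."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Inverse of a 2-D rigid transform: rotation becomes -theta,
    # translation becomes -R(theta)^T @ t
    inv_x = -(cos_t * x + sin_t * y)
    inv_y = -(-sin_t * x + cos_t * y)
    return inv_x, inv_y, -theta

# A fiducial 1 m ahead of the camera, rotated 90 degrees:
print(invert_transform(1.0, 0.0, math.pi / 2))
```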

Reference Links

Use the Aruco Markers Generator site linked below to print out one or more fiducials. Pick "Original Aruco Dictionary", which corresponds to "16" in the code. If you print more than one, give them different IDs; later on, that ID will come through in the code to allow you to tell one from the other. Tape the fiducial within sight of the camera on your robot.

raspicam_node
aruco_detect
Aruco Markers Generator
Ubiquity Overview of Fiducials
Fiducial Slam Project Report