Copyright (c) Brandeis University


Camera Calibration

How to calibrate a camera before using computer vision algorithms (e.g. fiducial detection or VSLAM)


Question

I want to run a computer vision algorithm on my robot, but I'm told that I need to calibrate my camera(s) first. What is camera calibration, and how can I do it?

What is Camera Calibration?

Camera calibration is the act of determining the intrinsic parameters of your camera. Roughly speaking, the intrinsic parameters are constants that, in a mathematical model of your camera, describe how the camera (via its internal geometry) converts a 3D point in the camera's own coordinate frame into a 2D point on its image plane.

Intrinsic parameters are distinct from extrinsic parameters, which describe where your camera is in the world frame.

So, since calibration deals with the intrinsic parameters of your camera, it practically doesn't matter where you place your camera during calibration.
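To make this concrete, here is a minimal sketch of how the pinhole model uses the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) to map a 3D point in the camera frame to a 2D pixel. The numeric values here are made up for illustration, not from any real calibration:

```python
import numpy as np

# Made-up intrinsic parameters (focal lengths and principal point, in pixels).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# The intrinsic matrix K in its standard pinhole form.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_3d):
    """Project a 3D point (in the camera frame, Z > 0) to pixel coordinates."""
    p = K @ point_3d     # homogeneous image coordinates
    return p[:2] / p[2]  # divide by depth to get (u, v)

# A point 2 m in front of the camera and 0.5 m to the right lands
# to the right of the principal point:
u, v = project(np.array([0.5, 0.0, 2.0]))
print(u, v)  # 470.0 240.0
```

Calibration is the process of recovering fx, fy, cx, cy (plus distortion terms) for your particular camera.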

To hear more about the basics of camera calibration, watch the following 5-minute videos by Cyrill Stachniss in order:

  • Camera Intrinsics and Extrinsics
  • Mapping the 3D World to an Image
  • Intrinsic Camera Calibration

This video, also by Cyrill Stachniss, is a deep dive into Zhang's method, which is what the camera_calibration package we discuss below uses under the hood.

How Can I Calibrate my Camera?

This note describes two ways to calibrate your camera. The first uses the camera_calibration ROS package; this is the easier approach, since it does almost all of the work for you. The second uses OpenCV's library directly, writing your own calibration code (or adapting one in circulation).

The camera_calibration Package

This guide assumes you've already got your camera working in ROS, and that you're able to publish camera_info and image_raw topics for the camera. If you need to set up a new USB camera, see this entry in our lab notebook.

First, let's install the package:

sudo apt-get update
sudo apt-get install ros-noetic-camera-calibration

Second, print out this checkerboard on a letter-sized piece of paper.

Third, tape the corners of the paper to a firm, flat surface, like the surface of a piece of cardboard.

Fourth, measure a side of a single square, convert your measurement to millimeters, and divide the result by 1000 (e.g. a 20 mm square gives 0.020). Let's call your result RESULT.

Now, let the number of interior corners along one side of your checkerboard be M and along the other side N (the calibrator counts the corners where four squares meet, not the squares themselves). Finally, let's say your camera node's name is CAM, such that, when you connect it with ROS, it publishes the /CAM/camera_info and /CAM/image_raw topics. After ensuring that these two topics are being published, execute:

rosrun camera_calibration cameracalibrator.py --size MxN --square RESULT image:=/CAM/image_raw camera:=CAM

Next, follow the instructions under section 4, "Moving the Checkerboard", and section 5, "Calibration Results", of the official camera_calibration tutorial. Here's a video of what a successful calibration process might look like.

WARNING: The two sections cited above are the only ones you actually want to follow in the official tutorial. Much of the rest of the material there is outdated or misleading.

OpenCV

Sometimes you might want to use object detection or other algorithms that require a camera, such as VSLAM. These algorithms usually need a very good calibration of the camera to work properly. Calibration corrects problems like distortion by determining the camera's true parameters, such as focal length, format size, principal point, and lens distortion. If you see lines that are curved but are supposed to be straight, then you should probably calibrate your camera.

Calibration is usually done with some kind of checkerboard pattern. This can be a normal checkerboard, or a ChArUco/ArUco board, which carries patterns that look like fiducials or QR codes to further help with calibration. In this tutorial, we'll be using a 7x9 checkerboard with 20x20mm squares: checkerboard pdf.

The most ideal approach is to print the checkerboard on a large, matte, sturdy piece of paper so that the board is completely flat and shows no reflections. However, it's okay to print it on a normal piece of paper and put it on a flat surface. Then take at least ten photos with your camera from a variety of angles and positions, so that the checkerboard appears in all corners of the photos. Make sure the whole checkerboard is visible in each picture. Save those photos in an easy-to-find place and follow the steps below to get your intrinsic calibration matrix.

Step by step:

  • Print out the checkerboard pattern.

  • Take at least 10 photos of the checkerboard at a variety of angles and positions (see Image 1 for examples) and save them in an easy-to-access place.

  • Download or copy the OpenCV calibration code and run it after changing the folder path to point at your photos.

  • Record the resulting intrinsic matrix and distortion coefficients and enter them wherever you need them.

Image 1: some examples of having the checkerboard cover all corners of the image
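The steps above can be sketched roughly in Python as follows. This is only a sketch, not the exact code linked later in this note: the folder name, function names, and file extension are assumptions, and the board geometry (a 7x9-square board has 6x8 interior corners, 20 mm squares) should be adjusted to whatever you printed:

```python
import glob
import numpy as np

# Assumed board geometry: a 7x9-square checkerboard has 6x8 interior corners.
CORNERS = (6, 8)
SQUARE_SIZE_M = 0.020  # 20 mm squares

def board_points(corners=CORNERS, square=SQUARE_SIZE_M):
    """3D coordinates of the interior corners on the (flat, Z=0) board."""
    pts = np.zeros((corners[0] * corners[1], 3), np.float32)
    pts[:, :2] = np.mgrid[0:corners[0], 0:corners[1]].T.reshape(-1, 2) * square
    return pts

def calibrate(image_folder):
    """Find the board in every image and solve for the intrinsic matrix."""
    import cv2  # imported here so board_points() works without OpenCV installed
    obj_points, img_points, size = [], [], None
    for path in glob.glob(image_folder + "/*.jpg"):  # adjust extension as needed
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, CORNERS)
        if found:
            obj_points.append(board_points())
            img_points.append(corners)
    # Returns RMS reprojection error, intrinsic matrix K, distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return K, dist

# K, dist = calibrate("calib_photos")  # point this at your saved photos
```

Images where the board is not found are simply skipped, which is why taking more than the minimum ten photos is a good idea.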

Potential intrinsic matrix:

[[688.00030953   0.         307.66412893]
 [  0.         689.47629485 274.053018  ]
 [  0.           0.           1.        ]]

Pastable into python:

fx, fy, cx, cy = (688.00030953, 689.47629485, 307.66412893, 274.053018)

Distortion coefficients:

[9.39260444e-03, 4.90535732e-01, 1.48962564e-02, 4.68503188e-04, -1.77954077e+00]
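These coefficients follow OpenCV's (k1, k2, p1, p2, k3) ordering. As a sketch of what they mean, here is the standard radial-plus-tangential (Brown-Conrady) distortion model applied in plain Python, using the example coefficients above; treat the numbers as one sample calibration, not values for your camera:

```python
# Example coefficients from the calibration above: (k1, k2, p1, p2, k3).
k1, k2, p1, p2, k3 = (9.39260444e-03, 4.90535732e-01, 1.48962564e-02,
                      4.68503188e-04, -1.77954077e+00)

def distort(x, y):
    """Apply radial + tangential distortion to normalized image coords (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# The optical axis is unaffected; points far from it shift noticeably.
print(distort(0.0, 0.0))  # (0.0, 0.0)
print(distort(0.3, 0.2))
```

This is the model cv2.undistort inverts when it straightens out curved lines in your images.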

The code I used was this opencv calibration. It also has more notes and information about what the values you are getting mean.
