
camera-performance-notes.md

Author: Pito Salas, Date: April 2023

I posted a question to the world about improving camera performance. For posterity I am saving the rough answers here. Over time I will edit them into more specific, tested instructions.

Original Question

My robot has:

  • ROS1 Noetic

  • Raspberry Pi 4

  • Pi Camera V2 running the raspicam node (https://github.com/UbiquityRobotics/r...)

  • My "remote" computer is running Ubuntu 20.04 on a cluster located in my lab

  • They are communicating via WiFi.

  • roscore is running on the robot

  • The raspicam node is publishing images to its various topics.

I have two nodes on my remote computer, each processing the images in a different way. One of them is looking for fiducials and the other is doing some simple image processing with OpenCV.

Performance is not good. Specifically, it seems like the robot itself is not moving smoothly, and there is too much delay before the image reaches the remote computer. I have not measured this, so it is just a subjective impression.

I hypothesize that the problem is that the image data is large, which could be causing one or more of the following problems:

a. Transmitting it is too much for the Pi
b. The WiFi is not fast enough
c. Because several nodes on the remote computer subscribe to the images, the images are being sent redundantly to the remote computer

I have some ideas on how to try to fix this:

a. Reduce the image size at the raspicam node
b. Do some of the image processing onboard the Pi (see the sketch after this list)
c. Change the image to black and white
d. Turn off any displays of the image on the remote computer (i.e. rviz and debugging windows)
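For ideas (b) and (c), here is a minimal rospy sketch of onboard preprocessing: it subscribes to the raw camera image, downscales it, converts it to grayscale, and republishes the smaller image so that only the reduced version crosses the WiFi link. The topic names and target resolution are assumptions; check `rostopic list` to see what raspicam_node actually publishes on your robot.

```python
#!/usr/bin/env python
# Hedged sketch: shrink and grayscale images on the Pi before transmission.
# Topic names are assumptions; adjust to your raspicam_node configuration.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
pub = None

def callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    small = cv2.resize(frame, (320, 240))            # reduce image size (idea a/b)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)   # drop color channels (idea c)
    pub.publish(bridge.cv2_to_imgmsg(gray, encoding="mono8"))

if __name__ == "__main__":
    rospy.init_node("onboard_image_reducer")
    pub = rospy.Publisher("/camera/image_small", Image, queue_size=1)
    rospy.Subscriber("/raspicam_node/image", Image, callback,
                     queue_size=1, buff_size=2**22)
    rospy.spin()
```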

Answer

If you have multiple nodes receiving from one Raspberry Pi ROS node, the image is sent that many times over WiFi. One clear improvement would be a single node that communicates with the Raspberry Pi over WiFi, publishes to the other subscribers over Ethernet, and thereby offloads the Pi. You could also use something other than a ROS node to asynchronously copy the image toward listeners, so you don't need to wait for the whole picture before forwarding data, which reduces latency.
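A minimal sketch of that single-receiver idea, assuming the raspicam node publishes a compressed topic (the topic names below are placeholders): this relay is the only subscriber pulling images over WiFi, and every other node on the remote machine subscribes to the relayed topic instead. The stock `topic_tools relay` node does essentially the same thing.

```python
#!/usr/bin/env python
# Sketch: single WiFi receiver that re-publishes images for local subscribers.
# Topic names are assumptions; adjust to whatever raspicam_node publishes.
import rospy
from sensor_msgs.msg import CompressedImage

if __name__ == "__main__":
    rospy.init_node("image_relay")
    pub = rospy.Publisher("/relay/image/compressed", CompressedImage, queue_size=1)
    # Each incoming message is forwarded once; local nodes subscribe to /relay/...
    rospy.Subscriber("/raspicam_node/image/compressed", CompressedImage,
                     pub.publish, queue_size=1, buff_size=2**22)
    rospy.spin()
```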

I'm not sure what the exact problem is, but different WiFi versions support different speeds. The base Raspberry Pi has one PCB antenna that is not so great. A Compute Module can support a better antenna, and some embedded boards have an M.2 slot where you can plug in better hardware. MIMO WLAN with two antennas and a 40 MHz channel can provide much more bandwidth, but the access point needs to support similar modes. Faster transmission also means reduced latency.

If you are in an urban area, the WiFi channels can carry a lot of other traffic. Tools like iptraf can show you the traffic, and there are WiFi analyzers as well.

Also, the Raspberry Pi might not be the most powerful platform, so make sure you minimize image format conversions on the Pi side, or use hardware video acceleration for them if available.

Answer

I'd suggest experimenting with a drone FPV camera and a 5.8 GHz USB receiver connected to the desktop, which would then process the video. There is no need to use WiFi for the video stream. I posted here before about my use of it and the optimal setup.

Answer

I have a similar setup on a recent robot, except that I'm using a USB camera instead of the Raspberry Pi camera. I was initially using OpenCV to capture the images and then send them over WiFi. The first limitation I found was that Image messages are too large for a decent frame rate over WiFi, so I initially had the code on the robot compress to JPEG and send CompressedImage messages. That improved things, but there was still a considerable delay in receiving the images (> 1.5 sec delay to the node on the remote computer doing object detection). I was only able to get about 2 frames/sec transferred using this method, too, and suspected that the JPEG compression in OpenCV was taking too much time.

So I changed the capture to use V4L2 instead of OpenCV, in order to get JPEG images directly from an MJPEG stream from the camera. With this change in place I can get over 10 frames/sec (as long as lighting is good enough - the camera drops the frame rate if the lighting is low), and the delay is only a fraction of a second, good enough for my further processing. This is with a 1280x720 image from a fisheye camera. If I drop the resolution to 640x360 I believe I can get 20 frames/sec, but that's likely overkill for my application, and I'd rather keep the full resolution.

(Another difference from your setup that probably doesn't matter: My robot is running on a Beaglebone Blue rather than Raspberry Pi, and does not run ROS. Instead, I use ZeroMQ to serve the images to a ROS2 node on my remote computer.)
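As a rough illustration of that ZeroMQ-to-ROS2 bridge (this is not the author's actual code), the sketch below assumes the robot pushes raw JPEG frames on a ZeroMQ PUB socket; the receiving node republishes them as sensor_msgs/CompressedImage for local ROS2 subscribers. The address, port, and topic name are made-up placeholders.

```python
#!/usr/bin/env python3
# Hedged sketch: receive JPEG frames over ZeroMQ and republish them as
# CompressedImage on a local ROS2 topic. Address, port, and topic are assumptions.
import zmq
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage

class ZmqImageBridge(Node):
    def __init__(self):
        super().__init__("zmq_image_bridge")
        self.pub = self.create_publisher(CompressedImage, "camera/image/compressed", 1)
        ctx = zmq.Context()
        self.sock = ctx.socket(zmq.SUB)
        self.sock.connect("tcp://robot.local:5555")   # assumed robot address/port
        self.sock.setsockopt(zmq.SUBSCRIBE, b"")
        self.timer = self.create_timer(0.01, self.poll)

    def poll(self):
        # Non-blocking receive so the rclpy executor keeps spinning.
        try:
            jpeg = self.sock.recv(flags=zmq.NOBLOCK)
        except zmq.Again:
            return
        msg = CompressedImage()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.format = "jpeg"
        msg.data = bytes(jpeg)
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ZmqImageBridge())

if __name__ == "__main__":
    main()
```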

What Sampsa said about ROS messages to multiple subscribers is also salient: unless the underlying DDS implementation is configured to use UDP multicasting you'll end up with multiple copies of the image being transferred. With the setup above, using a bridging node on the remote computer as the only recipient, only one copy of the image is transferred over WiFi.

Answer

Sergei's idea to use analog video removes the delay completely, but then you need an analog-to-digital conversion at the server. The video quality is as good as you are willing to pay for. Analog video seems to many people a radical solution, since they are used to computers and networks.

An even more radical solution is to move the cameras off the robots and place them on the walls. Navigation is easier, but of course the robot is confined to where you have cameras. With fixed cameras you can use wires to transmit the signals and have zero delay. You can use standard security cameras that use PoE.

But if the camera must be digital and must be on the robot, then it is best to process as much as you can on the robot, compress it, and send it to one node on the server; that node then redistributes the data.

With ROS2 the best configuration is many CPU cores and a large memory rather than a networked cluster, because ROS2 can do "zero copy" message passing, where the data stays in the same RAM location and only pointers are passed between nodes. The data never moves, and it is very fast.

Answer

Quick comment: as with everything wireless, make sure to first benchmark the baseline wireless throughput (using something like iperf). Compare the result to the desired transfer rate (i.e. the image_raw bandwidth). If achieved_bw < desired_bw (or is very close to it), things will not work smoothly. Note that desired_bw could be several times the bandwidth of a single image_raw subscription, since you write that you have multiple subscribers.
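One way to get the desired_bw side of that comparison is to measure what the image topic actually produces. `rostopic bw <topic>` reports this directly; the rospy sketch below does roughly the same thing by counting received bytes per second (the topic name is an assumption).

```python
#!/usr/bin/env python
# Sketch: report MB/s received on an image topic, for comparison with an
# iperf measurement of the WiFi link. Topic name is an assumption.
import rospy
from sensor_msgs.msg import CompressedImage

received = 0

def callback(msg):
    global received
    received += len(msg.data)

if __name__ == "__main__":
    rospy.init_node("topic_bw_probe")
    rospy.Subscriber("/raspicam_node/image/compressed", CompressedImage,
                     callback, queue_size=10)
    rate = rospy.Rate(1)   # report once per second
    while not rospy.is_shutdown():
        rospy.loginfo("%.2f MB/s", received / 1e6)
        received = 0
        rate.sleep()
```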


In all cases though: transmitting image_raw topics (or their rectified variants) over a limited-bandwidth link is not going to work. I'd suggest looking into image_transport and configuring a compressed transport. There are lossless plugins available, which would be important if you're looking to use the images for image processing tasks. One example would be swri-robotics/imagezero_transport. It takes some CPU, but reduces bandwidth significantly.
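On the remote side, a compressed transport can be consumed from Python without ever subscribing to image_raw, for example by decoding the `/compressed` topic directly with OpenCV, as in the hedged sketch below (the topic name is an assumption). Alternatively, `rosrun image_transport republish compressed in:=<camera topic> raw out:=<local topic>` recreates a raw topic locally for nodes that need one.

```python
#!/usr/bin/env python
# Sketch: subscribe to the compressed transport and decode the JPEG locally,
# so the full-size raw image never crosses the WiFi link. Topic name is an
# assumption.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import CompressedImage

def callback(msg):
    # msg.data holds the JPEG/PNG byte stream produced by the compressed transport
    frame = cv2.imdecode(np.frombuffer(msg.data, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return
    # ... fiducial detection / OpenCV processing on `frame` goes here ...

if __name__ == "__main__":
    rospy.init_node("compressed_image_consumer")
    rospy.Subscriber("/raspicam_node/image/compressed", CompressedImage, callback,
                     queue_size=1, buff_size=2**22)
    rospy.spin()
```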

See also #q413068 for a recent discussion about a similar topic.
