object_detection_yolo_setup.md

By Peter Zhao

Introduction

This FAQ section describes how to integrate YOLO with ROS so that the TurtleBot can detect objects using its camera. YOLO (You Only Look Once) is a widely used computer vision algorithm that uses convolutional neural networks (CNNs) to detect and label objects. Typically, before a machine learning algorithm can be used, it needs to be trained on a large data set. Fortunately, YOLO provides many pre-trained models that we can use. For example, the yolov2-tiny weights can classify around 80 classes of objects, including person, car, bus, bird, cat, dog, and so on.

darknet_ros

To integrate YOLO with ROS easily, one can use a third-party package known as darknet_ros, which uses darknet (a neural network library written in C) to run YOLO and operates as a ROS node. The node subscribes to a topic with the Image message type, and publishes bounding box information as BoundingBoxes messages to the /darknet_ros/bounding_boxes topic.
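
For a quick look at what those messages contain, you can run a small probe like the one below once darknet_ros is up (installation and launch steps are in the sections that follow). This is a minimal sketch: the topic name comes from the paragraph above, the node name is arbitrary, and the field names (Class, probability, xmin, ...) follow the darknet_ros_msgs definitions used later on this page.

import rospy
from darknet_ros_msgs.msg import BoundingBoxes

# Print the contents of a single BoundingBoxes message, then exit.
rospy.init_node('darknet_ros_probe')
msg = rospy.wait_for_message('/darknet_ros/bounding_boxes', BoundingBoxes)
for box in msg.bounding_boxes:
    print("{} ({:.2f}): x [{}, {}], y [{}, {}]".format(
        box.Class, box.probability, box.xmin, box.xmax, box.ymin, box.ymax))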

How to use darknet_ros

To use darknet_ros, first clone the GitHub repo and place the resulting directory somewhere inside your catkin workspace, so that it is built when you run catkin_make:

cd ~/catkin_ws/src/
git clone https://github.com/leggedrobotics/darknet_ros

Afterward, run catkin_make to build the darknet_ros package:

cd ~/catkin_ws
catkin_make

You are now ready to run darknet_ros!

How to run darknet_ros

darknet_ros can be run alongside your other nodes. In your launch file, include the following lines:

<include file="$(find darknet_ros)/launch/darknet_ros.launch">
         <arg name="image" value="raspicam_node/image/raw"/>
</include>

Next, you need to download a pretrained weight file, or put weights you have trained yourself into the darknet_ros/darknet_ros/yolo_network_config/weights folder. To download the pretrained yolov2-tiny weights, run the following commands:

cd DIRECTORY_TO_DARKNET_ROS/darknet_ros/yolo_network_config/weights
wget http://pjreddie.com/media/files/yolov2-tiny.weights

Then run your launch file:

roslaunch [your_package_name] [your_launch_file.launch]

You should see the darknet_ros node running, with a window that displays the camera image with bounding boxes drawn around detected objects. If you don't see any GUI window, check whether the topic passed in the "image" arg is publishing a valid image.
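
One way to make that check, as a rough sketch (replace the topic name with whatever you passed to the "image" arg):

import rospy
from sensor_msgs.msg import Image

# Wait up to 5 seconds for one frame on the camera topic darknet_ros uses.
rospy.init_node('camera_topic_check')
try:
    img = rospy.wait_for_message('raspicam_node/image/raw', Image, timeout=5.0)
    print("Got an image: {}x{}, encoding {}".format(img.width, img.height, img.encoding))
except rospy.ROSException:
    print("No image received -- check the camera node and the topic name.")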

Subscribing to darknet_ros's topics

To receive information from darknet_ros, we need to subscribe to the topic it publishes. Here is an example usage:

import rospy
from darknet_ros_msgs.msg import BoundingBoxes, BoundingBox


class ObjectRecognizer:

    def __init__(self):
        self.boxes = []
        self.box_sub = rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, self.get_bounding_box_cb())

    def get_bounding_box_cb(self):
        # Build and return the callback; rospy calls cb(msg) for every message.
        def cb(msg):
            self.boxes = msg.bounding_boxes
            for box in self.boxes:
                print("box_class: {}".format(box.Class))
                print("box_x_min: {}".format(box.xmin))
                print("box_x_max: {}".format(box.xmax))
                print("box_y_min: {}".format(box.ymin))
                print("box_y_max: {}".format(box.ymax))
                print()
        return cb


if __name__ == '__main__':
    rospy.init_node('object_recognizer')
    ObjectRecognizer()
    rospy.spin()

The detailed description of the message types can be found in the darknet_ros/darknet_ros_msgs/msg folder.
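
Once boxes are coming in, a typical next step is to pick out the detection you care about and work out where it sits in the frame. The helper functions below are a hedged sketch (the 'person' label and the 640-pixel image width are assumptions; adjust them for your camera): they find the most confident box of a given class and compute how far its center is from the middle of the image, which you could then feed into a steering controller.

def most_confident_box(boxes, target_class='person'):
    # Return the highest-probability box whose label matches target_class,
    # or None if there are no matching detections.
    matches = [b for b in boxes if b.Class == target_class]
    if not matches:
        return None
    return max(matches, key=lambda b: b.probability)


def horizontal_offset(box, image_width=640):
    # Pixel offset of the box center from the image center.
    # Negative means the object is to the left of center.
    box_center_x = (box.xmin + box.xmax) / 2.0
    return box_center_x - image_width / 2.0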

Working with CompressedImage

You might have noticed that darknet_ros expects raw images on the topic it subscribes to. Sometimes we may want to use CompressedImage instead, in order to reduce network latency so that images can be published at a higher frame rate. This can be done by slightly modifying the source code of darknet_ros.

The changes I've made can be found in darknet_ros/darknet_ros/include/darknet_ros/YoloObjectDetector.hpp and darknet_ros/darknet_ros/src/YoloObjectDetector.cpp.

Essentially, you need to change the

void YoloObjectDetector::cameraCallback(const sensor_msgs::CompressedImageConstPtr& msg) {
  ROS_DEBUG("[YoloObjectDetector] USB image received.");

method so that it accepts a msg of type sensor_msgs::CompressedImageConstPtr& (as shown above) instead of sensor_msgs::ImageConstPtr&. You also need to change the corresponding header file so that the method signature matches.

I have forked the original darknet_ros repository and made the modification myself; if you wish to use it, you can simply clone that fork instead of the upstream repo. Now you can modify the launch file so that darknet_ros subscribes to a topic that publishes CompressedImage messages:

<include file="$(find darknet_ros)/launch/darknet_ros.launch">
         <arg name="image" value="raspicam_node/image/compressed"/>
</include>
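
If you would rather not modify the darknet_ros source at all, another option is a small relay node that decompresses the CompressedImage stream and republishes it as raw Image messages that the stock darknet_ros can consume. The sketch below is an assumption-laden example, not part of darknet_ros: it uses cv_bridge and OpenCV, the input topic follows the raspicam examples above, and the output topic name is made up, so remap as needed. (The image_transport package's republish node can do something similar from the command line.)

import rospy
import cv2
import numpy as np
from sensor_msgs.msg import CompressedImage, Image
from cv_bridge import CvBridge

bridge = CvBridge()
pub = None

def compressed_cb(msg):
    # Decode the compressed payload and republish it as a raw sensor_msgs/Image.
    arr = np.frombuffer(msg.data, dtype=np.uint8)
    frame = cv2.imdecode(arr, cv2.IMREAD_COLOR)
    image_msg = bridge.cv2_to_imgmsg(frame, encoding='bgr8')
    image_msg.header = msg.header
    pub.publish(image_msg)

if __name__ == '__main__':
    rospy.init_node('compressed_to_raw_relay')
    pub = rospy.Publisher('camera/image_raw_relay', Image, queue_size=1)
    rospy.Subscriber('raspicam_node/image/compressed', CompressedImage, compressed_cb)
    rospy.spin()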