nodes.md

Updated May 2019 with progress following gen3 and mutant mark 1.

adjust_position

Dormant. Converts the Pose messages it receives from its subscription to PoseWithCovarianceStamped messages and passes them on via its publication.

Publications

  • /initialpose

Subscriptions

  • /fid_pose
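
A minimal rospy sketch of how a relay node like adjust_position might look. The /fid_pose message type (geometry_msgs/Pose) and the map frame are assumptions, since they are not spelled out above.

```python
#!/usr/bin/env python
# Hypothetical sketch of a pose-relay node like adjust_position.
import rospy
from geometry_msgs.msg import Pose, PoseWithCovarianceStamped

def on_fid_pose(pose):
    out = PoseWithCovarianceStamped()
    out.header.stamp = rospy.Time.now()
    out.header.frame_id = "map"   # assumed frame
    out.pose.pose = pose          # covariance left at its zero default
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("adjust_position")
    pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped, queue_size=1)
    rospy.Subscriber("/fid_pose", Pose, on_fid_pose)
    rospy.spin()
```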

check_docked

Defunct. Updates the robot's state to reflect whether it is currently being charged at its dock, based on charging data from its subscription.

Now defunct: the mutant does not dock because it is not based on the Kobuki base.

Subscriptions

  • /mobile_base/sensors/core_throttle

cpu_checker

Current. Publishes CPU usage data, logging it to the warning log when it is high and to the debug log otherwise, based on data from process and system utilities.

Publications

  • /laptop_cpu_usage
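
A minimal sketch of this pattern, assuming psutil for the process and system utilities, std_msgs/Float32 for the topic, and a 90% warning threshold (all assumptions):

```python
#!/usr/bin/env python
# Hypothetical sketch of a CPU monitor like cpu_checker.
import psutil
import rospy
from std_msgs.msg import Float32

HIGH_CPU = 90.0  # assumed threshold, in percent

if __name__ == "__main__":
    rospy.init_node("cpu_checker")
    pub = rospy.Publisher("/laptop_cpu_usage", Float32, queue_size=1)
    rate = rospy.Rate(1)  # check once per second
    while not rospy.is_shutdown():
        usage = psutil.cpu_percent(interval=None)
        pub.publish(Float32(usage))
        if usage > HIGH_CPU:
            rospy.logwarn("CPU usage is high: %.1f%%", usage)
        else:
            rospy.logdebug("CPU usage: %.1f%%", usage)
        rate.sleep()
```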

greeter

Dormant. Uses facial recognition to detect and recognize known faces in the camera feed, based on provided data, and greets them by name via a vocal service.

Subscriptions

  • /camera/rgb/image_raw/compressed_throttle

lost_and_found

Current. Uses pickup-detector data to determine whether the robot is airborne, and handles localization recovery when it returns to the ground.

Publications

  • /initialpose

  • /cmd_vel

  • /destination

Subscriptions

  • /airborne

  • /destination

message_switch

Dormant. Organizes speech messages chronologically and feeds them to the speech service at appropriate times.

Subscriptions

  • /things_to_say

location_narration

Dormant. Publishes speech messages narrating the robot's current behavior and proximate location, based on its state and on data from its subscription.

Publications

  • /things_to_say

Subscriptions

  • /nearest_waypoint

navigation_controller

Publications

  • /cmd_vel_mux/input/navi

Subscriptions

  • /amcl_pose

package_handler

Dormant. Detects the presence of a physical package (reported via its publications) and converses with a user to determine goals and to communicate successes and errors, updating its goals in response to expected and unexpected changes.

Currently not in use due to the lack of a sensor to detect packages on gen3's mutant.

Publications

  • /release_package

  • /record_start

  • /record_stop

  • /physical_package

  • /destination

Subscriptions

  • /release_package

  • /receive_package

  • /mobile_base/events/button

  • /mobile_base/events/digital_input

  • /destination

package_sender

Dormant. Publishes the filename of the appropriate prerecorded message for the robot to play, based on data from its subscription.

Dormant for the same reason as package_handler.

Publications

  • /receive_package

Subscriptions

  • /physical_package

pose_converter

Current. Provides scripts for automatically converting between different pose types.
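
The individual helpers are not listed above, but conversions of this kind are usually small functions along these lines (the function names and the choice of source/target types are assumptions):

```python
# Hypothetical pose-conversion helpers in the spirit of pose_converter.
# Assumes a rospy node has already been initialized (for rospy.Time.now()).
import rospy
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped

def pose_to_stamped(pose, frame_id="map"):
    """Wrap a bare geometry_msgs/Pose in a PoseStamped with the given frame."""
    ps = PoseStamped()
    ps.header.stamp = rospy.Time.now()
    ps.header.frame_id = frame_id
    ps.pose = pose
    return ps

def stamped_to_covariance(ps):
    """Promote a PoseStamped to a PoseWithCovarianceStamped (zero covariance)."""
    pc = PoseWithCovarianceStamped()
    pc.header = ps.header
    pc.pose.pose = ps.pose
    return pc
```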

process_fid_tfs

Current. Uses fiducial data from its subscription to determine and publish the robot's position relative to the map.

Publications

  • initialpose

Subscriptions

  • fiducial_transforms

recording_sender

Current. Records short audio clips of user instructions to a file and publishes the filename.

Publications

  • /receive_package

Subscriptions

  • /record_start

  • /record_stop

rover_controller

Current. Controls the robot and its state with respect to a wide range of input sources and publishes a wide range of data for other nodes to use.

Publications

  • temp_pose

  • /teleop_keypress

  • /destination

  • /web/camera

  • /web/state

  • /web/map

  • /cmd_vel

Subscriptions

  • /raspicam_node/image/compressed

  • /web/teleop

  • /web/destination

  • /destination

scan_filter

Current. Applies a filter to scan data to ignore the structural posts of the mutant.

Publications

  • /scan_filter

Subscriptions

  • scan
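
One common way to implement this is to blank out the beam angles that the posts occlude. A hedged sketch follows; the post angles are made up, since the real ones depend on where the posts sit on the mutant's frame:

```python
#!/usr/bin/env python
# Hypothetical sketch of a LaserScan filter like scan_filter.
import math
import rospy
from sensor_msgs.msg import LaserScan

POST_ANGLES_DEG = [(40, 50), (130, 140), (220, 230), (310, 320)]  # assumed occluded sectors

def on_scan(scan):
    ranges = list(scan.ranges)
    for i in range(len(ranges)):
        deg = math.degrees(scan.angle_min + i * scan.angle_increment) % 360
        if any(lo <= deg <= hi for lo, hi in POST_ANGLES_DEG):
            ranges[i] = float("inf")  # treat occluded beams as "no return"
    scan.ranges = ranges
    pub.publish(scan)

if __name__ == "__main__":
    rospy.init_node("scan_filter")
    pub = rospy.Publisher("/scan_filter", LaserScan, queue_size=1)
    rospy.Subscriber("scan", LaserScan, on_scan)
    rospy.spin()
```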

state

Current. Handles requested state changes, validates them for legality, and publishes relevant information accordingly.

Publications

  • /move_base_simple/goal

  • /initialpose

  • /goal_pose_for_fids

  • /state

talk

Current. Uses text-to-speech to turn strings into audio output.
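
No topics or services are listed for this node above. The descriptions elsewhere on this page refer to a speech service; for a self-contained example, the sketch below instead assumes a simple /things_to_say topic subscriber and the sound_play package for text-to-speech (both assumptions):

```python
#!/usr/bin/env python
# Hypothetical sketch of a talk-style node using the sound_play text-to-speech client.
import rospy
from std_msgs.msg import String
from sound_play.libsoundplay import SoundClient

def on_text(msg):
    soundhandle.say(msg.data)  # speak the incoming string aloud

if __name__ == "__main__":
    rospy.init_node("talk")
    soundhandle = SoundClient()
    rospy.sleep(1.0)           # give the sound_play server time to connect
    rospy.Subscriber("/things_to_say", String, on_text)
    rospy.spin()
```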

turtlebot_teleop

Current. Cancels existing robot goals and allows for manual control of the robot via teleoperation.

Publications

  • /cmd_vel_mux/input/teleop

Subscriptions

  • /web/teleop

  • initialpose

whereabouts

Dormant. Publishes the name of the nearest waypoint when it changes, based on data from its subscription.

Publications

  • /nearest_waypoint

Subscriptions

  • /amcl_pose
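
A minimal sketch of this behavior; the waypoint names and coordinates are made up, and std_msgs/String is assumed for the output topic:

```python
#!/usr/bin/env python
# Hypothetical sketch of a whereabouts-style node: publishes the nearest
# waypoint name only when it changes. The waypoint table is an assumption.
import math
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from std_msgs.msg import String

WAYPOINTS = {"lab_door": (3.2, 1.5), "elevator": (10.7, -2.0), "kitchen": (6.4, 4.8)}
last = None

def on_pose(msg):
    global last
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    nearest = min(WAYPOINTS,
                  key=lambda n: math.hypot(x - WAYPOINTS[n][0], y - WAYPOINTS[n][1]))
    if nearest != last:
        last = nearest
        pub.publish(String(nearest))

if __name__ == "__main__":
    rospy.init_node("whereabouts")
    pub = rospy.Publisher("/nearest_waypoint", String, queue_size=1)
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_pose)
    rospy.spin()
```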

detect_pickup

Current. Uses IMU accelerometer data to decide when the robot has been lifted and when it has been placed back on the ground.

Publications

  • /airborne

Subscriptions

  • /imu
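
A rough sketch of the thresholding this implies; the thresholds and the std_msgs/Bool message type for /airborne are assumptions:

```python
#!/usr/bin/env python
# Hypothetical sketch of detect_pickup: watches the IMU's z acceleration to
# guess when the robot leaves and returns to the ground. Thresholds are assumed.
import rospy
from sensor_msgs.msg import Imu
from std_msgs.msg import Bool

GRAVITY = 9.81
LIFT_DELTA = 2.0   # m/s^2 deviation treated as a pickup jolt (assumption)
airborne = False

def on_imu(msg):
    global airborne
    az = msg.linear_acceleration.z
    if not airborne and abs(az - GRAVITY) > LIFT_DELTA:
        airborne = True      # sharp deviation from 1 g: picked up
        pub.publish(Bool(True))
    elif airborne and abs(az - GRAVITY) < 0.5:
        airborne = False     # back near 1 g; a real node would debounce this over time
        pub.publish(Bool(False))

if __name__ == "__main__":
    rospy.init_node("detect_pickup")
    pub = rospy.Publisher("/airborne", Bool, queue_size=1)
    rospy.Subscriber("/imu", Imu, on_imu)
    rospy.spin()
```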

voice_destination_pub

Current. Takes information from the Alexa webhook and, if it involves going to a destination, publishes the goal pose of the specified destination.

Publications

  • /destination

Subscriptions

  • /voice_intents
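
A sketch of the intent-to-goal mapping this describes, assuming /voice_intents carries the destination name as a std_msgs/String and /destination takes a geometry_msgs/PoseStamped (the message types and coordinates are assumptions):

```python
#!/usr/bin/env python
# Hypothetical sketch of voice_destination_pub: maps a spoken destination name
# from the Alexa webhook to a goal pose on the map.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import PoseStamped

DESTINATIONS = {"lab": (3.2, 1.5), "elevator": (10.7, -2.0)}  # made-up map coordinates

def on_intent(msg):
    name = msg.data.lower()
    if name not in DESTINATIONS:
        rospy.logwarn("Unknown destination: %s", name)
        return
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = "map"
    goal.pose.position.x, goal.pose.position.y = DESTINATIONS[name]
    goal.pose.orientation.w = 1.0  # neutral heading; the real node may choose one
    pub.publish(goal)

if __name__ == "__main__":
    rospy.init_node("voice_destination_pub")
    pub = rospy.Publisher("/destination", PoseStamped, queue_size=1)
    rospy.Subscriber("/voice_intents", String, on_intent)
    rospy.spin()
```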

hand_gesture

Current (only slightly usable in the demo). Pauses navigation for ten seconds if it receives a signal that a hand is in view of the camera.

Publications

  • /destination

Subscriptions

  • /destination

  • /hand_command

go_to_person

Current (only slightly usable in the demo). Spins, searching for a recognized person, then stops.

Publications

  • /destination

  • /cmd_vel

Subscriptions

  • /odom

  • /face_detection

  • /has_package

Defunct All functionality was moved to
