Simplifying_Lidar.md

Author: Aiden Dumas

Using Lidar is fundamental for a robot’s understanding of its environment. It is the basis of many critical mapping and navigation tools such as SLAM and AMCL, but when you are not using a premade algorithm, or are just using Lidar for simpler tasks, some preprocessing can make Lidar readings much more understandable for us as programmers. Here I share a preprocessing simplification I use to make Lidar intuitive to work with in my own algorithms.

It essentially boils down to bundling subranges of the Lidar readings into regions. The idea comes from this [Article](https://github.com/ssscassio/ros-wall-follower-2-wheeled-robot/blob/master/report/Wall-following-algorithm-for-reactive%20autonomous-mobile-robot-with-laser-scanner-sensor.pdf "On Github"). The preprocessing takes the 360 values in the LaserScan message's `ranges` field (360 distance measurements, one per degree around the robot) and gives you a much smaller number of values representing more intuitive concepts such as "forward", "left", and "behind". The processing starts by creating your subscriber for the Lidar message: `scan_sub = rospy.Subscriber('scan', LaserScan, scan_cb)`

Our callback function (named scan_cb, as specified above) will receive the message as follows: `def scan_cb(msg):`

Depending on how often we want to preprocess a message, we can pass the 360 degree values to our preprocessing function: `ranges = preprocessor(msg.ranges)`
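
Putting those pieces together, a minimal node might look like the following sketch (the node name is an illustrative assumption, and `preprocessor` is the function defined below):

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan

    def scan_cb(msg):
        # msg.ranges holds 360 distance readings, one per degree
        regions = preprocessor(msg.ranges)   # preprocessor is defined below
        # ... use the 20 averaged regions here ...

    rospy.init_node('lidar_simplifier')      # node name chosen for illustration
    scan_sub = rospy.Subscriber('scan', LaserScan, scan_cb)
    rospy.spin()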

In my practice, calling this preprocessing step every time a message was received by the subscriber didn't cause any noticeable slowdown in my programs. The processor function itself is defined as follows:

    from math import isnan

    def preprocessor(all_ranges):
        # Split the 360 readings into 20 regions of 18 degrees each
        ranges = [float('inf')] * 20
        ranges_index = 0
        index = -9      # start half a region early so the first region is centered on "front"
        sum = 0
        batch = 0
        actual = 0
        for i in range(360):
            curr = all_ranges[index]
            if curr != float('inf') and not isnan(curr):
                sum += curr
                actual += 1
            batch += 1
            index += 1
            if batch == 18:
                if actual != 0:
                    # average of the valid readings in this region
                    ranges[ranges_index] = sum / actual
                ranges_index += 1
                sum = 0
                batch = 0
                actual = 0
        return ranges
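
As a quick illustration of how the simplified array might be used in the callback (the region indices and the 0.5 m threshold here are assumptions for illustration, not from the original project):

    def scan_cb(msg):
        regions = preprocessor(msg.ranges)
        front = regions[0]    # region centered on 0 degrees
        left = regions[5]     # centered on 90 degrees (the scan runs counterclockwise)
        back = regions[10]    # centered on 180 degrees
        right = regions[15]   # centered on 270 degrees
        if front < 0.5:       # example threshold in meters
            rospy.loginfo("Obstacle ahead at %.2f m", front)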

Essentially all the preprocessor does is take the average of the valid values in each region of the 360 degree scan and put it into the corresponding spot in the new abbreviated/averaged ranges array, which serves as a substitute for the bulkier 360-value msg.ranges (passed in as all_ranges). Note that in the first line of the function we choose how many regions to divide the Lidar data into; here 20 is chosen. I found that regions of this size balance focusing on their own area while not being too blind to what lies at a region's periphery. From that 20 we compute 360/20 = 18, which is the batch size (the number of data points per region), and 0 - 18/2 = -9, the offset at which we start indexing so that the first region is centered on the "front" of the robot (this makes sense looking at the figure in the linked PDF).

With this base code, modifications can be made to suit more specific purposes; the averaging of each region could be replaced by finding its max or min value, for example. In one of my projects I was navigating a maze where intersections were always at right angles, so I only needed four regions (forward, backward, left, right). For this I used the following modification of the code, with a helper function listed first, again using 18 as the number of values sampled per region. (Note for reference that in the original msg.ranges, Front is at index 0, Back at 180, Left at 90, and Right at 270, implying the Lidar data is read counterclockwise.)

    def averager(direction_ranges):
        # Average the valid (finite, non-NaN) readings in one direction's slice
        real = 0
        sum = 0
        for i in direction_ranges:
            if i != float('inf') and not isnan(i):
                real += 1
                sum += i
        return float('inf') if real == 0 else sum / real

    def cardinal_directions(ranges):
        directions = {"Front": float('inf'), "Back": float('inf'),
                      "Left": float('inf'), "Right": float('inf')}
        plus_minus = 9   # 9 readings on either side of each cardinal direction (18 total)
        Front = ranges[-plus_minus:] + ranges[:plus_minus]
        Back = ranges[180 - plus_minus:180 + plus_minus]
        Left = ranges[90 - plus_minus:90 + plus_minus]
        Right = ranges[270 - plus_minus:270 + plus_minus]
        directions_before_processed = {"Front": Front, "Back": Back, "Left": Left, "Right": Right}
        for direction, data in directions_before_processed.items():
            directions[direction] = averager(data)
        return directions
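
As a usage sketch for the four-region version (again, the threshold and the choice of behavior are illustrative assumptions, not part of the original project):

    def scan_cb(msg):
        directions = cardinal_directions(msg.ranges)
        # e.g. {"Front": 1.2, "Back": inf, "Left": 0.4, "Right": 2.0}
        if directions["Front"] < 0.5:   # example threshold in meters
            # turn toward whichever side currently has more open space
            open_side = "Left" if directions["Left"] > directions["Right"] else "Right"
            rospy.loginfo("Wall ahead, turning %s", open_side)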

Hopefully this code, or at least the idea of regional division, helps simplify your own coding with Lidar.
