
Campus Rover Live Map

The objective is to implement a 2D map in the CR_Web application that depicts:

  • The floorplan Campus Rover is currently using to navigate

  • Campus Rover's "real-time" location as it navigates

  • The goal destination, toward which Campus Rover is navigating

First Iteration

Our first implementation was based on a tutorial that relied on a RosBridge websocket connection between the robot and web client, and had the following dependencies on 3rd-party JavaScript libraries: 2Djs, RosLibJs, EaselJs, and EventEmitter2.

This initial implementation was successful, but presented several issues:

  • Building upon 3rd party dependencies risked future breaks and maintenance.

  • As discussed elsewhere, it entailed "ROS-like" programming in JavaScript instead of Python.

  • The implementation described in the tutorial generates a 2D map image from an amcl occupancy grid. This is unnecessary for our purposes: Campus Rover uses a pre-generated floorplan image, so re-generating it is redundant and computationally wasteful.

  • Generating the map and loading the four JavaScript libraries mentioned above on every page load created noticeable performance issues, leaving little room for additional page content.

Current Iteration

The current iteration resolves the issues identified in the first iteration and enables additional map features:

  • Instead of generating a map image from an occupancy grid, an existing floorplan image file is rendered.

  • Instead of using 3rd-party JavaScript libraries, the map is rendered using HTML5's Canvas element.

  • Instead of writing "ROS-like" JavaScript in the front end as before, all ROS code is implemented with regular ROS Python programming in the Flask layer of the application.

  • Unlike the initial iteration, the current map includes the option to "track" the robot as it traverses the map, automatically scrolling to keep up with the robot as it moves.

  • The current iteration now displays the robot's goal location, too.

Next Steps

Support for:

  • Multiple floorplans/maps

  • Switching between different floorplans

  • Adjusting the size and scale of a map (for zooming in/out, resizing, etc.)

Follow-up Iteration

Brad Nesbitt 11/18/2018

Overview of the LiveMap class

After several preceding iterations of "live" 2D maps, it became clear that a single abstraction for such mapping would be appropriate. An instance of the LiveMap class maps waypoints, the robot's current pose, and its goal pose onto a 2D floorplan for display within a web application.

The static directory in rover_app now contains map_files, which holds the local files needed to generate a given map, including a JSON file with parameters specific to each map. For example:

--

all_maps.json

"Gerstenzang Basement": {
    "files": {
        "path": "rover_app/static/map_files/basement/",
        "png_file": {
            "file_name": "basement.png",
            "cm_per_pixel": 1
        },
        "waypoint_file": "basement_waypoints.json"
    },
    "yaml_parameters": {
        "resolution":  0.01,
        "origin": [0.0, 0.0, 0.0]
    }
}

The JSON object for a map includes references to local files comprising the map's floorplan .png file, a JSON file of the map's waypoint data, and a copy of the yaml parameters used for amcl navigation of the .png-based map.
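
Reading these parameters back in Python is straightforward. The following is a minimal sketch; the exact location of all_maps.json within map_files is an assumption for illustration:

    import json

    # Load the catalog of maps (the file's exact path is an assumption).
    with open("rover_app/static/map_files/all_maps.json") as f:
        all_maps = json.load(f)

    basement = all_maps["Gerstenzang Basement"]
    png_name = basement["files"]["png_file"]["file_name"]             # "basement.png"
    png_cm_per_pixel = basement["files"]["png_file"]["cm_per_pixel"]  # 1
    resolution = basement["yaml_parameters"]["resolution"]            # 0.01 m/pixel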

--

live_map.py

Initializing a LiveMap object takes two required parameters and one optional parameter:

  1. The name (a string) corresponding to a map in all_maps.json, such as "Gerstenzang Basement"

  2. The desired centimeters-per-pixel ratio to be used when displaying the map.

  3. Optionally, the robot's diameter in centimeters, which defaults to the TurtleBot2's spec of 35.4 cm.

For example, live_map = LiveMap("Gerstenzang Basement", 2) initializes a LiveMap object of the Gerstenzang Basement floorplan with a 2 cm/pixel scale. The object maintains the following abstraction representing the state of the map, including the robot's current place within it and its goal destination:

    self.map_state = {
        "map_parameters": {
            "map_name": map_name_string,
            "files": {
                "path": path,
                "png": {
                    "file_name": map_json["files"]["png_file"]["file_name"],
                    "cm_per_pixel": map_json["files"]["png_file"]["cm_per_pixel"],
                    "pixel_width": png_height,
                    "pixel_height": png_width,
                },
                "yaml": map_json["yaml_parameters"]
            },
            "bot_radius": bot_cm_diameter/2,
            "cm_per_pixel": scale_cm_per_pixel, # Desired scale
            "waypoints": waypoints,
            "current_pose": {},
            "goal_pose": {}
        },
        "scaled_pixel_values": {
            "bot_radius": (bot_cm_diameter / 2) * png_cm_per_pixel / scale_cm_per_pixel,
            "cm_per_pixel": scale_cm_per_pixel,
            "png_pixel_width": png_width * png_cm_per_pixel / scale_cm_per_pixel,
            "png_pixel_height": png_height * png_cm_per_pixel / scale_cm_per_pixel,
            "current_pose": {},
            "goal_pose": {}
        },
        "subscribers": {
            "current_pose_sub": rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, self.update_current_pose),
            "goal_pose_sub": rospy.Subscriber('/move_base/current_goal', PoseStamped, self.update_goal_pose),
            "rviz_goal_pose_sub": rospy.Subscriber('/move_base_simple/goal', PoseStamped, self.update_goal_pose)
        }
    }
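
To make the scaling concrete: if the basement png were 1000 × 800 pixels at 1 cm per pixel (the dimensions here are hypothetical), a desired scale of 2 cm per pixel would display it at 500 × 400 pixels, and the default 17.7 cm bot radius would become 17.7 × 1 / 2 ≈ 8.85 display pixels.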

Note that a nested dictionary of ROS subscribers continually updates the scaled pixel value equivalents of the current and goal poses.
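
A pose callback along the following lines would perform that conversion. This is a hedged sketch rather than the actual live_map.py code: it assumes amcl reports positions in meters in the map frame, and it ignores the image y-axis inversion and any nonzero map origin that a full implementation would account for:

    def update_current_pose(self, msg):
        # msg is a geometry_msgs/PoseWithCovarianceStamped from /amcl_pose.
        # amcl positions are in meters; convert to centimeters, then to
        # display pixels at the desired cm-per-pixel scale.
        x_cm = msg.pose.pose.position.x * 100.0
        y_cm = msg.pose.pose.position.y * 100.0
        scale = self.map_state["scaled_pixel_values"]["cm_per_pixel"]
        # Store plain dicts so map_state stays JSON-serializable.
        self.map_state["map_parameters"]["current_pose"] = {"x": x_cm, "y": y_cm}
        self.map_state["scaled_pixel_values"]["current_pose"] = {
            "x": x_cm / scale,
            "y": y_cm / scale
        }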

Implementing 2D mapping in this way has two main advantages:

  1. The LiveMap class allows the initialization of multiple, differing maps with custom scales in the web application. For instance, a small "thumbnail" map could be displayed on one page, while a larger map could be displayed elsewhere. This also makes switching between maps possible.

  2. Representing a map_state as a Python dictionary (shown above) makes it easy to send the data needed to work with a live 2D map as JSON. For instance, a map route or endpoint could be implemented to return a map_state JSON object, which could in turn be used to render or update a map in the UI; a minimal sketch follows below.
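
As a sketch of that idea (the route name, and stripping the subscriber handles before serializing, are assumptions for illustration):

    from flask import Flask, jsonify

    from live_map import LiveMap  # assumed module path

    app = Flask(__name__)
    live_map = LiveMap("Gerstenzang Basement", 2)

    @app.route("/map_state")
    def map_state():
        # rospy.Subscriber objects aren't JSON-serializable, so send only
        # the data portions of the map state.
        state = {key: value for key, value in live_map.map_state.items()
                 if key != "subscribers"}
        return jsonify(state)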

Brad Nesbitt & Huaigu Lin 11/10/2018
