How do I deploy a PyTorch model on our cluster?


Deploying a Pretrained Pytorch Model in Ubuntu 18.04 Virtual Environment

By Adam Ring

Using a pre-trained deep learning model from a framework such as PyTorch has myriad applications in robotics, from computer vision to speech recognition and many places in between. Often you want to train a model on a system with more powerful hardware and then deploy it on a less powerful one. For this task, it is extremely useful to be able to transfer the weights of your trained model to another system, such as a virtual machine running Ubuntu 18.04. The methods described here will work on any machine with PyTorch installed.

Note

Mixing versions of PyTorch between training and deployment is strongly discouraged. If you train your model on PyTorch 1.8 and then try to load it with PyTorch 1.4, you may encounter errors due to differences in the modules between versions. For this reason, load your PyTorch model using the same version that it was trained on.
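
Before saving or loading, you can check the installed version on each machine (torch.__version__ is a standard PyTorch attribute):

```python
import torch
print(torch.__version__)  # should match between the training and deployment machines
```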

Saving and loading a trained model

Let's assume that you have your model fully trained and loaded with all of the necessary weights.

```python
model = MyModel()
model.train()  # puts the model in training mode
# ... run your training loop here ...
```

For instructions on how to train a machine learning model, see the section on training a model in the lab notebook. There are multiple ways to save a trained model; this tutorial covers a few of them.

Saving the state_dict

Saving the state_dict is the recommended way to store the weights of your model; however, it requires that the model class be defined in the environment where you load it (more on this below). Once you have your trained model, specify a PATH to the file in which you want to save it. This is where you name the file used to store your model.

PATH = "path/to/directory/my_model_state_dict.pt"

or

PATH = "path/to/directory/my_model_state_dict.pth"

You can either specify that the state_dict be saved using .pt or .pth format.

Then, to save the model to that path, call:

```python
torch.save(model.state_dict(), PATH)
```

Loading the state_dict

Download my_model_state_dict.pt (or .pth) into the environment in which you plan to deploy the model, and note the path where the state dict is placed. To load the model weights from the state_dict file, first initialize an untrained instance of your model.

```python
loaded_model = MyModel()
```

Keep in mind that this step requires you to have your model architecture defined in the environment in which you are deploying your model.

Next, you can simply load your model weights from the state dict using this line of code.

```python
loaded_model.load_state_dict(torch.load("path/to/state/dict/my_model_state_dict.pt"))  # or .pth
```

The trained weights are now loaded into the previously untrained model, and you can use it as if it had been trained in this environment.
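
If the model was trained on a GPU machine and is being deployed on a CPU-only system (such as an Ubuntu 18.04 virtual machine), torch.load accepts a map_location argument that remaps the saved tensors onto the CPU. Here is a minimal end-to-end sketch, assuming a hypothetical MyModel class defined on both machines:

```python
import torch

# On the training machine:
model = MyModel()
# ... training loop ...
torch.save(model.state_dict(), "my_model_state_dict.pt")

# On the deployment machine (CPU-only):
loaded_model = MyModel()  # the architecture must be defined here as well
state_dict = torch.load("my_model_state_dict.pt", map_location=torch.device("cpu"))
loaded_model.load_state_dict(state_dict)
loaded_model.eval()  # switch to inference mode before making predictions
```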

Saving and loading the model using TorchScript

TorchScript is a framework built into PyTorch for deploying models in environments where the model class is not defined. One way to use it is tracing: you save a traced version of your model to a file, then load that file directly in the deployment environment.

Tracing records the operations performed on an example input tensor as it is run through your model. Note that tracing does not record control flow: if your model contains conditionals such as if statements, or depends on external state, only the path taken by the example input is captured. The traced model also only works on tensor inputs.
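
As a sketch of this pitfall (Gate is a made-up module with a data-dependent branch):

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # The branch depends on the input's values, so tracing bakes in
        # whichever path the dummy input happens to take.
        if x.sum() > 0:
            return x * 2
        return x * -1

traced = torch.jit.trace(Gate(), torch.ones(3))  # dummy input takes the positive branch
print(traced(torch.ones(3)))   # tensor([2., 2., 2.])   -- correct
print(traced(-torch.ones(3)))  # tensor([-2., -2., -2.]) -- wrong: should be [1., 1., 1.]
```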

Saving the trace of a model

In order to trace your trained model and save the trace to a file, you may run the following lines of code.

PATH = "path/to/traced/model/traced_model.pt/pth" dummy_input = torch.ones(typical_input_size, dtype=dype_of_typical_input) traced_model = torch.jit.trace(model, dummy_input)

torch.jit.save(traced_model, PATH)

The dummy_input can simply be a bare tensor of the same size as a typical input for your model. You may also use one of your training or test inputs. The content of the dummy input does not matter, as long as the size is correct.
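
For instance, for an image classifier that expects batches of 224×224 RGB images (a hypothetical shape; adjust it to your model):

```python
dummy_input = torch.ones(1, 3, 224, 224, dtype=torch.float32)
```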

Loading the trace of a model

In order to load the trace of a model, you must download the traced model .pt or .pth file into your deployment environment and note the path to it.

All you need to do to load a traced model for deployment in PyTorch is use the following line of code.

```python
loaded_model = torch.jit.load("path/to/traced/model/traced_model.pt")  # or .pth
```

Keep in mind that the traced version of your model will only work on torch tensors, and will not mimic the behavior of any conditional statements that you may have in your model.
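
Once loaded, the traced model is called like any other module. A usage sketch, reusing the hypothetical image-classifier input shape from above:

```python
input_tensor = torch.ones(1, 3, 224, 224)
with torch.no_grad():  # no gradients needed for inference
    output = loaded_model(input_tensor)
```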

Data Annotation

Please see the full data-annotation tutorial in the repo:

https://github.com/campusrover/Robotics_Computer_Vision/tree/master/utils/labelImg