
kinectv2.md

### Intro

In Kinect.md, the previous generations discussed the prospects and limitations of using a Kinect camera. We attempted to use the newer Kinect camera v2, which was released in 2014.

figure1

Thus, we used the libfreenect2 package to download all the appropriate files and get the raw image output on our Windows machine. The following link includes instructions on how to install everything properly on a Linux OS.

https://github.com/OpenKinect/libfreenect2

### Issues

We ran into a lot of issues while trying to install the drivers, and it took about two weeks just to get the libfreenect2 drivers to work. The driver supports RGB image transfer, IR and depth image transfer, and registration of RGB and depth images. Here are some essential debugging steps, and recommendations if you have the ideal hardware setup:

  • Even though it is listed as optional, download OpenCL; choose the option under "Other" that corresponds to Ubuntu 18.04+.

  • If your PC has an Nvidia GPU, even better. I think that is the main reason I got libfreenect2 to work on my laptop: the GPU was powerful enough to support depth processing (which was one of the main issues).

  • Be sure to install CUDA for your Nvidia GPU

Please look through this for common errors:

https://github.com/OpenKinect/libfreenect2/wiki/Troubleshooting

Although we got libfreenect2 to work and got the classifier model to run locally, we were unable to connect the two together. In other words, while we could take already-saved PNGs from a Kaggle database (the one our pre-trained model used) and have the ML model classify those gestures, we could not feed it the live, raw depth images from the Kinect camera. We kept running into errors, especially an import error saying the freenect module could not be found. I think it is a solvable bug given time to explore it, so I believe it should continue to be looked at.
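If a future group picks this back up, a minimal sketch of grabbing live frames through the pylibfreenect2 Python bindings (rather than the old Kinect v1 freenect module) might look like the following. This is untested on our setup, so treat the class and method names as assumptions to be checked against the pylibfreenect2 examples:

    # Hedged sketch: pull one color + depth frame pair from a Kinect v2 through
    # the pylibfreenect2 bindings to libfreenect2 (the old `freenect` module is
    # for the Kinect v1). Untested on our hardware.
    from pylibfreenect2 import Freenect2, SyncMultiFrameListener, FrameType

    fn = Freenect2()
    if fn.enumerateDevices() == 0:
        raise RuntimeError("No Kinect v2 found - check USB 3.0 and udev rules")

    device = fn.openDefaultDevice()
    listener = SyncMultiFrameListener(FrameType.Color | FrameType.Ir | FrameType.Depth)
    device.setColorFrameListener(listener)
    device.setIrAndDepthFrameListener(listener)
    device.start()

    frames = listener.waitForNewFrame()
    depth = frames["depth"].asarray()   # 424x512 float32 image, in millimeters
    color = frames["color"].asarray()   # 1080x1920x4 BGRX image
    print("depth frame:", depth.shape, "color frame:", color.shape)
    listener.release(frames)

    device.stop()
    device.close()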

However, fair warning: the Kinect is also difficult to mount on the campus rover, so be aware of all of its drawbacks before choosing it as the primary hardware.

### Database

https://www.kaggle.com/gti-upm/leapgestrecog/data

### Machine Learning model

https://github.com/filipefborba/HandRecognition/blob/master/project3/project3.ipynb

  • What this model predicts: Thumb Down, Palm (H), L, Fist (H), Fist (V), Thumbs Up, Index, OK, Palm (V), C
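To help whoever wires the pieces together next, here is a rough sketch of running the notebook's trained CNN on one of the saved Kaggle PNGs. The handrecognition_model.h5 filename, the 120x320 grayscale input size, and the class order are assumptions (check the notebook's preprocessing and training cells), not confirmed details of this repository:

    # Hedged sketch: classify a saved leapGestRecog PNG with the notebook's CNN.
    # ASSUMPTIONS: model exported as handrecognition_model.h5, grayscale input
    # resized to 120x320, and the class order below (it is a guess).
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    IMG_W, IMG_H = 320, 120
    CLASSES = ["Palm (H)", "L", "Fist (H)", "Fist (V)", "Thumbs Up",
               "Index", "OK", "Palm (V)", "C", "Thumb Down"]  # order is a guess

    model = load_model("handrecognition_model.h5")
    # Example file name below is illustrative, not a specific file from the dataset.
    img = cv2.imread("frame_00_01_0001.png", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_W, IMG_H)).astype("float32") / 255.0
    probs = model.predict(img.reshape(1, IMG_H, IMG_W, 1))[0]
    print("predicted gesture:", CLASSES[int(np.argmax(probs))])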

### GITHUB REPO

https://github.com/campusrover/gesture_recognition

leap-motion.md

### Intro

As a very last-minute and spontaneous approach, we decided to use a Leap Motion device. The Leap Motion uses the Orion SDK, two IR cameras, and three infrared LEDs, which generate a roughly hemispherical area in which motions are tracked.

Its smaller observation area and higher resolution differentiate it from the Kinect (which is geared more toward whole-body tracking in a large space). This localized apparatus makes it easier to look for just a hand and track its movements.

local-camera.md

### Hand Gesture Recognition

After reaching the dead end in the previous approach, and inspired by several successful projects (on GitHub and other personal tech blogs), I implemented an explicit-feature-driven hand recognition algorithm. It relies on background subtraction to "extract" hands (giving grayscale images), from which it computes features to recognize the number of fingers. It worked pretty well as long as the camera and the background were ABSOLUTELY stationary, but that is not the case in our project: the camera is mounted on the robot and the robot keeps moving (meaning the background keeps changing). Code can be found here.
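For reference, the core of that explicit-feature approach looks roughly like the sketch below: learn a background model while the scene is empty, subtract it to segment the hand, then count deep convexity defects as the valleys between fingers. Function names and thresholds are illustrative, not the exact values used here:

    # Hedged sketch of background-subtraction finger counting with OpenCV.
    import cv2

    bg = None  # running-average background model (learned while no hand is in view)

    def calibrate(frame, weight=0.5):
        """Feed ~30 empty-scene frames through this to learn the background."""
        global bg
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (7, 7), 0)
        if bg is None:
            bg = gray.astype("float")
        else:
            cv2.accumulateWeighted(gray.astype("float"), bg, weight)

    def count_fingers(frame, thresh_val=25):
        """Segment the hand against the learned background and count fingers."""
        if bg is None:
            return None  # calibrate first
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (7, 7), 0)
        diff = cv2.absdiff(bg.astype("uint8"), gray)
        mask = cv2.threshold(diff, thresh_val, 255, cv2.THRESH_BINARY)[1]
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)        # assume the biggest blob is the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0
        # Deep convexity defects approximate the valleys between extended fingers.
        valleys = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
        return valleys + 1

As noted above, the whole scheme collapses as soon as the background itself moves, which is why it did not survive on the moving robot.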

### References

Install OpenNI2 if possible

  • Make sure you build in the right location

  • The setup is relatively simple and just involves downloading the package for the appropriate OS; in this case, Linux (x86 for a 32-bit Ubuntu system).

### Steps to downloading Leap Motion and getting it started

### Link if needed

  • https://www.leapmotion.com/setup/linux

1. Download the SDK from https://www.leapmotion.com/setup/linux; extract the package and you will find two DEB files that can be installed on Ubuntu.

2. Open a terminal at the extracted location and install the DEB file using the following command (for 64-bit PCs):

   sudo dpkg --install Leap-*-x64.deb

   If you are installing it on a 32-bit PC, use this command instead:

   sudo dpkg --install Leap-*-x86.deb

3. Plug in the Leap Motion and type dmesg in a terminal to see if it is detected.

4. Clone the ROS drivers:

   git clone https://github.com/ros-drivers/leap_motion

5. Edit .bashrc:

   export LEAP_SDK=$LEAP_SDK:$HOME/LeapSDK
   export PYTHONPATH=$PYTHONPATH:$HOME/LeapSDK/lib:$HOME/LeapSDK/lib/x64

6. Save .bashrc and restart the terminal, then run:

   sudo cp $LEAP_SDK/lib/x86/libLeap.so /usr/local/lib
   sudo ldconfig
   catkin_make install --pkg leap_motion

7. To test, run:

   sudo leapd
   roslaunch leap_motion sensor_sender.launch
   rostopic list
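Once the DEB is installed and the PYTHONPATH exports above are in place, a quick Python sanity check of the SDK is possible. The API names below are from the v2 Leap SDK as I remember it, so compare against the Sample.py shipped inside LeapSDK before trusting this:

    # Hedged sanity check for the Leap SDK Python bindings (v2-era API).
    # Requires `sudo leapd` to be running and LeapSDK/lib on PYTHONPATH.
    import time
    import Leap

    controller = Leap.Controller()
    time.sleep(1)  # give the controller a moment to connect to leapd
    frame = controller.frame()
    print("hands seen: %d" % len(frame.hands))
    for hand in frame.hands:
        print("palm position:", hand.palm_position)
        print("palm normal:  ", hand.palm_normal)
        print("direction:    ", hand.direction)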

Once we had the Leap Motion installed, we were able to simulate it in RViz. We decided to program our own motion controls based on angular and linear parameters, looking at the direction and normal vectors that the Leap Motion senses (a rough sketch of this mapping follows the screenshots below):

    image

    This is what the Leap Motion sees (the raw info):

    rviz hand
    terminal screenshot

In the second image above, the x, y, and z parameters indicate where the Leap Motion detects a hand (pictured in the first photo).
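As a rough illustration of that mapping, the sketch below turns the palm's direction and normal vectors into a Twist. It assumes the ros-drivers/leap_motion package publishes leap_motion/leapros messages on /leapmotion/data with direction, normal, and palmpos fields (verify with rosmsg show leap_motion/leapros); the thresholds are guesses that would need tuning:

    #!/usr/bin/env python
    # Hedged sketch: map Leap Motion palm orientation to cmd_vel.
    import rospy
    from geometry_msgs.msg import Twist
    from leap_motion.msg import leapros

    cmd_pub = None

    def palm_callback(msg):
        twist = Twist()
        # Pitch the palm forward/back to drive, roll it left/right to turn.
        if msg.direction.y < -0.3:
            twist.linear.x = 0.3
        elif msg.direction.y > 0.3:
            twist.linear.x = -0.3
        if msg.normal.x > 0.4:
            twist.angular.z = 0.8    # positive angular.z = rotate left
        elif msg.normal.x < -0.4:
            twist.angular.z = -0.8   # negative angular.z = rotate right
        cmd_pub.publish(twist)

    if __name__ == '__main__':
        rospy.init_node('leap_teleop_sketch')
        cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/leapmotion/data', leapros, palm_callback)
        rospy.spin()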

    This is how the hand gestures looked relative to the robot's motion:

### Stationary

    hand

### Forward

    hand
    rviz
    rviz

### Backward

    hand
    rviz
    rviz

### Left Rotation

    hand
    rviz
    rviz

### Right Rotation

    hand
    rviz
    rviz

### RQT Graph

    node graph

### Conclusion

So, we got the Leap Motion to work successfully and were able to have the robot follow our two designated motions. We could have done many more if we had discovered this solution earlier. One important thing to note is that, at the moment, we are not able to mount the Leap Motion onto the physical robot, because the Leap Motion SDK is not supported on the Raspberry Pi (it only supports amd64/x86). If we are able to obtain an Atomic Pi, this project could be explored further. Leap Motion is a very powerful and accurate piece of technology that was much easier to work with than the Kinect, but I advise still exploring both options.

motion axes
leap laptop setup
Hand Tracking And Gesture Detection

### Dependencies Overview

    • OpenCV

    • ROS Kinetic (Python 2.7 is required)

    • A Camera connected to your device

### How to Run

    1. Copy this package into your workspace and run catkin_make.

2. Simply run roslaunch gesture_teleop teleop.launch. A window showing real-time video from your laptop webcam will open. Place your hand into the region of interest (the green box) and your robot will take actions based on the number of fingers you show:
   1. Two fingers: Drive forward
   2. Three fingers: Turn left
   3. Four fingers: Turn right
   4. Other: Stop

    f5
    f4
    f3
    f2

### Brief Explanation of the package

    This package contains two nodes.

1. detect.py: Recognizes the number of fingers from the webcam and publishes a topic of type String stating the number of fingers. I won't get into the details of the hand-gesture recognition algorithm; basically, it extracts the hand in the region of interest by background subtraction and computes features to recognize the number of fingers.

2. teleop.py: Subscribes to detect.py's topic and takes actions based on the number of fingers seen.

### Later Plan

1. Use the Kinect on Mutant instead of the local webcam.

2. Furthermore, use the depth camera to extract hands and get better-quality images.

3. Incorporate skeleton tracking into this package to better localize hands (I am using a fixed region of interest to localize hands, which is a bit dumb).

here
OpenCV Python hand gesture recognition

    leap_motion.md

### Intro

As a very last-minute and spontaneous approach, we decided to use a Leap Motion device. The Leap Motion uses the Orion SDK, two IR cameras, and three infrared LEDs, which generate a roughly hemispherical area in which motions are tracked.

Its smaller observation area and higher resolution differentiate it from the Kinect (which is geared more toward whole-body tracking in a large space). This localized apparatus makes it easier to look for just a hand and track its movements.

The setup is relatively simple and just involves downloading the package for the appropriate OS; in this case, Linux (x86 for a 32-bit Ubuntu system).

### Steps to downloading Leap Motion and getting it started

1. Download the SDK from https://www.leapmotion.com/setup/linux; extract the package and you will find two DEB files that can be installed on Ubuntu.

2. Open a terminal at the extracted location and install the DEB file using the following command (for 64-bit PCs):

   $ sudo dpkg --install Leap-*-x64.deb

   If you are installing it on a 32-bit PC, use this command instead:

   $ sudo dpkg --install Leap-*-x86.deb

Once we had the Leap Motion installed, we were able to simulate it in RViz. We decided to program our own motion controls based on angular and linear parameters (looking at the direction and normal vectors that the Leap Motion senses):

    This is what the Leap Motion sees (the raw info):

In the second image above, the x, y, and z parameters indicate where the Leap Motion detects a hand (pictured in the first photo).

    This is how the hand gestures looked relative to the robot's motion:

### Stationary

### Forward

### Backward

### Left Rotation

### Right Rotation

### Conclusion

So, we got the Leap Motion to work successfully and were able to have the robot follow our two designated motions. We could have done many more if we had discovered this solution earlier. One important thing to note is that, at the moment, we are not able to mount the Leap Motion onto the physical robot, because the Leap Motion SDK is not supported on the Raspberry Pi (it only supports amd64/x86). If we are able to obtain an Atomic Pi, this project could be explored further. Leap Motion is a very powerful and accurate piece of technology that was much easier to work with than the Kinect, but I advise still exploring both options.

### GITHUB REPO

    https://github.com/campusrover/gesture_recognition


    kinect.md

This still seems like the best approach (if it ever worked). The Kinect camera was developed in the first place for human-body gesture recognition, and I could not think of any reason not to use it other than wanting to reinvent the wheel. However, since the library interfacing the Kinect with ROS never worked and has little documentation (meaning it is very difficult to debug), I spent many sleepless nights and still wasn't able to get it working. Nevertheless, I still strongly recommend that the next generation give it a try.

ROS BY EXAMPLE Volume 1 (Indigo): Chapter 10.9 contains detailed instructions on how to get the openni_tracker package working, but make sure you install the required drivers listed in Chapter 10.3.1 before you start.
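If you do get openni_tracker running, it publishes the tracked skeleton as TF frames, so reading a hand position is just a TF lookup. The frame names below are from memory and depend on the calibrated user ID, so verify them (for example with rosrun tf view_frames) before relying on them:

    # Hedged sketch: read the right-hand position that openni_tracker publishes as TF.
    # Frame names ("openni_depth_frame", "right_hand_1") are assumptions to verify.
    import rospy
    import tf

    if __name__ == '__main__':
        rospy.init_node('hand_from_skeleton_sketch')
        listener = tf.TransformListener()
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            try:
                trans, rot = listener.lookupTransform('openni_depth_frame',
                                                      'right_hand_1',
                                                      rospy.Time(0))
                rospy.loginfo("right hand at x=%.2f y=%.2f z=%.2f" % tuple(trans))
            except (tf.LookupException, tf.ConnectivityException,
                    tf.ExtrapolationException):
                pass  # user not calibrated yet / frame not being published
            rate.sleep()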

    demo.md

### Dependencies Overview

    • OpenCV

    • ROS Kinetic (Python 2.7 is required)

    • A Camera connected to your device

### How to Run

    1. Copy this package into your workspace and run catkin_make.

2. Simply run roslaunch gesture_teleop teleop.launch. A window showing real-time video from your laptop webcam will open. Place your hand into the region of interest (the green box) and your robot will take actions based on the number of fingers you show.

### Brief Explanation of the package

    This package contains two nodes.

1. detect.py: Recognizes the number of fingers from the webcam and publishes a topic of type String stating the number of fingers. I won't get into the details of the hand-gesture recognition algorithm; basically, it extracts the hand in the region of interest by background subtraction and computes features to recognize the number of fingers.

2. teleop.py: Subscribes to detect.py's topic and takes actions based on the number of fingers seen.

### Later Plan

1. Use the Kinect on Mutant instead of the local webcam.

2. Furthermore, use the depth camera to extract hands and get better-quality images (a minimal sketch of this idea follows this list).

3. Incorporate skeleton tracking into this package to better localize hands (I am using a fixed region of interest to localize hands, which is a bit dumb).
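For item 2, a minimal sketch of the depth-based extraction idea: keep only the pixels that fall within a near band in front of the camera, where the hand is expected to be. The millimeter units and the 300-700 mm band are assumptions, not measured values:

    # Hedged sketch for "use the depth camera to extract hands".
    import numpy as np

    def extract_hand_mask(depth_mm, near=300, far=700):
        """Binary mask of pixels between `near` and `far` millimeters."""
        return ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

    # Example on synthetic data: a "hand" blob at ~500 mm in front of a 2 m wall.
    depth = np.full((424, 512), 2000, dtype=np.uint16)
    depth[150:250, 200:300] = 500
    print("hand pixels found:", int((extract_hand_mask(depth) > 0).sum()))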

    ssd.md

### Deep learning

The final approach I took was deep learning. I used a Single Shot multibox Detector (SSD) model to recognize hands. The model was trained on the EgoHands dataset, which contains 4,800 hand-labeled JPEG images (720x1280 px). Code can be found here.

Raw images from the robot's camera are sent to another local device, where the SSD model is applied to recognize hands within those images. A filtering step is also applied so that only hands close enough to the camera are recognized.

After processing the raw images from the robot, a message (hands recognized or not) is sent to the State Manager. The robot takes the corresponding action based on the messages it receives.
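As an illustration of that filtering step, the sketch below keeps only confident detections whose bounding box covers a large enough fraction of the frame (a rough proxy for closeness) and publishes the result. The topic name, thresholds, and box format are assumptions rather than the project's actual values:

    #!/usr/bin/env python
    # Hedged sketch of the "close enough" filter. Assumes normalized
    # [ymin, xmin, ymax, xmax] boxes with confidence scores, as produced by the
    # victordibia/handtracking detector; /hand_state and the thresholds are made up.
    import rospy
    from std_msgs.msg import String

    MIN_SCORE = 0.5   # discard weak detections
    MIN_AREA = 0.08   # fraction of the frame a nearby hand should cover (guess)

    def hand_close_enough(boxes, scores):
        """True if any detection is both confident and large (i.e. close)."""
        for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
            if score >= MIN_SCORE and (ymax - ymin) * (xmax - xmin) >= MIN_AREA:
                return True
        return False

    if __name__ == '__main__':
        rospy.init_node('hand_filter_sketch')
        pub = rospy.Publisher('/hand_state', String, queue_size=1)
        rospy.sleep(1.0)  # give the publisher time to connect
        # In the real pipeline, boxes/scores come from running the SSD model on
        # each camera frame; here we only exercise the filter and the message.
        boxes, scores = [[0.2, 0.3, 0.7, 0.6]], [0.9]
        pub.publish(String('hand' if hand_close_enough(boxes, scores) else 'no_hand'))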

### References

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single Shot MultiBox Detector. CoRR, abs/1512.02325, 2015.

Victor Dibia. Real-time Hand-Detection using Neural Networks (SSD) on Tensorflow. GitHub repository, 2017. https://github.com/victordibia/handtracking

pre-trained SSD
here

    Gesture Recognition

This semester I mainly worked on the hand gesture recognition feature. Honestly, I spent most of the time learning new concepts and techniques because of my inexperience in this area. I tried several approaches (most of them failed), and I will briefly review them in terms of their robustness and reliability.

  • Two fingers: Drive forward

  • Three fingers: Turn left

  • Four fingers: Turn right

  • Other: Stop

    f5
    f4
    f3
    f2
# Main teleop loop: `fingers` holds the latest finger count received from
# detect.py (a std_msgs/String), and cmd_vel_pub publishes geometry_msgs/Twist.
while not rospy.is_shutdown():
    twist = Twist()
    if fingers == '2':
        twist.linear.x = 0.6
        rospy.loginfo("driving forward")
    elif fingers == '3':
        twist.angular.z = 1          # positive angular.z = counter-clockwise (left)
        rospy.loginfo("turning left")
    elif fingers == '4':
        twist.angular.z = -1         # negative angular.z = clockwise (right)
        rospy.loginfo("turning right")
    else:
        rospy.loginfo("stopped")     # an all-zero Twist stops the robot
    cmd_vel_pub.publish(twist)       # publish every cycle, not only when stopping
    rate.sleep()

    color.md

As said before, we need a reliable way to extract hands. The first approach I tried was to extract hands by color: we can wear gloves in striking colors (green, blue, etc.) and obtain the desired hand regions by color filtering. This approach sometimes worked fine, but most of the time it was easily thrown off by subtle changes in lighting. In a word, it was not robust enough. After speaking to Prof. Hong, I realized this approach had been tried by hundreds of people decades ago and will never work. Code can be found here.

    color-filtering
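For completeness, the color-filtering idea boils down to something like the sketch below: threshold the frame in HSV around the glove's color and keep the largest blob. The HSV bounds are rough guesses for a green glove and would need tuning; as noted above, lighting changes break this easily:

    # Hedged sketch of glove-color filtering with OpenCV.
    import cv2
    import numpy as np

    LOWER_GREEN = np.array([40, 80, 80])     # rough HSV bounds for a green glove
    UPPER_GREEN = np.array([80, 255, 255])

    def extract_gloved_hand(bgr_frame):
        """Return the largest green contour (assumed to be the gloved hand), or None."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)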

    gestures.md

### Gestures

### Authors: Tirtho Aungon and Jade Garisch

### Part 1: Kinect aka libfreenect2

    Intro

In Kinect.md, the previous generations discussed the prospects and limitations of using a Kinect camera. We attempted to use the newer Kinect camera v2, which was released in 2014.

Thus, we used the libfreenect2 package to download all the appropriate files and get the raw image output on our Windows machine. The following link includes instructions on how to install everything properly on a Linux OS.

    https://github.com/OpenKinect/libfreenect2

### Issues

We ran into a lot of issues while trying to install the drivers, and it took about two weeks just to get the libfreenect2 drivers to work. The driver supports RGB image transfer, IR and depth image transfer, and registration of RGB and depth images. Here are some essential debugging steps, and recommendations if you have the ideal hardware setup:

  • Even though it is listed as optional, download OpenCL; choose the option under "Other" that corresponds to Ubuntu 18.04+.

  • If your PC has an Nvidia GPU, even better. I think that is the main reason I got libfreenect2 to work on my laptop: the GPU was powerful enough to support depth processing (which was one of the main issues).

    • Be sure to install CUDA for your Nvidia GPU

    Please look through this for common errors:

    https://github.com/OpenKinect/libfreenect2/wiki/Troubleshooting

Although we got libfreenect2 to work and got the classifier model to run locally, we were unable to connect the two together. In other words, while we could take already-saved PNGs from a Kaggle database (the one our pre-trained model used) and have the ML model classify those gestures, we could not feed it the live, raw depth images from the Kinect camera. We kept running into errors, especially an import error saying the freenect module could not be found. I think it is a solvable bug given time to explore it, so I believe it should continue to be looked at.

However, fair warning: the Kinect is also difficult to mount on the campus rover, so be aware of all of its drawbacks before choosing it as the primary hardware.

### Database

    https://www.kaggle.com/gti-upm/leapgestrecog/data

### Machine Learning model

    https://github.com/filipefborba/HandRecognition/blob/master/project3/project3.ipynb

  • What this model predicts: Thumb Down, Palm (H), L, Fist (H), Fist (V), Thumbs Up, Index, OK, Palm (V), C

### Part 2: Leap Motion: Alternative approach (semi-successful)

### Intro

As a very last-minute and spontaneous approach, we decided to use a Leap Motion device. The Leap Motion uses the Orion SDK, two IR cameras, and three infrared LEDs, which generate a roughly hemispherical area in which motions are tracked.

Its smaller observation area and higher resolution differentiate it from the Kinect (which is geared more toward whole-body tracking in a large space). This localized apparatus makes it easier to look for just a hand and track its movements.

The setup is relatively simple and just involves downloading the package for the appropriate OS; in this case, Linux (x86 for a 32-bit Ubuntu system).

### Steps to downloading Leap Motion and getting it started

1. Download the SDK from https://www.leapmotion.com/setup/linux; extract the package and you will find two DEB files that can be installed on Ubuntu.

2. Open a terminal at the extracted location and install the DEB file using the following command (for 64-bit PCs):

   $ sudo dpkg --install Leap-*-x64.deb

   If you are installing it on a 32-bit PC, use this command instead:

   $ sudo dpkg --install Leap-*-x86.deb

Once we had the Leap Motion installed, we were able to simulate it in RViz. We decided to program our own motion controls based on angular and linear parameters (looking at the direction and normal vectors that the Leap Motion senses):

    This is what the Leap Motion sees (the raw info):

In the second image above, the x, y, and z parameters indicate where the Leap Motion detects a hand (pictured in the first photo).

    This is how the hand gestures looked relative to the robot's motion:

### Stationary

### Forward

### Backward

### Left Rotation

### Right Rotation

### Conclusion

So, we got the Leap Motion to work successfully and were able to have the robot follow our two designated motions. We could have done many more if we had discovered this solution earlier. One important thing to note is that, at the moment, we are not able to mount the Leap Motion onto the physical robot, because the Leap Motion SDK is not supported on the Raspberry Pi (it only supports amd64/x86). If we are able to obtain an Atomic Pi, this project could be explored further. Leap Motion is a very powerful and accurate piece of technology that was much easier to work with than the Kinect, but I advise still exploring both options.

Install OpenNI2 if possible

  • Make sure you build in the right location

  • For a 32-bit install:

    sudo dpkg --install Leap-*-x86.deb

  • Plug in the Leap Motion and type dmesg in a terminal to see if it is detected

  • Clone the ROS drivers:

    $ git clone https://github.com/ros-drivers/leap_motion

  • Edit .bashrc:

    export LEAP_SDK=$LEAP_SDK:$HOME/LeapSDK

    export PYTHONPATH=$PYTHONPATH:$HOME/LeapSDK/lib:$HOME/LeapSDK/lib/x64

  • Save .bashrc and restart the terminal, then run:

    sudo cp $LEAP_SDK/lib/x86/libLeap.so /usr/local/lib

    sudo ldconfig

    catkin_make install --pkg leap_motion

  • To test, run:

    sudo leapd

    roslaunch leap_motion sensor_sender.launch

    rostopic list
