Robotics Software Engineer

Udacity Nanodegree

Notes are based on Robotics Software Engineer Udacity Nanodegree

Curriculum:

1. Introduction to Robotics

Essential elements of Robotics:

  1. Perception through sensors (RGB-D camera, LIDAR, RADAR, encoder, GPS, IMU, microphone, thermometer, barometer, etc).
  2. Decision Making by analyzing sensor measurements (color, distance, position, relative coordinates, acceleration, sound, temperature, pressure, etc).
  3. Action using actuators (linear / rotary, electric / pneumatic / hydraulic / magnetic / thermal), and other commands (communicate, measure, etc).

2. Gazebo World

Gazebo features:

  1. Dynamics Simulation: Model a robot’s dynamics with a high-performance physics engine.
  2. Advanced 3D Graphics: Render your environment with high-fidelity graphics, including lighting, shadows, and textures.
  3. Sensors: Add sensors to your robot, generate data, and simulate noise.
  4. Plugins: Write a plugin to interact with your world, robot, or sensor.
  5. Model Database: Download a robot or environment from the Gazebo model library, or build your own through its model editor.
  6. Socket-Based Communication: Interact with Gazebo running on a remote server through socket-based communication.
  7. Cloud Simulation: Run Gazebo on a server and interact with it through a browser.
  8. Command Line Tools: Control your simulated environment through the command line tools.

Components involved in running an instance of a Gazebo simulation:

  1. Gazebo Server:
     $ gzserver
    

    It is responsible for parsing the description files related to the scene we are trying to simulate, as well as the objects within. It then simulates the complete scene using a physics and sensor engine.

  2. Gazebo Client
     $ gzclient
    

    It provides the essential graphical client that connects to gzserver and renders the simulation scene along with useful interactive tools. While you can technically run gzclient by itself, it does nothing at all (except consume your compute resources), as it has no gzserver to connect to and receive instructions from.
    It is common practice to run gzserver first, followed by gzclient, allowing some time to initialize the simulation scene, the objects within it, and associated parameters before rendering. Both can also be launched together with

     $ gazebo
    
  3. World Files
     $ gazebo <yourworld>.world
    

    A world file in Gazebo describes all the elements of the simulated environment: your robot model, its surroundings, lighting, sensors, and other objects. It is written in the Simulation Description Format (SDF), for example:

     <?xml version="1.0" ?>
     <sdf version="1.5">
         <world name="default">
             <physics type="ode">
             ...
             </physics>
                
             <scene>
             ...
             </scene>
    
             <model name="box">
             ...
             </model>
    
             <model name="sphere">
             ...
             </model>
    
             <light name="spotlight">
             ...
             </light>
    
         </world>
     </sdf>
    
  4. Model Files
    To keep things modular, you should create a separate SDF file for your robot with exactly the same format as your world file. This model file should represent only a single model (e.g. a robot) and can be imported by your world file. Keeping your model in a separate file lets you reuse it in other projects. To include a model file of a robot or any other model inside your world file, add the following code to the world’s SDF file:
     <include>
         <uri>model://model_file_name</uri>
     </include>
    
  5. Environment Variables
    There are many environment variables that Gazebo uses, primarily to locate files (world, model, …) and to set up communications between gzserver and gzclient, e.g. GAZEBO_MODEL_PATH - a colon-separated list of directories where Gazebo searches for model files.
  6. Plugins
    To interact with a world, model, or sensor in Gazebo, you can write plugins. These plugins can be either loaded from the command line or added to your SDF world file. Include the path to your custom plugins with
     $ export GAZEBO_PLUGIN_PATH=${GAZEBO_PLUGIN_PATH}:/path/to/compiled/files/of/your/custom/plugins
    

    Project1: Build My World

3. ROS Essentials

ROS is an open-source software framework for robotics development.
It is not an operating system in the typical sense. But like an OS, it provides a means of communicating with hardware.
It also provides a way for different processes to communicate with one another via message passing.
Lastly, ROS features a slick build and package management system called catkin, allowing you to develop and deploy software with ease.
ROS also has tools for visualization, simulation, and analysis, as well as extensive community support and interfaces to numerous powerful software libraries.

ROS features:

  • Robot’s physical components (sensors, actuators, etc) can be abstracted as ROS nodes, each encapsulating a specific set of operations.
  • ROS Master maintains the registry of all the active nodes on a system, allowing nodes to locate one another and communicate.
  • ROS Master hosts the parameter server where configuration values and parameters are shared among all nodes.
  • ROS nodes communicate with each other by passing ROS messages over ROS topics, taking on the roles of publisher and subscriber nodes.
  • A ROS node can publish to and subscribe to any number of topics, constantly monitoring its subscribed topics for incoming messages.
  • ROS nodes can interact with ROS services on a one-to-one basis, using request/response messages.
  • ROS provides the rqt_graph tool to show the compute graph, i.e. the nodes and their means of communication (services, topics).

ROS Master process is responsible for:

  • providing naming and registration services to other running nodes.
  • tracking all publishers and subscribers.
  • aggregating log messages generated by the nodes.
  • facilitating connections between nodes.

ROS basic commands:

  • $ roscore - start ROS Master process.
  • $ rosrun <package_name> <executable_node_name>
  • $ rosnode list - list of active ROS nodes.
  • $ rostopic list - list of active ROS topics.
  • $ rostopic info </rostopic/name> - print info (message type, publishers/subscribers) about a specific ROS topic.
  • $ rosmsg info <rosmsg/type> - print detailed info about the content of a specific ROS message.
  • $ rostopic echo </rostopic/name> - print specific ROS topic’s published messages in real time.
  • $ rosparam set </parameter/name> <value> - set the ROS parameter

$ roslaunch <package_name> <filename>.launch allows you to:

  • launch the ROS Master and multiple nodes
  • set default parameters on the parameter server
  • automatically re-spawn processes that have died

$ rosdep tool will check for a package’s missing dependencies, download them, and install them.
$ rosdep check <package_name> - check for missing dependencies in a ROS package.
$ rosdep install -i <package_name> - download and install the missing dependencies.

ROS package directory structure:

  • scripts (python executables)
  • src (C++ source files)
  • msg (for custom message definitions)
  • srv (for service message definitions)
  • include (headers/libraries that are needed as dependencies)
  • config (configuration files)
  • urdf (Unified Robot Description Format files)
  • meshes (CAD files in .dae (Collada) or .stl (STereoLithography) format)
  • worlds (XML-like files used for Gazebo simulation environments)

ROS Publishers & Subscribers:

ros::Publisher pub1 = n.advertise<message_type>("/topic_name", queue_size);
pub1.publish(msg);

ros::Subscriber sub1 = n.subscribe("/topic_name", queue_size, callback_function);
void callback_function(const message_type& msg) {}

ROS Services:

ros::ServiceServer service = n.advertiseService("service_name", handler);
bool handler(package_name::ServiceName::Request& req, package_name::ServiceName::Response& res) { return true; }
ros::ServiceClient client = n.serviceClient<package_name::service_file_name>("service_name");
client.call(srv);  // request a service 

$ rosservice call <service_name> "request" - call a service with the given request message.
$ rossrv show <service_type> - print detailed info (request/response message types) about the service type.

ROS Cheatsheet (Download PDF)

ROS PublishAndSubscribe Class Template:

#include <ros/ros.h>

class PublishAndSubscribe
{
public:
    PublishAndSubscribe()
    {
        // Define publisher's topic "/published_topic"
        pub_ = n_.advertise<PUBLISHED_MESSAGE_TYPE>("/published_topic", 1);

        // Define subscriber's topic "/subscribed_topic" and declare its callback function
        sub_ = n_.subscribe("/subscribed_topic", 1, &PublishAndSubscribe::callback, this);
    }

    // Define subscriber's callback function
    void callback(const SUBSCRIBED_MESSAGE_TYPE& input)
    {
        PUBLISHED_MESSAGE_TYPE output;
        //.... do something with the input and generate the output...
        pub_.publish(output);
    }

private:
    ros::NodeHandle n_; 
    ros::Publisher pub_;
    ros::Subscriber sub_;

}; // End of class PublishAndSubscribe

int main(int argc, char **argv)
{
    // Initialize ROS node "publish_and_subscribe"
    ros::init(argc, argv, "publish_and_subscribe");

    // Create an instance of PublishAndSubscribe class that will take care of everything
    PublishAndSubscribe PASObject;

    // Spin, handling incoming messages on the subscribed topic until shutdown
    ros::spin();

    return 0;
}

Project2: Go Chase It

4. Localization

Types of localization problems, given a known map of the environment:

  • Local Localization (Position Tracking), when the robot knows its initial pose and continuously updates this estimate as it moves.
  • Global Localization (Global Pose Estimation), when the robot has no initial pose and must determine its location from scratch.
  • Kidnapped Robot Problem, when the robot has no initial pose and can be moved to an unknown location at any time without warning.

Markov Properties in terms of Localization in Robotics:

  • static world.
  • noise in sensor readings and motion commands is independent of the noise from previous readings and movements.
  • perfect model with no approximation errors, matching the actual dynamics and sensing of the robot.
  • estimation of the current pose depends only on the previous belief and current action, and not on the sequence of events that preceded it.
  • observation (sensor measurement) depends only on the current estimated pose, and not on the sequence of events that preceded it.

Markov Localization, based on Markov Properties above, maintains a belief distribution (probability distribution) over the set of all possible states (robot’s poses) and updates this belief according to sensor measurements and motion commands. It could be implemented depending on the specific requirements of the environment and the robot’s sensors with:

  • Kalman Filters (standard, Extended, Unscented) - a type of Bayesian filter that assumes the robot’s (non-)linear dynamic model and the measurements are subject to Gaussian noise. Computationally efficient.
  • Histogram Filters - a type of non-parametric Bayesian filter that uses a discretized (grid) representation of the state space. The belief about the robot’s pose is represented as a probability distribution over this grid. Limited by the chosen discretization, thus less flexible for continuous spaces, and computationally inefficient for high-dimensional spaces.
  • Particle Filters (Monte-Carlo Localization) - a type of non-parametric Bayesian filter that doesn’t assume any specific distribution (like Gaussian in Kalman filter), thus can represent complex and irregular distributions modelling non-linear and non-Gaussian processes. Set of particles is used to represent the belief distribution. Computationally heavy.
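
As a concrete illustration of the non-parametric Bayes filter idea behind histogram filters, here is a minimal 1D discrete Bayes filter sketch. The 5-cell cyclic world, the colors, and the hit/miss likelihoods are illustrative assumptions, not values from the course:

```python
# Minimal 1D histogram (discrete Bayes) filter over a cyclic 5-cell world.
world = ['green', 'red', 'red', 'green', 'green']
p_hit, p_miss = 0.6, 0.2  # assumed likelihoods of a correct / incorrect color reading

def sense(p, measurement):
    """Measurement update: reweight each cell by the measurement likelihood, then normalize."""
    q = [prob * (p_hit if cell == measurement else p_miss)
         for prob, cell in zip(p, world)]
    s = sum(q)
    return [x / s for x in q]

def move(p, step):
    """State prediction: shift the belief by `step` cells in the cyclic world (exact motion)."""
    n = len(p)
    return [p[(i - step) % n] for i in range(n)]

belief = [1.0 / len(world)] * len(world)   # uniform prior (global localization)
belief = sense(belief, 'red')              # -> [1/9, 1/3, 1/3, 1/9, 1/9]
belief = move(belief, 1)                   # belief shifts one cell to the right
```

Sensing concentrates probability on the red cells; moving shifts (and, with a noisy motion model, would smear) the belief. Real implementations extend this to a 2D grid over (x, y, θ).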

Kalman Filter

Given a proper initial estimate and Gaussian noise in measurements and movements:
Pros:

  • computationally efficient - no need for large-scale numerical simulations to make an estimate, as it uses linear equations.
  • sequential processing - data coming continuously in online systems updates sequentially the estimate at each step.
  • sensor fusion - data from multiple sensors (GPS, LIDAR, etc) weighed according to their variance (more accurate measurement is given more weight) and combined together result in Gaussian distribution with a minimized overall variance.

Cons:

  • non-linear systems need modifications (EKF, UKF).
  • non-Gaussian noise needs a more flexible pose estimate.
  • poor initialization of the base estimate results in inaccurate future estimates.

Types of sensors used in Kalman Filter:

  • Inertial Measurement Unit (IMU): 3-DoF Gyroscope for angular velocity + 3-DoF Accelerometer for linear acceleration measuring.
    \(x = \iint a \, dt\). The error from double integration might accumulate over time. Check for drift.
  • Inertial and Magnetic Measurement Unit (IMMU): 6-DoF IMU + 3-DoF Magnetometer for magnetic field measuring.
  • Rotary Encoders: measure wheel velocity and position. \(x = \int v \, dt\). Wheel slippage and lockup lead to inaccurate and noisy measurements.
    High-resolution (CPR: counts per revolution) encoders are more sensitive to slippage.
  • Vision Cameras (Stereo, RGB-D), LIDAR: measures distance to obstacles.
    Light conditions, surface texture, max-min range, and sensor sensitivity determine the accuracy of measurements.
Gaussian Distributions:
  • univariate with Mean \(\mu\) and Variance \(\sigma^2\)
    • \[p(x | \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)\]
  • multivariate
    • \[\text{Mean Vector: } \boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}\]
    • \[\text{Covariance Matrix: } \boldsymbol{\Sigma} = \begin{bmatrix} \sigma_x^2 & \rho\sigma_x\sigma_y \\ \rho\sigma_x\sigma_y & \sigma_y^2 \end{bmatrix}\]
    • \[p(\mathbf{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{2\pi |\boldsymbol{\Sigma}|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})\right)\]
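
Both densities above can be evaluated directly; here is a pure-Python sketch (the bivariate case expands the 2x2 covariance inverse by hand; with \(\rho = 0\) it factors into a product of two univariate densities):

```python
import math

def gaussian_1d(x, mu, var):
    """p(x | mu, sigma^2) for a univariate Gaussian."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gaussian_2d(x, y, mu, sx, sy, rho):
    """p([x, y] | mu, Sigma) for a bivariate Gaussian with
    Sigma = [[sx^2, rho*sx*sy], [rho*sx*sy, sy^2]]."""
    det = (sx * sy) ** 2 * (1 - rho ** 2)          # |Sigma| for the 2x2 matrix
    dx, dy = x - mu[0], y - mu[1]
    # (x - mu)^T Sigma^{-1} (x - mu), expanded using the closed-form 2x2 inverse
    quad = (dx ** 2 / sx ** 2
            - 2 * rho * dx * dy / (sx * sy)
            + dy ** 2 / sy ** 2) / (1 - rho ** 2)
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))
```

For example, `gaussian_1d(0, 0, 1)` is the standard-normal peak \(1/\sqrt{2\pi} \approx 0.3989\), and `gaussian_2d(1, 2, (0, 0), 1, 1, 0)` equals `gaussian_1d(1, 0, 1) * gaussian_1d(2, 0, 1)`.
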
Univariate Kalman Filter: Mean & Variance computation:

State Prediction

  • \[\mu^{\prime} = \mu_1 + \mu_2\]
  • \[\sigma^{\prime \, 2} = \sigma_1^2 + \sigma_2^2\]

Measurement Update

  • \[\mu^{\prime} = \frac{r^2 \mu + \sigma^2 v}{r^2 + \sigma^2}\]
  • \[\sigma^{\prime \, 2} = \frac{r^2 \, \sigma^2}{r^2 + \sigma^2}\]
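
The two univariate steps above map directly to code. A minimal sketch, following the formulas exactly (\(v\) is the measurement mean, \(r^2\) the measurement variance; the numbers are illustrative):

```python
def predict(mu, var, motion_mu, motion_var):
    """State prediction: motion adds its mean and its uncertainty."""
    return mu + motion_mu, var + motion_var

def update(mu, var, v, r2):
    """Measurement update: variance-weighted average of prior and measurement.
    The combined variance is always smaller than either input."""
    new_mu = (r2 * mu + var * v) / (r2 + var)
    new_var = (r2 * var) / (r2 + var)
    return new_mu, new_var

# illustrative 1D cycle: prior N(10, 4), move +2 with variance 4, measure 13 with variance 2
mu, var = predict(10.0, 4.0, 2.0, 4.0)   # -> (12.0, 8.0): uncertainty grows
mu, var = update(mu, var, 13.0, 2.0)     # -> (12.8, 1.6): uncertainty shrinks
```

Note how prediction always inflates the variance while the update always shrinks it below the measurement variance, which is the sensor-fusion behavior mentioned in the Pros list above.
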
Multivariate Kalman Filter: Mean & Variance computation:

State Prediction

  • \[\text{Prior Mean Vector: } \mathbf{x} = \begin{bmatrix} x \\ \dot{x} \\ \ddot{x} \end{bmatrix}\]
  • \[\text{State Prediction (noise-free): } {\begin{bmatrix} x \\ \dot{x} \\ \ddot{x} \end{bmatrix}}^\prime = \begin{bmatrix} 1 & \Delta t & \frac{1}{2} \Delta t^2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \\ \ddot{x} \end{bmatrix}\]
  • \[\text{State Prediction Function (noise-free): } \mathbf{F} = \begin{bmatrix} 1 & \Delta t & \frac{1}{2} \Delta t^2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix}\]
  • \[\text{Covariance Matrix (noise-free): } \mathbf{P}^\prime = \mathbf{F} \mathbf{P} \mathbf{F}^T\]
  • \[\text{State Prediction (with Gaussian noise): } \mathbf{x}^\prime = \mathbf{F} \mathbf{x} + \text{noise}, \ \text{noise} \sim \mathbf{N} (0, \mathbf{Q})\]
  • \[\text{Covariance Matrix (with Gaussian noise): } \mathbf{P}^\prime = \mathbf{F} \mathbf{P} \mathbf{F}^T + \mathbf{Q}\]

Measurement Update

  • \[\text{Actual Measurement Vector: } \mathbf{z}\]
  • \[\begin{gather*} \text{Expected Measurement Vector: } \mathbf{H} \mathbf{x}^\prime, \\ \text{where} \ \mathbf{H} \text{ is Measurement Function, mapping the state to an observation} \end{gather*}\]
  • \[\text{Measurement Residual Vector: } \mathbf{y} = \mathbf{z} - \mathbf{H} \mathbf{x}^\prime\]
  • \[\text{Covariance Matrix (with Gaussian noise): } \mathbf{S}^\prime = \mathbf{H} \mathbf{P}^\prime \mathbf{H}^T + \mathbf{R}\]

Kalman Gain Calculation

  • \[\text{Kalman Gain: } \mathbf{K} = \mathbf{P}^\prime \mathbf{H}^T \mathbf{S}^{-1}\]

Posterior Mean and Covariance Calculations

  • \[\text{Posterior Mean Vector: } \hat{\mathbf{x}} = \mathbf{x}^\prime + \mathbf{K} \mathbf{y}\]
  • \[\text{Posterior Covariance Matrix: } \hat{\mathbf{P}} = (\mathbf{I} - \mathbf{K} \mathbf{H}) \mathbf{P}^\prime\]
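
One full predict/update cycle of the multivariate equations above can be sketched with plain nested-list matrices. The model here is an assumed 1D constant-velocity system (state = [position, velocity], position-only sensor) with illustrative numbers:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

dt = 1.0
F = [[1.0, dt], [0.0, 1.0]]          # state transition: position += velocity * dt
H = [[1.0, 0.0]]                     # measurement function: observe position only
Q = [[0.0, 0.0], [0.0, 0.0]]         # process noise (zero here for clarity)
R = [[1.0]]                          # measurement noise variance

x = [[0.0], [1.0]]                   # prior state: position 0, velocity 1
P = [[1.0, 0.0], [0.0, 1.0]]         # prior covariance

# State prediction: x' = F x, P' = F P F^T + Q
x = matmul(F, x)
P = add(matmul(matmul(F, P), transpose(F)), Q)

# Measurement update with z = 2.0
z = [[2.0]]
y = add(z, [[-v[0]] for v in matmul(H, x)])            # residual y = z - H x'
S = add(matmul(matmul(H, P), transpose(H)), R)         # innovation covariance (1x1)
K = [[row[0] / S[0][0]] for row in matmul(P, transpose(H))]  # gain K = P' H^T S^-1

# Posterior: x_hat = x' + K y, P_hat = (I - K H) P'
x = add(x, matmul(K, y))
I2 = [[1.0, 0.0], [0.0, 1.0]]
KH = matmul(K, H)
P = matmul(add(I2, [[-v for v in row] for row in KH]), P)
```

With these numbers the prediction gives x' = [1, 1] and P' = [[2, 1], [1, 1]]; the update pulls the position estimate toward z = 2 (posterior x = [5/3, 4/3]) and, through the off-diagonal covariance, also corrects the unobserved velocity.
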
Analysis of Kalman Gain:
  1. Perfect sensor measurements \(\mathbf{R} = \begin{bmatrix} 0 \end{bmatrix}\)
    • \[\mathbf{S}^\prime = \mathbf{H} \mathbf{P}^\prime \mathbf{H}^T + \mathbf{R} = \mathbf{H} \mathbf{P}^\prime \mathbf{H}^T\]
    • \[\mathbf{K} = \mathbf{P}^\prime \mathbf{H}^T \mathbf{S}^{-1} = \mathbf{P}^\prime \mathbf{H}^T (\mathbf{H} \mathbf{P}^\prime \mathbf{H}^T)^{-1} = \mathbf{P}^\prime \mathbf{H}^T (\mathbf{H}^T)^{-1} \mathbf{P}^{\prime \, -1} \mathbf{H}^{-1} = \mathbf{H}^{-1}\]
    • \[\begin{gather*} \hat{\mathbf{x}} = \mathbf{x}^\prime + \mathbf{K} \mathbf{y} = \mathbf{x}^\prime + \mathbf{H}^{-1} (\mathbf{z} - \mathbf{H} \mathbf{x}^\prime) = \mathbf{H}^{-1} \mathbf{z}, \\ \text{rely entirely on sensor measurements, state prediction is unreliable} \end{gather*}\]
  2. Noisy sensor measurements \(\mathbf{R} = \begin{bmatrix} \infty \end{bmatrix}\)
    • \[\mathbf{S}^\prime = \mathbf{H} \mathbf{P}^\prime \mathbf{H}^T + \mathbf{R} = \begin{bmatrix} \infty \end{bmatrix}\]
    • \[\mathbf{K} = \mathbf{P}^\prime \mathbf{H}^T \mathbf{S}^{-1} = \begin{bmatrix} 0 \end{bmatrix}\]
    • \[\begin{gather*} \hat{\mathbf{x}} = \mathbf{x}^\prime + \mathbf{K} \mathbf{y} = \mathbf{x}^\prime, \\ \text{rely entirely on state prediction, sensor measurement is unreliable} \end{gather*}\]

Extended Kalman Filter

Non-linear motion and measurement functions can be used to update the mean vector, but they need to be linearized over a small section to update the covariance matrix. Otherwise, the Gaussian distribution would turn into a non-Gaussian distribution, which is computationally inefficient to deal with.

  • \[\begin{gather*} \text{Linearization is achieved through Taylor series:} \\ \mathbf{T}(\mathbf{x}) = f(\boldsymbol{\mu}) + (\mathbf{x} - \boldsymbol{\mu})^T \, \nabla f(\boldsymbol{\mu}) + \frac{1}{2!} (\mathbf{x} - \boldsymbol{\mu})^T \, \nabla^2 f(\boldsymbol{\mu}) (\mathbf{x} - \boldsymbol{\mu}) + ... \end{gather*}\]
  • \[\begin{gather*} \text{where } \nabla f(\mathbf{\boldsymbol{\mu}}) \text{ is the gradient of } f \text{ evaluated at } \boldsymbol{\mu}, \text{which is a Jacobian matrix } \mathbf{J} \text{ of partial derivatives:} \\ \nabla f(\mathbf{\boldsymbol{\mu}}) = \mathbf{J} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end{bmatrix} \end{gather*}\]

Example:

Measurement Update

  • \[\text{State Vector: } \mathbf{x} = \begin{bmatrix} \phi \\ \dot{y} \\ y \end{bmatrix}, \text{where } \phi \text{ is the roll angle}\]
  • \[\text{Measurement Function: } h(\mathbf{x}) = \begin{bmatrix} \frac{wall - y}{ \cos{\phi}} \end{bmatrix}\]
  • \[\text{Measurement Function Linearization: } h(\mathbf{x}) \simeq h(\boldsymbol{\mu}) + (\mathbf{x} - \boldsymbol{\mu})^T \, \nabla h(\boldsymbol{\mu})\]
  • \[\nabla h(\mathbf{\boldsymbol{\mu}}) = \mathbf{H} = \begin{bmatrix} \frac{\partial h}{\partial \phi} & \frac{\partial h}{\partial \dot{y}} & \frac{\partial h}{\partial y} \end{bmatrix} = \begin{bmatrix} \frac{\sin{\phi}}{\cos^2{\phi}} (wall - y) & 0 & \frac{-1}{\cos{\phi}} \end{bmatrix}\]
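
A quick way to sanity-check an analytic Jacobian like the one above is to compare it against central finite differences. A sketch for \(h(\mathbf{x}) = (wall - y)/\cos\phi\), where `wall` and the evaluation point are illustrative values:

```python
import math

wall = 10.0  # assumed wall position

def h(phi, y):
    """Range measurement: distance to the wall, tilted by roll angle phi."""
    return (wall - y) / math.cos(phi)

def analytic_H(phi, y):
    """[dh/dphi, dh/dy_dot, dh/dy] from the linearization above."""
    return [math.sin(phi) / math.cos(phi) ** 2 * (wall - y), 0.0, -1.0 / math.cos(phi)]

def numeric_H(phi, y, eps=1e-6):
    """Central finite differences; h does not depend on y_dot, so that entry is 0."""
    d_phi = (h(phi + eps, y) - h(phi - eps, y)) / (2 * eps)
    d_y = (h(phi, y + eps) - h(phi, y - eps)) / (2 * eps)
    return [d_phi, 0.0, d_y]

phi, y = 0.3, 2.0
Ha, Hn = analytic_H(phi, y), numeric_H(phi, y)
```

If the two rows disagree beyond finite-difference error, the hand-derived Jacobian (or the sign convention) is wrong; this catches most EKF linearization bugs early.
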
Extended Kalman Filter Equations:

State Prediction

  • \[\mathbf{x}^\prime = f(\mathbf{x})\]
  • \[\mathbf{P}^\prime = \mathbf{F} \mathbf{P} \mathbf{F}^T + \mathbf{Q}\]

Measurement Update

  • \[\mathbf{y} = \mathbf{z} - h(\mathbf{x}^\prime)\]
  • \[\mathbf{S}^\prime = \mathbf{H} \mathbf{P}^\prime \mathbf{H}^T + \mathbf{R}\]

Kalman Gain Calculation

  • \[\mathbf{K} = \mathbf{P}^\prime \mathbf{H}^T \mathbf{S}^{-1}\]

Posterior Mean and Covariance Calculations

  • \[\hat{\mathbf{x}} = \mathbf{x}^\prime + \mathbf{K} \mathbf{y}\]
  • \[\hat{\mathbf{P}} = (\mathbf{I} - \mathbf{K} \mathbf{H}) \mathbf{P}^\prime\]
Lab: Extended Kalman Filter:

Used ROS packages:

  • turtlebot_gazebo to spawn TurtleBot 2 in a Gazebo environment.
  • turtlebot_teleop to publish robot control commands to /cmd_vel_mux/input/teleop ROS topic from keyboard.
  • rviz to visualize the estimated robot poses.
  • robot_pose_ekf to estimate the robot poses from teleop motion commands and fusion of two sensor measurements: IMU (/mobile_base/sensors/imu_data) and rotary encoders (/odom).
  • odom_to_trajectory (by Udacity) to generate trajectory paths from filtered (/ekfpath) and unfiltered (/odompath) poses.


Monte-Carlo Localization

Pros:

  • can be used for both local and global localization problems, while EKF handles local localization only.
  • unlike EKF, not limited to Gaussian noise.
  • memory usage & resolution can be controlled through the number of particles.
  • particles represent belief distribution of where the robot might be.
  • easier to implement than EKF.

Cons:

  • computationally and memory-wise expensive (especially to achieve high resolution).
  • poor resampling results in particle deprivation and worse resolution.

For more information, check out the paper “Robust Monte-Carlo Localization for Mobile Robots”

Particles with higher weight, which is defined by how close the predicted pose is to the robot’s actual pose, are more likely to survive the resampling process.
Types of Range Sensor (LIDAR) Noise
  1. Local Measurement Noise: caused by the limited resolution of range sensors and atmospheric effects on the measurement signals.
  2. Unexpected Objects: caused by dynamic environment (e.g. people) in static map.
  3. Sensor Failures: caused by black, light-absorbing objects; bright sunlight; smooth surfaces such as walls, which effectively becomes a mirror (surface material, sensitivity of the sensor).
  4. Random Measurements: caused by cross-talk between different sensors; phantom readings from signals bounced off walls.
\[z^k_t \text{ - reading from the } k^{th} \text{ beam sensor at time } t, \; x_t \text{ - robot position at time } t, \; m \text{ - map}\]
The sensor model combines these noise types into a single measurement likelihood \(p(z^k_t \mid x_t, m)\).
Course figures referenced here: the LIDAR Sensor Model; the MCL pseudo-code, where State Prediction (line 4) uses the Motion Model, Measurement Update (line 5) uses the Sensor Model, and Resampling spans lines 8-11; and the Low-Variance Resampler.
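
The MCL update loop with a low-variance (systematic) resampler can be sketched in a toy 1D setting. The world, sensor model, and all numeric values are illustrative assumptions: the robot sits at x = 5 in a 10 m corridor and measures its distance to a wall at x = 10 with Gaussian noise:

```python
import math
import random

random.seed(0)
WALL, SIGMA, N = 10.0, 0.5, 500
true_x = 5.0

particles = [random.uniform(0.0, WALL) for _ in range(N)]  # uniform prior (global localization)

def low_variance_resample(particles, weights):
    """Systematic resampling: one random offset, then N evenly spaced pointers,
    so a particle with weight w survives roughly w/step times."""
    total = sum(weights)
    n = len(particles)
    step = total / n
    pointer = random.uniform(0.0, step)
    new, cumulative, i = [], weights[0], 0
    for _ in range(n):
        while pointer > cumulative and i < n - 1:
            i += 1
            cumulative += weights[i]
        new.append(particles[i])
        pointer += step
    return new

for _ in range(3):
    z = WALL - true_x                        # noise-free measurement, for clarity
    # Measurement update: Gaussian likelihood of z given each particle's expected range
    weights = [math.exp(-(z - (WALL - p)) ** 2 / (2 * SIGMA ** 2)) for p in particles]
    particles = low_variance_resample(particles, weights)
    # Small jitter stands in for motion noise and guards against particle deprivation
    particles = [p + random.gauss(0.0, 0.1) for p in particles]

estimate = sum(particles) / len(particles)   # converges near true_x = 5
```

Compared with drawing N independent samples from the weight distribution, the single-offset systematic scheme has lower variance in how many copies each particle gets, which is exactly why the course's low-variance resampler is preferred.
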

Project3: Where Am I