
Writeup: Mid-Term Project

Notes:

  • The top view shows us that the Udacity car is located at an intersection, where we can easily identify multiple vehicles around us
  • Vehicle windows cannot be identified, because the lidar beams "pass through" the glass
    [image: glass]
  • We can identify different kinds of vehicles by their physical shapes, such as SUVs and others
    [image: SUV]
  • The occlusion caused by the surrounding vehicles limits our field of view
    [image: occlusion]
  • For vehicles located in front of our car, only the bumper and minimal features of the rear of the car can be identified
    [images: Ex 1, Ex 2]
  • The head and tail lights, license plate, grille, and side mirrors serve as stable features of a vehicle
  • Finally, as we could expect, the wheels show up clearly in the data

Steps completed:

  • Compute Lidar Point-Cloud from Range Image
  • Create Birds-Eye View from Lidar PCL
  • Model-based Object Detection in BEV Image
  • Performance Evaluation for Object Detection

Compute Lidar Point-Cloud from Range Image

The lidar data provided is transformed into a numpy array so that it can be converted to an image and displayed. An important characteristic is that negative values indicate beams without a valid return, so this data should be removed.

[image: lidar_to_numpy]
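A minimal sketch of this conversion, assuming the range channel arrives as a 2-D numpy array (the function name and exact normalization are illustrative, not the project's actual code):

```python
import numpy as np

def range_channel_to_8bit(ri_range):
    """Map a lidar range channel (2-D numpy array) to a displayable
    8-bit image; negative entries mark beams without a valid return."""
    ri = ri_range.astype(np.float64)        # work on a float copy
    ri[ri < 0] = 0.0                        # remove non-returns
    span = np.amax(ri) - np.amin(ri)
    if span > 0:
        ri = ri / span                      # normalize to [0, 1]
    return (ri * 255).astype(np.uint8)      # scale to [0, 255]
```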

The point cloud extracted from the lidar data is visualized using Open3D:

[image: top_view]

The user is also able to zoom, drag, and rotate the current view:

[image: zoom]

[image: pull]
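For reference, a hedged sketch of how such an interactive viewer can be opened with Open3D (the random points stand in for the real lidar cloud):

```python
import numpy as np
import open3d as o3d

# stand-in for the real lidar cloud: (N, 3) array of x, y, z points
pcl = np.random.rand(1000, 3) * 50.0

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pcl[:, :3])
o3d.visualization.draw_geometries([pcd])  # interactive: zoom, drag, rotate
```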

Create Birds-Eye View from Lidar PCL

The images below show the height and intensity channels of the BEV map:

| Height | Intensity |
| --- | --- |
| [image: height] | [image: intensity] |
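A simplified sketch of how the two channels can be built, by discretizing the point cloud into a grid and max-aggregating per cell (grid size and coordinate ranges are illustrative values, not the project's configuration):

```python
import numpy as np

# illustrative BEV configuration
BEV_H, BEV_W = 608, 608
LIM_X, LIM_Y = (0.0, 50.0), (-25.0, 25.0)

def pcl_to_bev(pcl):
    """Build height and intensity channels of a BEV map from an
    (N, 4) point cloud with columns x, y, z, intensity.
    Simplified: one max-aggregation pass, no normalization."""
    dx = (LIM_X[1] - LIM_X[0]) / BEV_H
    dy = (LIM_Y[1] - LIM_Y[0]) / BEV_W

    height = np.zeros((BEV_H, BEV_W))
    intensity = np.zeros((BEV_H, BEV_W))
    for x, y, z, inten in pcl:
        i = int((x - LIM_X[0]) / dx)   # row index along driving direction
        j = int((y - LIM_Y[0]) / dy)   # column index to the sides
        if 0 <= i < BEV_H and 0 <= j < BEV_W:
            height[i, j] = max(height[i, j], z)
            intensity[i, j] = max(intensity[i, j], inten)
    return height, intensity
```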

Model-based Object Detection in BEV Image

3D bounding boxes and cars detected in the BEV view.

[image: screenshot]
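For illustration, a hypothetical helper for rescaling one BEV-pixel detection back to metric vehicle coordinates (the field order, grid size, and coordinate ranges are assumptions, not the project's actual detection format):

```python
def bev_det_to_metric(det, bev_h=608, bev_w=608,
                      lim_x=(0.0, 50.0), lim_y=(-25.0, 25.0)):
    """Rescale one BEV detection (x, y, w, l in pixels, yaw in rad)
    to metric vehicle coordinates; hypothetical format and values."""
    x_px, y_px, w_px, l_px, yaw = det
    x_m = y_px / bev_h * (lim_x[1] - lim_x[0]) + lim_x[0]  # forward
    y_m = x_px / bev_w * (lim_y[1] - lim_y[0]) + lim_y[0]  # lateral
    w_m = w_px / bev_w * (lim_y[1] - lim_y[0])
    l_m = l_px / bev_h * (lim_x[1] - lim_x[0])
    return x_m, y_m, w_m, l_m, yaw
```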

Performance Evaluation for Object Detection

[image]

[image]
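For reference, the evaluation boils down to standard precision and recall over the matched detections; a minimal sketch, with the accumulated TP/FP/FN counts as assumed inputs:

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics, accumulated over all frames."""
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    return precision, recall
```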

Writeup: Track 3D-Objects Over Time (Final Project)

1. Write a short recap of the four tracking steps and what you implemented there (filter, track management, association, camera fusion). Which results did you achieve? Which part of the project was most difficult for you to complete, and why?

EKF Implementation

In this step, an extended Kalman filter is implemented in order to track a single target over time, using the measurements provided by a lidar.

The RMSE plot and a video for this step can be found below:

[image: RMSE]

track1.mp4
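A minimal sketch of the predict/update cycle with a constant-velocity motion model (the notation is standard Kalman-filter notation; the timestep and matrix dimensions here are illustrative):

```python
import numpy as np

dt = 0.1  # illustrative timestep

# constant-velocity model, state = [x, y, z, vx, vy, vz]
F = np.eye(6)
F[0:3, 3:6] = dt * np.eye(3)

def predict(x, P, Q):
    """Project the state and covariance one timestep ahead."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    """Correct the prediction with a new measurement z."""
    gamma = z - H @ x                  # residual
    S = H @ P @ H.T + R                # residual covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ gamma
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```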

Track Management Implementation

The main focus of the second step is the implementation of a track management module in charge of initializing and deleting tracks and of setting the current track.state and track.score.

The RMSE plot and a video for this step can be found below:

[image: RMSE]

track_confirmed.mp4
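A hedged sketch of such a score-based track life cycle (the window size and thresholds are illustrative, not the exact project parameters):

```python
WINDOW = 6  # illustrative: number of recent frames used for the score

def update_score(track, detected):
    """Raise the score when the track is detected, lower it otherwise,
    and promote the track state accordingly."""
    if detected:
        track.score = min(track.score + 1.0 / WINDOW, 1.0)
    else:
        track.score = max(track.score - 1.0 / WINDOW, 0.0)
    if track.score > 0.8:
        track.state = 'confirmed'
    elif track.score > 0.2:
        track.state = 'tentative'

def should_delete(track, min_confirmed_score=0.6, max_P=9.0):
    """Delete confirmed tracks whose score collapses, or any track
    whose position uncertainty has grown too large."""
    return ((track.state == 'confirmed' and track.score < min_confirmed_score)
            or track.P[0, 0] > max_P or track.P[1, 1] > max_P)
```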

Association Module

In the third step of the project, the tracking algorithm is enhanced with a data association module so that a nearest-neighbor association can be applied to multiple targets. This module is based on the Mahalanobis distance.

The RMSE plot and a video for this step can be found below:

[image: RMSE]

multi_track.mp4
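A minimal sketch of the distance computation and the chi-square gating that usually accompanies it (the gating probability and degrees of freedom are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_sq(track, z, H, R):
    """Squared Mahalanobis distance between a track and a measurement."""
    gamma = z - H @ track.x
    S = H @ track.P @ H.T + R
    return float(gamma.T @ np.linalg.inv(S) @ gamma)

def inside_gate(dist_sq, dof=3, gate_prob=0.995):
    """Chi-square gating: reject pairings that are statistically unlikely."""
    return dist_sq < chi2.ppf(gate_prob, df=dof)
```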

Sensor Fusion

A nonlinear camera measurement model was implemented in order to increase the performance of the current lidar-only model.

[image: RMSE]
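A minimal sketch of such a nonlinear measurement function h(x), assuming a pinhole model with the optical axis along x in camera coordinates (the axis convention and parameter names are assumptions):

```python
import numpy as np

def hx_camera(x, f_i, f_j, c_i, c_j):
    """Project a 3-D track position into image coordinates (i, j)
    with a pinhole model; f_* and c_* are assumed calibration values."""
    px, py, pz = x[0], x[1], x[2]      # position in camera axes (assumed)
    if px == 0:
        raise ValueError('projection undefined for px = 0')
    i = c_i - f_i * py / px            # horizontal pixel coordinate
    j = c_j - f_j * pz / px            # vertical pixel coordinate
    return np.array([i, j])
```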

I had some codec issues, so I generated the .avi file from the .png images using `ffmpeg -framerate 6 -pattern_type glob -i '*.png' -c:v ffv1 out.avi` and converted it to .mp4 in order to attach it here.

out.mp4

I think the project has a well-defined workflow and is easy to follow. One important thing to note is that I am not yet very familiar with some numpy operations, so in certain steps there may be more optimized ways to solve the task.

2. Do you see any benefits in camera-lidar fusion compared to lidar-only tracking (in theory and in your concrete results)?

Once the camera was integrated into the algorithm, the RMSE dropped slightly. The result is not that significant, though, since in my results the improvement over using only the lidar was not that great.

On the other hand, we do notice an improvement in the form of fewer "possible false detections".

3. Which challenges will a sensor fusion system face in real-life scenarios? Did you see any of these challenges in the project?

In a real scenario, there are many more objects to identify and track, including pedestrians and other types of vehicles, which results in the need for more redundant and information-rich sensing systems.

4. Can you think of ways to improve your tracking results in the future?

As I said before, I suppose that some parts of the code could be optimized with other functions, reducing execution time. Also, a more careful and accurate calibration of the different sensors would result in better data.