Costar hyper readme update #529

Merged Oct 18, 2018 (40 commits; the changes shown below are from 12 commits)

Commits
45de584
relocate dataset file lists
ahundt Oct 5, 2018
1c35ca6
Costar plan and costar hyper readme improvements
ahundt Oct 5, 2018
e0e5366
readme.md add website link
ahundt Oct 5, 2018
78cf2f2
readme cleanup and links
ahundt Oct 5, 2018
b9d3c00
readme.md create & improve links
ahundt Oct 5, 2018
bb2bb6f
Slightly modify README to be more readable
RexxarCHL Oct 8, 2018
81e3eee
Add code to check action labels for consistency
RexxarCHL Oct 8, 2018
a607c91
Modify deprecated argument usage
RexxarCHL Oct 8, 2018
386e808
Implement failure and error set splitting
RexxarCHL Oct 9, 2018
0d0bf81
Implement dataset splitting for error only and failure only subset
RexxarCHL Oct 10, 2018
6c4fd0e
Error is also a type of failure
RexxarCHL Oct 10, 2018
87beadd
Modify output label
RexxarCHL Oct 10, 2018
ba29f96
Modify output file names
RexxarCHL Oct 10, 2018
19dcafc
Increase readability on the help text. Parameterize the hard-coded va…
RexxarCHL Oct 12, 2018
0444fb1
WIP: Add bazilion comments; split_dataset behavior overhaul
RexxarCHL Oct 12, 2018
65e3f78
Fix a bug and add a bunch of sanity checks
RexxarCHL Oct 13, 2018
cf83741
grasp_model.py -> hypertree_model.py
ahundt Oct 13, 2018
648bdd7
Merge branch 'costar_hyper' of github.com:cpaxton/costar_plan into co…
ahundt Oct 13, 2018
9781bb7
Merge pull request #531 from RexxarCHL/action_label_check
ahundt Oct 13, 2018
d0bbdef
Minor code style changes
RexxarCHL Oct 15, 2018
20c8c0f
costar_block_stacking_split_dataset.py set random seed, write summary…
ahundt Oct 15, 2018
d6a0d40
return filename
ahundt Oct 15, 2018
f68cd25
WIP: Refactor dataset splitting to include default behaviour
RexxarCHL Oct 16, 2018
be502c9
WIP: Finish refactoring the script. To be debugged.
RexxarCHL Oct 16, 2018
976fd1b
hyperopt_plot.py dramatically improved plot output with averages
ahundt Oct 16, 2018
563ce98
WIP: Debug all functionalities. Working on output combined files
RexxarCHL Oct 16, 2018
23189ff
hyperopt_plot.py provides a proper summary now
ahundt Oct 16, 2018
e30b507
hyperopt_plot.py better variable parameterization
ahundt Oct 16, 2018
4f158fb
Finish refactoring the split script
RexxarCHL Oct 16, 2018
7466db6
Modify help description
RexxarCHL Oct 16, 2018
8b94c4d
Update IA scripts for default values and metadata
RexxarCHL Oct 16, 2018
ec377c4
Minor update for better readability
RexxarCHL Oct 16, 2018
64e5955
Add expand user for path
RexxarCHL Oct 16, 2018
6496ccd
Change email address
ahundt Oct 17, 2018
a1fa172
Small changes to the code
RexxarCHL Oct 17, 2018
cebefd8
Merge pull request #532 from RexxarCHL/split_failure_error_sets
ahundt Oct 17, 2018
9de577c
cornell_grasp_train.py fix user directory bug
ahundt Oct 17, 2018
05f419f
cornell_hyperopt.py do a new search on full grasp regression
ahundt Oct 17, 2018
f266e4b
Merge branch 'costar_hyper' of github.com:cpaxton/costar_plan into co…
ahundt Oct 17, 2018
14b8a35
hyperopt_plot.py configurable dimensions and model count
ahundt Oct 18, 2018
54 changes: 42 additions & 12 deletions Readme.md
@@ -1,25 +1,51 @@
# CoSTAR Task Planner (CTP)
# CoSTAR Plan

[![Build Status](https://travis-ci.com/cpaxton/costar_plan.svg?token=13PmLzWGjzrfxQvEyWp1&branch=master)](https://travis-ci.com/cpaxton/costar_plan)

CoSTAR Plan is for deep learning with robots. It is divided into two main parts: the CoSTAR Task Planner (CTP) library and CoSTAR Hyper.

### CoSTAR Task Planner (CTP)

Code for the paper [Visual Robot Task Planning](https://arxiv.org/abs/1804.00062).

### [CoSTAR Hyper](costar_hyper/README.md)

Code for the paper [Training Frankenstein's Creature To Stack: HyperTree Architecture Search](https://sites.google.com/view/hypertree-renas/home).
Details are in the [costar hyper readme](costar_hyper/README.md).

[![Training Frankenstein's Creature To Stack: HyperTree Architecture Search](https://img.youtube.com/vi/1MV7slHnMX0/1.jpg)](https://youtu.be/1MV7slHnMX0 "Training Frankenstein's Creature To Stack: HyperTree Architecture Search")

### Supported Datasets

- [CoSTAR Block Stacking Dataset](https://sites.google.com/site/costardataset)
- [Cornell Grasping Dataset](http://pr.cs.cornell.edu/grasping/rect_data/data.php)
- [Google Brain Grasping Dataset](https://sites.google.com/site/brainrobotdata/home/grasping-dataset)


# CoSTAR Task Planner (CTP)


The CoSTAR Planner is part of the larger [CoSTAR project](https://github.com/cpaxton/costar_stack/). It integrates some learning from demonstration and task planning capabilities into the larger CoSTAR framework in different ways.

[![Visual Task Planning](https://img.youtube.com/vi/Rk4EDL4B7zQ/0.jpg)](https://youtu.be/Rk4EDL4B7zQ "Visual Task Planning")

Specifically it is a project for creating task and motion planning algorithms that use machine learning to solve challenging problems in a variety of domains. This code provides a testbed for complex task and motion planning search algorithms. The goal is to describe example problems where actor must move around in the world and plan complex interactions with other actors or the environment that correspond to high-level symbolic states. Among these is our Visual Task Planning project, in which robots learn representations of their world and use these to imagine possible futures, then use these for planning.
Specifically it is a project for creating task and motion planning algorithms that use machine learning to solve challenging problems in a variety of domains. This code provides a testbed for complex task and motion planning search algorithms.

[![CoSTAR Real Robot Data Collection](https://img.youtube.com/vi/LMqEcoYbrLM/0.jpg)](https://youtu.be/LMqEcoYbrLM "CoSTAR Real Robot Data Collection")
The goal is to describe example problems where the actor must move around in the world and plan complex interactions with other actors or the environment that correspond to high-level symbolic states. Among these is our Visual Task Planning project, in which robots learn representations of their world and use these to imagine possible futures, then use these for planning.

To run deep learning examples, you will need TensorFlow and Keras, plus a number of Python packages. To run robot experiments, you'll need a simulator (Gazebo or PyBullet), and ROS Indigo or Kinetic. Other versions of ROS may work but have not been tested. If you want to stick to the toy examples, you do not need to use this as a ROS package.

*About this repository:* CTP is a _single-repository_ project. As such, all the custom code you need should be in one place: here. There are exceptions, such as the [CoSTAR Stack](https://github.com/cpaxton/costar_stack/) for real robot execution, but these are generally not necessary. The minimal installation of CTP is just to install the `costar_models` package as a normal [python package](https://github.com/cpaxton/costar_plan/tree/master/costar_models/python) ignoring everything else.

Datasets:
- [PyBullet Block Stacking](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/simdata.tar.gz)
- [Sample Husky Data](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/husky_data.tar.gz)
- [CoSTAR Real Robot Data](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/sample_real_ur5_robot_data.tar.gz)
# CTP Datasets

Contents:
- PyBullet Block Stacking [download tar.gz](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/simdata.tar.gz)
- Sample Husky Data [download tar.gz](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/husky_data.tar.gz)
- Classic CoSTAR Real Robot Data [download tar.gz](https://github.com/cpaxton/costar_plan/releases/download/v0.6.0/sample_real_ur5_robot_data.tar.gz)
  - Early version, deprecated in favor of the full [CoSTAR Block Stacking Dataset](https://sites.google.com/site/costardataset).


# Contents
- [0. Introduction](docs/introduction.md)
- [1. Installation Guide](docs/install.md)
- [1.1 Docker Instructions](docs/docker_instructions.md)
@@ -28,7 +54,7 @@ Contents:
- [2.1 Software Design](docs/design.md): high-level notes
- [3. Machine Learning Models](docs/learning.md): using the command line tool
- [3.1 Data collection](docs/collect_data.md): data collection with a real or simulated robot
- [3.2 MARCC instructions](docs/marcc.md): learning models using JHU's MARCC cluste
- [3.2 MARCC instructions](docs/marcc.md): learning models using JHU's MARCC cluster
- [3.3 Generative Adversarial Models](docs/learning_gan.md)
- [3.4 SLURM Utilities](docs/slurm_utils.md): tools for using slurm on MARCC
- [4. Creating and training a custom task](docs/task_learning.md): overview of task representations
@@ -45,7 +71,7 @@ Contents:
- [7.2 The Real TOM](docs/tom_real_robot.md): details about parts of the system for running on the real TOM
- [8. CoSTAR Robot](docs/costar_real_robot.md): execution with a standard UR5

Package/folder layout:
# Package/folder layout
- [CoSTAR Simulation](costar_simulation/Readme.md): Gazebo simulation and ROS execution
- [CoSTAR Task Plan](costar_task_plan/Readme.md): the high-level python planning library
- [CoSTAR Gazebo Plugins](costar_gazebo_plugins/Readme.md): assorted plugins for integration
Expand All @@ -61,7 +87,11 @@ Package/folder layout:
- Others are temporary packages for various projects

Many of these sections are a work in progress; if you have any questions shoot me an email (`[email protected]`).
## Contact

This code is maintained by Chris Paxton ([email protected]).
# Contact

This code is maintained by:

- Chris Paxton ([email protected]).
- Andrew Hundt ([email protected])

111 changes: 60 additions & 51 deletions costar_hyper/README.md
@@ -99,41 +99,6 @@ plt.show()

some of those fields will vary for different use cases.

## Google Brain Grasp Dataset APIs

<img width="1511" alt="2017-12-16 surface relative transforms correct" src="https://user-images.githubusercontent.com/55744/34134058-5846b59e-e426-11e7-92d6-699883199255.png">
This version should be ready to use when generating data real training.

Plus now there is a flag to draw a circle at the location of the gripper as stored in the dataset:
![102_grasp_0_rgb_success_1](https://user-images.githubusercontent.com/55744/34133964-ccf57caa-e425-11e7-8ab1-6bba459a5408.gif)

A new feature is writing out depth image gifs:
![102_grasp_0_depth_success_1](https://user-images.githubusercontent.com/55744/34133966-d0951f28-e425-11e7-85d1-aa2706a4ba05.gif)

Image data can be resized:

![102_grasp_1_rgb_success_1](https://user-images.githubusercontent.com/55744/34430739-3adbd65c-ec36-11e7-84b5-3c3712949914.gif)

The blue circle is a visualization, not actual input, which marks the gripper stored in the dataset pose information.

Color augmentation is also available:

![102_grasp_2_rgb_success_1](https://user-images.githubusercontent.com/55744/34698561-ba2bd61e-f4a6-11e7-88d9-5091aed500fe.gif)
![102_grasp_3_rgb_success_1](https://user-images.githubusercontent.com/55744/34698564-bef2fba0-f4a6-11e7-9547-06b4410d86aa.gif)

### How to view the vrep dataset visualization

1. copy the .ttt file and the .so file (.dylib on mac) into the `costar_google_brainrobotdata/vrep` folder.
2. Run vrep with -s file pointing to the example:

```
./vrep.sh -s ~/src/costar_ws/src/costar_plan/costar_google_brainrobotdata/vrep/kukaRemoteApiCommandServerExample.ttt
```

4. vrep should load and start the simulation
5. make sure the folder holding `vrep_grasp.py` is on your PYTHONPATH
6. cd to `~/src/costar_ws/src/costar_plan/costar_google_brainrobotdata/`, or wherever you put the repository
7. run `export CUDA_VISIBLE_DEVICES="" && python2 vrep_grasp.py`

## Hyperparameter search

@@ -182,21 +147,7 @@ export CUDA_VISIBLE_DEVICES="0" && python2 costar_block_stacking_train_ranked_re

You may wish to use the `--learning_rate_schedule triangular` flag for one run and then `--learning_rate_schedule triangular2 --load_weights path/to/previous_best_weights.h5` for a second run. These learning rate schedules use the [keras_contrib](https://github.com/keras-team/keras-contrib) cyclical learning rate callback; see the [Cyclical learning rate repo](https://github.com/bckenstler/CLR) for a detailed description and paper links.
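For reference, here is a minimal sketch (not the repository's exact wiring) of how those `triangular`/`triangular2` schedules map onto the keras_contrib `CyclicLR` callback; the learning rate bounds and step size below are illustrative assumptions, not the values used by the training scripts.

```python
# Hedged sketch of the keras_contrib cyclical learning rate callback referenced above.
# base_lr, max_lr, and step_size are assumed values for illustration only.
from keras_contrib.callbacks import CyclicLR

clr = CyclicLR(
    base_lr=1e-5,       # lower bound of each learning rate cycle
    max_lr=1e-2,        # upper bound of each learning rate cycle
    step_size=2000.,    # iterations in half a cycle
    mode='triangular2'  # 'triangular' for the first run, 'triangular2' for the second
)

# Pass it alongside the other training callbacks:
# model.fit(..., callbacks=[clr] + other_callbacks)
```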

### Google Brain Grasping Dataset

To run the search execute the following command

```
export CUDA_VISIBLE_DEVICES="0" && python2 google_grasp_hyperopt.py --run_name single_prediction_all_transforms
```

Generating a hyperparameter search results summary for google brain grasping dataset classification:

```
python hyperopt_rank.py --log_dir hyperopt_logs_google_brain_classification --sort_by val_acc
```

### Cornell Dataset
## Cornell Dataset

These are instructions for training on the [cornell grasping dataset](http://pr.cs.cornell.edu/grasping/rect_data/data.php).

@@ -324,4 +275,62 @@ Here is the command to actually run k-fold training:

```
export CUDA_VISIBLE_DEVICES="0" && python cornell_grasp_train_classification.py --run_name 2018-04-08-21-04-19_s2c2hw4 --pipeline_stage k_fold
```

After it finishes running there should be a file created named `*summary.json` with your final results.
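As a quick follow-up, here is a minimal sketch for inspecting that summary file, assuming it is plain JSON; the exact keys it contains depend on the run and are not specified here.

```python
# Hedged sketch: print whatever key/value pairs the k-fold summary contains.
# Assumes the *summary.json file is plain JSON; the keys vary by run.
import glob
import json

for path in glob.glob('*summary.json'):
    with open(path) as f:
        summary = json.load(f)
    print(path)
    for key, value in sorted(summary.items()):
        print('  {}: {}'.format(key, value))
```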


## Google Brain Grasp Dataset APIs

Note: The [Google Brain Grasping Dataset](https://sites.google.com/site/brainrobotdata/home/grasping-dataset) has several important limitations which must be considered before trying it out:
- There is no validation or test dataset with novel objects
- There is no robot model available, and the robot is not commercially available
- Data is collected at 1Hz and may not be well synchronized w.r.t. time.
- The robot may move vast distances and change directions completely between frames.

<img width="1511" alt="2017-12-16 surface relative transforms correct" src="https://user-images.githubusercontent.com/55744/34134058-5846b59e-e426-11e7-92d6-699883199255.png">
This version should be ready to use when generating data for real training.

There is also a flag to draw a circle at the gripper location stored in the dataset:
![102_grasp_0_rgb_success_1](https://user-images.githubusercontent.com/55744/34133964-ccf57caa-e425-11e7-8ab1-6bba459a5408.gif)

A new feature is writing out depth image gifs:
![102_grasp_0_depth_success_1](https://user-images.githubusercontent.com/55744/34133966-d0951f28-e425-11e7-85d1-aa2706a4ba05.gif)

Image data can be resized:

![102_grasp_1_rgb_success_1](https://user-images.githubusercontent.com/55744/34430739-3adbd65c-ec36-11e7-84b5-3c3712949914.gif)

The blue circle is a visualization, not actual input; it marks the gripper pose stored in the dataset. You can see the time synchronization issue in these frames.

Color augmentation is also available:

![102_grasp_2_rgb_success_1](https://user-images.githubusercontent.com/55744/34698561-ba2bd61e-f4a6-11e7-88d9-5091aed500fe.gif)
![102_grasp_3_rgb_success_1](https://user-images.githubusercontent.com/55744/34698564-bef2fba0-f4a6-11e7-9547-06b4410d86aa.gif)

### How to view the vrep dataset visualization

1. Copy the .ttt file and the .so file (.dylib on Mac) into the `costar_google_brainrobotdata/vrep` folder.
2. Run vrep with the `-s` flag pointing to the example file:

```
./vrep.sh -s ~/src/costar_ws/src/costar_plan/costar_google_brainrobotdata/vrep/kukaRemoteApiCommandServerExample.ttt
```

3. vrep should load and start the simulation.
4. Make sure the folder holding `vrep_grasp.py` is on your PYTHONPATH.
5. cd to `~/src/costar_ws/src/costar_plan/costar_google_brainrobotdata/`, or wherever you put the repository.
6. Run `export CUDA_VISIBLE_DEVICES="" && python2 vrep_grasp.py`.


### Google Brain Grasping Dataset

To run the search, execute the following command:

```
export CUDA_VISIBLE_DEVICES="0" && python2 google_grasp_hyperopt.py --run_name single_prediction_all_transforms
```

Generating a hyperparameter search results summary for google brain grasping dataset classification:

```
python hyperopt_rank.py --log_dir hyperopt_logs_google_brain_classification --sort_by val_acc
```
8 changes: 4 additions & 4 deletions costar_hyper/cornell_grasp_train.py
Expand Up @@ -57,10 +57,10 @@ def tqdm(*args, **kwargs):
from keras.callbacks import TensorBoard
from keras.models import Model
from keras.models import model_from_json
from grasp_model import concat_images_with_tiled_vector_layer
from grasp_model import top_block
from grasp_model import create_tree_roots
from grasp_model import choose_hypertree_model
from hypertree_model import concat_images_with_tiled_vector_layer
from hypertree_model import top_block
from hypertree_model import create_tree_roots
from hypertree_model import choose_hypertree_model
from cornell_grasp_dataset_reader import parse_and_preprocess

from callbacks import EvaluateInputGenerator
4 changes: 2 additions & 2 deletions costar_hyper/costar_block_stacking_train_regression.py
Expand Up @@ -113,7 +113,7 @@ def main(_):
# load_weights = './logs_cornell/2018-07-30-21-47-16_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3/2018-07-30-21-47-16_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3-epoch-016-val_loss-0.000-val_grasp_acc-0.273.h5'
# load_weights = './logs_cornell/2018-07-09-09-08-15_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3/2018-07-09-09-08-15_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3-epoch-115-val_loss-0.000-val_grasp_acc-0.258.h5'
# use these weights for both xyz and axis angle input data
# Be careful if loading the weights below, the correct vector input data and backwards compatibility code must be in place to avoid:
# "ValueError: You are trying to load a weight file containing 13 layers into a model with 11 layers."
# load_weights = './logs_cornell/2018-08-09-11-26-03_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3/2018-08-09-11-26-03_nasnet_mobile_semantic_translation_regression_model-_img_nasnet_mobile_vec_dense_trunk_vgg_conv_block-dataset_costar_block_stacking-grasp_goal_xyz_3-epoch-003-val_loss-0.000-val_grasp_acc-0.160.h5'
# weights below are trained with data augmentation, weights 2018-07-31-21-40-50 above are actual best so far for translation as of 2018-08-12
@@ -208,7 +208,7 @@ def main(_):
print('EVAL on training data (well, a slightly hacky version) with 0 LR 0 dropout trainable False, no learning rate schedule')
learning_rate = 0.000000000001
hyperparams['dropout_rate'] = 0.000000000001
# TODO(ahundt) it seems set_trainable_layers in grasp_model.py has a bug?
# TODO(ahundt) it seems set_trainable_layers in hypertree_model.py has a bug?
# hyperparams['trainable'] = 0.00000000001
FLAGS.learning_rate_schedule = 'none'
else:
2 changes: 1 addition & 1 deletion costar_hyper/grasp_loss.py
@@ -1,5 +1,5 @@
import tensorflow as tf
from grasp_model import tile_vector_as_image_channels
from hypertree_model import tile_vector_as_image_channels
import keras
from keras import backend as K
from keras_contrib.losses import segmentation_losses