Merge branch 'master' into add-ImgaeBind-classifier
k-okada authored Nov 7, 2023
2 parents e04d891 + 7937653 commit 05dcf70
Showing 6 changed files with 57 additions and 32 deletions.
46 changes: 23 additions & 23 deletions doc/install_chainer_gpu.rst → doc/install_chainer_gpu.md
Version Compatibilities for 18.04
---------------------------------

(Recommended) Use CUDA 9.1 from Official ubuntu repository (https://packages.ubuntu.com/bionic/nvidia-cuda-dev)
(Recommended) Use CUDA 9.1 from Official ubuntu repository [https://launchpad.net/ubuntu/bionic/+package/nvidia-cuda-dev](https://launchpad.net/ubuntu/bionic/+package/nvidia-cuda-dev)

- Chainer

- chainer == 6.7.0 (last version supporting python2. See https://github.com/chainer/chainer/releases/tag/v6.7.0)
- cupy-cuda91 == 6.7.0 (chainer v6.7.0 requires cupy/cudnn for hardware acceleration support https://docs.chainer.org/en/v6.7.0/install.html)
- chainer == 6.7.0 (last version supporting python2. See [https://github.com/chainer/chainer/releases/tag/v6.7.0](https://github.com/chainer/chainer/releases/tag/v6.7.0))
- cupy-cuda91 == 6.7.0 (chainer v6.7.0 requires cupy/cudnn for hardware acceleration support [https://docs.chainer.org/en/v6.7.0/install.html](https://docs.chainer.org/en/v6.7.0/install.html))

- PyTorch

- pytorch == 1.1.0 (Latest pytorch version supporting CUDA 9.1 https://download.pytorch.org/whl/cu90/torch_stable.html)
- CUDA >= 9.0 (Minimum required version for PyTorch 1.1.0 https://pytorch.org/get-started/previous-versions/#v110)
- pytorch == 1.1.0 (Latest pytorch version supporting CUDA 9.1 [https://download.pytorch.org/whl/cu90/torch_stable.html](https://download.pytorch.org/whl/cu90/torch_stable.html))
- CUDA >= 9.0 (Minimum required version for PyTorch 1.1.0 [https://pytorch.org/get-started/previous-versions/#v110](https://pytorch.org/get-started/previous-versions/#v110))

(Experimental) Use CUDA 10.2 from Nvidia Developer's site (https://developer.nvidia.com/cuda-10.2-download-archive)
(Experimental) Use CUDA 10.2 from Nvidia Developer's site [https://developer.nvidia.com/cuda-10.2-download-archive](https://developer.nvidia.com/cuda-10.2-download-archive)

- Chainer

- chainer == 6.7.0 (last version supporting python2. See https://github.com/chainer/chainer/releases/tag/v6.7.0)
- cupy >=6.7.0,<7.0.0 (chainer v6.7.0 requires cupy/cudnn for hardware acceleration support https://docs.chainer.org/en/v6.7.0/install.html)
- chainer == 6.7.0 (last version supporting python2. See [https://github.com/chainer/chainer/releases/tag/v6.7.0](https://github.com/chainer/chainer/releases/tag/v6.7.0))
- cupy >=6.7.0,<7.0.0 (chainer v6.7.0 requires cupy/cudnn for hardware acceleration support [https://docs.chainer.org/en/v6.7.0/install.html](https://docs.chainer.org/en/v6.7.0/install.html))
- cuDNN < 8 (cupy 6.7.0 requires cuDNN >= v5000 and <= v7999)
- CUDA 10.2 (cuDNN v7.6.5 requires CUDA 10.2 https://developer.nvidia.com/rdp/cudnn-archive)
- CUDA 10.2 (cuDNN v7.6.5 requires CUDA 10.2 [https://developer.nvidia.com/rdp/cudnn-archive](https://developer.nvidia.com/rdp/cudnn-archive))

- PyTorch

- pytorch >= 1.4.0
- CUDA >= 9.2 (Minimum required version for PyTorch https://pytorch.org/get-started/previous-versions/#v140)
- Driver Version >= 396.26 (From CUDA Toolkit and Corresponding Driver Versions in https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)
- CUDA >= 9.2 (Minimum required version for PyTorch [https://pytorch.org/get-started/previous-versions/#v140](https://pytorch.org/get-started/previous-versions/#v140))
- Driver Version >= 396.26 (From CUDA Toolkit and Corresponding Driver Versions in [https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html))

Install CUDA
------------

- Ubuntu 14.04 : Download deb file from https://developer.nvidia.com/cuda-downloads?target_os=Linux::
- Ubuntu 14.04 : Download deb file from [https://developer.nvidia.com/cuda-downloads?target_os=Linux](https://developer.nvidia.com/cuda-downloads?target_os=Linux):

```bash
# If you'd like to use CUDA8.0 on Ubuntu 14.04.
sudo apt-get install cuda
```
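The folded commands follow the usual NVIDIA deb-repository pattern; a minimal sketch, with the repository package name left as a placeholder for the file you actually downloaded:

```bash
# Register the downloaded CUDA repo package (placeholder file name), then refresh apt
# before running the `sudo apt-get install cuda` step shown above.
sudo dpkg -i cuda-repo-ubuntu1404_<version>_amd64.deb
sudo apt-get update
```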

- Add below to your `~/.bashrc`::
- Add below to your `~/.bashrc`:

```bash
# setup cuda & cudnn
```
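The exports hidden by the fold above typically boil down to the standard CUDA environment setup; a minimal sketch assuming the default `/usr/local/cuda` install prefix:

```bash
# Make the CUDA compiler and libraries visible to your shell (default install prefix assumed).
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Optional: helps source builds (e.g. CuPy) find the CUDA/cuDNN headers.
export CPATH=/usr/local/cuda/include:$CPATH
```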


- Ubuntu 16.04 : Download deb file from https://developer.nvidia.com/cuda-downloads?target_os=Linux::
- Ubuntu 16.04 : Download deb file from [https://developer.nvidia.com/cuda-downloads?target_os=Linux](https://developer.nvidia.com/cuda-downloads?target_os=Linux):

```bash
# If you'd like to use CUDA9.2 on Ubuntu 16.04.
sudo apt install nvidia-cuda-toolkit
sudo apt install nvidia-cuda-dev
```
- (Experimental) Ubuntu 18.04 : CUDA 10.2 is the latest version which supports `jsk_perception`. Download deb file from https://developer.nvidia.com/cuda-downloads?target_os=Linux::
- (Experimental) Ubuntu 18.04 : CUDA 10.2 is the latest version which supports `jsk_perception`. Download deb file from https://developer.nvidia.com/cuda-downloads?target_os=Linux:
```bash
# If you'd like to use CUDA10.2 on Ubuntu 18.04.
```
Install CUDNN
-------------

- If you install CuPy with `pip install cupy-cuda91`, you do not need to install CUDNN manually (c.f. https://github.com/jsk-ros-pkg/jsk_visualization/issues/809). Thus, a default 18.04 user can use CUDA 9.1 and `cupy-cuda91==6.7.0` for `chainer==6.7.0`, and can SKIP this section.
- If you install CuPy with `pip install cupy-cuda91`, you do not need to install CUDNN manually (c.f. [https://github.com/jsk-ros-pkg/jsk_visualization/issues/809](https://github.com/jsk-ros-pkg/jsk_visualization/issues/809)). Thus, a default 18.04 user can use CUDA 9.1 and `cupy-cuda91==6.7.0` for `chainer==6.7.0`, and can SKIP this section.

Installing CUDNN manually is only required for experimental users who install CUDA 10.2 manually.

- You need to login at https://developer.nvidia.com/cudnn
- You need to login at [https://developer.nvidia.com/cudnn](https://developer.nvidia.com/cudnn)
- Go to cuDNN Download and choose version
- Download deb files of cuDNN Runtime Library and cuDNN Developer Library
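Once downloaded, both debs are installed with `dpkg`; a minimal sketch, using wildcards in place of the exact cuDNN 7.6.5 file names:

```bash
# Runtime library first, then the developer package (headers) needed when building CuPy from source.
sudo dpkg -i libcudnn7_*.deb
sudo dpkg -i libcudnn7-dev_*.deb
```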

Install Cupy
------------

- (Default) Chainer 6.7.0 requires CuPy 6.7.0, and if you have CUDA 9.1 you can use the pre-compiled CuPy binary package.


- Pre-compiled Install Cupy for CUDA 9.1 ::
- Pre-compiled Install Cupy for CUDA 9.1 :

```bash
sudo pip install cupy-cuda91==6.7.0
```
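To confirm that Chainer picks up CuPy and the GPU, a quick sanity check (assumes the pinned versions above):

```bash
# Should report Chainer 6.7.0, CuPy 6.7.0 and your CUDA/cuDNN versions.
python -c "import chainer; chainer.print_runtime_info()"
# Prints True when a usable GPU is detected.
python -c "import chainer; print(chainer.backends.cuda.available)"
```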

- (Experimental) If you have a newer CUDA version, you need to install CuPy from the source distribution. This requires CUDNN to be installed before you run `pip install cupy`.

- Source Install Cupy for CUDA 10.2 ::
- Source Install Cupy for CUDA 10.2 :

```bash
sudo pip install -vvv cupy --no-cache-dir
```

Install PyTorch
---------------

- 18.04 provides CUDA 9.1 by default. To install PyTorch compatible with this version, download the following wheel from https://download.pytorch.org/whl/cu90/torch_stable.html, and install it manually.
- 18.04 provides CUDA 9.1 by default. To install PyTorch compatible with this version, download the following wheel from [https://download.pytorch.org/whl/cu90/torch_stable.html](https://download.pytorch.org/whl/cu90/torch_stable.html), and install it manually.

```bash
sudo pip install torch-1.1.0-cp27-cp27mu-linux_x86_64.whl
```

- (Experimental) If you use a newer CUDA version, install a newer PyTorch from PyPI:

```bash
sudo pip install torch==1.4.0
```
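In either case you can check that the installed PyTorch sees the GPU:

```bash
# Prints the installed version and True when CUDA is usable.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```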

- See https://github.com/jsk-ros-pkg/jsk_recognition/pull/2601#issuecomment-876948260 for more info.
- See [https://github.com/jsk-ros-pkg/jsk_recognition/pull/2601#issuecomment-876948260](https://github.com/jsk-ros-pkg/jsk_recognition/pull/2601#issuecomment-876948260) for more info.

Try Chainer Samples
-----------

You can try to run samples to check if the installation succeeded::
You can try to run samples to check if the installation succeeded:

roslaunch jsk_perception sample_fcn_object_segmentation.launch gpu:=0
roslaunch jsk_perception sample_people_pose_estimation_2d.launch GPU:=0
Try PyTorch Samples
-----------

You can try to run samples to check if the installation succeeded::
You can try to run samples to check if the installation succeeded:

roslaunch jsk_perception sample_hand_pose_estimation_2d.launch gpu:=0

4 changes: 3 additions & 1 deletion doc/jsk_perception/nodes/classification_node.md
## Dynamic Reconfigure Parameters
* `~queries` (string, default: `human;kettle;cup;glass`)

Default categories used for the subscribed image topic.

You can send multiple queries by separating them with semicolons.
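For example, the queries can be updated at runtime with `dynparam`; the node name `/classification` below is an assumption, adjust it to your launch configuration:

```bash
# Hypothetical node name; semicolons separate the individual queries.
rosrun dynamic_reconfigure dynparam set /classification queries "human;bottle;laptop;chair"
```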

### Run inference container on another host or another terminal
Now you can use CLIP or ImageBind.
7 changes: 6 additions & 1 deletion doc/jsk_perception/nodes/vqa_node.md
* `~questions` (string, default: `what does this image describe?`)

Default questions used for the subscribed image topic.


You can send multiple questions by separating them with semicolons, as below.
```
What does this image describe?;What kinds of objects exist?
```
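As with the classification node's `~queries`, the questions can be changed at runtime via dynamic reconfigure; the node name `/vqa` below is an assumption:

```bash
rosrun dynamic_reconfigure dynparam set /vqa questions "What does this image describe?;What kinds of objects exist?"
```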

## Sample

### Run inference container on another host or another terminal
2 changes: 2 additions & 0 deletions jsk_perception/launch/classification.launch
<arg name="run_api" default="false" />
<arg name="model" default="clip" />
<arg name="CLASSIFICATION_INPUT_IMAGE" default="image" />
<arg name="image_transport" default="raw" />

<node name="classification_api" pkg="jsk_perception" type="run_jsk_vil_api" output="log"
args="(arg model) -p $(arg port)" if="$(arg run_api)" />
Expand All @@ -16,6 +17,7 @@
host: $(arg host)
port: $(arg port)
model: $(arg model)
image_transport: $(arg image_transport)
</rosparam>
</node>
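With the new argument, the node can consume a compressed stream by passing `image_transport:=compressed` at launch time; the same flag is added to `vqa.launch` in this commit. A usage sketch (the camera topic is a placeholder):

```bash
roslaunch jsk_perception classification.launch \
    image_transport:=compressed \
    CLASSIFICATION_INPUT_IMAGE:=/camera/rgb/image_rect_color
```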

2 changes: 2 additions & 0 deletions jsk_perception/launch/vqa.launch
<arg name="gui" default="false" />
<arg name="run_api" default="false" />
<arg name="VQA_INPUT_IMAGE" default="vqa_image" />
<arg name="image_transport" default="raw" />

<node name="ofa_api" pkg="jsk_perception" type="run_jsk_vil_api" output="log"
args="ofa -p $(arg port)" if="$(arg run_api)" />
<rosparam subst_value="true">
host: $(arg host)
port: $(arg port)
image_transport: $(arg image_transport)
</rosparam>
</node>

28 changes: 21 additions & 7 deletions jsk_perception/src/jsk_perception/vil_inference_client.py
# default inference image
self.default_img = None
# ROS
self.image_sub = rospy.Subscriber("~image", Image,
callback=self.topic_cb,
queue_size=1,
buff_size=2**26)
self.transport_hint = rospy.get_param('~image_transport', 'raw')
if self.transport_hint == 'compressed':
self.image_sub = rospy.Subscriber(
"{}/compressed".format(rospy.resolve_name('~image')),
CompressedImage,
callback=self.topic_cb,
queue_size=1,
buff_size=2**26
)

else:
self.image_sub = rospy.Subscriber("~image", Image,
callback=self.topic_cb,
queue_size=1,
buff_size=2**26)
self.result_topic_type = result_topic
self.result_pub = rospy.Publisher("~result", result_topic, queue_size=1)
self.image_pub = rospy.Publisher("~result/image", Image, queue_size=1)
if self.transport_hint == 'compressed':
self.image_pub = rospy.Publisher("~result/image/compressed", CompressedImage, queue_size=1)
else:
self.image_pub = rospy.Publisher("~result/image", Image, queue_size=1)
self.vis_pub = rospy.Publisher("~visualize", String, queue_size=1)
self.action_server = actionlib.SimpleActionServer("~inference_server",
action,
vis_msg = ""
for i, label in enumerate(msg.label_names):
vis_msg += "{}: {:.2f}% ".format(label, msg.probabilities[i]*100)
vis_msg += "\n"
vis_msg += "\n\nCosine Similarity\n"
for i, label in enumerate(msg.label_names):
vis_msg += "{}: {:.2f}% ".format(label, msg.label_proba[i]*100)
vis_msg += "{}: {:.4f} ".format(label, msg.label_proba[i])
self.vis_pub.publish(vis_msg)

def create_queries(self, goal):
