Usage possibility with inference done beforehand #856
-
Hi there,

As recommended in other discussion topics, it's better to run the MONAI Label server on a GPU. My use case is that my team has to run the server on a GCP cloud VM and make it accessible to a team of radiologists doing annotations remotely. We love the semi-automatic annotations, since they reduce annotation time significantly, but we wish to make the workflow non-interactive. We do have the cloud provision, but we cannot afford to keep a GPU running all day (for about a month) while the annotators complete the annotations. We could, however, keep a CPU running continuously for some time, and we wish to deploy the MONAI Label server on it.

I wanted to ask whether we can do the inference in bulk beforehand and simply place the labels in a directory from which the MONAI Label server can load the annotations, without having a model run inference every time a user needs the pre-generated annotations to review and modify if necessary. Our project's scope does not include active learning or training; we want the MONAI Label server merely to help with annotation by quickly loading pre-generated inference results. We have the flexibility of using either tool, 3D Slicer or OHIF, based on whichever supports the workflow described above.
-
Hi @Nachimak28,

Thanks for opening this discussion; this is a good question.

You can run batch inference right after you start the server. To do this, make a POST call for batch inference using the browser or a bash script. Batch inference saves the predictions in the labels/original folder.

Here is a bash script that does exactly that: https://github.com/Project-MONAI/MONAILabel/blob/bratsSegmentation/sample-apps/radiology/start_server.sh

Hope that helps,
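If it helps, here is a minimal sketch of what such a script could look like: loop over the image IDs in the datastore and issue one POST per image, so every prediction exists before the annotators connect. The `SERVER` and `MODEL` values and the image IDs are placeholders, and the `/infer/<model>?image=<id>` endpoint is an assumption on my part; check the running server's `/docs` (Swagger) page for the exact routes your MONAI Label version exposes.

```shell
#!/bin/bash
# Sketch of a batch-inference driver (hypothetical endpoint and names).
SERVER="${SERVER:-http://127.0.0.1:8000}"
MODEL="${MODEL:-segmentation}"

# Issue one inference request per image ID so predictions are generated
# up front instead of on each annotator request. DRY_RUN=1 (the default
# here) only prints the requests; set DRY_RUN=0 to actually call curl.
batch_infer() {
  for image in "$@"; do
    url="${SERVER}/infer/${MODEL}?image=${image}"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "POST ${url}"
    else
      curl -s -X POST "${url}" -o /dev/null
    fi
  done
}

batch_infer image_01 image_02
```

Once the predictions are in labels/original, annotators who open a study in 3D Slicer or OHIF should get the pre-generated label loaded from the datastore, with no GPU needed at annotation time.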