- Overview
- Preview
- Quickstart
- Input schema
- Creating a dashboard-ready "digest" file
- Running in a Docker container
- Local development
`digest` is a web dashboard for exploring subject-level availability of pipeline derivatives and phenotypic variables in a neuroimaging dataset.
It provides user-friendly options for querying data availability, along with interactive visual summaries.
`digest` supports any dataset TSV file that follows a data modality-specific schema (called a "digest" file).
`digest` is also compatible with the processing status files generated by Nipoppy.
Try out `digest` at https://digest.neurobagel.org/!
You can find correctly formatted example input files here to test out dashboard functionality.
`digest` supports long-format TSVs that contain the columns specified in the digest schemas (see also the schema README).
At the moment, each digest file is expected to correspond to one dataset.
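As a rough way to sanity-check a digest file before uploading it, you can inspect it with pandas. The sketch below is only illustrative: the file name and the column names are assumptions, and the digest schemas (and schema README) remain the authoritative reference for the required columns.

```python
import pandas as pd

# Illustrative only: the exact required columns are defined by the digest
# schemas (see the schema README); the names below are assumptions.
EXPECTED_COLUMNS = {
    "participant_id",
    "session_id",
    "pipeline_name",
    "pipeline_version",
    "status",
}

# Hypothetical file name for a long-format digest TSV
df = pd.read_csv("my_dataset_digest.tsv", sep="\t", dtype=str)

missing = EXPECTED_COLUMNS - set(df.columns)
if missing:
    print(f"Missing expected columns: {sorted(missing)}")
else:
    # In long format, each row records one participant/session/pipeline entry
    print(df.head())
```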
While `digest` accepts any TSV compliant with one of the digest schemas, the easiest way to obtain dashboard-ready files for pipeline derivative availability is to use the Nipoppy specification for organizing your neuroimaging dataset.
Nipoppy provides dataset trackers that can automatically extract subjects' imaging data and pipeline output availability, producing `digest`-compatible processing status files.
For detailed instructions to get started using Nipoppy, see the documentation.
In brief, the (mostly automated!) Nipoppy steps to generate a processing status file can be as simple as:
- Initializing an empty, Nipoppy-compliant dataset directory tree for your dataset
- Updating your Nipoppy configuration with the pipeline versions you are using, and creating a manifest file containing all available participants and sessions
- Populating the directory tree with any existing data and pipeline outputs *
- Running the tracker for the relevant pipeline(s) to generate a processing status file
*Nipoppy also provides a protocol for running processing pipelines from raw imaging data.
- To get the most recent changes, pull the `neurobagel/digest` Docker image tagged `nightly`:

  ```
  docker pull neurobagel/digest:nightly
  ```
- Currently, `digest` also relies on a local copy of the `qpn_workflows` repository, which contains ready-to-use `digest` files that are automatically generated for the Quebec Parkinson Network data:

  ```
  git clone https://github.com/neurodatascience/qpn_workflows.git
  ```
- Run `digest` and mount the `qpn_workflows` directory into the container:

  ```
  docker run -d -p 8050:8050 -v ${PWD}/qpn_workflows:/app/qpn_workflows neurobagel/digest:nightly
  ```
Now, the dashboard can be accessed at http://127.0.0.1:8050 on your local machine.
To install `digest` from the source repository, run the following in a Python environment:

```
git clone https://github.com/neurobagel/digest.git
cd digest
pip install -r requirements.txt
```
To launch the app locally:

```
python -m digest.app
```
Once the server is running, the dashboard can be accessed at http://127.0.0.1:8050/ in your browser.
`pytest` and `dash.testing` are used for testing dashboard functionality during development.
To run the tests, run the following command from the repository's root:

```
pytest tests
```
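As a rough sketch of what such a test can look like with `dash.testing`'s `dash_duo` fixture: the example below assumes that `digest.app` exposes a Dash object named `app`, and the component ID used is a placeholder rather than a real `digest` component.

```python
from dash.testing.application_runners import import_app


def test_app_starts(dash_duo):
    # Import the Dash app object from the digest package
    # (assumes the module defines a variable named `app`)
    app = import_app("digest.app")
    dash_duo.start_server(app)

    # "#upload-data" is a placeholder; replace with a real component ID
    dash_duo.wait_for_element("#upload-data", timeout=10)

    # No errors should appear in the browser console
    assert not dash_duo.get_logs()
```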