This repo contains the source files for Jupyter containers based on jupyter/docker-stacks.
⚠️ Deprecation Notice: no longer updating the Docker Hub repository ⚠️
Due to the March 2023 removal of Docker's free Teams organization and a history of price changes, images will no longer be pushed to Docker Hub.
Please use `ghcr.io/ninerealmlabs/<image-name>:<tag>` instead.
If you are unfamiliar with Docker, see our Intro to Docker tutorial.
- Create/edit a `.env` file in the project root to specify the image and the local path to be mounted, e.g. `IMG_NAME="ninerealmlabs/ds-env:python-3.10"` and `MOUNT_PATH="~/Documents"`
- Launch locally with `docker-compose up -d` from the project root
- View the URL and token with `docker logs <IMAGENAME>`
- Run `docker-compose down` to tear down the container
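Putting the quickstart steps together, a typical local session might look like this (the image name and mount path are the example values above; `<IMAGENAME>` is the name of the running container in your compose setup):

```sh
# 1. configure the image and mount path read by docker-compose
cat > .env <<'EOF'
IMG_NAME="ninerealmlabs/ds-env:python-3.10"
MOUNT_PATH="~/Documents"
EOF

# 2. start the container in the background
docker-compose up -d

# 3. find the Jupyter URL + token in the container logs
docker logs <IMAGENAME>

# 4. tear everything down when finished
docker-compose down
```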
base-env - customizes `jupyter/minimal-notebook`
└ ds-env - builds from `base-env`; catches up to `jupyter/scipy-notebook` + customizations
  ├ ts-env - adds packages for time-series analysis & forecasting
  └ nlp-env - adds packages for text analysis & NLP modeling
    └ web-env - adds packages/binaries for web scraping, including chromedriver/geckodriver binaries
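For example, to pull one of these prebuilt images directly from the registry mentioned above (the tag shown is illustrative; check the registry for available tags):

```sh
docker pull ghcr.io/ninerealmlabs/ds-env:python-3.10
```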
Images may have compatibility issues that prevent builds, depending on platform architecture and Python version.
If image dependencies change, they must be reflected in `./scripts/dependencies.txt` and `./tests/images_hierarchy.py`.
- Images are set to load JupyterLab, but the standard notebook interface is also available through the menus
- `jupytext` is included, allowing concurrent `.ipynb` and `.py` development (see the example below)
- `jupyterlab-git` allows working with git repos from within JupyterLab
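For instance, `jupytext` can pair a notebook with a plain `.py` script so the two stay in sync; a minimal sketch (the percent format is just one of the formats jupytext supports):

```sh
# pair notebook.ipynb with a percent-format .py file; edits to either can be synced
jupytext --set-formats ipynb,py:percent notebook.ipynb

# re-sync the pair after editing the .py file outside of Jupyter
jupytext --sync notebook.ipynb
```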
Images are tagged by Python version and by Python version + git hash.
Since images are automatically built on a timer, newer images may overwrite older images if there has been no new activity in the git repo.
Notes:

- `conda` pins are implemented dynamically at build time to stabilize the environment around specific constraints (sketched below):
  - Python version `{major}.{minor}`
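A minimal sketch of how such a pin can be applied during a build, assuming conda's standard `conda-meta/pinned` mechanism and the `CONDA_DIR` variable used by the upstream docker-stacks images (the actual commands in these Dockerfiles may differ):

```sh
# pin the Python minor version so later conda/mamba installs cannot change it
# (assumes CONDA_DIR points at the conda installation, as in jupyter/docker-stacks)
echo "python 3.10.*" >> "${CONDA_DIR}/conda-meta/pinned"
```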
Images are built using docker buildx, which provides support for multi-architecture/multi-platform builds.
Images are automatically tagged with the Python version and a short git hash.
Build scripts assume that the output image name is the same as the source folder name under `/src`.
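Roughly, the kind of `docker buildx` invocation the build scripts wrap looks like the sketch below (illustrative only; the actual flags, tag format, and build context live in `build.sh`):

```sh
# hypothetical manual equivalent of one scripted build
GIT_HASH="$(git rev-parse --short HEAD)"
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag "ninerealmlabs/base-env:python-3.10" \
  --tag "ninerealmlabs/base-env:python-3.10-${GIT_HASH}" \
  --push \
  ./src/base-env
```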
To update a single image, run the `build.sh` script.
Remember, images have inheritance: updating a single image will not (necessarily) update the packages inherited from the source image!

`build.sh` takes the following keyword arguments in `flag=value` format:
| short flag | long flag        | default value           |
| ---------- | ---------------- | ----------------------- |
| -p         | --platform       | linux/amd64,linux/arm64 |
| -s         | --source         | required                |
| -r         | --registry       | blank / no default      |
| -i         | --image_name     | required                |
| -v         | --python_version | 3.10.*                  |
|            | --push           | true if present         |
|            | --clean          | true if present         |
|            | --debug          | true if present         |
- `platform` defines the CPU architecture (`linux/amd64`, `linux/arm64`, etc.)
- `source` is the source image to build from. This can be provided in full `<registry>/<image>:<tag>` format.
- `registry` is the Docker Hub (or other container registry) namespace to push to.
- `image_name` is the name of the output image. The build script assumes the Dockerfile and required materials are in the `<image_name>` subdirectory under `/src`.
- `python_version` is the Python version to pin.
- If present, `--push` will push to the registry; otherwise, images will attempt to load locally.
- If present, `--clean` will remove the local images once built.
- If present, `--debug` will add debug printouts.
Notes:

- If multiple architectures are provided, it is not possible to load locally. In this case, both `--registry=<registry>` and `--push` are required.
- If `--push` is enabled, it assumes the current CLI session has `docker login` privileges (see the login sketch below).
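If pushing to GitHub Container Registry (per the deprecation notice above), a non-interactive login might look like the following; `GITHUB_TOKEN` and `<github-username>` are placeholders for your own credentials:

```sh
# authenticate the current CLI session against ghcr.io before building with --push
echo "${GITHUB_TOKEN}" | docker login ghcr.io -u <github-username> --password-stdin
```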
Examples:

```sh
# build 'base-env' locally with python 3.10
bash ./scripts/build.sh --source="jupyter/minimal-notebook" --image_name="base-env" -v="3.10.*"

# build 'ds-env' (note: this assumes that 'base-env' is also available locally); push to `ninerealmlabs` registry
bash ./scripts/build.sh --source="base-env:python-3.10" --registry="ninerealmlabs" --image_name="ds-env" --push

# build multi-arch image (_must_ push b/c of multi-arch build)
bash ./scripts/build.sh -p="linux/amd64,linux/arm64" -s="base-env:python-3.10" -r="ninerealmlabs" -i="ds-env" --push
```
To build all images in the stack, run `bash ./scripts/build-stack.sh` from the project root.
Before building:

- Consider whether to update the Python version(s) specified
- Review/update dependencies in `./scripts/dependencies.txt`
- Review/update dependencies in `./tests/images_hierarchy.py`
`build-stack.sh` takes the following keyword arguments in `flag=value` format:
| short flag | long flag  | default value            |
| ---------- | ---------- | ------------------------ |
| -p         | --platform | linux/amd64,linux/arm64  |
| -s         | --source   | jupyter/minimal-notebook |
| -r         | --registry | blank / no default       |
| -p         | --push     | true if present          |
| -c         | --clean    | true if present          |
|            | --debug    | true if present          |
- `platform` defines the CPU architecture (`linux/amd64`, `linux/arm64`, etc.)
- `registry` is the Docker Hub (or other container registry) namespace to push to; a registry must be provided.
- If present, `--push` will push to the registry; otherwise, images will load locally.
- If present, `--clean` will remove the local images once built.
- If present, `--debug` will add debug printouts.
Notes:

- If multiple architectures are provided, it is not possible to load locally. In this case, both `--registry=<registry>` and `--push` are required.
- If `--push` is enabled, it assumes the current CLI session has `docker login` privileges.
Examples:

```sh
# build all images locally - only a single platform can be used
bash ./scripts/build-stack.sh --platform="linux/amd64" --registry="ninerealmlabs"

# build all images, push to `ninerealmlabs` registry, and clean
bash ./scripts/build-stack.sh --registry="ninerealmlabs" --push --clean

# build multi-arch images (_must_ push)
bash ./scripts/build-stack.sh -p="linux/amd64,linux/arm64" -r="ninerealmlabs" --push
```
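Conceptually, `build-stack.sh` builds the images in dependency order, feeding each freshly built image in as the `--source` of the next. The chaining idea can be sketched with the documented `build.sh` flags (this is not the actual script; ordering, tags, and sources may differ):

```sh
# hedged sketch: build the stack by chaining single-image builds
# (single platform so the intermediate images can load locally)
PLATFORM="linux/amd64"
PY="3.10.*"

bash ./scripts/build.sh -p="${PLATFORM}" --source="jupyter/minimal-notebook" --image_name="base-env" -v="${PY}"
bash ./scripts/build.sh -p="${PLATFORM}" --source="base-env:python-3.10"     --image_name="ds-env"   -v="${PY}"
bash ./scripts/build.sh -p="${PLATFORM}" --source="ds-env:python-3.10"       --image_name="ts-env"   -v="${PY}"
bash ./scripts/build.sh -p="${PLATFORM}" --source="ds-env:python-3.10"       --image_name="nlp-env"  -v="${PY}"
bash ./scripts/build.sh -p="${PLATFORM}" --source="nlp-env:python-3.10"      --image_name="web-env"  -v="${PY}"
```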
Build scripts include options to push. If you build local images and later decide to push to your registry (example below):

- Tag images with `docker tag <imagename> <registry>/<imagename>:<new tag>`
- Log in with `docker login` and provide username and password/token when prompted
- Push images and all new tags to the registry with `docker push <registry>/<imagename>`
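For example (the image name, registry, and tag here are illustrative):

```sh
# retag a locally built image for the target registry
docker tag ds-env:python-3.10 ghcr.io/ninerealmlabs/ds-env:python-3.10

# after `docker login ghcr.io`, push the retagged image
docker push ghcr.io/ninerealmlabs/ds-env:python-3.10
```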
See also `tag-and-push.sh`.