Add README on hack/ folder resources
Contains a few short explanatory sentences and instructions on the
various deployments and when/how to use them, including cleanup for the
S3 volume.

Also contains a mock credential file for running zero-config integration
tests on blockstore puts (with `LakeFSFileSystem.put_file_to_blockstore`)
in the case of S3 via SeaweedFS.
nicholasjng committed Oct 25, 2023
1 parent 8d49758 commit 7110ee9
Showing 1 changed file: hack/README.md (64 additions, 0 deletions)
# Useful resources for development on `lakefs-spec`

This document contains information on the resources in this directory, and how they can be used in development and testing.

## `docker-compose.yml` - local lakeFS quickstart instance

This Docker Compose file bootstraps a local [lakeFS quickstart instance](https://docs.lakefs.io/quickstart/launch.html).
It does not come with an associated volume, so you can spin it up and down without creating dangling resources.

Requirements:
* A Docker runtime & CLI with `docker compose`.

To bootstrap, run the following command:

```shell
docker compose -f hack/docker-compose.yml up
```

To stop the container again, run

```shell
docker compose -f hack/docker-compose.yml down
```

## `lakefs-s3-local.yml` - lakeFS with a local, SeaweedFS-backed S3 blockstore

To simulate a lakeFS deployment with a remote blockstore, the `lakefs-s3-local.yml` Docker Compose file contains a
recipe that mocks an S3 blockstore using SeaweedFS.

To bootstrap this setup, run the command

```shell
docker compose -f hack/lakefs-s3-local.yml up
```

To stop the containers again, run

```shell
docker compose -f hack/lakefs-s3-local.yml down
```

To clean up the created volume, e.g. when you want to remove leftover storage namespaces after repository deletions,
tear down the deployment like so:

```shell
docker compose -f hack/lakefs-s3-local.yml rm
docker volume rm hack_seaweedfs-data
```
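As an alternative to the two-step cleanup above, Docker Compose can remove containers and named volumes in a single command; this is a standard Compose flag, and assumes the SeaweedFS volume is declared in `lakefs-s3-local.yml`:

```shell
# Stop the containers and remove the named volumes declared in the
# Compose file (including the SeaweedFS data volume) in one step.
docker compose -f hack/lakefs-s3-local.yml down --volumes
```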

## Mock credentials for direct blockstore writes with the `LakeFSFileSystem`

In order to write to the local S3 blockstore using `LakeFSFileSystem.put_file_to_blockstore`, you can use the following
mock credentials:

```shell
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<EOL
[default]
endpoint_url = http://localhost:9001
aws_access_key_id = sandbox
aws_secret_access_key = sandbox
EOL
```

Beware that this overwrites any existing `$HOME/.aws/credentials` file, so back it up first if you have one.
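If you would rather not touch `$HOME/.aws/credentials` at all, a sketch of an alternative is to keep the mock credentials in a separate file and point the AWS SDKs at it via the standard `AWS_SHARED_CREDENTIALS_FILE` environment variable (the temporary file path below is arbitrary):

```shell
# Keep the mock credentials out of ~/.aws/credentials entirely by
# writing them to a throwaway file and pointing the AWS SDK/CLI at it.
creds_file="$(mktemp -d)/seaweedfs-credentials"

cat > "$creds_file" <<EOL
[default]
endpoint_url = http://localhost:9001
aws_access_key_id = sandbox
aws_secret_access_key = sandbox
EOL

export AWS_SHARED_CREDENTIALS_FILE="$creds_file"
```

Tools built on the AWS SDKs, as well as the AWS CLI, honor this variable, so the mock credentials are picked up without clobbering your real ones.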

0 comments on commit 7110ee9

Please sign in to comment.