- Assert `openssl` is installed
- Assert `go version` >= 1.6
- Assert `go env` contains `GO15VENDOREXPERIMENT="1"` (dependencies are tracked using vendoring)
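The checks above can be scripted. The following is a sketch; the `version_ge` helper is not part of the repository, it just compares dotted version strings via `sort -V`:

```shell
# Sketch: check the prerequisites listed above.
# version_ge A B succeeds if dotted version A >= B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

command -v openssl >/dev/null 2>&1 || echo "openssl is missing"

go_version=$(go version 2>/dev/null | sed 's/^go version go\([0-9.]*\).*/\1/')
version_ge "$go_version" 1.6 || echo "need Go >= 1.6, found: ${go_version:-none}"

go env 2>/dev/null | grep -q 'GO15VENDOREXPERIMENT="1"' || echo "vendoring is not enabled"
```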
```
cd ~
mkdir gospace
cd gospace
export GOPATH=~/gospace   # you might want to put this into your .bashrc
go get github.com/KIT-MAMID/mamid
go get -u github.com/jteeuwen/go-bindata/...
cd $GOPATH/src/github.com/KIT-MAMID/mamid
git submodule update --init
```
Running the tests requires a PostgreSQL instance with permission to `CREATE DATABASE` and `DROP DATABASE`.
It is recommended to run this test instance in a Docker container separate from the one used by the `make testbed_*` targets:
```
docker run --name mamid-postgres-tests -p 5432:5432 -e POSTGRES_PASSWORD=foo1 -d postgres
# You should now be able to connect to the container using the password above
psql -h localhost -U postgres
# Run the tests by setting the appropriate DSN environment variable
MAMID_TESTDB_DSN="host=localhost port=5432 user=postgres password=foo1 sslmode=disable dbname=postgres" make test-verbose
```
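The DSN is a space-separated list of `key=value` settings. A small sketch that assembles it from variables (the variable names are illustrative, only `MAMID_TESTDB_DSN` matters to the Makefile) makes individual settings easier to override:

```shell
# Sketch: build the test-database DSN from its parts.
# DB_* names are illustrative; only MAMID_TESTDB_DSN is read by make.
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASS=foo1
MAMID_TESTDB_DSN="host=$DB_HOST port=$DB_PORT user=$DB_USER password=$DB_PASS sslmode=disable dbname=postgres"
echo "$MAMID_TESTDB_DSN"
# then run: MAMID_TESTDB_DSN="$MAMID_TESTDB_DSN" make test-verbose
```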
The Makefile includes targets to create a cluster test environment in docker:
- `make testbed_down` kills all containers.
- `make testbed_up` (re)spawns the master, postgres, the notifier and three slaves. This deletes all data from the previous run.

If you want to spawn more than three slaves, you can adjust the `TESTBED_SLAVE_COUNT` variable in the Makefile.
Before you can start the cluster, some configuration is required.
Communication between the master and the slaves is encrypted and authenticated using certificates, which have to be signed by a local CA.
You will have to create a testing CA and certificates for all containers on your host machine. The Makefile target then mounts these into the containers.
You can generate the CA using

```
./scripts/generateCA.sh
```

The CA public and private keys will be written to `ca/mamid.pem` and `ca/private/mamid_private.pem`.
You further have to create certificates for the master and the slaves:

```
./scripts/generateAndSignSlaveCert.sh master 10.101.202.1
```
You can generate certificates for multiple slaves at once:

```
read -s -p "Enter CA signing key passphrase: " CA_PASS
export CA_PASS
for i in $(seq -f %02g 1 20); do ./scripts/generateAndSignSlaveCert.sh slave${i} 10.101.202.1${i}; done
```
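If you are curious what such scripts boil down to, creating a local CA and signing a per-slave certificate are standard `openssl` operations. The following self-contained sketch works in a throwaway directory; all file names and subjects are illustrative, the repository scripts are authoritative (and, unlike this sketch, handle details such as IP subject alternative names):

```shell
# Sketch of a local CA signing a slave certificate with plain openssl.
# All names below are illustrative; the repo scripts are the real thing.
dir=$(mktemp -d)

# 1. Create a self-signed CA (key + certificate).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=mamid-test-ca" \
    -keyout "$dir/ca_key.pem" -out "$dir/ca.pem" -days 1 2>/dev/null

# 2. Create a key and a certificate signing request for a slave.
openssl req -newkey rsa:2048 -nodes -subj "/CN=slave01" \
    -keyout "$dir/slave01_key.pem" -out "$dir/slave01.csr" 2>/dev/null

# 3. Sign the CSR with the CA.
openssl x509 -req -in "$dir/slave01.csr" -CA "$dir/ca.pem" \
    -CAkey "$dir/ca_key.pem" -CAcreateserial \
    -out "$dir/slave01.pem" -days 1 2>/dev/null

# 4. Verify the certificate chain; this should print "... OK".
openssl verify -CAfile "$dir/ca.pem" "$dir/slave01.pem"
```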
Configuring the notifier is optional; if this step is skipped, the notifier will exit immediately.
Go into the `notifier` directory and create a `config.ini` file:
```
[notifier]
api_host=http://10.101.202.1:8080
contacts=contacts.ini
[smtp]
relay_host=mail.foo.bar:25 # your local smarthost
[email protected]
```
and a `contacts.ini` file:

```
[person-name]
[email protected]
```
You should then receive an email for every problem.
If you now start the testbed using `make testbed_up`, you should have a running instance of the master at `10.101.202.1:8080`.
You can now continue by adding and enabling some slaves in the GUI or using the test fixtures script:

```
./scripts/fixtures.py -c -n <number of slaves specified in TESTBED_SLAVE_COUNT>
```
The slaves should be active and not generate any problems.
Continue by adding a Replica Set using the GUI. After you click "Create", Mongods will be assigned to the Replica Set.
They will initially be in the destroyed state and a problem might appear, but after about 30 seconds the problem should have disappeared and the Mongods should be in the running state.
You can kill a slave using `docker stop slave02`. This should generate a problem.
If you then set the killed slave to disabled (and there is still a free slave of the same persistence type left), a new Mongod should be added to the Replica Set.
Note: The Mongod on the killed slave will not be removed, as MAMID cannot communicate with the slave. For it to be removed, you have to either restart the slave or delete the slave using the GUI.
Note: For the addition of the new Mongod to work, the initial Replica Set has to consist of at least three members. Otherwise the Replica Set will not be able to elect a new primary, and MAMID cannot configure the Replica Set, as this can only happen through the primary (without force).
To fix this, the administrator has to log in to the remaining secondary member and force the adding of the new Mongod, e.g.:

```
mongo 10.101.202.101:18080
use admin
db.auth("mamid", "<password, see system tab>")
conf = rs.conf()
conf.members.push({"_id": <id from 0 to 255 not used in config yet>, "host": "<new host:port>"})
rs.reconfig(conf, {force: true})
```
To simulate a temporary disconnection of a slave, e.g. by a broken network cable, you can disconnect the container from the network using

```
docker network disconnect mamidnet0 slave01
```

This should generate a problem.
When you reattach the container to the network using

```
docker network connect --ip 10.101.202.101 mamidnet0 slave01
```

the problem should disappear.
If you increase the member counts of a Replica Set new members should appear and be added to the Replica Set.
If not enough free ports are available a problem should appear.
If you click the "Remove Replica Set" button in the Replica Set detail view, the Replica Set should disappear and its Mongods and their data should be destroyed.