Usage:
python convert.py dataset path/to/input/files [options]
dataset One of the supported datasets {cifar10, mnist, omniglot}
path Location of the standard dataset file(s)
--timestep Timestep which will be used in the simulations. The number of spikes emitted
at each timestep can be set with --spikes_per_bin
--percent Fraction of the possible spikes (one per pixel) that should be
output (0.0 < p <= 1.0)
--output_dir Path to the output location of the generated spike files
--skip_existing Whether to skip database entries corresponding to files already found in the
output directory
--spikes_per_bin How many spikes per timestep will be emitted. Note that emitting more than one is not
standard rank-order encoding
--scaling Scaling applied to the input image (only supported by the Omniglot dataset)
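
A typical invocation might look like the following sketch; the input path is a placeholder for wherever the raw MNIST files were downloaded, and the option values are only illustrative:

```
python convert.py mnist path/to/mnist/files --percent 0.5 --timestep 1.0 \
    --spikes_per_bin 1 --output_dir ./mnist_spikes
```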
Databases in this repository are the property of their authors:
- CIFAR-10: Alex Krizhevsky, (2009). Learning Multiple Layers of Features from Tiny Images
- This database has color images, which are transformed into a YUV encoding. The conversion outputs rank-order coded spikes for each channel of each image (grayscale [Y], blue-yellow [U], red-green [V]); a sketch of this conversion and of rank-order encoding follows this list.
- MNIST: Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
- The images in the dataset are converted so that the background value is 0 and digit-region values are 255. The output is rank-order coded spikes from the 28x28 images.
- OMNIGLOT: Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.
- The images in the dataset are converted so that the background value is 0 and character-region values are greater than 0; furthermore, the images are scaled using the --scaling parameter. The output is rank-order coded spikes from the scaled images.
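
The sketch below illustrates the two transformations described above: splitting an RGB image into YUV channels (for CIFAR-10) and rank-order encoding a single channel. The function names (`rgb_to_yuv`, `encode_rank_order`) and the exact parameter handling are assumptions for illustration, not the repository's actual API; the `percent`, `timestep` and `spikes_per_bin` parameters mirror the command-line options above.

```python
import numpy as np

def rgb_to_yuv(image_rgb):
    """Split an RGB image of shape (H, W, 3) into Y, U, V channels (BT.601 weights)."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (grayscale)
    u = 0.492 * (b - y)                     # blue-yellow opponent channel
    v = 0.877 * (r - y)                     # red-green opponent channel
    return y, u, v

def encode_rank_order(channel, percent=1.0, timestep=1.0, spikes_per_bin=1):
    """Return (pixel_index, spike_time) pairs, brightest pixels first."""
    flat = np.asarray(channel, dtype=float).ravel()
    order = np.argsort(flat)[::-1]                  # pixel indices, brightest first
    n_spikes = int(np.ceil(percent * flat.size))    # keep only the requested fraction
    order = order[:n_spikes]
    bins = np.arange(order.size) // spikes_per_bin  # spikes_per_bin spikes share each bin
    times = bins * timestep                         # one bin per timestep
    return list(zip(order.tolist(), times.tolist()))

# Example: encode only the luminance channel of a random "image",
# emitting the brightest 30% of pixels, one spike per timestep.
y, u, v = rgb_to_yuv(np.random.randint(0, 256, size=(32, 32, 3)))
spikes = encode_rank_order(y, percent=0.3)
```

With `spikes_per_bin=1` each pixel gets its own timestep, which is the standard rank-order scheme; larger values pack several spikes into the same bin, as noted in the option description above.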
The (grayscale) transformation method was:
- Created by Basabdatta Sen Bhattacharya
- Implemented for the NE15 Database reported in Liu Qian, Pineda García Garibaldi, Stromatias Evangelos, Serrano-Gotarredona Teresa, Furber Steve B. (2016). Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation. Frontiers in Neuroscience