added docs, decoding colour sort-of works
chanokin committed Feb 13, 2020
1 parent 0ce1fbf commit 3831649
Showing 9 changed files with 238 additions and 364 deletions.
5 changes: 4 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -3,7 +3,7 @@

**Usage:**

`python convert.py dataset path/to/input/files [options]`
python convert.py dataset path/to/input/files [options]

--timestep Timestep which will be used in the simulations. How many spikes will be emitted
at each timestep can be set with --spikes_per_bin
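The rank-order scheme these two options control can be sketched as follows. `rank_order_encode` is a hypothetical helper for illustration, not a function from this repository, and assumes one spike source per pixel:

```python
import numpy as np

def rank_order_encode(image, timestep=1.0, spikes_per_bin=1):
    """Emit one spike per pixel, ordered from brightest to dimmest.

    With spikes_per_bin=1 this is standard rank-order coding: each
    timestep bin carries exactly one spike.
    """
    flat = image.flatten()
    order = np.argsort(-flat)  # brightest pixel fires first
    return [(int(idx), (rank // spikes_per_bin) * timestep)
            for rank, idx in enumerate(order)]

# In a 2x2 image, pixel 1 is brightest, so it spikes first (t = 0.0).
spikes = rank_order_encode(np.array([[0.1, 0.9], [0.5, 0.3]]))
```

Raising `--spikes_per_bin` above 1 packs several ranks into the same timestep, which is why the help text warns it departs from standard rank-order encoding.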
@@ -22,8 +22,11 @@
1. ___Databases___ in this repository are the property of their authors:

* __[CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)__: [Alex Krizhevsky, (2009). Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
* This database has color images, which are transformed into a YUV encoding. The conversion outputs rank-order coded spikes for each of the images (grayscale [Y], blue-yellow [U], red-green [V])
* __[MNIST](http://yann.lecun.com/exdb/mnist/)__: [Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.](http://yann.lecun.com/exdb/publis/index.html#lecun-98) Proceedings of the IEEE, 86(11):2278-2324, November 1998.
* The images in the dataset are converted so that the background value is 0 and digit-region values are 255. The output is rank-order coded spikes from the 28x28 images.
* __[OMNIGLOT](https://github.com/brendenlake/omniglot)__: [Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction.](http://www.sciencemag.org/content/350/6266/1332.short) _Science_, 350(6266), 1332-1338.
* The images in the dataset are converted so that the background value is 0 and character-region values are greater than 0; furthermore, the images are scaled using the `--scaling` parameter. The output is rank-order coded spikes from the scaled images.

2. The (grayscale) ___transformation method___ was:
* Created by __[Basabdatta Sen Bhattacharya](https://sites.google.com/site/bsenbhattacharya/)__
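The YUV channel split described for CIFAR-10 above can be sketched with a plain RGB-to-YUV transform. This is an illustration using the standard BT.601 weights, not necessarily the exact `WRED`/`WGREEN`/`WBLUE` constants the repository uses:

```python
import numpy as np

# BT.601 luma weights; the repository's WRED/WGREEN/WBLUE may differ.
WR, WG, WB = 0.299, 0.587, 0.114

def rgb_to_yuv(rgb):
    """Split an RGB image (floats in [0, 1]) into the three planes the
    README names: grayscale (Y), blue-yellow (U) and red-green (V)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = WR * r + WG * g + WB * b
    u = 0.492 * (b - y)  # blue-yellow opponent channel
    v = 0.877 * (r - y)  # red-green opponent channel
    return y, u, v

# Pure white has full luma and no chroma in either opponent channel.
y, u, v = rgb_to_yuv(np.ones((2, 2, 3)))
```

Each of the three planes would then be rank-order encoded independently, which is why the converter emits three spike streams per colour image.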
1 change: 1 addition & 0 deletions cifar_convert.py
@@ -81,6 +81,7 @@ def cifar_convert(data, out_dir, timestep, spikes_per_bin=1, skip_existing=True)
kernels=FOCAL_S.kernels.full_kernels,
bmg_image=bmg, bmg_spikes=bmg_spikes, bmg_spk_src=bmg_spk_src,
rmg_image=rmg, rmg_spikes=rmg_spikes, rmg_spk_src=rmg_spk_src,
wred=WRED, wgreen=WGREEN, wblue=WBLUE,
)

print("\tDone with batch!\n")
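The new `wred`/`wgreen`/`wblue` keyword arguments suggest the colour weights are now stored alongside the spike data. Assuming the surrounding call is a NumPy `savez`-style archive (an assumption; the call site is truncated in this diff), the fields can be written and read back like this:

```python
import io
import numpy as np

# Hypothetical stand-ins for the constants the diff adds to the archive.
WRED, WGREEN, WBLUE = 0.299, 0.587, 0.114

buf = io.BytesIO()  # in-memory file, standing in for the output .npz
np.savez(buf, wred=WRED, wgreen=WGREEN, wblue=WBLUE,
         bmg_spikes=np.zeros((3, 2)))
buf.seek(0)

data = np.load(buf)
weights = (float(data['wred']), float(data['wgreen']), float(data['wblue']))
```

Storing the weights with the spikes lets a decoder invert the colour encoding without hard-coding the converter's constants.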
22 changes: 11 additions & 11 deletions color_encoding_notebooks/color_encoding_yuv.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:bd7595ac957ae27bcfbde87f0b47068141d34b30e8a8aafb55c7a9a18d43eca5"
"signature": "sha256:033d10c0cc4f1d0aefd3313964b60ed7fc2f64373f118ab69e97abdfabe094c8"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -17,7 +17,7 @@
"import matplotlib.pyplot as plt\n",
"import os\n",
"import glob\n",
"from focal import Focal, focal_to_spike, spike_trains_to_images_g\n",
"# from focal import Focal, focal_to_spike, spike_trains_to_images_g\n",
"from scipy.signal import convolve2d\n",
"from scipy import misc\n",
"import cv2\n",
@@ -88,7 +88,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
"image_files = sorted( glob.glob('./test_images/*.png') )\n",
"image_files = sorted( glob.glob('../test_images/*.png') )\n",
"face = cv2.cvtColor( cv2.imread(image_files[7]), cv2.COLOR_BGR2RGB ).astype('float')\n",
"\n",
"face = misc.face() / 255.0\n",
@@ -230,7 +230,7 @@
]
}
],
"prompt_number": 7
"prompt_number": 6
},
{
"cell_type": "code",
@@ -239,7 +239,7 @@
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 7
"prompt_number": 6
},
{
"cell_type": "code",
@@ -258,7 +258,7 @@
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 8
"prompt_number": 7
},
{
"cell_type": "code",
@@ -298,7 +298,7 @@
]
}
],
"prompt_number": 23
"prompt_number": 8
},
{
"cell_type": "code",
@@ -321,7 +321,7 @@
]
}
],
"prompt_number": 24
"prompt_number": 9
},
{
"cell_type": "code",
@@ -420,7 +420,7 @@
]
}
],
"prompt_number": 45
"prompt_number": 10
},
{
"cell_type": "code",
@@ -439,7 +439,7 @@
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 46
"prompt_number": 11
},
{
"cell_type": "code",
@@ -499,7 +499,7 @@
]
}
],
"prompt_number": 47
"prompt_number": 14
},
{
"cell_type": "code",
2 changes: 1 addition & 1 deletion convert.py
@@ -20,7 +20,7 @@
parser.add_argument('--spikes_per_bin', type=int, default=1,
help='How many spikes per timestep will be emitted. Note that more '
'than one is not standard rank-order encoding.')
parser.add_argument('--scaling', type=float, default=1.0,
parser.add_argument('--scaling', type=float, default=0.54,
help='Scaling applied to the input image (only supported by the '
'Omniglot dataset)')
args = parser.parse_args()
8 changes: 4 additions & 4 deletions focal/convolution.py
@@ -136,13 +136,13 @@ def get_subsample_keepers(self, cell_type):
if cell_type == 3:
# col_keep = 7
# row_keep = 7
col_keep = 5
row_keep = 5
col_keep = 3
row_keep = 3
elif cell_type == 2:
# col_keep = 5
# row_keep = 3
col_keep = 4
row_keep = 4
col_keep = 3
row_keep = 3
elif cell_type == 1:
col_keep = 1
row_keep = 1
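If the keep factors act as plain strided subsampling (an assumption; the diff only shows the constants dropping from 5 and 4 to 3), the change retains more cells per kernel layer. A minimal sketch of that reading:

```python
import numpy as np

def subsample_mask(shape, row_keep, col_keep):
    """Keep cells whose row and column indices are multiples of the keep
    factors (assumed semantics of get_subsample_keepers)."""
    rows, cols = np.indices(shape)
    return (rows % row_keep == 0) & (cols % col_keep == 0)

# On a 15x15 layer, keep factor 5 retains a 3x3 grid of cells,
# while the new keep factor 3 retains a denser 5x5 grid.
dense = subsample_mask((15, 15), 3, 3)
sparse = subsample_mask((15, 15), 5, 5)
```

Under this reading, the commit trades output sparsity for spatial resolution in cell types 2 and 3.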
420 changes: 155 additions & 265 deletions load_and_read_notebooks/test_read_cifar10.ipynb

Large diffs are not rendered by default.

53 changes: 28 additions & 25 deletions load_and_read_notebooks/test_read_mnist.ipynb

Large diffs are not rendered by default.

88 changes: 31 additions & 57 deletions load_and_read_notebooks/test_read_omniglot.ipynb

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions omniglot_convert.py
@@ -63,7 +63,10 @@ def omniglot_convert(file_dict, out_dir, timestep, spikes_per_bin=1, skip_existi

h, w = file_dict[alpha][char][0].shape
if scaling != 1.0:
sys.stdout.write('\t\tscaling input image from shape {}'.format((h, w)))
h, w = int(h * scaling), int(w * scaling)
sys.stdout.write(' to {}\n\n'.format((h, w)))
sys.stdout.flush()

s_img = np.zeros((h, w))
n_processed = 0
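The scaling step logged above reduces to a simple truncating shape computation. With the new default `--scaling 0.54` and Omniglot's 105x105 images, the target shape works out as follows:

```python
def scaled_shape(h, w, scaling):
    """Mirror the arithmetic in omniglot_convert: int() truncates the
    scaled dimensions rather than rounding them."""
    return int(h * scaling), int(w * scaling)

# Omniglot images are 105x105; the new default scaling is 0.54.
new_shape = scaled_shape(105, 105, 0.54)  # 105 * 0.54 = 56.7 -> 56
```

Note that the `if scaling != 1.0:` guard in the diff skips this entirely at the old default of 1.0, so the added logging only fires when a resize actually happens.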
