Add sample notebooks for NWB and SLEAP analyses. Closes #13
benlansdell committed Feb 8, 2024
1 parent 9230c28 commit 04851ba
Showing 7 changed files with 430 additions and 19 deletions.
11 changes: 10 additions & 1 deletion README.md
@@ -41,7 +41,9 @@ That said, you may want a separate environment for running `ethome`. A conda env
3. Run `conda activate ethome`
4. And finally `pip install ethome-ml`

With both install methods, you may want to also install `tensorflow` if you want to use the CNN features for a resident-intruder setup.
### Optional packages

With both install methods, you may want to also install `tensorflow` (at least version 2.0) if you want to use the CNN features for a resident-intruder setup.
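
As an illustrative check (not part of the package), you can confirm the optional dependency is available before requesting the CNN features:

```python
# Illustrative check only: confirm that tensorflow >= 2.0 is importable
# before using the CNN-based resident-intruder features.
try:
    import tensorflow as tf
except ImportError:
    tf = None

if tf is None or int(tf.__version__.split(".")[0]) < 2:
    print("Install tensorflow >= 2.0 (e.g. `pip install tensorflow`) to use the CNN features.")
else:
    print(f"tensorflow {tf.__version__} found; CNN features available.")
```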

## Quickstart

@@ -147,6 +149,13 @@ NWB is a data standard for neurophysiology, providing neuroscientists with a com

SLEAP is an open source deep-learning based framework for multi-animal pose tracking. It can be used to track any type or number of animals and includes an advanced labeling/training GUI for active learning and proofreading. SLEAP data must be exported as analysis `h5` files to be imported into `ethome`.
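
For example, here is a minimal sketch of loading the bundled SLEAP sample file (this assumes, as with the NWB and DLC inputs, that the exported analysis `h5` path can be passed directly to `create_dataset`; see the SLEAP sample notebook for the full workflow):

```python
from ethome import create_dataset
from ethome.io import get_sample_sleap_paths

# Path to the sample SLEAP analysis h5 file shipped with the package
sleap_file = get_sample_sleap_paths()

# Assumption: create_dataset accepts a SLEAP analysis h5 path directly,
# as it does for NWB files and DLC tracking files.
dataset = create_dataset(sleap_file)
print(dataset.pose.body_parts)
```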

## Sample notebooks

Sample notebooks are available [here](https://github.com/benlansdell/ethome/tree/master/examples) that you can use as a starting point for your own analyses, using any of the following:
* NWB files
* SLEAP files
* DLC tracking and BORIS annotations

## Contributing

Refer to `CONTRIBUTING.md` for guidelines on how to contribute to the project, report bugs, etc.
2 changes: 1 addition & 1 deletion ethome/__init__.py
@@ -1,4 +1,4 @@
__version__ = "0.6.0"
__version__ = "0.6.1"

# Suppress tensorflow import text...
import os
Binary file added ethome/data/sleap/sample_sleap.h5
Binary file not shown.
10 changes: 10 additions & 0 deletions ethome/io.py
@@ -583,6 +583,16 @@ def get_sample_nwb_paths():
return os.path.join(cur_dir, "data/sample_nwb_.nwb")


def get_sample_sleap_paths():
    """Get path to a sample SLEAP h5 file with tracking data for testing and dev purposes.
    Returns:
        Path to a sample SLEAP file.
    """
    cur_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(cur_dir, "data/sleap/sample_sleap.h5")


def get_sample_data_paths_dlcboris():
"""Get path to sample data files provided with package.
26 changes: 9 additions & 17 deletions examples/sample_workflow.ipynb
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -58,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -74,17 +74,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"All necessary fields provided -- rescaling to 'mm'\n"
]
}
],
"outputs": [],
"source": [
"#%% Create dataset and add features\n",
"dataset = create_dataset(tracking_files, \n",
@@ -212,7 +204,7 @@
"metadata": {},
"outputs": [],
"source": [
"#%%################\n",
"###################\n",
"## Dim reduction ##\n",
"###################\n",
"\n",
@@ -236,9 +228,9 @@
"source": [
"## Post processing\n",
"\n",
"Now we have our model we can make a video of its predictions. Provide the column names whose state we're going to overlay on the video, along with the directory to output the videos.\n",
"Now we have our model we can make a video of its predictions. Provide the column names whose state we're going to print overlaid on the video, along with the directory to output the videos.\n",
"\n",
"NOTE: need to have provided 'video' column in the metadata to make movies."
"(NOTE: this won't work for the sample data provided here, as you need to have provided the 'video' column in the metadata to point the dataframe to the corresponding videos. The package doesn't include these sample data video files in it)"
]
},
{
@@ -267,7 +259,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15"
"version": "3.8.10"
},
"orig_nbformat": 4
},
200 changes: 200 additions & 0 deletions examples/sample_workflow_nwb.ipynb
@@ -0,0 +1,200 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Demo workflow\n",
"\n",
"Demonstrate simple workflow for case where only pose data is available (no behavior labels)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ethome import create_dataset, interpolate_lowconf_points\n",
"from ethome.io import get_sample_nwb_paths\n",
"from ethome.unsupervised import compute_umap_embedding\n",
"from ethome.plot import plot_embedding, interactive_tracks"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Gather some sample tracking files to play with"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nwb_file = get_sample_nwb_paths()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Nwb files already contain metedata about the tracking, so we don't have to provide this ourselves:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#%% Create dataset\n",
"dataset = create_dataset(nwb_file)\n",
"dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset.pose.body_parts"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Checkout the tracks with a widget"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"filename = dataset.metadata.videos[0]\n",
"interactive_tracks(dataset,\n",
" filename)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Smooth over low-confidence points"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"interpolate_lowconf_points(dataset)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now create features on this dataset. Can use pre-built featuresets, or make your own. \n",
"As we don't have a resident-intruder setup, here we use generic featuresets. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset.features.add('distances')"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Unsupervised learning"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#%%################\n",
"## Dim reduction ##\n",
"###################\n",
"\n",
"embedding = compute_umap_embedding(dataset, dataset.features.active, N_rows = 10000)\n",
"dataset[['embedding_0', 'embedding_1']] = embedding"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plot_embedding(dataset) "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Post processing\n",
"\n",
"Now we have our model we can make a video of its predictions. Provide the column names whose state we're going to overlay on the video, along with the directory to output the videos.\n",
"\n",
"NOTE: need to have provided 'video' column in the metadata to make movies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset.io.save_movie(['label', 'prediction'], '.')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ethome",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}