Examples typos #449

Merged 36 commits on Nov 14, 2024
b8504a9
changed T_1 to T1
lrlunin Oct 18, 2024
13a65c5
tex commands for ln and cos
lrlunin Oct 18, 2024
65e9ae2
Notebooks updated
github-actions[bot] Oct 18, 2024
ad4aeab
use subscript notation everywhere
lrlunin Oct 18, 2024
26273f8
Notebooks updated
github-actions[bot] Oct 18, 2024
93763b2
Merge branch 'main' into examples_typos
lrlunin Oct 18, 2024
f30ef01
draft for nbsphinx import
lrlunin Oct 18, 2024
24395d5
import executed ipynb notebooks to docs examples
lrlunin Oct 23, 2024
b680430
add pandoc as docs dependency and include subfolders in examples
lrlunin Oct 23, 2024
8507145
pip pandoc is python binding, not the pandoc executable
lrlunin Oct 23, 2024
ab7703d
Myst (#480)
fzimmermann89 Oct 29, 2024
f1bb5c4
fixed links, dollarmath and cite
lrlunin Oct 29, 2024
91b74ee
Notebooks updated
github-actions[bot] Oct 29, 2024
4357dfb
change milliseconds to seconds, resolves #462
lrlunin Oct 29, 2024
cbee2ca
Notebooks updated
github-actions[bot] Oct 29, 2024
1a708b3
changed plots scaling
lrlunin Oct 29, 2024
2873a84
fixed initial guess for a better convergence
lrlunin Oct 29, 2024
429c83e
Notebooks updated
github-actions[bot] Oct 29, 2024
57e7910
fixed convergence in t2_star notebook
lrlunin Nov 8, 2024
890b4d3
fixed line length
lrlunin Nov 8, 2024
d27d028
Notebooks updated
github-actions[bot] Nov 8, 2024
7542b77
Merge branch 'main' into examples_typos
lrlunin Nov 8, 2024
136b60c
fixed learning rate
lrlunin Nov 8, 2024
c337373
Notebooks updated
github-actions[bot] Nov 8, 2024
6c21066
Merge branch 'main' into examples_typos
lrlunin Nov 13, 2024
e688c8f
revert non-examples notebook related changes
lrlunin Nov 13, 2024
63bdb48
Merge branch 'main' into examples_typos
lrlunin Nov 14, 2024
3e778d0
resolved #459
lrlunin Nov 14, 2024
406574d
Notebooks updated
github-actions[bot] Nov 14, 2024
9c05660
Merge branch 'main' into examples_typos
lrlunin Nov 14, 2024
3ab9559
Merge branch 'main' into examples_typos
fzimmermann89 Nov 14, 2024
df13204
Merge branch 'main' into examples_typos
fzimmermann89 Nov 14, 2024
83e279f
add units s to colorbars
lrlunin Nov 14, 2024
165e248
Notebooks updated
github-actions[bot] Nov 14, 2024
caae52b
fixed colorbar label
lrlunin Nov 14, 2024
e796343
Notebooks updated
github-actions[bot] Nov 14, 2024
42 changes: 21 additions & 21 deletions examples/qmri_sg_challenge_2024_t1.ipynb
@@ -5,7 +5,7 @@
"id": "0f82262f",
"metadata": {},
"source": [
"# QMRI Challenge ISMRM 2024 - T1 mapping"
"# QMRI Challenge ISMRM 2024 - $T_1$ mapping"
]
},
{
@@ -40,7 +40,7 @@
"source": [
"### Overview\n",
"The dataset consists of images obtained at 10 different inversion times using a turbo spin echo sequence. Each\n",
"inversion time is saved in a separate DICOM file. In order to obtain a T1 map, we are going to:\n",
"inversion time is saved in a separate DICOM file. In order to obtain a $T_1$ map, we are going to:\n",
"- download the data from Zenodo\n",
"- read in the DICOM files (one for each inversion time) and combine them in an IData object\n",
"- define a signal model and data loss (mean-squared error) function\n",
@@ -105,7 +105,7 @@
"fig, axes = plt.subplots(1, 3, squeeze=False)\n",
"for idx, ax in enumerate(axes.flatten()):\n",
" ax.imshow(torch.abs(idata_multi_ti.data[idx, 0, 0, :, :]))\n",
" ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.0f}ms')"
" ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.3f}s')"
]
},
{
@@ -116,9 +116,9 @@
"### Signal model and loss function\n",
"We use the model $q$\n",
"\n",
"$q(TI) = M_0 (1 - e^{-TI/T1})$\n",
"$q(TI) = M_0 (1 - e^{-TI/T_1})$\n",
"\n",
"with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T1$. We have to keep in mind that the DICOM\n",
"with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T_1$. We have to keep in mind that the DICOM\n",
"images only contain the magnitude of the signal. Therefore, we need $|q(TI)|$:"
]
},
@@ -162,7 +162,7 @@
"source": [
"Now we can simply combine the two into a functional to solve\n",
"\n",
"$ \\min_{M_0, T1} || |q(M_0, T1, TI)| - x||_2^2$"
"$ \\min_{M_0, T_1} || |q(M_0, T_1, TI)| - x||_2^2$"
]
},
{
@@ -187,11 +187,11 @@
"To increase our chances of reaching the global minimum, we can ensure that our starting\n",
"values are already close to the global minimum. We need a good starting point for each pixel.\n",
"\n",
"One option to get a good starting point is to calculate the signal curves for a range of T1 values and then check\n",
"One option to get a good starting point is to calculate the signal curves for a range of $T_1$ values and then check\n",
"for each pixel which of these signal curves fits best. This is similar to what is done for MR Fingerprinting. So we\n",
"are going to:\n",
"- define a list of realistic T1 values (we call this a dictionary of T1 values)\n",
"- calculate the signal curves corresponding to each of these T1 values\n",
"- define a list of realistic $T_1$ values (we call this a dictionary of $T_1$ values)\n",
"- calculate the signal curves corresponding to each of these $T_1$ values\n",
"- compare the signal curves to the signals of each voxel (we use the maximum of the dot-product as a metric of how\n",
"well the signals fit to each other)"
]
@@ -203,8 +203,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Define 100 T1 values between 100 and 3000 ms\n",
"t1_dictionary = torch.linspace(100, 3000, 100)\n",
"# Define 100 T1 values between 0.1 and 3.0 s\n",
"t1_dictionary = torch.linspace(0.1, 3.0, 100)\n",
"\n",
"# Calculate the signal corresponding to each of these T1 values. We set M0 to 1, but this is arbitrary because M0 is\n",
"# just a scaling factor and we are going to normalize the signal curves.\n",
@@ -227,8 +227,8 @@
"metadata": {},
"outputs": [],
"source": [
"# The image with the longest inversion time is a good approximation of the equilibrium magnetization\n",
"m0_start = torch.abs(idata_multi_ti.data[torch.argmax(idata_multi_ti.header.ti), ...])"
"# The maximum absolute value observed is a good approximation for m0\n",
"m0_start = torch.amax(torch.abs(idata_multi_ti.data), 0)"
]
},
{
@@ -242,10 +242,10 @@
"fig, axes = plt.subplots(1, 2, figsize=(8, 2), squeeze=False)\n",
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"im = axes[0, 0].imshow(m0_start[0, 0, ...])\n",
"axes[0, 0].set_title('M0 start values')\n",
"axes[0, 0].set_title('$M_0$ start values')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2500)\n",
"axes[0, 1].set_title('T1 start values')\n",
"im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2.5)\n",
"axes[0, 1].set_title('$T_1$ start values')\n",
"fig.colorbar(im, cax=colorbar_ax[1])"
]
},
@@ -266,7 +266,7 @@
"source": [
"# Hyperparameters for optimizer\n",
"max_iter = 2000\n",
"lr = 1e0\n",
"lr = 1e-1\n",
"\n",
"# Run optimization\n",
"params_result = adam(functional, [m0_start, t1_start], max_iter=max_iter, lr=lr)\n",
@@ -283,7 +283,7 @@
"### Visualize the final results\n",
"To get an impression of how well the fit has worked, we are going to calculate the relative error between\n",
"\n",
"$E_{relative} = \\sum_{TI}\\frac{|(q(M_0, T1, TI) - x)|}{|x|}$\n",
"$E_{relative} = \\sum_{TI}\\frac{|(q(M_0, T_1, TI) - x)|}{|x|}$\n",
"\n",
"on a voxel-by-voxel basis"
]
@@ -304,10 +304,10 @@
"fig, axes = plt.subplots(1, 3, figsize=(10, 2), squeeze=False)\n",
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"im = axes[0, 0].imshow(m0[0, 0, ...])\n",
"axes[0, 0].set_title('M0')\n",
"axes[0, 0].set_title('$M_0$')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2500)\n",
"axes[0, 1].set_title('T1')\n",
"im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2.5)\n",
"axes[0, 1].set_title('$T_1$')\n",
"fig.colorbar(im, cax=colorbar_ax[1])\n",
"im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...], vmin=0, vmax=1.0)\n",
"axes[0, 2].set_title('Relative error')\n",
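The dictionary-matching initialization edited in the hunks above can be sketched standalone. This is a minimal PyTorch sketch with synthetic data; `ti`, `true_t1`, and the inline model are illustrative stand-ins, not the MRpro API:

```python
import torch

# Inversion times in seconds (consistent with the ms -> s change in this PR)
ti = torch.tensor([0.1, 0.2, 0.4, 0.8, 1.6])

# Dictionary of candidate T1 values in seconds, as in the notebook
t1_dictionary = torch.linspace(0.1, 3.0, 100)

# |q(TI)| = |M0 (1 - exp(-TI/T1))| with M0 = 1; M0 is only a scale factor here
signal_dictionary = torch.abs(1 - torch.exp(-ti[:, None] / t1_dictionary[None, :]))

# Normalize each dictionary curve so the dot product compares shape, not amplitude
signal_dictionary = signal_dictionary / torch.linalg.vector_norm(signal_dictionary, dim=0)

# Synthetic "voxel" signals with known T1 values (seconds)
true_t1 = torch.tensor([0.5, 1.0, 2.0])
signals = torch.abs(1 - torch.exp(-ti[:, None] / true_t1[None, :]))

# Best match per voxel: maximum dot product against the dictionary
dot_products = signals.T @ signal_dictionary  # (n_voxels, n_dictionary_entries)
idx_best_match = torch.argmax(dot_products, dim=1)
t1_start = t1_dictionary[idx_best_match]
```

The dictionary spacing (about 0.03 s here) bounds the initialization error, which is all the subsequent gradient-based fit needs.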
42 changes: 21 additions & 21 deletions examples/qmri_sg_challenge_2024_t1.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # QMRI Challenge ISMRM 2024 - T1 mapping
# # QMRI Challenge ISMRM 2024 - $T_1$ mapping

# %%
# Imports
@@ -22,7 +22,7 @@
# %% [markdown]
# ### Overview
# The dataset consists of images obtained at 10 different inversion times using a turbo spin echo sequence. Each
# inversion time is saved in a separate DICOM file. In order to obtain a T1 map, we are going to:
# inversion time is saved in a separate DICOM file. In order to obtain a $T_1$ map, we are going to:
# - download the data from Zenodo
# - read in the DICOM files (one for each inversion time) and combine them in an IData object
# - define a signal model and data loss (mean-squared error) function
@@ -53,15 +53,15 @@
fig, axes = plt.subplots(1, 3, squeeze=False)
for idx, ax in enumerate(axes.flatten()):
ax.imshow(torch.abs(idata_multi_ti.data[idx, 0, 0, :, :]))
ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.0f}ms')
ax.set_title(f'TI = {idata_multi_ti.header.ti[idx]:.3f}s')

# %% [markdown]
# ### Signal model and loss function
# We use the model $q$
#
# $q(TI) = M_0 (1 - e^{-TI/T1})$
# $q(TI) = M_0 (1 - e^{-TI/T_1})$
#
# with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T1$. We have to keep in mind that the DICOM
# with the equilibrium magnetization $M_0$, the inversion time $TI$, and $T_1$. We have to keep in mind that the DICOM
# images only contain the magnitude of the signal. Therefore, we need $|q(TI)|$:

# %%
@@ -76,7 +76,7 @@
# %% [markdown]
# Now we can simply combine the two into a functional to solve
#
# $ \min_{M_0, T1} || |q(M_0, T1, TI)| - x||_2^2$
# $ \min_{M_0, T_1} || |q(M_0, T_1, TI)| - x||_2^2$
# %%
functional = mse @ model

@@ -88,17 +88,17 @@
# To increase our chances of reaching the global minimum, we can ensure that our starting
# values are already close to the global minimum. We need a good starting point for each pixel.
#
# One option to get a good starting point is to calculate the signal curves for a range of T1 values and then check
# One option to get a good starting point is to calculate the signal curves for a range of $T_1$ values and then check
# for each pixel which of these signal curves fits best. This is similar to what is done for MR Fingerprinting. So we
# are going to:
# - define a list of realistic T1 values (we call this a dictionary of T1 values)
# - calculate the signal curves corresponding to each of these T1 values
# - define a list of realistic $T_1$ values (we call this a dictionary of $T_1$ values)
# - calculate the signal curves corresponding to each of these $T_1$ values
# - compare the signal curves to the signals of each voxel (we use the maximum of the dot-product as a metric of how
# well the signals fit to each other)

# %%
# Define 100 T1 values between 100 and 3000 ms
t1_dictionary = torch.linspace(100, 3000, 100)
# Define 100 T1 values between 0.1 and 3.0 s
t1_dictionary = torch.linspace(0.1, 3.0, 100)

# Calculate the signal corresponding to each of these T1 values. We set M0 to 1, but this is arbitrary because M0 is
# just a scaling factor and we are going to normalize the signal curves.
@@ -114,18 +114,18 @@
t1_start = rearrange(t1_dictionary[idx_best_match], '(y x)->1 1 y x', y=n_y, x=n_x)

# %%
# The image with the longest inversion time is a good approximation of the equilibrium magnetization
m0_start = torch.abs(idata_multi_ti.data[torch.argmax(idata_multi_ti.header.ti), ...])
# The maximum absolute value observed is a good approximation for m0
m0_start = torch.amax(torch.abs(idata_multi_ti.data), 0)

# %%
# Visualize the starting values
fig, axes = plt.subplots(1, 2, figsize=(8, 2), squeeze=False)
colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]
im = axes[0, 0].imshow(m0_start[0, 0, ...])
axes[0, 0].set_title('M0 start values')
axes[0, 0].set_title('$M_0$ start values')
fig.colorbar(im, cax=colorbar_ax[0])
im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2500)
axes[0, 1].set_title('T1 start values')
im = axes[0, 1].imshow(t1_start[0, 0, ...], vmin=0, vmax=2.5)
axes[0, 1].set_title('$T_1$ start values')
fig.colorbar(im, cax=colorbar_ax[1])

# %% [markdown]
@@ -134,7 +134,7 @@
# %%
# Hyperparameters for optimizer
max_iter = 2000
lr = 1e0
lr = 1e-1

# Run optimization
params_result = adam(functional, [m0_start, t1_start], max_iter=max_iter, lr=lr)
@@ -146,7 +146,7 @@
# ### Visualize the final results
# To get an impression of how well the fit has worked, we are going to calculate the relative error between
#
# $E_{relative} = \sum_{TI}\frac{|(q(M_0, T1, TI) - x)|}{|x|}$
# $E_{relative} = \sum_{TI}\frac{|(q(M_0, T_1, TI) - x)|}{|x|}$
#
# on a voxel-by-voxel basis

@@ -158,10 +158,10 @@
fig, axes = plt.subplots(1, 3, figsize=(10, 2), squeeze=False)
colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]
im = axes[0, 0].imshow(m0[0, 0, ...])
axes[0, 0].set_title('M0')
axes[0, 0].set_title('$M_0$')
fig.colorbar(im, cax=colorbar_ax[0])
im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2500)
axes[0, 1].set_title('T1')
im = axes[0, 1].imshow(t1[0, 0, ...], vmin=0, vmax=2.5)
axes[0, 1].set_title('$T_1$')
fig.colorbar(im, cax=colorbar_ax[1])
im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...], vmin=0, vmax=1.0)
axes[0, 2].set_title('Relative error')
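The learning-rate change (`1e0` to `1e-1`) tracks the unit change: with T1 now in seconds rather than milliseconds, the parameters are of order one, so the Adam step size must shrink accordingly. A standalone toy version of the min over M0, T1 of || |q(M0, T1, TI)| - x ||_2^2 fit, using plain `torch.optim.Adam` rather than MRpro's `adam` helper, with made-up ground-truth values:

```python
import torch

# Toy voxel: TI and T1 in seconds, matching the new units in this PR
ti = torch.tensor([0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
true_m0, true_t1 = 1.5, 0.8
x = torch.abs(true_m0 * (1 - torch.exp(-ti / true_t1)))  # magnitude data, as in DICOM

m0 = torch.tensor(1.0, requires_grad=True)
t1 = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.Adam([m0, t1], lr=1e-1)  # lr matched to O(1) parameter scale

for _ in range(2000):
    optimizer.zero_grad()
    q = torch.abs(m0 * (1 - torch.exp(-ti / t1)))
    loss = torch.sum((q - x) ** 2)
    loss.backward()
    optimizer.step()
```

With parameters and step size on the same scale, 2000 iterations recover the toy M0 and T1 closely; at the old `lr = 1e0` the updates would overshoot parameters of this magnitude.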
26 changes: 14 additions & 12 deletions examples/qmri_sg_challenge_2024_t2_star.ipynb
@@ -5,7 +5,7 @@
"id": "5efa18e9",
"metadata": {},
"source": [
"# QMRI Challenge ISMRM 2024 - T2* mapping"
"# QMRI Challenge ISMRM 2024 - $T_2^*$ mapping"
]
},
{
@@ -39,7 +39,7 @@
"source": [
"### Overview\n",
"The dataset consists of gradient echo images obtained at 11 different echo times, each saved in a separate DICOM file.\n",
"In order to obtain a T2* map, we are going to:\n",
"In order to obtain a $T_2^*$ map, we are going to:\n",
"- download the data from Zenodo\n",
"- read in the DICOM files (one for each echo time) and combine them in an IData object\n",
"- define a signal model (mono-exponential decay) and data loss (mean-squared error) function\n",
@@ -100,6 +100,8 @@
"source": [
"te_dicom_files = data_folder.glob('**/*.dcm')\n",
"idata_multi_te = IData.from_dicom_files(te_dicom_files)\n",
"# scaling the signal down to make the optimization easier\n",
"idata_multi_te.data[...] = idata_multi_te.data / 1500\n",
"\n",
"# Move the data to the GPU\n",
"if flag_use_cuda:\n",
@@ -120,7 +122,7 @@
"fig, axes = plt.subplots(1, 3, squeeze=False)\n",
"for idx, ax in enumerate(axes.flatten()):\n",
" ax.imshow(torch.abs(idata_multi_te.data[idx, 0, 0, :, :]).cpu())\n",
" ax.set_title(f'TE = {idata_multi_te.header.te[idx]:.0f}ms')"
" ax.set_title(f'TE = {idata_multi_te.header.te[idx]:.3f}s')"
]
},
{
Expand All @@ -131,9 +133,9 @@
"### Signal model and loss function\n",
"We use the model $q$\n",
"\n",
"$q(TE) = M_0 e^{-TE/T2^*}$\n",
"$q(TE) = M_0 e^{-TE/T_2^*}$\n",
"\n",
"with the equilibrium magnetization $M_0$, the echo time $TE$, and $T2^*$"
"with the equilibrium magnetization $M_0$, the echo time $TE$, and $T_2^*$"
]
},
{
@@ -176,7 +178,7 @@
"source": [
"Now we can simply combine the two into a functional which will then solve\n",
"\n",
"$ \\min_{M_0, T2^*} ||q(M_0, T2^*, TE) - x||_2^2$"
"$ \\min_{M_0, T_2^*} ||q(M_0, T_2^*, TE) - x||_2^2$"
]
},
{
@@ -207,11 +209,11 @@
"# The shortest echo time is a good approximation of the equilibrium magnetization\n",
"m0_start = torch.abs(idata_multi_te.data[torch.argmin(idata_multi_te.header.te), ...])\n",
"# 20 ms as a starting value for T2*\n",
"t2star_start = torch.ones(m0_start.shape, dtype=torch.float32, device=m0_start.device) * 20\n",
"t2star_start = torch.ones(m0_start.shape, dtype=torch.float32, device=m0_start.device) * 20e-3\n",
"\n",
"# Hyperparameters for optimizer\n",
"max_iter = 20000\n",
"lr = 1e0\n",
"lr = 1e-3\n",
"\n",
"if flag_use_cuda:\n",
" functional.cuda()\n",
@@ -235,7 +237,7 @@
"### Visualize the final results\n",
"To get an impression of how well the fit has worked, we are going to calculate the relative error between\n",
"\n",
"$E_{relative} = \\sum_{TE}\\frac{|(q(M_0, T2^*, TE) - x)|}{|x|}$\n",
"$E_{relative} = \\sum_{TE}\\frac{|(q(M_0, T_2^*, TE) - x)|}{|x|}$\n",
"\n",
"on a voxel-by-voxel basis."
]
@@ -257,11 +259,11 @@
"colorbar_ax = [make_axes_locatable(ax).append_axes('right', size='5%', pad=0.05) for ax in axes[0, :]]\n",
"\n",
"im = axes[0, 0].imshow(m0[0, 0, ...].cpu())\n",
"axes[0, 0].set_title('M0')\n",
"axes[0, 0].set_title('$M_0$')\n",
"fig.colorbar(im, cax=colorbar_ax[0])\n",
"\n",
"im = axes[0, 1].imshow(t2star[0, 0, ...].cpu(), vmin=0, vmax=500)\n",
"axes[0, 1].set_title('T2*')\n",
"im = axes[0, 1].imshow(t2star[0, 0, ...].cpu(), vmin=0, vmax=5)\n",
"axes[0, 1].set_title('$T_2^*$')\n",
"fig.colorbar(im, cax=colorbar_ax[1])\n",
"\n",
"im = axes[0, 2].imshow(relative_absolute_error[0, 0, ...].cpu(), vmin=0, vmax=0.1)\n",
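Two of the T2* fixes above work together: the raw DICOM magnitudes are divided by 1500 so that M0 is of order one, and the learning rate drops to `1e-3` to match the seconds-scale T2*. A standalone sketch of the resulting mono-exponential fit (illustrative values, not the notebook's data or the MRpro helpers):

```python
import torch

# Echo times in seconds; T2* is a few tens of milliseconds
te = torch.tensor([0.005, 0.010, 0.020, 0.040, 0.080])
true_m0, true_t2star = 1.0, 0.030  # after scaling by 1/1500, M0 is O(1)
x = true_m0 * torch.exp(-te / true_t2star)  # q(TE) = M0 * exp(-TE / T2*)

m0 = torch.tensor(0.5, requires_grad=True)
t2star = torch.tensor(20e-3, requires_grad=True)  # 20 ms start, as in the diff
optimizer = torch.optim.Adam([m0, t2star], lr=1e-3)  # small lr: t2star is O(1e-2)

for _ in range(5000):
    optimizer.zero_grad()
    q = m0 * torch.exp(-te / t2star)
    loss = torch.sum((q - x) ** 2)
    loss.backward()
    optimizer.step()
```

Without the 1/1500 rescaling, M0 and T2* would differ by several orders of magnitude and no single learning rate would suit both parameters, which is why the diff pairs the scaling with the smaller `lr`.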