Eq. 2 in SEA-RAFT is
$$ -log[MixLap(x,\alpha,\beta_1,\beta_2,\mu)] $$
where
$$ MixLap(x,\alpha,\beta_1,\beta_2,\mu)=\alpha\frac{e^{-\frac{|x-\mu|}{\beta_1}}}{2\beta_1} + (1-\alpha)\frac{e^{-\frac{|x-\mu|}{\beta_2}}}{2\beta_2} $$
Substitute $\alpha=\frac{exp(\alpha_1)}{exp(\alpha_1)+exp(\alpha_2)}$ (so $1-\alpha=\frac{exp(\alpha_2)}{exp(\alpha_1)+exp(\alpha_2)}$):
$$ MixLap(x,\alpha,\beta_1,\beta_2,\mu)=\frac{exp(\alpha_1)}{exp(\alpha_1)+exp(\alpha_2)}\frac{e^{-\frac{|x-\mu|}{\beta_1}}}{2\beta_1}+\frac{exp(\alpha_2)}{exp(\alpha_1)+exp(\alpha_2)}\frac{e^{-\frac{|x-\mu|}{\beta_2}}}{2\beta_2} $$
Rewrite over a common denominator:
$$ MixLap(x,\alpha,\beta_1,\beta_2,\mu)=\frac{exp(\alpha_1)\frac{e^{-\frac{|x-\mu|}{\beta_1}}}{2\beta_1}+exp(\alpha_2)\frac{e^{-\frac{|x-\mu|}{\beta_2}}}{2\beta_2}}{exp(\alpha_1)+exp(\alpha_2)} $$
Therefore,
$$ -log[MixLap(x,\alpha,\beta_1,\beta_2,\mu)]=-(log[{exp(\alpha_1)\frac{e^{-\frac{|x-\mu|}{\beta_1}}}{2\beta_1}+exp(\alpha_2)\frac{e^{-\frac{|x-\mu|}{\beta_2}}}{2\beta_2}}]-log[{exp(\alpha_1)+exp(\alpha_2)}]) $$
Distribute the negative sign:
$$ -log[MixLap(x,\alpha,\beta_1,\beta_2,\mu)]=log[{exp(\alpha_1)+exp(\alpha_2)}]-log[{exp(\alpha_1)\frac{e^{-\frac{|x-\mu|}{\beta_1}}}{2\beta_1}+exp(\alpha_2)\frac{e^{-\frac{|x-\mu|}{\beta_2}}}{2\beta_2}}] $$
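As a quick numerical sanity check (this snippet is not from the SEA-RAFT repo; the scalar values are arbitrary), the log-sum-exp form derived above can be compared against a direct evaluation of $-log[MixLap]$:

```python
import torch

# Arbitrary test values (hypothetical, for illustration only)
x, mu = torch.tensor(1.7), torch.tensor(0.4)
alpha1, alpha2 = torch.tensor(0.3), torch.tensor(-1.2)   # unnormalized log-weights
beta1, beta2 = torch.tensor(2.0), torch.tensor(0.5)      # scales of the two Laplacians

# Direct evaluation: -log[ a * Lap(x; mu, b1) + (1 - a) * Lap(x; mu, b2) ]
a = alpha1.exp() / (alpha1.exp() + alpha2.exp())
mixlap = a * torch.exp(-(x - mu).abs() / beta1) / (2 * beta1) \
       + (1 - a) * torch.exp(-(x - mu).abs() / beta2) / (2 * beta2)
nll_direct = -torch.log(mixlap)

# Rewritten form: log[e^a1 + e^a2] - log[e^(a1 - log(2 b1) - |x-mu|/b1) + e^(a2 - log(2 b2) - |x-mu|/b2)]
term = torch.stack([
    alpha1 - torch.log(2 * beta1) - (x - mu).abs() / beta1,
    alpha2 - torch.log(2 * beta2) - (x - mu).abs() / beta2,
])
nll_lse = torch.logsumexp(torch.stack([alpha1, alpha2]), dim=0) - torch.logsumexp(term, dim=0)

print(torch.allclose(nll_direct, nll_lse))  # True
```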
Notes on the code (raft.py, lines 145-156), where $\alpha,\beta,\mu$ are replaced by a, b, u:
Obtain $log(\beta)$ with shape $(N,2,H,W)$:
raw_b = info_predictions[i][:, 2:]
log_b = torch.zeros_like(raw_b)
# Large b Component
log_b[:, 0] = torch.clamp(raw_b[:, 0], min=0, max=var_max)
# Small b Component
log_b[:, 1] = torch.clamp(raw_b[:, 1], min=var_min, max=0)
Obtain $\alpha$ with shape $(N,2,H,W)$, which can be split into $\alpha_1,\alpha_2$ along dim=1:
weight = info_predictions[i][:, :2]
Obtain $|x-\mu|/\beta$ with shape $(N,2,2,H,W)$, where dim=1 indexes the two flow channels and dim=2 the two mixture components:
# |x-u| * exp[-log(b)]
# i.e. |x-u| / b
term2 = ((flow_gt - flow_predictions[i]).abs().unsqueeze(2)) * (torch.exp(-log_b).unsqueeze(1))
Obtain $\alpha-log(2)-log(\beta)$ with shape $(N,2,H,W)$:
# term1: [N, m, H, W]
# a - log(2) - log(b)
term1 = weight - math.log(2) - log_b
Obtain nf_loss = $-log[MixLap(x,\alpha,\beta_1,\beta_2,\mu)]$ with shape $(N,2,H,W)$, one value per flow channel:
nf_loss = torch.logsumexp(weight, dim=1, keepdim=True) - torch.logsumexp(term1.unsqueeze(1) - term2, dim=2)
where torch.logsumexp(weight, dim=1, keepdim=True) = $log[exp(\alpha_1)+exp(\alpha_2)]$, and torch.logsumexp(term1.unsqueeze(1) - term2, dim=2) = $log[exp(\alpha_1-log(2)-log(\beta_1)-|x-\mu|/\beta_1)+exp(\alpha_2-log(2)-log(\beta_2)-|x-\mu|/\beta_2)]$ $=log[exp(\alpha_1)\frac{1}{2\beta_1}e^{-\frac{|x-\mu|}{\beta_1}}+exp(\alpha_2)\frac{1}{2\beta_2}e^{-\frac{|x-\mu|}{\beta_2}}]$, which matches the derivation above.
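Putting the steps together, here is a minimal self-contained sketch (not the original raft.py; the clamp bounds var_min/var_max and the dummy tensors are assumed for illustration) that mirrors the code above and cross-checks nf_loss against a direct evaluation of the mixture density:

```python
import math
import torch

torch.manual_seed(0)
N, H, W = 2, 8, 8
var_min, var_max = -5.0, 5.0          # assumed clamp bounds
flow_gt = torch.randn(N, 2, H, W)
flow_pred = torch.randn(N, 2, H, W)
info_pred = torch.randn(N, 4, H, W)   # channels: [alpha_1, alpha_2, raw_b_1, raw_b_2]

# log(beta), shape (N, 2, H, W): one "large b" and one "small b" component
raw_b = info_pred[:, 2:]
log_b = torch.zeros_like(raw_b)
log_b[:, 0] = torch.clamp(raw_b[:, 0], min=0, max=var_max)
log_b[:, 1] = torch.clamp(raw_b[:, 1], min=var_min, max=0)

# alpha, shape (N, 2, H, W)
weight = info_pred[:, :2]

# |x - u| / b, shape (N, 2, 2, H, W): dim 1 = flow channel, dim 2 = mixture component
term2 = (flow_gt - flow_pred).abs().unsqueeze(2) * torch.exp(-log_b).unsqueeze(1)

# a - log(2) - log(b), shape (N, 2, H, W)
term1 = weight - math.log(2) - log_b

# -log[MixLap], shape (N, 2, H, W)
nf_loss = torch.logsumexp(weight, dim=1, keepdim=True) \
        - torch.logsumexp(term1.unsqueeze(1) - term2, dim=2)

# Cross-check against a direct evaluation of the mixture density
alpha = torch.softmax(weight, dim=1).unsqueeze(1)              # (N, 1, 2, H, W)
beta = torch.exp(log_b).unsqueeze(1)                           # (N, 1, 2, H, W)
diff = (flow_gt - flow_pred).abs().unsqueeze(2)                # (N, 2, 1, H, W)
mixlap = (alpha * torch.exp(-diff / beta) / (2 * beta)).sum(dim=2)
print(torch.allclose(nf_loss, -torch.log(mixlap), atol=1e-5))  # True
```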
Hope this helps guys read the code!
Thanks for the explanation!
Very good explanation. Thanks!
This greatly helps to understand the code. Thanks!