GitLab commit f9f9824a, authored Aug 24, 2018 by Philipp Arras
Rework introduction (parent 6adf12f8)
Changes: main.tex
...
...
@@ -51,35 +51,42 @@
\title{IFT Collaboration Guide}
\begin{document}
\maketitle
\VerbatimFootnotes
\section*{Why should I read this guide?}
Bayesian reconstruction algorithms may be viewed in terms of three basic
building blocks.
\begin{enumerate}
\item The \emph{likelihood} $\mathcal{P}(d|s)$ describes the measurement
  process. It incorporates all available information about the experiment: it
  contains the data, the statistics of the data including the error bars, and
  a description of the measurement device itself.
\item The \emph{prior} $\mathcal{P}(s)$ describes the knowledge the scientist
  has \emph{before} executing the experiment. In order to define a prior one
  needs to make one's knowledge about the physical process observed by the
  measurement device explicit.
\item Finally, one needs an algorithmic and computational framework which is
  able to actually perform the Bayesian inference given the above information.
\end{enumerate}
It becomes clear that these three parts are separate from each other. To
implement the likelihood one needs the data and knowledge of the measurement
process and the measurement device; then one has all the information needed to
implement the likelihood. This is your part!
The IFT group led by Torsten En{\ss}lin has the knowledge and GPL-licensed
software to address the third part.
The interesting part is the second one, i.e.\ thinking about the priors. Here
one needs an understanding both of the underlying physics and of probability
theory and Bayesian statistics. This is where an interesting interaction
between observers, i.e.\ you, and information field theorists, i.e.\ Torsten
and his group, will happen.
Since Torsten's group cannot help you much in implementing the likelihood, we
have put together this guide in order to explain how to do it on your own in a
fashion which is compatible with our inference machinery called NIFTy.
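The three building blocks combine via Bayes' theorem; schematically, the
posterior over the signal $s$ given the data $d$ reads
\[
  \mathcal{P}(s|d)
  \;=\; \frac{\mathcal{P}(d|s)\,\mathcal{P}(s)}{\mathcal{P}(d)}
  \;\propto\; \mathcal{P}(d|s)\,\mathcal{P}(s),
\]
where the likelihood $\mathcal{P}(d|s)$ is your contribution, the prior
$\mathcal{P}(s)$ is worked out jointly, and the evidence $\mathcal{P}(d)$ is
handled by the inference machinery.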
\section*
{
Disclaimer
}
\begin{itemize}
...
...
@@ -234,10 +241,11 @@ above (\texttt{R(s)}, \texttt{R\_prime(position, s)} and
\texttt{R\_prime\_adjoint(position, d)}) aware of the space on which
\texttt{position}, \texttt{s} (these are the same) and \texttt{d} are defined.
\subsection*{Derivative of response}
To this end, define your own class \texttt{DerivativeResponse} which inherits
from the NIFTy class \texttt{LinearOperator}. I recommend taking a simple
linear operator like the \texttt{FieldZeroPadder}, copying it and adapting it
to your needs. The method \texttt{apply()} takes an instance of \texttt{Field}
(which is essentially a numpy array accompanied by a domain) and returns one
as well.
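The essential structure such an operator has to provide is a forward
application and its adjoint. A minimal sketch in plain Python, for a toy
pointwise response $R(s) = s^2$ whose derivative at \texttt{position} acts
diagonally (class and method names here are illustrative, not NIFTy's actual
API):

```python
# Hypothetical sketch of a linear operator with an adjoint. For the pointwise
# response R(s) = s**2, the derivative at expansion point s acts as
# R'(s) dx = 2*s*dx, and its adjoint is the same diagonal multiplication.

class DiagonalDerivative:
    """Derivative of the toy response R(s) = s**2 at a fixed position."""

    def __init__(self, position):
        self.position = position  # expansion point s

    def times(self, dx):
        # Forward application: pointwise multiplication by 2*s
        return [2 * s * x for s, x in zip(self.position, dx)]

    def adjoint_times(self, dy):
        # The adjoint of a real diagonal operator is the operator itself
        return [2 * s * y for s, y in zip(self.position, dy)]

op = DiagonalDerivative([1.0, 2.0, 3.0])
print(op.times([1.0, 1.0, 1.0]))          # [2.0, 4.0, 6.0]
print(op.adjoint_times([1.0, 0.0, 1.0]))  # [2.0, 0.0, 6.0]
```

In NIFTy the same pair of applications is dispatched through the single
\texttt{apply()} method, with the direction selected by a mode argument.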
Some tips exclusively for you:
...
...
@@ -263,7 +271,11 @@ ift.extra.consistency_check(op)
\end{lstlisting}
This test does the same as the test for the adjointness above.
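The dot-product test behind such a consistency check can be sketched in plain
Python: for a linear operator $O$ and arbitrary vectors $x$, $y$, the inner
products $\langle Ox, y\rangle$ and $\langle x, O^\dagger y\rangle$ must agree
(helper names below are made up for illustration):

```python
# Adjointness consistency check in miniature, using a plain real matrix.

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def matvec(M, v):
    # forward application of the matrix
    return [dot(row, v) for row in M]

def matvec_adjoint(M, v):
    # the adjoint of a real matrix is its transpose
    cols = list(zip(*M))
    return [dot(col, v) for col in cols]

M = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # maps R^2 -> R^3
x = [0.5, -1.0]
y = [1.0, 2.0, -0.5]

lhs = dot(matvec(M, x), y)          # <M x, y>
rhs = dot(x, matvec_adjoint(M, y))  # <x, M^T y>
assert abs(lhs - rhs) < 1e-12
```

If the forward and adjoint implementations were inconsistent, the two inner
products would differ for generic vectors, which is exactly what the check
catches.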
% TODO Include models
\subsection*{Response}
So far, we have wrapped the derivative of the response in a
\texttt{LinearOperator}. The other thing to be done is to make the function
\texttt{R(s)} field-aware. Rewrite it such that it takes a field in signal
space as input and returns a field in data space.
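Field-awareness just means that the response consumes an array together with
its domain and emits one on the data domain. A minimal stand-in sketch (the
\texttt{Field} class below is a toy, not NIFTy's actual one; the response is an
arbitrary toy example):

```python
# A "field" is essentially an array together with the domain it lives on.
# This toy R(s) maps a field on signal space to a field on data space.

from dataclasses import dataclass

@dataclass
class Field:
    domain: str
    values: list

def R(s):
    assert s.domain == "signal"
    # toy response: sum adjacent pairs, landing on a coarser data grid
    data = [s.values[i] + s.values[i + 1] for i in range(0, len(s.values), 2)]
    return Field("data", data)

d = R(Field("signal", [1.0, 2.0, 3.0, 4.0]))
print(d.domain, d.values)  # data [3.0, 7.0]
```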
\section*{Example: $\gamma$-ray imaging}
The information a $\gamma$-ray astronomer would provide to the algorithm (in the
...
...