End-to-End Learning with an Optimization Model
Learn the "right problem" for modeling a task
Lots of models leverage data to make predictions.
But did you know a neural network's inferences
can encode prior knowledge via optimization?
Imagine an optimization problem with
both hand-crafted and tunable parameters
that can depend on input data.
We can define the network's inference to be
the solution of this problem.
When a loss function is minimized
over a class of neural networks like this
(i.e. networks that embed an optimization model),
we say the model is trained “end-to-end.”
With end-to-end learning,
one can learn the “right problem”
to solve for a particular distribution of data.
In addition to using tunable weights,
these models can handle
nontrivial constraints (e.g. large linear systems),
exploit structured sparsity properties,
and more.
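To make this concrete, here is a minimal toy sketch (my own illustration, not the example from the slides): inference is defined as the solution of a small regularized least-squares problem with a closed-form solution, and its tunable parameters W and theta are learned end-to-end by gradient descent. The specific problem, names, and numbers are all illustrative assumptions.

```python
import numpy as np

# Toy sketch: inference is the solution of a small optimization problem
# with tunable parameters,
#   z(x) = argmin_z  0.5 * ||z - W @ x||^2  +  0.5 * theta * ||z||^2,
# which has the closed-form solution z(x) = W @ x / (1 + theta).
# Both W and theta are learned end-to-end by minimizing a loss.

rng = np.random.default_rng(0)

def infer(W, theta, X):
    """Inference = solving the problem above (closed form, batched over rows of X)."""
    return X @ W.T / (1.0 + theta)

# Synthetic data generated by a ground-truth problem.
W_true, theta_true = np.array([[2.0, -1.0], [0.5, 1.5]]), 0.25
X = rng.normal(size=(100, 2))
Y = infer(W_true, theta_true, X)

# End-to-end training: gradient descent on the mean squared error.
W, theta, lr = np.eye(2), 1.0, 0.1
for _ in range(500):
    Z = infer(W, theta, X)
    R = Z - Y                                      # residuals
    grad_W = (R.T @ X) / (len(X) * (1.0 + theta))  # d(MSE/2)/dW
    grad_theta = np.mean(np.sum(R * (-Z / (1.0 + theta)), axis=1))
    W -= lr * grad_W
    theta = max(theta - lr * grad_theta, 0.0)      # keep regularizer weight >= 0

mse = float(np.mean((infer(W, theta, X) - Y) ** 2))
print(mse)  # should be near zero after training
```

Note the learned problem is the “right” one only through the loss: gradient descent tunes both the linear model W and the regularization weight theta so that solving the optimization problem reproduces the data.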
Check out more details and an inverse problem example in the slides below.
Cheers,
Howard