Neural nets with implicit layers

Also, declarative networks, bi-level optimization and other ingenious uses of the implicit function theorem

December 8, 2020 — June 28, 2023

dynamical systems
linear algebra
machine learning
neural nets
optimization
regression
sciml
signal processing
sparser than thou
statmech
stochastic processes
Figure 1: Yonina Eldar on model-based deep learning:

In our lab, we are working on model-based deep learning, where the design of learning-based algorithms is based on prior domain knowledge. This approach allows us to integrate models and other knowledge about the problem into both the architecture and training process of deep networks. This leads to efficient, high-performance, and yet interpretable neural networks which can be employed in a variety of tasks in signal and image processing. Model-based networks require far fewer parameters than their black-box counterparts, generalize better, and can be trained from much less data. In some cases, our networks are trained on a single image, or only on the input itself, so that effectively they are unsupervised.

1 Unrolling algorithms

Unrolling turns the iterations of an optimization algorithm into the layers of a neural network. This connects naturally to implicit NNs, where a layer is instead defined by the fixed point that the iterations converge to.

The classic is Gregor and LeCun (2010), and a number of others related to this idea appear intermittently (Adler and Öktem 2018; Borgerding and Schniter 2016; Sulam et al. 2020).
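To make the idea concrete, here is a minimal numpy sketch of unrolling ISTA for sparse coding, the starting point of LISTA (Gregor and LeCun 2010). The function names and the fixed, untrained weights are illustrative: LISTA would give each layer its own `W_e`, `S`, and `theta` and learn them from data, rather than deriving them from the dictionary `A`.

```python
import numpy as np

def soft_threshold(x, theta):
    # ISTA's proximal step doubles as the layer "activation".
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(A, y, n_layers=10, lam=0.1):
    """Unroll ISTA for min_z 0.5*||A z - y||^2 + lam*||z||_1.

    Each iteration becomes a 'layer' with weight matrices W_e, S and a
    soft-threshold nonlinearity -- exactly the parameters that LISTA
    would then train, with depth fixed to the iteration count.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    W_e = A.T / L                          # "encoder" weight
    S = np.eye(A.shape[1]) - A.T @ A / L   # "lateral" / recurrent weight
    theta = lam / L                        # per-layer threshold
    z = np.zeros(A.shape[1])
    for _ in range(n_layers):              # fixed depth = fixed iteration count
        z = soft_threshold(W_e @ y + S @ z, theta)
    return z
```

Note the trade that unrolling makes explicit: a fixed, finite depth replaces a run-to-convergence loop, and learning the layer weights compensates for the truncation.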

2 Incoming

3 References

Adler, and Öktem. 2018. “Learned Primal-Dual Reconstruction.” IEEE Transactions on Medical Imaging.
Banert, Rudzusika, Öktem, et al. 2021. “Accelerated Forward-Backward Optimization Using Deep Learning.” arXiv:2105.05210 [Math].
Borgerding, and Schniter. 2016. “Onsager-Corrected Deep Networks for Sparse Linear Inverse Problems.” arXiv:1612.01183 [Cs, Math].
Gregor, and LeCun. 2010. “Learning Fast Approximations of Sparse Coding.” In Proceedings of the 27th International Conference on Machine Learning (ICML-10).
———. 2011. “Efficient Learning of Sparse Invariant Representations.” arXiv:1105.5307 [Cs].
Monga, Li, and Eldar. 2021. “Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing.” IEEE Signal Processing Magazine.
Satorras, and Welling. 2021. “Neural Enhanced Belief Propagation on Factor Graphs.” In.
Shlezinger, Whang, Eldar, et al. 2021. “Model-Based Deep Learning: Key Approaches and Design Guidelines.” In 2021 IEEE Data Science and Learning Workshop (DSLW).
———. 2022. “Model-Based Deep Learning.”
Sulam, Aberdam, Beck, et al. 2020. “On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence.