1.7. scattering transform

papers on the scattering transform and related work that replaces learned filters with predefined, structured ones

  • some of the researchers involved

    • Edouard Oyallon, Joan Bruna, Stéphane Mallat, Helmut Bölcskei, Max Welling

1.7.1. goals

  • benefits

    • all filters are predefined analytically rather than learned

    • more interpretable

    • more biophysically plausible

  • scattering transform - computes a translation-invariant representation by cascading wavelet transforms and modulus pooling operators, which average the amplitude of iterated wavelet coefficients
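The cascade described above (wavelet transform → modulus → averaging) can be sketched in a few lines of NumPy. This is a minimal illustrative 1-D version, not a faithful reimplementation of Mallat's transform: the Gabor/Gaussian filters, scales, and normalizations are all placeholder choices.

```python
import numpy as np

def gabor(n, sigma, xi):
    """Complex Gabor (Morlet-like) band-pass filter of length n."""
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * xi * t)
    return g / np.abs(g).sum()

def gaussian(n, sigma):
    """Low-pass averaging filter (the phi of the scattering transform)."""
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def circ_conv(x, h):
    """Circular convolution via FFT; keeps shift-covariance exact."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(np.fft.ifftshift(h)))

def scattering1d(x, scales=(2, 4, 8), sigma_low=16):
    """First-order scattering: wavelet transform -> modulus -> average."""
    n = len(x)
    phi = gaussian(n, sigma_low)
    s0 = np.abs(circ_conv(x, phi))              # zeroth order: local average of x
    s1 = []
    for s in scales:
        psi = gabor(n, sigma=s, xi=np.pi / s)   # band-pass wavelet at scale s
        u = np.abs(circ_conv(x, psi))           # modulus "pooling"
        s1.append(np.abs(circ_conv(u, phi)))    # average the amplitude
    return s0, np.stack(s1)

x = np.random.default_rng(0).standard_normal(256)
s0, s1 = scattering1d(x)
```

The modulus discards phase (hence "pooling"), and the final low-pass average makes the coefficients locally translation invariant; because the convolutions here are circular, circularly shifting the input shifts the coefficients by exactly the same amount.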

1.7.3. initial papers (scattering + conv, neuro style)

  • papers by other groups: https://arxiv.org/pdf/1809.10504.pdf

1.7.4. helmut lab papers

  • Deep Convolutional Neural Networks Based on Semi-Discrete Frames (wiatowski et al. 2015)

    • allowing for different and, most importantly, general semidiscrete frames (such as, e.g., Gabor frames, wavelets, curvelets, shearlets, ridgelets) in distinct network layers

    • the resulting features are translation-invariant, with accompanying deformation stability results

  • wiatowski_18 “A mathematical theory of deep convolutional neural networks for feature extraction”

    • encompasses general convolutional transforms - general semi-discrete frames (including Weyl-Heisenberg filters, curvelets, shearlets, ridgelets, wavelets, and learned filters), general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), and general Lipschitz-continuous pooling operators emulating, e.g., sub-sampling and averaging

      • all of these elements can be different in different network layers.

    • translation invariance result of a “vertical” nature: features become progressively more translation-invariant with increasing network depth

    • deformation sensitivity bounds that apply to signal classes such as, e.g., band-limited functions, cartoon functions, and Lipschitz functions.
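The theory above only requires each nonlinearity to be Lipschitz continuous. The specific examples the paper lists (ReLU, modulus, tanh) are in fact 1-Lipschitz, which is easy to check empirically; the snippet below is an illustrative pointwise check, not part of the paper's formalism.

```python
import numpy as np

# Check |f(x) - f(y)| <= |x - y| pointwise for ReLU, tanh, and the modulus,
# i.e. that each nonlinearity is 1-Lipschitz on random samples.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(10_000), rng.standard_normal(10_000)

nonlinearities = {
    "relu": lambda t: np.maximum(t, 0.0),
    "tanh": np.tanh,
    "modulus": np.abs,
}
for name, f in nonlinearities.items():
    ratio = np.abs(f(x) - f(y)) / np.abs(x - y)
    print(name, ratio.max())  # each maximum ratio stays at or below 1
```

A sampled check like this is of course not a proof, but it matches the analytic Lipschitz constants (each of these functions has derivative bounded by 1 wherever it is differentiable).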

  • wiatowski_18 “Energy Propagation in Deep Convolutional Neural Networks”

1.7.5. nano papers

  • yu_06 “A Nanoengineering Approach to Regulate the Lateral Heterogeneity of Self-Assembled Monolayers”

    • regulate heterogeneity of self-assembled monolayers

      • used nanografting + self-assembly chemistry

  • bu_10 nanografting - makes more homogeneous morphology

  • fleming_09 “dendrimers”

    • scanning tunneling microscopy - provides the highest spatial resolution

    • but requires conducting samples, so alternatives are needed for insulators

  • lin_12_moire

    • probe the moiré effect with near-field scanning optical microscopy

  • chen_12_crystallization

l2 functions

  • an \(L^2\) function is a function \(f: X \to \mathbb{R}\) that is square integrable with respect to the measure \(\mu\): \(\|f\|_2^2 = \int_X |f|^2 \, d\mu < \infty\)

    • \(\|f\|_2\) is its \(L^2\)-norm

  • **measure** = a nonnegative real-valued function \(m\) on a \(\delta\)-ring \(F\) such that \(m(\emptyset) = 0\) and \(m\left(\bigcup_n A_n\right) = \sum_n m(A_n)\) for disjoint \(A_n \in F\)

  • Hilbert space \(H\): a vector space with an inner product \(\langle f, g \rangle\) such that the norm \(\|f\| = \sqrt{\langle f, f \rangle}\) turns \(H\) into a complete metric space
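As a quick numerical sanity check of the \(L^2\)-norm definition above (illustrative only; the test function and grid are arbitrary choices), one can approximate \(\|f\|_2\) for \(f(x) = e^{-x^2}\) on \(\mathbb{R}\), where analytically \(\|f\|_2^2 = \int e^{-2x^2}\,dx = \sqrt{\pi/2}\).

```python
import numpy as np

# f(x) = exp(-x^2) is square integrable on R:
# ||f||_2^2 = ∫ exp(-2 x^2) dx = sqrt(pi / 2)   (Gaussian integral)
x = np.linspace(-10.0, 10.0, 200_001)    # wide grid; the tails beyond are negligible
dx = x[1] - x[0]
f = np.exp(-x**2)
l2_norm = np.sqrt(np.sum(f**2) * dx)     # Riemann-sum approximation of the integral
exact = (np.pi / 2) ** 0.25
print(l2_norm, exact)                    # the two values agree closely
```
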

  • diffeomorphism is an isomorphism of smooth manifolds: an invertible function that maps one differentiable manifold to another such that both the function and its inverse are smooth

reversible/invertible models

1.7.6. unsupervised learning