Spectral Bottleneck in Deep Neural Networks: Noise is All You Need

Preprint under review | v1: 2025

Hemanth Chandravamsi, Dhanush V. Shenoy, Itay Zinn, Shimon Pisnoy, Steven H. Frankel

Technion - Israel Institute of Technology, Haifa, Israel, 3200003

Figure: SIREN vs. SIREN² noise scheme.
When we first started, our goal was to use deep neural networks to represent 1D and 2D fluid-flow data so that we could extract spatial gradients through auto-grad. We turned to sinusoidal representation networks (SIRENs), since they’re a popular choice for implicit representations of coordinate-based data. They worked well for 2D and 3D cases, but we quickly noticed that SIRENs struggle with long 1D signals that contain high-frequency content. Even after extensive hyperparameter tuning and increasing the parameter count, the reconstructions were often unsatisfactory. The issue turned out to be a 'spectral bottleneck': at initialization, the intermediate activations of the network don’t carry enough spectral power in the frequency range occupied by the target signal.

To fix this, we came up with WINNER (Weight Initialization with Noise for NEural Representations), the initialization scheme used inside SIREN². As shown in the figure above, WINNER adds Gaussian noise (ηⱼₖ⁰¹ and ηⱼₖ¹²) to the baseline uniformly initialized weights feeding the first and second hidden layers. This reshapes the pre-activation spectra of all the intermediate network quantities at initialization, boosting the spectral power of the activations. In practice, this simple change overcomes the bottleneck and gives us the reconstructions we were looking for, especially for 1D signals such as audio. WINNER works well for image and 3D data as well.
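To make the scheme concrete, here is a minimal PyTorch sketch of a WINNER-style initialization: standard SIREN uniform initialization, followed by an additive Gaussian perturbation on the weights feeding the first two hidden layers. The function name winner_init and the noise_std values below are illustrative placeholders, not the paper's exact implementation; in the actual method the per-layer noise scales are derived from the spectral centroid of the target signal.

import math
import torch
import torch.nn as nn

def winner_init(linear: nn.Linear, omega0: float = 30.0,
                noise_std: float = 0.0, is_first: bool = False) -> None:
    """Uniform SIREN initialization, optionally perturbed with Gaussian noise."""
    n_in = linear.in_features
    with torch.no_grad():
        if is_first:
            bound = 1.0 / n_in                      # SIREN bound for the first layer
        else:
            bound = math.sqrt(6.0 / n_in) / omega0  # SIREN bound for hidden layers
        linear.weight.uniform_(-bound, bound)       # baseline uniform initialization
        if noise_std > 0.0:
            # WINNER-style perturbation: add zero-mean Gaussian noise (the eta terms)
            linear.weight.add_(noise_std * torch.randn_like(linear.weight))

# Example: perturb only the weights feeding the first two hidden layers
# (0.05 is a made-up scale; the method sets it from the target's spectrum).
fc1 = nn.Linear(1, 256); winner_init(fc1, is_first=True, noise_std=0.05)
fc2 = nn.Linear(256, 256); winner_init(fc2, noise_std=0.05)
fc3 = nn.Linear(256, 256); winner_init(fc3)  # deeper layers keep the baseline init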


Abstract

Deep neural networks are known to exhibit a spectral learning bias, wherein low-frequency components are learned early in training, while high-frequency modes emerge more gradually in later epochs. However, when the target signal lacks low-frequency components and is dominated by broadband high frequencies, training suffers from a spectral bottleneck: the model fails to reconstruct the signal, including even the frequency components that lie within its representational capacity. We examine such a scenario in the context of implicit neural representations (INRs) with sinusoidal representation networks (SIRENs), focusing on the challenge of fitting high-frequency-dominant signals that are susceptible to a spectral bottleneck. To address this, we propose WINNER, a generalized target-aware weight perturbation scheme for network initialization. The scheme perturbs uniformly initialized weights with Gaussian noise, where the noise scales are adaptively determined by the spectral centroid of the target signal. We show that the noise scales provide control over the spectra of the network activations and the eigenbasis of the empirical neural tangent kernel. The method not only resolves the spectral bottleneck but also improves reconstruction accuracy, outperforming state-of-the-art approaches in audio fitting and achieving notable gains in image fitting and denoising. Beyond signal reconstruction, our approach opens new directions for adaptive initialization strategies in neural representation tasks.
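The adaptive part of the scheme hinges on the spectral centroid: the power-weighted mean frequency of the target's spectrum. Below is a minimal NumPy sketch of that quantity; the exact weighting and normalization used in the paper may differ.

import numpy as np

def spectral_centroid(signal: np.ndarray, sample_rate: float) -> float:
    """Power-weighted mean frequency (Hz) of a real 1D signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2                # one-sided power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)  # frequency bins in Hz
    # Small epsilon guards the all-zero-signal corner case.
    return float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))

A signal dominated by broadband high frequencies has a centroid near the upper end of its band; WINNER uses this value to set the per-layer noise scales at initialization.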

Bibtex


@misc{chandravamsi2025spectral,
  title          = {Spectral Bottleneck in Deep Neural Networks: Noise is All You Need},
  author         = {Chandravamsi, Hemanth and Shenoy, Dhanush V. and Zinn, Itay and Pisnoy, Shimon and Frankel, Steven H.},
  howpublished   = {arXiv preprint arXiv:2509.09719},
  eprint         = {2509.09719},
  archivePrefix  = {arXiv},
  year           = {2025},
  url            = {https://cfdlabtechnion.github.io/siren_square/}
}