Odd that the author didn’t try feeding a latent embedding to the standard neural network (or modulating the activations with a FiLM layer) instead of using static embeddings as the baseline. There’s no real advantage to using a hypernetwork: they tend to be unstable and difficult to train, and they scale poorly unless you train a low-rank adaptation.
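Something along these lines is what I mean; a minimal PyTorch sketch with illustrative names and sizes, not the article's code. A small conditioning network maps a per-dataset embedding to per-channel scale (gamma) and shift (beta) terms that modulate the backbone's hidden activations:

    import torch
    import torch.nn as nn

    class FiLMBlock(nn.Module):
        # FiLM-style conditioning: embedding e -> (gamma, beta) modulating hidden units
        def __init__(self, embed_dim: int, hidden_dim: int):
            super().__init__()
            self.fc = nn.Linear(hidden_dim, hidden_dim)
            self.film = nn.Linear(embed_dim, 2 * hidden_dim)  # produces gamma and beta

        def forward(self, x: torch.Tensor, e: torch.Tensor) -> torch.Tensor:
            gamma, beta = self.film(e).chunk(2, dim=-1)
            return torch.relu(gamma * self.fc(x) + beta)

    class FiLMNet(nn.Module):
        def __init__(self, in_dim=1, embed_dim=4, hidden_dim=64, out_dim=1):
            super().__init__()
            self.inp = nn.Linear(in_dim, hidden_dim)
            self.block = FiLMBlock(embed_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, out_dim)

        def forward(self, x, e):
            h = torch.relu(self.inp(x))
            h = self.block(h, e)
            return self.out(h)

    # usage: x is the regression input, e the learned per-dataset embedding
    net = FiLMNet()
    x = torch.randn(8, 1)
    e = torch.randn(8, 4)
    y = net(x, e)

The embedding can be a learned per-dataset vector trained jointly with the backbone, so you still get dataset-adaptive behavior without generating an entire weight tensor.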
mkmccjr|24 days ago
The idea behind hypernetworks is that they enable Gelman-style partial pooling by explicitly modeling the data-generating process while leveraging the flexibility of neural network tooling. I'd like to read more about your recommendations: their connection to the problems described here isn't immediately obvious to me, but I'm curious to dig a bit deeper.
I agree that hypernetworks come with challenges, largely due to the fragility of maximum likelihood estimates. In the follow-up post, I dug into how explicit Bayesian sampling addresses these issues.
mkmccjr|14 days ago
I admit the "static embedding" baseline is a bit of a strawman, but I used it to illustrate the specific failure mode of models that can't adapt at inference time.
I then used the Hypernetwork specifically to demonstrate a "dataset-adaptive" architecture as a stepping stone toward the next post in the series. My goal was to show how even a flexible parameter-generating model eventually hits a wall with out-of-sample stability; this sets the stage for the Bayesian Hierarchical approach I cover later on.
I wasn't familiar with the FiLM literature before your comment, but looking at it now, the connection is spot on. Functionally, it seems similar to what I did here: conditioning the network on an external variable. In my case, I wanted to explicitly model the mapping E->θ to see if the network could learn the underlying physics (Planck's law) purely from data.
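For concreteness, the E->θ part is roughly the sketch below (simplified, with illustrative layer sizes rather than my exact code): a small hypernetwork maps the dataset embedding E to the flat parameter vector θ of a tiny target network, which then maps the input (e.g. wavelength) to the predicted output for that dataset.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Target network parameter shapes (W1, b1, W2, b2), illustrative sizes
    TARGET_SHAPES = [(16, 1), (16,), (1, 16), (1,)]
    N_PARAMS = sum(torch.Size(s).numel() for s in TARGET_SHAPES)

    class HyperNet(nn.Module):
        # Maps a dataset embedding E to the flat parameter vector theta
        def __init__(self, embed_dim: int = 4, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(embed_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, N_PARAMS),
            )

        def forward(self, e: torch.Tensor) -> torch.Tensor:
            return self.net(e)

    def target_forward(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # Unpack flat theta into the target network's weights and run it
        chunks, offset = [], 0
        for shape in TARGET_SHAPES:
            n = torch.Size(shape).numel()
            chunks.append(theta[offset:offset + n].view(shape))
            offset += n
        w1, b1, w2, b2 = chunks
        h = F.relu(F.linear(x, w1, b1))
        return F.linear(h, w2, b2)

    # usage: one embedding per dataset, one shared hypernetwork across datasets
    hyper = HyperNet()
    e = torch.randn(4)                               # learned embedding for one dataset
    x = torch.linspace(0.1, 3.0, 32).unsqueeze(-1)   # e.g. a grid of wavelengths
    y_hat = target_forward(x, hyper(e))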
As for stability, you're right that hypernetworks can be tricky in high dimensions, but for this low-dimensional scalar problem (a 4D embedding), I found it converged reliably.
yobbo|23 days ago