item 30591892

manux | 4 years ago

Yeah the "net" in GFlowNet refers to how the underlying state is interpreted, not to an architecture. It is a way to train generative models, on any kind of discrete data (like graphs).

Source: am first author of original GFlowNet paper.
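(A minimal sketch of the idea the comment describes, not the paper's implementation: on a tree-shaped state space, backward induction over "flows" yields a step-by-step sampler whose terminal probabilities are proportional to reward, which is the property a trained GFlowNet approximates on general DAGs. The toy reward and two-bit strings are illustrative.)

```python
# Build binary strings one bit per step; sample each finished string x
# with probability proportional to reward(x).
from itertools import product

L = 2  # length of the generated strings (one action appends one bit)

def reward(x):
    # toy reward: 1 + number of ones in the string
    return 1 + x.count("1")

def children(s):
    # non-terminal state s can be extended by appending "0" or "1"
    return [s + "0", s + "1"]

def flow(s):
    # flow F(s): reward at terminal states, sum of child flows otherwise
    if len(s) == L:
        return reward(s)
    return sum(flow(c) for c in children(s))

Z = flow("")  # total flow out of the root = partition function

def terminal_prob(x):
    # forward policy P(s -> c) = F(c) / F(s); multiplying along the
    # trajectory gives P(x) = reward(x) / Z
    p, s = 1.0, ""
    for bit in x:
        c = s + bit
        p *= flow(c) / flow(s)
        s = c
    return p

probs = {"".join(b): terminal_prob("".join(b)) for b in product("01", repeat=L)}
```

Here the flows are computed exactly; a GFlowNet instead learns a parametric policy whose flows satisfy the same consistency condition, which is what lets it scale to large discrete spaces like graphs.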

discuss

order

mcbuilder | 4 years ago

Thanks for your comment! I originally finished my PhD in computational neuroscience and have been riding the deep learning wave for the past five years. I listened to the audio Machine Learning Street Talk episode on GFlowNets, and that was my general introduction. It wasn't until I looked at the image of the GFlowNet on your blog post that a connection to actually biologically plausible architectures became apparent. The firing of an individual neuron can be interpreted as a Poisson process, but with populations of neurons we would often approximate the firing rate of the entire population with a Gaussian distribution. That would be one patch of a cortical column, but these populations are linked together to form larger-scale functional brain networks. Anyway, thinking at that scale, you could imagine the neural activity communicated between these patches as a type of flow from one region to another. I think your paper raises interesting questions about how this inference engine in our brain might be doing some sort of MCMC-like sampling and constructing different belief hypotheses.
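(A quick illustration of the population-rate approximation mentioned above, with made-up numbers: the summed spike count of N independent Poisson neurons is itself Poisson with mean N·λ, and for a large mean a Poisson is well approximated by a Gaussian with equal mean and variance.)

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, trials = 500, 5.0, 20000  # hypothetical population size, rate, trials

# total spike count across the population on each trial
counts = rng.poisson(lam, size=(trials, N)).sum(axis=1)

mean, var = counts.mean(), counts.var()
# both sit near N * lam = 2500; mean == variance is the Poisson
# signature, and a histogram of `counts` looks Gaussian
```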

manux | 4 years ago

Yeah! This kind of reasoning is why it's exciting to think that we could use the GFlowNet machinery to construct latent representations--ones that map more directly to the notion of "beliefs" that we're used to thinking about as humans, something "discrete" and relational.

Concretely what this could mean is using these tools to generate causal hypotheses, like what's been done here: https://arxiv.org/abs/2202.13903