legothief | 3 years ago

That is true, but unfortunately, "out of the box" they're not well suited to just being "fed into" an NN. Even if you think of the adjacency matrix as very similar to how the weights are laid out in a feed-forward neural network, you can't ignore that:

- in real life, graphs are not fixed in size or structure

- you need to deal with the many different potential representations of the same graph (permutation invariance)

- nodes usually carry more features than a single scalar value

but this is definitely not the best explanation, I think this guy does a much better job: https://youtu.be/JtDgmmQ60x8
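To make the permutation point concrete, here's a minimal sketch (my own illustration, not from the linked video) of a single message-passing layer with sum aggregation. Relabeling the nodes permutes the adjacency matrix and the feature matrix consistently, and the layer's outputs permute the same way (permutation equivariance), which is how GNNs cope with the many representations of one graph:

```python
# Minimal sketch: a single GNN message-passing layer with sum aggregation.
# Shows that relabeling nodes just reorders the output rows.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_feats, n_hidden = 4, 3, 5
A = (rng.random((n_nodes, n_nodes)) < 0.5).astype(float)  # adjacency matrix
np.fill_diagonal(A, 0)                                    # no self-loops
X = rng.random((n_nodes, n_feats))                        # per-node feature vectors
W = rng.random((n_feats, n_hidden))                       # shared weight matrix

def gnn_layer(A, X, W):
    # Each node sums its neighbours' features, then applies shared weights + ReLU.
    return np.maximum(A @ X @ W, 0)

# Relabel the nodes with a random permutation matrix P.
perm = rng.permutation(n_nodes)
P = np.eye(n_nodes)[perm]

H = gnn_layer(A, X, W)
H_perm = gnn_layer(P @ A @ P.T, P @ X, W)

# Same rows, reordered: the layer is permutation-equivariant.
assert np.allclose(P @ H, H_perm)
```

Note that a plain feed-forward net applied to a flattened adjacency matrix has no such property: two orderings of the same graph would look like entirely different inputs.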

throwawaymaths | 3 years ago

Sure, but using GNNs to model neurons is nonsensical, since in a GNN the graph is the analyte of the network: you are not doing anything with the graph a priori. So in a sense my point is that if the goal is "to use NNs to model neurons", a GNN doesn't buy you anything, because the G in GNN isn't being subjected to dynamic activation.