top | item 38663288


potac | 2 years ago

Can anyone explain how conv works in that graph. You have a tensor of shape [2,4,16] and you convolve with a kernel of shape [4,16,8] and that gives you a [2,8] tensor? How's that possible?


phillengel | 2 years ago

Does this help?

*1. Input:*

* Tensor shape: [2,4,16]
* `2`: This is the *batch size*, meaning two independent samples are processed.
* `4`: This is the *sequence length*, the dimension the kernel slides over.
* `16`: This is the *input channel dimension*, meaning each position carries 16 channels of information.

*2. Kernel:*

* Shape: [4,16,8]
* `4`: This is the *kernel size*, meaning the filter window has a width of 4 — here equal to the full input length.
* `16`: This matches the *input channel dimension*, so the filter operates on the same number of channels as the input.
* `8`: This is the *output channel dimension*, meaning the convolution produces 8 new channels per sample.

*3. Output:*

* Shape: [2,8]
* `2`: The *batch size* is unchanged, as the operation is applied to each sample independently.
* `8`: This matches the *output channel dimension* of the kernel. Because the kernel width (4) equals the input length (4), there is only one valid window position, so the length dimension collapses to 1 and can be squeezed away, leaving [2,8].
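A minimal numpy sketch of the shape arithmetic above (assuming "valid" padding, a [length, in_channels, out_channels] kernel layout as in the graph, and that the length-1 axis is squeezed away):

```python
import numpy as np

# Input: [batch=2, length=4, in_channels=16]
x = np.random.randn(2, 4, 16)
# Kernel: [width=4, in_channels=16, out_channels=8]
k = np.random.randn(4, 16, 8)

# With kernel width == input length there is exactly one window position,
# so the convolution reduces to a single weighted sum over width and channels:
# out[b, o] = sum over w, c of x[b, w, c] * k[w, c, o]
out = np.einsum('bwc,wco->bo', x, k)
print(out.shape)  # (2, 8)
```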

*4. How is it possible?*

Despite the seemingly mismatched dimensions in the input and output, convolution on graphs works by leveraging the *neighborhood structure* of the graph. Here's a simplified explanation:

* The kernel slides across the graph, applying its weights to the features of the current node and its neighbors within a specific radius.
* This weighted sum is then aggregated to form a new feature for the current node in each output channel.
* As the kernel moves across the graph, it extracts information from the local neighborhood of each node, creating new features that capture relationships and patterns within the graph.

*Additional considerations:*

* The graph structure and edge weights likely play a role in how information propagates during the convolution process.
* Specific details of the convolution implementation, including padding and stride, might also influence the output shape.
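The effect of padding and stride on the output length follows the standard 1-D convolution formula; a small sketch (assuming "valid" padding, i.e. padding 0, for the shapes in question):

```python
def conv1d_out_len(length, kernel, stride=1, padding=0):
    """Standard 1-D convolution output-length formula."""
    return (length + 2 * padding - kernel) // stride + 1

# For the shapes in question: input length 4, kernel width 4, no padding.
print(conv1d_out_len(4, 4))  # 1 -> the length axis collapses, leaving [2, 8]
```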

potac | 2 years ago

Thanks. What was confusing me is the kernel size 4. Normally in (2D) convolutions the weight has shape (out_channels, in_channels, k, k) for a k×k kernel. In the example above, the k is the first dimension instead of the last. This is in PyTorch; not sure about Keras.
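A short numpy sketch of that layout difference (PyTorch's `Conv1d` stores its weight as (out_channels, in_channels, kernel_size), while the graph uses [kernel_size, in_channels, out_channels]; a transpose maps between them):

```python
import numpy as np

# Kernel in the graph's layout: [kernel_size=4, in_channels=16, out_channels=8]
k_graph = np.random.randn(4, 16, 8)

# The same weights in PyTorch's Conv1d layout:
# (out_channels, in_channels, kernel_size)
k_torch = k_graph.transpose(2, 1, 0)
print(k_torch.shape)  # (8, 16, 4)

# Both layouts describe the same convolution. Applied to x of shape
# [batch=2, length=4, in_channels=16], each yields the same [batch, out] result:
x = np.random.randn(2, 4, 16)
out_graph = np.einsum('bwc,wco->bo', x, k_graph)
out_torch = np.einsum('bwc,ocw->bo', x, k_torch)
print(np.allclose(out_graph, out_torch))  # True
```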