NaiveBayesian's comments

NaiveBayesian | 1 year ago | on: AI models collapse when trained on recursively generated data

I believe that counterexample only works in the limit as the sample size goes to infinity. Every finite sample will have μ ≠ 0 almost surely. (Of course μ will still tend to be very close to 0 for large samples, just slightly off.)

So the sequence of fitted means μₙ performs a kind of random walk: each generation's estimation error adds independent noise on top of the previous mean. Such a walk can stray arbitrarily far from 0 and is almost sure to eventually do so.
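This is easy to see in a toy simulation of the recursive-training setup (a minimal sketch, assuming the simplest case: fitting only the mean of a unit-variance Gaussian to a finite sample drawn from the previous generation's fit; `recursive_means` is a name invented for this illustration):

```python
import random

def recursive_means(n_samples, generations, seed=0):
    """Iterate mu_{t+1} = sample mean of n_samples draws from N(mu_t, 1).

    Each step adds independent estimation noise with variance
    1/n_samples, so the sequence of means is a random walk whose
    spread from 0 grows over generations.
    """
    rng = random.Random(seed)
    mu = 0.0
    means = [mu]
    for _ in range(generations):
        sample = [rng.gauss(mu, 1.0) for _ in range(n_samples)]
        mu = sum(sample) / n_samples  # almost surely != previous mu
        means.append(mu)
    return means

# After t generations the accumulated variance is roughly t/n_samples,
# so even a large per-generation sample only slows the drift.
means = recursive_means(n_samples=100, generations=1000)
```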

NaiveBayesian | 2 years ago | on: From Python to Elixir Machine Learning

If your data loading pipeline grows even slightly complex, then yes, you absolutely need concurrency in order to deliver your samples to the GPU fast enough.

The current workarounds to make this happen in Python are quite ugly imho. PyTorch spawns multiple Python processes and pushes data between them through shared memory, which incurs quite some overhead. TensorFlow, on the other hand, requires you to stick to its tensor DSL so that the loading code can run inside its graph engine. If native concurrency were a thing, data loading would be much more straightforward to implement, without such hacks.
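For contrast, the basic pattern those frameworks are working around is just a background producer feeding a bounded queue. A minimal in-process sketch (hypothetical helper, not PyTorch's or TensorFlow's actual implementation; it uses a thread, which helps for IO-bound loading but runs into the GIL for CPU-bound decoding, which is exactly why the multi-process hacks exist):

```python
import queue
import threading

def prefetching_loader(dataset, load_fn, buffer_size=4):
    """Yield load_fn(item) for each item, loading ahead in a worker thread.

    A bounded queue backpressures the worker so it stays at most
    buffer_size samples ahead of the consumer.
    """
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking end of the dataset

    def worker():
        for item in dataset:
            q.put(load_fn(item))
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = q.get()
        if batch is done:
            return
        yield batch

# Consume samples while the worker thread loads the next ones.
batches = list(prefetching_loader(range(10), load_fn=lambda i: i * i))
```

With true native concurrency this pattern would also scale to CPU-bound `load_fn`s without serializing data across process boundaries.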
