cuttothechase | 5 months ago

Would having a Markov chain of Markov chains help in this situation? One chain handles the case where 2D bitmap patterns are vertical, and another handles left-to-right patterns?

AnotherGoodName | 5 months ago

Yes, and then you weight between them with a neural network, and your vertical predictor catches that every second vertical line is solid (while every other vertical line is static, to mess up the horizontal Markov chains). Of course, then someone passes you video where there's a 3rd dimension, and you need to customise yet again for that. Or maybe the pattern is in 45-degree diagonal lines, not horizontal or vertical. Better have a Markov chain for that too. What about 10-degree lines? Etc.

In the end you're inputting millions of ways there could be a pattern, passing all of those into a neural network, and weighting the chains that make correct predictions more heavily.
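To make the "weight between them" idea concrete, here's a minimal sketch (all names invented, not from anyone's actual code): two first-order Markov chains over a binary bitmap, one conditioning on the left neighbour and one on the pixel above, mixed by a single scalar weight fitted by grid search over log-likelihood. A real system would learn the weights with a network, as described above; grid search stands in for that here.

```python
import math

def train_chain(bitmap, direction):
    """Return P(pixel=1 | previous pixel) with Laplace smoothing."""
    counts = {0: [1, 1], 1: [1, 1]}  # counts[prev] = [#next=0, #next=1]
    for r, row in enumerate(bitmap):
        for c, pixel in enumerate(row):
            if direction == "horizontal" and c > 0:
                prev = row[c - 1]
            elif direction == "vertical" and r > 0:
                prev = bitmap[r - 1][c]
            else:
                continue
            counts[prev][pixel] += 1
    return {p: counts[p][1] / sum(counts[p]) for p in counts}

def predict(h_chain, v_chain, left, up, weight):
    """Mixed P(pixel=1): `weight` on the horizontal chain, rest on vertical."""
    return weight * h_chain[left] + (1 - weight) * v_chain[up]

def fit_weight(bitmap, h_chain, v_chain):
    """Grid-search the mixing weight that maximises log-likelihood."""
    best_w, best_ll = 0.0, float("-inf")
    for step in range(11):
        wt = step / 10
        ll = 0.0
        for r in range(1, len(bitmap)):
            for c in range(1, len(bitmap[r])):
                p1 = predict(h_chain, v_chain,
                             bitmap[r][c - 1], bitmap[r - 1][c], wt)
                ll += math.log(p1 if bitmap[r][c] == 1 else 1 - p1)
        if ll > best_ll:
            best_ll, best_w = ll, wt
    return best_w

# A bitmap of solid vertical stripes: the pixel above fully determines each
# pixel, while the pixel to the left carries little information.
cols = [0, 1, 1, 0, 1, 0, 0, 1]
bitmap = [cols[:] for _ in range(8)]
h_chain = train_chain(bitmap, "horizontal")
v_chain = train_chain(bitmap, "vertical")
best = fit_weight(bitmap, h_chain, v_chain)  # weight shifts toward vertical
```

On this stripe pattern the fitted weight collapses onto the vertical chain, which is exactly the "weight the chains that predict well" behaviour; swap in a row-striped bitmap and it goes the other way.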

You start to realize that even with all these chains, past context could still influence the current prediction in ways you haven't enumerated, and what you really want is a generator for all the ways there could be a pattern. At this point you're getting into the realm of multilayer neural networks and starting to consider the attention mechanism.

I don't want to discourage anyone from learning Markov chains here, btw. It's just that they have limitations, and those limitations actually make for a great learning journey toward neural networks: you realize you really need more than a single discrete state being active at a time, and then you start to think about how all the states activated in the past might influence the current probabilities (essentially, you start thinking about the problem the attention mechanism solves).
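A toy contrast of the two ideas (illustrative only, all names invented): a first-order Markov chain conditions on just the most recent symbol, while an attention-like predictor lets every past position vote, weighted by how similar its symbol is to the current one. Real attention uses learned query/key/value projections; fixed 1-d "embeddings" stand in here purely to show the shape of the idea.

```python
import math

def markov_next_p1(transitions, history):
    """P(next=1) from a first-order chain: only history[-1] matters."""
    return transitions[history[-1]]

def attention_next_p1(embed, history):
    """P(next=1) by softmax-weighting every past position's 'vote'."""
    query = embed[history[-1]]
    scores = [query * embed[s] for s in history[:-1]]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    # position i votes for the symbol that followed it, history[i + 1]
    return sum(e / z for i, e in enumerate(exps) if history[i + 1] == 1)

# In the alternating sequence 1,0,1,0,... every past position holding a 1
# (similar to the current symbol) saw a 0 next, so the attention-style
# estimate of "next is 1" comes out low.
p1 = attention_next_p1({0: -1.0, 1: 1.0}, [1, 0, 1, 0, 1, 0, 1])
```

The point of the contrast: the Markov predictor is a fixed lookup on the last state, while the attention-style one reads the whole history every time, which is the "all the states activated in the past" framing above.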