
toto444 | 3 years ago

When it comes to ML, scalability is a constraint, not a goal. The goal is to minimize some loss function, and it turns out a simple dot product can be outperformed by more complex algorithms.

I remember reading a few years ago that most search engines use some tree-based model. If that's the case, the idea of monotonic linear weights is not relevant.
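To illustrate the point about tree-based rankers: a boosted ensemble sums the outputs of many small trees, so the score need not rise monotonically with any feature the way a fixed linear weight would. Here's a toy two-stump ensemble in Python (thresholds and leaf values are made up for illustration, not any real engine's model):

```python
def stump(x, threshold, left, right):
    """A depth-1 decision tree on a single feature."""
    return left if x < threshold else right

def ensemble_score(x):
    """Sum of two stumps -- the additive form GBDT-style rankers use."""
    return stump(x, 0.5, 0.0, 2.0) + stump(x, 0.8, 1.0, -3.0)

# Increasing the feature does not monotonically increase the score:
print(ensemble_score(0.4))  # 1.0
print(ensemble_score(0.6))  # 3.0
print(ensemble_score(0.9))  # -1.0
```

The score goes up and then back down as the feature grows, which is exactly the behavior a single monotonic linear weight can't express.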


h0l0cube | 3 years ago

Can you be more specific? A dot product is about as performant as it gets, with linear memory access and SIMD multiply-accumulate. Throw random memory access and flow control in there and it's a struggle to do it faster. Unless the factors are sparse, in which case just elide the zero values.
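A minimal sketch of the "elide the zero values" point, using a dict-of-nonzeros representation (an assumption for illustration, not a specific library's format): only indices where both vectors are nonzero contribute, so the work is proportional to the smaller nonzero count rather than the full dimension.

```python
def sparse_dot(a, b):
    """Dot product of two sparse vectors stored as {index: value} dicts.
    Zeros are never stored, so they are never multiplied.
    Iterating the smaller dict keeps work O(min(nnz_a, nnz_b))."""
    if len(a) > len(b):
        a, b = b, a  # iterate the vector with fewer nonzeros
    return sum(v * b[i] for i, v in a.items() if i in b)

# Only the overlapping nonzero indices (5 and 9) contribute:
a = {0: 1.0, 5: 2.0, 9: 3.0}
b = {5: 4.0, 9: 1.0, 100: 7.0}
print(sparse_dot(a, b))  # 11.0
```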

ethbr0 | 3 years ago

> scalability is a constraint not a goal. The goal is to minimize some loss function