top | item 8155724

elzr | 11 years ago

Hash functions impose practical mappings between unlike domains. When volatile (~1-to-1) identifying (crypto), when correlated (~n-to-n) perceiving (AI).

(Tweet summary of the article. I'm becoming increasingly fascinated by hash functions. I'm finding this tension between abstraction/correlation/perception & groundedness/volatility/identification all-important.)

https://twitter.com/elzr/status/497893639000190976

morsch|11 years ago

That is a remarkable way to put it. But aren't hash functions -- by definition -- n-to-1? That seems to be what the perceiving function is about: map multiple sensations into a single correlated perception.

elzr|11 years ago

Thanks for pointing it out, you're right! I rewrote the summary (the original is still at the linked tweet). So it's ~1-to-1 (usually -- that's what I mean by "practical") when it's volatile. And I'm calling it ~n-to-n (again, usually, practically) when it's correlated: mapping ~n correlations in the source domain to ~n correlations in the destination.
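The volatile vs. correlated distinction can be made concrete in code: a cryptographic hash like SHA-256 scatters nearby inputs to unrelated digests (good for identification), while a locality-sensitive scheme like SimHash maps similar inputs to outputs that share most of their bits (good for perceiving similarity). A toy sketch -- the character-trigram features and the 32-bit width are illustrative choices, not anything from the thread:

```python
import hashlib

def crypto_fingerprint(s: str) -> str:
    # "Volatile" mapping: flipping one input bit yields an
    # unrelated digest, so the hash works as an identifier.
    return hashlib.sha256(s.encode()).hexdigest()

def simhash(s: str, bits: int = 32) -> int:
    # "Correlated" mapping (SimHash over character trigrams):
    # each trigram votes +1/-1 on every output bit, so inputs
    # sharing most trigrams end up sharing most output bits.
    grams = [s[i:i + 3] for i in range(max(1, len(s) - 2))]
    counts = [0] * bits
    for g in grams:
        h = int(hashlib.md5(g.encode()).hexdigest(), 16)
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if counts[b] > 0)

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")
```

With this, two sentences differing in one word get SHA-256 digests that are completely unrelated, while their SimHash values differ in only a few of the 32 bits -- far fewer than for two unrelated strings.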

To my mind, an ~n-to-1 mapping would actually be a bit more like idealistic Platonic classifying than Wittgensteinian perception (~"we spin perceptual threads by twisting n attributes like fiber on fiber. And the strength of the thread does not reside in the fact that some 1 fiber runs through its whole length, but in the overlapping of the fibers.").

What do you think? :)