austinpilot's comments
austinpilot | 10 months ago | on: Harnessing the Universal Geometry of Embeddings
The goal here is harder: given an embedding of an unknown text in one space, generate a vector in another space that's close to the embedding of the same text -- but, unlike in the word alignment problem, the texts are not known in advance.
Neither unsupervised transport nor optimal assignment can solve this problem: their input sets must be embeddings of the same texts, whereas here the input sets are embeddings of different texts.
FWIW, this is all explained in the paper, including in the abstract. The comparisons with optimal assignment explicitly note that it is an idealized pseudo-baseline; in reality OA cannot be used for embedding translation (as opposed to matching, alignment, correspondence, etc.)
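The distinction can be made concrete with a toy sketch (a hypothetical setup for illustration, not the paper's method): matching works when both sides embed the same texts, but a new text's target vector simply isn't in the candidate set, so it must be generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration only): "space B" is a random
# rotation of "space A", and both sides embed the SAME five texts.
A = rng.normal(size=(5, 8))
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # random orthogonal map
B = A @ R

# Matching/alignment: nearest neighbor under the map recovers the
# one-to-one correspondence, because every target vector is present.
dists = np.linalg.norm((A @ R)[:, None, :] - B[None, :, :], axis=-1)
print(np.argmin(dists, axis=1))   # [0 1 2 3 4]

# Translation: the space-B embedding of a NEW text is not in B at all,
# so no assignment over B can produce it -- the vector must be generated.
new_text_emb_A = rng.normal(size=8)
target_B = new_text_emb_A @ R      # what a translator must produce
best_match = B[np.argmin(np.linalg.norm(B - target_B, axis=-1))]
# The closest existing vector is still the wrong text's embedding.
```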
austinpilot | 13 years ago | on: The most dangerous code in the world
https://docs.google.com/document/pub?id=1roBIeSJsYq3Ntpf6N0P...
For example, broken SSL certificate validation in Amazon FPS SDKs allows a MITM attacker to forge instant payment notifications and defraud merchants who use the vulnerable SDKs. This is a real vulnerability, acknowledged by Amazon.
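The class of bug the paper catalogs can be sketched with Python's ssl module (an illustration of the vulnerable pattern, not Amazon's actual SDK code):

```python
import ssl

# Vulnerable pattern: verification is switched off, so ANY certificate --
# including a MITM attacker's self-signed one -- is accepted.
broken = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
broken.check_hostname = False          # must be disabled before CERT_NONE
broken.verify_mode = ssl.CERT_NONE     # no chain validation at all

# Correct pattern: the default context validates the certificate chain
# against trusted CAs AND checks that the hostname matches.
safe = ssl.create_default_context()
```

Non-browser SDKs of that era often shipped the first pattern (or its curl/PHP equivalent) as the default, which is exactly what makes the forged-notification attack possible.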