top | item 19210463

saternius | 7 years ago

It uses a deep sequence-to-sequence model, and the drop-down thesaurus suggestions come from phrase embeddings trained on large corpora like Reddit and Wikipedia.
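A drop-down thesaurus backed by phrase embeddings can be sketched as a nearest-neighbor lookup under cosine similarity. This is a minimal illustration with made-up vectors and phrases; the comment above doesn't describe the actual embedding model or lookup, so everything here (the `EMBEDDINGS` table, the `thesaurus` helper) is hypothetical:

```python
import math

# Hypothetical phrase embeddings; a real system would train these on
# large corpora (e.g. Reddit, Wikipedia) rather than hand-pick them.
EMBEDDINGS = {
    "happy":   [0.90, 0.10, 0.00],
    "joyful":  [0.85, 0.15, 0.05],
    "sad":     [-0.80, 0.20, 0.10],
    "car":     [0.00, 0.90, -0.40],
    "vehicle": [0.05, 0.85, -0.35],
}

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def thesaurus(phrase, k=3):
    """Return the k phrases whose embeddings are closest to `phrase`."""
    query = EMBEDDINGS[phrase]
    candidates = [(p, cosine(query, v))
                  for p, v in EMBEDDINGS.items() if p != phrase]
    candidates.sort(key=lambda pv: pv[1], reverse=True)
    return [p for p, _ in candidates[:k]]

print(thesaurus("happy", k=1))  # → ['joyful']
```

With real embeddings the candidate set would be large, so production systems typically use an approximate nearest-neighbor index instead of the exhaustive scan shown here.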

lumost | 7 years ago

What did you use for the parallel data? The paraphrases are much better than an auto-encoder's.