AllenNLP started before transformers, and so it provided high-level abstractions for experimenting with model architectures, which is where much of NLP research was happening at the time. Transformers definitely changed the playing field, becoming the basis for most models!
HuggingFace Transformers is a huge, high-quality open-source repo of pre-trained models and associated code. People combine it with PyTorch Lightning or fairseq most of the time, afaik.
It depends on what you use AllenNLP for. AllenNLP has a ton of functionality for vectorizing text. Most of the tokenizer/indexer/embedder stuff is about that. But these days we all use transformers for that, so there isn't much need to experiment with ways to vectorize text anymore.
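For concreteness, here is a minimal sketch of the "just use transformers to vectorize" workflow, using Hugging Face's `AutoTokenizer`/`AutoModel` (the model name is just a common example, not something the comment specifies):

```python
# Sketch: one pre-trained model replaces AllenNLP's tokenizer/indexer/embedder stack.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("AllenNLP made vectorizing text configurable.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: shape (batch, num_tokens, hidden_size).
token_vectors = outputs.last_hidden_state
```

The same few lines cover what used to require choosing and configuring several AllenNLP components.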
If you like the trainer, the configuration language, or some of the other components, you should check out Tango (https://github.com/allenai/tango). One of Tango's origins is the question: "What if AllenNLP supported workflow steps other than read -> train -> evaluate?" We noticed that a lot of work in NLP no longer fits that simple pattern, so we needed a new tool that could support more complex experiments.
If you like the metrics, try torchmetrics. It has almost exactly the same API as AllenNLP's metrics.
If you like any of the nn components, please get in touch with the Tango team (on GitHub). We recently had some discussion around rescuing a few of those, since there seems to be some excitement.