The problem with this approach to text generation is that it's still not flexible enough: if, during inference, the model changes its mind and wants to output something considerably different, it can't, because too many tokens are already locked in place.
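The "locked in place" point can be seen in a minimal sketch of greedy autoregressive decoding (a toy stand-in model, all names hypothetical, not any specific architecture): each step conditions only on the existing prefix and appends one token, and no later step ever revisits an earlier position.

```python
# Toy sketch: greedy autoregressive decoding with an illustrative
# stand-in "model". Real models return a distribution over a vocabulary;
# here we just need something deterministic to show the control flow.

def toy_model(prefix):
    # Hypothetical next-token rule standing in for argmax over logits.
    return (prefix[-1] + 1) % 50 if prefix else 0

def generate(n_tokens):
    tokens = []
    for _ in range(n_tokens):
        nxt = toy_model(tokens)   # conditions only on the prefix so far
        tokens.append(nxt)        # committed: nothing ever rewrites it
    return tokens

print(generate(5))
```

Once a token is appended, the loop offers no way to go back and edit it; a decoder that could "change its mind" would need some mechanism for revising earlier positions, which this generation scheme simply doesn't have.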