TarqDirtyToMe | 2 years ago
I don’t think this model’s use of alignment implies any sort of censorship; it’s just being tuned to accomplish the task of outputting only the tokens that are important for the target LLM
smeagull|2 years ago
TarqDirtyToMe|2 years ago
> The potential harmful, false or biased responses using the compressed prompts would likely be unchanged. Thus using LLMLingua has no inherent benefits or risks when it comes to those types of responsible AI issues.
behnamoh|2 years ago
[deleted]
TarqDirtyToMe|2 years ago
In the paper, “distribution alignment” is one of the methods used to improve the results of compression so that intent is preserved:
> To narrow the gap between the distribution of the LLM and that of the small language model used for prompt compression, here we align the two distributions via instruction tuning
So in this paper, “alignment” seems to be used in a very specific way that doesn’t appear related to censorship.
Edit: would love to hear from someone who has a better understanding of the paper to clarify. I am operating from the position of a layman here
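The “keep only the important tokens” idea being discussed can be sketched in a few lines. This is a toy illustration, not LLMLingua’s actual implementation: it assumes per-token log-probabilities have already been computed by a small language model, and `compress_prompt` plus all the example values are hypothetical. Tokens the small LM finds surprising (low probability) carry more information, so they are the ones kept:

```python
def compress_prompt(tokens, logprobs, keep_ratio=0.5):
    """Hypothetical sketch: keep the most informative fraction of tokens.

    `logprobs[i]` is assumed to be the log-probability a small LM assigned
    to `tokens[i]`. High surprisal (-log p) = more informative = kept.
    In the paper, the small LM is instruction-tuned so its distribution
    aligns with the target LLM's.
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    surprisal = [-lp for lp in logprobs]
    # Pick the n_keep highest-surprisal positions, then restore original order.
    ranked = sorted(range(len(tokens)), key=lambda i: surprisal[i], reverse=True)
    keep_idx = sorted(ranked[:n_keep])
    return [tokens[i] for i in keep_idx]

# Toy example with made-up log-probabilities: filler words like "Please"
# and "the" are highly predictable, so they are dropped first.
tokens = ["Please", "kindly", "summarize", "the", "attached", "report"]
logprobs = [-1.0, -0.5, -4.0, -0.2, -3.5, -3.8]
print(compress_prompt(tokens, logprobs, keep_ratio=0.5))
# → ['summarize', 'attached', 'report']
```

Because the compression only filters tokens and never rewrites them, whatever harmful or biased content survives the filter passes through unchanged, which matches the responsible-AI quote above.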
ryanklee|2 years ago