TarqDirtyToMe | 2 years ago

My understanding is that in an academic context you’ll hear “alignment” any time a model is tuned to accomplish a certain task, not just when it’s being steered toward a particular political affiliation or idea of ethics.

I don’t think this paper’s use of “alignment” implies any sort of censorship; the small model is just being tuned to accomplish the task of outputting only the important tokens for the target LLM.
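
Roughly, the idea (as I understand it; this is just a sketch, not the paper’s code) is to use a small causal LM to score how surprising each token is and keep only the high-information ones. The model name and keep ratio below are placeholders:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small scoring model; the paper uses its own aligned small LM.
    # gpt2 here is only a stand-in for illustration.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def compress(prompt: str, keep_ratio: float = 0.5) -> str:
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        # Per-token negative log-likelihood given the prefix:
        # a surprising (high-NLL) token carries more information.
        nll = torch.nn.functional.cross_entropy(
            logits[0, :-1], ids[0, 1:], reduction="none"
        )
        k = max(1, int(keep_ratio * nll.numel()))
        keep = (torch.topk(nll, k).indices + 1).sort().values  # +1: realign with ids
        return tok.decode(ids[0, keep])

    print(compress("Please kindly summarize the following report ...", 0.5))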

smeagull | 2 years ago

In my experience it means the AI will waste tokens apologizing for its shortcomings and ignoring task prompts in favour of its alignment.

TarqDirtyToMe | 2 years ago

This does not seem relevant to the alignment discussed in the paper. It seems to be explicitly out of scope:

> The potential harmful, false or biased responses using the compressed prompts would likely be unchanged. Thus using LLMLingua has no inherent benefits or risks when it comes to those types of responsible AI issues.

behnamoh | 2 years ago

[deleted]

TarqDirtyToMe | 2 years ago

I’m really not all that familiar with the space, so I could be mistaken. Wikipedia’s definition of AI alignment says an aligned model is one that “advances intended objectives”.

In the paper, “distribution alignment” is one of the methods used to improve the results of compression so that intent is preserved:

> To narrow the gap between the distribution of the LLM and that of the small language model used for prompt compression, here we align the two distributions via instruction tuning

So in any case, for this paper “alignment” seems to be used in a very specific way that doesn’t seem related to censorship.
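
If I’m reading that right, “aligning the distributions” just means fine-tuning the small compressor LM on text produced by the target LLM, so the two models agree on which tokens matter. A rough sketch of that kind of instruction tuning (the dataset, model names, and hyperparameters here are made up, not from the paper):

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token
    small_lm = AutoModelForCausalLM.from_pretrained("gpt2")

    # Placeholder pairs: in the paper's setup the responses would come
    # from the target LLM, so the small LM learns to mimic its distribution.
    pairs = [{"text": "Instruction: summarize the meeting notes.\n"
                      "Response: The team agreed to ship on Friday."}]
    ds = Dataset.from_list(pairs).map(
        lambda ex: tok(ex["text"], truncation=True, max_length=512)
    )

    trainer = Trainer(
        model=small_lm,
        args=TrainingArguments(output_dir="aligned-small-lm",
                               per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()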

Edit: would love to hear from someone who has a better understanding of the paper to clarify. I am operating from the position of a layman here.

ryanklee | 2 years ago

It is common, standard usage precisely in this context.