> Our study finds that the politeness of prompts can significantly affect LLM
> performance, a phenomenon thought to mirror human social behavior. Impolite
> prompts can degrade LLM performance, potentially leading to increased bias,
> incorrect answers, or refusals to answer. However, highly respectful prompts
> do not always produce better results. Under most conditions, moderate
> politeness works best, but what counts as moderate varies by language and by
> LLM. In particular, models trained predominantly on a specific language are
> especially sensitive to politeness cues in that language. This suggests that
> cultural background should be considered during the development and corpus
> collection of LLMs.