item 46301035 (no title)

nycdatasci | 2 months ago
There are no visible watermarks, but model makers can use steganographic codes to identify outputs from their own models.

nycdatasci | 2 months ago
Text-to-Image Models Leave Identifiable Signatures: Implications for Leaderboard Security
https://arxiv.org/pdf/2510.06525

encroach | 2 months ago
This is true; however, LMArena does employ some methods to mitigate attempts to manipulate the leaderboard, see https://openreview.net/forum?id=zf9zwCRKyP
They also control for style: https://news.lmarena.ai/sentiment-control/
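The kind of steganographic tagging the top comment describes can be sketched with a toy least-significant-bit scheme: hide a short model identifier in the low bits of pixel data, then read it back. This is purely illustrative; the function names and the flat-list "image" are hypothetical, and real provenance watermarks use far more robust encodings (spread-spectrum, frequency-domain) that survive compression and cropping.

```python
def embed_id(pixels, model_id):
    """Write model_id's bits (LSB-first per byte) into the low bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in model_id.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the payload bit
    return out

def extract_id(pixels, length):
    """Recover `length` bytes of identifier from the pixel LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(value)
    return data.decode()

image = [128] * 64            # stand-in for flattened 8-bit pixel data
tagged = embed_id(image, "gpt-x")
assert extract_id(tagged, 5) == "gpt-x"
```

Naive LSB marks like this are trivially destroyed by re-encoding, which is why the linked paper's point is more interesting: models leave *unintentional* statistical signatures that classifiers can pick up even without any deliberate embedding.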