top | item 40002528


haxel | 1 year ago

I use Mistral (7B v0.2 instruct, 6-bit quantized) to generate the title-text for messages that I send to myself via a Discord bot.

Right now, I'm prompting Mistral to generate these titles in "clickbait" style. I fold the topic of the message and other context into the prompt.
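To make "folding the topic and context into the prompt" concrete, here's a minimal sketch using Mistral's `[INST] ... [/INST]` instruct format. The wording and function names are my own illustration, not the poster's actual prompt:

```python
# Hypothetical prompt builder: wraps the title request in Mistral's
# instruct format and folds in the message's topic and extra context.
def build_title_prompt(topic: str, context: str) -> str:
    return (
        "[INST] Write a clickbait-style title for a reminder message.\n"
        f"Topic: {topic}\n"
        f"Context: {context}\n"
        "Respond with the title only. [/INST]"
    )

prompt = build_title_prompt("water the garden", "last watered 3 days ago")
print(prompt)
```

The resulting string is what gets handed to llama.cpp for completion.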

My intention is for the message to pull my attention, and from there redirect it to something else I need to do, because I tend to over-focus on whatever I'm doing at the moment.

It doesn't matter whether what I'm doing at the moment is "good" or "bad". Probabilistically, I should almost always switch my attention when such a message arrives, because I should have switched an hour ago.

To guarantee consistent JSON output, I use a llama.cpp grammar (converted from a JSON schema).
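For readers unfamiliar with the technique: llama.cpp can turn a JSON schema into a GBNF grammar (via its `json_schema_to_grammar` tooling) so that decoding can only ever emit JSON matching the schema. The schema below is a guess at what the payload might look like; the poster's actual schema isn't shown:

```python
import json

# Hypothetical schema for the bot message payload.
TITLE_SCHEMA = {
    "type": "object",
    "properties": {"title": {"type": "string"}},
    "required": ["title"],
}

def parse_title(raw: str) -> str:
    """Parse grammar-constrained model output and extract the title.

    Because generation is constrained by the grammar, json.loads should
    never fail in practice; the required-key check is belt-and-braces.
    """
    obj = json.loads(raw)
    for key in TITLE_SCHEMA["required"]:
        if key not in obj:
            raise ValueError(f"missing required key: {key}")
    return obj["title"]

title_text = parse_title('{"title": "You Will Not Believe What You Forgot"}')
print(title_text)
```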

Generation runs on the CPU (a Ryzen 5800) because it's an async background operation, and also because my GTX 1070 is busy running Stable Diffusion XL Turbo to generate the image that accompanies the message.
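The "async background operation" part can be sketched with asyncio: push the blocking CPU inference into a worker thread so the bot's event loop stays responsive. `generate_title` below is a placeholder for the real llama.cpp call; the bot's actual structure is an assumption:

```python
import asyncio

def generate_title(topic: str) -> str:
    # Placeholder for the blocking llama.cpp CPU inference.
    return f"This One Trick About {topic} Will Shock You"

async def make_reminder_title(topic: str) -> str:
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the Discord
    # bot's event loop keeps handling other events during generation.
    return await loop.run_in_executor(None, generate_title, topic)

title = asyncio.run(make_reminder_title("taking a break"))
print(title)
```

The same pattern works inside a discord.py handler, since the library is asyncio-based.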


Tepix | 1 year ago

If you send yourself messages via a Discord bot, you lose the privacy advantage of running an LLM locally.

Discord does not have end-to-end encryption for messages.

haxel | 1 year ago

Indeed. Currently, my primary concerns are a) surprise, b) accessibility, c) efficiency, and d) self-containment.

Surprise, because that draws my attention better. Not interested in guardrails here.

Accessibility, because I can involve my sons without friction.

Efficiency, because using Discord lets me skip building or finding a component (for now).

And I still get a degree of self-containment because Discord is the only piece I'll need to swap out. Bonus that it doesn't have a recurring cost until then.

Yet privacy still matters to me. Even though the messages pass through Discord, the detailed personal context within the prompts remains private, as does the data that determines the timing and topic of each message.