thepasswordapp | 3 months ago
The enjoyment factor is real. The iteration speed with Claude Code is insane. But the model's suggestions still need guardrails.
For security-focused apps especially, you can't just accept what the LLM generates. We spent weeks ensuring passwords never touch the LLM context - that's not something a vibe-coded solution catches by default.
The productivity gains are real, but so is the need for human oversight on the security-critical parts.
000ooo000 | 3 months ago
thepasswordapp | 2 months ago
Two clarifications:
1. We don't ask for your current passwords. The app imports your CSV from your existing password manager (1Password, Bitwarden, etc.), which you already trust with your credentials. We automate the change process - you provide the new passwords you want.
2. Zero passwords leave your machine. The app runs locally. Browser automation happens in a local Playwright instance. The AI (GPT-5-mini via OpenRouter) only sees page structure, never credential values. Passwords are passed to forms via a separate injection mechanism that's invisible to the LLM context.
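To make that injection mechanism concrete, here's a minimal sketch of the general pattern (all names here - `SECRETS`, `placeholder`, `resolve_secrets` - are illustrative, not our actual code): the model only ever sees an opaque token, and a local layer substitutes the real value immediately before the browser fill.

```python
# Illustrative sketch of LLM-invisible credential injection (assumed names,
# not the app's real code). The model plans actions using opaque placeholder
# tokens; a local layer swaps in real values only at fill time.

SECRETS = {"new_password": "s3cr3t-rotated"}  # lives only on the local machine

def placeholder(name: str) -> str:
    """Token the LLM is allowed to see and emit in its planned actions."""
    return f"<secret:{name}>"

def resolve_secrets(text: str) -> str:
    """Substitute real values just before the browser fill (e.g. Playwright's
    page.fill in the real flow). Runs locally, outside the LLM context."""
    for name, value in SECRETS.items():
        text = text.replace(placeholder(name), value)
    return text

# The prompt/context only ever contains the placeholder:
llm_plan = f"fill('#password', '{placeholder('new_password')}')"
assert "s3cr3t" not in llm_plan  # nothing sensitive reaches the model
filled_value = resolve_secrets(placeholder("new_password"))
```

The point is that the substitution happens on the far side of the model: even a fully compromised prompt can only ever exfiltrate the token, not the value.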
The "vibe coding" comment was about development speed with AI assistants, not about skipping security review. We spent weeks specifically on credential isolation architecture - making sure passwords can't leak to logs, LLM prompts, or network requests. That's the opposite of careless.
Code's not open source yet, but we're working toward that for exactly the reasons you describe - trust requires verification.
stogot | 3 months ago
thepasswordapp | 2 months ago
The core approach: browser-use's Agent class accepts a `credentials` parameter that gets passed to custom action functions but never included in the LLM prompt. So when the agent needs to fill a password field, it calls a custom `enter_password()` function that receives the credential via this secure channel rather than having it in the visible task context.
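As a rough sketch of that side channel (class and method names here are my own shorthand, not browser-use's actual internals): credentials live on the tool registry, action implementations read them, and the model's function call carries only an account label.

```python
# Hedged sketch of the credentials side channel (illustrative names, not
# browser-use's real API): the registry holds the secrets; the LLM's tool
# call carries an account label, never a credential value.

class ToolRegistry:
    def __init__(self, credentials: dict[str, str]):
        self._credentials = credentials  # never serialized into a prompt

    def enter_password(self, account: str) -> str:
        """Invoked when the model emits enter_password(account=...)."""
        secret = self._credentials[account]
        # In the real flow this would drive a Playwright fill; stubbed here.
        return f"filled password field ({len(secret)} chars)"

registry = ToolRegistry({"github": "correct-horse-battery"})
# The model only produces the call site, never the secret itself:
result = registry.enter_password("github")
```

The design choice is that the credential dict and the prompt-building code never share a path: nothing that serializes messages for the LLM ever holds a reference to `_credentials`.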
We forked browser-use to add this (github.com/browser-use/browser-use doesn't have it upstream yet). The modification is in `agent/service.py` - adding `credentials` to the Agent constructor and threading it through to the tool registry.
Key parts:
1. Passwords passed via `sensitive_data` dict
2. Custom action functions receive credentials as parameters
3. LLM only sees "call enter_password()", not the actual value
4. Redaction at the logging layer as defense-in-depth
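The logging-layer redaction (point 4) is the simplest of these to show; a minimal sketch, assuming a `redact` helper of our own (not a browser-use API):

```python
# Defense-in-depth sketch (illustrative, not the app's actual logging code):
# scrub known secret values from every log line before it's written, in case
# a credential slips past the isolation layers above.

def redact(line: str, secrets: list[str]) -> str:
    """Replace any occurrence of a known secret value in a log line."""
    for secret in secrets:
        line = line.replace(secret, "***REDACTED***")
    return line

log_line = "POST /change form data: {'password': 'hunter2'}"
safe_line = redact(log_line, ["hunter2"])
# safe_line == "POST /change form data: {'password': '***REDACTED***'}"
```

It's deliberately last-resort: if this layer ever fires, one of the earlier layers failed, which is exactly what you want surfaced in review.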
Would be happy to clean this up into a standalone pattern/PR. The trickiest part is that it requires changes to the core Agent class, not just custom actions on top.