item 43739314


heycesr | 10 months ago

TL;DR: We built PromptL, an open-source templating language for AI prompts, because we were frustrated with how messy prompt engineering had become. PromptL lets you write prompts in a structured, reusable way: with variables, logic, and multi-step workflows. No more hard-coded strings or copy-pasting between AI calls. We hope it makes prompt development more reliable and collaborative, and we’d love to share why and how we built it.

Over the past year, we worked a lot with LLMs. We constantly found ourselves wrestling with prompts: embedding variables into giant strings, stitching together calls, and juggling many slightly different versions. It felt like trying to write a program without a programming language.

We weren’t alone. Most teams manage prompts as plain text or JSON, leading to:

- Repetition and inconsistency
- Hard-to-maintain structure
- Poor readability
- Clumsy collaboration

So we built PromptL to bring the benefits of a markup language to prompt design. In PromptL, you write a declarative prompt script with system/user messages, variables, and logic. A compiler turns this into something any LLM API can use (OpenAI, Anthropic, etc.).

Here's a quick taste of what PromptL looks like in action:

```
---
model: gpt-4o
temperature: 0.7
---
You are a helpful coding assistant.

<user>
Generate unit tests for the following function:
{{ code_snippet }}
</user>
```

This prompt:

- Defines model settings
- Includes a system role message in plain text
- Has a user block with a {{ code_snippet }} placeholder

Much easier than managing raw JSON or string concatenation. If you want to change something, just edit the template. PromptL handles formatting and interpolation automatically.
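To make "compiles to something any LLM API can use" concrete, here is a toy Python sketch of the general idea. This is not PromptL's actual compiler; the `render` function, its signature, and the output shape are illustrative assumptions, showing only the standard role/content message array that chat APIs accept.

```python
import re

def render(template: str, variables: dict) -> list[dict]:
    # Toy illustration, NOT PromptL's real compiler: interpolate
    # {{ variable }} placeholders, then wrap the result in a
    # role/content message array like the one chat APIs expect.
    filled = re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": filled},
    ]

messages = render(
    "Generate unit tests for the following function:\n{{ code_snippet }}",
    {"code_snippet": "def add(a, b): return a + b"},
)
```

The point is the separation of concerns: the template stays readable, and the interpolation plus message formatting happen mechanically at compile time.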

We built in features that we felt were missing in our prompt workflows:

1. Variables: Define once, reuse anywhere. No more brittle string concat.

2. Role blocks: <system>, <user>, <assistant> make multi-turn prompts easy to follow.

3. Control flow: basic if/else logic and loops, great for adapting prompts to context.

4. Chaining: Compose multiple steps in a workflow. For example, one prompt to analyze, another to act.

5. Multi-modal placeholders: Include non-text inputs (like images) cleanly in templates.

6. Provider-agnostic: PromptL compiles to a standard message array usable with any LLM. Built-in adapters for OpenAI, Anthropic, and open-source models.
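Point 6 is worth unpacking: providers disagree on message format. For example, Anthropic's Messages API takes the system prompt as a top-level field rather than as a message in the list, while OpenAI keeps it inline. Here is a hypothetical adapter sketch (not PromptL's actual adapter code) showing the kind of translation a provider-agnostic compiler output makes trivial:

```python
def to_anthropic(messages: list[dict]) -> dict:
    # Illustrative adapter, NOT PromptL's real implementation:
    # pull system messages out into Anthropic's top-level `system`
    # field and keep the remaining turns as the message list.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    return {
        "system": "\n".join(system_parts),
        "messages": [m for m in messages if m["role"] != "system"],
    }

compiled = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Generate unit tests for add(a, b)."},
]
payload = to_anthropic(compiled)
```

Because the compiler always emits the same standard array, only this last translation step varies per provider.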

We kept the syntax simple: if you know basic YAML/Markdown and a little programming logic, you can use PromptL. That was important to us, because we wanted non-engineers to help build and edit prompts. At our company, even non-devs are tweaking PromptL files now.

Why did we build this?

We weren’t sure a “prompt language” was necessary. But once we built a prototype, we were hooked. It let us iterate faster: no more writing glue code for each prompt change, just edit the template and recompile.

We shared it with a few devs and got great feedback: they saved time and could iterate faster. That led to new features like chaining and control flow. PromptL is now at v0.5.4. Still early, but stable and usable.

We open-sourced it because we believe prompt engineering needs better tools and shared standards. You can find the compiler and spec on GitHub. Contributions welcome!

We’re excited (and a little nervous) to finally launch PromptL. Our goal is to start better conversations about prompt design and offer a tool we found useful. Even if PromptL isn’t the final answer, we hope it helps move things forward.

If you're curious, check out the repo or our docs. There’s a quick start guide and example templates. We’ll be hanging out in the comments, ask us anything!

Thanks for reading, and thanks to HN for inspiring tools like this. We built PromptL to make our lives easier. Maybe it can help you too.
