wsxiaoys's comments

wsxiaoys | 1 month ago | on: A 4-part deep dive on building AI code edits inside VS Code

Hi HN,

I wrote a 4-part series on how we built the AI edit model behind Pochi’s coding agent.

It covers everything from real-time context management and request lifecycles to dynamically rendering code edits using only VS Code’s public APIs.

I’ve written this as openly and concretely as possible, with implementation details and trade-offs.

Full series:

1. The Edit Model Behind Tab Completion: https://docs.getpochi.com/developer-updates/how-we-created-n...

2. Real-Time Context Management in Your Code Editor: https://docs.getpochi.com/developer-updates/context-manageme...

3. Request Management Under Continuous Typing: https://docs.getpochi.com/developer-updates/request-manageme...

4. Dynamic Rendering Strategies for AI Code Edits: https://docs.getpochi.com/developer-updates/dynamic-renderin...

wsxiaoys | 2 months ago | on: Working Around VS Code APIs to Render LLM Suggestions

OP here - I've talked in detail about how we rendered NES suggestions using only VS Code public APIs.

Most tools fork the editor or build a custom IDE so they can skip the hard interaction problems.

Our NES is a VS Code–native feature. That meant living inside strict performance budgets and interaction patterns that were never designed for LLMs proposing multi-line, structural edits in real time.

Inside a native extension, surfacing enough context for an AI suggestion to be actionable, without stealing the developer's attention, is much harder.

That pushed us toward a dynamic rendering strategy instead of a single AI suggestion UI. Each rendering path is deliberately scoped to the situations where it performs best, so every edit gets the least disruptive representation available.
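To make the routing idea concrete, here is a minimal sketch of a dispatcher that picks a rendering surface per edit. The function name, inputs, and thresholds are all hypothetical illustrations of the approach described above, not Pochi's actual logic:

```python
# Hypothetical sketch: route a proposed edit to the least disruptive
# VS Code-native rendering surface. Names and thresholds are illustrative.

def choose_rendering(edit_start: int, edit_end: int, cursor_line: int,
                     is_insertion_only: bool) -> str:
    """Pick a rendering path for an edit spanning edit_start..edit_end (lines)."""
    touches_cursor_line = edit_start <= cursor_line <= edit_end
    line_span = edit_end - edit_start + 1

    if is_insertion_only and touches_cursor_line:
        # Pure insertions at the cursor map cleanly onto inline ghost text.
        return "inline-completion"
    if line_span <= 3:
        # Small replacements can be shown as decorations near the edit.
        return "decoration-diff"
    # Large, structural edits are least disruptive in a dedicated diff view.
    return "side-panel-diff"
```

The point of scoping each path this way is that no single surface has to handle every edit shape, which keeps each one inside the interaction patterns VS Code was designed for.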

If AI is going to live inside real editors, I think this is the layer that actually matters.

Happy to hear your thoughts!

wsxiaoys | 3 months ago | on: How we built context management for tab completion

OP here - this is Part 2 of a series documenting how we built NES (Next Edit Suggestions), our real-time edit model inside the Pochi editor extension.

The real challenge (and what ultimately determines whether NES feels “intent-aware”) was managing context in real time while the developer is editing live. The lessons should apply to anyone building real-time AI inside editors, IDEs, or interactive tools.

I hope you find this interesting. Happy to answer any questions!

wsxiaoys | 3 months ago | on: Creating a Tab completion model from scratch

I’ve been experimenting with next-edit prediction for a while and wrote up how we trained the edit model that powers our Tab completion feature. This post is part of a broader series where we share how we built this feature from the low-level modeling right up to the editor extension.

The cool part is that we fine-tuned Gemini Flash Lite with LoRA instead of fine-tuning an OSS model, which let us avoid the serving-infrastructure overhead and gave us faster responses at lower compute cost.
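For readers unfamiliar with LoRA, the core math is small enough to sketch in plain Python: the frozen pretrained weight W is left untouched, and only two low-rank factors A and B are trained, with the adapted layer computing y = W x + B (A x). The shapes and values below are illustrative only:

```python
# Minimal LoRA sketch: the adapted layer computes y = W x + B (A x),
# where W is the frozen pretrained weight and only the low-rank
# factors A (r x d_in) and B (d_out x r) are trained.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x):
    base = matvec(W, x)               # frozen pretrained path
    delta = matvec(B, matvec(A, x))   # low-rank trainable update
    return [b + d for b, d in zip(base, delta)]

# Toy example with d_in = d_out = 2 and rank r = 1:
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen weight (identity here)
A = [[1.0, 0.0]]              # trained down-projection
B = [[0.0], [1.0]]            # trained up-projection
```

Because only A and B carry gradients, the number of trained parameters is tiny relative to the base model, which is where the compute savings come from.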

wsxiaoys | 4 months ago | on: Ask HN: What are you working on? (October 2025)

I've spent the last few months working on a custom RL model for coding tasks. The biggest headache has been the lack of good tooling for tuning the autorater's prompt. (That's the judge that gives the training feedback.) The process is like any other quality-focused task, running batch rating jobs and doing side-by-side (SxS) evaluations, but the tooling really falls short. I think I'll have to build my own tools once I wrap up the current project.
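The SxS step itself is simple to sketch: given the autorater's scores for two prompt variants over the same batch of examples, tally which variant the judge preferred. This is a generic illustration of the workflow, not the actual tooling described above; in practice the scores would come from an LLM judge call rather than a list:

```python
# Sketch of an SxS (side-by-side) comparison over a batch of rated examples.
# scores_a and scores_b are the autorater's per-example scores for two
# prompt variants; a real pipeline would obtain them from batch rating jobs.
from collections import Counter

def sxs_compare(scores_a, scores_b):
    """Tally per-example preferences between two prompt variants."""
    tally = Counter()
    for a, b in zip(scores_a, scores_b):
        if a > b:
            tally["a_wins"] += 1
        elif b > a:
            tally["b_wins"] += 1
        else:
            tally["ties"] += 1
    return dict(tally)
```

The tooling gap is less this tally and more everything around it: versioning the judge prompt, re-running batches cheaply, and inspecting the individual disagreements.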

wsxiaoys | 1 year ago | on: Tabby: Self-hosted AI coding assistant

Never imagined our project would make it to the HN front page on Sunday!

Tabby has undergone significant development since its launch two years ago [0]. It is now a comprehensive AI developer platform featuring code completion and a codebase chat, with a team [1] / enterprise focus (SSO, Access Control, User Authentication).

Tabby's adopters [2][3] have discovered that Tabby is the only platform providing a fully self-service onboarding experience as an on-prem offering. It also delivers performance that rivals other options in the market. If you're curious, I encourage you to give it a try!

[0]: https://www.tabbyml.com

[1]: https://demo.tabbyml.com/search/how-to-add-an-embedding-api-...

[2]: https://www.reddit.com/r/LocalLLaMA/s/lznmkWJhAZ

[3]: https://www.linkedin.com/posts/kelvinmu_last-week-i-introduc...

wsxiaoys | 1 year ago | on: Rank Fusion for improved code context in RAG

Fun fact: We've implemented binary embedding search [1] without needing a specialized vector database. Instead, dimensional tokens like 'embedding_0_0', 'embedding_1_0' are created and indexed directly in the tantivy index [2].

We're satisfied with the quality and performance this approach yields, while still keeping everything Tabby embeds inside a single binary.

[1] My binary vector search is better than your FP32 vectors: https://blog.pgvecto.rs/my-binary-vector-search-is-better-th...

[2] Tantivy: https://github.com/quickwit-oss/tantivy
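A minimal sketch of the dimensional-token trick: binarize each embedding by sign, then emit one token per dimension whose bit is set, so an ordinary full-text index can serve binary vector search via token overlap. The token naming below mirrors the comment's examples, but the meaning of the second index and the exact Tabby schema are assumptions:

```python
# Sketch: map a float embedding to full-text index tokens, one per
# positive dimension. A query vector is binarized the same way and its
# tokens OR-ed together; the overlap count approximates Hamming
# similarity between the binary vectors. Token format is assumed.

def embedding_tokens(vector, chunk_id=0):
    """Tokens for the set bits of a sign-binarized embedding."""
    return [
        f"embedding_{dim}_{chunk_id}"
        for dim, value in enumerate(vector)
        if value > 0  # binarize by sign: 1 if positive, else 0
    ]
```

Because the tokens are plain terms, the existing inverted-index machinery (scoring, top-k retrieval) does the nearest-neighbor work with no vector-specific storage.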

wsxiaoys | 1 year ago | on: Ask HN: Who is hiring? (May 2024)

TabbyML | https://tabbyml.com | Software Engineer (Rust) / Product Engineer – Full-Time | Remote

Tabby strives to become the AI Intelligence Stack for the entire development lifecycle. We are a fully distributed, all-remote team.

Our tech stack includes:

  * Frontend: TypeScript, React, Next.js
  * Backend: Rust, GraphQL
  * IDE/Extension: TypeScript, Node.js
  * Tools: GitHub, Slack, Linear, Lark
Please apply here: https://tabbyml.vercel.app/