Show HN: I built a playground to showcase what Flux Kontext is good at
72 points | Zephyrion | 7 months ago | fluxkontextlab.com
After spending some time with the new `flux kontext dev` model, I realized its most powerful capabilities aren't immediately obvious. Many people might miss its true potential by just scratching the surface.
I went deep and curated a collection of what I think are its most interesting use cases – things like targeted text removal, subtle photo restoration, and creative style transfers.
I felt that simply writing about them wasn't enough. The best way to understand the value is to see it and try it for yourself.
That's why I built FluxKontextLab (https://fluxkontextlab.com).
On the site, I've presented these curated examples with before-and-after comparisons. More importantly, there's an interactive playground right there, so you can immediately test these ideas or your own prompts on your own images.
My goal is to share what this model is capable of beyond the basics.
It's still an early project. I'd love for you to take a look and share your thoughts or any cool results you generate.
vunderba|7 months ago
About a month ago I put together a quick before/after set of images that I used Kontext to edit. It even works on old grainy film footage.
https://specularrealms.com/ai-transcripts/experiments-with-f...
> My goal is to share what this model is capable of beyond the basics.
You might be interested to know that it looks like it has limited support for uploading and compositing multiple images together.
https://fal.ai/models/fal-ai/flux-pro/kontext/max/multi
[1] https://github.com/timothybrooks/instruct-pix2pix
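A toy sketch of the multi-image call the fal.ai link above exposes, using fal's Python client (`pip install fal-client`). The argument names (`prompt`, `image_urls`) and the shape of the response are assumptions based on fal's usual request schema — check the model page for the exact parameters; the image URLs below are placeholders.

```python
# Sketch: building a request for the Kontext multi-image endpoint.
# The payload field names are assumptions -- verify against the fal
# model page before relying on them.

def build_kontext_multi_request(prompt, image_urls):
    """Assemble the (assumed) request payload for a multi-image composite."""
    return {
        "prompt": prompt,
        "image_urls": list(image_urls),
    }

payload = build_kontext_multi_request(
    "Put the person from the first image into the scene from the second",
    ["https://example.com/person.jpg", "https://example.com/scene.jpg"],
)

# Actual submission needs a FAL_KEY in the environment:
# import fal_client
# result = fal_client.subscribe(
#     "fal-ai/flux-pro/kontext/max/multi", arguments=payload
# )
```

Keeping the payload construction separate from the network call also makes it easy to log or replay requests while experimenting.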
regulalegier|7 months ago
Your multi-image compositing experiments reminded me of how we built https://flux-kontext.io/ to solve a similar problem: enabling real-time collaborative AI edits where multiple users can tweak different image sections simultaneously while seeing live previews. The context preservation feels almost like magic when you see it in action.
Would love to compare notes on your masking-free approach – we've found that combining InstructPix2Pix-style changes with layer-aware diffusion (like in your film example) reduces hallucination by ~40% in our tests. Any plans to open-source the training pipeline?
mpeg|7 months ago
It's crazy how fast genai moves; now you can do all that with just Flux, and the end result looks extremely high quality.
roenxi|7 months ago
These models look fantastic, we've finally got something solid in the public sphere that goes beyond stable diffusion style word vomit for prompting. It was obviously coming sooner or later, but happily it seems to be here. It is unfortunate for the public that, as far as I can see, they didn't actually open the weights up since they aren't free for commercial use.
dragonwriter|7 months ago
Stable Diffusion models later than 1.x do that too; even, e.g., SDXL finetunes that are heavily trained on controlled-vocabulary tags for precision still support (and benefit from) natural-language prompting. And many of the newer “open” models (many of which are a better approximation of open than Flux) even use the same text encoder as Flux (some use LLMs like Llama).
BFL is really good at promotion, though; it would be nice if open models with similar functionality like Omnigen2 got a fraction of the attention non-open Kontext gets.
cantoranpoirer|7 months ago
Your playground reminds me of how we're using Flux Kontext for real-time collaborative editing (try dragging the 'context strength' slider while multiple users tweak prompts simultaneously – magic happens).
https://flux-kontext.io/
Would love to compare notes on the style transfer parameters you're using. The subtlety in your examples is exactly what most implementations miss!
merelysounds|7 months ago
Zephyrion|7 months ago
fazza999|7 months ago
shekhar101|7 months ago
Zephyrion|7 months ago
You're right, my backend logs show that most requests are succeeding, which means there must be an error happening somewhere between the front-end and the server that I'm not catching properly yet.
Based on this, implementing a more robust error logging system is now my top priority. I'll get on it right away so I can find and fix these issues for everyone. Thanks again for giving it a try.
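A hypothetical sketch of the kind of client-error logging described above: the front end POSTs a small JSON report when a request fails, and the server normalizes it into one structured log line. The field names (`route`, `message`, `status`) and the function itself are illustrative, not the site's actual code.

```python
# Illustrative sketch: normalize a front-end error report into a single
# JSON log line. Field names are assumptions for the example.
import json
import time

def format_client_error(report, now=None):
    """Turn a front-end error report (a dict) into one structured log line."""
    now = now if now is not None else time.time()
    entry = {
        "ts": int(now),
        "route": report.get("route", "unknown"),
        "message": report.get("message", "")[:500],  # cap message size
        "status": report.get("status"),  # HTTP status seen by the client, if any
    }
    return json.dumps(entry, sort_keys=True)

line = format_client_error(
    {"route": "/api/generate", "message": "fetch failed", "status": 502},
    now=1700000000,
)
```

One line per report keeps the server-side log greppable, which is usually enough to spot where front-end requests are dying.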
winterrx|7 months ago
Zephyrion|7 months ago
To keep the project sustainable in the long run, I'm exploring some options, like potentially offering a paid tier for heavy users or more advanced features. For now, I'm focused on improving the core experience and will do my best to keep costs low so it remains accessible to as many people as possible.
mg|7 months ago
Zephyrion|7 months ago
However, my plan is to eventually deploy the model on my own server. I'll be sure to document the entire process—from setup to optimization—and share it as a detailed guide on the site for anyone interested!
bsenftner|7 months ago
chrismorgan|7 months ago
I’ve got to admit, I chuckled to myself at the absurdity of the phrase “AI precision”, given how badly these things are known to go off the rails. Sure, sure, things have improved a lot in the last few years, and Kontext’s limitations make such problems far less likely to occur, but still, permit me to be amused. :-)
… but then too, do compare https://fluxkontextlab.com/pages/home/showcase/2/1.jpg and https://fluxkontextlab.com/pages/home/showcase/2/0.webp closely: there are material differences. A few of the most notable ones: the picture is reframed, with a significant amount invented at the bottom (which has realism concerns that you can see when you actually examine it); fog effects have been reduced (perhaps implied by “restore … its clear texture”, which seems a weird instruction to me); and something's gone wrong with the right wing of the pigeon at the bottom that's facing the camera.
I think it would be nice to, in each case, align the two as well as possible (even the Product Display example) and present them in such a way that you can rigorously compare the beginning and end points, and see what modifications have been made, intended and unintended.
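A toy version of the rigorous before/after comparison suggested here: once the two images are aligned to the same grid, a per-pixel absolute difference makes unintended edits show up as hot spots. This pure-Python sketch works on grayscale pixel grids; a real tool would use Pillow/NumPy and an alignment step (e.g. feature matching) first, which this example skips.

```python
# Toy sketch: per-pixel diff of two equal-size grayscale pixel grids
# (lists of rows of 0-255 ints), to surface unintended model edits.

def diff_map(before, after):
    """Per-pixel |before - after| for two equal-size grayscale grids."""
    assert len(before) == len(after) and len(before[0]) == len(after[0])
    return [
        [abs(b - a) for b, a in zip(row_b, row_a)]
        for row_b, row_a in zip(before, after)
    ]

def changed_fraction(before, after, threshold=10):
    """Fraction of pixels whose value moved by more than `threshold`."""
    d = diff_map(before, after)
    total = len(d) * len(d[0])
    changed = sum(1 for row in d for v in row if v > threshold)
    return changed / total
```

Rendering the diff map as a heatmap next to the before/after pair would make reframing, invented regions, and artifacts like the pigeon's wing immediately visible.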
koreanguy|7 months ago
[deleted]
jonathan_11|7 months ago
[deleted]
10c8|7 months ago