top | item 41730822

FLUX1.1 [pro] – New SotA text-to-image model from Black Forest Labs

228 points | fagerhult | 1 year ago | replicate.com

142 comments


vessenes|1 year ago

Flux is so frustrating to me. Really good prompt adherence, strong ability to keep track of multiple parts of a scene; it's technically very impressive. However, it seems to have had no training on art-art. I can't get it to generate even something that looks like Degas, for instance. And I can't even fine tune a painterly art style of any sort into Flux dev. I get that there was working, living artist backlash at SD, and I can therefore imagine that the BFL team has decided not to train on art, but it's a real loss. Both in terms of human knowledge of, say, composition, emotion, and so on, but also for style diversity.

For goodness sake, the Met in New York has a massive trove of open, CC0-type licensed art. Dear BFL, please ease up a bit on this and add some art-art to your models; they will be better as a result.

crystal_revenge|1 year ago

I've had a similar experience, incredible at generating a very specific style of image, but not great at generating anything with a specific style.

I suspect we'll see the answer to this is LoRAs. Two examples that stick out are:

- Flux Tarot v1 [0]

- Flux Amateur Photography [1]

Both of these do a great job of combining all the benefits of Flux with custom styles that seem to work quite well.

[0] https://huggingface.co/multimodalart/flux-tarot-v1

[1] https://civitai.com/models/652699?modelVersionId=756149

whywhywhywhy|1 year ago

>However it seems to have had no training on art-art. I can't get it to generate even something that looks like Degas, for instance

It feels like they just removed names from the datasets to make it worse at recreating famous people and artists.

throwup238|1 year ago

I’ve had the same problem with photography styles, even though the photographer I’m going for is Prokudin-Gorskii who used emulsion plates in the 1910s and the entire Library of Congress collection is in the public domain. I’m curious how they even managed to remove them from the training data since the entire LoC is such an easy dataset to access.

gs17|1 year ago

And I can't imagine there's a real copyright (or ethical) issue with including artwork in the public domain because the artist died over a century ago.

thomastjeffery|1 year ago

I think that's part of what makes FLUX.1 so good: the content it's trained on is very similar.

Diversity is a double-edged sword. It's a desirable feature where you want it, and an undesirable feature everywhere else. If you want an impressionist painting, then it's good to have Monet and Degas in the training corpus. On the other hand, if you want a photograph of water lilies, then it's good to keep Monet out of the training data.

weebull|1 year ago

I wonder if part of the reason it's good is that it's been trained for a more specific task. I can only imagine that if your concept of a "house" includes everything from a stately home to "a pineapple under the sea", you're going to end up with a very generalised concept. It then takes specific prompting to remove the influences you're not interested in.

I suspect the same goes for art styles. There's such huge variety that really they'd be better served by separate models.

pdntspa|1 year ago

I wonder if you can use Flux to generate the base image then img2img on SD1.4 to impart artistic style?

skort|1 year ago

>but, it's a real loss. Both in terms of human knowledge of, say composition, emotion, and so on, but also for style diversity

But that real art still exists, and can still be found, so what exactly is the loss here?

ilaksh|1 year ago

Pretty smart model. Here's one I made: https://replicate.com/p/6ez0x8xqvsrga0cjadg8m7bah0

jug|1 year ago

One thing that makes FLUX so special is the prompt understanding. I now gave FLUX 1.1 a prompt "Closeup of a doll house built to resemble a famous room in the TV show Friends" and it gave me one with the sign "Central Perk". I never prompted for the text "Central Perk". A Redditor also discovered that it has an associative understanding of emotions. For example "Rose of passion" and it may draw a flower that is burning, because passion is fiery.

This is miles ahead of most other image generation models available today.

drdaeman|1 year ago

Yet, it doesn't seem to know what a Tektronix 4010 actually looks like... ;)

I had similar issues trying to paint an "I cast non-magic missile" meme with a fantasy wizard using a missile launcher. No model out there (I've tried SD, SDXL, FLUX.1 dev and now this FLUX 1.1 pro) knows what a missile launcher looks like (neither as a generic term, nor any specific systems), and none has a clue how it's held, so they all draw really weird contraptions.

nikcub|1 year ago

I've gone from counting fingers on a hand to keys on a keyboard

loufe|1 year ago

That is astoundingly good adherence to the description. I already liked and was impressed by Flux1 but that is perhaps the most impressive image generation I've ever seen.

loxias|1 year ago

It's quite good at following a detailed, paragraph-long description of a scene, which is a double-edged sword. A lot of the fun for me with early text-to-image models was underspecifying an image and then enjoying how the model "invents" it. "Steampunk spaceship", "communist bear", "glass city".

flux is amazing, but I find it requires a very literal description, which pushes the "creative work" back to the text itself. Which can certainly be a good thing, just a bit less gratifying to non-visual types like myself. :)

I wonder, only somewhat jokingly, if one could make text generators which "imagine" detailed fantastical scenes, suitable for feeding to a text to image model.

sharkjacobs|1 year ago

"state of the art" has become such tired marketing jargon.

"our most advanced and efficient model yet"

"a significant step forward in our mission to empower creators"

I get it, you can't sell things if you don't market them, and you can't make a living making things if you don't sell them, but it's exhausting.

bemmu|1 year ago

Flux genuinely is the best model I’ve tried though. If there is a better one I’d love to know.

halJordan|1 year ago

It is state of the art. And it's not like the art has stagnated.

arizen|1 year ago

- How do copywriters greet each other in the morning?

- Take your morning to the next level!

minimaxir|1 year ago

The official blog post justifies the marketing copy a bit more with metrics.

Der_Einzige|1 year ago

Far more interesting will be when pony diffusion V7 launches.

No one in the image space wants to admit it, but well over half of your user base wants to generate hardcore NSFW with your models and they mostly don’t care about any other capabilities.

Jackson__|1 year ago

Ah, that was one short gravy train even by modern tech company standards. Really wish the space was more competitive and open so it wouldn't just be one company at the top locking their models behind APIs.

skybrian|1 year ago

It doesn’t get piano keyboards right, but it’s the first image generator I’ve tried that sometimes gets “someone playing accordion” mostly right.

When I ask for a man playing accordion, it’s usually a somewhat flawed piano accordion, but if I ask for a woman playing accordion, it’s usually a button accordion. I’ve also seen a few that are half-button, half-piano monstrosities.

Also, if I ask for “someone playing accordion”, it’s always a woman.

vunderba|1 year ago

Periodic data is always hard for generative image systems - particularly if that "cycle" window is relatively large (as would be the case for octaves of a piano).

whitehexagon|1 year ago

I'm running Asahi Linux on a 32GB M1 Pro. Any chance of being able to run text-to-image models locally? I've had some success with LLMs, but only the smaller models. No idea where to start with images, everything seems geared towards msft+nvda.

LeoPanthera|1 year ago

"Draw Things" is a native Mac app for text to image. It's a lot more advanced than DiffusionBee, it will download the models for you, and it's free. It's also available for iOS. (!)

collinvandyck76|1 year ago

DiffusionBee will let you do this quite easily.

edit: nevermind, it's a macos app

doctorpangloss|1 year ago

I'm worried about what happens when more people find out about Ideogram.

There are a lot of things that don't appear in ELO scores. For one, they will not reflect that you cannot prompt women's faces in Flux. We can only speculate why.

liuliu|1 year ago

What do you mean? FLUX.1 prompts women or women's faces just fine. Do you mean the skin texture is unrealistic, or some other artifacts?

giancarlostoro|1 year ago

How locked down is it? My problem with a lot of these is that I like to make really ridiculous meme-type images, but I run into walls for dumb reasons. If I want to make something that's "copyrighted", like a mix of certain characters from one franchise or whatever, I sometimes get told that the model cannot generate copyrighted content, even though courts ruled that AI-generated stuff cannot be copyrighted either way...

I feel like AI should just be treated as fair use as long as it's not 100% blatantly a literal clone of the original work.

byteknight|1 year ago

I won't pay for a model, but that cake image looks dang good.

mainframed|1 year ago

Although culinarily incorrect :)

fortran77|1 year ago

I've been playing with Flux.Dev, and it's such a big step forward from Stable Diffusion and all the other generative AIs that could run on consumer GPUs.

I just tried this Flux1.1 pro page (prompt: "A sad Macintosh user who is upset because his computer can't play games") and was very impressed by the detail and "understanding" this model has.

jeffbee|1 year ago

I asked for a simple scene and it drew in the exact same AI girl that every text-to-image model wants to draw, same face, same hair, so generic that a Google reverse image search pulls up thousands of the exact same AI girl. No variety of output at all.

ks2048|1 year ago

Is there a good site that compares text-to-image models - showing a bunch of examples of text w/ output on each model?

nirav72|1 year ago

Are there any projects that allow for easy setup and hosting of Flux locally? Similar to SD projects like InvokeAI or A1111.

vunderba|1 year ago

The answer is that it really depends on your hardware, but the nice thing is that you can split out the text encoder when using ComfyUI. On a 24GB VRAM card I can run the Q8_0 GGUF version of flux-dev with the T5 FP16 text encoder. The Q8_0 GGUF version in particular has very little visual difference from the original FP16 model. A 1024x1024 image takes about 15 seconds to generate.
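The back-of-the-envelope arithmetic behind that split can be sketched as follows. The parameter counts below are rough public figures and should be treated as assumptions; activations and runtime overhead are ignored:

```python
# Rough VRAM estimate for running flux-dev with quantized weights and a
# separately loaded text encoder. Parameter counts are approximate
# (assumed) figures, and activation/overhead memory is ignored.
def weight_gib(num_params: float, bits_per_param: float) -> float:
    """Approximate GiB needed to hold a weight tensor."""
    return num_params * bits_per_param / 8 / 2**30

FLUX_DEV_PARAMS = 12e9   # ~12B transformer parameters (approx.)
T5_XXL_PARAMS = 4.7e9    # ~4.7B T5-XXL text encoder parameters (approx.)

transformer_q8 = weight_gib(FLUX_DEV_PARAMS, 8)   # Q8_0 ~ 8 bits/param
encoder_fp16 = weight_gib(T5_XXL_PARAMS, 16)      # FP16 = 16 bits/param

print(f"transformer: {transformer_q8:.1f} GiB")   # ~11.2 GiB
print(f"encoder:     {encoder_fp16:.1f} GiB")     # ~8.8 GiB
```

Because the text encoder only runs once per prompt, it can be loaded, used, and released before the diffusion transformer needs the VRAM, which is why the combined total still fits comfortably on a 24GB card.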

sophrocyne|1 year ago

Invoke is model agnostic, and supports Flux, including quantized versions.

minimaxir|1 year ago

Flux is weirder than old SD projects, since Flux is extremely resource dependent and won't run on most hardware.

leumon|1 year ago

Using comfyui with the official flux workflow is easy and works nicely. comfy can also be used via API.

Mashimo|1 year ago

I use InvokeAI to run flux.dev and flux.schnell.

pdntspa|1 year ago

DrawThings on Mac

kindkang2024|1 year ago

I really enjoy its service. It's promising for UI design; the page UI of my advocacy website was bootstrapped using it. It is quite good for developers without much design ability.

Ironically, I am afraid to type the website's name out and will keep it unknown here. My account could be suspended because of it; it has already reached -1 karma. Better to keep my account alive.

nubinetwork|1 year ago

I tried using schnell; it won't fit in a 16GB GPU, and I couldn't get it to run on CPU.

TobTobXX|1 year ago

I've successfully run schnell and dev on a 12G GPU. They take 40s/60s respectively, but it works. I used ComfyUI and didn't have to tweak anything.

Mashimo|1 year ago

Oh neat. I wonder if they also improve .schnell and .dev soon. That would be nice :)

jchw|1 year ago

The generated images look impressive of course but I can't help but be mildly amused by the fact that the prompt for the second example image insists strongly that the image should say 1.1:

> ... photo with the text "FLUX 1.1 [Pro]", ..., must say "1.1", ...

...And of course, it does not.

ionwake|1 year ago

Sorry to be a noob, but how does this relate to fastflux.ai which seems to work great and creates an image in less than a second? Is this a new model on a slower host?