I was commiserating with my brother over how difficult it is to set up an environment to run one LLM or diffusion model, let alone multiple or a combination. It's 5% CUDA/ROCm difficulties and 95% Python difficulties. We have a theory that anyone working with generative AI has to tolerate output that is only 90% right, and so is totally fine working with a language and environment that only 90% works.
Why is Python so bad at that? It's less kludgy than Bash scripts, but even those are easier to get working.
A tool that was only released, what, a year or two ago? It simply won't be present in nearly all OSes/distros. Only modern or rolling releases will have it (maybe). It's funny when the recommended Python dependency managers are just as hard to install and use as the scripts themselves. Very Python.
The project is like 80% there by having a pyproject file that should work with uv and Poetry. There just aren't any package versions specified, the Python version constraint is incredibly lax, and no lock file is provided.
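For contrast, a minimal sketch of what a stricter `pyproject.toml` could look like. The project name, package names, and version numbers here are all illustrative assumptions, not taken from the actual project being discussed:

```toml
# Hypothetical pyproject.toml fragment: exact pins and a tight
# Python range, so uv or Poetry can resolve reproducibly.
[project]
name = "example-genai-app"          # placeholder name
version = "0.1.0"
requires-python = ">=3.11,<3.13"    # narrow range instead of a lax one
dependencies = [
    "torch==2.3.1",                 # exact pins, not open-ended ranges
    "transformers==4.41.2",
]
```

With pins like these, running `uv lock` or `poetry lock` would also produce the missing lock file, so every install resolves to the same versions.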
pjc50|6 months ago
dlcarrier|6 months ago
qingcharles|6 months ago
raybb|6 months ago
zelphirkalt|6 months ago
flanked-evergl|6 months ago
superkuh|6 months ago
wongarsu|6 months ago