Slightly off-topic, but the fact that this script even needs a package manager in a language with a standard library as large as Python's is pretty shocking. Making an HTTP request is pretty basic stuff for a scripting language; you shouldn't need or want a library for it.
And I’m not blaming the author, the standard library docs even recommend using a third party library (albeit not the one the author is using) on the closest equivalent (urllib.request)!
> The Requests package is recommended for a higher-level HTTP client interface.
Especially for a language that has not cared too much about backwards compatibility historically, having an ergonomic HTTP client seems like table stakes.
> Making an HTTP request is pretty basic stuff for a scripting language; you shouldn't need or want a library for it.
Sometimes languages/runtimes move slowly :) Speaking as a JS developer, this is how we made requests for a long time (before .fetch), inside the browser which is basically made for making requests:
var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://example.com', true);
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr.onload = function () {
console.log(this.responseText);
};
xhr.send('param=add_comment');
Of course, we quickly wanted a library for it; most of us ended up using jQuery.get() et al., since raw XHR wasn't comfortable, up until .fetch appeared (or various npm libraries, if you were an early Node.js adopter)
It has two! — http.client and urllib.request — and they are really usable.
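To make the point concrete, here is a minimal stdlib-only sketch of the kind of GET-and-parse call the script needs, using urllib.request (the hypothetical `get_json` helper is mine, not from the article):

```python
import json
from urllib.request import Request, urlopen

def get_json(url: str, timeout: float = 10.0):
    """Fetch a URL and decode a JSON body using only the stdlib."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

Not as ergonomic as requests or httpx, but for a one-off script it's perfectly serviceable and needs no package manager at all.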
Lots of people just like requests though as an alternative, or for historical reasons, or because of some particular aspect of its ergonomics, or to have a feature they’d rather have implemented for them than have to write in their own calling code.
At this stage it’s like using jQuery just to find an element by css selector (instead of just using document.querySelector.)
They could have used a database driver for MySQL, PostgreSQL, or MongoDB for a more realistic example (very common for sysadmin-type scripts that are used once and then thrown away), and your complaint would be invalid. But then you'd have to set up the database, and the example would no longer fit a quick blog post that lets you just copy-paste the code and run it yourself.
The standard library gives you no way to make async HTTP requests; that's what httpx provides. Since modern Python leans so heavily on async, this is a real bummer.
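There is a stdlib workaround, though it isn't true async I/O: push the blocking urllib call onto a worker thread with asyncio.to_thread. A sketch (the `fetch`/`fetch_all` helpers are illustrative names, not a real API):

```python
import asyncio
from urllib.request import urlopen

async def fetch(url: str) -> bytes:
    # Run the blocking stdlib call in a worker thread so the event
    # loop stays free; httpx/aiohttp do real async socket I/O instead.
    return await asyncio.to_thread(lambda: urlopen(url).read())

async def fetch_all(urls: list[str]) -> list[bytes]:
    # Concurrency here comes from the thread pool, capped by its size.
    return await asyncio.gather(*(fetch(u) for u in urls))
```

This keeps the event loop responsive, but each in-flight request still occupies a thread, which is exactly the limitation a real async client avoids.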
Python has historically cared about backwards compatibility. Nowadays they're finally dropping some old libraries that probably shouldn't have been in the stdlib, and they're not likely to add more, especially now that you can add dependencies to scripts so easily.
>And I’m not blaming the author, the standard library docs even recommend using a third party library (albeit not the one the author is using) on the closest equivalent (urllib.request)!
For perspective: urllib has existed since at least 1.4 (released in 1996), as far back as python.org's archive goes (https://docs.python.org/release/1.4/lib/node113.html#SECTION...). Requests dates to 2011. httpx (the author's choice) has a 0.0.1 release from 2015, but effectively didn't exist until 2019 and is still zerover after a failed 1.0.0 prerelease in 2021. Python can't be sanely compared to the modern package-manager-based upstarts because it's literally not from that generation. When Python came out, the idea of versioning the language (as opposed to referring to the year some standards document was published) was, as far as I can tell, kinda novel. Python is older than Java, AppleScript, and VB; over twice as old as Go; and over three times as old as Swift.
>Especially for a language that has not cared too much about backwards compatibility historically
It's always confused me that people actually see things this way. In my view, excessive concern for compatibility has severely inhibited Python (and especially packaging, if you want to include that despite it being technically third-party) from fixing real problems. The switch to 3.x should have happened much faster; the breaking changes were unambiguously for the better and could not have been made in non-breaking ways.
There are tons of things the developers refuse to remove from the standard library that they would never even remotely consider adding today if they weren't already there - typically citing "maintenance burden" for even the simplest things. Trying to get anything added is a nightmare: even if you convince everyone it looks like a good idea, you'll invariably be asked to prove interest by implementing it yourself (who's to say all the good ideas come from programmers?) and putting it on PyPI. (I was once told this myself even though I was proposing a method on a builtin. Incidentally, I learned those can be patched in CPython, thanks to a hack involving the GC implementation.) Then, even if you somehow manage to get people to notice you, and everyone likes it, there is suddenly no reason to add it; after all, you're in a better position to maintain it externally, since it can be versioned separately.
If I were remaking Python today, the standard library would be quite minimal, although it would integrate bare necessities for packaging - APIs, not applications. (And the few things that really need to be in the standard library for a REPL to be functional and aware of the platform, would be in a namespace. They're a honking great idea. Let's do more of those.)
Anyone use PEP 723 + uv with an LSP-based editor? What's your workflow? I looked into it briefly; the only thing I found after a lot of digging was to use `uv sync --script <script file>`, get the venv path from the output of that command, and then activate that venv or point your editor at it. Is there any other way? What I describe above seems a bit hacky, since `sync` isn't meant to provide the venv path specifically; it just happens to display it.
Edit: I posted this comment before reading the article. Just read it now and I see that the author also kinda had a similar question. But I guess the author didn't happen to find the same workaround as I mention using the `sync` output. If the author sees this, maybe they can update the article if it's helpful to mention what I wrote above.
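A possibly less hacky route is `uv python find --script`, which appears in uv's CLI reference and prints the interpreter path for a script's environment. Assuming it behaves as documented, a small wrapper like this sketch could feed that path to an editor or LSP configuration:

```python
import subprocess
from pathlib import Path

def script_python(script: str, uv: str = "uv"):
    """Locate the interpreter backing a PEP 723 script's environment.

    Relies on `uv python find --script`, which prints the interpreter
    path for the script's venv; returns None if uv isn't available or
    the command fails (e.g. the environment hasn't been created yet).
    """
    try:
        out = subprocess.run(
            [uv, "python", "find", "--script", script],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return None
    return Path(out.stdout.strip())
```

Unlike scraping `sync` output, this uses a subcommand whose stated purpose is to report the interpreter path.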
My general solution to project management problems with PEP 723 scripts is to develop the script as a regular Python application that has `pyproject.toml`.
It lets you use all of your normal tooling.
While I don't use an LSP-based editor, it makes things easy with Ruff and Pyright.
I run my standard Poe the Poet (https://poethepoet.natn.io/) tasks for formatting, linting, and type checking as in any other project.
One drawback of this workflow is that by default, you duplicate the dependencies: you have them both in the PEP 723 script itself and `pyproject.toml`.
I just switched a small server application from shiv (https://github.com/linkedin/shiv) to inline script metadata after a binary dependency broke the zipapp.
I experimented with having `pyproject.toml` as the single source of truth for metadata in this project.
I wrote the following code to embed the metadata in the script before it was deployed on the server.
In a project that didn't already have a build and deploy step, you'd probably want to modify the PEP 723 script in place.
I'm generally not a fan of the incremental rustification of the Python ecosystem, but I started using uv a few weeks ago just for this particular case and have been liking it, to the point where I'm considering migrating my full projects as well from their current conda+poetry flow. Just a couple days ago I also modified a script I've been using for a few years to patch pylsp so it can now see uv script envs using the "uv sync --dry-run --script <path>" hack.
Out of curiosity, what are some problems with rustification? Is it an aversion to Rust specifically or a dislike of the ecosystem tools not being written in Python?
The former is subjective, but the latter seems like not really much of an issue compared to the language itself being written in C.
I also modified a script I've been using for a few years to patch pylsp so it can now see uv script envs using the "uv sync --dry-run --script <path>" hack.
This sounds like a really useful modification to the LSP for Python. Would you be willing to share more about how you patched it and how you use it in an IDE?
I used to have a virtual environment for all little scrappy scripts, which would contain libraries I use often like requests, rich, or pandas. I now exclusively use this type of shebang and dependency declaration. It also makes running throwaway ChatGPT scripts a lot easier, especially if you put PEP 723 instructions in your custom prompt.
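For anyone who hasn't seen the pattern, such a script might look like the following sketch. The `rich` dependency is just a stand-in, and the fallback import keeps the file runnable even without uv:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "rich",
# ]
# ///
# `uv run` reads the comment block above, builds (or reuses) a cached
# venv containing the listed dependencies, and runs the rest of the
# file inside it.
try:
    from rich import print  # supplied by the venv uv creates
except ImportError:
    pass  # fall back to the builtin print if run without uv

def main() -> str:
    msg = "hello from a self-contained script"
    print(msg)
    return msg

if __name__ == "__main__":
    main()
```

With the shebang and a `chmod +x`, the file becomes a double-clickable-style executable that carries its own dependency list.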
Bonus points for the "Bonus: where does uv install its virtual environments?" section! I'd been wondering the same thing for a long time but hadn't had a chance to dig in. It's great that the venv isn't recreated unless the dependencies or Python version change.
You don't need to run the script as `py wordlookup.py` or make a batch file `wordlookup.cmd` in Windows.
The standard Python installation in Windows installs the py launcher and sets it as the default file handler for .py (and .pyw). So if you try to run `wordlookup.py` Windows will let the py launcher handle it. You can check this with `ftype | find "Python"` or look in the registry.
You can make it even easier than that though. If you add .py to the PATHEXT environment variable, you can run .py files without typing the .py extension, just like .exe and .bat.
I have also been switching to uv recently, frequently with --script, and I love it. What I haven't yet figured out though is how to integrate it with VS Code's debugger to run the script with F5. It seems to insist on running what it thinks is the right Python, not respecting the shebang.
This doesn't require UV, just pip within the same interpreter, but I wouldn't use it for something big, and still requires deps to be updated every now and then ofc (I never tried with raw deps, I always pin dependencies).
Speaking as someone who writes Python code for a living, I like the language, but I consider the ecosystem dire. No one seems able to propose a solution to the problem of 'how do I call someone else's code?' that isn't yelling 'MOAR PACKAGE MANAGERS' in their best Jeremy Clarkson impression.
I have no idea how any of it works and I see no point in learning any of it because by the time I've worked it out, it'll all have changed anyway.
At work, there are plenty of nutjobs who seem to enjoy this bullshit, and as long as following the instructions in the documentation allow me to get the codebase running on my machine, I don't have to deal with any of it.
At home, I refuse to use any Python package that isn't in the Debian repositories. Sure, it's all 'out of date', but if your package pushes breaking changes every fortnight, I'm not interested in using it anyway.
If people are still talking about how great uv is in five years' time, maybe I'll give it a go then.
How long do these isolated uv-created venvs persist? If you have a lot of scripts, does that mean a lot of venvs hanging around, kept ready for reuse in case the same script is run again?
Is it possible to curl the uv binary and then invoke such a packaged script with --no-cache to run everything, including the Python installation, from /tmp?
I'm fairly certain the answer to this is "yes". You'd probably need to futz with env vars to get all the caches etc. into /tmp though. It needs to put Python _somewhere_.
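Something like this sketch could build that environment. The variable names (UV_CACHE_DIR, UV_PYTHON_INSTALL_DIR) come from uv's environment-variable documentation, so verify them against your uv version:

```python
import os

def throwaway_env(scratch: str = "/tmp/uv-throwaway") -> dict:
    """Environment overrides that point uv's cache and its managed
    Python installs under `scratch`, so everything lands in a
    disposable directory instead of the user's home."""
    return dict(os.environ) | {
        "UV_CACHE_DIR": os.path.join(scratch, "cache"),
        "UV_PYTHON_INSTALL_DIR": os.path.join(scratch, "pythons"),
    }
```

You would then pass this as `env=throwaway_env()` to a `subprocess.run` invocation of `uv run --script`, and delete the scratch directory afterwards.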
I don't know of a way to do this for jupyter, but marimo (alternative notebook environment to jupyter) does support self declared dependencies, and indeed uses uv to provide that support.
Tangentially: I want to whip up simple apps (or instruct an LLM to do so), but it's simpler to do formatted text/tables, inputs, graphs, etc. in a single HTML file plus JS. Which library should I adopt? Marimo seems the closest, but are there lighter options for web popups, graphs, inputs, etc.?
I let Claude build them (which it does in React). Then I copy and paste them into o3-mini-high and ask it to port to raw html and JS. It pulls in some chart libraries and goes to town. Give it a crack and see.
Seems like you're dismissing the uv single file setup approach without fully understanding it. I'd recommend giving it a try. It's indeed simpler and snappier than any other package manager to date.
But here all the script is doing is a trivial GET, that's
— https://docs.astral.sh/uv/reference/cli/#uv-python-find--scr...
Now anyone you give your script to has to install uv first.
Hatch has had this feature for a year or so too. https://hatch.pypa.io/latest/how-to/run/python-scripts/
https://www.franzoni.eu/single-file-editable-python-scripts-...
Only problem I haven't been able to solve is how to convince my IDE (PyCharm) to run all scripts through uv before executing them / debugging them.
PyCharm does have uv support, but from what I can see only for uv managed projects, not for individual scripts with embedded requirements.
There are a bunch of environment variables for controlling where it puts things, here's one that looks relevant: https://docs.astral.sh/uv/configuration/environment/#uv_pyth...
It feels a little less elegant, and you don't get access to uv's caching goodness, but that'd more or less achieve what you're looking for!
And yet, the rest of the article is about uv. According to uv itself:
> An extremely fast Python package and project manager, written in Rust.
It's a package manager!