FWIW, I wrote isort, but am seriously considering migrating my projects to use Ruff. Long term I think the design is just better than the variety of tools we use within the Python ecosystem today. The fact that we have a plethora of projects meant to run per commit, with each one reparsing the AST independently, and often using a different approach to do so, just feels untenable long term to me.
> If your project builds a Docker container, also create a .dockerignore file to specify files and directories that should be excluded from the container.
I would nitpick this. You build images, not containers, and since files are not copied by default, there is more nuance here: the .dockerignore file makes builds faster by keeping those files out of the build context.
That does ultimately prevent COPY directives from using them, but it is these sorts of brief, slightly inaccurate summaries that mislead folks as they build understanding.
Shouldn't the speeding up of the build make the program less boring?
From my understanding, the program gets more boring as the time it takes an application to build increases.
> slightly inaccurate
Not entirely; I'm not sure the author even wanted to stress this in the article. People won't learn Docker from a Python article anyway.
Not sure if I like the recommendation to not let Black change your code and just give out errors.
I absolutely let Black change code and see the value in Black that it does that so the devs do not have to spend time on manually formatting code.
Black shouldn't break anything (and hasn't broken anything for me in the years I've used it), but in the unlikely case it does, there are still pytest/unittest runs after that which should catch problems...
As I understood it, it was to not let black do the formatting during CI builds. In local dev you’d let it reformat.
Even while it won’t break anything you want CI to be your safety net, flagging a local setup as being wrong is more valuable than magically autocorrecting it.
My current project is my first project in a while which does not use black.
I liked black, though I was never satisfied with the fact that there was no way to normalize quotes to be single quotes: '. Shift keys are hard on your hands, so avoiding " makes a lot of sense to me. But there's the -S option that simply doesn't normalize quotes so it has never been a real issue.
However, this new project has a lot of typer functions with fairly long parameter lists (which correspond to command line arguments so they can't be broken up).
Black reformats these into weird blocks of uneven code that are very hard to read, particularly if you have comments.
Everyone is a fan of black; no one liked the result. :-/
I have a key in my editor to blacken individual files, but we don't have it as part of our CI. Perhaps next project again.
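For what it's worth, recent Black does have an escape hatch for exactly this case: the "magic trailing comma". A sketch (the command and its parameters are invented here):

```python
# Hypothetical typer-style command with a long parameter list.
# If you leave a trailing comma after the last parameter, Black keeps
# the exploded one-parameter-per-line layout instead of repacking it,
# which also keeps per-parameter comments readable.
def sync(
    source: str,            # where to read from
    dest: str,              # where to write to
    dry_run: bool = False,  # --dry-run: report actions without performing them
    verbose: bool = False,  # --verbose: chatty output
    retries: int = 3,       # --retries: attempts before giving up
):
    """Pretend to sync files; returns a summary of what was requested."""
    return {"source": source, "dest": dest, "dry_run": dry_run,
            "verbose": verbose, "retries": retries}
```

Removing the trailing comma signals Black that it may collapse the signature back onto fewer lines if it fits.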
> I absolutely let Black change code and see the value in Black that it does that so the devs do not have to spend time on manually formatting code
100% this. I also let Black auto-format code in the CI and commit these formats.
A lot of developers, intentionally or not, don't have commit hooks properly set up. If Black doesn't change the code in CI, they need to spend another cycle manually fixing the issues that Black could have just fixed for them.
You're saying that there's a risk that Black could break your code when formatting? Well, so could developers and I'd trust a machine to be less error-prone.
> Not sure if I like the recommendation to not let Black change your code and just give out errors.
Let black format code before it is checked in. Code should not be reformatted for CI or production, and bad formatting should either ALWAYS throw errors (no known defects allowed) or NEVER throw errors (if it passes tests & runs ship it). Consistency is the key.
Ever since the start of Python typing, it has been recommended to use a more generic type like Iterable instead of List. The author claims that List is too specific -- this seems like a straw-man argument against typing that doesn't acknowledge Python's own advice.
Also, mypy has gotten really good in recent years and I can vouch that on projects that have typing I catch bugs much much sooner. Previously I would only catch bugs when unit testing, now they are much more commonly type errors.
The other thing typing does is allow for refactoring code. If anything, high code quality relates to the ability to refactor code confidently, and typing helps with this. Therefore I would put it at the top of the list, above all the tooling presented (the exception being CI/CD, which I agree with).
Iterable is an import away, while list is already at my fingers.
There's zero harm in using list in private interfaces: I know I'm the only one passing the value, I know it is always a list.
As an argument type, Iterable is compatible with list, so its benefits are minimal (with rare exceptions).
Lists are easier to inspect in a debugging session.
Iterable can be useful as return type, because it limits the interface.
Iterable is useful if you are actually making use of generators because of memory implications, but in this case you already know to use it, because your interfaces are incompatible with lists.
I can count on the fingers of my hands the times when using Iterable instead of list actually made a difference.
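A small sketch of the trade-off being discussed (helper names invented): the Iterable-annotated version also accepts tuples and generators, while the list-annotated one is, per the type checker, restricted to actual lists.

```python
from typing import Iterable

def total_concrete(values: list[int]) -> int:
    # Concrete annotation: type checkers will flag tuples or generators here.
    return sum(values)

def total_generic(values: Iterable[int]) -> int:
    # Generic annotation: lists, tuples, sets and generators all type-check.
    return sum(values)

print(total_concrete([1, 2, 3]))               # 6
print(total_generic(x * x for x in range(4)))  # 14
```

At runtime both behave identically; the annotation only changes what mypy will accept at call sites, which is why the difference rarely bites in private interfaces.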
> The other thing typing does is allow for refactoring code.
No. What allows you to refactor code confidently is automated tests. I honestly can't understand why people are so obsessed with types, especially in languages like Python or JavaScript.
I got good use of the run-time type checking of typeguard [0] when I recently invoked it via its pytest plugin [1]. For all code visited in the test suite, you get a failing test whenever an actual type differs from an annotated type.
> Even since the start of python typing, it was recommended to use a more generic type like Iterable instead of List. The author claims that List is too specific
Don't these statements contradict each other? List is too specific, and Sequence[item] is preferred. Sometimes you are dealing with a tuple, or a generator, so it makes more sense to annotate a generic iterable versus a concrete list.
> For example, you basically never care whether something is exactly of type list, you care about things like whether you can iterate over it or index into it.
This is an odd complaint. typing.Sequence[T] has been there since the first iteration of typing (3.5), for exactly that use case, along with many related collection types.
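For example (function invented), Sequence expresses "indexable and iterable" without committing to a concrete list:

```python
from typing import Sequence

def middle(items: Sequence[str]) -> str:
    # Sequence promises __len__ and __getitem__ but not mutation,
    # so lists and tuples both satisfy the annotation.
    return items[len(items) // 2]

print(middle(["a", "b", "c"]))   # b  (a list works)
print(middle(("x", "y", "z")))   # y  (so does a tuple)
```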
You should never be using static typing with a scripting language like Python or Ruby.
Dynamically typed code is 1/3rd the size of statically typed code, that means that one developer who is using dynamic typing is equivalent to 3 developers using statically typed code via MyPy.
Since the code is 1/3rd of the size it contains 1/3rd of the bugs.
This is confirmed by all the studies that have been done on the topic.
If you use static type checking with Python, you have increased your development time by 3x and your bug count by 3x.
Static typing's advantage is that the code runs a lot faster but that's only true if the language itself is statically typed. So with Python you have just screwed up.
> Coverage measurements are too easy to “game” — you can get to 100% coverage without meaningfully testing all or even most of your code
Still, it's a good low bar for testing. It's easy and raises code quality. I have had very good results with coverage driving colleagues to write tests. And in code review we can discuss how to make tests more useful and robust, how to decrease the number of mocks, etc.
Hard disagree: 100% coverage is not a "good low bar" and does not increase code quality.
Depending on the language and the particular project, my sweet spot for test coverage is between 30-70%, testing the tricky bits.
I've seen 100% code coverage with tests for all the getters and setters. These tests were not only 100% useless, they actively hindered any changes to the system.
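A toy example of the phenomenon (function invented): the first test earns full line coverage while verifying nothing.

```python
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_apply_discount_runs():
    # Executes every line of apply_discount -> 100% coverage,
    # but asserts nothing, so changing "-" to "+" above would still pass.
    apply_discount(100.0, 20.0)

def test_apply_discount_properly():
    # The assertion is what actually pins down the behaviour.
    assert abs(apply_discount(100.0, 20.0) - 80.0) < 1e-9
```

Both tests count identically toward a coverage metric; only the second one protects against regressions.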
One useful technique for checking whether the tests are actually meaningful is mutation testing - mutmut is a great Python implementation: https://mutmut.readthedocs.io
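The idea in miniature (hand-rolled toy, not mutmut's actual machinery): a mutation tool generates "mutants" like the swapped version below and re-runs your suite to see whether any test notices.

```python
def clamp(value, low, high):
    """Clamp value into the range [low, high]."""
    return max(low, min(value, high))

def clamp_mutant(value, low, high):
    # The kind of mutant a tool might generate: max and min swapped.
    # Whenever low <= high this version always returns low.
    return min(low, max(value, high))

def weak_suite(fn):
    # Only probes a boundary case; the mutant passes -> it "survives".
    return fn(0, 0, 10) == 0

def strong_suite(fn):
    # Also probes an interior value; the mutant fails -> it is "killed".
    return fn(0, 0, 10) == 0 and fn(5, 0, 10) == 5
```

A surviving mutant is a concrete pointer to behaviour your tests never actually check, which is exactly the gap raw coverage numbers hide.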
Absolutely not. This leads to testing being invasive and driving the design of your software, usually at the cost of something else (like readability). Testing is a tool, you can't let it turn into a goal.
I don't understand. The title of the post is "Boring Python: code quality". Further down: "Today I want to talk about what's generally called "code quality" - tools to help...". I'm sorry, but "code quality" is not "tooling". The post should be titled "Python tooling". Code quality is: What abstractions are you using in your code? How easy is it to make a change? How easy is it to understand your code base? What patterns are you using, and why? Are you abusing class inheritance? How many side effects are present, and how do they affect your program? Are you taking advantage of the Python language's facilities and idioms? Is the code easy to write unit tests for? And so on. To sum up: "tooling" != "code quality".
> This is the first in hopefully a series of posts I intend to write about how to build/manage/deploy/etc. Python applications in as boring a way as possible.
> For example, you basically never care whether something is exactly of type list, you care about things like whether you can iterate over it or index into it.
Terrible advice not to use type hints and this reason makes no sense. There's already pretty good support for Sequence and Iterable and so on, and if you run into a place where you really can't write down the types (e.g. kwargs, which a lot of Python programmers abuse), then you can use Any.
Blows my mind how allergic Python programmers are to static typing despite the huge and obvious benefits.
It's true that Python's static typing does suck balls compared to most languages, but they're still a gazillion times better than nothing, and most of the reason they suck so much is that so many Python developers don't use them!
> I recommend using two tools together: Black and isort.
Black formats things differently depending on the version. So a project with 2 developers, one running arch and one running ubuntu, will get formatted back and forth.
isort's completely random… For example the latest version I tried decided to alphabetically sort all the imports, regardless if they are part of standard library or 3rd party. This is a big change of behaviour from what it was doing before.
All those big changes introduce commits that make git bisect generally slower. Which might be awful if you also have some C code to recompile at every step of bisecting.
Two developers on the same Python project should also use the same tool versions; with Poetry it is straightforward to keep track of dev dependencies. reorder-python-imports is an alternative to isort: https://github.com/asottile/reorder_python_imports
> So a project with 2 developers, one running arch and one running ubuntu, will get formatted back and forth.
Any team of developers who aren't using the exact same environment are going to run into conflicts.
At the very least, there must be a CI job that runs quality gates in a single environment on a PR and refuses to merge until the code is correct. The simplest way is to just fail the build if the job results in modified code, which leaves it to the dev to "get things right". Or you could have the job do the rewriting for simplicity. Just assuming the devs did things the right way before shipping their code is literally a problem waiting to happen.
To avoid CI being a bottleneck, the devs should be developing in the same environment as the CI quality gates (or just running them locally before pushing). The two simple ways to do this are a Docker image or a VM. People who hate that ("kids today and their Docker! get off my lawn!!") could theoretically use pyenv or poetry to install exact versions of all the Python stuff, but different system deps would still lead to problems.
> All those big changes introduce commits that make git bisect generally slower.
Bisection search is log2(n) so doubling the number of commits should only add one more bisection step, yes?
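Right -- sketched numerically (`bisect_steps` is a made-up helper, not a git command):

```python
import math

def bisect_steps(n_commits: int) -> int:
    # git bisect needs roughly ceil(log2(n)) probes to isolate
    # the first bad commit in a range of n commits.
    return math.ceil(math.log2(n_commits))

print(bisect_steps(1024))  # 10
print(bisect_steps(2048))  # 11 -- doubling the history costs one extra probe
```

So reformatting commits barely affect the number of bisection steps; the real cost the parent describes is the rebuild at each step, not the extra step itself.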
> Which might be awful if you also have some C code to recompile at every step of bisecting.
That reminds me, I've got to try out ccache (https://ccache.dev/ ) for my project. My full compile is one minute, but the three files that take longest to compile rarely change.
> isort's completely random… For example the latest version I tried decided to alphabetically sort all the imports, regardless if they are part of standard library or 3rd party. This is a big change of behaviour from what it was doing before.
This is not isort! isort has never done that. And it has a formatting guarantee across the major versions that it actively tests against projects online that use it on every single commit to the repository: https://pycqa.github.io/isort/docs/major_releases/release_po...
> So a project with 2 developers, one running arch and one running ubuntu, will get formatted back and forth.
You should never develop using the system Python interpreter. I recommend pyenv [0] to manage the installed interpreters, with a virtual environment for the actual dependencies.
> Black formats things differently depending on the version. So a project with 2 developers, one running arch and one running ubuntu, will get formatted back and forth.
What's the alternative? YAPF is even worse - it will flip flop between styles even on the same version! Its output is much less attractive, and there are even some files we had to whitelist because it never finishes formatting them (Black worked fine on the same files).
Not using a formatter at all is clearly worse than either option.
I love this template as well, and wholeheartedly recommend it. There are a couple things you probably don't need (click and nox, for instance, seem only useful if you're really building a couple specific things) but the gestalt of it is really strong. The [article series](https://medium.com/@cjolowicz/hypermodern-python-d44485d9d76...) that spawned the template is worth reading in full.
I would go so far as to say that the hypermodern template, nomenclature aside, is strictly better than the recommendations that the OP put forward both here and in the previous essay on dependency management. Poetry and ruff, for instance, are both very good tools — and I can understand _not_ recommending them for one reason or another but to not even mention them strikes me as worrisome.
I don't work on large Python projects, mostly just small scripts that need to work well (integrating with a 3rd-party REST API is a good example). I don't do CI or unit tests, but I use git. This is because it takes time, and honestly no one outside of myself would care for small stuff like that. But I do run autopep8 and pylint on it (I ignore stuff like lines being too long, broad exception handling, or lack of docs).
My concern is a) It needs to be reliable (don't wanna spend a ton of time chasing bugs later on) b) How can I write the actual code better? I see what pro devs write and they use smarter language features or better organization of the code itself that makes it faster and reliable, I wish I could learn that explicitly somewhere.
I mean, just the 2.7->3.0 jump was big for me because, since I don't code regularly, it meant googling errors a lot. Even now, I dread new Python versions because some dependency will start using those features, and that means I have to use venv to get that small script to work, and then figure out how to troubleshoot bugs in that other lib's code with the new feature so I can do a PR for them.
I love Python, but this is exactly why I prioritize languages that don't churn out drastic new features quickly. Those are just not suitable for people whose day job is not coding: migrating to new versions, supporting code bases, messing with build systems, unit tests, QA, CI, etc. Coding is a tool for me, not the centerpiece of all I do. But Python is still great despite all that.
> I love python but this is exactly why I prioritize languages that don't churn out new drastic features quickly.
What do you mean by "drastic" features "quickly"? Python releases a new version once a year these days, and upgrading our Django-based source code with 150 dependencies from 3.4 to 3.11 literally meant switching out the Python version in our CI configuration and README.rst every once in a while; no code changes were necessary for any of those jumps...
Our developer README also contains a guide on how to set up and use pyenv and its virtualenv plugin, which makes installing new Python versions and managing virtualenvs easy: just pyenv install, pyenv virtualenv, pyenv local, and your shell automatically uses the correct virtualenv whenever you're anywhere inside your project folder...
jumping to python3 was big, but you had plenty of time to prepare for that and plenty of good utilities to make the jump easier (2to3, six, ...). python2.7 itself was released 18 months after python3.0, and by the time python2.7's support ended, python3.8 was already out...
> Even now, I dread new python versions because some dependency would start using those features
If a dependency breaks compatibility with earlier Python versions because the author wants to use a fancy new feature, that's not really the fault of Python, is it? Library authors should target the earliest supported Python version they can.
Being backwards compatible (at which Python has been doing a good job since the 2->3 fiasco) is one thing, but trying to be forwards compatible is something else.
Are you suggesting that Python developers should only ship bug fixes so that Python 3.0 can still run code written for Python 3.11?
Not agreeing/disagreeing with the message, but the style of writing here is quite nice. It's focused, reasoned, and doesn't make too many assumptions about your tools and environment--and I appreciate that acknowledgment.
One thing that is underestimated is keeping the tool versions in sync between your app's dev dependencies and pre-commit. This also includes plugins for specific tools (for instance flake8). A solution would be to define the hooks in pre-commit to run the tools inside your venv.
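For instance, a sketch of what that version pinning looks like in `.pre-commit-config.yaml` (the versions below are placeholders, not recommendations -- they should mirror whatever your dev dependencies pin):

```yaml
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.1.0            # keep in lockstep with flake8 in your dev dependencies
    hooks:
      - id: flake8
        additional_dependencies:
          - flake8-bugbear==23.9.16   # plugins must be pinned here too
```

If the `rev` or plugin pins drift from the versions in your venv, pre-commit and your local runs can disagree about what passes.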
About typing: I agree the ecosystem is not mature enough, especially for some frameworks such as Django, but the effort is still valuable, and in many cases the static analysis provided by mypy is more useful than not using it at all. So I would suggest doing your best to make it work.
I disagree with this assessment on running a static type checker, although I will admit, every update of python over the past 3 years seems to add more and more typing changes which tends to force global typing updates (looking at you Numpy for python 3.12!)
When python converges on consistent typing across its extended numpy and pandas ecosystem, I believe we will be able to move towards a fully JIT'd language.
> I believe we will be able to move towards a fully JIT'd language.
Unless they actually go ahead with the deferred evaluation of types (PEP 563), make all types strings at runtime and make it impossible to know which type they actually are. :)
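For reference, the PEP 563 behaviour is already observable via the future import -- annotations become plain strings at runtime (the function here is invented for illustration):

```python
from __future__ import annotations  # PEP 563: postponed evaluation of annotations

def scale(x: int, factor: float) -> float:
    return x * factor

# With postponed evaluation, __annotations__ stores the source text,
# so runtime tools see the string 'int', not the class int:
print(scale.__annotations__)  # {'x': 'int', 'factor': 'float', 'return': 'float'}
```

Runtime consumers then have to resolve those strings themselves (e.g. via typing.get_type_hints), which is the JIT-unfriendliness the parent is joking about.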
What's the current state of the art of managing multiple virtual environments, running tests and running your application?
On Ubuntu and Windows I use Poetry [0], and it works, although it has (had?) some quirks during the installation on Windows.
I liked its portability and lockfile format though.
A few years ago I used conda [1], which was nice because it came batteries included especially for Deep Learning stuff.
I switched because it felt way too heavy for porting scripts and small applications to constrained devices like a Raspberry Pi.
And then there are also Docker Images, which I use if I want to give an application to somebody that "just works".
I use pip-tools to build a requirements.txt file from a requirements.in file. It does basically the same as poetry, but more manually. For me that's good because one of the application has a lot of requirements, and it needs to be deployed on systems with different Python versions, and the requirements need to be packaged along with the application because the servers have very limited internet access. So as long as Poetry doesn't add good support for multiple python versions and/or easy packaging of all dependencies, it isn't worth it for me to do the migration.
I've been liking PDM for a while now. It's quicker than Poetry and built with the Python packaging spec in mind rather than as an afterthought. While it was originally meant to work with PEP 582, it works with virtual environments too (now the default).
>I switched because it felt way to heavy for porting scripts and small applications to constrained devices like a Raspberry Pi.
Agreed. I like docker images for smallish portable scripts. At home I can develop on my Mac and port it to a Raspberry PI or another x86 Windows/Linux box.
Planning on running a docker swarm with a few Pi’s to see how it works.
I wish VSCode would figure out that ExampleModel.objects.first() returns ExampleModel or None or ExampleModel.objects.filter() returns an iterable of ExampleModel. Has anybody gotten this working, automatically or manually annotating?
You can annotate the manager and get some typing help in the editor. And there’s django-stubs which helps a little when running mypy. It’s not as good as pycharm though.
https://github.com/charliermarsh/ruff
It’s literally 100 times faster, with comparable coverage to Flake8 plus dozens of plugins, automatic fixes, and very active development.
Though their `v0.0.X` versioning is very funny to me (https://0ver.org/).
replaced both flake8 and isort across all my projects
[0]: https://github.com/agronholm/typeguard/
[1]: https://typeguard.readthedocs.io/en/latest/userguide.html#us...
https://docs.python.org/3/library/typing.html
mypy isn’t perfect, but it’s sure better than making things up without any checks; you’re going to want it for all but the smallest projects.
It's a riff on Boring Technology, see https://boringtechnology.club/
Then add Black as part of your environment with a specific version...
[0] https://github.com/pyenv/pyenv
zzzeek|3 years ago
use pre-commit https://pre-commit.com/ so that everyone is on the same version for commits.
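A minimal `.pre-commit-config.yaml` pinning the formatter for the whole team might look like this (the `rev` tag is illustrative):

```yaml
# every clone runs the same pinned hook version on commit
repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0   # pin an exact tag so every machine formats identically
    hooks:
      - id: black
```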
IshKebab|3 years ago
Not using a formatter at all is clearly worse than either option.
tilschuenemann|3 years ago
https://github.com/cjolowicz/cookiecutter-hypermodern-python
jmduke|3 years ago
I would go so far as to say that the hypermodern template, nomenclature aside, is strictly better than the recommendations that the OP put forward both here and in the previous essay on dependency management. Poetry and ruff, for instance, are both very good tools — and I can understand _not_ recommending them for one reason or another but to not even mention them strikes me as worrisome.
badrabbit|3 years ago
My concern is a) It needs to be reliable (don't wanna spend a ton of time chasing bugs later on) b) How can I write the actual code better? I see what pro devs write and they use smarter language features or better organization of the code itself that makes it faster and reliable, I wish I could learn that explicitly somewhere.
I mean, just the 2.7->3.0 jump was big for me because since I don't code regularly that meant googling errors a lot basically. Even now, I dread new python versions because some dependency would start using those features and that means I have to use venv to get that small script to work and then figure out how to troubleshoot bugs in that other lib's code with the new feature so I can do a PR for them.
I love python but this is exactly why I prioritize languages that don't churn out new drastic features quickly. Those are just not suitable for people whose day job is not coding and migrating to new versions, supporting code bases, messing with build systems, unit tests, QA, CI, etc... coding is a tool for me, not the centerpiece of all I do. But python is still great despite all that.
black3r|3 years ago
What do you mean by "drastic" features "quickly"? Python releases a new version once a year these days, and upgrading our Django-based source code with 150 dependencies from 3.4 to 3.11 literally just meant switching out the Python version in our CI configuration and README.rst every once in a while; no code changes were necessary for any of those jumps...
Our developer README also contains a guide on how to set up and use pyenv and its virtualenv plugin, which makes installing new Python versions and managing virtualenvs easy: just pyenv install, pyenv virtualenv, pyenv local, and your shell automatically uses the correct virtualenv whenever you're anywhere inside your project folder...
Jumping to python3 was big, but you had plenty of time to prepare for it and plenty of good utilities to make the jump easier (2to3, six, ...). python2.7 itself was released 18 months after python3.0, and by the time python2.7's support ended, python3.8 was already out...
selcuka|3 years ago
If a dependency breaks compatibility with earlier Python versions because the author wants to use a fancy new feature, that's not really the fault of Python, is it? Library authors should target the earliest supported Python version they can.
Being backwards compatible (at which Python has been doing a good job since the 2->3 fiasco) is one thing, but trying to be forwards compatible is something else.
Are you suggesting that Python developers should only ship bug fixes so that Python 3.0 can still run code written for Python 3.11?
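Targeting the earliest supported interpreter is something a library can declare explicitly in its metadata, so pip refuses to install it on anything older (a PEP 621 sketch; the package name is hypothetical):

```toml
[project]
name = "example-lib"        # hypothetical package
requires-python = ">=3.8"   # earliest interpreter the library promises to support
```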
mau|3 years ago
About typing: I agree the ecosystem is not mature enough, especially for some frameworks such as Django, but the effort is still valuable, and in many cases the static analysis provided by mypy is more useful than not using it at all. So I would suggest trying your best to make it work.
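For Django specifically, the django-stubs plugin is one way to make mypy framework-aware; a mypy.ini sketch (the settings module name is hypothetical):

```ini
[mypy]
plugins = mypy_django_plugin.main

[mypy.plugins.django-stubs]
django_settings_module = myproject.settings
```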
LarsDu88|3 years ago
When python converges on consistent typing across its extended numpy and pandas ecosystem, I believe we will be able to move towards a fully JIT'd language.
bombolo|3 years ago
Unless they actually go ahead with deferred evaluation of annotations (PEP 563), making all annotations strings at runtime and making it impossible to know which type they actually are. :)
But they will probably not: https://discuss.python.org/t/type-annotations-pep-649-and-pe...
But it could be a breaking change in the language. As it is, I can run this "a: str = 3" and it will work.
modeopfer|3 years ago
On Ubuntu and Windows I use Poetry [0], and it works, although it has (had?) some quirks during installation on Windows. I like its portability and lockfile format, though.
A few years ago I used conda [1], which was nice because it came batteries included, especially for Deep Learning stuff. I switched because it felt way too heavy for porting scripts and small applications to constrained devices like a Raspberry Pi.
And then there are also Docker Images, which I use if I want to give an application to somebody that "just works".
What's your method of choice?
[0] https://python-poetry.org/
[1] https://www.anaconda.com/
rirze|3 years ago
https://github.com/pdm-project/pdm
Flex247A|3 years ago
[0] https://docs.conda.io/en/latest/miniconda.html
wil421|3 years ago
Agreed. I like docker images for smallish portable scripts. At home I can develop on my Mac and port it to a Raspberry PI or another x86 Windows/Linux box.
Planning on running a docker swarm with a few Pi’s to see how it works.
jsmeaton|3 years ago
https://github.com/typeddjango/django-stubs/tree/master
jerrygenser|3 years ago