Didn't yet go through the content, but having an AI-generated image that you didn't even bother to touch up a bit to fix the text does not give me a lot of confidence about the effort that went into this.
I came to this comment section to say exactly that.
At my work (a university research lab) the Ph.D. students have to publish their thesis as a book to defend their degree. They are free to make the image for the cover, which is a very nice touch and gives you artistic freedom in what is supposed to be one of the most important moments of your career (I went for a picture of the chip I designed during my research).
For the past 3 years or so, all we get are generic AI-generated sciency-looking figures on the cover, and it is depressing.
Adds nothing; they could have just picked a totally unrelated stock photo if they wanted to add something there. It immediately halo-effects the whole thing into something put together without effort. Stop doing this!
Isn't this overly critical? The content matters far more than the image, and the chapters are good. I didn't even register the image - I think most folks today have eyes that auto-skip images after being pattern-trained to ignore ads.
I’m an occasional listener of the Talk Python podcast, and I’ve also taken Michael’s pytest course. It’s clear that he puts considerable effort into his content.
I don't even understand why they do that; surely putting together even a low-quality something would make it much better, and with actual font rendering.
Yeah, immediately off-putting, even though I actually enjoy the podcast. Not for "AI bad" reasons; it's just ugly. Michael, if you are reading this - please fix it, it should take 5 minutes.
I clicked on the link and was greeted by AI slop instantly. I checked the comments, saw this, am writing this and will probably not look at it ever again. Guess I am just not the target audience. I wish them that their AI slop strategy works out just for the sake of good vibes, though. If everyone does it it can't be bad, right? I'm the issue here, clearly.
nginx and granian are my favorite technologies to work with! Completely agree, though: I despise this trend of putting a completely useless and ugly AI image on top of your page. You could have searched the web for an actual diagram if you wanted one here. These images provide negative value to your articles.
The idea of the book is to pull away a lot of the hype of big cloud providers and show practical steps for how we run things over at Talk Python (podcast, courses, e-commerce, and more). I hope some of you find this refreshing!
You can read the first 1/3 online for free. The rest is available DRM free.
Seems interesting; I read the online summary. I am curious to read your part in Chapter 14 (I am one of the Litestar maintainers). Thank you for the book!
Unrelated to the content: why on earth is super-light grey a good "bold" colour for a white background? I'm having to highlight each of the bolded parts of the text just to understand it :/
edit: console command for anyone else struggling to read this `document.documentElement.style.setProperty('--bulma-strong-color', '#000');`
Is this some new trend where websites include @media (prefers-color-scheme: light) and @media (prefers-color-scheme: dark) in their CSS, but it just breaks the site?
This site doesn't even have two themes, that css is just there to break the bold text!
Haha, came here to mention the light grey text on white background as well. This is a great example of poor accessibility. It should be obvious to a human eye that this is bad; but in case it weren't, one could open up Chrome dev tools, find the styles for this text, click on the color picker, and observe that Chrome reports the contrast ratio for that text to be 1.17, whereas a comfortable (accessible) contrast ratio starts at 4.5.
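For reference, that ratio comes from the WCAG 2 relative-luminance formula, which is easy to check yourself. A minimal sketch (the grey hex value below is an example, not the site's actual color):

```python
# WCAG 2 contrast ratio between two sRGB hex colors.
def _linear(c8):
    # Convert an 8-bit channel to linear light per WCAG 2.
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio("#000000", "#ffffff"))  # 21.0 -- the maximum possible
print(contrast_ratio("#d3d3d3", "#ffffff"))  # light grey on white: well below 4.5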
"Have you heard the phrase "You're not Google, you're not Facebook, and you're not Netflix"? The TL;DR; is those tech giants that have 1M+ concurrent users. They have a hard requirement for no downtime."
Actually, one of the more interesting parts of the Google SRE book was that they don't try to aim for zero downtime. They consider the background error rate of any network request, and optimising much beyond this is counterproductive.
Even for individual services they make a point of not trying to make them perfectly available, as this means downstream services are less likely to build in adequate provision for failure.
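The arithmetic behind that point is simple: a chain of hard dependencies multiplies availabilities, so pushing one service far above its dependencies buys almost nothing. A toy sketch (the three-nines figures are made-up examples, not Google's numbers):

```python
from math import prod

def chained_availability(*availabilities):
    """A request that must traverse every service in series succeeds
    only if all of them do, so availabilities multiply."""
    return prod(availabilities)

chain = chained_availability(0.999, 0.999, 0.999)
downtime_minutes = (1 - chain) * 30 * 24 * 60  # per 30-day month

print(f"{chain:.6f}")             # ~0.997003 -- worse than any single link
print(f"{downtime_minutes:.0f}")  # ~129 minutes/month
```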
You can and probably should go thinner than this: with uv we effectively have a workflow comparable to deploying static binaries in other language stacks. You don't need the complexity of Docker for this book's goal.
This "go thin with uv" approach is good advice for smaller projects. But as you grow to cover more aspects, it gets more problematic.
I ran code that way for years. But now we have 23 different services: web apps, APIs, and database servers - my code plus other self-hosted services.
I would NOT run 23 projects/servers (3 versions of postgres) this way. Like so much, it depends. FWIW, the book goes into depth about these trade-offs.
I'm glad to see people recognizing that computers are quite fast and that they don't need massive cloud-scale solutions for simple problems. That being said, Python really shines as glue code and in small scripts where performance doesn't matter. You'll see considerable performance (and likely maintainability) gains by moving off of Python to almost any other language.
I don't know, but the "Read Online" button leads me to "https[://]talkpython.fm/books/python-in-production/#read-online", and that URL then tries to redirect to "https[://]talkpython.fm/books/python-in-production#read-online". (Notice how the last slash of the path is missing).
This forced my browser to reload the page, and it beats the entire purpose of anchoring and fragment-based navs.
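A small illustration of why the fragment can't prevent this: fragments are purely client-side and are never sent to the server, so the redirect is triggered by the path mismatch alone (sketched with the stdlib's urllib.parse; the un-defanged URLs are used so the parser can read them):

```python
from urllib.parse import urlsplit

before = urlsplit("https://talkpython.fm/books/python-in-production/#read-online")
after = urlsplit("https://talkpython.fm/books/python-in-production#read-online")

# Same fragment, same host -- only the path's trailing slash differs,
# and the path is the only part of this the server ever sees.
print(before.fragment == after.fragment)  # True
print(before.path, "->", after.path)
```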
Those numbers bother me. What does it mean to be "6x cheaper"? Is that a sixth the price? If something costs $100, what is 6x cheaper than it? 2x cheaper? 1x cheaper?
It is as frustrating as when people use "200% faster" to mean exactly the same thing as "twice as fast", and "100% faster" to mean the same thing.
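Pinning the ambiguity down with arithmetic makes the complaint concrete (the prices and speeds below are made-up examples):

```python
price = 100.0

# The only coherent reading of "6x cheaper" is "one sixth the price":
print(price / 6)  # ~16.67

# Read literally, "100% faster" and "200% faster" are NOT the same thing:
speed = 50.0  # e.g. requests/second
print(speed * (1 + 1.00))  # 100% faster -> 100.0, i.e. twice as fast
print(speed * (1 + 2.00))  # 200% faster -> 150.0, i.e. three times as fast
```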
A lot are getting ripped off. To what degree depends on their tech capabilities and business savvy (i.e., what it would cost them to do it themselves and what kind of discount they can negotiate from the cloud provider). If you're paying the listed rates, you are getting ripped off.
Nice. I went uwsgi -> gunicorn -> gunicorn with uvicorn workers -> granian. Granian is great. While not crazy popular on its own, it's based on Rust's Hyper, see https://crates.io/crates/hyper - Hyper has 400M downloads, so it's safe to say it's pretty battle-tested.
I interviewed the creator of Granian on Talk Python BTW.
The book needs to remove the AI images. They actively hurt the eyes with their wrong, weird perspective.
It is pretty light reading, name dropping a lot of software without going into details.
As always with Python: These books do not tell you the downsides, and the future of Python is uncertain because the governance has been taken over by a bunch of mediocre weirdos. Python core has always suffered from the problem that occasionally smart people implement something and then leave, but the majority of core devs are pretty dumb and they can now vote in their own after van Rossum left.
Those tech giants got to where they are by recognising specifically that they don't have "no downtime" requirements.
"Move fast and break things" isn't the mantra of companies with zero downtime requirements.
Was hoping the book would cover data persistence.
An 8 CPU / 16 GB RAM server at Hetzner is $30 or so per month. It's $200+ at AWS / Azure.
Bandwidth: 4 TB is included for free at Hetzner; it's $92.16 / TB, or $368.64 additional, at AWS / Azure.
That is where the 6x comes from. It's described in detail with that math in the book, BTW.
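Backing the multiplier out of those quoted numbers (taking the figures above at face value; actual quotes vary by region and instance type):

```python
hetzner_monthly = 30.0     # ~8 CPU / 16 GB at Hetzner, as quoted
big_cloud_monthly = 200.0  # comparable AWS / Azure instance, as quoted

print(big_cloud_monthly / hetzner_monthly)  # ~6.7x on compute alone

# Bandwidth: 4 TB is included at Hetzner; at $92.16/TB elsewhere,
# the same 4 TB costs an extra:
print(4 * 92.16)  # 368.64 (dollars/month)
```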