100%. I stopped writing anything that had to be deployed (basically everything except Jupyter notebooks for data stuff) in Python because it’s truly a nightmare. Go and goreleaser is great for writing a CLI (and if it’s public, it can auto generate binaries and upload to GitHub, create a Homebrew/Scoop bucket, etc)
A lot of the advice is good but I take issue with this one. With poetry and Docker, packaging Python apps for easy consumption is a non-issue. Same for Ruby. If you can get your team to standardize on poetry, you might not even need containers -- but, honestly, running these tools from CI or automation (anywhere) is so useful that you probably want container versions anyway.
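For concreteness, the poetry setup being described is small -- a hedged sketch of a `pyproject.toml` (the tool name `mytool` and module path are invented):

```toml
[tool.poetry]
name = "mytool"
version = "0.1.0"
description = "Internal CLI tool"
authors = ["Dev Team"]

[tool.poetry.dependencies]
python = "^3.10"

# This is what makes `mytool` land on PATH after `poetry install`
# (or `pip install .`): a console-script entry point.
[tool.poetry.scripts]
mytool = "mytool.cli:main"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

With this in place, teammates run `poetry install` and get the same locked dependency set, which is most of what "standardize on poetry" buys you.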
Golang is not a good fit for exploratory CLIs that work with complex data structures and are written for one-off, low-CPU consumption purposes -- not for scale-up API services. Just having an interactive shell (Python's REPL, or `pry` in Ruby -- the two are identical for this purpose) saved me probably weeks of time. Trying to unit test any moderately complex API surface brings me to tears when I compare it to trivial object mocking in something like Ruby.
Python (or Ruby) is ideal for this, and both have excellent frameworks for CLI tools.
I recently had to package a machine learning Jupyter notebook created by a very smart person, albeit a scientist, not a developer. Getting the tool somehow into production, making it reproducible, testable, and maintainable, has proven to be a major headache.
Before, I only casually dabbled in the Python world, but this was the first time I had to care about packaging, dependencies, CI and the like. Turns out, as TFA said, nobody knows how to package Python apps right. For what it’s worth, I wasn’t even able to find some kind of best practice for managing friggin dependencies. There’s a myriad of package managers, all of them work differently, and nobody seemed to have something like redistributing an app to other people on their mind. Coming from PHP, JavaScript and Go, this was utterly ridiculous to me.
Go ahead, tell me I got it all wrong and it’s really easy using tool Xyz, but if a developer with some experience under their belt isn’t able to figure this out in a few days, things are just broken.
Ruby is pretty good at this, but I'd heavily argue against Python. Maybe in another decade, when they manage to restabilize what they wrought -- it used to be easy.
Python gets more and more complex the further away the people running the tool are from fancy recentish distros (Fedora, Ubuntu, Arch) or special constrained environments (Nix, running the CLI in docker). The moment you have to deal with unspecified RHEL version (6 is still reasonably common) or derivative, or Mac or Windows, kiss any expectation of python packaging being nice "bye bye".
Unless of course you have a platform team that can handle the packaging and distribution for you, but then it probably falls a bit under "constrained environment".
> Golang is not a good fit for exploratory CLIs that work with complex data structures and are written for one-off, low-CPU consumption purposes
"exploratory CLIs that work with complex data structures and are written for one-off ... purposes"
Sounds like a maintenance nightmare if you are actually "deploying" Python scripts that fit this definition. Yeah, golang will usually require a bit more boilerplate up front, but it is going to make your workflows infinitely more maintainable & flexible in the long term due to type safety + easy (and docker-less) packaging.
IMHO, if it's not a literal one-liner in ($SHELL|curl|awk|jq|*), it should probably be done right the first time in golang/etc or go back to the drawing board.
The packaging thing (and I don't necessarily think "put it in Docker" is a great answer, especially if you're using Python as glue -- why isolate your glue?) is something you kind of solve once and are done with. You probably already want/need a common development environment between people that is close to what you deploy, and adding a fixed Python into the mix with pyenv or whatever is not a big deal.
I don't know, I've generally had a good time with it. You have to know how to do it and where the footguns are (I'm still learning), but it's better than being comparatively gimped in development velocity using Go, never mind Rust or C++.
No one wants a CLI in Ruby -- if you look around, there is no popular CLI written in Ruby. It's one of the worst languages to write a CLI in, because no one has the Ruby runtime installed on their machine, and then you have different architectures, different OSes, etc. Go is way, way better than Ruby on that topic.
As for complex data structures, I don't understand exactly what you mean -- as if a dynamic language would model that more easily than a strongly typed language.
Once you get a single binary for a CLI, it's hard to use anything else, compared to the pain of using pip for anything Python-based.
Regarding being cloud provider agnostic: it’s not always for fault tolerance; there can be a couple of different reasons.
1) it gives your company a stronger bargaining position with the cloud provider.
Granted, my companies tend to have extremely high spend -- but being able to shave a dozen or so percent off your bill is enough to hire another 50 engineers in my org.
2) you may end up hitting some kind of unarguable problem.
These could be business driven (my CEO doesn’t like yours!), technical (GCP is not supported by $vendor) or political (you need to make a China version of your product, no GCP in China!)
Everything is trade-offs. AWS never worked for us because the technical implementation of their hypervisor did not affine VMs to the machine's CPU cores, meaning you often compete with other VMs for memory bandwidth -- but AWS works in China (kinda). So my solutions support GCP, with AWS as a slightly less supported backup.
I'd add another reason: Devs need to be able to run stuff locally sometimes.
It's neat having a serverless single-page app that is hosted in S3, served through CloudFront, with Lambdas that post messages to SQS queues that are read by God knows what else, but what happens when there's a bug? How do you test it? You can throw more cloud at it and give each dev a way to build their own copy of the stack, but that's even more work to manage. Maybe localstack behaves the same, but can you integrate it with your test framework?
I never took a hard "we must never use aws-only services" approach, but having the ability to run something locally was a huge plus. Postgres RDS? Totally fine, you don't need amazon to run postgres. Redshift? Worth the lock-in given the performance. Lambda? Eh, probably not, given that we already have a streamlined way to host a webapp.
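One way to keep that local option open is to make the AWS endpoint configurable, so the same code can talk to localstack or real AWS. A minimal sketch (the env var name is illustrative; 4566 is localstack's default edge port):

```python
import os

def client_kwargs(service, endpoint=None):
    """Build kwargs for boto3.client(): point at localstack when an
    endpoint override is set, real AWS otherwise."""
    endpoint = endpoint or os.environ.get("AWS_ENDPOINT_URL")
    kwargs = {"service_name": service}
    if endpoint:  # e.g. http://localhost:4566 when localstack is running
        kwargs["endpoint_url"] = endpoint
    return kwargs

# In application code (requires the real boto3 library):
#   sqs = boto3.client(**client_kwargs("sqs"))
```

Tests can then point the whole suite at localstack with a single environment variable, without touching application code.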
>you may end up hitting some kind of unarguable problem.
Another example: there are multiple countries (for example, here in Russia) where personal data must be stored in data centers located within the country's borders, and not every country has an AWS datacenter on its soil.
> Don't migrate an application from the datacenter to the cloud
Reading the actual text of this one I get a different impression, but I'm still not sure I agree with this one. Applications can be radically different from each other in terms of how they are run.
At one company, we ran a simple application as SaaS for our customers or gave them packages to run on-prem. We'd stack something like seven SaaS customers on a single set of hardware (front-ends and DB servers). The cloud offering was a no-brainer, you can just migrate customers one by one to AWS or whatever, or spin up a new customer on AWS instead of in our colocation center.
Applications have a very wide range of operational complexity. Some applications are total beasts--you ask a new engineer to set up a test environment as part of on-boarding and it takes them a week. Some applications are very svelte, like a single JAR file + PostgreSQL database. The operational complexity (complexity of running the software) doesn't always correspond to the complexity of the code itself or its featureset.
> I've been involved now in three attempts to do large-scale migrations of applications written for a specific datacenter to the cloud and every time I have crashed upon the rocks of undocumented assumptions about the environment
I've only participated in a single on-prem to cloud migration. Some parts of the migration were easy, e.g. moving a postgres DB that was running on some on-prem linux server to run in AWS RDS. Some parts were rather unpleasant: you discover that a bunch of the application code that runs in worker processes assumes it has access to a shared CIFS network share that can be used for communication through the filesystem, and absolute file paths to on-prem CIFS network share locations are stored in metadata throughout the database. So then your available moves for how to migrate the application code and migrate the CIFS network share and migrate the data in the database all become somewhat tangled together.
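A hypothetical sketch of the kind of path-rewriting shim such a migration ends up needing -- the share names and target mounts are invented:

```python
# Rewrite absolute CIFS paths stored in DB metadata to their new
# locations (e.g. an NFS/EFS mount) during migration. Prefixes are
# invented for illustration.
PATH_MAP = {
    r"\\fileserver01\jobs": "/mnt/shared/jobs",
    r"\\fileserver01\exports": "/mnt/shared/exports",
}

def rewrite_path(old_path):
    """Map one stored CIFS path to its post-migration equivalent."""
    for prefix, new_prefix in PATH_MAP.items():
        if old_path.lower().startswith(prefix.lower()):
            # Keep the tail, converting backslashes to POSIX separators.
            tail = old_path[len(prefix):].replace("\\", "/")
            return new_prefix + tail
    raise ValueError(f"unmapped path: {old_path!r}")
```

The unpleasant part is that this function has to run over every table that ever recorded a path, which is why the three migrations become tangled.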
I helped migrate an app from on-prem to cloud. During the migration we found that the app needed a locally installed Oracle DB. That violates on-prem best practices and cloud best practices alike. I think migrating just exposes all the shortcuts baked into a "craplication."
Especially the alerts thing. I think every company I've ever worked for made the mistake of ignoring alert spam. If an alert doesn't require human action, then it should be a log or a metric. And by all means plot it on a graph (the metric that triggered the alarm, or in the case of a boolean test result, the frequency of the failure). Look at the graph during real incidents if you want. Talk about it at the monthly meeting. But don't generate an alert that people should ignore. You're playing Russian Roulette.
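The "plot the frequency of a boolean failure" idea can be as simple as counters you graph instead of page on. A minimal sketch (the metric store is a stand-in dict, not a real statsd/Prometheus client):

```python
from collections import Counter

# Stand-in metric store; in reality this would be statsd, Prometheus, etc.
METRICS = Counter()

def record_check(name, ok, actionable=False):
    """Non-actionable failures become a metric to graph, never a page."""
    METRICS[f"{name}.runs"] += 1
    if not ok:
        METRICS[f"{name}.failures"] += 1
        if actionable:
            return "page"    # a human must act on this
    return "metric"          # plot it, discuss it monthly, don't alert

def failure_rate(name):
    """The number you'd actually put on the graph."""
    runs = METRICS[f"{name}.runs"]
    return METRICS[f"{name}.failures"] / runs if runs else 0.0
```

The point of the split is exactly the comment's rule: if nobody is expected to act, the signal goes to the graph, not the pager.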
> If you are in AWS, don't pretend that there is a real need for your applications to be deployable to multiple clouds. If AWS disappeared tomorrow, yes you would need to migrate your applications. But the probability of AWS outliving your company is high
Well, it's not about AWS shutting down at all! It's about them having complete control over your infrastructure, so they dictate the terms. This has many consequences: (1) they can raise prices and you can do absolutely nothing about it; (2) since you chose AWS with its dynamic pricing instead of flat-rate dedicated servers, each expansion (traffic, new services) is a cost for you. This means at some point you will realize you would save sick amounts of money by switching to bare metal (as several notable companies have done). Except that by then it's really difficult, because you basically have to start from zero, so inertia pulls you into continuing this vicious cycle.
So this is just a straw-man argument. Really, I haven't heard anyone saying "but Amazon can go out of business", it's just ridiculous.
> You spring back to the present day, almost bolting out of your chair to object, "Don't do X!". Your colleagues are startled by your intense reaction, but they haven't seen the horrors you have.
They may be startled, but they almost certainly won't listen. The purgatory nature of IT work culture ensures this repetitive pattern.
That’s my experience too. The older I get the more frustrated I am that people don’t want to learn from my mistakes.
It’s not that I’m a jerk about it. Most of the time it’s business types saying “you don’t need to do all that stuff - just deploy it like that, it’ll be fine”.
And then it’s not fine - and also, it’s somehow my fault.
It seems to be more about personal aggrandisement (“I’m the boss and my word is law”) rather than trying to build a great business together. I’m pretty over it.
Designing for portability is important. Otherwise you expose yourself to dreadful uncertainties.
"AWS will not disappear". That is probably true. The average business can take this risk (and if you are huge you are not listening to me). But AWS might raise its prices to a point they are getting all your profit. DO you trust Amazon? Really? The particular AWS feature you depended on with the tight coupling "Don't Design for Multiple Cloud Providers" implies may get deprecated. What then?
This is as old as the hills: design in layers. Have an AWS layer. If AWS goes away, quadruples its fees, deprecates your services, or you are hit with USA sanctions, then there is one layer that has to be rewritten. Old wisdom. Use it.
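A minimal sketch of what such an "AWS layer" can look like -- a narrow interface with a local implementation, and the cloud-specific code confined to one class (names are illustrative; the commented-out S3 variant assumes boto3):

```python
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """The 'layer': application code only ever sees this interface."""
    @abstractmethod
    def put(self, key: str, data: bytes): ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """Filesystem-backed implementation, used in dev and tests."""
    def __init__(self, root):
        self.root = Path(root)
    def put(self, key, data):
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)
    def get(self, key):
        return (self.root / key).read_bytes()

# The S3 variant lives behind the same interface; if fees quadruple or
# the service is deprecated, only this class gets rewritten (needs boto3):
# class S3BlobStore(BlobStore):
#     def __init__(self, bucket):
#         import boto3
#         self.bucket, self.s3 = bucket, boto3.client("s3")
#     def put(self, key, data):
#         self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
#     def get(self, key):
#         return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```

The cost of the layer is a little indirection; the payoff is that the blast radius of a provider change is one module instead of the whole codebase.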
Parler was on AWS and they got booted off. Because of that, they collapsed. If they had two cloud deployments they could have survived. AWS will never go away, but they can make your small (or medium) size business go away pretty quick.
And just a side note, Parler was toxic and I shed no tears about their demise...
Bummer about Python :/ it’s my go-to for CLI tools, but I’ve seen that problem too.. pipenv helps, but I wonder if there’s a better way to package them so they’re more future proof.. or do I really need to learn go?
I've worked with Python for over a decade and have worked with Go for a few years. The Go tooling and workflow for building and deploying applications is much more pleasant than the Python ecosystem. Once you know which target operating systems and CPU architectures you wish to deploy to, the toolchain makes it very easy to cross compile to produce n deployable binaries, then the static linking means you generally only need to copy 1 binary file to each target, plus your own application configuration. If you have a Python background and have done some programming with a static type system before, then Go is very quick to learn. The Go language itself is a bit annoyingly inexpressive for bashing out algorithms, but that probably doesn't matter if you're writing CLI tools.
For packaging python stuff, it kind of depends who or what the end user of the CLI tool is. I used to work in a small business that shipped windows desktop software to customers, a lot of the software was written in python. From memory we packaged it with py2exe -- so you end up building a zip file containing a self-contained executable, the version of the python interpreter your tool needs, along with all the python packages as well as native libraries. That worked quite reliably, but it'd seem rather distasteful to use that approach for sharing CLI tools with your colleagues or CI machines in a dev team!
edit: there are pitfalls to deploying go application binaries if you try to put them in scratch containers and use libraries that assume they can find data such as timezone data provided in the usual place in the filesystem by the distribution (there will be no such data files in a scratch container unless you explicitly copy them in), or build a go linux binary with dynamic linking assuming glibc, then try to run it in an alpine based environment with musl. So it's not magic. But it's mostly pretty nice.
If the standard tech stack at your organisation includes Python, there's no reason why you shouldn't write CLI tools in Python. Packaging and distribution is only a problem for organisations that do not usually deal with Python.
IMO we lose a lot with Go: having to compile, losing the interactive shell, etc. Best case, you work with a lot of people who know how to install Python and use pip. Many people whine on boards, but it's not that complicated, especially with Python 3.
You need to take advice with a grain of salt. Python is fine for CLI tools, just like Go is fine for them. If you know Python, that weird statement in the article was not for you. I honestly don't get what the problem is. You know Python? You use it for CLI tools? Power to ya. You know Go? Use it for CLI tools? Power to ya too.
> Don't write internal cli tools in python

Disagree completely with this. This has probably been the biggest overall boost for both engineers and operators at a few companies I worked at.
You deliver fast, it's easy to debug, and it requires no compilation -- which is usually a bigger hassle than any Python-specific problem. It gets really important if you have operators on Linux/Windows/Mac.
If we ran our cluster in the cloud we'd be on the hook for hundreds of thousands of dollars of additional costs due to the high throughput of our service. There are always exceptions to any list of rules.
Not sure if I agree about the Python jab. I've seen "pip install ...." run flawlessly more times than I've had breakfast cereal, and I eat a lot of cereal.
I kinda agree with his first point about migrating stuff to the cloud, but if you've done your deployments on like-for-like platforms (on-prem containers to cloud containers) it's not that bad.
Nobody wants to mention "don't roll your own security"? That's a 101 kind of question - very easy to feel clever when you try it as an amateur, nightmarish when (really not if, when) you get it wrong.
That is one area where I think you want to outsource that to specialists.
I think Python still has a place for CLI tools, both internal and external.
If you can get away with a zero dependency Python script then there's no struggle. You can download the single Python file and run it, that's it. It works without any ceremony and just about every major system has Python 3.x installed by default. I'd say it's even easier than a compiled Go binary because you don't need to worry about building it for a specific OS or CPU architecture and then instructing users on which one to download.
Argparse (part of the Python standard library) is also quite good for making quick work out of setting up CLI commands, flags, validation, etc..
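As a concrete example, a stdlib-only CLI sketch using argparse (the subcommand and flag names are invented):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        prog="mytool", description="Example internal CLI.")
    sub = parser.add_subparsers(dest="command", required=True)
    # One subcommand with a flag and a validated positional argument.
    sync = sub.add_parser("sync", help="sync things to an environment")
    sync.add_argument("--dry-run", action="store_true")
    sync.add_argument("target", choices=["staging", "prod"])
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.command == "sync":
        verb = "Would sync" if args.dry_run else "Syncing"
        print(f"{verb} {args.target}")

if __name__ == "__main__":
    main()
```

Help text, flag parsing, and input validation (`choices`) all come for free from the standard library, which is the "no ceremony" point: a single file, no dependencies to install.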
There's a number of tasks where using Python instead of Bash is easier. I tend to switch between both based on what I'm doing.
> Don't migrate an application from the datacenter to the cloud
Eh, the salesman told me it would be seamless while we were watching the football game from his company’s box. And they are the experts: it’s their cloud!
I’m gonna tell the team to do it this way when I get back to the office. I think they just like running hardware and aren’t thinking of our balance sheet.
Late to the party, but here’s a solution to “Python packaging” that works fantastically for all my stuff and requires four lines of setup and (only) one magical invocation: pip.
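The commenter's actual snippet isn't shown, but "four lines of setup plus pip" suggests something like a minimal `setup.py` with a console-script entry point -- a hypothetical reconstruction, not the commenter's exact code:

```python
# setup.py -- install with `pip install .` (or pipx for an isolated install).
# The name and module are invented for illustration.
from setuptools import setup

setup(
    name="mytool",
    py_modules=["mytool"],
    entry_points={"console_scripts": ["mytool=mytool:main"]},
)
```

After `pip install .`, a `mytool` command that calls `mytool.main()` lands on PATH.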
What if your team is Python-based? Why would I write a CLI tool to be used by other Python programmers in Go or Rust, when some of them know neither?
It doesn't matter that you know Go and can generate all possible binaries; eventually, someone else will have to make a change in your tool. It will already be difficult for them to understand a new codebase, so you don't need to make it harder by also exposing them to another language.
duped | 4 years ago:
This is why you shouldn't write CLI tools in Python -- you need frigging Docker to package them.
chousuke | 4 years ago:
Writing and deploying Python tools is easy if all your dependencies come in distro packages.
habitue | 4 years ago:
Yeah, it's worth doing this for apps; for CLI tools it's not.
sneak | 4 years ago:
Then he goes on to advocate against designing for cloud flexibility. This almost feels like AWS marketing.
worik | 4 years ago:
That is a feature! Bleeding edge bleeds.
doctor_eval | 4 years ago:
Build some toy security… but don’t deploy it.
worik | 4 years ago:
Glad of that.
privacyonsec | 4 years ago:
Anybody tried PyInstaller? It packages the whole Python project, dependencies included, into a single executable binary.
jl6 | 4 years ago:
This has its own sub-antipattern: “Just put your application in a container, then it will run anywhere!”