I was all ready to savage his opinion after reading the headline, but looking at the architecture I designed for the company I work for, I agree: CPU isn't the bottleneck. Every time I try to increase performance by multithreading as much as possible, the databases start screaming.
On the other hand, the idea that dynamic languages are more productive than static languages is laughable. Statically typed languages prevent a lot of bugs and allow for a lot of automated, provably correct refactorings that simply cannot be done with a dynamically typed language. You can't even reliably do a "find usages" of classes in a dynamically typed language.
>On the other hand, the idea that dynamic languages are more productive than static languages is laughable. Statically typed languages prevent a lot of bugs and allow for a lot of automated, provably correct refactorings that simply cannot be done with a dynamically typed language. You can't even reliably do a "find usages" of classes in a dynamically typed language.
Exactly, I get quick and precise code completion, I catch plenty of errors beforehand, etc. I'd say I'm about 10x as productive in C# as in Python, with a similar amount of experience. Python only shines when there is a library that does something really well that you need. For me, any productivity advantage in Python comes from its lots and lots of libraries.
Also in terms of maintainability, I find my C# code easy to read and modify a year later when I've forgotten completely about it. In Python I need to rescan all of the types into my head until I can understand what the program does.
I mean with var and dynamic, C# offers everything you need for duck typing efficiency, while preserving the very important statically typed interfaces.
You sound like someone who hasn't used dynamic languages in anger, or you'd mention some of the things that dynamic languages do well that statically typed languages aren't so great at, to prevent your argument sounding like a straw man.
For example: dropping into a debugger (binding.pry / pry_remote in Ruby) to write code interactively in the context of the application, transferring that code to the source, and continuing on with my next fragment of functionality with a refresh of the web page, no recompiles or restarts required. You can do this with some difficulty and certain caveats in a statically compiled language, but only if things have been set up in just the right way, with automatic dependency recompilation and reloading, hot code replacement, etc. And even then, there are limitations on the kind of code you can write in the editor, depending on the underlying technology. Typically you can't create whole new classes, or introduce new fields etc.
Or consider levering up your language. Dynamic languages often give you the ability to create "new" syntax via expressive literals. This opens up more avenues for declarative programming; construct a data structure that models your problem more directly, and then write code that interprets the data structure. You want extremely lightweight literals for this, readable with minimal ceremony, including lambdas for when you need an escape hatch, a little pocket of custom code embedded in the bigger data structure. Statically typed languages have little context to infer types for such lambdas, unless they can generalize from the operators applied, but then you have a generic function, not a piece of data. So you end up infecting your DSL with type annotations and cruft, and before you know it the whole thing is hiding the forest behind the trees.
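To make that concrete, here's a minimal sketch of the data-structure-plus-interpreter style in Python (all names here are invented for illustration): a validation table written as a plain literal, lambdas as the escape hatch, and a few lines of code that interpret it.

```python
# A hypothetical validation "DSL": just a dict literal describing the rules,
# with lambdas as pockets of custom code embedded in the data structure.
rules = {
    "name":  {"required": True,  "check": lambda v: len(v) <= 40},
    "age":   {"required": True,  "check": lambda v: 0 <= v < 150},
    "email": {"required": False, "check": lambda v: "@" in v},
}

def validate(record, rules):
    """Interpret the data structure: return the names of failing fields."""
    errors = []
    for field, rule in rules.items():
        if field not in record:
            if rule["required"]:
                errors.append(field)
            continue
        if not rule["check"](record[field]):
            errors.append(field)
    return errors
```

The "DSL" is just a dict; adding a rule means adding a line of data, not writing new control flow.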
Thing is, if you aren't used to using these tools, you might not even know they exist, and you may be missing out on productivity you didn't know you could have.
I came here to make the same point about statically typed languages. My previous job mostly involved writing web servers/services in Python. A lot of the existing unit tests in our codebase were solely there to check for type safety (one of the developers who had been there for most of the lifetime of the codebase over-engineered it a bit with OOP in Python). Now at my current job, we use Scala. I still prefer Python for scripting, but it is very nice to be able to write code and have the compiler do most of the work when it comes to making sure everything fits nicely together.
"the idea that dynamic languages are more productive than static languages are laughable." -- being statically typed or dynamically typed comes with its own set of tradeoffs and what a person is more productive in is a highly subjective matter. Lispers are more productive in Lisp than Haskell and vice versa.
"Statically type languages prevent a lot of bugs and allow for a lot of automated provably correct refactorings that simple cannot be done with a statically typed languages." -- not true, Clojure is trying to do that with Clojure.spec and specifications being checked at runtime can get you closer to things you could have automatically proved correct only with languages with dependent types, nothing against statically typed languages but I feel that your sweeping generalizations hurt the point you are trying to make.
'Dynamic languages' is too often used as a shorthand, or interchangeable with 'scripting language', as here. There are a ton of more relevant language features when it comes to productivity, automatic memory management being the biggest IMHO. Interpreted vs compiled makes a difference too because your code->launch->test cycle can be so fast. But I agree, I'm a huge Python fan and even though I appreciate not having to gum up my code with type declarations, it's not really that big of a win in terms of productivity.
I pretty much agree with everything in the article - except for the bit where he tries to quantify why python is better from a developer efficiency perspective than other languages.
The main example he cites is a study that compares the amount of time writing string processing routines in different languages - which is quite a bit different from the work I do every day. I develop web apps which means I generally work in very large code bases, and spend most of my time modifying existing code rather than writing fresh code from scratch. I have found that statically typed languages (java + typescript) and the fantastic IDE support that comes along with them make it really easy to navigate around the code and refactor things. Also - the compiler tends to catch and prevent a whole class of bugs that you might otherwise only catch at runtime in a dynamically typed language.
Of course there are other situations where I prefer to use Ruby as my scripting language of choice - it all comes down to using the right tool for the job at hand. Unfortunately I don't think the author gives enough consideration to the trade-offs between static vs. dynamically typed languages, and I think he would have been better just leaving that section out as it isn't really necessary to prove his point that CPU efficiency isn't important in a lot of applications.
Ultimately though I completely agree with his main point: "Optimize for your most expensive resource. That’s YOU, not the computer."
Python is also heavily used in science, where performance really does matter. It's successful because of how highly ergonomic python apis can be built on top of optimised C/C++/Fortran libraries.
That said, there is clearly a desire to write 'fast' code in python itself without swapping to C. Cython helps, but to get really fast Cython code you actually have to write with C-semantics (so you are basically writing C with Python syntax).
Projects like numba JIT are interesting in that they can optimise domain-specific code (i.e. numerical/array code) that's written in normal python style. It also means jumping through a few hoops (although with the latest version in many cases all you need is a single decorator on your hot function). You can even do GIL-less multithreading in some cases.
Overall things are looking promising, with the addition of the frame evaluation API and possible improvements to the Python C API that could make JITs and similar extensions easier.
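As a rough sketch of the numba approach (numba is a third-party package, so this example falls back to plain Python if it isn't installed) — the point is that the hot function is ordinary numeric Python with a single decorator:

```python
try:
    from numba import njit  # optional third-party JIT
except ImportError:
    def njit(func):
        # Fallback so the sketch still runs without numba installed.
        return func

@njit
def harmonic(n):
    """Ordinary numeric Python; a JIT can compile this loop to machine code."""
    total = 0.0
    for i in range(1, n + 1):
        total += 1.0 / i
    return total
```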
The author argues from his professional experience as a Python developer that it's fast enough, that you'll spend most time waiting for I/O anyway, that you can just throw more servers at the problem etc.
The problem is that his experience as a Python developer doesn't accurately reflect the prevalence of problems where runtime CPU performance actually is an issue. Of course not, because who in their right mind would make an informed decision to solve such a problem in Python? Python has worked for him because it is only useless for a category of problems that he hasn't had the opportunity to solve because he's a Python developer. Outside this professional experience, not everything is a trivially parallel web service that you can just throw more servers at if CPU time exceeds I/O waiting.
It all really boils down to what your requirements are, whether you have all the time and memory of a whole server park at your hands, or a fraction of the time available in a smaller embedded system, how timely the delivery of the software has to be and how timely it needs to deliver runtime results once it's up and running. There are times where Python just isn't fast enough, or where getting it fast enough is possible, but more convoluted and tricky than implementing the solution in a more performant language. Developer time may be more expensive than the platform that my solution is for, but that doesn't get around the fact that it eventually will need to run with the available resources.
Unless we are talking like circa 1999 I don't think I have heard a complaint yet that Python is slow. I'm curious who or where the author heard that from (not specifically the people themselves but the domain they are in).
What I have heard complaints about in Python are (and I don't agree with all of these points):
* It's not statically typed
* The Python 2/3 compatibility split
* It has some design flaws: the GIL, variable assignment, mutable variables, lambdas, indentation (I don't agree with all of these, but they are complaints I have heard)
* The plethora of packaging tools (i.e. it's not unified)
I guess one could argue its slow because it can't do concurrency well but that really isn't raw speed.
Then the author starts comparing programmer time on string processing tasks from a study, which... doesn't help the author's point at all.
* Python has and will always be fast at string processing and most people know this
* The people that complain about python speed are almost certainly not doing string processing
* I have serious questions about the study in general (many languages have changed quite a bit since then)
For some data processing tasks Python can be brutally slow, especially text processing. NumPy is only fast because it's written in C and offloads hard numerical calculations to BLAS.
> I'm curious who or where the author heard that from (not specifically the people themselves but the domain they are in).
In the telecom domain, I've dealt with data big enough that Python wasn't really feasible. Think hundreds of millions of records in CSV format that need to be parsed and processed. Doing that in Python is going to be painful.
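For a sense of what that workload looks like, here is a minimal streaming parse with the stdlib csv module (the file layout and field positions are hypothetical) — even this simple loop tends to be CPU-bound, not IO-bound, at hundreds of millions of rows:

```python
import csv

def summarize(path):
    """Stream a CSV without loading it into memory,
    accumulating one numeric column."""
    rows, total = 0, 0.0
    with open(path, newline="") as f:
        for record in csv.reader(f):
            rows += 1
            total += float(record[1])  # hypothetical numeric field
    return rows, total
```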
I'd agree with all the complaints you list, and I love Python, but it is definitely kinda slow. I'd put the speed of python below several of the items on your list in terms of priorities I care about.
I've done a lot of image processing in Python using libraries like PIL, numpy and opencv. Doing per-pixel operations is extremely easy in PIL - improving my dev time, but CPU wise the slowest of the bunch. I love prototyping that way, but I have to move to numpy or opencv or another language to speed it up. A recent program to do a slightly complicated color transform on a 512x512 image was taking me over 60 seconds with PIL. It was 5-10 seconds in JavaScript, and less than 1 second using numpy.
Python is very developer-productivity friendly, but it degrades performance in weird ways, and the methods to increase performance don't always make obvious sense and often aren't the "idiomatic" way.
I haven't looked into the guts myself, but I'd bet that for similar or equivalent operations, the way the different syntaxes are handled under the hood are wildly different. e.g. function calls are much faster than method calls on objects.
Optimized pure python can look very ugly and non-idiomatic.
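A small example of the kind of micro-optimization meant here — hoisting a bound-method lookup out of a loop. Both functions return the same list; the second trades readability for skipping a per-iteration attribute lookup, a difference `timeit` can show on tight loops:

```python
def squares_idiomatic(n):
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def squares_hoisted(n):
    result = []
    append = result.append  # bind the method once, outside the loop
    for i in range(n):
        append(i * i)
    return result
```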
Most comparisons of popular web development languages will show a table where every other language outperforms Python (except maybe Ruby) [1]. In reality, like the author pointed out, the benchmarks don't really matter when you consider network calls.
Author here. I mostly work with a lot of Java devs. They are hesitant to try Python because "Java is faster". Some use the static typing excuse, but oddly, the most common reason I hear people won't do Python is that it's "too slow".
Perhaps I live in a weird bubble, but that's the motivation behind the article.
Oh, I've heard "Python is slow" from Node folks, Java folks, Scala folks (in terms of productivity) and of course, die hard C++ fans. I'm a fan of Python but i also like languages with a stronger FP bent (and statically typed.)
> "Python is inherently harder to optimize than JS since it has <very dynamic features>"
Python is not a very dynamic language in the sense that you actually can't change a lot of stuff (and a number of the things you can change just segfault CPython). I think JS is more dynamic, for example. Or Ruby.
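One concrete difference: CPython lets you monkey-patch your own classes freely, but built-in types are sealed, whereas Ruby happily lets you reopen Integer:

```python
class Greeter:
    pass

# User-defined classes are open for modification at runtime.
Greeter.shout = lambda self: "HI"

def can_patch_builtin():
    """Try to add a method to a built-in type; CPython refuses."""
    try:
        int.double = lambda self: self * 2
        return True
    except TypeError:
        return False
```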
Python is my Swiss army knife. I love it because it is a single tool that can aid in almost every project I do. But if I'm doing one specific thing a lot, I want that thing to be done well and done efficiently, so I'll reach for the specific screwdriver I need.
Also most of my problems are IO bound so single threaded concurrency is fine.
But I represent a very small portion of the global problem space.
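For IO-bound work, the single-threaded model can be sketched with asyncio: three simulated IO waits overlap on one thread, so the wall time is roughly the longest delay rather than the sum:

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for a network or disk call; the event loop runs
    # other tasks while this one is waiting.
    await asyncio.sleep(delay)
    return name

async def main():
    return await asyncio.gather(
        fetch("a", 0.01), fetch("b", 0.01), fetch("c", 0.01)
    )

results = asyncio.run(main())
```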
The fact that Python is slow isn't its only problem. What I care more about nowadays is wasting my time hunting bugs that could have been avoided by a static type system.
I see this stated as if it's a universal fact, but is it really true that static types reduce overall bug density?
I myself have found this to not be true, and I have found the same in reading / talking to others. However, if you have found a resource that has data on the contrary, I would find it very interesting to read.
Have you checked out MyPy and the optional type hinting? I've been going that route in my python recently and been liking the both/and of having an optional type checker.
You only pay the price while debugging. In your static language you pay the price continually. You are wasting orders of magnitude more time fighting your language's type system every day; you have to read reams of boilerplate code that are unrelated to the problem at hand, and each extra line increases the attack surface and complexity of your code.
Mypy / the PyCharm typechecker have come a long way. It's not the same, of course, but you can get a good bit of mileage out of the new gradual typing system these days.
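A tiny example of the gradual-typing style: the hints are ignored at runtime, but a checker like mypy can flag bad callers before the program ever runs:

```python
def mean(values: list[float]) -> float:
    """Plain Python at runtime; the annotations exist for the type checker."""
    return sum(values) / len(values)

# mypy would reject a call like mean("oops") statically, while the
# interpreter itself only notices when that code path actually executes.
```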
Python's value to me has always been that it's easier to get things done, not its speed. One time when I was interviewing a candidate for a coding job, the candidate said she loved Python the most "because you can just yell at it and it'll work."
It's both the breadth of the standard library and ecosystem, and the simple language design, that make developing things in Python faster for me.
Doing problems on Project Euler has been an education for me in how algorithm matters more than speed. Lots and lots of people spend hours writing long C++ codes that are easily beaten by a few lines of Python. It certainly goes the other way too, and the wrong algorithm in Python is even that much slower and more painful than the right algorithm in C++. But when the right algorithm is used and the problem is solved in a few milliseconds, it really doesn't matter which language uses more CPU cycles, all that matters is whether you saw the insight that let you skip 99% of the search space, and how much time you spend writing code.
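A toy version of that lesson, using the classic "sum of multiples of 3 or 5" problem: the brute-force loop is O(n), while inclusion-exclusion over arithmetic series is O(1), and no compiler can close that gap:

```python
def sum_multiples_brute(limit):
    # O(n): test every number below the limit.
    return sum(i for i in range(limit) if i % 3 == 0 or i % 5 == 0)

def sum_multiples_smart(limit):
    # O(1): inclusion-exclusion with the arithmetic-series formula.
    def series(k):
        n = (limit - 1) // k
        return k * n * (n + 1) // 2
    return series(3) + series(5) - series(15)
```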
Somewhat ironically, Python is used a lot for things that would benefit from raw speed (data processing pipelines) and do not benefit at all from dynamic typing (since the kind of property bags / data frame views over data are easily replicated in statically typed languages). But Python's C extension API is quite a bit easier than e.g. Matlab's MEX API (to me at least); can typical Python IDEs compile and relink extension modules without an external build step?
> Your bottleneck is most likely not CPU or Python itself.
With applications that are dominated by raw data processing, it's very, very easy to be CPU dominated. Hell, I had one quite trivial data converter for logfiles where the "parsing the printf string" part of Java's printf dominated processing and writing a custom formatter halved processing time (while regexes can be compiled, the format string cannot be precompiled and will be interpreted each time); it's one of those things where I would intuitively say "why did this moron write his custom formatter" if I stumbled upon it in a code review. Intuitively, you'd expect this to be a simple case of an IO dominated task (which it is now once the bottleneck has been removed).
If it's fire-and-forget batch jobs, you can get away with it, but if the converter is part of a user-facing fat client application that runs on an old office laptop, you don't have that luxury.
The article could be titled: "Yes, Python is Slow To Refactor and Maintain, and I Still Don't Care".
I never understand why dynamic language enthusiasts primarily focus on new code only. You have to discuss all sides of increased or decreased productivity to make a rational argument.
> Your bottleneck is most likely not CPU or Python itself.
I've found that this is often the case. Nearly always disk or network. But it's sometimes surprising how little work you need to do to become CPU-bound. This is the price we pay for such a tremendously dynamic language.
Indeed, the article's suggestions of C/Cython/PyPy are good ones to remedy the problem when it occurs.
I get the point this guy is making, but if you need something parallel for a CPU-bound task, throwing more hardware at the problem isn't the most efficient solution if you can just use more cores. For example, adding another quad core when the first CPU is only using one core anyway is inefficient and expensive.
Python does multiprocessing very well. You can easily use all cores on your machine. Python's main "disadvantage" is threading, because of the GIL. But each process gets its own GIL, so when you multiprocess, you're not limited to one core.
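A minimal sketch with the stdlib multiprocessing module — each worker is a separate process with its own interpreter and its own GIL, so CPU-bound work genuinely spreads across cores:

```python
from multiprocessing import Pool, cpu_count

def burn(n):
    # CPU-bound work in pure Python.
    return sum(i * i for i in range(n))

def parallel_burn(workloads):
    with Pool(processes=cpu_count()) as pool:
        return pool.map(burn, workloads)

if __name__ == "__main__":
    print(parallel_burn([100_000] * cpu_count()))
```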
The point is that (modifying existing code or writing new code to) "just use more cores" may be less efficient, for the business or organization that is employing the programmer, given that programmer salaries, over even a fairly short amount of time, can be more expensive than hardware.
"It used to be the case that programs took a really long time to run. CPU’s were expensive, memory was expensive. Running time of a program used to be an important metric."
As hardware gets faster we give it new tasks that could not be achieved before. Like rendering high resolution stereoscopic images using physically based shading at 90 FPS on relatively cheap consumer hardware (VR). There is still quite a lot of code that we call 'performance critical'. Most of that code is written in C/C++ (and CUDA, and glsl, and hlsl, etc...) today.
In ten years of Python development I have yet to come across an instance where Python couldn't be made fast. In some cases critical sections had to be delegated to C but even that is very rare.
>It's that in some scenarios python can't be made fast.
Can you give some examples of this? I mean, obviously with enough effort you can "make python fast" since it has good C bindings, and can just be a thin wrapper around fast stuff. Similar to how command line tools can be ridiculously fast[^1] despite, ostensibly, running in bash.
So I'm a bit confused about what you're claiming. Organizational issues, it's difficult to get management on board with an optimization pass?
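For what "thin wrapper around fast stuff" means at its smallest, the stdlib ctypes module can call straight into compiled code (assuming a standard libc can be located on the system):

```python
import ctypes
import ctypes.util

# Locate and load the system C library; find_library can return None
# on unusual platforms, in which case this sketch won't work.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

def c_abs(x: int) -> int:
    """Python-friendly wrapper; the actual work runs in compiled libc code."""
    return libc.abs(x)
```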
Did you read the section on "Optimizing Python"? If so, I'm assuming you read about using Cython to migrate your package/program/module painlessly to C.
So given that, can you elaborate on your objection? I'd like to know why you think the article is wrong about optimising your Python to make it fast enough.
There are still some big gains Python could make, if Python implementations were better.
Micropython is equivalent to a real-time cooperative-multitasking OS. If it had ~~better~~ support for things like cffi, you could implement posix on top of it. I can imagine a laptop that runs gnu+python in the next few years.
That's a whole new usecase, simply because that implementation uses a lot less ram. What usecases would we discover for a faster python?
Shared objects and proper sandboxing would also be huge.
Many times when Python is blamed for being slow, it's the programmer's fault. Python is great in that you can get 'regular' people writing code in it quickly. The problem is, these regular people don't always understand algorithms or things like caches, threads, databases...
A lot of these users can just say "My department needs a $40,000 24 CPU server with maximum RAM from MicroWay/SuperMicro, we need to run our codes faster", when they are just trying to brute force things.
They understand the problem domain but don't have the programming skills to use a computer to efficiently solve it.
But, these guys are all a step ahead of the ones who are stuck in the mindset of "C is the only language fast enough for my work", while not even understanding pointers and basic syntax and getting stuck on silly things like text processing, which could be done in minutes in Python.
Yes, time to market is important. However, you don't need to compromise convenience of development for the sake of performance. If you twist your Python code to get performance it takes time. If you need performance, and like the syntax of Python then you should take a look at Nim [1]. With Nim I develop as quickly as in Python while I get the performance of C.
I believe application performance is important on servers. It makes a difference if your Shop software written in Python is able to handle 50 requests per second, or if the same software written in Nim can handle 500 rps. And by the way, Nim provides static typing which helps a lot to catch errors at compile time.
[1] https://www.techempower.com/benchmarks/
odiroot | 8 years ago
Yes, that's why we use it.
icebraining | 8 years ago
"People saying it doesn't matter that Python is slow are deluding themselves and preventing Python from getting faster like JS did"
"Python is inherently harder to optimize than JS since it has <very dynamic features>"
"Smalltalk/Lisp/etc are also very dynamic yet are much faster"
"The slowness of Python is harming the planet by being inefficient and therefore wasting more energy/producing more pollution"
Did I miss any arguments? I know certain topics are bound to attract some repetitive discussion, but "Python is slow" has been one of the worst.
dom0 | 8 years ago
Python is not a very dynamic language in the sense that you actually can't change a lot of stuff (and a number of the things you can change just segfault CPython). I think JS is more dynamic, for example. Or Ruby.
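One concrete example of that limit: CPython's built-in types are "closed", so unlike Ruby's open classes or JS prototypes, you cannot attach new methods to `int`, `str`, `list`, etc. at runtime. A minimal demonstration:

```python
# Attempting to monkey-patch a builtin type fails in CPython
try:
    int.double = lambda self: self * 2
    patched = True
except TypeError:
    patched = False  # CPython rejects the assignment with a TypeError
```

That restriction is one reason CPython can make assumptions about builtins that a JS or Ruby implementation cannot.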
Waterluvian | 8 years ago
Also most of my problems are IO bound so single threaded concurrency is fine.
But I represent a very small portion of the global problem space.
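For IO-bound work, single-threaded concurrency in Python usually means `asyncio`. A small sketch (the `fetch` coroutine and its delays are illustrative stand-ins for network or disk waits):

```python
import asyncio

async def fetch(name, delay):
    # Simulate an IO-bound call (network/disk wait) with a sleep
    await asyncio.sleep(delay)
    return name

async def main():
    # Three "requests" overlap on one thread: total wall time is ~0.1s, not 0.3s
    return await asyncio.gather(*(fetch(f"req{i}", 0.1) for i in range(3)))

results = asyncio.run(main())
```

While one coroutine waits on IO, the event loop runs the others, so the GIL never becomes the bottleneck.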
autokad | 8 years ago
I myself have found this to not be true, and I have found the same in reading / talking to others. However, if you have found a resource that has data on the contrary, I would find it very interesting to read.
https://medium.com/javascript-scene/the-shocking-secret-abou...
dahart | 8 years ago
It's both the breadth of the standard library and ecosystem, and the simple language design, that make developing things in Python faster for me.
Doing problems on Project Euler has been an education for me in how the algorithm matters more than raw speed. Lots of people spend hours writing long C++ programs that are easily beaten by a few lines of Python. It certainly goes the other way too: the wrong algorithm in Python is even slower and more painful than the right algorithm in C++. But when the right algorithm is used and the problem is solved in a few milliseconds, it really doesn't matter which language uses more CPU cycles. All that matters is whether you saw the insight that let you skip 99% of the search space, and how much time you spent writing code.
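A classic illustration in the Project Euler style (this is the well-known "sum of multiples of 3 or 5" problem; the function names are mine): brute force scans every integer, while the arithmetic-series insight answers in constant time regardless of the limit.

```python
def sum_multiples_brute(limit):
    # O(limit): test every integer below the limit
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def sum_multiples_closed(limit):
    # O(1): arithmetic series plus inclusion-exclusion (count 3s and 5s, subtract 15s)
    def series(k):
        m = (limit - 1) // k
        return k * m * (m + 1) // 2
    return series(3) + series(5) - series(15)
```

At limit = 10^9 the closed form still returns instantly in Python, while the brute-force loop would crawl in any language.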
_pmf_ | 8 years ago
> Your bottleneck is most likely not CPU or Python itself.
With applications that are dominated by raw data processing, it's very, very easy to be CPU-bound. Hell, I had one quite trivial data converter for logfiles where the "parsing the printf format string" part of Java's printf dominated processing, and writing a custom formatter halved processing time (while regexes can be compiled, the format string cannot be precompiled and is reinterpreted on each call); it's one of those things where I would intuitively say "why did this moron write his custom formatter" if I stumbled upon it in a code review. Intuitively, you'd expect this to be a simple IO-dominated task (which it is now, once the bottleneck has been removed).
If it's fire-and-forget batch jobs, you can get away with it, but if the converter is part of a user-facing fat client application that runs on an old office laptop, you don't have that luxury.
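The same trade-off shows up in Python: template-based formatting re-interprets the format string on every call, while a hand-rolled formatter just concatenates. A rough sketch of the idea described above, transposed to Python with made-up log fields:

```python
import timeit

record = ("2017-01-01", "INFO", "started")

def fmt_template(r):
    # The "%s [%s] %s" template is re-parsed on every single call
    return "%s [%s] %s" % r

def fmt_custom(r):
    # Hand-rolled concatenation: nothing to interpret, just string joins
    return r[0] + " [" + r[1] + "] " + r[2]

t_template = timeit.timeit(lambda: fmt_template(record), number=50_000)
t_custom = timeit.timeit(lambda: fmt_custom(record), number=50_000)
```

As in the Java case, the custom version looks like pointless code-review bait until you see the profile of a run over millions of records.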
kodablah | 8 years ago
I never understand why dynamic-language enthusiasts focus primarily on new code. You have to discuss all sides of increased or decreased productivity to make a rational argument.
hasenj | 8 years ago
For serious projects? IMO python is a disaster.
wyldfire | 8 years ago
I've found that this is often the case. Nearly always disk or network. But it's sometimes surprising how little work you need to do to become CPU-bound. This is the price we pay for such a tremendously dynamic language.
Indeed, the article's suggestions of C/Cython/PyPy are good ones to remedy the problem when it occurs.
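Before reaching for Cython or PyPy, it's worth confirming the workload really is CPU-bound. The standard library's `cProfile` makes that check cheap; here is a minimal sketch (the `crunch` function stands in for whatever loop you suspect):

```python
import cProfile
import io
import pstats

def crunch():
    # A stand-in for the hot loop you suspect is CPU-bound
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
crunch()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # top functions by cumulative time
```

If the top entries are your own Python functions rather than socket or file waits, the CPU really is the bottleneck and those remedies apply.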
jayflux | 8 years ago
Right tool for the right job I suppose.
nadam | 8 years ago
As hardware gets faster we give it new tasks that could not be achieved before. Like rendering high resolution stereoscopic images using physically based shading at 90 FPS on relatively cheap consumer hardware (VR). There is still quite a lot of code that we call 'performance critical'. Most of that code is written in C/C++ (and CUDA and glsl, and hlsl, etc...) today.
VHRanger | 8 years ago
Fast prototyping is great but being stuck with a prototype for deployment isn't.
[+] [-] kerkeslager|8 years ago|reply
traverseda | 8 years ago
Can you give some examples of this? I mean, obviously with enough effort you can "make python fast" since it has good C bindings, and can just be a thin wrapper around fast stuff. Similar to how command line tools can be ridiculously fast[^1] despite, ostensibly, running in bash.
So I'm a bit confused about what you're claiming. Organizational issues, it's difficult to get management on board with an optimization pass?
[^1]: https://aadrake.com/command-line-tools-can-be-235x-faster-th...
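The "thin wrapper around fast stuff" pattern the comment mentions is exactly what numpy is: the Python layer is glue, and the arithmetic runs in compiled C/BLAS code. A small sketch of the contrast (`dot_py` is my illustrative pure-Python baseline):

```python
import numpy as np

def dot_py(a, b):
    # Pure-Python loop: every multiply-add runs through the interpreter
    return sum(x * y for x, y in zip(a, b))

a = np.arange(1000.0)
b = np.arange(1000.0)

result_py = dot_py(a, b)
# np.dot delegates to compiled code: same answer, a fraction of the interpreter work
result_np = float(np.dot(a, b))
```

Whether that counts as "Python being fast" or "Python calling something fast" is, I think, the crux of the disagreement.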
ColinWright | 8 years ago
So given that, can you elaborate on your objection? I'd like to know why you think the article is wrong about optimising your Python to make it fast enough.
traverseda | 8 years ago
Micropython is equivalent to a real-time cooperative-multitasking OS. If it had ~~better~~ support for things like cffi, you could implement posix on top of it. I can imagine a laptop that runs gnu+python in the next few years.
That's a whole new use case, simply because that implementation uses a lot less RAM. What use cases would we discover for a faster Python?
Shared objects and proper sandboxing would also be huge.
booshi | 8 years ago
Arguably, other languages can get code out faster depending on the dev, language, etc.
bluedino | 8 years ago
A lot of these users can just say "My department needs a $40,000 24 CPU server with maximum RAM from MicroWay/SuperMicro, we need to run our codes faster", when they are just trying to brute force things.
They understand the problem domain but don't have the programming skills to use a computer to efficiently solve it.
But these guys are all a step ahead of the ones stuck in the mindset of "C is the only language fast enough for my work" while not even understanding pointers or basic syntax, getting stuck on silly things like text processing that could be done in minutes in Python.
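For the kind of text processing meant here, the standard library does most of the work. A word-frequency count over a text dump, for instance, is a few lines in Python and a real chore for someone shaky on pointers in C (the function name and sample data are mine):

```python
from collections import Counter

def top_words(lines, n=3):
    # Split each line, lowercase, and tally: Counter handles the bookkeeping
    counts = Counter(word for line in lines for word in line.lower().split())
    return counts.most_common(n)
```

Feed it `open("data.txt")` and you have the answer before a C version would compile.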
progman | 8 years ago
I believe application performance is important on servers. It makes a difference whether your shop software written in Python can handle 50 requests per second, or the same software written in Nim [1] can handle 500 rps. And by the way, Nim provides static typing, which helps a lot in catching errors at compile time.
[1] https://nim-lang.org
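The operational consequence of that 10x gap is easy to put in numbers. Using the comment's hypothetical throughputs, the instance count to absorb a given peak load is just a ceiling division:

```python
def servers_needed(peak_rps, per_server_rps):
    # Ceiling division: instances required to absorb the peak load
    return -(-peak_rps // per_server_rps)

# With the comment's hypothetical figures, at a peak of 1000 rps:
low = servers_needed(1000, 50)    # 20 servers at 50 rps each
high = servers_needed(1000, 500)  # 2 servers at 500 rps each
```

The 10x per-instance throughput difference translates directly into a 10x difference in fleet size (and hosting bill).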