With respect to the theological view of the question; this is always painful to me.— I am bewildered.— I had no intention to write atheistically. But I own that I cannot see, as plainly as others do, & as I should wish to do, evidence of design & beneficence on all sides of us. There seems to me too much misery in the Haskell world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Haskell with recursive data structures yet left the maximal size of tuples to arbitrary implementation details, given it a type system too restrictive to admit dependent types, or forced the hand of the programmer to use the bang operator to manage memory manually to avoid the consequences of laziness.
Not believing this, I see no necessity in the belief that the Haskell was expressly designed.
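For anyone who hasn't hit it, the "bang operator" lament above is about strictness annotations. A minimal sketch of the classic lazy space leak and its bang-pattern fix (the function names here are made up for illustration):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: each step builds an unevaluated thunk (0 + 1) + 2 + ...
-- so memory use grows with the length of the list.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc (x:xs) = go (acc + x) xs
    go acc []     = acc

-- The bang pattern forces the accumulator at every step: constant space.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc (x:xs) = go (acc + x) xs
    go !acc []     = acc

main :: IO ()
main = print (sumStrict [1..1000000])
```

(`foldl'` from `Data.List` is the idiomatic way to get the strict version without writing the recursion yourself.)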
I would like to add that just like in our universe, in Haskell entropy only ever increases and never goes down.
I.e., add more functionality to your program and the complexity will go up. This is true for all programming languages of course, but in my limited experience larger Haskell programs seem to become exponentially more complex.
this is an interesting observation. I have various explanations for why that might be the case.
in a commercial setting there is more pressure to deliver code than to review it. combine that with the lack of a benevolent dictator for life (one who has been on the project for the whole life cycle, not just recently), and there is no one with the power to actually say no to changes.
language geeks are novelty seekers. they will use every feature of their language. stronger languages have more features to abuse. so in a commercial setting you will have all the features being used without much thought, and no architectural design that tells you no, we shouldn't do that.
that's also why I think projects with benevolent dictators for life in the open source world don't fall into these paths even though they use stronger languages.
you could restrict yourself to languages that have only one way to be used, like python as it was originally. or use a language with little abstraction power. but you will suffer in other ways, as abstraction power is genuinely useful. accidental complexity has a way of getting in, by expressive means or by social means.
The key is to not write large Haskell programs. Write many Haskell libraries and compose them. This scales infinitely and is an excellent way to build software. But it's also hard to do and takes good & deliberate technical leadership if you want to get 50+ engineers doing it.
I imagine there could be some language which can automatically modify programs: e.g. you add some conditions, and the compiler removes/disables/moves to an obsolete package all branches that have become unreachable, or simplifies the logic.
> Haskell beginners often use lists instead of arrays. You can’t do random access in a linked list, but only access the first element and then the rest of the list. The real world also doesn’t allow you random access, you are limited by the speed of light and have to go from one location to the next.
You don't need arrays for random access though. Haskell trees give you access to 2^n leaves within depth n, which also exceeds physical limitations like the speed of light.
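A sketch of the idea, using a hypothetical complete-binary-tree type (not a library API — `Data.Sequence` gives you this kind of logarithmic indexing off the shelf):

```haskell
-- Each Node caches its leaf count, so indexed access descends
-- one level per bit of the index: 2^n leaves reachable in depth n.
data Tree a = Leaf a | Node Int (Tree a) (Tree a)

size :: Tree a -> Int
size (Leaf _)     = 1
size (Node n _ _) = n

-- O(log n) indexed access, instead of O(i) for a list.
index :: Tree a -> Int -> a
index (Leaf x) 0 = x
index (Node _ l r) i
  | i < size l = index l i
  | otherwise  = index r (i - size l)
index _ _ = error "index out of range"

-- Build a balanced tree from a non-empty list, preserving order.
fromList :: [a] -> Tree a
fromList [x] = Leaf x
fromList xs  = Node (length xs) (fromList l) (fromList r)
  where (l, r) = splitAt (length xs `div` 2) xs

main :: IO ()
main = print (index (fromList [0..1023 :: Int]) 700)
```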
The next ground-breaking achievement in quantum physics will be the discovery of the two smallest particles everything derives from: a left paren and a right paren.
Hell, Genesis was written in assembly for the 8008 in only 520 bits but it took a damn sandpile of support chips to bootstrap.
God uses Elixir with Rustler now because of concurrency and lazy evaluation gotchas of Haskell. (STM didn't cut it.)
Meanwhile, the devil imposes a standard of FORTRAN77 and COBOL62 with a spaghetti mess of uncommented code containing GOTOs, "god" functions, and meaningless identifiers.
This makes me believe simulation theory even more, tbh. Quantum mechanics exists to fuse operations, making simulating our universe computationally cheaper.
There's an even deeper way to think about it: if you actually want to parallelize the simulation of multiple scenarios, or if you're running something that needs to compute something in >4d, "quantum mechanics + parallel universes" might be the computationally optimal way to do it!
...we don't think about it this way often because we'd be thinking about computational problems so huuuuge that we'd be like the quarks inside the atoms inside the transistors inside planet-sized clusters spanning galaxies to even fathom computing it ...and it's not necessarily a feel-good perspective :)
I mean, even the speed-of-light limit and general relativity seem like optimizations you'd do in order to better parallelize something you need to compute on some unfathomable "hardware" in some baseline-reality that might not have the same constraints...
...and to finish the coffee-high-rant: if you want FTL you probably can't get it "inside" because it would break the simulation, you'd need to "get out" ...or more like "get plucked out" by some-thing/god :P (ergo, when we see alien artifacts UFOs etc. that seemed to have done FTL... we kind of need to start assuming MORE than _their_ existence and just them being 'more advanced' than us)
People write this sort of thing a lot, and I don't really understand it. Simulating quantum systems is dramatically (formally speaking exponentially) more expensive than simulating classical ones (at least as far as our current understanding of complexity theory goes). If you're going to simulate a universe, and you want to cheap-out on computer power, then you should simulate a classical one.
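To put a number on "exponentially more expensive": a straightforward state-vector simulation of n qubits has to track 2^n complex amplitudes, versus n values for n classical bits. A trivial sketch:

```haskell
-- n classical bits: state is n values.
-- n qubits: the state vector has 2^n amplitudes.
amplitudes :: Int -> Integer
amplitudes n = 2 ^ n

main :: IO ()
main = mapM_ report [10, 50, 300]
  where
    report n = putStrLn (show n ++ " qubits -> "
                         ++ show (amplitudes n) ++ " amplitudes")
```

At 300 qubits the amplitude count already exceeds the estimated number of atoms in the observable universe, which is the usual intuition pump for why a classical simulation would be the cheap option.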
Let's face facts here: God just fell asleep on the keyboard, and by a staggering coincidence, or perhaps a weird shape of the head, the first 4 letters he typed were P, E, R, L.
>The real world also doesn’t allow you random access, you are limited by the speed of light and have to go from one location to the next.
"Random access" doesn't mean that accessing an item always takes the same time regardless of the size of the collection, it means that, if the size of the collection doesn't change, access times are uniform independently of which particular item is accessed.
For example, one might conceive of a storage device shaped like a sphere the size of the solar system, where an item is read by shining a laser onto the surface of the sphere and measuring how the laser is scattered on its way back. Such a device would be random access, even though it's impossible to grow the collection, and even though a collection with twice the radius and four times the storage size would have four times the latency.
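The distinction is easy to see in Haskell itself: list indexing walks the spine, so its cost depends on which element you ask for, while `Data.Array` lookups are uniform across indices. A small sketch:

```haskell
import Data.Array (listArray, (!))

main :: IO ()
main = do
  let xs  = [0..9999] :: [Int]
      arr = listArray (0, 9999) xs
  -- List access: walks 9999 cons cells to reach the last element,
  -- but only 1 to reach the first. Not random access.
  print (xs !! 9999)
  -- Array access: a single lookup, same cost for any index. Random access.
  print (arr ! 9999)
```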
This kind of thinking happens when you are a strong expert in a field, but your frontal lobes stop receiving enough blood. When this happens, something as simple as lazy evaluation becomes the key to the universe.
...it's useful to (over)generalize sometimes to get more explanatory power for things.
I mean, it probably says nothing useful about programming, but the other way around, thinking of "uncollapsed" wave-functions as lazy evaluation could be useful. I'm not up-to-date on theoretical physics, but I think there might be something like that in Deutsch's constructor theory.
In programming I'd prefer a language that makes it syntactically/visually obvious what's lazy and what's not, and lets you pick (eg. like Rust does with &mut), with some sigil maybe, but that's probably low-prio for many language designers nowadays...
EDIT+: and you could say you practically get this already in mainstream languages... lazy-vals are just functions and it's probably good enough or better for most programmers to have them distinct/explicit.
I want to ask God how to make my stack build process faster.. even turning off the optimization flag it still takes quite some time on my 2.6 GHz 6-Core Intel Core i7.. (or is it because I'm on a Mac? Does it build faster on Linux?)
> Consider the wave-particle duality in quantum mechanics. Every particle behaves as a wave, as long as you haven’t interacted with it. Thanks to Haskell’s lazy evaluation values are also only evaluated once they are accessed (interacted with particles), and stay unevaluated thunks (waves) in the meantime.
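The quoted analogy can be made literal with `Debug.Trace`: the thunk stays "uncollapsed" until something forces it. (A toy sketch only; `trace` prints to stderr when, and only when, its value is first demanded.)

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- An unevaluated thunk: the "wave". No trace fires yet.
  let wave = trace "collapsed!" (42 :: Int)
  putStrLn "thunk created, not yet evaluated"
  -- Forcing the thunk is the "measurement": the trace fires exactly here.
  print wave
```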
Lazy evaluation is a beautiful thing, and in many ways, it is the solution to self-reference.
Hofstadter talks about this in "I Am a Strange Loop" and "Gödel, Escher, Bach". Well, he talks about many things, but among them how Gödel numbers can map to proofs that are self-referential, and he relates that to humans: out of very basic building blocks, if enough representational power exists, self-reference, and therefore consciousness, emerges.
He posits that humans, while self-referential, don't fall into infinite strange loops because they can assign the abstraction of "self" onto an "object" and evaluate only as needed. In essence, the "self" is lazily evaluated.
This view on wave-particle duality and the quantum measurement is a (very) leaky abstraction. It is a process, governed by decoherence - for a nice overview, see e.g. "Decoherence, einselection, and the quantum origins of the classical" by Zurek (https://arxiv.org/abs/quant-ph/0105127).
No, in fact I would say almost the exact opposite. Einstein's famous quote was expressing his distaste for the "Copenhagen interpretation" of quantum mechanics. Among people who seriously think about interpretations of quantum mechanics, many (but not all) think that there are serious flaws with the Copenhagen interpretation.
Currently the speed of light is constant by definition of the meter. If we were to find certain cases where the speed of light appears to be different from c, that would be interpreted as compression or expansion of spacetime. For example, universal expansion can be reinterpreted as light being faster in the past.
I'm wondering whether it could be reinterpreted as time having a variable rate.
https://www.prometheus-music.com/audio/eternalflame.mp3
Refrain (full lyrics): http://www.songworm.com/lyrics/songworm-parody/EternalFlame....
For God wrote in Lisp code
When he filled the leaves with green.
The fractal flowers and recursive roots:
The most lovely hack I’ve seen.
And when I ponder snowflakes, never finding two the same,
I know God likes a language with its own four-letter name.
He's still sleeping.
If you ever make a "Haskell is bad because it doesn't use state but the real world uses state" argument, this is what you sound like.