sclv
|
2 years ago
|
on: Leaving Haskell behind
Indeed, as you note, a reinstallable base is a goal everyone wants. It's basically historical reasons, and the coupling of primitives (tied to the compiler's innards) to non-primitives, that caused this situation, but sufficient elbow grease should improve things.
sclv
|
3 years ago
|
on: On leaving Mapbox after 12 years
I don't think any of this is impossible to change. There was just a Labor Notes conference this weekend where thousands of people pushing for more democratic, rank-and-file-run unions showed up. And examples like the ALU show that going through existing unions isn't the only way possible. And beyond that, even organizing with a major union can still give you a local you have power over -- workers at the Times Tech Guild, Amazon, Kickstarter, etc. have all organized with existing larger unions, and are already starting to see more control over their conditions, more rights, and more respect.
And I disagree that job protections would "hinder some of the innovation" happening -- if anything, more comfortable and safe employees are more free to innovate. I think it would just hinder employers giving us impossible deadlines to do underspecified or ill-specified things to tick some useless checkbox, or to deliver a feature they already sold without it having been written yet.
sclv
|
3 years ago
|
on: On leaving Mapbox after 12 years
Yes, companies are not democracies. That's why we need unions! That's the only way to exercise our collective power to negotiate with the employer on more equal terms. When we negotiate individually we also say "here are the things we want, and that is our condition of work." There is _always_ a conflict between what employees want, which involves wages and conditions, and what employers want, which is getting the most work while yielding the least in wages and conditions.
Unions in tech are as possible and necessary as unions anywhere else. Nothing about being in tech makes us "special", and the whole mythology that it does only serves to keep us from organizing and solidifying our conditions and strength.
sclv
|
7 years ago
|
on: 1/0 = 0
Holy moly that is not at all what that paper says! It specifically argues that certain equational properties of a given total language continue to hold in the total fragment of a given partial language. It is an embedding theorem combined with lifting properties, not a license to perform _any_ reasoning at all regarding the non-total portion of the language it considers!
sclv
|
7 years ago
|
on: 1/0 = 0
Your edit is starting to get it. If by the argument of the article x/y = cotton candy for all x,y, then probably the argument of the article isn't good. And the reason is precisely that division in a field is taken to be nothing but a notational shorthand for multiplication by the multiplicative inverse.
sclv
|
7 years ago
|
on: 1/0 = 0
I know what he's doing. The problem is that when you make it a different function (even by just extending it), you change its equational properties. So equational properties that held over the whole domain of the function no longer hold over the extended domain. This is repaired by modifying the equational properties. But the modified equational properties mean that you now have a different system than before. So the whole thing is just playing around with words.
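To make the point concrete, here's a minimal Haskell sketch (the name `safeDiv` is illustrative, not from any library) of how extending division to be total changes its equational properties:

```haskell
-- Hypothetical "total" division: extend (/) so that x / 0 = 0.
safeDiv :: Double -> Double -> Double
safeDiv _ 0 = 0
safeDiv x y = x / y

-- The old law  (x `safeDiv` y) * y == x  held wherever division was
-- defined; over the extended domain it fails:
--   (1 `safeDiv` 0) * 0 == 0, not 1.
lawHolds :: Double -> Double -> Bool
lawHolds x y = (x `safeDiv` y) * y == x
```

`lawHolds 6 2` is `True`, but `lawHolds 1 0` is `False`: the identity has to be weakened to hold only for nonzero divisors, which is exactly the "different system" above.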
sclv
|
7 years ago
|
on: 1/0 = 0
The problem this and the other replies miss is that the standard definition of division is multiplication by the inverse. The entire argument rests on a notational sleight of hand. The property that held before -- that _when defined_ division has the inverse property -- no longer holds. Thus many equational identities that otherwise would hold do not hold.
sclv
|
7 years ago
|
on: 1/0 = 0
Right. So the multiplicative inverse property _breaks_! He just points out that it breaks, and thus you need to use a more complicated property instead. That doesn't mean that the property doesn't break.
sclv
|
9 years ago
|
on: Standardized Ladder of Functional Programming [pdf]
I think this list is not reflective of a general Haskell outlook. It's reflective of an outlook of people standing _outside_ Haskell (LC is historically a Scala-heavy conference) and projecting onto it a certain structure of expectations that isn't actually how Haskellers in general view things. I agree that this misperception isn't a good thing for Haskell -- but it's a misperception imposed from the _outside_.
sclv
|
9 years ago
|
on: The Four Flaws of Haskell
If you think reading X LoC/min is the norm, and then hit a language where you read much more slowly, you might think "this is hard to read". But if you're reading the same density of _logic_ at the same speed, then that's not a drawback. If the Haskell is, say, 3x as dense in logic, then reading at 1/3 the speed isn't a drawback at all...
sclv
|
9 years ago
|
on: Hask is not a category
You do get 0 * x <= 0. You just need a splash of domain theory to make the medicine go down.
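A small sketch of the domain-theoretic point: GHC's `(*)` is strict in both arguments, so `0 * ⊥ = ⊥`, and since `⊥ ⊑ 0` in the domain order, only the inequality `0 * x ⊑ 0` holds in general. A hypothetical left-short-circuiting multiplication (`lazyMul` is my name) recovers the equation even at `⊥`:

```haskell
-- Standard (*) is strict: `0 * undefined` diverges, i.e. 0 * ⊥ = ⊥.
-- Since ⊥ ⊑ 0, we still have 0 * x ⊑ 0 for every x.
-- A multiplication that short-circuits on a zero left argument makes
-- 0 `lazyMul` x = 0 hold even when x = ⊥:
lazyMul :: Integer -> Integer -> Integer
lazyMul 0 _ = 0
lazyMul x y = x * y
```

Here `lazyMul 0 undefined` evaluates to `0`, whereas `0 * undefined` does not terminate.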
sclv
|
9 years ago
|
on: Show HN: Learn Functional Programming Using Haskell
Once you end up with a typeclass and associate the methods, the unpacking goes away and you're just working in a context parameterized by some typeclass. I expressed it via that route to help make the connection to modules clearer. (There are occasions when you don't take that last step, too, which is why I also sort of pointed towards that route.) Lennart's post shows an example where this sort of falls down -- but the followup also shows a nice Haskelly solution that mostly works, except when we want to intermix. He also suggests explicit type arguments as a way to make things nicer -- those have now landed in GHC :-)
The other relevant pattern in a broader sense that I should mention is for effectful contexts: idiomatically, you declare a subclass of Monad with the relevant operations, then instantiate it via mtl or some other means, so you can swap between the IO-backed "real" one and various test harnesses, or add in logging layers, etc.
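A minimal sketch of that idiom (the class `MonadLog` and function names are hypothetical, not from any particular library):

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Monad.Writer (Writer, execWriter, tell)  -- mtl

-- Effect class: a subclass of Monad exposing only the operations
-- the program actually needs.
class Monad m => MonadLog m where
  logMsg :: String -> m ()

-- The IO-backed "real" instance.
instance MonadLog IO where
  logMsg = putStrLn

-- A pure test harness: collect messages instead of printing them.
instance MonadLog (Writer [String]) where
  logMsg s = tell [s]

-- Program code stays polymorphic in the effect context.
greet :: MonadLog m => String -> m ()
greet name = logMsg ("hello, " ++ name)
```

In production `greet` runs in `IO`; in tests, `execWriter (greet "world")` yields the pure log `["hello, world"]`.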
Finally, I guess I should add that, as a rule of thumb, I've noticed that purity and laziness both help provide "modular separation of concerns" directly. In particular, the most obvious thing we can do is just have each function do one thing to a bit of data and produce a different bit of data, and that's innately modular. But when we're interleaving IO (for example, with mutable data structures) and concerned with _when_ computation happens (in a strict setting), it feels like we're paying too much for this, because we get big intermediate structures. If you get the knack of just using pure lazy structures directly, though, you can sort of "amortize out" the computation cost in a nice way, and also the space cost (as conceptually some big data structures are produced "on demand"). Of course, if you get it wrong, blammo :-)
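For instance, a pure lazy structure produced on demand (names are mine):

```haskell
-- An infinite pure structure: only the demanded prefix is ever built,
-- so consuming n elements costs O(n) in time and space.
squares :: [Integer]
squares = map (^ 2) [1 ..]

firstFive :: [Integer]
firstFive = take 5 squares  -- [1,4,9,16,25]
```

The "big intermediate list" never materializes beyond the five cells that `take` demands.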
sclv
|
9 years ago
|
on: Show HN: Learn Functional Programming Using Haskell
This is an interesting question. As much more of a Haskeller than an MLer I don't tend to feel the "need" to structure my programs explicitly modularly, and when I do I sort of instinctively use a combination of typeclasses and polymorphic higher-order functions to do so.
More basically, just think "everywhere I would open a module, instead I can take a parameterized record of functions" (and furthermore, if a datatype uniquely determines that record, I can associate a typeclass to that datatype). There are limitations to this, but in fewer circumstances than you'd think -- mainly around a sort of cross-modularity (a.k.a. the expression problem).
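A minimal sketch of the record-of-functions idea (the names `MonoidDict`, `mconcatWith`, etc. are illustrative):

```haskell
-- A "module signature" as a first-class record of operations.
data MonoidDict a = MonoidDict
  { unit :: a
  , op   :: a -> a -> a
  }

-- A function parameterized by the record, where ML would use a functor.
mconcatWith :: MonoidDict a -> [a] -> a
mconcatWith d = foldr (op d) (unit d)

-- One possible "module implementation".
sumDict :: MonoidDict Int
sumDict = MonoidDict 0 (+)
```

`mconcatWith sumDict [1,2,3]` gives `6`; passing a different dictionary swaps the "module" at a call site.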
There was a very nice discussion of this on Lennart's blog in 2008, with a problem posed and some partial solutions (read from the bottom post to the top):
http://augustss.blogspot.com/2008_12_01_archive.html
sclv
|
9 years ago
|
on: The Rust Platform
I should have specified "modulo bottom", because I somehow didn't cotton on that I was talking to someone more interested in pedantry than actual discussion.
That said, constructing an inhabitant of False a _different_ way (when we can already write "someFalse = someFalse") is not particularly interesting, and again doesn't speak to parametricity in any direct way.
sclv
|
9 years ago
|
on: The Rust Platform
> I need univalence for this argument to hold water.
No, you don't. Univalence is the axiom that transporting operations across such equivalences _always_ works. If you're doing equational reasoning directly it doesn't arise.
Furthermore, all you need to do is to establish that the _type operations_ regarding one type respect the equivalence to the other type as an additional step.
As you say "a monoid is a type plus two operations" -- so fine, we can treat the monoid And as the type bool and the dictionary of operations on it, and all this still works out.
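Concretely, a sketch of that step (the record `Mon` and the names below are illustrative):

```haskell
-- A monoid presented, as in the thread, as a type plus two operations.
data Mon b = Mon { munit :: b, mop :: b -> b -> b }

andMon :: Mon Bool
andMon = Mon True (&&)

-- Transporting the operations along an isomorphism to/from is plain
-- equational reasoning; no univalence axiom is involved.
transport :: (a -> b) -> (b -> a) -> Mon a -> Mon b
transport to from (Mon u f) = Mon (to u) (\x y -> to (f (from x) (from y)))

-- The And monoid carried over to {0,1} :: Int.
intAnd :: Mon Int
intAnd = transport (\b -> if b then 1 else 0) (== 1) andMon
```

`mop intAnd` behaves as conjunction on `{0,1}`, with `munit intAnd == 1`, by construction.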
sclv
|
9 years ago
|
on: The Rust Platform
> Parametricity is too good to give up. With the minor exception of reference cells (`IORef`, `STRef`, etc.), if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.
You know that's not what parametricity means, right? Like, at all?
Here's a challenge.
`foo :: forall a. a -> a`
Now, by parametricity, that should have only one inhabitant (up to iso). Use your claimed break in parametricity from type families and provide me two distinct inhabitants.
sclv
|
9 years ago
|
on: The Rust Platform
> You wanna play the dependent type theory card? Type families as provided in Haskell are incompatible with univalence.
Hi. As someone who knows type theory, knows homotopy type theory, and also knows Haskell well, I would pose the following question to you: what purpose on god's green earth would be served by introducing univalence directly to Haskell?
(Oh, and furthermore, you realize that fundeps have precisely the same issues in this setting?)
Contrariwise, don't you find it _useful_ that we can have two monoids, say And and Or, which have different `mappend` behaviour?
Now, can you imagine having that feature and _also_ respecting the idea that set-isomorphic things should be indistinguishable? How?
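For reference, this is exactly what `Data.Monoid` does with its `All` and `Any` newtypes: two monoids on types set-isomorphic to `Bool`, kept distinguishable by the newtype wrappers (the helpers `conj`/`disj` below are my names):

```haskell
import Data.Monoid (All (..), Any (..))

-- Bool carries (at least) two useful monoids; the newtypes let both
-- coexist even though All and Any are set-isomorphic to Bool.
conj, disj :: [Bool] -> Bool
conj = getAll . foldMap All  -- (&&) with unit True
disj = getAny . foldMap Any  -- (||) with unit False
```

If isomorphic types had to be indistinguishable, `All` and `Any` could not have different `mappend` behaviour, which is the point above.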
sclv
|
9 years ago
|
on: New haskell-lang.org
YHC, Hugs, and nhc have all discontinued development.
The first is now virtually uncompilable (and was never complete), Hugs is officially unmaintained, and nhc hasn't seen a release since 2010, so it is not under active development.
While there was discussion over this at the time, it doesn't at all resemble the more recent set of arguments.
sclv
|
9 years ago
|
on: New haskell-lang.org
I have no idea either. The new site was never discussed once on the list (
https://groups.google.com/forum/#!forum/commercialhaskell ), nor does there seem to be anything in the CHG charter that would let it, as a group, do anything at all, such as making a collective decision to sponsor a site.