top | item 45556500

d_tr|4 months ago

Functions map members of a set A to members of a set B. These can simply be Cartesian products whose members are tuples. In my dream PL syntax a function call would be a function name followed by a tuple, and that tuple would be no different than the tuples you would use in any other part of the program (and so you could use all the tuple manipulation library goodies). If the function preserves some other structure of the type, like an identity element, that could be stated, so you could have morphisms. That identity element, or other such properties, could be declared just as qualifiers like 'const' are declared. Since the compiler can't verify all these stated properties, it's on the user to provide correct information, just as it's on the user to write a correct program, so nothing is lost here; anything more, like verification by the compiler, would be a bonus.
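
A rough Haskell sketch of what that buys you (the names are illustrative; Haskell functions that take a tuple already work this way): the argument list is an ordinary tuple, so generic tuple utilities apply to it.

```haskell
import Data.Tuple (swap)

-- A tupled function: the call site is a function name followed by an
-- ordinary tuple, not a special argument-list syntax.
divMod' :: (Int, Int) -> (Int, Int)
divMod' (a, b) = (a `div` b, a `mod` b)

main :: IO ()
main = do
  print (divMod' (7, 2))         -- (3,1)
  -- The argument is a plain tuple, so tuple library goodies like
  -- swap can be used to build it:
  print (divMod' (swap (2, 7)))  -- (3,1)
```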

Mathematicians have been packing all this stuff nicely for a couple of centuries now, maybe we could use more of their work on mainstream computing, and it could also be a nice opportunity to get more people to appreciate math and structure.

Something that has side effects all over the place should just not be called a function, but something else; maybe "procedure" would be an appropriate, clear term.

WJW|4 months ago

Haskell is much like this? A function like `borp :: a -> b` maps from type a to type b. If you want to have side effects like mutable state, you need to encode that in the function signature, like `borpWithState :: a -> State s b`, where s is the type of the mutable state.

In this case it's almost the opposite of most programming languages. In (say) Ruby or Java, any function or method can do anything: write to stdout, throw exceptions, access the network, mutate global state, etc. In Haskell, a function can only do calculations and return the result by default. All the other things are still possible, but you do have to encode them in the type of the function.
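
A minimal, library-free sketch of that contrast (the names `borp` and `next` are illustrative; real code would use `Control.Monad.State` from the mtl package):

```haskell
-- State s b is essentially s -> (b, s): the "mutable" state is
-- threaded explicitly, and therefore shows up in the signature.
type State s b = s -> (b, s)

-- Pure: nothing but the mapping from input to output.
borp :: Int -> String
borp = show

-- Stateful: returns the current counter and the incremented state.
next :: State Int Int
next n = (n, n + 1)

main :: IO ()
main = do
  putStrLn (borp 42)  -- "42"
  print (next 0)      -- (0,1)
```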

EDIT: The annotations you mention with regards to identity elements etc do exist, but they live mostly on the data structures rather than on the functions that operate on those data structures.

defanor|4 months ago

Languages with dependent types (Agda, Idris, Coq, Lean) seem even closer to the description, with the user supplying proofs of properties. For instance, as in idris2-algebra [1]. One can similarly define isomorphisms, or other kinds of morphisms, by listing their properties and requiring proofs in order to construct them.

[1] https://github.com/stefan-hoeck/idris2-algebra
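
For a flavor of how such proof-carrying definitions look, here is a minimal sketch in Lean 4 (the names are illustrative, not taken from idris2-algebra):

```lean
-- A monoid packaged together with proofs of its laws; you cannot
-- construct a value of this type without supplying the proofs.
structure MonoidOn (α : Type) where
  op      : α → α → α
  unit    : α
  idLeft  : ∀ a, op unit a = a
  idRight : ∀ a, op a unit = a
  assoc   : ∀ a b c, op (op a b) c = op a (op b c)

-- Nat with addition: the obligations are discharged by existing lemmas.
def natAddMonoid : MonoidOn Nat :=
  { op := Nat.add, unit := 0,
    idLeft := Nat.zero_add, idRight := Nat.add_zero,
    assoc := Nat.add_assoc }
```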

ux266478|4 months ago

Math is great and should be well studied by programmers, but in general I oppose this idea. Mathematicians define things the way they do because they have neither flip-flops nor a defined execution method as part of their foundational system. These two things radically change the interaction we have with any given formal system put on top of it.

> Functions map members of a set A to members of a set B.

> Something that has side effects all over the place should just not be called a function

Leibniz defined a function as a quantity that depends on some geometry, like a curve. Bernoulli later defined it as a quantity that results from a variable. The Latin word "functio" means process, implying not a mapping but an arbitrary sequential performance. Mathematicians are prone to taking words from elsewhere, either twisting their meaning or inventing wholly new meaning out of thin air, all according to their whimsy for their own particular needs. I do not think a reasonable case can be made that we have to respect ZFC's narrow conception of a function when we do not live in a ZFC world.

dawnofdusk|4 months ago

>Mathematicians are prone to taking words from elsewhere, either twisting their meaning or inventing wholly new meaning out of thin air, all according to their whimsy for their own particular needs.

True, but one benefit of those guys is that they actually define what they mean in a formal way. "Programmers" generally don't. There is in fact some benefit in having consistent names for things, or, failing that, at least a culture in which concepts have unambiguous, mandated definitions.

thaumasiotes|4 months ago

> Functions map members of a set A to members of a set B. These can simply be Cartesian products whose members are tuples.

Well, a function can't be a Cartesian product unless set B has cardinality 1. It's perfectly coherent to view a function as a set of tuples, but it's not legal for that set to contain two tuples (a, b) and (a, c) where b ≠ c.
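
To illustrate the single-valuedness condition: viewing a finite function as its graph, a set of (input, output) pairs, the check looks like this (a hypothetical Haskell sketch):

```haskell
import Data.List (nub)

-- A candidate function represented as its graph. It is a genuine
-- function only if no input is paired with two different outputs.
isFunction :: (Eq a, Eq b) => [(a, b)] -> Bool
isFunction g = all singleValued (nub (map fst g))
  where singleValued a = length (nub [b | (a', b) <- g, a' == a]) == 1

main :: IO ()
main = do
  print (isFunction [(1, 'b'), (2, 'c')])            -- True
  print (isFunction [(1, 'b'), (1, 'c'), (2, 'c')])  -- False: 1 maps to both 'b' and 'c'
```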

> In my dream PL syntax a function call would be a function name followed by a tuple, and that tuple would be no different than the tuples you would use in any other part of the program (and so you could use all the tuple manipulation library goodies).

This already exists. For example, that's how `apply` works in Common Lisp.

https://www.lispworks.com/documentation/HyperSpec/Body/f_app...

    (apply #'+ '(1 2)) => 3
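
Haskell's `uncurry` (in the Prelude) gives the same shape for two-argument functions: it converts a curried function into one that takes an ordinary pair.

```haskell
-- uncurry turns f :: a -> b -> c into a function (a, b) -> c,
-- so the call site is a function applied to a plain tuple.
main :: IO ()
main = print (uncurry (+) (1, 2))  -- prints 3
```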

agumonkey|4 months ago

There was one attempt at creating a language that split pure functions from effectful procedures. Any construct containing a procedure call was automatically/effectively typed as a procedure. But I can't recall the name so far...

ngruhn|4 months ago

Unison is one example. Except it doesn't just differentiate between pure and effectful; it also tracks which combinations of effects are used.

paulddraper|4 months ago

That’s cool function coloring.

C++ sort of has this, with const.

noelwelsh|4 months ago

ML (Standard ML, OCaml) functions idiomatically accept and return tuples.

thaumasiotes|4 months ago

I'm pretty sure OCaml functions are monadic. They don't really accept tuples (unless you intentionally do something weird).

Rather, if you define (in your mind) a function of three variables, the compiler makes that a function of one variable that returns another function. And that return-value function takes one variable and returns a third function. And that third function takes one variable and returns the result of the triadic function you intended to write.

That's why the type of a notionally-but-not-really triadic function is a -> b -> c -> d and not (a, b, c) -> d. It's a function of one variable (of type a) whose return value is of type b -> c -> d.
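
A small Haskell sketch of the same default (Haskell and OCaml share it), contrasting the curried form with an explicitly tupled one:

```haskell
-- Curried (the default): add3 1 is itself a function Int -> Int -> Int.
add3 :: Int -> Int -> Int -> Int
add3 a b c = a + b + c

-- Explicitly tupled: a function of one variable, which happens to be a triple.
add3T :: (Int, Int, Int) -> Int
add3T (a, b, c) = a + b + c

main :: IO ()
main = do
  let f = add3 1           -- partial application, type Int -> Int -> Int
  print (f 2 3)            -- 6
  print (add3T (1, 2, 3))  -- 6
```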

snthpy|4 months ago

> In my dream PL syntax a function call would be a function name followed by a tuple, and that tuple would be no different than the tuples you would use in any other part of the program

So PRQL (prql-lang.org) is kind of like that, with the limitation that control flow is limited to the List Monad bind: the tuples from one step are piped to the function call in the next step one at a time, each producing 0..* result tuples, and the resulting multiset is flat-mapped. At the moment it just transpiles to SQL, but a couple of months ago I was exploring different Lambda Calculi and how to extend this to a more general PL. Alas, that won't take shape until AI is at the level that it can write that code for me. I guess LINQ and similar Language Integrated Query Languages already provide this functionality.

P.S. Writing the above made me think that it's not quite what you asked for; in the PRQL case each function receives an implicit `this` argument, which is the tuple I was thinking of. However, the function can also take other arguments, including keyword arguments. Those are arbitrary. I guess they are implicitly ordered and could be represented as a tuple as well. What would you see as the benefit of that?

> (and so you could use all the tuple manipulation library goodies)

Other than indexing into tuples, I can't really think of anything else, at least for single tuples. I initially thought of something like `zip(*args)`, but that's only really useful when you have a list of tuples or a tuple of lists, and then you're back in PRQL land. Indexing into tuples is also brittle and does not produce self-documenting code, so I prefer the PRQL and SQL namedtuples/structs where fields are referenceable by name.

I have this suspicion that PRQL functions are parameterised natural transformations but my Category Theory at that level is too rusty to check without extra work. If that's the case though then having the explicit function arguments be simple values feels justified to me since they're just indexing families of related transformations and are not the primary data being transformed (if that makes sense?).