Cool! Y’all, this is obviously an RPC interface, but the point is that `import { onNewTodo } from './CreateTodo.telefunc.js'` is not, as written, an RPC. That transform, with seamless type support et cetera, is what makes this interesting. (If you think it’s interesting.)
I think it’s interesting; I experimented with a higher-magic version of this a couple of years ago. (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions, which was really fun when it was working.
My experience was that for the simplest use cases, the ergonomics were unbeatable. Just drop a `@remote` tag into your TodoMVC frontend, and there’s your TodoMVC backend. But as soon as I had to look under the hood (for something as simple as caching behavior), the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.
(I think that might be why you tend to see this sort of magic more in the context of a full framework (such as Meteor), where all the typical use cases can be answered robustly. If you’re building your thing out of modules, cleverer abstractions usually mean more work overall to integrate everything and boring is better.)
But it has a section titled "Network Transparency" that talks about when transparency fails. And he calls out three things that make network calls fundamentally different:
1. Availability. It's possible that your network "function" will be impossible to call for some amount of time.
2. Latency. The round-trip latency of RPC calls is extremely high, which affects API design in major ways. Typically, you wind up wanting to perform multiple queries or multiple operations per round-trip.
3. Reliability. Network calls may fail in many more ways than local calls.
And these are all issues that have defeated many "transparent" RPC systems in the past. Once your application spreads across multiple machines that interact, then you need to accept you're building a distributed system. And "REST versus RPC" is relatively unimportant compared to the other issues you'll face.
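Those three differences are exactly what a local-function veneer hides. As a rough sketch (plain TypeScript, no particular RPC library; the names are mine), here's the kind of timeout-and-retry plumbing callers end up needing once availability and reliability enter the picture:

```typescript
// A remote call can hang (availability), be slow (latency), or fail in
// transit (reliability) -- so callers grow wrappers like these, which a
// local function call never needs.
type Call<T> = () => Promise<T>;

function withTimeout<T>(call: Call<T>, ms: number): Promise<T> {
  return Promise.race([
    call(),
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    ),
  ]);
}

async function withRetry<T>(call: Call<T>, attempts: number): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch (err) {
      lastError = err; // transient failure: try again
    }
  }
  throw lastError;
}

// Simulated flaky "remote" function: fails n times, then succeeds.
function makeFlaky(failures: number): Call<string> {
  let left = failures;
  return async () => {
    if (left-- > 0) throw new Error("ECONNRESET");
    return "ok";
  };
}

const flaky = makeFlaky(2);
withRetry(() => withTimeout(flaky, 1000), 3).then((r) => console.log(r)); // logs: ok
```

None of this answers the latency point, either: once each call costs a round-trip, you also start batching operations, which reshapes the API itself.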
> My experience was that for the simplest use cases, the ergonomics were unbeatable.
Exactly.
> the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.
For real-time use cases, I agree. I believe functions are the wrong abstraction for real-time. I do plan to support real-time, but using an "it's just variables" abstraction instead; see https://github.com/brillout/telefunc/issues/36.
> cleverer abstractions usually mean more work overall to integrate everything and boring is better.
Yes, Telefunc's downside is that it requires a transformer. The Vite and Webpack ones are reliable and used in production (using `es-module-lexer` under the hood). The hope here is that, as Telefunc's popularity grows, more and more stacks are reliably supported. Also, there is a prototype using Telefunc without a transformer at all, which works but needs polishing.
> I experimented with a higher-magic version of this a couple of years ago.
That sounds familiar! We were doing all of that for Opalang around 2009-2010, if I recall my calendar correctly. Were you there?
> It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence. My goal was to make the RPCs as close as possible to local async functions
Are you me? I made this as a fun project a while ago, and left it there gathering dust, but earlier this year it found new life in an AI application and it's delivering promising results.
There's not much new under the sun. Virtual machines were created in the 1960s, for example.
The wisdom I was given by an old hand is that the computer industry goes through a cycle of focus: disk, network, CPU - at any given time the currently 'hot' technology is targeting one of these hot spots.
I am not sure I have lived long enough to see this play out for myself but it seemed that there was some truth to it when I first heard it.
The issue with RPC was that network calls were masked as if they were ordinary calls. But they're not ordinary calls: they might take a long time to respond, they might fail because of network issues, you could send a request and then fail to receive the response. Those kinds of situations are not common with ordinary calls, so it creates a kind of impedance mismatch.
For example, it's not possible to call a local JS function, have it start working, and then fail to return a result for some reason. The worst thing that could happen is a stack overflow, but then it won't let you call the function at all.
I think that RPC is fine as long as it's clearly separated and those who read and write code do not mask it behind ordinary, innocent-looking calls. It's OK to call personApi.getFullName(person.firstName, person.lastName). It's not OK to call person.getFullName().
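To make that concrete, here's a small sketch (the names mirror the comment's example and are purely illustrative): the first style keeps the network boundary visible at the call site, the second hides it behind an innocent-looking method.

```typescript
interface Person {
  firstName: string;
  lastName: string;
}

// OK: the `personApi` prefix tells the reader this call may cross the network.
const personApi = {
  async getFullName(firstName: string, lastName: string): Promise<string> {
    // stand-in for: await fetch("/rpc/getFullName", ...)
    return `${firstName} ${lastName}`;
  },
};

// Not OK: in a "transparent" RPC system this innocent-looking method could
// hide a round-trip, with all the latency and failure modes that implies.
class PersonProxy {
  constructor(private person: Person) {}
  getFullName(): string {
    // secretly a network call in disguise
    return `${this.person.firstName} ${this.person.lastName}`;
  }
}

personApi.getFullName("Ada", "Lovelace").then((n) => console.log(n)); // logs: Ada Lovelace
```

The async signature on `personApi.getFullName` is itself part of the honesty: the caller can't forget they're awaiting something.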
> If it was this easy everyone would be using RPC all over the place for many years already, and we'd have no HTTP and no train-load of other protocols.
People are using RPC over HTTP all the time. Most backend APIs I've seen are only vaguely RESTful [1] and everyone seems to stuff much of their critical functionality in JSON RPC APIs masquerading as POST endpoints.
The reason RPC is making a comeback is because of 1. SSR (e.g. rendering React components to HTML on the server-side), and 2. the increased practice of continuous deployment.
Increasingly, the frontend is deployed hand-in-hand with the backend.
RPC is a match made in heaven for stacks where backend and frontend are deployed in-sync.
There really are only two reasons for using a RESTful/GraphQL API: providing third parties access to your data (e.g. Facebook API) or decoupling frontend and backend (e.g. Netflix uses a loosely coupled microservice architecture).
I think this is neat, having looked at the examples, but isn't the title really just a game of semantics?
Non-rigorously, a web API is just the boundary of some service, which you communicate with (commonly over HTTP) at the endpoints developers specify at that boundary. I could write my own library that wraps those endpoints with functions that make the calls to them; in fact, many libraries do just that.
What this takes care of, again kind of nicely, is dynamically and automatically generating that boilerplate as you write your API so you don't have to put in the work. But I think saying this isn't just a style of building an API is a little much.
Whether you're doing RPC, REST, or whatever, the boundary between your service and another service is its API.
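For instance, the hand-rolled version of such a wrapper might look like this (the endpoint URL and payload shape are made up for illustration; this is roughly the boilerplate a tool generates for you, with the HTTP client injected so the sketch stays self-contained):

```typescript
interface Todo {
  id: number;
  text: string;
}

// A minimal fetch-shaped interface, so the wrapper is testable without a network.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<any> }>;

// A hand-written wrapper: one function per endpoint, hiding the HTTP call.
// In the browser you'd pass the real `fetch` as `fetchFn`.
async function onNewTodo(fetchFn: FetchLike, text: string): Promise<Todo> {
  const res = await fetchFn("/api/onNewTodo", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`RPC failed with status ${res.status}`);
  return res.json();
}
```

Multiply that by every endpoint and you can see why generating it from the function signatures is attractive.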
Remote functions instead of data interchange doesn't have the same ring.
Though I would say that RPC blurs a lot of lines. REST is clearly a way for two programs to communicate. RPC could be seen as a way for two programs to become one distributed program by means of location transparency.
Shared objects and other systems take that a lot further, RPC is in a middle ground that is very interesting but also a little scary. Reading this description I couldn't tell you exactly what the security implications are. With their TODO list app as written, can a malicious user execute arbitrary SQL (within the permissions assigned to their user) on the server? It wasn't immediately clear to me (I'm sure a little more effort on my part could clear it up).
Our latest projects are built with RPC-style APIs and a BFF approach. We generate the TS client with Swagger and you effectively get strongly typed remote method calls.
So basically, we're at SOAP again. A few more years and maybe we'll come back to Remoting?
Using JSON-RPC 1.0 here, with ad hoc client and documentation generation. So yes, basically SOAP-like, without the XML and "enterprise Java" days cruft. Communication between the frontend team and backend team is easy; everyone knows what a function is. No fiddling with REST concepts for which everybody has their own personal interpretation. It's simple, it works.
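For anyone who hasn't used it, the wire format is about as small as it gets. A toy sketch of both sides (the method name and dispatcher are made up; JSON-RPC 1.0 responses carry `result`, `error`, and `id`, with `error` null on success):

```typescript
// What goes over the wire: a method name, positional params, and an id
// to match the response back to the request.
interface JsonRpcRequest {
  method: string;
  params: unknown[];
  id: number;
}
interface JsonRpcResponse {
  result: unknown;
  error: unknown; // null on success (JSON-RPC 1.0 style)
  id: number;
}

// A toy server-side dispatcher: method name -> plain function.
const methods: Record<string, (...args: any[]) => unknown> = {
  getFullName: (first: string, last: string) => `${first} ${last}`,
};

function handle(req: JsonRpcRequest): JsonRpcResponse {
  const fn = methods[req.method];
  if (!fn) return { result: null, error: "method not found", id: req.id };
  return { result: fn(...req.params), error: null, id: req.id };
}

const request: JsonRpcRequest = {
  method: "getFullName",
  params: ["Ada", "Lovelace"],
  id: 1,
};
console.log(handle(request)); // { result: 'Ada Lovelace', error: null, id: 1 }
```

Everything else (the client stub, the docs) can be generated from the `methods` table, which is presumably what "ad hoc generation" means in practice.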
Yea, I've seen a lot of folks doing this and it's neat. Most projects don't really need REST/GraphQL. So many waste so much time implementing a perfect GraphQL setup even though they don't need it :sweat_smile:.
I don't think "API" just means "programmatic interface". It also denotes some kind of decoupling. A library API => decoupling between lib user and lib author. GraphQL API => decoupling between backend and frontend.
The point of Telefunc is precisely to not decouple frontend and backend. Telefunc is all about bringing backend and frontend tightly together.
So, the tagline is very much on point ;-).
But, yea, you make me think that "Remote Functions. Instead of REST/GraphQL." is maybe better. I'll think about it.
And if it has to be, I'd much rather have an Erlang / OTP-like approach where resilience and failure recovery are so fundamentally baked into the language that they're an actual feature instead of a liability.
Google has a system called Plaque in which you can express DAGs of operators (functions), and it handles all the scheduling and data flow between nodes running on a distributed cluster. This achieves much better performance efficiency at large distributed scale while also providing program logic that's easy to reason about and debug. I would expect more such distributed programming models with dedicated language/compiler toolchains in the future.
This is pretty neat. You write a function and then a library automatically creates wrapper libs to be called in a browser as if it’s a function call. A variety of frameworks have done similar - SOAP auto-generated Java libs come to mind - but this looks very clean and minimal.
Author of Telefunc here. I'm glad folks are finally taking RPC seriously.
RPC is back. Smarter, leaner, and stronger. (Sorry for the marketing jargon, but I really do believe that.)
Telefunc is still fairly young, so you can expect a lot of polishing in the coming weeks/months. The high-level design is solid and I expect no breaking changes on that end.
If you encounter any bug (or DX paper cut), please create a GitHub Issue and I'll fix it. I'm also the author of https://vite-plugin-ssr.com which also has no known bug, so I do take that claim seriously.
tRPC still relies on strings: no proper refactoring (i.e. F2 in VS Code), which makes refactoring as slow and error-prone as with REST and GraphQL. That should change with v10, but its API hasn't been finalized yet and the ETA is well into 2023, if at all. The overall API design also has missed opportunities.
Telefunc should support zod, why shouldn't it? Just pass them as native zod types and all good. You could also convert them before with z.infer but you don't need to.
> It seems like this library has its own bespoke syntax for types
Only for stacks which don't transpile server-side code. You can use normal TS types with something like Next, Nuxt, Svelte, Vite, etc. So, these bespoke types aren't relevant for the majority.
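Whichever library does the validating, the point is the same at any RPC boundary: input arriving over the network is untrusted `unknown`, whatever the compile-time types claim. A hand-rolled stand-in (so the sketch stays dependency-free; zod's `parse` plays this role for real, and Telefunc's shield is its built-in equivalent):

```typescript
interface NewTodoInput {
  text: string;
}

// Runtime check: narrows `unknown` to NewTodoInput or throws.
function parseNewTodoInput(data: unknown): NewTodoInput {
  if (
    typeof data !== "object" ||
    data === null ||
    typeof (data as any).text !== "string"
  ) {
    throw new Error("invalid NewTodoInput");
  }
  return { text: (data as any).text };
}

// Inside a remote function, validate before touching the database.
async function onNewTodo(raw: unknown): Promise<string> {
  const { text } = parseNewTodoInput(raw); // throws on malformed input
  return `created: ${text}`;
}
```

With zod you'd replace `parseNewTodoInput` with `schema.parse`, and the static type falls out of the schema instead of being declared twice.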
Nice, I do something similar in a web framework I've been working on. All rendering happens on the server, so the callbacks have to run there too. The onclick handler in the DOM triggers a POST to the server with a unique callback id.
def self.get_initial_state(initial_count: 0, **)
  { count: initial_count }
end

def handle_decrement(_)
  update do |state|
    { count: [0, state[:count] - 1].max }
  end
end

def handle_increment(_)
  update do |state|
    { count: state[:count] + 1 }
  end
end

def render
  <div class={styles.counter}>
    <button on-click={handler(:handle_decrement)} disabled={state[:count].zero?}>
      -
    </button>
    <span>{state[:count]}</span>
    <button on-click={handler(:handle_increment)}>
      +
    </button>
  </div>
end
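Sketched in TypeScript rather than the Ruby-flavored DSL above (all names here are mine, not that framework's API): the `handler(...)` call registers the function under a fresh id that gets embedded in the rendered DOM, and the server's POST route just looks the id back up.

```typescript
type State = { count: number };
type Handler = (state: State) => State;

// Server-side registry: callback id -> handler function.
const handlers = new Map<string, Handler>();
let nextId = 0;

// Called during render: registers the handler, returns the id for the DOM.
function handler(fn: Handler): string {
  const id = `cb-${nextId++}`;
  handlers.set(id, fn);
  return id;
}

// What runs when the browser POSTs a callback id back to the server.
function dispatch(id: string, state: State): State {
  const fn = handlers.get(id);
  if (!fn) throw new Error(`unknown callback id: ${id}`);
  return fn(state);
}

const incId = handler((s) => ({ count: s.count + 1 }));
console.log(dispatch(incId, { count: 0 })); // { count: 1 }
```

The interesting design question is the registry's lifetime: per-render ids are safe against stale clients, while stable ids survive reconnects.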
Really odd how there are a dozen comments on RPC but no one mentioned gRPC for the web browser. Google goes in reverse: from the protobuf you autogenerate API boilerplate, client stubs, and even REST-over-HTTP APIs.
The frontend-side also throws an error. `throw Abort()` is about protecting telefunctions from third parties. It shouldn't be used to implement business logic (simply use `return` instead).
I actually have plans to make Telefunc also have sensible defaults for network failure, so that the user doesn't have to take care of that (while still being able to customize failure handling if they need to).
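A sketch of that split (this `Abort` class and the telefunction are stand-ins, not telefunc's actual exports): throw for requests a legitimate frontend would never make, return a value for expected business outcomes.

```typescript
// Stand-in for telefunc's Abort: signals "this request is illegitimate".
class Abort extends Error {}

interface Context {
  userId: number | null;
}

async function onDeleteTodo(
  ctx: Context,
  todoId: number
): Promise<{ ok: boolean; reason?: string }> {
  // Protection: a logged-out caller shouldn't reach this at all.
  if (ctx.userId === null) throw new Abort("not logged in");

  const exists = todoId < 100; // stand-in for a database lookup

  // Business logic: an expected outcome, so just return it.
  if (!exists) return { ok: false, reason: "no such todo" };
  return { ok: true };
}
```

The frontend then treats the thrown case as "something is wrong with this client" and the returned case as ordinary control flow.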
brillout|3 years ago
> I experimented with a higher-magic version of this a couple of years ago.
Neat, curious, is it on GitHub?
pragmatic|3 years ago
We fought so hard to get rid of crazy RPC protocols (COM+, CORBA) and abominations like SOAP and people keep trying to bring them back.
Yes, networking is hard. Do everything you can to make it easier to debug: text protocols, easy-to-use tools, etc.
They have no idea how thankful they should be for HTML-friendly ("REST") APIs.
Don't wake the RPC eldritch horrors. Let them dream their undead dreams in the stygian depths.
Waterluvian|3 years ago
But I keep re-learning what you’re getting at, throughout my career: just keep it simple. Most things can just be simple.
hexo|3 years ago
I like the logo (or icon) they have, which masquerades it as some serious Haskell (or some other functional PL) related project.
If it was this easy everyone would be using RPC all over the place for many years already, and we'd have no HTTP and no train-load of other protocols.
akiselev|3 years ago
[1] no true scotsman and HATEOAS ¯\_(ツ)_/¯
brillout|3 years ago
Most projects don't need either.
likeabbas|3 years ago
RPC - Remote Procedure Calls
I’m not even 30 btw
hsn915|3 years ago
The backend language I work with is Go, so there are "two" problems that need to be solved for this to work:
- Resolving function name and encoding/decoding arguments from a url to a real function call
- Representing Go structs as Typescript interfaces for the client side code to look like regular function calls with regular types and all that.
For the second part, I have a writeup about how I'm doing it: https://hasen.surge.sh/go-ts-type-bridge.html
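As an illustration of that second problem (my own sketch, not the writeup's actual output): a Go struct's json tags determine the field names the TypeScript side must mirror, so the client call ends up looking like a regular typed function.

```typescript
// The Go side might declare:
//
//   type Todo struct {
//       ID   int    `json:"id"`
//       Text string `json:"text"`
//   }
//
// so the generated TypeScript mirror follows the json tags:
interface Todo {
  id: number;
  text: string;
}

// Client-side call that now looks like an ordinary typed function.
async function getTodos(): Promise<Todo[]> {
  // stand-in for: fetch("/api/getTodos") decoding the Go server's JSON
  return [{ id: 1, text: "write the type bridge" }];
}

getTodos().then((todos) => console.log(todos[0].text)); // logs: write the type bridge
```

The fiddly parts in practice are the cases with no direct TS analogue: int64 precision, time.Time encoding, and nil slices vs empty slices.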
It's worth noting that the title of the OP website is wrong.
"Remote Functions Instead of API"
What do you think an API is? API does not stand for "HTTP Requests with JSON". It stands for "Application Programming Interface".
edfletcher_t137|3 years ago
> "The source code of Telefunc has no known bug"
No known bugs does not "rock-solid" make. This claim alone makes this project very hard to take seriously.
spullara|3 years ago
https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...
brillout|3 years ago
Let me know if you have any questions!
dharmaturtle|3 years ago
Edit: so far, I'm preferring trpc, because it has support for zod and a few other libraries https://trpc.io/docs/v9/router
It seems like this library has its own bespoke syntax for types https://telefunc.com/typescript
brillout|3 years ago
Less boilerplate.
> so far, I'm preferring trpc, because it has support for zod
What's your use case for wanting zod instead of https://telefunc.com/shield#all-types?
diceduckmonk|3 years ago
https://github.com/grpc/grpc-web
kamilafsar|3 years ago
How are errors handled though? What happens when you `throw Abort()`?
https://github.com/samen-io/samen
brillout|3 years ago
Samen => Neat :-).
_sohan|3 years ago
Maybe there have been enough changes in the ecosystem now to be able to abstract the network and write code that's not too brittle.