This sort of parallelism is common for big matrix number-crunching on machines with large numbers of CPUs. There's a big computational cycle, where lots of CPUs are turned loose on isolated chunks of the problem. No shared state changes during this phase. When they're all done, there's a single update to the shared state for the next round. In that kind of computation, each thread does about the same amount of work, so the threads finish at roughly the same time. This is useful for finite element analysis, hydrodynamics, weather prediction, and similar large problems that decompose into spatial cells. If everything follows the update-isolation rules, results are repeatable.
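That cycle is essentially a fork-join pattern: workers compute on disjoint chunks with no shared writes, then one sequential step merges the results into shared state. A minimal Python sketch (the "relaxation" function and chunk sizes are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def relax_chunk(cells):
    # Compute phase: a pure function of the previous state, no shared writes.
    return [c * 0.5 for c in cells]

def step(state, workers=4):
    # Split the state into disjoint chunks, one per worker.
    n = len(state)
    chunk = (n + workers - 1) // workers
    chunks = [state[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(relax_chunk, chunks))  # isolated, repeatable
    # Update phase: one sequential merge into the shared state.
    return [c for part in results for c in part]

state = [1.0] * 8
state = step(state)
print(state)  # always [0.5] * 8, regardless of thread timing
```

Because the compute phase never touches shared state and the merge is sequential, the output is identical on every run no matter which worker finishes first.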
> The language I'm speaking of is designed for real-time high-reliability scenarios where ease of programming takes the back seat to correctness and determinism.
It sounds like the opposite - the setup allows for a low priority task to block every other task, which is normally the last thing you'd want in such a system.
Yes, this is a weakness of the model, but the key strength is that it will do so every time, which makes it way easier to test than the alternatives. Real-time doesn't mean that performance is as high as possible; it means that you can guarantee a certain level of responsiveness. A language can't do that for you, but it can help, which is why people use Esterel.
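The trade-off here — every task waits for the slowest one, but does so identically on every run — is essentially a barrier at the end of each cycle. A toy Python sketch (nothing Esterel-specific, just the shape of the model):

```python
import threading

NUM_TASKS = 3
barrier = threading.Barrier(NUM_TASKS)
log = [[] for _ in range(NUM_TASKS)]

def task(tid, cycles=2):
    for t in range(cycles):
        log[tid].append(t)  # each task's work for this cycle
        barrier.wait()      # nobody proceeds until the slowest task is done

threads = [threading.Thread(target=task, args=(i,)) for i in range(NUM_TASKS)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(log)  # every run: [[0, 1], [0, 1], [0, 1]]
```

The slowest task gates everyone, which costs throughput, but the cycle boundary is the same on every execution, which is what makes the behaviour testable.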
I understand that one semester is short, and I think you already did a great job.
I don't really understand what Fumurt brings compared to existing approaches. It seems that you are blocking all threads at one rendez-vous point, and that seems too radical. Also, "underthreads" could result in an unbalanced system where one thread does most of the work while the others are waiting. Automatic ways to distribute code among tasks exist in synchronous languages (I don't think synchronous languages necessarily sacrifice performance).
It seems you did a lot of work on the parser and compiler. You also use synchronized variables, which seems a little odd: you say that "[...] the programmer doesn’t have to worry about the order in which a synchronized variable is written to by the different threads." Do different threads really want to write the same variable during one timestep?
You mention Esterel and synchronous languages.
I am not sure Esterel (or Quartz[1]) is the most widely used implementation. I would bet on Scade, a child of Lustre, but I am not sure.
As you say, they support ways to distribute code among execution units, and Signal[2] in particular is designed to make that easy and efficiently compilable.
In Signal, you define processes and map them to different execution units through "RunOn" directives. The Signal compiler then statically computes the order in which I/O and computations are executed on each concurrent thread (POSIX threads and FIFOs, or MPI). There is also a lot to say about logical clocks, which allow transmitting only the information necessary to synchronize distributed code.
The result is very little scheduling code: only signal/wake instructions surrounding sequential code, with nested "if" blocks for clocks (activation conditions).
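I haven't seen the code Signal actually emits, but the shape being described — signal/wake instructions wrapping sequential code, with "if" blocks guarding clocked computations — might look roughly like this sketch (all names and the clock condition are invented; semaphores stand in for the signal/wake instructions):

```python
import threading

ready = threading.Semaphore(0)  # "signal": producer has a value
done = threading.Semaphore(0)   # "wake": consumer is finished with it

def producer(xs, out):
    for x in xs:
        out.append(x * 2)  # sequential computation
        ready.release()    # signal the consumer
        done.acquire()     # wait before the next step

def consumer(out, sink, clock):
    for step in range(3):
        ready.acquire()    # wait for the producer's signal
        if clock(step):    # "if" block: activation condition (logical clock)
            sink.append(out[-1])
        done.release()     # wake the producer

out, sink = [], []
t1 = threading.Thread(target=producer, args=([1, 2, 3], out))
t2 = threading.Thread(target=consumer, args=(out, sink, lambda s: s % 2 == 0))
t1.start(); t2.start(); t1.join(); t2.join()
print(sink)  # clock active at steps 0 and 2: [2, 6]
```

The point is that all the scheduling lives in those four semaphore operations; everything between them is plain sequential code.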
So while you did a lot of work for this language, I fail to see where your approach is beneficial compared to older ones. I mean, existing languages surely could be improved, but what could convince a user to switch to this one? (a hard question, sorry).
>"underthreads" could result in a unbalanced system where one thread does most of the work while others are waiting
Underthreads is a mechanism I propose to speed up the slowest top thread in cases where this can be done. Like, you benchmark the program and see that one thread is holding it up. This thread has a workload that's easy to multithread, so you use underthreads to (hopefully) speed this thread up and fix the bottleneck.
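If I've got the idea right, a sketch of that workflow (names invented, and note that in CPython this won't actually speed up CPU-bound work because of the GIL — it only shows the structure):

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    return x * x  # stand-in for the real per-item work

def slow_top_thread(items):
    # Benchmarking showed this top thread holds up the whole cycle...
    return [expensive(x) for x in items]

def slow_top_thread_with_underthreads(items, underthreads=4):
    # ...so its data-parallel inner loop is split among "underthreads",
    # while the other top threads are left untouched.
    with ThreadPoolExecutor(max_workers=underthreads) as pool:
        return list(pool.map(expensive, items))

items = list(range(6))
assert slow_top_thread(items) == slow_top_thread_with_underthreads(items)
```

Since `pool.map` preserves input order, the underthreaded version produces exactly the same result as the sequential one, which is what keeps the transformation deterministic.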
>Do different threads really want to write the same variable during one timestep?
No, they are not allowed to. Every shared variable has an owner thread which is the only thread that can write to it. Hence there's no need to worry.
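So the rule is single-writer, many-reader. A tiny sketch of how that ownership rule could be checked at runtime (the Fumurt compiler presumably enforces this statically; this class is purely illustrative):

```python
class OwnedVar:
    """A shared variable writable only by its owner thread."""

    def __init__(self, owner, value=None):
        self.owner = owner
        self.value = value

    def write(self, writer, value):
        # Only the owner thread may write; everyone else may only read.
        if writer != self.owner:
            raise PermissionError(f"{writer} is not the owner ({self.owner})")
        self.value = value

x = OwnedVar(owner="thread_a", value=0)
x.write("thread_a", 42)    # fine: the owner writes
try:
    x.write("thread_b", 7)  # any other thread is rejected
except PermissionError:
    pass
print(x.value)  # 42
```

With exactly one writer per variable there is no write-write race to order, which is why the write order within a timestep stops being a concern.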
>what could convince a user to switch to this one?
Hopefully nothing. It's useless as it is. It's meant more as a contribution to public thought about hard real-time languages. If someone decides to use my code for anything then that's cool, but that's not important.
Every search for Scade ends up pointing to Esterel Technologies, which confused me, I guess. Didn't really know about it. Signal sounds interesting!
Is Erlang deterministic? If I have two processes each send a message to a third process, then the messages could arrive in either of two orders - that's a race condition, and non-deterministic, isn't it?
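The race in question is easy to reproduce with any shared mailbox: the set of delivered messages is fixed, but the arrival order depends on scheduling. A sketch with a plain thread-safe queue standing in for an Erlang mailbox:

```python
import queue
import threading

mailbox = queue.Queue()

def proc(msg):
    mailbox.put(msg)  # analogous to sending a message to the third process

a = threading.Thread(target=proc, args=("from_a",))
b = threading.Thread(target=proc, args=("from_b",))
a.start(); b.start()
a.join(); b.join()

received = [mailbox.get(), mailbox.get()]
# The contents are deterministic; the order is not:
assert sorted(received) == ["from_a", "from_b"]
```

Either ordering of `received` is possible from run to run, which is exactly the kind of non-determinism being asked about.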
I can assure you that I did not spend a full semester making graphs. I know it's not the first deterministic concurrency model. Thank you for making me realize that the concurrency model described can be modeled using CSP. That's neat, considering all the analysis tools available for it. Determinism is not enough, however. The space of all thread-state combinations needs to be reined in, not least because while deadlocks & co. can be found using automated tools, subtler errors can't. Additionally, threads compete for resources, and you need to account for hidden sources of indeterminism like that. Two logically independent threads that can be modelled and trivially shown not to interfere with each other will definitely interfere with each other on a single-core system.
I realize that this is an unusually strict approach. I'm also terribly embarrassed if there's something I haven't considered.
Truly deterministic multithreading is really just single threading. Multithreading is by its nature nondeterministic. To make it deterministic you have to synchronize until just one thread at a time is making progress. That kind of defeats the purpose of multithreading. Synchronization is the enemy of scalability; you want to avoid it at all costs.
That's true if all operations and intermediate states must be deterministic, but if you only need some things to be deterministic then the cost is a lot lower. For instance, if you spawn a bunch of threads and join them in the same order they were created, the order of the results you gather from them can always be the same even if the threads didn't complete in the same order. The computation phase didn't pay any synchronization cost, but the gathering phase did.
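Concretely: each worker writes only its own result slot, the spawner joins in creation order, and the gathered list comes out the same on every run even though completion order varies. A sketch:

```python
import random
import threading
import time

def worker(i, results):
    time.sleep(random.random() * 0.01)  # threads finish in arbitrary order
    results[i] = i * 10                 # but each writes only its own slot

results = [None] * 5
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(5)]
for t in threads:
    t.start()                           # computation phase: no synchronization
for t in threads:
    t.join()                            # gathering phase: join in creation order
print(results)  # always [0, 10, 20, 30, 40]
```

The only synchronization paid is the joins at the end; everything the threads did in between was free of it.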
Animats | 10 years ago
But for real-time control?
GuamPirate | 10 years ago

ramchip | 10 years ago
tormeh | 10 years ago

junke | 10 years ago
[1] https://es.cs.uni-kl.de/publications/datarsg/Schn09.pdf
[2] https://en.wikipedia.org/wiki/SIGNAL_(programming_language)
quietplatypus | 10 years ago
- explicitly label shared data and control flow
- group together where the top threads are spawned
pmarreck | 10 years ago
If your tests depend on thread order, they are bad tests or your code or design is bad. Period.
EllipticCurve | 10 years ago
I read the title and thought: oh, nice, an article about Erlang.
It seems like the perfect fit...?! Why reinvent the wheel?
chrisseaton | 10 years ago

ketralnis | 10 years ago
zem | 10 years ago

ktRolster | 10 years ago

f2f | 10 years ago

tormeh | 10 years ago
petke | 10 years ago

ketralnis | 10 years ago

bnegreve | 10 years ago