snapdangle | 5 years ago
When dealing with high-performance computing or real-time processing of high volumes of data, any fetch to RAM to load a function for dispatch is going to have _some_ impact in a tight loop. Add that up across all the libraries you have loaded for your application versus a ground-up implementation in K... Does that whole thing live in L3 along with the VM or interpreter + dependencies underneath it? It's doubtful.
My experience was simply using Kx's free developer IDE and seeing the performance differential on datasets myself. YMMV, but my (admittedly limited) experience leads me to believe there is a serious case to be made for the performance advantage of having all your computational logic live as close to your computational cores as possible.
See also the PhD thesis by the author of the OP article, where he presents a language where:
"The entire source code to the compiler written in this method requires only 17 lines of simple code compared to roughly 1000 lines of equivalent code in the domain-specific compiler construction framework, Nanopass, and requires no domain specific techniques, libraries, or infrastructure support."
Linked from the article, available here: https://scholarworks.iu.edu/dspace/handle/2022/24749
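As a rough illustration of the dispatch-overhead point (in Python/NumPy rather than K, and only a sketch, not a claim about K itself): a pure-Python loop pays an interpreter dispatch on every element, while a single vectorized call runs one tight compiled loop over contiguous memory, which is the kind of code an array language keeps hot in cache.

```python
# Sketch: per-element interpreter dispatch vs one vectorized pass.
# Not K; just a minimal demonstration of the same locality argument.
import timeit
import numpy as np

xs = np.arange(1_000_000, dtype=np.float64)

def python_loop():
    total = 0.0
    for x in xs:          # one dynamic dispatch per element
        total += x * x
    return total

def vectorized():
    # one call; a tight compiled loop over contiguous memory
    return float(np.dot(xs, xs))

t_loop = timeit.timeit(python_loop, number=3)
t_vec = timeit.timeit(vectorized, number=3)
print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.3f}s")
```

On typical hardware the vectorized version is orders of magnitude faster, even though both compute the same sum of squares; the gap is almost entirely dispatch and memory-traffic overhead, not arithmetic.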
dTal | 5 years ago
beagle3 | 5 years ago
The arguments for terse user-facing syntax are related, though:
The ability to see a whole program (17 lines vs 1000 lines) means you need much less human “working memory” or whatever the biological equivalent of cache is, to reason about the program.
It also means you can use your visual system’s pattern matching abilities because patterns are 5-10 characters rather than 5-10 pages as they often are in C++.
Totally different hardware, but it's still about the L1 cache and registers...
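The point about short visual patterns can be sketched (in Python rather than K; in K itself the terse form would be shorter still, something like `+/x*x`) by writing the same computation twice:

```python
# Same computation at two levels of verbosity. The terse form is one
# recognizable visual pattern; the verbose form must be read line by line.

data = [3, 1, 4, 1, 5, 9, 2, 6]

# terse: a single pattern the eye can match at a glance
sum_sq = sum(x * x for x in data)

# verbose: identical logic spread over several lines
total = 0
for i in range(len(data)):
    value = data[i]
    square = value * value
    total = total + square

assert sum_sq == total
print(sum_sq)  # 173
```

Both forms are correct; the argument is only that the short one fits in the reader's "working memory" the way a small working set fits in L1.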
snapdangle | 5 years ago