I am not sure how to really refine this thought I have had, but I have this fear that every language eventually gets so bloated and complicated that it has a huge barrier to entry.
The ones that stand out the most to me are C# and Typescript.
Microsoft has a large team dedicated to constantly improving these languages, and instead of exclusively focusing on making them easier to use or more performant, they are constantly adding features. After all, it is their job. They are incentivized to keep making them more complex.
The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.
One of the reasons I have so much fun working in node/Javascript these days is because it is simple and not much has changed in express/node/etc for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we eventually come out with a dozen ways to do an array and modern code becomes much more challenging to read.
When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
This is probably just old man ranting, but I think there's something there. The old version I used to debate about was C vs C++. Now look at modern C++, it's crazy powerful but so jam packed that many people have just gone back to C.
It has 3 ways to declare functions, multiple variations on arrow function syntax, a weird prototypal inheritance system, objects you can create with "new" on functions, object literals that can act as pseudo-classes, classes, decorators, for-i loops + map + filter + for-in loops (with hasOwn) + forEach, async / await + promises and an invisible but always-on event loop, object proxies, counter-intuitive array and mapping manipulations, lots of different ways to create said arrays and mappings, very rich destructuring, many weirdnesses in parameter handling, multiple ways to do imports that don't work in all contexts, exports, string concatenation + string interpolation, no integer type (but NaN), a "strict mode", two versions of comparison operators, a dangerous "with" keyword, undefined vs null, generators, sparse arrays, sets...
It also has complex rules for:
- scoping (plus global variables by default and hoisting)
- "this" values (and manual binding)
- type coercion (destroying commutativity!)
- automatic semicolon insertion
- "typeof" resolution
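A few of the items in that list, in miniature (plain JavaScript, runnable as-is):

```javascript
// Three ways to declare a function:
function add1(a, b) { return a + b; }           // declaration (hoisted)
const add2 = function (a, b) { return a + b; }; // expression
const add3 = (a, b) => a + b;                   // arrow (no own `this`)

// Type coercion destroying commutativity for `+`:
console.log(1 + '2'); // '12'
console.log('2' + 1); // '21'

// "typeof" resolution corner cases:
console.log(typeof null);      // 'object'
console.log(typeof undefined); // 'undefined'

// A sparse array: length 3, but index 1 holds no element at all.
const sparse = [1, , 3];
console.log(sparse.length, 1 in sparse); // 3 false
```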
On top of that, you execute it in various different implementations and contexts: several browser engines and nodejs at least, with or without the DOM, in or out web workers, and potentially with WASM.
There are various versions of the ECMA standard that change the features you have access to, unless you use a transpiler. And that's without even touching the ecosystem, since this is about the language; there would be too much to say anyway.
There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
I think in this specific case it's JavaScript's requirement for backwards compatibility that bloats it... but there's a lot you can ignore. Like, you can declare a variable with var, let or const but there's absolutely no reason to use var any more. I feel similarly about the proposals to introduce records and tuples: https://github.com/tc39/proposal-record-tuple... in most scenarios you'll probably be better off using records rather than objects, and maybe that's what folks will end up doing.
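For anyone who hasn't internalized why `var` is safely ignorable, the whole difference fits in a few lines (a minimal sketch):

```javascript
// `var` is function-scoped and hoisted; `let`/`const` are block-scoped.
function scopes() {
  if (true) {
    var a = 1; // visible to the whole function
    let b = 2; // confined to this block
  }
  return [typeof a, typeof b];
}
console.log(scopes()); // ['number', 'undefined']
```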
But boy does it all get confusing.
> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
I'm not so sure about that. I think we end up consuming a lot of these features through the TS types that get published alongside libraries. We just don't know it; we just get surprisingly intuitive type interfaces.
I've been meaning to write a longer essay on this for years, but I believe the reason for this observation is different cohorts.
Imagine you are a C# programmer just as C# 1.0 is released. C# is a fairly simple language at that time (and similar to other languages you already know), so you can get caught up on it fairly easily and quickly. A few years later, C# 2.0 comes out. It's got a handful of features, but not too much for you to absorb. Likewise C# 3.0, 4.0, etc. As long as you stay on the C# train, the rate of new features does not exceed the rate that you can learn them.
Years later, another person comes along and is new to C#, which is now at version 5.0. They are presented with a huge sprawling language and they have to learn nearly all of it at once to deal with the codebases they are contributing to. It's a nightmare. They long for a language that's actually, you know, simple.
So maybe they find some other newer language, Foo, which is at 1.0. It's small and they learn the whole thing. After a couple of years of happy productive use, they realize they would be a little more happy and productive if Foo had just one or two extra little features. They put in a request. The language team wants happy users so they are happy to oblige. The user is easily able to learn those new features. And maybe some other Foo users want other new things. 2.0 comes out, and they can keep up. They can stay on the train with 3.0, 4.0, etc.
They never explicitly asked for a complex language, but they have one and they're happy, because they've mastered the whole thing over a period of years. They've become part of the problem that bothered them so much years ago.
Fundamentally, the problem is that existing users experience a programming language as the delta between the latest version and the previous one. New users experience a programming language as the total sum of all of its features (perhaps minus features it has in common with other languages you already know). If you assume users can absorb information at a certain fixed rate, it means those two cohorts have very different needs and different experiences.
I don't think there's a silver bullet. The best you can hope for is that a language at 1.0 has as few bad ideas as possible. But no one seems to have perfect skill at that.
> When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
TypeScript today can be written the same way that TypeScript was when it first started to become popular. Yes there are additions all the time, but most of them are, as you observe, irrelevant to you. They're there to make it possible to type patterns that would otherwise be untypeable. That matters for library developers, not so much for application developers.
To the extent there's a barrier to entry, it seems largely one that can be solved with decent tutorials pointing to the simple parts that you're expected to use in your applications (and a culture of not overcomplicating things in application code).
> The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.
That's funny given many of the changes were made to make C# look more like JavaScript!
C# 6 introduced expression-bodied members for simplified syntax (like JavaScript), null-conditional operators, and string interpolation. C# 7 brought pattern matching, tuples, deconstruction, and local functions. C# 8 introduced nullable reference types for better null safety, async streams, and a more concise switch expression syntax. C# 9 to C# 12 added records, init-only properties, with expressions, raw string literals, global using directives, top-level statements, list patterns, and primary constructors.
In C#, if you need a string list you can do:
List<string> items = []; // Not as concise as JS but type safe.
As for TypeScript, nobody is supposed to use most of it -- unless you're authoring a library. You benefit from its features because somebody else is using them.
Languages draw inspiration from each other -- taking the good parts and incorporating them in. C# is a vastly better, easier, and safer language than it used to be and so is JavaScript.
This is why I always say the true beginner programming language is C.
Stupid easy to learn: have some loops, have some conditions, make some memory allocations. You will learn the fundamentals of computing as well, which you might otherwise ignore (unknowingly) if you start with something like JavaScript (where is this data living in my computer?).
Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features. Some things are simply more useful to libraries than line of business code - that's fine. The line of business code is much better thanks to it.
What did Bjarne Stroustrup supposedly say? There are two kinds of programming languages: the ones everybody complains about, and the ones nobody uses.
I'll put on my Scheme hat and say "with hygienic macros, people can add whichever language features they want." Maybe Rust is a good experiment along those lines: C++ with hygienic macros.
Everything that people keep using grows into a monster of complexity: programming languages, software, operating systems, law. You must maintain backward compatibility, and the urge to add a new feature is too great. There's a cost with moving to the new thing -- let's just put the new thing in the old thing.
It doesn't help how arcane the TS documentation is. Important docs live as frozen-in-amber changelog entries; huge tracts of pages "deprecated" yet still #1 on Google.
Google "typescript interfaces." #1 is a page that has been deprecated for years. How did this happen?
> instead of exclusively focusing on making them easier to use or more performant, they are constantly adding features
I appreciate that this is mostly just a generic rant, but it's not really suitable here, because this is a feature which is being added with the sole goal of improved performance.
There's only so much you can do to optimize the extremely dynamic regular objects in JS, and there's no hope of using them for shared-memory multithreading. The purpose of this proposal is to have a less dynamic kind of object which can be made more performant and which can be made suitable for shared-memory multithreading.
Do you have examples of unreadable C#? The language hasn't changed much, IMHO. You have new features, like records, but C# code looks pretty much like what I started with in 2009.
> One of the reasons I have so much fun working in node/Javascript these days is because it is simple and not much has changed in express/node/etc for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we eventually come out with a dozen ways to do an array and modern code becomes much more challenging to read.
The let keyword didn't exist in JS when Node was first released, nor did for/of, which while unstated in your post, is probably what you are thinking of when you posted this. The language has not stayed the same, at all.
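For context, the difference in question (both forms below are valid today; `for`/`of` arrived with ES2015):

```javascript
const items = ['a', 'b', 'c'];

// Pre-ES2015 style: index-based loop.
const seenOld = [];
for (var i = 0; i < items.length; i++) seenOld.push(items[i]);

// ES2015+: for/of walks any iterable directly, no index bookkeeping.
const seenNew = [];
for (const item of items) seenNew.push(item);

console.log(seenOld.join(',') === seenNew.join(',')); // true
```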
>The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.
The funny thing is if you used F# over a decade ago almost all the C# improvements seem familiar. They were lifted from F#, some of them badly.
And I know F# borrows a lot from OCaml. But it's hard to fathom why we get badly adapted F# features in C# instead of just getting F# as a first-class, Microsoft-backed language.
> sometimes modern C# code from experts looks unreadable to me
This is a culture issue and has always existed in C#, Java and C++ communities sadly (and I'm seeing this now with TS just as much, some Go examples are not beacons of readability either, I assume other languages suffer from this similarly).
In the past, people abused BinaryFormatter, XML-based DSLs, occasionally dynamic, Java-style factories of factories of factories, abuse of AOP, etc. Nowadays, this is supplanted by completely misplaced use of DDD, Mediatr, occasional AutoMapper use (oh god, at least use Mapperly or Mapster) and continuous spam of 3 projects and 57 file-sized back-ends for something that can be written under ~300 LOC split into two files using minimal API, records and pattern matching (with EF Core even!).
Neither is an example of good code, and the slow but steady realization that simplicity is key makes me hopeful. But the slow pace of that realization, and the new ways of making the job of a developer and a computer more difficult that are sometimes introduced by the community and libraries surrounding .NET, and by MS themselves, sour the impression.
Couldn't agree more. More features in a programming language makes it easier and more fun to write code, but makes it harder to read and maintain someone else's code. Considering more time is spent maintaining code as opposed to writing it (assuming the product is successful), readability is more important than writability.
I also don’t know how to refine my thought but it’s something along the lines of:
The people who are in a position to decide what features get added to a language are usually top experts and are unlikely to have any reasonable perspective on how complicated is too complicated for the rest of us.
If you live and breathe a language, just one more feature can seem like a small deal.
I think it becomes much more reasonable when that one more feature enables an entire set of capabilities and isn’t just something a library or an existing feature could cover.
> Microsoft has a large team dedicated towards improving these languages constantly
… and the people working on these projects need to deliver, else their performance review won’t be good, and their financial rewards (merit increase, bonus, refresher) will be low. And here we are.
Edit: I realize I’m repeating what you said too, but I wanted to make it more clear what’s going on.
I think part of the reason C# has changed so much as far as the language goes, not the CLR is actually because they took so many good things from Typescript and mixed them into the language. I think part of the reason Typescript has become so cumbersome to work with is because it has similarly added a lot of the good things from C#. Which may sound like a contradiction, but I actually agree with you that plain JavaScript is often great. That being said, you don’t actually have to use all the features of Typescript and it’s still much better for larger project in my opinion. Mostly because it protects developers from ourselves in a less “config on organisational level” way.
We already use regular JS for some of our internal libraries, because keeping up with how TS transpiles things into JS is just too annoying. Don't get me wrong, it gets it right 98% of the time, but because it's not every time, we have to check. The disadvantage is that we actually need/want some form of types. We get them via JSDoc, which can frankly do almost everything TypeScript does for us, but with much poorer IDE support (for the most part). It's also more cumbersome than simply having something like structs.
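A minimal sketch of the JSDoc approach described above (the function and its types are our own example, not from the parent's codebase); editors and `tsc --checkJs` will flag a call like `add('1', 2)` with no transpile step:

```javascript
/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

// Plain JS at runtime -- the annotations are just comments.
console.log(add(2, 3)); // 5
```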
C# since version 2 here, so I’m probably older. You said a lot of words, but gave no concrete examples of what’s bad about these languages. Linters will let you turn off different syntax usages based on your preference on what is readable or not, and C# is the only language I’m aware of where you can build them into the compilation chain and literally cause the compilation to halt instead of merely giving a style warning.
You don't have to use features you don't understand. "Complex" features exist for a reason. To the uninitiated something like generic types are quite inscrutable, but when you encounter the type of problem that they solve, their use becomes much more intuitive, and eventually familiarity yields an understanding and generics reveal themselves to be quite conceptually simple, they're just variables for types.
The general idea of types with a fixed layout seems great, but I'm a lot more dubious about the idea of unsafe blocks. The web is supposed to be a sandbox where we run untrusted code and with pretty good certainty expect that it can't crash the computer. Allowing untrusted code to specify "hey let me do stuff that can cause data races if not done correctly" is just asking for trouble, and also exploits. If shared structs are going to be adopted I think they probably need to be immutable after creation, or at the very least only modified with atomic operations.
Records and tuples can make a lot of logic much easier to read, and way less fragile. Not sure how they would play together with the shared structs, though.
I feel conflicted. Working with multithreaded stuff in JS is a huge PITA. This would go some way to making things easier. But it also feels like it would radically complicate JS. Unsafe blocks? Wow-eee.
With the rise of WASM part of me feels like we shouldn't even try to make JS better at multithreading and just use other languages better suited to the purpose. But then I'm a pessimist.
A better title: "A proposal for shared-memory multithreading". The term "struct" has a meaning in the C language that is somewhat misleading here, since the purpose is not data organization but rather enabling shared memory.
In my experience, the positive of JavaScript over other languages I have used- COBOL, Fortran, assembly, C, C++, Java - is the fine balance it has between expressibility and effectiveness.
I am not opposed to shared memory multi-threading, but question the cost/benefit ratio of this proposal. As many comments suggest, maintaining expressibility is a high priority and there are plenty of gotchas in JavaScript already.
As an example, I find the use of an upfront term like "async" to work quite well. If I see that term I can easily switch hats and look at the code differently. Perhaps we could look at other mechanisms, such as a keyword like "shm", rather than a new type, but what do I know?
[edit for clarity since I think faster than I can type]
I don't understand the need for the ever-growing list of "enhancements" to JS. Take Class for example.
Class is entirely unnecessary and, essentially, tries to turn JS into a class-oriented language, when at its core it is object-oriented.
I never create classes. I always create factory functions which, when appropriate, can accept other objects for composition.
And I don't use prototypes, because they are unnecessary as well. Thus sparing me the inconvenience, and potential issues, of using 'this'.
In my dreams those who want to turn JS into c# or Java should just create a language they like and stop piling on to JS.
But, at least so far, the core of JS has not been ruined.
That said, there are some new features I like. Promises/async/await, Map, Set, enhancements to Array being among them. But to my way of thinking they do not change the nature of the language in any way.
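The factory-function style described above, for anyone unfamiliar (names here are illustrative):

```javascript
// A factory function: no `class`, no prototype chain, no `this`.
// State is private via the closure; behavior can be composed by
// merging in other objects.
function createCounter(start = 0) {
  let count = start;
  return {
    increment: () => ++count,
    value: () => count,
  };
}

const counter = createCounter(10);
counter.increment();
console.log(counter.value()); // 11
```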
I suppose if you want a defined/packed memory layout you can already use SharedArrayBuffer and if you want to store objects in it you can use this BufferBackedObjects library they linked. https://github.com/GoogleChromeLabs/buffer-backed-object
I also expect that in browsers this will have the same cross-origin isolation requirements as SharedArrayBuffer that make it difficult to use.
Most of the JavaScript developers I've encountered recently refuse to use Map, and if you dare use it, they will say that it's complicated code and premature optimisation before even making an attempt to understand it.
I feel like trying to add fast data structures into JavaScript is futile, I think at this point it would be better to make it easier for JavaScript and the browser to interface with faster languages.
The only thing I would add to JavaScript at this point is first class TypeScript support so that we can ditch the transpilers.
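For what it's worth, the `Map` usage being objected to is usually as small as this (a generic sketch, not anyone's real code):

```javascript
// Map vs plain object: any key type, guaranteed insertion order, direct size.
const hits = new Map();
hits.set('home', 1);
hits.set('about', 1);
hits.set('home', (hits.get('home') ?? 0) + 1);

console.log(hits.get('home')); // 2
console.log(hits.size);        // 2
```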
// Step 1: Serialize the data to a string (payload here is a hypothetical example)
const jsonString = JSON.stringify({ hello: 'world' });
// Step 2: Convert the string to binary data
const encoder = new TextEncoder();
const encodedJson = encoder.encode(jsonString);
// Step 3: Create a SharedArrayBuffer and a Uint8Array view
const sharedArrayBuffer = new SharedArrayBuffer(encodedJson.length);
const sharedArray = new Uint8Array(sharedArrayBuffer);
// Step 4: Store the encoded data in the SharedArrayBuffer
sharedArray.set(encodedJson);
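The read side is symmetric. A self-contained round trip (the payload is a made-up example; `SharedArrayBuffer` needs Node or a cross-origin-isolated browser context):

```javascript
const jsonString = JSON.stringify({ hello: 'world' });

// Write side: encode into a shared buffer.
const encoded = new TextEncoder().encode(jsonString);
const sab = new SharedArrayBuffer(encoded.length);
const view = new Uint8Array(sab);
view.set(encoded);

// Read side (e.g. a worker that received `sab` via postMessage):
// copy out of shared memory before decoding.
const copy = new Uint8Array(view);
const decoded = new TextDecoder().decode(copy);
console.log(JSON.parse(decoded).hello); // 'world'
```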
Not sure if this is a good idea or not. On one hand, it'd be awesome for writing performance-oriented and threaded code in JS runtimes; the idea seems related to how C# structs already work (and tuples under the hood). Interop with WASM code might also be simplified if struct-like access were built in.
The bad is that people wouldn't necessarily be prepared for their semantics (are they value- or reference-based?) or for how to share prototypes between environments (mentioned as a problem in the proposal itself), and I'm not entirely sure whether this proposal would add complexity vs. security for Spectre-like attacks.
It'd be useful, but is it worth it? That's another question. (And would all major players see interest in it? Especially considering that it'd need to be a "JS0"-level proposal if they go in that direction. There was a post here a few days ago about layering runtimes, with JS0 being the core and everything else being syntax transforms on top.)
Fixed-layout structs seem like a no-brainer and a natural extension of the typed arrays. It's strange that both Java and JavaScript went so long without them. Interacting with many APIs (webgpu, FFI, ...) quickly becomes really unpleasant if you can't control data layout.
My head is spinning after skimming the sections on shared memory, locks, mutexes, etc. Implementation and adoption would probably be a decade-long saga. Not to mention teaching folks when to use these and how to use them correctly.
In e.g. Elixir these are non-issues. Please, just give us declarative structs that are immutable by default (if they’re really needed, make constructors and mutability opt-in). Isn’t the trend already toward more FP in JS?
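For comparison, the closest thing today's JS has to an immutable record is a frozen object (shallow only; a sketch):

```javascript
'use strict';
const point = Object.freeze({ x: 1, y: 2 });

let rejected = false;
try {
  point.x = 99; // TypeError in strict mode; silently ignored otherwise
} catch (e) {
  rejected = true;
}
console.log(point.x); // still 1 -- the write never lands
```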
I initially didn't like the high level idea, but I warmed up to it. My only concern is that the constructor isn't guaranteed to define the same fields with the same types, which kind of defeats the point.
I'd improve this proposal in two ways:
1. Explicitly define the layout with types. It's new syntax already, you can be spicy here.
2. Define a way for structs to be directly read into and out of ArrayBuffers. Fixed layout memory and serialization go hand in hand. Obviously a lot of unanswered questions here but that's the point of the process.
The unsafe block stuff, frankly, seems like it should be part of a separate proposal.
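Point 2 is roughly what has to be hand-rolled today. A sketch with an assumed two-float64 `{x, y}` layout:

```javascript
// Manual fixed-layout (de)serialization via DataView -- the boilerplate a
// struct-to-ArrayBuffer facility could eliminate. The layout is our assumption.
const LE = true; // little-endian

function writePoint(buffer, offset, point) {
  const dv = new DataView(buffer);
  dv.setFloat64(offset, point.x, LE);
  dv.setFloat64(offset + 8, point.y, LE);
}

function readPoint(buffer, offset) {
  const dv = new DataView(buffer);
  return { x: dv.getFloat64(offset, LE), y: dv.getFloat64(offset + 8, LE) };
}

const buf = new ArrayBuffer(16);
writePoint(buf, 0, { x: 1.5, y: -2.5 });
console.log(readPoint(buf, 0)); // { x: 1.5, y: -2.5 }
```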
When reading the proposal title, I thought this was for interop with WASM. Having fixed-size structs where every field has a wasm-compatible type would be beautiful for interop: a wasm function could just return or receive an instance of a typed struct. No more reading the result using a DataView or something like that; today we have to use something like BufferBackedObject for that.
When applying ReactJS in webdev after doing all kinds of engineering in all kinds of (mostly typed) languages on many runtimes, I was surprised that JS did not actually have a struct/record as seen in C/Pascal. Everything is a prototype that pretends it's an object, but without types and pointers, plus abstraction layers that added complexity to preserve backwards compatibility.
Not even the kind of object hack that many OO and compiled languages had. ES did not add it either, and my hopes were in WebAssembly.
This proposal, however, seems like the actual plan that I'd like to use a lot.
A lot of the code complexity was there to get simple guarantees of data quality. The alternative was to not care, either a feature or a caveat of the prototype model used.
[+] [-] leetharris|1 year ago|reply
The ones that stand out the most to me are C# and Typescript.
Microsoft has a large team dedicated towards improving these languages constantly and instead of exclusively focusing on making them easier to use or more performant, they are constantly adding features. After all, it is their job. They are incentivized to keep making it more complex.
The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.
One of the reasons I have so much fun working in node/Javascript these days is because it is simple and not much has changed in express/node/etc for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we eventually come out with a dozen ways to do an array and modern code becomes much more challenging to read.
When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
This is probably just old man ranting, but I think there's something there. The old version I used to debate about was C vs C++. Now look at modern C++, it's crazy powerful but so jam packed that many people have just gone back to C.
[+] [-] BiteCode_dev|1 year ago|reply
It has 3 ways to declare functions, multiple variations on arrow functions syntax, a weird prototyping inheritance system, objects you can create out of "new" on functions, object literals that can act an pseudo-classes, classes, decorators, for-i loop + maps + filter + for-in loop (with hasOwn) + forEach, async / await + promises and an invisible but always-on event loop, objects proxies, counter-intuitive array and mapping manipulations, lots of different ways to create said arrays and mappings, very rich destructuring, so many weirdnesses on parameter handling, multiple ways to do imports that don't work in all contexts, exports, string concatenation + string interpolation, no integer (but NaN), a "strict mode", two versions of comparison operators, a dangerous "with" keyword, undefined vs null, generators, sparse arrays, sets...
It also has complex rules for:
- scoping (plus global variables by default and hoisting)
- "this" values (and manual binding)
- type coercion (destroying commutativity!)
- semi-column automatic insertion
- "typeof" resolution
On top of that, you execute it in various different implementations and contexts: several browser engines and nodejs at least, with or without the DOM, in or out web workers, and potentially with WASM.
There are various versions of the ECMA standard that changes the features you have access to, unless you use a transpiler. But we don't even touch the ecosystem since it's about the language. There would be too much to say anyway.
There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
[+] [-] afavour|1 year ago|reply
But boy does it all get confusing.
> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
I'm not so sure about that. I think we end up consuming a lot of these features in the TS types that get published alongside libraries. We just don't know it, we just get surprisingly intuitive type interfaces.
[+] [-] munificent|1 year ago|reply
Imagine you are a C# programmer just as C# 1.0 is released. C# is a fairly simple language at that time (and similar to other languages you already know), so you can get caught up on it fairly easily and quickly. A few years later, C# 2.0 comes out. It's got a handful of features, but not too much for you to absorb. Likewise C# 3.0, 4.0, etc. As long as you stay on the C# train, the rate of new features does not exceed the rate that you can learn them.
Years later, another person comes along and is new to C#, which is now at version 5.0. They are presented with a huge sprawling language and they have to learn nearly all of it at once to deal with codebases they are contributing to. It's a nightmare. They long for a language that's actually, you know simple.
So maybe they find some other newer language, Foo, which is at 1.0. It's small and they learn the whole thing. After a couple of years of happy productive use, they realize they would be a little more happy and productive if Foo had just one or two extra little features. They put in a request. The language team wants happy users so they are happy to oblige. The user is easily able to learn those new features. And maybe some other Foo users want other new things. 2.0 comes out, and they can keep up. They can stay on the train with 3.0, 4.0, etc.
They never explicitly asked for a complex language, but they have one and they're happy, because they've mastered the whole thing over a period of years. They've become part of the problem that bothered them so much years ago.
Fundamentally, the problem is that existing users experience a programming language as the delta between the latest version and the previous one. New users experience a programming language as the total sum of all of its features (perhaps minus features it has in common with other languages you already know). If you assume users can absorb information at a certain fixed rate, it means those two cohorts have very different needs and different experiences.
I don't think there's a silver bullet. The best you can hope for is that a language at 1.0 has as few bad ideas as possible. But no one seems to have perfect skill at that.
[+] [-] lolinder|1 year ago|reply
TypeScript today can be written the same way that TypeScript was when it first started to become popular. Yes there are additions all the time, but most of them are, as you observe, irrelevant to you. They're there to make it possible to type patterns that would otherwise be untypeable. That matters for library developers, not so much for application developers.
To the extent there's a barrier to entry, it seems largely one that can be solved with decent tutorials pointing to the simple parts that you're expected to use in your applications (and a culture of not overcomplicating things in application code).
[+] [-] wvenable|1 year ago|reply
That's funny given many of the changes were made to make C# look more like JavaScript!
C# 6 introduced expression-bodied members for simplified syntax (like JavaScript), null-conditional operators, and string interpolation. C# 7 brought pattern matching, tuples, deconstruction, and local functions. C# 8 introduced nullable reference types for better null safety, async streams, and a more concise switch expression syntax. C# 9 to C# 12 added records, init-only properties, with expressions, and raw string literals, global using directives, top-level statements, list patterns, and primary constructors.
In C#, if you need a string list you can do:
As for TypeScript, nobody is supposed to use most of it -- unless you're authoring a library. You benefit from it's features because somebody else is using them.Languages draw inspiration from each other -- taking the good parts and incorporating them in. C# is a vastly better, easier, and safer language than it used to be and so is JavaScript.
[+] [-] eddd-ddde|1 year ago|reply
Stupid easy to learn, have some loops, have some conditions, make some memory allocations. You will learn about the fundamentals of computing as well, which you might as well ignore (unknowingly) if you start with something like JavaScript (where is this data living in my computer?).
[+] [-] throw49sjwo1|1 year ago|reply
Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features. Some things are simply more useful to libraries than line of business code - that's fine. The line of business code is much better thanks to it.
[+] [-] MathMonkeyMan|1 year ago|reply
I'll put on my Scheme hat and say "with hygienic macros, people can add whichever language features they want." Maybe Rust is a good experiment along those lines: C++ with hygienic macros.
Everything that people keep using grows into a monster of complexity: programming languages, software, operating systems, law. You must maintain backward compatibility, and the urge to add a new feature is too great. There's a cost to moving to the new thing -- let's just put the new thing in the old thing.
[+] [-] Shiny_Gyrodos|1 year ago|reply
I've been learning steadily for 8 or so months now and at no point have I felt the language was unapproachable due to excessive features.
Looking back on what each new version added, I don't think any of the additions were damaging to the simplicity of C#.
I do likely have a biased perspective though, as I use newer C# features every day.
[+] [-] lelandfe|1 year ago|reply
Google "typescript interfaces." #1 is a page that has been deprecated for years. How did this happen?
[+] [-] bakkoting|1 year ago|reply
I appreciate that this is mostly just a generic rant, but it's not really suitable here, because this is a feature which is being added with the sole goal of improved performance.
There's only so much you can do to optimize the extremely dynamic regular objects in JS, and there's no hope of using them for shared-memory multithreading. The purpose of this proposal is to have a less dynamic kind of object which can be made more performant and which can be made suitable for shared-memory multithreading.
[+] [-] rezonant|1 year ago|reply
The let keyword didn't exist in JS when Node was first released, nor did for...of, which, while unstated in your post, is probably what you were thinking of. The language has not stayed the same, at all.
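To make the point concrete, here's a sketch of the pattern in question (variable names are illustrative); both `let` and for...of arrived with ES2015, years after Node's 2009 debut:

```javascript
// Both `let` and for...of are ES2015 features.
let items = ['a', 'b', 'c'];

const upper = [];
for (const item of items) {
  // for...of iterates values directly, no index bookkeeping
  upper.push(item.toUpperCase());
}
// upper is now ['A', 'B', 'C']
```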
[+] [-] carlmr|1 year ago|reply
The funny thing is, if you used F# over a decade ago, almost all the C# improvements seem familiar. They were lifted from F#, some of them badly.
And I know F# borrows a lot from OCaml. But it's hard to fathom why we get badly adapted F# features in C# instead of F# itself becoming a first-class Microsoft-backed language.
[+] [-] neonsunset|1 year ago|reply
This is a culture issue and has always existed in C#, Java and C++ communities sadly (and I'm seeing this now with TS just as much, some Go examples are not beacons of readability either, I assume other languages suffer from this similarly).
In the past, people abused BinaryFormatter, XML-based DSLs, occasionally dynamic, Java-style factories of factories of factories, abuse of AOP, etc. Nowadays, this is supplanted by completely misplaced use of DDD, Mediatr, occasional AutoMapper use (oh god, at least use Mapperly or Mapster) and continuous spam of 3 projects and 57 file-sized back-ends for something that can be written under ~300 LOC split into two files using minimal API, records and pattern matching (with EF Core even!).
Neither is an example of good code. The slow but steady realization that simplicity is key makes me hopeful, but the pace of that realization, and the new ways of making life harder for developers and computers that keep being introduced by the community, by libraries surrounding .NET, and sometimes by MS themselves, sour the impression.
[+] [-] breadwinner|1 year ago|reply
[+] [-] azangru|1 year ago|reply
You don't have to use every feature of the language. Especially not when you are just learning.
> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
Exactly. But no one seems to be arguing that TypeScript has a huge barrier to entry.
[+] [-] paulddraper|1 year ago|reply
Geez I'd sure hope not.
If you liked C++11, you can use C++11. Every compiler, platform, and library will support it.
No one erased it and made you go back to C99.
[+] [-] Waterluvian|1 year ago|reply
The people who are in a position to decide what features get added to a language are usually top experts and are unlikely to have any reasonable perspective on how complicated is too complicated for the rest of us.
If you live and breathe a language, just one more feature can seem like a small deal.
I think it becomes much more reasonable when that one more feature enables an entire set of capabilities and isn’t just something a library or an existing feature could cover.
[+] [-] branko_d|1 year ago|reply
"There are only two kinds of languages: the ones people complain about and the ones nobody uses."
[+] [-] drclau|1 year ago|reply
… and the people working on these projects need to deliver, else their performance review won’t be good, and their financial rewards (merit increase, bonus, refresher) will be low. And here we are.
Edit: I realize I’m repeating what you said too, but I wanted to make it more clear what’s going on.
[+] [-] dartos|1 year ago|reply
At least we moved past webpack mostly.
[+] [-] pier25|1 year ago|reply
Obviously they can't make TS more performant (since it doesn't execute), but C# is very performant and even surpasses Go in the TechEmpower benchmarks.
[+] [-] devjab|1 year ago|reply
We already use regular JS for some of our internal libraries, because keeping up with how TS transpiles things into JS is just too annoying. Don't get me wrong, it gets it right 98% of the time, but because it's not every time, we have to check. The disadvantage is that we actually need/want some form of types. We get them via JSDoc, which can frankly do almost everything TypeScript does for us, but with much poorer IDE support (for the most part). It's also more cumbersome than simply having something like structs.
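For anyone unfamiliar with the approach, here's a minimal sketch of JSDoc-based typing in a plain .js file (the function and shape here are made up for illustration; TypeScript checks these comments when `checkJs` is enabled):

```javascript
/**
 * Plain JS, but the type checker reads the JSDoc annotations.
 * @param {{ id: number, name: string }} user - expected object shape
 * @returns {string} a display label for the user
 */
function label(user) {
  return `${user.id}: ${user.name}`;
}

// Works at runtime with no transpile step at all:
const text = label({ id: 1, name: 'Ada' }); // '1: Ada'
```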
[+] [-] egeozcan|1 year ago|reply
Talking about JS proposals, I'm looking forward to this one: https://github.com/tc39/proposal-record-tuple
Records and tuples can make a lot of logic much easier to read, and way less fragile. Not sure how they would play together with the shared structs though.
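For reference, the syntax sketched in the tc39 proposal README looks roughly like this (proposal syntax only, not implemented in any engine, so treat it as illustrative):

```
// Proposal syntax only: does not run in any current engine
const pos = #{ x: 1, y: 2 };   // Record: deeply immutable object
const pair = #[1, 2];          // Tuple: deeply immutable array
pos === #{ x: 1, y: 2 };       // true: compared by value, not identity
```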
[+] [-] afavour|1 year ago|reply
With the rise of WASM part of me feels like we shouldn't even try to make JS better at multithreading and just use other languages better suited to the purpose. But then I'm a pessimist.
[+] [-] talkingtab|1 year ago|reply
In my experience, the positive of JavaScript over other languages I have used- COBOL, Fortran, assembly, C, C++, Java - is the fine balance it has between expressibility and effectiveness.
I am not opposed to shared memory multi-threading, but question the cost/benefit ratio of this proposal. As many comments suggest, maintaining expressibility is a high priority and there are plenty of gotchas in JavaScript already.
As an example, I find the use of an upfront term like "async" to work quite well. If I see that term I can easily switch hats and look at code differently. Perhaps we could look at other mechanisms, using the term "shm", over a new type, but what do I know?
[edit for clarity since I think faster than I can type]
[+] [-] zanethomas|1 year ago|reply
Class is entirely unnecessary and essentially tries to turn JS, which is object-oriented at its core, into a class-oriented language.
I never create classes. I always create factory functions which, when appropriate, can accept other objects for composition.
And I don't use prototypes, because they are unnecessary as well. Thus sparing me the inconvenience, and potential issues, of using 'this'.
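A minimal sketch of the factory-function style being described (names are illustrative): state lives in a closure, so there's no `this`, no `new`, and no prototype chain involved.

```javascript
// Factory function: no class, no prototype, no `this`.
function createCounter(start = 0) {
  let count = start; // private state via closure

  return {
    increment() { count += 1; return count; },
    value() { return count; },
  };
}

const counter = createCounter(10);
counter.increment();
// counter.value() is now 11; `count` itself is unreachable from outside
```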
In my dreams those who want to turn JS into c# or Java should just create a language they like and stop piling on to JS.
But, at least so far, the core of JS has not been ruined.
That said, there are some new features I like. Promises/async/await, Map, Set, enhancements to Array being among them. But to my way of thinking they do not change the nature of the language in any way.
[+] [-] modeless|1 year ago|reply
I suppose if you want a defined/packed memory layout you can already use SharedArrayBuffer and if you want to store objects in it you can use this BufferBackedObjects library they linked. https://github.com/GoogleChromeLabs/buffer-backed-object
I also expect that in browsers this will have the same cross-origin isolation requirements as SharedArrayBuffer that make it difficult to use.
[+] [-] voidr|1 year ago|reply
I feel like trying to add fast data structures into JavaScript is futile, I think at this point it would be better to make it easier for JavaScript and the browser to interface with faster languages.
The only thing I would add to JavaScript at this point is first class TypeScript support so that we can ditch the transpilers.
[+] [-] i007|1 year ago|reply
    // Step 2: Convert the string to binary data
    const encoder = new TextEncoder();
    const encodedJson = encoder.encode(jsonString);

    // Step 3: Create a SharedArrayBuffer and a Uint8Array view
    const sharedArrayBuffer = new SharedArrayBuffer(encodedJson.length);
    const sharedArray = new Uint8Array(sharedArrayBuffer);

    // Step 4: Store the encoded data in the SharedArrayBuffer
    sharedArray.set(encodedJson);
Now you can use Atomics, no?
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
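For what it's worth, Atomics only operate on integer typed-array views, so a minimal working example looks like this (the values are arbitrary; structured data still has to be serialized into the buffer first, as in the steps above):

```javascript
// Atomics require an integer TypedArray backed by a SharedArrayBuffer.
const sab = new SharedArrayBuffer(4); // room for one Int32
const view = new Int32Array(sab);

Atomics.store(view, 0, 1);  // atomic write
Atomics.add(view, 0, 41);   // atomic read-modify-write
// Atomics.load(view, 0) now reads 42, safely, from any thread sharing `sab`
```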
[+] [-] whizzter|1 year ago|reply
The bad is that people wouldn't necessarily be prepared for their semantics (are they value- or reference-based?), or for how to share prototypes between environments (mentioned as a problem in the proposal itself); I'm also not entirely sure whether this proposal would add complexity when securing against Spectre-like attacks.
It'd be useful, but whether it's worth it is another question. And would all major players see interest in it? Especially considering that it'd need to be a "JS0"-level proposal if they go in that direction. (There was a post here a few days ago about layering runtimes, with JS0 being the core and everything else being syntax transforms on top.)
[+] [-] ralmidani|1 year ago|reply
In e.g. Elixir these are non-issues. Please, just give us declarative structs that are immutable by default (if they’re really needed, make constructors and mutability opt-in). Isn’t the trend already toward more FP in JS?
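The closest you can get in today's JS is a frozen object from a factory; a rough sketch (names illustrative), though note it's shallow and checked only at runtime, which is exactly why a declarative, immutable-by-default struct would be nicer:

```javascript
// Rough approximation of an immutable "struct" in current JS.
function makePoint(x, y) {
  // Object.freeze makes the top-level properties read-only (shallow only;
  // nested objects would remain mutable).
  return Object.freeze({ x, y });
}

const p = makePoint(1, 2);
// Attempting `p.x = 99` throws in strict mode and is silently ignored
// otherwise; either way p.x stays 1.
```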
[+] [-] bastawhiz|1 year ago|reply
I'd improve this proposal in two ways:
1. Explicitly define the layout with types. It's new syntax already, you can be spicy here.
2. Define a way for structs to be directly read into and out of ArrayBuffers. Fixed layout memory and serialization go hand in hand. Obviously a lot of unanswered questions here but that's the point of the process.
The unsafe block stuff, frankly, seems like it should be part of a separate proposal.
[+] [-] barrystaes|1 year ago|reply
When applying ReactJS in webdev, after doing all kinds of engineering in all kinds of (mostly typed) languages and runtimes, I was so surprised that JS did not actually have a struct/record as seen in C/Pascal. Everything is a prototype that pretends it's an object, but without types and pointers, plus abstraction layers that add complexity to preserve backwards compatibility.
Not even some object hack that many OO and compiled languages had. ES did not add it either, so my hopes were in WebAssembly.
This proposal, however, seems like the actual plan that I'd like to use a lot.
A lot of the code complexity was there to get simple guarantees about data quality. The alternative was to not care, which is either a feature or a caveat of the prototype model, depending on how you look at it.