Eliah_Lakhin | 6 months ago | on: Zedless: Zed fork focused on privacy and being local-first
Eliah_Lakhin | 7 months ago | on: Partially Matching Zig Enums
Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.
The language's semantics force you to write programs in a way that is convenient for the optimizing compiler.
Not always, but in many cases, a program written in Rust is likely to end up heavily optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you would always have to think about what the compiler is doing under the hood.
It's true that neither safety nor performance is critical for many applications, but from that perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.
Eliah_Lakhin | 1 year ago | on: From Reform to Ruin in the USSR
However, to a large extent, this is the result of cutting expenditures on projecting Soviet influence abroad. The Soviet Union had enormous spending on subsidizing friendly regimes and their economies around the world, as well as maintaining a military presence. The same applies to some former Soviet republics that the USSR had to subsidize for decades.
I think we are observing similar processes in the United States today. They are attempting to cut spending and perhaps even reduce their military presence simply because they cannot afford it in the long term without sacrificing their own prosperity.
Eliah_Lakhin | 1 year ago | on: From Reform to Ruin in the USSR
Another important factor was China's significantly larger population compared to the Soviet Union, combined with notably lower labor costs. All these factors eventually propelled the country to great prosperity. Without them, I think China today wouldn't look much different from any other East Asian country.
The Soviets simply didn't have such opportunities. Leaving aside the fact that Western countries never offered them a similar deal, Soviet labor simply couldn't match the industrial productivity enabled by the cheap workforce of East Asian countries.
The USSR had vast land and abundant natural resources, but its population density was relatively low. Additionally, it already possessed advanced technologies and a well-developed industrial base. From the U.S. perspective, such a country looked more like a potential (and actual) competitor rather than just another member of the Western economic system.
I'm not a big fan of a planned economy. And I believe that the lack of social freedoms and democratic institutions, typical of Western countries, was a major factor in the Soviet collapse. But regardless of the decisions and reforms Soviet authorities could have made after World War II, I think the country was doomed either way.
The Soviet Union simply didn't have a large enough population to effectively develop such an enormous landmass. After WWII, significant male losses and the effects of the second demographic transition led to continuous population decline. The only reasonable course of action would have been to relinquish part of its global influence and territory, which it eventually did — but perhaps too late. However, the authorities of any country rarely want to give up power, and the Soviets were no exception.
As for turning points, I don't think it was the NEP. More likely, the Communist (October) Revolution itself was the crucial historical moment. The Russian Empire was a relatively promising state, evolving in the right direction. It was gradually building democratic institutions and transitioning to a liberal economy. Its industrial development was progressing similarly to other European countries — perhaps with some lag, but still moving forward.
Perhaps the real turning point in Russian history was when radicals, driven by controversial economic and social ideas, inherited a wealthy country and used its potential for large-scale social experiments.
Eliah_Lakhin | 1 year ago | on: Structured Editing and Incremental Parsing
It's essentially the experience most of us already have when using Visual Studio, IntelliJ, or any modern IDE on a daily basis.
The term "incremental parsing" might be a bit misleading. A more accurate (though wordier) term would be a "stateful parser capable of reparsing the text in parts". The core idea is that you can write text seamlessly while the editor dynamically updates local fragments of its internal representation (usually a syntax tree) in real time around the characters you're typing.
An incremental parser is one of the key components that enable modern code editors to stay responsive. It allows the editor to keep its internal syntax tree synchronized with the user's edits without needing to reparse the entire project on every keystroke. This stateful approach contrasts with stateless compilers that reparse the entire project from scratch.
This continuous (or incremental) patching of the syntax tree is what enables modern IDEs to provide features like real-time code completion, semantic highlighting, and error detection. Essentially, while you focus on writing code, the editor is constantly maintaining and updating a structural representation of your program behind the scenes.
The article's author suggests an alternative idea: instead of reparsing the syntax tree incrementally, the programmer would directly edit the syntax tree itself. In other words, you would be working with the program's structure rather than its raw textual representation.
This approach could simplify the development of code editors. The editor would primarily need to offer a GUI for tree structure editing, which might still appear as flat text for usability but would fundamentally involve structural interactions.
Whether this approach improves the end-user experience is hard to say. It feels akin to graphical programming languages, which already have a niche (e.g., visual scripting in game engines). However, the challenge lies in the interface.
The input devices we use (keyboards) were designed for natural text input and have limitations when it comes to efficiently interacting with structural data. In theory, these hurdles could be overcome with time, but for now, the bottleneck is mostly a question of UI/UX design. And as of today, we lack a clear, efficient approach to this problem.
Eliah_Lakhin | 1 year ago | on: Tiny Glade 'built' its way to >600k sold in a month
I'm genuinely proud of the authors — they've set an inspiring example and given us hope for a bright future where the Rust ecosystem serves as a foundation for unique and creative game development projects.
Eliah_Lakhin | 1 year ago | on: Source-Available Is Meaningless
The term "open source" has a well-established reputation as "free as in beer", whether we like it or not. So why attach such a label to a commercial product?
Commercial software isn't inherently a bad thing. In fact, it's even better if the author or business can afford to publish it in source code form, making their services more transparent to end users.
As for the term "source available", it isn't as well-established as "open source". Its meaning may not be clear to the audience, and there's a certain lack of trust associated with it. However, this could change over time if more projects identify as "source available" and maintain clear and honest distribution and usage policies.
Eliah_Lakhin | 1 year ago | on: The Static Site Paradox
A startup's market value is often closely tied to its number of employees. From an investor's perspective, a company with 1,000 employees is typically valued much higher than a small team of 37 programmers — regardless of the revenue generated per employee, or even if the company isn’t generating revenue at all. This is largely because interest rates remained very low for a long time, making it reasonable to borrow investment funds for promising companies with large staffs.
However, those employees need to be kept busy with something that appears useful, at least in theory. I believe this is one of the primary reasons we see such complex solutions for relatively simple tasks, which sometimes might not require a large team of advanced web developers or sophisticated technologies at all.
Eliah_Lakhin | 1 year ago | on: 77% of employees report AI has increased workloads and hampered productivity
Back then, I often felt that the software products we were developing could have been created by much smaller teams of experienced programmers, or even by a single programmer. I'm referring specifically to direct programming, excluding management, QA, and devops. My professional experience is primarily with startups and small companies, but I believe this idea could extend to some larger products as well.
This raises the question of whether I, as a programmer, was productive enough. I believe that my colleagues and I were quite productive, and we performed our daily tasks honestly and fairly. However, I feel that our responsibilities were artificially limited. I think my productivity could have been much higher if my responsibilities within the company had been expanded. At least, this is what my personal, non-commercial experience with my pet projects in my spare time suggests.
I understand that a pet project is not the same as a business solution, but I believe the core issue is not that AI affects programmers' productivity, but that AI has helped management realize that increasing the number of programmers does not necessarily improve product quality.
I also found Josh Christiane's video on this topic very insightful: https://www.youtube.com/watch?v=hAwtrJlBVJY
Eliah_Lakhin | 1 year ago | on: Should I open source a commercial product?
As for community feedback, it doesn't necessarily have to be negative. Recently, I published my project under a non-standard license and received generally positive feedback[1], despite my project being in a very niche field.
Eliah_Lakhin | 1 year ago | on: Should I open source a commercial product?
I'm not a lawyer, and it's generally a good idea to consult a specialist when drafting licensing terms. However, in my personal opinion, it's often better to draft a project-specific license yourself to start, rather than using a popular open-source license, most of which are not aligned with typical commercialization goals.
Eliah_Lakhin | 1 year ago | on: GitHub Copilot is not infringing your copyright (2021)
I'm not sure this is applicable to licensed programs because a book is sold, not licensed.
> The output of a machine simply does not qualify for copyright protection – it is in the public domain.
As far as I know, the output of a compiler that builds executables from copyrighted source code is still subject to copyright protection. Is software like an LLM fundamentally different from a compiler in this regard?
In my opinion, the author's argument has several flaws, but perhaps a more important question is whether society would benefit from making an exception for LLM technologies.
I think it depends on how this technology will be used. If it is intended for purely educational purposes and is free of charge for end users, maybe it's not that bad. After all, we have Wikipedia.
However, if the technology is intended for commercial use, it might be reasonable to establish common rules for paying royalties to the original authors of the training data whenever authorship can be clearly determined. From this perspective, it could further benefit authors of open-source and possibly free software too.
Eliah_Lakhin | 1 year ago | on: Free and Open Source Software–and Other Market Failures
There is nothing inherently wrong with Facebook making React open-source. React undoubtedly benefits everyone.
However, the issue lies in the fact that this practice doesn't create a true "market". Facebook has made a relatively small and insignificant portion of their source code available for free, which doesn't impact their business significantly. Meanwhile, they have encouraged thousands of programmers around the world to develop React extensions and publish them for free under similar terms. For an individual programmer, unlike Facebook, this means giving away 100% of their work effort without charge. While this benefits society in terms of knowledge sharing, it almost always financially benefits businesses and big tech companies.
Overall, this model creates a situation where most programmers end up doing part of the job for businesses for free, and they have to earn their living by working for these companies as well.
This model exploits programmers' labor in two interconnected ways. On top of that, there is widespread public messaging that publishing under OSS licenses is moral and the only way to go.
Eliah_Lakhin | 1 year ago | on: Free and Open Source Software–and Other Market Failures
However, the core idea of sharing source code is not exclusive to the FOSS movement. This concept aligns with the original intentions of the Berne Convention. The United States adopted the Berne Convention relatively late, in 1989. Before that, source code made publicly available without prior copyright formalities was effectively considered public domain. This situation allowed businesses and startups to exploit these sources to create closed-source commercial products without crediting the original authors.
The Berne Convention was a game changer. It introduced a new rule that simply making source code publicly available automatically grants the author exclusive copyright of their work, without any bureaucratic hurdles.
This rule opened up new possibilities for programmers to create open-source projects in the broadest sense. Nevertheless, due to historical reasons, it was not an easy task for the public to understand this new reality. The FOSS movement worked hard to convince people that publishing software in source form is perfectly fine.
However, the free-software philosophy can be quite restrictive compared to what the Berne Convention actually allows authors to do with their work. This ideology is so pervasive today that many programmers believe that publishing source code must involve using an OSI-approved (F)OSS license. Any deviation from this is often seen as supporting outdated business models and harming the programming community's ability to share their work with the public.
These misconceptions likely arise because the FOSS movement has taken the lead in promoting the principles of the Berne Convention, adding its ideological restrictions. The rarely acknowledged truth is that the Berne Convention offers a wide range of possibilities that could benefit the community of authors more than big-tech corporations. The Four Freedoms of Free Software significantly restrict these options for authors.
More importantly, big businesses have already adapted to this new reality and are utilizing this philosophy to their advantage.
Eliah_Lakhin | 1 year ago | on: Show HN: Lady Deirdre 2 – Rust Framework for Compilers and LSP Servers
That's an interesting idea. Tree-Sitter and Lady Deirdre are quite different in their approaches to parsing. Tree-Sitter is a GLR parser, while Lady Deirdre is a recursive-descent parser. In the Lady Deirdre API, there are customizable traits that let you define new types of files with parsers ("documents" in terms of Lady Deirdre). Perhaps it would be possible to create an adapter, but I would implement it as a separate crate.
> what happens with open-source projects that use this that are then used in commercial projects?
Good question. The idea is that if you merely link to Lady Deirdre in your Cargo.toml dependencies, the obligation falls on the commercial project's authors: they are the ones who compile the actual executable intended for sale, downloading both your crate and mine in the process. However, I'm not a lawyer, and this is not legal advice; just my thoughts.
Eliah_Lakhin | 1 year ago | on: Show HN: Lady Deirdre 2 – Rust Framework for Compilers and LSP Servers
That's correct. The license agreement requires purchasing a commercial license per product if it reaches a certain revenue threshold. I believe this price should be feasible for a business, and this restriction is not unfair for regular programmers either. To clarify, the license needs to be renewed annually to continue receiving new versions from me. Additionally, I reserve the right to change the price in the future. However, license renewal is not a strict obligation. You can continue using the previous versions if you buy the license at least once.
The idea is to replace the donation model widespread in typical OSS projects with a formal obligation to purchase a license. I believe this approach more accurately expresses what the authors really want in exchange for their labor.
Eliah_Lakhin | 1 year ago | on: Show HN: Lady Deirdre 2 – Rust Framework for Compilers and LSP Servers
I agree with your point about the licensing. I would also add that tools for developing compiler front-ends are quite a niche market, so, honestly speaking, I don't plan to earn much from my project regardless of the license terms. This work is part of a higher-level, in-progress toolset that is closer to end users. I split it off as a separate project primarily for public preview, with some restrictions on distribution and use, as I haven't decided on the overall toolset's distribution model yet. It's possible that I will change the licensing terms of Lady Deirdre in the future to something less restrictive (maybe even MIT) to make it more popular; this is just not my current goal. I apologize for any inconvenience my current licensing terms may cause.
Eliah_Lakhin | 1 year ago | on: Show HN: Lady Deirdre 2 – Rust Framework for Compilers and LSP Servers
> why the seemingly needless location of everything down in a "work" folder?
It's just easier for me to organize the filesystem this way on my local machine: everything unrelated to development stays outside the work directory. The license agreement also refers to the "work", so it may be clearer for users if the work directory is explicitly dedicated to it.
> Is there something else that you envision one day living at the top-level which you just planned for by putting everything someone would care about one further click away?
Everything that I planned to publish related to Lady Deirdre is already in the repository. I use this project in my other programming-language project, which I plan to release soon, but that will live in a separate GitHub repo. Actually, Lady Deirdre was initially split off from the language project's codebase because I thought it might be useful to other programming-language authors.
Eliah_Lakhin | 1 year ago | on: Vulkan Tutorial
The issue here is that users who chose Vulkan probably do care about these details, but the API obscures them from anyone without prior experience with GPU architectures.
At the same time, the API is too low-level for other users who just want to start drawing polygons on the screen right away, as they could with OpenGL.
Eliah_Lakhin | 1 year ago | on: Use context-free grammars instead of parser combinators and PEG