printf_kek0's comments
printf_kek0 | 8 years ago | on: Nuklear: A single-header ANSI C GUI library
> [...] the standard should have included a keyword -- or the compiler authors a flag -- to allow people to opt in to these sorts of optimizations rather than requiring them to opt out.
As an amateur Linux kernel hacker, I see hacks in the kernel that work around compiler bugs and unexpected behavior where the compiler deviates from the standard. The rants on lkml seem to assign most of the blame to the gcc authors. Here is one of Linus' (many) denunciations of gcc:
https://lkml.org/lkml/2003/2/26/158
But see also Andrew T.'s answer: https://stackoverflow.com/a/2771041 which claims - if I understand correctly - that strict aliasing was already part of the C89/C90 standard, but that compiler authors didn't implement it correctly at first.
> It is way too late to change the way C works by default by doing stuff like this.
One thing that I am confused about in your explanation is this:
> Under these conditions, a single-header library can result in "riskier" optimizations that the compiler wouldn't attempt if the same code resided in its own translation unit.
How exactly does a compiler generate "riskier" optimizations from a single header as opposed to separate translation units? After the pre-processing phase, I fail to see why this would be any less safe.
printf_kek0 | 8 years ago | on: Nuklear: A single-header ANSI C GUI library
However, having used a few of these "single-header" libraries, my main concern is navigating a 9000-sloc header file as opposed to a neatly refactored multi-file version...
An explanation of how undefined behaviour is possible would be welcome.
printf_kek0 | 8 years ago | on: Predicting Random Numbers in Ethereum Smart Contracts
There is a manifest contradiction here created by this choice of words.
printf_kek0 | 8 years ago | on: Three months of content moderation for Facebook in Berlin
It would be inconsistent with the principle of free speech to sanction certain things from being said in public. Even racist, offensive and blatantly stupid bullshit (like your example) should not be censored.
The reason is that, paradoxically, if we assume people are capable of rational discussion and debate, they will eventually see, through their own reasoning, where their beliefs or statements were in error.
To borrow my previous analogy, _hate-speech_ laws are akin to telling a child "obey, because I am your father", as opposed to letting people self-correct their ethics through open debate and questioning.
printf_kek0 | 8 years ago | on: Three months of content moderation for Facebook in Berlin
Regulatory requirements notwithstanding, a policy that dictates what content gets filtered to users is analogous to a parent forbidding a child from watching an age restricted movie.
Although I could make a slippery-slope argument here, the more salient point is that content moderation is essentially a form of social engineering. If you think I am exaggerating, but have never seen video footage of what _real war_ does to real human beings, I would encourage you to do so; then consider whether you still feel the same apathy whenever "Suicide bomber in <place_in_middle_east> kills x" appears in your feed.
IMHO, people should at least be given the option to see what is being filtered, rather than having objectionable material selectively suppressed, lest society remain indifferent.
printf_kek0 | 8 years ago | on: Linux-insides: Linux kernel load address randomization
printf_kek0 | 8 years ago | on: Guide to Serverless Architecture
https://ebooks.adelaide.edu.au/m/mill/john_stuart/system_of_...
printf_kek0 | 8 years ago | on: Of the Liberty of Thought and Discussion (1869)
According to J.S. Mill, free speech should only be constrained to the extent that it violates what he calls the harm principle:
"The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others."
He makes a distinction between causing harm and causing offense. Revulsion, disgust, psychologically upsetting speech, discriminatory speech, insults and so forth fall under the latter category. Free speech which merely causes offense should not be prohibited, according to Mill, however controversial or contrary to social norms such speech is.
But how strongly the "harm principle" should be defined is subject to debate.
Contrast, for instance, screaming "Fire!" in a crowded theater - which might cause injury and death in a stampede - with saying "All <insert_ethnic_group> are inherently inferior to <insert_other_ethnic_group>".
The difference between causing harm and causing offense is that harm is universally injurious, whereas what causes offense is experienced only subjectively.
Although I find blatant hate speech detestable, there is a worrying trend where a majority of people take hate speech to mean "anything we disagree with". In our times, it is difficult to speak frankly or be a contrarian without being demonized. This is precisely what J.S. Mill calls the "tyranny of the majority", and consequently what leads to the loss of individuality in society.
printf_kek0 | 8 years ago | on: Of the Liberty of Thought and Discussion (1869)
> He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. The rational position for him would be suspension of judgment, and unless he contents himself with that, he is either led by authority, or adopts, like the generality of the world, the side to which he feels most inclination.
I think this is highly relevant considering the times we live in (identity politics, nationalism, feminism, polarization of opinions and beliefs).
1. In some libraries (SQLite, for instance, and I think libev too) the authors provide a script that "amalgamates" all sources into a single translation unit. Their reasoning is that a compiler with full visibility of the source can do global / interprocedural optimization that would not otherwise be possible. Does this make sense in practice for a small-to-moderate-sized library?
2. Please tell me what I should read so I can reach the same level of understanding that you have. <not a question>