and our linker prints out a slightly strange string in the help message.
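For context, the check that string is meant to satisfy is essentially a configure-time string match over the linker's output. A minimal sketch of the pattern (the variable names and exact match are illustrative, not libtool's actual code):

```shell
# Simplified sketch of the kind of test a libtool-generated configure
# script performs: capture the linker's output, look for "GNU", set a flag.
ld_output="LLD 5.0.0 (compatible with GNU linkers)"  # stand-in for `$LD --help 2>&1`

case "$ld_output" in
  *GNU*) with_gnu_ld=yes ;;
  *)     with_gnu_ld=no  ;;
esac

echo "with_gnu_ld=$with_gnu_ld"
```

Any linker whose help text happens to contain the substring passes; that is the whole trick.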
Strange, or actually helpful? It would've been more devious if the message it was looking for actually contained more... potentially copyrightable content; here's one recently-mentioned example:
Anyone who has experimented with Hackintoshing may also recall the "SMCDeviceKey", a "magic cookie" that serves a similar purpose of attempting to use copyright as a blocker to compatibility.
Sega (and Nintendo) continued using schemes similar to the one at issue in Sega v. Accolade, but requiring the reproduction of the companies' respective logos. This is why the Nintendo logo on classic Game Boy systems is replaced with a solid block if you turn the system on without a cartridge; the displayed logo is read from the cartridge so that publishers would have to reproduce it. Nintendo also did a low-tech version with the Famicom Disk System: there was a plate inside the drive embossed with "NINTENDO", (notionally) requiring a matching engraving in the disk's enclosure. In Sega's case they even devised a scheme of formatting bit patterns on CDs to produce a Sega logo visible to the naked eye, and then checking for the presence of those patterns to decide whether to boot a disc. I think the idea was less about making the compatibility token more copyright-worthy and more about making it a distinctive trademark, so that Nintendo and Sega could go after bootleg manufacturers under various trademark laws.
It's not only lld; the clang compiler has a similar issue. It pretends to be an old version of GCC (4.2) by predefining the macros __GNUC__ and __GNUC_MINOR__, since a lot of software checks for the presence of the __GNUC__ macro, or its value, to enable modern features. That occasionally confuses software which expects features first introduced in gcc 4.1/4.2 to be present when these macros are defined (see for instance https://bugs.llvm.org/show_bug.cgi?id=16683 or https://bugs.llvm.org/show_bug.cgi?id=24682).
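You can see what a given compiler pretends to be by dumping its predefined macros. A sketch, assuming only that a `cc` may or may not be on PATH (under clang the dump typically shows the 4.2 impersonation described above; under real GCC it shows the actual version):

```shell
# Dump the GCC-compatibility macros the system compiler predefines.
# -dM -E prints all predefined macros instead of compiling.
if command -v cc >/dev/null 2>&1; then
  macros=$(cc -dM -E - </dev/null \
    | grep -E '__GNUC__|__GNUC_MINOR__' \
    || echo '__GNUC__ not defined')
else
  macros="(no C compiler found)"
fi
echo "$macros"
```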
When I first read the passage about libtool I thought it was a joke; then my illusion of order got a cold reality shower:
> the tests elaborately explore the functionality of the complex solution for a problem that should not exist in the first place. Even more maddening is that 31,085 of those lines are in a single unreadably ugly shell script called configure. The idea is that the configure script performs approximately 200 automated tests, so that the user is not burdened with configuring libtool manually. This is a horribly bad idea, already much criticized back in the 1980s when it appeared, as it allows source code to pretend to be portable behind the veneer of the configure script, rather than actually having the quality of portability to begin with.
What PHK misses is that the messiness he observes is not inherent in "the bazaar", but in the environment which gave rise to autoconf and libtool: multiple, competing cathedrals that didn't share knowledge and couldn't agree on the basics. A standard set of options to ld(1) for dynamic libraries would have been nice in 1988, but I too want a pony. In the 1980s not everybody agreed even on how to do dynamic libraries. There were three major competing implementations: loading dynamic libs at fixed memory addresses (e.g., old-school a.out Linux), position-independent code (e.g., modern Linux), or monkeypatching the in-memory image to point all calls to dynamic symbols at their proper locations (e.g., Windows). Which one an OS chose had profound implications for what the linker could do and how. So getting a universal standard for dynamic-lib linker flags at the dawn of POSIX was a pipe dream; hence the need for libtool.
In a universe of abundant open source these problems are sometimes stupid easy to solve. The x86Open project was a consortium to define a common ABI for Unix implementations on x86 hardware. But since all the vendors involved (including a couple of open-source BSDs) already shipped Linux kernel personalities for their OS, the only business x86Open ever did was to declare "Linux ELF binaries" the standard and then disband itself.
PHK had a talk he gave to my local Linux/BSD group a few months back and it was a really engaging and fun talk, so it doesn't surprise me that this was a good read. I've used Varnish for fun quite a bit and seeing his talk made me realise why it's such a solid piece of software.
> Also, since we cannot update the existing configure scripts that are already generated and distributed as part of other programs, even if we improve autoconf, it would take many years until the problem would be resolved.
I don't know autoconf and this sentence got me curious: why would it not be possible to regenerate existing configure scripts using a fixed version of autoconf? Are those scripts likely to be manually edited after they've been generated?
Yes, that is possible; the configure script is for the most part autogenerated. But the maintainers of said programs have to actually do so, and users have to actually download the latest version. It would still take many years for fixes to propagate throughout the entire ecosystem.
You can regenerate, and indeed you have to if you need to add support for a new cpu architecture, for instance. Debian has support in its packaging for making it easy to regenerate configure scripts as part of the package build for this reason: https://wiki.debian.org/Autoreconf They say "By _far_ the largest piece of work for any new debian port is updating hundreds of packages to have newer autotools config information so that they will build for the new architecture", which is why putting in the effort to have the package build do it automatically is worthwhile. Presumably FreeBSD could have taken a similar approach for its ports, though of course just tweaking the help string is much less effort.
Most open-source projects let you build the configure script on your own computer. They either provide a tiny script named autogen.sh or tell you to call autoreconf -fi manually. They might track the auto-generated configure script as a fallback for people without autotools installed, but no one really edits those files.
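The regeneration step is usually one command. A sketch, assuming autoconf (and usually automake/libtool) are installed; `regen` is a hypothetical helper name, not part of any tool:

```shell
# Sketch: regenerate a shipped configure script from the project's sources.
regen() {
  if [ -x ./autogen.sh ]; then
    ./autogen.sh        # project-provided wrapper, if the project ships one
  else
    autoreconf -fi      # force-regenerate, install missing auxiliary files
  fi
}
# In a real source tree you would then run: ./configure && make
```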
> There were two possible solutions. One was to fix the configure script. However...
> The solution we ended up choosing was to add the string "compatible with GNU linkers" to our linker's help message.
The right way to deal with things like this is to do both. Do the hacky-but-realistically-shippable thing to get unblocked, and then also contribute towards the "right" solution, even if it's way upstream from you. Otherwise you're part of the problem.
He who fights with monsters should be careful lest he thereby become a monster. And if thou gaze long into an abyss, the abyss will also gaze into thee. -- Neetch, naturlich
You young 'uns may not remember it, but there used to be more systems than Windows, Mac, and Linux, and forced OS updates weren't a thing. Libtool was a way to try to make programs run on most people's computers, by compiling small programs to detect individual features. It was written in a nightmare language called M4, which would generate shell scripts, which would usually generate C programs, which would attempt to compile and run.
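The probe loop described above can be sketched in a few lines; the tested header and the flag name here are illustrative, not any particular configure check:

```shell
# Configure-style feature probe: emit a tiny C program, try to compile it,
# and record the result as a feature flag.
cat > conftest.c <<'EOF'
#include <unistd.h>
int main(void) { return 0; }
EOF

if command -v cc >/dev/null 2>&1 && cc -o conftest conftest.c 2>/dev/null; then
  have_unistd=yes
else
  have_unistd=no
fi
rm -f conftest conftest.c

echo "have_unistd=$have_unistd"
```

Multiply this by roughly 200 tests and you get the 31,085-line configure script PHK complains about.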
libtool's goal is actually to smooth over the fact that dynamic linking is hard at compile time. It's not a good abstraction, and because of this it has to be deeply ingrained into your project.
I typically do not use libtool but rather have an autoconf macro [0] to determine how to interact with the linker. This has the disadvantage that each "./configure" invocation can only produce either static or shared archives, but not both (since the object files that make those up may require different compile flags). libtool's solution is to compile the object file both ways, but it does not really go well with the autoconf mechanism.
I also have a different set of macros for managing the ABI [1], and I'm not sure how that's managed with libtool.
User-Agent sniffing. It's the worst solution: everyone involved will hate it eventually, and in the end it amounts to all software just continuously improving the same-yness of the string.
Trying to re-solve the User-Agent issue, I think it would be much better if browsers claimed which standards they conform to, with an accompanying version.
The string would be split on the field separator (any non-printing space?). All exact matches for specifications would be compared and the result ANDed. This way a range could be created by having a minimum supported version as well.
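A rough sketch of matching against this hypothetical format (the `supports` helper and the equality-only version comparison are illustrative simplifications of the proposal, not any real standard):

```shell
# Hypothetical capability string: whitespace-separated "spec<=version" fields.
ua_caps="www/HTML<=5.1 www/XHTML<=1.1 www/CSS<=3.0 ISO/ECMAScript<=8"

supports() {                # usage: supports SPEC VERSION
  for cap in $ua_caps; do   # split on the field separator (whitespace here)
    spec=${cap%%<=*}        # text before "<="
    ver=${cap#*<=}          # text after "<="
    [ "$spec" = "$1" ] && [ "$ver" = "$2" ] && return 0
  done
  return 1
}

supports www/CSS 3.0 && echo "CSS 3.0: yes" || echo "CSS 3.0: no"
```

A real implementation would need ordered version comparison to express ranges, but the field-splitting and AND-ing of matches works as described.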
I remember back when feature detection was gaining steam as the preferred alternative to user-agent sniffing, and Safari had a showstopper of a bug that meant preventDefault was present and callable but didn't actually do what preventDefault was supposed to do. So you had to fall back to sniffing for Safari to work around that (by hooking up a different event listener with a 'return false' instead of a preventDefault call).
We have feature detection for that, which solves the worst problems.
I'd much rather if the "version number" of the browser wasn't exposed, and instead something like a UUID that changes every "major version" for each browser.
This lets developers blacklist specific browser versions if they are a problem, without allowing them to say "only works on chrome" or "fallback to this on safari at any version".
I know this would be basically impossible to get everyone to agree on, but it's an interesting thought.
When I was recently writing a Python script to ping the URLs in places.sqlite to check for link rot, I had to change urllib's user agent to mimic a modern browser, because otherwise lots of web sites redirect me around or return errors (even 404).
userbinator | 8 years ago
https://dacut.blogspot.ca/2008/03/oracle-poetry.html
This also reminds me of https://en.wikipedia.org/wiki/Sega_v._Accolade and https://en.wikipedia.org/wiki/Lexmark_International,_Inc._v.....
0xcde4c3db | 8 years ago
FRex | 8 years ago
jaflo | 8 years ago
cesarb | 8 years ago
hyperpape | 8 years ago
muxator | 8 years ago
bitwize | 8 years ago
tzahola | 8 years ago
https://news.ycombinator.com/item?id=15836907
https://news.ycombinator.com/item?id=12251323
https://news.ycombinator.com/item?id=4407188
sondr3 | 8 years ago
spiznnx | 8 years ago
brucephillips | 8 years ago
amgaera | 8 years ago
FooBarWidget | 8 years ago
pm215 | 8 years ago
zuzun | 8 years ago
smelendez | 8 years ago
If you were creating a new class of software where this could be an issue, what would you do?
ikeboy | 8 years ago
matt_kantor | 8 years ago
username223 | 8 years ago
rkeene2 | 8 years ago
[0] http://chiselapp.com/user/rkeene/repository/autoconf/artifac... [1] http://chiselapp.com/user/rkeene/repository/autoconf/artifac...
zaarn | 8 years ago
LoSboccacc | 8 years ago
digi_owl | 8 years ago
maaark | 8 years ago
shak77 | 8 years ago
mjevans | 8 years ago
EG: www/HTML<=5.1 www/XHTML<=1.1 www/CSS<=3.0 ISO/ECMAScript<=8
ubernostrum | 8 years ago
Klathmon | 8 years ago
digi_owl | 8 years ago
If you have Flash installed, but the site is not whitelisted, Chromium will claim Flash is not installed to any JS detection code.
lowq | 8 years ago
anthk | 8 years ago
At ~/.dillo/dillorc
http_user_agent="Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13337/458; U; en) Presto/2.2.0"
gkya | 8 years ago