I was surprised by my almost-panicky reaction to seeing:
Identifiers Can Have Blanks
open_window_with_attributes(...)
becomes:
open window with attributes (...)
I think I actually felt that wrongness in my stomach. Like a more intense version of seeing files on our corporate network share with spaces and parens in their names.
I had a similar reaction, and I'm not sure that it's a "damn kids, get off my lawn" reaction. Specifying an unambiguous grammar may be difficult - which implies parsing may become a problem.
An implementation exists, so the author has something working, but I'm wondering how robust the parsing is. I haven't seen many code examples (only short fragments on the page), so I don't know what potential issues, if any, there are. But this is the sort of thing that could significantly complicate adding new language features that require additional syntax.
edit: I'm perusing the source for the compiler, which is of course written in Zinc. This code from the main driver of the compiler perhaps gives a better feel for how it may look in practice:
while i < argc
    def arg = argv[i]
    if is equal (arg, "-debug")
        debug = true
    elsif is equal (arg, "-v")
        version = true
    elsif is equal (arg, "-u")
        unicode = true
    elsif is equal (arg, "-o") && i < argc-1
        out filename = new string (bundle, argv[++i])
        to OS name (out filename)
    elsif is equal (arg, "-I") && i < argc-1
        append (include path, new string (bundle, argv[++i]))
    else
        filename = new string (bundle, arg)
    end
    ++i
end
From an aesthetic point of view, it doesn't look that bad. In this example, I think "is equal", "out filename", "to OS name" and "include path" are all identifiers. But I'm still wondering what kind of parsing and lexing issues may arise.
While I didn't panic, I find myself having quite a negative reaction to a language in which "Identifiers can have blanks" is listed among the main features.
Edit: Also, I see quite the opportunity for wrong parsing, not on the machine side but on the human side. Blanks already have a function in other programming languages: they separate symbols. By giving them this double meaning, you bring context into the parsing of any piece of code, which I think could be a pretty painful exercise.
Another way to put it: don't design a language feature because it makes code easier to type if it doesn't also make it easier to read.
(I know the author thinks it's easier to read, but I'm not yet convinced of that.)
That strikes me as giving the lie to the "Ruby-like syntax" claim; ask a Ruby programmer what that line means and you will not get the correct answer for Zinc.
Actually the connection with Ruby is tenuous anyhow; Ruby and assembler just don't go together. An assembler should produce a very clear one-to-one correspondence of instruction to machine language opcode, pretty much by definition. A high-level language can turn a simple statement into arbitrarily-complicated run-time code, pretty much by definition. Neither of these are criticisms by any means, it's just what they are. There isn't much syntax cross-talk to be had there.
"I did this because I hate uppercase characters in the middle of identifiers and I'm too lazy to type shift to get the '_'. In addition, I find it more readable."
As much as your criticisms may be valid, I think he has given sufficient justification:
"I did this because I hate uppercase characters in the middle of identifiers and I'm too lazy to type shift to get the '_'. In addition, I find it more readable."
This kind of "Because I said so" reasoning is valid in pretty much any hobbyist-type situation as far as I'm concerned. If you don't like it, fork it.
The Zinc paper is one of my all-time favorite implementation papers, right up there with Dybvig's thesis, the Rabbit and Orbit papers on Scheme, Reppy's thesis on Concurrent ML, and SPJ's Tagless paper.
The Zinc Experiment is Leroy at his best: compiler hacking lore meets programming language research, with no hand-waving past performance issues and a critical eye towards foundations.
What is wrong with 64-bit integers? Maybe they've been indicted for war crimes or something, given the number of new languages that appear and don't support them. And what about interfacing with C? I can count on one hand the languages that have a simple and efficient C interface! (I have a list of other things almost always ignored by languages for no good reason... efficiency, friendly license, lack of macros or ability to extend the language...)
I will try to ignore the shallow (but horrifying) issue of identifiers including spaces.
The real question to be asked here is: what is wrong with the current portable assembler, C? C has occupied this niche for a long time and quite successfully - I believe all current mainstream kernels are written in C (or possibly a limited subset of C++).
If you want a 'portable assembler', a modern C compiler is in my opinion, a good choice:
- a solid specification: detailing the behaviour of operations, and what is defined, implementation-defined, or undefined behaviour.
- access to platform specific features through builtins and intrinsics
- ability to use inline asm if you really want to (or need to)
- easy integration with existing libraries
- minimal dependencies on a runtime library (pretty much none in freestanding implementations)
- most compilers have ways to give good control over both the code that is generated and structure layout.
The modern C ecosystem also provides (mostly good) supporting tools.
Admittedly, most of these tools don't depend on the code being written in C, but I suspect any new language would take a while to get properly integrated. If you want to use a low level language, you really want to have access to these tools or equivalent.
A new language trying to compete in this space would have to offer something fairly substantial to get me to switch - and a strange syntax like Zinc's is not going to help. From the documentation at least, Zinc currently seems to be missing: an equivalent to volatile; inline asm; any way to access a CAS-like instruction; 64-bit types; floats; a way to interface to C code; and clear documentation of behaviour in corner cases (what happens if you left-shift a 32-bit value by 40?). The only thing Zinc seems to bring to the table to compensate is the ability to inherit structures.
I agree with you. I just wanted to list the one complaint I do have about C: missed optimization opportunities due to lax aliasing rules.
Consider the following C translation unit:

void foo(const int *i);
void bar();

int baz() {
    int i = 1;
    foo(&i);
    return i + 1;
}

int quux() {
    int i;
    foo(&i);
    i = 1;
    bar();
    return i + 1;
}
You'd like to think that both baz() and quux() could compile the return statements to a constant "return 2." After all, foo() is taking a pointer to a CONST int. But alas, this is not the case, because foo() could cast away the const. So in truth, both functions are forced to reload the integer from the stack, add 1 to it, and then return that! You can't use any values you had loaded in registers (or in this case, you can't evaluate the expression at compile time).
My example is contrived, but you can easily construct examples that fit the same pattern and are real.
I've heard that Fortran still beats C in optimization in some cases; I would expect that the above is one major reason why. C99's "restrict" addresses some of the difference but cannot help you with the above.
The main problems with C are inability to control memory layout in fine detail, and lack of control over the calling sequence - you can't portably get a tail call. Have a look at the C-- work by Simon Peyton Jones and Norman Ramsey and others for more details.
I guarantee that I would confuse the types "byte" (uint8_t) and "octet" (int8_t). The typical distinction between a byte and an octet has to do with the number of bits in the representation (a byte usually has 8, an octet always has 8). I don't know of any convention for bytes being unsigned and octets being signed.
You're right that with "byte" there isn't an official size specification, although the de facto size is 8 bits, unlike with "octet", which was specifically defined as 8 bits (for interoperability between different systems).
Regarding the question of signed/unsigned - I'll try to explain:
byte - unsigned
On page 37 of the C99 standard: "A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to 2^CHAR_BIT - 1"
i.e. according to the C99 standard, a byte is unsigned.
octet - signed
Think of an octet in two ways: on the one hand, the concept of something that is exactly 8 bits; on the other, the technical representation of that concept.
When you read the literature you'll notice that an octet refers simply to the size of something (8 bits) and not its signedness.
For example, octets arguably arose in the networking world, and NDR (Network Data Representation) refers to octets in a sign-neutral way.
On page 256 of the C99 standard: "The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits."
Now, how would you go about representing the concept of an "octet" (which is sign-neutral)? If you use an unsigned 8-bit integer, you can't represent the sign of the (conceptual) octet, while a signed 8-bit type can.
psnj:
I guess I'm old.
Hexstream:
"I did this because I hate uppercase characters in the middle of identifiers and I'm too lazy to type shift to get the '_'. In addition, I find it more readable."
just-use-lisp-style-identifiers-then
thesz:
That one is more popular, it eventually became OCaml.
timrobinson:
Edit: the HLA web site always used to be a decent place to learn assembly language. I don't remember it being so mauve though: http://homepage.mac.com/randyhyde/webster.cs.ucr.edu/index.h...
gasull:
http://www.corepy.org/