Doradus's comments

Doradus | 7 years ago | on: Kotlin 1.2.60 comes with native compiler binaries

Your history is a little mixed up there. Anders Hejlsberg, the original author of Turbo Pascal and Chief Architect of Delphi, became the Lead Architect of C#. It's not like Microsoft came out of nowhere and ate Borland's lunch.

Doradus | 8 years ago | on: Why You Should Hire an Old Programmer

> I expect discriminating by amount of relevant experience is fair, but that's not the same as age.

As someone who started coding at age 9, I agree. :-)

It's not about hiring someone in their 40s per se, but rather about recognizing the value of 20 years' experience. Age alone doesn't impart wisdom.

Doradus | 8 years ago | on: Why You Should Hire an Old Programmer

Agreed. I remember being in my early 20s, watching my more experienced colleagues, and realizing this. I had had a different (incorrect) mental model: I had thought intellect was as good as experience, so Ability = Intellect + Experience. I was wrong.

Doradus | 8 years ago | on: Linux follows an inverse form of Conway's Law

It seems absurd that an article claims to disprove a hypothesis linking structure to communication patterns without even the slightest mention of trying to observe those patterns. But they go even further: they claim to have determined the direction of causality between two variables without even having measured one of them!

Doradus | 9 years ago | on: React v15.5.0

Huh? That's funny--I would have said the opposite. It was from watching Obama's speeches that I learned that silent pauses can be much less annoying than "umms". If you want to see someone using a lot of "umms", watch Justin Trudeau.

Doradus | 9 years ago | on: Java Without If

It doesn't take long to get used to that style. It took me maybe three weeks of playing with Java 8 streams in my spare time before I got quite comfortable with it.
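For anyone who hasn't tried the style, here's a minimal sketch (the numbers and class name are mine, purely illustrative) of an imperative if-inside-a-loop rewritten with Java 8 streams:

```java
import java.util.Arrays;
import java.util.List;

public class StreamStyle {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(3, -1, 4, -1, 5, -9, 2, 6);

        // Imperative style: a mutable accumulator and an if inside the loop.
        int sumOfSquares = 0;
        for (int n : nums) {
            if (n > 0) {
                sumOfSquares += n * n;
            }
        }

        // Stream style: filter replaces the if; mapToInt/sum replace the loop body.
        int streamed = nums.stream()
                .filter(n -> n > 0)
                .mapToInt(n -> n * n)
                .sum();

        System.out.println(sumOfSquares == streamed); // true
        System.out.println(streamed);                 // 90
    }
}
```

The payoff is that each clause (filter, map, reduce) states intent directly instead of being buried in loop plumbing.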

Doradus | 9 years ago | on: Proposed New Go GC: Transaction-Oriented Collector

RAII is using one C++ misfeature (destructors) to work around another misfeature (inability to do cleanup when leaving a lexical scope). Go has a far more elegant solution to the latter; if you're not familiar with the defer statement, check it out.

Doradus | 11 years ago | on: Newcomb's paradox

Did you read the article? It answers your question. Take a look at the section that begins with this:

> The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first analysis argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money.

If you don't find this convincing, that's the point. Half the people who read this think one answer is obviously right, and the other half think the other is obviously right.

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

(Whoops, I've misunderstood you twice. I think my other reply answered the wrong question. Let me try again.)

I didn't say synchronization can prevent EA. I said that calling a finalizer on an application thread can defeat the mutual exclusion that would usually be guaranteed by Java's locking. If the app thread holds a monitor when the finalizer is called, that would usually make the finalizer block until the app exits the monitor. If they're on the same thread, that becomes a recursive monitor enter on the same thread, which doesn't block.
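To make the hazard concrete, here's a minimal, self-contained sketch (the class and the `pseudoFinalizer` stand-in are mine, not real JVM code). Java monitors are reentrant per thread, so a finalizer body run inline on the thread that already owns the monitor enters it immediately instead of blocking:

```java
public class ReentrantMonitor {
    private static final Object lock = new Object();

    // Stand-in for a finalizer body that synchronizes on the same lock.
    static void pseudoFinalizer() {
        synchronized (lock) { // reentrant acquire: does NOT block on this thread
            System.out.println("finalizer entered the monitor");
        }
    }

    public static void main(String[] args) {
        synchronized (lock) {
            // If the JVM ran the finalizer here, on the application thread,
            // the mutual exclusion the finalizer expects is silently lost:
            pseudoFinalizer();
        }
    }
}
```

Run on a separate finalizer thread, the inner `synchronized` would have blocked until `main` released the lock; on the same thread it just recurses straight in.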

Even if you and I solve this particular problem, these are only the problems that occurred to me off the top of my head. With these kinds of subtle, thorny issues lurking around every corner, I wouldn't expect JIT development teams to give this any real attention unless there's some workload where it matters.

[These are my personal opinions, not those of my employer.]

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

I must have misunderstood you. You made a statement that I parsed as follows:

"If (this stuff isn't escaping) then (the JIT can prove that it isn't using a lock)."

That is logically equivalent to:

"If not (the JIT can prove that it isn't using a lock) then not (this stuff isn't escaping)."

Hence, I was explaining that using a lock doesn't count as an escape. But apparently I just misunderstood your meaning?

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

Your argument proves that the time taken is at least linear (Ω(n)) in the number of object references in live objects. The (potentially billions of) unreachable objects are never touched.

I agree that limited RAM environments are important. Asymptotic complexity only matters when you're effectively operating at the asymptote.

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

I seem to have hit a nerve here. Perhaps I haven't been clear about some of the points I'm making, which I don't think are all that contentious.

I'm not saying a fast GC can reduce the cost of allocation. I'm saying a fast GC can reduce GC overhead. I think that's an uncontroversial statement. I can only insist so many times that Java's GC doesn't touch dead objects at any time during its scan. If you want to disagree with me, that is your prerogative.

I agree that a GC walk can thrash cache when it happens. However, a copying GC can also rearrange object layout to improve locality. Which effect is more significant depends on the workload.

I didn't mention the number of cores because it didn't occur to me that it was significant to you. GCs can scale just fine to many cores. No core has to "check" other cores: each core can check its own local data, depending on the tactics employed to deal with NUMA. There are always challenges in scaling any parallel workload, of course, and there are always solutions.

Our GC (in IBM's J9 JVM) uses 8 bits in each object. With classes aligned to 256 bytes, there are eight free bits in each object's class pointer. Hence, J9's object model has one word of overhead on top of the user's data, which is what malloc/free typically needs to track each object's size anyway. No disadvantage there.
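The pointer-tagging trick above can be sketched in a few lines (the address and bit values here are made up for illustration; this is not J9's actual layout). Because a 256-byte-aligned address always has its low 8 bits zero, those bits are free to carry per-object GC state:

```java
public class ClassPointerTag {
    public static void main(String[] args) {
        // Hypothetical 256-byte-aligned class address: low 8 bits are zero.
        long classAddress = 0x0000_7F3A_2B00L;
        long gcBits = 0b1010_0101L; // 8 bits of hypothetical GC state

        long tagged = classAddress | gcBits;       // pack both into one word
        long recoveredClass = tagged & ~0xFFL;     // mask off the tag bits
        long recoveredBits  = tagged & 0xFFL;      // extract the GC state

        System.out.println(recoveredClass == classAddress); // true
        System.out.println(recoveredBits == gcBits);        // true
    }
}
```

The masking on every class-pointer load is cheap, and the object header stays one word.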

[These are my personal opinions, not those of my employer.]

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

Well, it's hard to deny that the work done by the program is O(n) (or even Ω(n)) in the number of objects created by the program. That's almost tautological.

The thing that is interesting about the asymptotic complexity of garbage collection, though, is that it gives you a peek into the future of GC overhead as heaps get larger.

Consider a program with some bounded steady-state memory consumption, and a GC that takes O(1) time to free unlimited amounts of memory. In such a system, you can reduce your GC overhead arbitrarily simply by enlarging the heap. A larger heap gives a larger time interval between collections, yet doesn't make the collections any slower. Don't like 6% overhead? Fine. Make the heap 10x larger and you have 0.6% overhead.
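The arithmetic behind that claim can be sketched directly (all numbers here are hypothetical, chosen only to mirror the 6% example): with an O(1) collection cost and a collection interval proportional to heap size, overhead falls linearly as the heap grows.

```java
public class GcOverhead {
    public static void main(String[] args) {
        double pauseMs = 6.0;          // assumed O(1) cost per collection
        double baseIntervalMs = 100.0; // assumed interval between collections at 1x heap

        for (int mult : new int[] {1, 10}) {
            // Interval scales with heap size; pause cost does not.
            double overhead = pauseMs / (baseIntervalMs * mult);
            System.out.printf("heap %2dx -> %.1f%% GC overhead%n", mult, overhead * 100);
        }
    }
}
```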

This time-space trade-off is always available to the end user of such a system. It's certainly not available with more primitive systems like malloc+free.

[These are my personal opinions, not those of my employer.]

Doradus | 11 years ago | on: My favorite bug: segfaults in Java

The JVM doesn't release the memory to the OS when garbage is collected; only when the heap shrinks. Any zeroing the OS might do is proportional to the size change in the heap, not to the number of objects freed.