Doradus's comments
Doradus | 7 years ago | on: Kotlin 1.2.60 comes with native compiler binaries
Doradus | 7 years ago | on: Teenager Finds Classical Alternative to Quantum Recommendation Algorithm
Doradus | 8 years ago | on: Longest Lines of Sight on Earth
Doradus | 8 years ago | on: Why You Should Hire an Old Programmer
As someone who started coding at age 9, I agree. :-)
It's not about hiring someone in their 40s per se, but rather about recognizing the value of 20 years' experience. Age alone doesn't impart wisdom.
Doradus | 8 years ago | on: Why You Should Hire an Old Programmer
Doradus | 8 years ago | on: Linux follows an inverse form of Conway's Law
Doradus | 8 years ago | on: JDK 9 modules voted down by EC
Doradus | 9 years ago | on: React v15.5.0
Doradus | 9 years ago | on: Java Without If
Doradus | 9 years ago | on: Creative Computing Magazine
Doradus | 9 years ago | on: Proposed New Go GC: Transaction-Oriented Collector
The right answer is that Java's generational collector supports this transaction lifetime reasonably well in many cases.
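A hypothetical sketch of why: per-transaction temporaries that never escape the request die young, which is exactly the pattern a generational nursery collector is built for, so they are reclaimed wholesale without being scanned individually.

```java
public class PerRequestGarbage {
    // Hypothetical request handler: everything allocated here is dead by the
    // time the method returns, matching the generational hypothesis ("most
    // objects die young") that the young-generation collector exploits.
    static String handle(int requestId) {
        StringBuilder scratch = new StringBuilder();      // nursery allocation
        scratch.append("response for request ").append(requestId);
        return scratch.toString();                        // only the result survives
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(handle(i));
        }
    }
}
```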
Doradus | 9 years ago | on: Proposed New Go GC: Transaction-Oriented Collector
Doradus | 9 years ago | on: Proposed New Go GC: Transaction-Oriented Collector
Doradus | 11 years ago | on: Newcomb's paradox
The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first analysis argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. The second argues that since the Predictor is almost never wrong, players who take only the opaque box almost always walk away with far more money than those who take both, so expected value favors taking one box.
If you don't find this convincing, that's the point. Half the people who read this think one answer is obviously right, and the other half think the other is obviously right.
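The conflict can be made concrete with a quick expected-value calculation, assuming the standard $1,000 / $1,000,000 payouts and a hypothetical 99%-accurate Predictor:

```java
public class NewcombEV {
    public static void main(String[] args) {
        double p = 0.99;                      // hypothetical Predictor accuracy
        double small = 1_000, big = 1_000_000;

        // One-boxer: gets $1,000,000 iff the Predictor foresaw one-boxing.
        double oneBox = p * big + (1 - p) * 0;
        // Two-boxer: always gets $1,000, plus $1,000,000 iff the Predictor erred.
        double twoBox = p * small + (1 - p) * (small + big);

        System.out.printf("one-box EV: %.0f, two-box EV: %.0f%n", oneBox, twoBox);
        // Expected value favors one box, yet dominance reasoning favors two:
        // whatever is already in the opaque box, two boxes contain $1,000 more.
    }
}
```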
Doradus | 11 years ago | on: My favorite bug: segfaults in Java
I didn't say synchronization can prevent EA. I said that calling a finalizer on an application thread can defeat the mutual exclusion that would usually be guaranteed by Java's locking. If the app thread holds a monitor when the finalizer is called, that would usually make the finalizer block until the app exits the monitor. If they're on the same thread, that becomes a recursive monitor enter on the same thread, which doesn't block.
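The reentrancy behavior is easy to demonstrate (a minimal sketch): Java monitors are recursive, so a thread re-entering a monitor it already holds never blocks, which is exactly what would let a same-thread finalizer slip past the lock.

```java
public class ReentrantMonitor {
    static final Object lock = new Object();

    public static void main(String[] args) {
        synchronized (lock) {
            // Imagine a finalizer running right here, on this same thread.
            // Re-entering the monitor succeeds immediately instead of
            // blocking until the application's critical section ends.
            synchronized (lock) {
                System.out.println("re-entered without blocking");
            }
        }
    }
}
```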
Even if you and I solve this particular problem, these are only the problems that occurred to me off the top of my head. With these kinds of subtle, thorny issues lurking around every corner, I wouldn't expect JIT development teams to give this any real attention unless there's some workload where it matters.
[These are my personal opinions, not those of my employer.]
Doradus | 11 years ago | on: My favorite bug: segfaults in Java
"If (this stuff isn't escaping) then (the JIT can prove that it isn't using a lock)."
That is logically equivalent to:
"If not (the JIT can prove that it isn't using a lock) then not (this stuff isn't escaping)."
Hence, I was explaining that using a lock doesn't count as an escape. But apparently I just misunderstood your meaning?
Doradus | 11 years ago | on: My favorite bug: segfaults in Java
I agree that limited RAM environments are important. Asymptotic complexity only matters when you're effectively operating at the asymptote.
Doradus | 11 years ago | on: My favorite bug: segfaults in Java
I'm not saying a fast GC can reduce the cost of allocation. I'm saying a fast GC can reduce GC overhead. I think that's an uncontroversial statement. I can only insist so many times that Java's GC doesn't touch dead objects at any time during its scan. If you want to disagree with me, that is your prerogative.
I agree that a GC walk can thrash cache when it happens. However, a copying GC can also rearrange object layout to improve locality. Which effect is more significant depends on the workload.
I didn't mention the number of cores because it didn't occur to me that it was significant to you. GCs can scale just fine to many cores. No core has to "check" other cores: each core can check its own local data, depending on the tactics employed to deal with NUMA. There are always challenges in scaling any parallel workload, of course, and there are always solutions.
Our GC (in IBM's J9 JVM) uses 8 bits in each object. With classes aligned to 256 bytes, there are eight free bits in each object's class pointer. Hence, J9's object model has one word of overhead on top of the user's data, which is what malloc/free typically needs to track each object's size anyway. No disadvantage there.
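The arithmetic behind those free bits can be sketched with plain longs (illustrative values only; the real J9 object model is more involved):

```java
public class TagBits {
    public static void main(String[] args) {
        // With classes aligned to 256 bytes, the low 8 bits of any class
        // pointer are always zero, so the collector can store flags there.
        long classPtr = 0x7f00_ab00L;  // hypothetical 256-byte-aligned address
        long gcFlags  = 0xA5L;         // 8 bits of per-object GC state

        long headerWord     = classPtr | gcFlags;  // one word holds both
        long recoveredPtr   = headerWord & ~0xFFL; // mask off flags -> class pointer
        long recoveredFlags = headerWord &  0xFFL; // mask off pointer -> flags

        System.out.println(recoveredPtr == classPtr && recoveredFlags == gcFlags);
    }
}
```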
[These are my personal opinions, not those of my employer.]
Doradus | 11 years ago | on: My favorite bug: segfaults in Java
What's interesting about the asymptotic complexity of garbage collection, though, is that it gives you a peek into the future of GC overhead as heaps get larger.
Consider a program with some bounded steady-state memory consumption, and a GC that takes O(1) time to free unlimited amounts of memory. In such a system, you can reduce your GC overhead arbitrarily simply by enlarging the heap. A larger heap gives a larger time interval between collections, yet doesn't make the collections any slower. Don't like 6% overhead? Fine. Make the heap 10x larger and you have 0.6% overhead.
This time-space trade-off is always available to the end user of such a system. It's certainly not available with more primitive systems like malloc+free.
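Assuming a copying collector whose per-collection cost scales only with the live set, the trade-off works out like this (illustrative numbers, not measurements):

```java
public class GcOverheadSketch {
    static final double LIVE = 100e6;  // bytes surviving every collection

    // Work per cycle ~ live set scanned; work between cycles ~ bytes
    // allocated into the free portion of the heap. Overhead is their ratio.
    static double overhead(double heapBytes) {
        return LIVE / (heapBytes - LIVE);
    }

    public static void main(String[] args) {
        System.out.printf("1.7 GB heap: %.1f%%%n", 100 * overhead(1.7e9)); // ~6%
        System.out.printf(" 17 GB heap: %.1f%%%n", 100 * overhead(17e9));  // ~0.6%
    }
}
```

Enlarging the heap 10x stretches the interval between collections roughly 10x while each collection stays the same size, so the overhead drops by about the same factor.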
[These are my personal opinions, not those of my employer.]
Doradus | 11 years ago | on: My favorite bug: segfaults in Java