flurrything's comments

flurrything | 7 years ago | on: Apple Finally Updates MacBook Air with Retina Display, Touch ID

Similar position as you (mid-2012 MacBook Air, 1.8 GHz Intel Core i5, 8 GB of RAM, 256 GB SSD + 256 GB SD card).

I've upgraded its battery twice, most recently this year, for 60 EUR each time, and it's hitting >9h of battery life right now.

What seems compelling about the new Air to you?

I only see a similarly spec'ed system (Geekbench says it's 40% faster): dual-core, ~same RAM, ~same storage if I go for the 2000 EUR 512 GB SSD version, ... a Retina display (this is huge), no MagSafe, the new keyboard (people have mixed feelings about it), ...

I find it hard to justify replacing a system that still works fine at such a high price tag (>2k EUR), just for a Retina display and a ~30-40% faster CPU.

A modern quad/six core could deliver 4-6x speed-ups in some workflows, discrete Nvidia graphics would let me develop for CUDA, a 1 TB SSD would be nice, etc. I could justify paying 2000 EUR for all of these upgrades on top of a Retina display. But just for a Retina display? I'd rather go skiing in the Alps for two weeks this winter for 1600 EUR and buy an Apple Watch with the remaining 400 EUR =/

flurrything | 7 years ago | on: Apple Finally Updates MacBook Air with Retina Display, Touch ID

> OS improvements,

Mojave still works fine on a `2012` MacBook Air, which still gets >9h of battery life (a new battery costs 60 EUR).

The touchpad is great, and it has an SD card reader that can be used to extend its 256 GB SSD with slow extra storage for cheap.

So here I am, trying to find any reason that would justify upgrading a 2012 MacBook Air to a 2018 MacBook Air, and the only thing that's worth it is the new display - sadly, that's not worth >2000 EUR.

flurrything | 7 years ago | on: Apple Finally Updates MacBook Air with Retina Display, Touch ID

> I'm sympathetic to the complaint about price, but if your contention is that the devices are basically unchanged, then why buy a 2018 model? Why not buy a 2014 model?

Why buy a new model at all? I have a fully-spec'ed mid-2012 MacBook Air which cost 1600 EUR and has 512 GB of storage thanks to a 256 GB SD card. Benchmark-wise, the 2018 MacBook Air is not 2x faster in any benchmark. Still a dual core, still 8 GB of RAM, 128 GB of storage without an SD card slot, etc. For a model with equivalent storage I would have to pay >2k EUR for the 16 GB RAM / 512 GB SSD configuration.

It's pretty much impossible to justify upgrading from this 2012 model. That's 2000 EUR for a Retina display, and that's pretty much it. Throw in a 1 TB SSD, 32 GB of RAM, and a quad or six core, and upgrading might be worth it. But that would mean a 2018 MacBook Pro in the ~3k EUR range, which is a different price bracket and probably not worth paying given that these come without discrete Nvidia graphics.

With this product line, it makes much more sense for me to upgrade to a Dell than to another MacBook.

flurrything | 7 years ago | on: Apple Finally Updates MacBook Air with Retina Display, Touch ID

Dude, I bought a max-spec'ed mid-2012 MacBook Air with a 256 GB SSD + 256 GB SD card, a dual core, and 8 GB of RAM for ~1600 EUR in 2012.

In 2018 I would have to pay 2000 EUR for a 512 GB SSD, still a dual core, and 16 GB of RAM, on a machine only slightly more powerful than my mid-2012 one (I haven't yet found benchmarks showing a 2x difference), to the point where it is just not worth upgrading.

If I were to upgrade, I would throw 1500 EUR at Dell for a quad core, 32 GB of RAM, and a 1 TB SSD, maybe with discrete Nvidia graphics. A similarly spec'ed MacBook Pro sets you back 3-4k EUR...

The only people I can imagine buying MacBooks in 2018 are those not paying for them themselves (company laptop), because for an individual they just don't make financial sense - one can get similarly spec'ed laptops for almost half the price.

flurrything | 7 years ago | on: The Linux Kernel Is Now VLA (Variable-Length Array) Free

> it makes sense to treat GCC (actually "the C89 standard plus GCC extensions") as the standard.

While that might have been the case, this announcement says that one of the many reasons to stop using VLAs is to allow the kernel to be compiled with clang. The announcement reads like being able to compile the kernel with other compilers is a very desirable property that has taken many years of hard work to achieve, due to the incorrect assumption that "GCC is the C standard".

So while that assumption might have made sense back then, it does not appear to make sense now. If you treat one compiler as "the standard", chances are that that's the only compiler that you will ever be able to use. That's a bad strategic decision for a big project like the Linux kernel.

flurrything | 7 years ago | on: Swiss Tables and absl::Hash

> We generally recommend that you use absl::flat_hash_map<K, std::unique_ptr<V>> instead of absl::node_hash_map<K, V>.

So what's the point of `node_hash_map`, then?

flurrything | 7 years ago | on: Memory Allocators 101 – Write a simple memory allocator (2016)

> but sometimes you know invariants about object sizes, alloc patterns and free patterns that allow much more performant allocation.

jemalloc lets you query these invariants if you don't know them, and use that information to reconfigure the allocator to match them :/

I'm pretty sure most modern allocators allow you to do this as well.

> The simplest case is if you know that you will free everything at once, or nothing at all.

That's pretty much a one-liner with jemalloc.

flurrything | 7 years ago | on: Continued progress porting Emacs to Rust

> For users, I hope Remacs will be faster

The main reason my Emacs is painfully slow is that most elisp code blocks: hitting tab for auto-completion might perform a blocking query to a language server, hanging Emacs for multiple seconds; opening a file tries to start a language server and blocks the editor in the process; etc.

Making Emacs itself faster won't make the experience of working with it any better.

flurrything | 7 years ago | on: Rust RAII is better than the Haskell bracket pattern

> In what sense do you disagree?

Not the parent, but it is trivial to write C++ and Rust examples in which destructors of variables with block scope are not called. The standard libraries of both languages even ship utilities for doing this:

C++ structs:

    struct Foo {
      Foo() { std::cout << "Foo()" << std::endl; }
      ~Foo() { std::cout << "~Foo()" << std::endl; }
    };

    {
        std::aligned_storage_t<sizeof(Foo), alignof(Foo)> foo;
        new (&foo) Foo;
        /* destructor never called even though a Foo
           lives in block scope and its storage is
           freed
        */
    }
C++ unions:

    union Bar {
      Foo foo;   // member with a non-trivial destructor
      Bar() : foo() {}
      ~Bar() {}  // user-provided; does NOT call foo.~Foo()
    };

    {
      Bar bar;
      /* Foo's destructor is never called: union members
         are never destroyed automatically */
    }
Rust:

    struct Foo;
    impl Drop for Foo {
        fn drop(&mut self) {
            println!("drop!");
        }
    }

    {
      let _ = std::mem::ManuallyDrop::<Foo>::new(Foo);
      /* destructor never called */
    }
etc.

> There are some situations where objects with block scope do not have their destructor called e.g. `_exit()` called, segfault, power cable pulled out. But in that sense nothing is guaranteed.

This is pretty much why it is impossible for a programming language to guarantee that destructors will be called.

It might seem trivial, but even with automatic storage, any of the things you mention can happen, so destructors won't be reached.

In general, C++, Rust, etc. cannot guarantee that destructors will be called, because it is also trivial to make that impossible once you start using the heap (e.g. a `shared_ptr` cycle will never be freed).

flurrything | 7 years ago | on: The relative performance of C and Rust

No, "as much work" means that you can't use 99% of C++ across a C FFI. You have to fully instantiate template functions, template structs (you can't pass a `std::vector` across a C FFI), etc. before you can expose them to C.

If you are writing C++ in such a way that this doesn't matter, then you are basically writing C already and `extern "C"` is all you need, but then you might as well have been using C this whole time.

flurrything | 7 years ago | on: LLVM 7.0.0 released

Rust has been able to target RISC-V for half a year, so LLVM must have supported it for even longer...

flurrything | 7 years ago | on: LLVM 7.0.0 released

> On Linux, you can turn it off.

The system administrator can turn it off, kind of. But a Zig or Rust program running in user space without privileges cannot turn it off, not even for itself.

flurrything | 7 years ago | on: LLVM 7.0.0 released

What do these Zig functions do on systems with overcommit (that is, Linux, macOS, *BSDs, Android, iOS, ...)?