fleitz | 11 years ago:
This is called a semaphore; it's already implemented in GCD.
Also, why talk about performance and then make Obj-C method calls...?
It's quite easy, using NSProxy, to create a throttler that wraps any object; you can then abstract the throttling away from the behavior of the underlying object.
https://developer.apple.com/library/mac/documentation/Genera...
lyinsteve | 11 years ago:
Why no mention of GCD here? GCD is very, very good at synchronizing access to shared resources.
The most Cocoa-compatible way of handling background execution of expensive procedures is always going to be best executed, quickest, using Grand Central Dispatch.
For example, funnel every call to -veryExpensiveMethod through a serial queue. That will ensure every call to -veryExpensiveMethod is run in sequence, and won't require waiting on your end. These problems have been solved, better.
ovokinder | 11 years ago:
You're missing the most important point of the entire Throttler goal: gracefully returning fast, with success or failure. Nowhere is it stated that the goal is to enqueue tasks for execution.
If you had read 'til the end, you would have found multiple statements that OSAtomic* is merely an alternative. Not a silver bullet. Not the fastest.
From the conclusion:
"It's very important to understand that every example in this article could have legitimately been solved with different concurrency primitives — like semaphores and locks — without any noticeable impact to a human playing around with your app."
Also, "(...) is always going to be best executed, quickest, using GCD." is kind of a blanket statement. I'd be careful around the use of "always".
asveikau | 11 years ago:
> This post talks about the use of OS low level atomic functions
This is a pet peeve of mine, calling that an "OS" feature. In all recent CPUs I know of, atomic ops are not a privileged operation, and there is absolutely nothing for the operating system to manage in the traditional sense. You don't trap into the kernel and have it compare-and-swap; you just, um, compare and swap.
Maybe your OS provides a convenient C API, but it is not "OS" functionality. It's just instructions on your CPU. You could just as well write them inline. In many common uses, that's what ends up happening: the atomic ops are emitted inline with the rest of your code.
mikeash | 11 years ago:
If your use of atomics consists entirely of calling functions provided in a library as part of the OS, what's wrong with calling them "OS atomic functions"? That is what they are. The fact that you can accomplish the "atomic" part without the "OS functions" part doesn't change the fact that, as written, the article discusses the "OS functions" part.

ovokinder | 11 years ago:
How would you rephrase that? Just "low level atomic", or "atomic"?
jevinskie | 11 years ago:
At least for ARM on Linux, the kernel does provide cmpxchg in the VDSO, but I think that is for support on older ARM architectures. IIRC, ARMv7 does not need to use the kernel helpers.
https://www.kernel.org/doc/Documentation/arm/kernel_user_hel...
richardwhiuk | 11 years ago:
This is all claimed to be for 'performance', but there are no figures in this document on whether incrementAndGet / decrementAndGet is any faster than @synchronized.
(I suspect it probably is, but fundamentally @synchronized is implemented using compare-and-swap and other processor atomics, so the difference is probably very slight; e.g. there's only a measurable difference if the thread is descheduled while holding a lock.)
ovokinder | 11 years ago:
The goal of the article isn't about sheer performance — there are plenty of notes about it. If it were about pure performance, it'd be recommending moving away from Obj-C classes and methods and using C functions or C++ classes instead, like std::atomic<>.
It's meant to be a somewhat-easy-to-digest introduction to lock-free design, where applicable.
What @synchronized ends up doing is far more complex — it has to be, to ensure the correctness of its purposes: https://github.com/opensource-apple/objc4/blob/master/runtim...
ksherlock | 11 years ago:
@synchronized calls objc_sync_enter and objc_sync_exit. Source code is available[0]. Best case, the thread already has the object locked and only needs to increment a lock count. Worst case, it spin-locks, searches through a linked list of existing locks, then needs to malloc a new entry and create a new mutex for it.
[0]: http://www.opensource.apple.com/source/objc4/objc4-646/runti...
azinman2 | 11 years ago:
It's actually more than you'd think -- @synchronized has to deal with re-entrant locks, try/catches that also release locks accordingly while bubbling exceptions, etc. There's a lot more to it than compare-and-swap.
liuliu | 11 years ago:
Or just use std::atomic and std::mutex in Objective-C++. In the plain Objective-C world, no memory semantics are well-defined, and all of these are hacks on a pile of other hacks.