kenbellows's comments

kenbellows | 8 years ago | on: Developers Are Already Making Great AR Experiences Using Apple's ARKit

Depends on the application. For observing especially large objects[1][2], I agree, it would be very hard to use for anything beyond a quick demo. But on the other hand, I think the phone is a perfect viewport for on-the-fly smaller applications, like an AR measuring tape[3], adding info about artwork in a museum, displaying a virtual prototype of a dish on a restaurant's menu[4], translating street signs on the fly[5], etc. The phone is perfect for applications where you don't want to plan ahead or be constantly carrying an extra piece of equipment around.

1: https://twitter.com/madewithARKit/status/880815805281300480
2: https://twitter.com/madewithARKit/status/880056901987254272
3: https://www.youtube.com/watch?v=z7DYC_zbZCM
4: https://twitter.com/madewithARKit/status/880744158423658497
5: http://newatlas.com/google-translate-update/35605/

kenbellows | 8 years ago | on: The Brain as Computer: Bad at Math, Good at Everything Else

I don't agree with that description of what the brain does when you catch a ball, or with the principle it's proposing. I don't think the brain does any kind of calculations to figure out how to catch the ball; I think it's effectively muscle memory. If you've never caught anything before, your brain will have no clue what to do. As you practice running to catch things, I think a better description of your subconscious process is something like: "The ball (or whatever) looks like it's growing bigger in my visual field at a certain rate. A previous time when it grew bigger at a rate kind of like that, I applied about x force in the legs, and I didn't get there in time. Another time when it was growing at about this rate, I applied a larger force y, and the ball landed behind me. This time I'll try to apply a little more than x force, but less than y force, and see how that works."

This is a substantial oversimplification of course (there are many more factors involved than how fast the ball is growing in the visual field), but I think the point is clear enough. I doubt there's any trigonometry happening in the brain's circuitry; it seems much more plausible to me that the brain is really good at remembering how it felt in previous circumstances, recognizing how those remembered circumstances relate to the current one, and trying to adjust.
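For illustration only, here's a toy sketch (in Python, with entirely invented numbers and function names) of the remember-and-interpolate heuristic I'm describing, as opposed to solving trajectory equations:

```python
# Toy model: pick a leg force by nudging the most similar remembered
# attempt, rather than solving any trajectory equations.

def choose_force(growth_rate, memories):
    """memories: list of (growth_rate, force, error) tuples, where
    error < 0 means 'fell short' and error > 0 means 'overshot'."""
    if not memories:
        return 1.0  # no experience yet: just guess something
    # Recall the attempt whose situation felt most like this one
    closest = min(memories, key=lambda m: abs(m[0] - growth_rate))
    _, force, error = closest
    # Adjust the remembered force against the remembered error
    return force - 0.5 * error

memories = [
    (2.0, 1.0, -0.4),  # applied force 1.0, fell short
    (2.1, 2.0, +0.6),  # applied force 2.0, overshot
]
print(choose_force(2.05, memories))
```

No trigonometry anywhere, just a similarity lookup and a correction; with enough remembered attempts the guesses converge.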

As I understand it, this is actually a significant debate in cognitive science, philosophy of mind, and related fields. One prominent proponent of a view like the one I've expressed here, that the brain doesn't require or use heavy math to do things like catch flying objects but rather acquires the ability over time through experience, is John Searle. He is known for using the example of his dog's ability to catch a ball that's bounced off a wall when discussing and arguing against theories of mind that propose that all unconscious processes must be following algorithms or rules (like running through computations to figure out how to catch a ball). Here's a quote of his from the BBC program Horizon (found in "New Technologies in Language Learning and Teaching", issue 532, on page 37 [1]):

  If my dog can catch a ball that's bounced off the wall, that may be
  just a skill he's acquired. The alternative view (the pro-AI view)
  would say: "Look, if the dog can catch the ball it can only be 
  because he knows the rule: go to the point where the angle of
  incidence equals the angle of reflection in a plane where the
  flatness of the trajectory is a function of the impact velocity
  divided by the coefficient of friction" - or something like that.
  Now, it seems to me unreasonable to think that my dog really *knows*
  that. It seems to me more reasonable to suppose he just learns how
  to look for where the ball is going and jumps *there*. And a lot of
  our behavior is like that as well. We've acquired a lot of skills,
  but we don't have to suppose that, in order to acquire these skills,
  the skills have got to be based on our mastery of some complex
  intellectual structure. For an awful lot of things, we just *do* it.
[1] https://books.google.com/books?id=fWQhj0HVCbUC&pg=PA37&lpg=P...

kenbellows | 8 years ago | on: Why You Should Hire an Old Programmer

I think the article is arguing that while age may not be necessary for quality, it is sufficient. A young(er) programmer may or may not have these qualities, but in order to survive in the programming industry beyond a certain point, you have to develop these qualities. (Not sure whether I agree, but I think it's a point worth considering.)

kenbellows | 9 years ago | on: Show HN: Octaspire Dern – Programming language

I would think that whitespace becomes significant inside string-delimiting square brackets for the same reason that whitespace is significant inside string-delimiting quotes: whitespace characters are characters like any other. I would expect `[ abcd ]` to be different from `[abcd]` in the same way that `" abcd"` is different from `"abcd"` in languages that use quotes.
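To illustrate the analogy (in Python, since I don't have a Dern interpreter handy; the square-bracket behavior is my assumption, not something I've verified):

```python
# In quote-delimited languages, whitespace inside the delimiters is
# part of the string, so these are distinct values:
assert " abcd " != "abcd"
assert len(" abcd ") == 6 and len("abcd") == 4

# My expectation is that [ abcd ] vs [abcd] would differ the same way:
# the spaces between the delimiters are ordinary characters.
print(repr(" abcd "), repr("abcd"))
```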

Is there something I'm missing that undermines this intuition? Does this language (or, for that matter, the demo language you wrote for your class) ignore leading or trailing whitespace inside square brackets? What about excess whitespace between words (that is, anything beyond a single space or tab)? And if leading/trailing/excess whitespace is indeed collapsed inside square-bracket-delimited strings, how would I create a string with leading or trailing whitespace, or extra space between words, if I wanted to?

Honest questions; don't mean to criticize, just eager to learn.

kenbellows | 9 years ago | on: Pi and the Golden Ratio

But... why? If you wanted to express their equation in terms of tau (that is, 2pi), you could just set the first term of the right hand side to 10/phi instead of 5/phi. In fact, throughout their derivation there are points where a tau-based version would be a bit cleaner (though of course there are other points where tau might be a bit messier).
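Concretely, since tau = 2pi, the substitution is just a doubling (I don't have their exact equation in front of me, so this is only the shape of it, with the remaining terms elided):

```latex
\pi = \frac{5}{\varphi} + \cdots
\quad\Longrightarrow\quad
\tau = 2\pi = \frac{10}{\varphi} + \cdots
```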

kenbellows | 9 years ago | on: A History of Tug-Of-War Fatalities (2014)

He was referencing this statement from the GP: "Looped around a waist and the same person can get up to about 1.0-1.5x with cleats on turf." Point is, if tug-of-war players have been known to lose hands because the rope was wrapped around them, do you really want to see what happens when you wrap it around your waist? Maybe it's safer because your body is much thicker than your wrist, but is that really an experiment you want to run?

kenbellows | 9 years ago | on: Lisping on the GPU [video]

Not sure of the process, but it's possible on YouTube to allow community members to add captions for your videos (after approval by the channel owner); who knows, maybe a few generous folks with some spare time would do the work for you.

kenbellows | 9 years ago | on: Ask HN: Is web programming a series of hacks on hacks?

Minor comment regarding CSS: Flexbox alone was never supposed to fix layout; the real magic is supposed to be Flexbox + CSS Grid. As I understand it, Flexbox is truly designed for use in one dimension to govern local dynamic behavior within a larger CSS Grid layout that structures the whole page. At the moment the problem is the lack of CSS Grid adoption by browsers (see http://caniuse.com/#feat=css-grid). I'm still hopeful that the state of CSS layouts will drastically improve once Grid and Flexbox are both sufficiently supported.
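For example, the intended division of labor looks something like this (class names and dimensions are made up):

```css
/* Grid structures the whole page in two dimensions... */
.page {
  display: grid;
  grid-template-columns: 200px 1fr;
  grid-template-rows: auto 1fr auto;
  grid-template-areas:
    "header header"
    "nav    main"
    "footer footer";
}
.page > header { grid-area: header; }
.page > nav    { grid-area: nav; }
.page > main   { grid-area: main; }
.page > footer { grid-area: footer; }

/* ...while Flexbox governs local, one-dimensional flow within an area */
.page > nav {
  display: flex;
  flex-direction: column;
}
```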

kenbellows | 9 years ago | on: Organisms might be quantum machines

I mean... evolution explains the complexity of things, and evolution certainly has a foundation in the chemistry of DNA. Or are you asking why we have the particular configuration of biological entities we have instead of a different one?

kenbellows | 9 years ago | on: Hoaxes and scams on Facebook: How most of them work and spread

Using bad grammar and obviously fake UIs is a pretty well-known technique that's been used by Internet scammers and phishers for years to filter out the savvier potential victims. It seems the motivation is not that smarter users will report them or try to stop them, but that those who are naive enough to miss the obvious signs of a scam are also far more likely to actually fall prey to the scam, to send larger sums of money, and to possibly fall victim to multiple scams. In essence, the bad grammar and fake UIs are used to make the scammers more efficient. They don't need to waste time on people who will get a few steps in, then cause them trouble or recognize the scam and back out; if you don't see the clear signs up front, you likely won't notice any of the later ones either. This is the same reason that the "Nigerian Prince" email scam still survives.

Here's a research paper published by Microsoft on this very subject back in 2012: https://www.microsoft.com/en-us/research/publication/why-do-...

Here's a decent summary on Yahoo: https://www.yahoo.com/news/study--obvious-nigerian-scam-emai...

kenbellows | 9 years ago | on: The Apple Goes Mushy Part I: OS X's Interface Decline

I don't think people are arguing against ideograms; in fact, I think the argument is very much in favor of ideograms, but simultaneously that icons like the floppy disk save icon are not very good ones. From what I've seen, there seems to be somewhat of a trend away from object-based metaphors and toward action-based metaphors, at least when it comes to action icons like the save icon. Instead of a picture of a storage device, modern or extinct, many applications now use icons intended to represent the act of storing something digitally, either using an arrow directed down toward a rectangular shape meant to represent local media or using an arrow directed up into a cloud to represent cloud storage. (For an example of these, see Glyphicon's `glyphicon-save` and `glyphicon-cloud-upload` icons[1].)

One explicit advantage that these sorts of icons have is that they allow for a nice symmetry between Save and Open icons and upload/download icons (Glyphicon is again a good example; see glyphicon-open and glyphicon-cloud-download). This ties into another, perhaps more arguable advantage, a blurring between local and remote save actions. As applications become increasingly web-based, device-independent, and portable, it makes more sense to me to intentionally separate the "save" action from its destination; I don't care so much where or how my data is saved, I only care that it's saved and that I can get it back later.

I'd love to hear responses to my thoughts here; they sort of developed as I wrote the comment, so they're rather fresh at the moment.

1: http://bootstrapdocs.com/v3.1.0/docs/components/

kenbellows | 9 years ago | on: The Apple Goes Mushy Part I: OS X's Interface Decline

The idea is that if you choose a good enough icon, you won't confuse people who know the old one. Users are at this point used to encountering unfamiliar icons and trying to quickly guess what they mean, so if a sufficiently communicative icon is chosen there should be no problem.

Importantly, the user has no reason to directly contrast the new icon to the old one. The user doesn't answer the question "Is this icon as effective as that old one?", they simply have to answer the question "Do I know how to perform the action I want to perform?", and as long as your save icon clearly communicates its meaning, there shouldn't be much/any confusion when the user tries to save. They'll look for something that seems to say "Click me to save", they'll see your icon, they'll say "Hey, that looks like it means 'Save'!", and they'll try it. (Aside: I don't say this randomly, I'm speaking from experience here; there have been plenty of applications in recent years that have tried out new "Save" icons, and I can't say I've ever had a problem figuring out how to save with any of them.)

As far as what "the point" is in changing out the icon, the point is that the entire reason for using action icons on buttons, etc. is to give the user an intuitive sense of what action will be performed when they click it, and as time goes on, the link between the floppy disk and digital storage will become weaker and weaker. And while it may be true that we could drag that symbol with us by convention, my question would be, why bother? If we can come up with something better, especially if we can find something that isn't tied to any specific technology (and I'd argue that we have), isn't this an improvement? I can't think of any advantage the old icon has over new ones other than the small advantage that it's familiar, but, as I said above, I don't think that's enough.

In other words, instead of asking "Why should we get rid of the floppy disk icon?", I honestly think the better question is, "Why not get rid of the floppy disk icon?"

kenbellows | 9 years ago | on: The Apple Goes Mushy Part I: OS X's Interface Decline

That's all fine and good, but this response is entirely different from the blog's original argument that I was responding to. The original argument was "Icons should be based on real world objects because they give the user an immediate sense of what the icon is for." The argument here seems to be "People know what the current save icon means, so there's no reason to change it", and the thing is, this sort of "if it ain't broke, don't fix it" viewpoint is pretty antithetical to our jobs as interaction designers. Even if most people know what the save button means by convention, that doesn't rule out that we might find a better icon, one that is strictly iconic and not skeuomorphic in any way (in fact, I think some better alternatives have already been found and are gaining in popularity).

So in summary, while I agree that it's sometimes fine to use an old icon if enough people understand what it means by convention, this is not a good reason to avoid using the newer, less skeuomorphic icons that the linked blog post was trying to argue against.

kenbellows | 9 years ago | on: The Apple Goes Mushy Part I: OS X's Interface Decline

I'd argue that the quill pen was a special case. As you point out, it was an intentional anachronism to communicate a certain point. This was not the case with, e.g., the floppy disk, the notepad, the contacts book, etc. Everyone knew what the quill pen was because it was still frequently seen in portrayals of the olden days when it was used; even today, portrayals of Victorian England in Doctor Who or Revolutionary America in Hamilton inevitably show a few quill pens in use. But the reason people recognized floppy disks, contact books, and notepads was because they were still in active use at the time.

You point out that while the quill pen was long outdated when it was first used, "everyone still recognized the contents of that icon", but that's exactly the problem that the GP and GGP are pointing out: more and more of the old skeuomorphic icons reference real-world objects that younger users (and indeed some older users) actually don't recognize. Notepads, sure, we've still got those; contact books, eh, you'll see them once in a while, but tbh when I see a bare "contact book" icon without a label it occasionally takes me a second to figure out what I'm looking at; floppy disks, as has been argued to death, are entirely a thing of the past, with the exception of old systems and archives still in use in dusty university basements. Young users today essentially just know the image of a floppy disk as "the save button" without any skeuomorphic rationale backing it up.

The skeuomorphic link between computers and the physical objects we use is constantly degrading, to the point that using skeuo icons can sometimes actually inhibit the user experience and slow the user down while they try to figure out what they're looking at. We have common patterns emerging with no or very little connection to the real world; a great example would be the "hamburger" menu button. If there's any metaphor there in the user's mind, it's to the rows of items that will appear when you click on it, not to anything physical, yet it's perfectly comprehensible to anyone who's been using digital devices for any length of time.

kenbellows | 9 years ago | on: A Single Div

As long as we're being pedantic, it's definitely 'fewer'. 'Less' is only used in cases where you couldn't conceptually talk about a single instance of the thing being described. 'Less distance', but 'fewer miles'; 'less abstraction', but 'fewer layers of abstraction'. Even if in practice it may be difficult to find the edges of a single layer of abstraction, the word 'layers' is still metaphorically talking about a collection of individual items, and whenever this is the case we use 'fewer'.