And to add to the confusion, why are there different sized tiles?
Why on earth would that be confusing? When I see a double door next to a normal sized door I don't freak out and try to break down the wall instead.
I get the point being made in the article, but I don't quite buy it. I don't think people understand they can press app icons on the iPhone because the icons have raised shadows around them; I think they press them because they are visually eye-catching and surrounded by areas that are not. There are many different visual cues out there, and people adapt to new ones all the time.
Honestly, consistency is a much-prized and proven way to help users navigate an interface. If they see things where and how they expect, they can navigate more easily, find buttons and actions more easily, etc. It's also prized in design and aesthetics. Designers (and even the Android design site) recommend designing to a grid to get a uniform, nice-looking appearance. So altering the size of the tiles is considered bad from both a UI and an appearance perspective.
But why are the tiles different sizes? There may be no reason beyond aesthetics, but that's his point: differing sizes often convey information, and even when they don't, it takes effort on our part to realize that a size difference does not imply a functional difference.
And if I did see a double-door immediately next to a normal sized door, I would wonder why that was set up like that. Is one the emergency exit? Is one the freight-entrance?
I agree. I played around with the Surface RT for over 30 mins at a Microsoft store with the intention of buying it for my parents for Christmas, and I walked away because even I couldn't effectively figure out what the "rules" were for interacting with Metro. I'm sure if I gave it more time, I could, but there is no way my parents, who still use XP, would be able to figure it out.
I wasn't sure what I needed to do to get to the "Desktop" mode where it looked like Windows 7, or how to flip back and forth, and which things I could swipe, etc. I felt like it was a big mess because a lot of the UI features that we've come to expect were not there. In contrast, the iPhone and subsequently the iPad were intuitive right off the bat.
To be fair, I'm seeing a lot of this terrible UI experience in other things as well. For example, on Chrome when you are reading a PDF, if you want to save it or zoom, it's not obvious how to do it. You need to miraculously hover over the bottom right corner and then the buttons show themselves, but there are no visual cues indicating that that's what you're supposed to do. It's fancy, but terrible UI.
The same thing occurs on Facebook, where people are just expected to know where to hover in order to reveal functionality. I don't know where this trend came from, but it's terrible, and I think this article is showing an extension of how we are moving away from all the visual cues and things we've learned about UX in the past 30 years. Sure, it's different, but that doesn't mean it's better, especially when it forces people to hunt, peck, and guess for functionality, something UX is supposed to eliminate.
Anecdotally, my mom really likes Windows RT. Watching her use the traditional Start menu, or attempt to navigate Windows Explorer to find something, is an exercise in pain. She honestly enjoys the full-screen Start menu (it's easier to find the app she wants to start) and the WinRT full-screen apps (she doesn't have to remember or think about window/application life-cycle management). It pretty much works the way she wanted Windows XP to work in the first place.
When I use Windows 8, on the other hand, I spend 99% of my time on the desktop, and the transition to a full-screen Start screen is pretty jarring. But honestly, as far as new UI paradigms go, it's not that much of a mess... Try watching a Windows user use OSX for the first time. Or vice versa. Or a Mac user trying to use KDE.
I think the real-world analogy of the OP is a bit flawed. Babies don't instinctively know how to open a door; that is not something we are genetically programmed for. They learn by watching other people do it, and by trying. There is a low penalty for trying and failing to open a door correctly -- sometimes you push instead of pull -- and that is the point of a good user interface. Does Windows 8 succeed at that? Perhaps, but it's not a disaster.
A disaster would be a door that killed you if you tried to open it incorrectly.
A little OT, but it does make me chuckle to read this on a blog with little to no visual affordance. The title and the date are both permalinks, but there's no indication until you mouse over to check. That's fine - for the sake of cleanliness it's an acceptable trade-off. And, importantly, we've learnt that they probably are links. Consistency is also important. Whether people will (statistically) learn Metro successfully is yet to be seen, I guess, but it's more complex than this.
Though one thing that differentiates them is that the systems in which we learnt that titles and dates are typically interactable had mouse pointers. I can't quite express why, but it feels less annoying to mouse over something and discover it's interactable (via a change in the mouse cursor) than to stab at text on the screen.
I guess one is a more passive "will this do something if I interact with it" while the other is a more proactive "I'll try to interact with this and see if it works". One results in a yes or no answer, while the other results in a failed action, which seems more frustrating to me.
That all just goes to show that, as you said, it's more complex than simply missing affordances = unusable UI.
I think the issue is that one shouldn't go too far in the direction of skeuomorphism or too far away from it. Most people are comfortable with some level of skeuomorphism because it can be a powerful usability enhancement. As I type this I'm looking at the "add comment" button below here on Hacker News, and wouldn't you know it, it's got a slight gradient giving it a bevel, suggesting it occupies 3-dimensional space and can be pushed like a button.
Of course this can be taken too far. When too many skeuomorphic accents are added to a design, it can become rigid and noisy. As the OP mentions, if you go too far from skeuomorphism you run contrary to how the human brain works. My favorite user interface designs usually have a tasteful, well-placed set of skeuomorphic elements within an overall minimalist design. Tactile, not tacky.
The author doesn't get it. A lot of how users know what to click is based on consistency. If it's a tile on the Start screen, you can click it. Want to print? It's always in the same place. Want to share? Same place. Want to close an app? Always the same way. Want to see more options for an app? The same way.
Now within the app one could argue there is a stronger need for affordances, but even there I've yet to encounter a single problem in my use of several Win8 apps.
I find the Win8 interface a lot more intuitive than the OSX interface. But I'm sure others would find the opposite. I suspect a lot depends on your starting point and your predisposition. My four-year-old son figured out most of the Win8 interface in about 5 minutes (literally... at the MS store he was flying through the UI much better than I'd ever seen him with Win7 and a mouse).
On my iPad, my 2-year old is able to unlock the screen, go to Home in case any app is active and then open his favorite 2 apps (a simple game for toddlers and a painting app).
That's not saying much though; he just did what all kids do... tried things out and quickly memorized what worked, and it was easy and fun for him to do so because of the touch screen. He also taught me some shortcuts I had no idea were available, like how to multitask by switching between active apps or how to split the on-screen keyboard into 2 smaller pieces :-)
In general, kids can learn by trial and error quite efficiently, sometimes in a matter of minutes or seconds, and shouldn't be used as a benchmark for how intuitive an interface is, because all that really says about an interface is that it can be learned by trial and error by kids. Regular WIMP interfaces are indeed not intuitive for kids, because the interface is often exposed through hierarchical menus that can't be explored by children who can't read.
That's a lot of buildup just to say you feel the interface to be unintuitive.
We know how to interact with different items because of experience and common signals -- not all door handles are alike, but different interpretations of the 2 major types (knob and lever) are similar enough to visually signal to us their probable use-case (opening a portal in the wall).
Similarly, I could go on about how the "stop, wait, go lights" at the top of windows in OSX are counterintuitive because they are in the same location as their Windows counterpart, but have different functions. It's not intuitive because the same visual signals provide different outcomes.
Very sensational. I dislike these kinds of posts that get to the front page because of their title rather than the actual content :( If the content is good enough, please put something related to it in the title.
Agreed. This has nothing to do with the term "metrosexual".
"Metrosexual is a neologism, derived from metropolitan and heterosexual, coined in 1994 describing a man (especially one living in an urban, post-industrial, capitalist culture) who is especially meticulous about his grooming and appearance" - Source http://en.wikipedia.org/wiki/Metrosexual
Nah, I don't buy it. These kinds of things are easily learned. Web links don't have depth, and we all learned pretty fast that we can click them (in some instances they aren't even underlined).
Most desktop web browsers provide quite a few affordances for links, and do so automatically on hover: they change the cursor, underline the text, pop up a link description, and show the URL in the status area.
Touch UIs can't have these "introspective" affordances because hover is not practical in a touch-based UI.
Even with all these affordances, if a Web UI didn't distinguish a link from other content visually, it would make for a difficult interface to traverse.
Depth in this context is relative (affordance is a better term btw). A boldness in a sea of un-bold text is something worth investigating (affords something...), with your mouse, or just your eyes - then you can determine from the text whether you might want to click it or not. This is why it's common to underline links or change colour. You can't make a link the same as normal text and expect a user to learn which words you've used for links.
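To make that rule concrete, here is a small illustrative sketch (the `TextStyle` shape and the `affordsClicking` function are invented for this example, not taken from any real framework): a link affords clicking only if its styling differs from the surrounding body text in at least one visible channel.

```typescript
// Hypothetical model of the rule above: a link must differ from the
// surrounding body text in weight, underline, or colour before a reader
// can pick it out as clickable at all.
interface TextStyle {
  bold: boolean;
  underlined: boolean;
  color: string;
}

function affordsClicking(link: TextStyle, body: TextStyle): boolean {
  // Identical styling in every channel means the link is invisible as a link.
  return (
    link.bold !== body.bold ||
    link.underlined !== body.underlined ||
    link.color !== body.color
  );
}

const bodyText: TextStyle = { bold: false, underlined: false, color: "#000" };
const hiddenLink: TextStyle = { ...bodyText };
const blueLink: TextStyle = { ...bodyText, underlined: true, color: "#00e" };
```

Here `affordsClicking(hiddenLink, bodyText)` is false while `affordsClicking(blueLink, bodyText)` is true, which is exactly why underlining or colouring links became the convention.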
The point is that we already know these things by intuition; we shouldn't have to learn them.
"Don't make me think" - Steve Krug
Honestly, Android's newer Holo theming suffers the exact same problems. I've watched countless users and received support emails where people just don't *ing notice/try/use the action items in the new action bar pattern we're all supposed to be using. This pattern has us place sparsely decorated icons in the top bar, generally without even text. Tons of users completely miss them versus big, chromed, 3D-styled, pushable-looking buttons at the bottom of an app.
Even worse, the icons aren't supposed to have text and users are supposed to know to long press on them to find out what exactly they do. I've never in my life seen a user do that. I emailed a Google Dev Advocate about all this, asking if they actually had statistics and user studies to back up this new direction they are taking the UI, if it actually helped users in the metrics or was just designers trying to make things look pretty without actually helping. No answer.
Nobody is born with the skills to open a door or push a button. I have a <1yr old that still can't do either of those. That is something you learn. Metro is just a different type of UI that you might want to learn.
"Can I click on all those tiles?" - if you try to touch or click them, you will quickly see a 3D effect that mimics that of a push button (tile scales to 97.5% of its size similar to a pushed button). After that you will quickly learn that you can interact with tiles.
But a physical button has a shape or appearance that makes it look like something to push. It's learnable.
The author is making the point that Metro has zero visual cues, so it's not learnable. You don't know which tiles you can interact with until you try to interact with them.
But that then creates the expectation that "anything in a square box can be clicked on," which is going to make life difficult if you want to design an app but now can't use square boxes for pure information display.
Note that he's referring to Norman's use of the word "affordance," more correctly referred to as "perceived affordance." Both are important concepts in design.
http://en.wikipedia.org/wiki/Affordance
Another rant about the Metro interface. Can we please stop already? It's just beating a dead horse. Sure, Windows 8 is kind of beta quality, just like Vista was.
On a side note, the comparison with door handles is just wrong. We already have a generation who grew up with the idea of abstract controls. We have a save button that mostly looks like a floppy disk. How many 16-year-olds know what a floppy is?
Not many, and that's a good thing. The floppy icon never represented the abstract act of saving files well anyway. As it turns out, it's just plain better to present users with a button with clear text inside: Save. Or to remove it from the UI completely and auto-save for users.
On a side note, Windows 8 is beta, as were Windows Vista, WinXP pre-SP3, Win2k, Windows ME, Win98, Win95, Win 3.x, and previous versions. Microsoft has had only two versions of Windows that were really usable, stable, and fast: WinNT 4 and Win7.
There's very little I like about Metro, but I do like that Microsoft is focusing on the total user experience across all its products. Perhaps they need to divide the company into 3 components — consumer (win metro / tablet / phone), enterprise (win office / office suite / .NET), entertainment (xbox).
I've been a Microsoft fanboy [1], but I find very little about Windows that I love anymore. I understand they are trying to be visually different from OSX, but I'm not sure this is the right direction. OSX hasn't deviated from 'windows-based' app management, while Metro makes its history as Windows almost unrecognizable. As a power user with 2 monitors, usually running 2 or 4 windows in split or quad view, I don't see a UI that will be more adaptable for efficiency and multitasking. iOS handles multitasking very poorly too.
[1] DOS > Win 3.1 > 95 > 98/ME > NT > 2000 > Windows XP > .. converted to Apple ..
Metro is pulling things in the right direction, I think. Hyper-skeuomorphism did immense harm in the hands of copy-style designers, allowing them to justify their unoriginality through "I can copy that because it's actually copying a real-life object; you can't be unoriginal if you copy physical reality; all great artists did it" reasoning. It's better to explore the axis between obscene skeuomorphism and uber-minimalistic full-flatness than to pick one of the extremes, as both are obviously flawed.
Take, for example, the whole crop of Metro-style Bootstrap themes and pick a UI interaction element like buttons to see a scale of designs between decent micro-skeuomorphism and full-flatness. This one http://bootswatch.com/cosmo/#buttons or this one http://talkslab.github.com/metro-bootstrap/basecss.html#butt... sport fully flat Microsoft-style buttons with no hints of possible interaction, while others like http://inprogress.neuronq.ro/madmin/ show subtle hints of skeuomorphism (you can probably google for many other more or less Metro-fied Bootstraps...)
It appears his "thesis" is half way through: "It’s because our eyes know we’re in a 3D world. We can detect light sources, and degrees of shading, and depth. And without any of these, we’d be absolutely lost." My first thought: I'm reading English text, with no shading or depth, and it seems to be a pretty effective form of communication. tldr: the article is hyperbolic garbage.
Our ancestors did not own smartphones, so the broad evolution argument is kind of garbage. User interfaces need to be researched before we can make conclusions on skeuomorphism vs "pure digital".
Furthermore, the author talks about affordances and how Metro has none. This is false. Anything that can be touched on the screen reacts to your touch. For example, if you're scrolling down the main menu and your finger happens to press down on a tile, the tile will be "pushed inwards" at the point of contact (even if you haven't released your finger). It's very subtle, but it definitely lets your subconscious know that in the future, if you would want to press that thing, you can. Now it's not an immediate affordance like a door knob, but a touch screen in itself is an affordance for touching, and once you touch then the other affordances reveal themselves.
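As a rough sketch of the press feedback described in the two comments above (the `.tile` selector, the scale factor, and the function names here are illustrative assumptions, not Microsoft's actual implementation):

```typescript
// Sketch of Metro-style press feedback: while a pointer is down on a tile,
// it shrinks slightly (here to 97.5% of its size), mimicking a physical
// button being pushed in.
const PRESSED_SCALE = 0.975;

function tileTransform(pressed: boolean): string {
  return pressed ? `scale(${PRESSED_SCALE})` : "scale(1)";
}

// Wire up listeners only when a DOM is actually available (i.e. a browser).
const doc = (globalThis as any).document;
if (doc) {
  for (const tile of doc.querySelectorAll(".tile")) {
    tile.addEventListener("pointerdown", () => {
      tile.style.transform = tileTransform(true);
    });
    for (const evt of ["pointerup", "pointercancel", "pointerleave"]) {
      tile.addEventListener(evt, () => {
        tile.style.transform = tileTransform(false);
      });
    }
  }
}
```

The feedback only appears once contact is made, which is the commenter's point: the touch itself reveals the affordance, rather than the resting visual state.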
This line of argument always jumps right from rods and cones to perceived affordances without ever making the case that there's a significant gain to perceived affordances from mocked up depth. If this is so uncontroversially true, surely someone has done a study you can link.
Slightly OT:
"The Rods... so sensitive that they can be triggered by single photon"
This sort of thing is why I love HN. Irrespective of the topic, there is always some little gem I find somewhere. I was very sceptical of this claim, so I looked it up. Turns out it is possible:
http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.h...
http://www.ncbi.nlm.nih.gov/pubmed/10800676
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1281447/?page=1
At the highest level, Metro design feels like a case of design overgeneralization. It tries to apply the same look & feel principles at once in Touch, Desktop, and Web contexts.
Jack of Many Trades, Master of None.
Reduced discoverability is the exact issue XBOX Live's interface has had since they updated it -- it's harder to differentiate between content I've paid for, their internal marketing and external advertisements to the point where I deliberately avoid them and just use the XBOX button modal to navigate. There's no rhyme or reason for the element sizing, outside of [apparently] making all of the stuff that's actually relevant to me the smallest.
Really? I suppose you could have difficulty getting to Desktop mode, but the "Desktop" tile is quite clearly visible...
You know how many people here would have had to zoom in?? Are you new to the internet?? /end nerd rage/