I've been using Moment (https://inthemoment.io/) for about a year and am really proud of how I've been able to reduce total screen time from 4 hours to 45 minutes per day. Most of the gains were due to deleting Safari and Twitter from my device (Moment breaks down app usage based on battery usage).
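(If you're curious how a battery-based breakdown like that could work, here's a rough sketch - the apps and percentages are made up, and this is just my guess at the idea, not Moment's actual method:)

    # Illustration only: split measured screen time across apps in
    # proportion to each app's reported share of battery drain.
    total_screen_minutes = 45          # measured by the device

    battery_share = {                  # hypothetical OS battery report
        "Safari": 0.40,
        "Twitter": 0.35,
        "Mail": 0.25,
    }

    for app, share in battery_share.items():
        print(f"{app}: ~{total_screen_minutes * share:.0f} min")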
To anyone reading this, be aware of how grabbing your phone in the morning to read headlines or a Medium digest can turn into 30 minutes or even an hour. Think about how checking e-mails can take you completely out of context and badly dilute the quality of your work. Consider how the amazing small interactions with other people that make life beautiful can be destroyed by even a glance at the phone.
I really feel that monitoring my phone usage and actively cutting it down has improved my quality of life. Now that Apple has built tools for this into the OS, I hope more people will treat it as seriously as they treat exercise, nutrition, etc.
I'd be curious to know if reducing your mobile screen time has led to an increase in your desktop screen time. That is, if you're not looking at email on your phone, presumably you look at it later on the computer.
Is it the same for news reading, or do you just read less news these days?
Even if reducing mobile screen time leads to some increase in desktop screen time, it could still be a good thing for social conventions and quality face to face interactions. I'm just curious about whether we're reducing screen time overall, or just shifting it from mobile/interrupted to desktop/purposeful.
> Think about how checking e-mails can take you completely out of context and badly dilute the quality of your work.
I agree with this completely, but the converse is also true. Sometimes you need to switch context during the day, and scanning headlines and reading emails helps me do that (I'm doing it right now!).
The best context switching strategies I've found are eating and meditation, but those aren't always feasible or socially acceptable, so browsing headlines is a good stand-in.
Can anyone recommend an app for Android that does the same thing as Moment and is effective? (It's not that I don't know of any apps offering similar functionality, but so far none of them has really convinced me / worked for me.)
I say I'm a musician on the side, but truthfully I've been messing with sound instinctively since I came out of the womb, and I only picked up programming and design and stuff later on. As such, I gravitated toward electronic music doodads almost immediately.
This issue of "falling down the LCD well" has been at the forefront of electronic music for a while. Synthesizers were cool in the 70's because they had knobs, and any abstract logic involved was done by you, so you had to be into it.
Then the 80's and 90's came along and we got stuff like the DX7 and innumerable "workstation keyboards" that were little more than a tiny display and two or three buttons. Maybe a jog wheel if you were lucky.
These were unambiguously cleaner from a design perspective, but what people began to realize is that the screen was too much of an abstraction. Then of course DAWs came along and even the synthesizers themselves moved into the computer physically.
Music, like life in general, is visceral, and the screen is not. Musicians became frustrated with the lack of physicality.
The modular synthesizer approach of the 70's has made a HUGE resurgence with the Eurorack standard.
Not everyone, but LOTS of people actually PREFER a gigantic mess of tangled wires with physical plugs and knobs to a sterile pure-logic implementation on the computer that can do all the same stuff cheaper and in a more reproducible way.
I suspect we'll see a similar sort of resurgence of physicality across every product that has been absorbed into the computer screen.
As a life-long musician, I found that the tedium and aggravation of fighting the software and associated hardware to get it to do what I wanted - a wholly left-brain activity - was taking up half my time.
And that was after the learning curve of trying to adapt to the melange of hardware, interface, and software options.
That time and energy were taken away from creativity and experimenting with music to improve its richness, interest, and originality.
Modulars can be more expressive in expert hands (seldom the case). But making good music requires a different kind of expertise. And I hear that difference - and the cost of all the lost creative energy - on the radio every day.
The first time I heard about Eurorack was when the guy behind the Audio Damage plugins remarked on his blog that part of why he was making the move to Eurorack was that hardware couldn't be pirated.
Well into the era of digital mixing consoles, the complex tactile control surfaces remain. Faders and knobs are the right abstraction for working with sound. Contrast this with lighting, which has become almost entirely keyboard-driven; faders are now optional add-ons for some of the most popular consoles, like the Ion.
I know I am not the typical use case, for several reasons including the failure of voice recognition to handle speech impediments, but I doubt voice will be the way forward any more than smartwatches were - niche at best.
First, with speech everyone can tell what you are doing, and it is obtrusive - look at the old DMV signs banning cellphones, from back when all they could do was make calls.
Second, it is just plain worse as an interface - try using a phone tree. People have dutifully ignored the existence of phone-tree systems, except for the visually impaired, who frankly lack options and must use what everyone else would consider useless.
Third, there is less you can do with it, and thus less reason to get involved with the frontier. It is the same trap as the smartwatch: people asked what you could do with it, and the iWatch flopped despite Apple's trendiness.
Direct thought reading might work better, but that is in the easier-said-than-done category. We can't even make an acceptably accurate non-invasive glucose meter, so it is very unlikely to show up in consumer goods.
Interestingly, Google Glass also largely flopped, for several reasons, despite heading in the opposite direction - and it provoked sheer irrational hatred above and beyond all other carriable or constantly-recording cameras. AR sounds nice, but designers also need to factor in the significant glasses-wearing population. Glass also apparently had short battery life for something meant to be worn as a HUD.
Noting where things can go wrong is easy compared to figuring out where to go in the future, even with caveats like "10 years of directed research lead time". I think the personal electronics market has matured for now, until it can offer "magic" again. VR is neat, but it is a niche stuck in a chicken-and-egg situation.
I appreciate your sense of doubt about often overblown predictions for the usefulness of the next new thing, but I think some of your claims are a bit too pessimistic. The Apple Watch can hardly be considered a flop. They're now selling 8 million per quarter and have outsold the entire Swiss watch industry. In just 3 years, it became the most popular watch in the world.
Now more and more people have a voice-activated assistant not only in certain rooms of their home or in their pocket, but also on their wrist. It might seem to be a small thing not to have to reach into your pocket to invoke a voice assistant, but that small amount of time saved really adds up, especially if you're doing something with your hands like cooking or holding a baby, which some people still do!
>including the failure of voice recognition with speech impediments
There’s a massive wave of geriatrics coming, and various dysarthrias (be it as significant as a stroke or as minor as edentulism or poorly fitting dentures) are extraordinarily common.
Any voice tech that can’t handle speech impediments isn’t ready for the market - it’s just too big a demographic chunk.
The article points in the wrong direction. Yes, smartphones are the new smoking, but not because of their screens. It's not the screens that are addictive, it's what's on the screens - the internet. The internet is too good not to be addictive, especially with the rise of social media, internet news, and messengers. Switching to other, non-screen-based interaction channels will only bring all the notifications, social media, and news to those new channels - be it voice assistants, watches, or whatever else.
Exactly. The screen is just a window into another world where information is far more accessible and available. And calling it "addictive" isn't really appropriate (in general, obviously there are a lot of nasty Skinner-box apps which do deserve that name) when it's a high-quality resource and paying it more attention makes sense.
With AR and VR, my money is on more screen time in the future, not less.
Today, services we interact with on a screen bring us ads. This is how we pay for them.
Tomorrow, when new forms of interaction exist, we will have ads there too - on the watch, during Siri conversations, ...
Just as with Facebook in its early days, they will wait until we are used to the platform before the ads begin. I'm looking forward to the "vocal assistant" version of uBlockOrigin ;)
And notifications from irresponsible programs. The tragedy of the commons, where the commons is human attention.
Does anyone else think this kind of writing is unnecessarily hyperbolic? I'm so tired of reading articles that resemble the next Michael Bay script. The core of this article may have value, but I can't even get to it since it's drenched in distracting click-bait sauce.
This isn't story time, New York Times. Treat me like an adult.
> and about 11 hours a day looking at screens of any kind.
That is so depressing, but thinking about it myself, 11 is probably a minimum. I have to change careers to selling ice creams at the beach or something.
You will learn to loathe ice cream, the beach, and even the nicest people you could see there. (Quoth Kierkegaard: https://www.goodreads.com/quotes/7141047-marry-and-you-will-...)
Think of it this way: we spend 16 hours a day looking at things in general. Over 50% of our brain's cortex is dedicated to processing visual information. It makes sense that we're going to gravitate towards devices that provide us with an easy means of giving us visual information. We're just structured that way. If over 50% of our brain's cortex was dedicated to processing aural information, or physical touch, or smell, then the dominant devices nowadays would be related to that.
The main problems with screens (and the reason I try not to use them as much as I used to) are:
1) If you're interacting with a screen all the time, you're probably not interacting with someone in person, and everyone's ability to read body language and subtle cues probably goes way down when they do interact in person.
2) Current screens emit light that at the very least probably harms our biorhythms, and may harm us in other ways as well. (Meanwhile, the Kindle isn't much different from looking at a book or a sign.)
3) Screens provide a limited window into another world, and at least right now, that window has zero depth (and doesn't engage our other senses either, while we're at it). We don't get to take advantage of having two eyes and seeing depth while we're on a screen. This may change once VR becomes more viable and realistic.
I do try to do things more analog myself now though. Writing or designing on paper while outside in good weather is much preferable to inside on a screen. It's also why I've gravitated to more offscreen hobbies, such as board games and board game design, as opposed to staying on screens and programming games and apps in my off hours. Screens can let you get those things done faster, though (i.e. it's much faster and less straining on my hands to type than to write all of my thoughts).
For now, we still have a choice. The article points out that we have a hard time resisting the urge to use our phones when they are in the room - but what happens when AR/VR becomes so embedded that we can no longer distinguish between what is real and what is augmented?
Most of us are already choosing to give in to screen time. What happens when we no longer have an easy choice?
Take long six-hour walks every day after work. No need to do something as drastic as changing careers. I bet ya that you'll get bored with outside play soon enough, and you'll come back with more appreciation for the comforts of an engineering life.
Great message, but this is mostly an opinion piece with few technical details. I am also not a fan of how the answer is to wait for Apple (Big Tech) to save us.
Bret Victor wrote a piece [0] lamenting the convergence on screens as the interaction design paradigm almost 7 years ago. Bret explains why screens are limiting interaction design through examples centered around the human body.
Ironically, this NYT piece gives the impression that a human being is a floating head and fingers i.e. an AR/VR avatar that they seem to loathe. I hope the future of computing isn't just the ability to check my calendar without a screen while walking. I want to use my body in tandem with computation. I don't have a Killer App for this interaction paradigm, but I found this paper by Scott Klemmer, Björn Hartmann, and Leila Takayama useful for thinking about it [1].
[0] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi... [1] https://hci.stanford.edu/publications/2006/HowBodiesMatter-D...
I disagree with Bret for fundamental reasons. I think Bret is trying to make computing more human, when I view computing as fundamentally unhuman. It seems that there is an inverse correlation between screen use and mental health. Bret's solution is to improve our technology. My solution would be to limit our use of technology. Computers are fabulous for certain things: transmitting information, processing data, automating repetitive things, but they're awful for many others. I worry that by trying to make our interactions with computers more human we're just going to become more dependent on them.
Visual signals and interfaces have information density and permanence, which gives gleanability. You can put lots of things into your visual field (e.g. multiple application windows, multiple computing devices, multiple inanimate objects), where they'll stay put without further input. Inherent cognitive ability even allows us to track multiple objects in motion throughout a scene and keep them coherent -- which is the skill that enables driving. Visual interfaces are very well suited to how humans best absorb information, and how they context switch given a large number of potential tasks.
A world where we begin to move off of visual interfaces will be awkward. While humans are good at absorbing conversational audio, they mentally filter most of it out to distill it down to its essential elements, and knowing what's essential may not even be known ahead of time. We'll direct voice-outputting interfaces to repeat things often, and they must be smart enough to accurately determine the context of our inquiry.
Voice output is often paired with voice input, but voice propagates well in public, leaking information to everything in range. Devices that capture speech-like input in a private way are not yet widespread. Meanwhile, structured command input through voice is awkward, and natural language processing doesn't sound natural yet. It's complex to implement and the computer frequently encounters a situation it doesn't yet understand, which is the most discouraging kind of interaction one can have with a computing platform. Factors like these highlight that audio-based interfaces are rarely programmed to be discoverable, and even if they were, exchanging that information over audio is less efficient than doing so visually.
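To make the awkwardness concrete, here's a toy sketch of structured command matching - the command patterns are invented for illustration, not any real assistant's API:

    import re

    # Toy command grammar: anything outside these exact patterns fails,
    # and nothing on screen tells the user which phrasings would work.
    COMMANDS = [
        (re.compile(r"set a timer for (\d+) minutes"), "timer"),
        (re.compile(r"what time is it"), "clock"),
    ]

    def handle(utterance):
        for pattern, intent in COMMANDS:
            if pattern.fullmatch(utterance.lower()):
                return "ok: " + intent
        return "Sorry, I didn't understand that."

    print(handle("Set a timer for 10 minutes"))  # ok: timer
    print(handle("Remind me in ten minutes"))    # Sorry, ...

The second utterance means the same thing as the first, but falls outside the grammar - and unlike a visual menu, the failure message gives the user no map of what would have worked.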
New research into interface design is needed to address many of the shortcomings of current attempts to de-emphasize screens.
I took a class in rapid prototyping earlier this spring. One question has resonated with me since: “How does a human appear to the computer?”
There was a grotesque drawing of an eyeball attached to an ear along with a couple of fingers. It’s not entirely inaccurate: most of our interactions with computers are with our fingers, eyes, and ears. But now that microcontrollers like the Arduino and SBCs like the Raspberry Pi are so cheap and accessible, we can begin to look at different ways to interact with computers, through sensors instead of touchscreens and keyboards.
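For example, a few lines of Python are enough to prototype a screenless, sensor-driven interaction - assuming a Raspberry Pi with a PIR motion sensor on GPIO 4, the gpiozero library, and the espeak text-to-speech tool; the spoken message is just a placeholder:

    from signal import pause
    from subprocess import run
    from gpiozero import MotionSensor

    pir = MotionSensor(4)  # PIR motion sensor wired to GPIO pin 4

    def announce():
        # espeak is a common text-to-speech CLI on Raspberry Pi OS
        run(["espeak", "Good morning. You have two events today."])

    pir.when_motion = announce  # speak instead of lighting up a screen
    pause()                     # keep running, waiting for sensor events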
In a few decades, we may see a shift in our human-computer interfaces as lasting and profound as the leap from mainframe terminals to personal computers.
The author sees this as a problem whose solution is a new revolution, but doesn't seem to realize that the underlying problem might be that we are constantly seeking the next revolution - or that we refuse to accept the horror of people no longer buying stuff.
The title suggested to my brain that some sort of eyeglass device would handle projections from your phone in a non-intrusive way, for quick access to things like your calendar, the last text received, etc.
No 3D - it could just be text, with voice control for what to display. The eyeglasses would look like normal eyeglasses, maybe with a heavier frame to house the needed electronics. Or maybe the ear loop holds the extra hardware, without being as thick as a hearing aid.
Anyhow, I think what the article intended is that the next revolution will be in perfecting voice commands for apps. This is obviously for consumers, not computer geeks who work on computers all day.
This seems like a clear next step; however, I have a hard time believing that the underlying goal of this direction is NOT a pair of AR glasses - another, potentially more addictive, screen.
I’m personally really excited about that potential. I would love to pivot my career away from teaching to building AR workflow mediation for teachers. I would even be pleased to only carry a watch and headphones to fulfill the majority of my computer-related tasks.
That said, I do fear Hyper-Reality[1] and such a persistent, obligatory mediation of our lived experience.
Fuck AR glasses. I absolutely do not want a society where they are an important component of participating in economic and social life. In my mind, that is a far greater threat to human freedom than any corrupt government or act of violence.
Wow, yet another internet moral panic piece from Farhad Manjoo and the New York Times. For extra credit replace smartphone and facebook with MTV and television and see if you can tell the difference from something written in 1983.