For years we’ve been told that our smartphones spell our inevitable doom. Spending too much time with a screen in your face will supposedly increase your risk for depression, ruin your sleep, and exacerbate your anxiety—especially if you’re young. But new research suggests the science behind those claims is a lot more complicated than most of us realize, and that the claims themselves may be greatly exaggerated.
Jean Twenge, Ph.D., a psychologist at San Diego State University, tells SELF that she began to worry in 2012 when the psychologists behind Monitoring the Future, a decades-long study of teenage behavior, reported a steep and unexplained decline in happiness and an accompanying rise in depression. A subsequent report from Pew Research Center revealed that 2012 was the year in which the fraction of Americans who owned smartphones approached 50 percent.
The potential link steered her subsequent research, which culminated most recently in the 2017 publication of iGen, her book chronicling the vast and mostly negative effects that screens—phones, for the most part—have had on adolescents.
But Amy Orben, a doctoral candidate at Oxford University studying the psychological impact of social media, tells SELF she was skeptical. She was baffled by the hand-wringing trickling through the scientific literature on screen time. She felt unscathed by the devices she’d used throughout her teenage years. And she couldn’t help noticing the demographic of many of the hand-wringers. Most of those researchers were “above a certain age,” she says.
Digging Into the Data on Tech and Wellbeing
Orben decided to do her own analysis of the data behind iGen. She didn’t see what Twenge saw.
In January, Orben published a paper asserting that screen time was no stronger a risk factor for adolescent depression than eating potatoes or wearing eyeglasses.
For her study, published in Nature Human Behaviour earlier this month, Orben and her co-author Andrew Przybylski re-analyzed the publicly available (and quite large) datasets that many other researchers use to study the potential effects of technology use.
The researchers dug into data for 355,358 people (predominantly between the ages of 12 and 18) included in three large, ongoing surveys (Monitoring the Future, the Youth Risk Behavior Survey, and the U.K. Millennium Cohort Study) using statistical tools designed to ferret out genuine connections between two variables—in this case, wellbeing (including measures of depression, suicidal ideation, and overall mental health) and technology use (including how much time participants spend on social media and playing video games, and how they consume news).
They then analyzed other studies correlating mental health with activities and physical characteristics in the same way and with the same demographic. They found that the link between technology use and diminished wellbeing was not only minuscule, but also comparable to the link seen among factors that seem very unlikely to have such an effect (eating potatoes, for instance).
Overall, their results suggest that more and different research is needed before we draw any firm conclusions about the risks of screen time.
So what are parents—and anyone else worried about the negative effects of screen time—supposed to do? The contradictory research refuses to yield concrete answers, and the data are harder to untangle than a thousand earbuds.
The Many Limitations of the Research
There's no shortage of research looking at the correlations between technology use and wellbeing, but drawing conclusive findings from that data is more complicated than you might think.
One issue, Orben says, is the size of the datasets, which sometimes include hundreds of thousands of adolescents. A group that large will have a huge number of variables at play, such as the amount of time the parents spend with their child, whether or not both parents are employed, how happy the parents are, and whether or not the child has a long-term illness. All of these can independently influence mental health, so isolating the potential effects of just digital exposure time is tricky.
Plus there’s the question of whether certain types of phone use are worse than others, which has barely been explored, Twenge says. So far, though, some of her data hint that live social interaction (such as video chats and some games) may not drag us down as much as more passive activities, like scrolling through social media, she says.
The designs of the studies can also be problematic. For instance, Orben points to the work of Andrew Gelman, Ph.D., a Columbia University statistician who has written extensively on what he calls “the garden of forking paths” (from the title of a short story by Jorge Luis Borges). With this approach, researchers decide how they will analyze their data one step at a time, based on what the previous step reveals.
For instance, researchers who don’t find depression among all teens who use digital technology might then narrow their investigation to smartphone use only. If those data aren’t meaningful, then they might compare mental health among girls who use social media versus boys who do the same. At each fork, the results of the prior decision guide the way. The published study reports this approach, Orben says, “as if that one path was meant to be." This type of cherry-picking undermines the validity of the ultimate conclusion, says Orben, because in reality, the study was essentially cooked up to find something meaningful. Ultimately, the headlines we see reflect the eventual interesting finding, not all the insignificant findings that are dismissed along the way.
The problem pervades psychological research, with many investigators accused of “fishing expeditions” in which they keep casting their line until they hook an attention-grabbing finding. Orben’s paper found more than 600 million paths that the U.K. Millennium Cohort Study—a long-term investigation chronicling behavior and development among 19,000 people born in the U.K. between 2000 and 2001—could have followed.
Massive datasets can make tenuous connections seem stronger than they really are, which might be the case with screen time. The issue partly boils down to the way researchers analyze their results. They benefit from reporting an impressively small p-value—a statistic describing how likely a result at least that extreme would be if there were no real effect. Studies with very large numbers of participants can render even trivially small differences statistically significant, leading to headline-generating conclusions built on statistical noise rather than a meaningful effect.
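To see how sample size inflates significance, consider a quick back-of-the-envelope sketch (the numbers here are hypothetical, chosen only to mirror the scale of the surveys described above). The standard test statistic for a correlation grows with the square root of the sample size, so a very weak association that looks unremarkable in a small study becomes overwhelmingly "significant" in a huge one:

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether a Pearson correlation of r,
    computed from n observations, differs from zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

r = 0.05  # a very weak (hypothetical) association
for n in (100, 10_000, 350_000):
    # The same correlation, tested at three sample sizes
    print(f"n = {n:>7}: t = {t_statistic(r, n):.1f}")
```

At n = 100 the statistic is well under 1 (nowhere near significant); at a sample size comparable to Orben and Przybylski's 355,358 participants, the same r = 0.05 yields a t near 30, and the p-value is vanishingly small even though the underlying effect is trivial.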
Orben’s study relies on a tool called the percent or proportion variance explained (PVE). Whereas the p-value gauges how confident we can be that one variable is associated with another—for example, screens making us sad—PVE reveals the magnitude of the effect. A small PVE suggests that, although screens might be making us sad, the effect is actually very minor, Michael Lavine, Ph.D., a statistician with the U.S. Army Research Office, tells SELF. Chris Ferguson, Ph.D., a psychologist at Stetson University in Florida, tells SELF that a small PVE could also reflect an error.
Orben and Przybylski did find a negative association between screen time and adolescent wellbeing, but the PVE was 0.24 percent. Tiny. They compared that figure to the PVE for other behaviors and found that the detrimental effect of screens was only slightly greater than that of eating potatoes (0.17 percent). Being bullied was worse (4.5 percent).
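For a concrete sense of what those percentages mean: PVE is just the squared correlation expressed as a percentage, so you can work backward from the figures reported above to the correlation each one implies. A small sketch, using only the numbers in the article:

```python
import math

def pve_to_r(pve_percent):
    """Magnitude of the correlation implied by a
    percent-variance-explained figure."""
    return math.sqrt(pve_percent / 100)

# PVE figures reported in Orben and Przybylski's analysis
for label, pve in [("technology use", 0.24),
                   ("eating potatoes", 0.17),
                   ("being bullied", 4.5)]:
    print(f"{label:>15}: PVE = {pve:.2f}% -> |r| ~ {pve_to_r(pve):.3f}")
```

The 0.24 percent figure for technology use corresponds to a correlation of roughly 0.05, versus about 0.21 for being bullied—which is why the authors describe the screen-time association as minuscule.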
On the other hand, Twenge objects to their use of percent variance, which was called out as deceptive by renowned psychologist Robert Rosenthal back in 1979. “People who want to make these events look small report them in terms of percent variance,” she says, “even though it’s pretty useless.”
PVE, Twenge says, considers all possible causes of a result (teen depression, for instance), which isn’t what parents want to know. Sure, your genetics could play a role, but those can’t be altered. So it’s more useful to gauge how happy teens are who spend more or less time with digital media, she says. The data in iGen offer that comparison, which is a “much better measure,” she says.
But even this is up for debate among researchers, it seems: “[Rosenthal's assertion] is dead,” Ferguson says. “Percent variance does matter.”
These disagreements may be exciting fodder for researchers, but what does this mean for the rest of us who are just wondering how worried we should be about screen time? Lavine offers a helpful middle ground: Percent variance is legitimate, he says, but a small figure doesn’t mean the risk is meaningless.
Even if a specific effect is small, “it could still be an effect worth talking about.” The key is whether any given variable—too much screen time, eating potatoes, being bullied—has a plausible explanation. Screen time and potatoes might have some association with ill health, Lavine says, but the explanations for each link differ. And one might seem more plausible than the other.
In this case, it’s not hard to make the case for why increased screen time could have detrimental effects on your overall wellbeing, whereas it’s a little harder to make that case for eating potatoes. Still, the research doesn’t tell us that screen time causes widespread detrimental health effects for an entire population.
A Dataset of One
Where does all of this leave individuals who are trying to decide what’s best for themselves or their children?
In this case, the plausible explanation has to be based on a sample size of one: the person whose wellbeing is at stake. And that’s really the only “dataset” most of us have access to. Just because it’s plausible that excess screen time reduces mental wellbeing, that doesn’t mean everyone is going to experience that to the same degree.
The frustrating answer is that we’ll need more research to really understand what’s happening here, if anything. That’s because studies showing a link between digital technology and depression don’t necessarily prove that the former caused the latter. The correlation could exist because the users were already depressed and turned to social media for a pick-me-up. Or some third factor could be responsible for both, like the fact that they’re teenagers going through all kinds of changes. It’s also essentially impossible to do a double-blind placebo-controlled study on this association, so all we have is correlational data and that can only tell us so much. It can’t tell us what effect screen time is going to have on one specific individual, or how different types of technology use would affect that one person.
Ultimately, though, Orben emphasizes that the point of her “science satire” was not to disprove specific claims about the risks of screen time, but to point out the issues with the quality of the research in general. “Once we ask the correct research questions,” she says, a clearer picture of screen time’s actual risks can emerge.
But Twenge—and, for the record, the American Academy of Pediatrics (AAP)—isn’t content to wait, because the escalating rates of depression and self-harm are real. “If there’s some chance that the excessive amount of time teens are spending on phones has something to do with it,” she says, “we should take that possibility seriously.”
The AAP suggests setting the limit at one hour of screen time per day for children between the ages of 2 and 5. For older children, the AAP suggests “consistent limits” but does not specify the total hours. Twenge suggests two hours but acknowledges the boundaries are still vague. “You could make a case for three or four hours if you wanted to,” she says.
As complicated as the research may be, her overall prescription is relatively simple and falls in line with much of what we already know about sleep hygiene: “No phones in the bedroom, no phones an hour before bedtime, and no overuse during the day.”
Whether or not those rules are sufficient—or even necessary—for each and every person remains to be proven.