248: The Anxious Generation Review (Part 2): Does Social Media Actually Cause Kids’ Depression and Anxiety?

Four diverse children sitting close together on what appears to be a couch or bench, each looking down at their own smartphones or mobile devices

In Part 1 of this mini-series looking at Jonathan Haidt’s book The Anxious Generation, we discovered that the teen mental health crisis might not be as dramatic as The Anxious Generation claims – and that changes in diagnosis and coding could be inflating the numbers. But even if we accept that teens’ struggles have increased somewhat, the next crucial question is: what’s actually causing the change?

Jonathan Haidt is adamant that social media causes depression and anxiety in teenagers. He claims that “dozens of experiments” prove social media use is a CAUSE, not just a correlate, of mental health problems. But when you dig into the studies, as we do in this episode, we’ll see that the ‘causal’ data is nowhere near as strong as Haidt claims.

We’ll examine the experimental evidence behind social media and teen mental health claims, reveal why leading researchers compare social media effects on teens to eating potatoes, and uncover what factors actually explain 99% of youth mental health outcomes. Because if we’re going to spend time and energy helping our kids, we want to make sure we’re spending it doing things that will actually help.

Questions This Episode Will Answer

Does social media really cause teen depression and anxiety? Research shows correlation, not proven causation, with social media effects on teens explaining less than 1% of the variance in wellbeing, similar in size to the effect of eating potatoes. (Some researchers argue that this is still important enough to pay attention to – the episode explores why.)

Why do I keep hearing that social media is harmful if the research is weak? Many (but not all) social media studies find some evidence of harm, but when you look at the methodology this isn’t surprising – researchers do things like sending participants daily reminders that “limiting social media is good for you,” and then asking them how much social media they’ve consumed and how they feel. It’s hard to draw strong conclusions from this data!

How can different studies on social media show opposite results? Researchers studying teen social media use can get completely different results from the same data depending on how they choose to analyze it. The episode looks at those choices and what they mean for understanding whether social media causes kids’ depression and anxiety.

Is limiting my teen’s social media use actually going to help them? Current evidence suggests that some kids who use social media a lot are vulnerable to experiencing depression and anxiety, and limiting their use specifically may be protective. There is little evidence to support the idea that blanket bans on kids’ social media/smartphone usage will result in dramatic improvements in youth mental health, and focusing on issues that are more clearly connected to mental health would likely have a greater positive impact.

What You’ll Learn in This Episode

  • How most social media research creates results that don’t tell us what we want to know (but then reports the results as if they do)
  • How the same teen mental health data can be analyzed to support opposite conclusions about social media effects on teens
  • What family relationships, academic pressure, and economic stress reveal about the real drivers of youth mental health issues
  • How social media and teen mental health correlations explain the same amount of variance as seemingly irrelevant factors like potato consumption
  • How researcher bias and study design flaws make social media studies less reliable than parents think
  • What happens when kids who benefit from social media lose access to it
  • Why the focus on teen social media use might distract from addressing bigger factors affecting your child’s wellbeing
  • How to evaluate social media research claims critically when making decisions about your family’s technology use
  • What the ongoing debate between leading researchers reveals about the uncertainty in digital wellness science
  • Why blanket solutions like social media bans might miss the complex realities of teen mental health challenges

Dr. Jonathan Haidt’s Book

The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (Affiliate link)

 

Jump to highlights

00:45 Introduction of today’s episode

01:40 Haidt explains that, after reviewing many research studies with his colleagues Jean Twenge and Zach Rausch, he concluded that social media doesn’t just happen to show up alongside mental health problems in teens – it’s actually creating them. He argues the research shows that social media use leads to increased anxiety and depression, rather than simply being something that anxious and depressed teens tend to use more often

05:28 According to Dr. Gray, despite potential placebo effects boosting results, researchers found mostly no significant improvements in wellbeing from reducing social media use, only small effects on loneliness and depression that could easily be explained by chance

12:20 Dr. Amy Orben’s Specification Curve Analysis is a sophisticated attempt to show how research choices affect outcomes

15:12 A study by Schwartz found that both the group that quit Instagram AND the control group that kept using it normally BOTH improved on measures of depression and self-esteem, which the researchers admitted might just be because being in a study about social media usage made people more aware of their usage

26:54 Dr. Twenge’s studies of over 100,000 teens found heavy social media users were twice as likely to report depression, low wellbeing, and suicide risk, especially among girls

31:42 Dr. Orben uses a technique called Specification Curve Analysis, which is a way to evaluate how the choices a researcher makes affect the study outcomes

34:35 Some of the factors that are bigger contributors than screen time usage

42:53 Dr. Orben describes repeating technology panics: radio, comics, TV, video games, now social media. Research lags behind fears, creating cycles where society panics about new tech before understanding previous ones

50:19 People tend to agree with yes/no questions regardless of content, even contradictory statements. Question wording heavily influences responses, inflating correlations due to response style rather than genuine opinions

54:00 Wrapping up

 

References

Centers for Disease Control and Prevention. (2016). Epi-Aid 2016-018: Undetermined risk factors for suicide among youth, ages 10–24 — Santa Clara County, CA, 2016. Santa Clara County Public Health Department. https://files.santaclaracounty.gov/migrated/cdc-samhsa-epi-aid-final-report-scc-phd-2016.pdf

City of Palo Alto. (2021). City of Palo Alto: Suicide prevention policy and mental health promotion [Draft policy document]. Project Safety Net. https://www.psnyouth.org/wp-content/uploads/2021/06/DRAFT-Palo-Alto-Suicide-Prevention-Policy-and-Mental-Health-Promotion-dT.pdf

Clinical Practice Research Datalink. Clinical Practice Research Datalink (CPRD) is a real-world research service supporting retrospective and prospective public health and clinical studies. CPRD. https://www.cprd.com/

Curran, T., & Hill, A. P. (2022). Young people’s perceptions of their parents’ expectations and criticism are increasing over time: Implications for perfectionism. Psychological Bulletin, 148(1-2), 107-128. https://doi.org/10.1037/bul0000347

Evolve’s Behavioral Health Content Team. (2019, September 13). Long-term trends in suicidal ideation and suicide attempts among adolescents and young adults. Evolve Treatment Centers. https://evolvetreatment.com/blog/long-term-trends-suicidal-ideation-suicide-attempts-adolescents-young-adults/

Evolve’s Behavioral Health Content Team. (2020, July 27). Mental health and suicide statistics for teens in Santa Clara County. Evolve Treatment Centers. https://evolvetreatment.com/blog/mental-health-suicide-santa-clara/

Faverio, M., & Sidoti, O. (2024, December 12). Teens, social media and technology 2024: YouTube, TikTok, Instagram and Snapchat remain widely used among U.S. teens; some say they’re on these sites almost constantly. Pew Research Center. https://www.pewresearch.org/wp-content/uploads/sites/20/2024/12/PI_2024.12.12_Teens-Social-Media-Tech_REPORT.pdf

Garfield, R., Orgera, K., & Damico, A. (2019, January 25). The uninsured and the ACA: A primer – Key facts about health insurance and the uninsured amidst changes to the Affordable Care Act. KFF. https://www.kff.org/report-section/the-uninsured-and-the-aca-a-primer-key-facts-about-health-insurance-and-the-uninsured-amidst-changes-to-the-affordable-care-act-how-many-people-are-uninsured/

Gulbas, L. E., & Zayas, L. H. (2015). Examining the interplay among family, culture, and Latina teen suicidal behavior. Qualitative Health Research, 25(5), 689-699. https://doi.org/10.1177/1049732314553598

Haas, A. P., Rodgers, P. L., & Herman, J. L. (2014, January). Suicide attempts among transgender and gender non-conforming adults: Findings of the National Transgender Discrimination Survey. American Foundation for Suicide Prevention and Williams Institute, UCLA School of Law. https://williamsinstitute.law.ucla.edu/wp-content/uploads/Trans-GNC-Suicide-Attempts-Jan-2014.pdf

Haidt, J., & Rausch, Z. Better mental health [Ongoing open-source literature review]. The Coddling. https://www.thecoddling.com/better-mental-health

Haidt, J., Rausch, Z., & Twenge, J. (ongoing). Social media and mental health: A collaborative review. Unpublished manuscript, New York University. Accessed at tinyurl.com/SocialMediaMentalHealthReview

Hunt, M., Auriemma, J., & Cashaw, A. C. A. (2003). Self-report bias and underreporting of depression on the BDI-II. Journal of Personality Assessment, 80(1), 26-30. https://doi.org/10.1207/S15327752JPA8001_10

Johns Hopkins Medicine. Premenstrual dysphoric disorder (PMDD). Johns Hopkins Medicine. https://www.hopkinsmedicine.org/health/conditions-and-diseases/premenstrual-dysphoric-disorder-pmdd

Martin, J. L. (2002). Power, authority, and the constraint of belief systems. American Journal of Sociology, 107(4), 861-904. https://doi.org/10.1086/343192

Mueller, A. S., & Abrutyn, S. (2024). Addressing the social roots of suicide. In Life Under Pressure (pp. 191-218). Oxford University Press. https://doi.org/10.1093/oso/9780190847845.003.0008

NHS Digital. (2020). Mental health of children and young people in England, 2020 [Data set]. UK Data Service. https://doi.org/10.5255/UKDA-SN-9128-2

Programme for International Student Assessment. (2024, May). Managing screen time: How to protect and equip students against distraction. OECD. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/managing-screen-time_023f2390/7c225af4-en.pdf

Rosin, H. (2015, December). The Silicon Valley suicides: Why are so many kids with bright prospects killing themselves in Palo Alto? The Atlantic. https://www.theatlantic.com/magazine/archive/2015/12/the-silicon-valley-suicides/413140/

Royal College of Paediatrics and Child Health. (2020, March). Suicide. State of Child Health. https://stateofchildhealth.rcpch.ac.uk/evidence/mental-health/suicide/

Sarginson, J., Webb, R. T., Stocks, S. J., Esmail, A., Garg, S., & Ashcroft, D. M. (2017). Temporal trends in antidepressant prescribing to children in UK primary care, 2000–2015. Journal of Affective Disorders, 210, 312-318. https://doi.org/10.1016/j.jad.2016.12.047

Scottish Government. (2024, March 18). Supporting development of a self-harm strategy for Scotland, what does the qualitative evidence tell us? Gov.scot. https://www.gov.scot/publications/supporting-development-self-harm-strategy-scotland-qualitative-evidence-tell/

Thomas, J. F., Temple, J. R., Perez, N., & Rupp, R. (2011). Ethnic and gender disparities in needed adolescent mental health care. Journal of Health Care for the Poor and Underserved, 22(1), 101-110. https://doi.org/10.1353/hpu.2011.0029

Townsend, E., Ness, J., Waters, K., Rehman, M., Kapur, N., Clements, C., Geulayov, G., Bale, E., Casey, D., & Hawton, K. (2022). Life problems in children and adolescents who self‐harm: Findings from the multicenter study of self‐harm in England. Child and Adolescent Mental Health, 27(4), 352-360. https://doi.org/10.1111/camh.12544

U.S. Department of Health and Human Services, Office of Minority Health. (n.d.). Mental and behavioral health – American Indians/Alaska Natives. https://minorityhealth.hhs.gov/mental-and-behavioral-health-american-indiansalaska-natives

Wong, Y. J., Wang, L., Li, S., & Liu, H. (2017). Circumstances preceding the suicide of Asian Pacific Islander Americans and White Americans. Death Studies, 41(5), 311-317. https://doi.org/10.1080/07481187.2016.1275888

Zulyniak, S., Wiens, K., Bulloch, A. G. M., Williams, J. V. A., Lukmanji, A., Dores, A. K., Isherwood, L. J., & Patten, S. B. (2021). Increasing rates of youth and adolescent suicide in Canadian women. The Canadian Journal of Psychiatry, 67(1), 67-69. https://doi.org/10.1177/07067437211017875

Transcript
Emma:

Hi, I'm Emma, and I'm listening from the UK. We all want our children to lead fulfilled lives, but we're surrounded by conflicting information and clickbait headlines that leave us wondering what to do as parents. The Your Parenting Mojo podcast distills scientific research on parenting and child development into tools parents can actually use every day in their real lives with their real children. If you'd like to be notified when new episodes are released and get a free infographic on the 13 reasons your child isn't listening to you and what to do about each one, just head on over to yourparentingmojo.com/subscribe, and pretty soon you're going to get tired of hearing my voice read this intro, so come and record one yourself at yourparentingmojo.com/recordtheintro.

Jen Lumanlan:

Hello and welcome back to Your Parenting Mojo. This is Part Two of our deep dive into Dr. Jonathan Haidt’s book The Anxious Generation. In Part One, we questioned whether there really is a teen mental health crisis as dramatic as Haidt describes. We saw how changes in diagnosis, coding, and healthcare access might be inflating the numbers, as well as problems with the methodology of the studies that show this finding. We visited Palo Alto, California, where tight community bonds couldn't prevent crushing academic pressure from driving teen suicide rates to four to five times the national average. But let's say that we accept that teen mental health challenges have increased to some extent, even if it’s not as much as Haidt says they have. The next question is: what is causing that increase in challenges? Haidt is adamant that social media is the culprit. In The Anxious Generation, he writes: "Taken as a whole, the dozens of experiments that Jean Twenge, Zach Rausch, and I have collected confirm and extend the patterns found in the correlational studies: Social media use is a CAUSE of anxiety, depression, and other ailments, not just a CORRELATE." That's a bold claim. Today, we're going to examine those dozens of experiments. We will look at what happens when you tell psychology students you're studying "digital wellness" and then ask them to track their social media use. We'll explore natural experiments where entire regions get high-speed internet at different times. And we'll dig into the fierce debate between researchers who see catastrophe and those who see... well, not so much at all. By the end of this episode, you'll understand why one researcher compared the effect of social media on teen wellbeing to the effect of eating potatoes - and why that comparison matters more than you might think.

Jen Lumanlan:

Let's dive into those studies that Haidt references as showing a correlational relationship between social media use and kids’ wellbeing. At this point, I want to bring in Dr. Peter Gray’s work, yup, the same Peter Gray whom I interviewed way back in April two thousand eighteen for episode sixty-two on Why We Need to Let Our Kids Take More Risks. Dr. Gray’s Substack is where I initially found some of the critiques of Haidt’s work, but I really didn’t dig into them any further until I started writing this episode. So Gray takes one of the papers that Haidt cites in The Anxious Generation as providing experimental support and shows why that might not be the right conclusion to draw from it. This is the paper by Melissa Hunt et al. called No more FOMO: Limiting social media decreases loneliness and depression.

Jen Lumanlan:

In this paper, a hundred forty-three psychology undergrads were randomly assigned to either limit Facebook, Instagram, and Snapchat usage to no more than 10 minutes per platform per day, or to use social media as usual for a three-week period. The students responded to questionnaires to tell the researchers about their wellbeing, and the researchers verified the students' social media usage through screenshots from their devices. Gray says: “The most damaging flaws with this study, which should be obvious to any social scientist, is there are no controls for demand effects or placebo effects." The demand effect is one that shows up across many of the experiments I read for this episode, and happens when the participant knows or can guess the researcher's hypothesis, and acts to confirm the hypothesis. The participants KNEW they were supposed to track their social media use and report on their wellbeing. They will have been told many times by their parents, by their teachers, and by the media that social media usage is bad for you, so it seems hard to imagine that the students couldn't figure out the purpose of the experiment, and there's a decent likelihood that this shaped their SELF-REPORTED descriptions of their wellbeing. In addition to the issues that Dr. Gray notes, I’ll add here that participants were psychology students looking for course credit, so it seems likely they could figure out who was doing the research, even if it was done anonymously, and what its purpose was, and they may even have wanted their professor to get a publishable result. They might not have wanted their professor, a figure in authority, to know that they were struggling with mental health issues, which could have affected how they rated themselves in the study.

Dr. Gray notes that there was no placebo in this study, which is like when a company designs a drug to reduce depression and gives some people the drug and some people a pill with no medicine in it, to see if the pill with the medicine performs better than the one without. In this case, there's no way to create a placebo condition because you can't hide from the participants whether they're in the experimental or control group. Gray goes on to say: "But now here is the kicker. Despite that POSSIBILITY of demand and placebo effects, the researchers found very weak and inconsistent effects of the intervention on their measures of wellbeing. They found no significant overall effect on psychological wellbeing (combining the measures), no effect on anxiety, no effect on self-esteem, no effect on autonomy, no effect on self-acceptance. They found only rather small effects on self-reported loneliness and depression, and the latter effect was only significant for those participants who showed higher than average depression at the beginning of the study. When you measure a lot of things, without predicting beforehand which things will be affected and which will not, one or more of those things might come out, by chance, in a manner consistent with the initial hypothesis, especially when the effects are rather small. In my mind it is at least as reasonable to attribute the reported findings in this study to chance, or to the placebo effect, or the demand effect, or to some combination of all of these, as it is to attribute them to the reduced use of social media."

Other factors that Gray doesn't identify, but that could very well have swayed the results, are that the participants self-reported their social media use AND their psychological wellbeing. The social media use screenshots were supposed to be an objective measure, but it's possible that just being more aware of your social media usage, as well as your mood, could have affected the wellbeing of the students in either group. Students in BOTH groups actually showed reductions in anxiety and fear of missing out, which suggests support for the benefits of self-monitoring. The experiment only lasted for three weeks, so who knows whether the temporary boost in wellbeing that was seen would persist over time, as we would want to know it does before creating state-level or national policies to regulate teens’ screen time use.

Haidt described a study by Kleemans and colleagues in The Anxious Generation like this: Another study randomly assigned teen girls to be exposed to selfies taken from Instagram, either in their original state or after modification by the experimenters to be extra attractive. He goes on to quote: "Results showed that exposure to manipulated Instagram photos directly led to lower body image." That study had the same placebo and demand effect problems as the Hunt study. Even though the participants were told the research was about facial preferences, it's hard to imagine an adolescent girl wouldn't have heard that 'perfect' Instagram images are harmful to body image, which could have predisposed them to report lower body satisfaction after they saw the manipulated photos. Despite the cover story about facial preferences, the act of being asked to scrutinize Instagram selfies and then to think about their body satisfaction could have primed the girls for negative self-reflection, in a similar way to how girls who are asked to be aware of their gender may then perform worse on a math test.

Jen Lumanlan:

Regarding the demand effect, the girls may well have guessed the study's hypothesis and then felt pressure to report 'negative' feelings after seeing the manipulated images. The researchers knew which group participants were in, and participants could probably have figured it out too if they noticed obvious photo manipulation. The participants were asked to view 10 selfies in a row, which isn't typical Instagram usage and might exaggerate effects that wouldn't occur with casual, real-world browsing. The sample was limited to Dutch adolescent girls of a similar ethnic background, who were recruited using a snowball sampling method, which is a fancy term for 'we asked people we knew and then asked them to refer us to other people,' and which could have introduced selection bias into the results. Girls who were already interested in or affected by social media and body image issues might have signed up and then referred the researchers to their friends who were also affected by social media and body image issues. The study measured immediate, short-term impact, so we don't know anything about longer-term or cumulative impacts. It also used self-reported information about body image, which could have been affected by the participants' mood, context, and perceived expectations rather than long-term psychological changes or actual behavior in a real-world setting. The images were viewed alone, just as images, not in the context of the likes and comments you get on Instagram, which might either buffer or exacerbate effects, and there was no control for the participants' baseline mood or body satisfaction, which again could have influenced the results. And that’s just two of the hundreds of studies that Haidt cites to demonstrate that social media use CAUSES anxiety and depression.

He cites two studies specifically in support of this idea; the first is by Primack et al., which DOES attempt to give us a longer-term perspective by taking a baseline and then following up six months later, but it’s limited by the same placebo and demand effect problems that we saw in other studies, along with social media usage data being self-reported, which may not accurately reflect real usage. In addition, only fifty-six percent of the initial two thousand four hundred eight respondents completed the follow-up, and the ones who remained were older and more likely to be female, which could have affected the results. The second study is by Shakya and Christakis, which was observational and longitudinal, which means that participants were not randomly assigned to use or not use Facebook, and they weren't told how Facebook might affect their wellbeing, which goes part way to addressing the demand effect issue. But participants knew the researchers had access to their Facebook data and that they were being asked questions about their mental and physical health, which might have influenced their responses if they believed social media is supposed to be harmful. The study didn't have a control, so there was no comparison group engaging in non-social media activity online, which makes it difficult to rule out effects from general screen use versus Facebook specifically. As usual, participants self-reported data on their physical and mental health and life satisfaction, which is vulnerable to the participant's mood at the time of reporting, problems with remembering what's happened over the last six months, and a desire to demonstrate (or not) a connection between mental health and Facebook use. People who felt guilty or conflicted about the time they spent on Facebook might have rated themselves more poorly, regardless of any true effect. The sample was drawn from a panel created by the research firm Gallup, made up of people who were willing to respond to repeated surveys and voluntarily allowed access to their Facebook data. The participants in this study were younger, more female, and more socially connected than the broader population, so the sample is likely not representative of all Facebook users, let alone all social media users or your average teen who is using TikTok for hours a day. To be fair to the researchers, some have genuinely tried to address these methodological problems. Dr. Amy Orben's Specification Curve Analysis is a sophisticated attempt to show how research choices affect outcomes. When she tests all plausible analytical pathways, she's essentially saying, 'Look, I could make this data show whatever I want depending on my choices, so let's see the full range of possible findings.' We’ll hear more about some specific findings of this analysis when we start discussing Dr. Jean Twenge’s work.

Jen Lumanlan:

The problem isn't so much individual researchers doing bad work. It's that we're trying to use imperfect tools to answer questions that might be impossible to answer definitively. And when those imperfect answers are used to justify major policy changes, we need to be especially cautious. The studies I’ve mentioned so far are the ones cited in The Anxious Generation as directly supportive of social media CAUSING the "tidal wave" of mental illness. Dr. Haidt also maintains Google Docs where he collates information that he wasn’t able to fit into the book. One of them is the Social Media and Mental Health collaborative review, where he, Zach Rausch, and Jean Twenge gather relevant studies; there are 24 studies listed in section three point one, which has the subtitle Experiments Indicating Harm. (And just as a side note here, some researchers wrote a blog post for the London School of Economics’ blog that was pretty critical of the book, and one of their points was how Haidt shows the research supporting his opinion in the main text, and acknowledges the ones that don’t support his position in the endnotes. At least in the Google Docs the experiments not indicating harm are described right after the ones indicating harm.) I was able to find 16 of those experiments indicating harm; the remainder weren't publicly available and weren't available through my institution. Here's what I found in the sixteen I looked at. Pretty much all of them have the same demand and placebo effect problems that we've already seen. Allcott and colleagues literally told participants they were studying "the welfare effects of social media" and then PAID people a hundred and two dollars to deactivate Facebook. Eighty percent of participants later said that deactivation was "good for them," but is there ANY chance the participants knew what they were SUPPOSED to answer? As we saw earlier, it's very difficult to set up a good control condition where the participants don't know whether they're in the treatment or the control group. If you're asking people to quit social media, they know that the study is about social media and that they're in the experimental group. Only the studies Haidt cites by Dr. Julia Brailovskaia attempted to have proper control groups, where some groups increased their physical activity instead of reducing their social media usage. When you ask someone to monitor any behavior closely, they often feel more purposeful and in control. A study by Schwartz found that both the group that quit Instagram AND the control group that kept using it normally BOTH improved on measures of depression and self-esteem, which the researchers admitted might just be because being in a study about social media usage made people more aware of their usage.

Jen Lumanlan:

A study by Faulhaber emailed a treatment group every day with a reminder to limit social media use to 30 minutes per day across their top 3 platforms, to test the effect of self-monitoring their use. The control group were given no instructions about changing their social media use, but completed the same surveys about their mental health. Only the treatment group's wellbeing improved, but this isn't really a test of self-monitoring. The treatment group got monitoring encouragement, yes, AND a behavioral target of 30 minutes of usage AND external daily reminders. The treatment group was essentially told 'limiting social media is good for you' every morning, but the control group didn't receive this messaging. There was also no verification that people ACTUALLY reduced their usage to 30 minutes. The researchers even acknowledged: "It is important to point out that whether participants limited their social media usage to the prescribed 30 min is not the critical aspect of this experiment. The critical aspect is that participants were trying to limit their social media usage. Even though many participants may have not been able to reduce their social media use to exactly 30 min every day, the intervention was still effective." This is classic placebo effect -- people felt good about TRYING to be healthier, whether or not they changed their behavior very much.

Most of the researchers told participants they were doing something related to some aspect of digital wellness, which may tap into the cultural narrative about technology being bad for you. Participants might feel virtuous for cutting back, regardless of the actual effects. A study by Brown & Kuss observed that people use social media for 'coping, habit, and boredom.' If you take away a coping mechanism without addressing the underlying stress, is this really helpful in the long term?

A study by Turel and colleagues recruited students by telling them it was "a study about Facebook use behaviors." They only included people who already used Facebook at least two hours a day, scored high on a depression or anxiety screening, and were willing to limit their use. Then they told the intervention group to "embrace a personal challenge -- to abstain from using Facebook for up to one week (seven days). To do this, we ask that you log out of Facebook on your computer, cellphone, tablet and any other devices, and that you consider uninstalling the app from your phone or tablet. If you find that you absolutely cannot make it the full seven days, please complete survey 2 before you resume use of Facebook." So the instructions made it pretty clear that the study was about Facebook use, that the researchers thought the students' Facebook behavior was excessive and that they might have trouble making it a week without accessing it, and that abstaining was framed as a 'personal challenge.' Finding that participants who already experienced mental health challenges felt relief when they were given both permission AND ACADEMIC CREDIT to take a break doesn't prove that social media caused their stress in the first place. The study title, "Short abstinence from online social networking sites reduces perceived stress, especially in excessive users," is not a lie, but I'm not sure it belongs in the "Experiments indicating harm" section of Haidt's google doc. The same problems come up over and over again – a study by Thai and colleagues recruited people who were already experiencing emotional distress, told them the study was about social media use, and then found that those who reduced usage to 60 minutes per day reported better "appearance esteem." The researchers themselves acknowledge that when people reduce social media time, they do other things instead -- but they didn't measure or control for what those other activities were. If someone spends less time comparing themselves to filtered photos and more time moving their body or hanging out with friends in person, which change actually helped their body image? The vast majority of these studies only look at very short-term outcomes -- they might check in with participants a week or at most a month after their social media use reduction and see how they're doing. Again, the two studies by Julia Brailovskaia had the longest follow-ups, one of three months and the other of six months. But these studies had very narrow samples of mostly female German university students, they relied on students reporting their own social media use and well-being, and they didn't look at the types of social media use people reduced.

There are now several meta-analyses available that purport to show a link between children's social media usage and reduced mental health. But a meta-analysis is a way of statistically analyzing the results of a lot of different studies, and when the studies included in the meta-analysis have the kinds of problems we've reviewed, I wouldn't have any more confidence in the meta-analysis than I do in the individual studies. If the studies we're looking at find statistically significant results, which some of them do, then the meta-analysis is likely to find a statistically significant result. But that does not help us to overcome all of the other methodological issues we've seen.

Jen Lumanlan:

You might be asking at this point: 'Why does all this methodology stuff matter? Studies have limitations - so what?' Well here's why it matters for your family: when research is this flawed, we can't tell the difference between correlation and causation. And if we can't tell what's actually causing the problems our kids face, we might end up fighting the wrong battle. Imagine if doctors treated every fever by putting patients in ice baths, without checking whether the fever was caused by infection, by heat exhaustion, or by medication side effects. That's essentially what happens when we assume that screens are the problem without solid evidence of what’s really behind the change that we’re seeing.

Given all of the methodological problems with the experimental studies – things like the demand effects, the lack of proper controls, the short timeframes, the fact that it’s hard to get parental permission to experiment on children - you might wonder if there's any way to study this question more rigorously. That's where 'natural experiments' come in. These quasi-experimental studies take advantage of real-world changes, like when broadband internet rolls out to different regions at different times. If social media really is driving mental health problems, we should see clear patterns as internet access improves. Let's see what these studies actually found.

The assumption is that as broadband gets faster, kids want to use it more, and their mental health suffers as they’re on social media more. I looked at all three studies on broadband rollout that Haidt cites, one each from England, Spain, and Italy. The Spanish study couldn't isolate social media use from general internet use, but the researchers looked at the connection between the timing of negative mental health effects and the rise of Instagram and TikTok. There were some unfortunate issues with the Spanish study, including that a link between broadband rollout and depression/anxiety was found for MEN born between nineteen eighty-five and nineteen ninety-five but NOT women. Yet the abstract at the beginning of the paper says: "In particular, internet access leads to an increase in diagnoses of depression, anxiety, drug abuse, and personality disorders - for both males and females - and of eating and sleep disorders - for females only." This is not the first time I've seen one result discussed in a paper and another reported in the abstract, but it's disappointing to find it here.

The time period the Italian study covers mostly predates widespread social media, and yet the researchers observed mental health effects there as well which implies that mental health impacts are not uniquely tied to social media; that they can come from other things people do online -- they mention gambling, and perhaps we might add porn to that list as well.

Jen Lumanlan:

The English study was especially interesting because broadband improvement was rolled out a few neighborhoods at a time, rather than across entire cities. The average neighborhood has about six hundred fifty households, and the six thousand children in the sample lived in three thousand seven hundred and sixty-five neighborhoods. This made the broadband rollout data quite fine-grained, and the health and wellness data was quite fine-grained as well because it's part of a much bigger study of over forty thousand households with eight waves of data. Children in the house are given a questionnaire where they're asked about how they feel about their school work, appearance, family, friends, school, and life as a whole. There are also questions about whether kids belong to a social media site and, if so, how many hours a day they spend interacting with friends on social media, and what other activities they're involved in, which is vulnerable to the usual limitations on self-reported data. Strangely, the study found that broadband speed is strongly associated with better performance on exams at age ten to eleven, and with WORSE performance on exams at age 16. The largest effect found was that a one percent increase in broadband speed decreased how children felt about their appearance by about 0.6% on average. There was no statistically significant relationship between the use of social media and girls' satisfaction with their friends or family relationships. Spending 5+ hours per day on social media had a larger negative effect on wellbeing than other online activities like watching TV and gaming, and for girls the effect sizes were comparable to effect sizes from issues like bullying or family conflict found in other research. The researchers believed that time spent on social media crowds out time spent on other activities, especially sports, since social media use was associated with taking part in fewer activities, although this kind of assumes that a kid who is bouncing to a different sport or instrument or whatever every night is somehow doing 'better' than one who focuses only on getting really good at ONE activity.

Both the English and Spanish researchers split their data by various factors like gender, age, and urban/rural areas. Because they don't state up-front that they're planning to do this analysis, especially when initial results aren't significant but BECOME significant after the splitting, we start to worry that p-hacking is happening, which is when you go searching in your data for some sort of statistically significant finding. There's no direct evidence that this happened, but the researchers don't say up-front what analyses they were going to do, and they then go on to analyze several different sub-groups of data and compare multiple variables without correcting for this, which increases the likelihood of false-positive results. So once again we're left wondering why the researchers didn't SET UP the study in a way that could actually tell us what we wanted to know, rather than seeming to back into it along the way.
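To make that multiple-comparisons worry concrete, here is a minimal simulation. This is my own toy illustration, not code or data from either broadband study, and all the numbers (sample size, number of subgroups) are made up: even when broadband speed has no effect at all on wellbeing, testing a dozen uncorrected subgroups gives you close to a coin-flip chance of finding at least one "significant" result somewhere.

```python
import numpy as np

# Toy illustration (not from either broadband study): simulate data in which
# broadband speed has NO real effect on wellbeing, then test a dozen subgroups
# without correcting for multiple comparisons and count how often at least one
# subgroup comes out "statistically significant" anyway.
rng = np.random.default_rng(0)

def one_study(n=2400, n_subgroups=12):
    speed = rng.normal(size=n)                        # broadband speed (standardized)
    wellbeing = rng.normal(size=n)                    # wellbeing, unrelated to speed
    subgroup = rng.integers(0, n_subgroups, size=n)   # e.g. gender x age band x urban/rural
    for g in range(n_subgroups):
        mask = subgroup == g
        m = int(mask.sum())
        r = np.corrcoef(speed[mask], wellbeing[mask])[0, 1]
        t = r * np.sqrt((m - 2) / (1 - r**2))         # t-test for a correlation
        if abs(t) > 1.96:                             # roughly p < .05 at this subgroup size
            return True                               # found a "significant" subgroup effect
    return False

false_positive_rate = np.mean([one_study() for _ in range(2000)])
print(f"Studies finding at least one 'significant' subgroup: {false_positive_rate:.0%}")
# With 12 uncorrected tests the expected rate is about 1 - 0.95**12, i.e. ~46%,
# even though there is no real effect anywhere in the simulated data.
```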

The final sets of data that I wanted to look at were a body of work arguing that screen time IS harming our children, and also one that argues that maybe our concerns are overblown. Dr. Jean Twenge's work was the obvious choice for me on the 'is-harming' side, since she collaborates with Jonathan Haidt so often, and he cites her work in The Anxious Generation. I came across Dr. Amy Orben's work several times as I was researching this episode, and she often comes to the opposite conclusion. I downloaded every paper I could find from each of their websites that was relevant to this topic, which was 20 from Dr. Twenge and 19 from Dr. Orben.

Jen Lumanlan:

Reading Dr. Twenge's studies, I can see why she would be alarmed! She looks at multiple large datasets, often of over a hundred thousand participants that are representative of the U.S. population, and finds that teens who use a lot of digital media, especially social media, are twice as likely to report low well-being, depressive symptoms, and suicide risk factors compared to light users. She finds that this pattern occurs in the U.K. as well and is especially pronounced for girls. The relationship isn't linear -- teens who use social media for up to an hour a day often have slightly higher well-being than teens who don't use it, but well-being declines steadily as you go beyond 1-2 hours per day. Most of Twenge's findings are correlational, which means she can say that screen time and wellbeing are linked but can't prove that one causes the other, though she does cite longitudinal studies suggesting that more social media use can predict later declines in well-being, rather than a decline in well-being predicting social media use. She proposes that sleep disruption, displacement of in-person interactions and exercise, social comparison, and cyberbullying create the negative effects.

Twenge’s critics say that self-reported screen time is imprecise, and that when you ask teens to keep track of their screen time as it's happening AND ask them to look back over a week or a month, you tend to get very different pictures of their use, and we don’t know which one is correct. Twenge responds that both of these methods yield similar associations with teens' wellbeing, especially for heavy social media use. She does consider gender differences and distinguishes between different types of screen use, which shows that not all screen time is equally problematic. This is really important because so many of the studies DON’T make this distinction, and different kinds of screen time do seem to affect boys and girls differently.

Twenge believes that studies not finding an impact associated with screen time may be 'over-controlling' by including mediators like parental distress that can hide associations with screen time that really ARE there. But her critics argue that even though she uses some demographic controls, confounds that she DOESN’T control for, like family environment and pre-existing mental health issues, could contribute to the results she's finding.

When we think about the size of the datasets we’re looking at, we find ourselves at the intersection of Twenge and Orben's work. When you use huge datasets, you tend to be able to find relationships between all kinds of things that aren't very relevant. Orben actually critiques Twenge's work by comparing the association between well-being and digital technology use to associations with other everyday activities, and that’s where she found that the link between teens' wellbeing and their digital technology use had about the same correlation as the link between their wellbeing and whether they eat potatoes. Both correlations were statistically significant in a dataset of more than sixty thousand people, but as far as we know there is no real relationship between wellbeing and eating potatoes.

This gives a concrete example of how researchers can 'fish' for significant results by testing thousands of variable combinations in large datasets, and produce some statistically significant correlations that end up being irrelevant in the real world. I'm reminded of a study that I once read on some kind of intervention to get babies to sleep more, which created a reduction of about 1/3 of a waking episode per night, from something like 10 wakings to 9.6 wakings. Statistically significant, yes, but does this have any relevance to the sleep-deprived parent's life? Not so much. So, Orben points out that the same goes for the screen time data: often the correlations Twenge finds explain less than 1% of the variance in wellbeing. The effects vary by gender, developmental stage, and the choices the researcher makes in analyzing the data, and even when effects are there, they aren't always statistically significant.
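If you want to see how "statistically significant" and "explains almost nothing" can both be true at once, here is the back-of-the-envelope arithmetic. The numbers are illustrative assumptions on my part (a correlation of r = 0.1, a sample of sixty thousand), chosen only because they are in the rough ballpark of the figures being debated:

```python
import math

# Illustrative arithmetic only: assume a correlation of r = 0.1 between social
# media use and wellbeing in a sample of 60,000 teens.
r, n = 0.10, 60_000

variance_explained = r ** 2                        # proportion of wellbeing variance
t = r * math.sqrt((n - 2) / (1 - r ** 2))          # test statistic for the correlation

print(f"Variance in wellbeing explained: {variance_explained:.1%}")   # 1.0%
print(f"t-statistic: {t:.1f}")                     # ~24.6, so p is vanishingly small

# Even a correlation of 0.01 (explaining 0.01% of the variance) clears the
# conventional p < .05 bar in a sample this large:
r_tiny = 0.01
t_tiny = r_tiny * math.sqrt((n - 2) / (1 - r_tiny ** 2))
print(f"t-statistic for r = 0.01: {t_tiny:.1f}")   # ~2.4, still 'significant'
```

Squaring the correlation is what gives the "less than 1% of the variance" figure; the sheer size of the sample is what lets even trivial correlations count as statistically significant.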

Let's spend a minute looking at these analytical choices. In any study, researchers make analytical decisions like which variables to control for, how to define concepts like 'well-being,' and which time-frame to study, and each choice made impacts the results. We saw lots of instances of this earlier when we talked about the experimental data, and each one of those choices affects the study’s outcomes. This is why I have a hard time when people criticize my bias, because I know I’m biased, and I tell you what my biases are. But the idea that scientific research is somehow NOT biased is really mistaken. It’s just that the bias is so baked into the studies that we can’t see it as easily.

Jen Lumanlan:

Dr. Orben uses a technique called Specification Curve Analysis, which is a way to evaluate how the choices a researcher makes affect the study outcomes. This is a method researchers use to test many reasonable ways of analyzing the same data. It helps to show whether the main result is consistent across different choices that the researcher might make, or if it only appears under specific conditions and certain choices. Orben's work showed that some published results on screen time, including Twenge's, are among the most negative possible and aren't representative of the full range of plausible outcomes. That means you could reasonably look at the same data that Twenge did and make different analytical choices and reach very different conclusions than Twenge does.
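As a rough sketch of the idea (a toy version with made-up data and made-up analytic choices, not Orben's actual code or datasets), a specification curve just means running every defensible combination of analysis decisions on the same data and looking at the whole spread of estimates rather than a single one:

```python
import itertools
import numpy as np

# Toy specification curve: one made-up dataset analyzed under every combination
# of a few plausible-sounding analytic choices, to show how much the estimated
# "effect of social media" depends on those choices.
rng = np.random.default_rng(1)
n = 5_000
social_media = rng.gamma(2.0, 1.5, size=n)                    # hours per day (invented)
true_effect = -0.05                                           # small built-in effect
wellbeing_items = rng.normal(5, 2, size=(n, 4)) + true_effect * social_media[:, None]
covariates = {"parental_conflict": rng.normal(size=n),
              "sleep_hours": rng.normal(7.5, 1.0, size=n)}

specs = itertools.product(
    [[0], [1], [2, 3], [0, 1, 2, 3]],                         # which wellbeing items to average
    [[], ["parental_conflict"], ["parental_conflict", "sleep_hours"]],  # which controls to add
    [False, True],                                            # log-transform usage or not
)

effects = []
for items, controls, log_usage in specs:
    y = wellbeing_items[:, items].mean(axis=1)
    x = np.log1p(social_media) if log_usage else social_media
    x = (x - x.mean()) / x.std()                              # per-SD effect, comparable across specs
    X = np.column_stack([np.ones(n), x] + [covariates[c] for c in controls])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    effects.append(beta[1])                                   # coefficient on social media use

effects = np.sort(np.array(effects))
print(f"{len(effects)} specifications; estimated effect ranges "
      f"from {effects[0]:.3f} to {effects[-1]:.3f}")
# Plotting the sorted coefficients gives the 'specification curve'; any single
# published estimate is just one point along it.
```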

Orben also argues that many studies conflate between-person associations (which means comparing different people at one time) with within-person effects (which means how changes in one person's behavior affect their own well-being over time). She finds that longitudinal analyses show that within-person effects are even smaller than cross-sectional ones. This is pretty important if we're thinking about banning screen time, because what we'd be counting on is a change in an individual person's wellbeing after removing their access to screens.
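Here is a toy simulation of why that distinction matters, with invented numbers that exist purely to illustrate the two levels of analysis: in this made-up panel, less-happy teens use more social media overall, yet changes in any individual teen's usage tell you nothing about changes in that same teen's wellbeing.

```python
import numpy as np

# Toy panel (invented numbers): 1,000 teens measured at 8 waves. Teens who are
# generally less happy also generally use more social media (a between-person
# association), but wave-to-wave changes in an individual teen's usage are
# unrelated to changes in that teen's wellbeing (no within-person effect).
rng = np.random.default_rng(2)
n_teens, n_waves = 1_000, 8

trait_unhappiness = rng.normal(size=(n_teens, 1))             # stable individual differences
usage = 2.0 + 0.8 * trait_unhappiness + rng.normal(size=(n_teens, n_waves))
wellbeing = 5.0 - 1.0 * trait_unhappiness + rng.normal(size=(n_teens, n_waves))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# A naive pooled ("cross-sectional") correlation mixes the two levels together:
print(f"Pooled correlation:        {corr(usage, wellbeing):+.2f}")     # about -0.4

# Within-person: subtract each teen's own averages before correlating.
usage_w = usage - usage.mean(axis=1, keepdims=True)
wellbeing_w = wellbeing - wellbeing.mean(axis=1, keepdims=True)
print(f"Within-person correlation: {corr(usage_w, wellbeing_w):+.2f}")  # about 0
```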

I am feeling pretty frustrated about the state of the research base at this point. I can imagine an entire research program that would address the shortcomings in each of the kinds of studies we’ve reviewed, and even include studies designed by teens, to help us finally answer once and for all the extent to which screen time affects children's health. I actually DID imagine it all, and described it all for you, and then I realized you wouldn't want to listen to it for the 10 minutes it would take me to describe it. Because at the end of the day, The Anxious Generation is using words like "The surge of suffering" and "this tidal wave of anxiety and depression" and "the mental health crisis" to describe a very impressive-looking set of hockey stick graphs that EXPLAIN LESS THAN 1% OF THE VARIANCE IN CHILDREN'S WELLBEING. Twenge's rebuttal is that we would have to dismiss things like the link between smoking and lung cancer if we used the same 1% variance standard to dismiss other risk factors. Twenge and Orben are essentially talking past each other -- Twenge says 'Let's stop dismissing meaningful associations based on variance explained,' and Orben says 'Let's use more sophisticated methods to understand the true relationships.' But whether we perceive that 1% as meaningful or not, we can't deny that there's another 99% of the variance to explain. Here’s what the research says are just SOME of the factors that are bigger contributors than screen time usage:

The first is family relationships. One of the studies we looked at earlier from the UK found that 64% of 11-15 year olds cited family relationships as their primary problem contributing to their self-harm. There’s a significant association between positive supportive relationships with parents and young people’s wellbeing and life satisfaction.

Jen Lumanlan:

The second is social connections and relationships. This isn’t just about HAVING friends, but the quality of those friendships. Strong friendships can be especially protective when a child isn’t getting support from their family. But face-to-face time with friends was declining long before smartphones existed, which suggests much broader social changes are at work.

The third is Economic security - The stress of poverty is terrifying, particularly in the U.S. Financial instability affects everything from where families live, to whether parents are home or working multiple jobs, to whether there's food on the table. Those without families, youth of color, homeless youth, asylum seekers, refugees, and trans youth are especially precarious and vulnerable. Having money can be protective for the people who have it, but not having money can be incredibly difficult for those who don’t.

The fourth is Sleep and physical health - Poor sleep is both a cause and effect of mental health struggles. Academic pressure, extracurricular activities, and yes, screen time, can all interfere with sleep. But so can anxiety about grades, family stress, or unsafe neighborhoods.

The fifth factor is Academic pressure - Forty-eight of fifty-two studies in one meta-analysis found evidence of a positive association between academic pressure or timing within the school year and at least one mental health outcome. Palo Alto students' emergency room visits peaked during the school year and dropped during summer, indicating that pressure from school was a contributor to the teens’ mental health struggles there.

The sixth issue is the school environment - Beyond academic pressure, factors like bullying, feeling unsafe, lack of belonging, and unsupportive teachers all contribute to mental health challenges. We’ve seen recent increases in the percentage of students who were threatened or injured with a weapon at school, and in the percentage of students who were bullied at school (15% to 19%). There has also been a jump in the percentage of students who missed school because of safety concerns either at school or on the way to school.

When we focus exclusively on that 1% attributed to social media, we're ignoring the complex web of factors that actually shape teen mental health. This brings us back to Dr. Twenge’s work that we opened up with, where she looked at 13 factors that could have caused the mental health crisis and then debunked each of them as an inadequate single explainer of mental health outcomes. But when we live in a society this complex, is it more likely that something as complex as an increase in mental health challenges is caused by ONE SINGLE FACTOR, and that factor is screen time, or by a whole host of factors working together? Haidt's strongest argument is timing - why would there be an inflection point around two thousand twelve? But consider what else was happening globally: the two thousand eight financial crisis was still rippling through families worldwide, climate change was becoming an unavoidable reality for young people, and mental health awareness was increasing, reducing stigma around seeking help. Different countries had their own pressures - austerity measures in the UK and parts of Europe, educational reforms like Common Core in the US, and similar testing-focused changes in other countries. Maybe teens turn to social media more when they're stressed about climate change, or when their families are struggling financially, or when school feels pointless and disconnected from their real interests. In that case, the phone isn't the root cause - it's how kids are coping with a world that feels increasingly overwhelming and out of their control. And we might find that taking away one of their main coping mechanisms without replacing it with a more effective coping mechanism – or addressing the problem at its root – makes things worse rather than better.

Jen Lumanlan:

Twenge says that we should still focus on screen time because even though the variance explained is small at the individual level, at the POPULATION level the effects are large, and even small shifts in population wellbeing have profound societal costs. Orben would respond to this by saying that for most people the risk posed by digital technology use is negligible, and that focusing on extreme group differences, like the people who are harmed the most, can exaggerate the practical importance of the effect. The effects on groups of kids are nuanced, and the effects on each individual child are nuanced; one teen may find social media use helpful at one point in a day and not helpful at another point in the day. And as far as I can see, up to this point Twenge has not taken a strong policy position – for example, in one paper she suggested limiting screen usage to 1-2 hours per day, and that high levels of social media and internet use should be an indicator for more careful screening for mental health issues. But she has a brand-new book that will be published in September twenty twenty-five called 10 Rules for Raising Kids in a High-Tech World that is rather less nuanced and rather more prescriptive. Rule Number 1 is: You’re in Charge. There’s also: “No social media until sixteen – or later,” and “Give the first smartphone with the driver’s license.” These recommendations are not nuanced or specific. Again, Twenge is arguing for change on this issue because it’s doable – but creating a supportive family relationship is also doable, and would potentially have a much larger impact on children’s mental health. I want to find and pull the BIGGEST levers we can to support kids. I'll never forget what was a bombshell moment for me at the time, when I interviewed Dr. Ansley Gilpin on what children learn from pretend play. At the end of the interview, she kind of casually dropped the idea that she knows the effect sizes are tiny, but the interventions are cheap, so researchers keep doing the research because people want to do interventions that don’t cost a lot. In this case it isn't so much that the interventions are cheap, but that it seems to be politically popular for families and schools and politicians to come together to criticize the big tech companies.

And don't get me wrong; I think there's a lot wrong with our culture when producing a product that companies KNOW causes harm to some people is seen as a perfectly viable business decision. But that's the culture we live in. That's how we got climate change. That's how we get cars sold with known faults, because the company balances the cost of the lawsuits they'll get when people die with the cost of the recall, and they pick whichever number is lower. A commenter on one of Dr. Haidt's google docs says that "You argue that the effect size of the marginal hour of social media is larger than others find, and I like your argument. I agree. But I'm concerned that those who are on the other side of the argument are missing this: social media is not the disease, it's the vehicle for the disease. It's the grimy subway railing that you touch that has a thousand pathogens. Measuring the effect size of touching the railing for an additional marginal minute is not measuring the right thing." The pathogens are the ideas that kids are getting through social media, and the commenter calls some ideas 'pathogens' that I don't think necessarily are, which is a result of us having different outlooks on the world. We've been told over and over and over again that screens are terrible for children. Social media is terrible for children. It's come to be something we almost intuitively believe, like the idea that cold weather causes you to catch a cold, even though it kind of doesn't. But what if it isn't social media that's so terrible, but the ideas kids are getting exposed to through social media? Those aren't changing, and teens are likely to be exposed to them through other channels if we cut off their social media use. What if, when teens aren't distracted by social media any more, they throw themselves into their schoolwork, and we end up pressuring them into another Palo Alto type of situation like the one we saw in part 1 of this series? What if the kids just exert the same cultural forces on each other in the playground that they would otherwise get on social media?

Dr. Orben notes the Sisyphean cycle of technology panics – this is not the first time we've worried that teens' behavior is affected by media. In the nineteen forties, radio dramas were blamed for making children anxious. In the fifties, comic books were thought to cause juvenile delinquency. In the eighties, TV viewing was blamed for violence and aggression. And in the two thousands, video games were linked to criminal behavior. Orben says that the cycle has four stages: a new technology plus societal concerns creates a moral panic; research is done to address public fears, but researchers start from scratch because they lack theoretical frameworks; the research ends up happening too slowly; and then new technology emerges and the cycle restarts, as society moves on to worrying about the next technology before we've fully understood the last one. We can see the AI panic coming already: just as we're debating social media and phone bans, schools are now grappling with ChatGPT, and parents are worried about AI companions and deepfakes.

Jen Lumanlan:

The pattern Orben identified – moral panic, rushed research, lack of theoretical framework, then moving on to the next fear – means we never actually learn how to help kids navigate technology thoughtfully. We're so busy fighting the last war that we don't develop sustainable approaches to digital literacy and wellbeing. If we ban social media today, what happens when the next technology arrives? Do we ban that too? And in all of this, we're primarily looking at the Global North. Despite over 80% of the world's population living in the Global South, over 70% of studies on adolescent depression and social media recruited participants only from the Global North, because we're the ones having moral panics. We're the ones who can relate to the slim blonde-haired white girl staring at her phone on The Anxious Generation book cover and worry that it represents our own child's future.

This is going to be one of those episodes that I never want to publish because there’s always more information I can add to it. At the end of May twenty twenty-five, Haidt and his assistant Zach Rausch posted on the After Babel blog, which is a Substack, about a “major new study that was just posted online” by a group of researchers who “set out to discover just how much consensus there actually is within the academic community on the potential harms of social media for adolescence, and on the state of evidence about those harms,” although the study has not yet been peer reviewed. I’ll quote the next section at length: “To structure the study, the researchers selected 26 claims from the text of The Anxious Generation, primarily from Chapters 1, 5, and 6. (We [meaning Haidt and Rausch] worked with them on this phase, to refine the claims they proposed, in order to be confident that the claims to be tested were the claims we made. We were not otherwise involved in the genesis of the study, nor in its design). The study leaders assembled a large panel of experts through targeted outreach to prominent researchers representing a range of perspectives on the issue. They then used snowball sampling, asking initial invitees to recommend additional experts. This process yielded a group of two hundred and twenty-nine researchers who were formally invited to participate in the study. Additional researchers gained access to the survey via posts in relevant academic forums, bringing the total number of individuals who received the survey at some point to two hundred eighty-eight. In Phase 1, the study leaders distributed Survey #1 to the recruited experts and also posted it on relevant academic forums. The study asked participants to evaluate each of the 26 claims. A total of a hundred thirty-seven experts evaluated at least one claim. For each claim, they were asked whether they believed it was “probably true,” “don’t know,” or “probably false.” They were also asked whether empirical evidence supported or contradicted the claim, and what types of evidence (if any) were available.”

Then in Phase 2 the responses from Phase 1 were synthesized into 26 draft consensus statements, which the same experts rated for accuracy. So if we rephrase that, we could say: four researchers, two of whom have published about or are investigating the harms of social media and the effects of reducing its use, and three of whom work at the same New York University where Haidt and Rausch work, decided to recruit a set of experts. I made an initial run at trying to understand how diverse this group of experts really was; I couldn’t look at the publication histories of all 121 of the ones who are actually cited as authors of the study, but even the researchers who wrote the report admit that it’s possible that their sample is biased. In a blog post explaining how the study came about, the lead authors said that they invited potential participants to join “a team to write a consensus statement on the causal impact of social media on mental health,” and said that as “part of our methodology, we plan to select the central claims from The Anxious Generation and conduct a survey among experts to evaluate the extent of experimental evidence supporting each claim and identify key areas for future research.” It’s not super surprising that Dr. Orben, her regular collaborator Dr. Andrew Przybylski, and Dr. Candice Odgers, who wrote a critical review of The Anxious Generation in the prestigious journal Nature, all declined to participate. Dr. Chris Ferguson, who we mentioned in the last episode as the one who said that Haidt would have been failed if he had turned in this data as a senior research project, started participating but then dropped out of the study and posted on X: “I did the original survey but my comments were ignored. Many others not invited and some declined to participate because they viewed the organizers as having conflicts of interest (which they do). Result is a self-selecting biased sample.” So three of the leading researchers who are critical of Haidt’s claims were invited and declined to participate before the project even started, and another dropped out after the first round – but the lead researchers didn’t seem to pause and wonder whether the project should have been SET UP in a more inclusive way.

Once the lead researchers had identified their participants, they told them that they were investigating a series of claims from The Anxious Generation, a popular recent book that is very critical of screen time, to see the extent to which the participants agreed with the premises in the book.

Because this is a book about the harms of social media, all of the statements the experts were asked to evaluate were written in the negative: things like “Social deprivation can reduce mental health” and “Social media can impair sleep.” First, let’s look at the conditional phrasing of the statements: those “cans” are critical, because they allow for SO MUCH flexibility in interpretation. Even with all the issues in the research that we’ve considered in these last two episodes, I’d probably agree that social deprivation CAN reduce mental health, and that social media CAN impair sleep. DOES it reduce mental health and impair sleep? And is it the most important factor affecting those outcomes? Maybe not…but it CAN affect them. So now I both believe that the research on this topic is overblown, AND I’m agreeing with the statement of harm.

Jen Lumanlan:

Then there’s the fact that these topics are phrased as statements, not questions, and you don’t have to have a hundred twenty-one PhDs to know that people are much more likely to agree with statements when they’re asked for a yes/no response, regardless of the content of the statements themselves. Participants will even agree with statements that logically contradict each other, and can be swayed by the wording of a question. One study on this topic found that asking participants to agree or disagree with a statement inflates correlations between items due to response style rather than actual opinion. So a hundred ten of the experts in this study said that it is probably true that “sleep deprivation can reduce mental health.” What a shock. I could say: Hey, I’m working with one of the world’s leading researchers on how painful hemorrhoids are, who has just published a book about how painful hemorrhoids are that’s getting a ton of publicity. Can you please say whether this statement is probably true or probably false: “Hemorrhoids can be painful.” Is it likely that this methodology is going to introduce some bias into the results?
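The mechanics of that inflation are easy to see in a toy simulation – this is my own illustration of the general response-style effect, not the cited study’s method. Give every respondent a shared pull toward “agree,” and two completely unrelated opinions start to look correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two "true" opinions that have nothing to do with each other
opinion_a = rng.normal(size=n)
opinion_b = rng.normal(size=n)

# Each person also has an acquiescence tendency: a general pull toward "agree"
acquiescence = rng.normal(scale=0.8, size=n)

# Observed agree/disagree responses = true opinion + shared response style
response_a = opinion_a + acquiescence
response_b = opinion_b + acquiescence

print(np.corrcoef(opinion_a, opinion_b)[0, 1])    # ~0.00: the opinions are unrelated
print(np.corrcoef(response_a, response_b)[0, 1])  # ~0.39: inflated by response style alone
```

The two items share nothing except the way people respond to statements, yet the observed answers correlate at roughly 0.4 – which is why agree/disagree formats need so much care.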

In Phase 2 of the experiment the experts provided citations of relevant research, and they generally reported being aware of more evidence in favor of each claim than against it. Well, I’m aware of more evidence in favor of these claims than against them too, but that doesn’t mean I agree with that evidence, for all the reasons we’ve discussed in this episode.

Then they had the experts rate the policy proposals in The Anxious Generation: that phone-free schools would benefit the mental health of adolescents overall, that no smartphones before high school would benefit the mental health of adolescents overall, and that imposing and enforcing a legal minimum age of 16 for opening social media accounts would benefit the mental health of adolescents overall. This time, all the proposals are phrased in the positive. The experts thought all three proposals were more likely to provide benefits than not, by a ratio of eight to one on phone-free schools, six to one on no smartphones before high school, and three to one on limiting social media use to over-16s, although 20-25% of respondents said they didn’t know whether each of the policies would be more likely to provide benefits than not. By separating out the ‘don’t knows’ and then reporting the benefit/no benefit ratio, it sounds like there’s much more support for the benefit side than there really is. A ratio of 8 to 1 only means that about 70% of the experts thought phone-free schools would likely benefit mental health. And a ratio of 3 to 1 means about 55% of them thought limiting social media to age 16 would improve mental health. Which is kind of a strange finding if you think about it, because aren’t teens using their phones in school mostly on social media? If experts think social media causes harm, how can more of them think that keeping kids off social media during school hours only is more likely to benefit them than keeping them off it entirely until they’re 16?
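Here’s the arithmetic behind those percentages, as a rough sketch. The study reports the ‘don’t know’ share as a 20-25% range rather than exact per-policy figures, so the endorsement_share function and the numbers it prints are my own approximations, not figures from the study itself:

```python
# Rough sketch: turn a reported benefit-to-no-benefit ratio back into the share
# of ALL respondents who endorsed a policy, once the "don't know" group is
# included. The 20-25% "don't know" range comes from the episode; the exact
# per-policy breakdown is an assumption, so these figures are approximate.

def endorsement_share(ratio: float, dont_know: float) -> float:
    """Share of all respondents answering 'benefit', given the benefit:no-benefit ratio."""
    return (1 - dont_know) * ratio / (ratio + 1)

policies = [
    ("phone-free schools", 8),
    ("no smartphones before high school", 6),
    ("no social media before age 16", 3),
]

for name, ratio in policies:
    low = endorsement_share(ratio, dont_know=0.25)
    high = endorsement_share(ratio, dont_know=0.20)
    print(f"{name}: roughly {low:.0%}-{high:.0%} of all experts")
```

On those assumptions, the headline ‘8 to 1’ ratio works out to roughly two-thirds to 70% of the full panel, and ‘3 to 1’ to a little over half.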

The blog post says that “objections [to the study] largely centered on who was at the table rather than what emerged from it. Even if our invitees had skewed one way or the other, that imbalance would only matter if it generated flawed judgments. Ultimately, the central question is whether any substantive errors exist in our 26 consensus statements. If any expert uncovers a genuine error or oversight, we stand ready to make any necessary corrections during the review process.” But there can’t really BE any genuine error or oversight, because the results consist of the opinions of the participating experts. We can’t know whether the imbalance of participants generated flawed judgments, because the only judgments we have are the ones that imbalanced group produced – there’s nothing to compare them against.

Jen Lumanlan:

Let me be clear about what I'm trying to say here: The mental health challenges teens face are real, but they're not distributed equally. We’re pouring resources into debating whether suburban kids spend too much time on Instagram while American Indian and Alaska Native girls are dying by suicide at FIVE TIMES the rate of white girls. 84% of Native girls experience violence in their lifetime. These aren't problems that smartphone bans would solve. In fact, for Native teens in geographically isolated communities, social media might be one of the few ways to connect with other Native youth, access mental health resources, or maintain cultural connections.

For LGBTQ+ youth, the benefits of social media are well-documented. Social media can act as a safe environment to access information about identity, express identity, and find support among other LGBTQ+ people, all of which supports mental health and wellbeing.

Black teens are more likely than Hispanic and White teens to use social media to get information about mental health. When we ban ALL kids from using social media, we also ban the kids for whom social media is a lifeline, because their families and local communities don't accept them as they are. And bans like these would mostly only address GIRLS’ social media use, because boys don't use social media as heavily – and boys still die by suicide at such high rates that it's their deaths, not girls' deaths, that drive national-level trends.

The solutions we're discussing – banning social media, limiting screen time – are solutions designed by and for relatively privileged communities. They assume that the biggest threat to teen wellbeing is too much TikTok, not poverty, discrimination, academic pressure, or a lack of mental health services. And are we kidding ourselves that banning these services would actually mean that kids wouldn’t be able to access them? Kids have been finding ways to get things we’ve banned for as long as we’ve been banning them. But they might be less likely to come to us for help if they know our first response will be: “But you shouldn’t have been using it in the first place.”

I think there's a massive potential for action in between blanket bans and just holding up our hands and saying "fine, kids, do whatever you want." It's an approach that requires a lot more nuance than a blanket ban. It's going to require actually talking with our kids. It’s going to require perceiving our kids as people with agency, who shape their own technology use, and aren’t just acted upon by big companies. And it might even address those factors that make up the other 99% of teen wellbeing.

Jen Lumanlan:

In Part 3 of this mini-series, we'll explore what a nuanced approach might look like. We'll discuss how to help teens develop their own internal compass for social media use - not through restriction but through reflection and collaboration. We'll look at how parents can have productive conversations about online experiences without becoming the screen time police. And we'll examine what schools can do beyond banning phones to actually support teen wellbeing. Because if we really want to help our kids thrive, we need to look beyond their screens to their whole lives - their relationships, their stress levels, their sense of purpose and belonging. The solution isn't as simple as taking away Instagram. But it might be more effective.

Emma:

We know you have a lot of choices about where you get information about parenting, and we're honored that you've chosen us as we move toward a world in which everyone's lives and contributions are valued. If you'd like to help keep the show ad free, please do consider making a donation on the episode page that Jen just mentioned. Thanks again for listening to this episode of The Your Parenting Mojo podcast.

About the author, Jen

Jen Lumanlan (M.S., M.Ed.) hosts the Your Parenting Mojo podcast (www.YourParentingMojo.com), which examines scientific research related to child development through the lens of respectful parenting.
