How did you come to turn attention to the issue of children’s protection in cyberspace?
Packin: I have a background in financial regulation and an interest in technology policy, and the two have collided in fintech. I became very interested in how these digital platforms impact children: Are they playing to their vulnerabilities?
I turned to one of my former Penn Ph.D. fellows, Michal, who has long been researching children, their vulnerabilities, and their protection under the law. We wanted to use our complementary skill sets to do a better job analyzing these issues. To explore the more scientific aspects, we reached out to Gideon, who has a Ph.D. in neural systems, and to Diana, who looks at neurobiology, interventions, and outcomes. So, we created this power team that was able to give the topic a 360° view.
Also, I have four children of my own, ranging in age from 3 to 11, and their digital consumption is unavoidable. Even the most informed, on-top-of-it parents find themselves in manipulative and unfamiliar territory. Someone needs to be an advocate for children, and this is an area that is very ripe for intervention.
Gilad: I have a criminal law background and specialized in prosecuting crimes against children, and most of my writing is about how to adapt laws and policies to the developmental needs of children. What I found in my practice is that a lot of decisions in the policymaking world are made based on ideologies and intuitions, not scientific fact. Through our collaboration with Gideon and Diana, we are realizing how important it is to create this common language so science can be more accessible to people who create laws and so we can all benefit from the advanced findings that science brings to the table.
That’s why Nizan and I created the Multidisciplinary Center of Childhood, Public Policy & Sustainable Society, to bring together science people and law and policy people, to make research more accessible to people who make laws, to bring it out of professional journals, and to have a real-world impact.
What do we know about children’s vulnerabilities to some of these digital applications and games?
Fishbein: The child and adolescent brain is considered especially vulnerable because it is rapidly developing and, thus, more readily perturbed by outside influences than the adult brain. This developmental period is also typified by immaturity of the neural systems and connections that serve learning, impulse control, decision-making, planning for the future, and other cognitive and emotion-regulatory functions. As a result, children's abilities to assess the consequences of their actions, engage in advantageous decision-making, and inhibit emotionally driven impulses are not yet mature, so they are vulnerable to game features that are stimulating and fun while lacking the capacity to set their own guardrails.
Adolescents are even more vulnerable in some respects. They have less communication between the front of the brain and affective regions of the limbic system, meaning that there is less cognitive control over emotional reactions to environmental stimuli, relative to adulthood or even earlier in childhood. As a result, the brain is more intensely stimulated by rewards and less motivated to avoid penalties. In the context of newfound autonomy and increased opportunities to engage in risky behaviors, not to mention access to various financial means, adolescents may be especially prone to the negative consequences of engaging in digital environments.
Gilad: The Children’s Online Privacy Protection Act effectively treats children 13 and older as fully consenting adults, but that age is arbitrary. Why 13 and not 12 or 14? We’re expecting 13-year-olds to make really big decisions that can affect their lives and health, but do they have the ability to do that? The answer is no, probably not.
It’s important to recognize the conflict of interest here: this addictive quality, which game developers might call ‘stickiness’ or ‘engagement,’ is what these companies strive for; it’s how they make money. On the other side, you have children whose brains are not well equipped to resist it. The companies don’t intend to harm children, but the core of their financial interest rests on these addictive qualities.
Nave: There are age ratings on video games, but, ironically, the entity that gets to decide that rating is the app developer, not an outside regulatory agency. Children have different developmental stages, and we need different ways to protect them, just as you have to have a booster seat for kids under a certain height, you can’t sell alcohol to kids, and you can’t advertise e-cigarettes to children.
Packin: To the dismay of many parents who never consented, we see children playing games in which you can never win, finish the game, or advance a level without paying money to obtain ‘loot boxes’: consumable virtual items that can be redeemed for anything from a player’s avatar or character to game-changing equipment such as swords and armor. You’ll be behind if you don’t buy certain features. This has become a reality in modern video games. A handful of European governments have banned these ‘loot box’ features, but there is no such regulation in the United States. However, just this month the Federal Trade Commission issued an order requiring Epic Games, the creator of the popular Fortnite video game, to pay $245 million to consumers to settle charges that it used dark patterns to trick players into making undesired purchases and enabled children to rack up unauthorized charges without their parents’ involvement. We think these are issues that need to be regulated and carefully discussed.
Where are the major research and policy gaps that need to be filled?
Gilad: We know that there are associations between time in cyberspace and negative outcomes. We have the science to show that. What is missing is establishing causation. But it’s tricky to do those studies. If you do a randomized controlled study, you are exposing kids to something that’s possibly harmful to them. And in order to really show causation, you need to establish that exposure to technology is the single cause of harm and also to pinpoint what exactly in the game is causing this harm. Is it the visuals? Is it the sound? Is it the fast pace? Is it the incentives?
Fishbein: In addition to what Michal mentioned, we have yet to fully determine how marketing techniques are furthering those consequences. We don’t yet know what degree of exposure leads to those consequences. And, importantly, we don’t yet understand what characteristics of children, or subgroups of children, make them most vulnerable to those impacts. How might children’s preexisting characteristics or environment predispose them to excessive play or make them more vulnerable to negative consequences, for example?
Nave: The psychology of technology is a really interesting emerging area. Technology creates new environments in which people receive messages, make decisions, and interact, and these environments are not natural to us and are changing extremely rapidly. There are a lot of questions to ask about how we behave in these contexts.
An important thing to note is that we could do basic science research in this area, and that’s great. But technology is moving so fast that we cannot afford to move slowly. The pace of scientific publication and the pace of policymaking are not keeping up with it. We want translational research that is disseminated to policymakers and to consumers as fast as possible.
What are some of the important next steps to take to protect children in the virtual world?
Gilad: Creating our research center is our first step, and that’s really the main message of this piece: the importance of fluid collaborations between scientists and law and policy scholars to produce research that is more tailored to policy needs.
The law will forever be behind the science because law takes time to change, and there are many forces in play and political ideologies that are sometimes in conflict with the science. But we think it’s possible to put protections in place that can evolve and progress over time. It seems like there is a lot of political will to act. Since we started working on this piece last summer, there has been a constant evolution of law and regulation around this issue in the United Kingdom and in the United States. It’s moving fast.
Nave: With the center’s work and by getting the word out to the readers of Science with this article, we hope to spark a research program and more incentives for research in this domain. I think we as a society are not fully aware of the wild reality of technology. Tech companies are really powerful, and they have lobbyists working hard in their interest. Technology often brings a mixture of benefits and perils. We need to find a balance that works to our benefit.
Packin: We have a whole list of recommended next steps in the article, but I do think the biggest need is greater awareness of this issue among lawmakers and other parties. And this cannot happen without scientists. There has to be a research and policymaking process that gets everyone involved at the very early stages. The bottom line is that what happens in the virtual world has real implications for the lives of children.