Editor’s Note: Walk into any bus terminal, coffee shop, library, or other public venue and you are likely to see crowds of young adults immersed in private universes of music and social media. Although ubiquitous, this sight would have baffled observers just a generation ago. Social connections were neither forged nor maintained over the Internet and, by the time many young people had reached their mid-twenties, they were occupied with marriage, young families, jobs, military service, or other full-time pursuits. A “quarter-life identity crisis,” as this new period of prolonged transition to adulthood has been described, implies setback and confusion. But is this really the case? Might not the current delays in forming lasting life plans and partnerships be an adaptive response to economic and social turmoil, as well as to the social technology that has expanded the range of social and romantic possibilities? For more on this topic, I encourage you to read this thoughtful piece by Temple University psychologist Laurence Steinberg, which appeared recently in the New York Times.
The Case for Delayed Adulthood
By Laurence Steinberg
ONE of the most notable demographic trends of the last two decades has been the delayed entry of young people into adulthood. According to a large-scale national study conducted since the late 1970s, it has taken longer for each successive generation to finish school, establish financial independence, marry and have children. Today’s 25-year-olds, compared with their parents’ generation at the same age, are twice as likely to still be students, only half as likely to be married and 50 percent more likely to be receiving financial assistance from their parents.
People tend to react to this trend in one of two ways, either castigating today’s young people for their idleness or acknowledging delayed adulthood as a rational, if regrettable, response to a variety of social changes, like poor job prospects. Either way, postponing the settled, responsible patterns of adulthood is seen as a bad thing.
This is too pessimistic. Prolonged adolescence, in the right circumstances, is actually a good thing, for it fosters novelty-seeking and the acquisition of new skills.
Studies reveal adolescence to be a period of heightened “plasticity” during which the brain is highly influenced by experience. As a result, adolescence is both a time of opportunity and vulnerability, a time when much is learned, especially about the social world, but when exposure to stressful events can be particularly devastating. As we leave adolescence, a series of neurochemical changes make the brain increasingly less plastic and less sensitive to environmental influences. Once we reach adulthood, existing brain circuits can be tweaked, but they can’t be overhauled.
You might assume that this is a strictly biological phenomenon. But whether the timing of the change from adolescence to adulthood is genetically preprogrammed from birth or set by experience (or some combination of the two) is not known. Many studies find a marked decline in novelty-seeking as we move through our 20s, which may be a cause of this neurochemical shift, not just a consequence. If this is true — that a decline in novelty-seeking helps cause the brain to harden — it raises intriguing questions about whether the window of adolescent brain plasticity can be kept open a little longer by deliberate exposure to stimulating experiences that signal the brain that it isn’t quite ready for the fixity of adulthood.
Evolution no doubt placed a biological upper limit on how long the brain can retain the malleability of adolescence. But people who can prolong adolescent brain plasticity for even a short time enjoy intellectual advantages over their more fixed counterparts. Studies have found that those with higher I.Q.s, for example, enjoy a longer stretch of time during which new synapses continue to proliferate and their intellectual development remains especially sensitive to experience. It’s important to be exposed to novelty and challenge when the brain is plastic not only because this is how we acquire and strengthen skills, but also because this is how the brain enhances its ability to profit from future enriching experiences.
With this in mind, the lengthy passage into adulthood that characterizes the early 20s for so many people today starts to look less regrettable. Indeed, those who can prolong adolescence actually have an advantage, as long as their environment gives them continued stimulation and increasing challenges.
What do I mean by stimulation and challenges? The most obvious example is higher education, which has been shown to stimulate brain development in ways that simply getting older does not. College attendance pays neural as well as economic dividends.
Naturally, it is possible for people to go to college without exposing themselves to challenge, or, conversely, to surround themselves with novel and intellectually demanding experiences in the workplace. But generally, this is more difficult to accomplish on the job than in school, especially in entry-level positions, which typically have a learning curve that hits a plateau early on.
Alas, something similar is true of marriage. For many, after its initial novelty has worn off, marriage fosters a lifestyle that is more routine and predictable than being single does. Husbands and wives both report a sharp drop in marital satisfaction during the first few years after their wedding, in part because life becomes repetitive. A longer period of dating, with all the unpredictability and change that come with a cast of new partners, may be better for your brain than marriage.
If brain plasticity is maintained by staying engaged in new, demanding and cognitively stimulating activity, and if entering into the repetitive and less exciting roles of worker and spouse helps close the window of plasticity, delaying adulthood is not only O.K.; it can be a boon.
Laurence Steinberg, a professor of psychology at Temple University, is the author of “Age of Opportunity: Lessons From the New Science of Adolescence.”