Are young workers canaries in the AI coal mine?

- A recent study by the Digital Economy Lab at Stanford University takes a data-driven approach to the effects of AI on the job market.
- Young, entry-level workers appear to be experiencing declining employment as a result of AI automation.
- In occupations where AI serves a more augmentative role, the study found growth in employment for workers of all ages.
During the late 19th and early 20th centuries, coal miners in Europe and North America used canaries as living carbon monoxide alarms. Due to their high metabolism and sensitive respiratory systems, these small, yellow songbirds succumbed to the invisible, odorless gas much faster than humans. As soon as the birds in the small cages strapped to their tool belts stopped chirping and chattering, the miners knew it was time to head back up.
Several lifetimes and technological revolutions later, this unconventional safety measure has resurfaced in the title of a monumental study — one concerned with a different potential hazard: artificial intelligence. Published in late August by the Digital Economy Lab at Stanford University, Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence constitutes one of the first attempts to trace, by means of actual, hard data, how AI is reshaping the American labor market.
The results are rather alarming. So far, the widespread adoption of generative AI appears to be hitting entry-level workers harder than older, more experienced people in senior positions. The study also suggests that AI has a greater effect on employment opportunities in professions where the technology is capable of automating human labor, such as software development and customer service. In professions where AI augments and enhances human labor, on the other hand, employment opportunities have remained stable, or even increased.
To get a clearer picture of how AI is expected to change the future of work, Big Think spoke with two of the study’s three co-authors: Erik Brynjolfsson — Jerry Yang and Akiko Yamazaki Professor and Senior Fellow at the Stanford Institute for Human-Centered AI — and Bharat Chandar, postdoctoral researcher at the Stanford Digital Economy Lab and Institute for Human-Centered Artificial Intelligence.
Here’s what the canaries are telling us.
Big Think: What was the genesis of this study?
Brynjolfsson: Like everyone else, we were seeing a lot of news and chatter about AI, with some saying it was destroying jobs and others saying that that was nonsense. There were a lot of conflicting anecdotes. In the job market, there are thousands of layoffs and hires each month, so it’s not hard to find examples for whatever hypothesis you want to put forward. We thought it would be a good idea to take a data-driven look at what was actually happening.
Big Think: “Canaries in the coal mine” means “warning — danger ahead.” What’s the danger here, and how big is it?
Brynjolfsson: While we didn’t mean for it to sound too apocalyptic, the idea is that there are certain early warning signs you can spot before others. Even before you notice a change more generally, there may be a couple of markers that give an indication of what’s to come. We also put a question mark in there, because we’re not sure yet, but we do see early evidence that certain groups have falling employment rates, while others have rising employment rates. These could be harbingers of bigger things to come.
Big Think: If these are just the early warning signs, what’s ahead of us?
Brynjolfsson: Technology has always created jobs, and it’s always destroyed jobs. In some sense, this is not a big change… yet. But one could imagine, and some predict, that AI will have much more widespread effects than earlier technologies. If that’s the case, we’d want a dataset capable of tracking changes in as close to real time as possible. Right now, the effects on the overall labor force are relatively small, even though certain groups have seen significant losses. But if those effects start becoming more widespread, that would be something we’d need to take seriously.

Big Think: Research into AI’s impact on society is booming. Did you stumble onto anything surprising compared to previous studies?
Brynjolfsson: I should preface by saying — yes, there is growing interest in the economic effects of AI, but not nearly enough. There’s huge investment in creating stronger AI capabilities, and yet we as a society aren’t putting nearly enough energy into understanding the effects those capabilities have on the workforce, or on productivity.
Chandar: One thing that surprised us was the results for automation versus augmentation. For occupations where AI is used to automate work, we saw declining employment for young workers. But in occupations where AI serves a more augmentative role, we actually saw growth in employment for workers of all ages.
Big Think: How do you know the changes in the labor market you’ve observed are the result of AI, and not inflation, tariffs, or other economic factors?
Chandar: We came up with different hypotheses for what could explain the data and investigated them. For example, we considered the impacts of remote work. During the pandemic, a lot of work shifted to home, and we wanted to see if the return to the office contributed to some of the effects we were seeing. However, when we compared, we found similar patterns for remote jobs and non-remote jobs.
Another hypothesis that we wanted to test was whether the effects could be explained in terms of education, as there is some evidence that education was harmed during the pandemic. But again, the results were similar for people who did and didn’t go to college. We also looked at part-time versus full-time work, and again we reached similar results. While we wouldn’t attribute everything to AI, we do think we’re picking up on its effects, especially with regard to entry-level workers.
Big Think: What has the response to the paper been like? Since its publication, has anyone come forward with takes or commentary that really got you thinking?
Brynjolfsson: We intentionally wrote the paper in a kind of “just the facts” tone. We didn’t spend a lot of time trying to put forward stories or theories about why this is happening. We just said, “This is what we see.” That, I think, opened the door for other people.
Joshua Gans, an economist at the University of Toronto who does a lot of work on AI economics, noted that the results were very consistent with the hypothesis that AI is complementing more senior workers. He also said that, if that were the case, then companies might continue to hire more senior workers, but not as many junior workers, who aren’t being complemented the same way.
Others have argued that, although AI has proven remarkably good at automating coding, the software industry has experienced ups and downs that are independent of AI. Disentangling — separating how much is due to AI and how much is due to over-hiring and other factors — that’s what we’re doing now.
Big Think: You mention the difference between “codified” and “tacit” knowledge: What do those terms mean and why is one more AI-proof than the other?
Brynjolfsson: A plausible argument is that AI learns similarly to how people do in college: by reading and studying codified examples. Meanwhile, more senior employees learn what you might call the “tricks of the trade” — a type of [tacit] knowledge that isn’t written down. In that sense, they know something that neither junior employees nor AI can learn on their own, and — for these types of jobs — that makes AI more of a complement than a substitute.
Big Think: In the study, you mention that AI appears to mostly affect employment, not income. Why could that be?
Brynjolfsson: There’s a large body of literature in economics about this concept called “wage stickiness.” Basically, it refers to how it’s a lot harder to cut someone’s salary than it is to not hire them in the first place. People are very resistant to downward movement in wages, and I think that may explain why income is a little more “sticky” than employment. Most of the changes on the employment side happen not from people being laid off, but from people just not being hired to begin with. And if you’re an HR manager, that’s probably an easier decision to make than to go to somebody and say, “Hey, you’re getting a 10% pay cut.”
Big Think: If you don’t hire entry-level employees, who’s going to replace the senior ones when they retire?
Brynjolfsson: It’s an important point, and one people often bring up when we are talking about this. It’s great that senior workers are so valuable. But where do they come from? They come from junior workers. And if you don’t hire junior workers, you’re not going to get senior ones. Companies will have to think ahead. They can’t not hire people and expect senior workers to magically appear.
A lot of the onus is on universities and companies to modify and improve their training and onboarding, as one concern is that junior workers aren’t entering the job market with the skills they need. I recently had a conversation with Andrew Ng [cofounder of Google Brain], and he said the same thing: that a lot of juniors had not been taught to use basic AI tools like Claude Code and Devin at universities, only the fundamentals of software engineering. He thought that was a mistake, that universities should be teaching them those things, not employers. In a way, it’s a bit ironic that one of the things that we don’t have enough of right now is knowledge of AI itself.
On the other side of that equation, we might see big changes in the fields students choose to study or the professions they pursue. Perhaps they’re getting fewer computer science degrees and more degrees that lead to employment in less AI-exposed fields.
Big Think: With that in mind: should governments step in to prevent AI from cannibalizing entry-level jobs, or is it in companies’ self-interest to do something?
Brynjolfsson: It’s certainly in the company’s interest. It’s in the employee’s interest, but it’s also in the government’s interest. One of the basic concepts in Economics 101 is the idea of a public good, a category that includes education and training. The investments that companies make help them grow, but they don’t capture the full benefits. Employees are free to take the knowledge they have acquired and go work somewhere else. This means there’s going to be an underinvestment in training if left entirely to the private sector. That’s why training and education have almost always been partly publicly provided or publicly subsidized.
Big Think: How does the impact of AI compare to previous waves of automation?
Chandar: Though I think there are many ways in which AI is different, I also think past experiences can guide us in terms of how we adjust to new changes, what new jobs could be created [or] destroyed, and how the economy responds to all of that.
Brynjolfsson: Every wave of technology is different. There are some things we can learn from the past, but we also have to look at data on what’s actually happening in the present and see what’s new. For now, one thing that seems a little different is that these earlier waves of technological development often led to an increase in wage inequality, and we’re not sure we’re going to see that in this [wave] just yet.
Big Think: Are there any new questions this study has produced that could form the objectives of future research projects?
Brynjolfsson: We actually have nine big topics we’re encouraging researchers to pursue: economic growth, scientific discovery, inequality, concentration of wealth, concentration of power, measures of wellbeing, catastrophic risk, geopolitical implications, and transition economics: how we get from where we are now to a world where AI is much more powerful. Every one of these would constitute a big research agenda, so there’s plenty of work to do. We’re still in the early days of the AI revolution.