Essential skills for “weird times”: How internet pioneer Caterina Fake keeps it real

Venture capitalist and Flickr co-founder Caterina Fake talks to Big Think about why AI won’t make the internet better, her influences beyond tech, and more.
Caterina Fake. (Credit: Caterina Fake / David Vives / Logan Voss / Unsplash / Big Think)
Key Takeaways
  • Caterina Fake co-founded photo-sharing site Flickr and is now an investor at Yes VC.
  • Fake says she wants to get away from the “trivial BS” of the internet and is training as an emergency medical technician.
  • Here she chats with Big Think about Heidegger, the likelihood that AI can help us fix the internet, and more.

Caterina Fake is an “outlier,” a woman working in Silicon Valley since the 1990s. She started out building debut websites for the likes of McDonald’s before co-founding Flickr, “almost certainly the best photo sharing site in the world,” per her LinkedIn profile. She sold it to Yahoo for a reported $35 million in 2005.

Fake spent eight years on the board of Etsy and is now a trustee of arts organizations such as the Sundance Institute. She founded Yes VC in 2018, investing in energy, health, and AI companies.

It hasn’t all been smooth sailing, though — Findery, the travel journal app she co-founded, didn’t work out — and Fake says she now wants to get away from the “trivial BS” of the internet by training as an emergency medical technician. Here, she explores possible future effects of AI, the difference between “online community” and “social media,” and the importance of old-school skills.

Big Think: You were a pioneer of the internet. How did you get started?

Fake: I worked at one of the very first web agencies, called Organic, Inc., which spun out of Wired magazine to create websites for advertisers. The first site that they assigned me to work on, completely unexpectedly, was mcdonalds.com. So, I built the first McDonald’s website. It was a horrific-looking website, but it was before formatting was even possible.

I was sitting next to one of the founders of Organic, Inc., Brian Behlendorf, and we were working on a website for Kimberly-Clark. It was about teenagers having their period for the first time. I remember Brian searched for “teenage girl,” and all this pornography came up. And this is 1996 — so the internet has always been 95% garbage and 5% good stuff. But that last sliver of good content is being eroded even further. My experience of the internet, now largely confined to apps, has been further polluted by the inability to turn off ads.

AI has basically rendered the user-generated internet unusable. It has made an already untrustworthy internet even more so. It makes it even easier to spam, to create “content,” and to alienate us from reading the writing of, and thereby connecting with, real humans.

Big Think: What was the thinking behind Flickr?

Fake: It was a paid service. We were not selling users to advertisers, which, of course, is a different business model from what eventually prevailed on the internet. I was very much a proponent of a paid service up until Yahoo acquired Flickr. I felt that was a much more ethical way of doing business.

The first reverse-chronological, so-called “activity feed” was on Flickr around 2004, which Facebook subsequently copied. We called it “online community”; they called it “social media.” And there’s a big difference between those two things. Online community is something you participate in — there are certain mores and principles that you follow — whereas social media implies that it’s just a container for stuff to get your attention.

Those of us who were early in the trenches of online community saw this as a complete atrocity. [Facebook] was suddenly sorting everything by most popular, and what that means is most sensational, pornographic, most divisive, most extreme. But with well-designed software, you can create bulwarks against it, and so you can protect your users and community members. 

Big Think: You recently posted on X: “The internet is mostly being used to scare people.” Can AI help to reverse that?

Fake: It’s not doing so, [and] I don’t think it’s well set up for that to be the case. The internet has always had the potential to be a force of good. It was the same thing with television. But things tend to get more exploitative and dumbed down. So, to come back to your original question, can AI help us to solve these problems? The answer is yes. Will it? No, not under the current financial incentives [and] current business models of the internet.

Big Think: Are those current business models something you can change or influence with your investments?

Fake: I had many, many opportunities to invest in hugely successful gaming companies, but I never did because I felt it was like Las Vegas for children. Tech companies exploit our biological weaknesses, quite deliberately. Our sensitivities to stimulation and novelty, [as well as] our tendencies to be drawn to things scary, bloody, sexy, or fast-moving. The ease with which we can be alarmed or guided into hatred and suspicion is being used against us, and thereby, we lose agency at their hands. 

We are not usually triggered while reading a poem, listening to music, or having dinner at home — unless of course the TV is on and phones are being used. But the companies behind this kind of media are using us against ourselves to weaken and weaponize us. Too much responsibility is being imposed on the individual. The creators of the software have a moral responsibility to build non-exploitative software. 

Big Think: Which leaders do you admire in tech, and why?

Fake: I don’t consider myself to be hugely influenced by the tech world. My background is in art and literature. I came out to California to get a PhD in Renaissance literature, so I started from a different perspective. Part of my thesis was based on the philosophy of Martin Heidegger. Is he an idol of mine? No, the guy was a Nazi.

But Heidegger made some incredible observations about technology. He said that technology was not necessarily computers or even books. Technology was — is — the idea that everything becomes standing stock ready to be exploited. For example, the Danube River in nature is just water flowing through the land. But it becomes a thing to be photographed, it becomes a means of commerce, it’s there to transport goods. A forest, in the eyes of technology, becomes what Heidegger calls timber. It’s there to be exploited in some way.

Big Think: You posted on Instagram that you’re training to be a volunteer emergency medical technician (EMT) in New York. What are you learning from that experience? 

Fake: In these weird times that we live in, essential skills like this are actually important. I’m doing [the course] with my daughter. I found it to be very, very challenging, especially for somebody who hasn’t paid attention to this stuff since high school biology. 

I spent a couple of days in the ambulance, so I’ve been picking up people in the middle of crime scenes, in the middle of a shelter for the unhoused, going into people’s homes and transporting them to the emergency room, putting them on oxygen.

The internet propels you into all of this meaningless, trivial BS, right? You feel what is important and essential in life by doing things like [EMT training]. You eschew all the noise the internet uses to scare you and make you vulnerable and susceptible to ad targeting.

Big Think: Being an EMT is “real,” analog life. Do you see people returning to analog communications during the age of AI?

Fake: I know a bunch of Gen Alpha and Gen Z kids have turned it off completely, or they turn it on once a week to get their messages. It’s spotty because these corporations are much more powerful than the individual. Of course, you can [turn your phone off], but it’s really hard to make those choices because of the way the world operates. If you sit on a corner in Manhattan, you can get a taxi, but if you’re out in the suburbs somewhere, it’s harder and harder to get a taxi without a smartphone. 

Big Think: You talked about ethics, and you described yourself as an outlier. But have social media pioneers created a monster, despite good intentions?

Fake: So much of it depends on who’s in charge. I wrote an article for Wired many years ago about how software takes on the values and mores of its creators. It’s an unfortunate fact that companies and platforms have become very powerful — power corrupts and absolute power corrupts absolutely. If it has a switch, turn it off. It’s within your power, in spite of all the behaviorists who have been trying to get you to switch it on.


Caterina Fake writes on Substack
