Chapter 11
From the Ashes
Web 1.0 happened with too much hype, much too publicly. In many ways, Web 2.0 happened much more out of view, but much more as a reality.
Riding this wave of user growth and improvements in connectivity, developers were evolving websites into something new. Early sites, including portals like Yahoo that came to dominate usage before the dot-com bust, were composed mostly of so-called static pages: content that was fixed from page to page, that changed little if at all based on the identity or interests of a user, and that relied primarily on material gathered by a site’s producers, housed on their servers, and served up over and over again until someone made a new page.
That fixed nature of websites was changing right around the time of the bust, and the term Web 2.0 was coined (in 1999) to describe sites that were embracing technology that allowed for increasingly dynamic experiences for visitors. Web 2.0 was not about a fundamentally different web, but rather referred to a critical evolution in the software that developers could use to build ever-more compelling sites. As described in Wikipedia, “A Web 2.0 site may allow users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to websites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking sites, blogs, wikis, video sharing sites, hosted services, web applications, mashups, and folksonomies.”
Then on September 11, 2001, it became abundantly clear to everyone that the web not only wasn’t going away as a source of news; it was becoming a central source of news. Since the inception of digital news, it had always been true that major events spiked web usage, whether at CNN or The New York Times — anywhere. Each major event brought in new audiences, and often a large percentage would stick and become regular users. Because the 9/11 terrorists flew their planes into the World Trade Center at the beginning of the workday, most Americans were in their offices, where they quickly became glued to their computers for real-time news as the tragedy unfolded. News servers were so overtaxed that many crashed or had to revert temporarily to “text-only” status. So, just as the dot-com bust was reaching its nadir, the web became the go-to medium for the whole country. Chris Schroeder was managing The Washington Post’s news site on 9/11.
I can remember, as an example, in the couple of hours after September 11th, people cared a tremendous amount [about] what was right or what was wrong. So in that instance, as a contextual aspect of breaking news, that mattered a great deal. People, again, weren’t thinking — and I don’t think [that they] think now — “Would I pay more or less for it?” But they certainly put a value on it. People want to know what’s really going on.
At the same time, 9/11 spawned a new technology movement around news that began to define a new era. Krishna Bharat, who created Google News, talked to us at Google’s Mountain View complex:
I helped start the research group here [at Google]. I worked on web search for a few years. But then in 2001, I sort of got interested in news again, as did a lot of people, because of 9/11. I could talk about that, if you want.
Yes, because it’s a recurring theme in our interviews. …There are several things that everybody agrees on, and everybody agrees that 9/11 was a seminal moment in the history of news and digital news.
Yes, I will. And so a little bit of personal background here. I was at a text retrieval…and indexing conference in New Orleans along with lots of other researchers, and we were stuck there after 9/11 happened because the skies were closed. So I spent all my time trying to either find a flight back or follow the news and brainstorm about finding news with a lot of people. So I was stewing in it, in some sense. I came back here, and I found out what happened in that period. A lot of online news sites had kind of melted down, so Google had to host some of that content. They built a resource page. It was abundantly clear that, although we were a premier information company, people came to us and said, “Give us information about what just happened now,” and we didn’t have a good answer, right? We didn’t have a way of telling them…. A month and a half later [after September 11th] I had my first prototype: 150 sources. Either top international sources or top national sources, crawled every 15 minutes or so and indexed, and presented in the form of a pretty ugly UI [user interface]. Here’s the top story and here are the articles with the top story, here’s the second story and so forth.
We [Google] were a pretty small company at that time. I sent the demo out. I would say everybody in the company looked at it and played with it, and some people got very excited about it because news was on everybody’s minds, and they went to a couple of sources habitually, and now they were able to expand the range of sources, and they were able to look at sources that had interesting viewpoints that they hadn’t encountered before. It was super efficient.
Today, Google News ranks among the top few Internet news sites in traffic. But that’s for largely text-based content. The next battlefield may be over video news, where Google has a beachhead with YouTube but not market control. Richard Gingras, one of the digital pioneers going back to the era of teletext, now manages Google News among other areas of responsibility.
Well, what we were really looking at on the television side was, “What was the future of the television set?” Google has always been quite concerned about the hegemony, for instance, that exists between distribution players and hardware devices. The control of cable over the distribution infrastructure, for instance. The control of carriers in device lock-ins in the cellular [telephone] world. The notion, really, with Google TV was, “How could we enable the full flowering of IP for video, and in a sense, to some extent, bypass the control points of the cable guys?” Thus the notion of saying, “Should there be an operating system for the TV that basically says, ‘Connect this TV to the Internet, and you’ve got the world of the Internet before you’?” Which would be a hard thing for the cable guys to control.
Nick Denton, a former Financial Times correspondent who co-founded Moreover and later launched Gawker, recalled his own path to the web:
I was always a geek. When I was based at the FT in Budapest, I used to get on a little train [to go to] Vienna, which was the closest place you could buy Wired magazine and all the Mac enthusiast magazines.
In ’96 I switched from the investment banking beat to the tech and Internet beat…. I wanted to go to San Francisco. I’d read Wired magazine. I believed that something was happening there, and was actually a little bit disappointed when I arrived. It wasn’t quite what I’d imagined, that South of Market. I’d imagined this digital epicenter where the new web was being born. It actually seemed to be inhabited by a few homeless people, and maybe two or three people who could have conceivably been web designers. But I still believed. I closed my mind to the visual evidence.
…Before leaving the FT, Richard Lambert, who was the editor at the FT at the time, asked me when I was in San Francisco to “tell us what we should be doing.” I wrote a memo, which unfortunately I’ve lost. It said it is pointless for us to report what others had [already] done better. We should be seeking to add value, and where others have done a story better, we should link to them. This was…
Heresy.
It was heresy. It was revolutionary. Unfortunately, it was still sort of revolutionary in the newspaper world 10 years later. That was the extraordinary thing. The extraordinary thing was not that it was revolutionary then. The extraordinary thing was that it was still revolutionary and still sort of is now — that newspapers insist on rehashing stories that have been better covered elsewhere, instead of taking and moving the story forward. There’s still a huge amount of duplication in the efforts of the news industry.
The idea of Newsblogger was that you would consume and write about the news at the same time. It was actually very much ahead of its time. It was something like what we’re doing now in many ways. The act of reading and writing — in a truly interactive news environment — cannot be separated. They have been separated, but they cannot be usefully separated.
He continued through this period as a serial entrepreneur, attempting to buy Blogger and then resigning from Moreover. He took up personal political blogging after 9/11, and also wrote self-disparagingly about the whole dot-com boom and bust, including his and others’ roles in it. He moved to New York in the spring of 2002, and founded Gawker as a side project.
There’s a certain demographic. It was part of the city, and here was a site that appealed to a very specific group who are very well networked, who would talk amongst themselves. It was a very good place to start in many ways.
The site got buzz almost straightaway. Our launch party was maybe two or three months after we launched. Kurt Andersen came. I didn’t know Kurt Andersen. None of us were connected to him in any way. He was a figure. We’d all heard of him. He’d edited Spy magazine. Gawker was, to some extent, the successor…to Spy magazine.
One of the reasons it took off, I think, was because there was nothing else going on at the time. They were all carpetbaggers and had been washed out of the market. There was no Internet advertising. The initial business model, to the extent there was one, was that, “Maybe we can make some money off of licensing fees.” That was the extent of it. Or otherwise, “I’ll just fund it for as long as it takes.” When something takes off like that, you should just plunge straight in. I wouldn’t say I plunged straight in. In retrospect, I should have gone more aggressive, sooner.
Now, just as Denton did, all manner of news entrepreneurs could take advantage of the new generation of technology and publishing tools to build businesses at much lower cost than before. The barriers to entry in publishing were at a historic low. A young journalist named Rafat Ali founded paidcontent.org, in part, to chronicle the post-Web Winter period. Two entrepreneurs named Jason Calacanis and Brian Alvey founded Weblogs Inc., a network of dozens of blogs that was later sold to AOL.
While Nick Denton was steadily building his network of blogging sites, Arianna Huffington and Ken Lerer were creating another kind of news operation. Huffington and Lerer each brought a co-founder to the table. Huffington brought journalist and blogger Andrew Breitbart; Lerer, an ex-teacher and ex-MIT Media Lab graduate student named Jonah Peretti. Peretti describes how Lerer recruited him to be a Huffington Post co-founder:
He knew I was from a very different world with very different interests, and wasn’t a creature of the media world. He knew that he needed someone who understood the web and technology. I ended up flying out and meeting Arianna. I remember we had a 7 a.m. meeting which, for me, is incredibly early. She was already in a meeting with another group when I woke up, came out of my room in her house, and she’s already at the table having a full breakfast with like some NGO in L.A. that was working on an environmental cause or something. I was like, “What?” I don’t know if I was the second breakfast. Maybe it was the third or fourth. But she was definitely starting earlier. Next thing I knew…we were flying to [a] rally in Sacramento, which wasn’t planned.
I came back feeling like, if anything, it would be an adventure. Arianna was incredibly charming and tireless and driven. Then we formed a partnership, also with Andrew Breitbart, who used to work for Arianna. The four of us went into business together. We started hiring a founding team. Then, of course, Roy [Sekoff], who was working with Arianna previously and continued, became a partner at HuffPost, too.
At the outset, The Huffington Post combined many of the things that had appeared since the inception of web news, but most notably the ideas of aggregating content and blogging:
It was aggregation. It was Drudge [the news aggregation site founded by Matt Drudge] plus three other elements: The collective blog; the community — because, from the beginning, we made it very easy to comment…; the fourth element was original reporting. It was part of the template, but at the beginning, we didn’t have the money yet.
[So we] made a page that Google liked and that consumers liked as this one-stop shop to find out all the things that are happening all around. Now, the question that some people ask is, “If you are a news organization and you have five people on the scene and you’re doing tons of original reporting and you’re writing these stories and collecting things and it costs much more and HuffPost is getting more traffic than you, is that fair?”
One example of this — I remember some reporter calling me to ask about this a while ago — it was, The New Yorker did a long Scientology story and the HuffPost outranked The New Yorker story in Google. The reporter was like, “Isn’t this unfair? Think how much The New Yorker spent on that.” But when you think of what a consumer wants — a consumer is in their office. They have a little bit of time between meetings. They heard some buzz about a Scientology piece in The New Yorker, they search for it. They get a page that has some bullet points that explains what’s in it, a link to it. Like, “You might want to read it.” And three or five percent of people are like, “Oh, I actually do want to read this long piece.” And other people are like, “Oh, I’m glad I know what it’s about. I don’t really want to read this. But I’m glad I know what it’s about.”
From a purely algorithmic perspective or purely technical perspective, Google is giving people, in that case, a good consumer experience. But from an economics of journalism perspective, there is a problem, which is that The New Yorker is spending a huge amount of money to produce this long story and they’re not getting as much traffic online as Huffington Post, which might have spent two hours on it, or some even shorter amount of time. The time it took for the editor to read it and pull out relevant things.
So one question is, what’s good for consumers? Then there’s this other question of what’s good for journalism and how do you build sustainable models for journalism?
One especially important software tool that had enabled these new media publishers to emerge during this time was RSS, sometimes known as Really Simple Syndication. RSS gave publishers the ability to programmatically syndicate content across the web, in turn allowing them to greatly, and instantly, expand their audiences. Developed in the late ’90s in part by Dave Winer, RSS became the syndication standard that would also unleash a whole new generation of products, like Google Reader, built on “content aggregation,” a concept that stirred much controversy among traditional content providers as links and articles were freed from their “home” publications and reassembled into new mash-ups.
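To make the mechanics concrete, here is a minimal sketch, not drawn from the book’s sources, of how a single RSS 2.0 feed can be fetched and parsed using only Python’s standard library. The feed URL and the fetch_feed_items helper are illustrative assumptions; real feeds vary in their fields, but the core structure (a channel containing items, each with a title, link, and publication date) is what made programmatic syndication possible.

```python
# Minimal sketch: fetch and parse one RSS 2.0 feed with Python's standard library.
# The feed URL used below is purely illustrative.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_feed_items(feed_url):
    """Return (title, link, pubDate) tuples for each <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url) as response:
        xml_data = response.read()
    root = ET.fromstring(xml_data)                 # root element is <rss>
    items = []
    for item in root.findall("./channel/item"):    # RSS 2.0: <rss><channel><item>
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        pub_date = item.findtext("pubDate", default="")
        items.append((title, link, pub_date))
    return items

if __name__ == "__main__":
    # Hypothetical feed URL, for illustration only.
    for title, link, pub_date in fetch_feed_items("https://example.com/rss.xml"):
        print(f"{pub_date}  {title}\n    {link}")
```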
Winer tells the story of how The New York Times and other major news organizations adopted RSS and helped make it a standard in the early 2000s.
RSS didn’t really exist yet. I mean, it was sort of nascent, but it wasn’t really popular at all.
And what were you planning to do with it?
What I did with it is that we had an aggregator that we just plugged in, and we had several news sources: Wired, Red Herring, Motley Fool, a lot of blogging tools. We had a lot of stuff coming through our system. What we didn’t have were the major news organizations.
So you get this call from the licensing department at The New York Times?
And she says, “You’re a very sweet boy, but you can’t do this.” And I go, “Oh, please?” You know, I felt like…loved and admired, but absolutely prohibited to do this.
But caught.
Caught. And I said, “I understand. I won’t do it anymore.”
Ultimately, The Times and other news organizations agreed to adopt RSS, and the standard took off across the news industry, enabling a new class of aggregators called “news readers.” This was the beginning of the inevitable trend of readers assembling much more fluid, more personalized news experiences. News websites would get traffic from these sources, but the tradeoff was the steady erosion of the “front door,” or homepage “brand,” that was so dominant during the Web 1.0 era. With readers in place, many consumers no longer wanted to go to individual site “homepages”; they wanted a convenient “reader” that would gather all of their favorite web content. RSS made this revolutionary new user experience simple, taking earlier attempts at aggregation, like My Yahoo, much further across a wider range of content sources. Hundreds of such readers launched, including Google Reader, which eventually shut down in 2013 as Twitter and others reached dominance. But in their time, these readers portended a change of user behavior that would lead to hyper-fragmentation in the news world.
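A reader of this kind is, at bottom, just the aggregation step applied across many subscriptions: fetch each feed, pool the items, and sort them newest-first. The following sketch is an illustration rather than anything from Google Reader or My Yahoo, and it reuses the hypothetical fetch_feed_items helper from the earlier example.

```python
# Minimal sketch of a feed "reader": merge several subscriptions into one
# reverse-chronological stream. Assumes the illustrative fetch_feed_items()
# helper from the previous sketch is defined in the same module.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)  # fallback for missing dates

def merged_river(feed_urls):
    """Return (datetime, title, link) entries from all feeds, newest first."""
    merged = []
    for url in feed_urls:
        for title, link, pub_date in fetch_feed_items(url):
            try:
                when = parsedate_to_datetime(pub_date)   # RSS pubDate is RFC 822
            except (TypeError, ValueError):
                when = EPOCH                             # missing or malformed date
            if when.tzinfo is None:
                when = when.replace(tzinfo=timezone.utc)
            merged.append((when, title, link))
    merged.sort(key=lambda entry: entry[0], reverse=True)
    return merged

# Hypothetical subscription list, for illustration only.
subscriptions = [
    "https://example.com/news/rss.xml",
    "https://example.org/blog/feed.xml",
]
for when, title, link in merged_river(subscriptions):
    print(f"{when:%Y-%m-%d %H:%M}  {title}  ({link})")
```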
One of the seminal companies started during this period, in late 2003, was Feedburner. Dick Costolo, one of its founders, is now CEO of Twitter:
It started to become obvious to us, the founders of Feedburner, that there were getting to be too many things you have to check in the morning. Remember, there were these — they called them, I don’t know — My Yahoo; these half-attempts at, “Assemble your own homepage and we’ll pull in all these widgets; I’ll tell you what’s going on.” But they really weren’t very good, frankly. With the invention, if you will, of syndication, RSS, we realized, “Oh, this is the future. What you’ll do is you’ll subscribe to a bunch of RSS feeds, and they will be brought to you and delivered to you. You’ll only have to go to this one thing to keep track, in real time, of the 50 things you want to be interested in.”
The idea behind Feedburner was, in this world where all content will be syndicated, and what you’ll catch up with in the morning is a feed instead of your 90 different sources you go to, somebody needs to sit between the publishers and the subscribers and create some sort of frictionless way of making sure this stuff gets shared easily and is trackable and traceable.
The content providers are still going to need to make money, so they’ll want to put ads in their feed and on and on and on. That was the idea behind Feedburner, and we were right.
Then Google acquired you, right?
Mm-hmm. In the summer of 2007, they acquired us, again, on the hypothesis that as content is syndicated and more and more people are getting content in syndication instead of going to xyz.com, it’s going to be important to be part of that world of syndication. A publisher clearinghouse, if you will.
As the world of content became increasingly fragmented, news companies were now challenged to supplement the traffic coming directly to their homepages with users coming in from the “side doors,” that is, from the RSS-enabled news readers and, most importantly, from Google’s increasingly dominant search engine.
By 2004, it was clear to some that we were entering a new era of innovation, characterized by a heretofore-unseen focus on cost containment. The dot-com bust had evolved into an era of more seasoned, more disciplined venture capitalists and entrepreneurs, not to mention bean counters at traditional media companies, seeking much leaner business models. This was Web 2.0, and while there is probably much truth to Tim Berners-Lee’s characterization of Web 2.0 as just “jargon,” the new phrase caught on and began to define the post-dot-com-bust era. John Battelle, co-creator of the Web 2.0 conference, and now CEO of Federated Media, told this story in a Google Hangout (yes, we understand the irony in this):
I think there was a cultural moment, after the dot-com crash, where there was a lot of sentiment in the air that this Internet thing was certainly important, but it was overhyped, under-delivered, probably over-capitalized, and a lot of people lost a lot of money. In New York, in particular the financial markets, I think, had a very negative view of the web, as did a lot of the large media companies who had invested heavily in it, and not seen a return. [They] were, quite honestly, I think, driven in part by a concern that their traditional models were going to be disrupted. So that was some schadenfreude.
Web 2.0 really meant that if we take a platform that is open, that has a shared sense that values how we connect to each other, how we share information, how we communicate, we can do some pretty remarkable things.
The open source stack of technologies had become far more stable and, probably most importantly, we had a broadband usage that had crossed 20 or 30 percent in developed markets and was growing at a spike similar to the spike we see now [in 2013] with mobile adoption.
Probably the seminal Web 2.0 company was one that started in 1998, and that was Google. Google was built from the ground up on this idea that the web is about connections between things. In the first instance those things were webpages; in the second instance they were people. Just like what we’re doing now on a Google platform [in this Google Hangout].
[Google] gave almost everyone an instant reason to derive value from the Internet, which was, “I can instantly find what I need and go there.”