I first posted this essay on my personal blog and have moved it here, with a few small tweaks. This essay gives some insight into why Content Science is pursuing the Content + Credibility Study and why we offer content testing services.

I feel the need to say what should be obvious. Why? Because recently, while catching up on my Twitter feed, the following statement smacked me like a gauntlet:


I was too late to join the conversation, but the statement has concerned me ever since. In the user experience and design communities, has an assumption locked our thinking about reading so tightly that we refer to it as a “fact”?

Why This “Fact” Stifles Us

Based on my experience with designing for and observing users, I am convinced that users read on the web (among other places). Sure, they scan hurriedly through irrelevant or uninteresting content until they arrive at what they want. (For a nice explanation, see pages 2-4 of “Letting Go of the Words.”) THEN, users read.

Why do we forget the reading part?

Think about some of the confining implications. If users don’t ever read, then

  • It doesn’t matter what we say or how we say it because users won’t notice.
  • Text has little impact on how users perceive a brand or make a decision.
  • We can communicate only with visuals.

And, taking this assumption to its logical conclusion, if users don’t ever read, is there much point to having any words on the web?

These implications seem ridiculous when I shine the light of reason on them. But, when they lurk unexposed with the shrouded assumption that users don’t read, our design and content choices are at risk of suffocating. I think we need to revisit this “fact,” starting with an excavation of Jakob Nielsen’s influential study of How Users Read on the Web.

Let’s Delve Into The Source: The Nielsen Study

For background, read Jakob Nielsen’s explanation of the study. Also try to check out a longer version including explanations of related studies. In my searching, I did not find much constructive criticism of this study. I feel it has five limitations:

1. The Topic: Irrelevant

A tourist trip to Nebraska? I know very few people for whom this topic would be relevant. (No offense to Nebraskans out there! I’m sure it’s a beautiful state. It’s just not on most people’s destination list, even if it should be.) In fact, the topic was chosen specifically because people would likely know little about travel in Nebraska. My concern is that if the topic is not pertinent, people won’t be motivated to read about it.

2. The Participant Sample: Unknown Interest

I could let limitation 1 go if the study had recruited a sample of people who expressed interest in a trip to Nebraska. It also would be interesting to test a sample of people with interest against a sample of people with no interest. However, the explanation does not state that the study used such sample criteria.

3. The Content Options: Too Extreme

The options include variations on a bombastic marketing version and a sparse objective version as well as variations on paragraph form and bulleted list form. Some important nuances are missing from the content options. How about a concise, promotional version that doesn’t lie and uses a bulleted list or a simple table? I also wonder whether wording variations and format variations are too many variables in one study. Furthermore, because the success metric focuses largely on remembering the list of tourist attractions, the content option that performs best—a bulleted list of the attractions—is designed to be memorized.

4. The Context: Unclear But Probably Persuasive

The study explanation does not mention the purpose of the content and the overall website. Is the purpose to attract new tourists, to win back past tourists, to encourage tourism business, or something else? Did the study scenarios reflect the context realistically? Also, most of these possible contexts (which I inferred based on reading the original version of the content) seem persuasive, not educational or informational.

5. The Success Metric: Not Complete

The study uses a reading usability metric including comprehension, recall, and time. It also includes a subjective measurement, but that measurement concerns mostly usability qualities (how easy it was to find information, and so on). The metric does not address content meaning, influence, likelihood to visit Nebraska, or related measurements. From a user perspective, is the goal to remember the exact names of Nebraska’s tourist attractions? Or is the goal to make a confident decision about whether Nebraska is worth visiting? From a business perspective, is the goal to teach people about Nebraska’s specific tourist attractions? Or is it to convince people that Nebraska deserves to be on their travel itineraries? I believe the study tries to stick strictly to usability. But is it useful to measure success in a persuasive context without touching on meaning, influence, and broader goals?

The description of the metric shows awareness of context, noting that one might add weight to certain elements of the metric for an intranet or a leisure site. However, because the metric elements do not address persuasion, adjusting their weight for a persuasive context would not help.

In short, I believe these limitations stem from the following two mistakes:

  1. Attempting to analyze and measure a persuasive situation as an educational one.
  2. Trying to test reading without considering relevancy and context.

Because of these limitations, I don’t feel the study allows us to conclude much more than the following statement: People with unknown interest in visiting Nebraska who are asked to learn about Nebraska’s tourist attractions remember those attractions best when the attractions have little description beyond their names and are displayed in a bulleted list.

We certainly can’t conclude from this study that people don’t read on the web.

Now, Let’s Elevate Our Understanding

Should we cut this study some slack because it happened 12 years ago? Yes and no. I truly appreciate how this study brought corporate attention to writing for the web. I respect the effort to test and measure reading usability at a time when the web was very new. I also am grateful that this and related studies inspired Redish’s useful description of users’ scanning behavior in “Letting Go of the Words.”

However, the approach exemplified in this study limits our thinking about how users read during an interactive experience. We learn only what users quickly find, read, and memorize on command. We do not learn

  • What content resonates with users, relates to them, or influences them
  • What reading is like for users who find content about a topic that genuinely interests them
  • The ways content makes (or fails to make) an emotional connection with users

And that’s just for starters. So, I think this study is overdue to have its slack tightened. And we’re well overdue to elevate our understanding of interactive reading, which will breathe new life into our content and design choices.

Originally published on the now-archived Content Science blog in January 2012.

The Author

Colleen Jones is the author of The Content Advantage and founder of Content Science, a content intelligence and strategy firm that has advised or trained hundreds of the world’s leading organizations since 2010. She is also the former head of content at MailChimp, the marketing platform recognized by Inc. as 2017 Company of the Year. A passionate entrepreneur, Colleen has led Content Science to develop the content intelligence software ContentWRX, publish the online magazine Content Science Review, and offer online certifications through Content Science Academy.

Colleen has earned recognition as an instructor on LinkedIn Learning, one of the Top 50 Most Influential Women in Content Marketing by a TopRank study, a Content Change Agent by the Society for Technical Communication’s Intercom magazine, and one of the Top 50 Most Influential Content Strategists by multiple organizations.

Follow Colleen on Twitter at @leenjones or on LinkedIn.

