With the number of articles out there on the topic, we know that a content-first workflow is the way to go. Steph Hay describes an ideal content-first process beautifully, but there was a key point I noticed when trying to figure out how we could incorporate it: it seems to be optimized for content that doesn’t yet exist. How do we design for content that already exists?
In our case, we were revamping our website and had 30,000 assets that needed to be structured as content types for our new CEM. Content existed — in excess, and inconsistently. Handing over lots of unique content blobs and wishing our designers the best of luck wasn’t going to work.
When we started structuring the content, we realized we were asking and answering lots of questions that our developers and designers would need: What changes could we make if we split fields apart and rewrote the content? What fields did we need when we referenced another structure? These questions were uncovering the ways we’d decide to build the new experience.
So, we turned it into an opportunity: we revolved our entire workflow around content types and structures.
Since the workflow centered around content types, we needed to determine a list of possible content types. This necessitated knowing which content assets we were going to keep.
So, a full audit was critical. Once we had our spreadsheets of content, we used the page titles and URLs to parse through that content, identify pieces of content that were similar, and put them into like “buckets.” Then we gave those buckets names that made sense, like “article” and “multimedia”. These buckets became our list of high-level preliminary content types.
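To make the bucketing pass concrete, here is a minimal sketch of how it could be automated. The bucket names and keywords below are purely illustrative, not our actual taxonomy, and anything a keyword match can't place still needs the manual review described above:

```python
# Hypothetical sketch of the audit "bucketing" pass: group pages into
# preliminary content-type buckets by matching keywords in titles/URLs.
# Bucket names and keywords are illustrative, not a real taxonomy.
from collections import defaultdict

BUCKET_KEYWORDS = {
    "article": ["article", "news"],
    "multimedia": ["video", "podcast", "webinar"],
    "blog post": ["blog"],
}

def bucket_pages(pages):
    """pages: list of (title, url) tuples -> dict of bucket name -> pages."""
    buckets = defaultdict(list)
    for title, url in pages:
        text = f"{title} {url}".lower()
        for bucket, keywords in BUCKET_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                buckets[bucket].append((title, url))
                break
        else:
            # Nothing matched: flag for manual review rather than guessing.
            buckets["unsorted"].append((title, url))
    return dict(buckets)
```

In practice a pass like this only produces a first cut; the judgment calls about ambiguous pages still belong to the people who know the content.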
A quick note about content types: they’re often messy, or at the very least, widely varied from one organization to another. There aren’t any “right answers” here. For us, we had articles, but also article-like types, such as chapters and blog posts. We’d eventually need to determine whether these were the same content type or different types. We flagged these for comparison later.
Once our content was audited, and we had our buckets of possible content types, we were ready for the rest of the workflow.
Akin to an agile “discovery” phase, we needed to decide on scope for the content types. We started by choosing a content type to work on, then we identified examples of each of these types from the audit. We grabbed examples that represented the majority of a type, but at the request of our designers, we also found some of the worst examples. By looking at our known extremes, we accommodated odd scenarios early and reduced rework.
Then we workshopped the content type. This work alone could be a whole book, so the short story goes like this: we looked at our examples one by one, and whiteboarded a list of every element we saw in the content. Some of our article fields looked like this:
We highlighted elements found across all of the examples and noted elements that were unique. We also noted how we wanted to differentiate similar fields, like short description versus abstract and introduction. This list served as our high-level content structure.
In the few cases when we needed to compare more than one content type, we followed the above steps for each type separately: what high-level structure did articles need? Separately, what did chapters need? Then we did a comparison: were most of the fields common, or were there many pieces of structure we needed for one that we'd never use for the other? Even in this early phase, we could literally see (on the whiteboard) that they'd be different enough to warrant their own structures.
With the preliminary structure down pat, we moved forward to create a more detailed content structure.
Once we had a preliminary structure and our examples, we needed to dig into the details.
The first part of this step was refining the current fields to the correct level of detail. For example, many of our articles had subtitles, so we added a field for that. We also had two types of Professional Development credits, so each received its own field.
By this point, we had spent most of our time concentrating on the user-facing content fields, but there were often fields needed behind the scenes, too. These might include:
Going back to our example, we added:
Now we had a complete list of content type fields: a content structure. But one of the most important aspects of our process flow was making sure that the information we uncovered about the content was also captured, so the design work could go as smoothly as possible.
So we documented additional details for each of our known fields. We used an Excel spreadsheet, which worked great for our simple structures.
Our process worked best when we detailed the following as columns in our spreadsheet. Each column would be filled out for each structure field:
| Spreadsheet Column | Description | Column Examples |
| --- | --- | --- |
| Field Label | The “title” of the field | Title |
| Field Type | Describes the kind of field | Single-line text, number, search |
| Mandatory or Optional? | Describes if the field must be filled in or not | Mandatory or optional |
| Allow Multiple? | A system configuration to allow more than one value in the field | No |
| Character Limits | Defines if you want to limit how much can be entered | 250 characters |
| Field Rules | Specifies if you want to run validation on a field | A–Z, special characters allowed |
| Field Instructions | Help text for your authors | Using title case, put the title here. Do not add subtitles |
| Field Display | Yes or No indicator to designers if you think it is a field that should display | Yes |
| Display Comments | A field to provide context to your designers where it isn’t obvious | The most important element on the page |
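A structure spreadsheet like this can also be captured in a machine-readable form, which some teams find useful for handing off to developers. The sketch below is hypothetical (it is not a real CEM schema format, and the second field is abbreviated for space); it simply mirrors the spreadsheet columns as keys:

```python
# Hypothetical machine-readable version of the structure spreadsheet.
# Each field dict carries the same columns described in the table above.
# This is an illustration, not a real CEM schema format.
ARTICLE_TYPE = {
    "name": "article",
    "fields": [
        {
            "label": "Title",
            "type": "single_line_text",
            "mandatory": True,
            "allow_multiple": False,
            "char_limit": 250,
            "instructions": "Using title case, put the title here. "
                            "Do not add subtitles.",
            "display": True,
            "display_comments": "The most important element on the page.",
        },
        {
            "label": "Subtitle",
            "type": "single_line_text",
            "mandatory": False,
            "allow_multiple": False,
            "char_limit": 250,
            "display": True,
        },
    ],
}
```

The advantage of a form like this is that the same document the content team maintains can be read by tooling later, during migration.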
Once our structure was detailed, and our notes were filled out in the spreadsheet, it was time to see the structure in action.
Let’s recap. We’ve identified pieces of like content, named them, and created a content structure based on those pieces. Next, we needed to validate our work. Ergo, mapping.
The concept of mapping was simple: grab one of those content examples, and literally copy and paste it, field-by-field, into the structure spreadsheet next to the Field Labels.
The simplicity of this task was misleading. Since so much of our content varied widely, it was still a transformative exercise to determine how content would live in the new structure. Judgment calls needed to be made by people who knew the content best.
These final mappings were a transparent tool to help designers, and later migrators, see how we expected the old content to fit into its new home. It looked similar to this:
| Content Example | Field Label |
| --- | --- |
| Getting Your Money’s Worth | Title |
| An Introduction to Investments | Subtitle |
| 2 | Professional Development Credit |
| Starting to Invest | Article Series |
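Once content is mapped field by field, some of the structure's rules can even be checked mechanically. A minimal sketch, using an illustrative rule set rather than our real structure:

```python
# Hypothetical stress-test of a mapping: check each mapped value against
# the field's rules (mandatory, character limit). The rule set below is
# illustrative, not a real content structure.
FIELD_RULES = {
    "Title": {"mandatory": True, "char_limit": 250},
    "Subtitle": {"mandatory": False, "char_limit": 250},
    "Professional Development Credit": {"mandatory": False},
}

def check_mapping(mapping):
    """mapping: dict of field label -> mapped text. Returns problems found."""
    problems = []
    for label, rules in FIELD_RULES.items():
        value = mapping.get(label, "")
        if rules.get("mandatory") and not value:
            problems.append(f"{label}: mandatory field is empty")
        limit = rules.get("char_limit")
        if limit and len(value) > limit:
            problems.append(f"{label}: exceeds {limit} characters")
    return problems
```

A check like this only catches rule violations; the judgment calls about where ambiguous content belongs still have to be made by people who know the content best.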
Inevitably, we’d see adjustments to make, so we iterated. Common questions that helped us determine if changes were necessary included:
Pitting actual content against the proposed structures was a necessary stress-test. When we found we had missed something, we made an update. This exercise also resulted in the first deliverable that made our new content structure real, especially for legacy content.
I’ll be candid: in our first few rounds, we didn’t have this step. Instead, we sent our designers the new structures, some notes, and a URL to the old content. Ultimately, this step exposed the assumptions we were making, and in the end, reduced so much inefficiency.
Once we crafted our shiny new content type, and we knew it worked with our content, we willingly gave someone the opportunity to tell us that our (structure) baby was ugly.
The real purpose here was to materialize how authors might interact with the structure—and it resulted in our author point-of-contact asking more questions we hadn’t thought of.
Author help text is often a trivial afterthought, so why did we incorporate it as a step? We knew we’d need to help our authors use fields consistently.
The more we talked about transforming the content, the more ways we found for authors to interpret the fields. Since our authors didn’t live and breathe digital, help text was meant to serve as the reminder of “what we meant” during our conversations. The interpretation was vast, including:
The list goes on. To help the authors, our help text writer provided:
To write these specifics, she had to consider various possibilities — and there were lots of questions to answer. These questions provided context that was helpful during design discussions, and again later during migration.
Ah, finally—the wireframes.
We gave our designers the documents we had created up to this point: the structure, its corresponding details, the mapping, and the help text. Using these, they created designs per content type.
We met multiple times a week to review their mockups, ask questions, and iterate. Sometimes our designers even had suggestions that led to adjustments in the structure. This was normal: being content-first didn’t mean we would cut out our designers. We just wanted to focus on getting the content ready before we focused on design.
Once designs were agreed upon, every piece of content in the structure was annotated with its field label so the developers knew how the interface mapped to the structure. This provided another layer of the transparency everyone craved. After design, the structures and mockups were sent to developers for our standard Agile process. Sometimes we even got questions from our QA team that resulted in changes. Though less frequent, this feedback loop also helped us strengthen our intentions.
I’ll be the first to admit it: not every one of our 50-plus structures went through this whole process. But the structures that followed it most closely wound up moving more quickly than the others, and most importantly, retained shared language and shared understanding throughout.
By the time we got to the design step, I often reminded myself of all of the work we had done previously — the first four steps were, in essence, recording the content design requirements.
We rallied around content structures, but doing so still put content at the center of our process: structure-first was content-first. It also produced deliverables based on a single content type, which allowed everyone to be on the same page regarding the scope of every type. Each step added another layer of requirements that continued to become more refined.
And the better our team got at thinking about content specifically as content instead of design, the better we got at collaborating on content and design together.