If you read this post about UX and readability, you might have some questions about the usefulness of readability formulas in UX. We’re here to answer the questions raised by the article.
1 | “Readability formulas are not reliable”
The article claims, “If readability formulas were reliable, different formulas measuring the same text would all give similar grade levels”. We can understand why this would seem concerning at face value, but the reasoning behind it is a little confused. UX Matters has compared the SMOG index - the gold standard in medical writing - with the Dale-Chall formula, which is recommended for children’s books.
While some formulas are intended for a wide age range, others, such as these two, were designed for specialist purposes and should be treated as such.
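In fact, divergent scores are exactly what you’d expect from formulas calibrated for different audiences. As a rough illustration (not Readable’s implementation), here is a minimal Python sketch that scores the same text with two well-known formulas, Flesch-Kincaid grade and SMOG, using a naive vowel-group syllable counter. Note that SMOG was designed for samples of 30+ sentences, so it appears here purely to demonstrate the divergence:

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels. Real tools
    # use dictionaries and better rules, so treat this as approximate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def grade_levels(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    # Flesch-Kincaid grade level
    fk = 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
    # SMOG grade (intended for 30+ sentence samples; shown for illustration)
    smog = 1.043 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
    return {"flesch_kincaid": round(fk, 1), "smog": round(smog, 1)}

sample = ("The tenant must return the security deposit form. "
          "Administrative requirements are documented separately. "
          "Please read everything carefully.")
print(grade_levels(sample))
```

The two grades usually come out different for the same passage - not because either formula is broken, but because each was calibrated against different reference material.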
Learn more about the readability formulas we do recommend for general use.
The article also astutely points out that different programs can give you different grade levels. This comes down to the way each program evaluates certain text features.
We actually agree with this. Some readability websites and programs calculate readability using outdated, imperfect code. They can also evaluate certain text features - such as line breaks - in a way that simply doesn’t make sense for a writer in the 21st century.
Readable takes into account that many people use lists in their writing. We treat bullet points and list items as separate sentences, whereas some programs will detect a whole list as one long sentence.
And no, if you were wondering, the date 2019 is not counted as four syllables on Readable. We discount numeric dates, so they don’t count toward your overall readability. If you wrote it out as twenty nineteen, though, it would count as four syllables.
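To make the idea concrete, here’s a minimal Python sketch of this approach - an illustration only, not Readable’s actual code - that treats each bullet line as its own sentence and skips purely numeric tokens such as dates:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Treat each bullet/list line as its own sentence, rather than
    # letting a whole list run together as one long "sentence".
    units = []
    for line in text.splitlines():
        line = line.strip().lstrip("-*• ").strip()
        if not line:
            continue
        units.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", line) if s.strip())
    return units

def countable_words(sentence: str) -> list[str]:
    # Skip purely numeric tokens such as "2019" so dates don't
    # inflate the syllable count.
    return [w for w in re.findall(r"[\w']+", sentence) if not w.isdigit()]

text = """Our goals for 2019:
- Ship the new editor.
- Improve readability scores."""

sentences = split_sentences(text)
print(len(sentences))             # each bullet counts as a separate sentence
print(countable_words(sentences[0]))
```

A tool that instead splits only on full stops would read the whole list above as a single long sentence, badly skewing the average sentence length.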
2 | “Readability scores are not valid”
UX Matters’ reasoning behind this point is that readability is not the same as legibility. We agree with this and don’t know of any readability tools which have claimed differently, but we disagree that this makes readability invalid.
This is what Readable’s readability tool can do:
- Tell you how sharp and clear your copy is
- Analyze how accessible your writing is
- Let you know when your sentences are too long
- Give you some tough love when you’re using a difficult word where a simple one would do
- Audit your website and tell you which pages need improving
- Proofread your emails for readability
- Integrate into your CMS for quick and easy readability checking before you hit publish
- Be the Stephen King in your life and tell you when you’re using too many adverbs
- Analyze keyword density
- And much, much more!
As you can see, a readability tool isn’t confined to the formulas. For example, adverbs are not calculated in the formulas, but we let you know when you’re potentially being excessive with them.
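A simple adverb check might look something like the Python sketch below - a naive “-ly” heuristic with a small exception list, purely for illustration; Readable’s actual detection logic will differ:

```python
import re

def flag_adverbs(text, threshold=0.04):
    # Naive heuristic: flag words ending in "-ly" as likely adverbs
    # (excluding a few common false positives), then warn when they
    # exceed a given share of the total word count.
    not_adverbs = {"only", "early", "family", "likely", "supply", "reply"}
    words = re.findall(r"[a-z']+", text.lower())
    adverbs = [w for w in words if w.endswith("ly") and w not in not_adverbs]
    too_many = len(words) > 0 and len(adverbs) / len(words) > threshold
    return adverbs, too_many

adverbs, warn = flag_adverbs(
    "He walked slowly and spoke very quietly, pausing frequently."
)
print(adverbs, warn)
```

Even a crude check like this catches adverb-heavy passages; a production tool would pair it with a proper part-of-speech tagger to avoid false positives.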
Here’s what a readability tool can’t do:
- Tell you whether or not your content is interesting
- Let you know if it’s visually friendly - we evaluate text, not formatting. However, we provide a lot of tips in our blog and newsletter and recommend sticking to shorter paragraphs
- Write your content for you or answer your emails (sorry)
As a content writer, we bet you have a host of different tools at your disposal. Readability formulas don’t solve all of your writing needs. However, they’re a really valuable part of the puzzle. Without good readability, the other great aspects of your writing simply aren’t going to get through to the average reader.
3 | “Readability formulas don’t consider the meaning of words”
Again, to back up this point, UX Matters has cited the Dale-Chall formula to invalidate readability formulas in general. To be clear, Dale-Chall is only suitable for writing aimed at young children.
Dale-Chall, due to its specialist nature, is not included in our overall readability rating and is suitable for a narrower range of purposes, as detailed above.
4 | “Grade levels are meaningless for adults”
The article cites one account of someone who rewrote an apartment lease aimed at low-income, low-literacy tenants. They renamed ‘security deposit’ ‘promise money’ and were, for some reason, surprised when tenants were confused.
This is because ‘promise money’ isn’t a real term. Moreover, we don’t consider ‘security deposit’ to be unreadable anyway, since neither of the words is over four syllables. Because of this, the change was unnecessary and, to be honest, a little patronizing.
That’s not what readability is about. They would have done better to focus on more impactful changes which didn’t change terms to make them more opaque. Nobody, regardless of intelligence level, knows what ‘promise money’ is.
So, we don’t think it unfair to say this decision went against common sense, and it is not a good example of readability for adults being “meaningless”. The low-literacy tenants probably appreciated the other, genuinely more readable aspects of the revised lease: shorter sentences and the revision of words which were actually difficult. The principles of readability don’t support this individual’s assumption that low-income people don’t know what a security deposit is.
We should also point out that readability isn’t all about education levels.
In a survey of 550 business people, Harvard Business Review contributor Josh Bernoff found that 81% of subjects found poorly written content a waste of time.
Although very well-educated and able to comprehend and unpick the content, they knew to spend their mental energy elsewhere. In other words, regardless of intelligence level, we live in an attention economy and quick scannability matters. We argue that this is meaningful to adults.
5 | “Readability formulas assume they’re measuring paragraphs of text”
We respect UX Matters and their right to an opinion about readability - but the points they make in this section are a little bold. Regrettably, they’ve used Microsoft Word in their example. The way Word calculates line breaks is flawed and contributes to a wildly inaccurate score.
As mentioned before, Readable correctly takes into account your lists and bullet points - lists aren’t a challenge for us.
They also mention the sample sizes the readability formulas specified at the time of their conception. However, those sample sizes date from when scores were calculated by hand. Readable grades your text as a whole, so the original sample sizes don’t apply - although, to ensure an accurate score, we recommend using a text of over 100 words.
6 | “Revising text to get a better score misses the point”
Caroline makes a great point that readability isn’t the only thing to consider when editing your copy. We agree with her statement that “long sentences often occur in writing that is difficult for some people to read, but that does not mean sentence length is the main or the only problem for those people”. However, editing for readability is far from fruitless.
Editing your copy for readability can ensure that your content is easy for the reader so that your other communication skills can really shine and aren’t obscured by convoluted content.
7 | “Good scores don’t mean you have useful or usable content”
This claim is correct, but we doubt anyone using a readability tool thinks it’s the only ingredient in the content stew.
To ensure your content is useful to your reader, adjust your targets. Our letter-grading system is designed to help you reach the general public because wide accessibility is important.
However, if you’re writing to an audience of people who typically have a postgraduate qualification, you probably don’t need to edit your content down to a general public level.
However, as we mentioned above, people of all educational levels appreciate clear, concise content that respects their time. We’re hard-wired to prefer finding our answers quickly and easily. Increasing readability will also dramatically increase the number of people who actually finish reading your content.
We like to think our approach to readability is more multidimensional than that of other tools - for example, we have features which analyze how conversational your content is and what kind of tone you’re using.
What is your biggest challenge when editing your copy and what are the most important improvements to you in your redraft? Let’s have a discussion about them in the comments.