Readability analysis: accept, consider, ignore?


Imagine the scene. A graphic designer submits some branding materials to a client. The client’s marketing guy feeds the image into a scanner and takes a sip of coffee as the computer does its work. Ten seconds later, the progress bar reaches 100% and a bullet list in red, amber and green fills the screen. The visuals have been thoroughly assessed, and it’s not looking good.

Two of the rectangles have long edges that are 3% below the golden ratio
The model’s smile has an ecstasy coefficient of 0.73, 0.02 below optimum
The yellow glow is hex #f6f600, not the more pleasing #f6f300
28% of the image is in the passive voice
“Lorem ipsum dolor sit amet” is not standard English
The kitten image satisfies the Schrödinger cuteness pudding by 95%

This isn’t a scene from Black Mirror. It is happening right now to graphic designers from Reykjavik to Wellington. And they’re mad as hell.

Actually, that’d be a lousy episode. But it is starting to happen to copywriters. I’m hearing of writers having their work fed through readability algorithms by clients as a first step to being signed off. I wouldn’t say it’s a common phenomenon just yet, but who knows. As a worker with words, I’ve been here before …

Spell Czech 😂

Back in the day, their was a debate among proofreaders a bout weather there daze whirr numbered thanks to spell chequers on whirred processors, and sentences like this one were commonly scene on many an Internet massage bored. Proofreaders would even discuss whether they themselves would ever consider using one. Naturally, most were offended at the mere suggestion of it (but of course we used them – why wouldn’t you?). Once the limitations and undeniable benefits of the spell checker became obvious, we all calmed down. And anyway, we laughed, there would never be such a thing as

Grammar Check!

Yes, we did know that the sentence was fragmented. We even considered correcting our correlative conjunction mismatches. But the rise of the grammar checker still took the writing and editing community by surprise. Checking grammar was much more difficult than comparing words with a dictionary (and ignoring homophones). It required an understanding of language that is extraordinarily difficult to achieve.

Flawed though grammar checkers are, I must admit to finding the technology remarkable, and I occasionally have a Kasparov v. Deep Blue moment when a particularly complex structural flaw gets flagged, even if 99% of the time it’s a flaw I’d have noticed on second reading. (And never forget that Kasparov lost 3½–2½, not 6–0. Grammar check mate.)

Readability analysis: the final insult

So here we are. We’ve had our spelling and grammar autocorrected, and now we’re having the very fruits of our trade – our sentences – analysed by emotionless digital gatekeepers. In fact, many of us are willingly submitting our work to the machines whenever we blog. We are allowing our sentences to have their essential components – verbs, voice, adjectives, flow, lengths – judged for correctness by computers.

Is there such a thing as good copy and bad copy? I hope so. No, I think so. Let’s just say you know it when you see it. We can probably agree on that.

But can the measure of good or bad copy really be determined by how far it strays from some arbitrary mathematical mean?

This development doesn’t seem to be satisfying demand from grammarians, publishers or website owners. It seems to be more like the spawn of the SEO sector. I’m not going to argue about SEO. I’ve met and worked with enough SEOs to recognise that they know exactly what they are doing. Their actions have tangible, measurable results, and they are lovely people.

It just seems to me that the rise of readability analysis comes from that mindset – from people who are sure that if they can optimise everything until any rough edges are removed, success will follow.


The cursed passive voice

I’ve yet to be convinced that readability analysis has reached the stage where its results should be taken as gospel, not least because spelling and grammar checkers can still make awkwardly clever mistakes.

The passive voice is a particular favourite of the tests. Yes, it’s good to be active and to have all your subjects and objects actively working on the sentence’s behalf. But passivity often carries a subtlety, a quiet confidence, a sense of not having to try too hard, a feeling that not every sentence has to be a call to action. It’s also a useful way of referring to previous statements without having to restate sentence objects or their pronouns. And it can bring the focus of a sentence to the object rather than the subject, because sometimes it’s the subject that’s more important. This guy gets it.

It can’t simply be stated that the passive voice is bad. Indeed, PV is good enough to be allowed in a recommended[by whom?] 10% of sentences, but go over that and all of a sudden your copy is suboptimal. Do you buy that?

Ultimately, it’s quite easy to identify passive voice algorithmically, and that’s where its prominence in tests comes from. If a sentence feels right to you, the human, it’s right. By all means run your copy through a readability checker, but have an open mind about its results – you risk destroying something beautifully imperfect.
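To see just how easy, here’s a minimal sketch of the kind of heuristic a readability tool might use. This isn’t how any particular product works, and the names (`looks_passive`, `passive_ratio`) are my own; the idea is simply to flag a form of “to be” followed shortly by a word that looks like a past participle. Note how crude it is – it happily flags “He was seven” as passive.

```python
import re

# Forms of "to be" that can introduce a passive construction.
BE_FORMS = {"am", "is", "are", "was", "were", "be", "been", "being"}

def looks_passive(sentence):
    """Crude heuristic: a be-verb followed within two words
    by something ending in -ed or -en."""
    words = re.findall(r"[a-z]+", sentence.lower())
    for i, word in enumerate(words):
        if word in BE_FORMS:
            for nxt in words[i + 1:i + 3]:
                if nxt.endswith("ed") or nxt.endswith("en"):
                    return True
    return False

def passive_ratio(text):
    """Fraction of sentences flagged as passive."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(looks_passive(s) for s in sentences) / len(sentences)
```

A tool built on something like this can count passives and compare the ratio with that magic 10% threshold – but its false positives are exactly why you should treat the verdict with an open mind.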

Not the same as reading levels

I’m quite interested in reading levels when it comes to copy. Writing copy that a typical 11-year-old would understand (even if it’s for an educated adult audience) seems like a sound concept to me. Your speech patterns and, to a large extent, your everyday vocabulary are pretty much set in stone by that age. Great copywriters don’t necessarily know loads of fancy words – and they certainly don’t use them in their copy. They’ve learnt to use the words everybody understands efficiently, possibly in unexpected ways that carry new meaning.
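Reading-level scores, incidentally, come from simple arithmetic rather than machine judgement. The Flesch–Kincaid grade level, for example, needs only sentence, word and syllable counts. Here’s a rough Python sketch – the formula is the published one, but the vowel-group syllable counter is a crude assumption of mine, not what serious tools use:

```python
import re

def count_syllables(word):
    # Crude approximation: count groups of consecutive vowels.
    # Real tools use pronunciation dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Approximate U.S. school grade needed to understand the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short words in short sentences score low; long words in long sentences score high. That’s the whole trick – which is both why it’s useful as a rough guide and why it can’t tell good copy from bad.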

Copywriters’ clients usually tell them who their target markets are, but frankly, if you can’t work it out for yourself, you probably shouldn’t be in the business. Good writers know their audiences just as well as fashion designers, film-makers and comedians do, and accept that there’s certain language you can’t expect a given audience to respond to.

I think there’s probably a place for machine learning and algorithms to determine good reading levels, and the results I’ve seen can be useful; but the ultimate test of the quality of copy is how well it informs, entertains and converts in the wild. Here’s some good analysis from an avowed rival.

We’re not there yet

Anyway, I’ve said my piece. As I mentioned at the start, it’s still uncommon for clients to reject work on the basis of digital readability analyses. People might not all understand the intricacies of grammar or spelling, but they know when something is readable because, well, they can read it. Any danger comes from people using a digital readability test as their first sweep (as they would with a spell checker) and rejecting perfectly good work on the basis of its results.

Like all creatives, copywriters are good at taking criticism on the chin, and frankly we have it easy – rejected graphic design often means starting again from scratch, whereas we might just have to re-jig a few sentences. (I don’t even know how sculptors ever get to the point where they can submit their work.) Criticism from another human is real and valuable.

In terms of SEO, readability isn’t a ranking factor per se, although an unreadable site will suffer a higher bounce rate, which is a ranking factor. It would have to be quite dire to cause noticeable bounce compared with, say, slow loading times or poor mobile-friendliness, though. The best readability test you can do is to read your copy yourself, let a few other people read it, and act on their comments. They will be correct roughly 100% of the time.


Image: Sean Batty