Last week the World Wide Web Consortium's (W3C's) [Web Content Accessibility Guidelines (WCAG) 2.0](http://www.w3.org/TR/WCAG20/) became a "W3C Recommendation", i.e., an official standard.
In the current installment of my ongoing series on WCAG 2.0, I'm sharing my
experiences with the following success criterion:
3.1.5 Reading Level: When text requires
reading ability more advanced than the lower secondary education level after
removal of proper names and titles, supplemental content, or a version that does
not require reading ability more advanced than the lower secondary education
level, is available.
This is a Level AAA requirement, so unless you're trying to attain maximum
accessibility it probably won't be of huge concern to you. However, before you
dismiss it entirely, I encourage you to give it some thought. This is one of the few
WCAG 2.0 success criteria that we at DO-IT had considerable difficulty meeting in
our DO-IT Video
Search application, and we still haven't quite reached a measurable level of
conformance on all pages. However, this success criterion has caused us to think
differently as we choose our words in written communication, and I think that's a
good thing.
In higher education, I've heard a lot of grumbling about this success criterion from faculty members. Why should they write for a lower secondary level audience, when their audience consists of college undergraduate or graduate students, if not a
higher-educated audience of scholarly peers?
The answer is very well explained by the W3C in the WCAG 2.0 supplemental page on [Understanding Success Criterion 3.1.5](http://www.w3.org/TR/UNDERSTANDING-WCAG20/meaning-supplements.html). As the W3C explains, word length and sentence complexity have an effect on the ability of individuals with reading disabilities such as dyslexia to decode the words on a page. These
individuals can sometimes spend an enormous amount of mental energy decoding
complex text, even if they are highly educated individuals with specialized
knowledge of the subject matter. We authors can relieve some of this burden, and
make our content easier for everyone to read, simply by being aware of the words
we're choosing, and occasionally opting for smaller words and shorter sentences.
How to measure readability of text is a problem that researchers have been
exploring for nearly a century. Some of the earliest work that I'm aware of in this area is that of William Gray and Bernice Leary, who wrote the book "What Makes a Book Readable" way back in 1935. Their pioneering work included experiments in which they manipulated dozens of linguistic variables and attempted to identify
which variables had the greatest impact on human subjects' ability to read the text.
Other researchers have built upon the knowledge gained in these early tests.
Perhaps the most influential of these researchers was Rudolf Flesch, who
developed readability tests based on counts of words and syllables, and
conducted experiments to validate those tests against the performance of human readers. The Flesch Reading Ease test and the Flesch-Kincaid Grade Level test both measure readability with slightly different formulas based on the number of
syllables, words, and sentences.
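For those curious about the arithmetic, here is a minimal sketch of the two Flesch formulas. The function names and example counts are mine, and real tools differ in how they count sentences and syllables, so treat this as an illustration rather than a reference implementation:

```python
# A minimal sketch of the two Flesch formulas. Both are built from just
# three counts: sentences, words, and syllables. Counting syllables
# reliably is the hard part and is assumed to be done elsewhere.

def flesch_reading_ease(sentences: int, words: int, syllables: int) -> float:
    """Higher scores mean easier reading, on a roughly 0-100 scale."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(sentences: int, words: int, syllables: int) -> float:
    """Approximate U.S. school grade level needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative counts (not taken from any real page):
print(round(flesch_reading_ease(10, 100, 150), 1))   # about 69.8
print(round(flesch_kincaid_grade(10, 100, 150), 1))  # about 6.0
```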
Gez Lemon of Juicy Studio has incorporated both Flesch tests, plus one other (the Gunning Fog Index), into an on-line [Readability Test](http://juicystudio.com/services/readability.php). It couldn't be much simpler: just plug in the URL of a web page and find out how readable that page is.
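For anyone who would rather poke at this locally, here is a rough sketch of the same idea for the Gunning Fog index. To be clear, this is my own naive approximation, not the code behind Juicy Studio's test: the tag stripping, sentence splitting, and syllable counting below are all crude heuristics.

```python
# A naive, do-it-yourself approximation of the Gunning Fog index for a web
# page. Real readability tools strip markup, split sentences, and count
# syllables far more carefully than these heuristics do.
import re
from urllib.request import urlopen

def count_syllables(word: str) -> int:
    """Crude heuristic: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    """0.4 * (average sentence length + percentage of 'complex' words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))

def fog_for_url(url: str) -> float:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)  # strip tags, very roughly
    return gunning_fog(text)

# Example (any URL would do):
# print(round(fog_for_url("http://www.example.com/"), 2))
```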
In testing the DO-IT Video Search site, we were pleased to find that the transcripts of all our videos actually met the success criterion. This wasn't altogether surprising, as we recognize that the audience for our videos includes lower secondary level students, and we consciously write our video scripts with that in mind.
We failed, though, on two pages. The first was the [FAQ](http://www.washington.edu/doit/Video/Search/faq.html), which was written
with an audience much like myself in mind: someone who is somewhat technical,
and is comfortable with larger-than-average words. You, dear blog reader, are also
assumed to be in this category.
The other page that failed the readability tests was our [home page](http://www.washington.edu/doit/Video/Search). This page
posed unique challenges, because the home page consists mostly of titles and
brief descriptions of each of our videos. Our titles include words like "disabilities" and "accessibility" and phrases like "postsecondary education". DO-IT itself stands for "Disabilities, Opportunities, Internetworking, and Technology". Obviously we can't easily change any of this. We actually can claim at least some of the credit for the phrase "after removal of proper names and titles", which was added to the Success Criterion after we explained our dilemma to the WCAG Working Group.
Although we couldn't make our name or titles more understandable, we did find some room for improvement in the video descriptions. Here are some examples:
- We changed "Testimonials from employees with disabilities..." to "People with disabilities talk about..."
- We changed "a fully inclusive postsecondary learning environment" to "a higher education learning environment that includes all students"
- We changed the word "manipulated" to "changed"
- We changed the word "participants" to "students"
- We changed the word "subsequently" to "later"
As a result of our effort to conform to WCAG 2.0, I find that in general I'm more
aware, and I'm choosing simpler words. I've even done so a few times while writing
this blog post, which, according to our test results, is very easy to read (hopefully you agree):
| Summary | Value |
|---|---|
| Total sentences | 232 |
| Total words | 1399 |
| Average words per sentence | 6.03 |
| Words with 1 syllable | 864 |
| Words with 2 syllables | 307 |
| Words with 3 syllables | 130 |
| Words with 4 or more syllables | 98 |
| Percentage of words with three or more syllables | 16.30% |
| Average syllables per word | 1.62 |
| Gunning Fog Index | 8.93 |
| Flesch Reading Ease | 64.05 |
| Flesch-Kincaid Grade | 5.82 |
The Gunning Fog and Flesch-Kincaid indexes are both expressed as the number of years of schooling required to understand the content. So, this blog is readable either by an eighth grader or a fifth grader, depending on which scale you trust most. The third measure, the Flesch Reading Ease index, is a 100-point scale, where a higher number reflects better readability (the goal for WCAG 2.0 readability is a score of roughly 60-70).
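Incidentally, the Gunning Fog score can be recomputed from two other rows of the table above: 0.4 × (6.03 words per sentence + 16.30 percent complex words) works out to about 8.93, which matches the reported index.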
For comparison, here are the same results for the WCAG 2.0 document, which is considerably less readable than this blog, and not quite conformant to itself:
| Summary | Value |
|---|---|
| Total sentences | 1808 |
| Total words | 15048 |
| Average words per sentence | 8.32 |
| Words with 1 syllable | 8609 |
| Words with 2 syllables | 3290 |
| Words with 3 syllables | 1800 |
| Words with 4 or more syllables | 1349 |
| Percentage of words with three or more syllables | 20.93% |
| Average syllables per word | 1.73 |
| Gunning Fog Index | 11.70 |
| Flesch Reading Ease | 52.30 |
| Flesch-Kincaid Grade | 8.03 |
Will SC 3.1.5 result in web pages that are easier to understand? Or will web
authors load their source code with simple text and hide it off screen using CSS,
just to pass the readability tests?
`<span class="hidden">see spot run. go spot go.</span>`
Hopefully no one will do the latter. And if they do, hopefully they won't credit me with this brilliant idea.
My hope in spreading the word about this topic is that authors will simply pay
attention to the words they use. Check the thesaurus for smaller words, not bigger
ones, and if you can say it more simply, do it.