Until this March I had been an accessibility manager for 16 years. In that time there have been four questions that really bugged me, because I never found a satisfactory answer to any of them.
The answers have eluded me because we never had the relevant data, so instead we tried to read the accessibility tea leaves and hoped that our interpretation of the patterns they formed was accurate.
There are ways of ascertaining whether things are going well, such as compliance. I would never advocate compliance as an answer to whether all users have comparable experiences, but as most of what it covers is sensible, it is a good starting point.
Monitoring the complaints log is something I have also done for years, and at the BBC we turned the average number of complaints we received in a week in 2005 into the number received in a year by 2020. At first glance that sounds great, but whether it reflects good practice or people giving up is debatable. The market has become so big that complaints have been diluted by the sheer scale of what can be complained about. Users may decide a product will never improve and so compromise, or may no longer feel they are being listened to, so why should they bother?
Any of these possibilities undermines the reliability of complaints as a metric.
So instead we sometimes asked in quant studies or focus groups, but you get little breadth of opinion or statistical significance from either approach, much in the same way that asking the opinion of lobby groups and charities isn’t reliable.
All of these are worth doing because they are great for surfacing specific questions and barriers, but none of them is an indication of audience satisfaction.
So for 16 years we never knew how well we were really doing, because there was little quantitative evidence, which for any organisation that uses data in its decision-making process is not a great approach.
Since March this year I have made it my mission to design methodologies and build tools that will enable the following questions to be answered.
In the rest of this article I will unpack the first three as they are somewhat related.
The Unanswered Accessibility Questions:
- How successful is ‘our’ accessibility programme from a user experience perspective?
- How can quantitative and qualitative research be segmented by barrier and human characteristic?
- What are the opinions of disabled audiences in any quantitative research study?
- Out of our X thousands/millions of web pages, what needs attention now? (I’ll park this one for now as it is about technical rather than user data)
Can anyone truthfully answer any of these without rich, reliable and regularly repeated quantitative data? There are clear paths for gathering the opinions of individuals, but there are problems I have seen repeated over and over again. The big one that pops its head up on a regular basis is qualitative data presented as if it were quantitative.
There is a really thorough article on NN Group on this subject, but the basics are that you cannot extrapolate the opinions of a user population from just 30, 50 or 100 participants. Qualitative research is great for finding questions or exploring ideas, but it is practically useless for evaluating success.
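To see why small samples cannot stand in for a population, it helps to run the numbers. Below is a back-of-envelope sketch (the function name is mine, and it uses the standard worst-case assumption of a 50/50 split) showing the 95% margin of error on a proportion measured from samples of different sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 100, 1000):
    print(f"n={n}: \u00b1{margin_of_error(n) * 100:.1f} percentage points")
# n=30:   ±17.9 percentage points
# n=100:  ±9.8 percentage points
# n=1000: ±3.1 percentage points
```

With 30 participants, a reported "60% satisfied" could plausibly be anywhere from the low 40s to the high 70s, which is why such studies find questions rather than answer them.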
The other side of the problem is recruitment, as the participant groups are often too narrow, focusing on demographics rather than barriers. A person’s condition is a demographic marker, but the barriers they face are human characteristics. Recruiting and segmenting by characteristic (for instance, people who experience acute phonological dyslexic barriers, are not confident readers, and experience anxiety when faced with online forms) is a lot more interesting than just ‘dyslexic.’
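In data terms, segmenting by characteristic simply means tagging each response with the set of barriers a participant experiences, then filtering on combinations of those tags. A minimal sketch, with entirely illustrative data and tag names:

```python
# Hypothetical survey responses tagged with barrier characteristics
# rather than diagnosis labels (all names and values are illustrative).
responses = [
    {"id": 1, "barriers": {"phonological_dyslexia", "form_anxiety"}, "satisfaction": 3},
    {"id": 2, "barriers": {"phonological_dyslexia"}, "satisfaction": 4},
    {"id": 3, "barriers": {"low_vision"}, "satisfaction": 5},
    {"id": 4, "barriers": {"phonological_dyslexia", "form_anxiety",
                           "low_reading_confidence"}, "satisfaction": 2},
]

def segment(responses, required):
    """Participants whose barrier set includes every required characteristic."""
    return [r for r in responses if required <= r["barriers"]]

group = segment(responses, {"phonological_dyslexia", "form_anxiety"})
print([r["id"] for r in group])  # → [1, 4]
```

The same filter composes naturally into intersectional sub-groups: adding `"low_reading_confidence"` to the required set narrows the segment further, which a single ‘dyslexic’ demographic label could never do.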
I am dyslexic, and the condition can present in many different ways and is often misdiagnosed or not diagnosed at all. I have seen studies in which related syndromes such as Irlen Syndrome were wrongly labelled ‘dyslexia’, both by recruiters and by the participants themselves. Actually, there’s another interesting intersectional characteristic sub-group right there: users experiencing acute phonological dyslexia as well as Irlen Syndrome barriers.
If you have never heard the phrase “nothing about us, without us”, it’s a great way of pointing out that everyone’s opinion counts, and yet practically every online quant survey is inaccessible. Disabled and neurodivergent audiences make up over 20% of every mainstream product’s audience, and within that there is a lot of diversity in the barriers experienced. Without ensuring you have included that audience’s feedback, your marketing, audience or design research data will always be skewed and inaccurate, potentially by a significant margin. To truly understand what the breadth of your audience thinks, you not only need to give everyone the opportunity to participate in feedback, you also need to be able to identify the different intersectional groups within that feedback.
So even if you think your quant platform is accessible, how do you check that it is inclusive?
This problem is exacerbated because you can’t always ask survey participants directly whether they have a condition. Sometimes that is down to data laws like GDPR; sometimes people don’t want to declare a condition because of stigma or identity; sometimes a meaningful set of questions on the subject would simply be huge; some people don’t identify as disabled or neurodivergent; and in the case of neurodivergence, some don’t yet realise they have a condition.
So far my thinking is that identifying barrier groups is the way forward. Demographics like diagnoses or personal identity are unreliable when it comes to understanding barriers, but combinations of characteristics and barriers can be strong identifiers of demographic groups… and as long as there is some sensible data separation, issues with GDPR can be avoided.
So here is the question for every accessibility and inclusive design practitioner, manager and leader.
If successful inclusive design can’t be measured by compliance, how do you know all your customers have a comparable experience of your products and services?
…and when I say “all” I mean sighted, vision impaired, hearing impaired, Deaf, motor impaired, dyslexic, autistic, dyspraxic, ADHDers, the colour blind… and all those lovely intersectional people in between.
For accessibility and inclusive design to deliver on what they promise, we need a way of measuring success. We need to know what doesn’t work so well and for whom, and evolve best practice across the industry based on reliable quantitative data.
If you have any thoughts or would like to talk to me directly, please seek me out on Twitter @garethfw
…and at some point I’ll be back with thoughts on question 4.