This is a non-comprehensive list of recommendations on how to improve the Screen Reader User Experience (UX). This list has worked for me in the past to get consistent results between planning and deployment. This article is mostly addressed to developers, visual and interaction designers, and QA testers. Not all recommendations apply to every role, but all outcomes are helpful for everyone. Other project stakeholders, like Project Managers, can benefit from knowing these recommendations. This is not a formal checklist but it can be the foundation for one.
There is a layer under every User Interface (UI) that “speaks” to the users. And I mean literally speaks to them. If it doesn’t, then something is wrong with the UI. Most of the time, individuals unfamiliar with Web Accessibility don’t realize this.
Just as we usually test by visually browsing and interacting with the mouse, testing with keyboard-only navigation and Screen Readers is becoming a requirement: we need to hear what components and their interactions sound like. Needless to say: silence is bad.
There is no replacement for hands-on, manual Screen Reader testing. As of the writing of this article, there isn't any automated test for Screen Readers. Testing the Screen Reader UX from conception to implementation is one way to improve it.
General Improvements to Screen Reader UX
Define user journeys for every UI or page. Write them down as numbered lists. E.g., “User Tabs to component A, then uses the down-arrow key to reach element A1, …”.
Video record screen reader sessions based on defined user journeys. Make sure to enable “computer audio recording”, otherwise it will result in a video without audio. Video recordings are a great reference when explaining to a developer how to reproduce screen reader bugs.
Test in as many different screen readers as possible. Some are free, some are pricey, some are strict, and some are very forgiving.
Test accessible gestures for mobile devices, but also small devices with external keyboards. E.g., Android Tablets with external mini keyboards.
Beware of cross-screen-reader bugs and aim towards cross-screen-reader solutions. E.g., VoiceOver for Mac will vocalize just about everything, including dynamic content. As opposed to JAWS/NVDA for Windows, which may need a preloaded parent tag for similar results. That is to say, vocalization varies from one Screen Reader to the next, depending on implementation, platform, and devices.
Be patient while testing ARIA attributes. Testing vocalization will take much longer (even at expert levels) than the usual “Mouse + Browser” testing. This is normal, adjust expectations and time estimates.
Make sure to test for consistency and double-check screen reader vocalization across different environments. E.g., localhost, development, staging, live.
Video record experimental approaches to improve Screen Reader UX that didn’t make it to the final implementation. Save for future recycling.
Video record the approved “final” outcome to avoid and spot regressions.
Improving Screen Reader UX by Role
As a Designer, explore examples and references using a screen reader (desktop and mobile). Listen to what components and elements sound like. Video record the screen reader exploration sessions to show to developers and other stakeholders. Point out cross-screen-reader vocalization differences as soon as spotted; they tend to be forgotten.
As a Developer, test with a Screen Reader while developing. If designers provided a video recorded session of the expectations, try to aim for a similar result (desktop and mobile).
As a QA tester, add video recordings of screen reader bug detections to QA tickets (desktop and mobile). This will help developers reproduce and debug issues faster than reading and interpreting written reproduction steps.
As a stakeholder, be aware of cross-screen-reader differences and limitations.
What to Avoid
Avoid using Chrome extensions to replace or emulate Screen Reader software. The focus of emulation should be the user, not the software. As of the writing of this article, I haven't come across an extension that handles some ARIA scenarios, such as aria-expanded or aria-live, which already have cross-screen-reader issues when using real software. So avoid emulators.
Avoid turning off the screen reader when it starts vocalizing. Instead, listen to it speaking, and try to associate the speech with the UI component and the interaction. I have to admit this happened to me at the beginning. Then I realized THIS is exactly what I should be testing: vocalization. Last year I wrote an article about overcoming the uneasiness of screen reader testing. It’s a helpful guide for slowly adapting to that new environment.
Avoid browsing the UI with the mouse while using a screen reader. This prevents hearing some additional instructions the Screen Reader might be vocalizing by default. E.g., specific keyboard key combination to interact with the component. Don’t skip components or elements with the mouse, always use keyboard-only navigation.
Avoid including ARIA attributes without actually testing them with a Screen Reader and listening to how they sound.
Avoid listening to music while testing with screen readers. Some developers and designers like to hear music or watch videos while working. Honestly, so do I, but then suddenly hearing the computer speaking might be distracting. This could make a slow process even slower.
The hard part of Web Accessibility is empathy. By that, I mean the part where a development team has to emulate users with disabilities to provide a solution. As opposed to sympathy, which is just caring about the Web Accessibility cause, and saying: “somebody should do something about it”. But then, navigating and testing (hands-on) while emulating users with disabilities is hard, and a process. A slow one.
Of course, hiring people with disabilities to do the testing and provide feedback is the best solution. Unfortunately, this is still not a widespread practice, and most of the time the hiring is not in the hands of designers and developers. However, emulation is a good strategy, as long as certain details are kept in mind.
So, empathy equals user emulation, and user emulation is hard. This “hard part” proves to be hard most of the time because it’s usually not “on the radar” of the “usual” development practices. Hence, most of the time not “visible” to team members. The hard part is usually harder when neglected from the very beginning of a project. At that point, it just seems like a very time-consuming inconvenience to retrofit everything. The latter is unfortunately the most common approach to Accessibility. Retrofitting takes 10 times longer compared to what it could have taken if done “from scratch”.
Nevertheless, the “from scratch” approach generates a high level of anxiety among developers and project stakeholders. It gets perceived as spending too much time on proper code semantics instead of actual features and functionality. Then we hear: “time is money”, “the client wanted it yesterday”, “not enough resources”, etc. Yes, all of the above is also part of a hard truth: “no features, no sales”.
Annotations to the rescue
Tackling implementation with Web Accessibility Annotations makes it less time-consuming. Even when developers aren’t “that much into emulation” or are still ramping up their accessibility knowledge. With annotations, they will know what needs to be there, code-wise, and why.
That said, a lot of time could be saved if Web Accessibility is considered at early stages, such as when the UX Design and Visual Design stages are starting. If Web Accessibility Annotations are clearly communicated to developers from the beginning, they will be able to implement them faster; best if the annotations are crafted at the very beginning. The most common categories for these annotations, which help speed up projects and make the hard part less painful, are:
Accessible Keyboard Patterns being standard.
Labels and Descriptions provide context for all users.
Headings make sense if nested.
Landmarks and Regions are well organized.
Status Messages are meaningful and punctual.
Screen Reader narration UX.
There are great design tools to communicate Web Accessibility Annotations. They can also all be communicated to developers with a spreadsheet or a text document. Yes, it’s time-consuming to populate a spreadsheet with all those details, but it’s worth the effort in the long run.
By keeping in mind some details under each category, we will be on our way to crafting them correctly.
Accessible Keyboard Patterns
Keep in mind:
Keyboard navigation is the foundation for most Assistive Technologies. All UI components should work with the keyboard, or provide an equivalent experience when using the keyboard. Source order matters: avoid breaking it, or assuming it can later be rearranged with CSS Grid or absolute positioning (see the sketch after this list).
There is already a standard for Common Keyboard Interactions for most UI components, no need to reinvent the wheel. Have a cheat-sheet handy with all those included in your UIs, to avoid regressions.
Not everything on the UI is about “tabbing”. There are other keys on the keyboard as well. Some are exclusive to Screen Reader use.
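To make the source order point more concrete, here is a minimal sketch for a hypothetical product card (the file name, class name, and texts are made up):
<!-- DOM order matches the reading and tabbing order: image, name, price, action -->
<article class="product-card">
  <img src="scarf.jpg" alt="Yellow wool scarf">
  <h3>Yellow wool scarf</h3>
  <p>$25.00</p>
  <button type="button">Add to cart</button>
</article>
<!-- Avoid writing the button first in the DOM and pushing it visually to the end
     with CSS Grid order or absolute positioning: keyboard and Screen Reader users
     follow the DOM order, not the visual arrangement. -->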
Labels and Descriptions
Keep in mind:
All form elements and buttons should have labels. Text in a label should make sense when heard (when vocalized by a Screen Reader).
A placeholder is not a label, use both, even if repetitive, or be creative to avoid repetition.
Don't be afraid of or annoyed by repetition. What may seem straightforward for some users may not provide enough context for other users. Be inclusive: design, develop and deliver for all users.
Screen Reader users will get context from the labels and descriptions they hear. Trust your eyes, but also trust your ears. If you don't hear it, it's not there.
Don’t overuse aria-label, it’s invisible to sighted users, and it will override text in native labels for Screen Reader users.
Provide descriptions through aria-describedby attributes to give Screen Reader users more context if the UI is complex. Use visually hidden text to provide context or instructions if you can't use aria-describedby due to coding practices (both are sketched below).
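As a rough sketch of how these points can come together (the field, the IDs, and the .visually-hidden class name are hypothetical; the class is assumed to use the usual CSS clipping technique):
<label for="promo-code">Promo code</label>
<input type="text" id="promo-code" name="promo-code"
       placeholder="Promo code" aria-describedby="promo-hint">
<p id="promo-hint">Eight characters, found on the back of your gift card.</p>
<!-- If aria-describedby can't be used, visually hidden text can carry the extra context -->
<button type="button">
  Apply <span class="visually-hidden">promo code to your order</span>
</button>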
Headings
Keep in mind:
Headings are for arranging or structuring content on a page. Not all big font instances have to be headings. Choose wisely.
Most Screen Readers allow “headings navigation” by just pressing one keyboard key, H key for JAWS/NVDA. Make sure the heading structure will make sense if users were to skip content by headings. Will they land on meaningful content?
Most Screen Readers can produce a list with all the headings on a page. This allows users to browse the list and jump to a specific heading on the page. Write down that list and structure it. Does it make sense if you read it out loud?
Some Screen Readers like NVDA will “nest” headings when listed. Organize your headings in a way that makes sense, similar to an expandable Table of Contents (see the sketch below).
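A minimal sketch of a heading outline that still makes sense when listed on its own (the page content is hypothetical):
<h1>Women's Accessories</h1>
  <h2>Scarves</h2>
    <h3>Wool scarves</h3>
    <h3>Silk scarves</h3>
  <h2>Customer reviews</h2>
<!-- The indentation is only to show the nesting; Screen Readers build the same
     expandable Table of Contents from the heading levels alone. -->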
Landmarks and Regions
Keep in mind:
As with headings, Screen Reader users can list, navigate, and skip landmarks and regions by pressing a single keyboard key, the R or D key for JAWS/NVDA. This is a capability only Screen Readers have. Landmarks and regions must be well organized and make sense when listed.
Screen Readers will announce when users enter and exit a region or landmark. If you have more than one region or landmark of the same kind, then label them for differentiation. E.g.:
<nav aria-label="top navigation">
<nav aria-label="breadcrumbs">
<nav aria-label="footer navigation">
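Zooming out, a hedged sketch of how labelled landmarks could sit in a full page skeleton (the labels are illustrative):
<header>…</header>
<nav aria-label="top navigation">…</nav>
<main>
  <nav aria-label="breadcrumbs">…</nav>
  <section aria-label="search results">…</section>
</main>
<footer>
  <nav aria-label="footer navigation">…</nav>
</footer>
<!-- Each nav gets a different label, so the landmark list reads unambiguously. -->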
Status messages
Keep in mind:
If you remove something from the UI, don't assume the user will “see” it's no longer there. That's not enough for Screen Reader users; they also need to “hear” it's no longer there. Use aria-live or role="status" to notify users of any changes that may affect them.
Same as the above, but for elements that weren't there before. Notify users when new elements are introduced in the UI.
Don't overuse role="alert"; it is a very aggressive and intrusive kind of notification. Use it only when needed; for everything else use aria-live (sketched below).
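A small sketch of the difference, assuming a shopping cart UI (the ID and wording are made up). Note that the live region container is already in the DOM at page load; this is also the kind of “preloaded parent tag” some Screen Readers need before they reliably vocalize dynamic updates:
<!-- Present, but empty, on page load -->
<div id="cart-status" role="status"></div>
<!-- Later, inject only the text, e.g. "Yellow scarf removed from your cart" -->

<!-- Reserve role="alert" for messages users must hear immediately -->
<div role="alert">Your session is about to expire.</div>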
Screen Reader narration
Keep in mind:
The Screen Reader user experience should make sense throughout the whole user journey. Think of it as a person telling you over the phone what they are doing at every step. The “listening experience” should match the user interactions. Write it down and make sure developers receive it, along with specs, wireframes, and visuals.
Silence is bad. Knowing beforehand what the UI should “sound like” will help spot and fix silent spots.
Use proper semantics. A link is not a button, even if you make it look like one. Screen readers will vocalize the true nature of the element and users will act according to what they hear. Screen Readers can also list Buttons and Links separately, so they should make sense when listed apart.
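One hedged illustration of the link versus button point (the class name is made up):
<!-- Navigates to another page: announced as "link", listed under Links -->
<a class="looks-like-a-button" href="/checkout">Proceed to checkout</a>

<!-- Performs an action on the current page: announced as "button", listed under Buttons -->
<button type="button" class="looks-like-a-button">Add to cart</button>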
When browsing the web it’s clear that keyboard interactions, code sequence, labelling and status messages are probably the most neglected issues. Then, in my experience, thinking we can later rearrange the order of the components with CSS Grid, and enforce focus management with JavaScript, sets the ground for very unpleasant surprises.
Other Pain Points
Tables can also be problematic, especially if responsive. They are a complex topic worth an article, or several, of their own. Noted in my to-do, for a later time. For now, keep in mind Screen Readers can also navigate by Tables on desktop, T key for JAWS/NVDA, and have their own keyboard interactions. Therefore, semantics matter quite a great deal here. Oh! … and they sound different on mobile.
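Without getting deep into the topic, here is a minimal sketch of the semantics Screen Readers rely on when moving through a table (the data is made up):
<table>
  <caption>Yellow scarves in stock</caption>
  <thead>
    <tr>
      <th scope="col">Material</th>
      <th scope="col">Price</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Wool</th>
      <td>$25.00</td>
    </tr>
    <tr>
      <th scope="row">Silk</th>
      <td>$40.00</td>
    </tr>
  </tbody>
</table>
<!-- The caption and the th/scope pairs are what allow a Screen Reader to announce
     the column and row headers while the user moves cell by cell. -->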
Form validation is also complex, although it could be very simple. It’s a controversial topic mainly due to the overuse of “dynamic validation”. It looks great visually, but it’s not very inclusive for Screen Reader users. They will hear an error as soon as they start typing. Creative solutions are needed to produce an inclusive UX. This topic too deserves its own article(s).
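As one possible hedged approach among several, validation can run on submit instead of on every keystroke, with the error text tied to the field (the IDs and wording are made up):
<!-- After a failed submit (not while the user is typing), the field could look like this -->
<label for="email">Email address</label>
<input type="email" id="email" name="email"
       aria-invalid="true" aria-describedby="email-error">
<p id="email-error" role="status">Please enter an email address, for example name@example.com</p>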
To wrap up and clarify: let's not be fooled into thinking Web Accessibility Annotations are all that's needed to get Web Accessibility implementations done. They are just part of the specifications. They are a reference. They help. But they help even more when team members are familiar with Screen Reader use and interactions, and with how components and elements sound. Then the annotations will be accurate, UI/UX and code-wise. Like an audible mockup.
I remember my first reaction when I started to work on a Web Accessibility project and did Screen Reader testing. I turned on the Screen Reader for the first time and wanted to shut it down immediately. I got confused between what my eyes were reading and what my ears were hearing. Concentrating on both at the same time, the visual and the audio, was hard. It got worse when the Screen Reader was narrating while I was trying to speak, screen sharing, and presenting something.
A word for newcomers
It's been a while since then, and I'm well adapted now. But I keep finding that same reaction whenever I coach newcomers to Web Accessibility, when explaining how to optimize, code, and then do Screen Reader testing to confirm vocalization. That perceivable embarrassment when they can't turn off the Screen Reader. So, I'm writing this article to have a link to quickly share with newcomers: what you feel is normal, and you will adapt the more you use it, but don't turn it off. It's like the first time using Windows coming from Mac, or vice versa. Or switching from your native language to a new one. It feels like your brain stretches.
So, how do I turn it off, they ask? The answer is, “let it speak, that’s the whole point of Screen Reader testing”. Listening to the spoken representation of the User Interface, and then verifying if it’s equivalent to the visual experience. Emulating the listening experience, as Screen Reader users would experience it. Empathizing to emulate users is hard, it’s a process, and adaptation takes time, but it’s worth it. Then, patience and practice.
First Aid Kit
If you are seriously overwhelmed by the Screen Reader narration, to the point where you just can't focus on what you are doing, then you could use the following tricks, but don't turn off the Screen Reader:
Press the Control key to pause it, works for all Screen Readers.
Turn down the volume and enable the Speech Viewer for NVDA; it comes free and can be enabled under “Tools” in the NVDA menu.
There is also JAWS Inspect for JAWS, which unfortunately has a cost.
If you are testing in VoiceOver for Mac, then you may already have seen the text output, so just turn down the volume.
Update on 11-22-2021, more aid tricks:
You could turn off the Speech Mode for NVDA. There are three Speech Mode settings so you can press Ins + S three times to cycle through them all.
On Windows 10, you could turn down the volume for just JAWS/NVDA with the “Sound and Volume Mixer” by right-clicking on the speaker icon on the system tray. Then select “Open Volume Mixer” to open it. Here you can change the volume for individual applications.
Uneasiness towards Web Accessibility
Sometimes I have also noticed that talking about Accessibility is uncomfortable for newcomers, especially the user emulation part; it triggers different emotions ranging from fear to disdain. From “It's scary to think about this, I don't want to attract this” to “I can't emulate because this will not happen to me, I don't see myself there”. Well, I guess that depends on the different authors we all read and our different points of view. Yet, Accessibility needs to be implemented, regardless. So, how do we break through this discomfort too?
Well, we have to be aware that, by avoiding or postponing Web Accessibility, either by omission or deliberately, we are discriminating against users with disabilities by preventing access to content or transactions. I know it’s a strong word, but that’s exactly what it is. In some jurisdictions, lawsuits would follow. Think of the users who can only use software with Screen Readers. They can’t turn it off.
Overcoming uneasiness
Last year I read author Brené Brown. In her book “Dare to Lead” she says discrimination comes as a result of shame, and she proposes empathy and self-care as the antidote for shame. Understanding what triggers shame reduces its power, she says. I couldn't agree more. It really sounds easy once placed in perspective. However, empathy is a process (it needs context, unlike sympathy), and self-care requires enough awareness for introspection, as well as strong willpower.
So, it's not easy to get to the antidote, although the effort is worth it. Sympathy, on the other hand, is easy, because it doesn't really require the context of “walking a mile in someone else's shoes”, where we first need to learn how to tie those shoes and find the walking cadence. Sympathy is just about caring and understanding.
Having said that, while working on the larger and worthwhile goal of removing shame (without being shameless), it should be sufficient to be bold enough to have sympathy (caring): understanding that we may be depriving users with disabilities of opportunities most users take for granted, and that this is illegal in some places. It's important to remember as well that disabilities can happen to anybody at any time in their lives. Accidents do happen to those born without disabilities, regardless of favourite authors or philosophical alignments. Also, most people already know someone who was born with a disability.
Working on Web Accessibility projects gives the implementors a new perspective. It prepares them if a disability ever catches up with them, or puts them in a better position to help someone they know who was born with a disability. We implement Accessibility to empower users. Implementors are also users.
Empowering users with disabilities
While overcoming the uneasiness of the new surroundings of Screen Reader testing, I suggest we always remember famous people with disabilities, like Helen Keller or Louis Braille. Back in their day, they were able to create systems to help, empower and inspire other people with disabilities. Shouldn't it be easier now, with the help of technology and the information we have at hand these days?
Brilliant minds like that of Stephen Hawking reached their highest point and popularity because they were empowered by the technology of their time, and by the people behind that technology.
As professionals involved in projects where Web Accessibility is implemented, we must focus on empowering users with the solutions we create in our daily work. Focus on making software everybody can use, just as intended for the physical world. If we plan for wheelchair ramps and automatic doors, why not make sure keyboard navigation is provided as the first layer of Web Accessibility?
In his book “Outliers”, Malcolm Gladwell presents a series of interesting facts about successful people. We want to be successful as Web Accessibility implementors, don’t we? Gladwell writes about how they became “outliers”, and how the same formula can be applied to anybody, consisting mainly of 3 elements:
10,000-Hour Rule: Practicing a skill for 10,000 hrs.
Generational opportunity: being there while key events are happening.
Help from others: People that will propel those skills into action.
So, this is the best time in a generation to start empowering people with disabilities by means of technology, it’s a key event. Their perspective, and their unique circumstances, will provide humanity with contributions that wouldn’t be possible otherwise. We, as implementors, must use our talents and skills to propel theirs. And yes, it takes time.
The success criteria for web accessibility under WCAG 2.0 (Web Content Accessibility Guidelines) can be overwhelming if seen only from the textbook perspective. In my experience, developers and managers have almost unanimously uncomfortable reactions to Web Accessibility projects. Such as: “do we have to read all that?”, “it's just so boring”, “just run the validator” … crickets and tumbleweeds, to sum it up.
As a developer and learner of Web Accessibility, I realized that once you move past the “excruciating pain” of reading the criteria, they can be approached from different angles. From the User Experience angle, for instance, and also by layers. Slowly, but really, by just testing. Something developers do all the time. Now, that usually gets me into the following Q&A:
But, what do we need to “test” exactly?
What we unconsciously do most of the time: the user journey.
How do we do that?
By consciously empathizing with the disabilities our users may have, in other words, simulate or emulate.
Isn’t it enough to test my site with a validator?
It's not. Validators are of great assistance when analyzing large websites for some criteria, but only around 20% of them. However, I have seen validators pass sites with flying colours, only to realize they are, in fact, not accessible when tested hands-on.
Concrete analogy, please?
Believing that complying with a few criteria makes your website “accessible” to a certain level would be like thinking your office building is “accessible” because it has a very big button to open the door automatically at the main entrance … but only after passing through a gravel parking lot and climbing a staircase. So how does the user make it to the door in the first place?
Storefront Accessibility
Many sectors are subject to Web Accessibility compliance these days; for some, like government, it is mandatory. Online retail has become the target of a growing number of lawsuits, and users with disabilities have clear expectations, so the need for Storefront Accessibility is on the rise. Sometimes it makes the difference between a “lead-to-cash” approach and a “lead-to-lawsuit” outcome.
Premises
Let’s illustrate the process with an example, but first establish some premises:
The main goal of a storefront is to allow users to check out products.
Elements on the interface should help the user complete checkout, including users with disabilities.
Developers and Quality Assurance Testers often test by pretending to be a user and verifying they can successfully get from point A to B or Z on the interface. The same folks should also test that users with disabilities are able to reach the same points.
Successfully getting from point A to B or Z in a test, while emulating a Persona with disabilities, will result in a number of accessibility criteria being met.
About Empathy
Before getting into what a Persona is, let’s clarify empathy. It sounds like something easy to do, we’ve heard it many times: “put ourselves in somebody else’s shoes”, how hard could it be? Well, turns out people have different levels of empathy and are usually influenced by their own life experiences… so it’s not that easy.
Sympathy is NOT Empathy
Also, different perceptions of what empathy means complicate things, I’ve heard many spontaneous definitions: it’s about having a big heart, being all sentimental about something, being a philanthropist, or reading emotions “between the lines”… yes, I guess it could very well be all that depending on the context of the conversation, but still, all the aforementioned are closer to sympathy than empathy. Now, when talking about Web Accessibility, empathy is luckily a very pragmatic issue. For example, an online storefront is either accessible or isn’t. In other words, “half accessible” doesn’t do if a critical journey is not successful. As it wouldn’t do for a brick-and-mortar store. Either shoppers with disabilities can or cannot go in and shop.
Regardless of how one “feels” about the fact, our good intentions, thoughts and emotions poured into thinking about the users who are unable to use the storefront won’t make it more accessible. That’s sympathy. It’s nice. It’s motivation. It helps. It raises awareness. But it doesn’t make websites accessible.
My usual story to “induce” people into empathy goes as follows. Let's set a relaxed atmosphere first and pretend we are in a restaurant or a bar, surrounded by family, friends or occasional bystanders, which is usually the context where I tell this story:
Pretend you (a user without disabilities) are shopping online for a simple product, such as let’s say… a yellow scarf for women, and suddenly your computer mouse stops working —its battery runs out— and you have to finish the checkout using only the keyboard.
Resistance to Empathy
Of course, there is always resistance to this empathy exercise, and it's normal; we are placed out of our comfort zone. So I hear things like: “what if I'm using a laptop that has a built-in trackpad” … let's agree that's not the point. It may sound like nonsense having to empathize over something as banal as having a mouse, but it's relevant to the full process.
I have to point out a generational factor in these casual audiences of my stories. Users who owned a computer in the mid-80s may recall how to move around the screen with a keyboard; back then, ball mice were only starting to be introduced, and it wasn't until the late 90s that optical mice became commercially available. But users who were born in those decades may be caught off guard envisioning a keyboard-only scenario. As I said, not so easy to empathize with.
Anyway, once the example is assimilated and people start throwing theories and remembering or figuring out how to move on the screen using only the keyboard, then we will have an idea, a plan, a roadmap —a journey— on how to finish the checkout.
Ok, once this keyboard journey is assimilated, let's add complexity. Let's pretend you have purchased this item many times (get yellow scarves for everyone) and now you know the process by heart; you know it so well you can do it without a mouse, so well you can do it “with your eyes closed” … really? … let's try that: keyboard navigation + eyes closed.
Personas with Disabilities
Before closing your eyes, let's define what a Persona is. In the context of User Experience (UX) Design, “personas” are archetypical users whose goals and characteristics represent the needs and limitations of a larger group of users. Yes, putting faces to users helps with the empathy process. By googling “personas for accessibility” we can find many readily available personas to use, but then yes, more reading … crickets and tumbleweeds again.
That said, to follow up on the casual oversimplified storytelling at the bar, and since this article is starting to get long (missing the point of not having to read that much), let's just oversimplify in a short paragraph a couple of personas that can be easily emulated by users without disabilities. Enter Jane & John, coming from a previous article; they have helped me before when setting accessibility foundation perspectives and expectations.
Jane: right-handed user, who recently broke her right hand, has to use keyboard-only navigation, relies on her sight to know where she’s at on the screen, and for getting to the next element in a User Interface.
John: blind user, uses keyboard-only navigation, relies on a screen reader vocalization to know where he’s at, and to get to the next element in a User Interface.
The tale of a yellow scarf
There are many ways a user can navigate a storefront, but some paths are more common than others; those where the storefront actually makes money are the critical user journeys. Keyboard-only navigation is no exception, so let's agree on an average, super simple journey based on the product from the Storefront Accessibility story: a yellow scarf for women.
Critical User Journey
Test use case: Jane is looking to buy a yellow scarf for herself. John, on the other hand, wants to buy the same scarf for his girlfriend. To further clarify, Jane & John are not related, nor do they know each other.
For both, Jane & John, the critical journey to buy a yellow scarf for women will look something like this:
Tab to Search field > type “Yellow scarf women” > Tab to the first product (pretending it is the yellow scarf) > Start the checkout process.
Emulate or Simulate?
We're getting there, thanks for reading this far. The difference between simulation and emulation is subtle. Since both of their definitions include the word “imitation”, let's stick to that concept. For the following example, we are going to be using emulation software, so let's call it emulation, but know that I definitely mean imitation.
Now, imitation is key to empathizing, and we need to imitate as closely as possible how Jane & John will navigate to that yellow scarf in a storefront. We know both will be using keyboard-only navigation, so that leaves us with the following keys to complete the checkout:
Keyboard Interactions
TAB
SHIFT+TAB
SPACE
ENTER
Arrow keys.
Additionally, John will need a Screen Reader: stand-alone software with many features and a significant learning curve. Luckily, we can emulate the basics of this technology by installing the ChromeVox extension in the Chrome Browser. It has to be said that no extension to date has as many features as full-fledged Screen Reader software such as JAWS, NVDA or VoiceOver.
Ok, this is where I dare the bar’s audience and you, the reader —just kidding— I kindly invite you to choose any, or several, online storefronts out there on the web. But most importantly, one where you can find a yellow scarf for women and try to go as far as you can through the checkout process by emulating Jane & John. That said, you don’t “actually” have to buy the scarf every time, not if you don’t want to.
Emulating Jane
Follow the critical user journey using only the keyboard keys Jane would use.
Emulating John
Activate the ChromeVox extension and —finally!— do close your eyes, and then follow the critical user journey by listening to what the vocalization tells you, and by using only the keyboard keys John would use.
Common issues
As you compare your emulation experience across different storefronts, you may run into issues such as: being unable to tab to the next logical element on the page (landing on random elements instead), being unable to tab forward from a given point (a keyboard trap), or being unable to hear a meaningful description of the product, like its colour or its price. Sadly, this is very common and an indicator of the lack of Web Accessibility on a particular website.
Criterion compliance
Clarification of terms: criteria is plural; criterion is singular.
Now, if you have managed to successfully finish the checkout process by emulating Jane & John, then that storefront has complied with the following WCAG 2.0 criteria.
(Table showing the 16 WCAG 2.0 criteria covered by the successful emulation of Jane and John: the first 6 criteria apply to both Jane and John, the remaining 10 only to John.)
Notice how most of the criteria are Level A with only a couple of Level AA. In my opinion, those above are the most important criteria to comply with “for starters”. They set the foundation for building a richer user experience on top of them. For example, adding more Level AA criteria, or plugging other assistive technologies like braille displays. Let’s consider them the hard part, or independent from “simpler” criteria, like those regarding the use of colour, text size, video and audio.
Well, this is where the story at the bar ends, whether or not you were able to finish the emulation as Jane & John. Even if you just tried a little, it deserves a toast. As you may have realized by now, by closing your eyes you began to “see” where the problems are with Storefront Accessibility. So, cheers to that, with whatever you are drinking!
In Conclusion
We can validate Storefront Accessibility by “walking the walk” with empathy.
When planning, developing and testing an online storefront, any effort in the process to remove bugs related to the 16 criteria stated above will facilitate the checkout for users with disabilities.
A web developer should make sure an accessible critical user journey actually works before handing it over to the QA tester.
The best validation tool is empathy.
Sometimes by closing our eyes, we can “see” where the problem is.
Shoppers with disabilities should be able to purchase online as they do in brick-and-mortar stores.
Nobody is exempt from disabilities through a lifetime, best case scenario: we all age.
Web Accessibility should be seen as a business opportunity in any lead-to-cash strategy before it becomes a lead-to-lawsuit scenario.