This is a non-comprehensive list of recommendations on how to improve the Screen Reader User Experience (UX). This list has worked for me in the past to get consistent results between planning and deployment. This article is mostly addressed to developers, visual and interaction designers, and QA testers. Not all recommendations apply to every role, but all outcomes are helpful for everyone. Other project stakeholders, like Project Managers, can benefit from knowing these recommendations. This is not a formal checklist but it can be the foundation for one.
There is a layer under every User Interface (UI) that “speaks” to the users. And I mean literally speaks to them. If it doesn’t, then something is wrong with the UI. Most of the time, individuals unfamiliar with Web Accessibility don’t realize this.
Just as we usually test by visually browsing and interacting with the mouse, performing tests with keyboard-only navigation and Screen Readers is becoming a requirement: we need to hear what components and their interactions sound like. Needless to say: silence is bad.
There is no replacement for hands-on, manual, in-person Screen Reader testing. As of the writing of this article, there isn't any automated test for Screen Readers. Testing the Screen Reader UX from conception to implementation is one way to improve it.
General Improvements to Screen Reader UX
Define user journeys for every UI or page. Write them down as numbered lists. E.g., "User Tabs to component A, then uses the down-arrow key to reach element A1, …".
Video record screen reader sessions based on defined user journeys. Make sure to enable “computer audio recording”, otherwise it will result in a video without audio. Video recordings are a great reference when explaining to a developer how to reproduce screen reader bugs.
Test in as many different screen readers as possible. Some are free, some are pricy, some are strict, and some are very forgiving.
Test accessible gestures for mobile devices, but also small devices with external keyboards. E.g., Android Tablets with external mini keyboards.
Beware of cross-screen-reader bugs and aim for cross-screen-reader solutions. E.g., VoiceOver for Mac will vocalize just about everything, including dynamic content, whereas JAWS/NVDA for Windows may need a preloaded parent tag (a live region already in the DOM, as sketched below) for similar results. That is to say, vocalization varies from one Screen Reader to the next, depending on implementation, platform, and device.
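As a minimal sketch of that "preloaded parent tag" idea (the element name and message are hypothetical): an empty live region exists in the DOM from page load, and text injected into it later is announced.
<div aria-live="polite" id="cart-status"></div>
<!-- The region is empty on page load; when a script later sets its text,
     e.g. "Item added to cart", Screen Readers can announce the change.
     Injecting the live region itself at the same time as the message is
     where cross-screen-reader differences tend to appear. -->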
Be patient while testing ARIA attributes. Testing vocalization will take much longer (even at expert levels) than the usual "Mouse + Browser" testing. This is normal; adjust expectations and time estimates accordingly.
Make sure to test for consistency and double-check screen reader vocalization across different environments. E.g., localhost, development, staging, live.
Video record experimental approaches to improve Screen Reader UX that didn’t make it to the final implementation. Save for future recycling.
Video record the approved “final” outcome to avoid and spot regressions.
Improving Screen Reader UX by Role
As a Designer, explore examples and references using a screen reader (desktop and mobile). Listen to what components and elements sound like. Video record the screen reader exploration sessions to show to developers and other stakeholders. Point out cross-screen-reader vocalization differences as soon as spotted; they tend to be forgotten.
As a Developer, test with a Screen Reader while developing. If designers provided a video recorded session of the expectations, try to aim for a similar result (desktop and mobile).
As a QA tester, add video recordings of screen reader bug detections to QA tickets (desktop and mobile). This will help developers reproduce and debug issues faster than reading text and interpreting written reproduction steps.
As a stakeholder, be aware of cross-screen-reader differences and limitations.
What to Avoid
Avoid using Chrome extensions to replace or emulate Screen Reader software. The only focus of emulation should be to emulate the user, not the software. As of the writing of this article, I haven't come across an extension that emulates some ARIA scenarios, such as aria-expanded or aria-live, which already have cross-screen-reader issues even when using real software. So avoid emulators.
Avoid turning off the screen reader when it starts vocalizing. Instead, listen to it speaking, and try to associate the speech with the UI component and the interaction. I have to admit this happened to me at the beginning. Then I realized THIS is exactly what I should be testing: vocalization. Last year I wrote an article about overcoming the uneasiness of screen reader testing. It’s a helpful guide for slowly adapting to that new environment.
Avoid browsing the UI with the mouse while using a screen reader. This prevents hearing some additional instructions the Screen Reader might be vocalizing by default. E.g., specific keyboard key combination to interact with the component. Don’t skip components or elements with the mouse, always use keyboard-only navigation.
Avoid including ARIA attributes without actually testing them with a Screen Reader and listening to how they sound.
Avoid listening to music while testing with screen readers. Some developers and designers like to hear music or watch videos while working. Honestly, so do I, but then suddenly hearing the computer speaking might be distracting. This could make a slow process even slower.
The hard part of Web Accessibility is empathy. By that, I mean the part where a development team has to emulate users with disabilities to provide a solution. As opposed to sympathy, which is just caring about the Web Accessibility cause, and saying: “somebody should do something about it”. But then, navigating and testing (hands-on) while emulating users with disabilities is hard, and a process. A slow one.
Of course, hiring people with disabilities to do the testing and provide feedback is the best solution. Unfortunately, this is still not a widespread practice, and most of the time it's not in the hands of designers and developers to do the hiring. However, emulation is a good strategy, as long as we keep certain details in mind.
So, empathy equals user emulation, and user emulation is hard. This "hard part" proves to be hard most of the time because it's usually not "on the radar" of the "usual" development practices, and hence not "visible" to team members. The hard part is usually harder when neglected from the very beginning of a project. At that point, it just seems like a very time-consuming inconvenience to retrofit everything. That is unfortunately the most common approach to Accessibility. Retrofitting takes 10 times longer compared to what it could have taken if done "from scratch".
Nevertheless, the "from scratch" approach generates a high level of anxiety among developers and stakeholders of the project. It gets perceived as if they are spending too much time on proper code semantics, not on actual features and functionality. Then we hear: "time is money", "the client wants it for yesterday", "not enough resources", etc. Yes, all of the above is also part of a hard truth: "no features, no sales".
Annotations to the rescue
Tackling implementation with Web Accessibility Annotations makes it less time-consuming, even when developers aren't "that much into emulation" or are still ramping up their accessibility knowledge. With annotations, they will know what needs to be there, code-wise, and why.
That said, a lot of time could be saved if Web Accessibility is considered at early stages, like when the UX Design and Visual Design stages are starting. If Web Accessibility Annotations are clearly communicated to developers from the beginning, they will be able to implement them faster. Of course, it's best if they are crafted at the very beginning. The most common categories for these annotations, which will help speed up projects and make the hard part less painful, are:
Accessible Keyboard Patterns being standard.
Labels and Descriptions provide context for all users.
Headings make sense if nested.
Landmarks and Regions are well organized.
Status Messages are meaningful and punctual.
Screen Reader narration UX.
There are great design tools to communicate Web Accessibility Annotations. They can also all be communicated to developers with a spreadsheet or a text document. Yes, it’s time-consuming to populate a spreadsheet with all those details, but it’s worth the effort in the long run.
By keeping in mind some details under each category, we will be on our way to crafting them correctly.
Accessible Keyboard Patterns
Keep in mind:
Keyboard navigation is the foundation for most Assistive Technologies. All UI components should work with the keyboard or provide a similar experience when using the keyboard. Source order matters; avoid breaking it or assuming it can later be rearranged using CSS grid or absolute positioning (see the sketch after this list).
There is already a standard for Common Keyboard Interactions for most UI components, no need to reinvent the wheel. Have a cheat-sheet handy with all those included in your UIs, to avoid regressions.
Not everything on the UI is about “tabbing”. There are other keys on the keyboard as well. Some are exclusive to Screen Reader use.
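As a minimal sketch of the source-order point above (hypothetical buttons): the Tab sequence follows the DOM, not the visual arrangement.
<!-- Keyboard focus moves in this DOM order: Search, Add to cart, Checkout -->
<button type="button">Search</button>
<button type="button">Add to cart</button>
<button type="button">Checkout</button>
<!-- Repositioning these visually with CSS grid, order, or absolute positioning
     does not change the Tab order that keyboard and Screen Reader users follow. -->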
Labels and Descriptions
Keep in mind:
All form elements and buttons should have labels. The text in a label should make sense when heard (when vocalized by a Screen Reader).
A placeholder is not a label, use both, even if repetitive, or be creative to avoid repetition.
Don't be afraid of or annoyed by repetition. What may seem straightforward for some users may not provide enough context for other users. Be inclusive: design, develop and deliver for all users.
Screen Reader users will get context from the labels and descriptions they hear. Trust your eyes, but also trust your ears. If you don't hear it, it's not there.
Don’t overuse aria-label, it’s invisible to sighted users, and it will override text in native labels for Screen Reader users.
Provide descriptions through aria-describedby attributes to give Screen Reader users more context if the UI is complex. Use visually hidden text styles to provide context or instructions if you can't use aria-describedby due to coding-practice constraints. A minimal sketch follows this list.
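Here is a minimal sketch of a label, a placeholder, and an aria-describedby description working together (the field name and hint text are hypothetical):
<label for="phone">Phone number</label>
<input type="tel" id="phone" placeholder="555 123 4567" aria-describedby="phone-hint">
<span id="phone-hint">Include the area code. We only call about delivery issues.</span>
<!-- Screen Readers typically announce the label, the field type, and then the description,
     so the user gets full context even though the placeholder alone would not provide it. -->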
Headings
Keep in mind:
Headings are for arranging or structuring content on a page. Not all big font instances have to be headings. Choose wisely.
Most Screen Readers allow “headings navigation” by just pressing one keyboard key, H key for JAWS/NVDA. Make sure the heading structure will make sense if users were to skip content by headings. Will they land on meaningful content?
Most Screen Readers can produce a list with all the headings on a page. This allows users to browse the list and jump to a specific heading on the page. Write down that list and structure it. Does it make sense if you read it out loud?
Some Screen Readers, like NVDA, will "nest" headings when listed. Organize your headings so they make sense, similar to an expandable Table of Contents, as in the sketch below.
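A minimal sketch of a heading structure that lists well (the content is hypothetical):
<h1>Yellow scarves</h1>
<h2>Wool scarves</h2>
<h3>Care instructions</h3>
<h2>Silk scarves</h2>
<!-- Listed by a Screen Reader, this nests like a small Table of Contents:
     Yellow scarves, then Wool scarves with Care instructions under it, then Silk scarves. -->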
Landmarks and Regions
Keep in mind:
As with headings, Screen Reader users can list, navigate, and skip landmarks and regions by pressing only one keyboard key, the R or D key for JAWS/NVDA. This is a capability only Screen Readers have. Landmarks and regions must be well organized and make sense when listed.
Screen Readers will announce when users enter and exit a region or landmark. If you have more than one region or landmark of the same kind, label them for differentiation. E.g.:
<nav aria-label="top navigation">
<nav aria-label="breadcrumbs">
<nav aria-label="footer navigation">
Status messages
Keep in mind:
If you remove something from the UI, don't assume the user will "see" it's no longer there. That's not enough for Screen Reader users; they also need to "hear" it's no longer there. Use aria-live or role="status" to notify users of any changes that may affect them.
Same as the above, but for elements that weren't there before. Notify users when new elements are introduced in the UI.
Don't overuse role="alert"; it is a very aggressive and intrusive kind of notification. Use it only when needed; for everything else use aria-live. The sketch below contrasts the three.
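A minimal sketch of the three notification flavours (the messages are hypothetical; in practice the text is injected or updated after page load so the change gets announced):
<!-- Polite announcement for routine updates; role="status" implies aria-live="polite" -->
<div role="status">3 items in your cart</div>
<!-- aria-live="polite" works the same way on a generic container -->
<div aria-live="polite">Filter applied: yellow</div>
<!-- role="alert" interrupts the user immediately; reserve it for critical messages -->
<div role="alert">Your session is about to expire</div>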
Screen Reader narration
Keep in mind:
The Screen Reader user experience should make sense throughout the whole user journey. Think of it as a person telling you over the phone what they are doing at every step. The "listening experience" should match the user interactions. Write it down and make sure developers receive it along with specs, wires, and visuals.
Silence is bad. Knowing beforehand how the UI should “sound like” will help spotting and fixing silent spots.
Use proper semantics. A link is not a button, even if you make it look like one. Screen Readers will vocalize the true nature of the element, and users will act according to what they hear. Screen Readers can also list Buttons and Links separately, so they should make sense when listed apart (see the example below).
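A minimal example of the semantic difference (the URL and labels are hypothetical):
<!-- A link navigates somewhere; Screen Readers announce it as "link" -->
<a href="/checkout">Go to checkout</a>
<!-- A button triggers an action on the current page; Screen Readers announce it as "button" -->
<button type="button">Add to cart</button>
<!-- Styling a link to look like a button does not change what users hear. -->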
When browsing the web it’s clear that keyboard interactions, code sequence, labelling and status messages are probably the most neglected issues. Then, in my experience, thinking we can later rearrange the order of the components with CSS Grid, and enforce focus management with JavaScript, sets the ground for very unpleasant surprises.
Other Pain Points
Tables can also be problematic, especially if responsive. They are a complex topic worth an article, or several, of their own; noted in my to-do for a later time. For now, keep in mind Screen Readers can also navigate by tables on desktop, the T key for JAWS/NVDA, and tables have their own keyboard interactions. Therefore, semantics matter a great deal here (a minimal sketch follows). Oh! … and they sound different on mobile.
Form validation is also complex, although it could be very simple. It's a controversial topic mainly due to the overuse of "dynamic validation". It looks great visually, but it's not very inclusive for Screen Reader users, who will hear an error as soon as they start typing. Creative solutions are needed to produce an inclusive UX; a minimal sketch of one alternative follows the table example below. This topic too deserves its own article(s).
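For reference, a minimal sketch of the table semantics Screen Readers rely on when navigating by table (the content is hypothetical):
<table>
  <caption>Order summary</caption>
  <tr>
    <th scope="col">Product</th>
    <th scope="col">Price</th>
  </tr>
  <tr>
    <th scope="row">Yellow scarf</th>
    <td>$25.00</td>
  </tr>
</table>
<!-- With a caption and header cells in place, Screen Readers can announce
     which column and row a cell belongs to while the user navigates. -->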
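One possible, less intrusive approach, sketched in its post-submit state (the field name and message are hypothetical): the error is tied to the field and announced politely after submission, instead of on every keystroke.
<label for="email">Email address</label>
<input type="email" id="email" aria-invalid="true" aria-describedby="email-error">
<p id="email-error" role="status">Please enter a valid email address, e.g. name@example.com.</p>
<!-- The error text is injected after the user submits the form, not while typing.
     aria-describedby lets the Screen Reader read the error when the field regains focus. -->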
To wrap up, and to clarify: let's not be fooled into thinking Web Accessibility Annotations are all that's needed to get Web Accessibility implementations done. They are just part of the specifications. They are a reference. They help. But they help even more when team members are familiar with Screen Reader use and interactions, with what components and elements sound like. Then annotations will be accurate, UI/UX and code-wise. Like an audible mockup.
I remember my first reaction when I started to work on a Web Accessibility project and did Screen Reader testing. I turned on the Screen Reader for the first time, then wanted to shut it down immediately. I got confused between what my eyes were reading and what my ears were hearing. Concentrating on both areas at the same time, the visual and the audio, was hard. It got worse when the Screen Reader was narrating while I was trying to speak, screen sharing, and presenting something.
A word for newcomers
It's been a while since then, and I'm well adapted now. But I keep finding that same reaction whenever I coach newcomers to Web Accessibility, when explaining how to optimize, code, and then do Screen Reader testing to confirm vocalization. That perceivable embarrassment, when they can't turn off the Screen Reader. So, I'm writing this article to quickly share a link with newcomers. What you feel is normal, and you will adapt the more you use it, but don't turn it off. It's like the first time using Windows coming from Mac, or vice-versa. Or switching from a native language to a new language. It feels like your brain stretches.
So, how do I turn it off, they ask? The answer is, “let it speak, that’s the whole point of Screen Reader testing”. Listening to the spoken representation of the User Interface, and then verifying if it’s equivalent to the visual experience. Emulating the listening experience, as Screen Reader users would experience it. Empathizing to emulate users is hard, it’s a process, and adaptation takes time, but it’s worth it. Then, patience and practice.
First Aid Kit
If you are seriously overwhelmed by the Screen Reader narration to the point where you just can't focus on what you are doing, then you could use the following tricks, but don't turn off the Screen Reader:
Press the Control key to pause it, works for all Screen Readers.
Turn down the volume and use Speech Viewer for NVDA; it comes free and can be enabled under "Tools" in the NVDA menu.
There is also JAWS Inspect for JAWS, which unfortunately has a cost.
If you are testing in VoiceOver for Mac, then you may already have seen the text output, so just turn down the volume.
Update on 11-22-2021, more aid tricks:
You could turn off the Speech Mode for NVDA. There are three Speech Mode settings so you can press Ins + S three times to cycle through them all.
On Windows 10, you could turn down the volume for just JAWS/NVDA with the “Sound and Volume Mixer” by right-clicking on the speaker icon on the system tray. Then select “Open Volume Mixer” to open it. Here you can change the volume for individual applications.
Uneasiness towards Web Accessibility
Sometimes I have also noticed that talking about Accessibility is uncomfortable for newcomers. Especially the user emulation part; it triggers different emotions ranging from fear to disdain, from "It's scary to think about this, I don't want to attract this" to "I can't emulate because this will not happen to me, I don't see myself there". Well, on that, I guess it depends on the different authors we all read and our different points of view. Yet, Accessibility needs to be implemented, regardless. So, how do we break through this discomfort too?
Well, we have to be aware that, by avoiding or postponing Web Accessibility, either by omission or deliberately, we are discriminating against users with disabilities by preventing access to content or transactions. I know it’s a strong word, but that’s exactly what it is. In some jurisdictions, lawsuits would follow. Think of the users who can only use software with Screen Readers. They can’t turn it off.
Overcoming uneasiness
Last year I read author Brené Brown. In her book “Dare to Lead” she says discrimination comes as a result of shame. She proposes as an antidote for shame: Empathy and self-care. Understanding what triggers shame reduces its power, she says. I couldn’t agree more. It really sounds easy once placed in perspective. However, empathy is a process (it needs context, unlike sympathy). Self-care requires enough awareness for introspection, as well as a strong willpower.
So, it's not easy to get to the antidote, although the effort is worth it. Sympathy, on the other hand, is easy, because it doesn't really require the context of "walking a mile in someone else's shoes", where we first need to learn how to tie those shoes and find the walking cadence. Sympathy is just about caring and understanding.
Having said that, while working on the larger and worthwhile goal of removing shame, without being shameless, it should be sufficient to be bold enough to have sympathy (caring) and to understand that we may be depriving users with disabilities of opportunities most users take for granted, which is illegal in some places. It's important to remember as well that disabilities are something that can happen to anybody at any time in their lives. Accidents do happen to those born without disabilities, regardless of favourite authors or philosophical alignments. Also, in most cases, people already know someone who was born with a disability.
Working on Web Accessibility projects gives the implementors a new perspective. It prepares them in case a disability ever catches up with them, or puts them in a better position to help someone they know who was born with a disability. We implement Accessibility to empower users. Implementors are also users.
Empowering users with disabilities
While overcoming the uneasiness of the new surroundings of Screen Reader testing, I suggest we always remember famous people with disabilities, like Helen Keller or Louis Braille. Back in their days, they were able to create systems to help, empower and inspire other people with disabilities. Shouldn't it be easier now with the help of technology and the information we have at hand these days?
Brilliant minds like that of Stephen Hawking reached their highest point and popularity because they were empowered by the technology of their time, and by the people behind that technology.
As professionals involved in projects where Web Accessibility is implemented, we must focus on empowering users with the solutions we create in our daily work. Focus on making software everybody can use, just as intended for the physical world: if we plan for wheelchair ramps and automatic doors, why not make sure keyboard navigation is provided as the first layer of Web Accessibility?
In his book “Outliers”, Malcolm Gladwell presents a series of interesting facts about successful people. We want to be successful as Web Accessibility implementors, don’t we? Gladwell writes about how they became “outliers”, and how the same formula can be applied to anybody, consisting mainly of 3 elements:
10,000-Hour Rule: Practicing a skill for 10,000 hrs.
Generational opportunity: being there while key events are happening.
Help from others: People that will propel those skills into action.
So, this is the best time in a generation to start empowering people with disabilities by means of technology, it’s a key event. Their perspective, and their unique circumstances, will provide humanity with contributions that wouldn’t be possible otherwise. We, as implementors, must use our talents and skills to propel theirs. And yes, it takes time.
I often get questions about where to start implementing web accessibility. If there are easy parts to WCAG, and if it’s OK to start from there.
Yes, there are easy parts to WCAG. The easiest parts are those you really don't have to do: non-applicability, meaning all those success criteria that do not apply to your product or website. According to WCAG's definition of conformance, if there is no content to which a success criterion applies, the success criterion is satisfied. Consequently, there is great value in identifying those first. Every project is different, but for instance, if you don't have video or audio in your project, you can skip those parts completely.
In my experience, I would say anything in the realm of automated testing is easy. Easy to test, easy to fix. Then things start to get difficult when we need to manually test the User Interface, mainly because there is a steep learning curve as to how exactly it needs to be tested and by whom. Therefore, testing with a Screen Reader is the most difficult part, in my opinion.
The easy parts
Now getting back to the easy parts. The following are easy: non-text content, sensory characteristics, use of color, contrast, page titled, link purpose, multiple ways, consistent navigation, consistent identification, language of page, and maybe error identification too. Mostly all the things on the web designer's end. It almost seems logical to start there. Although, in my experience with transactional websites, these are huge time wasters. I'm not saying don't implement them; they must be implemented, just not first. They can be changed or adjusted at any time. However, implementing them first usually gives a false sense of accomplishment.
By the way, I’m intentionally removing the success criteria numbers in the previous paragraph. Removing the numbers makes articles more digestible, some readers have told me, especially for beginners. It seems numbers give the article a technical flavor. So, I will do this when possible. This is not a technical article, it’s more of a cautionary tale.
The disadvantage
One disadvantage of starting with the easy parts is thinking your website is accessible because an automated test says so. Tools such as Chrome Lighthouse may indicate your project is 98% accessible. This could be misleading for your project stakeholders (e.g., client, upper management, project manager, developers, designers, etc.), especially if they are not familiar with web accessibility and falsely think they have reached a near-perfect accessible website. As it turns out, automated tests only cover 18 to 20% of accessibility issues. And they do so page by page.
It should be clear: there's no replacement for manual testing. So, if you make it to 100% in Lighthouse, that's really only 20% of the accessibility for the tested page, not for the full website. I have come across this misconception many times. It's demoralizing for stakeholders and developers to find out there is still 80% of the work left to be done, once they thought it was ALL done.
It's also tempting for some stakeholders to claim a site is "half-accessible" by taking success criteria fulfilled through non-applicability and adding them to the previous automated test percentage. Inflating numbers is a terrible idea. This doesn't make a site more accessible.
So, where to start?
I have heard many times the claim that the first 3 steps to Accessibility are “easy”. Those steps being: Keyboard Navigation, Form Labels and Automated tests. I agree with this approach, but I kindly disagree with Keyboard Navigation being easy. The more interactions you have on one single page, the more complex the Focus Management becomes. However, I definitely agree Keyboard Navigation is the best place to start, even if it’s not easy.
Form labels, in a perfect world, are easy only if using native HTML elements. Yet that is a rarity in real life these days. The more developers use <div>s and <span>s for everything while coding forms, the more dependent on ARIA roles and attributes those forms will be (a minimal comparison follows this paragraph). Once crossing the imaginary ARIA point-of-no-return, the next thing we will run into is Accessibility Patterns. Defining or reconstructing these patterns takes a lot of focus management work. Not an easy task past that point, but a must-have by then.
Anyway, in my experience, implementing the "hard parts" first sets a foundation to support what you will add on top: the previously mentioned easy parts. Accessibility is not a one-time project, it's a process. There could be regressions, and we want to know where they are and how to avoid them.
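As a minimal comparison, with hypothetical field names, of what that dependency looks like:
<!-- Native HTML: label, role, value and keyboard support come for free -->
<label for="qty">Quantity</label>
<input type="number" id="qty" min="1" value="1">
<!-- div-based version: ARIA role, states, a label and scripted keyboard
     handling (not shown) are all needed just to catch up -->
<div role="spinbutton" tabindex="0" aria-label="Quantity"
     aria-valuemin="1" aria-valuenow="1">1</div>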
Hard parts are time consuming
Yes, it takes time to do the hard parts first, and time is money. In real life, stakeholders get desperate with deadlines. Developers want to start features and functionality without mockups. Managers want to add look and feel quickly to show progress. Those efforts need direction. Developers can certainly start features, as long as they make sure those features are operable with the keyboard, not just the mouse. Progress on design is perfect as long as it doesn't break keyboard navigation.
If you are working on a website that the client will customize later, then fine-tuning contrast, font size and use of color is something you can put at the bottom of the to-do list. Better to concentrate on building on top of the keyboard navigation experience and visible (and audible) labels. But here's the thing: this is no different if you are working on a website that is "final", meaning the client won't customize it further. In that case, keyboard navigation will give designers better arguments as to where to place components, other than just "design trends" or fashionable practices. It's harder to roll back a component's position and break keyboard navigation than it is to roll back color, contrast, or size changes.
Having said all that, the hard parts don't have to be "that hard". UX designers can help from the very beginning by identifying things like intended Keyboard Patterns, Visual and Audible Sequences, and Minimal or Specific Screen Reader Narration.
The choice is yours. As I said before, every project is different. Just don't let the easy parts consume the time you could be spending on the difficult parts. Accessibility needs to be implemented gradually, like onion layers, following Inclusive User Journeys, instead of just literally complying with success criteria without understanding their meaning.
This Accessibility Advocate Checklist is a compass. A non-thorough and non-technical Accessibility checklist for the person taking the role of Accessibility Advocate in a development team. All steps in this checklist lead to knowledge discovery for the Advocate as well as for team members.
Identify
Critical user journeys.
Test cases and QA scenarios for Accessibility.
Verify
All User Interface components have Keyboard Accessible Patterns.
User journeys are doable using only the keyboard.
Keyboard traps do not interrupt or prevent user journeys from being successful.
User journeys vocalize properly using a Screen Reader with keyboard navigation.
Whether the project is for government employees in a regulated environment.
A Web Accessibility Advocate, in the context of Software Development, is the team member in charge of raising awareness and increasing the Web Accessibility literacy of the team. Also ensuring that knowledge stays in the team.
Among the responsibilities of a Web Accessibility Advocate are the following:
Raise awareness of potential consequences of non-compliance.
Point team members to an Accessibility Resource Center based on their roles.
Run the Accessibility Advocate Checklist for all team’s projects.
Provide accessibility questions for interviewing job candidates.
There are similar "advocacy" roles in software development teams, such as the "UX Advocate" role, following along the lines of the User Advocacy principle. These advocacy roles come in handy especially when the ratio of designers in development teams is very low, or when designers are temporary contractors who leave once they deliver wires and visuals. Hence the need for a developer, who also has "that" UX awareness, to take on the role.
All that said, the same principle applies to Web Accessibility. Consequently, any team member can take on the role of Accessibility Advocate. However, coding practices covered in the topic are better communicated from one developer to another. It’s best if a developer takes on that role. Still, it could be anybody.
When taking on the Web Accessibility Advocate role, the huge body of knowledge should not be intimidating. There is no need to understand it deeply when starting, not even the empathy part. There are tools and tricks to learn how to empathize. However, the reasons for taking the role should be clear and convincing. Not just because it’s fancy-sounding or a trend.
Why it’s important
First of all, it's important to understand why Web Accessibility is important in software, besides the fact that it's the right thing to do. The web has the potential to bring an unprecedented level of independence to people with disabilities. For them, Web Accessibility is freedom. As IT professionals, we have the great opportunity to enable this independence for people with disabilities and improve their lives.
People with disabilities may encounter barriers outside their homes and can't always easily leave them. But they can perform tasks from their computers, like working, shopping, and banking, and even access entertainment or play games online. That is only the case, though, if the websites or the software are built with accessibility in mind.
There's a different approach to Accessibility that also renders results: not worrying about being "nice" for a moment, and just focusing on being "smart", then embracing Web Accessibility to show off, protect, or build a brand. Most likely, competitors are also doing their part in complying with Web Accessibility.
Plus, a clear benefit of building inclusive software is that it often results in a larger user base. Yes, more clients, due to the positive impact on more people’s lives.
Now, if the previous "nice" or "smart" reasons aren't good enough to convince, then the "litigation avoidance" reason should prevail. Lawsuits for non-compliance are very common these days. Litigation is the most expensive way to implement accessibility.
All of the above goes to show that being a Web Accessibility Advocate is a way of being proactive, by anticipating and preventing lawsuits for software companies and their clients.
How to start
Every process is different, niches vary, teams have their own personalities. In my experience, I have found that answering the following questions helps in shaping up the Advocate role. Also, they help to outline an Accessibility Roadmap.
Who is the person that will take the role of Accessibility Advocate?
Is our product static and likely to have flawless and durable 100% Accessible status?
If our product evolves, when are regressions likely to happen?
How do we “painlessly” dive into Accessibility Requirements to anticipate and reduce legal problems?
How do we recognize the pitfalls to avoid?
Accessibility is a practice, not a one-time project. So how do we know where to start and where to end the cycle?
How do we know when we have arrived, or how much is left?
Do we start from scratch or retrofit?
Where will we build our Accessibility Resource Center? (Virtual space where we can pour in statistics, tips and tricks, references and painless approaches concerning Accessibility for our product).
As I wrote in a previous article, Web Accessibility Jobs are on the rise. This inevitably points to a job interview. It would be of great help if people involved in the recruitment process added some Web Accessibility questions to the interview. Especially for non-expert positions. That way, word will spread that this is something the hiring company values in candidates.
In my experience candidates will at least read about it later if they miss answering during the interview. Depending on the position, sometimes the hiring happens, sometimes it doesn’t, but the “Accessibility Seed” will be planted in that person’s mind. Using that “seed” analogy, let’s apply that to the questions.
5 Seed Questions for Interviews
Now, what are good questions to ask in order to assess the candidate’s knowledge about Web Accessibility? Short and sweet, if we are NOT hiring an “Accessibility Expert”. In my experience, these open-ended questions will have answers like “no”, or detailed descriptions. Also, they will leave candidates thinking, even if they don’t know the answer:
Do you know what Skip Links are? … if yes, elaborate.
Do you know what a Screen Reader is? … if yes, elaborate.
Do you know what the Accessibility Tree is? … if yes, elaborate.
Are you familiar with ARIA? … if yes, elaborate.
Are you familiar with WCAG? … if yes, elaborate.
Explanation: If candidates don't know what Skip Links are, they are most likely not familiar with Keyboard Navigation, which in turn is a must for Screen Readers, which vocalize the Accessibility Tree, which is where we use ARIA to fix issues. Asking about WCAG just confirms whether the candidate has ever heard of Web Accessibility at all, in case the four previous answers were negative.
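For reference, a minimal Skip Link sketch (the class name and target id are hypothetical):
<a class="skip-link" href="#main-content">Skip to main content</a>
<!-- ... header and repeated navigation ... -->
<main id="main-content">
  <!-- Tabbing reveals the link first; activating it jumps keyboard and
       Screen Reader users past the repeated navigation to the main content. -->
</main>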
All 5 are easy to remember by candidates: Skip Links, Screen Reader, Accessibility Tree, ARIA, WCAG. All 5 are great entry points for research and personal improvement. They all dive deeper into knowledge areas shared by Designers, Developers, Product Owners, Managers and Testers. Even without them knowing.
It really depends on the position we’re hiring for. Questions will not be the same when hiring for a Web Accessibility Engineer/Specialist, or when hiring for a Frontend Developer, a Tester or an Intern.
Interview Questions for Experts
A quick search on Google for "web accessibility interview questions" will result in some of the links listed at the end, which are very good and detailed. In my opinion, though, not all the questions apply to just any candidate; they all seem to be addressed to hiring either Web Accessibility Testers or Web Accessibility Engineers or Specialists.
Yes, it would be great if all candidates knew all those answers. But honestly, unless we are hiring experts, most people don't know them, and they are not to blame. Web Accessibility is not new; browsers, software, and laws that support it have been around for decades. What is new is that the number of lawsuits became alarming around 2017 in the US. These days, it is a must-have for websites in North America, with other regions following along. So, realistically, this is still new for way too many people.
Now, for non-expert positions: failing to answer any of the previous 5 seed questions should NOT prevent the hiring of candidates who shine in other areas. Web Accessibility can be learned without having to know the theory first. Also, there's no better way to start understanding it than to power up a Screen Reader, start browsing, and stop at silent elements.
Nonetheless, this is interesting and precious information to have handy. So, I’ll leave them here and come back to this list as I find more resources on the Job Interview topic:
When I finished writing the article about the Web Accessibility Engineer, I started to keep track of the links I posted there. Consequently, I noticed the information linked in there seemed to evolve over time. Hence, my curiosity led me to dig deeper into what other Web Accessibility Jobs could be found, or trending, for a Web Accessibility Professional. There are quite a bunch, and certainly on the rise.
This is great, it makes me very happy for newcomers and the new generation of professionals who join the Web Accessibility domain. For the most part, this work domain used to be a very lonely occupation before the year 2015. Or at least it was for me. That is to say, very few people could play along when implementing Web Accessibility.
To share the joy, and for those who are interested, I will list and update my findings of Web Accessibility Jobs in this post. In addition, if you stumbled upon this post by chance and have a search string you want to share, please do so in the comments. Do not post individual job positions; only search strings.
In short: a Web Accessibility Engineer is the person ensuring web technology is released accessible by removing the barriers that may prevent equal access for users of assistive technologies. This definition is almost the same as that of a Web Accessibility Specialist. So, how did this come to be?
In the beginning
Since around 2017, a vicious practice known as Predatory Litigation has become a trend in the Web Accessibility domain in the United States. It consists of sending letters to companies, threatening legal action, or even filing suits, alleging their websites are not in compliance with the ADA (Americans with Disabilities Act), even though that law does not literally address Web Accessibility. Only places of public accommodation are mandated to be accessible by ADA Title III.
But once in court, it becomes a matter of law interpretation, which is scary for companies. Rulings over time have been consistently clear, saying websites are an extension of a business, and businesses are places of public accommodation; therefore websites should be accessible. This phenomenon of "serial lawsuits" caused federal accessibility lawsuits to exceed the 2,000 mark by 2018, and the number keeps growing.
Most of the time companies don't know the accessibility level of their websites, so they settle, although the recommendation is not to settle, especially if a website is already somewhat accessible and is being improved where it falls short.
Now, thinking that just because software is not public-facing things will be “OK” with accessibility is not accurate. Employees can sue too.
In the US, if a company has 15 or more employees, then employees can sue based on the ADA, especially if the company recruits candidates with disabilities. If the users happen to be Federal Government employees, then it's even stricter: employees can sue based on the ADA and also under Section 508. The government knows this, so they will avoid buying non-compliant software.
Litigation is the most expensive way to implement accessibility. Therefore, companies have started to recruit more candidates with Web Accessibility knowledge. Since 2018 a new role has started to emerge in job listings: Web Accessibility Engineer (WAG).
Before the WAG, there was the Web Accessibility Specialist (WAS), also very present in job listings. Not to be confused with the IAAP WAS certification, which shares the name; the purpose of that credential is to validate the knowledge and skills required by the job role of the same name.
Now, looking at the job descriptions of both WAG and WAS, I see many similarities. The main difference is that the WAG has more hands-on coding skills requirements, JavaScript (JS) in most cases. The salary also reflects this, going from 10 to 40% more for the WAG according to internet pay scale sources. This varies by country and region. Here's a quick glance:
In some of those salary links, it’s revealing to see under the WAG description that there are not enough reports to show salary distribution. I assume it’s still a very new job role as of 2019. Also, I see that WAG overlaps with the Accessibility Developer role. An interesting detail.
Usual Requirements
Regardless of years of experience, and setting aside the soft skills any professional should have, as well as the candidate's good-to-haves (those change from one company to another), most job postings on the internet include the following as common qualifications for a Web Accessibility Engineer:
Experience with Accessibility Evaluation Tools.
Perform accessibility audits of web pages, desktop applications, and mobile apps.
Experience with Assistive Technologies across multiple platforms. Including Screen Readers, Magnification, and Read-aloud tools (e.g., VoiceOver, NVDA, JAWS, ZoomText, Dragon Naturally Speaking).
Experience with Mobile Screen Readers using gestures.
Write reports describing accessibility issues and recommendations for resolving them.
Knowledge of Document Accessibility Remediation (e.g., Word, PDF, PowerPoint, Excel).
Provide Quality Assurance feedback.
Rapid prototyping to evaluate potential technical solutions.
Solid knowledge of WCAG Success Criteria.
Solid knowledge of WCAG Techniques and Failures.
Knowledge of Accessibility compliance for ADA, Section508, CVAA-255, ACAA, EN 301 549 and Regulatory Environments.
Understanding of the difference between Legal Compliance vs Accessibility Beyond Compliance.
Answer accessibility-related questions, through helpdesk tickets and calls.
Train new hires and clients in accessibility standards.
Proficient with HTML, CSS and WAI-ARIA.
Proficient with JavaScript or an Object-oriented language (seems a must-have for most Web Accessibility Engineer job postings, as opposed to Web Accessibility Specialist postings).
At least one Accessibility Certification: CPACC, WAS, 508TT, CPWA (mostly optional).
Experience working directly with disability communities (mostly optional).
Knowledge of User Requirements for people with disabilities.
Understanding of the difference between Accessible Design vs Inclusive Design.
The Bottom Line
In conclusion, a Web Accessibility Engineer is a Web Accessibility Specialist who is also a Web Developer.
I remember the same thing happened with Frontend Developers. There were many role names for people who only knew HTML, CSS and jQuery. They were called UI Developers, Integrators, etc. They complemented PHP and Java workflows. That was before JS became a full-fledged language. Then JS was a requirement, and the Frontend Developer role was coined. Then backend knowledge was added to the mix, and the Full-stack Developer role was born.
In my opinion, the same thing will happen with Web Accessibility jobs. They will start to change and adapt to the workflow. Job roles and requirements will evolve over time.
Success criteria for web accessibility under WCAG 2.0 (Web Content Accessibility Guidelines) can be overwhelming if seen only from the textbook perspective. In my experience, developers and managers have almost unanimous discomfort reactions to Web Accessibility projects, such as: "do we have to read all that?", "it's just so boring", "just run the validator" … crickets and tumbleweeds, to sum it up.
As a developer and learner of Web Accessibility, I realized that once you move past the "excruciating pain" of reading the criteria, they can be approached from different angles: from the User Experience angle, for instance, and also by layers. Slowly, but really, by just testing, something developers do all the time. Now, that usually gets me into the following Q&A:
But, what do we need to “test” exactly?
What we unconsciously do most of the time: the user journey.
How do we do that?
By consciously empathizing with the disabilities our users may have, in other words, simulate or emulate.
Isn’t it enough to test my site with a validator?
It's not. Validators are of great assistance when analyzing large websites for some criteria, but only about 20% of them. Moreover, I have seen validators pass sites with flying colours, only to find they are, in fact, not accessible hands-on.
Concrete analogy, please?
Believing that complying with a few criteria makes your website "accessible" to a certain level would be like thinking your office building is "accessible" because it has a very big button to open the door automatically at the main entrance … but only after passing through a gravel parking lot and climbing a staircase. So how does the user make it to the door for starters?
Storefront Accessibility
Many sectors are subject to Web Accessibility compliance these days; for some, like government, it is mandatory. Online retail has become the target of a growing number of lawsuits, and users with disabilities have clear expectations, so the need for Storefront Accessibility is on the rise. Sometimes it makes the difference between a "lead-to-cash" approach and a "lead-to-lawsuit" outcome.
Premises
Let’s illustrate the process with an example, but first establish some premises:
The main goal of a storefront is to allow users to check out products.
Elements on the interface should help users complete checkout, including users with disabilities.
Developers and Quality Assurance Testers often test by pretending a user can successfully get from point A to B or Z on the interface. The same folks should also test that users with disabilities are able to get to the same points.
Successfully getting from point A to B or Z in a test, while emulating a Persona with disabilities, will result in a number of satisfied accessibility criteria.
About Empathy
Before getting into what a Persona is, let’s clarify empathy. It sounds like something easy to do, we’ve heard it many times: “put ourselves in somebody else’s shoes”, how hard could it be? Well, turns out people have different levels of empathy and are usually influenced by their own life experiences… so it’s not that easy.
Sympathy is NOT Empathy
Also, different perceptions of what empathy means complicate things, I’ve heard many spontaneous definitions: it’s about having a big heart, being all sentimental about something, being a philanthropist, or reading emotions “between the lines”… yes, I guess it could very well be all that depending on the context of the conversation, but still, all the aforementioned are closer to sympathy than empathy. Now, when talking about Web Accessibility, empathy is luckily a very pragmatic issue. For example, an online storefront is either accessible or isn’t. In other words, “half accessible” doesn’t do if a critical journey is not successful. As it wouldn’t do for a brick-and-mortar store. Either shoppers with disabilities can or cannot go in and shop.
Regardless of how one “feels” about the fact, our good intentions, thoughts and emotions poured into thinking about the users who are unable to use the storefront won’t make it more accessible. That’s sympathy. It’s nice. It’s motivation. It helps. It raises awareness. But it doesn’t make websites accessible.
My usual story to "induce" people into empathy is as follows. Let's set a relaxed atmosphere first and pretend we are in a restaurant or a bar, surrounded by family, friends or occasional bystanders, which is usually the context where I tell this story:
Pretend you (a user without disabilities) are shopping online for a simple product, such as let’s say… a yellow scarf for women, and suddenly your computer mouse stops working —its battery runs out— and you have to finish the checkout using only the keyboard.
Resistance to Empathy
Of course, there is always resistance to this empathy exercise, and it's normal; we are placed out of our comfort zone. So I hear things like: "what if I'm using a laptop that has a built-in mousepad" … let's agree that's not the point. It may sound like nonsense having to empathize over something as banal as a broken mouse, but it's relevant to the full process.
I have to point out a generational factor in these casual audiences of my stories. Users who owned a computer in the mid-80s may recall how to move around the screen with a keyboard; back then, ball mice were only starting to be introduced, and it wasn't until the late 90s that optical mice became commercially available. Users born in those decades, however, may be caught off guard envisioning a keyboard-only scenario. As I said, not so easy to empathize with.
Anyway, once the example is assimilated and people start throwing theories and remembering or figuring out how to move on the screen using only the keyboard, then we will have an idea, a plan, a roadmap —a journey— on how to finish the checkout.
Ok, once this keyboard journey is assimilated, let's add complexity. Let's pretend you have purchased this item many times (yellow scarves for everyone) and now you know the process by heart, so well you can do it without a mouse, so well you can do it "with your eyes closed" … really? … let's try that: keyboard navigation + eyes closed.
Personas with Disabilities
Before closing your eyes, let's define what a Persona is. In the context of User Experience (UX) Design, "personas" are archetypical users whose goals and characteristics represent the needs and limitations of a larger group of users. Yes, putting faces to users helps with the empathy process. By googling "personas for accessibility" we can find many readily available personas to use, but then yes, more reading … crickets and tumbleweeds again.
That said, to follow up on the casual oversimplified storytelling at the bar, and since this article is starting to get long (missing the point on not having to read that much), let's just oversimplify, in a short paragraph, a couple of personas that can be easily emulated by users without disabilities. Enter Jane & John, coming from a previous article; they have helped me before when setting accessibility foundation perspectives and expectations.
Jane: right-handed user, who recently broke her right hand, has to use keyboard-only navigation, relies on her sight to know where she’s at on the screen, and for getting to the next element in a User Interface.
John: blind user, uses keyboard-only navigation, relies on a screen reader vocalization to know where he’s at, and to get to the next element in a User Interface.
The tale of a yellow scarf
There are many ways a user can navigate a storefront, but there are always paths that are more common, those where the storefront actually makes money are the critical user journeys. Keyboard-only navigation is no exception to this, so let’s agree on an average super simple journey based on the product from a story on Storefront Accessibility, a yellow scarf for women.
Critical User Journey
Test use case: Jane is looking to buy a yellow scarf for herself. John, on the other hand, wants to buy the same scarf for his girlfriend. To clarify, Jane & John are not related, nor do they know each other.
For both, Jane & John, the critical journey to buy a yellow scarf for women will look something like this:
Tab to Search field > type "Yellow scarf women" > Tab to the first product (pretending it is the yellow scarf) > Start the checkout process.
Emulate or Simulate?
We’re getting there, thanks for reading this far. The difference between simulation and emulation is subtle. Since they both include the word “imitation” let’s stick to that concept. For the following example, we are going to be using emulation software, so let’s call it emulation, but know that I definitely mean imitation.
Now, imitation is key to empathizing, and we need to do it as closely as possible to how Jane & John would navigate to that yellow scarf in a storefront. We know both will be using keyboard-only navigation, so that leaves us with the following keys to complete the checkout.
Keyboard Interactions
TAB
SHIFT+TAB
SPACE
ENTER
Arrow keys.
Additionally, John will need a Screen Reader, stand-alone software with many features and an important learning curve. Luckily, we can emulate the basics of this technology by installing the ChromeVox extension in the Chrome Browser. It has to be said that no extension to this date has as many features as full-fledged Screen Reader software, such as JAWS, NVDA or VoiceOver.
Ok, this is where I dare the bar's audience and you, the reader (just kidding, I kindly invite you) to choose any, or several, online storefronts out there on the web. Most importantly, pick one where you can find a yellow scarf for women, and try to go as far as you can through the checkout process by emulating Jane & John. That said, you don't actually have to buy the scarf every time, not if you don't want to.
Emulating Jane
Follow the critical user journey using only the keyboard keys Jane would use.
Emulating John
Activate the ChromeVox extension and —finally!— do close your eyes, and then follow the critical user journey by listening to what the vocalization tells you, and by using only the keyboard keys John would use.
Common issues
As you compare your emulation experience across different storefronts, you may run into issues such as: being unable to tab to the next logical element on the page and landing on random elements instead, being unable to tab forward from a certain point (a keyboard trap), or being unable to hear a meaningful description of the product, like its colour or its price. Sadly, this is very common and an indicator of the lack of Web Accessibility on a particular website.
Criterion compliance
Clarification of terms: criteria is plural; criterion is singular.
Now, if you have managed to successfully finish the checkout process by emulating Jane & John, then that storefront has complied with the following WCAG 2.0 criteria.
(Table showing 16 criteria for the successful emulation of Jane and John. The first 6 criteria apply to both Jane and John; the remaining 10 only to John.)
Notice how most of the criteria are Level A with only a couple of Level AA. In my opinion, those above are the most important criteria to comply with “for starters”. They set the foundation for building a richer user experience on top of them. For example, adding more Level AA criteria, or plugging other assistive technologies like braille displays. Let’s consider them the hard part, or independent from “simpler” criteria, like those regarding the use of colour, text size, video and audio.
Well, this is where the story at the bar ends. Whether or not you were able to finish the emulation as Jane & John, even if you just tried a little, it deserves a toast. As you may have realized by now, by closing your eyes you began to "see" where the problems are with Storefront Accessibility. So, cheers to that! With whatever you are drinking.
In Conclusion
We can validate Storefront Accessibility by “walking the walk” with empathy.
When planning, developing and testing an online storefront, any effort in the process to remove bugs related to the 16 criteria stated above will facilitate the checkout for users with disabilities.
A web developer should make sure an accessible critical user journey actually works before handing it over to the QA tester.
The best validation tool is empathy.
Sometimes by closing our eyes, we can “see” where the problem is.
Shoppers with disabilities should be able to purchase online as they do in brick-and-mortar stores.
Nobody is exempt from disabilities throughout a lifetime; in the best-case scenario, we all age.
Web Accessibility should be seen as a business opportunity in any lead-to-cash strategy before it becomes a lead-to-lawsuit scenario.