The Hard Part of Web Accessibility

A collage of images showing on top a UX team designing user interactions. Bottom images showing intended users by output type. A man with headphones for audible output. A refreshable braille display for tactile output. An enlarged eye is seen through a magnifying glass, for visually magnified output.

The hard part of Web Accessibility is empathy. By that, I mean the part where a development team has to emulate users with disabilities to provide a solution. As opposed to sympathy, which is just caring about the Web Accessibility cause and saying: “somebody should do something about it”. But navigating and testing hands-on while emulating users with disabilities is hard, and it’s a process. A slow one.

Of course, hiring people with disabilities to do the testing and provide feedback is the best solution. Unfortunately, this is still not a widespread practice, and most of the time the hiring decision is not in the hands of designers and developers. However, emulation is a good strategy, as long as we keep certain details in mind.

So, empathy equals user emulation, and user emulation is hard. This “hard part” proves hard mostly because it’s not “on the radar” of the usual development practices, and hence not “visible” to team members. It’s harder still when neglected from the very beginning of a project. At that point, retrofitting everything seems like a very time-consuming inconvenience. Unfortunately, retrofitting is the most common approach to Accessibility, and it takes 10 times longer than it would have if done “from scratch”.

Nevertheless, the “from scratch” approach generates a high level of anxiety among developers and stakeholders of the project. It gets perceived as spending too much time on proper code semantics instead of actual features and functionality. Then we hear: “time is money”, “the client wanted it yesterday”, “not enough resources”, etc. Yes, all of the above is also part of a hard truth: “no features, no sales”.

Annotations to the rescue

Tackling implementation with Web Accessibility Annotations makes it less time-consuming. Even when developers aren’t “that much into emulation” or are still ramping up their accessibility knowledge. With annotations, they will know what needs to be there, code-wise, and why.

That said, a lot of time can be saved if Web Accessibility is considered at the early stages, while the UX Design and Visual Design phases are starting. If Web Accessibility Annotations are clearly communicated to developers from the beginning, they will be able to implement them faster. Of course, it’s best if they are crafted at the very beginning. The most common categories of annotations that help speed up projects and make the hard part less painful are:

  1. Accessible Keyboard Patterns being standard.
  2. Labels and Descriptions provide context for all users.
  3. Headings make sense if nested.
  4. Landmarks and Regions are well organized.
  5. Status Messages are meaningful and punctual.
  6. Screen Reader narration UX.

There are great design tools to communicate Web Accessibility Annotations. They can also all be communicated to developers with a spreadsheet or a text document. Yes, it’s time-consuming to populate a spreadsheet with all those details, but it’s worth the effort in the long run.

By keeping in mind some details under each category, we will be on our way to crafting them correctly.

Accessible Keyboard Patterns

Keep in mind:

  • Keyboard navigation is the foundation for most Assistive Technologies. All UI components should work with the keyboard or provide a similar experience when used with the keyboard. Source order matters: avoid breaking it or assuming it can later be rearranged with CSS Grid or absolute positioning.
  • There is already a standard for Common Keyboard Interactions for most UI components, no need to reinvent the wheel. Have a cheat-sheet handy with all those included in your UIs, to avoid regressions.
  • Not everything on the UI is about “tabbing”. There are other keys on the keyboard as well. Some are exclusive to Screen Reader use. 
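As a minimal sketch of why native semantics matter for keyboard support, compare a native button with a generic div pressed into service (the markup below is illustrative, not from any specific project):

```html
<!-- Focusable and operable with Enter/Space out of the box: -->
<button type="button">Save draft</button>

<!-- A div needs a role, a tabindex, and scripted key handling
     just to approximate what the native button already does: -->
<div role="button" tabindex="0">Save draft</div>
```

The native element follows the standard keyboard pattern for free; the div version still needs JavaScript to react to Enter and Space.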

Labels and Descriptions

Keep in mind:

  • All form elements and buttons should have labels. The text in a label should make sense when audible (when vocalized by a Screen Reader).
  • A placeholder is not a label. Use both, even if repetitive, or be creative to avoid repetition.
  • Don’t be afraid of or annoyed by repetition. What may seem straightforward for some users may not provide enough context for others. Be inclusive: design, develop, and deliver for all users.
  • Screen Reader users will get context from the labels and descriptions they hear. Trust your eyes, but also trust your ears. If you don’t hear it, it’s not there.
  • Don’t overuse aria-label: it’s invisible to sighted users, and it will override the text of native labels for Screen Reader users.
  • Provide descriptions through aria-describedby attributes to give Screen Reader users more context if the UI is complex. Use visually hidden text styles to provide context or instructions if you can’t use aria-describedby for code-practice reasons.
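A hypothetical sketch of how labels and descriptions can work together (the ids, class name, and texts below are made up for illustration):

```html
<!-- A visible label plus a description announced by Screen Readers: -->
<label for="pwd">Password</label>
<input type="password" id="pwd" aria-describedby="pwd-hint">
<p id="pwd-hint">Must be at least 12 characters long.</p>

<!-- Visually hidden text as a fallback when aria-describedby
     is not an option (the class name is an assumption): -->
<button type="button">
  Delete <span class="visually-hidden">draft “Quarterly report”</span>
</button>
```

A Screen Reader would typically announce the field as “Password” followed by the hint, and the button with its full, disambiguating text.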


Headings

Keep in mind:

  • Headings are for arranging or structuring content on a page. Not all big font instances have to be headings. Choose wisely.
  • Most Screen Readers allow “headings navigation” by just pressing one keyboard key, H key for JAWS/NVDA. Make sure the heading structure will make sense if users were to skip content by headings. Will they land on meaningful content?
  • Most Screen Readers can produce a list with all the headings on a page. This allows users to browse the list and jump to a specific heading on the page. Write down that list and structure it. Does it make sense if you read it out loud?
  • Some Screen Readers like NVDA will “nest” headings when listed. Organize your headings so they make sense, similar to an expandable Table of Contents.
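For instance, a heading structure that reads like a nested Table of Contents might look like this (the titles are made up for illustration):

```html
<h1>Company blog</h1>
  <h2>Latest articles</h2>
    <h3>The hard part of Web Accessibility</h3>
    <h3>Keyboard patterns revisited</h3>
  <h2>Archive</h2>
<!-- Jumping from h1 straight to h3 would break the nesting
     Screen Readers present when listing headings -->
```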

Landmarks and Regions

Keep in mind:

  • As with headings, Screen Reader users can list, navigate, and skip landmarks and regions by pressing only one keyboard key, R or D for JAWS/NVDA. This is a capability only Screen Readers have. Landmarks and regions must be well organized and make sense when listed.
  • Screen Readers will announce when users enter and exit a region or landmark. If you have more than one region or landmark of the same kind, label them for differentiation. E.g.:
    • <nav aria-label="top navigation">
    • <nav aria-label="breadcrumbs">
    • <nav aria-label="footer navigation">
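Putting it together, a page skeleton with well-organized, labeled landmarks might look like this (a sketch, not a complete page):

```html
<header>…</header>
<nav aria-label="top navigation">…</nav>
<main>
  <nav aria-label="breadcrumbs">…</nav>
  <section aria-labelledby="news-title">
    <h2 id="news-title">Latest news</h2>
  </section>
</main>
<footer>
  <nav aria-label="footer navigation">…</nav>
</footer>
```

Listed by a Screen Reader, these landmarks read as a coherent outline, and each nav is announced with its distinguishing label.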

Status messages

Keep in mind:

  • If you remove something from the UI, don’t assume the user will “see” it’s no longer there. That’s not enough for Screen Reader users; they also need to “hear” it’s no longer there. Use aria-live or role="status" to notify users of any changes that may affect them.
  • The same goes for elements that weren’t there before: notify users when new elements are introduced on the UI.
  • Don’t overuse role="alert"; it is a very aggressive and intrusive kind of notification. Use it only when needed, and use aria-live for everything else.
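A minimal sketch of a polite status region versus an alert (the messages are invented for illustration):

```html
<!-- role="status" has an implicit aria-live="polite": updates are
     announced without interrupting the current narration -->
<div role="status" id="cart-status"></div>
<!-- e.g. a script later sets its text to "Item removed from cart" -->

<!-- role="alert" is implicitly aria-live="assertive": reserve it
     for genuinely urgent messages -->
<div role="alert">Your session expires in one minute.</div>
```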

Screen Reader narration

Keep in mind:

  • The Screen Reader user experience should make sense throughout the whole user journey. Think of it as a person telling you over the phone what they are doing at every step. The “listening experience” should match the user interactions. Write it down and make sure developers receive it. Along with specs, wires, and visuals.
  • Silence is bad. Knowing beforehand how the UI should “sound” will help in spotting and fixing silent spots.
  • Use proper semantics. A link is not a button, even if you make it look like one. Screen readers will vocalize the true nature of the element and users will act according to what they hear. Screen Readers can also list Buttons and Links separately, so they should make sense when listed apart. 
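To illustrate the link-versus-button point (the paths and labels are made up):

```html
<!-- Typically announced as “Pricing, link”: it navigates somewhere -->
<a href="/pricing">Pricing</a>

<!-- Typically announced as “Subscribe, button”: it triggers an action -->
<button type="button">Subscribe</button>

<!-- Styling a link as a button changes what users see,
     not what they hear: this is still announced as a link -->
<a href="/signup" class="btn">Sign up</a>
```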

When browsing the web, it’s clear that keyboard interactions, code sequence, labelling, and status messages are probably the most neglected issues. Then, in my experience, assuming we can later rearrange the order of components with CSS Grid and enforce focus management with JavaScript sets the ground for very unpleasant surprises.

Other Pain Points

  • Tables can also be problematic, especially if responsive. They are a complex topic worth an article, or several, of their own. Noted in my to-do for a later time. For now, keep in mind that Screen Readers can also navigate by tables on desktop, T key for JAWS/NVDA, and tables have their own keyboard interactions. Therefore, semantics matter a great deal here. Oh! … and they sound different on mobile.
  • Form validation is also complex, although it could be very simple. It’s a controversial topic mainly due to the overuse of “dynamic validation”. It looks great visually, but it’s not very inclusive for Screen Reader users. They will hear an error as soon as they start typing. Creative solutions are needed to produce an inclusive UX. This topic too deserves its own article(s).
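Returning to the tables point above, here is a sketch of the semantics that make Screen Reader table navigation work (the data and headers are invented):

```html
<table>
  <caption>Quarterly revenue by region</caption>
  <thead>
    <tr>
      <th scope="col">Region</th>
      <th scope="col">Q1</th>
      <th scope="col">Q2</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Europe</th>
      <td>$1.2M</td>
      <td>$1.4M</td>
    </tr>
  </tbody>
</table>
```

With a caption and scoped headers in place, a Screen Reader can announce the relevant row and column headers as the user moves between cells.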

To wrap up, and clarify: let’s not be fooled into thinking Web Accessibility Annotations are all that’s needed to get Web Accessibility implementations done. They are just part of the specifications. They are a reference. They help. But they help even more when team members are familiar with Screen Reader use and interactions, and with how components and elements sound. Then annotations will be accurate, both UI/UX-wise and code-wise. Like an audible mockup.