Accessibility Testing is an Essential Part of User Experience Testing

By Ryan Heisler on Jun 11, 2024

Why do we spend countless hours building software? In my experience, my work feels most fulfilling when it actually helps someone do or get something they need. It's a joyful experience to hear an end user say that something I built was useful to them. Users love it too - successful product teams focus on building the smoothest, most intuitive applications to solve real problems for real people.

Accessibility is a part of usability that improves the user experience for everyone, and yet so many project teams I've worked with spend almost none of their time on it, if any. When I bring it up, they acknowledge it as important, but the expectation is generally that the engineers can just make their code comply with the Web Content Accessibility Guidelines (WCAG) and the app will be accessible.

Writing WCAG-compliant code is not enough! In my experience building apps with accessibility in mind, I've discovered a number of accessibility issues in code that follows all the rules. Testing your own application helps you catch obviously bad experiences before your users do, so their feedback can go toward fine-tuning instead. As an industry, we know this is true: we regularly test our apps with a keyboard, a mouse, and a monitor (or a touch-screen device). With a little learning and practice, we can catch the many obviously bad experiences we currently unleash on people who use other interfaces to interact with our apps.

Inputs and Labels on Android Firefox and TalkBack

"Every input needs a label" - most developers I've met know about this regardless of their level of knowledge of digital accessibility. And they all know you can either nest the input inside the label or associate them via the input's ID, both work just as well. Except they don't.

As of this writing, labels and inputs associated only by pointing the label's "for" attribute at the input's "id" provide a worse experience with TalkBack in the latest Firefox for Android. When the user focuses the label, TalkBack reads the label twice. This won't prevent most users from using the input, but it's potentially confusing.

When my team ran into this issue, we couldn't find any documentation online, but we figured out the cause through experimentation. Nesting the input in the label was very easy for us to do, but we wouldn't have known to do it if we hadn't been testing with a wide range of platforms and screen readers.
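
To make the difference concrete, here is a minimal sketch of both patterns. The field name is invented for illustration; the structure is what matters.

```html
<!-- Associated via for/id: valid HTML, but with TalkBack in
     Firefox for Android this was the pattern that got the
     label announced twice. -->
<label for="email">Email address</label>
<input id="email" type="email" />

<!-- Input nested inside the label: equally valid HTML, and the
     pattern that gave us a single, clean announcement. -->
<label>
  Email address
  <input type="email" />
</label>
```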

Table with Table Header and Caption Elements on iOS

Inputs and labels showed us that built-in elements used correctly don't always provide the best experience. More complicated elements require more testing to make sure they work as you expect.

Various screen readers are known to have issues describing tables - this was mentioned specifically as part of my early education in accessibility. When my team was building a table for a web app, we were extra careful to follow best practices. However, when we tested the table on iOS with VoiceOver, it announced the table as "Data table <caption>, row -90020023372036854775808, table start..."

That number stuck out to us, and with some searching we realized that it's the minimum signed 64-bit integer, except with two zeroes inserted after each of the first two digits. In lay terms, it's the kind of number that tends to appear because of a common programming bug, like an integer overflowing its limits. We didn't have any numbers in the table, and it only had two columns and a few rows. It's a bizarre experience for a user to hear that when trying to understand a table, not least because they might think the number is data from the table itself.

Our search also led us to a page on PowerMapper where this issue has been documented since iOS 13, which came out in 2019. This page was very helpful in understanding the cause and solution. Testing confirmed that having a caption and table headers in the table caused the issue. Captions are the standard way to label an HTML table according to the Mozilla Developer Network caption element page and the World Wide Web Consortium (W3C) caption and summary page, so this is another example of best practices leading to a bad experience.
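
For reference, markup along these lines is enough to hit the issue. This is an illustrative reconstruction rather than our exact table, but like ours it uses a caption and th header cells exactly as the standards recommend.

```html
<table>
  <caption>Office hours</caption>
  <thead>
    <tr>
      <th scope="col">Day</th>
      <th scope="col">Hours</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Monday</td>
      <td>9am - 5pm</td>
    </tr>
    <tr>
      <td>Tuesday</td>
      <td>Closed</td>
    </tr>
  </tbody>
</table>
```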

We found other issues with our table, including one header being read when a different header was selected. Considering this, the small number of columns in our data, and the fact that we expect most of our users to be on mobile devices, we decided to structure the page differently and avoid the table altogether. This was easy for us, but other developers with the same problem may have to use a table. It's important to consider the constraints of your product and target users, and experiment to find a way to improve the experience within those constraints.
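
As one illustration of restructuring (not necessarily what we shipped), a small two-column data set can often be presented as a description list, which screen readers tend to handle more predictably than tables:

```html
<dl>
  <dt>Monday</dt>
  <dd>9am - 5pm</dd>
  <dt>Tuesday</dt>
  <dd>Closed</dd>
</dl>
```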

A Focus Trap That Doesn't Trap Focus

We were building a collapsible menu to contain a website's navigation links in narrow windows where the links wouldn't fit across the top of the page. Our first idea was to cover the entire page with the menu and trap the user's focus until they closed the menu or activated one of the links. The focus trap would cycle their focus to the top of the menu if they tried to tab past the last item. Let me be clear: according to the W3C's Web Accessibility Initiative's (WAI) pages about the Menu Button Pattern and the Menu and Menubar Pattern, this is the wrong way to implement a collapsing menu! We didn't know that at the time, but testing our implementation with a screen reader helped us figure it out.

The way we made this work was by writing a relatively complex set of functions that would run when the user pressed a key with focus on or in the menu. That should have been our first clue - if you're implementing a lot of JavaScript to do something you find on most websites, you might just be building a custom widget. And if you're doing that, you should first check to see if there's a native HTML element that can do what you want. In our case, the details element does everything we need.
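
Here's a sketch of what that native approach can look like. The link URLs are placeholders; the point is that details and summary give you the open/closed state, keyboard support, and screen reader announcements without any custom JavaScript.

```html
<nav>
  <details>
    <summary>Menu</summary>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/products">Products</a></li>
      <li><a href="/contact">Contact</a></li>
    </ul>
  </details>
</nav>
```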

We didn't find the details element, and we also didn't take the next best step: looking around the internet for a guide to best practices for the pattern we were implementing. The WAI ARIA Authoring Practices Guide is a great place to start, and it would have told us not to trap focus in this case.

Doing none of that, we plowed ahead with our implementation and discovered the problem. When you move focus to the next item with a mobile screen reader, there is no keyboard event, and therefore focus isn't trapped. In fact, TalkBack and VoiceOver don't trigger any event when they move focus to the next item, so there's no programmatic way to react to a user doing that. For us, this resulted in focus moving to an element on the page behind the menu, with a focus highlight drawn around an element that wasn't even visible. That's a non-equivalent experience for any screen reader user, and it would be extra confusing for someone who relies on the screen reader's focus highlight to find the focused element visually on the screen.
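
A simplified sketch of the kind of trap we wrote makes the failure mode obvious. The selector and element choices here are invented for illustration, but the structure is typical: the trap only ever sees keyboard events, and a screen reader's swipe navigation fires none.

```javascript
const menu = document.querySelector('#menu'); // hypothetical menu container

menu.addEventListener('keydown', (event) => {
  if (event.key !== 'Tab') return;

  // Everything focusable inside the menu, in document order.
  const focusable = menu.querySelectorAll('a, button');
  const first = focusable[0];
  const last = focusable[focusable.length - 1];

  if (!event.shiftKey && document.activeElement === last) {
    event.preventDefault();
    first.focus(); // wrap forward to the top of the menu
  } else if (event.shiftKey && document.activeElement === first) {
    event.preventDefault();
    last.focus(); // wrap backward to the bottom
  }
  // TalkBack and VoiceOver move focus with gestures that never
  // produce a keydown, so none of this code runs for them.
});
```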

Considering our audience and the technologies we expect them to use, we were able to remove the focus trap and stop covering the screen with the menu, pushing the page contents down instead. We considered using the inert attribute on the page behind the menu, but it's relatively new and many of our users use browser versions from before it came out. As always, your best solution depends on your audience.
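
If inert does fit your audience's browsers, the approach is straightforward. A sketch, with a hypothetical element ID:

```javascript
const pageContent = document.querySelector('#page-content'); // hypothetical

if ('inert' in HTMLElement.prototype) {
  // Everything behind the menu becomes unfocusable and is
  // hidden from assistive technology while the menu is open.
  pageContent.inert = true;
} else {
  // Older browsers need a fallback - or a design, like ours,
  // that never covers the page in the first place.
}
```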

Why Does This Matter?

Simply put, we as an industry decide who is disabled by our products and who isn't. There is nothing inherently natural about the ability to use a keyboard, mouse, and monitor to interact with a website. The information and functionality in a computer is just electrical signals. None of us can perceive it or interact with it as it exists in the physical world - we all use devices like monitors and keyboards to get some benefit from it.

Most people reading this page need a keyboard to fill in a form on a web page, but we don't think of this as a "disability." So why is someone else "disabled" when they need a voice-to-text interface to do the same? It's based on the assumptions that software builders make - that users like them are "normal."

When we build software and test it, we decide who can access it using those assumptions. We generally consider a page or feature "done" if we can perceive it and interact with it using a monitor, mouse, and keyboard. Our implicit bias tells us that anyone who can't do that must have some deficiency in their body that stops them from interacting with our software like a "normal" person can, but that's not true.

Someone who uses a screen reader to understand a web page is not disabled by their body. We disabled them when we chose to build the software in a way that excludes the devices they use to interact with our software. If we truly want to build software that works for people, testing with assistive technologies is an essential, bare-minimum part of the software development lifecycle.

Have questions about adding accessibility testing into your software development lifecycle?