Hi 👋 I’m Erin Doyle, Senior Developer and Accessibility (a11y) Expert. AMA!

On Friday, 12/20/19 at 4 PM PST I’ll be here to answer your questions about developing software, React, and accessibility (a11y).


Reply to this thread with your questions!

My latest course on React Accessibility

My episode of the egghead podcast


When developing day to day, what are some tools and/or tips to keep a11y in mind?


I often struggle to know what role to give div blocks. Though I try to use the fewest tags possible, with as many of them as semantically correct as I can to achieve a design, I find myself using more role="presentation" than I’m happy with. Is there a comprehensive guide or cookbook I don’t know about?


What’s something that you’ve become interested in learning about lately, whether it has to do with programming or not?


I want to send more accessible emails. One problem I have is images of code because inline code in email doesn’t format well.

The images aren’t accessible for the visually impaired!

Should the images be linked to text files? Is the alt attribute appropriate for dropping that much text into?

In general, how should we handle images with complex content?


What is the biggest misconception when it comes to accessibility?


What are the best tools I could include in my toolkit in order to write accessible React components/apps?

I highly recommend installing eslint-plugin-jsx-a11y so that whenever you run ESLint it checks your code for potential accessibility issues. After that, install react-axe, which will report any findings to the JavaScript console while you’re testing your app in development. Having those two tools as part of your normal workflow should catch a lot of issues early on while you’re developing.

Beyond that, I’d suggest making manual testing (using a screen reader, keyboard only, and high-contrast tools) part of what you do right before you send something to QA, and/or making it part of the testing QA performs. If you’re adding a new feature or making a big change to something on the front-end, there are likely to be implications for a11y, so that should be your cue to make sure you’re testing those changes.
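To sketch what that setup looks like: react-axe gets wired into your app’s entry point, gated so it only runs in development (this follows react-axe’s documented usage; `1000` is the debounce delay in milliseconds before re-auditing after a change):

```js
// index.js — app entry point (setup fragment, assumes react/react-dom are installed)
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

if (process.env.NODE_ENV !== 'production') {
  // Only pull in react-axe during development; findings appear in the JS console
  const axe = require('react-axe');
  axe(React, ReactDOM, 1000);
}

ReactDOM.render(<App />, document.getElementById('root'));
```

For the linting side, adding `"plugin:jsx-a11y/recommended"` to the `extends` array of your ESLint config turns on the plugin’s recommended rule set.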


Not every div needs to have a role. A div only needs a role when you want to make it clear to the screen reader that the div is either a structural part of the page or part of a specific type of UI widget, so that the user knows what to expect from it, and so additional things like the widget’s state, labels for its parts, keyboard behavior, and focus management can all be conveyed appropriately to the user. You really only need role="presentation" when you’re telling the screen reader not to provide any kind of additional context or information about that element. With most screen readers the content will still be announced, but it won’t be given any other kinds of properties.

This is a great resource for what roles to use and when: https://www.w3.org/TR/wai-aria-practices-1.1
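To make the distinction concrete, here’s a small sketch (the widget labels and class name are made up for illustration):

```html
<!-- A div acting as part of a UI widget: give it a role, state, and label -->
<div role="tablist" aria-label="Account settings">
  <div role="tab" aria-selected="true" tabindex="0">Profile</div>
  <div role="tab" aria-selected="false" tabindex="-1">Billing</div>
</div>

<!-- A div that exists purely for layout: it needs no role at all -->
<div class="layout-grid">Page content here</div>

<!-- role="presentation" strips semantics an element would otherwise have,
     e.g. a table used only for visual layout -->
<table role="presentation">
  <tr><td>Visual layout only</td></tr>
</table>
```

Note that a plain div carries no semantics to begin with, so role="presentation" on a div is usually redundant; it earns its keep on elements with native semantics, like table, ul, or img.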


I’ve really been trying to learn more about improving performance. I’ve been learning about how to effectively use React.memo, useCallback and Suspense. I’ve done some work recently on bundle splitting and dynamic pre-loading. Big plug to egghead courses by Kent C. Dodds and Michael Chan on React with hooks and Suspense that have been really helpful in my learning!

The next thing on my list that I want to start learning is XState and we’ve got a couple brand spanking new courses on egghead I’m going to check out!

The recommendation for images of text is to make the alt value the exact same text that’s in the image, but I don’t think that works for images of code. Normally, if someone were reading code with a screen reader, they’d be able to control the speed at which they stepped through the text, go back, and so on. If all of that text were in an alt attribute, it would be read as one long stream the user couldn’t control, and I think that would be overwhelming. So I don’t think that approach will work.

I think what I’d try is giving the image an empty alt (alt="") so the screen reader doesn’t even announce that it’s there, since it won’t be useful, and then following it immediately with a link to the code elsewhere, where the link text is either descriptive itself or has an aria-label describing what screen reader users will find there. Or, if you didn’t want the link to be as obvious, you could wrap the image with the link, in which case you’d want to provide alt text with instructions about following the link to read the code. This is definitely one of those things where you’ll want to test out the experience to determine what feels clearest to a screen reader user.
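Both approaches might look something like this (the file names and URL are placeholders for illustration):

```html
<!-- Option 1: empty alt hides the image from screen readers;
     a descriptive link immediately after it points to the real text -->
<img src="code-sample.png" alt="" />
<a href="https://example.com/code-sample.txt">
  Read this code sample as plain text
</a>

<!-- Option 2: wrap the image in the link and put the instructions
     in the alt text instead -->
<a href="https://example.com/code-sample.txt">
  <img src="code-sample.png"
       alt="Screenshot of a code sample; follow this link to read it as plain text" />
</a>
```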

That’s a tough one, as there are a number of biggies. The most important one, though, as far as the impact we can have as developers, is that many people look at accessibility as if it’s an additional feature that has to be added. I’ve heard developers talk about not being able to convince their company or product owners to give them the time to work on their app’s accessibility, or say that for various reasons their app doesn’t need to be accessible.

We need to stop looking at accessibility as something that gets added later, like a feature that’s only built if absolutely necessary and can be prioritized among other features. Writing accessible applications should just be part of our normal development process. You don’t need permission to write unit tests for your code or to have QA test changes to the application before it’s deployed to production (at least I hope not). Writing accessible code, running auditing tools (just as you’d run something like ESLint anyway), and testing that accessibility isn’t impacted should just be part of your workflow. It should be part of your definition of done. If users can’t use your application or a feature in it, that’s a bug; it shouldn’t matter less because those people are using a keyboard instead of a mouse, or are using a screen reader, etc.


Any tips to persuade your team to treat accessibility as a priority?


We should all be attentive, listen, and learn more about accessibility in general, but is a11y specialization a good career move for developers?

I think it’s really helpful if you can describe or demonstrate the user experience to people when things aren’t accessible. Once people really understand how a lack of accessibility impacts users, it gives them empathy and they see these bugs as critical rather than something they can put off. When my team triages accessibility bugs, we really dig into what the experience is like for each one. We talk about which groups are affected (e.g. screen reader users, keyboard users, high-contrast users) and what the impact is. Not all bugs are showstoppers, but when you analyze the experience you find that some absolutely are for a certain group, and once we realize that, those get top priority.

I think so! Because so many people know little to nothing about a11y but companies are realizing how important it needs to be (especially due to the rise in lawsuits) these skills are in high demand. Many people are overwhelmed by how to get their apps from zero to accessible. They see this as a monumental task. So I think being able to claim experience/knowledge in testing for accessibility and being able to write accessible code and fix issues is massively beneficial for a developer’s career right now.

I’ve used the “pitch” that “accessibility is for everyone” to try and sell the idea that we all benefit, but I was intrigued by this post on Twitter by E.J. Mason:

Ultimately it struck a chord with me.

Why is listening to folks that are actually disabled important when it comes to accessibility?

Yeah, I guess this sounds pretty harsh at first, but it makes sense and I can agree with it. One thing I’ve definitely come to realize as I’ve gotten into accessibility, and still worry about all the time, is that those of us who do not have a certain disability can never fully understand that experience. We can use the same tools and try to relate to the experience or imagine what it might be like, but we can’t truly be in those people’s shoes.

For instance, when I test with a screen reader I often leave my eyes open, so I always have that additional context (I’m also the one who wrote the code, so I’m already very aware of what’s on the screen). I have to guess at how I think users will use their various tools and adaptive strategies, but again, approaching them as someone who doesn’t have to rely on them 100%, I’m not likely to get it right all the time. So it’s really best when we can learn from people who have disabilities about how they use tools and adaptive strategies to use the web.

And I think what E.J. Mason is getting at is that people with disabilities have no other choice when it comes to using the web and dealing with horribly inaccessible sites. They are dramatically impacted, so we need to remember that our goal is first and foremost to improve their experiences. We need to work on bringing their experience closer to parity with that of non-disabled people. So yeah, I get it.