
Last summer, as I was finishing a graduate degree, I wrote a final paper on how the evolution of Artificial Intelligence (AI) could affect users with disabilities who rely on Assistive Technologies (AT), and how Cybercriminals can exploit AI shortcomings to target this demographic. Although my original paper focused on Screen Readers, the findings apply just as well to any user who relies on Assistive Technology to interact with digital products and services.
I found the AI/AT/Cybercrime intersection very interesting, so I decided to blog about it. For now, to keep the word count manageable, I will only outline my findings in this post and later write a follow-up article for each topic, which I will link here. Because the references are numerous, I am listing them at the end of this post, both to keep them separate from the links to my own articles and to give full credit to the source authors.
As of January 2024, legislation on Digital Accessibility is intensifying in North America [1, 2, 3] and Europe [4], imposing substantial fines for non-compliance. This article explores the intersection of AI, Digital Accessibility, and Cybersecurity, and how this intersection could affect visually impaired users with the evolution of AI-Agents replacing traditional User-Agents.
The Evolution of User-Agents into AI-Agents
AI is rapidly evolving, presenting both advancements and challenges, with user privacy and security at the forefront. AI-Agents, autonomous bots designed to perform tasks on behalf of users, are reshaping the digital landscape. Traditionally, Screen Reader software operates on top of User-Agents, such as web browsers, remaining undetectable by web analytics. However, the emergence of AI-Agents introduces new dynamics [5], raising concerns about user privacy and cybersecurity.
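To make this shift concrete: a Screen Reader never appears in server logs because it drives the browser locally, whereas an AI-Agent issues its own HTTP requests and may announce itself through the User-Agent header. The sketch below uses invented User-Agent strings and a deliberately naive heuristic to show what a server-side observer would, and would not, see.

```python
# A minimal sketch of what a server-side log sees. A Screen Reader reads the
# browser's accessibility tree, so its requests are indistinguishable from any
# other browser traffic; an AI-Agent sends its own requests and may (or may
# not) announce itself. Both User-Agent strings below are illustrative only.

requests_log = [
    # Browser driven by a Screen Reader: nothing in the request reveals the AT.
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/121.0"},
    # Hypothetical AI-Agent that identifies itself in the header.
    {"user_agent": "ExampleAIAgent/1.0 (+https://example.com/agent)"},
]

def looks_like_ai_agent(user_agent: str) -> bool:
    """Naive heuristic: flag requests whose User-Agent mentions an agent token."""
    return "aiagent" in user_agent.lower()

flags = [looks_like_ai_agent(r["user_agent"]) for r in requests_log]
print(flags)  # → [False, True]: the Screen Reader's request is not flagged
```

The asymmetry is the point: the Screen Reader user stays private by construction, while the AI-Agent's traffic is a new, observable signal tied to the same user.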
Privacy Concerns in the Digital Accessibility Landscape
In the realm of Digital Accessibility, the ethics of Assistive Technology detection have been debated extensively [6, 7]. Platform Design Principles emphasize the protection of user privacy for individuals with disabilities [8]. Detection of Assistive Technology poses a risk of unintended analytics discrimination and the potential disclosure of users’ disabilities without their consent.
AI’s Role in Enhancing Accessibility
On a positive note, AI has significantly improved Digital Accessibility for individuals with disabilities [9]. Tech companies leverage AI to automate functionalities, such as generating alternative text for images and facilitating voice chatbot interactions [10]. These efforts align with evolving accessibility regulations, aiming to create a more inclusive digital environment.
Cybersecurity Threats for Visually Impaired Users
Despite advancements in Digital Accessibility, visually impaired users remain vulnerable to cyber threats due to the lack of visual cues and limited software support [11]. Their top concerns include the theft of private information, malicious access to financial data, and the exposure of personal information. The rise of AI use in cybercrime further complicates this landscape, enabling more sophisticated attacks that are challenging to detect and combat [12].
Challenges in Differentiating Good Bots from Bad Bots
As AI-Agents become more prevalent, distinguishing between beneficial AI bots and malicious ones becomes a significant challenge. Ensuring the ethical use of AI-Agents, particularly in protecting user privacy and preventing fraudulent activities, becomes paramount.
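Part of the difficulty is that the User-Agent header is self-reported, so a malicious bot can copy a well-behaved agent's string verbatim. This minimal Python sketch, using a hypothetical agent name and allowlist, shows why header-based identification alone cannot make the distinction.

```python
# Sketch: why the User-Agent header alone cannot separate good bots from bad
# ones -- any client can claim any identity. All strings are illustrative.

KNOWN_GOOD_AGENTS = {"ExampleAIAgent"}  # hypothetical allowlist

def claimed_identity(user_agent: str) -> str:
    """Extract the self-reported product token from a User-Agent string."""
    return user_agent.split("/", 1)[0]

legit = "ExampleAIAgent/1.0 (+https://example.com/agent)"
spoofed = "ExampleAIAgent/1.0"  # a malicious bot copying the same token

print(claimed_identity(legit) in KNOWN_GOOD_AGENTS)    # → True
print(claimed_identity(spoofed) in KNOWN_GOOD_AGENTS)  # → True: spoof passes
# Reliable verification needs out-of-band checks (e.g. reverse-DNS lookups of
# the source IP, as major crawler operators document), not the header alone.
```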
Addressing Privacy Concerns in the AI Era
Most development companies are well aware of AI’s user privacy shortcomings [13], so they implement measures to anonymize datasets, reduce the possibility of user identification, and eliminate edge cases from their algorithms. However, users with disabilities usually are the edge cases that get eliminated [14]. Ensuring their inclusion and protection in the age of AI is therefore imperative.
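That edge-case elimination can be sketched with a k-anonymity-style suppression rule: records belonging to any group smaller than a threshold k are dropped entirely before analysis, which is exactly how a small population of Assistive Technology users can vanish from a dataset. The data below is invented for illustration.

```python
from collections import Counter

# Invented usage data: 950 mouse users, 45 touch users, and 5 hypothetical
# Assistive Technology ("AT") users in a sample of 1,000 sessions.
records = (["mouse"] * 950) + (["touch"] * 45) + (["AT"] * 5)

def suppress_small_groups(rows, k):
    """Drop every record whose group has fewer than k members."""
    counts = Counter(rows)
    return [r for r in rows if counts[r] >= k]

kept = suppress_small_groups(records, k=10)
print(Counter(kept))  # → Counter({'mouse': 950, 'touch': 45}): AT users vanish
```

The suppression protects the five AT users from re-identification, but it also erases them from every downstream decision made with the dataset, which is the inclusion problem in miniature.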
Balancing Opportunities and Challenges
Visually impaired users stand to benefit significantly from AI advancements, with AI-Agents automating tasks to enhance accessibility. However, the potential for cybercriminals to exploit users’ disabilities through digital interactions calls for a careful balance between opportunities and challenges [15]. For example, users may sometimes need to disclose their disability to interact with medical services or government agencies, obtain special discounts, or request accommodations when booking hotel rooms, flights, or restaurant reservations. At that moment, the AI-Agent holds sensitive information that could be used to discriminate through analytics, or to target users with disabilities for fraudulent purposes.
Conclusion
Innovators have to keep this intersection of AI, Digital Accessibility, and Cybersecurity on their radar, along with the crucial balance between harnessing AI’s opportunities and addressing the challenges it poses: prioritizing user privacy and inclusivity, and safeguarding sensitive information regardless of whether Assistive Technology is in use. As we move into the future, continued research is essential to understand the implications of the transition from User-Agents to AI-Agents and its impact on visually impaired users, and on those with disabilities in general. Striving for privacy equity in the age of AI is critical to prevent an internet divide and to ensure a digital future that benefits everyone.
References
1. The Americans with Disabilities Act. (1990).
2. The Accessibility for Ontarians with Disabilities Act. (2005).
3. The Accessible Canada Act. (2019).
4. European Accessibility Act. (2019).
5. McGinley-Stempel, R. (2023). Preparing For The Era Of The AI Agent. Forbes Technology Council.
6. Bureau of Internet Accessibility. (2021). Analytics Tools Can’t Track Screen Readers — And Shouldn’t.
7. Roselli, A. (2022). On Screen Reader Detection.
8. Web Platform Design Principles. (2023).
9. Yu, C., & Bu, J. (2021). The practice of applying AI to benefit visually impaired people in China. Communications of the ACM, 64(11), 70–75.
10. Ara, J., & Sik-Lanyi, C. (2022). Artificial intelligence in web accessibility: potentials and possible challenges. Proceedings of IAC 2022.
11. Inan, F. A., Namin, A. S., Pogrund, R. L., & Jones, K. S. (2016). Internet Use and Cybersecurity Concerns of Individuals with Visual Impairments. Journal of Educational Technology & Society, 19(1), 28–40.
12. Islam, R. (2023). AI And Cybercrime Unleash A New Era Of Menacing Threats. Forbes Technology Council.
13. von Gravrock, E. (2022). Artificial intelligence design must prioritize data privacy. World Economic Forum.
14. Frick, T. (2021). How Many People With Disabilities Use My Website? Mighty Bytes.
15. Short, K. (2021). Accessibility and Digital Security. Security.org.