As an autistic person also living with a chronic health condition, I’m used to hearing conversations about ‘progress’ that don’t include or relate to people like me. As the world grapples with two of today’s major transitions – the rise of AI and the ongoing global reckoning with environmental sustainability – I find myself, once again, feeling like crucial perspectives (and opportunities) are being missed.
These shifts are typically framed as progress, but for neurodivergent and disabled people the picture is more complicated and full of inherent contradictions. Digital tools and AI can provide the access and equity in productivity and executive function that people like me have needed for a long time. But they are not developed with us in mind, and their environmental costs are monumental and unsustainable. As humanity continues its longstanding pursuit of innovation, we cannot ignore the fundamental questions: who benefits, who is left behind, and what does the environmental future of that reality look like?
Like many other neurodivergent people, I was excited when AI tools like ChatGPT became available and mainstream. I was moved by the opportunities they opened up to me. Tasks that seem simple to a neurotypical mind, yet impossibly intimidating and inaccessible to mine, suddenly had an access point – a way of navigating them that was potentially life-changing.
At the end of 2024, EY published a study of 300 neurodivergent and disabled people who use Microsoft Copilot. The results were positive: 85% said Copilot can support a more inclusive workplace and 76% said it helps them thrive professionally. Contrast this with a 2023 survey by Fable in which only 7% of disabled users felt adequately represented in how AI tools are being developed.
That disconnect mirrors another tension I feel when using these tools: their environmental impact. Every time I use ChatGPT or other AI tools, it feels empowering and life-changing, yet at the same time like a deep moral failure. Behind every prompt is the guilt of knowing the energy and water it consumes. From 2020 to 2023, Microsoft’s water usage increased by 87%; Google’s increased by 69%. This is clearly unsustainable and poses one of the biggest new threats to the green transition.
We have also been historically left behind by the green transition itself. Disabled people are more at risk during extreme weather events due to inaccessible public shelters and poor strategic policy. Although the 2015 Paris Agreement includes a requirement for disability inclusion, only 35 of 192 countries actually mention disabled people or accessibility in their pledges – their nationally determined contributions (NDCs). Only 45 countries in the world mention disability in their green policies and programmes. Countries that mention it in neither include the UK, the US, China and Japan.
So, if these digital tools offer huge opportunities for access, but are not being developed with us in mind – often assuming high digital literacy, using neurotypical UX designs and keeping the most useful features behind paywalls – then access itself becomes another skills divide. It’s clear that AI is here, and here to stay. The challenge governments now face is how to make sure it’s developed responsibly. That includes balancing its growth, and therefore its potential for accessibility, with environmental sustainability. It also means ensuring that neurodivergent and disabled people have the support, training and lifelong learning pathways to make full use of these innovations.
For starters, an obvious step is that neurodivergent and disabled voices are part of the development strategies for these tools. In the same Fable survey that found only 7% of respondents felt represented, 87% said they were willing to provide feedback to AI developers.
To address the significant environmental impact of AI, its usage could be taxed or levied – especially at an enterprise level – with the revenue funding initiatives like water replenishment. As part of broader welfare policies and support services, verified neurodivergent and disabled people could access AI tools through a credit system. Copilot, for example, could be taxed for corporate users but remain free for those who rely on it for accessibility. This way, environmental responsibility falls on industries – not on the shoulders of those who depend on AI for access and inclusion.
AI companies could be required to publish ‘double-impact’ reports in which tangible benefits to marginalised users are listed alongside environmental footprint, with a commitment to balance both. We are already familiar with carbon offsetting; could water offsetting programmes be part of AI policy too, with water usage matched by replenishment and tied to access commitments, so that AI tools used and developed for accessibility are protected?
A just transition must be inclusive. We have the potential to vastly improve the lives and experiences of neurodivergent and disabled people through technology, but those aspirations must be intentional and core to its development. And none of it can come at the cost of environmental progress – progress we already feel dangerously close to abandoning. Innovation for innovation’s sake – without regard for who it benefits, who it excludes, or its impact on the planet – is not just. If we want it to be, we must be intentional about designing it that way.
Looking Forward: A Fairer, More Inclusive Future
EIT Campus is more than a digital learning environment — it is a community. While no digital platform can fully overcome the limitations of online learning, by creating space for diverse voices we aim to break barriers and make learning more relevant. Perhaps it’s time to approach skills acquisition and education in a more comprehensive, inclusive, and forward-looking way. We invite all learners to engage, share perspectives, and help shape a just transition that benefits everyone.