How to avoid climate apartheid fueled by A.I.
Divisions of race and class will be dangerously exacerbated through a collision of artificial intelligence and climate catastrophes unless policymakers prioritize algorithmic fairness and more equitable tech.

COMMENTARY BY Marco Tedesco and David Sathuluri
Depending on which way you look in New York City, you’ll face two starkly different landscapes. Look downtown: the gleaming towers of Wall Street stand as monuments to financial power. Turn 180 degrees, staring across the Harlem River, and you’re assaulted by toxic air, inadequate infrastructure and deepening inequalities.
This divide has long been defined by geography, race and class. Now it is being redrawn by climate change and artificial intelligence. Areas that were environmentally safe a few decades ago now face climate hazards whose effects are amplified by the social vulnerability of many residents in areas such as the South Bronx and Queens. At the same time, the very machine-learning models that optimize high-frequency trading and flood defenses in Manhattan threaten to entrench “climate apartheid”—via the increase in energy consumption and associated CO2 emissions—promoting digital colonialism and digital illiteracy as vehicles of social injustice.
The irony is stark: AI, a technology often heralded as the key to solving climate change, acts as another barrier for the most vulnerable.
Lower Manhattan’s $7 billion Financial District & Seaport Climate Resilience Master Plan exemplifies this dynamic. The plan relies on massive physical infrastructure: extending Manhattan’s shoreline up to 200 feet into the East River, creating permanently elevated barriers 15 to 18 feet high for storm protection and building a multi-level waterfront system with passive flood defenses designed to protect against daily tidal flooding.
These fortifications aim to shield nearly 1 million workers and tens of billions of dollars in assets from storm surges and sea-level rise. Yet the South Bronx, a community plagued by particulate pollution averaging 8.5 µg/m³ (according to Columbia’s Lamont-Doherty Earth Observatory), asthma rates twice the city average and the city’s highest rates of hospitalization for respiratory ailments, received its first FloodNet sensors only in 2024: just two devices to monitor flooding across the borough’s most vulnerable neighborhoods. While Lower Manhattan gets billions in permanent flood barriers, the South Bronx gets a share of $7.2 million worth of basic flood-monitoring sensors spread across all five boroughs.
This disparity reflects broader patterns of environmental, climate and digital racism. An alarming 22.4% of Bronx households lack broadband access, and one in three homes has no computer. Without connectivity, residents cannot access air-quality alerts, telehealth services for pollution-related illnesses or the community-sourced data platforms that would enable grassroots advocacy. Nor can they access or learn to use AI tools, the same tools that are condemning their community to more heat waves and flooding. Meanwhile, Manhattan’s “Silicon Alley” hosts more than 2,000 AI startups and 35 so-called unicorns, incubating algorithms that serve investment banks and real-estate speculators.
AI-driven climate finance platforms package “green” bonds and ESG (environmental, social and governance) products based on proprietary risk-scoring models, yet these same algorithms mask significant investments in fossil fuel expansion. In early 2025, the six largest U.S. banks quietly withdrew from the United Nations-backed Net-Zero Banking Alliance, and BlackRock followed suit. These actions reveal that AI-driven ESG metrics can be wrongly used to prioritize short-term profits over long-term planetary health, effectively socializing climate risks onto marginalized communities.
This “Wall Street Climate Consensus” has systematically reversed course on its climate commitments. In 2024 alone, the world’s 65 largest banks committed $869 billion to fossil fuel companies—a dramatic $162.5 billion increase from the previous year. JPMorgan Chase led this surge, providing $53.5 billion in fossil fuel financing, followed by Bank of America ($46 billion) and Citigroup ($44.7 billion). This algorithmic approach to investment identifies low-risk financial returns while treating externalities such as pollution, health impacts and infrastructure failures as acceptable collateral damage borne by under-resourced neighborhoods.
By 2028, AI systems in the United States could consume 300 terawatt-hours of electricity annually, enough to power more than 28 million households. Data centers overall, including AI, could reach 325 to 580 TWh by the same year, and AI-specific power demand could grow more than thirtyfold by 2035. Meanwhile, data centers are increasingly located in socially vulnerable communities that are already overburdened by pollution. One illustrative case is Elon Musk’s xAI data center in Memphis, which operates 35 unpermitted gas turbines in the historically Black Boxtown neighborhood, pumping out cancer-causing pollutants and toxic chemicals such as formaldehyde. Google’s greenhouse gas emissions have risen 48% since 2019, largely because of AI-driven data center energy consumption.
Halting AI’s expansion is neither feasible nor desirable; the imperative lies in redirecting this technology toward energy justice and environmental sustainability. Community solar programs are demonstrating how AI can democratize clean-energy access. In New Jersey, AI-powered crowdsourcing tools enabled 158 participants from low-income neighborhoods to engage directly with renewable-energy planning, and AI systems can optimize energy use to cut carbon emissions by up to 50%. In the South Bronx, community-academic partnerships are pioneering models of AI-driven advocacy. Grassroots organizers working with Columbia researchers have deployed low-cost, crowdsourced smart air-quality sensors across the neighborhoods of “Asthma Alley,” enabling residents to visualize hotspots, detect emission patterns and press city agencies for targeted interventions. Similarly, nonprofits have adapted edge-computing AI models to predict Bronx River overflows, responding to recent flood events that overwhelmed antiquated combined-sewer systems.
Machine-learning techniques also streamline environmental legal advocacy. Platforms that automate analysis of zoning permit applications, environmental impact statements and regulatory filings enable organizations such as WE ACT for Environmental Justice and UPROSE to flag potential violations and submit challenges in near real time. By converting sprawling bureaucratic documents into structured data, AI can empower legal teams to prioritize cases, track enforcement actions and mobilize public pressure with unprecedented speed.
But even these initiatives reveal significant challenges at the intersection of AI and climate justice. Data equity remains elusive. Crowdsourced sensor networks require community buy-in, technical training and ongoing maintenance—resources that are scarce in socially vulnerable areas. Without sustained funding and capacity-building, these systems risk becoming transient projects rather than enduring infrastructure. Additionally, many AI models are proprietary and opaque, making it impossible for communities to scrutinize risk-scoring methodologies, bias in training data or the socioeconomic assumptions embedded in the models. Without regulatory mandates for audit-ready code and open-source standards, accountability remains aspirational.
The governance of AI in climate policy is also fragmented. Municipal agencies, philanthropic foundations and private firms each deploy AI tools with limited coordination, leading to siloed systems that fail to integrate across sectors. Comprehensive resilience requires interoperable platforms that can share data on air quality, flood risk, energy usage and health outcomes, yet technical and institutional barriers inhibit such collaboration.
Addressing these gaps means policymakers must enact a suite of interventions that redesign AI as a lever for justice rather than an instrument that entrenches injustice. Climate and technology funding—whether municipal bonds, federal grants or philanthropic dollars—should mandate open-source AI standards, audit-ready code, transparency around training datasets and documentation of model assumptions. Universal broadband must be treated as critical climate and digital infrastructure.
New York City’s Local Laws 60 and 64, which require environmental justice assessments for city actions, should be expanded to declare connectivity a human right during emergencies, with AI-driven network self-healing to maintain service during extreme weather. Risk-scoring models used by banks and asset managers should incorporate equity weights that penalize investments neglecting frontline communities, and regulators should require climate finance platforms to disclose equity-adjusted risk assessments. Finally, digital literacy and AI fellowships must be embedded in environmental justice strategy. Partnerships among AI firms, community colleges and schools in disadvantaged areas can cultivate local talent pipelines, offering paid internships, technical training and mentorship. By empowering residents to co-design AI tools rather than merely serving as data subjects, these programs can foster sustained community control over technology.
As global leaders converge in Manhattan this month for the 80th United Nations General Assembly, these two very different versions of New York City carry profound lessons. The choices we make today, about who writes AI algorithms, who sets data-sharing standards and who benefits from digital infrastructure, will determine whether climate AI deepens existing inequities or becomes a force for transformative justice. Wall Street flood barriers and trading engines may protect wealth, but without intentional policy, community-led innovation and unwavering transparency, these same technologies risk relegating the South Bronx and similar communities to disposable status.
We must repurpose AI to dismantle climate apartheid rather than reinforce it. By mandating open-source standards, treating broadband as necessary climate infrastructure, codifying algorithmic fairness in finance and investing in community-led AI capacity, New York can chart a new course and set an example for the world. The lines of code we write today must bridge the divides of race, class and geography, ensuring that all New Yorkers benefit from the promise of AI-driven climate resilience. Failure to do so risks entrenching a digital and environmental caste system in which the wealthy remain shielded and the vulnerable remain exposed.
Marco Tedesco is a Lamont Research Professor at the Lamont-Doherty Earth Observatory of Columbia University and an adjunct scientist at the NASA Goddard Institute for Space Studies (GISS). He is also affiliated with the Data Science Institute, is an affiliated professor at the Sant’Anna School of Economics in Pisa, Italy, and has been the resident scientist at Columbia Business School for the past two years.
David Sathuluri is a policy expert, researcher and climate justice advocate dedicated to advancing justice, equality and human rights for all. He holds a master’s degree in Climate and Society from the Columbia Climate School at Columbia University, with a focus on intersectional studies, justice, artificial intelligence, inclusive climate action, climate technology, policy innovation and social entrepreneurship.