Secure Tech Cannot Be a Luxury Item

Millions in crisis rely on ubiquitous tools that betray them, from social media to Starlink. The deeper shock comes when secure tools also exclude them through everyday hurdles like phone verification. To offer real alternatives, we must reclaim hyper-accessibility.


An Israeli strike on a tent clearly marked for journalists killed Anas Al-Sharif, ending the life of one of Gaza’s most beloved and vital documenters of the genocide. From his childhood in Jabalia camp, he had vowed to bear witness, and he did so until his last moments, when a war crime killed him and his team at Al Jazeera. In the weeks since his death, the list of those killed has continued to grow, including more journalists. In a cruel irony, the platforms he relied on to report were the same ones used to silence and smear him, and later to identify him for the targeted strike. This reliance on tools that can be manipulated by adversaries is not unique to Palestinians, however: millions in war, genocide, and crisis face the same bind. And the safer, more ethical alternatives often remain out of reach.

In Gaza, both journalists and everyday people must use platforms like Telegram, X, Facebook, Instagram, and WhatsApp to communicate. As 7amleh’s July 2025 report points out:

“Social media platforms have not merely served as an extension of Palestinian social life, but also as a prerequisite for survival. With the nearly total destruction of infrastructure and recurrent electricity and communication outages, the digital space has become a survival tool and a platform for documentation and expression in the absence of international media coverage.” 

Despite condemnation by experts the world over, investigations have shown that Israeli automated and AI-assisted systems ingest vast datasets, including from social networks, to extract communications metadata. At checkpoints, Palestinians — including our interviewees — often have their devices copied with tools like Cellebrite, feeding messages, photos, and networks into Israel’s AI-driven monitoring systems and weapons. These systems generate “recommendation lists” that automate decisions of life and death, producing bombing targets. While the exact method is uncertain, Al-Sharif’s killing can be linked to Israel’s proven tactic of combining geolocation data with AI-generated target lists drawn from the same apps he used for reporting.

Why are the oppressed forced to speak through their oppressor’s platforms, which are wide open to deadly surveillance? It is obviously not a lack of caution or awareness, but a structural dependence. Often, there is nowhere else to go, no other platform that can match the reach, immediacy, and accessibility of the very tools most ready to betray them. And that is often by design.

Monetizing Survival: The Starlink Dilemma

In Sudan, another genocide exemplifies this cruel irony. In the world’s largest humanitarian crisis, families already starved by shuttered kitchens also face a digital void. The destruction of telecommunications infrastructure, blackouts, and a fractured network have left almost 30 million people cut off from banking, humanitarian coordination, and even the most fundamental need of knowing if loved ones are alive. The internet shutdowns and destruction of infrastructure have been weapons of war aimed at obstructing humanitarian aid and ensuring that atrocities can be committed in silence.  

As Yassmin Abdel-Magied observes, Elon Musk was ready to fill this void. He now owns one of the only points of access and sources of connectivity for many in Sudan. The lives of millions hinge on his whims, a power he has already exercised in Ukraine. For Sudan, this access also comes at a profound cost.

Starlink can be exploited for surveillance or intelligence by the RSF and other factions, risking civilians’ exposure to drone strikes and other harms. The RSF has also weaponized Starlink’s monopoly to demand heavy licensing fees for terminals, profiting from the desperation of those under indescribable suffering.

I felt a glimmer of this dilemma during Israel’s war on Iran in June. Our families were forced to evacuate as Tehran – a metro area with nearly 17 million inhabitants – came under Israeli fire. Evacuations were slow. Many had nowhere to go as the missiles fell. I, like many others, tracked the war through fragments: videos on X, short texts slipping through on iMessage, Instagram DMs or WhatsApp, or calls to landlines. Then, silence.

Entire blocks of time passed when nothing came through. There was no way to know if my family was OK, especially when their neighborhoods were being hit. On the night my parents’ neighborhood was bombarded, they finally decided to evacuate, and their route cut through the main target zones. The silence was unbearable. A colleague and friend offered a way to connect: a wellness call using contacts with Starlink. I was desperate, but also too afraid of the risk, since Starlink traffic was reportedly monitored and shared with Israeli forces during wartime operations. Could my call help the same military dropping bombs on them? At that moment, the impossible calculus of war and communication came to my world – one many in Sudan and Gaza face every day.

There are more than nine levels of hell to which corporations, willingly or not, expose people. The hope is that our secure tech breaks this structural dependence on them, but we have work to do. 

A Surprising Barrier to Safety: Phone Number Verification 

Having worked with experts and communities in high-risk crises around the world, I find that the information I bring back about insecure corporate technology is not what shocks people most. The surprise comes when we document how inaccessible the safer, more private, and more secure technologies remain. In times of crisis, especially war and military engagement, secure communication is vital. Yet for many, it is the very thing they cannot access. Often, the reason lies in dated design choices and entrenched industry standards — not inevitability.

One widespread flaw is a core infrastructural design decision that abandons millions of people at the very first step of access: the phone number required for app sign-up and verification. This antiquated method is still treated as a standard across the industry. As documented in our MENA research with queer communities, phone number verification via SMS or voice call can effectively ban entire groups from accessing vital communication tools. In Iran and Sudan, sanctions and geo-blocking prevent users from receiving verification codes, locking them out during times of crisis. Interviewees in Iran told us that a black market has cropped up around this requirement, with people buying foreign phone numbers in order to access apps like Tinder or Signal.

In Iran’s case, one of the apps I trusted for work and secure communications during bloody uprisings or war – Signal – was the one I could not even share with loved ones. Built with default encryption by people who care about privacy, Signal is one of the few mainstream tools that can push back against the torrent of intercepted data that is turned into a weapon of war and repression. Yet by clinging to outdated industry norms, it leaves people abandoned.

“Registering with Signal [in Iran] has been very difficult since 2020/22,” Mo Hoseini, a digital rights expert and trainer, told me. “Depending on operators or provinces, the OTP code might come through or take a long time, but most often it never comes through. Since the Iran–Israel war, this has gotten a lot worse, which is likely due to a further systematic blockage by the Iranian government. This is the same for WhatsApp and Telegram. Getting Signal’s code is the least likely recently, including both on IranCell and Hamrahe Aval.”

It raises the question: how can a tool be truly private or secure if those who need privacy or safety the most cannot access it?

Raya Sharbain, a Palestinian security trainer who also works tirelessly to provide eSIMs in Gaza, explained: “While Palestinian telcos are constantly trying to fix the network inside the Strip that’s been destroyed by Israel, people are resorting to WiFi, eSIMs, and hotspots. One of the main demands we get is recharging local Palestinian SIMs so people can send SMSs and make regular calls. With no income, and no means to recharge [the money on] your phone credit, there may be no way to receive SMSs, let alone verification codes.”

At the US-Mexico border, rights workers tell us that refugees and migrants often abandon safer messaging tools for Facebook and other insecure apps – not out of choice, but necessity. In their journeys, losing a phone or SIM card, or being robbed, means losing access to food sources, immigration updates, safe routes, or family. Platforms that lock people out without an option to log on from another device (for example via a username or email) are unusable when survival depends on quickly reconnecting. No one should have to choose between food, security, and privacy.

Furthermore, in countries with mandatory SIM registration, legal identities are directly connected to online activities. In our MENA research, a quarter of queer respondents reported feeling unsafe because of sign-up and login systems tied to phone numbers. This information has been used by state actors for surveillance, entrapment, and prosecution — it is also considered a “solid evidentiary” link between communications and an individual’s legal identity. In parallel, underground markets have emerged where phone numbers are traded to police and state actors to target queer individuals. Our research also highlights reports of attackers stealing SMS verification codes to hijack accounts, and telecommunications providers can access these codes directly and hand them to inquiring state security services.

There are countless examples of design choices and industry standards that exclude people, even when alternatives exist. Automated moderation, for instance, often echoes its toxic predecessors. On Bluesky, which can be a real ethical escape from X, moderation systems flagged repetitive posting and tagging as “abuse.” This design decision disproportionately silenced Palestinians in Gaza and would affect others in high-risk zones who depend on such practices for urgent collective action and survival. These are just two of many examples of gaps we document that can be addressed if only the decentered were centered in design.
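To make the mechanism concrete, here is a deliberately naive, purely illustrative sketch in Python of a repetition-based abuse heuristic. It is emphatically not Bluesky’s actual moderation code; the `is_flagged` helper and its thresholds are invented for illustration. The point is that a rule built to catch spam bots also catches an evacuation appeal reposted every few minutes.

```python
from collections import Counter
from datetime import datetime, timedelta

def is_flagged(posts, window=timedelta(hours=1), max_repeats=5):
    """posts: list of (timestamp, text) tuples.
    Flags the account if any text repeats more than max_repeats times
    within the sliding window around a post."""
    for ts, text in posts:
        recent = [t for other_ts, t in posts if abs(other_ts - ts) <= window]
        if Counter(recent)[text] > max_repeats:
            return True
    return False

# An urgent appeal reposted every ten minutes trips the same rule
# that was designed to catch spam bots.
now = datetime.now()
appeal = [
    (now + timedelta(minutes=10 * i), "URGENT: family trapped under rubble, need evacuation")
    for i in range(8)
]
print(is_flagged(appeal))  # True -> legitimate crisis speech gets silenced
```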

Hyper-Accessibility for the Public Good

I am not talking about fixing everything in the corporate technosphere — that is a larger struggle still ahead. There are changes and choices that are possible right now, especially for our more ethical alternatives. The problem begins with design choices that ignore the most acute contexts. These choices determine who can speak, who can organize, and who can survive. It is no accident that the most data-extractive technologies are also the most accessible in times of crisis.

This outcome is a predictable product of corporate design. Platforms are engineered for maximum engagement because more users mean more data, monetization, leverage, and ultimately more control. Advertisers and investors profit from this endless circulation, but so do repressive states. In war or uprising, governments gain the same advantage because populations dependent on these tools for survival are forced to generate heaps of exploitable metadata that can be used to track, censor, and target them. Accessibility itself becomes weaponized.

Because these tools are cheap, ubiquitous, and functional on old devices, they remain indispensable for those who cannot afford alternatives. Like a polluted city where the wealthy can leave, those who remain are the poor, the politically powerless, and the most exploitable. Their continued presence is no accident: corporations know who will be left behind and how much can be extracted from them. Hyper-accessibility today is not a neutral fact of technology; it is a strategic asset of profit-driven systems. The very features that make X, Meta, or Telegram usable in Gaza or Khartoum are inseparable from business models built on dependency and exploitation.

Hyper-accessibility could be reclaimed as a public good, stripped from corporate control and redirected toward safety, privacy, and resilience. 

To challenge the corporations that thrive on our most painful days, we must avoid repeating the same design patterns that abandon those most at risk. It can start with small design changes based on acute contexts. It means refusing to place the burden of finding alternatives on those who, frankly, have no alternatives. The responsibility lies with people who have choices and the resources to invest, redesign, and imagine differently. 

Is that even possible? Well, yes. Imagine if Palestinians in Gaza were the only user base: what would we build? If Sudanese families under blackout and at risk of surveillance were the intended market, what would innovation look like? If at-risk migrants who need both fast access and deep privacy on any device were the core focus, what would our UX look like? Investment would shift toward creative change, and the “standard” would look radically different.

It becomes a necessity to be creative and move to new models that combine privacy, safety, and hyper-accessibility – especially in times of crisis. As Yatharth, a caste and design researcher and designer based in India, told us: “When you don’t have to face any problems, what do you innovate about? If you come from a marginalized location, you have things to solve for.”

For example, alternatives to phone numbers can exist. Speaking with experts throughout our research, it has become clear that there are many options if we are willing to invest in finding them. To name a few: usernames, quizzes as human-style CAPTCHAs or user-verification methods, quizzes to authenticate registration or access, gamified onboarding that can even deter spam, or creative cryptographic solutions.
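As one concrete illustration of a cryptographic option, here is a minimal sketch in Python, using the `cryptography` library, of phone-free registration: the client generates a keypair, registers a username alongside its public key, and later proves ownership by signing a server-issued challenge instead of receiving an SMS code. The service, the `reporter_gaza` username, and the payload shape are all hypothetical, and this is not how Signal or any other named app actually works.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- client side: create an identity bound to a username, not a SIM card ---
private_key = Ed25519PrivateKey.generate()
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
registration = {"username": "reporter_gaza", "public_key": public_key_bytes}  # hypothetical payload

# --- server side: instead of texting an OTP, issue a random challenge ---
challenge = os.urandom(32)

# --- client side: sign the challenge to prove control of the account ---
signature = private_key.sign(challenge)

# --- server side: verify the signature against the stored public key ---
stored_key = Ed25519PublicKey.from_public_bytes(registration["public_key"])
try:
    stored_key.verify(signature, challenge)
    print("access granted to", registration["username"])
except InvalidSignature:
    print("access denied")
```

Nothing in this flow requires a working SIM, phone credit, or an unblocked SMS route; losing a device means restoring a backed-up key or registering a new one, not losing the account.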

Imagine Bluesky working with communities in authoritarian regimes, war zones, or contexts of genocide. It would adapt its “locked open” philosophy to account for real-world risks during times of repression, both in the US and globally, refining defaults and building in structural privacy and access. It could even adopt harm reduction features like panic buttons, ephemeral messages, user-controlled privacy options, or app-cloaking – especially as people increasingly gather on the app to post about their resistance to the new Trump administration.

It is a question, then, of who we are prioritizing, not just what we are prioritizing.

The oppressor’s tools are hyper-accessible because they were designed that way. Our tools of liberation must be no less accessible, but safer, more private, and more resistant. Until then, those who need lifelines most will remain trapped in the polluted city, forced to rely on tools that were never built for them, and often built against them.

Afsaneh Rigot is the Founder and Principal Researcher of the De|Center. She is also an advisor to the Cyberlaw Clinic at Harvard Law School and ARTICLE 19, as well as an affiliate at the Berkman Klein Center at Harvard University. She is the author of the Design From the Margins methodology and a number of landmark research projects on tech, human rights, international policing, and state violence.