Can We Build Tech That Is Not Oppressive?

With a Trump presidency and the rise of global authoritarianism, we must see that the antidote to our techno-harms is found in communities framed as “dangerous” or “other,” writes Afsaneh Rigot.


I spend a lot of time talking to experts, doing interviews, and looking at court files from all around the world to see how communities are being targeted or criminalized. In interviews, I ask for details like:

“How did they identify you or your community?”

“How were you arrested? Were you surveilled?”

“What did they look for on your devices and how did they access it?”

“What platforms or apps did they look for or use?”

“What did they ask about in your interrogation?”

“What are the biggest threats your community faces?”

Our interviews are conversations with people whose expertise is born of lived experience, as well as with advocates and defense lawyers from these communities. Sometimes they are both.


We at the De|Center do this for a very particular reason: to understand the methods of repression and how technology is weaponized toward that goal. In these interviews, layers of vast harms from state criminalization emerge. Sometimes we see it through police or border enforcement, state security apparatus, or even militias. Other times, the weaponization comes through our everyday technologies, like expanding AI systems or even the contact lists on our phones. In the end, it is always about how it affects our communities, how it was used for harm, and what changes they see as needed.

This deep understanding of criminalization is not only for lawyers, human rights experts, or historians; it is vital for technologists too. The details of how highly marginalized communities are criminalized are the key to building robust and even scalable tech that is not oppressive. This is how Design From the Margins truly works.

Fear and dehumanization of communities who are othered pave the way for abusive technological advancement. These groups are placed in protection black holes and become testing grounds for this tech. They face the biggest threats from states and power regardless of who is in office: from migrants at borders and incarcerated people in prisons to those living under apartheid systems. Once these approaches have been tested on those living in the margins, they are expanded to other segments of the population.

With Trump’s inauguration, the list of promised chaos and violence is ongoing and ever-changing. From declaring a state of emergency at the border to making transphobic anti-DEI policy official on his first day, one thing is clear: even if this administration follows through on half of the pledged chaos and violence, the billionaire-filled cabinet, tech-bro advisors, and private contractors are set to gain big.

First, fear is stoked through manufactured narratives, which then leads naturally to the expansion of national and border security policies. This expansion, in turn, allows for increased policing, military, and immigration infrastructure. These “needs” sustain jobs in prison industries, national security, and border control, which feeds a cycle that lines the pockets of private contractors. Like a product that perpetuates its own demand, the use of these systems creates the need for more. This self-perpetuating economy of demand is an economy of violence.

This market for abusive tech and carceral facilities and equipment isn’t new. It has long been established, from Bush to Obama to Trump 1.0 to Biden – especially post-September 11, with the broad powers that came with the threat of “enemies” within and beyond. It expanded through each presidency and grew into the arsenal now at Trump 2.0’s disposal.

Meanwhile, major tech companies have profited from selling tech for violence and control — from billion-dollar projects used in Israel’s apartheid and genocide, to social media companies venturing into military contracts. This free-for-all has been flourishing at the cost of the lives of some of the world’s most oppressed people. Now, the US’s own most disenfranchised and over-policed communities are about to see it expand under a tech-oligarch-infused Trump administration. Regardless of how effective Trump 2.0 is, he has already empowered this system to keep churning.

At the same time, Trump and his de facto tech-bro co-president Elon Musk have continued their push to deregulate AI and cryptocurrency. The level of overt tech-billionaire influence on the US has reached a new peak, with high government and advisory roles filled by the likes of Musk and Peter Thiel. Over the last few weeks, Apple, Amazon, Google, and Meta donated historic sums to the Trump inaugural fund and took their seats ahead of lawmakers on the dais. Mark Zuckerberg made swift changes to align with the political winds – again – pushing Meta’s moderation to resemble X’s, which allows more hate speech about Black and brown people, immigration, and gender (Musk, Vance, and Trump’s obsessions). Zuckerberg spewed non-facts about DEI and tech’s apparent need for more masculinity. Amazon moved the same way, just more quietly.

This goes beyond “Big Tech,” with its bottomless venture capital coffers; it extends to the “Little Tech” venture capitalist agenda that demands fewer regulations – regardless of who gets hurt – in order to “innovate” and thrive on its own terms. Top to bottom, the industry is pushing ever more aggressively for whoever gives it what it wants, without interference. The demands are the same: less regulation, fewer guardrails, more military contracts, more opportunities for data mining, and more uncurbed expansion of monopolies. And of course, let’s not even speak of ethics, harms, or commitment to human rights.

It can be hard to imagine what can be done in the face of this tsunami. But we must remember that tech harms are not new — even the current ones — they are just more overt and likely to expand. Communities across the world have already faced many versions of dictatorships, demagogues, and oligarchies. We have simply turned to a new chapter in capitalistic authoritarianism. Communities around the world living under oppression are already the testing ground for tech deregulation, exploitation, violent policing, and military tactics. The red lines of what becomes acceptable always start to disappear with those impacted by anti-terror laws, border control, national security, and indecency/morality regimes. This categorization creates protection black holes that allow the state and enforcement apparatus to do what they want – it’s the same playbook of control all over the world. Only then do these tactics expand to everyone else. And power maintains itself through increasing criminalization.

The experience of the most criminalized in the US and globally shows how power really works. This knowledge – the workarounds – can also be translated into stealthy design, build, and engineering choices that help people navigate these systems. We can foresee which parts of features, or whole technologies, can become weapons, but also how they could become stealthy tools to retain privacy from, for example, an abusive police officer forcing a phone open. We can predict when AI advertised as entertainment can actually be a weapon. So many more outcomes are possible than the ones we assume are inevitable. When done right, protective systems can be built in ways that make the work much harder (and more expensive) to undo, and the tools become secure and robust in usability, safety, and privacy. This is a big ask, but with technical skills, building against oppression is achievable – especially when it demonstrates the overall benefits to the integrity and robustness of platforms. Then this big task can become the default path: building tech that does not play into the hands of tyrants.
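To make one such design choice concrete, here is a minimal, hypothetical sketch – not any real platform’s implementation – of a duress PIN: a second PIN a person can enter when forced to open their phone, which unlocks a harmless decoy vault instead of their real data. Every name in it (the PINs, the vault strings, the salt) is an assumption for illustration.

```python
# Hypothetical sketch of a duress-PIN check. Entering the duress PIN
# under coercion opens a decoy vault instead of the real one.
# Illustrative only; not any real app's code.
import hashlib
import hmac

SALT = b"per-install-random-salt"  # in practice: random and stored locally

def pin_hash(pin: str) -> bytes:
    # Slow, salted hash so PINs can't be brute-forced from the stored digest.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, 100_000)

REAL_PIN_HASH = pin_hash("1234")    # set by the user at enrollment
DURESS_PIN_HASH = pin_hash("9999")  # second PIN the user can give under coercion

def unlock(entered_pin: str) -> str:
    entered = pin_hash(entered_pin)
    # Constant-time comparisons avoid leaking, via timing, which PIN matched.
    if hmac.compare_digest(entered, DURESS_PIN_HASH):
        return "decoy vault: innocuous placeholder data"
    if hmac.compare_digest(entered, REAL_PIN_HASH):
        return "real vault: the user's actual data"
    return "locked"

print(unlock("9999"))  # a coerced entry quietly opens the decoy
```

The point is the design posture, not the specific code: the feature is invisible in everyday use, cheap to build, and expensive for an abusive actor to detect or defeat.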

Regardless of abuse by tech companies, people in the US and around the world will continue to use these platforms – especially the most marginalized, for whom the same platforms become lifelines despite the dangers. It’s important to continue to activate tech workers who remain inside companies to make the needed changes that render the weaponization of tech too difficult and too expensive.

These methods are also vitally important as we look at building and introducing accessible — and privacy-preserving — new platforms and technologies that break the monopolies of Big Tech. Newer platforms need to make sure they don’t repeat old mistakes and leave the same gaps that make them inaccessible or unsafe for those who need them most. Only then can they truly challenge monopolies. If the right methods are used, there is so much potential for better futures in our tech ecosystem.

Understanding how criminalization works can be vital to making changes to tech that are future-proof regardless of who’s in charge. It’s one of the only methods that merges human rights and justice into tech design without the priorities of each side immediately clashing.

In our investigations and interviews, one purpose of collecting details about criminalization is to translate — and reverse engineer — them into changes that need to be made to existing or new tech tools, platforms, policy, and even resistance strategies. This information allows us to outline nuanced gaps in emerging tech. We ask communities about the changes they want to see and how they navigate and resist the abuses they presently face. The answers are always detailed and brilliantly strategic, unpacking how our everyday tech tools can be manipulated: for example, how devices become tracking tools or troves of evidence for trumped-up charges, or how new AI tools supposedly made for our “security” reveal layers of abuses that their builders didn’t foresee or dismissed as low-risk.
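As one illustration of what reverse engineering a harm into a design change can look like: if seized devices become troves of evidence, a counter-design is to minimize what a device retains at all. Below is a minimal, hypothetical sketch of locally expiring messages; the class names and the retention window are assumptions for illustration, not any real app’s code.

```python
# Hypothetical sketch: messages expire and are purged locally, so a
# seized or forcibly opened device holds less potential "evidence".
import time
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    expires_at: float  # unix time after which the message must be purged

@dataclass
class Conversation:
    ttl_seconds: int = 24 * 3600  # assumed retention window: one day
    messages: list = field(default_factory=list)

    def send(self, text: str) -> None:
        self.messages.append(Message(text, time.time() + self.ttl_seconds))

    def purge_expired(self) -> None:
        # Run on every app open and lock. A real client would also
        # overwrite or securely delete the underlying storage.
        now = time.time()
        self.messages = [m for m in self.messages if m.expires_at > now]
```

Again, the code is only a stand-in for the principle interviewees keep pointing us to: the less data that exists on a device, the less a forced unlock can yield.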

We have identified so many cases that show ingenious and stealthy changes that can protect and make tech more usable for everyone. Engineering, cryptographic protocols, design, scenario-building, threat-modeling, impact assessments, and policy drafting that do not focus on the most marginalized and criminalized will not have the full frame of incoming harms.

All that to say, we are here, we are doing the work, and we will be sharing our findings as we move forward. We are currently working with marginalized communities in the US to inform our upcoming report, tentatively titled “A New Era of US Digital Authoritarianism: Action from the Margins,” which is slated for publication in March. In this report, we’ll draw on our existing research with marginalized and criminalized groups to illustrate tactics of tech-facilitated oppression that Trump 2.0 may employ over the next four years. We work in jurisdictions such as Egypt, Palestine, Tunisia, El Salvador, Mexico, Uganda, Ethiopia, India, Iran, Algeria, Saudi Arabia, Kenya, and Russia.

Interviews will include US-based groups that work on prisoners’ rights and abolition, Palestine action and anti-war movements, reproductive justice, and trans rights, and that work with undocumented migrants. We are also working with those impacted by national security and anti-terror laws such as the Patriot Act – for example, experts and investigators working on Guantanamo Bay and people currently incarcerated there. This research will help us further explain how these tactics will most likely extend past the decentered communities who already experience these harms daily — and it will allow us to see what civil society and philanthropies can do to blunt them.

Stay tuned and thanks for reading. Until next time.

De|Center in the World

In the meantime, if you are going to RightsCon in Taipei, February 24-27, let’s connect.

Here are some of the events we are scheduled for — and of course we will be milling about as well.

  • Continuing Edge-ucation: Using Design from the Margins to Learn from Palestine, Iran, Mexico, Kenya, Ethiopia
  • Safer Swipes - Designing Dating Apps from the Margins
  • Let’s talk about the elephant in the room: Transnational policing and human rights
  • Digital targeting and policing of LGBTQ+ in MENA: How can platforms help?
  • Community Village booth (Tuesday Feb 25)

Meet Our Team

Afsaneh Rigot is the Founder and Principal Researcher of the De|Center. She is also an advisor to the Cyberlaw Clinic at Harvard Law School and ARTICLE 19, as well as an affiliate at the Berkman Klein Center at Harvard University. She is the author of the Design From the Margins methodology and a number of landmark research projects on tech, human rights, international policing, and state violence.

Jessica Fjeld is the Managing Director of the De|Center. She is also an affiliate of the Berkman Klein Center for Internet & Society, a guest instructor at Harvard Law School’s International Human Rights Clinic, and a member of the boards of the Data Nutrition Project and the Center for the Study of Technology and Society (CETyS) at Universidad de San Andres.

Roja Heydarpour is a writer and editor for the De|Center. She has worked for The Daily Beast, The New York Times, Al-Monitor, and Columbia Global Reports, among others, and teaches ESOL and citizenship preparation classes at the Brooklyn Public Library.

Find our Advisory Board Members and Guardrail Advisors on our site: The De|Center

Also, we’re now on social media! Be some of the first to follow us and share!
