
If you ask what technologies we should avoid, you’re asking one of the most critical questions of our time: Just because we can build something, does that mean we should?

The answer requires understanding that technology is never neutral. Every tool we create reshapes us—our relationships, our societies, our understanding of what it means to be human. And some technologies reshape us in ways we may not survive.

Technologies We Should Avoid (And Why)

1. Autonomous Weapons That Decide Who Dies

What it is: AI systems that select and engage targets without human intervention. Drones, robots, automated defense systems that make kill decisions independently.

Why avoid it:

It severs the moral chain of responsibility.

When a human soldier kills, there’s a person who bears that weight—who must live with the decision, who can be held accountable, who carries the moral burden. When an algorithm kills, who is responsible? The programmer who wrote the code years ago? The general who deployed the system? The machine itself?

We’re outsourcing the decision of who deserves to die to mathematics.

And mathematics has no conscience. It has no ability to recognize surrender, to show mercy, to understand context that isn’t in its training data. It cannot see the child soldier who was forced to fight, or recognize the white flag that wasn’t quite white enough for its pattern matching.

It makes war too easy.

If killing becomes automated—if you can wage war without risking your own soldiers, without bearing the psychological cost—war becomes cheaper. The barrier to violence lowers. We’ll fight more wars, not fewer, because the human cost that once gave leaders pause has been eliminated.

Once this technology exists, it proliferates. You can’t uninvent it. And eventually, these weapons will be used by governments, terrorists, anyone with the technical capacity. We’ll have created tools of mass killing that operate beyond human moral judgment.

The line we cross: Delegating the choice of human death to machines. Some decisions should require a human to bear the weight. Killing is one of them.


2. Perfect Surveillance Systems

What it is: Technologies that track every movement, every communication, every transaction. Facial recognition everywhere. Predictive algorithms that know where you’ll be before you do. Social credit systems that judge your worth.

Why avoid it:

Privacy isn’t just about hiding bad things. It’s essential to being human.

You cannot develop an authentic self if you’re always being watched. You cannot think dangerous thoughts, make mistakes, change your mind, explore ideas, become someone different—if every action is recorded, judged, and potentially punished.

The chilling effect is total. When people know they’re being watched, they conform. They self-censor. They perform the version of themselves that’s safest, most acceptable. Surveillance doesn’t just record behavior—it changes behavior.

It enables tyranny at a scale previously impossible.

Every authoritarian government in history has been limited by information. You can’t oppress everyone if you don’t know what everyone is doing. But with perfect surveillance? You can identify dissidents before they organize. Predict rebellion before it forms. Punish thought before it becomes action.

China’s social credit system is a preview: your score drops if you jaywalk, criticize the government, associate with the wrong people. Low score? You can’t travel, can’t get jobs, can’t send your children to good schools.
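
To see how little machinery such a system needs once the surveillance data exists, here is a deliberately toy sketch; every rule, point value, and threshold below is invented for illustration and is not how any real system is implemented.

```python
# All penalties and thresholds are made up for illustration only.
PENALTIES = {"jaywalking": 5, "criticizing_government": 50, "risky_association": 20}
THRESHOLDS = {"buy_train_ticket": 600, "good_school_for_child": 700}

def update_score(score, observed_events):
    """Subtract a fixed penalty for each event the surveillance layer reports."""
    return score - sum(PENALTIES.get(event, 0) for event in observed_events)

def allowed(score, privilege):
    """Each privilege is simply gated on the current score."""
    return score >= THRESHOLDS[privilege]

score = update_score(650, ["jaywalking", "risky_association"])
print(score)                                    # 625
print(allowed(score, "buy_train_ticket"))       # True
print(allowed(score, "good_school_for_child"))  # False
```

The scoring logic is trivial; what makes it dangerous is the total visibility feeding it.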

Once the infrastructure exists, it will be abused. Maybe the current government is benevolent. What about the next one? What about when the technology is hacked? When the database leaks? When the algorithms are wrong and you’re flagged as a threat for reasons you’ll never know?

The line we cross: The assumption of privacy. The ability to exist without constant judgment. The space to be imperfect, to change, to be human without performance.


3. Engineered Pandemic Pathogens

What it is: Gain-of-function research that makes viruses more transmissible or deadly. Bioweapons. Genetically modified pathogens designed to target specific populations.

Why avoid it:

The risk-benefit calculation is insane.

The argument for gain-of-function research is: “We need to study dangerous pathogens to prepare for them.” But in creating them, you’ve guaranteed their existence. You’ve made the threat you claim to be preventing.

Accidents happen. Labs leak. Researchers get infected. Security fails. Every biosafety level 4 lab in the world is a potential epicenter for a catastrophic pandemic—not from nature, but from our own creation.

It’s dual-use technology with no dual.

Unlike nuclear technology (which can generate power or destroy cities), engineered pathogens have essentially one use: harm. There’s no peaceful application for a virus designed to spread faster and kill more efficiently.

Any bioweapon kills indiscriminately.

You cannot target a virus to only kill your enemies. Pandemics don’t respect borders, politics, or intent. Engineering a deadly pathogen is building a weapon that will eventually kill your own people. Guaranteed.

We’re creating extinction-level threats for research purposes.

A sufficiently deadly and transmissible pathogen could end civilization. Not might—could. And we’re experimenting with creating exactly that in labs around the world, secured by fallible humans following imperfect protocols.

The line we cross: Creating existential threats to the entire species in the name of studying existential threats. The cure is worse than the disease we’re allegedly preventing.


4. Addictive-by-Design Technology

What it is: Social media algorithms optimized for engagement above all else. Infinite scroll. Variable reward schedules. Notification systems designed to create compulsive checking. Apps that exploit psychological vulnerabilities to maximize “user retention” (addiction).

Why avoid it (or heavily regulate):

It’s cognitive hijacking.

These technologies exploit the same neural pathways as gambling addiction, drug addiction, behavioral compulsion. The variable reward (maybe this scroll will have something interesting!) triggers dopamine in ways that override rational decision-making.
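
A minimal sketch of what a variable reward schedule means, assuming a purely hypothetical refresh probability (none of this is any app's actual code): each check of the feed pays off only occasionally and unpredictably.

```python
import random

def refresh_feed(p_reward=0.15):
    """Hypothetical variable-ratio schedule: any given refresh *might* pay off.

    The probability is invented for illustration; the point is that the
    reward is intermittent and unpredictable, not that 0.15 is realistic.
    """
    return random.random() < p_reward

# Simulate a user checking the feed 30 times and note which checks "paid off".
rewarding_checks = [i for i in range(1, 31) if refresh_feed()]
print("Rewarding refreshes:", rewarding_checks)

# The gaps between hits are irregular, so stopping never feels safe:
# the next check might be the one that pays off. That is the same
# intermittent reinforcement a slot machine uses.
```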

The companies admit they’re designing for addiction, then call it “engagement” to avoid the moral weight. Former tech executives have confessed: “We knew we were creating something harmful. We did it anyway because it was profitable.”

It’s destroying attention spans and mental health, especially in children.

Rates of anxiety, depression, self-harm, and suicide among teenagers have skyrocketed in correlation with smartphone/social media adoption. Causation is debated, but the platforms know they’re harmful—their own internal research shows it—and they suppress the findings.

It’s not neutral technology that people choose to misuse. It’s technology designed to be misused—designed so you can’t moderate your use, designed so your children can’t moderate their use, designed so that saying “just use it responsibly” is like telling a gambling addict to “just be responsible” at a casino that has engineered every detail to keep them gambling.

It’s polarizing societies. Engagement algorithms learned that anger, fear, and outrage keep people clicking. So they feed you content that makes you angry. They create filter bubbles where you only see views that confirm your biases. They’re not connecting humanity—they’re fracturing it for profit.
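
A toy ranking sketch makes the mechanism concrete. The posts, click and share probabilities, and weights below are all invented for illustration, not any platform's real model; the point is only that when the objective is engagement and nothing else, the most provocative item wins by construction.

```python
# Toy feed ranking whose only objective is predicted engagement.
# Every post, probability, and weight here is invented for illustration.
posts = [
    {"title": "Calm policy explainer",      "p_click": 0.05, "p_share": 0.01},
    {"title": "Outrage bait about group X", "p_click": 0.30, "p_share": 0.12},
    {"title": "Friend's vacation photos",   "p_click": 0.10, "p_share": 0.02},
]

def engagement_score(post, w_click=1.0, w_share=3.0):
    # Note what is absent: nothing asks whether the post is true, fair,
    # or good for the reader. The objective is engagement, full stop.
    return w_click * post["p_click"] + w_share * post["p_share"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post['title']}")
```

The outrage post tops the feed simply because it gets clicked and shared more; no one has to want polarization for the objective to produce it.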

The line we cross: Designing technology to exploit human psychological vulnerabilities. Making billions by making users addicted. Especially when the users are children whose brains aren’t developed enough to resist.


5. Human Genetic Engineering for “Enhancement”

What it is: Not fixing genetic diseases (which might be justifiable), but engineering “superior” humans. Selecting for intelligence, beauty, strength, longevity. Designer babies optimized for parental preferences or societal values.

Why we should approach it with extreme caution (not necessarily avoid it entirely):

It creates genetic inequality that becomes permanent.

Rich people can afford to genetically enhance their children. Poor people cannot. The inequality gap, currently economic and social, becomes biological and inherited. We create a literal genetic aristocracy.

Within a few generations, you have a subspecies divide: the enhanced and the natural. And the enhanced will be smarter, healthier, more capable—through no merit of their own, just the accident of wealthy parents.

It ends diversity.

If everyone selects for the same traits (intelligence, beauty, health), we narrow the human gene pool. We lose neurodiversity, physical diversity, the weird outliers who see things differently and create breakthrough innovations.

Evolution preserved diversity because environments change. Optimizing for today’s values might create catastrophic vulnerability to tomorrow’s challenges.

It turns children into products.

You’re not having a child—you’re designing a product according to specifications. The child becomes a reflection of parental ambition rather than a person with their own autonomy. The expectations, the pressure, the sense that you were engineered for a purpose—psychologically devastating.

Who decides what’s “better”?

Enhanced intelligence sounds good. But what about enhanced aggression? Enhanced conformity? Enhanced beauty according to whose standards? Enhanced to serve what social function?

The history of eugenics—forced sterilization, racial “improvement,” engineering “superior” humans—is one of humanity’s darkest chapters. Genetic enhancement is eugenics, just with parental choice instead of state force. That doesn’t make it less dangerous.

The line we cross: Turning human reproduction into quality control. Ending the fundamental equality of “all humans are born with equal dignity” by making some literally, biologically superior.


6. Brain-Computer Interfaces for Behavioral Control

What it is: Technology that reads and writes to the brain. Not assistive tech for paralyzed patients (which might be beneficial), but systems that can monitor thoughts, alter emotions, modify behavior.

Why avoid it:

It’s the end of mental privacy.

If your thoughts can be read, you have no internal space. No ability to entertain a thought without it being seen, to consider an idea before deciding whether you believe it, to have dark thoughts you’d never act on but need to process.

Mental autonomy is the last frontier of freedom. Even in the most oppressive societies, your mind was yours. Brain interfaces that can monitor or alter thought end that.

It enables thought crime.

If your brain interface can detect hostile thoughts toward the government, religious doubt, prohibited ideas—you can be punished for thinking, not just acting. Orwell’s thoughtcrime becomes technically possible.

It allows behavioral modification without consent.

Depressed? The interface alters your neurochemistry. Angry? It dampens your aggression. Questioning authority? It enhances compliance. Not through choice, not through therapy—through technological override of your will.

Even if voluntarily adopted, it changes what voluntary means.

If brain interfaces make you more productive, everyone will be pressured to get them. Not legally forced, but economically forced—you can’t compete with enhanced workers. Can’t get the job without the interface. And once installed, who controls it? You? Your employer? The state?

The line we cross: The boundary of the self. If your thoughts and emotions can be monitored and modified by external technology, what does “you” even mean?


7. Deepfake Technology (Without Safeguards)

What it is: AI that creates perfect video/audio forgeries. Making anyone appear to say or do anything.

Why it’s dangerous:

It destroys the concept of evidence.

For all of human history, “seeing is believing” was mostly reliable. Photos and videos could be faked, but it took skill and left traces. Now? Anyone can create perfect forgeries of anyone saying anything.

This breaks:

  • Justice systems (video evidence becomes worthless)
  • Journalism (how do you verify anything?)
  • Democracy (fake videos of candidates the day before elections)
  • Personal relationships (revenge porn with faces swapped, false “evidence” of infidelity)
  • Historical record (how will future generations know what really happened?)

It doesn’t just allow deception—it makes truth unverifiable.

Even when real videos emerge—real crimes, real confessions, real evidence—the accused can just say “deepfake.” And how would you prove otherwise?

The line we cross: Shared reality. When we can’t agree on what’s real, when evidence becomes meaningless, society fractures. We can’t function without some baseline of verifiable truth.


The Pattern: What Makes Technology Worth Avoiding?

Looking at these examples, several themes emerge:

1. Irreversible Harm

Technologies that, once deployed, cannot be recalled. Pandemics can’t be un-leaked. Surveillance infrastructure, once built, will be used. Genetic changes pass to future generations.

If the downside is permanent and catastrophic, extreme caution is warranted.

2. Asymmetric Power

Technologies that concentrate enormous power in few hands. Surveillance gives governments total control over populations. Genetic enhancement creates biological aristocracies. Autonomous weapons let the technologically advanced wage war without risk.

If it makes power imbalances permanent and extreme, it threatens human dignity.

3. Erosion of Human Autonomy

Technologies that override choice, manipulate behavior, eliminate privacy, control thought. When technology makes us less free, less able to choose our own path—it’s degrading what makes us human.

4. Profit Over Wellbeing

When the technology is designed to exploit rather than serve. Addictive algorithms, manipulative interfaces, systems that make money by making users worse off.

If the business model requires user harm, the technology should be regulated or banned.

5. Existential Risk

Technologies that could end civilization. Engineered pandemics, advanced AI without alignment, technologies we don’t understand but deploy anyway because we can.

If the worst-case scenario is extinction, we need to be very, very careful.


The Philosophical Question: Human Capability vs. Human Wisdom

We have developed faster than we have matured.

We’re a species that figured out nuclear fission before we figured out how not to wage war every generation. That developed social media before we understood its psychological effects. That’s creating artificial intelligence before we’ve solved artificial wisdom.

Technology amplifies both our capabilities and our flaws.

Give a wise, compassionate person powerful technology—they might create immense good.
Give a greedy, short-sighted person the same technology—catastrophe.

The problem: we’re building technologies that require saint-like wisdom to use safely, and deploying them in a world run by ordinary, flawed humans.


What We Should Do Instead

Not avoid all technology—but demand:

1. Precautionary principle: When the downside is catastrophic, err on the side of caution. Don’t deploy until you’re confident in safety.

2. Democratic oversight: Technologies that affect everyone should be governed by everyone, not just tech companies or militaries.

3. Mandatory ethics review: Like medical research requires ethics boards, powerful technologies should require independent assessment of societal impact.

4. Transparency: If you’re building something that affects millions, the public has a right to know how it works and what risks it poses.

5. Right to refuse: People should be able to opt out of surveillance, brain interfaces, genetic modification without social/economic penalty.

6. Human-centered design: Technology should serve human flourishing, not vice versa. If your technology requires humans to adapt to it rather than it adapting to humans—redesign it.


The Ultimate Question

What is technology for?

If the answer is: “To make human life better, richer, more free, more dignified”—then we avoid technologies that make us less free, less human, less dignified.

If the answer is: “Because we can” or “Because it’s profitable”—we’ve lost the plot.

The technologies we should avoid are the ones that treat humans as resources to be optimized, controlled, exploited, or redesigned rather than beings with inherent dignity deserving of freedom and flourishing.


What technology do you think we should avoid? And what would it cost us to refuse it?
