
It's really important to separate the tools from the intent IMHO.

Just as one can use a butcher knife for its intended purpose or use it to kill, it's possible to design military equipment to "kill people and break things" without necessarily meaning to wage aggressive warfare.

It's true that the military tools are often misused, but that wouldn't change in the real world by simply not having them; pacifist countries are sheep in a field full of wolves.

Just look at Ukraine, and then compare to nations that e.g. gave up nuclear weapons and then suffered "regime change". The list includes Libya, parts of Ukraine, and effectively Iraq (who were very close to a nuke circa 1991). There's a reason Iran and North Korea want the bomb, and that reason is because the deterrent value is real, not imagined.

Rather it's the same as an underlying principle behind the Second Amendment push (no one cares for my defense more than I care for my defense), scaled up to the geopolitical level.

So just as I think it's possible to appreciate the craftsmanship and design that goes into a well-made katana even if you don't intend to run the sword through someone's guts, I think it's possible to appreciate at a technical level some of the technology used in military gear without feeling like it means you support war. ;)



One could also argue that the SR-71 was a tool that prevented death rather than facilitated it -- by delivering intelligence. It was a reconnaissance aircraft and despite being operated by the Air Force the payload of the Blackbird was never weaponized.


An interceptor variant reached the prototype stage. At one point, quite a few were on order.

     http://en.wikipedia.org/wiki/Lockheed_YF-12


It's really important to separate the tools from the intent IMHO.

First thing I'll say: I'm not sure whether I've reached a conclusion about that statement or not.

But I've been leaning more strongly toward the idea that things can be classified as beneficial or otherwise, though sometimes counterintuitively.

One of the more interesting hypotheticals I've run across recently is the Paperclip Maximizer example: http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment assumes an AI with a stable structure of goals or values, and shows that AIs with apparently innocuous values could pose an existential threat.

The idea: any simple-minded optimization behavior which doesn't take into consideration human values can, taken to the logical extreme, prove hazardous.
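That idea can be sketched in a few lines of Python (the names and numbers are mine, purely illustrative, not from the linked page): a scalar objective, maximized greedily, always drives the plan into a corner — whichever corner the objective happens to reward, with everything the objective doesn't mention simply ignored.

```python
def optimize(resources, value_fn):
    """Allocate each unit of resource to whichever use scores higher."""
    plan = {"paperclips": 0, "reserved_for_humans": 0}
    for _ in range(resources):
        # score each possible use of one more unit under the value function
        best = max(plan, key=lambda use: value_fn(use))
        plan[best] += 1
    return plan

# Objective 1: paperclips are the only thing that counts.
clips_only = lambda use: 1 if use == "paperclips" else 0

# Objective 2: reserving resources for humans scores even higher.
with_humans = lambda use: 2 if use == "reserved_for_humans" else 1

print(optimize(10, clips_only))   # everything becomes paperclips
print(optimize(10, with_humans))  # everything is reserved instead
```

Either way the optimizer lands at an extreme; nothing that isn't encoded in the value function gets any weight at all, which is the hazard the thought experiment is pointing at.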

It was posted to HN a few years ago, though it didn't trigger much discussion at the time:

https://news.ycombinator.com/item?id=1747413


It's a great thought experiment. I think I even read it when it came across HN. But it seems to me to be an argument about why we keep humans "in the loop", as it were, and less about what tools you give the humans.

It does open some questions about how "innovative" one might want to be when developing a weapon, even for defensive uses.

It's sort of unfortunate that nuclear physics made the gains it did, when it did. A big part of the reason the U.S. ended up making the big push for the bomb, at a time when it needed every resource it could get for things like the logistics of shipping materiel, was the fear that Germany might get it first. I.e., "if someone is going to get the bomb, we'd better do it before they do".

I suppose the Cold War would have ensured proliferation one way or another, but WWII certainly did not help the cause of non-proliferation.

But either way, war or defense (whatever you call it) can never be a simple-minded optimized anything. It is almost the very highest level of human holistic competition. So I'm not worried about the technology (as long as we don't make it self-aware, of course), I'm worried about the people.


Especially considering that human beings can, very easily, become totally divorced from healthy, "normal" human values.

It's not enough to merely keep humans in the picture, but to keep healthy, undamaged humans in the picture.

Let's say, perhaps, that after a particularly costly war, the only humans left are a mixture of mentally unstable, angry, ambitious, victory-driven amputees, with intense biases imbued upon them by surviving particularly horrific and violent combat. These people, in a warped attempt to say "never again", optimize an artificially intelligent, fully automated child-rearing skinner box [1] to mold children into their own image, as the natural and perfect outcome which produces a society averse to violent warfare. The result is that every child that emerges from the skinner box is an angry, warped sociopath, missing limbs, who rationalizes even trivial behavior with an arbitrary moral high ground of extreme polar ideology.

But wait... aren't humans... technically classifiable as self-assembling intelligent constructs, spewed forth from the bald nothingness of space and time by mere coincidence? What if WE are the beast we fear?

Oh... oh god.

[1] https://en.wikipedia.org/wiki/B._F._Skinner


Organizational behavior and group decision-making is among the more fascinating fields I've encountered.


it seems to me to be an argument about why we keep humans "in the loop"

That's helpful but not sufficient.

• It doesn't address the criticality of appropriate feedback controls and limits. People and algorithms both make bad decisions.

• A given group of humans might not act in the best interests of all humans, or even a specified larger group, or even themselves.

Re: how innovative we want to be even with defensive weapons. Body armor is generally far less dangerous than an RPG or assault rifle. But defensive technologies such as antibiotics can, if misused, lead to larger downstream threats (through antibiotic resistance). The areas of unintended consequences, moral and morale hazard, and the like, make for fascinating study.

Re: the bomb. Yes, the Germans and Japanese were both conducting nuclear research (though I believe the Japanese project was limited and/or curtailed). Another interesting speculation I've seen is of what might have happened had the US not used the atomic bomb on Hiroshima and Nagasaki: it's possible that the true horrors of the weapon wouldn't have been realized and that the next military action (the Korean War) might have gone nuclear.

The places where I've been spending time looking at things are a bit more nuanced though: sustainability in light of finite resources and/or maximum flows. Is saving lives really an unalloyed good? What of technology in general, which might increase systemic risks (if only of failure)? Was the Green Revolution a good thing? Humans have been something of a paperclip maximizer, except that our paperclips are humans. In the long run even that may not work out for us. Systems need negative feedback loops.
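That last point can be illustrated with a toy growth model (a hypothetical sketch of mine, not a model of anything real): without a damping term the quantity runs away; with a logistic-style negative feedback it settles near a carrying capacity.

```python
def grow(pop, rate, capacity=None, steps=10):
    """Simulate growth, optionally damped by a negative feedback term."""
    history = [pop]
    for _ in range(steps):
        if capacity is None:
            pop += rate * pop                         # runaway growth
        else:
            pop += rate * pop * (1 - pop / capacity)  # feedback damping
        history.append(pop)
    return history

unchecked = grow(10, 0.5)               # keeps climbing without bound
damped = grow(10, 0.5, capacity=100)    # settles below the capacity
print(round(unchecked[-1]), round(damped[-1]))
```

The feedback term `(1 - pop / capacity)` shrinks toward zero as the system approaches its limit, which is exactly the kind of self-limiting loop the comment is asking for.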



