mind sims - at least some recent conversations I've had. Sorry, but I have to rant more about this
Main
I recently thought about rewriting my cofounder doc from the lens of solving the culture war, as opposed to solving ASI risk. A major issue with this, though:
I think at least one reason I am hesitant to shift the offence-defence balance of cybersecurity in favour of offence is that it's just going to be Losers fighting Losers from there onwards. I'm afraid no one will actually win fast enough to make any difference.
Long list of examples of positions I think are losing positions
economic left v right - communism / socialism and anarchocapitalism / auth capitalism are both losing positions
political auth v lib - authoritarianism and libertarianism / anarchism / small govt are both losing positions, anti-war is a losing position, pro-war and pro-nationalism are losing positions, classic liberalism is a losing position, and not having a position is also a losing position. At this point I genuinely believe even being pro-democracy is not entirely a winning position.
social left v right - traditional religion is obviously a losing position, an extreme level of individualism is also a losing position, casual sex maxxing is obviously a losing position. To be quite blunt, I am quite sympathetic to the social left (pro-LGBTQ, pro-new types of marriage and community), but I am just as anti-economic-left as I am pro-social-left.
I don't think leaking the secrets of all these camps is sufficient to actually solve the culture war in any meaningful way. It will obviously accelerate the solution a lot, including in ways I can't immediately foresee. (Very important phrase - "including in ways I can't immediately foresee".) But also, someone needs to actually invent a new political ideology (roughly pro-democracy, pro-capitalism, anti-ASI) and have that Win, otherwise all this is for naught.
Do I want to gamble the fate of the entire world on "positive consequences I can't immediately foresee"? That requires an almost religious faith in something like "transparency good, privacy bad" or "input, processing, output - people with full information can error-correct and make good decisions over time" or similar. I have faith in both, yes, but not enough faith to gamble my self-esteem on it.
Main 2
How do I get all these left-leaning people to just fucking die, without actually killing them? Why are people so attached to these wrong ideas? Clearly, just providing information alone is not actually fixing these people. A lot of their brains are broken beyond repair.
In general, yeah, most people are great at articulating their individual problems and suck at articulating solutions. Their problems are personal stuff like "my parents dislike my lack of religion" or "I hate my boss but have to work under him to pay rent", and their solutions are deranged stuff like "redistribute the economy".
"More information" will help the people who are actually working towards solutions, which is currently very few people.
I did consider that a transparent society would find it much easier to slow down an ASI arms race (or multiple other arms races) - TO DO
I also considered how more information could solve the US-China culture war (which is entirely different from, but just as important as, the actual geopolitical war). What does the average person think about sex/marriage/family/etc, and why does it differ?
holy fuck - ok, this seems like an important thing - more information doesn't really seem to be the bottleneck for me to create a transhumanist political system. That's why I'm not curious about more information.
More information makes it easier to stop the bad outcome, but it might not be a big bottleneck for the good outcome.
Because of the time constraint, I feel pressured to stop the bad outcome first. Kill or be killed. If I had unlimited time, I would just go work on building the good outcome directly.
Why am I so hesitant to go back to the drawing board and find a new solution?
Most solutions can't be enforced through words; they have to be enforced via violence - via shifting a technological offence/defence balance, basically.
I have already looked into many of the offence/defence balances that can be shifted in the next 5-10 years. I have looked into various chemical offence/defence balances. I have looked into bio offence/defence balances. I have looked into cyber. I have looked into privacy/transparency offence/defence balance. And so on.
(Minor reason, but) even if I did find some other offence/defence balance I could shift such that the problem of ASI risk would get either solved or ameliorated, there's a high chance that solution also involves the killing of innocent people. A deontological code will obviously still not work. Important - the point here is not a deontological code, the point here is a coherent ideology that lets me retain self-esteem. Not all ways of killing innocent people are equally privileged (contrary to what society thinks); some are worse than others, in terms of the hit to my self-esteem.
Important - I need to differentiate between those who think all violence is bad, and those who think this specific form of violence (cyberattacking AI companies and the US govt) is bad. The former are obviously goners whose opinions I should not take that seriously. The latter - I should at least consider their opinions a little bit, even if I strongly disagree in the end for ideological reasons.
P.S. Should I write an article on why violence is good? For me, "technological offence/defence means violence is good" is an almost obvious point. But maybe I should try to persuade more people of it? Whenever I go the persuasion route I end up with "fuck these people, they will never get it", so I conclude that no, I am not going the persuasion route.