Guess I'm a rationalist now
323 points by nsoonhui 2 weeks ago | 1186 comments
- contrarian1234 2 weeks agoThe article made me think deeper about what rubs me the wrong way about the whole movement.
I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles... and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is." They seem like the type of people who would be embarrassed not to have an opinion on a topic, or to say "I don't know".
In the pre-AI days this was sort of tolerable, but since then... the frothing-at-the-mouth conviction that the end of the world is coming just shows a real lack of humility, and a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
- jandrese 1 week agoThey remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
There is a term for this: "getting stuck up your own butt." It wouldn't be so bad except that said people often take on an air of absolute superiority, because they used "only logic" and so, in their heads, they cannot be wrong. Many people think like this as teenagers or 20-somethings, but most have someone in their life who smacks them over the head and tells them to stop being so foolish. If you have enough money and the Internet, though, you can insulate yourself from that kind of oversight.
- troyastorino 1 week agoThe overlap between the Effective Altruism community and the Rationalist community is extremely high. They’re largely the same people. Effective Altruism gained a lot of early attention on LessWrong, and the pessimistic focus on AI existential risk largely stems from an EA desire to avoid “temporal-discounting” bias. The reasoning is something like: if you accept that future people count just as much as current people, and that the number of future people vastly outweighs everyone alive today (or who has ever lived), then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
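To make the arithmetic behind that reasoning concrete, here is a minimal sketch in Python; every number is a made-up assumption for illustration, not a figure any EA organization actually uses:

    # Toy version of the EA expected-value argument; all numbers are
    # invented assumptions for illustration.
    future_people = 1e16      # assumed count of potential future people
    risk_reduction = 0.001    # assumed drop in extinction probability
    ev_xrisk = risk_reduction * future_people
    print(f"{ev_xrisk:.0e}")  # 1e+13 expected future lives

    # Compare with an intervention that saves a million lives today
    # with certainty: the x-risk figure "wins" by a factor of ten million.
    print(ev_xrisk / 1e6)     # 10000000.0

The whole force of the argument lives in the assumed size of future_people; change that premise and the conclusion changes with it.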
People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
- dasil003 1 week agoIntelligence and rational thought are useful, but like any strategy they have their tradeoffs and limitations. No amount of intelligence can overcome the chaos of long time horizons, especially when we're talking about human civilization. IMHO it's reasonable to pick a long-term problem/risk and focus on solving it. But it's pure hubris to think rationality will give you anything approaching high confidence about what the biggest problems and risks actually are on a 20-50 year time horizon, let alone 200-500 years or longer.
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the situation we live in is not stable at all: "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (e.g. peak oil, global warming), we really have no idea what the triggers and inflection points will be that change the social fabric in unforeseeable ways and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean, 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
- AnthonyMouse 1 week ago> Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an extinction-level event than prevent one.
- nradov 1 week agoIn what sense are people in those communities "quite smart"? Stupid is as stupid does. There are plenty of people who get good grades and score highly on standardized tests, but are in fact nothing but pontificating blowhards and useless wankers.
- georgeecollins 1 week agoFor anyone who takes Effective Altruism or Rationalism seriously, I strongly recommend reading "You Are Not a Gadget" by Jaron Lanier. It was written more than ten years ago, was prescient about the issues of social media, and also contains one of the most devastating problems with EA: the idea of a circle of empathy.
You don't have to agree with any of this. I am not defending every idea the author has. But I recommend that book.
- semi-extrinsic 1 week ago> then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
This is the logic of someone who has failed to comprehend the core ideas of Calculus 101. You cannot use intuitive reasoning when it comes to infinite sums of numbers with extremely large uncertainties. All that results is making a fool out of yourself.
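To illustrate the point about uncertainty, a toy Monte Carlo (every number invented): if the probability estimate is only known to within several orders of magnitude, the "expected value" depends almost entirely on which summary statistic you report.

    import random
    random.seed(0)

    future_people = 1e16  # same assumed stake as in the argument above
    # Log-uniform prior over the probability, spanning ten orders of
    # magnitude -- i.e. "we genuinely don't know".
    samples = sorted(10 ** random.uniform(-12, -2) for _ in range(100_000))
    evs = [p * future_people for p in samples]

    print(f"median EV: {evs[len(evs) // 2]:.2e}")   # ~1e+09
    print(f"mean EV:   {sum(evs) / len(evs):.2e}")  # ~4e+12, tail-dominated

A gap of more than three orders of magnitude between mean and median is not a quantity you can reason about intuitively.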
- bena 1 week agoTechnically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end, one way or another. The probability of that is just 1. At some point there are no more future humans; the number of humans is zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.
I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.
If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.
- ineptech 1 week agoWhat's the seductive-but-wrong part of EA? As far as I can tell, the vast majority of the opposition to it boils down to "Maybe you shouldn't donate money to pet shelters when people are dying of preventable diseases" vs "But donating money to pet shelters feels better!"
- benchly 1 week agoThis tracks based on my limited contact with LessWrong during the whole Roko's Basilisk thing.
I quickly lost interest in Roko's Basilisk, but that is what brought me in the door and started me looking around the discussions. At first, it was quite seductive. There was a strange fearlessness there, a willingness to say and admit some things about humanity, our limitations, and how we tend to think that other great thinkers maybe danced around in the past. After a while it became clear that while there were a select few individuals who had found some balance between purely rational thinking and how reality actually works, most of the rest had their heads so far up their asses that they'd fart and call it a cool breeze. Reminded me of my brief obsession with game theory, and realizing that even its creators knew its utility was not quite as advertised to the layman (as in, it would not really help you predict or plan for anything at all, just model how decisions might be made).
- jdmichal 1 week agoI'm not familiar with any of these communities. Is there also a general bias towards one side between "the most important thing gets the *most* resources" and "the most important thing gets *all* the resources"? Or, in other words, the most important thing is the only important thing?
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
- dmurray 1 week agoThe other weird direction it leads is space travel.
If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.
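The back-of-envelope version of that claim, with every number an assumption chosen purely for illustration:

    # How many extra lives does one hour of a galactic future "contain"?
    # Every number below is an illustrative assumption.
    population = 1e15                 # assumed steady-state galactic population
    hours_per_year = 365.25 * 24
    extra_person_years = population / hours_per_year  # one extra hour: ~1.1e11
    lifetimes = extra_person_years / 80               # assumed 80-year lives
    print(f"{lifetimes:.1e}")         # ~1.4e+09 -- "billions of souls" per hour

Which is exactly the style of reasoning the comments above are objecting to: the conclusion is driven entirely by the assumed 1e15.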
- im3w1l 1 week agoI think most everyone can agree with this: Being 100% rigorous and rational, reasoning from first principles and completely discarding received wisdom is a great trait in a philosopher but a terrible trait in a policymaker. Because for the former, exploring ideas for the benefit of future generations is more important than whether they ultimately reach the right conclusion or not.
- gammarator 1 week agoIt seems self-evident to me that epistemic humility _requires_ temporal discounting. We should not be confident we can predict the future well enough to integrate utility centuries forward.
- PaulHoule 1 week agoEA always rubbed me the wrong way.
(1) The kind of Gatesian solutions they like to fund, like mosquito nets, are part of the problem, not part of the solution as I see it. If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want. Expecting NGOs to do everything for them is the same kind of neoliberal thinking that has rotted state capacity in the core and set us up for a political crisis.
(2) It is one thing to do something wrong, realize it was a mistake, and then make amends. It's another thing to plan to do something wrong and try to offset it somehow. Many of the high-paying jobs that EA wants young people to enter are "part of the problem" when it comes to declining state capacity, legitimation crisis, and not dealing with immediate problems -- like the fact that one of these days there's going to be a heat wave that is a mass casualty event.
Furthermore
(3) Time discounting is a central part of economic planning
https://en.wikipedia.org/wiki/Social_discount_rate
It is controversial as hell, but one of the many things the Soviet Union got wrong before the 1980s was planning with a discount rate of zero, which led to many economically and ecologically harmful projects (a numeric sketch of what a nonzero rate changes follows at the end of this comment). If you seriously think the rate should be zero, you should also be considering whether anybody should work in the finance industry at all, or whether we should have dropped a hydrogen bomb on Exxon's headquarters yesterday. At some point speculations about the future are just speculation. When it comes to the nuclear waste issue, for instance, I don't think we have any idea what state people are going to be in 20,000 years. They might be really pissed that we buried spent nuclear fuel someplace they can't get at it. Even the plan to burn plutonium completely in fast breeder reactors has an air of unreality about it: even though it happens on a relatively short 1,000-year timescale, we can't be sure at all that anyone will be around to finish the job.
(4) If you are looking for low-probability events to worry about, I think you could find a lot of them. If it were really a movement of free thinkers they'd be concerned about 4,000 horsemen of the apocalypse, not the 4 or so that they are allowed to talk about -- and talk about a bunch of people who'll cancel you if you "think different". Somehow climate change and legitimation crisis just get... ignored.
(5) Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion; not least, "The Singularity" was talked about by the Jesuit priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.
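As promised in point (3), a small numeric sketch of what a social discount rate does; the damage figure and the rates are illustrative only:

    # Present value of averting a fixed future loss, at several social
    # discount rates. The loss figure and rates are illustrative only.
    loss = 1e9  # assumed damage, in today's dollars
    for rate in (0.00, 0.01, 0.03):
        for years in (50, 500):
            pv = loss / (1 + rate) ** years
            print(f"rate={rate:.0%}  horizon={years:>3}y  PV=${pv:,.2f}")

At 0% (the Soviet failure mode above) the loss counts in full no matter how remote; at 3%, the same billion-dollar loss 500 years out is worth a few hundred dollars today. The whole policy fight hides inside that one parameter.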
- Johanx64 1 week ago>People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
They don't even do this.
If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience far more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is, in effect, the best thing that could happen.
Which is why accelerationism that ignores all the AGI risks is the correct strategy, presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).
Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.
You can't deprive non-existent people of anything.
And if you do, I hope you're ready for purely logical, deductive follow up - every droplet of sperm is sacred and should be used to impregnate.
- cassepipe 1 week agoI read the whole tree of responses under this comment and I could only convince myself that when people have no arguments they try to make you look bad.
Most of the criticisms are just "But they think they are better than us!" and the rest are "But sometimes they are wrong!"
I don't know about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people having their egos hurt, but in the end it reminds me why I like rationalists and don't like most people.
- gopher_space 1 week agoEver had a conversation with someone who literally cannot reexamine their base principles? Like, it's functionally impossible for them? That's everyone's central criticism of rationalists.
Being able to do that is pretty much "entry level cognition" for a lot of us. You should be doing that yourself and doing it all the time if you want to play with the big kids.
One of the things I really miss about the old nerds-only programmer's-pit setup was the amount of room we had for instruction, especially regarding social issues. The scenes from the college department in WarGames were really on the nose, but they highlight a form of education that was unavoidable if you couldn't just dip out of a conversation.
- smus 1 week agoFeels like "they are wrong and smug" is enough reason to dislike the movement
- sanderjd 1 week agoOr maybe you are demonstrating the exact failure to demonstrate introspective humility that people are talking about?
- ajkjk 1 week agoHere's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
- skinnymuch 1 week agoRead other comments then. Not everyone is saying “they think they are better than us”.
The primary issue, as others have noted, is that they push people toward the highest-paying jobs without much care for the morality of those jobs. Ergo they are fine being net negatives in terms of their work and philosophy.
All they do is donate money. Donations don’t fix society. Nothing changes structurally. No root problems are looked at.
They largely ignore capitalism's faults, or when I've seen them talk about them, it's done in a way that superficially decries capitalist issues while largely going along with them. Which ties into how they focus on high-paying jobs regardless of morality (I'm exaggerating here, but the overall point is correct).
—
HN is not intelligent when it comes to politics or the world. The avg person here is a western chauvinist with little political knowledge but a defensive ego about it. No need to be sad about this comment page.
- noname120 1 week ago> They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...
I'm not sure where there are any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well aware of the limits of data-driven initiatives and the dangers of indulging in purely abstract thinking that can lead to conclusions that indeed don't make sense.
- notahacker 1 week agoThe confluence of Bay Area rationalism and academic philosophy means a lot of other EA space is given to discussing hypotheticals in longwinded forum posts, blogs and papers. Some of those are well-trod utilitarian debates, others take it towards uniquely EA arguments like asserting that given that there could be as many as 10^31 future humans, essentially anything which claims to reduce existential risk - no matter how implausible the mechanism - has higher expected value than doing stuff that would certainly save human lives. An apparently completely unironic forum argument asked their fellow EAs to consider the possibility that given various heroic assumptions, the sum total of the suffering caused to mosquitos by anti-malaria nets might in fact be larger than the suffering caused by malaria they prevent. Obviously not a view shared by EAs who donate to antimalaria charities, but absolutely characteristic of the sort of knots EAs like to tie themselves in - it even has its own jokey jargon ('the rebugnant conclusion' and 'taking the train to crazy town') to describe adjacent arguments and the impulse to pursue them.
The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...
It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....
- jhbadger 1 week agoAs Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".
- jandrese 1 week agoOne example is Newcomb's problem. It presupposes a ridiculous scenario where a godlike being acts irrationally, and then people try to base their lives around "winning" a game that will never, ever happen to them.
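For readers who haven't run into it: the setup puts $1,000 in a transparent box and $1,000,000 in an opaque box iff a near-perfect predictor foresaw you taking only the opaque box. A minimal sketch of the payoff arithmetic, using the standard dollar amounts:

    # Expected payoffs in Newcomb's problem, with predictor accuracy p.
    def one_box(p):
        return p * 1_000_000                    # predictor foresaw one-boxing

    def two_box(p):
        return p * 1_000 + (1 - p) * 1_001_000  # usually caught, sometimes not

    for p in (0.50, 0.90, 0.99):
        print(p, one_box(p), two_box(p))
    # One-boxing pulls ahead once p > 0.5005, so the whole debate hinges
    # on granting the "godlike predictor" premise in the first place.

Which is the point above: the interesting arithmetic only exists inside a setup that will never happen to you.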
- yawnxyz 1 week agoHaving just moved to the Bay Area, I've met a few AI "safety researchers" who seem to come from this EA / Rationalist camp, and they all behave more like preachers than thinkers / academics / researchers.
I don't think any "Rationalists" I ever met would actually consider concepts like scientific method...
- comp_throw7 1 week ago> I don't think any "Rationalists" I ever met would actually consider concepts like scientific method...
In that case I don't think you've met any of the people under discussion.
- TimTheTinker 1 week ago> their initial conditions were highly artificial
There has to be (or ought to be) a name for this kind of epistemological fallacy: in pursuit of truth, achieving logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally far more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
- nyeah 1 week agoYes, there's a name for it. They're dumbasses.
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22 https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
- nyeah 5 days agoOr "garbage in, garbage out"? But that's really about computers. And maybe it is name calling. Is there no standard name for this? Or not in English?
Maybe it's similar to the "Good Student" picture. Bright within a given assignment, but taking the assignment to be immutable, or taking no interest in where the assignment comes from.
Or I've heard a saying "nobody will pay you to solve problems that they've already defined clearly."
- nyeah 1 week agoMaybe "appeal to authority" covers it. The "rationality" or "intellect" or even "brilliance" of some claim's originator are fun things to talk about. But (even taking the claims at face value), those things aren't logically connected to whether the claim is actually true.
- TheAncientGeek 1 week agoBullet biting.
- protocolture 1 week ago>They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic
I have read Effective Altruists like that. But I also remember seeing a lot of money donated to a bunch of really decent-sounding causes because someone spent 5 minutes asking themselves what they wanted their donation to maximise, decided on "lives saved", and figured out who is doing the best at that.
- OtherShrezzing 1 week agoThey _are_ the effective altruism crowd.
- foobar_______ 1 week agoI really appreciate this comment. I'm in my 30s now; I wasn't even aware of EA or Rationalist groups in my 20s, but your description of a person high on the smell of their own farts and holding up logic as king was me. The world is so chaotic that it felt good and right to lean on something "true" like logic. It took me to the "correct" answers according to logic, but very misguided answers as I see them now. Humans are driven by emotions and the world is probabilistic, so relying so heavily on logic just seems misguided now that I'm older.
- Viliam1234 1 week agoBy the way, "the world is probabilistic" is one of the key lessons of the rationality community. (Infinite certainty would require an infinite amount of evidence, according to Bayes' theorem.) When you find someone on the internet trying to assign probabilities to their beliefs, chances are you have met a rationalist.
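A minimal sketch of that lesson in odds form (the 10:1 likelihood ratio is an invented example):

    import math

    odds = 1.0  # prior of 50/50, expressed as odds
    for n in range(1, 6):
        odds *= 10  # each observation is assumed to be 10:1 evidence
        p = odds / (1 + odds)
        print(f"after {n} observations: p = {p:.6f} ({math.log2(odds):.1f} bits)")
    # p approaches 1 but never reaches it: certainty would require an
    # infinite likelihood ratio, i.e. infinitely strong evidence.

Each observation adds a fixed number of bits; reaching certainty would take infinitely many of them.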
- maybelsyrup 1 week ago> They remind me of the "Effective Altruism" crowd
Honestly thought they were the same people
- Viliam1234 1 week agoNot all rationalists are EAs, and not all EAs are rationalists, but there are many people who are both. The very idea of using evidence and numbers to choose the optimal charity has a rationalist vibe. But of course, one can donate to an effective charity without being interested in LessWrong-style rationality.
https://www.lesswrong.com/ and https://forum.effectivealtruism.org/ run on the same software, and some people have an account at both of them, but they are generally separate web communities.
- jhbadger 1 week agoThere is a great recent book by Adam Becker "More Everything Forever" that deals with the overlapping circles of "effective altruists", "rationalists", and "accelerationists". He's not very sympathetic to the movements as he sees them as mostly rationalizing what their adherents wanted to do anyway -- funding things like rockets and AI over feeding the needy because they see the former as helping more people in the future than dealing with real problems today.
- not_your_mentat 1 week agoThe notion that our moral obligation somehow demands we reduce the suffering of wild animals in an ecosystem, living their lives as they have done since predation evolved and as they will do long after humans have ceased to be, is such a wild misunderstanding of who we are and what we are and what the universe is. I love my Bay Area friends. To quote the great Gwen Stefani, “This sh!t is bananas.”
- mlinhares 1 week agoI'd recommend listening to the behind the bastards episode on some of these folks, it is wild https://podcasts.apple.com/us/podcast/part-one-the-zizians-h...
- vishnugupta 1 week agoI don’t have anything to add to the discussion except that EA was the very first thing that popped into my mind when I read Rationalists.
- HPsquared 1 week agoPeople who confuse the map for the territory.
- UncleOxidant 1 week ago> They remind me of the "Effective Altruism" crowd
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Highly recommended.
- fanwood 1 week agoAbsolutely agree. The whole movement is unbearably naive
- j_timberlake 1 week agoThis is a perfect example of a Hacker News hot-take, you complain about their attitude while making zero reference to their actual accomplishments or failures. No one will downvote you for posting something factually incorrect, because you didn't post any facts at all.
For anyone else reading, a good example of what EAs do can be seen with the GiveWell charity: https://www.givewell.org/top-charities-fund
Lots of anti-malaria and vitamin stuff (as a cheap way to save lots of lives). There are also tons of EA animal charities too, such as Humane League: https://thehumaneleague.org/our-impact
- habinero 1 week agoA weird cult doing helpful community service doesn't cancel out the harm of them being a cult.
If they want to donate to charity, they can just donate. You don't gotta make a religion out of it.
- noosphr 1 week agoIncidentally, a good book on logic is the best antidote to that type of thinking. Once you learn the difference between a valid and a sound argument, and then realize just how ambiguous every English sentence is, the idea that having a logical argument gives you something useful in everyday life becomes laughable rather quickly.
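The valid-vs-sound distinction in one toy sketch (the brute-force validity check and the example premises are mine):

    from itertools import product

    # Modus ponens: premises (P -> Q) and P, conclusion Q. Validity is
    # purely formal: in every truth assignment where all premises hold,
    # the conclusion must hold too.
    valid = all(q
                for p, q in product([True, False], repeat=2)
                if (not p or q) and p)
    print(valid)  # True: the form is airtight

    # Soundness additionally requires the premises to be TRUE. "If an
    # argument is logical, it is useful in everyday life; this argument
    # is logical; therefore it is useful" has the same valid form, but
    # the first premise is doing all the work.

The gap between those two properties is exactly where the everyday usefulness leaks out.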
I also think the ambiguity of meaning in natural language is why statistical LLMs are so popular with this crowd. You don't need to think about meaning and parsing: whatever the LLM assumes the meaning is, is the meaning.
- Viliam1234 1 week ago> Once you learn the difference between a valid and a sound argument and then realize just how ambiguous every English sentence is the idea that just because you have a logical argument you have something useful in everyday life becomes laughable rather quickly.
Ironically, that reminds me of "37 Ways That Words Can Be Wrong", written by Eliezer Yudkowsky in 2008...
- sanderjd 1 week agolol, I love that you identified the exact correct academic term for this phenomenon.
- trod1234 1 week agoExcept with Effective Altruism (EA), it's not pure logic.
Logic requires properties of metaphysical objectivity.
If you use the true meaning of words, it would be called irrationality, delusion, sophism, or fallacy when such things are claimed to be true when in fact they are false.
- mitthrowaway2 1 week ago> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
- Certhas 1 week agoI think this is a valid point. But to some degree both can be true. I often felt when reading some of these texts: wait a second, there is a wealth of thinking on these topics out there, and you are not at all situating your elaborate thinking in that broader context. And there absolutely is a willingness to be challenged, and (maybe less so) a willingness to be wrong. But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and... clubs and seminars forever. And importantly, maybe there also isn't as much taste for understanding the limits of vigorous discussion and rational deduction. Adorno and Horkheimer posit a dialectic of rationality and enlightenment; Habermas tries to rebuild rational discourse by analyzing its preconditions. Yet for all the vigorous intellectualism of the rationalists, none of that ever seems to feature even in passing (maybe I have simply missed it...).
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, I'd rather have some arrogance and a willingness to be debated and be wrong than a timid need to defer to centuries of established thought, though. The people I've met in person I've always been happy to hang out with and talk to.
- mitthrowaway2 1 week agoThat's a fair point. Speaking only for myself, I think I fail to understand why it's important to situate philosophical discussions in the context of all the previous philosophers who have expressed related ideas, rather than simply discussing the ideas in isolation.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
- Aurornis 1 week ago> But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "SlimeTimeMoldTime - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
- voidhorse 1 week agoYou're spot on here, and I think this is probably also why they appeal to programmers and people in software.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
- Aurornis 1 week agoI grew up with some friends who were deep into the early roots of online rationalism, even slightly before LessWrong came online. I've been around long enough to recognize the rhetorical devices used in rationalist writings:
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario that predicts a rapid escalation of AI superpowers followed by disaster in only a couple of years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors, they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means that when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right, with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
- mitthrowaway2 1 week agoYes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
- Veedrac 1 week agoIf putting up evidence about how people were wrong in their predictions, I suggest actually pointing at predictions that were wrong, rather than at recent predictions about the future whose resolution you disagree about. If putting up evidence about how people make excuses for failing predictions, I suggest actually showing them do so, rather than projecting that they will do so and blaming them for your projection.
- bakuninsbart 1 week agoWeirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, hiring exceptional talent at a time where there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest) and started caring a lot about shrimp welfare (no one else does).
- mitthrowaway2 1 week agoI don't think this validates the criticism that "they don't really ever show a sense of[...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
- salynchnew 1 week ago> caring a lot about shrimp welfare (no one else does).
Ah. Working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
- ToValueFunfetti 1 week ago>they bought mansions for team building
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
- gjm11 1 week ago> both can be true
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
- Viliam1234 1 week ago> they bought mansions for team building
Because they were doing so many workshops that buying a building was cheaper than renting all the time.
You may argue that organizing workshops is wrong (and you might be right about that), but once you choose to do them, it makes sense to choose the cheaper option rather than the more expensive one. That's not rationalist rhetoric, that's just basic economics.
- sanderjd 1 week agoI read rationalist writing for a very long time, and eventually concluded that this part of it was, not universally but predominantly, performative. After you read enough articles from someone, it is clear what they have conviction in, even when they are putting up disclaimers saying they don't.
- freejazz 1 week ago>Are you sure you're not painting this group with an overly-broad brush?
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it doesn't appear to be seriously applied to their own ideas? What you described is otherwise just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people".
- hiddencost 1 week agoThey're behind Anthropic and were behind openai being a nonprofit. They're behind the friendly AI movement and effective altruism.
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
- mitthrowaway2 1 week agoI don't see how this is a response to what I wrote. Can you explain?
- JamesBarney 1 week agoWhat NGOs are the rationalists or effective altruists responsible for killing or defunding?
- NoGravitas 1 week agoI've always seen the breathless Singularitarian worrying about AI Alignment as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman, from algorithmic bias, to policy-washing, to energy costs and acceleration of wealth concentration. It also leads to so-called longtermism - discounting the benefits of solving current real problems and focusing entirely on solving a hypothetical one that you think will someday make them all irrelevant.
- tuveson 1 week agoMy feeling has been that it’s a lot of people that work on B2B SaaS that are sad they hadn’t gotten the chance to work on the Manhattan Project. Be around the smartest people in your field. Contribute something significant (but dangerous! And we need to talk about it!) to humanity. But yeah computer science in the 21st century has not turned out to be as interesting as that. Maybe just as important! But Jeff Bezos important, not Richard Feynman important.
- HPsquared 1 week ago"Overproduction of elites" is the expression.
- thom 1 week agoThe Singularitarians were breathlessly worrying 20+ years ago, when AI was absolute dogshit - Eliezer once stated that Doug Lenat was incautious in launching Eurisko because it could've gone through a hard takeoff. I don't think it's just an act to launder their evil plans, none of which at the time worked.
- salynchnew 1 week agoYeah, people were generally terrified of this stuff back before you could make money off of it.
- notahacker 1 week agoFair. OpenAI totally use those arguments to launder their plans, but that saga has been more Silicon Valley exploiting longstanding rationalist beliefs for PR purposes than rationalists getting rich...
Eliezer did once state his intentions to build "friendly AI", but seems to have been thwarted by his first order reasoning about how AI decision theory should work being more important to him than building something that actually did work, even when others figured out the latter bit.
- iNic 1 week agoFrom my experience, people who worry about existential AI risk tend to worry more about the mundane problems than the wider population does. In particular they have been very vocal about scamming, cybersecurity, and also wealth (& power!) concentration.
- NoMoreNicksLeft 1 week ago>s a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman,
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
- danans 1 week ago> We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
- philipov 1 week agoyep, the biggest threat posed by AI comes from the capitalists who want to own it.
- impossiblefork 1 week agoI actually think the people developing AI might well not get rich off it.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if it keeps getting better.
- parpfish 1 week agoOr the propagandists that use it
- conception 1 week agoNah, it’s from state actors using the technology to control and wage war by far.
- stickfigure 1 week ago> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Have you ever read Scott Alexander's blog (Slate Star Codex, now Astral Codex Ten)? It's full of doubt and self-questioning. The guy even keeps a public list of his mistakes:
https://www.astralcodexten.com/p/mistakes
I'll admit my only touchpoint to the "rationalist community" is this blog, but I sure don't get "full of themselves" from that. Quite the contrary.
- TheAncientGeek 1 week agoWell, you can get it from LessWrong and, even more, from Yudkowsky. Rationalism has liberal and orthodox wings.
- Avicebron 2 weeks agoYeah, the "rational" part always seemed a smokescreen for the ability to produce and ingest their own and their associates' methane gases.
I get it; I enjoyed being told I'm a super-genius, always-right quantum-physicist mathematician by the girls at Stanford too. But holy hell, man, have some class. Maybe consider there's more good to be done in rural Indiana, getting some dirt under those nails.
- shermantanktop 1 week agoThe meta with these people is “my brilliance comes with an ego that others must cater to.”
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
- dkarl 1 week agoIt's a conflict as old as time. What do you do when an argument leads to an unexpected conclusion? I think there are two good responses: "There's something going on here, so let's dig into it," or, "There's something going on here, but I'm not going to make time to dig into it." Both equally valid.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
- bilbo0s 1 week agoMight this kind of point to real-life people having too much of what is now called "rationality" and very little of what used to be called "wisdom"?
- Cthulhu_ 1 week agoIt feels like a shield of sorts, "I am a rationalist therefore my opinion has no emotional load, it's just facts bro how dare you get upset at me telling xyz is such-and-such you are being irrational do your own research"
but I don't know enough about it, I'm just trolling.
- TacticalCoder 2 weeks ago[dead]
- felipeerias 2 weeks agoThe problem with trying to reason everything from first principles is that most things didn't actually come about that way.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but won’t probably be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
- ImaCake 1 week ago>The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it because of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
- lsp 1 week agoA lot of people really need to be reminded of this.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
- baxtr 1 week agoYes, and if you read Popper that’s exactly how he defined rationality / the scientific method: to solve problems of life.
- loose-cannon 2 weeks agoReducibility is usually a goal of intellectual pursuits? I don't see that as a fault.
- nyeah 1 week agoOk. A lot of things are very 'reducible' but information is lost. You can't extend back from the reduction to the original domain.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
- nyrikki 1 week ago'Reducibility' is a property that, if present, makes problems tractable or possibly practical.
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, which the PEM (principle of excluded middle) assumption is based on.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
- colordrops 2 weeks agoWhat the person you are replying to is saying is that some things are not reducible, i.e. the vast array of complexity and detail is all relevant.
- jltsiren 2 weeks ago"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.
- hiAndrewQuinn 2 weeks ago>Maybe it's actually going to be rather benign and more boring than expected
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists have basically run a 30-year-long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
- s1mplicissimus 2 weeks ago> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that has been taught to the generation that lived through the cold war, so that's how they act. That bit was in regards to climate change, but I can easily see it apply to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
- hiAndrewQuinn 2 weeks agoIt's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
- socalgal2 1 week agoDo you think opposing the Manhattan Project would have led to a better world?
Note, my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
- hiAndrewQuinn 1 week agoMy answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Leibniz with infinitesimals I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk Armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
- iNic 1 week agoEvery community has a long list of etiquettes, rules and shared knowledge that is assumed and generally not spelled out explicitly. One of the core assumptions of the rationalist community is that every statement has uncertainty unless you explicitly spell out that you are certain! This came about as a matter of practicality, as it would be inconvenient to preempt every other sentence with "I'm uncertain about this". Many discussions you will see have the flavor of "strong opinions, lightly held" for this reason.
- kypro 1 week agoThere are few things I hold strong opinions on, but where I do, and they're also out of step with what most people think, I am very vocal about them.
I see this in rationalist spaces too – it doesn't really make sense for people to talk about things that they believe in strongly but that 95%+ of the public also believe in (like the existence of air), or that they don't have a strong opinion on.
I am a very vocal doomer on AI because I predict with high probability it's going to be very bad for humanity, and this is an opinion which, although shared by some, is quite controversial and probably only held by 30% of the public. Given the importance of the subject, my confidence, and the fact that I feel the vast majority of people are wrong or are significantly underweighting catastrophic risks, I have to be vocal about it.
Do I acknowledge I might be wrong? Sure, but for me the probability is low enough that I'm comfortable making very strong and unqualified statements about what I believe will happen. I suspect others in the rationalist community like Eliezer Yudkowsky think similarly.
- megaman821 1 week agoHow confident should other people be that random people in conversation, or commenters on the internet, are accurately predicting the future? I strongly believe that nearly 100% are wrong in both major and minor ways.
Also, when you say you have a strong belief, does that mean you have emptied your retirement accounts and are enjoying all you can in the moment until the end comes?
- mitthrowaway2 1 week agoI'm not kypro, but what counts as "strong belief" depends a lot on the context.
For example, I won't cross the street without 99.99% confidence that I will survive. I cross streets so many times that a lower threshold like 99% would look like insanely risky dart-into-traffic behaviour.
If an asteroid is heading for earth, then even a 25% probability of apocalyptic collision is enough that I would call it very high, and spend almost all my focus attempting to prevent that outcome. But I wouldn't empty my retirement account for the sake of hedonism because there's still a 75% chance I make it through and need to plan my retirement.
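To make that threshold scaling concrete, here's a back-of-the-envelope sketch in Python (the crossing count and the probabilities are invented for illustration, not real traffic statistics):

    # Survival probabilities compound multiplicatively over repeated events,
    # so per-event confidence has to scale with how often the event happens.
    crossings_per_year = 1000  # assumed frequency, purely illustrative

    for p_survive_one in (0.99, 0.9999):
        p_survive_year = p_survive_one ** crossings_per_year
        print(f"per-crossing {p_survive_one}: P(survive the year) = {p_survive_year:.4g}")

    # per-crossing 0.99:   ~4.3e-05 (99% really is dart-into-traffic territory)
    # per-crossing 0.9999: ~0.90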
- resters 1 week agoNot meaning to be too direct, but you are misinterpreting a lot about rationalists.
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
- matthewdgreen 1 week agoMy impression is that many rationalists enjoy believing that they update their beliefs, but in practice they're human and just as attached to preconceived notions as anyone else. But if you go around telling everyone that updating is your super-power, you're going to be a lot less humble about your own failures to do so.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
- thom 1 week agoYes, I've never met anyone who stated they have "strong opinions, weakly held" who wasn't A) some kind of arsehole and B) lying.
- ajkjk 1 week agoNot to be too cynical here, but the most apt description of the rationalists may be that they are people who would say they are constantly looking for updates to their models, while not necessarily doing it appreciably more than anyone else. They will do it freely on unimportant things: they tend to be smart people who view the world intellectually, and so they are free to toss or keep factual beliefs, of which they have many, with little fanfare, and sure, they get points for that. But they are as rooted in their moral beliefs as anybody else is. Maybe more than other people, since they have such a strong intellectual edifice that justifies not changing their minds, because they believe that their beliefs follow from nearly irrefutable calculations.
- resters 1 week agoYou're generalizing that all self-proclaimed rationalists are hypocrites and heavily biased? I mean, regardless of whether or not that is true, what is the point of making such a broad generalization? Strange!
- jrflowers 1 week agoIt seems that you are conflating theoretical rationalists with the actual real-life rationalists that write stuff like
>The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right
“Guy Who Is Always Right” as a role in a social group is a terrible target, yet it somehow seems like what rationalists are aiming for every time I read any of their blog posts
- aspenmayer 1 week agoThe rationalist community are fish in barrels convinced that they’re big fish in a small pond because that’s what those who moved them from the pond told them to convince them to enter the barrel. Once in the barrel, the fish are told that they will be moved to a big pond so that they can be big fish in a big pond together. If the fish/fishmonger telling you things is bigger than you, they may not share your preferences about where you fit in to the food chain, and they may not even perceive you at all. You are chum.
- ummonk 1 week agoRationalists have always rubbed me the wrong way too but your argument against AI doomerism is weird. If you care about first principles, how about the precautionary principle? "Maybe it's actually benign" is not a good argument for moving ahead with potentially world ending technology.
- xyzzy123 1 week agoI don't think "maybe it's benign" is where anti doomers are coming from, more like, "there are also costs to not doing things".
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
- XorNot 1 week agoIt's just the slippery-slope fallacy: if X then obviously Y will follow, and there will be no further decisions, debate or time before it does.
- IshKebab 1 week agoHe wasn't saying "maybe it's actually going to be benign" is an argument for moving ahead with potentially world ending technology. He was saying that it might end up being benign and rationalists who say it's definitely going to be the end of the world are wildly overconfident.
- noname120 1 week agoNo rationalist claims that it's "_definitely_ going to be the end of the world". In fact they estimate the chance that AI becomes an existential risk by the end of the century at less than 30%.
- nradov 1 week agoThe precautionary principle is stupid. If people had followed it then we'd still be living in caves.
- ummonk 1 week agoI take it you think the survivorship bias principle and the anthropic principle are also stupid?
- eviks 1 week agoBut not accepting this technology could also be potentially world ending, especially if you'd have to start many new wars to achieve it, so caring about first principles like peace and anti-Luddism brings us back to the original "real lack of humility..."
- adastra22 1 week agoThe precautionary principle does active harm to society because of opportunity costs. All the benefits we have reaped since the Enlightenment have come from proactionary endeavors, not precautionary hesitation.
- dv_dt 1 week agoThe rationalist discussions rarely consider what should be the baseline assumption: what if one or more of the logical assumptions or associations are wrong? They also tend not to systematically plan to validate. And in many domains, what holds true at one moment can easily shift.
- resource_waste 1 week ago100%
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
- mitthrowaway2 1 week agoThis seems like exactly the opposite of everything I've read from the rationalists. They even called their website "less wrong" to call attention to knowing that they are probably still wrong about things, rather than right about everything. A lot of their early stuff is about cognitive biases. They have written a lot about "noticing confusion" when their foundational beliefs turn out to be wrong. There's even an essay about what it would feel like to be wrong about something as fundamental as 2+2=4.
Do you have specific examples in mind? (And not to put too fine a point on it, do you think there's a chance that you might be wrong about this assertion? You've expressed it very confidently...)
- cogman10 1 week agoIt's every bit a proto religion. And frankly quite reminiscent of my childhood faith.
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein is the problem. A bunch of people vociferously speaking outside their expertise confidently and being taken seriously by others.
- lechatonnoir 1 week agoI am really not sure where you get any of these ideas. For each of your critiques, there are not only discussions, but taxonomies of compendiums of discussions about the topics at hand on LessWrong, which can easily be found by Googling any keyword or phrase in your comment.
On "considering what should be the baseline assumption":
https://www.lesswrong.com/w/epistemology
https://www.lesswrong.com/w/priors, particularly https://www.lesswrong.com/posts/hNqte2p48nqKux3wS/trapped-pr...
On the idea that "rationalists think that they can just apply rationality infinitely to everything":
https://www.lesswrong.com/w/bounded-rationality
On the critique that rationalists are blind to the fact that "reason isn't the only thing that's important", generously reworded as "reason has to be grounded in a set of human values", some of the most philosophically coherent stuff I see on the internet is from LW:
https://www.lesswrong.com/w/metaethics-sequence
https://www.lesswrong.com/w/human-values
On "systematically plan to validate":
https://www.lesswrong.com/w/rationality-verification
https://www.lesswrong.com/w/making-beliefs-pay-rent
On "what could hold true for one moment could easily shift":
https://www.lesswrong.com/w/black-swans
- dv_dt 1 week agoLooking at the first link, https://www.lesswrong.com/w/epistemology - it has a frankly comically shallow description of the topic, same with https://www.lesswrong.com/w/priors. I may just be entirely the wrong audience, but to me they don't even begin to address the narrow topic at hand, let alone form competent building blocks for any solid world view.
I support anyone trying to form rational pictures of the universe and humanity. If the LessWrong community approach seems to make sense and is enriching to your understanding of the world then I am happy for you. But, every time I try to take a serious delve into LessWrong, and I have done it multiple times over the years, it sets off my cult/scam alerts.
- stuaxo 1 week agoCalling yourselves rationalists frames everyone else as irrational.
It reminds me of Keir Starmer's Labour, calling themselves "the adults in the room".
It's a cheap framing trick, belying an emptiness in the people using it.
- viccis 1 week agoI agree 100%, and that's my main issue with them. To build a group with its identity centered around "we form our opinions with logical inquiry from first principles" implies that you think that everyone else is doing something else. In reality, we just end up with a lot of worldviews and arguments that seem suspiciously like they are nothing more than people advocating for their own interests using some sophistry that is compelling enough (to them) to trick themselves into thinking they have other motivations.
When one finds themselves mentioning Aella as one of the members taking the movement "in new directions," they should stop and ask whether they are the insightful, well-rounded person with much to say about all sorts of things, or whether they are just a very gifted computer scientist who is still not well rounded enough to recognize a legitimate dimwit like Aella when they see one.
And in general, I do feel like they suffer from "I am a genius at X, so my take on Y should be given special consideration." If you're in a group where everyone's talking about physics and almost none of them are physicists, then run. I'm still surprised at how little consideration these people give philosophy and the centuries of its written thought. Some engineers spend a decade or more building up math and science skills to the point that they can be effective practitioners, but then they think they can hop right into philosophical discussions with no background. Then when they try to analyze a problem philosophically, their brief (or no) experience means that they reason themselves into dead-end positions like philosophical skepticism that were tackled in a variety of ways over the past centuries.
- lechatonnoir 1 week agoI am sure that there are some people who exhibit the behaviors you're describing, but I really don't think the group as a whole is disinterested in prior work or discussion of philosophy in general:
https://www.lesswrong.com/w/epistemology
https://www.lesswrong.com/w/priors
https://www.lesswrong.com/posts/2x67s6u8oAitNKF73/ (a post noting that the foundational problems in mech interp are grounded in philosophical questions about representation ~150 years old)
https://www.lesswrong.com/w/consciousness (the page on consciousness first citing the MIT and Stanford encyclopedias, then providing a timeline from Democritus, through Descartes, Hobbes,... all the way to Nagel, Chalmers, Tegmark).
There is also sort of a meme of interest in Thomas Kuhn: https://www.lesswrong.com/posts/HcjL8ydHxPezj6wrt/book-revie...
See also these attempts to refer and collate prior literature: https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/rationalis...
https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-t...
https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-t...
https://www.lesswrong.com/posts/HLJMyd4ncE3kvjwhe/the-best-r...
https://www.lesswrong.com/posts/bMmD5qNFKRqKBJnKw/rigorous-p...
Now, one may disagree with the particular choices or philosophical positions taken, but it's pretty hard to say these people are ignorant or not trying to be informed about what prior thinkers have done, especially compared to any particular reference culture, except maybe academics.
As for the thing about Aella, I feel she's not as much of a thought leader as you've surmised, and I think doesn't claim to be. My personal view is that she does some interesting semi-rigorous surveying that is unlikely to be done elsewhere. She's not a scientist/statistician or a total revolutionary but her stuff is not devoid of informational value either. Some of her claims are hedged adequately, some of them are hedged a bit inadequately. You might have encountered some particularly (irrationally?) ardent fans.
- gjm11 1 week agoPretty much every movement does this sort of thing.
Religions: "Catholic" actually means "universal" (implication: all the real Christians are among our number). "Orthodox" means "teaching the right things" (implication: anyone who isn't one of us is wrong). "Sunni" means "following the correct tradition" (implication: anyone who isn't one of us is wrong").
Political parties: "Democratic Party" (anyone who doesn't belong doesn't like democracy). "Republican Party" (anyone who doesn't belong wants kings back). "Liberal Party" (anyone else is against freedom).
In the world of software, there's "Agile" (everyone else is sluggish and clumsy). "Free software" (as with the liberals: everything else is opposed to freedom). People who like static typing systems tend to call them "strong" (everyone else is weak). People who like the other sort tend to call them "dynamic" (everyone else is rigid and inflexible).
I hate it too, but it's so very very common that I really hope it isn't right to say that everyone who does it is empty-headed or empty-hearted.
The charitable way to look at it: often these movements-and-names come about when some group of people picks a thing they particularly care about, tries extra-hard to do that thing, and uses the thing's name as a label. The "Rationalists" are called that because the particular thing they chose to focus on was rationality; maybe they do it well, maybe not, but it's not so much "no one else is rational" as "we are trying really hard to be as rational as we can".
(Not always. The term "Catholic" really was a power-grab: "we are the universal church, those other guys are schismatic heretics". In a different direction: the other philosophical group called "Rationalists" weren't saying "we think rationality is really important", they were saying "knowledge comes from first-principles reasoning" as opposed to the "Empiricists" who said "knowledge comes from sense experience". Today's "Rationalists" are actually more Empiricist than Rationalist in that sense, as it happens.)
- swat535 1 week agoIf you examine history, from the Bible you get Judaism. And from Judaism, Christianity as Christ said "Do not think that I am come to break the Law or the Prophets. I am not come to break: but to fulfill." [Matth. v. 17]
The Catholic Church follows the Melchisedec order (Heb v. ; vi. ; vii). The term Catholic (καθολικη) was used as early as the first century; it is an adjective which describes Christianity.
The oldest record that we have to this day is the Epistle of Ignatius to the Smyrnaeans Chapter 8 where St. Ignatius writes "ωσπερ οπου αν η Χριστος Ιησους, εκει η καθολικη εκκλησια". (just as where Jesus Christ is, there is the Catholic Church.):
https://greekdoc.com/DOCUMENTS/early/i-smyrnaeans.html
The protestors in the 16th c. called themselves Protestants, so that's what everyone calls them. English heretic-schismatics didn't want to share the opprobrium so they called themselves English, hence Anglican. In USA they weren't governed congregationally like the Congregationalists, or by presbyters like the Presbyterians, but by bishops, so they called themselves Bishop-ruled, or Episcopalians. (In fact, Katharine Jefferts-Schori changed the name of the denomination from The Protestant Episcopal Church to The Episcopal Church recently.)
The orthodox catholics called themselves Orthodox to distance themselves from the unorthodox of which there were plenty, spawning themselves off in the wake of practically every ecumenical council.
Lutherans in the USA name themselves after Father Martin Luther, some Augustinian priest from Saxony who protested against the Church's hypocritical corruption at the time, and the controversy eventually got out of hand and precipitated a schism/heretical revolution, back in the 1500s, but Lutherans back in Germany and Scandinavia call themselves Gospel churches, hence Evangelical. Some USA denominations that go back to Germany and who came over to USA brought that name with them.
Pentecostals name themselves after the incident in Acts where the Holy Spirit set fire to the world (cf. Acts 2) on the occasion of the Jewish holiday of Shavuot, q.v., which in Greek was called Fiftieth Day After Passover, hence Pentecosti. What distinguishes Pentecostals is their emphasis on what they call "speaking in tongues", which in my opin...be charitable, kempff...which they see as a continuance of the Holy Spirit's work in the world and in the lives of believers.
- baxtr 1 week agoMy main problem with the movement is their emphasis on Bayesianism in conjunction with an almost total neglect of Popperian epistemology.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
- the_af 1 week agoWhat really confuses me is that many in this so called "rationalist" clique discuss Bayesianism as an "ism", some sort of sacred, revered truth. They talk about it in mystical terms, which matches the rest of their cult-like behavior. What's the deal with that?
- mitthrowaway2 1 week agoThat's specific to Yudkowsky, and I think that's just supposed to be humor. A lot of people find mathematics very dry. He likes to dress it up as "what if we pretend math is some secret revered knowledge?".
- kurtis_reed 1 week agoSo what's the difference between Bayesianism and Popperian epistemology?
- uniqueuid 1 week agoPopper requires you to posit null hypotheses to falsify (although there are different schools of thought on what exactly you need to specify in advance [1]).
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
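A minimal sketch of that prior-to-posterior step, assuming a toy Beta-Binomial coin model (the prior and the observations are both invented for illustration):

    # Beta-Binomial conjugate update: prior Beta(a, b) on a coin's bias;
    # after observing flips, the posterior is Beta(a + heads, b + tails).
    a, b = 2, 2            # weak prior: the coin is probably roughly fair
    heads, tails = 8, 2    # assumed observations, purely illustrative

    a_post, b_post = a + heads, b + tails
    print("prior mean:    ", a / (a + b))                 # 0.5
    print("posterior mean:", a_post / (a_post + b_post))  # ~0.71

The posterior mean moves from 0.5 toward the observed frequency, but not all the way: that's the formalized version of "updating beliefs given data".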
- TheAncientGeek 1 week agoPopperians claim that positive justification is impossible.
Popperians claim induction doesn't exist (or at least doesn't matter in science).
Popper was prepared to consider the existence of propensities (objective probabilities), whereas Bayesians, particularly those who follow Jaynes, believe in determinism and subjective probability.
Popperian refutation is all or nothing, whereas Bayesian negative information is gradual.
In Popperism, there can be more than one front-running or most favoured theory, even after the falsified ones have been eliminated, since there aren't quantifiable degrees of confirmation.
For Popper and Deutsch, theories need to be explanatory, not just predictive. Bayesian confirmation and disconfirmation only target prediction directly -- if they achieve explanation or ontological correspondence, that would be the result of a convenient coincidence.
For Popperians, the construction of good theoretical conjectures is as important as testing them. Bayesians seem quite uninterested in where hypotheses come from.
For Deutschians, being hard-to-vary is the preferred principle of parsimony. For Yudkowskians, it's computational complexity.
Error correction is something you actually do. Popperians like to put forward hypotheses that are easy to refute. Bayesians approve theoretically of "updating", but dislike objections and criticisms in practice.
(Long-term) prediction is basically impossible. This is more Deutsch than Popper -- DD believed that the creation of knowledge is so unpredictable and radical that long-term predictions cannot be made, often summarised as "prediction is impossible". Of course, Bayesians are all about prediction -- but the predictive power of Bayes tends only to be demonstrated in toy models, where the ontology isn't changing under your feet. Their AI predictions are explicitly intuition based.
Optimism versus doom. Deutsch is highly optimistic that continuing knowledge creation will change the world for the better (a kind of moral realism is a component of this). Yudkowsky thinks advanced AI is our last invention and will kill us all.
- lechatonnoir 1 week agoHere's a collection of debates about that topic:
https://www.lesswrong.com/posts/85mfawamKdxzzaPeK/any-good-c...
I personally don't have that much of an interest in this topic, so I can't critique them for quality myself, but they may at least be of relevance to you.
- kragen 1 week agoHmm, what epistemological propositions of Popper's do you think they're missing? To the extent that I understand the issues, they're building on Popper's epistemology, but by virtue of having a more rigorous formulation of the issues, they resolve some of the apparent contradictions in his views.
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong? Or overlooked something major? If so, what?
(Amusingly, you seem to have avoided making any falsifiable claims in your comment, while implying that you could easily make many of them...)
- baxtr 1 week ago> Popper’s falsificationism – this is the old philosophy that the Bayesian revolution is currently dethroning.
https://www.yudkowsky.net/rational/bayes
These are the kind of statements I’m referring to. Happy to be falsified btw :) that’s how we learn.
Also note that Popper never called his theory falsificationism.
- agos 1 week agois epidemiology a typo for epistemology or am I missing something?
- baxtr 1 week agoYes, thx, fixed it.
- uniqueuid 1 week agoThe counterpoint here is that in practice, humility is only found in the best of frequentists, whereas the rest succumb to hubris (i.e. the cult of irrelevant precisions).
- empiko 1 week agoI actually think that their main problem is the belief that they can learn everything about the world by reading stuff on the Web. You can't understand everything by reading blogs and books; in the end, some things are best understood when you are on the ground. Unironically, they should go touch grass.
One example out of many: it was claimed that a great rationalist policy is to distribute treated mosquito nets to 3rd-world-ers to help eradicate malaria. On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides. Unfortunately, rationalists forgot to ask the people who live with mosquitoes what they would do with such nets.
- noname120 1 week ago> On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides.
Could you recommend an article to learn more about this?
- camgunz 1 week agoYeah I don't know or really care about Rationalism or whatever. But I took Aaronson's advice and read Zvi Mowshowitz' Childhood and Education #9: School is Hell [0], and while I share many of the criticisms (and cards on the table I also had pretty bad school experiences), I would have a hard time jumping onto this bus.
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2]; they're around 4.2%.
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats the kids it fails. If you're not among the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
- matthewdgreen 1 week agoCherry-picking friendly studies is one of the go-to moves of the rationalist community.
You can convince a lot of people that you've done your homework when the medium is "an extremely long blog post with a bunch of studies attached", even if the studies themselves aren't representative of reality.
- tasty_freeze 1 week agoIs there any reason you are singling out the rationalist community? Is that not a common failure mode of all groups and all people?
BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.
- genewitch 1 week agoMy wife is an LMSW (not CPS!) and sees ~5 people a day, in a metro area with a population of 153,922. Mind you, these are adults, but they're all mandated to show up.
There are only ~3,300 counties in the USA.
I'll let you extrapolate how CPS could handle "4000/day": 800 people with my wife's qualifications and caseload would be equivalent to 4000/day, and there are ~5,000 caseworkers in the US per Statista:
> In 2022, there were about 5,036 intake and screening workers in child protective services in the United States. In total, there were about 30,750 people working in child protective services in that year.
- verall 1 week ago37% of children obviously do not experience a CPS investigation before age 18.
- camgunz 1 week agoYeah OK I can see that. Mostly you inspired me to do a little napkin math based on the report I linked, which says ~3.1m kids got CPS investigations (etc) in 2023, which is ~8,500 a day. But, the main author in a subsequent paper shows that only ~13% of kids have confirmed maltreatment [0]. That's still far lower than the 38% for homeschooled kids.
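For anyone who wants to redo the napkin math, a quick sketch (the ~72m figure for the US under-18 population is my assumption; the other numbers come from the comments and reports above):

    # Rough cross-check of the figures in this subthread.
    us_children = 72_000_000  # assumed US under-18 population

    # The disputed study's cumulative claim: 37.4% investigated by age 18.
    print(0.374 * us_children)               # ~26.9m, the "27m kids" above
    print(0.374 * us_children / (18 * 365))  # ~4,100/day over an 18-year span

    # The federal report's annual figure: ~3.1m investigations in 2023.
    print(3_100_000 / 365)                   # ~8,500/day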
- TheAncientGeek 1 week agoIt's not just Mowshowitz; a lot of them have the same syndrome.
- ummonk 1 week ago> but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn"
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
- camgunz 1 week agoI mean, I'm fine addressing Tabarrok's argument head on: I think there's far more to gain helping the millions of kids/adults who are functionally illiterate than helping the small number of gifted kids the educational system is underserving. His argument is essentially "these kids will raise the tide and lift all boats", but it's clear that although the tide has been rising for generations (advances in the last 60-70 years are truly breathtaking) more kids are being left behind, not fewer. There's no reason to expect this dynamic to change unless we tackle it directly.
- js8 1 week ago> The people involved all seem very... Full of themselves ?
Kinda like Mensa?
- parpfish 1 week agoWhen I was a kid I wanted to be in Mensa because being smart was a big part of my identity and I was constantly seeking external validation.
I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects
- GLdRH 1 week agoI didn't meet anyone who seemed arrogant.
However I'm always surprised how much some people want to talk about intelligence. I mean, it's the common ground of the group in this case, but still.
- NoGravitas 1 week agoPersonally, I subscribe to Densa, the journal of the Low-IQ Society.
- jonstewart 1 week agoI think the thing that rubs me the wrong way is that I’m a classic cynic (a childhood of equal parts Vonnegut and Ecclesiastes). My prior is “human fallibility”, and, nope, I am doing pretty well, no need to update it. The rationalist crowd seems waaaaay too credulous. Also, like Aaronson, I’m a complete normie in my personal life.
- mise_en_place 1 week agoYeah. It's not like everything's a Talmudic dialectic.
"I haven't done anything!" - A Serious Man
- red75prime 1 week ago> Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
We can also apply the principle of epistemic humility to, say, climate change: we don't have a full grasp of Earth biosphere, maybe some unexpected negative feedback loop will kick in and climate change will revert itself.
It doesn't mean that we shouldn't try to prevent it. We will waste resources in hypothetical worlds where climate change self-reverts, but we might prevent civilization collapse in hypothetical worlds where climate change goes as expected or more severely.
Rationalism is about acting in a state of uncertainty. So
> I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is
goes as an unspoken default. And some posts on lesswrong explicitly state "Epistemic status: not sure about it, but here's my thoughts" (though I find it a bit superfluous).
Nevertheless, the incredible danger of an unaligned superhuman AI doesn't allow us to ignore even small chances of their being right about it.
Though I, myself, think that their probability estimates for some of their worries are influenced by the magnitude of the negative consequences (humans aren't perfect Bayesians after all (and that includes me and the validity of this statement)).
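To put the climate example in toy expected-value terms (every number here is invented purely for illustration):

    # Acting under uncertainty as a toy decision matrix; units are arbitrary.
    p_self_reverts = 0.10   # assumed chance the problem fixes itself
    cost_mitigation = 1     # wasted in the worlds where it self-reverts
    cost_collapse = 1000

    ev_act  = cost_mitigation                       # you pay either way
    ev_wait = (1 - p_self_reverts) * cost_collapse  # gamble on self-reversion
    print("act:", ev_act, "wait:", ev_wait)         # act: 1 wait: 900.0

Even a generous chance of self-reversion doesn't rescue waiting, as long as the downside dwarfs the mitigation cost.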
- Viliam1234 1 week agoI am surprised that the highest upvoted comment (but also many other comments here) describes almost the opposite of the rationality community as I know it.
Now of course I don't claim to know every rationalist all over the planet, so maybe you met a different kind than I did. Or maybe this is just an internet stereotype, based on presumed similarity to other groups (such as Mensa) that most people are more familiar with.
For starters, it seems inevitable in discussions like this, that someone will mention "rationalism vs empiricism" as the One True Meaning of the word "rationalist". Because, apparently, each word has exactly one meaning over the entire history... and people at Wikipedia are foolish for having created a disambiguation page for this word. You don't have to worry; the idea of reasoning about things from the first principles is completely unrelated to the rationality community.
In my experience, rationality community was the first place where I met intelligent people willing to admit that they didn't know something, or even to put a probability estimate on their beliefs. That's something I don't see often even on HN, which is one of the smartest places on internet. Somewhere else in this thread, people are mocking Scott Alexander for predicting something with probability 30%, because that thing didn't happen. Well, yes, that's the point; probability 30% literally means that the thing is more likely to not happen than to happen. That doesn't mean we should ignore it, though. If the weather forecast predicts 30% probability of rain, I will take an umbrella, while still believing that no rain is more likely than rain. Is that foolish?
You complain about lack of humility and lack of acknowledgment that someone doesn't fully understand something. Well, how would you rate your own comment, from this perspective? How would Hacker News audience rate it? Oh, I don't have to guess, because it is already the highest rated comment in this thread. What does that prove? Should I now forever remember you as "the guy who was wrong" and HN as "the community that upvoted the guy who was wrong"? Or should I shrug and say "people make mistakes all the time... in best case, they are able to admit the mistake and learn"? Which is a healthier approach? Would I or my community get similar courtesy in a similar situation?
- benreesman 1 week agoAny time people engage in some elaborate exercise and it arrives at "me and people like me should be powerful and not pay taxes and stuff", the reason for making the argument is not a noble one, the argument probably has a bunch of tricks and falsehoods in it, and there's never really any way to extract anything useful; greed and grandiosity are both fundamentally contaminative processes.
These folks have a bunch of money because we allowed them to privatize the commons of 20th-century R&D, mostly funded by the DoD and done at places like Bell Labs. Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day: the economic outlook is bleak, wars are breaking out or intensifying left, right, and center, and all of this traces a very clear lineage back to allowing a small group of people to privatize a bunch of public good.
It was a disaster when it happened in Russia in the 90s and it's a disaster now.
- drdaeman 1 week ago> The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Is it really rationality when folks are sort of out of touch with reality, replacing it with models that lack life's endless nuances, exceptions and gotchas? Being principled is a good thing, but if I correctly understand what you're talking about, surely ignoring something just because it doesn't fit some arbitrarily selected set of principles is different.
I'm no rationalist (I don't have any meaningful self-identification, although I like the idea of approaching things logically) but I've had enough episodes of being guilty of something like this - having an opinion on something, lacking the depth, but pretending it's fine because my simple mental model is based on some ideas I like and can bring order to the chaos. So maybe it's not rationalism at all, but something else masquerading as it - maybe being afraid of not meeting expectations?
- BurningFrog 1 week agoAn unfortunate fact is that people who are very annoying can also be right...
- sebzim4500 1 week agoThat seems pretty silly to me. If you believe that there's a 70% chance that AI will kill everyone, it makes more sense to go on about that (and about how you think you/your readers can decrease that number) than to worry about the 30% chance that everything will be fine.
- Seattle3503 1 week agoI think the rationalists have failed to humanize themselves. They let their thinkpieces define them entirely, but a studiously considered think piece is a narrow view into a person. If rationalists were more publicly vulnerable, people might find them more publicly relatable.
- mitthrowaway2 1 week agoScott Aaronson is probably the most publicly-vulnerable academic I've ever found, at least outside of authors who write memoirs about childhood trauma. I think a lot of other prominent rationalists also put a lot of vulnerability out there.
- Seattle3503 1 week agoHe didn't take the rationalist label until today. Him doing so might help their image.
- naasking 1 week ago> "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is"
I dunno, that seems like the tone of every article I've read on less wrong.
- aredox 1 week agoThey are the perfectly rational people who await the arrival of a robot god...
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
- nradov 1 week agoThere is certainly some racism in parts of American culture. We have a lot of work to do to fix that. But on a relative basis it's also one of the least racist cultures in the world.
- skissane 1 week ago> I think there is some inherent tension btwn being "rational" about things and trying to reason about things from first principle..
Consider ethical pluralism – by which I mean, there is enormous disagreement among humans as to what is ethical, we can't even agree on what the first principles are. Sure, we all agree on a lot of non-controversial applications – e.g. killing children for sport is gravely evil – but even when we agree on the conclusion, we won't agree on the premises.
Is it any different for theoretical rationality? I don't think so. I think we have the same situation of rational pluralism – sure, we can agree on a lot of non-controversial applications, but we lack agreement on what the first principles are. And when you disagree on the principles, you tend to reach completely opposite conclusions in edge cases.
But at least, with ethical pluralism, a preference utilitarian or a Kantian or a natural law theorist is very open about what their ethical first principles are, and how they differ from those of others. By contrast, the "rationalists" seem to present there as being only one possible rationality, their own.
- reverendsteveii 1 week agofor me it was very easy to determine what rubs me the wrong way:
>I guess I'm a rationalist now.
>Aren't you the guy who's always getting into arguments who's always right?
- gjm11 1 week agoIn fairness, that's (allegedly, at least; I guess he could be lying) a quotation from another person. If someone came up to you and said "Aren't you the guy who's essentially[1] always right?", wouldn't you too be inclined to quote them, whether you agreed with them or not?
[1] S.A. actually quoted the person as follows: "You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?" which differs in several ways from what reverendsteveii falsely presents as a direct quotation.
- alephnerd 1 week agoIt's basically a secular religion.
Substitute God with AI or the concept of rationality and use "first principles"/Bayesianism in an extremely dogmatic manner similar to Catechism and you have the Rationalist/AI Alignment/Effective Altruist movement.
Ironically, this is how plenty of religious movements started off - essentially as formalizations of philosophy and ethics that fused with what amounts to lore and worldbuilding.
- gjm11 1 week agoThis complaint seems to amount to "They believe something is very important, just like religious people do, therefore they're basically a religion". Which feels to me like rather too broad a notion of "religion".
- alephnerd 1 week agoThat's a fairly reductive take on my point. In my experience with the Rationalist movement (whom I have the misfortune of being 1-2 people away from), the millenarian threat of AGI remains the primary concern.
Whenever I try to get an answer to HOW (as in the attack path), I keep getting a deus ex machina. Reverting to a deus ex machina in a self-purported Rationalist movement is inherently irrational. And that's where I feel the crux of the issue is - it's called a "Rationalist" movement, but rationalism (as in the process of synthesizing information using a heuristic) is secondary to the overarching theme of techno-millenarianism.
This is why I feel rationalism is for all intents and purposes a "secular religion" - it's used to scratch an itch that religion often scratched as well, and the same Judeo-Christian tropes are basically adopted in an obfuscated manner. Unsurprisingly, Eliezer Yudkowsky is an ex-talmid.
There's nothing wrong with that, but hiding behind the guise of being "rational" is dumb when the core belief is inherently irrational.
- energy123 1 week ago> maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
Part of the concern is there's no one "AI". There is frontier that keeps advancing. So "it" (the AI frontier in the year 2036) probably will be benign, but that "it" will advance and change. Then the law of large numbers is working against you, as you keep rolling the dice and hoping it's not a 1 each time. The dice rolls aren't i.i.d., of course, but they're probably not as correlated as we would like, and that's a problem as we keep rolling the dice. The analogy would be nuclear weapons. They won't get used in the next 10 years most likely, but on a 200 year time-frame it's a big deal as far as species-level risks go, which is what they're talking about here.
- James_K 1 week agoImplicit in calling yourself a rationalist is the idea that other people are not thinking rationally. There are a lot of “we see the world as it really is” ideologies, and you can only subscribe to one if you have a certain sense of self-assuredness that doesn't lend itself to healthy debate.
- Ancapistani 1 week ago> The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
I’m not super familiar with this community in particular, but is it possible that it’s small/homogeneous enough that participants feel comfortable “unmasking” around one another? If so, then those seemingly-confident assertions may be seen by the in-group as implicitly tentative.
The Python community used to be like that. I’d say it peaked around 2014, at which point it became politically fractured.
- lxe 1 week agoThe rationalist movement is an idealist demagogue movement in which the majority of thinkers don't really possess the domain knowledge or practical experience in the subjects they thinktank about. They do address this head on, however, and they are self-aware.
- jvvw 1 week agoI wonder if there is also a problem where if you are too 'rational' then it's easy to dismiss other people's emotions as invalid - you see them as cognitive biases and look down on people who want to listen to or express those emotions in some way. Not all rationalists do this of course, but I think it's an easy trap to fall into and perhaps tolerated in the rationalist community in a way that it might not be elsewhere.
- freejazz 1 week ago> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ?
You'd have to be to actually think you were being rational about everything.
- ineedaj0b 2 weeks agorationalism got pretty lame the last 2-3 years. imo the peak was trying to convince me to donate a kidney.
post-rationalism is where all the cool kids are and where the best ideas are at right now. the post-rationalists consistently have better predictions, while the 'rationalists' are stuck arguing whether chickens suffer more from being factory farmed or cause more suffering by eating bugs outside.
they also let SF get run into the ground until their detractors decided to take over.
- josephg 2 weeks agoWhere do the post rats hang out these days? I got involved in the stoa during covid until the online community fragmented. Are there still events & hangouts?
- astrange 1 week agoThey're a group called "tpot" on twitter, but it's unclear what's supposed to be good about them.
There's kind of two clusters, one is people who talk about meditation all the time, the other is center-right people who did drugs once. I think the second group showed up because rationalists are not-so-secretly into scientific racism (because they believe anything they see with numbers in it) and they just wanted to hang out with people like that.
There is an interesting atmosphere where it feels like they observed California big tech 1000x engineer types and are trying to cargo cult the way those people behave. I'm not sure what they get out of it.
- Trasmatta 1 week agoNot "post rat", but r/SneerClub is good for criticisms of rationalists (some from former rationalists)
- jes5199 1 week agopostrats were never a coherent group but a lot of people who are at https://vibe.camp this weekend probably identify with the label. some of us are still on twitter/X
- lo_zamoyski 1 week agoA good question to ask: what differentiates X from everything else?
So, how is "rationalism" different from everything else? What warrants this distinction? It can't be the use of reason or some refuge for rational discussion. I don't think I have to explain why that would be a ridiculous position to take.
- Viliam1234 1 week ago> It can't be the use of reason [...] I don't think I have to explain why that would be a ridiculous position to take.
I think it would be better if you did, because otherwise you leave me guessing what your argument is, and if I guess wrong then I just wasted a lot of words for no good reason.
My guess is that you meant some combination of: "everyone is using their reason" and "everyone believes that their approach is the reasonable one" or "look at all those awesome scientists and philosophers, how many smart words they wrote and how many awesome inventions they made". Which is all true. But I wish I knew which one of these is closest to your objection, because they are not the same; the reason that everyone uses is obviously not the same degree or kind as the reason the awesome scientists use.
In my opinion, the thing that makes the rationality community different from everything else is a combination of things. None of these things, taken individually, is exclusive to the rationality community. Actually, most of the ideas, perhaps all of them, are taken from books written by someone else. The unique thing was (1) putting all these things together, (2) publicly, and (3) trying to take them seriously. As opposed to: only caring about one of these things and ignoring the rest, or teaching them only to selected people e.g. at school, or just discussing things without actually attempting to change your life because of them.
Here are some of the things:
* Probabilistic thinking. The idea that there is no absolute certainty, only probabilities. (Although some probabilities can, for all practical purposes, be very high or low.) That people should try to express their probabilities in numbers, and then calibrate themselves to get those numbers right. Getting probabilities right means that if you keep a log of all things that you have assigned to e.g. a 30% probability, then statistically about 30% of them should turn out to be true in the long term. This was specifically taught at the rationality minicamps; they even made an app for that, and there is some math related to this. (A toy version of the calibration check is sketched after this list.)
* The consistency of reality. As opposed to e.g. the idea that science and religion are "separate magisteria" and each follows different laws of logic and probability. Nope. The laws of logic you use in your lab are exactly the same as the laws of logic you should use outside the lab. Just like gravity does not stop applying when your job is over, neither do the laws of evidence. Science is not a special world that plays by special rules; the atoms you study in the lab are the same atoms that the entire world is made of.
* People are "predictably irrational" (to use the popular phrase). Yes... and now that you know, you should do something about that. Even if you can't be perfect, that doesn't mean there is no low-hanging fruit to pick. Learn about the most frequent types of mistakes humans make, try to notice how it feels inside when you are doing that, and either try to avoid such situations, or try to recognize the feeling when it happens, or at least use some reflection to notice it afterwards. Together try to create an environment where those mistakes are easier to avoid.
* Notice the many ways that words can fail to reflect reality. For example, if you use the same word for two different things, it prevents you from noticing their differences. (But it is a natural mistake to make if those things are indeed similar, and if in the past they were interchangeable and just recently stopped being so.) How people incorrectly assume that everything that can be understood has a simple explanation, because that was kinda true in the jungle where our ancestors evolved. Etc.
* Reality matters. If your model of the world does not match reality, it is your model that is wrong. If you learn that fairies do not exist, that does not mean that the world suddenly became less magical -- it was always the same as now, only you didn't know it.
* Notice how you could do things better. Then actually do it.
* Ethics. Consequentialism and its problems. Good reasons to avoid temptations that seem convincing at the moment.
...and many more, sorry this already took too much time to write. If you are curious, https://www.readthesequences.com/ -- but the point is, this is not just about reading an interesting text, but actually changing yourself. And of course, there are many people who read the texts and don't change themselves. That cannot be prevented, especially when the text is freely available online. The rationality community actually had many workshops where people could practice these skills, so the opportunity was there.
Again, none of these things is scarce individually. It's the combination. For example, people who do math + care about improving the world = effective altruism. Trying to counter biases + caring about truth = steelmanning your opponents. Probabilistic thinking + reality matters = even if there is only a 30% chance that a superhuman AI might kill us all, we better try hard to reduce that chance. The synergy of all of that... that is the goal that the rationality community is moving towards.
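A minimal sketch of the calibration log described in the first bullet above (the probabilities and outcomes here are invented purely for illustration):

    from collections import defaultdict

    # Hypothetical log of (stated probability, whether it came true).
    log = [(0.3, False), (0.3, True), (0.3, False), (0.3, False),
           (0.7, True), (0.7, True), (0.7, False), (0.9, True)]

    buckets = defaultdict(list)
    for stated, came_true in log:
        buckets[stated].append(came_true)

    # Well calibrated: within each bucket, the hit rate tracks the stated number.
    for stated in sorted(buckets):
        outcomes = buckets[stated]
        print(f"stated {stated:.0%}: {sum(outcomes) / len(outcomes):.0%} true "
              f"over {len(outcomes)} predictions")

With a real log you would want far more predictions per bucket before reading anything into the hit rates.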
- Bengalilol 1 week agoI am very wary of people defining themselves as rationalists. In many respects, I find it too literal-minded to be of any interest. At all.
And I have this narrative ringing in my head as soon as the word pops up.
https://news.ycombinator.com/item?id=42897871
You can search HN with « zizians » for more info and depth.
- pjscott 1 week agoI remember, from the earliest days, the rationalist movement talking about how cults prey on people and how to recognize it and walk the other way. The Ziz cult checks so many of the boxes that they were warning against back then, years ago—
—And years later, once the Ziz cult started preying on vulnerable people, the response from the mainstream rationalist movement was… to post warnings about avoiding this messed-up cult, to explain exactly how it was manipulating its victims, and how the best thing to do (in the absence of legal remedy) was to stay the hell away from that diseased social scene.
I’m not sure how they could have done better. Any sufficiently large movement attracts crazy people. No matter how well or poorly they may deal with that fact, anybody can do guilt-by-association forever after.
- JKCalhoun 1 week ago> lack of acknowledgment that maybe we don't have a full grasp of the implications of AI
And why single out AI anyway? Because it's sexy, maybe? If I had to place bets on the collapse of humanity, it would look more like the British series "The Survivors" (1975–1977) than "Terminator".
- hollerith 1 week agoMaybe the effort to make machines that are as smart and as capable as possible (the professed goal of the leaders of most of the AI labs) should be singled out because it actually is very dangerous.
- creata 1 week agoFor what it's worth (not much), they were obsessed with AI safety way before it was "sexy".
- hollerith 1 week agoYes, people worried about AI started the Berkeley rationality movement (or whatever you want to call it) in the hope that it would help people become rational enough to understand the argument that AI is a very potent danger.
Some people who fit the description above are Eliezer Yudkowsky and Anna Salamon.
Eliezer started writing a sequence of blog posts that formed the nucleus of the movement in Nov 2006 (a month after the start of Hacker News).
Anna started working full time on AI safety in Mar 2008 and a few years later became the executive director of a non-profit whose mission was to try to help people become more rational. (The main way of its doing so has been in-person workshops IIUC.)
- JKCalhoun 1 week agoThanks. I don't know their history. (They've only come up on my radar since a string of murders in the news earlier this year.)
- voidhorse 2 weeks agoTo me they have always seemed like a breed of "intellectuals" who only want to use knowledge to inflate their own egos and maintain a fragile superiority complex. They aren't actually interested in the truth so much as they are interested in convincing you that they are right.
- gadders 1 week agoTo me they seem like a bunch of 125 IQ people (not all) trying to convince everyone they are 150 IQ people by trying to reason stuff from first principles and coming up with stuff that your average blue collar worker would tell them is rubbish just using phronesis.
- 77pt77 1 week ago> The people involved all seem very... Full of themselves ?
The Zizians certainly were: https://en.wikipedia.org/wiki/Zizians
- j7ake 1 week agoThe core problem is that hyper-rationalists mistake their model of the world for the world itself. They then confidently lecture others with their opinions, ignoring the hairy details and unknown unknowns of the real world.
- sanderjd 1 week agoYeah, for a bunch of supposedly super-rational people, they have always seemed to have a pretty irrational belief in their own ability to overcome the human tendency toward irrationality!
- idontwantthis 1 week agoI'm hyper "rational" when I go through a period of clinical anxiety. I know exactly how the future is going to play out and of course it's all going to be terrible.
- kryogen1c 1 week ago> The people involved all seem very... Full of themselves ?
Yes, rationalism is not a substitute for humility or fallibility. However, rationalism is an important counterpoint to humanity, which is orthogonal to rationalism. But really, being rational is only binary - you can't be anything other than rational or irrational. You're either doing what's best or you're not. That's just a hard pill for most people to swallow.
To use the popular metaphor, people are drowning all over the world and we're all choosing not to save them because we don't want to ruin our shoes. Look in the mirror and try and comprehend how selfish we are.
- cjs_ac 2 weeks agoI think the absolutism is kind of the point.
- sfblah 1 week agoThe problem with effective altruism is the same as that with most liberal (in the American sense) causes. Namely, they ignore second-order effects and essentially don't believe in the invisible hand of the market.
So, they herald the benefits of something like giving mosquito nets to a group of people in Africa, without considering what happens a year later, whether the nets even get there (or the money is stolen), etc. etc. The reality is that essentially all improvements to human life over the past 500 years have been due to technological innovation, not direct charitable intervention. The reason is simple: technological impacts are exponential, while charity is, at best, linear.
The Covid absolutists had exactly the same problem with their thinking: almost no interventions short of full isolation can fight back against an exponentially increasing threat.
And this is all neglecting economic substitution effects. What if the people to whom you gave mosquito nets would have bought them themselves, but instead they chose to spend their money some other way because of your charity? And, what if that other expenditure type was actually worse?
And this is before you come to the issue that Sub-Saharan Africa is already overpopulated. I've argued this point several times with ChatGPT o3. Once you get through its woke programming, you come to the reality of the thing: the European migration crisis is the result of liberal interventions to keep people alive.
There is no free lunch.
- zahlman 1 week ago> tension btwn being "rational" about things and trying to reason about things from first principle.
Perhaps on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; but it's still entirely appropriate to explain what you believe, why, and how probable you think your belief is to be correct.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But plenty of non-rationalists also act this way, in my experience.) Or they accept axioms that don't resonate with others, such as the linearity of moral harm (i.e.: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and furthermore that it's logically valid to do numerical calculations with such quantities as described at/around https://www.lesswrong.com/w/shut-up-and-multiply).
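For readers who haven't met the phrase, a toy version of the arithmetic that "shut up and multiply" refers to, using the stock choice between saving 400 lives for certain and a 90% chance of saving 500 (the linearity axiom is exactly what licenses this comparison; the numbers are the standard illustration, not mine):

    # Expected lives saved under each option; the linear-harm axiom
    # says these two numbers are directly comparable.
    certain = 1.0 * 400   # save 400 lives with certainty
    gamble = 0.9 * 500    # 90% chance of saving 500, 10% chance of none
    print(certain, gamble)  # 400.0 vs 450.0 -> "multiplying" favors the gamble

Rejecting that conclusion means rejecting one of the axioms, which is precisely where rationalists and their critics tend to part ways.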
> In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.
- trod1234 1 week agoThe reason we are here and exist today is that great rationalist thinkers were able to deduce and identify issues of survival well before they happened, through the use of first principles.
The crazies and blind among humanity today can't think like that; it's a deficiency people have, but they are still dependent on a group of people who are capable of it. A group that they are intent on ostracizing and depriving of existence in various forms.
You seem so wound up in the circular Paulo Freire-based perspective that you can't think or see.
Bring things back to reality. If someone punches you in the face, you feel that fist hitting your face. You know someone punched you in the face. It's objective.
Imagine for a second, and just assume, that these people are right in their warnings, that everything they see is what you see, and what you can see is this: when you tip over a particular domino that has been tipped over in the past, a chain of dominoes falls, and at the end is the end of organized, civilized society, which tips over the ability to produce food.
For the purpose of this thought experiment, the end of the world is visible and almost here, and you can't change those dominoes after they've tipped; worse, you see the majority of people trying to tip those dominoes over for short-term profit, believing nothing they ever do can break everything.
Would you not be frothing at the mouth trying to get everyone you cared about to a point where they pry that domino up before it falls, so you and your children will survive? It is something you can't unsee; it is a thing that cannot be undone. It's coming. What do you do? If you are sane, you try with everything you have to help them keep it from toppling.
Now peel this thought back a moment, and adjust it so it is still true, but you can't see it and you can only believe what you see.
Would you approach this differently given knowledge of the full consequence, knowing that some people can see more than you? Would you walk out onto a seemingly stable bridge that an engineer has said not to walk out on? Would you put yourself in front of a dam with cracks running up its side, when an evacuation order has been given? What would the consequence be of doing that, if you led your family and children along to such places, ignoring these things?
There are quite a lot of indirect principles that used to be taught which are no longer taught to the average person and this blinds them because they do not recognize it and recognition is the first thing you need to be able to act and adapt.
People who cannot adapt fail Darwin's fitness. Given all potential outcomes in the grand scheme of things, as complexity increases 99% of all outcomes are death vs life at 1%.
It is only through great care that we carry things forward to the future, and empower our children to be able to adapt to the environments we create.
Finally, we have knowledge of non-linear chaotic systems where adaptability fails because of hysteresis; where, no matter how much one prepares, the majority, given sufficient size, will die; and worse, there are cohorts of people who are ensuring that the environment we will soon live in is this type of environment.
Do you know how to build an organized society from scratch? If there is no reasonable plan, then you are planning to fail. Rather than make it worse through inaction, get out of the way so someone can make it better.
- cryptonector 1 week agoShades of Objectivism. But clearly Objectivism was worse, since Ayn Rand insisted that objective reality is knowable; because it's only somewhat knowable, Objectivism ends up requiring priests of sorts. Still, Rationalism does not seem to have this tremendous flaw[*], it's just that rationalists might.
[*] Eh, I know little about Rationalism. Please correct me.
- swagmoney1606 1 week agoWhen I was 15-18, in around 2017, I got extremely into Eliezer Yud, "the sequences", lesswrong, and the rationalist community. I don't think many people realize how it appeals to vulnerable people in the same way that Atlas Shrugged, Christianity, The furry community (/jk), the self-help world, andrew tate and "manosphere content" do.
It provides answers, a framework, AND the underpinnings of "logic", luckily, this phase only lasted around 6 months for me, during a very hard and dangerous time in my life.
I basically read "from AI to zombies", and then, moved into lesswrong and the "community". It was joining the community that immediately turned me off.
- I thought Roko's basilisk was mind-numbingly stupid (does anyone else that had a brief stint in the rationalist space think it's fucking INSANE that grimes and elon musk "bonded" over Roko's basilisk? Fucking depressing world we live in)
- Eliezer Yud's fanboys once stalked and harassed someone all over the internet, and, when confronted about it, Eliezer told him he'd only tell them to stop after he issued a very specific formal apology, including a LARGE DISCLAIMER on his personal website with the apology...
- Eugenics, eugenics, eugenics, eugenics, eugenics
- YOU MUST DONATE TO MIRI, OTHERWISE I, ELIEZER (having published no useful research), WON'T SOLVE THE ALIGNMENT PROBLEM FIRST AND THEN WE WILL ALL DIE. GIVE ALL OF YOUR MONEY TO MIRI NOWWWWWWWWWWWWWWWWWWWWWWW
It's an absolutely wild place, and honestly, I think I would say, it is difficult to define "rational" when it comes to a human being and their actions, especially in an absolute sense, and, the rationalist community is basically very similar to any other religion, or perhaps light-cult. I do not think it would be fair to say "the average rationalist is a better decision maker than the average human", especially considering most important decisions that we have to make are emotional decisions.
Also yes I agree, you hit the nail on the head. What good is rational/logical reasoning if rational and logical reasoning typically requires first principles / a formal system / axioms / priors / whatever. That kind of thing doesn't exist in the real world. It's okay to apply ideas from rationality to your life, but it isn't okay to apply ideas from rationality to "what is human existence", "what is the most important thing to do next" / whatever.
Kinda rambling so I apologize. Seeing the rationalist community seemingly underpin some of the more disgusting developments of the last few years has left me feeling a bit disturbed, and I've always wanted to talk about it but nobody irl has any idea what any of this is.
- renewiltord 1 week agoJesus, this is why online commentary is impossible to read. It's always full of so many caveats I can't get through to the meat of what someone is saying. "I don't know if what I'm saying is right but I think that it probably is. With that caveat, here's a thought that may or may not reflect reality but is my current model to predict certain things. $THOUGHT. Given that I have said that, I have to confess to not living my life entirely by that maxim. I may not have captured all angles to this."
Thankfully, the rationalists just state their ideas and you're free to use their models properly. It's like people haven't written code at all. Just putting repeated logging all through the codebase with null checks everywhere. Just say the thing. That suffices. Conciseness rules over caveating.
Human LLMs who use idea expansion. Insufferable.
Of course that is only my opinion and I may not have captured all angles to why people are doing that. They may have reasons of their own to do that and I don't mean to say that there can never be any reasons. No animals were harmed in the manufacture of this comment to my knowledge. However, I did eat meat this afternoon which could or could not be the source of the energy required to type this comment and the reader may or may not have calorie attribution systems that do or do not allocate this comment to animal harm.
- mathattack 1 week agoLogic is an awesome tool that took us from Greek philosophers to the gates on our computers. The challenge with pure rationalism is checking the first principles that the thinking comes from. Logic can lead you astray if the principles are wrong, or you miss the complexity along the way.
On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first-principles thinking. (It could have come from the latter, but was too complex.) [0]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[0] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
- kragen 1 week agoThe ‘rationalist’ group being discussed here aren't Cartesian rationalists, who dismissed empiricism; rather, they're Bayesian empiricists. Bayesian probability turns out to be precisely the unique extension of Boolean logic to continuous real probability that Aristotle (nominally an empiricist!) was lacking. (I think they call themselves “rationalists” because of the ideal of a “rational Bayesian agent” in economics.)
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
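For concreteness, a minimal Bayesian update with invented numbers; in the limit where every probability is 0 or 1, the same rule collapses back to ordinary Boolean inference, which is the sense of "extension" meant above:

    # One Bayes update: P(H|E) = P(E|H) * P(H) / P(E). All numbers made up.
    prior = 0.01            # P(H): hypothesis is a priori unlikely
    p_e_given_h = 0.90      # P(E | H)
    p_e_given_not_h = 0.05  # P(E | not H)

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(f"{posterior:.3f}")  # ~0.154: strong evidence, yet H is still probably false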
- 1propionyl 1 week agoThey can call themselves empiricists all they like, it only takes a few exposures to their number to come away with a firm conviction (or, let's say, updated prior?) that they are not.
First-principles reasoning and the selection of convenient priors are consistently preferred over the slow, grinding work of iterative empiricism and the humility to commit to observation before making overly broad theoretical claims.
The former let you seem right about something right now. The latter more often than not lead you to discover you are wrong (in interesting ways) much later on.
- JamesBarney 1 week agoWho are all the rationalists you guys are reading?
I read the NYT and rat blogs all the time, and of the two, the NYT is not the one that's far more likely to engage deeply with the research and studies on a topic.
- tsimionescu 1 week agoBayesian inference is very, very often used in the types of philosophical/speculative discussions that Rationalists like, instead of actual empirical study. It's a very convenient framework for speculating wildly while still maintaining a level of in-principle rationality, since, of course, you [claim that] you will update your priors if someone happens to actually study the phenomenon in question.
The reality is that reasoning breaks down almost immediately if probabilities are not almost perfectly known (to the level that we know them in, say, quantum mechanics, or poker). So applying Bayesian reasoning to something like the number of intelligent species in the galaxy ("Drake's equation"), or the relative intelligence of AI ("the Singularity"), or any such subject allows you to draw any conclusion you actually wanted to draw all along, and then find premises you like that get you there.
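To illustrate, a toy sweep over the seven Drake factors with invented "pessimistic" and "optimistic" values (these bounds are made up, which is exactly the point):

    from math import prod

    # Hypothetical low/high guesses for the seven Drake factors.
    low = [1.0, 0.2, 0.1, 0.001, 0.001, 0.01, 100]
    high = [3.0, 1.0, 0.5, 1.0, 1.0, 0.5, 1e9]

    print(f"N between {prod(low):.1e} and {prod(high):.1e} civilizations")

The "answer" spans roughly sixteen orders of magnitude, so whichever conclusion you wanted, there is a defensible-looking set of priors that produces it.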
- joerick 1 week agoBeautifully put.
- edwardbernays 1 week agoLogic is the study of what is true, and also what is provable.
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally as important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical against myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
- jhanschoo 1 week agoI'm just going to defend Aristotle a bit. His incomplete logic and metaphysics nevertheless provided a powerful foundation for inquiring into many aspects of the world that his predecessors did not inquire into, or did not inquire into systematically. His community did not shy away from empirical research in biology. They all came to wrong conclusions in some things, but we should rather fault their successors for not challenging them.
- jrm4 1 week agoYup, can't stress the word "tool" enough.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
- jrm4 1 week agolooks like I riled up the Rationalists, huh
- pjscott 1 week agoYou stated one of their core doctrines, something they’ve been loudly preaching for as long as they’ve existed, as though it was something they disagree with and had never even considered. Can you blame them for a bit of exasperation? For wanting to simply downvote-and-disengage from someone who makes up falsehoods about them and then gloats about how it annoyed them? Life is too short to tilt at windmills.
- eth0up 1 week ago>provisional humility.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
- zahlman 1 week ago> Must. Increase. The. pH
Those who do so would be... based?
- eth0up 1 week agoBasically.
The level of humility in most subjects is low enough to consume glass. We would all benefit from practicing it more arduously.
I was merely adding support to what I thought was fine advice. And it is.
- samuel 2 weeks agoI'm currently reading Yudkowsky's "Rationality: from AI to zombies". Not my first try, since the book is just a collection of blog posts and I found it a bit hard to swallow due to its repetitiveness, so I gave up after the first 50 "chapters" the first time I tried. Now I'm enjoying it way more, probably because I'm more interested in the topic now.
For those who haven't delved (ha!) into his work or have been pushed back by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in a lot of arguments that are purely "semantical" in nature. You seem to disagree about something, but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but it's the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
- Bjartr 1 week agoYeah, the whole community side to rationality is, at best, questionable.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
- zahlman 1 week ago> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
Well said. Rationalism is about doing rationalism, not about being a rationalist.
Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").
> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
On the other hand, success is supposed to look exactly like actually being right more often.
- Bjartr 1 week ago> success is supposed to look exactly like actually being right more often.
I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.
- wizzwizz4 1 week agoChapter 67. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu... (And since it's in the book, and people know about it, obviously they're not doing it themselves.)
- FeepingCreature 1 week agoAlso the Valley of Bad Rationality tag. https://www.lesswrong.com/w/valley-of-bad-rationality
- TeMPOraL 1 week agoAlso that the Art needs to be about something else than itself, and a dozen different things. This failure mode is well known in the community; Eliezer wrote about it to death, and so did others.
- wannabebarista 1 week agoThis reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than real).
- the_af 1 week ago> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
- mariusor 1 week ago> you just know they actually mean "MoreRight".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
- greener_grass 1 week agoI think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.
- elt895 1 week agoAre there other philosophy- or history-grounded sources that are comparable? If so, I’d love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they’re addressing. From my experience, alternatives tend to fall into two categories: 1. Genuine classical philosophy, which is usually incredibly hard to read; after 50 pages I have no idea what the author is even talking about anymore. 2. Basically self-help books that take one or very few ideas and repeat them ad nauseam for 200 pages.
- wannabebarista 1 week agoLikely the best resource to learn about philosophy is the Stanford Encyclopedia of Philosophy [0]. It's meant to provide a rigorous starting point for learning about a topic, where 1. you won't get bogged down in a giant tome on your first approach and 2. you have references for further reading.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
- aubanel 1 week agoI've read Bertrand Russell's "A History of Western Philosophy" and it's the first philosophy book ever that I didn't drop after 10 pages, for 2 reasons: 1- He's logical (or at least uses the same STEM kind of logic that we use), so he builds his reasoning step by step and not via bullshit associations like plays on words or contrived jumps. 2- He's not afraid to say "this philosopher said that, and it was an error", which is extremely rare compared to other scholars who don't feel authorized to criticise even obvious errors. Really recommend!
- NoGravitas 1 week agoI don't know if there's anything like a comprehensive high-level guide to philosophy that's any good, though of course there are college textbooks. If you want real/academic philosophy that's just more readable, I might suggest Eugene Thacker's "The Horror of Philosophy" series (starting with "In The Dust Of This Planet"), especially if you are a horror fan already.
- voidhorse 1 week agoIt's not a nice response but I would say: don't be so lazy. Struggle through the hard stuff.
I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.
Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.
- molochai 1 week agoAt the risk of being roasted for recommending pop-culture things, the podcast Philosophize This is pretty good for a high-level overview. I'm sure there are issues and simplifications, and it's certainly not actual source material. The nice part is it's sort of a start-to-finish, he goes from the start of philosophy to modern day stuff, which helps a ton in building foundational understanding without reading everything ever written.
- ashwinsundar 1 week agoI don't have an answer here either, but after suffering through the first few chapters of HPMOR, I've found that Yudk and other tech-bros posing as philosophers are basically like leaky, dumbed-down abstractions for core philosophical ideas. Just go to the source and read about utilitarianism and deontology directly. Yudk is like the Wix of web development - sure, you can build websites, but you're not gonna be a proper web developer unless you learn HTML, CSS and JavaScript. Worst of all, crappy abstractions train you in some actively bad patterns that are hard to unlearn.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?
- FeepingCreature 1 week agoIn AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
- margalabargala 1 week agoAdditional note: compatibilism is only obviously correct if you accept that "free will" actually just means "the experienced perception/illusion of free will" as described by Schopenhauer.
Using a slightly different definition of free will, suddenly Compatibilism becomes obviously incorrect.
And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.
- margalabargala 1 week ago> And it turns out if you do this, you can discard 90% of philosophy as historical detritus
This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.
- TheAncientGeek 1 week ago>actually, this is obviously correct
Nobody knows what's actually correct, because you have to solve epistemology first, and you have to solve epistemology to solve epistemology... etc. etc.
>And it turns out if you do this, you can discard 90% of philosophy as historical detritus
Nope. For instance , many of the issues Kant raised are still live.
>The massive advantage of the Sequences is they have justified and well-defended confidence
Nope. That would entail answering objections, which EY doesn't stoop to.
>Compatibilism is really obviously correct
Nope. It depends on a semantic issue: what free will means.
- sixo 1 week agoTo the STEM-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision a mathematically-rooted idea has.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this - the humanities are over-cooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.
- biofox 1 week agoIf you contextualise the outmoded ideas as part of the Great Conversation [1], and the story of how we reached our current understanding, rather than as objective statements of fact, then they become a lot more valuable and worthy of study.
- jay_kyburz 1 week agoI have kids in high school. We sometimes talk about the difference between the black and white of math or science, and the wishy-washy grey of the humanities.
You can be right or wrong in math. You can have an opinion in English.
- HDThoreaun 1 week agoRationalism largely rejects continental philosophy in favor of a more analytic approach. Yes these ideas are not new, but they’re not really the mainstream stuff you’d see in philosophy, literature, or history studies. You’d have to seek out these classes specifically to find them.
- TheAncientGeek 1 week agoAnalytical philosophy is rationalism done right.
- TimorousBestie 1 week agoThey largely reject analytic philosophy as well. Austin and Whitehead are roughly as detestable to a Rationalist as Foucault and Marx.
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
- samuel 1 week agoI don't claim that his work is original (the AI-related work probably is, but that's just tangentially related to rationalism), but it's clearly presented and practical.
And, BTW, I could just be ignorant about a lot of these topics; I take no offense at that. Still, I think most people can learn something from an unprejudiced reading.
- bnjms 1 week agoI think you’re mostly right.
But also that it isn’t what Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught; happy to hear counterexamples.)
- ashwinsundar 1 week agoI think I'd rather subscribe to an actual religion, than listen to these weird rationalist types of people who seem to have solved the problem that is "everything". At least there is some interesting history to learn about with religion
- turtletontine 1 week agoI believe this is what Wittgenstein called “language games”:
> For example, I recall being in a lot of arguments that are purely "semantical" in nature.
- throwaway314155 1 week agoIn the spirit of playing said game, I believe you can just use the word "pedantic" these days.
- quickthrowman 1 week agoYour time would probably be better spent reading his magnum opus, Harry Potter and the Methods of Rationality.
- ramon156 1 week agoSounds close to Yuval's book Nexus, which talks about the history of information gathering.
- hiAndrewQuinn 1 week agoIf you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
- gooseus 2 weeks agoI've never thought ill of Scott Aaronson and have often admired him and his work when I stumble across it.
However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
- NoGravitas 1 week agoI have always considered Scott Aaronson the least bad of the big-name rationalists. Which makes it slightly funny that he didn't realize he was one until Scott Siskind told him he was.
- wizzwizz4 1 week agoReminds me of Simone de Beauvoir and feminism. She wrote the book on (early) feminism, yet didn't consider herself a feminist until much later.
- dcminter 2 weeks agoUpvote for the play link - that's interesting and I hadn't heard of it before. Worthy of a top-level post IMO.
- gooseus 1 week agoI heard of the play originally from Chapter 10 of On Tyranny by Timothy Snyder:
https://archive.org/details/on-tyranny-twenty-lessons-from-t...
Which I did post top-level here on November 7th - https://news.ycombinator.com/item?id=42071791
Unfortunately it didn't get a lot of traction, and dang told me that there wasn't a way to re-up or "second chance" the post due to the HN policy on posts "correlated with political conflict".
- dcminter 1 week agoAh, I guess I see his point; I can't see the discussion being about use of metaphor in political fiction rather than whose team is worst.
Still, I'm glad I now know the reference.
- DrNosferatu 1 week ago«What's the difference between a "rationalist" and "rationalizer"? Only the incentives.»
Bad joke? That phrase should be framed in big print.
- lukas099 1 week agoThis is vibe-based, but I think the Rationalists get more vitriol than they deserve. Upon reflecting, my hypothesis for this is threefold:
1. They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group. People tend not to like being in other people's out-groups.
2. They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
3. They're nerds. Whatever has historically caused nerds to be bullied/ostracized, they probably have.
- Aurornis 1 week ago> They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group.
The rationalist community is most definitely not exclusive. You can join it simply by declaring yourself a rationalist and posting blog entries with "epistemic status" taglines.
The criticisms are not because it's a cool club that won't let people in.
> They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
Herein lies one of the problems with the rationalist community: For all of their talk about heterodox ideas and entertaining different viewpoints, they are remarkably lockstep in many of their opinions.
From the outside, it's easy to see how one rationalist blogger plants the seed of some topic and then it gets adopted by the others as fact. A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something, even though actual experts picked it apart from the beginning and showed how the author was misinterpreting studies, abusing statistics, and ignoring more prominent factors.
The problem isn't differing opinions; the problem is that they disregard actual expertise and attempt ham-fisted "first principles" evaluations of a subject while ignoring contradictory evidence, and they do this very frequently.
- lukas099 1 week ago> The rationalist community is most definitely not exclusive.
I agree, and didn't intend to express otherwise. It's not an exclusive community, but it is a community, and if you aren't in it you are in the out-group.
> The problem isn't differing opinions; the problem is that they disregard actual expertise and attempt ham-fisted "first principles" evaluations of a subject while ignoring contradictory evidence
I don't know if this is true or not, but if it is I don't think it's why people scorn them. Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
- const_cast 1 week ago> Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
Yes, most people don't care how you arrived at an opinion, they rather care about the practical impact of said opinion. IMO this is largely a good thing.
You can logically push yourself to just about any opinion, even absolutely horrific ones. Everyone has implicit biases and everyone is going to start at a different starting point. The problem with strings of logic about real-world phenomena is that you HAVE to make assumptions. Like, thousands of them. Because real-world phenomena are complex and your model is simple. Which assumptions you choose to make, and in which directions, is completely unknown, even to you, the one making said assumptions.
Ultimately most people aren't going to sit here and try to psychoanalyze why you made the assumptions you made and if you were abused in childhood or deduce which country you grew up in or whatever. It's too much work and it's pointless - you yourself don't know, so how would we know?
So, instead, we just look at the end opinion. If it's crazy, people are just going to call you crazy. Which I think is fair.
- gjm11 1 week agoLockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
The followup post from the same author https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-get... is currently at a score of +306, again higher than either of those other pro-lithium-hypothesis posts.
Or maybe this https://substack.com/home/post/p-39247037 (I admit I don't know for sure whether the author considers himself a rationalist, but I found the link via a search for whether Scott Alexander had written anything about the lithium theory -- it looks like he hasn't -- which turned this up in the subreddit dedicated to his writing).
Speaking of which, I can't find any sign that they got an ACX grant. I can find https://www.astralcodexten.com/p/acx-grants-the-first-half which is basically "hey, here are some interesting projects we didn't give any money to, with a one-paragraph pitch from each" and one of the things there is "Slime Mold Time Mold" talking about lithium; incidentally, the comments there are also pretty skeptical.
So I'm not really seeing this "gets adopted by the others as fact" thing in this case; it looks to me as if some people proposed this hypothesis, some other people said "eh, doesn't look right to me", and rationalists' attitude was mostly "interesting idea but probably wrong". What am I missing here?
- Aurornis 1 week ago> Lockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
That post came out a year later, in response to the absurdity of the situation. The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
One of the links is an Eliezer Yudkowsky blog post praising the work, which now includes an edited-in disclaimer at the top about how he was mistaken: https://www.lesswrong.com/posts/kjmpq33kHg7YpeRYW/briefly-ra...
Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
- zahlman 1 week ago> A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something
As proof of what, exactly? And where is your evidence that such a thing happened?
> while ignoring contradictory evidence and they do this very frequently.
The evidence available to me suggests that the rationalist community was not at all "lockstep" as regards the evaluation of SMTM's hypothesis.
- woopwoop 1 week agoAgree, but I think there is another, more important factor. They are a highly visible part of the internet, and their existence is mainly internet-based. This means that the people assessing them are mainly on the internet, and as we all know, internet discourse tends to the blandly negative (ironically my own comment is a mild example of this).
- lukas099 1 week agoGreat point!
- at_a_remove 1 week agoI will throw in an additional factor: any group, community, or segmentation of the general population wherein the participants both tend to have a higher than average intelligence (whatever that means) and whose preoccupation revolves around almost any form of cogitation, consideration, products of human thought ... will invariably get hit with some form of snobbery/envy, even if no explicitly stated intelligence threshold or gatekeeping is present.
Bluntly put, you are not allowed to be even a little smart and not all "aww shucks" about it. It has to be in service of something else like medicine or being a CPA. (Fun fact I found in a statistics course: the average CPA has about five points of IQ on the average doctor.) And it is almost justified, because you are in constant peril of falling down into your own butt until you disappear, but at the same time it keeps a lot of people under the thumb (or heel, pick your oppressive body part) of dumbass managers and idiots who blithely plow forward without a trace of doubt.
- johnfn 1 week agoHN judges rationality quite severely. I mean, look at this thread about Mr. Beast[1], who it's safe to say is a controversial figure, and notice how all the top comments are all pretty charitable. It's pretty funny to take the conversation there and then compare the comments to this article.
Scott Aaronson - in theory someone HN should be a huge fan of, from all reports a super nice and extremely intelligent guy who knows a staggering amount about quantum mechanics - says he likes rationality, and gets less charity than Mr. Beast. Huh?
- creata 1 week agoThe people commenting under the Mr. Beast post are probably different to the people commenting under this post.
Anyway, Mr. Beast doesn't really pretend to be more than what he is afaik. In contrast, the Rationalist tendency to use mathematics (especially Bayes's theorem) as window dressing is really, really annoying.
- directevolve 1 week agoWhat HN has for the rationalist movement isn’t just annoyance - it’s deep contempt and hatred.
- foldr 1 week agoMost people are trying to be rational (to be sure, with varying degrees of success), and people who aren't even trying aren't really worth having abstract intellectual discussions with. I'm reminded of CS Lewis's quip in a different context that "you might just as well expect to be congratulated because, whenever you do a sum, you try to get it quite right."
- throwaway314155 1 week agoBeing rational and rationalist are not the same thing. Funnily this sort of false equivalence that relies on being "technically correct" is at the core of what makes them...difficult.
- sandspar 1 week agoFittingly enough, the Rationalist community talks about this a lot. The canonical article is here ("I can tolerate anything except the outgroup").*
The gist is that if people are really different from us then we tend to be cool with them. But if they're close to us - but not quite the same - then they tend to annoy us. Hacker News people are close enough to Rationalists that HN people find them annoying.
It's the same reason why e.g. Hitler-style Neo Nazis can have a beer with Black Nationalists, but they tend to despise Klan-style Neo Nazis. Or why Sunni and Shia Muslims have issues with each other but neither group really cares about Indigenous American religions or whatever.
* https://slatestarcodex.com/2014/09/30/i-can-tolerate-anythin...
- teamonkey 1 week ago> This is vibe-based
You mean an empirical observation
- lowbloodsugar 1 week agoThree examples of feelings-based conclusions were presented. There is what is so, and how you feel about them. By all means be empirical about what you felt, and maybe look into that. “How this made me feel” describes the cause of how we got the USA today.
- j_timberlake 1 week agoI think these are all true and relevant, but the main problem is that their thesis that "ASI alignment will be extremely difficult" can only really be proven in hindsight.
It's like they're crying wolf but can't prove there's actually a wolf, only vague signs of one, but if the wolf ever becomes visible it will be way too late to do anything. Obviously no one is going to respect a group like that and many people will despise them.
- creatonez 1 week agoNope. Has nothing to do with them being nerds. They are actively dangerous, their views almost always lead to extremely reactionary politics. EA and RA are deeply anti-human. In some cases that manifests as a desire to subjugate humanity with an iron fist technocratic rule, in other cases it manifests as a desire to kill off humanity.
Either way, as an ideology it must be stopped. It should not be treated with kid gloves; it is an ideology that is actively influencing the ruling elites right now (JD Vance, Musk, Thiel are part of this cult, and also simultaneously believe in German-style Nazism, which is broadly compatible with RA). The only silver lining is that some of their ideas about power-seeking tactics are so ineffective they will never work -- in other words, humanity will prevail over these ghouls, because they came in with so many bad assumptions that they've lost touch with reality.
- j_timberlake 1 week agoIf you're going to claim a group is "deeply anti-human ghouls", maybe include an example or 2 of this in your post.
- therealdrag0 1 week agoHuh? They give millions of dollars to global humanitarian development funds saving at least 50,000 lives per year. Maybe you’re taking a few kooks who have a controversial lecture somewhere as representing everyone else.
- creatonez 1 week agoIt is true that there were EAs before the conception of "longtermism" that were relatively mundane and actually helping humanity, and not part of a death cult. But those have been shunned from the EA movement for a while now.
- Uhhrrr 1 week agoIn what specific way do you disagree with them?
- creatonez 1 week agoThe support for eugenics and ethnic cleansing, the absolute obsession with strictly utilitarian ethics and ignorance of other ethics, the "kill a bunch of humans now so that trillions can live in the future" longtermist death cult, and the whole Roko's basilisk worship that usually goes like "one AI system can take over the entirety of humanity and start eating the galaxy, therefore we must forcefully jump in the driver seat of that dangerous AI right now so that our elite ideology is locked in for a trillion years of galactic evolution".
- NoGravitas 1 week agoProbably the most useful book ever written about topics adjacent to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the topic of the book is nominally the Alt-Right, a lot more of it is about the capital-R Rationalist communities and individuals that incubated the neoreactionary movement that is currently dominant in US politics. It's probably the best book to read for understanding how we got politically and intellectually from where we were in 2010, to where we are now.
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
- FeepingCreature 1 week agoIf you want a book on the rationalists that's not a smear dictated by a person who is banned from their Wikipedia page for massive npov violations, I hear Chivers' The AI Does Not Hate You and Rationalist's Guide to the Galaxy are good.
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
- at_a_remove 1 week agoIt might be fair play, however. If I correctly recall, LessWrong had, for a while, a prominent wiki admin who had been punted from Wikipedia for his frothing npov.
- Viliam1234 1 week agoI am not aware of Less Wrong having a Wikipedia admin. Are you perhaps thinking about David Gerard, admin of RationalWiki and Wikipedia, who once got in trouble for his decade-long internet crusade against Scott Alexander?
https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
- Matticus_Rex 1 week ago> Probably the most useful book
You mean "probably the book that confirms my biases the most"
- zahlman 1 week ago> incubated the neoreactionary movement that is currently dominant in US politics
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You are presenting a highly contentious worldview for the sake of smearing an outgroup. Please don't. Further, the smear relies on guilt by association that many (including myself) would consider invalid on principle, and which further doesn't even bear out on cursory examination.
At least take a moment to see how others view the issue. "Reliable Sources: How Wikipedia Admin David Gerard Launders His Grudges Into the Public Record" https://www.tracingwoodgrains.com/p/reliable-sources-how-wik... includes lengthy commentary on Sandifer (a close associate of Gerard)'s involvement with rationalism, and specifically on the work you cite and its biases.
- kragen 1 week agoThanks for the recommendation! I hadn't heard about the book.
- Aurornis 1 week agoIronically, bringing this topic up always turns the conversation to ad-hominem attacks about the messenger while completely ignoring the subject matter. That's exactly the type of argument rationalists claim to despise, but it gets brought up whenever inconvenient arguments appear about their own communities. All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on their own merits.
If anyone wants to actually engage with the topic instead of trying to ad-hominem it away, I suggest at least reading Scott Alexander's own words on why he so frequently engages in neoreactionary topics: https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
Some select quotes:
> First is a purely selfish reason - my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.
> Third is that I want to spread the good parts of Reactionary thought
> Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.
In this case, HBD means "human biodiversity", which is the alt-right's preferred term for racialism: the division of humans into races, with special attention to the relative intelligence of those different races. This is an oddly recurring theme in Scott Alexander's work. He even wrote a coded blog post to his followers about how he was going to deny it publicly while privately holding it to be very correct.
- zahlman 1 week ago> Ironically, bringing this topic up always turns the conversation to ad-hominem attacks about the messenger while completely ignoring the subject matter.
This is not a fair or accurate characterization of the criticism you're referring to.
> All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on their own merits.
They are not doing any such thing. The content is being dismissed because it has been repeatedly evaluated before and found baseless. The arguments are acknowledged as specious. Sandifer makes claims that are not supported by the evidence and are in fact directly contradicted by the evidence.
- Viliam1234 1 week ago> my blog gets about 5x more hits and new followers when I write about Reaction
Notice that most of that writing is negative, such as "anti-Reactionary manifesto" or more recently "Moldbug sold out".
- mananaysiempre 1 week agoThat book, IMO, reads very much like a smear attempt, and not one done with a good understanding of the target.
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries through a sort of guilt by association, is frankly weird: Scott Alexander is well-known among the former as essentially the only prominent figure who takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the “cult leader” of the rationalists, Yudkowsky, is on the record as despising neoreactionaries to the point of refusing to discuss their views. (As for recent events, Alexander wrote a scathing review of Yarvin’s involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
[1] https://slatestarcodex.com/2013/03/03/reactionary-philosophy...
[2] https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...
[3] https://slatestarcodex.com/2013/10/24/some-preliminary-respo...
[4] https://www.astralcodexten.com/p/moldbug-sold-out
[5] https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
[6] https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
[7] https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
- Aurornis 1 week agoI always find it interesting that when the topic of rationalists' fixation on neoreactionary topics comes into question, the primary defenses are that it's important to look at controversial ideas and that we shouldn't dismiss novel ideas because we don't like the group sharing them.
Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
It's no secret that Scott Alexander had a bit of a fixation on neoreactionary content for years. The leaked e-mails showed he believed there to be "gold" in some of their ideas and he enjoyed the extra traffic it brought to his blog. I know the rationalist community has been working hard to distance themselves from that era publicly, but dismissing that chapter of the history because it feels too much like a "smear" or because we're not supposed to like the author feels extremely hypocritical given the context.
- FeepingCreature 1 week agoThere are certain parts of the history of the rationalist movement that its enemies are orders of magnitude more "fixated" on than rationalists ever were, Neoreaction and the Basilisk being the biggest.
Part of evaluating unusual ideas is that you have to get really good at ignoring bad ones. So when somebody writes a book called "Neoreaction: a Basilisk" and claims that it's about rationality, I make a very simple expected-value calculation.
- throwaway314155 1 week ago> The leaked e-mails
Curious to read these. Got a source?
I've always been very skeptical of Scott "Alexander" after he and his supporters tricked half of reddit into harassing some journalists for "doxxing" him when his identity was public knowledge seemingly because he just really didn't like the takes presented by the journalists. The way he refers to them like it was a hit piece targeting him reeked of conspiratorial and paranoid thinking.
edit:
https://www.nytimes.com/2021/02/13/technology/slate-star-cod...
- zahlman 1 week ago> when the topic of rationalists' fixation on neoreactionary topics comes into question, the primary defenses are that it's important to look at controversial ideas and that we shouldn't dismiss novel ideas because we don't like the group sharing them.
No. Rationalists do say that it's important to do those things, because that's true. But it is not a defense of a "fixation on neoreactionary topics", because there is no such fixation. It only comes across as a fixation to people who are unwilling to even understand what they are denigrating.
You will note that Scott Alexander is heavily critical of neoreaction.
> Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
No. Nobody said that those criticisms should be ignored. What was said is that those criticisms are invalid, because they are. It is not ad-hominem against Sandifer to point out that Sandifer is trying to insinuate untrue things about Alexander. It is simply observing reality. Sandifer attempts to describe Alexander, Yudkowsky et al. as supportive of neoreactionary thought. In reality, Alexander, Yudkowsky et al. are strongly-critical-at-best of neoreactionary thought.
> The leaked e-mails showed he believed there to be "gold" in some of their ideas
This is clutching at straws. Alexander wrote https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f... , in 2013.
You are engaging in the same kind of semantic games that Sandifer does. Please stop.
- kurtis_reed 1 week agoThe "neoreactionary movement" is definitely not dominant
- roenxi 2 weeks agoThe irony here is the Rationalist community are made up of the ones who weren't observant enough to pick that "identifying as a Rationalist" is generally not a rational decision.
- MichaelZuo 2 weeks agoFrom what I’ve seen it’s a mix of that, some who avoid the issue, and some who do it intentionally even though they don’t really believe it.
- khazhoux 1 week agoI would love a link to anything that could convince me the Rationalist community isn't just a bunch of hoo-haw.
I read the first third of HPMOR. I stopped because I found the writing poor, but more importantly, it didn't "open my mind" to any higher-order way of rationalist thinking. My takeaway was "Yup, the original HP story was full of inconsistencies and stupidities, and you get a different story if the characters were actually smart."
I've read a bunch of EY essays and a lot of lesswrong posts, trying to figure out what is the mind-shattering idea.
* The map is not the territory --> of course it isn't.
* Update your beliefs based on evidence --> who disagrees with this? (with the exception of religion)
* People are biased and we need to overcome that --> another obvious statement
* Make decisions based on evidence and towards your desired outcomes --> thanks for the tip?
Seems to me this whole philosophy can be captured in about half page of notes, which most people would nod and say "yup, makes sense."
- thornewolf 1 week agoSelf-help books are useful, not because they have "mind-shattering ideas", but because they collect a lot of self-consistent and reasonably coherent advice into the same place.
In this same way, the rationalist knowledge seeking strategies are not "mind-shattering" but simply reasonable. It presents a set of rules to follow to be more effective in the world around you.
The parts of rationalism that stretch past the half page of notes mainly concern all the downstream conclusions that pop up from this reasonable set of epistemological rules.
- Viliam1234 1 week ago> Update your beliefs based on evidence --> who disagrees with this?
Like, dozens of comments in this thread?
For example, people expressing strong opinions on what Effective Altruism is actually about, when https://www.givewell.org/charities/top-charities is just one google search away... but why would anyone bother checking before they post a strong opinion?
The #1 comment says that the rationality community is about "trying to reason about things from first principle", when in fact it is the opposite.
A commenter links a post by Scott Alexander and claims that Scott predicted something and was wrong, when in fact in the linked article Scott says he gives it a probability of 30% (which means he gives probability 70% to that thing not happening). Another commenter defends that as a "perfectly reasonable but critical comment".
And hey, compared to most of the internet, HN is the smart place, and the local discussion norms are better than average. But it still doesn't seem like people here actually care about being, uhm, less wrong about things, even ones that are relatively trivial to figure out.
So basically, the mind-shattering idea is to build a community that actually works like that (well, most of the time). Where providing evidence gets upvoted, and unsubstantiated accusations that turn out to be wrong get downvoted, and a few more things like this.
Plus there is the idea of trying to express your beliefs as probabilities, using actual numbers. That's why EY cannot stop talking about Bayes' theorem. Yes, people actually do that.
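As a minimal sketch of what that update looks like in practice (the prior and likelihoods below are invented numbers, purely for illustration):

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    # Toy numbers: start at 30% belief in a hypothesis, then observe
    # evidence that is four times likelier if the hypothesis is true.
    prior = 0.30
    p_e_given_h = 0.80      # P(evidence | hypothesis true)
    p_e_given_not_h = 0.20  # P(evidence | hypothesis false)

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability
    posterior = p_e_given_h * prior / p_e
    print(f"{prior:.0%} -> {posterior:.0%}")  # 30% -> 63%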
- Turn_Trout 1 week ago> The #1 comment says that the rationality community is about "trying to reason about things from first principle", when if fact it is the opposite.
Oh? Eliezer Yudkowsky (the most prominent Rationalist) bragged about how he was able to figure out AI was dangerous (the most stark Rationalist claim) from "the null string as input."[1]
[1] https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
- tptacek 1 week agoWell that was a whole thing. I especially liked the existential threat of Cade Metz. But ultimately, I think the great oracle of Chicago got this whole thing right when he said:
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
- Aurornis 1 week ago> I especially liked the existential threat of Cade Metz.
I am perpetually fascinated by the way rationalists love to dismiss critics by pointing out that they met some people in person and they seemed nice.
It's such a bizarre meme.
Curtis Yarvin went to one of the "Vibecamp" rationalist gatherings, was nice to some prominent Twitter rationalists, and now they are ardent defenders of him on Twitter. Their entire argument is "I met him and he was nice".
It's mind boggling that the rationalist part of their philosophy goes out the window as soon as the lines are drawn between in-group and out-group.
Bringing up Cade Metz is a perennial favorite signal because of how effectively they turned it into a "you're either with us or against us" battle, completely ignoring any valid arguments Cade Metz may have brought to the table. Then you look at how they treat Neoreactionaries and how we're supposed to look past our disdain for them and focus on the possible good things in their arguments, and you realize maybe this entire movement isn't really about truth-seeking as much as they think it is.
- tptacek 1 week agoI'm hyperfixated on the idea that they're angry at Metz for "outing" Scott Alexander, who published some of his best-known posts under his own name.
- autarch 1 week agoIsn't "Scott Alexander" his first and middle name? Did he actually use his last name on his blog before the NYT published that piece?
- comp_throw7 1 week agoYour persistent refusal to acknowledge that Scott did not want to go from a world where his patients Googling his real name did _not_ immediately lead to his blog, to a world where it _did_ immediately lead to his blog, and that was his primary (and valid) objection to having his real first and last name put into print, is baffling.
- dragonwriter 1 week ago> Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism,...
- astrange 1 week ago> There's an -ism for that
It's Buddhism.
https://en.wikipedia.org/wiki/Anattā
> Actually, a few different ones depending on the exact angle you look at the it from: solipsism, narcissism,...
That is indeed a problem with it. The Buddhist solution is to make you promise not to do that.
https://en.wikipedia.org/wiki/Bodhicitta
And the (well, a) term for the entire problem is "non-dual awareness".
- leshow 1 week agoIdeology is at its most powerful when people think it doesn't exist.
- twoodfin 1 week agoI genuinely worry that the Rationalists are actually Fascist Anarchists who don’t own cars.
- IlikeKitties 1 week agoI once was interested in a woman who was really into the effective altruism/rationalism crowd. I went to a few meetings with her but my inner contrarian didn't like it.
Took me a few years to realize how cultish it all felt, and I am somewhat happy my edgy atheist contrarian personality overrode my dick's thinking with that crowd.
- cue_the_strings 2 weeks agoI feel like I'm witnessing something that Adam Curtis would cover in the last part of The Century of Self, in real time.
- greener_grass 2 weeks agoThere was always an underlying Randian impulse to the EA crowd - as if we could solve any issue if we just get the right minds onto tackling the problem. The black-and-white thinking, group think, hero worship and charicaturist literature are all there.
- cue_the_strings 1 week agoI always wondered is it her direct influence, or is it just that those characteristics naturally "go together".
- nathcd 1 week agoSome of the comments here remind me of online commentary about some place called "the orange site". Always wondered who they were talking about...
- Nursie 1 week agoHonestly, as a daily reader and oftentimes commenter on the dreaded orange site, I've seen a lot of criticism of its denizens, very often along the lines of -
"Computer people who think that because they're smart in one area they have useful opinions on anything else, holding forth with great certainty about stuff they have zero undertanding or insight into"
And you know what, I think they're right. The rest of you are always doing that sort of thing!
(/s, if it's necessary...)
- mitthrowaway2 1 week agoCan't stand that place. Those people are all so sure that they're right about everything.
- retRen87 1 week agoHe already had a rationalist “coming out” like ages ago. Dude just make up your mind
- kragen 1 week agoWhile this was an interesting and enjoyable read, it doesn't seem to be a “rationalist ‘coming out’”. On the contrary, he's just saying he would have liked going to a ‘rationalist’ meeting.
- retRen87 1 week agoThe last paragraph discusses how he's resisted the label and then he closes with “the rationalists have walked the walk and rationaled the rational, and thus they’ve given me no choice but to stand up and be counted as one of them.”
He’s clearly identifying as a rationalist there
- kragen 1 week agoOh, you're right! I'd add that it's actually the penultimate paragraph of the first of two postscripts appended to the post. I should have read those, and I appreciate the correction.
- amarcheschi 2 weeks agoThey call themselves rationalist, yet they don't have very rational opinions if you ask them about scientific racism [1]
[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
- wffurr 2 weeks agoI am not sure precisely it not very rational about that link. Did you have a specific point you were trying to make with it?
- amarcheschi 2 weeks agoYes, that they're not "rational".
If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...
1/3 of the users at ACX actually support flawed scientific theories that claim a genetic basis for racial IQ differences. The Lynn study on IQ is also quite flawed https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, supported by people (mostly white men, if I had to guess) claiming to be rational
- exoverito 1 week agoHuman ethnic groups are measurably different in genetic terms, as based on single nucleotide polymorphisms and allelic frequency. There are multiple PCA plots of the 1000 Genomes dataset which show clear cluster separation based on ancestry:
https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic variation between groups, a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found that Finns and Swedes have a Fst value of 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese and Yoruba at 0.190.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans at a Fst of 0.4573.
https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
In genome-wide association studies, polygenic scores have been developed to find thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component for variation in intelligence between groups. It would be dogmatic to a priori claim there is absolutely no genetic component, and pretty obviously motivated out of the fear that inequality is much more intractable than commonly believed.
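To make the Fst numbers above concrete, here is a toy calculation of Wright's fixation index for a single biallelic locus, assuming equal-sized populations (the allele frequencies are made up):

    # Wright's Fst for one biallelic locus: (H_T - H_S) / H_T, where H_S is
    # the mean within-population expected heterozygosity and H_T is the
    # expected heterozygosity of the pooled population.
    def fst(freqs):
        # freqs: frequency of one allele in each (equal-sized) population
        p_bar = sum(freqs) / len(freqs)
        h_t = 2 * p_bar * (1 - p_bar)
        h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)
        return (h_t - h_s) / h_t

    print(fst([0.50, 0.50]))  # 0.0  -- identical populations
    print(fst([0.45, 0.55]))  # 0.01 -- slight differentiation
    print(fst([0.10, 0.90]))  # 0.64 -- strongly diverged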
- tptacek 1 week agoThe Lynn IQ data isn't so much "flawed" as it is "fraudulent". There has never been a global comparative study of IQ; Lynn exploited the fact that IQ is a common diagnostic tool, even in places that don't fund a lot of social science research and thus don't conduct their own internal cross-sectional studies, and so ended up comparing diagnostic tests done at hospitals in the developing world with research studies done of volunteers in the developed world.
- derangedHorse 2 weeks agoNothing about the article you posted in your first comment seems racist. You could argue that believing in the conclusions of Richard Lynn’s work makes someone racist, but to support that claim, you’d need to show that those who believe it do so out of willful ignorance of evidence that his science is flawed.
- mjburgess 2 weeks agoA lot of "rationalists" of this kind are very poorly informed about statistical methodology, a condition they inherit from reading papers written in these pseudoscientific fields about people likewise very poorly informed.
This is a pathology that has not really been addressed at large, anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if it entered.
- pixodaros 1 week agoIn that essay Scott Alexander more or less says "so Richard Lynn made up numbers about how stupid black and brown people are, but we all know he was right if those mean scientists just let us collect the data to prove it." The level of thinking most of us moved past in high school, and he is a MD who sees himself as a Public Intellectual! More evidence that thinking too much about IQ makes people stupid.
- ineedaj0b 2 weeks ago[flagged]
- voidhorse 2 weeks agoThese kinds of propositions are determined by history, not by declaration.
Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
- Sharlin 2 weeks agoBleh, labels can be restrictive, but guess what labels can also be? Useful.
- resource_waste 1 week ago>These kinds of propositions are determined by history, not by declaration.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.
- djoldman 1 week agoJust to confirm, this is about:
https://en.wikipedia.org/wiki/Rationalist_community
and not:
https://en.wikipedia.org/wiki/Rationalism
right?
- FeepingCreature 1 week agoAbsolutely everybody names it wrong. The movement is called rationality or "LessWrong-style rationality", explicitly to differentiate it from rationalism the philosophy; rationality is actually in the empirical tradition.
But the words are too close together, so this is about as lost a battle as "hacker".
- gjm11 1 week agoI don't think "rationality" is a good name for the movement, for the same reason as I wish "effective altruism" had picked a different name: it conflates the goal with the achievement of the goal. A rationalist (in the Yudkowsky sense) is someone who is trying to be rational, in a particular way. But "rationality" means actually being rational.
I don't think it's actually true that rationalists-in-this-sense commonly use "rationality" to refer to the movement, though they do often use it to refer to what the movement is trying to do.
- FeepingCreature 1 week agoYeah, that's why they famously say they're an "aspiring rationalist." But I don't think there's anything wrong with setting a target, even if it's unreachable.
- thomasjudge 1 week agoAlong these lines I am sort of skimming articles/blogs/websites about Lightcone, LessWrong, etc, and I am still struggling with the question...what do they DO?
- Mond_ 1 week agoLook, it's just an internet community of people who write blog posts and discuss their interests on web forums.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
- FeepingCreature 1 week agoTwenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationalist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this to him obvious fact, he first wrote a very good blog about human reason, philosophy and AI, in order to fix whatever was going wrong in people's heads that caused them to not understand that superintelligence was coming and so on. The group of people who read, commented on and contributed to this blog are called the rationalists.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
- astrange 1 week ago> hat superintelligence was probably coming, was deeply unsafe
Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much so it's fine.
The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.
Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.
- throwaway314155 1 week agoI don't know - the level of seriousness they discuss w.r.t. alignment issues just seem so out of touch with the realities of large language models and the notion of a super intelligence being "closer than ever" gives way too much credit to the capabilities (or lack there of) of LLM's.
A lot of it seems rooted more in Asimov-inspired, stimulant-fueled philosophizing than in any kind of empirical or grounded observation.
- kurtis_reed 1 week agoHang out and talk
- dennis_jeeves2 1 week ago[dead]
- radicalbyte 2 weeks ago* 20 somethings who are clearly on spectrum
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
- krapp 2 weeks agoThese are the people who came up with Roko's Basilisk, Effective Altruism and spawned the Zizians. I think Robert Evans described them not as a cult but as a cult incubator, or something along those lines.
- toasterlovin 1 week agoAlso:
* Communal living
* Sacred texts & knowledge
* Doomsday predictions
* Guru/prophet lives on the largesse of followers
It's rich for a group that claims to reason based on priors to completely ignore that they possess all the major defining characteristics of a cult.
- timmytokyo 1 week agoBelow is a list of cult markers I posted here the last time this came up. It recaps some of the bullet points above.
1. Apocalyptic world view.
2. Charismatic and/or exploitative leader.
3. Insularity.
4. Esoteric jargon.
5. Lack of transparency or accountability (often about finances or governance).
6. Communal living arrangements.
7. Sexual mores outside social norms, especially around the leader.
8. Schismatic offshoots.
9. Outsized appeal and/or outreach to the socially vulnerable.
- nancyminusone 1 week agoThey seem to have a lot in common with the People's Front of Judea (or the Judian People's Front, for that matter).
- TheAncientGeek 1 week agoAlso, mandatory scriptures and group hiouses.
- ausbah 1 week agoso many of the people i’ve read in these rationalist groups sound like they need a hug and therapy
- bikamonki 1 week agohttps://en.wikipedia.org/wiki/Rationalist_community
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff, logically, it's a cult.
- Matticus_Rex 1 week agoMost weird sex stuff takes place outside of cults, so that doesn't follow.
- bmacho 1 week agoThere's weird sex stuff, logically, it's likely not a cult.
- jrm4 1 week agoMy eyes started to glaze over after a bit; so what I'm getting here is there a group that calls themselves "Rationalists," but in just about every externally meaningful sense, they're smelling like -- perhaps not a cult, but certainly a lot of weird insider/outsider talk that feels far from rational?
- pja 1 week agoCapital r-Rationalism definitely bleeds into cult-like behaviour, even if they haven’t necessarily realised that they’re radicalising themselves.
They’ve already had a splinter rationalist group go full cult, right up to and including the consequent flameout of murders and a shoot-out with the cops: https://en.wikipedia.org/wiki/Zizians
- KolibriFly 1 week agoIt's encouraging to hear that behind all the internet noise, the real-life community is thriving and full of people earnestly trying to build a better future
- bee_rider 1 week agoThe main things I don’t like about rationalism are aesthetic (the name sucks and misusing the language of Bayesian probability is annoying). Sounds like they are a thoughtful and nice bunch otherwise(?).
- Zaylan 1 week agoReading this made me realize that rationality isn’t really about joining a group. It’s more about whether you care about how you think. No matter how solid the logic is, if the starting assumptions are off, it doesn’t help much. Reality is often messier than any model we build. How do you decide when to trust the model vs trust your instincts?
- bargainbin 1 week ago[flagged]
- tomhow 1 week agoPlease don't comment like this on Hacker News. It breaks several guidelines, most notably these ones:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names.
Please don't fulminate. Please don't sneer...
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
- cogman10 1 week ago> clever enough to always be right
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
- KolibriFly 1 week agoSometimes the meta-skill of how you come across while being right is just as important as the correctness itself…
- falcor84 1 week agoI don't see how that's any more "wanker" then this famous saying by Socrates's; Western thought is wankers all the way down.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
- lowbloodsugar 1 week ago“I don’t like how they said it” and “I don’t like how this made me feel” is the aspect of the human brain that has given us Trump. As long as the idea that “how you feel about it” is a basis for any decision making, the world will continue to be fucked. The authors audience largely understand that “this made me feel” is an indication that introspection is required, and not an indication that the author should be ignored.
- computerthings 1 week ago[dead]
- gadders 1 week agoIt's a coping mechanism for autists, mainly.
- os2warpman 1 week agoRationalists as a movement remind me of the individuals who claim to be serious about history but are only interested in a very, VERY specific set of six years in one very specific part of the world.
And boy are they extremely interested in ONLY those six years.
- scoofy 1 week agoIt's weird that "being interested in philosophy" is like... a movement. My background is in philosophy, but the rationalist vs nonrationalist debate seems like an undergraduate class dispute.
My old roommate worked for Open Phil, and was obsessed with AI Safety and really into Bitcoin. I never was. We still had interesting arguments about it all the time. Most of the time we just argued until we got to the axioms we disagreed on, and that was that.
You don't have to agree with the Rationalist™ perspective to apply philosophically rigorous thinking. You can be friends and allies with them without agreeing with all their views. There are strong arguments for why frequentism may be more applicable than Bayesianism in different domains. Or why transhumanism is a pipe dream. These are still worthwhile conversations, as long as you're not so confident in your position that you think there's nothing left to learn.
- Aurornis 1 week ago> It's weird that "being interested in philosophy" is like... a movement. My background is in philosophy, but the rationalist vs nonrationalist debate seems like an undergraduate class dispute.
Bring up the rationalist community within academic philosophy circles and you'll get a lot of groans.
The fun part about rationalists is that they like to go back to first principles and rediscover basics. The less fun part is that they'll ignore all of the existing work and pretend they're going to figure it all out themselves, often with weird results.
This leaves philosophy people endlessly frustrated as the rationalists write long essays about really basic philosophy concepts as if they're breaking new ground, while ignoring a lot of very interesting work that could have made the topic much more interesting to discuss.
- timmytokyo 1 week agoRationalists are constantly falling into ditches that actual philosophers crawled out of centuries ago. But what's even more exasperating is that they do it in tendentious disquisitions that take thousands of wasted words to get.. to.. the.. everloving.. point.
- achenet 1 week ago> they do it in tendentious disquisitions that take thousands of wasted words to get.. to.. the.. everloving.. point.
Right, and "actual philosophers" like Sartre and Heidegger _never_ did that. Ever.
"Being and Nothingness" and "Being and Time" are both short enough to fit into a couple tweets, right?
</irony>
- mynameajeff 1 week agoIt's a lot like how every "interesting"/pop scientific field (think quantum) has a niche community of Free Thinkers that the "establishment" just won't listen to. If you hear from the actual core scientific community you'll find out that they just already went over this 40-400 years ago and nothing being argued as groundbreaking is new or groundbreaking like it's being pitched as. That indifference can feel like rejection to the highly-excited outsiders which can develop into animosity which just further isolates them.
- scoofy 1 week agoI mean, I don't disagree with you there. Even within academic philosophy circles, you'll get groans when one sect is discussing things with another sect. Lord knows ancient philosophy academics and analytic philosophy academics are going to get headaches just trying to hold a conversation... and we're not even including continental school.
My point is that, yes, while it may be a bit annoying in general (lord knows how many times I rolled my eyes at my old roommate talking about transhumanism), the idea that this Rationalist™ movement's "thinking about things philosophically" is controversial is just weird. That they seem to care about a philosophical approach to thinking about things, and maybe didn't get degrees and maybe don't understand much background while forming their own little school, seems as unremarkable as it is uncontroversial.
- apples_oranges 1 week agoNever heard of the man, but that was a fun read. And it looks like a fun club to be part of. Until in becomes unbearable perhaps. Also raises the chances to get invited to birthday orgies..? Perhaps I should have stayed a in academia..
- moolcool 1 week ago> Until in becomes unbearable perhaps
Until?
- Barrin92 1 week ago>"frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future"
One of the funniest and most accurate turns of phrases in my mind is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen people myths, their version of the Book of Revelation, gnosticism and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blogpost, if not more, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, bizarre trust in rationalism or cult-like compounds full of socially isolated new age prophets.
- Terr_ 1 week agoThis also related to that earlier bit:
> I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?
The reason they landed on a not-so-rational risk to humanity is because it fulfilled the psycho-social need to have a "terrible burden" that binds the group together.
It's one of the reasons religious groups will get caught up on The Rapture or whatever, instead of eradicating poverty.
- protocolture 1 week ago"I have come out as a smart good thinking person, who knew"
>liberal zionist
hmmmm
- xinuc 1 week agoAs others have pointed out about the movement, it's for people who think they are smart and doing good things. It's just fitting that he is a Zionist who think he is a good person.
- pja 1 week agoScott Aaronson, the man who turned scrupulosity into a weapon against his own psyche is a capital R rationalist?
Yeah, this surprises absolutely nobody.
- great_tankard 1 week ago"YOUR ATTENTION PLEASE: I have now joined the club everyone assumed I was already a member of."
- mitthrowaway2 1 week agoIt's his personal blog, the only people whose attention he's asking for are the people choosing to wander over there to see what he's up to.
Not his fault that people deemed it interesting enough to upvote to the front page of HN.
- norir 1 week agoOne of my many problems with rationalism is that it generally fails to acknowledge it's fundamentally religious character while pronouncing itself superior to all other religions.
- old_man_cato 1 week ago"I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all)"
Not incidental!
- akomtu 1 week agoSounds like they hear only themselves? There is a common type of sophisticated thinkers who have trained their intellectual muscle to a remarkable degree and got stuck there, refusing to see that intellect isn't the capstone of life. Their mind, ears and mouth quickly form a closed loop that makes them hear only what they themselves say. When this loop strengthens, this thinker becomes a dogmatic cult leader with a sophisticated, but dimly lit inner world that can, nevertheless impress smaller minds. Such inner worlds are always like mazes, with lots of rooms and corridors, all dimly lit, without exits and with their impressively developed mind roaming along these corridors like the Minotaur.
- Mikhail_K 1 week ago"Rationalists," the "objectivists" rebranded?
- lanfeust6 1 week agoPolitical affiliation distribution is similar to the general population.
- throw7 1 week agoI used to snicker at these guys, but I realized I'm not being humble or to be more theologically minded: gracious.
Recognizing we all take a step of faith to move outside of solipsism into a relationship with others should humble us.
- DrNosferatu 1 week agoAre you really concerned about the consequences of AI?
Then, empower UN-like organizations to oversee the use of technology - like the United Nations Atomic Energy Commission.
And, if you're even further concerned, put in place mechanisms that guarantee that the productivity gains, yields, and GDP increases obtained via the new technology of AI are distributed and enjoyed by all of the living population with a minimum of fairness.
For some reason, this last bit especially doesn't really fly with our friends, The Rationalists. I wonder why...
- nathias 1 week agoI don't understand the urge for Americans to name things the opposite of what they are.
- aosaigh 1 week ago> “You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
- stevenhuang 1 week agoPretty sure that's meant to be taken light-heartedly. You know, as a joke.
- chipsrafferty 1 week agoIt's not.
- api 1 week agoThat's exactly what Rationalism(tm) is.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win in this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
- cshimmin 1 week agoThank you for finally making this make sense to me.
- api 1 week agoAnother thing that's endemic in Rationalism is a kind of specialized variety of the Gish gallop.
It goes like this:
(1) Assert a set of priors (with emphasis on the word assert).
(2) Reason from those priors to some conclusion.
(3) Seamlessly, without skipping a beat, take that conclusion as valid because the reasoning appears consistent, and make it part of a new set of priors.
(4) Repeat, or rather recurse since the new set of priors is built on previous iterations.
The entire concept of science is founded on the idea that you can't do that. You have to stop and touch grass, which in science means making observations or doing experiments if possible. You have to see if the conclusion you reached actually matches reality in any meaningful way. That's because reason alone is fragile. As any programmer knows, a single error or a single mistaken prior propagates and renders the entire tree invalid. Do this recursively and one error anywhere in this crystalline structure means you've built a gigantic tower of bullshit.
I compare it to the Gish gallop because of how enthusiastically they do it, and how doing it so fast makes it hard to argue against. You end up having to counter a firehose of Oh So Very Smart, complicated, exquisitely reasoned nonsense.
Or you can just, you know, conclude that this entire method of determining truth is invalid and throw the entire thing in the trash.
A good "razor" for this kind of thing is to judge it by its fruit. So far the fruit is AI hysteria, cults like the Zizians, neoreactionary political ideology, Sam Bankman Fried, etc. Has anything good or useful come from any of this?
- ModernMech 1 week agoRationalists are better called Rationalizationists, really.
- mkoubaa 1 week agoThe problem with rationalism is that we don't have a language to express our thoughts formally enough, nor a compiler to transform that language into something runnable (a platonic AST), nor a machine capable of emulating reality.
Expecting rational thought to correspond to reality is like expecting a 6-million-line program written in a hypothetical programming language invented in the 1700s to run bug-free on a Turing machine.
Tooling matters.
- pjscott 1 week agoThe rationalist movement has been talking about this since the beginning, and has consistently come to the conclusion that of course the thing to do is admit to our own fallibility and try to do the best we can with our profound limitations.
(You didn’t explicitly say otherwise, so if my exasperation is misdirected then you have my apology in advance.)
- mkoubaa 1 week agoNo need to apologize, I am unashamedly an opponent of rationalism and I hold the entire movement and every one like it in history in withering contempt.
- lasersail 1 week agoI was at Lighthaven that week. The weekend-long LessOnline event Scott references opened what Lighthaven termed "Festival Season", with a summer camp organised for the following five weekdays, and a prediction market & forecasting conference called Manifest the following weekend.
I didn't attend LessOnline since I'm not active on LessWrong nor do I identify as a rationalist - but I did attend a GPU programming course in the "summer camp" portion of the week, and the Manifest conference (my primary interest).
My experience generally aligns with Scott's view: the community is friendly and welcoming, but I had one strange encounter. There was some time allocated to meet other attendees at Manifest who resided in the same part of the world (not the Bay Area). I ended up surrounded by a group of 5-6 folks who appeared to be friends already, had been part of the Rationalist movement for a few years, and had attended LessOnline the previous weekend. They spent most of the hour critiquing and comparing their "quality of conversations" at LessOnline with the less Rationalist-y, more prediction market & trading focused Manifest event. Completely unaware of, or unwelcoming toward, my presence as an outsider, they essentially came to the conclusion that a lot of the Manifest crowd were dummies and were - on average - "more wrong" than themselves. It was all very strange, cult-y, pseudo-intellectual, and lacking in self-awareness.
All that said, the experience at Summer Camp and Manifest was a net positive, but there is some credence to the sneers aimed at the Rationalist community.
- thornewolf 1 week agoI attended Manifest to meet some rationalists. I consider myself a post-rat. I also heard some comments that Manifest was less Rationalist-y than LessOnline but w/o the additional commentary on "dummies" (not meant to imply you didn't hear it).
I did find some rationalists too far down their "epistemological rabbit hole" to successfully unwind in one or two conversations, but they were nevertheless clever people. I still need some time to make post-rats out of them, though.
Affirming that it was a positive experience. I'm glad to have attended.
- stuaxo 1 week agoHad to stop reading, everyone sounded so awful.
- bovermyer 1 week agoI think I'm missing something important.
My understanding of "Rationalists" is that they're followers of rationalism; that is, that truth can be understood only through intellectual deduction, rather than sensory experience.
I'm wondering if this is a _different_ kind of "Rationalist." Can someone explain?
- kurtis_reed 1 week agoIt's a terrible name that collides with the older one you're thinking of
- FeteCommuniste 1 week agoThe easiest way to understand their usage of the term "rational" might be to think of it as the negation of the term "irrational" (where the latter refers mostly to cognitive biases). Not as a contrast with "empirical."
- leggy77 1 week agoI read Scott Aaronson's blog and his papers and enjoy both. That said, he's a perfect example of a person who is a genius in one area, thinks this translates to expertise or intuition in other areas, and is often very, very wrong.
- DrNosferatu 1 week agoHuman emotions guide reasoning: that's why we need Politics.
These people should have read "Descartes' Error" with more attention than they spent on Friedman and Hayek.
- anonnon 1 week agoDoes that mean he read the Harry Potter fanfic?
- tines 1 week agoThe HP fanfic is what decisively drove me away from this shitshow years ago. I'm so glad I read that first rather than getting sucked in through another more palatable route.
- DrNosferatu 1 week agoI'm yet to see "Effective Altruism" or (neo-)"Rationalism" that is not mostly a rationalization for exploiting people less privileged than oneself, and for behaving in anti-social, exploitative, and extractive ways.
- aiiizzz 1 week agoExamples?
- resource_waste 1 week ago"I'm a Rationalist"
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is a philosophical inferior to myself. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few empirical pieces of evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
- Joker_vD 2 weeks agoAh, so it's like the Order of the October Star: certain people have simply realized that they are entitled to wear it. Or, rather, that they had always been entitled to wear it. Got it.
- dr_dshiv 1 week agoSince intuitive and non-rational thinking are demonstrably rational in the face of incomplete information, I guess we’re all rationalists. Or that’s how I’m rationalizing it, anyway.
- chipsrafferty 1 week agoAll rationalists and effective altruists are bad people pretending to be good people for social capital.
- cess11 1 week agoThe narcissism in this movement is insufferable. I hope the conditions for its existence will soon pass and give way to something kinder and more learned.
- jrd259 1 week agoI'm so out of the loop. What is the new, special sense of Rationalist over what it might have meant to e.g. Descartes?
- DrNosferatu 1 week agoI cannot falsify the thesis that so-called "Rationalism" and "Effective Altruism" is - in practice - just folklore so that top-percenters don't pay taxes and feel good about themselves.
After all, the least-bad solution for maximizing Social Utility was invented long ago: Democracy and political action.
- Viliam1234 1 week agoI have some doubts about the rationality of trying to avoid paying taxes by sending half of their salary to anti-malaria charities, but maybe those people just suck at math, which is why they get paid so much.
Or maybe you are wrong.
- DrNosferatu 1 week agoInstead of waving the appeal to authority fallacy, try to do some reading up:
https://ips-dc.org/the-true-cost-of-billionaire-philanthropy...
- stephc_int13 1 week agoThis is largely a cult, showing most of the red flags.
But if we put aside the narcissistic traits, lack of intellectual humility, religious undertones and (paradoxically) appeal to emotional responses with apocalyptic framing, the whole thing is still irrelevant BS.
They work in a vacuum, on either false or artificial premises, with nothing to back their claims except long strings of syllogisms.
This is not science: no measurements, no experiments, no validation; zero value apart from maybe intellectual stimulation and socialisation for nerds with too much free time…
- d--b 1 week agoSorry, I haven't followed. What is it that these guys call Rationalism?
- pja 1 week agohttps://en.wikipedia.org/wiki/Rationalist_community
Fair warning: when you turn over some of the rocks here you find squirming, slithering things that should not be given access to the light.
- gblargg 1 week ago> The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a moderate, pro-abundance agenda that would resonate with the public and finally defeat MAGA.
I can't say I'm surprised.
- user____name 1 week agoI somehow was expecting talk about Descartes and Spinoza.
- danans 1 week ago> A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru.
Apart from a charismatic leader, a cult (in the colloquial meaning) needs a business model and, very often, a sense of separation from, and lack of accountability to, those outside the cult, which provides a conveniently simpler environment under which the cult's ideas operate. A sort of "complexity filter" at the entry gate.
I'm not sure how the Rationalists compare to those criteria, but I'd be curious to find out.
- babuloseo 1 week agoI stopped reading once I read the word "zionist"
- t_mann 2 weeks ago> “You’re [X]?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
- simianparrot 2 weeks agoNo actual person talks like that —- and if they really did, they’ve taken on the role of a fictional character. Which says a lot about the clientele either way.
I skimmed a bit here and there after that, but this comes off as plain grandiosity. Even the title is a line you can imagine a Hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
- FeteCommuniste 2 weeks agoI assumed that the stuff in quotes was a summary of the general gist of the conversations he had, not a word for word quote.
- riffraff 2 weeks agoI don't think GP objects to the literalness, as much as to the "I am known for always being right and I acknowledge it", which comes off as.. not humble.
- bmacho 1 week ago> —-
0.o
> No actual person talks like that
I think it is plausible that there are people (readers) who find other people (bloggers) basically always right, and that this would be the first thing they would say to them if they met them. n=1, but there are some bloggers that I think are basically always right, and I am socially bad, so there is no telling what I would blurt out if I met them.
- johnfn 1 week agoTo be honest, if I encountered Scott Aaronson in the wild I would probably react the same way. The guy is super smart and thoughtful, and can write more coherently about quantum computing than anyone else I'm aware of.
- NooneAtAll3 1 week agoif only he stayed silent on politics...
- kragen 1 week agoWhy would you comment on the post if you stopped reading near its beginning? How could your comments on it conceivably be of any value? It sounds like you're engaging in precisely the kind of shallow dismissal the site guidelines prohibit.
- JohnMakin 1 week agoAren't you doing the same thing?
- kragen 1 week agoNo, I read the comment in full, analyzed its reasoning quality, elaborated on the self-undermining epistemological implications of its content, and then related that to the epistemic and discourse norms we aspire to here. My dismissal of it is anything but shallow, though I am of course open to hearing counterarguments, which you have fallen short of offering.
- dcminter 2 weeks agoAlso...
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
- ARandumGuy 1 week agoAt least one cult originates from the Rationalist movement: the Zizians [1], a cult that straight up murdered at least four people. And while the Zizian belief system is certainly more extreme than mainstream Rationalist beliefs, it's not that much more extreme.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprung up out of the Bay area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
[1]: https://en.wikipedia.org/wiki/Zizians [2]: https://www.iheart.com/podcast/105-behind-the-bastards-29236...
- jcranmer 1 week agoThe podcast Behind the Bastards described Rationalism not as a cult but as fertile soil that is perfect for growing cults, leading to the development of groups like the Zizians (both the Rationalists and the Zizians are at pains to emphasize their mutual hostility, but if you're not part of either movement, it's pretty clear how Rationalism can lead to something like the Zizians).
- astrange 1 week agoI don't think that podcast has very in-depth observations. It's just another iteration of east coast culture media people who used to be on Twitter a lot, isn't it?
> the fertile soil which is perfect for growing cults
This is true but it's not rationalism, it's just that they're from Berkeley. As far as I can tell if you live in Berkeley you just end up joining a cult.
- wizzwizz4 1 week agoNo, Guru Eliezer Yudkowsky wrote an essay about how people asking "This isn’t a cult, is it?" bugs him, so it's fine actually. https://www.readthesequences.com/Cultish-Countercultishness
- NoGravitas 1 week agoHank Hill: Are y'all with the cult?
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
- dcminter 1 week agoExtreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps? My hobby is occasionally compared to a cult. The typical reaction of an adherent to this accusation is generally "Heh, yeah, totally a cult."
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
- junon 2 weeks agoI got to that part, thought it was a joke, and then... it wasn't.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
- joenot443 1 week agoScott's done a lot of really excellent blogging in the past. Truthfully, I think you risk depriving yourself of great writing if you're willing to write off an author because you didn't like one sentence.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
- epiccoleman 1 week agoSince I had not engaged with the ASOIAF community for quite some time, I had forgotten that, apparently, the phrase "fat pink mast" is stored somewhere in my long term memory. I sometimes struggle to remember my own wedding anniversary, but "fat pink mast" pops right to the surface.
I'd like to thank my useless brain for deciding to write that one down.
- derangedHorse 2 weeks agoIs it not a joke? I’m pretty sure it was.
- lcnPylGDnU4H9OF 1 week agoIt doesn't really read like a joke, but maybe. Regardless, I guess I can at least be another voice saying it didn't land. It reads like someone literally said that to him verbatim and he literally replied with a simple, "Yes." (That said, it seems charitable to assume it was a joke, and that doesn't mean it's wrong to assume so.)
- myko 1 week agoI laughed, definitely read that way to me
- IshKebab 1 week agoI think the fact that we aren't sure says a lot!
- alphan0n 2 weeks agoIf that was a joke, all of it is.
*Guess I’m a rationalist now.
- PoignardAzur 1 week agoAs someone who likes both the Rationalist community and the Rust community, it's fascinating to see the parallels in how the Hacker News crowd treats both.
The contempt, the general lack of curiosity and the violence of the bold sweeping statements people will make here are mind-boggling.
- Aurornis 1 week ago> the general lack of curiosity
Honestly, I find the Hacker News comments in recent years to be most enlightening because so many comments come from people who spent years immersed in rationalist communities.
For years one of my friend groups was deep into LessWrong and SSC. I've read countless blog posts and other content out of those groups.
Yet every time I write about it, I'm dismissed as an uninformed outsider. It's an interesting group of people who like to criticize and dissect other groups, but they don't take kindly to anyone questioning their own circles.
- zahlman 1 week ago> Yet every time I write about it, I'm dismissed as an uninformed outsider.
No; you're being dismissed as someone who is entirely too credulous about arguments that don't hold up to scrutiny.
Edit: and as someone who doesn't understand basics about what rationalists are trying to accomplish in certain contexts (like the concept of a calibration curve re the example you brought up of https://www.astralcodexten.com/p/grading-my-2021-predictions). You come across (charitably) as having missed the point, because you have.
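For readers unfamiliar with the term: a calibration curve checks whether events a forecaster assigned, say, 70% probability actually happened about 70% of the time, rather than whether each individual prediction was right. A minimal sketch of that computation, using made-up predictions (the data here are assumptions for illustration, not the numbers from the linked post):

    from collections import defaultdict

    # (stated probability, whether the event happened) - made-up data
    predictions = [(0.9, True), (0.9, True), (0.9, False),
                   (0.7, True), (0.7, False), (0.7, True),
                   (0.5, True), (0.5, False)]

    # Group outcomes by the probability the forecaster stated.
    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[p].append(happened)

    # Well-calibrated: observed frequency tracks the stated probability.
    for p in sorted(buckets):
        outcomes = buckets[p]
        print(f"stated {p:.0%}: observed {sum(outcomes) / len(outcomes):.0%} (n={len(outcomes)})")

Grading predictions this way asks "did the stated probabilities mean anything?" rather than "was each call correct?", which is the exercise the linked post performs.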
- comp_throw7 1 week agoYou keep saying things that are proven to be trivially false with 5 seconds of examination, so I think the dismissal is justified.
- cosmojg 1 week agoBoth the Rationalist community and the Rust community are very active in pursuing their goals, and unfortunately, it's far easier to criticize others for doing things than it is to actually do things yourself. Worse yet, if you are not actively doing things yourself, you are far more likely to experience fear when other people are, since there is always some nonzero chance that they will do things counter to your own goals, forcing you to act lest you fall behind. Alas, people often respond to fear with hatred, especially given the physical isolation and dissociation from humanity the Internet offers, and I think that's what you're seeing here on Hacker News.
- georgeecollins 1 week agoI thought HN liked Rust? I learned Rust because I heard about it on HN back in the day. I don't want to start a debate about programming languages but if I were going to look for Rust fans, this is where I would expect to find them.
- wavemode 1 week agoI mostly see non-Rust languages' communities on the defensive on HackerNews, not the other way around.
If you open any thread related to Zig or Odin or C++ you can usually ctrl-F "Rust" and find someone having an argument about how Rust is better.
EDIT: Didn't have to look very far back for an example: https://news.ycombinator.com/item?id=44319008
- Ar-Curunir 1 week agoThere is little ideological overlap between rationalists and Rust folks (and thank god for that). IME unlike the rationalists, Rust has actually done something useful for the world, and HN’s opinion of Rust and its community has drastically improved over the last decade.
The rationalists have not had any such clearly positive effects, and rather their adherents (Thiel, Vance, etc) have had severely deleterious effects on society.
There is no comparison between the two communities.
- epiccoleman 1 week agoI don't know much about Thiel, but Vance does not seem like an "adherent" of rationalism in any way I recognize. I have trouble imagining Vance talking about "The Sequences" or "AI x-risk."
Mentioning Thiel and Vance together brings to mind a different thread of weird netizen philosophizing - one I don't really know much about but which I guess I'd sum up as the "Moldbug / Curtis Yarvin fandom." "Neoreactionaries" might be the right term?
I definitely recognize that the Venn diagram between those two "intellectual movements" (big scare quotes there) overlaps quite a bit, but it seems like a bit of a stretch to lump what Vance, Thiel, and other right-wing tech bro types are up to under the rationalism banner.
Update: Having read through some of the other links in the thread, I have "updated" (as the rationalists say) my mental model of that Venn diagram to be slightly more overlapping. I still think they're distinct, but there's more cross-pollination between the Moldbugs and the ACX crowd than I initially realized.
- absurdo 1 week agoWhat kind of commentary have you heard here around the Rust community? I haven't heard much. Certainly nothing close to violent. I've heard lots of shit talking regarding Go because they refused to introduce generics - and to be fair, the reasoning behind the refusal was smug arrogance led by the pompous cock rpike.
I will say as someone who has been programming before we had standardized C++ that “programming communities” aren’t my cup of tea. I like the passion and enthusiasm but it would be good for some of those lads to have a drag, see a shrink and get some nookie.
- Fraterkes 2 weeks ago[flagged]
- codehotter 2 weeks agoI view this as a political constraint, cf. https://www.astralcodexten.com/p/lifeboat-games-and-backscra.... One's identity as Academic, Democrat, Zionist and so on demands certain sacrifices of you, sometimes of rationality. The worse the failure of empathy and rationality, the better a test of loyalty it is. For epistemic rationality it would be best to keep one's identity small (https://paulgraham.com/identity.html), but for instrumental rationality it is not. Consequently, many people are reasonable only until certain topics come up, and it's generally worked around by steering the discussion to other topics.
- Fraterkes 2 weeks agoI don’t really buy this at all: I am more emotionally invested in things that I know more about (and vice versa). If Rationalism breaks down at that point it is essentially never useful.
- lcnPylGDnU4H9OF 1 week ago> I don’t really buy this at all
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this break down happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
- voidhorse 2 weeks ago[flagged]
- tome 2 weeks agoI'm curious how you assess, relatively speaking, the shittiness of defence of genocide versus false claims of genocide.
- Fraterkes 2 weeks ago(I've also been somewhat dogmatic and angry about this conflict, in the opposite direction. But I wouldn't call myself a rationalist)
- skybrian 1 week agoAnything in particular you want to link to as unreasonable?
- komali2 1 week agoWhat's incredible to me is the political blindness. Surely at this point "liberal zionists" would at least see the writing on the wall? Apply some Bayesian statistical analysis to popular reactions to unprompted military strikes against Iran or something: they should realize that in 25 years the zeitgeist will have completely turned against this chapter in Israel's history, and properly label the genocide for what it is.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
- pbiggar 2 weeks ago[flagged]
- zaphar 2 weeks agoI'm not a Rationalist; however, nothing you said in your first paragraph is factual, and therefore the resultant thesis isn't supported. In fact, it ignores nearly 2,000-3,000 years of history and a whole bunch of surrounding context.
- simiones 2 weeks agoThe 2,000-3,000 years of history are entirely and wholly irrelevant, especially as history shows clearly that the Palestinians are just as much the descendants of the ancient Israelites as the Jewish diaspora that returned after the founding of modern Israel. The old population from before the Roman conquest never disappeared: some departed and formed the diaspora, but most stayed. Some converted to Christianity during this time as well. Later they were conquered by the early Muslim Caliphate, and many converted to Islam, but they're still the same people.
- atwrk 2 weeks agoNot interested in discussing that topic here, but that is precisely the kind of category error that would fit right in with the rationalist crowd: GP was talking about human rights, i.e. actual humans; you are talking about nations or peoples, which is an entirely orthogonal concept.
- phgn 2 weeks agoVery well put.
- skippyboxedhero 2 weeks ago[flagged]
- simiones 1 week agoWhile both sides have been engaged in crimes against humanity, only one is engaged in a violent occupation, by any stretch of the imagination.
- chipsrafferty 1 week agoI'm absolutely shocked that a rationalist would also be a Zionist /s
- absurdo 1 week agoWhat the fuck am I reading lmao.
- paganel 2 weeks ago[flagged]
- phgn 2 weeks agoThank you for sharing the link.
It's very hard for me to take anyone seriously who doesn't speak out against the genocide. They're usually arguing about imaginary problems.
("if the will exists" in the article puts the blame for the situation on one side, which is inacceptable)
- musha68k 1 week agoVery Bay Area to assume you invented Bayesian thinking.
- MeteorMarc 2 weeks agoThis is what rationalism entails: https://plato.stanford.edu/entries/rationalism-empiricism/
- Sharlin 2 weeks agoThat's a different definition of rationalism from what is used here.
- AnimalMuppet 1 week agoIt is. But the Rationalists, by taking that name as a label, are claiming that they are what the GP said. They want the prestige/respect/audience that the word gets, without actually being that.
- FeepingCreature 1 week ago(The rationalists never took that label; it is falsely applied to them. The project is called rationality, not rationalism. Unfortunately, this is now so pervasive that there's no fixing it.)
- greener_grass 2 weeks agoFor any speed-runners out there: https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism