Harvard ethics professor allegedly fabricated multiple studies
232 points by 1sembiyan 1 year ago | 215 comments
- biohazard2 1 year agoPrevious discussion: https://news.ycombinator.com/item?id=36424090
- dang 1 year agoThanks! Macroexpanded:
Harvard dishonesty expert accused of dishonesty - https://news.ycombinator.com/item?id=36424090 - June 2023 (201 comments)
- joshe 1 year agoShe was right about her book title "Rebel Talent: Why It Pays to Break the Rules at Work and in Life."
Committing fraud paid well. HBS professor salary is about $220K per year. Ten years of salary is a $2 million payoff. Big incentives to do this.
Worth a policy change by university for clawbacks of salary and especially pensions and tenure. Like insider trading she should be forced to disgorge all the profits from her fraud. Dan Ariely too.
And worth charging with criminal fraud: she defrauded her university and students. 5 years in jail seems fine.
It's time to stop pretending academics are medieval nobles who grow faint at the idea of their sacred honor being besmirched. With huge salary opportunities for bold populist claims, academics will take big risks to achieve choice jobs even if they think they will eventually be discovered.
- hx8 1 year agoShe also defrauded all publications she submitted fabricated science to. She defrauded all readers of her studies, and everyone who wrote about her studies. Imagine if actual public policy had been made on the basis of her science.
- mlyle 1 year ago> Worth a policy change by university for clawbacks of salary
As a matter of employment law, it's generally pretty difficult to claw back pay from an employee. And this is almost always a good thing.
- stronglikedan 1 year agoDefinitely a good thing. There are already channels for this. If they can prove fraud, they can go through the legal system to get that money back. It doesn't need to be a university policy, and shouldn't be; nor any enforceable employer policy, for that matter.
- godelski 1 year agoAgreed. I don't see how the current system doesn't already have channels to take action against her. If fraud can be proven, which looks like it can be, then: the school will fire her, sue her, the publisher/s can sue her, etc. Even that guy with the signed book in the comments. There is no need to update contracts to necessitate clawbacks since these essentially already exist.
I'd assume that such a policy change could actually have severe unintended consequences, as it would necessitate some metric to evaluate this, which is notoriously difficult to do with respect to research. Not to mention that most research that is wrong is not wrong due to intentional acts of deception, but simply because people make mistakes. I'd expect the consequences to land in this territory, making honest mistakes far more costly. Not something we want for research, where essentially we're trying to learn the rules of a game by playing the game. Often mistakes help us learn those rules.
- chaxor 1 year agoThe policies that cause 'publish or perish' to flourish are also heavily to blame here. If a scientist is only expected (or directly limited) to publish a handful of extremely good quality articles throughout their lifetime, they will likely take much greater care that the results are real.
- godelski 1 year agoThese policies also just make for bad science in general. In science you need people challenging the status quo; not everyone, but a significant percentage. A publish or perish paradigm encourages railroading in intellectual thought as well as laziness in evaluation, as there is increased pressure on all researchers, who are also the ones performing reviews: people who have zero incentive to perform a good review and many incentives to do a quick one (which is even actively encouraged). This results in an over-reliance on benchmarkism, which we see in many fields. In ML we see SOTA chasing, forgetting that datasets are proxies which have their own biases and limitations, as do our evaluation methods. In many of the softer sciences -- such as psychology or medicine -- we see a reliance on p-values, forgetting that 0.05 is an arbitrary value and not meaningful without additional context. All our sciences are converging on Goodhart's warning.
There is utility in both speed and in a slow step. The truth is that we need both: a continuum of methods and ideas. After all, we are searching in the dark and our only real advantage is parallel Monte Carlo search. The problem is that this is incredibly difficult to accurately evaluate, and we are arrogantly attempting to evaluate exceptionally noisy systems with extreme precision without even having a basic model of those systems. We could relate this to businesses too, but their timetables are typically shorter, so the chaff can be separated more clearly (success often post-explained as pure intent rather than its combination with luck). Then again, we are trying to make academia a business, trying to apply that noisy success model to a completely different one.
- moralestapia 1 year ago>The policies that cause 'publish or perish' to flourish are also heavily to blame here.
Ugh.
There are plenty of honest researchers in the field.
You don't have to be corrupt to build a nice career and, now that I think of it, only people who are equally corrupt seem to embrace/justify/tolerate these practices.
- ChainOfFools 1 year agoThey're also expected, in a research university, to raise a constant stream of grant money. A certain subset of academics are starting to figure out that this becomes much easier when you invest a comparative pittance in a decent wardrobe, makeup, and good lighting so as to cast your research net extremely wide into populist rhetoric, TED talks, etc. The effort-to-reward ratio is fantastic, as it invariably results in being contacted after your show (and it is a show) by people who have access to public outreach funds but very little understanding of the field or your place in it, but who want to be attached to whatever momentum you have and will throw money at you.
This is multiplied several-fold when you are doing this populist intellectual dinner talk circuit in one of several countries whose elite are anxious to establish (or regain) their own scientific prestige on the world academic stage and will throw unbelievable amounts of money at the most brittle and flimsy proposals so long as there is an R1 institution attached.
- karaterobot 1 year agoI agree that the up-or-out culture of tenure is bad, and so are many other terrible university policies and practices. But I would not go so far as to ascribe blame to them. 100% of the responsibility goes to the people who knowingly falsified data to advance their careers. That doesn't mean the culture shouldn't change.
- godelski 1 year ago> 100% of the responsibility goes to the people who knowingly falsified data to advance their careers.
I don't think anyone would argue otherwise. The issue is that if there's a culture that encourages (incentivizes, not advocates) cheating, then we should expect a culture where more people cheat. I think right now it is hard to argue that the current culture doesn't encourage cheating, and if we're being honest, doesn't have a good means of identifying it (publish or perish also decreases reviewing quality).
- chongli 1 year ago> The policies that cause 'publish or perish' to flourish
What policies are those? I thought 'publish or perish' was an expected outcome given the extreme supply/demand situation in the academic job market.
- barelysurviving 1 year agoMy gf is in academia (in economics) and she told me that to get tenure you sign contracts demanding that you publish 4-5 relevant research papers in 5 years. A geologist friend of mine told me something similar, just with different numbers.
- xk_id 1 year ago> It's time to stop pretending academics are medieval nobles who grow faint at the idea of their sacred honor being besmirched. With huge salary opportunities for bold populist claims, academics will take big risks to achieve choice jobs even if they think they will eventually be discovered.
Exactly. We are finally starting to see the unwinding of the 20th century scientist myth. With the advent of corporate culture in academia, it now attracts and nurtures very different personality types.
- manvillej 1 year agoI have a literal signed copy from her at the HBS book signing. I feel cheated and lied to.
- cbsks 1 year agoOn the plus side, you now have a book with an interesting story!
- peteradio 1 year agoCould be she deserves an award for such a blatant violation of ethics! Sort of like Hendrix smashing his guitar.
- screye 1 year agoStop treating non-sciences like sciences.
The social sciences have a certain hypocrisy to them. "We do not need to be as grounded in math as the other sciences" runs up against "Our results are mathematically significant and deserve the same respect as the other sciences."
Doing statistics in the hard sciences is easy, yet we study it at a graduate level. Setting up variable controls in the hard sciences is easier, yet we spend years training people on eliminating even the smallest chance of error through rigid lab protocols. The hard sciences let you collect enough data points that a large effect size is almost always statistically significant... yet we report p-values religiously every step of the way.
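The sample-size point can be made concrete with a quick simulation (a minimal sketch in pure Python; the effect size and sample counts are illustrative, not from any real study): a trivially small true effect of 0.02 standard deviations becomes "statistically significant" once n is large enough.

```python
import math
import random

random.seed(0)

def z_test_pvalue(a, b):
    """Two-sided p-value for a two-sample z-test (fine for large n)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

effect = 0.02  # a tiny true effect: 0.02 standard deviations

small_a = [random.gauss(0, 1) for _ in range(100)]
small_b = [random.gauss(effect, 1) for _ in range(100)]
big_a = [random.gauss(0, 1) for _ in range(200_000)]
big_b = [random.gauss(effect, 1) for _ in range(200_000)]

print(z_test_pvalue(small_a, small_b))  # n = 100 per group
print(z_test_pvalue(big_a, big_b))      # n = 200,000 per group: p almost surely far below 0.05
```

Same effect, same test; only n changed. This is why a bare p-value, reported without effect size and context, says very little.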
Social sciences have none of those affordances, yet they somehow get away with even less mathematical and statistical rigor. I mean this both in training and in practice. The social sciences are harder to control. It is a 'science' that can say very little about anything (purely due to difficulty of setting up controlled experiments). Yet, social science produces a disproportionate number of 'spicy' results.
Not all fields will produce 'ground breaking' work, and that should be fine. I'm not sure where the incentive/impetus for it comes from, but social scientists build storied careers on results that would get rejected from any respected science journal for lack of rigor. It is as if there is pressure on every academic to prove something fundamental about human nature every few years, all while it's plainly clear that we as humans know almost nothing about human nature. If centuries of work in this field have not been able to find a few foundational facts (fancy alliteration), then maybe the field needs to reconsider its ambitiousness and self-assured confidence.
- currymj 1 year agoSocial science is not the problem.
Biology is definitely a "real" science yet the problems with fraud in biology are, if anything, worse.
See the recent cases involving fabricated data in a Nature paper about Alzheimer's.
https://www.science.org/content/article/potential-fabricatio...
That actually seems to be a common pattern, there's another case now with strong evidence of data fraud in multiple Science/Nature papers, possibly implicating the president of Stanford.
https://stanforddaily.com/2023/04/25/stanford-president-dodg...
The problems seem less severe in physics, but there was that recent case where supposed experimental evidence of Majorana fermions had to be retracted, with some research practices that look dubious in retrospect.
Or the famous Schön scandal at Bell Labs in semiconductor physics, where there was outright fraud.
- justinclift 1 year ago> Social science is not the problem.
Could it be there's more than one problem? :)
- gowld 1 year agoBroadly, the problem is replication.
Social science is non-replicable because the parameters are too fuzzy. (People are more diverse than electrons).
Lab science is non-replicable because the experiments are too expensive to set up, and sometimes are intentionally kept as trade secrets for economic reasons.
- droptablemain 1 year agoThere are certainly problems in academia outside of social sciences but nothing quite like this:
- User23 1 year agoBiology is a huge field. Some parts of it are rigorous hard science, like a good chunk of cellular biology. Other parts are pure speculation, like the origin of life. And some are obscure and confused, like: what exactly is life, anyhow? As an example of that last one, biologists appear to have settled for begging the question and saying a thing is alive if it has life processes.
I don't know that anyone else does, but I like to draw a distinction between experimental scientific inquiry and rigorously observed phenomenology. Properly speaking I say a great deal of what's called science is really a species of phenomenology. For example cosmology is pure phenomenology. We can't actually run experiments on what happens when you smash two black holes together. However we can rigorously observe the phenomenon and abduce a descriptive model that can potentially even make predictions. But the word "science" has become a totem, so there's a lot of cargo culting[1] and punishing blasphemers.
However, I don't really think that whether a field is science, phenomenology, or some blend of both has much effect on the rate of fraud. Rather, I hypothesize the dominant factor in the incidence of fraud is the perceived magnitude of the potential benefit to be obtained. The Alzheimer's fraud is a good example of that. There are hundreds of billions and maybe even trillions of dollars in play in that space. Drug companies have already poured billions down the drain chasing dead ends so anyone who can sell them hope can potentially get a substantial payday. An amusing corollary of this is that given the relatively small stakes social science fraudsters are playing for, we can conclude that they are small-minded in the sense that they aren't thinking nearly as big as the drug company fraudsters.
[1] https://calteches.library.caltech.edu/51/2/CargoCult.htm
- UniverseHacker 1 year agoI’ve found the opposite… because of the complexity and noise the social sciences are using more sophisticated statistical methods than the “hard” sciences. In grad school I took Bayesian probability theory classes from social scientists because my “hard” science peers were still using frequentist statistical tests, often applied inappropriately by just copying methods from past papers that also applied the methods inappropriately. The social sciences in general seem to be a decade or two ahead in adopting improved methods, and researchers spend a lot more time understanding these methods, writing custom software and models, etc.
IMO, mathematical and computational “literacy” is particularly low in the medical and life sciences compared to the social sciences, physics and chemistry are not quite as bad.
- pclmulqdq 1 year agoThe best statisticians I know are economists, but they don't know how to run a study to avoid the influence of outside variables.
The best experimenters I know in terms of monitoring and controlling external variables are chemists, but sometimes they just use the wrong statistical test.
Biologists seem to generally be bad at both, and physicists seem to generally be good at both. I honestly think this is about the culture of the fields rather than knowledge - they all start as good scientists and just unlearn what they don't "need."
- gowld 1 year agoIndeed, social sciences use the most sophisticated alchemy to produce "knowledge" from ignorance. :-)
Bayesian statistics is great, you can get any posterior you want by choosing good priors.
- wizzwizz4 1 year agoBut papers don't publish priors. They publish evidence.
"This study provides 5dB of evidence that the pigs do not have reactionless flight capabilities, and 300dB of evidence that I don't know how physics works. However, according to my priors, the flying pigs hypothesis still comes out ahead," said no study ever.
- UniverseHacker 1 year agoAn informative prior requires good data… it is a misconception that Bayesians just "make up" priors; they are generally the result of a past experiment. Indeed a prior can encode just a hunch or intuition if you have nothing else, but in practice that will be nearly identical to a uniform prior.
Personally, I always use a uniform prior unless I am able to cite or present the data that led to something else.
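For conjugate models this is easy to see numerically. A minimal Beta-Binomial sketch (all numbers made up for illustration): a weak "hunch" prior barely moves the posterior away from the uniform-prior answer, while a prior encoding a past experiment's data pulls it noticeably.

```python
# Beta-Binomial conjugacy: a Beta(a, b) prior plus k successes in n trials
# yields the posterior Beta(a + k, b + n - k), with mean (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 63, 100  # hypothetical observed data: 63 successes in 100 trials

uniform = posterior_mean(1, 1, k, n)     # flat Beta(1, 1) prior
hunch = posterior_mean(2, 1, k, n)       # weak "probably likely" hunch
informed = posterior_mean(60, 40, k, n)  # prior encoding 100 earlier trials

print(uniform, hunch, informed)
```

The hunch prior shifts the posterior mean by well under one percentage point relative to the uniform prior, while the data-backed prior pulls the estimate toward the earlier experiment's rate.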
- chaxor 1 year agoMore sophisticated does not mean more correct.
- TheMagicHorsey 1 year agoI think it really varies department to department. Was your program in an American university?
- UniverseHacker 1 year agoAn R1 research university in the USA
Look at the most modern open source R and Python packages for statistical inference… probably 90 percent are developed and maintained by social science research labs
- taeric 1 year agoI'll push that it is also "stop treating academia as such high stakes as we treat it." We have "raised the bar" so much that you are basically thrown out with failure. If you make it life or death stakes, expect people to fight appropriately. At some level, that will be pushing the data harder than it has a right to be pushed.
- vacuity 1 year agoThe classic:
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair
Except possibly even more high stakes than that, since academics' very reputations are on the line.
- ohnonononohno 1 year agoREM: sunk-cost fallacy
https://en.m.wikipedia.org/wiki/Hat_puzzle
Gödel's incompleteness theorem (philosophical): no consistent system of axioms whose theorems can be listed by an effective procedure is capable of proving all truths (-;
https://en.m.wikipedia.org/wiki/Epistemic_logic
Hint: HN in 2016
regards...
- fullshark 1 year agoThe only way to do this is to admit fewer PhD students, which probably should happen. There's only so many professorships, and some will always be more coveted than others, leading to those stakes.
- pas 1 year agoIt's probably not the absolute number of PhD students that's the problem. It's the crushing despair of uncertainty, the publish-and-eventually-still-perish incinerator, the cult of positive results, the completely ridiculous inefficiency of the actual work (because it's done mostly by inexperienced grad students, and by the time they know what's up they are already off to some godforsaken postdoc or worse, or leave academia for good if they are lucky).
Not to mention the whole tenure system is questionable. Yes, it's good that at least a few percent of academia has job security, but the selection process is full of perverse incentives and arguably society could get better results if tenure was for 10 years and based on a lottery.
- taeric 1 year agoI could posit a few other ways. Could also make it so that we aren't financially dependent on wild success, for example.
- IKantRead 1 year agoThe problem is funding.
The best psychology work tends to be much closer to philosophy than science. But philosophy generally doesn't get much funding.
Anyone interested in psychology will ultimately be forced to optimize for funding, which inevitably leads to fraudulent work since psychology based on scientific principles simply isn't possible yet as we don't know nearly enough about the function of the mind to explore the areas psychology is interested in.
- chengiz 1 year agoThis is just your run of the mill STEM looking down upon other fields argument. There is rampant fraud in STEM. If anything it's harder to detect because the people who do it are better informed on how to hide it.
- tejohnso 1 year ago> social scientists build storied careers on results that would get rejected from any respected science journal for lack of rigor
Wasn't it called social studies? When did social studies get branded as science in the first place?
- nemo44x 1 year agoThey started using some tools that scientists use and decided they too were a "science" (because we used a tool that actual science uses), since that was the best way to be taken as seriously as real science. There has been a brewing jealousy in the academy over many decades with the type of success the hard sciences have seen. So, just put the word "science" in your title, pretend you're doing "Science", and now your opinions get to be "the science"; and if someone scrutinizes it (like actual science would be, and is open to), you can say "the science is settled."
Of course the best irony is the current fad of rejecting "objective truth" (on whatever grounds are useful - it's racist, etc) and yet still claiming you are doing a science. Not the search for truth but rather the search for your truth.
- bakuninsbart 1 year agoIt was called social science first, then rebranded as social studies later to be more inclusive, after the British society of the sciences made the term 'science' a lot more exclusive.
https://www.cambridge.org/core/journals/modern-intellectual-...
- bjourne 1 year agoFabricated data and fraudulent results are not unknown to the hard sciences either. See e.g. https://www.youtube.com/watch?v=nfDoml-Db64 A professor manipulating data for an amusing pop-psychology result is small potatoes compared to what has rocked the physics world.
- mistermann 1 year ago> Stop treating non-sciences like sciences.
Stop ignoring the important nuances of the different problem spaces science works in. Science lacks adequate sophistication in its methodologies for dealing with those nuances, resulting in these gong shows regularly happening and then being explained away as if there were no fundamental problem.
- rossdavidh 1 year agoThus far, I am unaware of any field of science which has seriously looked for problems like these and not found them.
- AndyMcConachie 1 year agoYour argument is basically, "It's really hard so we shouldn't try."
A good social scientist understands everything you're pointing out. This isn't news to anyone nor is it even really controversial.
But we can witness phenomena in society so people try and understand them. Would you prefer we not try and understand them? Just give up because it's too hard to figure out?
- AlexandrB 1 year agoThere's definitely an appeal to solving a simpler, technological problem instead of tackling the more difficult, societal one. Consider crypto, which tries to solve concentration of power in financial institutions by creating a distributed technology for tracking financial transactions. Unfortunately this just kicks the can down the road.
- User23 1 year agoSometimes the can just needs to be kicked indefinitely. Every time you perform some kind of maintenance on something that degrades over time, like a house or a car, you're can kicking, because unless the thing being maintained ceases to exist, you or your successors will eventually face the same issue again and again. Frankly it's a pretty lousy metaphor.
- mc32 1 year agoYah, they should be called Studies. Social studies, psychological studies, economic studies, etc.
- gowld 1 year agomathematical studies, chemical studies, biological studies.
Or we could use fancy Latin suffixes:
sociology, psychology, economicology, biology, chemiology, pharmacology, mathology (https://www.youtube.com/channel/UC1_uAIS3r8Vu6JjXWvastJg).
- mc32 1 year agoscience implies a rigor which does not exist in those studies. Quite a few cities, NGOs, non-profits and other organizations make decisions on quick and dirty studies by university students involving 30 or 40 subjects because the results "feel good". The evening news will also trumpet these studies "A new study out of the University of X reveals that..." Politicians, "Well, there is a new study from the University of X that says if we..."
- PaulKeeble 1 year agoAlmost no findings in psychology survive 20 years; almost all of them are determined to be fraudulent manipulations of data and findings, or at the least fail to be replicated. Psychology is not producing scientific findings as a field; it's completely ignorable and responsible for a large chunk of the science replication crisis.
- UncleMeat 1 year agoPsychology is also one of the only fields funding replication.
My experience in CS is that the replicability of experimental results is embarrassingly bad, but this isn't making headlines in the same way so people don't consider it to be a problem.
This is data fraud in psych. Data fraud has also happened in plenty of other fields. When it occurs in these fields it is seen as a one-off. When it occurs in psych, it is because the entire field is useless garbage. That's not the lesson to take away here.
- justrealist 1 year ago> My experience in CS is that the replicability of experimental results is embarrassingly bad
I think that's true for low-tier/low-impact CS papers, but the difference is that literally nobody cares about those papers. The high-tier stuff is easily verifiable (like Tensorflow or whatever) and nobody is writing articles about 5% improvements against a benchmark in some obscure niche scheduling and planning domain.
Outside academic CS people are more scientific about experimental results because it has concrete implications on revenue or spend... but they aren't getting the results from conference papers.
- wongarsu 1 year agoThe ML part of CS is in a sense funding replication: there are a decent number of papers whose only premise is "we compare 5 recent papers on this benchmark (and fill a couple pages with discussion)" or "we made a new benchmark, here's how commonly cited papers compare (plus a couple pages of discussion)"
Outside the high-profile cases it seems accepted norm that papers perform far worse when scored against somebody else's benchmark. The real measure of quality is how big the gap is.
- UncleMeat 1 year agoI'm talking about papers in top PL and Security venues.
- joshe 1 year agoNo. Psychology is the worst. It is encouraging that an insurgent group is taking replication on but it feels like only about 20% of the field.
Some results do replicate. Big Five personality traits, general classification of mental illness, mainstream IQ results, and human perception and performance. All psychology is not false and there are interesting things to learn from the field.
But the severity and frequency of fraud and non replication is worst in psychology. So much so that half of what you learned in Psych 101 ten years ago does not replicate.
The big challenge for the field is that humans already know a lot of psychology. We can track our own thoughts, and we are constantly interacting with people and trying to understand their psychology. A biologist who studies ants intensely for 5 years is maybe one of only 100 people who have ever watched them that carefully. They'll find lots of new stuff.
Psychology doesn't have more powerful techniques than that for determining new truths about humans. It's still mostly give a questionnaire or put people in weird situations.
So there are a lot of researchers hunting around for original ideas that are undiscoverable with our current tech. Some of them are bound to give in to the temptation to just make up an interesting result with manufactured data. With p-hacking they might not even be committing fraud: they are desperate for a positive result, and when they get one they stop looking and publish.
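That "stop looking once you get a positive result" pattern, often called optional stopping, is easy to simulate. A sketch with made-up parameters (true effect exactly zero, a peek at the data after every batch of 10 subjects):

```python
import math
import random

random.seed(1)

def pvalue(xs):
    """Two-sided one-sample z-test of mean 0, known sd = 1."""
    z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(batches=20, batch_size=10):
    """Peek after every batch; 'publish' the moment p < 0.05."""
    xs = []
    for _ in range(batches):
        xs += [random.gauss(0, 1) for _ in range(batch_size)]
        if pvalue(xs) < 0.05:
            return True  # a positive result despite a true null effect
    return False

runs = 2000
rate = sum(run_study() for _ in range(runs)) / runs
print(rate)  # false-positive rate well above the nominal 5%
```

Each individual test here is valid at the 5% level; it's the repeated peeking combined with a stop rule that inflates the false-positive rate severalfold, no fabrication required.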
So I would still guess in 2023 that about 80% of "new" discoveries published in journals won't replicate. That rises to 95% with journal publications that are published as university pr and reported in the media.
For coverage of what to remember to unlearn Rolf Degen is great: https://twitter.com/DegenRolf
Popular examples that do not replicate (mostly from Rolf):
Repressed childhood memories
Social media harms
Search bubbles and echo chambers
Women prefer masculine men when fertile
Priming (you see a fight in the hallway and then don't cooperate later)
Power posing (puff out your chest to feel more confident)
Watching eyes make people more honest (in Kahneman and Malcolm Gladwell)
Notice that a lot of these are interesting, and you want them to be true. That's not enough to make it so.
As a bonus, an interesting interview with Daniel Kahneman: https://www.edge.org/adversarial-collaboration-daniel-kahnem...
- mrangle 1 year agoOther fields have a fairly firm scientific foundation. Whereas it is routinely pointed out by psychologists and psychiatrists that psych has no such thing. The field struggles to authoritatively explain the first thing about human thinking.
Adding to complication in the psych field is the widely observed phenomenon that it draws students with psych problems. Like those who would be willing to fabricate data, for example.
But hey, at least it isn't sociology.
There are resilient exceptions in academia in general, but soft science fields have led the way in the decay of academic standards.
- mike_hearn 1 year agoPsychology really isn't the worst field out there. They do at least talk about their problems and claim to care about them, which is better than most fields. They actually do try to replicate each other's results sometimes, and publish it when they fail. I used to believe psychology must be uniquely bad; these days I think it's just uniquely honest (!).
There are dozens of well known fields that are built on epistemological quicksand yet which point-blank refuse to admit to or talk about their problems. Because they have a culture of deny deny deny, even when the evidence is overwhelming, they actually get less attention because why bother doing a nice writeup of fraud if you know the result will just be stonewalling? It's better to focus on fields where there might be some actual response, a chance of improvement, no matter how minor.
- stcroixx 1 year agoWell, CS doesn't involve pharmaceuticals so the harm caused by fraud isn't as serious.
- triceratops 1 year agoIsn't that psychiatry? Psychologists aren't necessarily licensed medical doctors who can prescribe.
- UncleMeat 1 year agoWe could maybe make the conversation about the social penetration of ideas coming out of various disciplines and how invalid research can harm policy or medical practice, but that'd be a completely different discussion than "psychology is utter garbage as a field."
- manicennui 1 year agoBecause no one writes software that can lead to death.
- ajuc 1 year agoYou can easily kill people with software. It has happened.
- hx8 1 year agoThis is such an exaggerated claim.
* Plenty of studies inherently cannot survive 20 years. Send out a survey and ask people "on a scale of 1 to 10 how happy are you". These results will not match the results in 21 years simply because the population you are studying is different. This example is simplified, but explains why many results change over time.
* Plenty of studies have survived 20 years, and those who study psychology are really clear about which knowledge is foundational and which is on more questionable ground. The Big Five personality traits are about 40 years old and have been found to be consistent across cultures. A ton of psychological research around psychological responses has survived the test of time.
* Psychologists often rely on more questionable data than other scientists because it's much cheaper and easier for psychologists to collect questionable data than in other sciences. This allows for a wider exploration and makes research much more accessible. Anyone reading a paper with this type of data will be naturally skeptical. I am specifically referring to data collected from self-reported questionnaires, often collecting a sample that generalizes poorly over interesting populations.
- BeetleB 1 year ago> and responsible for a large chunk of the science replication crisis.
Former semiconductor researcher here. It is quite common for researchers not to believe published papers - even in prestigious journals from well known researchers. No one bothers replicating, and they don't put enough information in the paper to replicate (competition - they don't want others to get the secret sauce).
This was true for both computational and experimental papers.
Oh, and I did know one person personally who falsified data (and was caught). He just transferred to another prestigious school and got his PhD there instead.
It was quite demoralizing and helped me decide not to pursue academia.
- rcxdude 1 year agoI think a lot of the communication around science publishing has overemphasized the power of peer review. More people should realise it's the table stakes for science, not the conclusion.
- klysm 1 year agoAn epistemological crisis rooted in a shitty incentive system and powered by bad statistics
- tgv 1 year agoAs I've written more than once here: psychology is harder to experiment upon than physics. Society and individual behavior are also way more complex than e.g. rheology or organic chemistry, and don't submit themselves to lab research.
But that's not the only problem. The problem is that people try it anyway. They have some hypothesis within a theoretical framework which is embedded in other frameworks, none of which is proven. How could they be? But, they set up an experiment anyway, and (usually after a couple of attempts) they find something that can reject H0. Then they publish an article stating that their theoretical framework is a fact.
There is so much wrong in this process, even ignoring the manipulation of the stimuli and conditions, and the statistical procedures, yet the theory has a good chance to take hold and become the subject of research in dozens of psych departments, each contributing articles to the literature about it. After a while, an essentially flawed theory enters the handbooks.
Then, if you want to make a name for yourself as a researcher, and you need publications, you can simply attack older theories. They'll crumble like cookies, you get your publications, and the cycle restarts.
And as critical as I am of psychology, other social sciences have even lower empirical standards. Educational sciences, sociology, linguistics all have very little to show for a century of research. And that's ignoring disciplines like political sciences or history.
- posterboy 1 year agoOr psychology is difficult, and having passed the bar is not a guarantee of quality, same as, for example, nautical engineering isn't. That's not enough reason to say that boats don't float.
- NoMoreNicksLeft 1 year agoWhat is psychology used for, and whatever those things are... are they still floating (so to speak)?
Some of it is used in policy. So when some convicted murderer is out on furlough and kills again, am I wrong to wonder if that was some of the top-notch applied psychology at work?
- posterboy 1 year agoThat's shifting the goal post and it suffers from survivor bias. The individual decision is made by few people. The policy is instated by very different people and informed by many factors outside the control of psychology, where false positives don't invalidate the whole system if the alternative is a slippery slope toward draconian punishments – a paradox, because the most severe punishment could amount to manslaughter if not murder.
- pyrale 1 year agoThere are differences, however, between research using poor methodology that end up being worthless because it can't be replicated, and research where some data points were deleted and others were created in order to show a better result.
The work here isn't fraudulent because of a mistake or because the author has shaky statistics knowledge, but because it was fabricated.
- vgalin 1 year agoIf this is an established fact, where could one learn more about this?
- taeric 1 year agoIt is, at least, fairly well accepted as a thing. Name a "fact" that has been widely accepted, dive into it, and find that it hasn't held up. :(
Some are purely a popular-science failure. The 10,000 hours of practice claim was basically thrown out, but the original research still seems good. They never claimed "if you do 10k hours of work, you will be good."
The "marshmallow test," though, seems completely tossed? Maybe there was something there?
Anchoring and other items? Not sure how well those have survived. :(
https://en.wikipedia.org/wiki/Replication_crisis looks to be a good article, though I haven't finished reading it.
- scythe 1 year agoSome of Kahneman and Tversky's work was thrown out, but the majority of it replicates.
- PaulKeeble 1 year agoThe Dunning Kruger data is also just a statistical manipulation.
- endominus 1 year agoPer https://www.gleech.org/psych:
- Stanford Prison Experiment: Not an experiment, abuse was scripted, experimenter constantly intervened, reactions of participants were faked, and there was not even a scientific hypothesis they were testing. Definitely read the paper[0] debunking it.
- Milgram Experiment: (the one where people were ordered to shock actors) No good evidence for it. Researchers did not follow script, implausible levels of agreement between different experiments. Killer line is, “only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter.”
- Robber's Cave: (the one where two groups of kids immediately formed tribal hatred between one another) The conflict was orchestrated by experimenters and the experiment was actually repeated because the first time the kids absolutely refused to turn on one another. More information at [1].
- At best, weak evidence for implicit bias testing and stereotype threat.
- Weak evidence of "facial feedback" (smiling causes a good mood and frowning causes a bad mood)
- Good evidence against "ego depletion" (the idea that willpower is limited in a muscle-like fashion)
- Mixed evidence for Dunning-Kruger effect
- Questionable evidence for "hungry judge" effect (the idea that judicial sentences are massively more merciful in the morning and after a lunch recess due to "ego depletion" - this is also thoroughly debunked here [2])
- The 10,000 hours of practice leading to expertise idea has been disowned by its proponents
- No good evidence that tailoring teaching to students’ preferred learning styles has any effect on objective measures of attainment.
- No good evidence that brains contain one mind per hemisphere. i.e. the left-brain, right-brain split that people talk about, especially after the link between the hemispheres is severed.
- No good evidence for left/right hemisphere dominance correlating with personality differences.
Per https://danluu.com/dunning-kruger/:
- Most people talking about Dunning-Kruger have no idea what it actually means. The actual purported bias is much weaker than people claim: either everyone overestimates their ability but the estimates are still positively correlated with actual ability, or estimated ability has basically no correlation with actual ability and everyone is just guessing.
- Increasing your wealth does in fact make you happier at a predictable rate. There is no "plateau" of wealth or income - what appears to be a plateau is a misleading display of data. In effect, increasing income by a proportional rate will increase reported happiness by a fixed amount. For example, say you make $10 and your happiness is 50. Then your income increases to $20 and your happiness increases to 60. Then doubling your income again is necessary to increase your happiness by another 10. It's a logarithmic function; plotted on a standard axis it looks like a plateau, but plotted on a logarithmic scale it's a straight, constantly increasing line.
- Hedonic Adaptation (aka the hedonic treadmill) is a myth. Bad life events (divorce, disability, death of a loved one) all have negative long-term effects on happiness. Vice versa for positive events.
[0]: https://www.gwern.net/docs/psychology/2019-letexier.pdf
[1]: https://www.theguardian.com/science/2018/apr/16/a-real-life-...
[2]: http://daniellakens.blogspot.com/2017/07/impossibly-hungry-j...
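The income–happiness arithmetic described above can be sketched as a tiny function (a toy model: the coefficients and the names `happiness`, `base`, `step`, `ref` are made up for illustration, not taken from any study):

```python
import math

def happiness(income, base=50.0, step=10.0, ref=10.0):
    """Reported happiness as a logarithmic function of income.

    base: happiness at the reference income `ref`
    step: happiness gained per doubling of income
    """
    return base + step * math.log2(income / ref)

# Each doubling of income adds the same fixed increment:
print(happiness(10))  # 50.0
print(happiness(20))  # 60.0
print(happiness(40))  # 70.0
```

Plotted against income on a linear axis this curve flattens out like a plateau; plotted against log(income) it is a straight line.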
- AzzieElbab 1 year agoNot to mention the grievance studies affair and the Sokal hoax. Next level
https://en.wikipedia.org/wiki/Grievance_studies_affair
https://en.wikipedia.org/wiki/Sokal_affair
- booleandilemma 1 year agoIt sure does give you some cool factoids to bring up at parties though. Did you hear the one about the prison experiment?
- blululu 1 year agoThe interpretation is never so simple. I'm not sure what that guy at Stanford was trying to prove but his research conclusively proved that tenured psych professors at leading universities can be wildly unethical. This discovery helped support the creation of institutional review boards across the country - one of the first major restrictions on academic freedom in the history of science.
- sandworm101 1 year agoOr they proved that if you set up some people to work/live in a fake prison, they end up acting basically like all the other people who already work/live in real-world prisons. It's like having people sit in a McDonald's for ten hours each day, then writing a paper about how subjects ate more fast food during the trial period. Breaking news: starting a new job in Hawaii increases the likelihood that you will take up surfing.
- somsak2 1 year agoAny sources for these claims?
- throwawaaarrgh 1 year agoI honestly love how archaic the science research community is. It's like we're still in the 19th century, where the claims made by people of high reputation are taken as gospel, rather than trusting a rigorous process without personal or professional reputation bias. All the bullshit about publishing for the sake of getting a grant, getting your name on a paper to up your rep, etc is amazing. Like the purpose of science isn't science but to play an elaborate professional reputation game that happens to include science.
- NoMoreNicksLeft 1 year ago> Like the purpose of science isn't science but to play an elaborate professional reputation game that happens to include science.
The scary part comes about 50 years ago when everyone woke up one morning and figured out that the "happens to include science" part wasn't strictly necessary.
- adamrezich 1 year agoto my mind, this is the actually interesting psychological finding—that we, collectively, as a society, fall for this bullshit, time and time again, and we continue to reinforce societal structures that continue to let this sort of thing happen, largely without consequence.
- vacuity 1 year agoActual interviews of people being told about this fraud:
"Oh no! Anyways."
When most people are stuck in a rat race to get rich and "become successful", there's not much energy or reason to care about some drama, unless it's entertaining or political, perhaps. If my employer follows the newest workplace fads based on some studies, I might care. Otherwise, why should I spend mental energy (at the very least) trying to get the messes of scientific research cleaned up?
- dennis_jeeves1 1 year ago>we continue to reinforce societal structures that continue to let this sort of thing happen, largely without consequence.
People are morons, generation after generation, what else do you expect?
- bluecalm 1 year agoNot very surprising. There are incentives to publish, especially surprising stuff (so you can call it ground breaking) and no incentives to be right in a sense that your results align with reality.
Imagine spending time to prepare a paper and then collecting data just to see it doesn't show what you hoped it would. Your choices are to bin several weeks/months of your work to the detriment of your career, or to help a bit so the data behaves. The risk is close to 0, the reward (or lack of punishment) is clear. It's crazy not to expect a significant % of individuals facing the choice to go with the massaging-the-data option.
It all flies in social science because most of the stuff they "research" ranges from interesting but not very useful curiosities to completely pointless. No devices are being built based on it. No serious investment is being made. No policies are going to be affected, or if they are, then studies are picked for ideological compliance, not for describing reality well. You're also not going to face a lawsuit as you might in, say, medicine. "Your honour, I've read the study, started breaking the rules and got expelled from school" is not going to get you far.
- currymj 1 year agoI don't think your last point is quite right.
there is a large amount of fraud (real fraud, photoshopping images to show fake scientific results) in biology, some of which has directly led to serious pharmaceutical investments in drugs that don't work. Recently Cassava Sciences got in trouble for this. https://www.nytimes.com/2022/04/18/health/alzheimers-cassava...
as far as the alleged data fraud in this story, at least two of the authors have made a lot of money giving talks, consulting, and acting as scientific advisors on the basis of their research, so there's also fairly serious financial stakes.
- steveads 1 year agoThe irony is strong. Her 2018 book title: "Rebel Talent: Why It Pays to Break the Rules at Work and in Life"
- 1MachineElf 1 year agoFacts support that throughout human history, those who are lauded in the field of ethics are often the ones who help us rationalize, justify, and moralize what would previously have been considered unethical behavior.
- somsak2 1 year agoThis is mentioned in the second sentence of the linked article.
- LatteLazy 1 year agoThe older I get, the more prejudiced I get: STEM exists, everything else is just bullshit. Ethics is just made up. Even the less rigorous "sciences" like psychology are mostly just nonsense (unreproducible "results", driven by fashion not results, more about getting donor/client money in than actually discovering or changing anything) as far as I can see.
- JHonaker 1 year agoI’ve had the opposite trajectory, and I think there’s a lot of danger in a wider adoption of your stance. The deeper I go on STEM things, the more I realize everything is, in your words, “bullshit.”
I want to be clear that I would phrase that as “based on consensus,” and not as bullshit. I have seen so many doomed/failed projects/models that have come up because people say “well, it’s math, so it has to be right.”
- LatteLazy 1 year agoI mean, if the maths was right it was right. If it was wrong and people decided to proceed anyway then that's "consensus based" (or bullshit as I ineloquently put it :) ) like non stem seems to be most of the time.
Sorry if I'm missing your point?
- JHonaker 1 year agoThe main problem is the extrapolation of the “small world” we construct with mathematics to the “large world” of an application.
Your model can be completely internally consistent and allow you to study things in the context of your assumptions perfectly, but that doesn’t mean it doesn’t have critical flaws that disqualify any learning about the system you’ve modeled.
What's difficult is the lack of understanding about what implications a representational choice has for the kinds of relationships a mathematical model can describe, and about which domain components are and aren't critical to model.
People often forget that there are two translation steps that have to take place. Domain -> Math -> Domain. That second one is too often implicit or assumed to follow, especially in ML.
- mike_hearn 1 year agohttps://en.wikipedia.org/wiki/Mathiness
Mathiness is a term coined by Nobel prize winner economist Paul Romer to label a specific misuse of mathematics in economic analyses. An author committed to the norms of science should use mathematical reasoning to clarify their analyses. By contrast, "mathiness" is not intended to clarify, but instead to mislead. According to Romer, some researchers use unrealistic assumptions and strained interpretations of their results in order to push an ideological agenda, and use a smokescreen of fancy mathematics to disguise their intentions.
- treprinum 1 year agoStuff like computing precise results from imprecise inputs etc.
- nemo44x 1 year agoThey (humanities) mainly exist to perpetuate and legitimize state power over the individual. It serves the interests of elites and their prestigious media (the ones that win the awards they made up to differentiate themselves) gets to propagate the (predetermined) outcomes from the "studies" their elite institutions publish. This in turn makes these institutions and adjacent power structures (NGOs, media, activist networks, etc) even more powerful as it turns into policy and it's a never ending loop of self legitimization and justification.
- yamazakiwi 1 year agoIn the spirit of this thread, prove this is true using scientific study.
- nemo44x 1 year agoI don’t have the time or resources to conduct a scientific study on it. But empirically I think it’s clear and the incentive of power attraction makes complete sense.
Also, science doesn’t really prove things true. It can prove things likely. And it’s really good at showing what is not true. I think in general humanities fail to address falsifiable things. This is why we have a reproducibility problem and why it’s mainly bullshit.
- dreen 1 year agoYou're wrong, concentrating on just STEM leads to media illiteracy. I would even say it's highly conducive to breeding bigotry. Studying the humanities is essential to understanding the human condition, as well as to making sense of the mass of information and cultural heritage everyone finds themselves in.
- LatteLazy 1 year agoCareful: you have missed my point.
I didn't say Humanities were worthless. I just said there are no provable results there. "Is X ethical?" - no one can prove it but we will discuss it as long as someone will pay us to. Since there are no firm truths, anyone can get a degree (or more) without actually gaining the skills you refer to. In fact actually gaining the skills you allude to makes someone LESS likely to get funded or famous. It is extremists filled with certainty who rule these subjects...
- posterboy 1 year ago> Since there are no firm truths
that's bullshit. Classic liar's paradox and the solution is that sentence is ungrammatical in a strong sense of universal grammar ... which I can't prove, and I'm not philosopher enough to concern myself with lists of formal fallacies to make up for it
- dreen 1 year agoThere absolutely are provable results: you make observations and conclusions, and you must show how those are linked with logic. "Is X ethical" is just one of millions of possible questions in the class of sciences called the humanities, of which ethics is just a small subset. However, when doing any research involving interaction with live humans, you must understandably prove ethical conduct.
- GildedIgel 1 year agoThat’s why STEM professions have ethical guidelines and standards - here’s AMSTAT -> https://www.amstat.org/your-career/ethical-guidelines-for-st...
- LatteLazy 1 year agoThese always seem like nonsense to me.
People who don't need guidelines gain nothing from them. People who need them won't sign up to them. Meanwhile I have to fill in a form and have a 2h meeting before I can shine a laser on an (already dead, commercially available) butterfly's wing?!
- dennis_jeeves1 1 year ago>STEM exists, everything else is just bullshit
'M' is also largely bullshit. Worse, it's got immediate consequences.
- jononomo 1 year agoDan Ariely is another person in this space who has been spouting BS (he is mentioned in this article). I remember when Ariely used to make the news on a regular basis and I'm glad that he's dropped off the face of the Earth.
- somsak2 1 year agoWhile not as popular as he was, he's still pretty big. I think the main reason is that he hasn't published a new book since 2019, though one is coming in September later this year.
https://trends.google.com/trends/explore?date=all&q=%2Fm%2F0...
- jononomo 1 year agoCrap. I hope he gets raked over the coals in interviews and is constantly badgered about his dishonesty.
- christkv 1 year agoThe worst part of these kinds of studies - and I include sociology and psychology studies - is that they are often used to create policy in the private or public sector, causing untold damage to organizations, employers and citizens.
- motohagiography 1 year agoIn the public sector, studies like these are the pretexts for unpopular policies for minority interests. Science seems like an obviously good way to balance minority and majority interests, but when studies become a forcing function for narrow political interests, the results have been equally as obvious. When you have the power of the state, what do you need truth for? There are a number of third-rail studies that are doubtful and even implausible but they underpin policy, so they are sacred relics of progress for which the punishment for questioning them is exile.
The social sciences are largely irreproducible, and our societies tolerate it because they give the bureaucracy the tools it needs to manage the legislative branches in a form of capture. Show me a study that demonstrates an official narrative is baseless, and I'll show you "a problematic fringe theory from a now-former academic."
- ethanbond 1 year agoHow else should we create policies if not “conduct research, design policy around what we believe we know?”
Of course if “these kind of studies” refers to fraudulent ones, then yeah no one would disagree.
- lockhouse 1 year agoThe issue is that policy is being established based on studies that are either very badly designed or in many cases outright fraudulent, often with devastating consequences.
Like many things, research is often spoiled by human nature. Researchers want to prove their hypothesis so desperately that they’re willing to do whatever it takes to make that hypothesis true. Perverse incentives are everywhere in the research world.
- th230603 1 year agoArguably there is huge demand for this sort of research which is a proximate cause of it being concocted fraudulently.
Lots of people working in policy don't have any real ethical or political engagement with the problems they are supposed to be solving. They lack vision and purpose. They choose to believe that irrelevant and trivial interventions can make huge inroads into difficult social problems, because they are like the man looking for his wallet under the lamppost. If making progress on really difficult problems requires deep thought, courageous leadership and persistent hard work, then they aren't going to be able to solve them anyway.
- bedobi 1 year agoWhat actually happens is orgs, be they for profit or governments or whatever, decide what they want to do, look for studies that justify it after the fact (in social science, you can find a study to support anything), and point to the studies despite many of them not even passing a basic smell test.
- edgyquant 1 year agoThrough the democratic process?
- staticman2 1 year agoI didn't realize voters were not allowed to read studies.
- rlucas 1 year agoA caution on the source; "The College Fix" is a rage-bait blog for one side of the tiresome U.S. culture war. As a result, expect sensational framing.
- blamazon 1 year agoFolks seeking a more neutral description of the situation may desire to click into the 4-part data-driven report that is the original source of the claims, which is linked in the article but also below:
- keepamovin 1 year agoI suppose that's one solution: if you don't want to become ethical you can just change the world (as in reframe it) until you do. I see nothing wrong here: he gets to profess his ethics, but doesn't actually have to live it. Mission accomplished. /s
- juujian 1 year agoIn this case it's a 'she'. Yes, this feels like a strange context to ensure representation in science :)
- keepamovin 1 year agoI'm showing my silly biases about gender in unethical behavior, ha ha! :) Thank you for the correction. I'll update my priors.
- neilv 1 year agoThe HN title doesn't include the "allegedly" currently in the article's title.
The lede of the article also calls it "accused of".
- pyrale 1 year agoYou can check the article explaining the evidence by yourself, it's pretty damning.
[1]: https://datacolada.org/109
[2]: https://datacolada.org/110
[3]: https://datacolada.org/111
- neilv 1 year agoHN has a convention of using the headline from the article, and it puts the article's publisher name/domain right after the headline, like attribution. So the impression is that particular publisher is saying that thing definitively.
The difference of "allegedly" is big to a publisher (missing it can break their own company), and to a journalist.
Because of this, I think it's also a big difference to the savvy reader. When an outlet dispenses with the "allegedly" or phrasing as someone else's claim/assertion, that feels to me like a rare occasion on which they really want to impress that it's definitive. And that they know what they're doing, and that experienced readers can tell. Never crying wolf. (A little like how you'd say "literally" once in a decade or lifetime, with emphasis, for impact, when it was really true.)
- koollman 1 year ago"ethics". Well at least they are creating more examples to analyze
- medymed 1 year agoI wonder if *she* fabricated the data or if desperate graduate students did. In which case the ethical considerations are perhaps(?) slightly different and may be worth writing a paper about, with survey data of course, only slightly modified to support the field’s party line.
- hexomancer 1 year ago"she was the only author involved in collecting and analyzing data for it, the trio reported"
- jpadkins 1 year agoShe. Professor Francesca Gino
- dsr_ 1 year agoAnd, in subsequent paragraphs, also Dan Ariely (he).
- sva_ 1 year agoI like how the article simply links to sci-hub (in "Notably, a famous article discussing dishonesty [...]")
- pessimizer 1 year agoI've definitely never seen that before. I was actually a little startled on mouseover and for a moment wondered if it was my browser (did I install some weird plugin?)
- danjoredd 1 year agoIs freedom of information dishonesty?
- hospitalJail 1 year agoThe Ivy leagues cannot catch a break.
I imagine this stuff happens at state schools too, but yet another nail in the coffin of their reputation.
Sincerely, a person who went to a school you've never heard of, but makes a ton of money because I know 4 skills that synergize and are scarce.
- mahathu 1 year ago> Sincerely, a person who went to a school you've never heard of, but makes a ton of money
I always find it odd when people choose this as a metric for success. Why not "a person who went to a school you've never heard of, but has a job they enjoy deeply and that gives them purpose"?
- neilv 1 year agoThey could be saying "by the metrics and values of this thing I'm criticizing, I'm successful by not doing that."
Also, financial success is on-topic here, as this forum originated for Silicon Valley bros seeking wealth. In past lives, we were "greed is good" Wall Street bros. I'm sure some percentage of our earlier counterparts started out really loving spreadsheets, before the schemes.
- bhouston 1 year agoIt is not that dissimilar to branding in the real world. There are a ton of great little restaurants, but when you don't have time to really explore, and many small restaurants are also horrible, you pick a brand you are familiar with. The reason is that you know what to expect.
Similar to why so many people want to get Google, Meta, Microsoft, Apple, etc. on their resume. While there are a ton of amazing small companies, they just do not have the same brand recognition.
I think the economic term for this is "signalling." Being associated with these brand-name schools gives you instant credibility: https://en.wikipedia.org/wiki/Signalling_(economics)
- bluefirebrand 1 year ago> Why not "a person who went to a school you've never heard of, but has a job they enjoy deeply and that gives them purpose"?
Because by and large people go to university to have a better chance of making more money
- gonzo41 1 year agoBecause in America (and most places), if you have a job you enjoy that gives you purpose, you're probably already rich. Usually with inherited middle-class wealth, like having a stable home or better. You're probably sitting on top of the privilege pyramid by your country's standards.
Most aspirational people who are out to make mega bucks are trying to put distance between themselves and the time they didn't have that, and to set up the structures in their life so that if they have kids, they can give that easygoing life to their heirs.
- joncrane 1 year ago>Why not "a person who went to a school you've never heard of, but has a job they enjoy deeply and that gives them purpose"?
Because when a person has a lot of money, it vastly increases the scope of available activities that they enjoy and give them purpose.
- manicennui 1 year agoAll ten of those people don't make a good case for any particular school.
- grumple 1 year agoI think this arises because we view academics as accomplished experts within their walled gardens, but don't bestow a similar place in the real estate of the mind to people who have been successful at applying their skills in the real world.
- archontes 1 year agoIkigai.
- robertlagrant 1 year ago> I always find it odd when people choose this as a metric for success. Why not "a person who went to a school you've never heard of, but has a job they enjoy deeply and that gives them purpose"?
It's not odd. If your metric was "be as useful as possible to others" then making lots of money is an excellent proxy metric.
- manuelmoreale 1 year agoIt really isn't though. I might help 10 very rich people and make a lot of money, or I might help 1000 and do it for free. Money is not an excellent proxy metric. Money is just money.
- a_cardboard_box 1 year agoIt's more a measurement of how useful you appear to rich people. You can't make money being useful to people without money. You can make money convincing rich people you're useful to them when you're not.
- lusus_naturae 1 year agoThe ivy/state dichotomy is unnecessary for research quality, as it depends more on the research capacity and capability of an institution, e.g. whether it's an R1 vs. R2. You're probably right that perverse incentives degrade or corrupt research anywhere--it's a systemic problem. Moreover, any discussion of research ethics that doesn't include industry or industry-sponsored work is incomplete.
If scientists had more than a citation index to establish productivity so that they can justify getting or keeping funding, the community could probably make some headway towards addressing behavior such as Gino's. Outright fraud is less prevalent than bad science or misinterpreted or misunderstood results. Personally I am in favor of a replication index for all original work, at least.
- th230603 1 year agoGino worked for a business school, as did several of her collaborators.
Often for business school profs, their work will be judged less on academic acceptance (peer-reviewed publication, etc) and more on its 'real-world impact'. You are seen as a good academic of marketing if you can market yourself into Ted talks, pop science publications, and ultimately, influence on 'decision-makers' (the sort of politicians and thinktanks who believe that eg healthcare can be solved by painting hospitals orange).
So there are strong incentives to falsify results but they are quite different from those which bear on actual scientists.
- gregshap 1 year agoThe headline gets way more engagement when it's Harvard. Kind of like Foxconn headlines with "Apple factory" not Nokia factory etc.
- eduction 1 year agoThis is just supposition but it's entirely possible there is some causality at play. The ivies are the most competitive universities in the U.S. (modulo some outliers like Stanford and MIT). If you are successful at cheating in academia then by definition you are going to end up teaching/researching at a more competitive university.
- Yajirobe 1 year ago> Sincerely, a person who went to a school you've never heard of, but makes a ton of money
Hey, good for you!
- Roark66 1 year ago>Sincerely, a person who went to a school you've never heard of, but makes a ton of money because I know 4 skills that synergize and are scarce.
I always thought people in academia don't really make much money compared with what they could make in business. Add to that that it takes a very long time and a lot of hard work to get to the top and earn tenure, and I'm amazed we're still seeing any scientific progress being made.
- mistermann 1 year ago> because I know 4 skills that synergize and are scarce.
Those would be? (Asking sincerely.)
- spywaregorilla 1 year agoThe Ivy League schools catch tons of breaks because people don't even notice this kind of thing. It's a complete non-issue.
- air7 1 year agoWow. The first few paragraphs read like a story from The Onion.
- koonsolo 1 year agoThe biggest problem with scientific studies is that you get noticed for remarkable results. So what do scientists do? They either exaggerate their conclusions or even commit fraud.
Most scientific studies are plain boring and don't find anything new.
Maybe papers should be graded on "how well was this study executed" instead of "what kind of unexpected result did we end up with", because the second really invites exaggeration, lies, and fraud.
- frognumber 1 year agoI wouldn't say "even fraud." My experience is about half of the high-profile results from my alma mater (MIT) these days are fraud.
Note the selection bias: "High-profile." Plenty of grad students still do honest research. They just don't generate thousands of citations, media coverage, or find academic jobs.
I firmly believe tenure should be abolished, and academics should be fired for any sort of documented dishonesty, anywhere. Lying is not okay if you're engaged in the search for knowledge. If a person intentionally lies in a popular science book, a court case, or a newspaper article, they should be out.
- 11101010001100 1 year agoCan you point out examples of results coming out of MIT that are fraud?
- throwawayccx 1 year agoLet me guess: the media went along with it without asking questions or giving voice to critics.
- alkibiades 1 year agowhat bothers me most is when politicians and corporations make policies based on these junk studies
but no, we have to “trust the science”
for example, if you create a study whose results show that diversity is good in some way, you’ll get endless citations and orgs will make policies after it. even if it’s fraudulent, no one’s likely to look too deep into it. you wouldn’t be able to publish one that said the opposite. or if you did, you’d likely end your career
- wruza 1 year agoNotably, a famous article discussing dishonesty was found to contain fabrications, they wrote.
Isn't that as self-referential and deconstructive as can be?
- manvillej 1 year agothe fourth part of the data colada research has been posted: http://datacolada.org/112
- codesnik 1 year agoResearching remorse, N=1
- biomcgary 1 year ago[flagged]
- DeathArrow 1 year agoMaybe he just needed more examples to discuss in his courses?
- trabant00 1 year agoWhere there's demand there will be supply. People are asking too much of science and technology in the last decades. We want instant results, shortcuts, secrets to success. Nobody wants to pay for slow steady progress and long term results. Change your life in 30 days with this diet, stoic philosophy program, psychology tidbits book, social media app, 100 push-ups, etc. What you want to buy is what they will be selling.
- h2odragon 1 year agoPeople used to blame God for their faults: "I know this isn't so, but I wish it was." Now they blame "Science." Perhaps soon they'll blame their AI assistants for the actions and attitudes they do not wish to be responsible for.
The one thing I won't bet on is people suddenly deciding to accept that they are the genesis of their misery and their joy, and to stop seeking an outside "other" source of authority.
- neilv 1 year agoOn the referenced DataColada pieces:
> by Uri, Joe, & Leif
When publicly laying out a case like this, which could destroy the reputation of someone they identify by full name, I think it'd be good form for them to put their own full names on the argument.
(Of course they'd have to stand behind it if they are sued for defamation, and the full names can be found elsewhere on the Web site. But that's not my point.)
Putting full names on the piece that people will read says to the reader that they take it seriously enough to put their own names on it.
And the whole alleged scandal is about integrity and what you'll put your name on.
- SamBam 1 year agoClick Menu -> About -> "The authors: Uri Simonsohn, Leif Nelson and Joe Simmons."
They're not hiding it to any extent. It's a big blog that has hundreds of posts. Many blogs refer to authors by their first name if they expect their readers to know them, or easily find them.
> ([...] and the full names can be found elsewhere on the Web site. But that's not my point.)
These are meaningless weasel words. It sounds like you wrote the post first, then belatedly realized the authors aren't hiding their names at all, but wanted to keep your post.
- neilv 1 year ago> These are meaningless weasel words. It sounds like you wrote the post first, then belatedly realized the authors aren't hiding their names at all, but wanted to keep your post.
Those are not weasel words. I said that in my original message. I anticipated your argument and addressed it, before you made it.