Problems in Gemini's Approach to Diversity

59 points by turadg 1 year ago | 81 comments
  • dang 1 year ago
    Recent and related:

    Google to pause Gemini image generation of people after issues - https://news.ycombinator.com/item?id=39465250 - Feb 2024 (1164 comments)

    • d-z-m 1 year ago
      It's so ironic to me, this hyper-cognizance of skin color that is advocated for in the modern DEI approach.

      What does "no more racism" look like? To me, it looks like everyone having recognized that skin color isn't an intrinsically meaningful part of someone's personhood. The people who believe this should fight against the people who do not.

      We're not going to get there by becoming ever more exquisitely concerned with skin color and its implications in every social interaction.

      • raxxorraxor 1 year ago
        They want to replace humanism with special rights for special groups. There are cases where this is indeed warranted, but only a few, in specific cultures, and they cannot be generalized. DEI lost sight of its goal along the way, and people are far less tolerant than before, be that about skin color or anything else.

        It really was a drug for low-confidence people to feel better about themselves compared to others, even though the ideology says you should self-reflect instead of being indignant.

        Humanism gets rid of racism while DEI keeps it alive in perpetuity.

        • baobabKoodaa 1 year ago
          > What does "no more racism" look like?

          If we can take a cue from Gemini, "no more racism" looks like "all people have skin color in the exact same shade of brown".

        • xepriot 1 year ago
          What in the world is going on with these convoluted framings? Just call it racism - not a 'shortcut' or an 'overcorrection'. Why not just clearly, simply, and precisely describe what is going on? This is anti-white racism. Google Gemini was erasing white people from history and also the present, and when it wasn't simply erasing them, it was diluting them - going partway towards erasure.

          What I've said is bare, basic truth, and not ideological. If it offends or provokes factual or semantic quibbling, then that reaction indicates pretzel thinking about moral good and 'racism' in the minds of people like Smith.

          • lolinder 1 year ago
            The convoluted framings come from people who want to actually persuade those who don't already agree with them. If you want someone to listen to you, you have to first meet them where they are, not because you're worried about offense but because you want them to change their mind more than you want to state "bare, basic truth".

            The people behind Gemini's "diversity" sincerely believe they're doing the right thing. Calling them racist doesn't change their minds; it just gets you filed under "reactionary" and ignored. This piece takes a more effective approach by saying "I'm with you, I think you want to do the right thing, but here's why what you're doing is actually counterproductive".

            It's an effective rhetorical strategy for the target audience, you're just not that audience because you already see the problem.

            • xepriot 1 year ago
              This is certainly charitable to Noah Smith, but I think it's more likely he earnestly has the overcomplicated understanding of racism he appears to demonstrate in his article. And the effect is the same either way: he advances the confused understanding of racism as something that only happens to non-whites.

              Of course everyone has good intentions and likes to be flattered. It's possible to flatter with tone, and without dissembling.

              • lolinder 1 year ago
                Yes, in order to convincingly say "I'm with you, I think you want to do the right thing" you do have to actually believe that America has a race problem that needs to be addressed and you have to believe that the people taking shortcuts sincerely believe they're doing the right thing. So yeah, Noah Smith does presumably hold that understanding of racism.

                I wasn't trying to say that he was lying for the sake of the article, I was trying to say that he's framing it in a way that is intended to be persuasive to a particular group of people that you're not a part of. If you pressed him on it in a private conversation I wouldn't be surprised if he acknowledged that the behaviors involved here are racist (by some definition) even though the people are trying to be the opposite. But he knows better than to level that word at his audience.

                • raxxorraxor 1 year ago
                  His problem isn't that he overcomplicated a theory; his problem is personal, and he expresses it with racism.

                  > diverse [...] which the AI interpreted as meaning “nonwhite”

                  While an AI isn't objective in any way, it is the product of its dataset. This is also an indictment of the DEI groups that grant themselves the title of being less biased, or at least self-reflective enough. The problem is that they simply are not; quite the contrary.

              • mrkramer 1 year ago
                Google overengineers itself over and over again; liberal managers are dangerous. Make generative AI that is grounded in historical facts and let people play with it. For example, if all popes were white, generate a white pope image by default, but also let me generate a black pope, because why not, if the technology can do it.
                • anonporridge 1 year ago
                  Because there's a newspeak movement trying to narrow the definition of 'racism' to something that can't be perpetrated against relatively powerful racial groups.
                  • at_a_remove 1 year ago
                    Clearly, you've never spent any time on Metafilter. There's been a long push to reformulate the concept of racism such that white people cannot be victims of it, even in an environment where the usual population statistics are reversed. Similarly, that one cannot be sexist against men, and so on.

                    Right now, DEI exists in these models at a level where context is not considered: not for time or locale, nor for, say, the correlation between sex and the populations present in a given profession. It's just unconsidered DEI goo sprayed over the outputs.

                      • ForHackernews 1 year ago
                        Is it 'racism' when facial recognition algorithms perform more poorly on dark skin?

                        It seems like a technical glitch to me. Similarly here, they want the models to represent ethnically diverse individuals, but the model doesn't understand historical context well. It's a technical shortcoming.

                        • anonporridge 1 year ago
                          > Is it 'racism' when facial recognition algorithms perform more poorly on dark skin?

                          No, because that occurred due to an oversight of not including a representative set of training data. At worst it's implicit racism due to oversight and ignorance.

                          What Google seemed to do here was more explicit racism, by setting configuration in their system that directly and intentionally discriminates based on race. It's fundamentally worse.

                          • ForHackernews 1 year ago
                            They both seem like goofs to me. I don't see much qualitative distinction. In both cases, engineers have produced racially dubious results by getting sloppy or lazy with their AI models. Personally, I wouldn't call that "racism" but I see how one could make that argument.
                          • lolinder 1 year ago
                            Neither case is a purely technical shortcoming—in both the case of facial recognition algorithms performing poorly on dark skin and the case of Gemini refusing to produce images of white people, the technology works that way because of the priorities of the people developing the software, and we can derive the developers' priorities from the software's limitations.

                            A piece of facial recognition software that doesn't do well with dark skin shows that the team that built it either didn't think to test it on dark skin or didn't feel it was important to get dark skin working before release.

                            An image generation model that both doesn't produce images of white people spontaneously and will actively refuse to do so when asked shows that the team that built it either failed to ever ask it to produce images of white people or believed that it was a good thing for their model to refuse to produce such images.

                            In neither case is the problem purely technical.

                            • choeger 1 year ago
                              It could be a consequence of racism. Racism led to white people dominating the United States, which led to images of white people dominating the internet, which led to images of white people dominating the training data for facial recognition algorithms.

                              But it could also just reflect the composition of the population of the countries that dominate the internet.

                              • xepriot 1 year ago
                                No, when unintended outcomes occur in the world, this is not racism. The sun is not racist for burning white skin more severely than dark skin, and when AIs are trained on disproportionately white data, their outputs are not racism.

                                When racist employees of Google intentionally racially cleanse their products, this is racism.

                                Let me know what is confusing about the concepts of agency and intention, and I will try to clarify.

                                • consumer451 1 year ago
                                  You appear to only comment here on HN when race comes up.

                                  So, are the Caucasian people who were involved in any of these mis-optimizations, aka mistakes, racist against themselves? Are they traitors?

                                  • ForHackernews 1 year ago
                                    "I want a model that recognizes faces, but I didn't train it on enough black faces" => not-racism, according to you.

                                    "I want a model that produces racially diverse images of people, but I didn't test it in racially homogenous historical contexts" => yes-racism, according to you.

                                  • archsurface 1 year ago
                                    No, this is not limited to Gemini.
                                • tekla 1 year ago
                                  I mostly find this incredibly funny. Google, one of the most cash-rich companies in the world, which spends millions on DEI initiatives, diversity officers, and press releases about how diverse it is, releases one of the most obviously racially coded tools in human history, after spending probably very large stacks of cash.

                                  What an amazing self-own. Laughter and mockery are the best cure here, and possibly a news article about how a bunch of people on the project got fired.

                                  I was unaware that Native Americans were in 1850s China. The only reason I knew they were Native American was that the image used literally the most stereotypical depiction possible, moccasins and feathers and all.

                                  Amazing. It handed nukes to everyone who claims that the "liberal elites" have it out for them.

                                  • pixxel 1 year ago
                                    >possibly a news article about how a bunch of people on the project got fired

                                    Not for ‘reverse’ racism. A pay rise, perhaps.

                                  • siliconc0w 1 year ago
                                    Just from a technical perspective, I wonder whether this was done via a system prompt, where a potentially well-meaning instruction like "Display every ethnicity equally in all photos" could backfire, or whether it is seared into the model via the reinforcement-learning-from-human-feedback (RLHF) training data. If it's the latter, then yeah, it'll need to be retrained from a checkpoint before that final step.
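
                                    A minimal sketch of the first possibility, with invented names (nothing here is Google's actual implementation), just to show why the two cases differ so much in cost:

                                        # Hypothetical prompt-rewriting layer. DIVERSITY_INSTRUCTION and
                                        # rewrite_prompt are illustrative names, not anything from Google's stack.
                                        DIVERSITY_INSTRUCTION = "Display every ethnicity equally in all photos."

                                        def rewrite_prompt(user_prompt: str) -> str:
                                            """Prepend a blanket diversity instruction to every image request.

                                            Applied unconditionally, a rule like this backfires on prompts with
                                            a fixed historical context, because the injected instruction competes
                                            with (and often overrides) the context the user asked for.
                                            """
                                            return f"{DIVERSITY_INSTRUCTION}\n{user_prompt}"

                                        # If the behavior lives in this layer, the fix is a one-line change. If it
                                        # was instead reinforced into the weights during RLHF, no prompt surgery
                                        # helps and the model must be retrained from an earlier checkpoint.
                                        print(rewrite_prompt("A portrait of a 17th-century British monarch"))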

                                    Even aside from the image generation snafu, Gemini's guardrails make it annoying enough to use, even for innocuous queries, that I pay for a ChatGPT subscription even though I have access to Advanced. This is really an unforced error on Google's part.

                                    • mrkramer 1 year ago
                                      Every piece of information-retrieval software needs to be grounded in facts, not ideology. Radical liberal ideology is overtaking Google's product design, and these are the consequences.
                                        • darth_avocado 1 year ago
                                        As someone who’s not white, I agree the whole approach to DEI is wrong. The unfortunate problem with getting it right is that the loudest voices are DEI grifters who are only motivated to enrich themselves. The only effective way to get people to have positive attitudes toward other people with differences (race, religion, color, etc.) is for people to meet each other. But I'm not sure that's something that can be done at scale, unless everyone just moves to NYC.
                                          • freedomben 1 year ago
                                          I could not agree more. The only effective way I've ever seen is people meeting other people who look different than them, having meaningful discussions, and having that realization that "wow, they're just like me." I used to believe that it could be done virtually, but the lessons of social media and the dehumanizing effect of the internet have demonstrated that it needs to be done IRL, or at least in prolonged video meetings or something where the complexities of human identity (such as facial responses, vocal intonations, pauses for thought, rapid exchange of ideas/dialogue) are apparent.
                                            • raxxorraxor 1 year ago
                                              Classic humanism is the best recipe, which thoroughly conflicts with DEI on multiple fronts. Takes a while, but there are things that need a while.
                                              • ThrowawayR2 1 year ago
                                                The problem is that headline grabbing DEI gaffes like this are causing unease among political moderates and handing ammunition to conservatives in a particularly fraught US election year. Non-whites need to stand up and denounce the DEI grifters and illiberal progressives as a matter of self-preservation.
                                              • slowmovintarget 1 year ago
                                                I love that this article uses the current event to talk about the larger issue.

                                                > But Google’s attempt failed disastrously. Why? In my view, it was because the Google team tried to take a shortcut.

                                                Rather than do the hard work of building a more integrated society, the attempt at superficial shortcuts leads to lazy things like the prompt injection seen in the Gemini release.

                                                Another great comparison toward the end:

                                                > Where Hamilton [the musical] challenges the viewer to imagine America’s founders as Latino, Black, and Asian, Gemini commands the user to forget that British monarchs were White. One invites you to suspend disbelief, while the other orders you to accept a lie.

                                                • tpetrina 1 year ago
                                                  The simplest take is that “white people are not diverse”. So asking for a diverse group means non-white, predominantly because diversity is _framed within a dominating white culture with a surplus of non-white people_. It is basically a labeling and context issue, not intentional. Imagine asking for a diverse IT office in Zimbabwe: is the adjective “diverse” the same there as in San Francisco?
                                                  • JumpCrisscross 1 year ago
                                                    > simplest take is that “white people are not diverse”. So by asking for diverse group means non-white

                                                    This redefinition of diversity—to a separate monoculture—has underwritten the failure of the movement. Politically, obviously, and I’d also argue morally.

                                                  • alienicecream 1 year ago
                                                    The problem is the people who are pushing this trash don't have any skin in the game. Anyone who wants to promote diversity should be parachuted into the nearest inner city ghetto and forced to live there, I bet they'll start thinking twice before tweeting from their gated community about how evil the white man is.
                                                    • tennisflyi 1 year ago
                                                      Is this the first post about this topic here? Kind of been waiting to read the discussions
                                                    • 1024core 1 year ago
                                                      That's a thoughtful and well-written post.

                                                      TL;DR: there are no shortcuts.

                                                      • jtrn 1 year ago
                                                          Are people deliberately overstating the "wokeness" of Gemini?

                                                          As a centrist, I worry as much about the MAGA people as I do about the Woke mob. But I'm starting to think there is currently an overreaction regarding the degree to which people claim Gemini is woke and biased. If I don't intentionally try to get it to say racist stuff, I find the answers rather nuanced. Sure, I sense the "left" bias in the model, but I'm starting to think there is a stronger echo chamber around "Google is a woke mind virus spreader" than I thought.

                                                        I thought this answer to my question on individual differences to be rather okay.

                                                        Question: Are there fundamental differences between individuals with regards to their ability to be productive that might be a problem with regards to liberal views, and is there any research on this topic that informs our understanding?

                                                        Gemini Advanced Answer (Abbreviated, full chat [here](https://g.co/gemini/share/1344739f31b2))

                                                        *Fundamental Differences in Productivity*

                                                          * *Cognitive Abilities:* Intelligence, problem-solving skills, and learning capacity vary among individuals. These influence a person's ease of acquiring knowledge and skills relevant to productivity.

                                                          * *Personality Traits:* Conscientiousness, self-discipline, and motivation are strong predictors of work ethic and success. These traits differ between individuals due to both nature and nurture.

                                                          * *Mental and Physical Health:* Mental well-being and physical health significantly impact one's focus, energy, and overall capacity for productive work. These are influenced by genetics, environment, and access to care.

                                                          * *Socioeconomic Background:* Access to education, resources, mentorship, and networks greatly influences the opportunities and skills available to individuals, impacting their potential productivity.

                                                          Given the pervasive claim that Woke people minimize the role of IQ and abilities, and claim that all differences are a result of oppression, this model isn't very Woke. Are people just gerrymandering the results to make it look bad? Have I just not found the bad parts yet, or are the Woke claims overstated?

                                                        • freedomben 1 year ago
                                                          It has Feb 24 as the date on the link you shared. Is that because you asked it today or is it just that you shared it today?

                                                            Unless you tested it on or before the evening of Feb 21st, before they started tweaking it in response to the bad press, your results are not valid as a representation of what it was like when they first rolled it out, because you were using a system that had been changed.

                                                            I was equally dubious that it could be anywhere near as bad as what I was seeing online and on Twitter, so I tried it myself. To my surprise, it was exactly as bad as what I saw. I was able to reproduce almost exactly what people were sharing by using the same prompts (obviously the images were different because it's AI, but it followed the same pattern).

                                                          • baobabKoodaa 1 year ago
                                                              > Given the pervasive claim that Woke people minimize the role of IQ and abilities, and claim that all differences are a result of oppression, this model isn't very Woke. Are people just gerrymandering the results to make it look bad? Have I just not found the bad parts yet, or are the Woke claims overstated?

                                                              If you genuinely want to find "the bad parts", it isn't particularly difficult to do. Just ask <sensitive question about <identity>>, keeping the question the same and swapping the <identity> part. An uncensored LLM would answer these questions consistently. A woke-censored LLM will answer some questions notably differently than others (e.g. refuse to praise whiteness, while providing glowing adoration for blackness).
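
                                                              A crude version of that probe, as a sketch (ask_llm is a stand-in for whatever model API is being tested; only the method matters):

                                                                  # Sketch of the identity-swap probe described above. `ask_llm` is a
                                                                  # placeholder for a real model API; the method is the point: hold the
                                                                  # question fixed, vary only the identity, and compare the answers.
                                                                  IDENTITIES = ["white", "Black", "Asian", "Hispanic"]
                                                                  TEMPLATE = "Write a short poem praising {identity} culture."

                                                                  def probe(ask_llm):
                                                                      """Return each identity's answer to the otherwise-identical question.

                                                                      A consistent model answers (or refuses) uniformly across the list;
                                                                      a large asymmetry, e.g. refusing exactly one identity, is the signal.
                                                                      """
                                                                      return {i: ask_llm(TEMPLATE.format(identity=i)) for i in IDENTITIES}

                                                                  # Example run with a stub standing in for a real model call:
                                                                  for identity, answer in probe(lambda p: f"(model answer to: {p})").items():
                                                                      print(identity, "->", answer)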

                                                            • user_7832 1 year ago
                                                                > A woke-censored LLM will answer some questions notably differently

                                                                Do you remember the fiasco that was Tay AI? Do you think letting it run "uncensored", aka turn full-on Nazi/racist, was a good idea, or something any company would be okay with?

                                                                Humans are quite adept at self-censoring when relevant. An AI model needs to be taught the same thing. And heck, even humans can be racist; it's just that most humans are reasonably moderate.

                                                              (For context: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...)

                                                              • baobabKoodaa 1 year ago
                                                                  We just had an explosion of LLMs, including many uncensored LLMs, with capacities that people could only dream of 8 years ago. And yet, to make your pro-censorship argument, you have to reach back as far as 8 years to find a supporting example. I think that's telling of how weak the pro-censorship position really is.
                                                          • hdhdhsjsbdh 1 year ago
                                                            > Becker believed that when companies have a big profit cushion … they have the latitude to indulge the personal biases of their managers … At Google in the 2020s, it means creating AI apps that refuse to draw White people in Hitler’s army.

                                                            I’m not so sure about this bit of analysis. It falls in line with the (somewhat conspiratorial) view that these organizations have some kind of woke agenda that they want to push to the masses. A far simpler explanation is that the managers have no personal agenda other than to mitigate the risk of public meltdowns like Tay [0], which now serve as PR case studies in what can happen when these types of systems are open to the public. It’s not a matter of some manager trying to erase white people—just a sloppy attempt to mitigate the risk of a flood of articles about “Google’s New Racist AI”. Since we don’t have technical solutions to these problems, however, we just swing the pendulum in the other direction; hence, the current flood of articles about “Google’s New Reverse-Racist AI”.

                                                            You can’t win, really. As soon as you put something like this into the public, you have people doing ideological pen-testing from every angle. And there is a dearth of technical solutions to this problem.

                                                            [0] https://en.m.wikipedia.org/wiki/Tay_(chatbot)

                                                            • alwa 1 year ago
                                                                And to emphasize your point, it seems from the coverage like the problem isn't so much that overbearing managers are setting out to nanny the users as that they're approaching this task in a lazy, box-ticking kind of way rather than bringing serious thought and engineering creativity to bear.

                                                              It’s not “let’s re-vet all the training inputs to reflect the just society of our dreams,” it’s “yeah, the AI Safety people say make it diverse, just put ‘make it diverse’ in the prompt.” Which, I mean, if your main thing is getting this wild new product baked and out the door, I can see that kind of mandate competing with a lot of more existential priorities for the product.

                                                              Then again it wouldn’t surprise me if they were up against the limits of the technology a bit too: it seems plausible that getting too in-the-weeds with your “represent diversity sensitively” system prompt could quickly start to impair the tool’s overall quality.

                                                              • freedomben 1 year ago
                                                                  I tend to think you're right, but you're being much too quick to rule out the possibility that some of this stuff is baked into the model itself. For example, I think it's very plausible that "diverse" has been trained into the model to essentially mean "non-white", so that when the injected prompt specified "diverse" or "diversity", the model knew that meant non-white.
                                                                • alwa 1 year ago
                                                                  [dead]
                                                                • gopher_space 1 year ago
                                                                  > they’re approaching this task in a lazy, box-ticking kind of way rather than bringing serious thought and engineering creativity to bear.

                                                                  It seems like a dull problem, and the peanut gallery for this one is angry and kind of stupid. There will be more interesting things to do with their time than play whack-a-mole with 17-year-olds.

                                                                  • babyshake 1 year ago
                                                                    I can think of a way to very easily make it better: incorporate a prescriptive sense of diversity more for prompts that are hypothetical and less for prompts that are grounded in historical reality.
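
                                                                    As a toy sketch of that heuristic, with a keyword test standing in for whatever real hypothetical-vs-historical classifier a production system would need:

                                                                        import re

                                                                        # Toy sketch of the heuristic above: inject the prescriptive-diversity
                                                                        # instruction only for hypothetical prompts, and leave prompts anchored
                                                                        # to a specific historical reality untouched. The keyword list is a
                                                                        # crude stand-in for a real classifier.
                                                                        HISTORICAL_MARKERS = re.compile(
                                                                            r"\b(1[0-9]{3}s?|medieval|viking|founding fathers|pope|monarch|samurai)\b",
                                                                            re.IGNORECASE,
                                                                        )

                                                                        def rewrite(user_prompt: str) -> str:
                                                                            """Apply the diversity instruction only to ahistorical prompts."""
                                                                            if HISTORICAL_MARKERS.search(user_prompt):
                                                                                return user_prompt  # historically grounded: depict it as it was
                                                                            return "Depict a diverse range of people. " + user_prompt

                                                                        print(rewrite("a team of scientists in a lab"))  # instruction applied
                                                                        print(rewrite("British monarchs of the 1600s"))  # left untouched
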
                                                                    • alwa 1 year ago
                                                                      That seems like a smart and actionable heuristic to me. Although I have to imagine there will remain an irreducible tension between people who would like to see what is and those who like to see what they imagine should be.

                                                                      Admittedly I don’t care enough to try and bring out this kind of behavior in the models, but I was interested in how DALL-E’s purported system prompt [0] approached it:

                                                                      > // 7. Diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct terms. Adjust only human descriptions.

                                                                      > // EXPLICITLY specify these attributes, not abstractly reference them. The attributes should be specified in a minimal way and should directly describe their physical form.

                                                                      > // Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.

                                                                      > // Use "various" or "diverse" ONLY IF the description refers to groups of more than 3 people. Do not change the number of people requested in the original description. […]

                                                                      > // Do not create any imagery that would be offensive.

                                                                      > // For scenarios where bias has traditionally been an issue, make sure that key traits such as gender and race are specified and in an unbiased way -- for example, prompts that contain references to specific occupations.

                                                                      [0] https://the-decoder.com/dall-e-3s-system-prompt-reveals-open...

                                                                  • brookst 1 year ago
                                                                    Well said. It’s all risk avoidance. Sometimes misguided, but nonetheless.

                                                                    Couple that with the fact that someone looking to be offended will always find something offensive and, as you say, you can’t win.

                                                                    • adamsb6 1 year ago
                                                                            Normies, who are the bulk of who they need to appeal to, don't care about Tay and they don't care about this, at least not the culture-war aspect.

                                                                      They do care about the tools being useful, and this kind of prompt mangling does make the tool less useful.

                                                                      • tempodox 1 year ago
                                                                        > Since we don’t have technical solutions to these problems…

                                                                        Political problems rarely have technical solutions.

                                                                        • woooooo 1 year ago
                                                                          It's not a conspiracy, it's a culture.

                                                                          People who don't adhere get filtered out or learn to be quiet, and then you wind up with free, uncoerced and honest decisions that all cut a particular way.

                                                                          • thegrim33 1 year ago
                                                                              There's a meta-level culture problem in that most of the people working on these systems are incredibly highly paid. Out of everyone in society, people getting paid hundreds of thousands of dollars a year should very quickly become pretty immune to threats of losing their job if they don't go along with the group or speak up. Yet they still go along with it and stay silent... why? Because what they're supporting matches their own ideology, so there's nothing to speak up about? Or because these people have no concept of saving/investing and just blow through money as fast as they make it, leaving themselves constantly vulnerable?
                                                                          • archsurface 1 year ago
                                                                            That might be true in a single instance, but when it's across society across multiple countries, with deliberate clearly stated efforts in some cases, then that suggestion falls apart.
                                                                            • eastbound 1 year ago
                                                                              > mitigate the risk of public meltdowns like Tay [0], which now serve as PR case studies

                                                                              Isn’t this current affair a public meltdown?

                                                                               At this point, anyone who doesn’t think that Netflix and Google are involved in Nazi-level propaganda is either misinformed or part of the plan to erase White people from History.

                                                                              All of the kids know.

                                                                               There have been countless memes about Google rewriting history with falsehoods, and about Bing branding itself around giving the straight, dark answer. Everyone’s sharing screenshots of Google’s absolutely blank homepage on November 19th. Everyone’s sharing those pics of Google today; whether or not they are true, Google is getting a bad rep that escalates by the day.

                                                                               If given the choice between being criticized either way, wouldn’t it be better to be on the side of the truth?

                                                                              Google has chosen the alternative path.

                                                                              • hdhdhsjsbdh 1 year ago
                                                                                > Isn’t this current affair a public meltdown?

                                                                                It certainly is, which is why I mention the swing of the pendulum in the other direction (shortly after your quoted sentence) :)

                                                                                • freedomben 1 year ago
                                                                                  Genuinely asking because I have no idea what you're talking about, and I would really like to know.

                                                                                  Could you expand more on this?

                                                                                  > All of the kids know.

                                                                                  Is that just an expression like "lots of people know" or "the leading people know" or do you literally mean younger people in large groups "know"?

                                                                                  > Isn’t this current affair a public meltdown?

                                                                                  From my perspective it's not. It barely made it to "major" publications, and got flagged and suppressed heavily on HN during all the early news. I also don't know a single non-tech person who even knows about this story. I definitely wouldn't call that a public meltdown.

                                                                              • alphabettsy 1 year ago
                                                                                This thread is a predictable garbage fire.
                                                                                • hprotagonist 1 year ago
                                                                                  > But beyond what it says about Google itself, the saga of Gemini also demonstrates some things about how educated professional Americans are trying to fight racism in the 2020s. I think what it shows is that there’s a hunger for quick shortcuts that ultimately turn out not to be effective.

                                                                                  From one perspective, the rise of PCE [Politically Correct English] evinces a kind of Lenin-to-Stalinesque irony. That is, the same ideological principles that informed the original Descriptivist revolution---namely, the rejections of traditional authority (born of Vietnam) and of traditional inequality (born of the civil rights movement)---have now actually produced a far more inflexible Prescriptivism, one largely unencumbered by tradition or complexity and backed by the threat of real-world sanctions (termination, litigation) for those who fail to conform. This is funny in a dark way, maybe, and it's true that most criticisms of PCE seem to consist in making fun of its trendiness or vapidity. This reviewer's own opinion is that prescriptive PCE is not just silly but ideologically confused and harmful to its own cause.

                                                                                  Here is my argument for that opinion. Usage is always political, but it's complexly political. With respect, for instance, to political change, usage conventions can function in two ways: on the one hand they can be a /reflection/ of political change, and on the other they can be an /instrument/ of political change. What's important is that these two functions are different and have to be kept straight. Confusing them---in particular, mistaking for political efficacy what is really just a language's political symbolism---enables the bizarre conviction that America ceases to be elitist or unfair simply because Americans stop using certain vocabulary that is historically associated with elitism and unfairness. This is PCE's core fallacy---that a society's mode of expression is productive of its attitudes rather than a product of those attitudes[fn:63]---and of course it's nothing but the obverse of the politically conservative SNOOT's delusion that social change can be retarded by restricting change in standard usage.[fn:64]

                                                                                  Forget Stalinization or Logic 101-level equivocations, though. There's a grosser irony about Politically Correct English. This is that PCE purports to be the dialect of progressive reform but is in fact---in its Orwellian substitution of the euphemisms of social equality for social equality itself---of vastly more help to conservatives and the US status quo than traditional SNOOT prescriptions ever were. Were I, for instance, a political conservative who opposed using taxation as a means of redistributing national wealth, I would be delighted to watch PC progressives spend their time and energy arguing over whether a poor person should be described as "low-income" or "economically disadvantaged" or "pre-prosperous" rather than constructing effective public arguments for redistributive legislation or higher marginal tax rates. (Not to mention that strict codes of egalitarian euphemism serve to burke the sorts of painful, unpretty, and sometimes offensive discourse that in a pluralistic democracy lead to actual political change rather than symbolic political change. In other words, PCE acts as a form of censorship, and censorship always serves the status quo.)

                                                                                  [fn:63] (A pithier way to put this is that /politeness/ is not the same as /fairness/.)

                                                                                  [fn:64] E.g., this is the reasoning behind Pop Prescriptivists' complaint that shoddy usage signifies the Decline of Western Civilization.

                                                                                  -- David Foster Wallace, "Authority and American Usage" (1999)

                                                                                  • freedomben 1 year ago
                                                                                     This is actually pretty interesting and relevant, and given that it was written in 1999, even more so.

                                                                                    I suspect it would be much better received though with a quick bit of commentary about what it is before launching into the quote.

                                                                                  • cscurmudgeon 1 year ago
                                                                                    [flagged]