What Happens When AI-Generated Lies Are More Compelling Than the Truth?

78 points by the-mitr 1 month ago | 105 comments
  • HPsquared 1 month ago
    There was a brief window where photography and videos became widespread so events could be documented and evidenced. Generative AI is drawing that period to an end, and we're returning to "who said / posted this" and having to trust the source rather than the image/message itself. It's a big regression.
    • RcouF1uZ4gsC 1 month ago
      Remember the fairy photo hoax that fooled Lewis Carroll?

      There was never a time when a photo or video could be trusted as authentic without knowing the source and circumstances.

      • const_cast 1 month ago
        Sure, but scale matters. 99% of images being fake is a different situation than 1% being fake. We can't just ignore that in favor of a "this always happened" argument.

        Everything has always happened, so who cares? We need to go deeper than that. Many things that are perfectly a-okay today are only so because we do them on a small enough scale. Many perfectly fine things, if we scale them up, destroy the world. Literally.

        Even something as simple as pirating, which I support, would melt all world economies if everyone did it with everything.

        • j-bos 1 month ago
          It fooled Sir Arthur Conan Doyle, a motivated believer, not Lewis Carroll. People will believe what they want. Trust is the fundamental issue.
          • psychoslave 1 month ago
            Lewis Carroll was also fooled by it, when Churchill showed it to him. Abraham Lincoln, who was there at the moment it happened, confirmed that to me; I can show you the original email he sent me about it (bar the elements I'll have to hide due to top secret information being included in the rest of the message).
        • ThinkBeat 1 month ago
          It is more that it is becoming available to the masses. Doctoring photos in all manner of ways has been going on for decades.

          What is happening now will raise awareness of it and of course make it a problem several orders of magnitude bigger.

          I am sure there are large efforts ongoing to train AI to spot AI-generated photos, video, and written output.

          A system like printer tracking dots¹ may already be in widespread use. Who would take the enormous amount of time to figure out whether some such scheme is hiding somewhere in an LLM or related code?

          ¹ https://en.wikipedia.org/wiki/Printer_tracking_dots

          • const_cast 1 month ago
            Everything is a function of scale. IMO saying "this always happened" means nothing.

            Lying on a small scale is no big deal. Lying on a big scale burns the world down. Me pirating Super Mario 64 means nothing. Everyone pirating everything burns the economy down. Me shooting a glass Coke bottle is not noteworthy. Nuclear warheads threaten humanity's existence.

            Yes, AI fabrication is a huge problem that we have never experienced before, precisely because of how it can scale.

            • perching_aix 1 month ago
              A mate of mine told me that ChatGPT started injecting zero-width spaces into its output. I never fact-checked this, but even if it's not done now, I'm sure it's bound to happen. Same for other types of watermarks.
              • HPsquared 1 month ago
                Quite easy to remove something like that. Although ChatGPT has a writing style that's quite recognisable.
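                For what it's worth, stripping such a watermark really is a few lines of code. A minimal sketch in Python (the code-point list below is just the common zero-width characters; whether any model actually uses them as a watermark is an assumption):

```python
# Common invisible code points sometimes suggested as text watermarks.
ZERO_WIDTH = {
    "\u200b",  # zero width space
    "\u200c",  # zero width non-joiner
    "\u200d",  # zero width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero width no-break space / BOM
}

def strip_zero_width(text: str) -> str:
    # Drop every invisible code point from the watermark set.
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

print(strip_zero_width("wa\u200bter\u200cmark"))  # watermark
```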
            • neepi 1 month ago
              I await cryptographically signed photos coming out of cameras. Actually I’ve said that should be the case for the last 20 years.

              You should, potentially, be able to follow a chain of evidence back to an unprocessed raw image.
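              A minimal sketch of what such a chain of evidence could look like, assuming a hypothetical per-device key (HMAC stands in here for the public-key signature a real camera would use in secure hardware):

```python
import hashlib
import hmac

# Hypothetical device key; a real camera would hold a per-device
# private key in secure hardware and publish the public half.
DEVICE_KEY = b"per-device-secret"

def attest(image: bytes, parent_tag: bytes = b"") -> bytes:
    # The tag covers this version's bytes plus the tag of the version
    # it was derived from, chaining edits back to the raw capture.
    return hmac.new(DEVICE_KEY, parent_tag + image, hashlib.sha256).digest()

def verify(image: bytes, parent_tag: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(attest(image, parent_tag), tag)

raw_tag = attest(b"<raw sensor readout>")
crop_tag = attest(b"<cropped jpeg>", parent_tag=raw_tag)
```

Given the two tags, a viewer could demand the cropped JPEG, its parent tag, and ultimately the raw file the chain terminates in.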

              • perching_aix 1 month ago
                Unfortunately that doesn't prove too much. It all hinges on the cameras, which are just computers, operating in a verifiable fashion. Which they don't, as no consumer-available computer currently does, and I don't see this changing in the near future, for both technological and political reasons.

                I put a lot of effort into thinking about what could make photos and videos properly trustable [0], but short of performing continuous modulation on the light sources whose light is eventually captured by the cameras' sensors, there's no other way, and even with that I'm not sure how it would work with light actually doing the things it's supposed to (reflection, refraction, scatter, polarization, etc.). And that's not even mentioning how the primary light source everyone understandably relies on, the Sun, will never emit such a modulated signal.

                So what will happen instead, I'm sure, is people deciding that they should not let perfect be the enemy of good, and moving on with this stuff anyway. Except this decision will not result in the necessary disclaimers reaching consumers, and so will begin the age of sham proofs and sham disprovals, and further radicalization. Here's hoping I'm wrong, and that these efforts will receive appropriate disclaimers when implemented and rolled out.

                [0] related in a perhaps unexpected way: https://youtube.com/shorts/u-h5fHOcS88

                • dinfinity 1 month ago
                  Things should simply be signed by people and institutions. People who were there or did extensive research vouching for authenticity and accuracy of representation.

                  Signing in hardware is nice, but you then still need to trust the company making the hardware to make it tamper proof and not tampered with by them. Additionally, such devices would have to be locked down to disallow things like manual setting of the time and date. It's a rabbit hole with more and more inconveniences and fewer benefits as you go down it.

                  Better to just go for reputation, and webs and chains of trust in people based approaches.

                  • neepi 1 month ago
                    Yeah, I'm not even talking about that side of things. I'm talking about after it leaves the camera. When someone shows you something, you should be able to say "I want to see the closest thing to the original image". Humans evidently make a lot of changes to images to shape perception. Even a simple crop can change the story being told.

                    A good example: https://imgur.com/iqtFHHg

                    • 1 month ago
                      • ramblerman 1 month ago
                        “I thought about it and couldn’t come up with anything so this is a dead end.”

                        What a load of nonsense. A little bit of humility and a basic understanding of history should quickly make you realize that.

                        OP's point is far more interesting and deserves more discussion.

                      • bayindirh 1 month ago
                        I think past Nikon and Canon cameras signed photos by default, and you could get the verification software if you were a police department or similar.

                        Both manufacturers' keys got extracted from their cameras, rendering the feature moot. Now a more robust iteration is probably coming, possibly based on secure enclaves.

                        I'd love to have the feature and use it to verify that my images are not corrupted on disk.

                        • neepi 1 month ago
                          Used to work on archival. You want to store your images in TIFF and use FEC if you really care; that withstands significant bitrot. Signing will just tell you that the image is broken, but won't let you recover it. You can do fine with SHA-256 and a 3-2-1 backup.
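                          The detection half of that scheme is a short script; a minimal sketch using only Python's standard library (note this only detects bitrot; recovery still needs FEC, parity, or another copy):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream in 1 MiB chunks so large TIFFs need not fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Record digests once at ingest, re-run periodically, and compare:
# any mismatch flags a corrupted file to restore from a backup copy.
```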
                        • thih9 1 month ago
                          • lnrd 1 month ago
                            What prevents anyone from taking a signed picture of a generated/altered picture? You just need to frame it perfectly and make sure there are no reflections that could tell it's a picture of a picture rather than of the real world, which is very doable with a professional camera. Any details that could give it away would disappear just by lowering the resolution, which can be done on any camera.
                            • grues-dinner 1 month ago
                              With a bit (OK quite a lot) of fiddling, you could probably remove the CCD and feed the analog data into the controller, unless that's also got a crypto system in it.

                              Presumably if you were discovered you would then "burn" the device, as its local key would then be known to be used by bad actors, but now you need to check all photos against a blacklist. Which also means if you buy a second-hand device, you might be buying a device with "untrusted" output.

                            • k2enemy 1 month ago
                              There have been several initiatives in this direction. Here is one of the latest: https://contentauthenticity.org
                              • holowoodman 1 month ago
                                There have also been calls for a mechanism like this to prevent doctored photos of controversial newsworthy events from being spread by news agencies. But AFAIK the only thing that came of it was "we only pay for camera JPEGs; the only allowed changes are brightness, contrast, color".

                                Edit: sister comments have links.

                                • grues-dinner 1 month ago
                                  That fixes the problem of content being manipulated and then the original being discounted as fake when challenged.

                                  It doesn't do a whole lot for something entirely fictional, unless it becomes so ubiquitous that anything unsigned is assumed to be fake rather than just made on a "normal" device. And even if you did manage to sign every photo, who's managing those keys? It's the difference between TLS telling you what you see is what the server sent and trusting the server to send the truth in the first place.

                                  • rightbyte 1 month ago
                                    Wouldn't compression make the signature invalid? Feels kinda easy to fake anyways.
                                    • hoseyor 1 month ago
                                      I will assume you simply mean cryptographically signed as evidence of having been taken by a camera?

                                      You do realize that would still not provide perfect proof that what was recorded by the camera was real, right? It does seem like an obsolete idea you may not have fully reconsidered in a while.

                                      But considering that same old idea that dates from prior to the current state of things, I would also not be surprised if you imagined clandestinely including all kinds of other things in this cryptographic signature, like location, time and date, etc.; all of which can also be spoofed, and which is a tyrannical system's wet dream.

                                      You don't think that would be immediately abused, as it was in other similar ways, like the on-device image scanning that was injected via counter-CSAM appeals to save the children… of course?

                                    • thegreatpeter 1 month ago
                                      This has been a problem with the media for years as well.

                                      You’re forced to trust the source and “read between the lines” or you’re reading something politically motivated.

                                      Nothing new. I hope folks start trusting the source more.

                                      • Vilian 1 month ago
                                        It would be very interesting to require generative AI companies to log the images they create for that purpose. It isn't gonna eliminate self-hosted alternatives, but it could help shoot down fake evidence faster.
                                        • JimDabell 1 month ago
                                          This rules out any form of self-hosted generative AI. It’s not going to work. It needs to be the other way around; we need to prove authenticity.
                                          • A_D_E_P_T 1 month ago
                                            Logging them would be rather cost prohibitive, but images can be hashed + (invisibly) watermarked and video can be hashed frame by frame, in such a way that each frame authenticates the one before it. Surely there's a way to durably mark generated content.
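                                            The frame-by-frame idea sketches out in a few lines; a minimal Python version (the genesis value and byte framing here are assumptions, not any standard scheme):

```python
import hashlib

def chain_frames(frames):
    # Each link hashes the previous link plus the current frame, so
    # editing, dropping, or reordering any frame breaks every later link.
    link = hashlib.sha256(b"genesis").digest()
    links = []
    for frame in frames:
        link = hashlib.sha256(link + frame).digest()
        links.append(link)
    return links

original = chain_frames([b"frame0", b"frame1", b"frame2"])
tampered = chain_frames([b"frame0", b"edited", b"frame2"])
# Links match for frame 0, then diverge from the edited frame onward.
```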
                                          • jdiff 1 month ago
                                            I don't think this would solve the problem, unfortunately. AI can certainly be coerced into reproducing an existing image, leaving plenty of room for plausible deniability.
                                          • rightbyte 1 month ago
                                            Videos and photos have been faked for a long time. Nothing has changed in that regard except that the effort required has decreased somewhat.
                                            • SupremumLimit 1 month ago
                                              Your comment follows two persistent HN tropes: (1) ignoring the article, which deals precisely with why the cost of production matters, and (2) steadfastly refusing to recognise that quantity has a quality of its own - in this case a monumental reduction in production cost clearly leads to a tectonic reshaping of the information landscape.
                                              • brookst 1 month ago
                                                …but enough about the printing press.
                                                • rightbyte 1 month ago
                                                  I did read the article and it does deal with fake photos being an old thing and name drops e.g. Stalin's former companions being removed one by one from photos.

                                                  "There was a brief window where photography and videos became widespread so events could be documented and evidenced."

                                                  Photos and video have never been evidence in themselves. You have needed to trust the photographer or publisher too, ever since the camera was invented.

                                                  Moving the cost from big $$$ level to my neighbour level makes scams more personalized, sure.

                                                • jdiff 1 month ago
                                                  "Somewhat" is doing an awful lot of heavy lifting. Fakery has gone from handcrafted in small batches (or at worst, produced through much effort in sweatshops) to fully automated and mass produced.
                                                  • rightbyte 1 month ago
                                                    Yeah, well, "a lot" might have been a better wording.

                                                    However, the real breaking point in my view was when shills and semi-automated bots became so prevalent that they could fool people into believing some consensus had changed. Faking photos doesn't add much to that in my view.

                                                  • piva00 1 month ago
                                                    I bore myself repeating it, but I always think it's worth mentioning: scale and degree matter in almost every issue; otherwise we can reduce a lot of issues to "it has always happened before".

                                                    Localised fires are common in nature; a massive wildfire is just fire at a different scale and degree. Lunatics raving about conspiracies were very common in public squares, in front of metro stations, anywhere with a large-ish flow of people; now they are very common on social media, reaching millions in an instant. Different scale and degree.

                                                    Sheer scale and degree alone can turn an issue into a completely different issue. Decreasing the effort required to fake a video, to the point where a naïve/layperson cannot distinguish it from a real one, is a massive difference. Before, you needed to be technically proficient with a number of tools, put in a lot of work, and get a somewhat satisfying result to fake something; now you just throw out a magical incantation of words and it spits out a video you can deceive someone with. It's a completely different issue.

                                                    • EGreg 1 month ago
                                                      "Nothing has changed in that regard" really, nothing?

                                                      Does anyone other than me notice this common tendency on HN:

                                                      1. Blockchain use case mentioned? Someone must say "blockchain doesn't solve any problems" no matter what, always ignoring any actual substance of what's being done until pressed.

                                                      2. AI issue mentioned? Someone must say "nothing at all has changed, this problem has always existed" downplaying the vast leaps forward in scale, and quality of output, ignoring any details until pressed.

                                                      It's like when people feel the need to preface "there is nothing wrong with capitalism, but" before critiquing capitalism. You will not criticize the profit.

                                                      It's not really a shibboleth. What's the name for this type of thing, groupthink?

                                                      • cb321 1 month ago
                                                        I would just call it a "pattern", but if you want to be more specific Re your 1/2, perhaps a "pattern of over-simplification". Over-simplification is, of course, basically human nature, not specific to HN, and something "scientists" of all stripes fight against in themselves. So, there may be a better one-worder.

                                                        EDIT: while oversimplification is essentially always a problem, nuance and persuasion are usually at odds. So, it's especially noticeable in contexts where people are trying to persuade each other. The best parts of HN are not that, but rather where they try to inform each other.

                                                        • 1 month ago
                                                        • raincole 1 month ago
                                                          People had been able to send messages to each other for a very long time. However, the internet still changed a lot of things.
                                                        • thomasahle 1 month ago
                                                          Things like the government MAHA report, full of fake references to papers that don't exist, probably didn't happen as blatantly before AI, even if people could easily have lied back then too. The ease with which AI lies (or "hallucinates") makes a qualitative difference.
                                                          • tokai 1 month ago
                                                            Fake citations have always been common. Before, they just weren't straight-up fabrications: the cited papers usually exist, they just don't contain what the referring paper claims.

                                                            At least made-up citations are quick and easy to denounce.

                                                          • notTooFarGone 1 month ago
                                                            Only the price changed and that is everything.
                                                            • zettapwn 1 month ago
                                                              The open question is who’s worse: one major tyrant three thousand miles away or three thousand minor tyrants a mile away. If we consent to live under capitalism, then we’re destined to live in a world of lies. We always have; it’s all we’ve ever known.

                                                              What we don’t know is whether we’ll be worse or better off when the technology of forgery is available to random broke assholes as easily as it is to governments and companies. More slop seems bad, but if our immunity against bullshit improves, people might redevelop critical thinking skills and capitalism could end.

                                                          • palmotea 1 month ago
                                                            > Generative AI is drawing that period to an end, and we're returning to "who said / posted this" and having to trust the source rather than the image/message itself. It's a big regression.

                                                            Which just goes to show that one of the core tenets of techno-optimism is a lie.

                                                            The Amish (as I understand them) actually have the right idea: instead of wantonly adopting every technology that comes along, they assess how each one affects their values and culture, and pick the ones that help and reject the ones that harm.

                                                            • AndrewKemendo 1 month ago
                                                              Which implies that they have a foundational epistemological, teleological and “coherent” philosophy

                                                              Not something that can be said of most people. Worse, the number of affinity groups with long term coherence collapses into niche religions and regional cults.

                                                              There’s no free lunch

                                                              If you want structural stability then you're gonna have to give up individuality for the group vector; if you want individuality you're not going to be able to take advantage of group benefits.

                                                              Humans are never going to be able to figure out this balance because we don’t have any kind of foundational coherent Universal epistemological grounding that can be universally accepted.

                                                              Good luck trying to get the world to even agree on the age of the earth

                                                          • IshKebab 1 month ago
                                                            Interesting question, but this article completely failed to answer it and really went off the rails halfway through.

                                                            Ars answered this much much better:

                                                            https://arstechnica.com/ai/2025/05/ai-video-just-took-a-star...

                                                            > As these tools become more powerful and affordable, skepticism in media will grow. But the question isn't whether we can trust what we see and hear. It's whether we can trust who's showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

                                                            • dsign 1 month ago
                                                              One thing is to see Sam Altman peddling his wares, another altogether is to hear politicians and big corp executives treating AI as if it were something that should be adopted post-haste in the name of progress. I don't get it.
                                                              • Applejinx 1 month ago
                                                                One point to bear in mind is, lies have proven more effective in the ABSENCE of evidence. I don't know how many times I've run across the idea of 'guess what, Portland (or New York City, or whatever) is burned to the ground because of the enemies!'

                                                                This gets believed not because there's evidence, but because it's making a statement about enemies that is believed.

                                                                So for whoever finds lies compelling, I don't think it's about evidence or lack of evidence. It's about why they want to believe in those enemies, and evidence just gets in the way.

                                                                • EGreg 1 month ago
                                                                  Well, when the evidence can be faked, it becomes harder to claim it.

                                                                  You've seen the "post-truth" attitudes already from the right, after the "fake news" of 2016 made them regard everything from climate change to vaccine data as faked, data with an agenda. It's interesting because for decades or centuries the right wing was usually the one that believed in our existing institutions, and it was the left that was counter-cultural and anti-authoritarian.

                                                                • Elaris 1 month ago
                                                                  This got me thinking. Sometimes it feels like a story doesn't have to be true: as long as it feels right, people believe it. And if it spreads fast and sounds good, it becomes "truth" for many. That's kind of scary. Now that anyone can easily make something look real and convincing, it's harder to tell what's real anymore. Maybe the best thing we can do is slow down a bit, ask more questions, and not trust something just because it fits what we already believe.
                                                                  • alpaca128 1 month ago
                                                                    True, people tend to believe things more if they are stated more confidently. That's basically how many things from scams to conspiracy theories and even cults work. Now with LLMs you have machines that can produce thousands of words per hour in flawless grammar and sounding as sophisticated and confident as you want.

                                                                    The famous quote "A lie can travel halfway around the world while the truth is putting on its shoes" is older than the internet, so this asymmetry was already bad enough back then and whoever coined the quote couldn't have imagined how much farther it would shift.

                                                                  • psychoslave 1 month ago
                                                                    >Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media.

                                                                    Hmm, that's not totally new. I mean, anyone taking the time to learn how mass media work should already be acquainted with the fact that anything you get from them is either bare propaganda or some eye-catching trigger void of any information, there to attract an audience.

                                                                    There is no way an independent professional can make a living while staying in integrity with the will to provide relevant feedback, without falling into this or that truism. The audience is already captured by other giant dumping schemes.

                                                                    Think "manufacture of consent".

                                                                    So the big change that might occur here is in the distribution of how many people believe what gets thrown at their faces.

                                                                    Also, the only thing you could previously take as reliable information in a publication was that the utterer of the sentence knew the words emitted, or at least had the ability to utter their form. Now you still don't know whether the utterer made any sense of the sentence spoken, but you don't even know whether the person could actually utter it, let alone whether they were ever aware of the associated concepts and notions.

                                                                    • metalman 1 month ago
                                                                      "Lies" are always more compelling than the truth. Truth = what is,

                                                                      vs. a whole wide range of "wouldn't it be nice… if", "can't we just…", and the massive background of myth, legend, fantasy, dreaming, etc. So into this we have created a mega-capable, machine-rendered virtual sur-reality… much like the ancient myths and legends where Odysseus sits at table at a fantastic feast… nothing is as it seems.

                                                                      • 0xbadcafebee 1 month ago
                                                                        When something new is happening (or new information comes to light), and that thing has the potential to do harm, people come out of the woodwork to make doomsday predictions. Usually the doomsday predictions are wrong. A lot of these predictions involve technologies we all take for granted today.

                                                                        Like the telephone. People were terrified when they first heard about it. How will I know who's really on the other end? Won't it ruin our lives, making it impossible to leave the house, because people will be calling at all hours? Will it electrocute me? Will it burn down my house? Will evil spirits be attracted to it, and seep out of the receiver? (that was a real concern)

                                                                        It turns out we just adapt to technology and then forget we were ever concerned. Sometimes that's not a great thing... but it doesn't bring about doomsday.

                                                                        • altcognito 1 month ago
                                                                          New technology also usually brings new and more challenging complications. Nuclear energy, combustion engines, electricity, the internet all came with huge new problems that we are still dealing with today. Some of the problems are so severe they threaten human survivability.

                                                                          Even your example contains an unsolved and serious problem: we still don't know who is on the other end of the phone.

                                                                          • nthingtohide 1 month ago
                                                                            "I foresee the day when AI becomes so good at making deepfakes that the people who believed fake news was true will no longer think their fake news is true, because they'll think their fake news was faked by AI." - Neil deGrasse Tyson
                                                                          • keiferski 1 month ago
                                                                            Can someone tell me why this idea isn’t workable and wouldn’t solve most deepfake issues?

                                                                            All camera and phone manufacturers embed a code in each photo / video they produce.

                                                                            All social media channels prioritize content that has these codes, and either block or de-prioritize content without them.

                                                                            Result: the internet is filled with a vast amount of AI generated nonsense, but it’s mostly not treated as anything but entertainment. Any real content can be traced back to physical cameras.

                                                                            The main issue I see is if the validation code is hacked at the camera level. But that is at least as preventable as say, preventing printers from counterfeiting money.
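
                                                                            The signing scheme described here can be sketched in a few lines. This is a minimal illustration, not a real design: `DEVICE_KEY`, `sign_capture`, and `verify_capture` are hypothetical names, and real provenance systems (e.g. the C2PA standard) use per-device asymmetric keys, so verifiers never hold the signing secret; a symmetric HMAC stands in here to keep the sketch stdlib-only.

```python
import hashlib
import hmac
import os

# Hypothetical per-device secret, provisioned by the manufacturer at the
# factory. A real scheme would use an asymmetric key pair instead, so that
# platforms verify with a public key and never hold the signing secret.
DEVICE_KEY = os.urandom(32)

def sign_capture(image_bytes: bytes) -> str:
    """Camera side: tag the raw bytes at the moment of capture."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """Platform side: accept only content whose tag matches the bytes."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

photo = b"...raw sensor output..."
tag = sign_capture(photo)

assert verify_capture(photo, tag)                 # untouched capture passes
assert not verify_capture(photo + b"edit", tag)   # any alteration fails
```

                                                                            Note that the signature only proves the bytes came from a signing device, which is exactly why the screen-photograph attack discussed below works: a faithful capture of a fake scene still verifies.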

                                                                            • lnrd 1 month ago
                                                                              So all I would have to do to make a "legitimate" fake picture is to generate it, print it, take a signed picture of the print with a camera and then upload it on the web?

                                                                              With the right setup I could probably just take a picture of the screen directly, making it even easier (and enabling it for videos too).

                                                                              • keiferski 1 month ago
                                                                                Presumably the camera would have to incorporate GPS or other systems to ensure that you aren’t just taking a photo of a screen.

                                                                                But yes that does add a wrinkle.

                                                                              • _Algernon_ 1 month ago
                                                                                For one it puts a lot of trust into those corporations, which they have not earned. Corporations exist to maximize profit. They already sell advertising to hostile foreign actors. Why wouldn't they do the same with these codes for enough payment?

                                                                                Also it gives them a lot of power to frame anyone for anything. How do you defend yourself against a cryptographically signed but fabricated "proof" that ties you to a crime? At least when no evidence is trustworthy, the courts have to take that possibility into account. That's not the case when such forgery is presumed to happen only in rare cases.

                                                                                • keiferski 1 month ago
                                                                                  But this trust is already given to corporations that make printers to prevent counterfeiters. By your logic they would already have sold out to hostile actors.

                                                                                  To your second point: it’s not that my method guarantees absolutely flawless photos. It just makes it more likely to be secure.

                                                                                • loudmax 1 month ago
                                                                                  I think placing trust in technology is misguided. Bad actors will always figure out a way to abuse the machines.

                                                                                  What we can do is place trust in particular institutions, and use technology to verify authenticity. Not verify that what the institution is saying is true, but verify that they really are standing by this claim.

                                                                                  This is challenging because no institution is going to be 100% trustworthy all of the time in perpetuity. But you can make reasonable assessments about which institutions appear more credible. Then it's a matter of policing your own biases.

                                                                                  • keiferski 1 month ago
                                                                                    Sure, but I don't see how that could be implemented in the world in a way other than how I outlined, considering that photos/videos can be created by anyone. There is no central media institution that can control all sharing of content.

                                                                                    It would seem to me that the institution to place your trust in would be the one that implements and verifies the coding system I discussed.

                                                                                    • loudmax 1 month ago
                                                                                      When a news organization hires a journalist to file a report, they put their reputation on the line in the hands of the journalist.

                                                                                      As a consumer of news, you put your trust in the institution to have a reasonable vetting process, and also a process to retract a story if it's later shown to be false.

                                                                                      None of this is completely foolproof. It relies on institutions taking a long term view, and people working out which individuals and institutions are worthy of their trust. This isn't like the blockchain, where you have mathematical proof of veracity that's as strong as your encryption algorithm. I don't see how that level of proof is achievable in the real world.

                                                                                • heresie-dabord 1 month ago
                                                                                  From TFA:

                                                                                      Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square. The reason extraordinarily strange conspiracy theories have spread so widely in recent years may have less to do with the nature of credulity than with the nature of faith. 
                                                                                  
                                                                                  The reason why strange and even outright deranged notions have spread so widely is that they have been monetised. It is a Gibberish Economy.
                                                                                  • altcognito 1 month ago
                                                                                    Capitalism found the least amount of energy to produce a viable product.
                                                                                • indest 1 month ago
                                                                                  lies have always been more compelling.
                                                                                  • drweevil 1 month ago
                                                                                    What is a lie, and what is the truth? These are age-old questions, not some recent phenomenon. The Spanish-American War was at least in part precipitated by the infamous "yellow journalism" of the time. Propaganda and disinformation played a large role in the events leading to and including WWII. And what is The Truth? Using photography as an example, lies can easily be told by omission, even without any dark-room chicanery. What is the photographer's subject? What is off-frame? Which photographs did the editor select for publication? What story is not being told?

                                                                                    If anything, the idea that one can take information as "true" based on trust alone (what does the photograph show, what did the New York Times publish) seems to be a recent aberration. AI will be doing us a favor if it destroys this notion, and encourages people to be more skeptical and to sharpen their critical thinking skills. Forget about what is "true" or "false." Information may be believed on a provisional basis. But it must "make sense" (a whole subject in itself), and it must be corroborated. If not, it is not actionable. There simply is no silver bullet, AI or no AI. Iain M. Banks's Culture series provides an interesting treatment of this subject, if anyone is interested.

                                                                                    • logic_node 1 month ago
                                                                                      It is unsettling how AI can create lies that are more persuasive than the truth. This truly challenges our ability to differentiate fact from fiction in the digital age.
                                                                                      • IanCal 1 month ago
                                                                                        > OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

                                                                                        Then he's lying or a complete moron.

                                                                                        People have been able to fake things for ages: you can fabricate any text just by typing it, the same as you can pass on any rumour by speaking it.

                                                                                        People are fundamentally aware of this. Nobody is confused about whether or not you can make up "X said Y".

                                                                                        *AND YET* people fall for this stuff all the time. Humans are bad at this and the ways in which we are bad at this is extensively documented.

                                                                                        The idea that once you can very quickly and cheaply generate fake images, people will somehow treat them with drastically more scepticism than text or speech is insane.

                                                                                        Frankly the side I see as more likely is what's in the article: just as real reporting is dismissed as fake news, legitimate images will be decried as AI if they don't fit someone's narrative. It's a super easy get-out clause mentally. We see this now with people commenting that someone else's comment simply cannot be from a real person because they used the word "delve", or structured things, or had an em dash. Hank Green has a video I can't find now where people looked at a SpaceX explosion and said it was fake and AI and CGI, because it was filmed well with a drone, so it looked just like fake content.

                                                                                        • ImHereToVote 1 month ago
                                                                                          Lies are already more compelling than the truth. The difference is whether you like rebel lies, or establishment lies.
                                                                                          • titouanch 1 month ago
                                                                                            Gen AI could have us headed towards a Cartesian crisis.
                                                                                            • intended 1 month ago
                                                                                              So this is an actual problem I am considering and have an approach to. It's essentially about our inability to know:

                                                                                              1) if a piece of content is a fact or not.

                                                                                              2) if the person you are acting with is a human or a bot.

                                                                                              I think it's easier if you take the most nihilistic view possible, as opposed to the optimistic or average case:

                                                                                              1) Everything is content. Information/Facts are simply a privileged version of content.

                                                                                              2) Assume all participants are bots.

                                                                                              The benefit is that we reduce the total amount of issues we are dealing with. We don’t focus on the variants of content being shared, or conversation partners, but on the invariant processes, rules and norms we agree upon.

                                                                                              So we may not be able to agree on facts, but what we can agree on is that the norms or process were followed.

                                                                                              The alternative, holding on to the assumption that people are people and that the inputs are factual, was possible to an extent in an earlier era. However, the issue is that at this juncture our easy BS filters are insufficient, and verification is increasingly computationally, economically, and energetically taxing.

                                                                                              I’m sure others have had better ideas, but this is the distance I have been able to travel and the journey I can articulate.

                                                                                              Side note

                                                                                              There are a few Harvard professors who have written about misinformation, pointing out that the total amount of misinfo consumed isn't that high. Essentially: demand for misinformation is limited. I find that this is true, but sheer quantity isn't the problem with misinfo; it's amplification by trusted sources.

                                                                                              What GenAI does is different, it does make it easier to make more content, but it also makes it easier to make better quality content.

                                                                                              Today it’s not an issue of the quantity of misinformation going up, it’s an issue of our processes to figure out BS getting fooled.

                                                                                              This is all putting pressure on fact finding processes, and largely making facts expensive information products - compared to “content” that looks good enough.

                                                                                              • bluebarbet 1 month ago
                                                                                                The article raised interesting questions but suggested no answers.

                                                                                                To the extent there's a technical fix to this problem of mass gaslighting, surely it's cryptography.

                                                                                                Specifically, the domain name system and TLS certificates, functioning on the web-of-trust principle. It's already up and running. It's good enough to lock down money, so it should be enough to suggest whether a video is legit.

                                                                                                We decide which entities are trustworthy (say: reuters.com, cbc.ca), they vouch for the veracity of all their content, and the rest we assume is fake slop. Done.
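
                                                                                                The vouching model sketched here needs nothing more than content hashes plus a trusted channel. As a minimal illustration (with `PUBLISHER_MANIFEST`, `publish`, and `is_vouched_for` as hypothetical names): a real system would fetch the manifest over TLS from the publisher's own domain, which is where the existing DNS and certificate machinery does its work; a local set stands in for that fetch below.

```python
import hashlib

# Hypothetical manifest a newsroom would serve from its TLS-protected
# domain. In this sketch a local set stands in for the network fetch;
# the trust comes from *where* the manifest is served, not from the set.
PUBLISHER_MANIFEST = set()

def publish(video_bytes: bytes) -> str:
    """Publisher side: vouch for a video by listing its SHA-256 digest."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    PUBLISHER_MANIFEST.add(digest)
    return digest

def is_vouched_for(video_bytes: bytes) -> bool:
    """Client side: trust a video iff its digest appears in a manifest
    fetched from a domain we already decided to trust."""
    return hashlib.sha256(video_bytes).hexdigest() in PUBLISHER_MANIFEST

original = b"raw newsroom footage"
publish(original)

assert is_vouched_for(original)                        # vouched-for bytes pass
assert not is_vouched_for(original + b" spliced frame")  # any edit fails
```

                                                                                                The design choice worth noting: the question shifts from "is this video real?", which no algorithm can answer, to "does an entity we trust stand behind these exact bytes?", which cryptography answers cheaply.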

                                                                                                • JimDabell 1 month ago
                                                                                                  The good news is that AI has been shown to be effective at debunking things too:

                                                                                                  > Durably reducing conspiracy beliefs through dialogues with AI

                                                                                                  > Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.

                                                                                                  https://pubmed.ncbi.nlm.nih.gov/39264999/

                                                                                                  A huge part of the problem with disinformation on the Internet is that it takes far more work to debunk a lie than it does to spread one. AI seems to be an opportunity to at least level the playing field. It’s always been easy to spread lies online. Now maybe it will be easy to catch and correct them.

                                                                                                  • ThinkBeat 1 month ago
                                                                                                    Then the AI in question is qualified to become a politician. With Congress these days it can't get much worse.
                                                                                                    • ToucanLoucan 1 month ago
                                                                                                      > The concern is valid. But there’s a deeper worry, one that involves the enlargement not of our gullibility but of our cynicism. OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

                                                                                                      > Some experts believe the opposite is true: The risks will grow as we acclimate ourselves to the presence of deepfakes. Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media. We may, in the words of the mathematics professor and deepfake authority Noah Giansiracusa, start to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence to one where our bias is to take nothing as evidence.

                                                                                                      It is journalistic malpractice that these viewpoints are presented as though the former has anything to actually say. Of course Altman says it's no big deal, he's selling the fucking things. He is not an engineer, he is not a sociologist, he is not an expert at anything except some vague notion of businessness. Why is his opinion next to an expert's, even setting aside his flagrant and massive bias in the discussion at hand!?

                                                                                                      "The owner of the orphan crushing machine says it'll be fine once we adjust to the sound of the orphans being crushed."

                                                                                                      > “Every expert I spoke with,” reports an Atlantic writer, “said it’s a matter of when, not if, we reach a deepfake inflection point, after which forged videos and audio spreading false information will flood the internet.”

                                                                                                      Depending where you go this is already true. Facebook is absolutely saturated in the shit. I have to constantly mute accounts and "show less like" on BlueSky posts because it's just AI generated allegedly attractive women (I personally prefer the ones that look... well, human, but that's just me). Every online art community either is trying to remove the AI garbage or has just given up and categorized it, asking users uploading it to please tag it so their other users who don't want to see it can mute it, and of course they don't because AI people are lazy.

                                                                                                      Also I'd be remiss to not point out that this is, yet again, something I and many many others predicted back when this shit started getting going properly, and waddaya know.

                                                                                                      That said, to be honest, I'm not that worried about the political angle. The politics of fakery, deep or otherwise, has always meant it's highly believable and consumable for the audience it's intended for because it's basically an amped-up version of political cartoons. Conservatives don't need their "Obama is destroying America!" images to be photorealistic to believe them, they just need them to stroke their confirmation bias. They're fine believing it even if it's flagrantly fake.

                                                                                                      • Ukv 1 month ago
                                                                                                        > It is journalistic malpractice that these viewpoints are presented as [...]

                                                                                                        Seems fine to me when it's explicitly stated to be the viewpoint of the OpenAI CEO, and then countered by an expert opinion. It's already apparent that MRDA[0].

                                                                                                        [0]: https://en.wikipedia.org/wiki/Well_he_would,_wouldn%27t_he%3...

                                                                                                        • HPsquared 1 month ago
                                                                                                          It's the same idea as "lies spread faster than truth". Lies are often crafted to be especially juicy and salacious. Gossip has always been a problem; GenAI just extends this problem to other media.
                                                                                                          • EGreg 1 month ago
                                                                                            What is it that makes some people on HN always pop up to respond to any issue with AI with a version of "it was always this way, the problem always existed, AI does nothing fundamentally new"? When it's clear that huge leaps in scale, coordination, and output quality accessible to anyone are exactly what produce "new" use cases. Not to mention that swarms of agents are a fundamentally new thing that doesn't require humans in the loop at all.

                                                                                            Notice that this phenomenon didn't happen as much on HN for other technologies, e.g. when the iPhone came out, very few people said "well, this is nothing new, computers existed for a long time, this is just miniaturizing it and unplugging it from the wall."

                                                                                                            • supriyo-biswas 1 month ago
                                                                                              > when the iPhone came out, very few people said "well, this is nothing new, computers existed for a long time, this is just miniaturizing it and unplugging it from the wall."

                                                                                                              This website is, of course, notorious for its Dropbox comment, so regrettably the viewpoint you speak of is rather common.

                                                                                                            • captainbland 1 month ago
                                                                                                              I think the other issue is that those lies can be pumped out at inhuman speeds, and specifically targeted at particular audiences automatically using existing online audience marketing tools. So you can end up in a situation where the lie not only spreads quickly, but different audiences are receiving specialised versions of that lie which makes it particularly compelling to them, generated by AI tools, and totally responsive to real world events and narratives at minimal cost (compared to hiring humans to do the same job) - and this might happen at a really fine grained level.
                                                                                                              • drweevil 1 month ago
                                                                                                                “Falsehood flies, and the Truth comes limping after it.” -- Jonathan Swift