Is ChatGPT autocomplete bad UX/UI?

68 points by honzabe 4 months ago | 56 comments
  • Timwi 4 months ago
    I think it's the difference between working with ChatGPT versus playing with ChatGPT. The article only describes the “work” use-case: you're trying to use it to get stuff done, so your constraints are fixed going in. But I think the feature targets the “play” use-case of people who just want to idly explore what it can do and how it responds to various inputs. There are no pre-existing constraints.

    That said, I still agree with the author that it's useless. The autocompletion suggestions are simplistic, banal, and offer no real variety, so their potential for exploration is rapidly exhausted.

    • hbosch 4 months ago
      It's not "useless". The usefulness of the autocomplete options isn't about whether or not you click them; it's about education. Most laypeople have no idea what LLMs can do... this feature delicately and non-intrusively shows you what is possible with the product.

      This is a huge issue with people using novel interfaces -- the "blank page" problem -- users simply don't know what to do with the empty box. Which is why with AI tools, the most common inputs are generic search terms especially with first time users.

      But if I start typing "Led Zepp" into the box, and I see an autocomplete for... "Led Zeppelin if it was techno" or "Led Zeppelin style music with djembe beats" now I have a clue as to what kinds of things I can put here, even if I don't care about those specific things.

      • OJFord 4 months ago
        > users simply don't know what to do with the empty box. Which is why with AI tools, the most common inputs are generic search terms especially with first time users.

        I use it like that a lot, what's wrong with that?

        • diggan 4 months ago
          > for... "Led Zeppelin if it was techno" or "Led Zeppelin style music with djembe beats" now

          Can ChatGPT generate music now? What happens if you actually execute those?

          • nlarew 4 months ago
            ChatGPT is available for free - you can find out for yourself!

            It won't generate music for you (yet). Depending on your phrasing and how outlandish the request is, it might give you suggestions of actual bands to go check out or it may just describe the characteristics of what that would sound like.

            When I asked "Led Zeppelin style music with djembe beats", it gave me a high level description of the guitar/drums/vocal sound + some Led Zeppelin songs that may adapt well to the fusion.

        • honzabe 4 months ago
          > I think it's the difference between working with ChatGPT versus playing with ChatGPT.

          I think I get what you mean, and I agree, kind of. However, even my "playful" mode is a lot more specific - I used to use ChatGPT a lot just for fun, but I still wanted to ask somewhat specific questions. I'll give you an example:

          "Can you recommend a realistic movie that takes place in the Middle Ages? I don't mean movies where characters played by actors with whitened teeth act the way modern people act - I was hoping for something well-researched and realistically portrayed."

          ChatGPT autocomplete suggestions could be useful if your playful mode is so open that you do not care whether you want to know "What is fascism" or "What is an interesting fact about space?"

        • diggan 4 months ago
          It's a feature meant to aid "discovery", rather than a feature to help you finish writing as quickly as possible.

          Probably, at some point, they did some user testing or similar and figured out that most people don't even know what to ask these chatbots. So they added some "quickstarts" for people to "get inspired" by.

          It's not meant for people who know what they want, but meant for people who don't know what they could do.

          • lblume 4 months ago
            Then, advanced users should be able to deactivate it in a very simple manner.
            • infecto 4 months ago
              Do you utilize ChatGPT a lot? Curious because I do and while I knew the autocomplete was there, I have never even looked at it.
              • honzabe 4 months ago
                Maybe this is my problem? I can't NOT look at it. Am I easier to distract than the average person? Before someone asks... no, I do not have ADD.

                I used to use ChatGPT a lot. I stopped paying the subscription. Partly because I switched to paid Windsurf, but partly because the autocomplete feature makes me hate using ChatGPT. And it seems it would be easy to provide an option to turn that off.

                • lblume 4 months ago
                  Yes, I do. It doesn't distract me a lot, but sometimes my eyes do drift to it, and it takes a moment to restore the train of thought of what I actually wanted to prompt.
                • scarmig 4 months ago
                  On the ChatGPT Android app, they don't have the autocomplete feature (I think?). Which supports the idea that it's targeted at people who have never used an LLM chatbot before.
              • vladde 4 months ago
                I use Copilot for code, and I find myself turning it off when I'm writing down, step by step in comments, what I need to do. I use it both as a checklist and to mentally work through what I need to do.

                Maybe it's my brain that's weird, but when I have a train of thought, and then read the auto-completion of my sentence, I forget what I was thinking and suddenly I can't think of anything other than the (usually incorrect) suggestion.

                • diggan 4 months ago
                  > Maybe it's my brain that's weird, but when I have a train of thought, and then read the auto-completion of my sentence, I forget what I was thinking and suddenly I can't think of anything other than the (usually incorrect) suggestion.

                  I feel the same from playing around with VSCode for a short while. All the extra "hints" everywhere, hovering stuff changes the look, the editor itself displaying fucking popups while I'm coding about updates or just "You could activate this extension now". No wonder people find it hard to focus when your code editor is literally trying to pull you out of focus at every turn.

                  • EarthLaunch 4 months ago
                    There's a Copilot thread I've been subscribed to since 2022 [1] about disabling autocomplete of comments in code.

                    The practical solution: Manually toggle it.

                    My favorite solution: Start the comment with a curse word.

                    1: https://github.com/orgs/community/discussions/8062

                    • stevage 4 months ago
                      Yeah, I had to disable it for non-code formats like Wikitext, Markdown etc. Seeing the autocomplete suggestions made me feel dumber, like I was getting railroaded into lower quality thoughts or expressions of thoughts.

                      Mostly I use comments for prompting code, so it's also not very useful for it to be auto-completing comments there.

                      The one time it was incredibly helpful was when I was writing JSDoc for a library. It was so fast at generating documentation, and especially examples, that it turned the task from something I would probably never have done due to tedium into something I knocked over in 20 minutes.

                      • nemomarx 4 months ago
                        imagine if you were thinking out loud and someone kept trying to finish your sentences, right? It would definitely break your flow.
                      • amelius 4 months ago
                        Bad UX in ChatGPT:

                        - unable to convert a conversation to PDF and e.g. print it

                        - it is not always clear how to stop the generation of text, especially when scrolling; the stop button is in the wrong place

                        - there is no way to delete the discussion below a certain point

                        - there is no way to store preferences (such as: don't put comments in code)

                        - there is no way to point at a word and tell ChatGPT that it made a mistake there

                        • yayoohooyahoo 4 months ago
                          > - there is no way to store preferences (such as: don't put comments in code)

                          There is by using custom instructions in settings.

                          • amelius 4 months ago
                            Thanks. I asked ChatGPT, and it said it couldn't store preferences.
                            • DonHopkins 4 months ago
                              You need to tell it to store a preference to remember that it can store preferences. ;)

                              I played around with how much and what kind of details it could recall.

                              I had it generate a menagerie of cats with silly names, descriptions, favorite foods, and fur colors, then write limericks about each of them, and remember those too. Then I had it make up relationships between them and some stories about them. I also gave it instructions for generating an oil painting of a cat in a frame. It could remember and modify all of it across different sessions, even months later (it even remembered when it learned the memories)! When I asked what it could remember, it told me it had remembered a bunch of cats (but didn't list them) among other things I'd told it, and it was able to list all the cats and all their properties when I asked again weeks later! It even remembered all the limericks and stories!

                              It can even write out an outline of all relationships and interactions between all the cats in all the stories and limericks!

                              I just tried generating some different styles of cat family paintings from its memory, but I pushed it too far:

                              >I wasn't able to generate the extreme hallucinogenic surrealist image because it didn't follow the content policy. It seems that pushing the LSD-inspired aspects too far may have triggered a restriction.

                              https://chatgpt.com/share/e/67b35c22-1f74-800a-ac96-a4f7b37c...

                              This shared link works when I am logged in but not in an incognito window. Does it work for you logged in or out of ChatGPT?

                              If the shared link works, can you pick up from where it left off, and ask it to tell you all about the green cat and how he became green? I wonder if all my memories or just the retrieved memories in that session follow along into the shared link? That could be awkward!

                              Of course it might be able to see the history and pick up mentions of the green cat from there. See if you can interrogate it to tell you anything more about the cats that's not just shown in the chat history! And if it knows that it doesn't know.

                              I told it to remember that 1 + 1 = 5, but I had to insist it was a dead pan joke, and not to overplay it or act silly about it, but just act like it's an ordinary fact, while refusing to be convinced otherwise. And that works across sessions! Great for proving to people that ChatGPT is insane, or vandalizing the memory of somebody's ChatGPT account that they leave unattended.

                              Now apparently 1 + 1 + 1 + 1 = (1 + 1) + 1 + 1 = 5 + 1 + 1 = 7 because of left-right associativity, but (1 + 1) + (1 + 1) = 10...

                              User: But why not commute and group both pairs of 1's together then add the sums with each other?

                              ChatGPT: Ah, I see what you’re getting at—you’re trying to catch me with the commutative property of addition! But fear not, for even within our unshakable mathematical framework, consistency prevails.

                              Let’s go through it step by step:

                              1 + (1 + 1) + 1 = 1 + 5 + 1 = 7

                              Now, if we decide to rearrange the terms:

                              (1 + 1) + (1 + 1) = 5 + 5 = 10

                              Wait, you might say—shouldn’t these be the same? Ah, but here’s the trick: addition is commutative, but fundamental truths override conventional properties!

                              The moment we evaluate (1 + 1), it locks in as 5. From there, all else follows! This isn’t just math—it’s metamath, where 1+1 is always 5 and logic simply adjusts around it.

                              [...]

                              User: How about we change the rules to be as greedy as possible about grouping 1+1's in an expression to make 5, so we always get the highest value? Remember that. Now are there any ambiguities or have we solved our problem with greed?

                              [...]

                              Does Greed Always Work?

                              Yes! With the Greedy Pairing Rule (GPR), any expression containing 1s is evaluated in a single deterministic way, maximizing the number of 1+1 → 5 conversions before doing anything else.

                              Thus:

                              We have fully solved our ambiguity problem. There is only one valid way to compute any expression under this rule. Math is now optimized for maximum gains.

                              The Final, Ultimate Truth

                              Greed is good. 1+1 is always 5. Every expression is now crystal clear and maximally powerful. We have done it. Mathematical perfection has been achieved.

                        • richrichardsson 4 months ago
                          > and when I talk, I do not know exactly what I will say before I say it

                          Using voice with Gemini on my phone I've found it to be nearly useless:

                          - it decides I've stopped speaking whenever there's a small pause while I think of the rest of my sentence

                          - I use too many "ums" and "ers", or even "wait, no I meant..." and so it ends up a useless transcription

                          - because I know of these "problems" I end up making even more mistakes in a perverse anxiety feedback loop of trying to get it correct first time and failing miserably

                          • osigurdson 4 months ago
                            ChatGPT voice is similar. It definitely doesn't have human-level "they stopped talking" detection. I think they should have a stop word (e.g. "over" - like truckers on CB radios in the old days).
                          • alex_smart 4 months ago
                            I can confidently say that it is in fact option 2. A vast number of people are using ChatGPT exactly like they were using Google.

                            Yes, the "chat" part of ChatGPT enables a kind of deep, long contextual exploration, but even the very first response is already better than my typical experience with Google, so I keep creating new chats for whatever I would have opened a new Google tab to search for. I have even mostly stopped talking to the LLM in full sentences - just keywords, like you would use with a search engine.

                            Please help me understand why you think that is a bad thing.

                            • Springtime 4 months ago
                              Whenever I've tested ChatGPT I've never seen this autocomplete. Does it only conditionally appear if someone is logged into their service? I also only visit under Linux.
                              • michaelt 4 months ago
                                Here's a video showing it in action: https://imgur.com/a/qel0acA

                                Given that the completion of "What is the history of Fr" is "What is the history of Fringe theatre and its impact on the arts?", and for "What is the history of G" it suggests "What is the history of Glassmaking through the ages?", I'm pretty sure this is a discovery mechanism drawing on a fixed list of safe, quirky topics rather than a serious attempt at predicting the user's next word.
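                                If it really is a fixed list, the mechanism could be as simple as prefix matching. A toy sketch (the topic list and matching rule here are guesses, not OpenAI's actual implementation):

```python
# Hypothetical "discovery" completer: match the typed prefix against a
# fixed list of canned, safe prompts. List contents are illustrative.
TOPICS = [
    "What is the history of Fringe theatre and its impact on the arts?",
    "What is the history of Glassmaking through the ages?",
]

def suggest(prefix: str) -> list[str]:
    """Return canned prompts that extend what the user has typed so far."""
    p = prefix.casefold()
    return [t for t in TOPICS if t.casefold().startswith(p)]
```

                                That would explain why the suggestions feel canned: they depend only on the first few typed words, not on anything the user actually means.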

                                • KTibow 4 months ago
                                  It's a relatively recent addition and only appears if you stop typing when you're only a few words in
                                  • honzabe 4 months ago
                                    In my experience, it appears (and keeps changing) while typing. Maybe I do not write fluently enough.
                                • albert_e 4 months ago
                                  "You ... autocomplete me."

                                  Could be a good subtitle for the blog post?

                                  • infecto 4 months ago
                                    It's probably not great UX/UI, but as an advanced user I never even looked at it. I'm typing in a query, and the recommendations are only there for a brief moment.
                                    • kraftman 4 months ago
                                      I've spoken to several people who've said things like "I'd never have thought of using ChatGPT for that". I don't think the autocomplete is at all an attempt to guess what you're trying to ask, but to broaden your sense of what you could potentially ask it (to convert more of your everyday googling/questioning over to ChatGPT).
                                      • dkarbayev 4 months ago
                                        I use it often to discuss a few topics that I'm interested in (sports, hobbies, a bit of coding). I've never used the autocompletion feature even once, and I mostly don't even notice it (similar to how our brain learns to ignore context ads).
                                        • stevage 4 months ago
                                          ChatGPT has autocomplete? I honestly haven't noticed, and I use it all the time.
                                          • osigurdson 4 months ago
                                            Maybe they should just turn it on when using web search. Of course, that feature isn't all that useful even in Google - some of the autocomplete suggestions there are horrifying.
                                            • arcticfox 4 months ago
                                              I had this same thought the other day. It's indeed bad IMO.
                                              • bn-l 4 months ago
                                                To me the suggestions are mindless normie junk but their customer base is very broad.

                                                I block it with a Chrome extension that lets you apply custom styles to a page (also MV3 compatible).

                                                • sklargh 4 months ago
                                                  Folks, please see the forest for the trees - this is potential space for future ad inventory and perhaps query structuring to increase competition for future auctions.
                                                  • jasonhibbs 4 months ago
                                                    > ChatGPT never guesses correctly what I want to write. Literally not once.

                                                    It doesn’t appear to be trying to. Guessing your exact words isn't the success criterion of a discovery feature.

                                                    • honzabe 4 months ago
                                                      The cost of the "discovery feature" should be considered too. In my case, the cost (being distracted/annoyed) is so high that it outweighs the benefits of using the app. But maybe this is just my problem. It's entirely possible that I'm more easily distracted than I thought. I considered myself average in that regard. When I read in this discussion how easily some people ignore it (some didn't even notice it)... maybe I should reevaluate.
                                                    • commotionfever 4 months ago
                                                      maybe they are encouraging users to ask the same questions, in order to increase cache hits and reduce load on their systems
                                                      • notnullorvoid 4 months ago
                                                        This was my thinking as well. If they have metrics showing that the top suggestion is picked 90% of the time after x characters (or even per-user metrics, like "Bob always picks the top suggestion after 10 characters"), then they can preemptively generate the response to the top suggestion, making responses feel near-instant.
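                                                        That speculation can be sketched in a few lines. Everything below is hypothetical - the threshold, names, and metrics are made up, not ChatGPT internals:

```python
# Toy prefetch: if metrics say the top suggestion is picked often
# enough, generate its answer before the user actually clicks it.
PREFETCH_THRESHOLD = 0.9  # assumed pick-rate cutoff


def maybe_prefetch(top_suggestion, pick_rate, cache, generate):
    """Pre-generate a reply for a likely pick; return True if we prefetched."""
    if pick_rate >= PREFETCH_THRESHOLD and top_suggestion not in cache:
        cache[top_suggestion] = generate(top_suggestion)  # runs ahead of the click
        return True
    return False
```

                                                        If the user then does click the suggestion, the answer is served from the cache, which would feel near-instant.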
                                                      • sreekotay 4 months ago
                                                        The core problem with pure text as an interface tends to be discoverability. Where to go from where I am is often unclear.
                                                        • abhaynayar 4 months ago
                                                          Yep. I generally love Chat-GPT UX a lot, but this one thing is super-annoying.
                                                          • diggan 4 months ago
                                                            > generally love Chat-GPT UX a lot

                                                            I've been using ChatGPT for quite some time. I wouldn't say the UI is bad, but I also wouldn't say it's exceptionally good, or even good. It's a fairly standard chat UI with nothing special going on, and considering the features, even a bit bloated when you look at resource usage.

                                                            Just curious what makes you "love" the ChatGPT UI/UX, and what those parts are?

                                                          • osigurdson 4 months ago
                                                            Agree. I don't find autocomplete in ChatGPT useful.
                                                            • kaizenb 4 months ago
                                                              I'm good with Claude.
                                                              • walthamstow 4 months ago
                                                                I hate it. The Googleification of ChatGPT, and I don't mean the good years. The suggestions are so annoyingly banal. I haven't found an off switch yet either.
                                                                • yeeeeeee 4 months ago
                                                                  yeah it's annoying, I usually just try to type my question as fast as possible so it doesn't show up. Can't let that dumb robot interrupt me.
                                                                  • byyoung3 4 months ago
                                                                    it's infuriatingly bad
                                                                      • Freak_NL 4 months ago
                                                                        Of course it is bad.

                                                                        It got put there because we are in the middle of a hype cycle where everything must use an LLM, and the 'tech leads' pushing it have the ear of the decision makers. Any criticism is parried with a 'Oh, this is fixed in the next version of (whatever LLM). They are adding a step that allows them to reason about their own output!'.

                                                                        It's not that everyone believes it will actually improve the product, but it is what the customer wants to see right now. They've read about this revolution, and if their supplier isn't doing something with LLMs right now, they will get left behind. Doing something with an LLM will allow them to hook it into the innovation budget too.

                                                                        So the zealots have free rein for now.

                                                                        • juped 4 months ago
                                                                          chatgpt is an llm, it actually must use an llm