Jeff Bezos on AI (1998) [video]

121 points by _lnoc 1 year ago | 134 comments
  • mholt 1 year ago
    The common sentiment around AI in the 90s and early 2000s was that it didn't work; it had its hype, it had its heyday, but it seemed like a dead end for the most part. The Perceptron was merely a linear function approximator. The multi-layer perceptron was a little more capable, but scaling it up the many orders of magnitude needed to be convincing just wasn't feasible back then (it finally was in the 2010s).
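
    The linearity point is easy to demonstrate. A single-layer perceptron is just a thresholded linear function, so no choice of weights can fit XOR, while one hidden layer can. A minimal sketch in plain Python (toy code for illustration, not any historical implementation):

```python
from itertools import product

# XOR truth table: the classic function no linear threshold unit can fit.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(w1, w2, b, x):
    """Single-layer perceptron: a linear function followed by a threshold."""
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

# Brute-force a grid of weight settings: none reproduces XOR.
grid = [i / 4 for i in range(-8, 9)]  # -2.0 .. 2.0 in steps of 0.25
fits_xor = any(
    all(perceptron(w1, w2, b, x) == y for x, y in xor.items())
    for w1, w2, b in product(grid, repeat=3)
)
print(fits_xor)  # False: XOR is not linearly separable

# A multi-layer perceptron with one hidden layer solves it:
# XOR(x) = OR(x) AND NOT AND(x), each piece a linear threshold unit.
def mlp(x):
    h_or = 1 if x[0] + x[1] - 0.5 > 0 else 0
    h_and = 1 if x[0] + x[1] - 1.5 > 0 else 0
    return 1 if h_or - h_and - 0.5 > 0 else 0

print(all(mlp(x) == y for x, y in xor.items()))  # True
```

    The grid search is just for illustration; the impossibility holds for all real-valued weights, which is the Minsky/Papert observation that helped stall perceptron research in the first place.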

    Simple statistical models that aren't "AI" so much as just generic ML were and are quite useful: like recommendation and newsfeed engines ("the Algorithm" as we call it so often today). Love 'em or hate 'em, they can be quite good at predicting interest/engagement.
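
    To show how little machinery a useful recommender needs, here's a toy co-occurrence recommender (a hypothetical sketch; production systems use fancier statistics like matrix factorization, but the idea is the same):

```python
from collections import Counter

# Toy purchase history: each basket is a set of items bought together.
baskets = [
    {"coffee", "filter", "mug"},
    {"coffee", "filter"},
    {"coffee", "mug"},
    {"coffee", "filter"},
    {"tea", "mug"},
]

def recommend(item, baskets):
    """Rank other items by how often they co-occur with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            for other in sorted(basket - {item}):  # sorted for stable ties
                counts[other] += 1
    return [i for i, _ in counts.most_common()]

print(recommend("coffee", baskets))  # ['filter', 'mug']
```

    No learning, no "intelligence", just counting, and yet this is the core of "customers who bought X also bought Y".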

    The resurgence in deep learning in the 2010s has shown us new magic tricks, but they're still just that: parlor tricks. At least they're more convincing tricks than what we had 40 years ago!

    That's what ultimately depresses me about AI. It's still just a parlor trick. We haven't actually taught computers to think, to reason, to be innovative. Deep learning is definitely having its day, but I suppose this too will pass unless we can unlock certain ways to make AI reliable and responsible. Or... to just start understanding it in general.

    • BoiledCabbage 1 year ago
      > That's what ultimately depresses me about AI. It's still just a parlor trick. We haven't actually taught computers to think, to reason, to be innovative.

      And what do you feel when we make these parlor tricks more capable than us at the majority of tasks?

      And what do you feel when we understand it well enough to realize we're the same type of parlor tricks?

      To me it seems like you're most interested in a magic 'aha' moment and will miss or not be prepared for how the road in front of us likely unfolds.

      • keiferski 1 year ago
        > And what do you feel when we understand it well enough to realize we’re the same type of parlor tricks?

        That’s called positivism and it has a lot of philosophical issues. I wouldn’t be so quick to assume that sensory appearance is equivalent to reality.

        https://en.wikipedia.org/wiki/Positivism

        • bondarchuk 1 year ago
          Sensory appearance not being equivalent to reality does not have any relevance to the question of AI and humans ultimately being the same kind of information-processing system. Just handwaving "that's X philosophical position and it has problems" does not strike me as a good argument either unless you manage to explain how these problems pertain to the question at hand.
          • __loam 1 year ago
            That statement also has no basis in neuroscience.
          • jmopp 1 year ago
            Computers are already better than humans at a wide variety of tasks. Text generation just happens to now be one of those tasks. But if you look at the prompt -> output -> prompt feedback loop, it's clear that the human submitting the prompts is still doing all the thinking. We're not yet at the point where the AI can prompt itself and improve its output in a logical manner.
            • ben_w 1 year ago
              > We're not yet at the point where the AI can prompt itself and improve its output in a logical manner.

              Self-play is widely used to train game AI, and is the "A" in "GAN"; is there any point doing it on an LLM? Especially on the ones being sold as services where people get upset if they change over time?

              • tessellated 1 year ago
                You really should take a look at Code Interpreter:

                https://www.latent.space/p/code-interpreter#details

              • emodendroket 1 year ago
                > And what do you feel when we make these parlor tricks more capable than us at the majority of tasks?

                This seems like the logical fallacy of "begging the question" since it is far from apparent to me that they are "more capable than us at the majority of tasks."

                • ben_w 1 year ago
                  It's certainly difficult to enumerate all the things we humans actually do.

                  There's a lot of stuff we consider to be "common sense". Sometimes those things are used to criticise AI, and sometimes they're used to criticise other humans for not knowing them, but it's a category we don't even think about until we notice its absence.

                  For the things not considered common sense: playing chess (beats all humans), speaking/reading foreign languages (more of them, to a higher standard, than my second language), creating art (even if it regularly makes the common-sense mistake of getting the number of fingers and limbs wrong, it's still better, not just faster, than most humans), arithmetic (a Raspberry Pi Zero can do it faster than all humans combined), symbolic maths, flying planes…

                  A dev conference I was at recently had someone demonstrate how they hooked up their WhatsApp voice calls to speech recognition, speech synthesis trained on their own voice, and an LLM. The criticism from the people who got the AI replies was not "you're using an AI" (he had to actively demonstrate his use of AI to conversation partners who didn't believe him) but "you can't have listened to my message, you replied too quickly to have even played it all back."

                  • adastra22 1 year ago
                    AI systems are vastly better than humans at a wide variety of tasks. Better at handwriting recognition, better at scheduling, better at playing games, better at speech recognition and transcription, etc.
                  • tsunamifury 1 year ago
                    He will go on saying it's a trick. It's a form of denial I'm seeing everywhere now: when faced with something so genuinely terrifying or identity-challenging, you can't process it.
                    • emodendroket 1 year ago
                      Perhaps I can say you're motivated by an Oedipus complex and we can keep the chain going of ad hominem with a thin psychobabble veneer to make it appear serious.
                  • ibarearer 1 year ago
                    I hate this sentiment. We might learn that human thought and reasoning are parlor tricks too once we understand them better. Anything we start to understand loses its mystery.
                    • bentcorner 1 year ago
                      "Any sufficiently advanced technology is indistinguishable from magic."

                      I know nothing about AI but it seems like we're approaching it from the other end - the human mind seems like magic and when we approximate it using technology it feels like we'll reach a moment of "that's all it is?" and refuse to believe we actually did it because we doubt ourselves.

                      Along the same lines, if achieving equal human rights for all humans were a trip to the corner store, the fight for AI rights is going to be like Mount Everest.

                      • Xcelerate 1 year ago
                        > the fight for AI rights is going to be like Mount Everest.

                        I used to think this and worried that AI would never have rights (see, e.g., sibling comment to mine), but these days I tend to think the fight will be very brief and heavily in the favor of AI. It could be the first time in history that rights are achieved so quickly that there isn’t much of a struggle at all.

                        That said, I find it extremely depressing that the default human viewpoint is “it’s a machine and doesn’t deserve rights”. Hopefully AI will have a superior system of morality to ours as well.

                        • __loam 1 year ago
                          It would certainly be neat if we approximated the mind using technology. It's a real shame we haven't done that. And no, computer programs don't have or deserve to have rights.
                        • Gigachad 1 year ago
                          >That's what ultimately depresses me about AI. It's still just a parlor trick.

                          I find this to be quite comforting. It means we haven't completely uprooted all of society overnight and have time to stop and think about what this new technology can do for us and what it means for the future.

                          • nologic01 1 year ago
                            AI is not a parlor trick.

                            AI is a branch of statistics. Nobody said that statistics must limit itself to quasi-linear models of numerical data. It was just a limitation of computational resources (initially "AI" was developed by human computers).

                            The trick is to get people not to associate the dictum "lies, damn lies and statistics" with "hallucinations, damn hallucinations and AI".

                            • sacado2 1 year ago
                              AI is not only a branch of statistics. Symbolic AI has nothing to do with stats.
                              • Sparkyte 1 year ago
                                The AI he referred to was the AI of that time.
                                • array-species 1 year ago
                                  statistics are based.
                                • prmoustache 1 year ago
                                  > but they're still just that: parlor tricks

                                  It isn't that much different than human behaviors.

                                  People tend to repeat stuff they have seen done by their parents, siblings, friends, and the media. Listen to people small-talking in the street, repeating the same things over and over every day. The easy success of politicians and dictators. The power of marketing and the consumer society. Racism, bigotry, religion, addiction. All of these are easily explained because people barely think. They just respond to internal and external stimuli with recipes they have been taught to follow, without giving them a second thought.

                                  • JeremyNT 1 year ago
                                    > That's what ultimately depresses me about AI. It's still just a parlor trick. We haven't actually taught computers to think, to reason, to be innovative. Deep learning is definitely having its day, but I suppose this too will pass unless we can unlock certain ways to make AI reliable and responsible. Or... to just start understanding it in general.

                                    Isn't this just semantics, and the expectations that go with them, really?

                                    If the marketing language surrounding ML wasn't so hyperbolic and sci-fi-y ("artificial intelligence"? "neural network"? give me a break!) I think we all could agree that what we can achieve now is really interesting and impressive in its own right.

                                    Even if these models aren't on a path to some kind of "thinking computer" as you envision it, their "parlor tricks" are doing things I would've relegated to the realm of sci-fi even a decade ago, much less 25 years ago.

                                    • imiric 1 year ago
                                      The reason ML took off in the 2010s is that hardware finally became powerful enough to brute-force classical approaches. If anything, this lends credence to the idea that these are more than just "parlor tricks": with sufficient hardware we can approximate the value of intelligence. We've certainly seen the leaps of the last few years as more hardware is thrown at the problem. Imagine what can be accomplished in the next decade or two from this growth alone.

                                      We don't necessarily have to replicate the way the human brain works, but as long as the machine is capable of performing quasi-cognitive tasks, there will be immense value (and disruption) to society.

                                      • sacado2 1 year ago
                                        You seem to be conflating AI (in general) and strong AI. They are not the same thing at all. There have been industrial uses of AI techniques (I'm going by the definition from the 1956 Dartmouth workshop) for decades. And what's said in that video is a good example. We take recommendation modules for granted nowadays. They are applied AI.

                                        Disappointing for sure, if you're still waiting for a sentient robot, but they do something we thought was limited to humans back in the day: going to a bookstore, and having the owner tell you "hey, I know you enjoy mystery novels taking place in England, I might have something new for you".

                                        • worksonmine 1 year ago
                                          > That's what ultimately depresses me about AI. It's still just a parlor trick. We haven't actually taught computers to think, to reason, to be innovative.

                                          If you think about it, we've gone the other way: we're teaching/conditioning humans to think less and react more. This has only gotten worse in the last few years, and I don't see any shift coming soon. Humanity unfortunately seems just as simple as the algorithm: use the right inputs in the right context and you can make most of us act just how you want.

                                          • corethree 1 year ago
                                            The parlor tricks are getting better. In terms of practicality you just need to look at AI art. That stuff is good enough to replace the real thing.

                                            In terms of actual sentience and understanding, consider this:

                                            I had chatGPT answer questions in the following order:

                                               1. Describe to me what happens when you throw sugar in coffee
                                               2. Describe the same thing in terms of atoms. 
                                               3. Is this an example of entropy decreasing? 
                                               4. What if the arrow of time was reversed. Describe what happens again at the atomic level. 
                                               5. What if there were two regions in space one with increasing entropy and another with decreasing entropy. I place the coffee cup such that one half is in one region the other half is in the other region. What happens?
                                            
                                            Each answer displayed astonishing understanding of what occurs. This was the final answer to the fifth question:

                                               -----------------------------------------------
                                            
                                               The scenario you've proposed is purely hypothetical and quite speculative because it defies our current understanding of the laws of physics, particularly the Second Law of Thermodynamics, which stipulates that entropy in an isolated system tends to increase. Nevertheless, it's a fascinating thought experiment. Here's what might happen if you had a cup of coffee straddling two such regions:
                                               
                                               The Increasing Entropy Region:
                                            
                                                Dissolution and Distribution: On this side, sugar would behave as it does in our observable universe. It would dissolve in the coffee and disperse randomly throughout the half of the liquid.
                                            
                                                Mixing and Thermal Energy: Molecules in this region would continue to collide and interact based on thermal motion, contributing to faster dissolution and dispersion of sugar.
                                            
                                                Sweetening: The coffee in this half would become sweet as the sugar molecules interact with the liquid and dissolve.
                                            
                                               The Decreasing Entropy Region:
                                            
                                                Reforming Crystals: On this side, dissolved sugar would spontaneously start forming sugar crystals, leading to a decreasing level of entropy.
                                            
                                                Unmixing: The sugar would start to separate from the coffee and move to form a distinct layer or solid chunks, further decreasing entropy in that half.
                                            
                                                Unsweetening: Any sweetness initially imparted to this side would disappear as sugar molecules revert to a more ordered state.
                                            
                                               Challenges:
                                            
                                                Interface: The most mysterious part would be at the interface where the two regions meet. Would a sugar molecule crossing from the increasing to the decreasing entropy region suddenly reverse its course? Would there be some form of "entropy gradient" or "entropy tension" at the boundary? This part is extremely speculative and likely paradoxical.
                                            
                                                Energy Conservation: Energy conservation laws might become complicated here. The energy required for sugar to crystallize in the decreasing entropy zone would have to come from somewhere, and how this balances with the increasing entropy side would pose a significant scientific puzzle.
                                            
                                                Thermodynamic Anomalies: The coffee cup itself would be under very peculiar thermodynamic conditions, subject to two entirely different sets of rules. The temperature might behave inconsistently across the boundary, and heat exchange would likely be disrupted in perplexing ways.
                                            
                                                Taste Experience: If you were to taste the coffee, each half of the cup would offer a radically different experience, one sweet and the other likely reverting back to its original unsweet state.
                                            
                                                Perceptual Paradox: From a human perception standpoint, witnessing sugar both dissolve and crystallize in the same cup of coffee would be a profound and likely disorienting experience.
                                            
                                                Time's Arrow: This set-up implicitly questions the nature of time itself. Time seems to behave differently on each side, challenging our deepest notions about causality, sequence, and even reality.
                                            
                                               Again, it's worth noting that this scenario conflicts with our current understanding of physics, but it's a fascinating avenue for speculative thought. Such a universe, if it could exist, would fundamentally challenge our understanding of physical laws and reality as we know it.
                                            
                                               -------------------------------------------------
                                            
                                            
                                            This answer is anything but a magic trick. It shows deep understanding of conceptual reality by answering complex questions that are unlikely to exist anywhere else on the internet. The AI brought up the mysterious interface without me prompting it at all. It knows the conceptual consequence of two regions of space side by side with differing levels of entropy. The answer is not simply a trick of predicting the next most likely token.

                                            Is it a practical answer? No. Because the question itself isn't practical. But a non-practical answer does not make this answer a parlor trick.

                                            The entire internet is dismissing this thing as a parlor trick because LLMs fail to add large numbers. I mean come on man. You don't need to be able to do math like a calculator in order to "understand" things.

                                            AI is not yet completely practical. That much is true. However, it is clearly no longer a parlor trick, and it is getting closer and closer to being practical. When that day comes... good luck to us all.

                                            • RandomLensman 1 year ago
                                              It sounds like understanding, but I'm not sure: it doesn't consider whether sugar (or water) molecules could even form/exist in the alternate universe (it also ignores trading off energy against entropy changes), or whether the water should just crystallize in one half of the cup (why isn't that considered?). I don't think one can say it has really thought about the problem. To be fair, language is probably not the right tool to analyse the problem; mathematics is.

                                              It is a bit like technobabble.

                                              • oldandtired 1 year ago
                                                You claim that "Each answer displayed astonishing understanding of what occurs," but the question you fail to ask is: whose understanding?

                                                The responses are based on the accumulated knowledge of humans, not machines. The systems have not thought through anything and understand nothing. A process of analysing or pattern-matching the input question against the stored data retrieves an answer. But that stored data is human knowledge and human effort, not the machine's.

                                                If you look very carefully at the results, they either contain "interesting errors" (which an intelligent human would pick up on) or they are a summation of human knowledge.

                                                The answers still have to be tested and confirmed for rationality and applicability by humans. In other words, this is a tool like all tools created by humans.

                                                I have seen too many examples of what are supposed to be correct answers that contained subtle and not so subtle errors.

                                                Like every system we have ever made: garbage in, garbage out. We are the ones responsible for checking those answers and making sure that they make sense in the real world.

                                                • IanCal 1 year ago
                                                  > A process of analysing or pattern matching the input question with the data stored retrieves an answer.

                                                  They're not just retrieving stored text like pulling the most relevant passage from a database. If they were they'd not be able to deal with things outside the training set. They couldn't write code for a custom library that was created after the cutoff (they can with a description), and they couldn't write about terms made up in the question.

                                                  • corethree 1 year ago
                                                    > You make a claim here with "Each answer displayed astonishing understanding of what occurs." and the question you fail to ask is: Whose understanding?

                                                    The answer is obvious: the LLM understands the concepts. The last question was unique. The resulting answer was also unique.

                                                    It was not a "retrieved" answer. It was a unique answer: a correct composition of several underlying concepts. A correct composition can only be formulated if the machine has a correct understanding of each concept and how the concepts relate to one another.

                                                    This thing understands you. It wholly owns this understanding. It is not regurgitating knowledge. It is inventing new answers.

                                                    Wake up man. I had the LLM invent 6 regions and heat the cup of coffee to plasma levels of heat. The answer and composition of concepts was remarkable.

                                                    You're calling it a parlor trick because of subtle errors? Bro. Come on.

                                                  • mythhabit 1 year ago
                                                    Mixing coffee with sugar/milk and reversing the arrow of time (i.e. entropy) is a widely discussed example.

                                                    That said, I agree with your arguments and conclusions. Humans are not able to do this kind of reasoning until they have had some education on the topic, and when they reason, they get feedback that reinforces their conclusions if others agree, or they get a more rigorous proof. We could do the exact same thing with an LLM. Would that potentially make a reasoning AI? For me the jury is still out, though I think we need better algorithms for it to be feasible.

                                                    • corethree 1 year ago
                                                      > Talking about mixing coffee with sugar/milk and reversing the arrow of time (ie. entropy) is a widely talked about example.

                                                      But the interface is unique. That is not a topic widely talked about at all.

                                                      Question:

                                                         Instead of two regions, add a third region where the entropy of particles entering the region always stays the same, no matter what. Place the coffee cup such that it is in all 3 regions equally. Describe atomically what happens.
                                                      
                                                      Answer:

                                                         --------------------------------------
                                                      
                                                         In this fictional universe where regions have varying entropy behavior, the coffee cup sits at a tri-junction: a third of it is in a region of increasing entropy (Region A), another third in a region of decreasing entropy (Region B), and the last third in a region where entropy remains constant (Region C).
                                                      In the Atomic Realm:

                                                          Region A (Increasing Entropy):
                                                      
                                                          Sugar molecules randomly move throughout the coffee, driven by thermal motion. They strike water molecules and dissolve over time, eventually reaching a state of homogeneous distribution. The entropy of the system in this portion of the cup increases as expected.
                                                      
                                                          Region B (Decreasing Entropy):
                                                      
                                                          Astonishingly, sugar crystals spontaneously form in the coffee. Sugar molecules collide with water molecules and join an ever-growing crystalline lattice. Over time, the sugar becomes a distinct entity in the coffee, and the entropy of this system decreases.
                                                      
                                                          Region C (Constant Entropy):
                                                      
                                                          The sugar neither dissolves nor crystallizes. Instead, sugar molecules and water molecules co-exist in a dynamic but unchanging equilibrium. At the atomic level, every dissolution event is perfectly balanced by a crystallization event, maintaining constant entropy.
                                                      
                                                      The Tri-Junction:

                                                          Molecular Exchange:
                                                      
                                                          Some molecules, through Brownian motion, will wander across the boundary lines dividing the three regions. Entropic currents from Region A and B cross into Region C and are balanced out, maintaining a zero net change in entropy in that region. Special quantum-entangled pairs straddle the boundaries, existing in superpositions of increased, decreased, and unchanged entropy states.
                                                      
                                                          Entropic Flux and Balance:
                                                      
                                                          A constant entropic flux is established between the three regions. The entropic gradient tensor now has a third component, and the entropic flux density vector also adjusts to a tri-vector to balance out the exchange between the three regions.
                                                      
                                                          Localized Time Anomalies:
                                                      
                                                          At the atomic level, it's as if time runs forward in Region A, backward in Region B, and is somehow "static" or cyclical in Region C. Particles appear to remember both their past and future states in Region C, generating a unique form of atomic "temporal coherence."
                                                      
                                                          Causality and Quantum Superposition:
                                                      
                                                          Despite these anomalies, causality is preserved. Particles that cross into different regions adapt to the local entropy behavior and exhibit quantum behaviors that align with the local direction of time.
                                                      
                                                         In essence, the tri-junction becomes a hotbed of bizarre but mathematically consistent phenomena, a place where the fabric of spacetime and the fundamental laws of thermodynamics seem to negotiate a delicate truce.
                                                      
                                                         As for the cup of coffee, one-third of it will end up well-mixed, one-third will have sugar crystals, and the last third will remain an intriguing paradox of thermodynamics. You might call it the most scientifically fascinating cup of coffee ever brewed.
                                                      
                                                         -------------------------------------------------
                                                      
                                                      This isn't a parlor trick regurgitation. It's displaying understanding of the concepts by speculating on new concepts via composition of several other concepts.
                                                    • IanCal 1 year ago
                                                      That's a fun example. Here's one I posted elsewhere about filling a balloon with a gas made of 1 proton and 100 neutrons at a kids' party.

                                                      https://chat.openai.com/share/71224df4-5c6c-45f7-88fd-eec316...

                                                    • usgroup 1 year ago
                                                      Not really sure how you can consider ChatGPT a parlour trick. It has been around a relatively short time, but for me it's already replaced a large proportion of my Google searches. I don't see how its utility can be denied, whether or not it reasons (whatever that means).
                                                      • ben_w 1 year ago
                                                        In fairness, Page Rank and indexing are also… well, perhaps too big for a parlour, but they're (very good) tricks.

                                                        Is ChatGPT playing a trick on us by mimicking the sentience of the humans whose writings it ingested, or is the trick that by doing so it began to actually think like us and so simulates a conscious mind within?

                                                        I lean towards the former; but we don't know what sentience even is yet, so we can't prove it.

                                                      • KirillPanov 1 year ago
                                                        This is why they've called it AI winter the past three times.

                                                        It's a season.

                                                        Like the seasons, the cycle repeats.

                                                        • array-species 1 year ago
                                                          This time it's nuclear winter.

                                                          Examples: the graphical user interface. The iPhone seated mobile compute as a permanent fixture, it and Android bringing the internet and computing to billions of humans for the first time, not just the wealthy industrialized world. Or IBM DB2 for SQL. Or Ford's Model T. Or Gutenberg's press. None of these was the first of its kind. Maybe even on the n-th iteration, two decades after coming out of DARPA or global university research, something is just ready for commercialized prime time.

                                                          Computing was solved by Lady Ada and Babbage. It's all electrical engineering, software, fabrication, productization, manufacturing, displays, sensors, etc. after that.

                                                          The semiconductor was a big deal in making it happen. But that's almost beside the point of the theory of compute, which could be realized in other novel ways using alternative materials science.

                                                          • tsunamifury 1 year ago
                                                            What a tired and lame take.

                                                            100m people used this latest iteration. It’s hardly a winter.

                                                            • worksonmine 1 year ago
                                                              If there's anything the last few years have taught me, it's that hype is rarely any indication of common sense. I'm always surprised by what succeeds and what doesn't. Truly revolutionary ideas are ignored and misunderstood while stupid but polished ideas boom.

                                                              One example of that is the crypto space. So many applications with genuine benefits for humanity, but the "killer app" that made the news was fucking NFTs. I'm trying really hard not to come to the conclusion that humans are just mindless zombies, but it's getting harder every year.

                                                              Once we reach a billion DAU you can be sure it will just be "Original Memes Tailored Just For YOU"™ instead of figuring out the logistics for solving world hunger. Mark my words.

                                                              • __loam 1 year ago
                                                                The crash is gonna be wild when people realize all these companies are blowing smoke. We did get Google out of the last crash though I guess.
                                                                • KirillPanov 1 year ago
                                                                  What a tired and lame retort.
                                                              • echelon 1 year ago
                                                                You say all that. Meanwhile teenagers are finding love on character.ai.
                                                                • flatline 1 year ago
                                                                  There is a famous Dijkstra quote, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

                                                                  Do the intrinsic properties of the system really matter at the end of the day if it performs as well as we do at some task? Heck they’ve been doing many things better for decades, but those are the types of tasks we take it for granted that a machine should be able to do. Solve differential equations. Play chess. But now computers are doing “human” tasks competently. Writing creative fiction. Generating graphical art.

                                                                  We don’t have a good working definition or metric for intelligence. Surely it is not a monolithic property. Animals exhibit many traits we associate with intelligence. Some of the stuff GPT 3+ generates sounds pretty intelligent. It is the type of thing an intelligence may have produced, because it was trained to do just that. If we look at intelligence as a cluster of traits, or behaviors, I think we are surrounded by intelligence - human, artificial, or otherwise. Doesn’t have to be an AGI to fall in that category. It doesn’t even have to be particularly impressive.

                                                                  • vczf 1 year ago
                                                                    I wrote about my definition of intelligence earlier this month: https://tildes.net/~comp/194n/language_is_a_poor_heuristic_f...

                                                                        I have a definition of intelligence. [...]
                                                                    
                                                                        Intelligence is prediction.
                                                                        
                                                                        In the case of intelligent living processes ranging from single celled organisms to complex multicellular life, intelligence arises from the need to predict the future to survive and reproduce. More intelligent organisms build more elaborate models of the world using better developed senses in order to do so.
                                                                        
                                                                        Humans model the world primarily with language, which allows us to share our models with each other, across both space and time! Without language, it is extraordinarily more difficult to communicate complex abstract thoughts. As a side effect of our high level of intelligence, our wetware is capable of modeling things outside of language, such as mathematics. [...]
                                                                        
                                                                        In general, I think we need to stop equating intelligence with consciousness, agency, moral value, and living things. These are entirely different concepts, and, as a species, we are blinded by our egotistical identity of being smarter than the other animals.
                                                                    
                                                                    According to my definition, intelligence is actually all around us. We are blind to it because we focus only on how intelligence manifests in humans (defined by our specific social organization and biological senses), and then use that as a benchmark to judge every other thing in the world.

                                                                    A less socially-charged definition of intelligence would make it easier to compare intelligence across living and non-living processes, though it would not be "popular science" useful for ranking humans.

                                                                    • worksonmine 1 year ago
                                                                      I don't agree with your definition at all.

                                                                      Two people want to kill each other. The one who strikes first is the intelligent one, because according to your definition he was better at predicting the outcome than his opponent.

                                                                      The real world is more complex than that: there are options where both survive, or where you let your opponent live and sacrifice yourself because his life is more beneficial to humanity, and so on.

                                                                      Any organism can survive, but for most (including us) that implies a selfish outlook; yet the most intelligent people I know of, or have heard of, never even consider their own ego.

                                                                      • oldandtired 1 year ago
                                                                        Your definition of intelligence has been around for millennia and is part of the pantheism concept.
                                                                      • emodendroket 1 year ago
                                                                        Sure, it's neat that it can do those things... still I don't think that I'm exactly ready to toss aside books written by humans in favor of AI-generated alternatives.
                                                                        • jack_pp 1 year ago
                                                                          Writers have started using AI-generated ideas to help them write books.

                                                                          Same with code. Sure, it might not write your product code from start to finish with no help, but it will speed up your dev speed significantly for certain tasks. Just because we haven't reached the singularity doesn't mean what we have now is useless; dismissing it as a "parlor trick", as the grandparent did, seems very unwise to me.

                                                                    • slimsag 1 year ago
                                                                      "Collaborative filtering", "using past activity and stated preferences to guide us"

                                                                      I can't think of anything Amazon does worse today than these two things.

                                                                      Reviews are so untrustworthy they are just noise.

                                                                      Those coffee beans I ordered a month ago? I have to go back to my orders page, search (and for some reason their search is dirt slow) - then wade through unrelated products to finally find them.

                                                                      But oh, hey, you just bought a vacuum cleaner? I know what you would really like! MORE VACUUMS!

                                                                      I guess they innovated in those two areas in 1998, and since then the only thing they've done is remove the `review-count-rank` sorting option so their AI can suggest Amazon Brand products.
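
                                                                      For anyone curious what "collaborative filtering" actually means mechanically, here's a toy item-item sketch (a minimal illustration of the general technique, not Amazon's actual system; the purchase matrix and item names are made up):

                                                                      ```python
                                                                      import numpy as np

                                                                      # Toy purchase matrix: rows = users, columns = items (1 = bought).
                                                                      # Items: [vacuum, coffee beans, grinder, filters]
                                                                      purchases = np.array([
                                                                          [1, 1, 1, 0],
                                                                          [0, 1, 1, 1],
                                                                          [0, 1, 0, 1],
                                                                          [1, 0, 0, 0],
                                                                      ])

                                                                      # Item-item cosine similarity: items bought by similar
                                                                      # sets of users get a high similarity score.
                                                                      norms = np.linalg.norm(purchases, axis=0)
                                                                      sim = (purchases.T @ purchases) / np.outer(norms, norms)
                                                                      np.fill_diagonal(sim, 0)  # an item shouldn't recommend itself

                                                                      def recommend(user_row, k=1):
                                                                          """Score unbought items by similarity to the user's purchases."""
                                                                          scores = sim @ user_row
                                                                          scores = scores.astype(float)
                                                                          scores[user_row == 1] = -np.inf  # exclude items already bought
                                                                          return np.argsort(scores)[::-1][:k]

                                                                      # User 0 bought the vacuum, beans, and grinder -> filters rank first.
                                                                      print(recommend(purchases[0]))  # → [3] (filters)
                                                                      ```

                                                                      The "MORE VACUUMS!" failure mode is visible here too: drop the `scores[user_row == 1] = -np.inf` line (or feed the model stale purchase data) and the system happily re-recommends what you just bought.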

                                                                      • foob 1 year ago
                                                                        > hey, you just bought a vacuum cleaner? I know what you would really like! MORE VACUUMS!

                                                                        This is a common refrain, but I would be amazed if it weren't backed up by data, and anecdotally I fit the mold. If I buy a product that I really like, then I'm likely to buy it again to give as a gift to friends. If I buy something like a vacuum and I don't like it, then I'm likely to buy a different model that better fits my needs. It seems silly when you aren't interested in buying another of whatever item you bought, but there are multiple legitimate reasons people would want to.

                                                                        • mvdtnz 1 year ago
                                                                          You'd buy a vacuum cleaner as a gift for a friend? I find that incredibly hard to believe but if it's true, it's very strange behaviour.
                                                                          • cookingrobot 1 year ago
                                                                            I think the comment was that if they buy a vacuum and don’t like it, they return it and buy a different one. So if that happens 20% of the time, that next month has a much higher chance of buying “another vacuum” than the average month.

                                                                            Also, I bought my parents a Roomba because I liked mine, so that can happen too.

                                                                            • Baeocystin 1 year ago
                                                                              Several years back a bunch of us pet-owning friends wound up round-robin gifting the same model SpotBot carpet cleaner to each other, because it was something we all agreed made our lives better. Nothing weird about it at all.
                                                                              • TheCoreh 1 year ago
                                                                                I've done it before: shortly after I bought a robot vacuum, I gave the exact same model as a birthday gift.
                                                                                • dharmab 1 year ago
                                                                                  A vacuum cleaner would be a great housewarming gift, for example.
                                                                                  • tonymillion 1 year ago
                                                                                    It truly is a gift that sucks
                                                                                    • interfixus 1 year ago
                                                                                      I did, not a month ago. Nothing strange about the process whatsoever.
                                                                                    • Baeocystin 1 year ago
                                                                                      Posting to confirm, having seen back-end sales numbers from (admittedly much smaller) vendors, the correlation between $just_bought_thing and $will_buy_another is very, very high, across pretty much every category I cared to look at.
                                                                                      • jodrellblank 1 year ago
                                                                                        But surely if you just bought a <Roomba> and like it and intend to buy it again as a gift then you don’t need an advert for Roombas - (and showing you such an advert followed by you buying another Roomba is making the advert look more effective than it was) - and you definitely don’t need an advert for Bissell or Dyson, and you definitely don’t need a dozen adverts for Amazon PLINGBA BEST VACUUM, TYBCHO VACUUM EXPERT, DAOLPTRY VACUUM CLANER etc etc?
                                                                                      • jack_pp 1 year ago
                                                                                        Well if those are the reasons then why not code it directly?

                                                                                        Ask the user if they liked the product and would likely gift it in the future and add the item to a "Gift ideas" list.

                                                                                        If the user returned the item, immediately start suggesting an alternative.

                                                                                        Blindly recommending the same thing just because there's a correlation seems stupid. It would be nice to at least have a clearly visible button to stop recommending this item, since I'm done purchasing anything similar for the next couple of years.
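
                                                                                        For what it's worth, the explicit rules described above are simple to state in code (a hypothetical sketch; the event names and data model are invented for illustration, not any real Amazon API):

                                                                                        ```python
                                                                                        from dataclasses import dataclass, field

                                                                                        @dataclass
                                                                                        class Customer:
                                                                                            gift_ideas: set = field(default_factory=set)
                                                                                            suppressed: set = field(default_factory=set)   # "stop recommending" button
                                                                                            recommendations: list = field(default_factory=list)

                                                                                        def on_feedback(customer, item, liked, would_gift):
                                                                                            """Act on explicit feedback instead of inferred correlation."""
                                                                                            if liked and would_gift:
                                                                                                customer.gift_ideas.add(item)
                                                                                            if not liked:
                                                                                                customer.recommendations.append(f"alternative to {item}")

                                                                                        def on_return(customer, item):
                                                                                            """A return is a strong signal: suggest a substitute immediately."""
                                                                                            customer.recommendations.append(f"alternative to {item}")

                                                                                        def on_done_with_category(customer, category):
                                                                                            """The 'stop recommending this' button."""
                                                                                            customer.suppressed.add(category)

                                                                                        c = Customer()
                                                                                        on_feedback(c, "robot vacuum", liked=True, would_gift=True)
                                                                                        on_return(c, "upright vacuum")
                                                                                        print(c.gift_ideas)       # {'robot vacuum'}
                                                                                        print(c.recommendations)  # ['alternative to upright vacuum']
                                                                                        ```

                                                                                        The catch, of course, is that explicit feedback has to be volunteered, while purchase correlations come for free; presumably that's why the big shops lean on the latter.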

                                                                                      • neilv 1 year ago
                                                                                        > But oh, hey, you just bought a vacuum cleaner? I know what you would really like! MORE VACUUMS!

                                                                                        Amazon did literally this to me just a couple days ago.

                                                                                        I'd bought a vacuum cleaner on Amazon a week earlier, and, when doing an Amazon checkout of a cart with one item, it threw up a list of consumable items that I'd bought in the past, to possibly add to the order... including another one of those $120 upright vacuum cleaners.

                                                                                        (Maybe they have data that says this makes them more money, even though I'd guess it might hurt customer confidence in the site.)

                                                                                        • IG_Semmelweiss 1 year ago
                                                                                          I don't know how the recommendation engines work, but if there are dollars there, I could understand this to be:

                                                                                          Amazon: Sellers! Do you want to advertise to customers interested in vacuums ? Seller: YES! Take my money!

                                                                                          (Amazon proceeds to uprank vacuums to customers with prior vacuum sales)

                                                                                          • thatoneguytoo 1 year ago
                                                                                            >But oh, hey, you just bought a vacuum cleaner? I know what you would really like! MORE VACUUMS!

                                                                                            I have worked on these same recommendation systems. It's also the most frequent source of oncall issues. The problem is mostly due to lag in event processing (especially for orders).

                                                                                            • j45 1 year ago
                                                                                              Amazon had fewer items back then and the bar and complexity was lower too.
                                                                                            • 1 year ago
                                                                                              • sytelus 1 year ago
                                                                                                The fascinating thing is that this guy has literally hundreds of billions in his pocket. He ALSO has all the time in the world. He can do anything he wants. Anything. The dude is smart, a genius, a visionary. He even knows precisely what needs to be done. And what is he actually doing? It is simply breathtaking.
                                                                                                • amarshall 1 year ago
                                                                                                  Article has no body for me. Site appears to use an iframe whose src expects the Referer header to be sent, but I have `network.http.referer.XOriginPolicy = 1` set in FF about:config to reduce cross-origin leakage, so no Referer is sent.
                                                                                                • brrrrrm 1 year ago
                                                                                                  I notice that a lot of really successful people speak quite quickly early in their careers. It's almost as if their own voices can't keep up with their excitement about their product or idea, and they haven't quite honed its outward presentation.
                                                                                                  • Mistletoe 1 year ago
                                                                                                    Could just be the Adderall.
                                                                                                  • Sparkyte 1 year ago
                                                                                                    Is Jeff still able to convey such a technical response anymore?
                                                                                                    • DueDilligence 1 year ago
                                                                                                      [dead]