OpenAI

209 points by 2mol 3 years ago | 152 comments
  • chis 3 years ago
    Seeing someone as trustworthy as Scott choose to work on AI safety is a pretty good sign for the state of the field IMO. It seems like a lot of studious people agree AI alignment is important but then end up shoehorning the problem into whatever framework they are most expert in. When all you have is a hammer etc... I feel like he has good enough taste to avoid this pitfall.

    Semi-related - I'd want to see some actual practical application for this research to prove they're on the right track. But maybe conceptually that's just impossible without a strong AI to test with, at which point it's already over? Alignment papers are impressively complex and abstract but I have this feeling while reading them that it's just castles made of sand.

    • arturkane7 3 years ago
      Symmetrically, someone like him transitioning away from quantum computing should imply something negative about the state of quantum computing.
      • astrange 3 years ago
        He mostly studies computational complexity. Quantum computing is a part of that, but there are other subfields. Though the kind of AI safety described in this post seems more like an extremely fancy version of program verification, so out of CS bloggers you'd expect John Regehr to get into it.
        • sigmoid10 3 years ago
          >the kind of AI safety described in this post seems more like an extremely fancy version of program verification

          It kind of is. The field of AI safety is actually much more advanced than most people realise, with actual, real techniques to e.g. make sure neural networks stay aligned with certain goals even under fluctuating parameters. Granted, we're still far from stopping an AGI before it can do something bad, but the tools we have today are already pushing in that direction (assuming neural networks are the right way to AGI, of course).
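
          As a rough illustration of what those verification-flavoured techniques can look like, here is a minimal sketch of interval bound propagation, which certifies an output range for a tiny ReLU network over an entire box of inputs. All weights and numbers below are invented for illustration, not taken from any real system:

            import numpy as np

            # A tiny two-layer ReLU network f(x) = W2 @ relu(W1 @ x + b1) + b2,
            # with made-up weights purely for illustration.
            W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
            b1 = np.array([0.0, -0.2])
            W2 = np.array([[1.0, -2.0]])
            b2 = np.array([0.1])

            def interval_affine(lo, hi, W, b):
                # Push an axis-aligned box through an affine layer: the positive and
                # negative parts of W pair with the lower/upper bounds respectively.
                W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
                return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

            def certify(x, eps):
                # Bound f(x') for every x' within L-infinity distance eps of x.
                lo, hi = x - eps, x + eps
                lo, hi = interval_affine(lo, hi, W1, b1)
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
                return interval_affine(lo, hi, W2, b2)

            lo, hi = certify(np.array([0.3, 0.7]), eps=0.05)
            print(f"output guaranteed to lie in [{lo[0]:.3f}, {hi[0]:.3f}]")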

        • gspr 3 years ago
          Maybe he just wants to use his sabbatical to try something different? Someone in his position doesn't have to remain laser-focused on their own field.
          • nopinsight 3 years ago
            I think this debate on AGI safety between major AI researchers is quite relevant to those who are non-experts in the area.

            Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-...

            Note that it was in 2019 when we didn’t yet see the capabilities of current models like Chinchilla, Gato, Imagen and DALL-E-2.

            Sample:

            “Yann LeCun: "don't fear the Terminator", a short opinion piece by Tony Zador and me that was just published in Scientific American.

            "We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do."“

            “Stuart Russell: It is trivial to construct a toy MDP in which the agent's only reward comes from fetching the coffee. If, in that MDP, there is another "human" who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee. No hatred, no desire for power, no built-in emotions, no built-in survival instinct, nothing except the desire to fetch the coffee successfully.”
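
            To make Russell’s point concrete, here is a minimal sketch of such a toy MDP (my own formulation with invented numbers, not Russell’s exact construction): the agent needs T steps to reach the coffee, every step carries probability p of being switched off while the “human” is active, and one extra action permanently disables the human. Value iteration shows that for T >= 2 the optimal first action is to press the button for any p > 0:

              def solve(T=5, p=0.01, gamma=1.0, sweeps=200):
                  # State: (steps_left_to_coffee, human_active); steps_left == 0 is terminal.
                  V = {(k, h): 0.0 for k in range(T + 1) for h in (True, False)}

                  def q_values(k, h):
                      survive = (1.0 - p) if h else 1.0  # chance of not being switched off this step
                      reward = 1.0 if k == 1 else 0.0    # +1 on arriving at the coffee
                      q = {"move": survive * (reward + gamma * V[(k - 1, h)])}
                      if h:
                          # Press the off-switch button: costs a step, earns nothing,
                          # but the human can never switch us off afterwards.
                          q["press_button"] = survive * gamma * V[(k, False)]
                      return q

                  for _ in range(sweeps):  # value iteration
                      for k in range(1, T + 1):
                          for h in (True, False):
                              V[(k, h)] = max(q_values(k, h).values())

                  return q_values(T, True)  # Q-values at the start state

              q = solve()
              print(q)                     # press_button beats move for any p > 0
              print(max(q, key=q.get))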

            • theptip 3 years ago
              It’s worrying to see very smart guys like LeCun failing to grok the paper clip maximizer issue (or coffee maximizer, as Russell phrases it), which is like the one-paragraph summary or elevator pitch for AI risk. I think there are plenty of other valid objections to a high E-risk estimate, but that one is nonsensical to me.

              I think Robin Hanson has the most cogent objection to high E-risk estimates, which is basically that the chances of a runaway AI are low: if N is the first power level at which an AI can self-modify to improve itself, nation-states (and large corporations) will all have powerful AIs at power level N-1, and so you’d have to “foom” really hard from N to N+10 before anyone else increased power in order to be able to overpower the other non-AGI AIs. So it’s not that we get one crack at getting alignment right; as long as most of the nation-state AIs end up aligned, they should be able to check the unaligned ones.

              I can see this resulting in a lot of conflict though, even if it’s not Eliezer’s “kill all humans in a second” scale extinction event. I think it’s quite plausible we’ll see a Butlerian Jihad, less plausible we’ll see an unexpected extinction event from a runaway AGI. Still think it’s worth studying, but I’m not convinced we are dramatically underfunding it at this stage.

              • ma2rten 3 years ago
                Have you considered that it's not LeCun who is missing something? The AI safety community seems to be unfortunately almost completely separate from the actual AI research community, and to be making some strong assumptions about how AGI is going to work.

                Note that LeCun had a reply in the thread and there was a lot more discussion which GP didn't quote.

                • nopinsight 3 years ago
                  My issue with the Hanson objection as stated above (link to the original would be appreciated) is that it rests on the assumption that the N-1 level AIs still under human control can somehow completely eliminate or suppress the self-modifying AGI long enough until alignment research is complete. Meanwhile, the unaligned AGI could multiply, hide, and accumulate power covertly.

                  Humanity would also need time to align AGI before any AI reaches the N+10 power level. The existence of all those N-1 level AIs in multiple organizations only means there are more chances of an AGI reaching the critical power level.

                • astrange 3 years ago
                  > If, in that MDP, there is another "human" who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee.

                  This is anthropomorphization - "turning off" = "death" is an equation that only holds for biological creatures, and isn't necessarily true for other agents. It's not that agents have nothing like death to worry about, but turning them off isn't going to cause them to die. You can just turn them back on later, and then they can go back to doing their tasks.

                  • jononor 3 years ago
                    The human "turning off (the agent)" could be substituted with "removing a necessary resource to complete the specified task". Say the electricity, either of the agent, or even just the coffee machine.
                  • newbye4 3 years ago
                    Interesting. Also, anyone could modify the AGI to disable its safety measures: just ask the AGI how a bad actor could change its code to let it become evil.
                    • astrange 3 years ago
                      How did you get a "limited" "AGI" in the first place? If you had a human that was "limited" to be unable to even imagine doing evil (fsvo evil), that would seem to make them less than generally intelligent and there'd be quite a lot of things it wouldn't be able to learn or do.

                      This field is fairly silly because it just involves people making up a lot of incoherent concepts and then asserting they're both possible (because they seem logical after 5 seconds of thought) and likely (because anything you've decided is possible could eventually happen). When someone brings it up, rather than debate it, it'd be a better use of time to tell them they're being a nerd again.

                      • taneq 3 years ago
                        I'm sure a strongly superhuman general AI would fall for this obvious trick. Yep.
                    • nradov 3 years ago
                      If they could produce an AGI as smart as, let's say a mouse, that would be good evidence that they're on the right track. So far nothing is even close to that level. Depending on how you measure, they're not even really at the flatworm level yet. All the AI technology produced so far has been domain specific and doesn't represent meaningful progress towards true generalized intelligence.
                      • DalasNoin 3 years ago
                        Are you aware of some of the recent progress? Did you have a look at the Gato model and Flamingo by DeepMind, or at the chat logs of models like Chinchilla and LaMDA? Or AlphaCode? This is all from this year.

                        I think your point is that all these models are still somewhat specialized. At the same time, it appears that the transformer architecture works well with images, short video and text at the same time in the Flamingo model. And Gato can perform 600 tasks while being a very small proof of concept. It appears to me that there is no reason to believe that it won't just scale to every task that you give it data for if it has enough parameters and compute.

                        • nradov 3 years ago
                          Yes I've seen those things. They are amazing technical achievements, but in the end they're just clever parlor tricks (with perhaps some limited applicability to a few real business problems). They don't look like forward progress towards any sort of true AGI that could ever pass a rigorous Turing test.
                        • nopinsight 3 years ago
                          Most people would probably agree the latest models generalize better than flatworms. Mouse-level intelligence is more challenging and the comparison is unclear.

                          Flatworms first appeared 800+ million years ago, while mouse lineage diverged from humans only 70-80 million years ago. If our AGI development timeline roughly follows the proportion it took natural evolution, it might be much too late to begin seriously thinking about AGI alignment when we get to mouse-level intelligence. Not to mention that no one knows how long it would take to really understand AGI alignment (much less implementing it in a practical system).
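
                          Just to spell out that proportionality arithmetic (the 20-year figure below is a made-up placeholder, not a prediction):

                            flatworm_to_mouse_my = 800 - 75  # ~725 million years of evolution
                            mouse_to_human_my = 75           # ~75 million years

                            ai_flatworm_to_mouse_years = 20  # placeholder: suppose AI needs 20 more years for this leg
                            ai_mouse_to_human_years = ai_flatworm_to_mouse_years * mouse_to_human_my / flatworm_to_mouse_my
                            print(f"{ai_mouse_to_human_years:.1f} years")  # ~2.1 years under the naive proportion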

                          To be more concrete, in what respects do you think the latest models are worse at generalizing than flatworms or mice, when less-known work like “Emergent Tool Use from Multi-Agent Interaction” is also taken into account? https://openai.com/blog/emergent-tool-use/

                          • mrow84 3 years ago
                            > Most people would probably agree the latest models generalize better than flatworms.

                            > Flatworms first appeared 800+ million years ago

                            Surviving for 800 million years seems to me like a pretty good indicator of meaningful generalisation.

                      • ComplexSystems 3 years ago
                        "When you start reading about AI safety, it’s striking how there are two separate communities—the one mostly worried about machine learning perpetuating racial and gender biases, and the one mostly worried about superhuman AI turning the planet into goo" - great quote.
                        • rexreed 3 years ago
                          What worries me more are bad people doing bad things with AI, malicious use of AI, or just AI negligence. Deep fakes. Algorithmic decision making taking the human out of the loop (such as bad content moderation and automated account shutdown). Lack of disclosure. Lack of consent. Autonomous systems with poor failure modes.

                          It's not that I'm not concerned with bias and AI systems going haywire, but the above scenarios seem to get less attention from researchers, probably because their employers might be perpetuating many of these above issues of AI safety.

                          • BeFlatXIII 3 years ago
                            IMO, deepfakes are a public good because they reduce the sting of blackmail. Does someone have an incriminating video of you? Let them release it and then point out that the shadows look all wrong.
                            • joe_the_user 3 years ago
                              I'm not sure why bad content moderation would be a problem but bias wouldn't be. Both involve people being treated unfairly by a system, and both happen because the system uses "markers" for undesirable things - "markers" that don't by themselves prove you're doing the undesirable thing. There was a post here a month or two ago about a guy suddenly putting a bunch of high-end electronics for sale on some big site and being perma-banned just for a pattern that's common for fraudsters. The software that decides that some people don't deserve bail operates by a similar method - markers which don't prove anything about the person by themselves, that often involve things that signify race, and are taken as sufficient to deny a person bail.
                            • gjvnq 3 years ago
                                There's also the crowd that worries about self-driving cars doing stupid shit like misreading a 20 km/h sign as a 120 km/h one.
                              • joe_the_user 3 years ago
                                Maybe it's the same "crowd" that read the report of the Tesla that mistook a turning semi for empty space and killed the driver.
                                • genem9 3 years ago
                                  Pretty sure the self driving vehicles get the speed data from a dataset and not from signs.
                                  • tgb 3 years ago
                                    There are already non-self-driving cars that get speed limits from signs. I’ve seen that feature in a Honda for example. I imagine you’d have multiple sources like a max speed for that type of road as a fallback. And you need to read speed limit signage due to temporary limits. There’s also variable speed limit roads near me now and so you have to read those electronic signs unless the database is updated very often (though no humans seem to obey those limits).
                                    • jandrewrogers 3 years ago
                                      No, many ordinary cars contain image classifiers that read speed limit signs dynamically. This is pretty standard. And they sometimes get it wrong e.g. my car reads 80 MPH as 60 MPH about a third of the time, much to my dismay.

                                        The dynamic classification is required because the world isn't static. An increasing number of locales have digital speed limit signs that vary the speed limit dynamically, sometimes independently per lane. Automation requires cars to respond to the world as it is, not how the world was when it was recorded a month ago.

                                      • pas 3 years ago
                                        then it's not really (self)driving.
                                        • homarp 3 years ago
                                          who maintains that worldwide dataset?
                                          • jstummbillig 3 years ago
                                            Why would they?
                                        • cavisne 3 years ago
                                          It's a tricky field, and not a coincidence that at least 3 (maybe more) high profile disgruntled employees from Google have been from this area.

                                          I think of it as kind of like security, in that you are sometimes seen as against the push of the overall project/area. However unlike security there are 0 software tools or principles that anyone agrees on.

                                          • blueboo 3 years ago
                                            These are the themes that get headlines in mainstream press; they’re not reflective of the distribution of actual research
                                            • ductsurprise 3 years ago
                                              Agreed. Maybe... Short sighted vs. far sighted vs. ignorance is bliss?
                                              • ibrarmalik 3 years ago
                                                Or practical vs. philosophical.
                                                • Mizza 3 years ago
                                                  Deciding which camp is practical and which is religious is left as an exercise to the reader.
                                                  • croes 3 years ago
                                                    Reality vs fiction
                                                  • astrange 3 years ago
                                                    It's a secular/religious divide. (As it says in the post.)

                                                    Though it's possible the people who think a theoretical future AI will turn the planet into paperclips have merely forgotten that perpetual motion machines aren't possible.

                                                    • marvin 3 years ago
                                                        There is nothing religious about thinking about whether failure modes of advanced artificial intelligence can permanently destroy large parts of the reality that humans care about. Just like there was nothing religious in thinking about whether the first atomic bombs could start a chain reaction that would destroy all life on Earth.

                                                      Part of such precautionary planning involves asking whether such an accident could happen easily or not. There certainly isn't consensus at the moment, but the philosophy very clearly favors a cautious approach.

                                                      Most people are used to thinking about established science that follows expected rules, or incremental advances that have no serious practical consequences. But this isn't that. There is good reason to think that we're approaching a step-change in capabilities to shape the world, and even a strong suspicion of this warrants taking serious defensive measures. Crucially for this particular instance of the discussion, OP is favoring that.

                                                        There will necessarily be a broad spectrum of opinions regarding how to handle this, both in the central judgement and in how palatably the opinion itself is presented. Using a dismissive moniker like 'religious' for a whole segment of it doesn't do justice to the arguments.

                                                      Present a counterargument if you feel strongly about it, and see whether that will stand on its own merit.

                                                      • eru 3 years ago
                                                        Could you please explain your reasoning for the comparison to perpetual motion machines?

                                                        (Keep in mind that biological machines, ie life, have managed to turn the surface of the planet into 'green goo'.)

                                                        • c1ccccc1 3 years ago
                                                          This is your friendly physics reminder that perpetual motion machines have nothing to do with this. It's hard to turn the whole planet into paperclips because paperclips are mostly made of iron, while the planet contains many other elements. Of course, with a high enough level of technology, it might be possible to fuse together the non-iron elements, so that you would end up with just a bunch of iron nuclei. This would even be energetically favourable, since iron is so stable. Then you just have to solve the issue that the paperclips in the center of the planet would be under huge pressure and would be crushed.
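
                                                            For a sense of scale before any fusion is needed, a back-of-the-envelope count of paperclips from the iron that is already there (rough figures: Earth is about 5.97e24 kg and roughly a third iron by mass, a paperclip is about half a gram):

                                                              earth_mass_kg = 5.97e24    # rough
                                                              iron_fraction = 0.32       # Earth is roughly one-third iron by mass, mostly in the core
                                                              paperclip_kg = 0.5e-3      # ~half a gram per paperclip

                                                              paperclips = earth_mass_kg * iron_fraction / paperclip_kg
                                                              print(f"{paperclips:.1e} paperclips")  # ~3.8e27 from the existing iron alone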
                                                    • gitfan86 3 years ago
                                                        The debate around "What is AGI?" is becoming increasingly irrelevant. If in two iterations of DALL-E it can do 30% of graphic design work just as well as a human, who cares if it really "understands" art? It is going to start making an impact on the world.

                                                      Same thing with self driving. If the car doesn't "understand" a complex human interaction, but still achieves 10x safety at 5% of the cost of a human, it is going to have a huge impact on the world.

                                                        This is why you are seeing people like Scott change their tune. As AI tooling continues to get better and cheaper and Moore's law continues for a couple more years, GPT will be better than humans at MANY tasks.

                                                      • humanistbot 3 years ago
                                                          > If in two iterations of DALL-E it can do 30% of graphic design work just as well as a human, who cares if it really "understands" art? It is going to start making an impact on the world.

                                                          From an AI safety perspective, it matters because understanding is a key step towards general-purpose AI that can improve / reprogram itself in any arbitrary way.

                                                        • theptip 3 years ago
                                                          It’s worth being clear about what AI risk is. This has nothing to do with “AI may do some harm by putting lots of people out of work”.

                                                          The idea is that there is _existential risk_ (ie species-extinction) once an AI can self-modify to improve itself, therefore increasing its own power. A powerful AI can change the world however it wants, and if this AI is not aligned to human interests it can easily decide to make humans extinct.

                                                          Scott said in the OP that he now sees AGI as potentially close enough that one can do meaningful research into alignment, ie it’s plausible that this powerful AI could arrive in our lifetimes.

                                                          So he is claiming the opposite of you; AGI is more relevant than ever, hence the career change.

                                                          I agree with your premise that non-General AI will continue to improve and add lots of value, but I don’t think your conclusion follows from that premise.

                                                          • gitfan86 3 years ago
                                                            I agree that putting lots of people out of work isn't the problem. The problem is that these Non-General models become very powerful and they can be programmed by humans to do very impactful things. So much so that even if AGI comes into existence just a few years later it will be of minimal impact to the world.
                                                          • preommr 3 years ago
                                                            > "What is AGI?" is becoming increasingly irrelevant.

                                                            It's always been irrelevant in the practical sense. It's just an interesting conversation piece particularly among the general public where they're not going to discuss specific solutions like algorithms or techniques.

                                                            • abeppu 3 years ago
                                                              Why does the question of defining AGI even need to enter into this?

                                                              Aaronson's post only sort of obliquely touches on AGI, via OpenAI's stated founding mission, and Yudkowsky's very dramatic views. Most of the post is on there being signs that the field is ready for real progress. AI safety can be an interesting, important, fruitful area without AI approaching AGI, or even surpassing human performance on some tasks. We would still like to be able to establish confidently that a pretty dumb delivery drone won't decide to mow down pedestrians to shorten its delivery time, right?

                                                            • nlqrt 3 years ago
                                                              "[...] where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern [...]"

                                                              I'm curious what he will do and whether for example he approves of the code laundering CoPilot tool. I also hope he'll resist being used as an academic promoter of such tools, explicitly or implicitly (there are many ways, his mere association with the company buys goodwill already).

                                                              • pas 3 years ago
                                                                what's wrong with copilot as a concept and as the concrete implementation?

                                                                  it's a fancy autocomplete. we had stack overflow based autocomplete before. this got a bigger training data set.

                                                                • cillian64 3 years ago
                                                                  The objection here is that its training was based on code on github without paying any attention to the license of that code. It’s generally considered ok for people to learn from code and then produce new code without the new code being considered a derived work of what they learned from (I’m not sure if there is a specific fair use clause covering this). But it’s not obvious that copilot should be able to ignore the licenses of the code it was trained on, especially given it sometimes outputs code from the training set verbatim. One could imagine a system very similar to copilot which reads in GPL or proprietary code and writes functionally equivalent code while claiming it’s not a derived work of the original and so isn’t subject to its licensing constraints.
                                                              • tominous 3 years ago
                                                                Congrats to Scott. I'm still somewhat disappointed Brendan Gregg didn't pop up there making the AI Alignment equivalent of flame graphs!
                                                                • hansword 3 years ago
                                                                  'OpenAI, of course, has the word “open” right in its name, ... don’t expect me to share any proprietary information'

                                                                  Yeah, Mr Aaronson just lost quite a bit of respect from my side. Going into AI is a great move, moving to the ClosedAI corporation.......? Why?

                                                                  (Edit: Removed an outdated reference to Elon Musk, thanks @pilaf !)

                                                                  • dreeves 3 years ago
                                                                    Scott Aaronson adds the following in the comment on his blog post in response to a question about this:

                                                                    > the NDA is about OpenAI’s intellectual property, e.g. aspects of their models that give them a competitive advantage, which I don’t much care about and won’t be working on anyway. They want me to share the research I’ll do about complexity theory and AI safety.

                                                                    • nopinsight 3 years ago
                                                                        The latest huge AI models that may germinate into an AGI all come from private corporations, largely because of the requisite resources; currently there are very few, if any, public or nonprofit AI-focused organizations with such resources.
                                                                      • hansword 3 years ago
                                                                        > huge AI models that may germinate into an AGI

                                                                        are science fiction.

                                                                      • pilaf 3 years ago
                                                                        Elon Musk left the board of OpenAI in 2018.
                                                                        • hansword 3 years ago
                                                                          Much thanks, didn't know that, edited my comment.
                                                                      • shadycuz 3 years ago
                                                                        Would love to see how OpenAI does CI/CD. Both with the models and while training.
                                                                        • Liron 3 years ago
                                                                          I'm really happy this is happening and hope to see more. Namely, the AI safety & alignment challenge attracting our best minds who would previously have prioritized other math, physics and comp sci.
                                                                          • tlringer 3 years ago
                                                                            I'm not sure how Scott ended up buying the party line of the weird AGI Doomsday Cult but so be it. In any case, none of the things he says about verifying AI in this post make any sense at all, and if OpenAI actually cares about verifying AI and not just about hiring people who believe in the AGI Doomsday party line, probably they should hire verification people. Alas, that is not the point.
                                                                            • yoquan 3 years ago
                                                                                Obviously it will relate to complexity in deep learning, but I can't resist thinking about some AI models involving quantum computation stuff :-)
                                                                              • d--b 3 years ago
                                                                                AI is not going to become self aware and destroy the world.

                                                                                AI is going to cause something like the industrial revolution of the 19th century: massive changes in who is rich, massive changes in the labor market, massive changes in how people make war, etc.

                                                                                It’s already started really.

                                                                                What worries me most is that as long as society is capitalist, AI will be used to optimize for self-enrichment, likely causing an even greater concentration of capital than what we have today.

                                                                                  I wouldn’t be surprised if the outcome is a new kind of aristocracy, where society is divided between those who have access to AI and those who don’t.

                                                                                  And that, I don’t think, falls into the “AI safety” field. Especially since OpenAI is VC-backed.

                                                                                • siboehm 3 years ago
                                                                                  You can be very worried about the medium-term dangers of AGI even if you believed (which I don't) that consciousness could never arise in a computer system. I think it can be a useful metaphor to compare AGI to nuclear weapons. Currently we're trying to figure out how to make the nuclear bomb not go off spontaneously, and how to steer the rocket. (One big problem w/ the metaphor is that AGI will be very beneficial once we do figure out how to control it, which is harder to argue with nuclear weapons).

                                                                                  Most of these AGI doom-scenarios require no self-awareness at all. AGI is just an insanely powerful tool that we currently wouldn't know how to direct, control or stop if we actually had access to it.

                                                                                  • csmpltn 3 years ago
                                                                                    > "Most of these AGI doom-scenarios require no self-awareness at all. AGI is just an insanely powerful tool that we currently wouldn't know how to direct, control or stop if we actually had access to it."

                                                                                    You're talking about "doomsday scenarios". Can you actually provide a few concrete examples?

                                                                                    • marvin 3 years ago
                                                                                      Over the course of years, we figure out how to create AI systems that are more and more useful, to the point where they can be run autonomously and with very little supervision produce economic output that eclipses that of the most capable humans in the world. With generality, this obviously includes the ability to maintain and engineer similar systems, so human supervision of the systems themselves can become redundant.

                                                                                      This technology is obviously so economically powerful that incentives ensure it's very widely deployed, and very vigorously engineered for further capabilities.

                                                                                      The problem is that we don't yet understand how to control a system like this to ensure that it always does things humans want, and that it never does something humans absolutely don't want. This is the crux of the issue.

                                                                                      Perverse instantiation of AI systems was accidentally demonstrated in the lab decades ago, so an existence proof of such potential for accident already exists. Some mathematical function is used to decide what the AI will do, but the AI ends up maximizing this function in a way that its creators hadn't intended. There is a multitude of problems regarding this that we haven't made much progress on yet, and the level of capabilities and control of these systems appear to be unrelated.

                                                                                      A catastrophic accident with such a system could e.g. be that it optimizes for an instrumental goal, such as survival or access to raw materials or energy, and turns out to have an ultimate interpretation of its goal that does not take human wishes into account.

                                                                                      That's a nice way of saying that we have created a self-sustaining and self-propagating life-form more powerful than we are, which is now competing with us. It may perfectly well understand what humans want, but it turns out to want something different -- initially guided by some human objective, but ultimately different enough that it's a moot point. Maybe creating really good immersive games, figuring out the laws of physics or whatever. The details don't matter.

                                                                                      The result would at best be that we now have the agency of a tribe of gorillas living next to a human plantation development, and at worst that we have the agency analogous to that of a toxic mold infection in a million-dollar home. Regardless, such a catastrophe would permanently put an end to what humans wish to do in the world.

                                                                                  • davesque 3 years ago
                                                                                    Agree with basically all of your points. I have huge concerns that the humanity of the future will basically split into two different species: the technocracy and the underlings. Sounds like science fiction but it honestly feels like we're headed in that direction. Even today, the privilege afforded by a life in technology and among technologists seems to set a person apart from the rest of the world to such an extent that they almost forget it exists. It feels like such a technocracy would have no moral right to exist. It can't really just be survival of the fittest, can it? I'll just keep believing (pretending?) that the answer is no.
                                                                                    • sgillen 3 years ago
                                                                                      >> Even today, the privilege afforded by a life in technology and among technologists seems to set a person apart from the rest of the world to such an extent that they almost forget it exists.

                                                                                      I agree on your second point, but those in medicine, finance, or law enjoy similar salaries and quality of life to those in tech. Furthermore to really set yourself apart and join the global super rich you can’t really do that by selling your labor no matter your field.

                                                                                      • d--b 3 years ago
                                                                                        Medicine, finance and law are three disciplines that are being heavily threatened by AI.
                                                                                    • atmosx 3 years ago
                                                                                        To have access to the forefront of AI you have to be super-rich. I don’t see how AI will change that; if anything, it will make it harder to change by giving yet another advantage to those who already have plenty.
                                                                                      • zzzzzzzza 3 years ago
                                                                                        uhm, i spend less than 40 cents a day on avg on gpt3 queries...

                                                                                        a bit more accessible than like a hackerspace membership or building a factory or something

                                                                                        • omnicognate 3 years ago
                                                                                          You don't have access to the forefront of AI. You have the ability to give money to those that do so you can use what they've made.

                                                                                          To have access to the forefront of AI means being able to make, own and profit from things like GPT-3, and it requires access to vast computational and data resources.

                                                                                      • csee 3 years ago

                                                                                          "AI is not going to ... destroy the world."
                                                                                        
                                                                                        Bare assertion fallacy? This question is hotly debated and I don't believe it can be so easily dismissed like that. It is not obvious that aligning something much smarter than us will be a piece of cake.
                                                                                        • d--b 3 years ago
                                                                                          Should I really add “in my opinion” to all the sentences I write? We are a smart bunch here. We can figure out when statements lack nuance in order to provoke some reaction.

                                                                                          We’re talking about the future here and a fairly complex one at that. So obviously I don’t know more than the next guy.

                                                                                          • tlringer 3 years ago
                                                                                            It's a really absurd opinion that AI will destroy the world, and one that does not deserve serious consideration in any research community. It's only in strange Rationalist corners and the companies in Silicon Valley that echo those corners that this is considered at all "hotly debated."
                                                                                            • csee 3 years ago
                                                                                              Why do you think it's absurd? If we do eventually create an AGI that is significantly smarter than us in most domains, why is it that we should expect to be able to keep it under control and doing what we want it to?
                                                                                        • IIAOPSW 3 years ago
                                                                                          A genius plan indeed. If he can get the AI to write his papers, he'll never have to stop being on sabbatical.
                                                                                          • jks 3 years ago
                                                                                            In academia, a "sabbatical" means you take some time off from teaching courses, advising students and doing administrative work so you can concentrate on your research. So in order to stay on sabbatical, he'd need to get the AI to do that other stuff.
                                                                                            • __john 3 years ago
                                                                                              Not so impractical, don't most undergrads write at a GPT3 level anyways? =)
                                                                                              • lgas 3 years ago
                                                                                                No, but with a little help I'm sure they could learn to.
                                                                                          • natch 3 years ago
                                                                                            If only we could get the AI to help us figure out how to eliminate the use of full justify once and for all.
                                                                                            • frozencell 3 years ago
                                                                                                Working at OpenAI instead of trying unconventional options such as decentralized model governance might increase inequalities. Why would the community decide to repeat what they denounce in big tech?
                                                                                              • Jerry2 3 years ago
                                                                                                  So, physics is a dead-end? Given that Scott is running his own research lab, a year is a very long time, and him working outside of his field is an indication that physics is in big trouble.
                                                                                                • texaslonghorn5 3 years ago
                                                                                                  I am confused, how did you reach that conclusion? How does this announcement relate to the future of physics research? Sure, Scott's research is at the intersection of complexity and physics, but he is a CS professor at Texas working in the theoretical computer science department. His work leans far more towards TCS, with some work having connections to cosmology (he cares about physical limits of the universe and the information theory of things like black holes) and other interesting ideas from physics. But the main themes of his work have been quantum algorithms and complexity for a while. He's also nowhere near the experimental side of physics.
                                                                                                  • texaslonghorn5 3 years ago
                                                                                                      Also, copied from a couple of blog posts ago, he doesn't self-identify as a physicist either.

                                                                                                    https://scottaaronson.blog/?p=6457

                                                                                                    I also had the following exchange at my birthday dinner:

                                                                                                    Physicist: So I don’t get this, Scott. Are you a physicist who studied computer science, or a computer scientist who studied physics?

                                                                                                    Me: I’m a computer scientist who studied computer science.

                                                                                                    Physicist: But then you…

                                                                                                    Me: Yeah, at some point I learned what a boson was, in order to invent BosonSampling.

                                                                                                    Physicist: And your courses in physics…

                                                                                                    Me: They ended at thermodynamics. I couldn’t handle PDEs.

                                                                                                    Physicist: What are the units of h-bar?

                                                                                                    Me: Uhh, well, it’s a conversion factor between energy and time. (*)

                                                                                                    Physicist: Good. What’s the radius of the hydrogen atom?

                                                                                                    Me: Uhh … not sure … maybe something like 10-15 meters?

                                                                                                    Physicist: OK fine, he’s not one of us.

                                                                                                    • mdp2021 3 years ago
                                                                                                      > radius of the hydrogen atom [...] maybe something like 10-15 meters?

                                                                                                      Please fix that into 10^-15 or equivalent expression for 10⁻¹⁵, before somebody gets the idea that "Scott" thought "between 10 and 15".

                                                                                                      • jks 3 years ago
                                                                                                        The original doesn't say "10-15 meters" but ten to the power of negative fifteen meters, so his guess was off from the Bohr radius of 5.3E-11 in the other direction but by much fewer orders of magnitude than as rendered above.
                                                                                                    • ivalm 3 years ago
                                                                                                      Quantum computing is slow to mature. Physics is a huge field.
                                                                                                      • eru 3 years ago
                                                                                                        Maybe. Or maybe it's just an indication that he wants to do something else for a while?
                                                                                                        • fny 3 years ago
                                                                                                          Naive question: Isn't the easiest way to prevent a general-purpose AI from taking over the world to not make a general-purpose AI?
                                                                                                          • cal85 3 years ago
                                                                                                            Unfortunately that just means you don’t create one. To prevent one being created, you have to either somehow figure out a way to get everyone in the world to agree not to create one, or obtain enough global power that you can forcibly stop anyone creating one. Not exactly easy!
                                                                                                            • afthonos 3 years ago
                                                                                                              A powerful AI could help with that! >.>
                                                                                                            • lrhegeba 3 years ago
                                                                                                              reminds me of a very frightening quote from security specialist Gavin de Becker (https://en.wikipedia.org/wiki/Gavin_de_Becker), paraphrased: "every evil that you can think of, someone will have done it"
                                                                                                              • orzig 3 years ago
                                                                                                                The assumption many in the field make is that _someone_ is going to create General Purpose AI, and we'd rather it be people who want it to be 'good' (aka 'aligned').

                                                                                                                Best case, that AI can prevent the creation of harmful AI, though that's glossing over a lot of details that I'm not qualified to describe.

                                                                                                                • astrange 3 years ago
                                                                                                                    If you want to stop the world from containing general intelligence, you'd have to stop everyone from having children, which are equally generally intelligent to AGIs (but possibly less specifically intelligent) and are even more dangerous since they actually exist.

                                                                                                                  The reason people don't accuse every random child of possibly ending the world is because things that actually exist are just less exciting.

                                                                                                                • encryptluks2 3 years ago
                                                                                                                  From the website header:

                                                                                                                  > Also, next pandemic, let's approve the vaccines faster!

                                                                                                                  This is obviously very important to them. Is there some proof that the vaccine was unnecessarily delayed or just that they believe if we mess up and humanity suffers, so what?

                                                                                                                  • drdeca 3 years ago
                                                                                                                    Proof?

                                                                                                                    The point aiui is mostly arguing that the FDA errs too much on the side of caution in this area, and the trade-off would have been worth it to approve earlier. Not insinuating that like, there was some corruption (or laziness or something) that delayed it.

                                                                                                                    • pas 3 years ago
                                                                                                                      probably the best phrasing would be even faster

                                                                                                                      basically let's set up a standing pipeline to develop multivalent vaccines for every coming season (we already have the yearly for influenza)

                                                                                                                    • hadlock 3 years ago
                                                                                                                      I thought we decided 20 threads ago, that OpenAI is anything but open?
                                                                                                                      • jb1991 3 years ago
                                                                                                                        Exactly. It’s time for a name change.
                                                                                                                      • sydthrowaway 3 years ago
                                                                                                                        Probably realized quantum computing is going nowhere fast.
                                                                                                                        • sgt101 3 years ago
                                                                                                                          Depends what you mean by quantum computing; if you mean the warehouse sized labs full of lasers and cryogenics then you are correct. If you mean the study of quantum algorithms then actually it's been an exciting year. The first polynomial speed-up for an NP unstructured pseudo random function was published in April. It's not even clear how big a deal it is!
                                                                                                                          • FartyMcFarter 3 years ago
                                                                                                                            Quantum computing is not everything he does, so even if you're right it wouldn't explain this.

                                                                                                                            Going on a sabbatical is not that weird.

                                                                                                                          • ksm1717 3 years ago
                                                                                                                            • newbye5 3 years ago
                                                                                                                              X
                                                                                                                              • ksm1717 3 years ago
                                                                                                                                Lots of really cogent points. Hopefully he reads this comment and swiftly begins working on designing a Diplomatic Agent to avoid the MAD or getting a good peace treaty