Evolving OpenAI's Structure
602 points by rohitpaulk 2 months ago | 676 comments
- atlasunshrugged 2 months ago
I think this is one of the most interesting lines, as it directly implies that leadership thinks this won't be a winner-take-all market:
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
- phreeza 2 months ago
That is a very obvious thing for them to say regardless of what they truly believe, because (a) it legitimizes removing the cap, making fundraising easier, and (b) it averts antitrust suspicion.
- pryce 2 months ago
> "Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission."
One remarkable advantage of being a "Public Benefit Corporation" is that it:
> prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]
In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.
- throwaway48476 2 months ago
(b) is true, but not so much (a). If investors thought it would be winner-take-all and that ClosedAI would win, they'd invest in ClosedAI only and starve competitors of funding.
- sebastiennight 2 months ago
Actually, I'm thinking that in a winner-takes-all universe, the right strategy would be to spread your bets across as many likely winners as possible.
That's literally the premise of venture capital. This is a scenario where we're assuming ALL our bets will go to zero, except one which will be worth trillions. In that case you should bet on everything.
It's only in the opposite scenario (where every bet pays off with varying ROI) that it makes sense to go all-in on whichever bet seems most promising.
- istjohn 2 months ago
I'm not surprised that they found a reason to uncap their profits, but I wouldn't try to infer too much from the justification they cooked up.
- pdfernhout 2 months ago
As a deeper issue on "justification", here is something I wrote related to this in 2001 on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:
https://pdfernhout.net/on-funding-digital-public-works.html#...
"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"
"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."
That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.
- SOLAR_FIELDS 2 months ago
Enlightening read, especially your last paragraph, which touches on the nuance of the situation. It's quite easy to end up on one side or the other when it comes to charities/nonprofits, because the mission itself can be very motivating and galvanizing.
- jasode 2 months ago
> "Self-dealing [...] convert some government supported PhD thesis work [...] the public (including me) never gets full access to the results of the publicly-funded work [...]"
Your 2001 essay isn't a good parallel to OpenAI's situation.
OpenAI wasn't "publicly funded", i.e. with public donations or government grants.
The non-profit was started and privately funded by a small group of billionaires and other wealthy people (Elon Musk donated $44 million; Reid Hoffman and others collectively pledged $1 billion of their own money).
They miscalculated in thinking their charitable donations would be enough to recruit PhD machine-learning researchers and pay the high GPU costs of creating the AI alternative to Google DeepMind, etc. Their 2015 assumptions about future AI development costs were massive underestimates, and now they look bad for trying to convert to a for-profit enterprise. Instead of a big conversion to for-profit, they will now settle for keeping a subsidiary that's for-profit, somewhat like other entities structured as a non-profit that owns for-profit subsidiaries, such as Mozilla, the Girl Scouts, Novo Nordisk, etc.
Obviously with hindsight... if they had to do it all over, they would just create the reverse structure of creating the OpenAI for-profit company as the "parent entity" that pledges to donate money to charities. E.g. Amazon Inc is the for-profit that donates to Housing Equity Fund for affordable housing.
- huijzer 2 months ago
The value investor Mohnish Pabrai once talked about his observation that most companies with a moat pretend they don't have one, and companies without one pretend they do.
- monkeyelite 2 months ago
A version of this is emphasized in the Thielverse as well. Companies in heavy competition try to intersect all their qualities to appear unique. Dominant companies talk about their portfolio of side projects to appear to be in heavy competition (space flight, ed tech, etc.).
- Takennickname 2 months ago
I don't know how I feel about a tech bro being given credit for an idea like this.
This is originally from The Art of War.
- rcxdude 2 months ago
It's a specific observation that matches some very general advice from The Art of War; it's not like it's a direct quote from it.
- fakedang 2 months ago
Mohnish isn't a tech bro though, in my book. After selling his company, the guy retreated from the tech scene to get into Buffett-style value investing. And if you read his book, it's about glorifying the small businessmen running motels and garages, who invest bit by bit into the stock market.
- anshumankmr 2 months ago
It's quite true. The closest thing to a moat OpenAI has is the memory feature.
- hliyan 2 months ago
There need to be regulations against deceptive, indirect, purposefully ambiguous, or vague public communication by corporations (or any entity). I'm not an expert in corporate law or finance, but the statement should be:
"Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"
followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."
- lanthissa 2 months ago
AGI can't really be a winner-take-all market. The 'reward' for general intelligence as a monopoly is infinite, and it accelerates productivity.
Not only is there infinite incentive to compete, but there are decreasing costs to doing so. The only world in which AGI is winner-take-all is one in which it is so tightly controlled that the public can't query it.
- JumpCrisscross 2 months ago
> AGI can't really be a winner-take-all market. The 'reward' for general intelligence as a monopoly is infinite, and it accelerates productivity
The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
- tbrownaw 2 months ago
> The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
This has some baked assumptions about cycle time, improvement per cycle, and whether there's a ceiling.
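To make those baked assumptions concrete, here's a toy sketch (all numbers are invented for illustration, not measurements of any real system): whether a self-improvement loop takes off or stalls depends entirely on whether the per-cycle gain holds up or decays.
    # Toy model of recursive self-improvement; gain and decay are
    # illustrative assumptions, nothing more.
    def capability_after(cycles, gain, decay):
        c = 1.0
        for _ in range(cycles):
            c *= 1 + gain   # apply this cycle's improvement
            gain *= decay   # next cycle's gain shrinks by `decay`
        return c

    print(capability_after(50, 0.10, 1.0))  # constant gains: ~117x, takeoff
    print(capability_after(50, 0.10, 0.8))  # decaying gains: ~1.6x, hard ceiling
Same loop, wildly different outcomes; the disagreement is really over which regime we're in.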
- danenania 2 months ago
I think the foundation model companies are actually poorly situated to reach the leading edge of AGI first, simply because their efforts are fragmented across multiple companies with different specializations—Claude is best at coding, OpenAI at reasoning, Gemini at large context, and so on.
The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.
I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.
- whoisthemachine 2 months ago
I find these assumptions curious. How so? What is the AGI going to do that captures markets? Even if it can take over all desk work, then what? Who is going to consume that? And furthermore (and perhaps more importantly), with it putting everyone out of work, who is going to pay for it?
- Davidzheng 2 months ago
I'm pretty sure today's models are probably capable of self-improving. It's just that they are not yet as good at self-improving as the combination of programmers improving them with the help of the models.
- Night_Thastus 2 months ago
Nothing OpenAI is doing, or ever has done, has been close to AGI.
- abtinf 2 months ago
Agreed and, if anything, you are too generous. They aren't just not "close"; they aren't even working in the same category as anything that might be construed as independently intelligent.
- pinkmuffinere 2 months ago
I agree with you, but that's kind of beside the point. OpenAI's thesis is that they will work towards AGI and eventually succeed. In the context of that premise, OpenAI still doesn't believe AGI would be winner-takes-all. I think that's an interesting discussion whether you believe the premise or not.
- AndrewKemendo 2 months ago
I agree with you.
I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
- voidspark 2 months ago
Their multimodal models are a rudimentary form of AGI.
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
- dr_dshiv 2 months ago
https://www.noemamag.com/artificial-general-intelligence-is-...
Here is a mainstream opinion about why AGI is already here, written by one of the authors of the most widely read AI textbook (Artificial Intelligence: A Modern Approach): https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
- kvetching 2 months ago
Please, keep telling people that. For my sake. Keep the world asleep as I take advantage of this technology, which is literally general artificial intelligence, that I can apply towards increasing my power.
- aeternum 2 months ago
Remember, however, that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
It does have some weasel words around "value-aligned" and "safety-conscious" which they can always argue over, but this could get interesting, because they've basically agreed not to compete. A fairly insane thing to do, in retrospect.
- foobiekr 2 months ago
They will just define away all of those terms to make that not apply.
- ljouhet 2 months ago
Who defines "value-aligned, safety-conscious project"?
"Instead of our current complex non-competing structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal competing structure where ..." is all it takes
- TeMPOraL 2 months ago
AGI could be a winner-take-all market... for the AGI itself; specifically, for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...
- pdxandi 2 months ago
How would an AGI prevent others from competing? Sincere question. That seems like something that ASI would be capable of. If another company released an AGI, how would the original stifle it? I get that the original can self-improve to try to stay ahead, but that doesn't necessarily mean it self-improves the best or most efficiently, right?
- jsnider3 2 months ago
Homo sapiens wiped out every other intelligent hominid, and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.
- dragonwriter 2 months ago
Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities to outsiders, but uses it itself and conquers every other field of endeavor.
That's always been pretty overtly the winner-take-all AGI scenario.
- amelius 2 months ago
You can say the same thing about big companies hiring all the smart people, and somehow we think that's OK.
- babyshake 2 months ago
AGI can be winner-take-all. But winner-take-all AGI is not aligned with the larger interests of humanity.
- NoOn3 2 months ago
Modern corporations didn't seem to care about humanity...
- TheOtherHobbes 2 months ago
AGI might not be fungible. From the trends today, it's more likely there will be multiple AGIs with different relative strengths and weaknesses, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.
- sz4kerto 2 months ago
Or they consider themselves to have a low(er) chance of winning. They could think either, but they obviously can't say the latter.
- bhouston 2 months ago
OpenAI is winning in a similar way to how Apple is winning in smartphones.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence, and because they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
- KerrAvon 2 months ago
Apple is not the right analogy. OpenAI has first-mover advantage and a widely recognized brand name, ChatGPT, and that's kind of it. Anyone (with very deep pockets) can buy Nvidia chips and go to town if they have a better or equivalent idea. There was a brief time (long before I was born) when "Univac" was synonymous with "computer."
- retrorangular 2 months ago
> I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Well, Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (~4-year-old or newer) Mac or gaming PC can run them.
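(For a concrete sense of what "run them" looks like, here is a minimal sketch, assuming a local runner such as Ollama, whose HTTP API listens on localhost:11434 by default; the deepseek-r1:14b tag is an assumption for illustration and must already be pulled:
    import json, urllib.request

    # Query a locally hosted DeepSeek R1 distill through Ollama's HTTP API.
    # Assumes `ollama pull deepseek-r1:14b` has already been run locally.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "deepseek-r1:14b",
            "prompt": "Explain what a capped-profit structure is.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
No cloud account or per-token billing involved, which is rather the point.)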
If the only thing keeping these companies' valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think OpenAI's o3 and o1, Gemini 2.5 Pro, and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks it can handle, but there are times when one of the other models can achieve a task it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
- tim333 2 months ago
Big difference - Apple makes billions from smartphones, getting most of the industry's profits, which makes it hard to compete with.
OpenAI loses billions and is at the mercy of getting new investors to fund the losses. It has many plausible competitors.
- screamingninja 2 months ago
> ban all non-US LLM providers
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
- jjani 2 months ago
IE once captured all of the value in browserland, with even higher mindshare and market dominance than OpenAI has ever had. Comparing with Apple (= physical products) is apples to oranges (heh).
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work, barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.
MS, Google, Apple, and Meta have gigantic levers to pull to get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is, indeed better, for anything besides image generation), people are going to start pressing it. And good luck to OpenAI fighting that.
- pphysch 2 months ago
Switching between Apple and Google/Android ecosystems is expensive and painful.
Switching from ChatGPT to the many competitors is neither expensive nor painful.
- wincy 2 months ago
Companies that are contractors with the US government already aren't allowed to use DeepSeek, even if it's an airgapped R1 model running on our own hardware. Legal told us we can't run any distills of it or anything. I think this is very dumb.
- ignoramous 2 months ago
> I think this is one of the most interesting lines, as it directly implies that leadership thinks this won't be a winner-take-all market:
Yeah; and:
> We want to open source very capable models.
Seems like the nary-a-daylight difference between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok 3 really put things in perspective for them!
- kvetching 2 months ago
Not to mention @Gork, aka Grok 3.5...
- jrvarela56 2 months ago
Not saying this is OpenAI's case, but every monopolist claims they are not a monopolist...
- raincole 2 months ago
Even if they think it will be a winner-take-all market, they won't say it out loud. It would be begging for antitrust lawsuits.
- whiplash451 2 months ago
I read this line as: we were completely off the chart from a corp-structure standpoint.
We need to get closer to the norm and give shares of a for-profit to employees in order to create retention.
- sensanaty 2 months ago
Lmaoing at their casual use of AGI, as if they or any of their competitors are anywhere near it.
- DirkH 1 month ago
Proposition:
Please promise to come back to this comment in 2030 and playfully mock me for ever being worried, and I will buy you a coffee. If AGI is invented before 2030, please buy me one and let me mock you playfully.
- addandsubtract 2 months ago
If you change the definition of AGI, we're already there!
- infairverona 2 months ago
Damn, didn't know my Casio FX-300 was AGI, good to know!
- dingnuts 2 months ago
To me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete in, which means Artificial Specialized Intelligence, NOT general intelligence!
And that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."
This is another nail in the coffin.
- lenerdenator 2 months ago
That, or they don't care if they get to AGI first, and just want their payday now.
Which sounds pretty in-line with the SV culture of putting profit above all else.
- foobiekr 2 months ago
If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI - the only kind anyone really cares about being the kind with exponential self-improvement - isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common around AI discussions.
- the_duke 2 months ago
AGI is a matter of when, not if.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
- runako 2 months ago
Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made and a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
- bdangubic 2 months ago
> AGI is a matter of when, not if
Probably true, but this statement would also be true if "when" is 2308, which would defeat the purpose of the statement. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'd have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near. I think saying "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, if you'll agree to do the same if wrong.
- manquer 2 months ago
Progress is not just a function of technical possibility (even if it exists); it is also economics.
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk are already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resources.
There are many cases of such economically limited innovations [1]; nuclear fusion is the classic always-20-years-away example. Another close one is anything space-related: we cannot replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.
From a purely economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economics. It does happen (as with SpaceX), but it is not guaranteed, as with fusion.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
- blibble 2 months ago
> AGI is a matter of when, not if.
LLMs destroying any capacity (and incentive) for the population to think pushes this further and further out each day.
- foobiekr 2 months ago
I think this is right but also missing a useful perspective.
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening" - surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech), and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime" - but that has been the outcome of every example of this that I can think of.
- Kabukks 2 months ago
Could you elaborate on the progress that has been made? To me, it seems only small/incremental changes are made between models, with all of them still hallucinating. I can see no clear steps towards AGI.
- JumpCrisscross 2 months ago
> AGI is a matter of when, not if
We have zero evidence for this. (Folks said the same shit in the 80s.)
- m_krebs 2 months ago
"X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite, uncapped growth in the capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.
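A quick toy illustration of that last point (nothing to do with any particular model): an exponential and a logistic curve that start out identical diverge completely once the ceiling kicks in.
    import math

    # exp(t) vs. a logistic curve capped at 200; both start at 1.
    # Early values are nearly identical; the cap only shows up later.
    for t in range(0, 10, 2):
        exponential = math.exp(t)
        sigmoid = 200 / (1 + 199 * math.exp(-t))
        print(t, round(exponential, 1), round(sigmoid, 1))
    # t=0: 1.0 vs 1.0 | t=2: 7.4 vs 7.2 | t=8: 2981.0 vs 187.5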
- schrodinger 2 months ago
I don't think that's a safe foregone conclusion. What we've seen so far is very, very powerful pattern matchers with emergent properties that, frankly, we don't fully understand. It very well may be the road to AGI, or it may stop at the kind of things we can do in our subconscious, but not what it takes to produce truly novel solutions to never-before-seen problems. I don't think we know.
- otabdeveloper4 2 months ago
> AGI is a matter of when, not if.
I want to believe, man.
- tim333 2 months ago
I don't read it that way. It reads more like AGIs will be like very smart people: rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein, although they were both generally intelligent.
- burnte 2 months ago
The level of arrogance needed to think they'd be the only company to come up with AI/AGI is staggering.
- coryfklein 2 months ago
“Appear weak when you are strong, and strong when you are weak.”
― Sun Tzu
- rlt 2 months ago
“Fine, we’ll keep the non-profit, but we’re going to extract the fuck out of the for-profit”
Quite the arc from the original organization.
- zeroq 2 months ago
"It's not you, it's me."
- pants2 2 months ago
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
- CorpOverreach 2 months ago
I'd really love to talk to someone who both really believes this to be true and has hands-on experience with building and using generative AI.
The intersection of the two seems to be quite hard to find.
At the state we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimuli, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
- thurn 2 months ago
Which of these statements do you disagree with?
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity, we should make serious contingency plans. (Rough arithmetic after this comment.)
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
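To put rough numbers on the asteroid analogy (figures are purely illustrative assumptions):
    # Expected-loss arithmetic behind the "1-in-1000" point.
    p_catastrophe = 1 / 1000          # assumed probability
    world_population = 8_000_000_000  # rough current figure
    print(p_catastrophe * world_population)  # 8,000,000 expected deaths
Even a small probability multiplies out to a very large expected loss, which is the force of the argument.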
- tsimionescu 2 months ago
You could use the exact same argument to argue the opposite. Simply change the first premise to "Superintelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.
So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.
- geysersam 2 months ago
Isn't the question you're posing basically Pascal's wager?
I think the chance they're going to create a "superintelligence" is extremely small. That said, I'm sure we're going to have a lot of useful intelligence, but nothing general or self-conscious or powerful enough to be threatening for many decades, or even ever.
> Predicting the future is famously difficult
That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"
We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.
- quietbritishjim 2 months ago
> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.
- pembrook 2 months ago
You bring up the example of an extinction-level asteroid hurtling toward Earth. Gee, I wonder if this superintelligence you're deathly afraid of could help with that?
This extreme risk aversion and focus on negative outcomes is just the result of certain personality types; no amount of rationalizing will change your mind, as you fundamentally fear the unknown.
How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?
If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.
- OtherShrezzing 2 months ago
> Superintelligence poses an existential threat to humanity
I disagree at least on this one. I don't see any scenario where superintelligence comes into existence but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine intelligence would settle there. It's a vanishingly low-chance event. That considerably changes the later 1-in-n part of your comment.
- tempfile 2 months ago
> There are almost no statements about the future which I'd assign this level of confidence to.
You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.
- digbybk 2 months ago
> I'd really love to talk to someone who both really believes this to be true and has hands-on experience with building and using generative AI.
Any of the signatories here match your criteria? https://safe.ai/work/statement-on-ai-risk#signatories
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
- coryfklein 2 months ago
Sounds a little too much like, "It's not AGI today, ergo it will never become AGI."
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
- Retric 2 months ago
> Does the current AI give productivity benefits to writing code? Probably.
> If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
That's a bit of a stretch; generative AI is least capable of helping with novel code, such as what's needed to make AGI.
If anything I’d expect companies working on generative AI to be at a significant disadvantage when trying to make AGI because they’re trying to leverage what they are already working on. That’s fine for incremental improvement, but companies rarely ride one wave of technology to the forefront of the next. Analog > digital photography, ICE > EV, coal mining > oil, etc.
- utbabya 2 months ago
> At the state we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimuli, no output.
That was true before we allowed them to access external systems, disregarding certain rules whose origin I forget.
The more general problem is a mix between the tragedy of the commons - we gain better understanding every passing day, yet still don't understand exactly why LLMs perform so well emergently rather than by being engineered that way - and future progress.
Do you think you can find a way around access boundaries to masquerade your Create/Update requests as Reads in the log system monitoring it, when you have superintelligence?
- otabdeveloper4 2 months ago
> are just really useful input/output devices that respond to a stimulus
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
- ev7 2 months ago
alignmentforum.com
- voidspark 2 months ago
> Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
Dependent on UBI, existing in a basic pod, eating rations of slop.
- TobTobXX 2 months ago
Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and that it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).
There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.
- cik 2 months ago
> So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
This was the fear when the cotton gin was invented. It was the fear when cars were created. The same complaint happened with the introduction of electronic, automated telephone switchboards.
Jobs change. Societies change. Unemployment worldwide is near the lowest it has ever been. Work will change. Society will eventually move to a currency based on energy production, or something equally futuristic.
This doesn't mean that getting there will be without pain.
- esafak 2 months ago
Lots of people in academia and industry are calling for more oversight. It's the US government that's behind. Europe's AI Act bans applications with unacceptable risk: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
- lenerdenator 2 months ago
The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
- aylmao 2 months ago
I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.
It might not be a direct US-government project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US government has the muscle and legal authority to assert control over it.
A good deal for everyone involved, really. These companies get to make bank and build technology that furthers their market dominance; the US government gets potentially "Manhattan Project"-level pivotal technology. It's elites helping elites.
- azinman2 2 months ago
Unless China handicaps its progress as well (which it won't; see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
- esafak 2 months ago
What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
- nicce 2 months ago
This thought process is no different than it was with nuclear weapons.
The primary difference is observability - with satellites, we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
- immibis 2 months ago
Compare the other American "innovations" that Europe mostly rejects.
- philipwhiuk 2 months ago
> Lots of people in academia and industry
Mostly OpenAI and DeepMind, and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.
For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.
- 0xDEAFBEAD 2 months ago
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
- kgwgk 2 months ago
If we think of "making money" as having more revenue than expenses, a lemonade stand makes significantly more money than OpenAI.
- jimbokun 2 months ago
The US government is behind because the Biden admin pushed strongly for controls and regulations and told Andreessen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.
- saubeidl 2 months ago
The EU does, and it has passed the AI Act to rein in the worst consequences of this nuclear weapon. It has not been received well around here.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
- rchaud 2 months ago
Absolutely. It's frankly quite shocking to see how quickly otherwise atheist or agnostic people have begun worshipping at the altar of the "inevitable AGI apocalypse", much in the same way extremist Christians await the rapture.
- cloverich 2 months ago
To be fair, many of us arrived at the idea that AI was humanity's inevitable endpoint ahead of, and independently of, whether we would ever see it in our lifetimes. It's easy enough to see how people could independently converge on such an idea. I don't see that view as related to atheism in any way, other than atheism creating space for the belief, in the same way it creates space for many others.
I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningfully. However, I currently don't see how our present levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.
- Xenoamorphous 2 months ago
I guess they think the "digital god" has a chance of becoming real (and soon, even), unlike the non-digital one?
- lenerdenator 2 months ago
Roko's Basilisk is basically Pascal's wager with GPUs.
- modeless 2 months ago
I don't know what sources you're reading. There's so much eye-batting I'm surprised people can see at all.
- atleastoptimal 2 months ago
Because many people fundamentally don't believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.
- xandrius 2 months ago
How is an LLM more powerful than any nuclear weapon? Seriously curious.
- kragen 2 months ago
Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
- xandrius 2 months ago
It seems the terms LLM and AGI are being muddled here. One is statistical next-text generation and the other is what you're trying to say.
LLMs are great at making you think they are the other, but they aren't.
- TrapLord_Rhodo 2 months ago
The problem is, none of that needs to happen. If the AI can start coming up with novel math or physics, it's game over. Whether the AI is "sentient" or not, being able to break that barrier would send us into an advancement spiral.
- jimbokun 2 months ago
Most of us are batting our eyelashes as rapidly as possible but have no idea how to stop it.
- otabdeveloper4 2 months ago
Well, because it's obviously bullshit and everyone knows it. Just play the game and get rich like everyone else.
- esafak 2 months ago
Are you sure about that? AI-powered robotic soldiers are around the corner. What could go wrong...
- otabdeveloper4 2 months ago
> AI agent robot soliders that are as inept as ChatGPT
Sounds like payola for the enterprising and experienced mercenary.
- sealeck 2 months ago
Robot soldiers != AGI
- devinprater 2 months ago
Ooo, I know: Cybermen! Yay.
- soheil 2 months ago
It'd be odd if people batted eyes before the first nuclear weapon came to be, but aren't batting them now.
- gooob 2 months ago
Have they started hiring people to make maglev trains and permaculture gardens all around urban areas yet?
- squigz 2 months ago
We're all too busy rolling our eyes.
- A_Duck 2 months ago
This is the moment where we fumble the opportunity to avoid a repeat of Web 1.0's ad-driven race to the bottom.
Look forward to re-living that shift from life-changing community resource to scammy and user-hostile
- zharknado 2 months ago
I feel this. I had a very productive convo with an LLM today and realized that a huge part of the value of it was that it addressed my questions in a focused way, without trying to sell me anything or generate SEO rankings or register ad impressions. It just helped me. And that was incredibly refreshing in a digital world that generally feels adversarial.
Then the thought came, when will they start showing ads here.
I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!
- sumedh 2 months ago
> I like to think that if we learn to pay for it directly
The $20 monthly payment is not enough though and companies like Google can keep giving away their AI for free till OpenAI is bankrupt.
- danenania 2 months ago
The "good" thing is this is all way too expensive to be ad-supported. Maybe there will be some ad-supported products using very small/cheap models, but the leading-edge stuff is always going to be at the leading edge of compute usage too, and someone has to pay the bill. Even with investors subsidizing a lot of the costs, it's still very expensive to use the best models heavily for real work.
- aylmao 2 months ago
Subscription services can sell ads too. See Hulu, or Netflix. Spotify might not play "radio ads" if you pay, but it will still advertise artists on your home screen.
These models being expensive leads me to think they will look at all methods of monetization possible when seeking profitability. Rather than ads being off the table, it could feasibly make ads be on the table sooner.
- toxik 2 months ago
It is guaranteed that the models will become salespeople in disguise with time. This is just how the world works. Hopefully competition can stave it off, but I doubt it.
It's also why totalitarian regimes love it, they can simply train it to regurgitate a modified version of reality.
- advisedwang 2 months ago
There's no such thing as too expensive to be ad-supported. There might be too expensive to be ONLY ad-supported, but as a revenue stream, ads can be layered on top of other sources. For example, see the ads shown on a $100/mo cable package!
- wrsh07 2 months ago
For all of the skepticism I've seen of Sam Altman, listening to interviews with him (e.g. by Ben Thompson), he says he really does not want to create an ad tier for OpenAI.
Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)
- pradn 2 months ago
I'm hoping there will always be a good LLM option, for the following reasons:
1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research/development, combined with techniques like distillation will keep the best open LLMs pretty good, if not the best.
2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself (see the back-of-envelope sketch after this comment).
These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
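As a rough illustration of point 2 (the parameter count and bit-widths here are assumptions for illustration; real usage adds activations and KV cache on top):
    # Approximate weight memory: parameters * bits_per_weight / 8 bytes.
    def weight_gb(params_billions, bits):
        return params_billions * bits / 8  # billions of bytes, i.e. ~GB

    for bits in (16, 8, 4):
        print(f"8B-parameter model at {bits}-bit: ~{weight_gb(8, bits):.0f} GB")
    # 16-bit: ~16 GB; 8-bit: ~8 GB; 4-bit: ~4 GB (ordinary-laptop territory)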
- energy123 2 months ago
Now the hard part: design a policy that stops this from happening while balancing the need to maintain competition, innovation, etc.
That step, along with getting politicians to pass it, is the only thing that will stop that outcome.
- otabdeveloper4 2 months ago
In the future, AI will be commoditized. You'll be able to buy an inference server for your home in a form factor like a wi-fi router today. They will be cheap, and there will be a huge selection of different models, both open-source and proprietary. You'll be able to download a model with the click of a button. (Or just torrent them.)
- anticensor 2 months ago
That can be done with today's desktops already, if you beef up the specs slightly.
- otabdeveloper4 2 months ago
Cheap Chinese single-board computers made specifically for inference are the missing puzzle piece. (No, GPUs, and especially Nvidia, are not that.)
Also, the current crop of AI agents is utter crap. But that's a skill issue of the people coding them; expect actual advances here soon.
- mlnj 2 months ago
The smaller models are becoming even more capable now. Add to that a suite of tools and integrations, and you can do most of what you do online within the infra at home.
- NoahZuniga 2 months ago
Ads intermixed into LLM responses are so clearly evil that OpenAI will never do it so long as the nonprofit has a controlling stake (which it currently still has), because the nonprofit would never allow it.
- Twirrim 2 months ago
The insidious part is that it doesn't have to be as blatant as adverts; you can achieve a lot through just slight biases in text output.
Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
- gooob 2 months ago
> LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle
Replace LLMs with TV, or smartphones, or maybe even McDonald's, and you've got the same idea. Through TV, corporations got to control a lot of the social world and people's behavior.
- NoahZuniga 1 month ago
OK, but this is still clearly evil, so the controlling non-profit would not allow it either.
- aprilthird2021 2 months ago
Ads/SEO but with AI responses was always the obvious endgame, given how much human attention it controls and the fact that people aren't really willing to pay what it costs (when decent free, open-weights alternatives exist).
- yread 2 months ago
At least we can self-host this time around.
- drewbeck 2 months ago
I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of OpenAI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
- stego-tech 2 months ago
That world never existed. Yes, pockets did - IT professionals with broadband lines and spare kit hosting IRC servers and phpBB forums from their homes free of charge, a few VC-funded companies offering idealistic visions of the net until funding ran dry (RIP CoHost) - but once the web became privatized, it was all in service of companies' bottom lines. Web 2.0 onwards was all about centralization, surveillance, advertising, and manipulation of the populace at scale - and that intent was never really a secret to those who bothered to pay attention. While the world was reeling from Cambridge Analytica, us pre-1.0 farts who cut our teeth on Telnet and Mosaic were just kind of flabbergasted that y'all were surprised by such overtly obvious intentions.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
- dgreensp 2 months agoI don't think the parent was saying that everyone's intentions were pure until recently, but rather that naked greed wasn't cool before, and now it is.
The Internet has changed a lot over the decades, and it did use to be different, with the differences depending on how many years you go back.
- jon_richards 2 months agoAs recently as the Silicon Valley tv show, the joke was that every startup pitch claimed they were “making the world a better place”.
- energy123 2 months agoWhat we are observing are the effects of profit maximization once the core value to the user is already fulfilled. It's a type of optimization that is useful at the beginning but eventually turns pathological.
When we already have efficient food production that drove down costs and increased profits (a good thing), what else is there for companies to optimize for, if not loading food with sugar, putting it in cheap plastic, and bamboozling us with ads?
This same dynamic plays out in every industry. Markets are a great thing when the low-hanging fruit hasn't been picked, because the low-hanging fruit is usually "cut the waste, develop basic tech, be efficient". But eventually the low-hanging fruit becomes "game humans' primitive reward circuits".
- JumpCrisscross 2 months ago> That world never existed
It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.
- davesque 2 months agoI have to agree. That's one of the dangers of today's world; the risk of believing that we never had a better one. Yes, the altruism of yesteryear was partially born of convenience, but it still existed. And I remember people actually believing it was important and acting as such. Today's cynicism and selfishness seem a lot more arbitrary to me. There's absolutely no reason things have to be this way. Collectively, we have access to more wealth and power now than we ever did previously. By all accounts, things ought to be great. It seems we just need the current generation of leaders to re-learn a few lessons from history.
- stego-tech 2 months agoSure it did. For every Woz, there was a Jobs; for every Linus, a Bill (Gates). For every starry-eyed engineer or developer who just wants to help people, there are business people who will pervert it into an empire and jettison them as soon as practical. For every TED, there's a Davos; for every DEFCON, there's a glut of vendor-specific conferences.
We should champion the good people who did the good things and managed to resist the temptations of the poisoned apple, but we shouldn't put an entire city on a pedestal out of nostalgia alone. Nobody, and no entity, is that deserving.
- selfselfgo 2 months ago[dead]
- torginus 2 months agoI think it did, and it still does today - every single time an engineer sees a problem and starts an open-source project to solve it, not out of any profit motive and without any monetization strategy in mind, but just because they can, and they think the world would be better off.
- pdfernhout 2 months agoCoincidentally, and as another pre-1.0 fart myself :-) -- one who remembers when Ted Nelson's "Computer Lib / Dream Machines" was still just a wild hope -- I was thinking of something similar the other day (not USPS-specific for hosting, but I like that).
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public: https://newpublic.org/ "Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war. https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War "According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads: https://www.urban.org/policy-centers/cross-center-initiative... "In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's roughly US$600 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, say US$60 per person per year (about US$20 billion per year)?
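To sanity-check that arithmetic (a quick sketch; the ~335 million US population figure is my assumption, not from the Urban Institute source):

    # Back-of-envelope check of the road-spending comparison above.
    road_spending = 154e9 + 52e9   # state/local + federal road funding, per the Urban Institute figures
    population = 335e6             # assumed US population

    print(round(road_spending / population))       # ~615 -> roughly US$600 per person per year
    print(round(road_spending / population / 10))  # ~61  -> a tenth is about US$60 per person per year
    print(road_spending / 10 / 1e9)                # 20.6 -> about US$20 billion per year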
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big-government suggestion by me from 2010 -- also mostly ignored (until now, 15 years later, when the USA is in a political crisis over supply-chain dependency and still isn't doing much about it): "Build 21000 flexible fabrication facilities across the USA" https://web.archive.org/web/20100708160738/http://pcast.idea... "Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
- ballooney 2 months agoHopelessly over-idealistic premise. Sama and pg have never been anything other than opportunistic muck. This will be my last ever comment on HN.
- byearthithatius 2 months agoI feel this so hard, I think this may be my last time using the site as well. They don't care about advancement, they only care about money.
- stego-tech 2 months agoLike everything, it's projection. Those who loudly scream against something are almost always the ones engaging in it.
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
- drewbeck 2 months agoOh I'm not saying they ever believed more than their self-centered views, but that in a world that leaned more liberal there was value in framing their work in those terms. Now there's no need to pretend.
- kmacdough 2 months agoAnd to those who say "at least now they're honest," I say: why is that better? Unconditionally being "good" would be better than disguising selfishness as good, but that's not really a thing. Having to maintain the pretense of doing good puts significant boundaries on what you can get away with, and increases the consequences when people uncover some shit.
Condoning "honest liars" enables a whole other level of open and unrestricted criminality.
- gallerdude 2 months ago[flagged]
- HaZeust 2 months agoinb4 deleted
- jimbokun 2 months agoThey deeply believe in the Ayn Rand mindset that the system that brings them the most individual wealth is also the best system for humanity as a whole.
- mandmandam 2 months agoWhen people that wealthy are that delusional... With few checks or balances from politics, media, or even social media... I don't think humanity as a whole is in for a great time.
- jimbokun 2 months agoThey are roughly as delusional as everyone else. There is an innate human bias to convince yourself that what benefits you is also best for everyone else.
It’s just that their biases have much more capacity to cause damage as their wealth gives them so much power.
- ArthurStacks 2 months agoIt got you the 20th century
- torginus 2 months agoThe problem with that mindset is that money is a proxy for the Marxist idea of inherent value. The distinction does not matter when you are just an average dude: doubling your money doubles the amount of material wealth you have access to.
But once you control a significant enough chunk of money, it becomes clear the pie doesn't get any bigger the more shiny coins you have; you only have more relative purchasing power, automatically making everyone else poorer.
- monkeyelite 2 months agoWhich Ayn Rand book says that?
- sneak 2 months agoIs it reasonable to assign the descriptor “authoritarian” to anyone who simply does not subscribe to the common orthodoxy of one faction in the American culture war? That is what seems to me to be happening here, though I would love to be wrong.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
- tastyface 2 months agoDonating millions to a fascist president (in Altman’s case) seems pretty authoritarian to me. And he seems happy enough hanging out with Thiel and other Yarvin groupies.
- sidibe 2 months agoYup, if Elon hadn't gotten so jealous and spiteful toward him, I'm sure he'd be one of Elon's leading sycophants.
- sneak 2 months agoI think this is more a symptom of the level of commonplace corruption in the American regulatory environment than any indication of the political views of the person directing such donations.
Tim Apple did it too, and we don't assume he's an authoritarian now, do we? I imagine they would probably have done similarly regardless of who won the election.
It sure seems like an endorsement, but I think it’s simply modern corporate strategy in the American regulatory environment, same as when foreign dignitaries stay in overpriced suites in the Trump hotel in DC.
Those who don’t kiss the ring are clearly and obviously punished. It’s not in the interest of your shareholders (or your launch partners) to be the tall poppy.
- bee_rider 2 months agoI’m not sure exactly what they meant by “liberal” in this case, but since they put it in contrast with authoritarianism, I assume they meant it in the conventional definition of the word (where it is the polar opposite of authoritarianism). Instead of the American politics-as-sports definition that makes it a synonym for “team blue.”
- drewbeck 2 months agoCorrect. "Liberal" as in the general ideas that, e.g., expanding the franchise is important, press freedoms are good, and government can do good things for people and for capital. Wikipedia's intro paragraph does a good job of describing what I was getting at (below). In prior decades Republicans in the US would have been categorized as "liberal" under this definition; in recent years, not so much.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
- sanderjd 2 months agoNo, "authoritarian" is a word with a specific meaning. I'm not sure about applying it to Sam Altman, but Marc Andreessen has expressed views that I consider authoritarian in his victory lap tour since last year's presidential election.
- drewbeck 2 months agoNo I don't think it is. I DO think those two people want to be in charge (along with other billionaires) and they want the rest of us to follow along, which is in my book an authoritarian POV. pmarca's recent "VC is the only job that can't be done by AI" is a good example of that; the rest of us are to be managed and controlled by VCs and robots.
- blibble 2 months agoAre you aware of Worldcoin?
Altman building a centralised authority over who will be classed as "human" is about as authoritarian as you could get.
- sneak 2 months agoWorldcoin is opt-in, which is the opposite of authoritarian. Nobody who doesn’t like it is required to participate.
- ignoramous 2 months ago> They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
- nickff 2 months agoWhy are you changing the subject? The “War on Terror” was never intended to spread democracy as far as I know; democracy was a means by which to achieve the objective of safety from terrorism.
- danans 2 months ago> The “War on Terror” was never intended to spread democracy as far as I know;
Regardless of intent, it was most definitely sold to the American public on that premise.
- amelius 2 months ago> We did not really know how AGI was going to get built, or used (...)
Altman keeps on talking about AGI as if we're already there.
- wrsh07 2 months agoI don't agree with Tyler on this point (although o3 really is a thing to behold)
But reasonable people could argue that we've achieved AGI (not artificial superintelligence).
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Fwiw, Sam Altman will have already seen the next models they're planning to release
- herculity275 2 months agoThe goalposts seem to have shifted to a point where the "AGI" label will only be retroactively applied to an AI that was able to develop ASI
- ixtli 2 months agoHow many times must we repeat that AGI is whatever will sell the project? It means nothing. Even philosophers don't have a good definition of "intelligence".
- nickpsecurity 2 months agoThey still can't reliably do what humans can do across the full range of our abilities. That's what AGI was originally about. They have become quite capable, though.
- amelius 2 months agoAn AGI would be able to learn things while you are talking to it, for example.
- knowaveragejoe 2 months agoIsn't this already the case? Perhaps you mean in a non-transient fashion, i.e. internalizing the in-context learning into the model itself, as a kind of ongoing training, rather than a "hack" like writing notes or adding to a RAG database.
- seydor 2 months agoWe overestimate the impact of AGI. We need one Einstein, not millions of IQ-100 Joes.
- Tenoke 2 months agoFor better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
- photochemsyn 2 months agoThe recent flap over ChatGPT's fluffery/flattery/glazing of users doesn't bode well for the direction OpenAI is headed in. Someone at the outfit appeared to think that giving users a dopamine hit would increase time-spent-on-app or some other metric - and that smells like contempt for the intelligence of the user base, and like a manipulative approach designed not to improve the quality of the output but to addict the user population to the ChatGPT experience. Your own personal yes-person to praise everything you do, how wonderful. Perfect for writing the scripts government cabinet ministers recite when the grand poobah-in-chief comes calling, I suppose.
What it really says is that if a user wants to control the interaction and get useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
- modeless 2 months agoHuh, so Elon's lawsuit worked? The nonprofit will retain control? Or is this just spin on a plan that will eventually still sideline the nonprofit?
- saretup 2 months agoThe whole article feels like justifying a bunch of legal nonsense to get to the end result of removing the capped structure.
- blagie 2 months agoTo be specific: The nonprofit currently retains control. It will stop once more dilution sets in.
- az226 2 months agoYes and no. It sounds like the capped profit PPU holders will get to have their units convert 1:1 with unlimited profit equity shares, which are obviously way more valuable. So the nonprofit loses insanely in this move and all current investors and employees make a huge amount.
- j_maffe 2 months agoIt sounds more like the attorneys general won.
- everybodyknows 2 months ago> transition to a Public Benefit Corporation
Can some business person give us a summary on PBCs vs. alternative registrations?
- fheisler 2 months agoA PBC is just a for-profit company that has _some_ sort of specific mandate to benefit the "public good" - however it chooses to define that. It's generally meant to provide some balance toward societal good over the more common, strictly shareholder profit-maximizing alternative.
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...
[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...
- cs702 2 months agoThe charter of a public-benefit corporation gives the company's board and management a bit of legal cover for making decisions that don't serve to maximize, or may even limit, financial returns to shareholders, when those decisions are made for the benefit of the public.
- croemer 2 months agoBut the reverse isn't true, right? It doesn't prevent the board from maximizing financial returns even when doing so would harm the "public".
- anticensor 2 months agoDepends on the specific charter.
- blagie 2 months agoReality: It is the same as any other for-profit with a better-sounding name. It confuses a lot of people into thinking it's a non-profit without being one.
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
- imkevinxu 2 months agoYou could've just asked ChatGPT this....
- ramesh31 2 months agoThe recent explosion of PBC-structured corps has me thinking it must just be a tax loophole at this point. I can't possibly imagine there is any meaningful enforcement around any of its restrictions or guidelines.
- asadotzler 2 months agoNot a loophole, as they pay taxes (unlike non-profits), but a fig leaf covering commercial activity with a feel-good label. The real purpose of a PBC is the legal protection it may afford the company from shareholders unhappy with less-than-maximal profit generation. It gives the board some legal space to do some good if it chooses to, but there is no mandate like real non-profits have, which get a tax break for creating a public good or service, a tax break that can be withdrawn if they do not annually prove that public benefit to the IRS.
- ralph84 2 months agoIt’s not a tax thing, it’s a power thing. PBCs transfer power from shareholders to management as long as management can say they were acting for a public benefit.
- bloudermilk 2 months agoPBCs don’t get special tax treatment. As far as I know they’re taxed exactly the same as typical C or S corps.
- TheGrognardling 2 months agoThere are a lot of good points here, from multiple vantage points, on how imminent AGI is and whether it is even metaphysically or logistically viable at all.
I personally think the conversation, including in the post itself, has swung too far toward how AGI may affect the ethical landscape around AI, however. I think we really ought to concern ourselves with addressing and mitigating the effects AI has already brought - both good and bad - rather than engaging in excessive speculation.
That's just me, though.
- marricks 2 months agoThat’s an intentional misdirection, and an all too common one :(
- TheGrognardling 2 months ago[dead]
- jjani 2 months agoSamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something now.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed them - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, which is why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button; Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office first-class, which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and, just like MS, they'll do it in-house or have the providers bid against each other.
The only way the OpenAI David was ever going to beat the GMA Goliaths in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
- caseyy 2 months agoIt's doubtful if there even is a race anymore. The last significant AI advancement in the consumer LLM space was fluent human language synthesis around 2020, with its following assistant/chat interface. Since then, everything has been incremental — larger models, new ways to prompt them, cheaper ways to run them, more human feedback, and gaming evaluations.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
- orionsbelt 2 months agoSaying LLMs have only incrementally improved is like saying my 13-year-old has only incrementally improved over the last 5 years. Sure, it's been a set of continuous improvements, but that has taken it from a toy to genuinely, insanely useful.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users, and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from OpenAI at a later date.
- devjab 2 months ago> any company sitting this out risks being unable to capture users back from Open AI at a later date.
Why? I paid for Claude for a while, but with DeepSeek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity, I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that DeepSeek and Gemini can basically do all you need for free.
- apsurd 2 months agoI can't square this: how can OpenAI capture users and presumably retain them, when the incumbents have been capturing users for multiple decades yet supposedly can't retain theirs?
If every major player has an AI option, I'm just not understanding how, because OpenAI moved first or got big first, the hugely successful companies that did the same thing for multiple decades don't have the same advantage.
- bigstrat2003 2 months agoNo, it's still just a toy. Until they can make the models actually consistently good at things, they aren't going to be useful. Right now they still BS you far too much to trust them, and because you have to double-check their work every time, they are worse than no tool at all.
- mplewis 2 months agoIt's been five years. There is no AI killer app. Agentic coding is still hot garbage. Normal people don't want to use AI tools despite them being shoved into every SaaS under the sun. LLMs are most famous among non-tech users for telling you to put glue on pizza. No one has been able to scale their chatbots into something profitable, and no one can put a date on when they'll be profitable.
Why are you still pretending anything is going to come out of this?
- csours 2 months agoTo extend your illustration, 5 years ago no one could train an LLM with the capabilities of a 13-year-old human; now many companies can both train LLMs and integrate them into products.
> taken it from a toy to genuinely insanely useful.
Really?
- roflmaostc 2 months agoJust to get things right: the big AI/LLM hype started at the end of 2022 with the launch of ChatGPT, DALL-E 2, ....
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
- caseyy 2 months ago> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
- fastball 2 months agoSeems like an arbitrary distinction.
I'd say Chain-of-Thought has massively improved LLM output. Is that "incremental"? Why is that more incremental than the move from GPT-2 to GPT-3? Sure, you can say that this is when LLMs first passed some sort of Turing test, but fundamentally there was no technological difference from GPT-3 to GPT-4. In fact, I would say the quality of GPT-4 unlocked thousands (millions?) more use-cases that were not very viable with the quality delivered by GPT-3. I don't see any reason why more use-cases won't keep being unlocked by further LLM improvements.
- paulddraper 2 months agoYou're saying, with a straight face, that post-2020 LLM AIs have made only incremental progress?
- ReptileMan 2 months agoYes. But they have also improved a lot. Incremental just means that the function goes up without jumps. We haven't seen anything revolutionary in the last 3 years, just evolutionary. But the models do provide 2 or 3 times more value, so their pace of advancement is not slow.
- caseyy 2 months agoYep, compared to beating the Turing test, the progress has been linear with exponentially growing investment. That's diminishing marginal returns.
- grey-area 2 months agoWell I think you’re correct that they know the jig is up, but I would say they know the AI bubble is about to burst so they want to cash out before that happens.
There is little to no money to be made in generative AI, it will never turn into AGI, and people like Altman know this, so now they're looking for a greater fool before it is too late.
- atleastoptimal 2 months agoAI companies are already automating huge swaths of document analysis and customer service. Doctors are straight up using ChatGPT to diagnose patients. I know it's fun to imagine AI is some big scam like crypto, but you'd have to be ignoring a lot of genuine, non-hype economic movement at this point to assume generative AI isn't making any money.
Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
- JumpCrisscross 2 months ago> AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
- directevolve 2 months agoDoctors were using Google to diagnose patients before. The thing is, it's still the doctor delivering the diagnosis, the doctor writing the prescription, and the doctor billing insurance. Unless and until patients or hospitals are willing and legally able to use ChatGPT as a replacement for a doctor (unwise), ChatGPT is not about to eat any doctor's lunch.
- gscott 2 months agoWhen the Wright brothers made their plane, they didn't expect that today there would be thousands of planes flying at any one time.
When the Internet was developed, nobody imagined the World Wide Web.
When cars started to get popular, people still thought there would be those who would stick with horses.
I think you're right on the AI point: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was about equal to the labor of 500 workers, now automated. One AI computer with some video cards is now worth some number of knowledge workers, and it never stops working as long as the electricity keeps flowing.
- horhay 2 months agoLol, they are not using ChatGPT for the full diagnosis. It's used for steps like double-checking knowledge, drug interactions and such. If you're gonna speak on something like this in a vague manner, I'd suggest you google this stuff first. I can tell you for certain that that part in particular is a highly inaccurate statement.
- davidcbc 2 months ago> Doctors are straight up using ChatGPT to diagnose patients
This makes me want to invest in malpractice lawyers, not OpenAI
- krainboltgreene 2 months ago> Doctors are straight up using ChatGPT to diagnose patients.
Oh we know: https://pmc.ncbi.nlm.nih.gov/articles/PMC11006786/
- stale2002 2 months agoPeople aren't saying that AI as a tool is going to go bust. Instead, people are saying that this practice of spending 100s of millions, or even billions of dollars on training massive models is going bust.
AI isn't going to be the world-changing AGI that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
But "take over the world" good? Unlikely.
- paulddraper 2 months agoYes. The answer is yes.
The world is changing and that is scary.
- JumpCrisscross 2 months ago
- Jefro118 2 months agoThey made $4 billion last year, not really "little to no money". I agree it's not clear they can justify their valuation but it's certainly not a bubble.
- mandevil 2 months agoBut didn't they spend $9 billion? If I have a machine that magically turns $9 billion of investor money into $4 billion in revenue, I need to have a pretty awesome story for how in the future I am going to be making enormous piles of money to pay back that investment. If it looks like frontier models are going to be a commodity and it is not going to be winner-take-all... that's a lot harder story to tell.
- SirensOfTitan 2 months agoI guarantee you that I could surpass that revenue if I started a business that would give people back $9 if they gave me $4.
OpenAI's models are already among the most expensive; they don't have a lot of levers to pull.
- nativeit 2 months agoCognitive dissonance is a psychological phenomenon that occurs when a person holds two contradictory beliefs at the same time.
- crorella 2 months agoBut he said he was doing it just for love!! [1]
1: https://www.techpolicy.press/transcript-senate-judiciary-sub...
- pi-err 2 months agoSounds a lot like "Google+ will catch Facebook in no time".
OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.
Everybody else, as you describe, is trying to add some AI crap behind a button on a congested UI.
B2B market will stay open but OpenAI has certainly not peaked yet.
- no_wizard 2 months agoFacebook had immense network effects working for it back then.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
So again, I ask, what makes it sticky?
- miki123211 2 months agoOpenAI (or, more specifically, ChatGPT) is Coca-Cola, not Facebook.
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something that is better by orders of magnitude and that OpenAI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
- cshimmin 2 months agoYep, I mostly interact with these AIs through Cursor. When I want to ask a question, there's a little dropdown box where I can select whatever OpenAI/Anthropic/DeepSeek model I like. It's as easy as that to switch.
- rileyphone 2 months agoFrom talking to people, the average user relies on memories and chat history, which are not easy to migrate. I imagine that's part of the strategy to keep people from hopping between model providers.
- jwarden 2 months agoBrand counts for a lot
- NBJack 2 months agoDe facto victory.
Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.
OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.
- kranke155 2 months agoFacebook couldn't be overtaken because of network effects. What network effects are there for a chatbot?
As for Gemini, I know people using it daily.
- chrisweekly 2 months agoIMHO "ChatGPT the default chatbot" is a meaningful but unstable first-mover advantage. The way things are apparently headed, it seems less like Google+ chasing FB, more like Chrome eating IE + NN's lunch.
- jameslk 2 months agoOpenAI is a relatively unknown company outside of the tech bubble. I told my own mom to install Gemini on her phone because she's heard of Google and is more likely going to trust Google with whatever info she dumps into a chat. I can’t think of a reason she would be compelled to use ChatGPT instead.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
- ricardobeat 2 months agoI know a single person who uses ChatGPT daily, and only because their company has an enterprise subscription.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
- JumpCrisscross 2 months ago> OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
- mtrovo 2 months agoNot sure if Google+ is a good analogy; it reminds me more of the Netscape vs IE fight. Netscape sprinted like it was going to dominate the early internet era, and it worked until Microsoft bundled IE with Windows for free.
LLMs themselves aren't the moat, product integration is. Google, Apple and Microsoft already have the huge user bases and platforms with a big surface area covering a good chunk of our daily life, that's why I think they're better positioned if models become a commodity. OpenAI has the lead now, but distribution is way more powerful in the long run.
- _Algernon_ 2 months agoSocial media has the benefit of network effects which is a pretty formidable moat.
This moat is non-existent when it comes to Open AI.
- alganet 2 months agoThat reminds me of the Dictator movie.
All dissidents went into Little Wadiya.
When the Dictator himself visited it, he started to fake his name by copying the signs and names he saw on the walls. Everyone knew what he was.
Internet social networks are like that.
Now, this moat thing. That's hilarious.
- Analemma_ 2 months agoThat's not at all the same thing: social media has network effects that keep people locked in because their friends are there. Meanwhile, most of the people I know using LLMs cancel and resubscribe to ChatGPT, Claude and Gemini constantly based on whatever has the most buzz that month. There's no lock-in whatsoever in this market, which means they compete on quality, and the general consensus is that Gemini 2.5 is currently winning that war. Of course that won't be true forever, but the point is that OpenAI isn't running away with it anymore.
And nobody's saying OpenAI will go bankrupt; they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence Altman wanting to cash out ASAP.
- kortilla 2 months agoMost of the planet doesn't use chatbots at all.
- jjani 2 months agoThe comparison of Chrome and IE is much more apt, IMO, because the deciding factor for social media, as others mentioned, is network effects, or next-gen dopamine algorithms (TikTok). And that's unique to them.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
- paulddraper 2 months agoFacebook fundamentally had network effects.
- jay_kyburz 2 months agoGoogle+ absolutely would have won, and it was clear to me that somebody at Google decided they didn't want to be in the business of social networking. It was killed deliberately; it didn't just peter out.
- tedivm 2 months agoEven Alibaba is releasing some amazing models these days. Qwen 3 is pretty remarkable, especially considering the variety of hardware the variants of it can run on.
- nfRfqX5n 2 months agoAsk 10 people on the street about ChatGPT or Gemini and see which one they know.
- postalrat 2 months agoNow switch ChatGPT and Gemini on them and see if they notice.
- jjani 2 months agoAsk 10 people on the street in 2009 about IE and Chrome and see which one they knew.
The names don't even matter when everything is baked in.
- TrackerFF 2 months agoOn the other hand... if you had asked 100 people, 5-7 years ago, which of the following they used:
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with Office/Windows, so that's what most people will use.
The same logic goes for AI/language models: which ones are people going to use? The ones that are provided "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people and workers, that is going to be something by Microsoft, Google, or whoever.
- jmathai 2 months agoThat's the wrong question. See how many people know Google vs. ChatGPT. As popular as ChatGPT is, Google's the stronger brand.
- kranke155 2 months agoThat's just brand recognition.
The fact that people know Coca-Cola doesn't mean they drink it.
- jimbokun 2 months agoIt doesn’t?
That name recognition made Coca-Cola into a very successful global corporation.
- All4All 2 months agoBut whether the competition will emerge as Pepsi or as RC Cola is still TBD.
- blueprint 2 months agoOr that they would drink it if a well-designed, delicious alternative with no HFCS or sugar were marketed with funding.
- jampa 2 months agoThe real money is in enterprise use (via APIs), so public perception is not as crucial as it is for a consumer product.
- asadotzler 2 months agoAsk them about Google or OpenAI and...
- parliament32 2 months agoAgreed on Google dominance. Gemini models from this year are significantly more helpful than anything from OAI... and they're being handed out for free to anyone with a Google account.
- lossolo 2 months ago> SamA is in a hurry because he's set to lose the race.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently. Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
- laser 2 months agoMakes for a good underdog story! But OpenAI is dominating and will continue to do so. They have the je ne sais quoi. It's therefore laborious to speak to it, but it manifests in self-reinforcing flywheels of talent, capital, aesthetic, popular consciousness, and so forth. But hey, Bing still makes Microsoft billions a year, so there will be other winners. Underestimating focused breakout leaders in new, rapidly growing markets is as cliche as those breakouts ultimately succeeding, so even if we go into an AI winter, it's clear who comes out on top on the other side. A product has never been adopted this quickly, ever. AGI or not, skepticism that merely points to conventional resource imbalances misses the big picture, and such opinions age poorly. It doesn't have to be obvious only in hindsight if you actually examine the current record of disruptive innovation.
- moralestapia 2 months agoSorry but perhaps you haven't looked at the actual numbers.
Market share of OpenAI is like 90%+.
- JumpCrisscross 2 months ago> Market share of OpenAI is like 90%+
Source? I've seen 10 to 20% [1][2].
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
- moralestapia 2 months agoHmm ...
I probably need to clarify what I'm talking about, so that peeps like @JumpCrisscross can get a better grasp of it.
I do not mean the total market share of the category of businesses that could be labeled as "AI companies", like Microsoft or NVIDIA, on your first link.
I will not talk about your second link because it does not seem to make sense within the context of this conversation (zero mentions or references to market share).
What I mean is:
* The main product that OpenAI sells is AI models (GPT-4o, etc...)
* OpenAI does not make hardware. OpenAI is not in the business of cloud infrastructure. OpenAI is not in the business of selling smartphones. A comparison between OpenAI and any of those companies would only make sense for someone with a very casual understanding of this topic. I can think of someone, perhaps, who only used ChatGPT a couple of times and inferred it was made by Apple because it was there on their phone. This discussion calls for a deeper understanding of what OpenAI is.
* Other examples of companies that sell their own AI models, and thus compete directly with OpenAI in the same market that OpenAI operates by taking a look at their products and services, are Anthropic (w/ Claude), Google (w/ Gemini) and some others ones like Meta and Mistral with open models.
* All those companies/models, together, make up some market that you can put any name you want to it (The AI Model Market TM)
That is the market I'm talking about, and that is the one that I estimated to be 90%+ which was pretty much on point, as usual :).
1: https://gs.statcounter.com/ai-chatbot-market-share
2: https://www.ctol.digital/news/latest-llm-market-share-mar-20...
- jjani 2 months agoIn 2006, IE's market share was higher than OpenAI's is now.
- charlieyu1 2 months agoat least 6-9 months too late
- jstummbillig 2 months ago[flagged]
- eximius 2 months agoI have no problem with OpenAI itself so much as with the individual running it and, more generally, with rich financiers making the world worse in every capitalizable way, and even in some ways they can't capitalize on.
- jjani 2 months agoCome on, at least provide your argument, we're curious! I've brought the bear case, so what's your bull case? :)
- MrDarcy 2 months agoAnd yet the analysis is spot on. Gemini and Claude are both clearly better, today.
- blueprint 2 months agothere are plenty of things that I simply cannot use Gemini for
- sp527 2 months agoLiterally the founder of Y Combinator all but outright called Sam Altman a conniving dickbag. That’s the consensus view advanced by the very man who made him.
- reasonableklout 2 months agoThis seems like misinformation. Are you talking about how Sam left YC after OpenAI took off? What PG said was "we didn't want him to leave, just to choose one or the other"[1].
- fooker 2 months agoGoogle is pretty far behind. They have random one-off demos and they beat benchmarks, yes, but try to use Google's AI stuff for real work and it falls apart really fast.
- adastra22 2 months agoPeople are using Gemini for real work. I prefer Claude myself, but Gemini is as good (or alternatively: as bad) as OpenAI’s models.
The only thing OpenAI has right now is the ChatGPT name, which has become THE word for modern LLMs among lay people.
- reasonableklout 2 months agoThat's not what early-adopter numbers are showing. Even the poll from r/OpenAI a few days ago shows Gemini 2.5 with nearly 3x more votes than o3 (and far beyond Claude): https://www.reddit.com/r/OpenAI/comments/1k67bya/what_is_cur...
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
- Nuzzerino 2 months agoDefine “real work”
- lolinder 2 months agoSo the nonprofit retains control, but we all know that Altman controls the board of the nonprofit, and I'd be shocked if he doesn't end up with significant stock in the new for-profit (from TFA: "we are moving to a normal capital structure where everyone has stock"). Which means that regardless of whether the nonprofit has control on paper, OpenAI is now even better structured for Sam Altman's personal enrichment.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
- elAhmo 2 months agoWe have seen how much power the board has after the firing of Altman: none.
Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining how that ~US$33 billion move benefits the public.
- paulddraper 2 months agoThe board had plenty of power.
There was never a coherent explanation for its firing of the CEO.
But they could have stuck with that decision if they believed in it.
- michaelt 2 months agoThe explanation seemed pretty obvious to me: They set up a nonprofit to deliver an AI that was Open.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
- insane_dreamer 2 months agoThe explanation was pretty clear and coherent: The CEO was no longer adhering to the mission of the non-profit (which the board was upholding).
But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.
So the board had no choice but to capitulate and leave.
- freejazz 2 months agoThe question is not if they could, it is if they would.
- ignoramous 2 months ago> We have seen how much power does the board have after the firing of Altman - none.
Right; so, "Worker Unions" work.
- wmf 2 months agoChatGPT is free. That's the public benefit.
- patmcc 2 months agoGoogle offers a great many things for free. Should they get beneficial tax treatment for it?
- sekai 2 months agoThey don't collect data?
- nativeit 2 months agoDefine “free”.
- insane_dreamer 2 months agoThat's like saying AWS is free. ChatGPT has a limited use free tier just like most other SaaS products out there.
- richardw 2 months agoOr, alternatively, it’s much harder to fight with one hand behind your back. They need to be able to compete for resources and talent given the market structure, or they fail on the mission.
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
- kadushka 2 months agoIt seems like they lost most of their top talent - probably because of Altman.
- richardw 2 months agoOK, cool, so what should he do today? Close up shop? Resign? Or try?
- k__ 2 months agoThe moment we stop treating "bad evil man wants to get rich" as a straw man, we can heal.
- thegreatpeter 2 months agoExtra! Extra! Read all about it! "Bad evil man wants to get rich! We should enrich Google and Microsoft instead!"
- asadotzler 2 months agoWhat would they have to do to have a chance of supporting the mission they were incorporated for, and given preferential tax treatment for a decade, to make happen? Certainly not this.
- MPSFounder 2 months agoNever understood his appeal. Lacks charisma. Not technically savvy relative to many engineers at OpenAI (I doubt he would pass their own intern interviews, let alone the full-time ones). Very unlikeable in person (comes off as fake for some reason, like a political plant). Who is vouching for this guy? When I met him, for some reason, he reminded me of Thiel. He is no Jobs.
- gsibble 2 months agoAltman is a clear sociopath. He's a sales guy and good executive. But he's only out for himself.
- whynotminot 2 months agoIsn’t Sam already very rich? I mean it wouldn’t be the first time a guy wanted to be even richer, but I feel like we need to be more creative when divining his intentions
- sigilis 2 months agoWhy would we need to be more creative? The explanation of him wanting more money is perfectly adequate.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
- whynotminot 2 months ago> Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate. Being rich results in a kind of limitation of scope for ambition.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
- paulddraper 2 months agoOpenAI doesn’t have the lead anymore.
Google/Anthropic are catching up, or have already surpassed them.
- 6510 2 months agoHow? The internet says 400M weekly ChatGPT users, 19M weekly for Anthropic, 47.3M monthly for Gemini, 6.7M daily for Grok, and 430M for Baidu.
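A rough way to put those mixed cadences on one scale (a minimal sketch; the weekly/monthly/daily multipliers below are my own illustrative assumptions, not measured DAU/WAU/MAU ratios, and the Baidu cadence isn't stated in the comment):
```python
# Normalize the mixed-cadence figures quoted above to a rough weekly scale.
# The multipliers are illustrative assumptions only; real DAU/WAU/MAU
# ratios vary widely by product.

figures = {
    "ChatGPT":   ("weekly",   400_000_000),
    "Anthropic": ("weekly",    19_000_000),
    "Gemini":    ("monthly",   47_300_000),
    "Grok":      ("daily",      6_700_000),
    "Baidu":     ("monthly?", 430_000_000),  # cadence not stated in the comment
}

# Crude multipliers to express every figure as an approximate weekly count.
to_weekly = {"weekly": 1.0, "monthly": 0.5, "monthly?": 0.5, "daily": 2.5}

for name, (cadence, count) in figures.items():
    approx = count * to_weekly[cadence]
    print(f"{name:>9}: ~{approx / 1e6:,.0f}M weekly-equivalent ({cadence} figure)")
```
Even with generous multipliers for the others, ChatGPT's quoted figure is an order of magnitude ahead of everything except Baidu.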
- viraptor 2 months agoNah, worldcoin is now going to the US. He just wants to get richer. https://archive.is/JTuGE
- senderista 2 months ago"It's not about the money, it's about winning"
--Gordon Gekko
- Yizahi 2 months agoIt seems a defining feature of nearly every extremely rich person is the belief that they are somehow smarter than the filthy peasants, and so they decide to "educate" them in the sacred knowledge. This may take vastly different forms - genocide, war, trying to buy a better government with bribes, creating a city from scratch, creating a new corporate "culture", publicly proselytizing their "do better" faith, writing books, teaching classes, etc.
St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.
- etruong42 2 months agoThe intro sounds awfully familiar...
> Sam’s Letter to Employees.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
- giik 2 months agoWhen I got to that part (line 1) I stopped reading.
- datadrivenangel 2 months ago"Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler."
OpenAI admitting that they're not going to win?
- ru552 2 months agoI wonder if this meets the requirements set by the recent round of outside investors?
- anxman 2 months agoNot according to Microsoft: https://www.wsj.com/tech/ai/sam-altman-satya-nadella-rift-30...
- babelfish 2 months agoI don't see any comments about the PBC in that article (archive link: https://archive.is/cPLWd)
- martinohansen 2 months agoImagine having a mission of “ensure[ing] that artificial general intelligence (AGI) benefits all of humanity” while also believing that it can only be trusted in the hands of the few
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
- jb_rad 2 months agoHe's very clearly stating that trusting AI to a few hands was an old, naive idea that they have evolved from, which establishes their need to keep evolving as the technology matures.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
- TZubiri 2 months agoTo the benefit of OpenAI. I think LLMs would still exist, but we wouldn't have access to them.
Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.
- TZubiri 2 months agoI'm not gonna get caught in the details, I'm just going to assume this is legalese cognitive dissonance to avoid saying "we want this to stop being an NFP because we want the profits."
- smashedtoatoms 2 months agoIs there a sport where the actual sport is moving goalposts?
- xpe 2 months agoThere is the game of Nomic where a turn involves proposing a rule change.
- granzymes 2 months agoFrom least to most speculative:
* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital
* The for-profit is changing from a capped-profit LLC to a PBC like Anthropic and xAI
* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware
* The non-profit won’t be the largest shareholder in the PBC (likely Microsoft) but will retain control (super voting shares?)
* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
- foobiekr 2 months agoAnother possibility is that OpenAI thinks _none_ of the labs will achieve AGI in a meaningful timeframe, so they are trying to cash out with whatever you want to call the current models. There will only be one or two of those before investors start looking at the incredible losses.
- r00fus 2 months agoI'm fairly sure that OpenAI has never really believed in AGI - it's like with Uber and "self driving cabs" - it's a lure for the investors.
It's just that this bait has a shelf life and it looks like it's going to expire soon.
- az226 2 months agoThe least speculative: PPUs will be converted from capped-profit to unlimited-profit equity shares, to the benefit of PPU holders and at the expense of OpenAI the nonprofit. This is why they are doing it.
- no_wizard 2 months ago> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity
They already fight transparency efforts in this space meant to prevent harmful bias. Why should I believe anything else they have to say if they refuse to take even small steps toward transparency and open auditing?
- simonw 2 months agoMatt Levine on OpenAI's weird capped return structure in November 2023:
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
- bjacobso 2 months agoI think the main issue is they accidentally created an incredible consumer brand with ChatGPT. They should sell that asset to World.
- cma 2 months agoIf you can move from capped profit to unlimited profit, it was never actually capped profit, just a fig leaf
- bloppe 2 months agoDoes anybody outside OAI still think of them as anything other than a "normal" for-profit company?
- bandrami 2 months agoAGI was achieved the first time a model replied "it worked when I ran it"
- jeanlou 2 months agoClosedAI
- alganet 2 months agoCan you commit to a "swords into ploughshares" goal?
We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.
Which other AI players do we need to convince?
- dankwizard 2 months agoAI actually wrote this article for them, which is the craziest thing.
- LetsGetTechnicl 2 months agoCan't wait to hear Ed Zitron's take on this
- SilverSlash 2 months agoabc.xyz: "Google is not a conventional company. We do not intend to become one"
sam altman: "OpenAI is not a normal company and never will be."
Hmmm
- GPerson 2 months agoEverything about AI really is fraudulent.
- mrandish 2 months agoI agree that this is simply Altman extending his ability to control, shape and benefit from OpenAI. Yes, this is clearly (further) subverting the original intent under which the org was created - and that's unfortunate. But in terms of impact on the world, or even just AI safety, I'm not sure the governance of OpenAI matters all that much anymore. The "governance" wasn't that great after the first couple years and OpenAI hasn't been "open" since long before the board spat.
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
- jgalt212 2 months agoWhat are the implications of this for Softbank's $40B?
- jethronethro 2 months agoEd Zitron's going to have a field day with this ...
- morepedantic 2 months agoI wonder which non-profit will be looted next.
- m3kw9 2 months agoThis sounds like a good middle ground between full capitalism and a non-profit. This way they can still raise money and also keep the same mission, albeit a weakened one. You can't have everything.
- ToucanLoucan 2 months ago> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting that people opt in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
^1 who subscribe to our services
- Lalabadie 2 months ago> Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs?
Because they're concerned about AI use the same way Google is concerned about your private data.
- ronreiter 2 months agoHere’s a breakdown of the *key structural changes*, and an analysis of *potential risks or concerns*:
---
## *What Has Changed*
### 1. *OpenAI’s For-Profit Arm is Becoming a Public Benefit Corporation (PBC)*
* *Before:* OpenAI LP (limited partnership with a “capped-profit” model).
* *After:* OpenAI LP becomes a *Public Benefit Corporation* (PBC).
*Implications:*
* A PBC is still a *for-profit* entity, but legally required to balance shareholder value with a declared public mission.
* OpenAI’s mission (“AGI that benefits all humanity”) becomes part of the legal charter of the new PBC.
---
### 2. *The Nonprofit Remains in Control and Gains Equity*
* The *original OpenAI nonprofit* will *continue to control* the new PBC and will now also *hold equity* in it.
* The nonprofit will use this equity stake to fund “mission-aligned” initiatives in areas like health, education, etc.
*Implications:*
* This strengthens the nonprofit’s influence and potentially its resources.
* But the balance between nonprofit oversight and for-profit ambition becomes more delicate as stakes rise.
---
### 3. *Elimination of the “Capped-Profit” Structure*
* The old “capped-return” model (investors could only make ~100x on investments) is being dropped.
* Instead, OpenAI will now have a *“normal capital structure”* where everyone holds unrestricted equity.
*Implications:*
* This likely makes OpenAI more attractive to investors (a rough payoff sketch follows below).
* However, it also increases the *incentive to prioritize commercial growth*, which could conflict with mission-first priorities.
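To make the cap concrete, here is a toy payoff sketch (hypothetical numbers of my own, not OpenAI's actual terms, which reportedly varied by investment round):
```python
# Toy illustration of how a ~100x return cap changes an early investor's
# payoff once growth exceeds the cap. All numbers are hypothetical.

investment = 1_000_000      # hypothetical stake, in dollars
cap_multiple = 100          # the often-cited ~100x cap

for growth in (50, 100, 500, 2_000):        # value multiples on the stake
    capped = investment * min(growth, cap_multiple)
    uncapped = investment * growth
    print(f"{growth:>5}x growth: capped ${capped:,}, uncapped ${uncapped:,}, "
          f"forgone under cap ${uncapped - capped:,}")
```
The larger the expected upside, the more value the cap withholds from investors, which is exactly why dropping it makes fundraising easier.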
---
## *Potential Negative Implications*
### 1. *Increased Commercial Pressure*
* Moving from a capped-profit model to unrestricted equity introduces *stronger financial incentives*.
* This could push the company toward *more aggressive monetization*, potentially compromising safety, openness, or alignment goals.
### 2. *Accountability Trade-offs*
* While the nonprofit “controls” the PBC, actual accountability and oversight may be limited if the nonprofit and PBC leadership overlap (as has been a concern before).
* Past board turmoil in late 2023 (Altman's temporary ousting) highlighted how difficult it is to hold leadership accountable under complex structures.
### 3. *Risk of “Mission Drift”*
* Over time, with more funding and commercial scale, *stakeholder interests* (e.g., major investors or partners like Microsoft) might influence product and policy decisions.
* Even with the mission enshrined in a PBC charter, *profit-driven pressures could subtly shape choices*, especially around safety disclosures, model releases, or regulatory lobbying.
---
## *What Remains the Same (According to the Letter)*
* OpenAI’s *mission* stays unchanged.
* The *nonprofit retains formal control*.
* There’s a stated commitment to safety, open access, and democratic use of AI.
- az226 2 months agoYou missed the part where OpenAI the nonprofit gives away the value difference between capped-profit PPUs and unlimited-profit equity shares, enriching current PPU holders at the expense of the nonprofit. Surely, this is illegal.
- eximius 2 months agoAgain?
- kraftman 2 months agosounds like they need a few more Dinorwigs (the pumped-storage power station in Wales)
- nova22033 2 months ago>current complex capped-profit structure
Is OpenAI making a profit?
- mumong05 2 months agohi i think it's awesome
- I_am_tiberius 2 months agoStill waiting for o3-Pro.
- byearthithatius 2 months ago[removed]
- sho_hn 2 months agoNo, it's good that you feel this. Don't give up on tech, protest.
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once; we need one again.
- cjpearson 2 months agoNot all non-profits are doomed. It's natural that the biggest companies will be the ones who have growth and profit as their primary goal.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
- languagehacker 2 months agoamen brother
- theoryofx 2 months ago"We made the decision for the nonprofit to retain control of OpenAI after hearing from..." [CHIEF LAW ENFORCEMENT OFFICERS IN CALIFORNIA AND DELAWARE]
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
- cadamsdotcom 2 months agoCommercial entities aren’t social beings. They’re asocial goal-maximizers.
Threats of legal action are among the only behavioral signals they can act on while staying within their mandate. Others include regulation and the market.
This is all operating as it was designed, by humans, multiple economic cycles ago.
- HaZeust 2 months agoWhen I read that, I was actually fairly surprised by how brazen they were about who they called on for this action. They simply said it outright.
- mythz 2 months agoLots of words to say OpenAI will remain an SABC (Sam Altman Benefit Corporation)
- jampekka 2 months ago> We are committed to this path of democratic AI.
So where do I vote? How do I become a candidate to be a representative or a delegate of the voters? I assume every single human is eligible for both, since OpenAI serves all of humanity?
- softwaredoug 2 months agoDemocratic AI but we don’t want it regulated by any democratic process
- rchaud 2 months agoDemocratic People's Republic of AI
- jampekka 2 months agoI wonder if democracy is some kind of corporate speech homonym of some totally different concept I'm familiar with. Perhaps it's even an interesting linguistic case where a word is a homonym of its antonym?
Edit: also apparently known as a contronym.
- JumpCrisscross 2 months ago> wonder if democracy is some kind of corporate speech
It generally means broadening access to something. Finance loves democratising access to stupid things, for example.
> word is a homonym of its antonym?
Inflammable in common use.
- xpe 2 months ago"democratize" has been often abused: https://intage.us/articles/words/democratize/
- insane_dreamer 2 months agoLenin and the Bolsheviks were also committed to the path of fully democratic government. As soon as the people are ready. In the interim we'll make all the decisions.
- m3kw9 2 months ago"Path of", so it's getting there
- jampekka 2 months agoVia a temporary vanguard board composed of the most conscious and disciplined profit maximizers.
- moffkalast 2 months agoThey are committed, they didn't say they pushed yet. Or will ever.
- programjames 2 months agoCarcinisation in action:
free (foss) -> non-profit -> capped-profit -> public benefit corporation -> (you guessed it)
- blagie 2 months agoNo, this only happens if:
1) You're successful.
2) You mess up checks-and-balances at the beginning.
OpenAI did both.
Personally, I think at some point, the AGs ought to take over and push it back into a non-profit format. OAI undermines the concept of a non-profit.
- MrCheeze 2 months agoWith 2, the real problem is that approximately 0% of the OpenAI employees actually believed in the mission. Pretty much every single one of them signed the letter to the board demanding that if the company's existence ever comes into conflict with humanity's survival, the company's existence comes first.
- blagie 2 months agoThat's the reality of every organization if it survives long enough.
Checks-and-balances need to be robust enough to survive bad people. Otherwise, they're not checks-and-balances.
One of the tricks is a broad range of diverse stakeholders with enforcement power. For example, if OpenAI does anything non-open, you'd like organizations FSF, CC, and similar to be represented on their board and to be able to enforce those rules in court.
- bluelightning2k 2 months agoTurns out the non-profit structure wasn't very profitable
- purpleidea 2 months agoThere's really nothing "open" about this company. If they want to be, then:
(1) be transparent about exactly which data was collected for the model
(2) release all the source code
If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
- sjtgraham 2 months agoThis restructuring is essentially a sophisticated maneuver toward wealth and power maximization shrouded in altruistic language.
- SCAQTony 2 months agoDoes anyone truly believe Musk had benevolent intentions? But before we even evaluate the substance of that claim, we must ask whether he has standing to make it. In his court filing, Musk uses the word "nonprofit" 111 times, yet fails to explain how reverting OpenAI to a nonprofit structure would save humanity, elevate the public interest, or mitigate AI's risks. The legal brief offers no humanitarian roadmap, no governance proposal, and no evidence that Musk has the authority to dictate the trajectory of an organization he holds no equity in. It reads like a bait and switch: full of virtue-signaling, devoid of actionable virtue. And he never had a contract or agreement with OpenAI to keep it a non-profit.
Musk claimed fraud, but never asked for his money back in the brief. Could it be that his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space to fund xAI's Grok?
Musk claimed he donated $100M; later, in a CNBC interview, he said $50M. TechCrunch suggests it was far less.
Speaking of humanitarian, how about this 600 lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his résumé in the coming year.
Non-profits face less regulation than publicly traded companies: each quarterly filing is like a colonoscopy, with Sarbanes-Oxley rules, etc., while non-profits just file a tax statement. Did you know the Church of Scientology is a non-profit?
- timewizard 2 months agoReplace Musk with "any billionaire."
He's a symptom of a problem. He's not actually the problem.
- xpe 2 months agoIf you are a materialist, the laws of physics are the problem.
But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a tool on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
- xpe 1 month ago* tool -> toll
- timewizard 2 months ago> If you are a materialist, the laws of physics are the problem.
I'm a citizen, the laws of politics are the problem.
> Musk is a complex figure
Hogwash. He's greedy. There's nothing complex about that.
> and he often exacts a tool on the people around him
Yea it's a one way transfer of wealth from them to him. The _literal_ definition of a "toll."
> When he goes into "demon mode"
When he decides to lie, cheat and steal? Why do you strain so hard to lionize this behavior?
> you don't want to be in his way.
Name a billionaire whose way you would _like_ to be in. Say Elon Musk literally stops existing tomorrow: a person whose name you don't currently know will become known and take his place.
His place needs to be removed. It's not a function of his "personality" or "particulars." That's just goofy "temporarily embarrassed billionaire" thinking.
- always_imposter 2 months ago[dead]
- d--b 2 months agoHmm, am I the only one who has been asked to participate in a "comparison between 2 ChatGPT versions"?
The newer version included sponsored products in its response. I thought that was quite effed up.
- accrual 2 months agoI've gotten those messages, but the products recommended in both versions were the same, down to the model number, so I don't think it's strictly product placement. The products I was looking at were old oscilloscopes.
- sampton 2 months agoOpenAI is busy rearranging the deck chairs while their competitors surpass them.
- lovelysoni03 2 months ago[dead]
- CooCooCaCha 2 months agoI'm getting really tired of hearing about OpenAI "evolving".
- dang 2 months agoOk, but can you please not post unsubstantive comments to HN? We're looking for curious conversation here, and this is not that.
- CooCooCaCha 2 months agoIt seems like there are other comments that have the same amount of substance as mine, yet it looks like mine was the only one that was flagged.
- dang 2 months agoQuite possibly! Consistency in moderation is impossible [1]. We don't come close to seeing everything that gets posted here, and the explanation for most of these things is randomness (or the absence of time travel - https://news.ycombinator.com/item?id=43823271)
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
At the same time, though, we need you (<-- I don't mean you personally, but all commenters) to follow HN's rules regardless of what other commenters are doing.
Think of it like speeding tickets [2]. There are always lots of other drivers speeding just as bad (nay, worse) than you were, and yet it's always you who gets pulled over, right? Or at least it always feels that way.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
- typon 2 months ago[flagged]
- _false 2 months agoHere's a critical summary:
Key Structure Changes:
- Abandoning the "capped profit" model (which limited investor returns) in favor of a traditional equity structure
- Converting the for-profit LLC to a Public Benefit Corporation (PBC)
- Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
- JumpCrisscross 2 months agoWas this AI generated?
- melodyogonna 2 months ago[flagged]
- qwertox 2 months ago[flagged]
- mensetmanusman 2 months agoRandom question, is anyone else unable to see a ‘select all’ on their iPhone?
I was trying to put all the text into GPT-4 to see what it thought, but the select-all function is gone.
Some websites do that to protect their text IP, which would be crazy to me if that's what they did, considering how their AI is built. Ha