This is important context in the wake of yesterday’s “raise” announcement. A lot of this stuff seems to just quietly never happen once the ink on the PR puff dries.
The AI industry increasingly looks to be in scramble mode to keep the hype going as the storm clouds of financial and business reality grow darker on the horizon.
For a company bringing a new technology from zero to mainstream, I think it's pretty normal that there will be a lot of failed attempts at productization.
The thing that isn't normal is the degree of experimentation relative to company valuation. Normally, once a company reaches a $700B+ valuation, it has figured out its product and monetization strategy. OpenAI is clearly still iterating heavily on both - not normal for a company that size.
And not normal for a company that has been at it this long.
The Apple II went on sale on June 10th, 1977. VisiCalc went on sale on October 17th, 1979; 860 days separate the two. ChatGPT was opened to the public on November 30th, 2022, which was 1219 days ago - over 40% more time than elapsed between the Apple II and VisiCalc.
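A quick sanity check on the date arithmetic (the launch dates are the ones given above; note that simple date subtraction gives 859 days, or 860 counting both endpoints inclusively):

```python
from datetime import date

# The two launch-to-killer-app gaps discussed above.
apple_ii = date(1977, 6, 10)
visicalc = date(1979, 10, 17)
gap_pc = (visicalc - apple_ii).days   # 859 by subtraction; 860 counting inclusively

chatgpt = date(2022, 11, 30)          # ChatGPT's public launch
elapsed = 1219                        # the elapsed-days figure quoted above

print(gap_pc)                         # 859
print(round(elapsed / gap_pc, 2))     # 1.42, i.e. roughly 40% more time
```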
VisiCalc is often described as the killer app of the first-generation personal computer (1). It was the product that drove those machines into every small business in the country, that blew up sales of personal computers and brought them out of the realm of hobbyists and into the enterprise. And, honestly, I think VisiCalc and spreadsheets are still a greater benefit than anything I've seen out of generative AI today - and they arrived much faster. Apple had enormous actual profits by 1980 (it IPO'd that year with a 21% operating margin). So I think a lot of the "just got to give it more time" argument misses that the previous computing revolutions we know about productized and threw off gobs of cash a heck of a lot faster than this one has.
If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
>If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
If we take this at face value, and say that the absolute best-case scenario is that there are literally no other uses for AI than helping programmers program faster: given 4.4 million software devs with an average fully loaded cost to the company of $200,000 (working off the US here; including benefits/levels/whatever, that should be close), a 20% productivity gain across those 4.4 million devs would save roughly $176 billion a year.
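Spelling out that arithmetic (using the comment's own figures: 4.4M devs, $200k fully loaded cost, 20% gain):

```python
# The savings arithmetic from the comment above.
devs = 4_400_000            # US software developers
avg_cost = 200_000          # average fully loaded cost per dev, USD
gain = 0.20                 # assumed productivity improvement

savings = devs * avg_cost * gain
print(f"${savings / 1e9:.0f}B per year")  # $176B per year
```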
Some companies will cut jobs, some will expand features, but that's the gist. And it's hard not to see the magnitude of improvement that's come in just 3 years, though whether that leads to a 'moat' remains to be seen.
I took it the other way, spreadsheets shook up the world way more than AI has (to date) - it's possible that history will look back and count AI as the bigger "thing" but if I had to pick a killer app, VisiCalc and computer spreadsheets in general would beat ChatGPT.
IMO, the AI companies are trying to be both T-Mobile and Google Doc at the same time. Even Apple is struggling with being both the platform and the product. The issue with OpenAI is that the platform has no moat (other than money) and the product can be easily copied. In the game console world, the platforms have patents and trademarks, and games are not easily produced.
The Apple II was so simple (by today's standards) that it came with a complete printed circuit diagram. VisiCalc was so simple it was written by two guys in a year.
AI is so many orders of magnitude more complex that the comparison is not really useful.
This complexity requires a lot of money - from investors - to sustain. If those investors don't see a return on their investment before they get too anxious, then no more money will be invested and the business is dead. That suggests today's money will have even less patience than Apple's investors had. If you are correct that this greater complexity actually makes it harder to productize, then it is hard to see how frontier-model generative AI will be viable under a VC-funded regime.
It is entirely plausible to me that there are great technologies that are impossible to reach via the normal means of VC/investor-financed capitalism. I have certainly encountered market failures requiring extremely patient money (usually in the form of government subsidies) to produce a useful product that eventually does have market value. That has worked many times in the past. But so far generative AI has not had that, and looking at my non-technology friends, I very much doubt there would be much support among them for government subsidies of AI companies. AI companies have made too many people unhappy, and served as too much of a punching bag, to be in a good position politically for that.
Which is a good thing. Elon has shown the world that the only things limiting the upper bound are bureaucracy, extreme risk aversion, and a lack of a culture of experimentation.
More and more companies will start operating on the correct reward/risk curve or else get crushed by firms that do. OpenAI has forced Google, Apple, and Meta out of their comfort zones because they know OpenAI will eat their lunch.
Literally every part of this comment is confusing. Elon hasn't shown anyone anything interesting in at least a decade. OpenAI hasn't forced Apple to do anything - LLMs aren't impinging on hardware or bundled services, and this literally seems right up Google's alley (and they're arguably better at it than OpenAI has demonstrated, now that first-mover-ish is long past).
I suppose Meta's recent comfort zone was simply a stupid bet on VR, so sure, maybe one part of the comment isn't confusing.
OpenAI has stagnated technologically, and is a financial zombie, but that's not true for every part of the industry. Once these early movers flame out, there will be more stability with Google, Microsoft, and AWS.
They nominally come across as a more stable ship, with fewer clouds over its leadership.
However all of the major privately held AI players are struggling to paint a business and financial picture that doesn’t look “terrible” at best and “verge of market moving implosion” at worst.
For now the only thing keeping this all alive is more and more irrational cash being thrown on the pile in the faint hope that something stops the implosion from happening.
Correct. As compared to other AI companies. Tangible product, specific market segment and stable user base.
But whether it is worth a trillion dollars (as some of its peers are pretending to be) is yet to be seen. A lot of companies are using Anthropic products, but whether that spend is worth it is also yet to be seen. A more realistic end state for Anthropic would be that they'd have enterprise customers with limited but steady spend (once Anthropic finally has to stop subsidizing tokens) and a valuation of around $200-350B.
But between their token curtailment and time of day restrictions, and some of the clues in the code leak (regex for sentiment, telling the public client to be "brief") it seems like they are facing some capacity issues.
I'm guessing that the accountants at all the AI incumbents drink heavily.
Anthropic can't prop up Nvidia and the chip industry by itself. If AI as an industry can't start turning a dollar into $1.05, a lot of stuff starts falling in value.
If/when the bubble bursts Anthropic is going down as well. There's nothing unique that sets it apart from OpenAI. Their cash burn is similarly egregious.
Holy f'n Hell, there's such a blatant bias on HackerNews in favor of Anthropic and against OpenAI.
I'm just a user, and in my experience Claude has been consistently crap compared to ChatGPT/Codex.
I use both side-by-side, and have paid for a ChatGPT subscription every month for around 1 year, but only 2 months for Claude; once last year, and again since last month.
Everything from the sign up, the sign in, the payment, the UI, the UX, gosh, just sucks on Claude.
And the AI itself: SO. MUCH. "OoPs you're right! I was mistaken" BACKTRACKING! It's downright DANGEROUS to listen to it! God, I can post screenshots of working on the same project with the same prompts in both agents and prove how much worse Claude is.
Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
I’ve been a paid subscriber for all three players since day 1. CC (Opus) has been a clear winner for agentic coding starting about 6 months ago. GPT5.4 reduced the gap somewhat but the gap is still there.
The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be). If GPT is a leader in that, they'll take a sizable share of that pot.
There's a lot more money in being Google -> consumer ads, or Amazon -> consumer ads, or Meta -> consumer ads, than there is in being Anthropic -> enterprise.
Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
And of course everybody knows the Google & Meta ad monsters.
The only question remaining is who is going to extract all those LLM ad dollars, and how that will break out. Right now it's Gemini and GPT in the obvious lead, with Anthropic in third, and Meta & Grok nowhere to be found (a permanent situation for those).
>The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be).
This seems like ... not the situation we are in. LLMs are great for coding now but their text generation capabilities aren't exactly capturing the masses or replacing their jobs yet. People are already tired of the deluge of fake content on the internet, it's not going to drive a second revolution in web ads.
The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
> The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
What's interesting to me as well: as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced at a few companies, so it's not entirely clear that B2B can make them profitable yet either.
If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet, since Google controls more of the stack / has more efficiencies / cross-selling / etc.
It’s not like “best” has won any other b2b arms race in the past.
>If all the models reach good enough, then low cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross selling / etc.
Gemini is the best deal too. For $20 you get multiple daily quotas across the products (web, CLI, Antigravity, AI Studio), 2 TB of cloud storage, and you can family-share the plan.
I don't know Gemini's pricing model in detail, but in general pricing doesn't generalize well between personal/hobbyist and enterprise use. Consumer pricing of variable costs is a balancing act, and most Gemini users aren't going to be anywhere near the quota; a company of 1000 can't always buy for $20,000 what 1000 random users with $20 personal plans are theoretically capped at.
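To make that balancing act concrete, here is a toy model with entirely invented numbers (the quota, per-token cost, and usage levels are all assumptions, not Gemini's actual figures): a seat price set against average consumer usage can go underwater when every seat in a company is pushed toward the cap.

```python
# Toy model: consumer plans are priced against *average* usage, so a seat
# price that works for hobbyists can lose money when a whole company
# drives every seat toward the quota. All numbers are invented.
seat_price = 20.0                 # monthly price per seat, USD
cost_per_m_tokens = 2.50          # assumed inference cost per million tokens

avg_hobbyist_tokens = 1_000_000   # typical personal user, far below quota
avg_employee_tokens = 9_000_000   # employees told to use it all day

hobbyist_cost = avg_hobbyist_tokens / 1e6 * cost_per_m_tokens   # $2.50: profitable
employee_cost = avg_employee_tokens / 1e6 * cost_per_m_tokens   # $22.50: underwater

print(hobbyist_cost < seat_price)  # True
print(employee_cost > seat_price)  # True
```

Under these assumed numbers the provider makes money on the average hobbyist and loses it on every enterprise seat, which is why per-seat enterprise pricing rarely mirrors the consumer plan.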
In large part because most companies have a set budget for IT spend. That's how "normal," profitable companies operate outside this cash-burning bonanza that's going on.
And in that reality one can't just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn't returning value. So "token limits" and cost controls on the B2B side are entirely expected here.
> especially when said fancy new thing isn't returning value
I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.
I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end set of tasks in a SWE's role is only improved by some much smaller Y%.
If a CTO sets $10k/year spend limits on $500k SWEs, they must not believe any of the hype.
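A rough sketch of the implied math (the salary and cap are the figures above; the 10% gain is the low end of the productivity claims earlier in the thread):

```python
# If the CTO believed even the low-end productivity claims, a $10k cap
# would be leaving a large multiple on the table.
salary = 500_000        # fully loaded SWE cost, USD/year
cap = 10_000            # annual token spend limit
claimed_gain = 0.10     # low end of the "10-25%" claims

claimed_value = salary * claimed_gain   # $50,000 of claimed annual value
implied_roi = claimed_value / cap       # 5x return, if the claim were believed

print(claimed_value)   # 50000.0
print(implied_roi)     # 5.0
```

A cap that small only makes sense if the CTO expects the real gain to be well under the claimed range.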
The problem is that, AGI fantasy aside, CTOs are expected to deliver results today and tomorrow. Better to let somebody else hold the bag and train the models; then, once it finally works as advertised, you can ease off the brakes.
LLM usage will largely replace traditional search, and that's stage one. To be specific, search will be consumed by the LLMs: it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search - refining it, understanding the results, and so on. The age of the typical user handling any of that is about to end. Search will be more a feature of Gemini in the not-very-distant future, rather than Gemini being bolted onto/into search.
Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.
Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.
> To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end.
We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.
Every year I ask the latest version of ChatGPT a basic factual question about rugby results. It almost always gets it wrong - even when it does a web search and cites sources. Wrong scores, hallucinated matches, wrong locations - just gobsmacking amounts of wrongness.
The latest "Thinking" version gets it reliably right, but spent about 3 minutes coming up with an answer that 10 seconds of googling provides.
So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.
Who is revolted? I use the AI Google results every day when asking for specific questions, I rarely visit the webpages before anymore. Also Google already injects ads into conversations in the form of Google Shopping affiliate links.
I understand the concern but it's frankly not my problem as a user, that is for the authors and corporations to figure out. No one would (or should) blame car buyers for putting horse and buggies out of business, they're merely participating in the market as a consumer not the producer.
You see it already with how many people use LLMs for everything these days. Google Gemini can also integrate with your other Google apps to personalize further, and Gemini already has product placement ads.
Do you have a concrete example I can reproduce? I searched for things like how to change the filter of X make and model and it seems correct, not sure if that's what you meant.
I'm not the person you replied to but I'm wondering which Google AI product you are referring to that you use for search which is so excellent that you need someone to find for you an example of it failing?
I think Google has several ai products with search features?
>> LLM usage will largely replace traditional search,
This is already happening. I have two teenagers and both of them have stopped using search. They're both using LLMs for almost everything they're looking for. I'll be walking by my son's room and hear him talking and pop my head in, look around, and I'm like, "Oh, thought you were talking to someone. You just talking to yourself again?" *chuckling* My son says "Nah Dad, I'm talking to Gemini about the differences between the new Flylites and XF skates and which one is actually better."
Instead of typing in some search and then digging through a bunch of reviews and links, LLMs can now do all of your research and footwork for you. The fact that Gen Z has latched onto this means search is dying a much faster death than I think people realize.
Just for some more anecdotal evidence:
I just started a new business with two millennial friends in September. I was still in that mode of "just get the site up, get it indexed, and then in a few months we'll have enough traffic and start getting leads." My partners? "Nah man, search is dead, it's all about socials now, nobody uses search, trust us."
We poured about $500/month into FB Marketplace, Instagram, and TikTok. We created a few original shorts that advertised our new studio. The returns have been pretty staggering. I was thinking we'd need 3 years of funding before we started turning a profit. Nope. By concentrating almost solely on socials, we're already cash positive after only 7 months in business.
The last few months have really opened my eyes at how much stuff has changed.
Google launched in 1998 and was running ads by 2000. Considering how much more adtech product talent is available to OAI a quarter of a century on, what explains their hesitation to pick that route and make billions? After all, they had billions available to acquire designer-bauble maker Jony Ive's company.
The first AI company to cram their product full of ads will get roasted over the coals for it. My guess is they're all playing chicken and waiting to be the second to do it. I'd also guess that they're all already thinking about ways to introduce it that will generate the least backlash.
Google could do it in 2000 because their search was legitimately so much better, and also because their ads were comparatively more relevant and unobtrusive than modern ads. In comparison, LLMs are relatively similar in performance unless you're picky enough that you're probably already paying and thus wouldn't be in the ad-supported tier.
That said, I wonder if ads are even lucrative enough to move the needle relative to how much training costs are increasing with each generation.
> Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
You can say the same about AWS and then prove the b2b case instead of ad case as well
AWS is legitimately a giant and it should be considered in enterprise broadly. It's infrastructure more than enterprise software of course, which is where Anthropic is at. Anthropic is not trying to host the world's databases and services (at present anyway). Anthropic will however help you write software to compete with Salesforce, Oracle, SAP, et al.
Google's ad business remains far larger and more profitable than AWS. And the advertising segment is drastically larger than the segment AWS is in. Just Google + Meta = nearing $600 billion in ad sales. Amazon will soon have their own $100 billion in ad sales.
I guess the question is how many more $100B of ad sales slots are available, aside from just stealing share from incumbents (who already took it from traditional media channels over last 20 years).
At some point someone needs to add value to the real economy, not just take an ad tax off the top.
These exact words were said tens of thousands of times about Facebook (I'm old enough to remember those discussions :) ) - "no way they can monetize on mobile" (this one was the most fun).
The rules are simple: if you have Xbn or XXXm users on your system, you will make big bank in ads eventually.
It's tempting to look at trends and assume there must be a rule behind them, but it's also intellectually lazy. Please do the hard work of justifying your stance like GGP did.
It is a simple stance: if you have a product used by hundreds of millions of people, an ad monetization strategy will be found, because there are people a lot smarter than you and me who will get it done. Here's an intellectual challenge: find a business with a comparable number of users to OpenAI which is not swimming in ad revenue - one will do.
A counterpoint is that there are many products with significant usage that fail or never attempt advertising monetization. They just increase the cost of the product.
At that time, Facebook provided a free service without any real competitors. The masses will switch to Meta AI or Gemini or Claude at the drop of an ad that annoys them enough.
Gemini, GPT and Claude will all have ads on the consumer side. They will go together in quasi lock-step into the ad future, because that money is gigantic and they're going to need it.
The masses will have no say in the matter. Just as they had no say in the matter with Google's ads getting ever more intrusive, or cable prices previously, or streaming prices going perpetually higher in the present, or YouTube ads, or anything else. Consumers will have no say in the matter, they'll take it and that's that.
With only three relevant competitors (maybe Mistral in Europe), there will be nowhere to flee the deployment of ads.
Absolutely not the case. There isn't a single nerve in the human brain that goes "oh, imma tolerate ads cause this shit's free, but if I pay a few bucks, no way" - if the product you use has utility to you, you will tolerate ads, provided there is no other acceptable alternative. Not to tell you something you don't already know, but Anthropic is getting ads eventually; it is a given. So while today you may have an alternative (arguably better, even with no ads in the equation), at some point you won't have an alternative (other than running local) and you'll tolerate ads. The thing with LLM ads is that companies can make $$$$ from "ads" you don't see, i.e. I could (not now, but in the future) pay companies to push my product - e.g. Claude is setting up an architecture and proposes Upstash (which I own and am paying Anthropic a lot of money) instead of any competitor. Or, even more silently, adding dependencies on my NPM library, which has free and commercial offerings...
In TFA it is put on the list because some users of this GPT version were discontent with its cancellation, which caused even OpenAI to waver in its decision: they first cancelled it, then resurrected it, and then cancelled it permanently - probably because continuing to run it would have cost more than the revenue it generated.
Nothing similar happened when the earlier, presumably worse versions were discontinued.
The Stargate, Nvidia, and AMD deals are all linked together, and the fallout is not public. Nvidia and AMD stock seems not to care about it at all. Oracle fired 30,000 employees; not sure if that was to fund the initiative or a fallout of it.
What they really should focus on is making those models more efficient. With them most likely losing money on inference (plus model training, salaries, and building data centers), I can't see why they would want more compute and more products, since more tokens spent is actually bad for them.
They’ve lost a whole lot of people in prominent roles over the past few years. I wonder how much of the misfires and general thrash in product direction is a result of brain drain and/or so many hands changing. Or maybe I’m confusing cause and effect… hard to tell
That’s pretty crazy, I swear it wasn’t that long ago these companies were about the only people hiring and the comp packages looked absolutely deranged.
For a brief moment I regretted wasting any time of my life on anything but ML research. But I guess the bigger they come…
My guess is Sam Altman is a better VC than CEO: better at hype, networking, fundraising, and back-room political hijinks than at shipping a focused product.
He seems to be trying to take almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup vs working on a quasi-startup inside a large org so there's some selection bias as well.
I'm not an OAI fanboy by a longshot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find footing in a new industry.
Interesting. I never had much of an opinion on Forbes till, a few years ago, I noticed them posting nearly exclusively NYPost-style clickbait. Before that, I didn't think it was that bad of a publication.
I think the VC/investor community needs to take A LOT of the blame here. They've created an insane rush to financialize everything to the moon at the drop of a hat.
Has there ever been a period of time where people saw a bubble coming and knew we were in one, but it just inexorably refused to pop / dragged on this long? This isn't a rhetorical question; I'm wondering how this period compares to other irrational periods of the economy, like railroad fever, etc.
Not at all. There is a famous saying (often attributed to Keynes, though as far as I can tell he never said it): "Markets can remain irrational longer than you can remain solvent."
It’s not been that long really. The dot com bubble was called a bubble for a while before it finally imploded. And just like now folks were in massive denial that it was a bubble.
One of the challenges here is that a lot of folks simply weren’t around then and haven’t seen what happens when everything implodes overnight. Those that have experienced it know what that looks like and know it will happen again.
Bubbles don't pop overnight. In the aftermath of any collapse, you can generally see a pretty clear pattern of red flags (and attempts to minimize them or cover them up). Some parties notice earlier than others, but the realization is generally a much more gradual process than the collapse.
"Disney’s then-CEO Bob Iger... was sold on Sora, too. He lauded Altman’s ability to “look around corners”..."
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make america anti-corporate-weasel again.
Before he left, I used to enjoy enraging a manager several layers above me. In one instance I explained that asking us to cut a few corners to get things done was fine; usually we can figure out acceptable ways of doing it. But then it is your job to take those fake numbers and figure out how we are doing. No matter how much effort you make, if bullshit goes in, you know what will come out.
Now imagine an entire economy working like that. Like say, LLM's are good enough to run entire companies but you don't get to run a company because you are good at it. LLM's can perfectly manage employee schedules but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries as those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc. But that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search-engine placements. Altman is really the perfect guy for the job no one wants. Ha-ha.
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.
1: https://en.wikipedia.org/wiki/VisiCalc#Killer_app is pretty much the normal narrative on Visicalc and its importance to the Personal Computer.
I don't understand what you think you're seeing.
OpenAI has stagnated technologically, and is a financial zombie, but that's not true for every part of the industry. Once these early movers flame out, there will be more stability with Google, Microsoft, and AWS.
However all of the major privately held AI players are struggling to paint a business and financial picture that doesn’t look “terrible” at best and “verge of market moving implosion” at worst.
For now the only thing keeping this all alive is more and more irrational cash being thrown on the pile in the faint hope that something stops the implosion from happening.
Correct. As compared to other AI companies: a tangible product, a specific market segment, and a stable user base.
But whether it is worth a trillion dollars (like some of the peers are pretending to be) is yet to be seen. A lot of companies are using Anthropic products, but whether the spend is worth it is also yet to be seen. A more realistic end state for Anthropic would be that they'd serve enterprise customers, with limited but steady spend once Anthropic finally has to stop subsidizing tokens, and a valuation of around $200-350B.
But between their token curtailment and time of day restrictions, and some of the clues in the code leak (regex for sentiment, telling the public client to be "brief") it seems like they are facing some capacity issues.
I'm guessing that the accountants at all the AI incumbents drink heavily.
That isn’t saying much.
I'm just a user, and in my experience Claude has been consistently crap compared to ChatGPT/Codex.
I use both side-by-side, and have paid for a ChatGPT subscription every month for around 1 year, but only 2 months for Claude: once last year, and again since last month.
Everything from the sign up, the sign in, the payment, the UI, the UX, gosh, just sucks on Claude.
And the AI itself: SO. MUCH. "OoPs you're right! I was mistaken" BACKTRACKING! It's downright DANGEROUS to listen to it! God I can post screenshots of working on the same project and the same prompts with both agents and prove how worse Claude is.
Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
There's a lot more money in being Google -> consumer ads, or Amazon -> consumer ads, or Meta -> consumer ads, than there is in being Anthropic -> enterprise.
Just take a look at enterprise software: Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
And of course everybody knows the Google & Meta ad monsters.
The only question remaining is who is going to extract all those LLM ad dollars, and how that will break out. Right now Gemini and GPT are in the obvious lead, with Anthropic in third, and Meta & Grok nowhere to be found (a permanent situation for those two).
This seems like ... not the situation we are in. LLMs are great for coding now but their text generation capabilities aren't exactly capturing the masses or replacing their jobs yet. People are already tired of the deluge of fake content on the internet, it's not going to drive a second revolution in web ads.
The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
What's interesting to me is that as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced at a few companies, so it's not entirely clear that B2B can make them profitable yet either.
If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
It’s not like “best” has won any other b2b arms race in the past.
Gemini is the best deal too. For $20 you get multiple quotas per day across the products (web, CLI, Antigravity, AI Studio), 2 TB of cloud storage, and you can family-share the plan.
And in that reality one can't just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn't returning value. So "token limits" and cost controls on B2B are entirely expected here.
I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.
I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end set of tasks in a SWE's role is only improved by some much smaller Y%.
If a CTO sets $10k/year spend limits on $500k SWEs, they must not believe any of the hype.
Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.
Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.
We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.
The latest "Thinking" version gets it reliably right but spent about 3 minutes coming up with an answer that 10 seconds of googling provides.
So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.
And what do you think this'll do for future LLM models that need to train on new content if web page traffic collapses?
I think Google has several ai products with search features?
Which one in your experience "seems correct"?
This is already happening. I have two teenagers and both of them have stopped using search. They're both using LLMs for almost everything they're looking for. I'll be walking by my son's room and hear him talking and pop my head in, look around and I'm like, "Oh, thought you were talking to someone. You just talking to yourself again? chuckling" My son says "Nah Dad, I'm talking to Gemini about the differences between the new Flylites and XF skates and which one is actually better."
Instead of typing in some search and then digging through a bunch of reviews and links, LLMs can now do all of your research and footwork for you. The fact Gen Z has latched onto this means search is dying a much faster death than I think people realize.
Just for some more anecdotal evidence:
I just started a new business with two millennial friends in September. I was still in that mode of "just get the site up, get it indexed, and then in a few months we'll have enough traffic and start getting leads." My partners? "Nah man, search is dead, it's all about socials now, nobody uses search, trust us."
We poured about $500/month into FB Marketplace, Instagram, and TikTok. We created a few original shorts that advertised our new studio. The returns have been pretty staggering. I was thinking we'd need 3 years of funding before we started turning a profit. Nope. By concentrating almost solely on socials, we're already cash positive after only 7 months in business.
The last few months have really opened my eyes at how much stuff has changed.
Google could do it in 2000 because their search was legitimately so much better, and also because their ads were comparatively more relevant and unobtrusive than modern ads. In comparison, LLMs are relatively similar in performance unless you're picky enough that you're probably already paying and thus wouldn't be in the ad-supported tier.
That said, I wonder if ads are even lucrative enough to move the needle relative to how much training costs are increasing with each generation.
You can say the same about AWS and then prove the b2b case instead of ad case as well
Google's ad business remains far larger and more profitable than AWS. And the advertising segment is drastically larger than the segment AWS is in. Just Google + Meta = nearing $600 billion in ad sales. Amazon will soon have their own $100 billion in ad sales.
At some point someone needs to add value to the real economy, not just take an ad tax off the top.
And yet every attempt to extract even minimal ad revenue has been canned to date as something nobody wants with AI providers retreating in failure.
I don’t doubt that there’s “some” ad revenue to be had but there’s little evidence that ads are going to save the day here.
GoTo.com -> Google -> $$$
The rules are simple: if you have Xbn or XXXm users on your system, you will eventually make big bank in ads.
The masses will have no say in the matter. Just as they had no say in the matter with Google's ads getting ever more intrusive, or cable prices previously, or streaming prices going perpetually higher in the present, or YouTube ads, or anything else. Consumers will have no say in the matter, they'll take it and that's that.
With only three relevant competitors (maybe Mistral in Europe), there will be nowhere to flee the deployment of ads.
Billions in projected revenue is nothing but hype/cope. Google and Meta got their edge because their product was offered for "free" to the masses.
If they want to out-ad those companies to the tune of billions, I'll go with the least annoying. OpenAI hasn't earned any loyalty.
Welcome to dot com 2.0
the silicon valley shuffle, tried & true
Why is this on the list? Like... what? How about including GPT 3.5 and GPT 2 here too?
Nothing similar happened when the earlier, presumably worse versions were discontinued.
For a brief moment I regretted wasting any time of my life on anything but ML research. But I guess the bigger they come…
He seems to be trying to take almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup vs working on a quasi-startup inside a large org so there's some selection bias as well.
He was a partner at YC for 8 years
He has no research/PhD background in AI and is the CEO of an AI company
There is no objective data point in which he's a better CTO than a CEO
Company "experiments" are typically hush-hush, not blasted across every corporate media channel as a means to boost your company's holdings.
For some reason, he does not look like a man whom I would trust with my money, but it appears that there are enough rich investors who disagree.
I mean, even Andreessen Horowitz was taking NFTs seriously, as though they weren't a scam, only a few years ago (https://a16z.com/the-nft-starter-pack-tools-for-anyone-to-an...).
These people are also looking at (and funding) quantum computing companies as though quantum computing is right around the corner, after AGI.
They need to cool their jets. AI is certainly a worthwhile and super important development, but it's still possible to go overboard with it.
One of the challenges here is that a lot of folks simply weren’t around then and haven’t seen what happens when everything implodes overnight. Those that have experienced it know what that looks like and know it will happen again.
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make America anti-corporate-weasel again.
Now imagine an entire economy working like that. Say LLMs are good enough to run entire companies, but you don't get to run a company because you are good at it. LLMs can perfectly manage employee schedules, but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries, since those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc. But that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search-engine placements. Altman is really the perfect guy for the job no one wants. ha-ha
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.