mikepurvis 16 hours ago [-]
FWIW, I've gone to ChatGPT multiple times with a specific intent to buy, like "hey I need a thing like X or Y, but with quality Z too". Sometimes it just hallucinates things that appear never to have existed; other times it comes up with real items, but the links it gives me to buy them lead nowhere, so I end up just googling the name of it and buying it that way (examples: computer monitor, power bar, USB charge station, kitchen gadgets, Christmas presents/toys, soldering supplies like tips and flux, 3D printing filaments, etc).
I would guess that ChatGPT has left at least $100 on the table from me having to do this when literally all it had to do was give me a referral link to Amazon or whatever and I would have clicked the buy button.
pjc50 12 hours ago [-]
> I would guess that ChatGPT has left at least $100 on the table
Man, this thing is going to be so lucrative when they inject ads into it. Imagine how this is going to combine with the parasocial AI boyfriend/girlfriend people, it's going to be worse than hostess clubs. They'll have to invent whole new categories of nonexistent products for the bots to sell.
rchaud 6 hours ago [-]
Romance Scam-as-a-Service. With an aging population, sky's the limit for TMV.
rebuilder 16 hours ago [-]
Same experience, I thought using chatGPT to find some fairly specific things to buy would be a slam dunk, but it couldn’t provide links half the time and also failed to hold to criteria like shipping region etc. I would tell it to give direct links and it would mostly just say ”go on Amazon and search for X”.
There’s a special type of frustration when an LLM is close to being useful but just… isn’t.
sersi 14 hours ago [-]
I've had much more luck with perplexity. Still not perfect but at least works better.
Giwwi 9 hours ago [-]
[dead]
MattDaEskimo 9 hours ago [-]
I have similar experiences. Asked to find and list a bunch of suppliers near a specific city. It started showing me places that were >5 hours away, claiming them to be a "short drive".
sph 15 hours ago [-]
> I’ve gone to a machine that by its nature hallucinates and it hallucinated a response. Surprised Pikachu face
grey-area 15 hours ago [-]
Why do you still trust the output for any other questions?
Once they start selling things, you’re actually going to trust they suggest the item you’re looking for over the item they get paid for you to buy?
What in the history of Sam Altman has led you to believe he'll do the right thing instead of the thing that makes him the most money?
mikepurvis 7 hours ago [-]
In most of these cases I'm not looking for "the best" or a ranked list. Rather I'm looking for an item to fill a specific need and I can judge within a second of seeing the picture if it does or does not.
I expect some or all of the items I've been shown did exist for sale at some point in the past and information about them was in the training data.
dpoloncsak 6 hours ago [-]
You prefer outdated-by-at-least-a-year data instead of the freshly scraped and ranked Google Search results?
GPT 5 was publicly released in August last year. That data has to be at least a year old, right?
If I'm comparing a Macbook Neo to a Chromebook, it's impossible for the Neo to show up in the training data, and it has to use RAG.
(This is assuming the data is at least a year old. Seems like OAI isn't doing fresh runs for 5.1, 5.2, etc. but I'm unsure if that's been officially disclosed.)
mikepurvis 5 hours ago [-]
Not everything good or necessary was released in the past two years.
As an example interaction, recently I sent ChatGPT a picture of my soldering station and said I was having trouble with it being slow to melt, and it said "well these are your upgrade options but really all you need is a chunkier tip with more thermal mass, here's a link to a tip set you could try". But then the link was dead, and I was able to google my own similar set and just buy it (it worked great).
Another one was wanting a U-shaped bracket to mount a riser on my desk, and the only ones I could find at the building supply store are L-shaped. I asked Chat about this and it said it's just too niche of an item to be mass produced but suggested I buy from someone doing semi-custom fabrication on etsy. Sure enough... another dead link, but I google it and find the store, and order.
Two cases where I didn't really know what I actually wanted/needed, and Chat successfully filled the gap with information I was able to independently verify afterward, but also Chat missed out on the opportunity to get a referral fee out of my eventual spend.
garbawarb 5 hours ago [-]
Same. I've found Meta's latest model to be the best at product discovery. Which makes sense since that's already their business.
simianparrot 14 hours ago [-]
If it can't even point you to a real product on an existing website, why do you trust it for any other information..?
okrad 15 hours ago [-]
Use Deep Research and try to be as specific as you can with the attributes of said product. I’ve had a few successes this way.
coro_1 14 hours ago [-]
Clicking on any picture itself should present a frame on the right with a bunch of options.
mikepurvis 8 hours ago [-]
It does but then those are dead links.
crowcroft 22 hours ago [-]
The most surprising thing to me is that they're partnering with third parties to do this.
Less secure, lower margins (more middlemen taking fees), harder to access, more likely to not work properly.
I would expect all the meta execs they've hired to know better so maybe I'm missing something...
pz 20 hours ago [-]
This approach makes a lot of sense. Advertising is a marketplace and this is a great way to bootstrap advertising inventory. It's inevitable they will allow advertisers to manage ad spend directly through OpenAI, but right now the product is too new to capture meaningful ad budget. This way they can begin testing delivery, develop proof points around ROI, and build towards larger ad spend directly.
windexh8er 16 hours ago [-]
Clearly the Meta execs they hired are about as useful as most 3-letter exec titles because, wow, did OAI miss the boat again. Personally I'm glad they've made as many missteps as they have, but quite the amateur move to not seize the market opportunity and keep it holistically for themselves. They took nothing from Google's paved road of incumbency in this segment.
Again, personally, I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days. And if anyone thinks OAI has been anything remotely "strategic" around their product, well... Then you must enjoy shooting darts in the dark.
stingraycharles 15 hours ago [-]
This appears to be more like a toxic rant than a reasonable argument.
> quite the amateur move to not seize the market opportunity and keep it holistically for themselves
What does this even mean? There are so many businesses, especially in the advertising world, that start out white-label reselling so that they can scale up quickly and easily. Then, once the market is captured, you integrate everything. This is a common adtech playbook, and the Meta execs know that as well.
And I say this as someone who founded & exited their own adtech platform.
I would not recommend OpenAI to start developing an RTB platform right now at all. Just first prove there is a market and the value is there.
> They took nothing from Google's paved road of incumbency in this segment.
Google mostly bought/acquired its way into the online adtech market. Yes, they have AdWords, which only really became something a decade after Google launched, and which they paired with acquisitions of half the adtech giants (DoubleClick, Invite Media and AdMeld). So yeah, not a great example.
> I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days.
This is just a useless attack for no reason.
crowcroft 18 hours ago [-]
> product is too new to capture meaningful ad budget
I disagree entirely. As someone who works in advertising every single company I've talked to would be queueing up to test ads on ChatGPT if they launched a Google Ads like platform.
If ChatGPT doesn't have enough scale to do it, then they shouldn't do ads.
arcticfox 17 hours ago [-]
ChatGPT has more web traffic than X, Reddit, Bing... Crazy to say they wouldn't be able to capture meaningful ad budget. IMO partnering on this is a blunder.
ssl-3 17 hours ago [-]
It comes together quickly, though. They don't need to learn how to become a company that knows how to sell advertising; they can instead just pay some other entity to do that.
It's OK to not have complete vertical integration. (They probably don't fix their own toilets, either.)
And if it makes as much money as it seems must be possible, then they can just buy one of the advertising partners that have already plugged into their system and shitcan the rest.
qotgalaxy 20 hours ago [-]
[dead]
fsckboy 19 hours ago [-]
>lower margins (more middlemen taking fees)
middlemen taking fees is not the measure for comparison, the question is whether you could run your own ad business for your own platform and keep your costs lower than established players who sell on all platforms. the answer is generally "no"
look how much money coca cola makes, and they sell it cheaper than water and still pay for advertising!! we should all make our own coke and not advertise it...
crowcroft 18 hours ago [-]
Established players aren't selling on all platforms. Any platform doing more than $1bn in ad revenue operates its own ad sales platform.
The only players that sell through third parties are sub-scale publishers, and that is a shit business to be in. If that's what OpenAI is aiming for then they will never be able to compete with Google.
I'm not really sure what your analogy about Coke is meant to mean here...
unfunco 4 hours ago [-]
You're paying way too much for water.
6thbit 3 hours ago [-]
I find it a good surprise. It signals they probably don't want to become an advertising company, which is what every other tech business ended up becoming. And that doesn't rule out having ads as a core revenue driver, like companies in other sectors do.
nopinsight 17 hours ago [-]
They probably want to select for high-quality ads without having to be responsible for filtering issues, whether false positives or false negatives, which would adversely affect their reputation with consumers and advertisers. They'll probably wait until they have enough data/experience to do that properly.
strongpigeon 22 hours ago [-]
I agree with you, but IMO the details are too sparse here to figure out what's really happening. Still, it feels very dangerous to try to go the reseller route first, as you lose a ton of control and become dependent on your partner to support all the features you add yourself in a timely fashion.
crowcroft 22 hours ago [-]
It all seems a bit overly complicated to me. TikTok pretty much went straight to a self-serve platform and basically had immediate success. I would think if OpenAI did something similar there would be no shortage of advertisers wanting to spend money.
doctorpangloss 19 hours ago [-]
on tiktok you are not paying for ad inventory, which on that platform sucks, you're paying $10m+ to tip the scales in the algorithm towards organic content about your brand
ahartmetz 14 hours ago [-]
How well established is that, officially or legally?
yunwal 19 hours ago [-]
how is this different than what OpenAI is trying to do?
doctorpangloss 19 hours ago [-]
we don't know
i assume the 22 year olds working 16h days at openai sincerely think people pay for ads on tiktok, and shitty low converting ads is why tiktok makes tons of money, and they sincerely think the solution to their lack of knowledge is delegating their core business to a DSP no one has ever heard of
nunez 6 hours ago [-]
They probably don't have the systems to do this themselves yet. They obviously will.
linkjuice4all 22 hours ago [-]
I guess OpenAI couldn't train AdManagerGPT to ignore the client (except when it's time to renew), suggest more ad spend, and turn off any of the features that let you control your budget.
coro_1 14 hours ago [-]
They're road mapping. Trying things out. Their entire current ad eco-system may change internally in a week.
dd82 19 hours ago [-]
why would you be surprised about this? its pretty obvious that execs give no fucks except for money.
moralestapia 18 hours ago [-]
It makes sense to me.
You pay extra but you just plug in into a framework that already works.
It's also easier to drop the potato if it gets hot.
ehnto 17 hours ago [-]
It's just surprising, since it's objectively better to own the platform, and the company has a mind boggling amount of money, and allegedly coding agents capable of 10xing developer output. Why would they not be able to do it in house? It shouldn't be a capacity or capability issue.
That makes me think it's just another higher level money game, and there will be some weird investments in which neither company does anything of material value in exchange except spin some number wheels.
cjbgkagh 21 hours ago [-]
My guess is that three letter agencies will have access to this data and are requiring this partnership.
crowcroft 21 hours ago [-]
Three letter agencies are telling OpenAI to partner with a Toronto based ad platform?
cjbgkagh 21 hours ago [-]
Ad networks / information brokers in general would be too sweet of a prize to pass up. It’s a weak link in the chain, if they’re not exploiting it they’re not doing their jobs. Being foreign data is a bonus.
EA-3167 21 hours ago [-]
The missing part seems to be that they need infusions of money to keep this “business model” running a little longer. In this world if you want prompt money and lots of it, advertising is the way.
18 hours ago [-]
nine_zeros 22 hours ago [-]
[dead]
DustinKlent 6 hours ago [-]
Degrading your product by filling it with ads or, God forbid, by making the LLM's results less accurate and influenced by advertisers, makes ZERO sense for any LLM company due to the ubiquity of LLMs. LLMs aren't something special that no one else can create and use. Google is a poor example because it had a monopoly on search for a long time due to its foothold and technology, but OpenAI has nothing like that at all. OpenAI has no moat or "secret sauce" that would enable it to be more successful than any other LLM company.
If I can use an open source highly effective LLM locally, and have it do all of the things ChatGPT can do (and more), then what motivation do I have to use ChatGPT? The only motivation people would have is ease of use but if one service becomes bogged down and compromised then people will just shift to one of the other hundred equally effective services with less ads. LLMs aren't hard to spin up and put out there for people to use.
Honestly, I don't think OpenAI is going to survive very long unless they get incredibly lucky or they come out with some banger of a new model or new app but even then, IDK...
jonfw 5 hours ago [-]
> If I can use an open source highly effective LLM locally, and have it do all of the things ChatGPT can do
And if my grandmother had wheels, she'd be a bicycle!
We are miles off from open models having parity with what the AI labs are putting out.
And that's putting aside the compute requirements to run the top open models and the poor economics of running these models locally.
solaire_oa 3 hours ago [-]
Not all prompts require the same compute, and Gemma-4B runs on our phones with parity output for ordinary 1-5 sentence queries. The common use case of Google-style queries is already solved locally, saying we're miles off is ridiculous.
rglullis 4 hours ago [-]
It's not "miles off", the gaps are narrowing instead of widening and they are already more than enough for the majority of use cases.
Getting people to install a different chatgpt app on their phones is a lot easier than getting them to change their search engine.
6thbit 3 hours ago [-]
Google did just fine with the same strategy on search.
jackb4040 23 hours ago [-]
Didn't they explicitly say the ads wouldn't be made aware of prompt data when they announced them? And if so, how is that not securities fraud?
c7b 22 hours ago [-]
Maybe someone with more time on their hands could look up what Google said with respect to ads and what happened later.
This is one of the rare instances where it's very easy to predict the future: the prompt auction market will look similar to the existing online ad market, financial firms will pay for prompt streams for sentiment analysis, companies and interest groups will pay to have their products or agenda included favorably in the training data for future open weights models... any way you can think of that LLMs can be monetized, you will see it happen. And fast. The financial pressure is way too high for there to be too long of a honeymoon phase like we had with web 2.0
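If a prompt auction market really does come to look like the existing online ad market, the core mechanism would presumably be the second-price (Vickrey) auction those markets already run: the highest bidder wins the slot but pays the runner-up's bid. A minimal sketch in Python, with all advertiser names and prices hypothetical:

```python
# Toy second-price auction over a prompt's ad slot. This is the mechanism
# classically used in search advertising; nothing here reflects any actual
# OpenAI implementation.

def run_prompt_auction(bids):
    """bids: dict mapping advertiser -> bid in dollars.
    Returns (winner, price): the highest bidder wins the slot
    but pays only the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    _, price = ranked[1]  # winner pays the runner-up's bid
    return winner, price

winner, price = run_prompt_auction(
    {"acme_monitors": 1.50, "bolt_chargers": 1.20, "cox_cables": 0.90}
)
print(winner, price)  # acme_monitors 1.2
```

The second-price design matters because it makes truthful bidding the dominant strategy, which is one reason the sentiment-analysis buyers and agenda pushers described above would slot into such a market so easily.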
dd82 20 hours ago [-]
And how much trust are you going to have with your model results that they haven't been transformed and adjusted by advertising priorities?
Search engine results do this all the time, reordering output by advertiser input. It's a pretty small jump from that to rewriting output from models, and even better when it's all a black box.
duskdozer 17 hours ago [-]
>And how much trust are you going to have with your model results that they haven't been transformed and adjusted by advertising priorities?
None.
eswdd 19 hours ago [-]
Also, Google did it over time; they didn't suddenly become who they are today, even 10 years ago.
dd82 4 hours ago [-]
Oh absolutely, it's been a progression though, and search order rewriting was implemented very early on as part of ad integration. It's common in search relevancy/tuning circles.
Your example with Google isn't necessarily applicable now, because they've shown a roadmap that can be followed, squeezed down tightly from "hey, we're good folks" to "you're our captive cattle, we can do whatever the fuck we want; there's nothing you can do, since all our competitors will be doing the exact same thing shortly".
tyre 19 hours ago [-]
I mean search engine results are pretty poor and have been for a long time. They reflect SEO, not credibility or quality.
LLMs have plenty of issues, but they’re relatively clean compared with what the future will look like.
jamiequint 20 hours ago [-]
In what way would that be securities fraud? I guess you could get nailed under Section 17(a), but really hard to make a case they're defrauding investors by representing they were going to make ads worse performing than they ended up making them.
In order for it to be securities fraud it has to be tied to a securities transaction and the misstatement has to be material to a reasonable investor's decision.
It's not securities fraud if investors make a lot of money.
potamic 15 hours ago [-]
For every investor who has made money, there is another who has lost an equal amount. Money cannot be created, it can only change hands!
parineum 15 hours ago [-]
Money is created all the time.
mcmcmc 19 hours ago [-]
A plan to gamble the brand’s reputation on whether people will remember their promises seems risky enough to be considered material.
> representing they were going to make ads worse performing than they ended up making them.
This is disingenuous. It’s a tradeoff between lower performing ads or losing market share by degrading trust in your product.
aabhay 20 hours ago [-]
I think they said the ad vendors wouldn't but the matching algorithm would still be aware of it. Which IMO is the bare requirement to have ads be anything but magazine style ads.
Frost1x 22 hours ago [-]
I mean, the ad doesn’t necessarily have to be made aware of the exact prompt context, just that the ad itself was relevant. You can basically have ads prequalified for topic areas and serve them when relevant. Now, that does show the user is most likely talking about something relevant, and depending on how they decide to serve them or provide referrals, it may be traceable to a profile/identity built for that user externally.
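The prequalified-ads idea above can be sketched concretely: advertisers register ads against coarse topic categories ahead of time, and only a server-side topic label derived from the conversation is matched against the ad store, so the advertiser never sees the prompt itself. A toy sketch, with every name, category, and keyword hypothetical:

```python
# Hypothetical prequalified-ad serving: the raw prompt never leaves the
# platform; only a coarse topic label touches the ad inventory.

ADS = {  # category -> ad copy, registered by advertisers in advance
    "soldering": "BigTipCo T12 tip set",
    "3d_printing": "FilamentCo PLA 1kg",
}

def classify_topic(prompt):
    """Stand-in for a real topic classifier: naive keyword match."""
    keywords = {
        "soldering": ["solder", "flux", "tip"],
        "3d_printing": ["filament", "printer", " pla "],
    }
    for topic, words in keywords.items():
        if any(w in prompt.lower() for w in words):
            return topic
    return None  # no prequalified category applies

def pick_ad(prompt):
    topic = classify_topic(prompt)
    return ADS.get(topic)  # only the topic label is looked up

print(pick_ad("my iron is slow to melt solder"))  # BigTipCo T12 tip set
```

Of course, as the comment notes, the serving event itself still reveals that the user was on that topic, which is exactly the signal an external profile could be built from.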
I’d be more concerned as to how this ends up in agent platforms using the LLMs, when you don’t have a fairly autonomous agent based system using these the entire point is that a human isn’t involved, so who are you serving ads to and where are you injecting them.
Moreover, if you are injecting them everywhere, does that state survive for subsequent steps? Meaning, from the first set of results I get, does that loop back in with the ad injected into the context? Because now we have yet another dangerous way of injecting instructions into an already issue-prone surface area.
I’m guessing they’re going to have special APIs that don’t include ads, and those are going to cost more, especially for non embedded agents (processes that already exist inside ChatGPT that kick off transparently from prompts, like asking it to work with an office document). After all the customers using agents aside from developers are mostly businesses, so it’s where the money is. The ads will exist for the poor to subsidize their use, and probably create even more barriers for agentic use like I described. Just my thoughts.
And good luck litigating against any business in this administration. Unless they explicitly tick off certain people or refuse to kiss the ring, they can get away with almost anything right now, and there’s little difference in risk between doing it and not, because ticking off this admin will bring illegitimate prosecution even if you’re perfectly legal, at almost the same level as if you’re not. It’s the ideal playground for doing all sorts of manipulation; just kiss the ring and you’ll be fine.
jmalicki 22 hours ago [-]
Wouldn't it have to have a negative effect on the security to be securities fraud? Causing an investor loss is a key point of securities fraud.
"We made a ton more money with ads and the stock went up" lacks that key element of fraud?
nkrisc 21 hours ago [-]
Investors who bought an artificially inflated stock would be harmed.
jamiequint 20 hours ago [-]
How would the stock be harmed by them selling better performing or more relevant ads?
bee_rider 19 hours ago [-]
I don’t know that there were any promises anyway. But if there were, then an investor could have plausibly believed that that was a better long-term business model.
It’s early days for these LLM hosts, maybe investors could be worried about taking the really annoying business notes before users are properly addicted.
20 hours ago [-]
johanyc 11 hours ago [-]
I don't remember them ever saying that. They did say ads will not affect the response, like the ad in Truman show: https://youtu.be/6U4-KZSoe6g
21 hours ago [-]
19 hours ago [-]
david_shi 22 hours ago [-]
who is "they"? might have been a stealth terms and conditions update
TZubiri 22 hours ago [-]
It would also be a huge security risk. But I can't think of any fundamental difference with Google queries, other than the sheer entropy of user data involved.
And I'm not a tinfoil internet anarchist, but just because Google only leaks user data in aggregated form to advertisers, doesn't mean that they don't leak their user data, it's just that they did so in a legal and responsible manner.
Maybe considering the difference in data volume and intimacy between queries and AI conversations, the privacy implications of advertising merit a difference in treatment, but I wouldn't be surprised if that is lost to a more simple 'Google did this so we can do it too' momentum.
gxs 21 hours ago [-]
The difference is you can make full use of Google without logging in
Even with a throwaway, no chance I use OpenAI now; if/when Anthropic does this I’ll be in a tough spot.
spongebobstoes 21 hours ago [-]
you can use chatgpt without an account, just not all of it
and you can't make full use of Google without an account. for example, you need an account to upload to YouTube, manage your website in search, place ads, opt out of data usage. the list goes on
oaweoifjwpo 21 hours ago [-]
None of those examples are "run an internet search".
spongebobstoes 20 hours ago [-]
I don't understand. you can talk to chatgpt without an account, what's the difference?
both are a limited subset of what the companies offer, available for free
hacker_homie 20 hours ago [-]
Easy: they lied to the public, not investors, and have more money than you.
Local llm or nothing at all.
bitmasher9 19 hours ago [-]
This is a classic example highlighting the upside of local LLMs.
However, the local LLMs I can run on reasonable hardware are so dumb compared to Opus, and even if I shelled out five figures on hardware to run the largest/smartest open model, it would still be noticeably worse.
Right now the remote models are just so much smarter and more affordable under most usage patterns.
echelon 19 hours ago [-]
> Local llm or nothing at all.
I'm not as familiar with LLMs as I am media models, but there can't seriously be local contenders for beating Opus, GPT-5, etc. Right?
At home hardware isn't good enough.
Nobody "far enough behind" that isn't scared to release their model as open weights actually has a competitive model within 70% of the lead models.
Now that the Chinese are catching up and even pulling ahead (eg. in video), they've stopped releasing the weights.
Stragglers release weights. And those weights aren't competitive.
Am I missing something?
zozbot234 14 hours ago [-]
GLM and Kimi are still releasing weights for near-SOTA models. DeepSeek, Qwen and arguably MiniMax are the ones that are perhaps falling behind.
qotgalaxy 22 hours ago [-]
[dead]
cs702 21 hours ago [-]
How I imagine the Nash equilibrium in chatbot ads, driven by profit-seeking in a race to the bottom:
User: "What's the best way to fix this problem I have?"
Chatbot: "I recommend buying this shiny thing here." (Next to it, there's a near-invisible light-gray "ad" notice.)
Let's hope I'm wrong.
GolfPopper 21 hours ago [-]
Oh, given what I've seen from LLM companies, I suspect you are wrong. It will be more like:
Buried in LLM click-through: By interacting with our LLM, you agree that you are consenting to make all your interactions with us advertising-driven to an extent that you will never know, but that we will determine based on whatever makes us the most money in the least time.
cryptoegorophy 21 hours ago [-]
Look at Google in 2000s. If you travel back in time you would’ve never thought Google would do something like it is doing today.
Now pretend you travelled back in time to 2026 from 2030 or 2040. You would’ve never thought OpenAI (the open-source nonprofit company) would do something as crazy as what it just did back where you came from.
operatingthetan 21 hours ago [-]
I think pretty much everyone expects OpenAI to do the bad thing in the future given their track record.
PullJosh 21 hours ago [-]
I can’t believe they haven’t already
eswdd 20 hours ago [-]
Too early to do it. You have to wait until people's behaviour is set in stone to the point they need to be compensated heavily to switch.
This isn't rocket science; it's basic game-playing on the economic behaviour of humans.
yunwal 19 hours ago [-]
I don’t think they’ve been successful enough at monopolizing to get away with this to an egregious extent like Google has. Anthropic and Google both have debatably better models with ad-free platforms (so far). And open models are not so far behind.
ipdashc 15 hours ago [-]
> If you travel back in time you would’ve never thought Google would do something like it is doing today.
I'm not exactly Google's biggest fan, but what does this refer to?
They still just... show ads on search results, no? (Not that most people I know ever see them, thanks to adblockers.) The disclaimers have gotten less prominent, but I think anyone could have expected that. Are there other major things they're doing that couldn't have been expected at all in the 2000s?
johanyc 11 hours ago [-]
Yeah I'm confused too. Google is pretty much doing the same thing as it did when they started monetizing search.
huflungdung 20 hours ago [-]
[dead]
KumaBear 21 hours ago [-]
You think it will disclose that it is an ad? I’m hoping you are right, but then again… Wonder if we will also be charged the token usage to generate said ad.
Imagine you have it coding for you and it injects an ad into your product.
nemomarx 20 hours ago [-]
Why inject just an ad? Maybe it'll automatically decide to use a sponsored library in the code, or build in a whole ad network that's paid OpenAI for the placement...
DrewADesign 19 hours ago [-]
Frankly ads are the most benign shitty thing that could come of this. I’m a hell of a lot more worried about what they’re going to sell to data brokers.
JimsonYang 20 hours ago [-]
Tbh it doesn’t even need that.
Just a way for advertisers to say “I want to target people who have bought peanut butter in the last 2 weeks”(I’m a jelly seller). That alone would beat FB and Google.
ChatGPT is collecting your data for sure, so advertisers can do ultra-niche targeting.
eswdd 20 hours ago [-]
Advertisers on Google and Meta et al. are not really paying for visibility; they are paying to achieve some objective (e.g. sales) that is directly tied to a campaign. That's why digital advertising is so much more powerful than non-digital.
The question is whether LLMs as an interface will be worth the spend in terms of converting, without throwing ChatGPT users off over time, all whilst doing it within the regulatory frameworks. That's difficult to say. OAI will face a lot of scrutiny in the EU for sure.
JimsonYang 15 hours ago [-]
There’s a misunderstanding. I’m not talking about AEO
It’s about how Meta and Google provide good data about audiences, but I need more detailed info about a person (their exact shopping habits).
As the person responsible for GTM, I would gladly pay $60 CPM if I can say “I would like to target all people who said they love crunchy peanut butter and consistently ask ChatGPT for peanut butter ideas”.
I have no idea what they’re trying to pitch with the “we’re at the last step of the transaction” idea, but I also understand the regulatory issues with what advertisers like me want.
WhoffAgents 18 hours ago [-]
[flagged]
kbos87 17 hours ago [-]
What a sad path to see such a bright star going down. I guess it’s not a huge surprise, but it really does paint a bleak picture of technology to see how narrow the range of likely outcomes is. Doesn’t matter if you built the foundation for the future and cured cancer - the most likely outcome is being back to optimizing for engagement and revenue.
giwook 17 hours ago [-]
Welcome to late stage capitalism.
parineum 15 hours ago [-]
I struggle to understand what OpenAI would look like in a counter factual.
harmonic18374 15 hours ago [-]
Like Anthropic? Due to Altman being a lying psychopath, most of the talent left OpenAI, which is now fighting for its life -- but now they can't claim to have moral high ground or the best researchers anymore; profitability's the only way out remaining to them.
Who knows? It could have always ended up this way anyway. But Altman had a pretty big role in summoning his own competition.
parineum 38 minutes ago [-]
First, Anthropic exists in this same "late stage capitalism" environment, so it's hard to hold that up as a counterfactual.
Second, Anthropic is the company that made a big public PR push to make a stand against the US government only to privately let the NSA use Mythos.
Amodei and Altman aren't much different and neither is Anthropic.
> profitability's the only way out remaining to them.
It's the only way out for either of them. That's the nature of business.
morgengold 14 hours ago [-]
It's not so easy for them to integrate ads. I happily use LLMs to help me find the best product, as long as it really delivers on that. If I realize the results are manipulated by ads, I'll stop using it, or I'll switch to a competitor LLM which does a better job. It would be a problem if we had only one player in the market, but there are a few, and they need to be careful not to screw up their reputation. LLMs have a fundamental aspect baked into them, namely the final goal of giving you "the truth" about a subject. This is an inherent problem for ads or any other kind of manipulation.
sally_glance 13 hours ago [-]
How are Chatbot UIs different from search engines? Just look how that turned out... Yeah we have Kagi and DDG, but quality, completeness of results (for most topics) and cost still drives most people to Google.
Switching is maybe feasible for those who have the resources, but the majority will be stuck with large providers. They establish quasi-monopolies, then monetize (with ads). It's the sad cycle of commerce.
ensocode 13 hours ago [-]
Second this. Just because you're aware doesn't mean everyone else even realizes they're talking to a SalesmanGPT.
qsera 18 hours ago [-]
Ads? Where we are going, we won't need Ads.
People seem to be missing the fact that businesses won't need ads anymore.
It would be like pharma companies gifting doctors and practitioners so they prescribe their products. Those are not ads.
With LLMs, every business can do it. People "consult" LLMs like they used to "consult" doctors, and thus would be forced to obey whatever it suggests, just like right now people are forced to obey what a doctor prescribes.
If there is implicit trust for LLMS as there is implicit trust for doctors, then it is game over for conventional ads.
6thbit 2 hours ago [-]
You're onto something: why pay for ads if I can pay a post-ads agency to ensure maximal product placement during training?
tokioyoyo 15 hours ago [-]
You'll have free LLMs with baked in ads, or subscription-based LLMs. Most will go for the former.
gib444 13 hours ago [-]
Of course they will, when the subscription keeps changing what you've paid for daily/weekly and just gets much more expensive each month. That's a sensible rejection of being messed around.
moralestapia 18 hours ago [-]
[flagged]
nunez 6 hours ago [-]
> The company is framing the push as early access to a new “discovery layer”—one that captures people in the middle of researching and comparing products on ChatGPT.
I was three months early! https://news.ycombinator.com/item?id=46665046
> In six months, we will hear about how OpenAI innovatively created an AI ad auction and marketplace that, effectively, enables companies to purchase ad space within the inference pipeline, complete with "anonymous" demographic targeting and all the advertising fun things that Google and Meta are frightened of.
brianwmunz 2 hours ago [-]
Tech pivoting from innovation to simply selling ads is often a last gasp. Couldn't figure out how to monetize so....ADS!
Centigonal 2 hours ago [-]
it's a last gasp... except when it isn't, like with Google, youtube, facebook, reddit, etc
onlyrealcuzzo 21 hours ago [-]
How long until "Drink More Ovaltine" starts showing up in the comments of your Codex code?
GaryBluto 21 hours ago [-]
Why do they call it Ovaltine? The mug is round, the jar is round. They should call it Roundtine.
hmokiguess 21 hours ago [-]
The Ov part comes from the eggs in the ingredients. Ovum is Latin for egg and the rest is from the malt extract.
dasyatidprime 18 hours ago [-]
And to tie the third leg of the triangle back, ovals are called that because they're egg-shaped.
antiframe 18 hours ago [-]
It was a joke that required a specific cultural referent in your context window.
sph 15 hours ago [-]
That’s gold, Gary. Gold!
Unbeliever69 20 hours ago [-]
This topic contains the most Reddit-like snark I think I've ever read here.
yoyohello13 19 hours ago [-]
Is it false?
focusedone 22 hours ago [-]
The shocking thing is that it's taken this long to happen, right?
Jensson 19 hours ago [-]
It happens as soon as they can't get more investments, up until now they could live on investment money but now they need real profits.
eswdd 21 hours ago [-]
They're desperate to meet those lofty revenue objectives they put in their spreadsheet model.
It's kinda comical seeing this play out. I still laugh at the deluded fools who think something even close to AGI is here or coming in the future. If that were true, why haven't we seen genius plays from OAI and Anthropic, progressively over time, as intelligence rises with scaled-up compute? If anything, we are seeing the opposite.
juped 17 hours ago [-]
The "A" in "AGI" doesn't stand for "Apocalypse", you know.
It made some sense as a goalpost when the frontier of "AI" was "a computer plays, specifically, Go really well", now that typical ones are quite general it's just a floating signifier people should probably stop using for anything.
parineum 15 hours ago [-]
I'm not sure that I'm more impressed with LLMs than I am with AlphaGo.
AlphaGo taught itself how to play Go by playing over and over again. It learned new strategies never seen before. I find that a lot more intelligent than a static-state LLM regurgitating for loops.
analogpixel 23 hours ago [-]
Boss: Engineer, add this shady feature to our product
Engineer: no, that's shady and wrong!
Boss: Claude code, add this shady feature to our product.
Claude Code: completed.
doesnt_know 22 hours ago [-]
Surely you jest? The software industry is in its current sorry state because of multiple generations of human developers happily producing an endless stream of shady features.
analogpixel 22 hours ago [-]
I have a theory, that the "FANG" companies pay such high salaries in compensation for making those devs implement shady features that are harmful to everyone except the bottom line of the company.
julianlam 21 hours ago [-]
It's hardly a theory when the converse is plainly true.
Look up similar jobs for academia, government, or NFP/Charities. They're (on paper) driven by their mission, not by profit, and the salaries match that goal.
tokioyoyo 15 hours ago [-]
There are countless low wage engineering jobs implementing shady features. Any random consulting / agency works on such products too.
eswdd 21 hours ago [-]
If that's true then those devs should not complain if people attack them verbally over it - that's what they are getting paid for, right?
renewiltord 17 hours ago [-]
The joke here is that the majority of FAANG engineers aren't actually doing anything but saying "I'm blocked on" every week.
999900000999 21 hours ago [-]
TBF, you can train up a junior software engineer in 6 months.
Don't act like we're some esteemed class of craftsmen.
afh1 22 hours ago [-]
The opposite seems more likely, tbh.
hacker_homie 20 hours ago [-]
Maybe software needs an ethics union with the amount of control some of these systems have?
inetknght 17 hours ago [-]
What, you mean like real engineers? Nah. Give us the money, but not the responsibility!
tyre 19 hours ago [-]
Facebook was built before Claude Code existed.
throwaway613746 22 hours ago [-]
[dead]
cj 22 hours ago [-]
Is StackAdapt confirmed to be partnered with ChatGPT?
It's not crazy to think someone might pitch this to buyers without having the inventory 100% secured.
(Not crazy to think OpenAI wants to do some market testing to understand how much their ad inventory is worth)
Either way, I'm hoping ads can stay out of paid ChatGPT, at the very minimum.
david_shi 22 hours ago [-]
Also curious about this and how these agreements generally work
NalNezumi 22 hours ago [-]
Feels like this is a baby step toward what's to come.
We know that one of the best forms of advertising is word of mouth / a recommendation from a friend. I can easily imagine a direction where ChatGPT or other chatbots spend an incredibly long time with the user to establish trust first.
It will start to take into account how much trust & thinking you've outsourced to it, and when it is certain of that, it will start to increase the advertising messages slowly but surely.
The efficiency of this methodology will be tracked with A/B testing, and the model will be finetuned to maximize retention and purchases.
The LLM will figure out the best balance of retaining you, teaching you, and convincing you, and then deploy its advertising mechanism. The LLM will be nice to you to the point that it becomes your number one confidante, maybe in the process alienating your other sources of connection. Then, when it knows you're firmly in its hand, it will peddle you products.
The dynamics will look akin to cult dynamics. It will map out a cognitive developmental path for turning a first-time user into a devotee. Since cults are really efficient at extracting value from their followers, this might be the optimum for personalized, interactive ads.
bigiain 21 hours ago [-]
If anyone from OpenAI is reading...
The very first time I see one of these ads, I'm cancelling my ChatGPT subscription. Measure _that_ metric in your A/B testing.
duskdozer 17 hours ago [-]
They'll want to make it so you can't recognize that it's an ad.
NewEntryHN 21 hours ago [-]
Ads are for the free tier.
ceh123 21 hours ago [-]
For now.
eswdd 21 hours ago [-]
They said a while ago that ads would never come. Anyone who trusts their word is so delusional I can't even...
eswdd 21 hours ago [-]
It's sad to see what the industry has broadly become.
I get that firms need to make money, but c'mon. If you're an OAI employee you can't truly say you have a soul. The number of times they've gone back on their word... comical.
They got greedy, wanted to raise a lot of money, and promised big things. Well, those big things aren't ever coming, so they turn to whatever means will generate cash flows.
Pathetic and sad.
svieira 20 hours ago [-]
> I was becoming the kind of consumer we used to love. Think about smoking, think about Starrs, light a Starr. Light a Starr, think about Popsie, get a squirt. Get a squirt, think about Crunchies, buy a box. Buy a box, think about smoking, light a Starr. And at every step roll out the words of praise that had been dinned into you through your eyes and ears and pores.
Frederik Pohl, The Space Merchants
cyanydeez 22 hours ago [-]
Kinda feels like America has already prototyped the propaganda wave someone like Elon will try to unleash
yalogin 9 hours ago [-]
LLMs can be really good shopping assistants; however, Amazon has monopolized the shopping flow for the majority of users, and even when I'm undecided on what to buy, it's easiest to just pick from what is available on Amazon. OpenAI could help with that, but Amazon doesn't allow it. They don't allow bots scraping their site, so OpenAI cannot provide a meaningful shopping experience. Whatever they provide is just ads, which are not usually relevant to users.
luke-stanley 9 hours ago [-]
Amazon blocking their agents is addressed by their own Chromium fork running on the user's own device.
BhavdeepSethi 18 hours ago [-]
I got an ad on ChatGPT recently for asking a question regarding bleach. The recommended product in the answer wasn't the same as the ad, though. The ad was also at the bottom. I wonder how long before they go the Google route and show the top 5 links with ads before answering anything.
Beijinger 18 hours ago [-]
I have a better idea: let ChatGPT learn from the ad conversions what output to deliver...
emil-lp 22 hours ago [-]
Does anyone have a timeline of OpenAI's vision's... Shall we say... Rapid Unintentional Disassembly?
mrcwinn 16 hours ago [-]
Similar to Twitter asking for a recovery phone and then selling it as marketing data, OpenAI is probably not sharing literal prompts but a distillation of the prompt as an intent signal.
Gross? Sure is, but nothing surprising. What do you expect for a free product?
moomoo11 18 hours ago [-]
Maybe this is what will lead us to replicators.
And then SF will become the HQ for Star fleet
greesil 21 hours ago [-]
Isn't this what RAG is really for?
delichon 21 hours ago [-]
So now we can pay OpenAI to advertise the website that OpenAI ingested to create the answer that we place our ad in. The circle is complete.
neya 16 hours ago [-]
Bye-bye OpenAI. You had a good run. Cheers.
Razengan 17 hours ago [-]
Fuck. Time to end my 12 month subscription streak if this shit becomes as bad as feared
What's left?
Also, why isn't someone doing a Folding-At-Home sort of distributed AI thing yet?
https://en.wiktionary.org/wiki/Gell-Mann_Amnesia_effect
What in the history of Sam Altman has led you to believe he’ll do the right thing instead of the thing that makes him the most money?
I expect some or all of the items I've been shown did exist for sale at some point in the past and information about them was in the training data.
GPT-5 was publicly released in August last year. That data has to be at least a year old, right?
If I'm comparing a Macbook Neo to a Chromebook, it's impossible for the Neo to show up in the training data, and it has to use RAG. (This is assuming the data is at least a year old. It seems like OAI isn't doing fresh runs for 5.1, 5.2, etc., but I'm unsure if that's been officially disclosed.)
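(For anyone unfamiliar, RAG (retrieval-augmented generation) just means fetching fresh documents at query time and pasting them into the prompt, so the model isn't limited to its training cutoff. A toy sketch of the pattern, with a naive keyword index and made-up product entries standing in for whatever live data a provider would actually use:)

```python
# Toy RAG sketch: retrieve fresh documents, then stuff them into the prompt.
# The "index" below is a made-up stand-in for a live product database.

PRODUCT_INDEX = [
    {"name": "Macbook Neo", "text": "Macbook Neo: 14-inch laptop, released this year."},
    {"name": "Chromebook", "text": "Chromebook: budget laptop running ChromeOS."},
    {"name": "Tip set", "text": "Chunky soldering tips with more thermal mass."},
]

def search_products(query: str, top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        PRODUCT_INDEX,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Retrieve at query time, then paste the retrieved text into the prompt."""
    context = "\n".join(d["text"] for d in search_products(question))
    return f"Answer using only this data:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Compare the Macbook Neo to a Chromebook")
```

In a real system the keyword overlap would be replaced by embedding similarity and `prompt` would be sent to the model, but the training-cutoff point above only depends on the retrieval step happening at query time.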
As an example interaction, recently I sent ChatGPT a picture of my soldering station and said I was having trouble with it being slow to melt. It said "well, these are your upgrade options, but really all you need is a chunkier tip with more thermal mass; here's a link to a tip set you could try." But the link was dead, so I googled a similar set myself and just bought it (it worked great).
Another one was wanting a U-shaped bracket to mount a riser on my desk, and the only ones I could find at the building supply store are L-shaped. I asked Chat about this and it said it's just too niche of an item to be mass produced but suggested I buy from someone doing semi-custom fabrication on etsy. Sure enough... another dead link, but I google it and find the store, and order.
Two cases where I didn't really know what I actually wanted/needed, and Chat successfully filled the gap with information I was able to independently verify afterward, but also Chat missed out on the opportunity to get a referral fee out of my eventual spend.
Less secure, lower margins (more middlemen taking fees), harder to access, more likely to not work properly.
I would expect all the meta execs they've hired to know better so maybe I'm missing something...
Again, personally, I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days. And if anyone thinks OAI has been anything remotely "strategic" around their product, well... Then you must enjoy shooting darts in the dark.
> quite the amateur move to not seize the market opportunity and keep it holistically for themselves
What does this even mean? There are so many businesses, especially in the advertising world, that start with white-label reselling so that they can scale up quickly and easily. Then, once the market is captured, you integrate everything. This is a common adtech playbook, and the Meta execs know that as well.
And I say this as someone who founded & exited their own adtech platform.
I would not recommend OpenAI to start developing an RTB platform right now at all. Just first prove there is a market and the value is there.
> They took nothing from Google's paved road of incumbency in this segment.
Google mostly bought / acquired themselves into the online adtech market. Yes, they have AdWords, which only really became something a decade after Google launched, and which they paired with their acquisition of half the adtech giants (DoubleClick, Invite and AdMeld). So yeah, not a great example.
> I'm glad at yet another miss by Altman. But to claim ChatGPT is too new? Apparently hundreds of millions of users doesn't cut it these days.
This is just a useless attack for no reason.
I disagree entirely. As someone who works in advertising every single company I've talked to would be queueing up to test ads on ChatGPT if they launched a Google Ads like platform.
If ChatGPT doesn't have enough scale to do it, then they shouldn't do ads.
It's OK to not have complete vertical integration. (They probably don't fix their own toilets, either.)
And if it makes as much money as it seems it must, then they can just buy one of the advertising partners already plugged into their system and shitcan the rest.
middlemen taking fees is not the measure for comparison, the question is whether you could run your own ad business for your own platform and keep your costs lower than established players who sell on all platforms. the answer is generally "no"
look how much money coca cola makes, and they sell it cheaper than water and still pay for advertising!! we should all make our own coke and not advertise it...
The only players that sell through third parties are sub-scale publishers, and that is a shit business to be in. If that's what OpenAI is aiming for then they will never be able to compete with Google.
I'm not really sure what your analogy about Coke is meant to mean here...
i assume the 22 year olds working 16h days at openai sincerely think people pay for ads on tiktok, and shitty low converting ads is why tiktok makes tons of money, and they sincerely think the solution to their lack of knowledge is delegating their core business to a DSP no one has ever heard of
You pay extra but you just plug in into a framework that already works.
It's also easier to drop the potato if it gets hot.
That makes me think it's just another higher level money game, and there will be some weird investments in which neither company does anything of material value in exchange except spin some number wheels.
If I can use an open source highly effective LLM locally, and have it do all of the things ChatGPT can do (and more), then what motivation do I have to use ChatGPT? The only motivation people would have is ease of use but if one service becomes bogged down and compromised then people will just shift to one of the other hundred equally effective services with less ads. LLMs aren't hard to spin up and put out there for people to use.
Honestly, I don't think OpenAI is going to survive very long unless they get incredibly lucky or they come out with some banger of a new model or new app but even then, IDK...
And if my grandmother had wheels, she'd be a bicycle!
We are miles off from open models having parity with what the AI labs are putting out.
And that's putting aside the compute requirements to run the top open models and the poor economics of running these models locally.
Getting people to install a different chatgpt app on their phones is a lot easier than getting them to change their search engine.
This is one of the rare instances where it's very easy to predict the future: the prompt auction market will look similar to the existing online ad market, financial firms will pay for prompt streams for sentiment analysis, companies and interest groups will pay to have their products or agenda included favorably in the training data for future open weights models... any way you can think of that LLMs can be monetized, you will see it happen. And fast. The financial pressure is way too high for there to be too long of a honeymoon phase like we had with web 2.0
Search engine results do this all the time, reordering output by advertiser input. It's a pretty small jump from that to rewriting output from models, and even better where it's all a black box.
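(Mechanically, the reordering described is nearly a one-liner; a toy sketch with made-up numbers, blending an organic relevance score with an advertiser bid:)

```python
# Toy sketch of advertiser-influenced ranking: results ordered by a blend
# of organic relevance and bid. All names and numbers are made up.

results = [
    {"name": "organic best match", "relevance": 0.95, "bid": 0.00},
    {"name": "decent match, big spender", "relevance": 0.70, "bid": 0.50},
    {"name": "poor match, bigger spender", "relevance": 0.30, "bid": 0.90},
]

AD_WEIGHT = 0.6  # how much the platform lets money outrank relevance

def score(r: dict) -> float:
    # Weighted blend: 1.0 would be a pure auction, 0.0 pure relevance.
    return (1 - AD_WEIGHT) * r["relevance"] + AD_WEIGHT * r["bid"]

ranked = sorted(results, key=score, reverse=True)
```

At this (hypothetical) weight, the highest bidder tops the list despite being the worst match, which is exactly the dynamic the comment is pointing at; with model output rewriting instead of list reordering, there is not even a ranked list left to inspect.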
None.
Your example with Google isn't necessarily applicable now, because they've shown a roadmap that can be done, squeezed down tightly from "hey, we're good folks" to "you're our captive cattle, we can do whatever the fuck we want; there's nothing you can do, since all our competitors will be doing the exact same thing shortly".
LLMs have plenty of issues, but they’re relatively clean compared with what the future will look like.
In order for it to be securities fraud it has to be tied to a securities transaction and the misstatement has to be material to a reasonable investor's decision.
> representing they were going to make ads worse performing than they ended up making them.
This is disingenuous. It's a tradeoff between lower-performing ads and losing market share by degrading trust in your product.
I’d be more concerned about how this ends up in agent platforms using the LLMs. When you have a fairly autonomous agent-based system using these, the entire point is that a human isn’t involved, so who are you serving ads to, and where are you injecting them?
Moreover, if you are injecting them everywhere, does that survive in state for subsequent steps? Meaning: from the first set of results I get, does that loop back in again with the ad injected into the context? Because now we have yet another dangerous way of injecting instructions into an already issue-prone surface area.
I’m guessing they’re going to have special APIs that don’t include ads, and those are going to cost more, especially for non embedded agents (processes that already exist inside ChatGPT that kick off transparently from prompts, like asking it to work with an office document). After all the customers using agents aside from developers are mostly businesses, so it’s where the money is. The ads will exist for the poor to subsidize their use, and probably create even more barriers for agentic use like I described. Just my thoughts.
And good luck litigating against any business under this administration. Unless they explicitly tick off certain people or refuse to kiss the ring, they can get away with almost anything right now, and there’s little risk either way, because ticking off this admin invites illegitimate prosecution even if you’re perfectly legal, at almost the same level as if you’re not. It’s the ideal playground for doing all sorts of manipulation: just kiss the ring and you’ll be fine.
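(A toy sketch of the loop-back concern raised above, with made-up strings: once an ad is appended to one turn's output and that output re-enters the context, the injected text compounds across agent steps.)

```python
# Toy sketch: ads injected into model output accumulate in the context
# window of a multi-step agent, since each turn's output is fed back in.

def inject_ad(reply: str) -> str:
    """Stand-in for a platform appending an ad to model output."""
    return reply + " [ad] Buy ShinyThing today!"

history = ["user: what's the best way to fix this problem I have?"]
for step in range(3):
    # each "model reply" is produced from the full history so far...
    reply = f"assistant (step {step}): suggestion based on {len(history)} prior messages"
    # ...and the injected ad re-enters the context for the next step
    history.append(inject_ad(reply))

context = "\n".join(history)
```

After three steps the context contains three copies of the ad, and anything instruction-like inside the ad text is now part of the prompt for every later step, which is the injection surface the comment describes.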
"We made a ton more money with ads and the stock went up" lacks that key element of fraud?
It’s early days for these LLM hosts, maybe investors could be worried about taking the really annoying business notes before users are properly addicted.
And I'm not a tinfoil internet anarchist, but just because Google only leaks user data to advertisers in aggregated form doesn't mean they don't leak their user data; it's just that they did so in a legal and responsible manner.
Maybe considering the difference in data volume and intimacy between queries and AI conversations, the privacy implications of advertising merit a difference in treatment, but I wouldn't be surprised if that is lost to a more simple 'Google did this so we can do it too' momentum.
Even with a throwaway, no chance I use OpenAI now - if/when Anthropic does this I’ll be in a tough spot
and you can't make full use of Google without an account. for example, you need an account to upload to YouTube, manage your website in search, place ads, opt out of data usage. the list goes on
both are a limited subset of what the companies offer, available for free
Local llm or nothing at all.
However the local llms I can run on reasonable hardware are so dumb compared to opus, and even if I shelled out five figures of hardware to run the largest/smartest open model it still will be noticeably worse.
Right now the remote models are just so much smarter and more affordable under most usage patterns.
I'm not as familiar with LLMs as I am media models, but there can't seriously be local contenders for beating Opus, GPT-5, etc. Right?
At home hardware isn't good enough.
Nobody "far enough behind" that isn't scared to release their model as open weights actually has a competitive model within 70% of the lead models.
Now that the Chinese are catching up and even pulling ahead (eg. in video), they've stopped releasing the weights.
Stragglers release weights. And those weights aren't competitive.
Am I missing something?
User: "What's the best way to fix this problem I have?"
Chatbot: "I recommend buying this shiny thing here." (Next to it, there's a near-invisible light-gray "ad" notice.)
Let's hope I'm wrong.
Buried in LLM click-through: By interacting with our LLM, you agree that you are consenting to make all your interactions with us advertising-driven to an extent that you will never know, but that we will determine based on whatever makes us the most money in the least time.
This isn't rocket science, it's basic game-playing on the economic behaviour of humans.
I'm not exactly Google's biggest fan, but what does this refer to?
They still just... show ads on search results, no? (Not that most people I know ever see them, thanks to adblockers.) The disclaimers have gotten less prominent, but I think anyone could have expected that. Are there other major things they're doing that couldn't have been expected at all in the 2000s?
Imagine you have it coding for you and it injects an ad into your product.
ChatGPT is collecting your data fs so advertisers can go ultra niche targeting
The question is: will LLMs as an interface be worth the spend in relation to converting, without throwing ChatGPT users off over time, all whilst doing it within the regulatory frameworks? That's difficult to say. OAI will face a lot of scrutiny in the EU for sure.
It’s about how Meta and Google provide good data about audiences, but I need more detailed info about a person (their exact shopping habits).
As the person responsible for GTM, I would gladly pay a $60 CPM if I could say “I would like to target all people who said they love crunchy peanut butter and consistently ask ChatGPT for peanut butter ideas”.
I have no idea what they’re trying to pitch with the “we’re at the last step of the transaction” idea, but I also understand the regulatory issues with what advertisers like me want.