AI/LLMs Plagiarism vs. Inspiration

No, he is claiming their Contribution Margin P&L is profitable. This is not misleading in the least. It's a different financial view and consideration. He never claimed that he was profitable under GAAP; in fact, just the opposite.
The problem remains that Contribution Margin P&L is primarily used for internal metrics. Talking CM P&L while discussing profitability in interviews & press releases is misleading at best. It’s more suspicious if you downplay or don’t contextualize it with overall P&L.*

Gross Margin

  • Used for company-wide, higher level reporting
  • Fixed overhead is included
  • Used by external parties to measure overall profitability
  • Is included in external reporting
  • Difficult to exclude costs; all COGS are included
Contribution Margin

  • Used at a product-level, internal analysis
  • Fixed overhead is excluded
  • Used by internal management to determine operational strategies
  • Strictly an internal reporting metric
  • Easier to exclude costs when shifted between variable and fixed
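To make the contrast between the two lists concrete, here's a toy calculation with entirely made-up numbers (revenue, variable costs, and fixed overhead are all hypothetical) showing how the same business can look healthy under one lens and thin under the other:

```python
# Toy P&L with made-up numbers to contrast the two margins.
revenue = 1_000_000
variable_costs = 600_000   # costs that scale with each unit sold
fixed_overhead = 300_000   # rent, salaries, etc. (part of COGS here)

# Contribution margin excludes fixed overhead entirely.
contribution_margin = revenue - variable_costs
contribution_margin_pct = contribution_margin / revenue

# Gross margin includes all COGS, fixed overhead and all.
gross_margin = revenue - (variable_costs + fixed_overhead)
gross_margin_pct = gross_margin / revenue

print(f"Contribution margin: ${contribution_margin:,} ({contribution_margin_pct:.0%})")
print(f"Gross margin:        ${gross_margin:,} ({gross_margin_pct:.0%})")
```

The same company shows a 40% contribution margin but only a 10% gross margin once fixed overhead is counted, which is exactly why quoting one without the other can mislead.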
If you’re talking to potential investors (“external parties”), Contribution Margin P&L is not the gold standard. While pros, regulators, competitors and experienced investors might be able to properly assess certain risks based on your CM, they’re not everyone. And even they will be asking for further information.





* I will concede that Amodei’s interviews tend to do the former more than the latter. That may be indicative that he’s being enthusiastic and optimistic as opposed to being deceptive. That doesn’t make me more inclined to believe him, though.
 

The problem remains that Contribution Margin P&L is primarily used for internal metrics. Talking CM P&L while discussing profitability in interviews & press releases is misleading at best. It’s more suspicious if you downplay or don’t contextualize it with overall P&L.*


If you’re talking to potential investors (“external parties”), Contribution Margin P&L is not the gold standard. While pros, regulators, competitors and experienced investors might be able to properly assess certain risks based on your CM, they’re not everyone. And even they will be asking for further information.





* I will concede that Amodei’s interviews tend to do the former more than the latter. That may be indicative that he’s being enthusiastic and optimistic as opposed to being deceptive. That doesn’t make me more inclined to believe him, though.
I think there's some new stuff here, so I'm going to go ahead and reply.

I think the disagreement comes down to treating "profitability" as if it's a single concept. In finance, it isn't. Finance uses multiple profitability lenses because each one answers a different question.
  • GAAP profitability - Is the whole company profitable?
  • Contribution margin - Is scaling profitable?
  • Gross margin - What is the cost structure?
  • Unit economics - Is each user profitable?
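To show how the four lenses can disagree about the same company, here's a sketch with entirely invented numbers (they don't describe any real company): a business that loses money under GAAP while every incremental user is profitable.

```python
# All figures are invented for illustration ($ millions).
revenue        = 500.0
cogs           = 350.0   # cost of goods sold, fixed production overhead included
variable_costs = 300.0   # the slice of costs that scales with usage
opex           = 400.0   # R&D, training runs, G&A on top of COGS
users          = 10.0    # millions of paying users (hypothetical)

gaap_profit = revenue - cogs - opex                             # whole company
gross_margin_pct = (revenue - cogs) / revenue                   # cost structure
contribution_margin_pct = (revenue - variable_costs) / revenue  # scaling
# $M contributed per M users = dollars per user (the millions cancel)
contribution_per_user = (revenue - variable_costs) / users

print(f"GAAP profit:         ${gaap_profit:+.0f}M")  # deeply negative
print(f"Gross margin:        {gross_margin_pct:.0%}")
print(f"Contribution margin: {contribution_margin_pct:.0%}")
print(f"Contribution/user:   ${contribution_per_user:.0f}")
```

Each number answers its own question: the GAAP line says the company as a whole is losing money, while the contribution margin says each additional user adds money rather than losing it.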
So when a Big Tech CEO talks about contribution margin, they’re not replacing GAAP - they’re answering a different question, one investors care about: “Does adding more users make money or lose money?”

That’s why companies like DoorDash, Uber, Lyft, and Netflix report contribution-margin-style metrics externally. It’s not deceptive; it’s the standard way to talk about marginal profitability.

Your point about context is fair - any metric can mislead if you pretend it answers a question it wasn’t designed for. But in this case, it seemed he was clearly providing context around why his company still had merit despite having negative total profitability.

I get being untrusting of him and Big Tech in general - I'm probably one of the biggest skeptics when it comes to CEO motives, but I don’t think this specific thing is evidence of deception. It’s just a different profitability lens being used to answer a different question.

Example Financial Reporting
DoorDash Financial Statements (see last 2 lines)
[Attached image: DoorDash financial statements]
 

Your point about context is fair - any metric can mislead if you pretend it answers a question it wasn’t designed for. But in this case, it seemed he was clearly providing context around why his company still had merit despite having negative total profitability.
Like I said, in interviews, he comes across as hyper-enthusiastic instead of deceptive (to me).

But the financial press is definitely divided on whether Anthropic is investment worthy or not. (Though most agree that it’s one of the most likely to evade bursting due to the oft-predicted bubble.)
 

But the financial press is definitely divided on whether Anthropic is investment worthy or not. (Though most agree that it’s one of the most likely to evade bursting due to the oft-predicted bubble.)
Yeah, I'm not saying they're a good investment. I have no idea. But I know analyzing any big tech business while it's still in its growth and market-domination phase is much different than analyzing almost any other type of company. It's always astronomical debt and chasing user counts, and then one day flipping the switch and increasing monetization (ads, tracking and selling info about you, subscriptions for ad-free or more powerful versions, etc.).

A lot of companies with that model do fail. But the ones that make it tend to make it really big.
 

I'm not at all in finance. But my understanding is that most of these companies are startups. Huge startups, with tons of investment, but startups still. If I'm looking at a startup for investing (let's say stocks), I'm not looking at the same metrics that I would with an established company.

I haven't followed the Anthropic CEO's claims. But I remember reading that, as is, when users use their model, the company earns more than it spends. Which means the service is profitable (or would be, if that's true). But all these companies are in a race and are injecting insane amounts of money into training models to stay ahead of the curve. That part may not be profitable, but investors are aware of it and are investing on the off chance that Anthropic ends up on top at the end of all this.

Maybe someone from finance can answer a question for me. I know that for many years, Amazon had a really low profit line compared to its overall revenue, because they were reinvesting so much in the business to become a market leader. But investors understood that; it was just a long-term play. What's the difference between Amazon reinvesting in one-day shipping, new trucks, new warehouses, etc., and an AI company investing in new models?
 

If I'm looking at a startup for investing (let's say stocks), I'm not looking at the same metrics that I would with an established company.

If you're investing in startups you wouldn't be able to buy stock. You'd be doing some kind of early-stage private equity investment, e.g. "angel" investing (direct investment of your own money) or through a venture fund.

And the math behind determining the value of an early-stage company is kinda funny. What you do is estimate...guess, really...what value you think the company will have at the end of some time period, and work backward. So let's say that you think the company will...or could...be worth $100 million in 5 years, and they are raising $1 million today. You want to make 20x your money (because in venture investing you assume you'll lose most of your investments, so the ones that succeed have to have fabulous returns). To get back $20m on a $100m sale you need 20% of the company, so its "value" today must be $5m in order for your $1m to buy 20%. (There's a lot of fine print, but that's the gist of it.)
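That backward-from-exit arithmetic can be written out directly. Same hypothetical numbers as in the example above, nothing here is a real deal:

```python
# Back-of-envelope venture math; all numbers are the hypothetical
# ones from the example above, not real deal terms.
exit_value = 100_000_000      # guessed company value in 5 years
investment = 1_000_000        # amount being raised today
target_multiple = 20          # want 20x because most bets go to zero

needed_proceeds = investment * target_multiple   # $20M needed at exit
ownership = needed_proceeds / exit_value         # fraction of company required
implied_valuation = investment / ownership       # what the company must be "worth" today

print(f"Need {ownership:.0%} ownership, implying a ${implied_valuation:,.0f} valuation today")
```

Notice that the "valuation" falls out of the investor's required return and a guessed exit value; nothing about the company's current financials enters the formula at all.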

It's all kind of voodoo.

Maybe someone from finance can answer a question for me. I know that for many years, Amazon had a really low profit line compared to its overall revenue, because they were reinvesting so much in the business to become a market leader. But investors understood that; it was just a long-term play. What's the difference between Amazon reinvesting in one-day shipping, new trucks, new warehouses, etc., and an AI company investing in new models?

It has to do with amortization, i.e. for how long the investment costs can be spread out. Amazon's investments build infrastructure that will, in theory, serve them far into the future. Even the trucks, which don't last forever, at least have some number of years of life.

The question with these LLMs is whether the operating profit will replace the amount invested in them before they become obsolete.
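A minimal sketch of that payback question, with made-up figures: the same investment and the same operating profit pass or fail depending only on how long the asset stays useful.

```python
# Hypothetical numbers: a $100M investment that throws off $30M/year
# in operating profit for as long as the asset remains useful.
investment = 100.0               # $M invested up front
annual_operating_profit = 30.0   # $M generated by the asset each year

def payback_ok(useful_life_years: float) -> bool:
    """Does operating profit recoup the investment before obsolescence?"""
    return annual_operating_profit * useful_life_years >= investment

print(payback_ok(10))  # trucks/warehouses with a ~10-year life: True
print(payback_ok(2))   # a model that's obsolete in ~2 years: False
```

The worry about frontier models isn't the size of the check; it's that the useful-life term in that inequality may be very short.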
 

What's the difference between Amazon reinvesting in one-day shipping, new trucks, new warehouses, etc., and an AI company investing in new models?

From a reductively basic POV, one answer is that Amazon delivers a measurable product and the AI companies in question don't (yet).

When people buy from Amazon, you can show whether they received the product ordered or not. You can quantify things like delivery time, reliability, price comparisons, etc. There are a couple of grayer areas, like how often you get a knockoff or product that doesn't completely match its description, but even then you can record failures vs successes and measure how good the service is and also show when Amazon improves from reinvesting.

When people buy large scale AI, what they receive can't be measured as easily. It's often less clear what money is saved by using AI instead of people. Part of that is because the implementation takes a long time before you get metrics back. Part of that is because major implementation includes management and structural changes (i.e. layoffs). And part of that is you end up with the aforementioned multiple rounds of accounting. Add in the social, moral, and legal issues, and things get even more wonky.

The industries that can easily show the value of AI are the ones that have been using it for years already, like coding and image analysis.
 

From a reductively basic POV, one answer is that Amazon delivers a measurable product and the AI companies in question don't (yet).

When people buy from Amazon, you can show whether they received the product ordered or not. You can quantify things like delivery time, reliability, price comparisons, etc. There are a couple of grayer areas, like how often you get a knockoff or product that doesn't completely match its description, but even then you can record failures vs successes and measure how good the service is and also show when Amazon improves from reinvesting.

When people buy large scale AI, what they receive can't be measured as easily. It's often less clear what money is saved by using AI instead of people. Part of that is because the implementation takes a long time before you get metrics back. Part of that is because major implementation includes management and structural changes (i.e. layoffs). And part of that is you end up with the aforementioned multiple rounds of accounting. Add in the social, moral, and legal issues, and things get even more wonky.

The industries that can easily show the value of AI are the ones that have been using it for years already, like coding and image analysis.

I think you're comparing apples and oranges.

The value that companies get from using AI may be hard to measure, but so is the value I get from reading a book I buy from Amazon.

What we...or what potential investors...do know for sure is that people seem to be willing to buy books, and other people seem to be willing to buy LLM tokens, and we have insight into the prices they will pay, and the operating margins on those products can be calculated.

As for "social, moral, and legal issues"....Amazon struggles with the same things. The questions may differ, but the challenges are not categorically different.
 

I think you're comparing apples and oranges.

The value that companies get from using AI may be hard to measure, but so is the value I get from reading a book I buy from Amazon.

I did say it was reductively basic.

But measuring the value of the book you get from Amazon is trivial. You know the cost of the book at other vendors. You can compare the cost of the book, the cost of the shipping, the delivery times, etc. Amazon could show that they were cheaper early on, and then spent millions reinvesting to show that they could also be super fast, and reliable, and returnable, etc. All measurable stats. Much easier than the accounting being discussed above.

Large scale AI is trying to show that it's cheaper than employing people. And the act of testing that is much more expensive than comparing the cost of purchased goods.

As for "social, moral, and legal issues"....Amazon struggles with the same things. The questions may differ, but the challenges are not categorically different.

The legal issues aren't even in the same ballpark. And Walmart and other big-box stores had already done the heavy lifting of blazing the trail for the social and moral issues. Amazon really only had to deal with scaling.
 

Large scale AI is trying to show that it's cheaper than employing people.

I don't actually think that's true. It's mostly buyers, as well as commentators, talking about replacement. AI companies tend more to talk about augmentation and productivity.

Now, you may believe (and it certainly may be true) that AI executives think the value they are going to provide is replacing people, but I don't really see them trying to sell their products with that tag line.

EDIT: And I'm sure it would be easy to find quotes that contradict me. And vice versa. I don't really want to get into a debate about which one of us is right; I'm just putting my perception out there.
 
