Not that I’m a friend of OpenAI, but ChatGPT has relatively fine-grained “personalization” options, and it was never sycophantic with the “efficient” tone for me. Rather the opposite, sometimes it seemed slightly indignant when I criticized it.
They want their products to not be used for some purposes. That's fine; that is their right. But that doesn't stop at direct purchases. If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there, and so is the risk that Anthropic will try to prevent their product from being used in that fashion.
I think Anthropic wants to have their cake and eat it too. You can't take a principled stand against something and then be shocked that the thing you are taking a stand against might think you are a risk.
> I think Anthropic wants to have their cake and eat it too. You can't take a principled stand against something and then be shocked that the thing you are taking a stand against might think you are a risk.
Is it a principled stand or not? In your first comment, you said 'Anthropic's "moral" stances are bullshit', that their actions here are merely (or at least primarily) a successful marketing exercise, and that the result is "a win for both sides". Are you now acknowledging that it's a costly, risky action on Anthropic's part? Because you haven't said anything to refute that; you've just changed the subject.
I believe that Anthropic is trying to frame it that way. My point is that if you accept their framing, then this whole thing falls apart. That is true regardless of whether it's actually principled or not.
> Are you now acknowledging that it's a costly, risky action on Anthropic's part?
I'll acknowledge it's a risky strategy. Whether it's costly depends on how that risk plays out.
> If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there, and so is the risk that Anthropic will try to prevent their product from being used in that fashion.
You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
It remains to be seen whether they'll actually be able to enforce this. But it clearly goes far beyond what would be justified by the kind of supply chain risk you are describing.
> You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
>> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
If Anthropic is really serious about their moral stances, they could themselves refuse to sell indirectly to the US military. Militaries are ultimately about killing people. So yes, if the supply chain risk is that Anthropic might suddenly pull out of military projects and leave people depending on them high and dry, this seems like an appropriate response.
> So yes, if the supply chain risk is that anthropic might suddenly pull out of military projects and leave people depending on them high and dry, this seems like an appropriate response.
But it is so much broader than that! He's saying that if any part of a company does any business with the US military, said company cannot do any business with Anthropic. How could that possibly be necessary to avoid the risk "that anthropic might suddenly pull out of military projects"?
They have not; a social media post does not satisfy the requirements of 10 USC section 3252.
They are required to notify Congress (they have not), prepare a report with specific sections (they have not), and the reasons must fall within a set of categories outlined by statute (this does not).
There will be a court fight and they will lose, just like they lost the tariff battle, through sheer incompetence.
(Trump's post on Truth Social was actually fine. He said the USG would stop doing business with Anthropic, which is within its legal rights. Hegseth's follow-on post is the problem. It is possible that Trump did not expect or want Hegseth to do that, and that this was meant as bluster to bump along the negotiations; Hegseth has a recent history of stepping out of line within the administration and irritating people like Rubio.)
If the USG can mandate that everyone who works for a company that ever took a federal contract be genetically engineered, then I think they can tell people to not use Claude.
That's part of the recurrent confusion with this administration. In previous administrations, including Trump 1, people didn't need to spend a ton of time thinking about what it means to make a legally effective proclamation, because there was a baseline of competence. When a government official announced "We're doing X", they would do so as a summary of a large amount of legal process, with the intent and effect of causing X to be true. If you went to challenge it in court, of course, you'd have to identify some specific action as the target, but everyone would understand that this is a formalism.
Here, Hegseth has simply made a social media post. He did not publish any official investigation or the report it would lead to. He did not explain what legal power would permit him to impose all the restrictions the post claims to impose. There is not, five hours later, any order on an official government website about it. So we have a real question. If a Cabinet secretary posts "I am directing the Department of War to designate...", does that in and of itself perform the designation, or is it simply an informal notice that the Department of Fascist Neologisms will perform the designation soon?
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models, which would be deeply ironic.
This is ridiculous. These aren't unreasonable demands and the government has tools to compel tech companies to support the country regardless of any "collective action" shenanigans -- ask your AI to tell you about the Defense Production Act and the history of its use.
The demands are not only unreasonable, they are in violation of the contract the DoD signed. Do you really think LLMs should be used in autonomous weapons systems? Do you think the government should use them for mass domestic surveillance? Is that reasonable?
Are you an American? Do you understand that your safe, easy life depends on a mostly autonomous nuclear deterrence capability maintained by the military you oppose? Think deeply about why you still have the right to free speech, and what it takes to sustain those rights.
But even if it did, the nuclear bit is a bold claim, especially when one of the most famous nuclear escalations in US history was resolved by cooler heads going around traditional war hawks and negotiating instead.
What a uniquely American view of the world - yes the only reason you have free speech is by threatening to nuke out of existence the rest of the world lmfao get a load of yourself
The poster's question was itself a deflection, and your response is moral blackmail. Why don't you answer my question? Why are you deflecting? See how that works?
So your position is that the United States doesn't get to have its own Skynet, because Skynet is bad, and that if it really wants one it should fork the Chinese Skynet so that it can have a Skynet if it wants it so much.
Do you see the problem here? I genuinely don't think we would've won WWII if these people were running things back then.
Without English and German scientists and engineers, the United States would not have had a first nuclear weapon or the first successful rocket to land on the moon.
The United States government held scientists at essentially gunpoint in secret towns to make the bomb happen. Not sure what your point is, other than to note that in a previous era people had a better gauge of what time it was.
What a ridiculously nonsensical statement. Several scientists refused to participate, and at least one left part way through. Nobody was held at gunpoint.
Are you saying that we should consider the Chinese government to be an existential threat and menace to world peace on the same level as Nazi Germany?
What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?
There is no evidence that this was a condition of the deal for working with the government on this. The PRC already is a Total Surveillance state. The claim Anthropic makes is very specific: they feel the law has not caught up to how AI can be used to aggregate very large amounts of data that can be obtained without a warrant through data brokers. The government already does this. Maybe you agree with Anthropic's point here, and it's certainly a good one, but they are building a face-saving argument on top of what is already established precedent. Raising an is-vs.-ought dichotomy as a red line is ridiculous.
At the end of the day I think many people simply want the United States to lose this race so they can feel good about their principles.
Okay, but then why is that also seemingly a red-line must-have for the Department of War? Isn't it just a tool of domestic surveillance and counterinsurgency for them? Seems like a distraction from any real U.S. national security objectives.
It’s not, the memo that set all this off says nothing about the Terminator or Big Brother. The real objective in this case is that if Anthropic sells the United States a weapon then the United States’ elected leadership gets to decide how to use it. It is not more complicated than this.
Also people like me who are paying for a 20x Claude Max subscription and feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment of OpenAI. It's just a drop, I guess, but it's probably not the only one.
No offense, but this is where having immigrants throughout the power structure of these companies becomes an issue. We have an administration that clearly is not above using all avenues to apply pressure to get the things they want done.
How can we expect the VPs of these companies to make tough decisions like this when half of them can be pressured via immigration status? It's hard enough for a normal citizen to stick their neck out in these circumstances.
None of them are 'good'. Execs at Anthropic just perceive the long-term damage from a potential Snowden-level leak showing how their model directed a drone strike against a bunch of civilians as higher than the short-term loss of revenue from the DoD contracts.