apawloski's comments

Great news for people who had to bend over backwards pretending this disruptive, nakedly corrupt behavior was "good, actually."

But unfortunately, there are other channels for them to effectively do the same thing, as discussed in oral arguments. So still not a major win for American manufacturers or consumers, I fear.


> Great news for people who had to bend over backwards pretending this disruptive, nakedly corrupt behavior was "good, actually."

Actually they’re still doing it. I saw it not 2 minutes after seeing this post initially. The justifications for why they were “good, actually” have gotten increasingly vague though.


Sure, but now SCOTUS can say they are not a rubber stamp for POTUS. "See, we just ruled against him. Sure, it's a case that doesn't really solve anything and only causes more chaos, but we disagreed with him. This one time."

> ...but we disagreed with him. This one time.

They've actually done so numerous times already and have several cases on the docket that look to be leaning against him as well. There's a reason why most serious pundits saw this ruling coming a mile away, because SCOTUS has proven to not be a puppet of the administration.


Yeah.

If you look a little closely you'll see their current project is to establish the "major questions doctrine," which ultimately reduces executive power by stopping Congress from giving it all to the executive. It looks pro-POTUS when it reduces the power of executive agencies, and it looks anti-POTUS when it reduces the power of executive orders. It's really about resetting what powers Congress can delegate.


If so, that’s great. Congress has long been too complacent, content to wait for its own party’s turn to exploit presidential overreach.

It is not. The conservative justices are working to create an imperial presidency with no checks, except on major economic issues that threaten to harm them directly.

And even this ruling had 3 of them objecting, claiming tariffs should stand.


>because SCOTUS has proven to not be a puppet of the administration.

Several justices are openly taking bribes


[flagged]



Granting the argument that these are bribes, I don't see how one (not several) justice taking bribes from not Trump means the Court is in Trump's pocket.

I think it's already clear (https://news.ycombinator.com/item?id=47093049) that you struggle a bit with causality.

https://www.citizensforethics.org/news/analysis/harlan-crows...

> Harlan Crow is more than Supreme Court Justice Clarence Thomas’s secret patron—he’s also deeply intertwined with the shadowy world of Republican dark money. In fact, Crow personally took part in the creation of the post-Citizens United dark money system and secretly helped bankroll some of the new groups.


I'm waiting for the link between _Trump_ and alleged bribes to Clarence Thomas.

Despite the "where there's smoke there's fire" idiom, smoke is not fire. If you see smoke, you still have to find the fire before you call it fire.

By that analogy, you're linking smoke A to smoke B to smoke C and claiming Fire A caused Fire C. The same broken logic you used in the linked thread.

Proposing an explanation that fits the facts doesn't prove that explanation correct and, more importantly, it doesn't disprove any other explanation that also fits the facts.

Anyway, I stopped responding to the previous thread because your conspiratorial thinking is impervious to argument. If I had noticed it was you I was replying to, I wouldn't have replied.


> I'm waiting for the link between _Trump_ and alleged bribes to Clarence Thomas.

Republican activists have been bribing Thomas for decades. A Republican president is in office with… a significant need for friendly SCOTUS decisions, and he got to appoint several of the justices.

Connecting those dots seems... trivial?


Except for all the other blatantly unconstitutional rulings in his favor. The presidential immunity one will go down in history as a black stain on America and the courts.

And still, this current ruling was a 6-3 vote.


I was flabbergasted that SCOTUS actually said that the concept of no man being above the law had caveats.

Earnestly, I think you need to actually read that opinion. They said the president is immune for some of the things he does, and they pushed it back down to the lower courts to flesh out the categories of official acts they laid out.

A hallmark of the Roberts court is leaving something technically intact, but practically gutted and dead.

You can still technically bring charges against the president for things they do while in office.

Practically speaking, after that ruling, you cannot, short of hypothetical scenarios so incredibly unlikely and egregious that even the incredibly unlikely and egregious acts of this administration don't meet that bar.


AFAIK bringing charges in office had much less to do with that case. It was dismissed because he was elected president, which seems more like a pacing problem for the prosecution. In office, they are the prosecution's boss. You're never gonna be able to charge a sitting president. That's what impeachment is for. Then you prosecute.

It was a pacing issue only because the Supreme Court created a lawless situation. The current state of things is literally their ideological project and their work succeeding.

The initial indictment wasn’t until Aug 2023, 3 years into Biden’s presidency.

There is no just world in which that man is not in prison for Jan 6th and his corrupt handling of classified documents.

I never said the world was just. But that doesn’t mean the Supreme Court’s decision was as blatantly ideological as everyone imagines. Thomas’s concurring opinion was blatantly ideological, as all his opinions are.

Except for the 3 that dissented

Kavanaugh's dissent is honestly deranged.

Yep.

The president doing horribly fascist things with ICE like obliterating habeas corpus? Using the military to murder people in the ocean without trial? That's fine.

Screwing with the money? Not okay.

See also how the prez is allowed to screw with any congressional appointees except the Federal Reserve.


When they rule for Trump it’s proof they are just a rubber stamp. When they rule against Trump it’s somehow also proof they are a rubber stamp?

SCOTUS rules for the rich and powerful. Most of the time Trump is aligned with them. Sometimes he does dumb shit like tariffs, or things that upset the order the rich and powerful want to maintain, and they rule against him.

How do you get that from what I wrote?

How do you not see how they got that from what you wrote?

The argument is obviously that this is not enough to disprove rubber stamping.

"also proof" is a strawman, plain as day.


Is this a serious question? Hahah

The damage goes far beyond the wallets of businesses and consumers. The unilateral, arbitrary tariff setting has little to do with money and everything to do with the power it gave Trump. And it was one of the primary instruments used to destroy relationships with our foreign allies, including our closest neighbor.

To that point, the tariffs were always valued relative to the overall advantage they gained as leverage in negotiations; now the issue is what other forms of leverage remain. Whether the outcomes of the agreements are good or not is one thing, but there's room for the argument that tariffs are a better form of leverage than the other available options.

Agree. It is harder to manufacture in America when the party leader breaks critical parts of your supply chain with rapid and unpredictable tariff changes. It is impossible to lower consumer prices on a good by raising taxes on it.

This is not even mentioning the astounding corruption of a president and his family personally and directly benefiting from these tariffs threats.

Does the party not understand the realities of this? Do they understand and are just lying about it because they're afraid of the leader? Afraid of admitting that they're wrong? I believe people are usually rational but I do not understand a rationalization where choosing to harm American manufacturers and consumers on the whims of a visibly corrupt leader is good, actually.


I think there are a lot of people who recognize these changes as not very durable and don't see an immediate political benefit to opposing them right now.


Can you clarify what you mean by "not very durable?"


The US presidency is very short.


They're assuming normalcy will return after the senile tenant-from-hell is either evicted from his taxpayer-provided housing or just keels over, ignoring that the senile guy writes angry screeds about how he's not going anywhere and was put and is kept there by a whole conspiracy of enablers.


Why wouldn't these changes be durable?

We're a product of a very, very strange time and place in history where the average person had at least some recourse against tyrants. From prehistory to about the mid-point of the 20th century, if you were alive on planet Earth, there's a near guarantee that you lived life basically as a possession of some person or family who controlled where you lived, where you could go, what clothes you wore, what work you could do, whether or not you would be educated, who you would marry, if you would have children, which god (if any) you could worship, what you could say, and even whether you lived or died.

That was your existence.

This whole thing where you have some control over your destiny? That's the fragile set of changes. Someone behaving like Trump is historically insanely durable.


If it's not a serious case or a national security threat, why impose de-naturalization quotas? Surely if there are real threats out there we should be dedicating the energy to those?

(Also since you brought up Obama, why was Obama able to deport so many more people than Trump? And able to do it without terrorizing US cities with secret/poorly trained police, or needing a DHS with a larger budget than most other countries' militaries?)

You're fixated on a "technically this is legal" argument. But you're (perhaps willfully) missing the larger repercussions. This administration has lied and misled about their opponents committing fraud. You know they are not acting in good faith. So why would we want to further empower capricious, inconsistent, and politically motivated behavior?


You need to enforce it because illegal immigration is harmful in and of itself, even if the immigrants aren’t criminals or national security threats. Why do we enforce speed limits even when the person doesn’t cause a serious accident? Because the point of the law is to create a deterrent effect that compels people to follow a certain process.

Obama had an easier time deporting people because, at the time, most people in his party accepted the view that illegal immigration is harmful even without some other crime: https://www.foxnews.com/media/2010-obama-clip-goes-viral-whe.... Back then, even most Democrats embraced requiring immigrants to assimilate. If you think assimilation is important, then it naturally follows that we have to control the number of immigrants at a level where America changes them before they change America. Today, many of them reject assimilation in favor of multi-culturalism. If you embrace multi-culturalism, it’s hard to justify any limit on the number of immigrants. And at that point, illegal immigration just becomes a technicality.


Can you elaborate more about how you think it's harmful despite having nothing to do with crime?


For the same reason people fishing without a license is harmful even if it's not otherwise criminal. It's not about one fish. It's about a system that's designed to avoid social harm by limiting the aggregate volume of an activity, and people fraudulently bypassing those limits.


Does fishing without a license warrant the same large-scale violent carceral approach that DHS is taking? That would be an insane, disruptive overreaction for something that poses no public safety danger.


> So why would we want to further empower capricious, inconsistent, and politically motivated behavior?

Well because I want the laws enforced. Other politicians had my whole life to enforce immigration law and they chose not to. If it's between this and unchecked immigration status quo, I choose this. This is a lesson to respectfully enforce the rule of law and the will of the people lest they enforce it disrespectfully later.


What Trump proved was that prior administrations simply chose not to enforce the immigration laws. Last year was the lowest number of border crossings since 1970, a nearly 90% reduction compared to 2022: https://www.pewresearch.org/short-reads/2026/02/02/migrant-e.... It was accomplished by simply choosing to enforce the law.


> It was accomplished by simply choosing to enforce the law.

They accomplished it by terrorizing people based on the color of their skin. There's nothing "simple" about creating a gigantic secret police force. There's nothing "lawful" about blatantly ignoring court orders.

You already conceded that there is no public danger. Your argument boils down, yet again, to Great Replacement nonsense about immigrants being bad for America.


> Well because I want the laws enforced.

All laws? Because there are several that the administration are actively breaking. Surely you want those enforced too? How about court orders?

> Other politicians had my whole life to enforce immigration law and they chose not to.

I mean, Obama was way more effective at deporting illegal immigrants than Trump. Even by raw numbers. So I'm not sure how you can honestly argue that de-naturalization quotas are necessary now, when they weren't before for an even more effective administration.


For expensive GPU instances I have a crontab one-liner that shuts the node down after 2 hours if I don't touch an override file (/var/run/keepalive).

    */5 * * * * [ -f /var/run/keepalive ] && [ $(( $(date +\%s) - $(stat -c \%Y /var/run/keepalive) )) -gt 7200 ] && shutdown -h now
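
Note that the -f test means the shutdown never fires if the keepalive file doesn't exist, so something has to create it when the node comes up. A minimal companion entry, assuming this all lives in root's crontab:

    @reboot touch /var/run/keepalive

From there, running "touch /var/run/keepalive" in any shell on the node resets the two-hour countdown.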


For somebody so concerned about China, how can you be so naive about the consequences of isolating America on the global stage? Actively destroying our alliances is an unforced error. How do you see our military working without the global logistical networks required to project US power?


It's so baffling to see you on this site consistently implying that Italians, Germans, or $WHOEVER are somehow worse Americans. Because if that were true, then you'd also have to acknowledge that you and I are worse Americans, which I don't think you believe.

And in general, your obsession with the British is strange to me, because as you note, most Americans are not British and it's been that way for most of American history. Of course, there have been many great British Americans. But if we're weirdly keeping score, it seems obvious that there would be a larger number of great Americans who weren't British?


For immigration policy, the issue is the aggregate cultural, political, and social impact of large groups of immigrants. It has nothing to do with individuals.

Cedar Rapids, Iowa, reflects the impact of mass German immigration. Little Bangladesh in Queens reflects the impact of mass immigration from my country, Bangladesh. Would I rather live in a country where the government, institutions, etc., were like Little Bangladesh, or like Cedar Rapids? That’s not even a serious question. My fear about immigration is that, over time, the country will become more like Little Bangladesh and less like Cedar Rapids.

Most Americans aren’t British, but most Americans do carry on British culture and norms to varying degrees. If American soil really was magic, and you could take 100,000 Bangladeshis and they’d become cultural New England Puritans instantly, I’d be in favor of open borders.


5 years ago did you picture yourself defending the United States invading Greenland? Absolutely wild to watch the Overton Window move.


It's been fascinating in a very morbid way. And it makes you wonder: where is the line? Or even: Is there a line?


Yet another incoherent policy from this administration that will be interesting to see people defend. Why does Maduro get invaded and captured, but convicted drug smuggler (and ex-Honduran president) Juan Orlando Hernandez get pardoned?


I think economic sanctions are a stupid policy, but the sanctions on Venezuela were in place long before this administration.


It's interesting that you're anti-economic-sanctions but pro-tariffs. This administration's talking points specifically justify tariffs as punishments for countries' behaviors.

Anyway, to be clear, I'm talking about this administration. Specifically their choice to invade Venezuela and capture their head of state, while simultaneously pardoning the ex-Honduran head of state who was convicted of the exact same thing. When I say inconsistent, I mean: they are saying (vocally and militarily) that they are anti-drug-cartel, but also they are apparently pro-some-cartels? It makes no sense to me.


> It's interesting that you're anti-economic sanctions but pro-tariffs. This administration talking points specifically justify tariffs as punishments for countries' behaviors.

I agree that tariffs and economic sanctions are similar. But tariffs are, in theory, targeted at economic conduct that affects us, while sanctions are used to police the moral behavior of other countries, which I don’t support.


I've seen the Microsoft Aurora team make a compelling argument that weather is an interesting contradiction of the AI-energy-waste narrative. Once deployed at scale, inference with these models is actually a sizable energy/compute improvement over classical simulation and forecasting methods. Of course it is energy intensive to train the model, but the usage itself is more energy efficient.


There's also the efficiency argument from new capability: even a tiny bit better weather forecast is highly economically valuable (and saves a lot of wasted energy) if it means that 1 city doesn't have to evacuate because of an erroneous hurricane forecast, say. But how much would it cost to do that with the rivals? I don't know but I would guess quite a lot.

And one of the biggest ironies of AI scaling is that where scaling succeeds the most in improving efficiency, we realize it the least, because we don't even think of it as an option. An example: a Transformer (or RNN) is not the only way to predict text. We have scaling laws for n-grams and text perplexity (most famously, from Jeff Dean et al at Google back in the 2000s), so you can actually ask the question, 'how much would I have to scale up n-grams to achieve the necessary perplexity for a useful code writer competitive with Claude Code, say?' This is a perfectly reasonable, well-defined question, as high-order n-grams could in theory write code given enough data and big enough lookup tables, and so it can be answered. The answer will look something like 'if we turned the whole earth into computronium, it still wouldn't be remotely enough'. The efficiency ratio is not 10:1 or 100:1 but closer to ∞:1. The efficiency gain is so big no one even thinks of it as an efficiency gain, because you just couldn't do it before using AI! You would have humans do it, or not do it at all.
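
To make that concrete, here is a toy sketch of the kind of n-gram predictor those scaling laws describe (the corpus, the order n=3, and the function names are all illustrative):

    from collections import Counter, defaultdict

    def train_ngram(tokens, n=3):
        # Count next-token frequencies for every (n-1)-token context.
        counts = defaultdict(Counter)
        for i in range(len(tokens) - n + 1):
            context = tuple(tokens[i : i + n - 1])
            counts[context][tokens[i + n - 1]] += 1
        return counts

    def predict(counts, context):
        # Most frequent continuation of a context, or None if never seen.
        dist = counts.get(tuple(context))
        return dist.most_common(1)[0][0] if dist else None

    tokens = "def add ( a , b ) : return a + b".split()
    model = train_ngram(tokens, n=3)
    print(predict(model, ["return", "a"]))  # -> '+'

The table of contexts is the entire model; nothing generalizes, so reaching Claude-Code-level perplexity means enumerating essentially every context a programmer might ever produce. That is where the ∞:1 ratio comes from.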


> even a tiny bit better weather forecast is highly economically valuable (and saves a lot of wasted energy) if it means that 1 city doesn't have to evacuate because of an erroneous hurricane forecast

Here is the NOAA on the improvements:

> 8% better predictions for track, and 10% better predictions for intensity, especially at longer forecast lead times — with overall improvements of four to five days.(1)

I’d love someone to explain what these measurements mean though. Does better track mean 8% narrower angle? Something else? Compared to what baseline?

And am I reading this right that that improvement is measured at the point 4-5 days out from landfall? What’s the typical lead time for calling an evacuation, more or less than four days?

(1)https://www.noaa.gov/news/new-noaa-system-ushers-in-next-gen...


To have a competitive code writer with n-grams you need more than to "scale up the n-grams": you need a corpus that includes all possible code that someone would want to write. And at that point you'd be better off with a lossless full-text index like an r-index. But the lack of any generalizability in this approach, coupled with its Markovian features, would make this kind of model extremely brittle. Although it would be efficient. You just need to somehow compute all possible language beforehand. tldr; language models really are reasoning and generalizing over the domain they're trained on.


Now that we’ve saved infinite energy all carbon tax credit markets are unnecessary! Big win for the climate! pollutes


Obviously much simpler Neural Nets, but we did have some models in my domain whose role was to speed up design evaluation.

E.g., you want to find a really good design. Designs are fairly easy to generate, but expensive to evaluate and score. Understand: we can quickly generate millions of designs, but evaluating one can take 100ms-1s, with simulations that are not easy to parallelize on a GPU. We ended up training models that try to predict that score. They don’t predict things perfectly, but you can be 99% sure that a design’s actual score is within a certain distance of the predicted one.

So if normally you want to get the 10 best designs out of your 1 million, we can now first have the model pick the predicted best 1000, and you can be reasonably certain your true top 10 is a subset of these 1000. So you only need to run your simulation on those 1000.
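
A sketch of that two-pass pattern (the function names are placeholders; the 1,000,000/1,000/10 numbers are just the ones from this example):

    def shortlist_then_simulate(designs, surrogate, simulate,
                                shortlist=1000, top_k=10):
        # Cheap pass: score every candidate with the learned surrogate.
        ranked = sorted(designs, key=surrogate, reverse=True)
        # Expensive pass: run the real simulation only on the shortlist,
        # which (per the surrogate's error bound) almost surely
        # contains the true top_k.
        best = sorted(ranked[:shortlist], key=simulate, reverse=True)
        return best[:top_k]

At roughly 1s per simulation, the exact pass drops from about 11 days for a million designs to about 17 minutes for the thousand survivors, plus one cheap surrogate call per design.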


Heuristical branch-and-bound


It's definitely interesting that some neural nets can reduce compute requirements, but that's certainly not making a dent on the LLM part of the pie.


Sam Altman has made a lot of grandiose claims about how much power he's going to need to scale LLMs, but the evidence seems to suggest the amount of power required to train and operate LLMs is a lot more modest than he would have you believe. (DeepSeek reportedly being trained for just $5M, for example.)


I saw a claim that DeepSeek had piggybacked off of some aspect of training that ChatGPT had done, and so that cost needed to be included when evaluating DeepSeek.

This training part of LLMs is still mostly Greek to me, so if anyone could explain that claim as true or false and the reasons why, I’d appreciate it


I think the claim that DeepSeek was trained for $5M is a little questionable. But OpenAI is trying to raise $100B which is 20,000 times as much as $5M. Though even at $1B I think it's probably not that big a deal for Google or OpenAI. My feeling is they can profit on the prices they are charging for their LLM APIs, and that the dominant compute cost is inference, not training. Though obviously that's only true if you're selling billions of dollars worth of API calls like Google and OpenAI.

OpenAI has had $20B in revenue this year, and it seems likely to me they have spent considerably less than that on compute for training GPT5. Probably not $5M, but quite possibly under $1B.


So LLMs predict the next token. Basically, you train them by taking your training data that's N words long and, for X = 1 to N, optimizing the model to predict token X using tokens 1 to X-1.

There's no reason you couldn't generate training data for a model by getting output from another model. You could even get the probability distribution of output tokens from the source model and train the target model to repeat that probability distribution, instead of a single word. That'd be faster, because instead of it learning to say "Hello!" and "Hi!" from two different examples, one where it says hello and one where it says hi, you'd learn to say both from one example that has a probability distribution of 50% for each output.
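
A minimal sketch of that distribution-matching idea in PyTorch (this is the generic soft-label distillation objective, not a claim about DeepSeek's actual recipe):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=1.0):
        # Match the student's full next-token distribution to the
        # teacher's, rather than training on one sampled word.
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        student_logp = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence is zero exactly when the distributions agree.
        return F.kl_div(student_logp, teacher_probs, reduction="batchmean")

The "Hello!"/"Hi!" case above then becomes a single training example whose target puts 50% probability on each token.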

Sometimes DeepSeek said its name is ChatGPT. This could be because they used Q&A pairs from ChatGPT for training, or because they scraped conversations other people posted where they were talking to ChatGPT. Or for unknown reasons where the model just decided to respond that way, like mixing up some semantics of wanting to say "I'm an AI" with all the scraped data referring to AI as ChatGPT.

Short of admission or leaks of DeepSeek training data it's hard to tell. Conversely, DeepSeek really went hard into an architecture that is cheap to train, using a lot of weird techniques to optimize their training process for their hardware.

Personally, I think they did. Research shows that a model can be greatly improved with a relatively small set of high-quality Q&A pairs. But I'm not sure the cost evaluation should be influenced that much, because the ChatGPT training price was only paid once; it doesn't have to be repaid for every new model that cribs its answers.


And an LLM can be more energy efficient than a human -- and that's precisely when you should use it.


That's precisely when, (insert hand wavy motion), we should use any of this.


If it's more energy efficient, it is doing something different, and there is no guarantee that it's more accurate long term. Weather is horribly difficult to predict, and we are only just alright at it. Maybe the models are guessing at the same rate we are calculating, but I am doubtful.


Well, that was a failed response, oops. I am just cautious because, while transformers get the statistical guessing right, you can get the right answer statistically but fail to improve accuracy long term. Clearly this model does better than the current one, but extending it to be even better seems basically intractable beyond throwing more data at it. And what if it derived the wrong model? You simply cannot actually know.


This jumped out at me as well - very interesting that it actually reduces necessary compute in this instance


The press statement is full of stuff like this:

"Area for future improvement: developers continue to improve the ensemble’s ability to create a range of forecast outcomes."

Someone else noted the models are fairly simple.

My question is "what happens if you scale up to attain the same levels of accuracy throughout? Will it still be as efficient?"

My reading is that these models work well in other regions but I reserve a certain skepticism because I think it's healthy in science, and also because I think those ultimately in charge have yet to prove reliable judges of anything scientific.


> My question is "what happens if you scale up to attain the same levels of accuracy throughout? Will it still be as efficient?"

I've done some work in this area, and the answer is probably 'more efficient, but not quite as spectacularly efficient.'

In a crude, back-of-the-envelope sense, AI-NWP models run about three orders of magnitude faster than notionally equivalent physics-based NWP models. Those three orders of magnitude divide approximately evenly among three factors:

1. AI-NWP models produce much sparser outputs compared to physics-based models. That means fewer variables and levels, but also coarser timesteps. If a model needs to run 10x as often to produce an output every 30m rather than every 6h, that's an order of magnitude right there.

2. AI-NWP models are "GPU native," while physics-based models emphatically aren't. Hypothetically running physics-based models on GPUs would gain most of an order of magnitude back.

3. AI-NWP models have fantastic levels of numerical intensity compared to physics-based NWP models since the former are "matrix-matrix multiplications all the way down." Traditional NWP models perform relatively little work per grid point in comparison, which puts them on the wrong (badly memory-bandwidth limited) side of the roofline plots.

I'd expect a full-throated AI-NWP model to give up most of the gains from #1 (to have dense outputs), and dedicated work on physics-based NWP might close the gap on #2. However, that last point seems much more durable to me.


"it's more efficient if you ignore the part where it's not"


> "it's more efficient if you ignore the part where it's not"

Even when you include training, the payoff period is not that long. Operational NWP is enormously expensive because high-resolution models run under soft real-time deadlines; having today's forecast tomorrow won't do you any good.

The bigger problem is that traditional models have decades of legacy behind them, and getting them to work on GPUs is nontrivial. That means that in a real way, AI model training and inference comes at the expense of traditional-NWP systems, and weather centres globally are having to strike new balances without a lot of certainty.


It's more efficient anyway because the inference is what everyone will use for forecasting. Researchers will be using huge amounts of compute to develop better models, but that's also currently the case, and it isn't the majority of weather simulation use.

There's an interesting parallel to Formula One, where there are limits on the computational resources teams can use to design their cars, and where they can use an aerodynamic model that was previously trained to get pretty good outcomes with less compute use in the actual design phase.


I suggest reading up on fixed costs vs variable costs and why it is generally preferable to push costs to fixed.

Assuming you’re not throwing the whole thing out after one forecast, it is probably better to reduce runtime energy usage even if it means using more for one-time training.
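
A back-of-the-envelope version of that amortization argument (all numbers here are hypothetical):

    def payback_forecasts(train_energy, physics_per_run, ai_per_run):
        # Forecasts needed before the one-time training energy is recouped.
        return train_energy / (physics_per_run - ai_per_run)

    # Hypothetical figures: training costs 1,000 MWh, a physics run 10 MWh,
    # an AI inference 0.01 MWh -> training pays for itself in ~100 forecasts.
    print(payback_forecasts(1_000, 10, 0.01))  # ~100.1

Everything after the break-even point is pure variable-cost savings.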


I mean that’s cute, but surely you can add up the two parts (single training plus globally distributed inference) and understand that the net efficiency would be an improvement?


Can you be clearer about The Times screaming about Trump trying to fight China economically? What are you referring to specifically from them?

