Hacker News | lugao's comments

Why would this new paradigm create interesting tooling? From your description I expect worse, not better, tools.

Worse is better for you when it meets your needs better.

I use a lot of my own software. Most of it is strictly worse, both in terms of features and bugs, than more intentional, planned projects. The reason I do it is that each of those tools solves my specific pain points in ways that make my life better.

A concrete example: I have a personal dashboard. It was written by Claude in its entirety. I've skimmed the code, but no more than that. I don't review individual changes. It works for me. It pulls in my calendar, my fitbit data, my TODO list, various custom reminders to work around my tendency to procrastinate, it surfaces data from my coding agents, it provides a nice interface for me to browse various documentation I keep to hand, and a lot more.

I could write a "proper" dashboard system with cleanly pluggable modules. If I were to write it manually I probably would because I'd want something I could easily dip in and out of working on. But when I've started doing stuff like that in the past I quickly put it aside because it cost more effort than I got out of it. The benefit it provides is low enough that even a team effort would be difficult to make pay off.

Now that equation has fundamentally changed. If there's something I don't like, I tell Claude, and a few minutes - or more - later, I reload the dashboard and 90% of the time it's improved.

I have no illusions that the code is generic enough to be usable by others, and that's fine, because the cost of maintaining it in my time is so low that I have no need to share that burden with others.

I think this will change how a lot of software is written. A "dashboard toolkit", for example, would still have value to my "project" - but as something for my agent to pull in and use to put my dashboard together faster.

A lot of "finished products" will be a lot less valuable because it'll become easier to get exactly what you want by having your agent assemble what is out there, and write what isn't out there from scratch.


To be clear I never said custom vibe coded personal software is bad. But clearly that's not the point from OP. Quoting directly:

> you download a skill file that tells a coding agent how to add a feature

This is suggesting a my_feature.md would be a way of sharing and improving software in the future, which I think is mostly a bad thing.


It is a way of sharing and improving software already today. Not a major way, yet, but I don't agree with you that it would be a bad thing for that to become more common. To go back to my dashboard example: sharing a skill that contains some of the lessons learned and packages small parts seems a far more flexible and viable path for me to help others do the same than packaging up something in a way that would create the expectation of a finished product.

But also, note that skills can carry scripts with them, so they are definitely also more than a my_feature.md.


Only people who never interacted with data center reliability think it's doable to maintain servers with no human intervention.


Microsoft did do the experiment (Project Natick), where they had "datacenters" in pods under the sea, with really good results. The idea was simply to ship enough extra capacity, but due to the environment, the failure rates were 1/8th of normal.

Still, dropping a pod into the sea makes more sense than launching it into space. At least cooling, power, connectivity and eventual maintenance is simpler.

The whole thing makes no sense and seems like it's just Musk doing financial manipulation again.

https://news.microsoft.com/source/features/sustainability/pr...


> The whole thing makes no sense and seems like it's just Musk doing financial manipulation again.

It's a fig leaf for getting two IPOs in one. There's no sense in analyzing it any further.


Exactly. He can croon about DOGE all day, but the reality is his entire fortune was built on feeding at the trough of government largess. That's why he talks about Mars all the time. He's not stupid enough to think we could actually live there, but damn if he couldn't make a couple trillion skimming off the top of the world's most expensive space program.


No, I think he is that stupid.


Right, let's not forget that he's selling it to himself in an all stock deal. He could have priced it at eleventy kajillion dollars and it would have had the same meaning.

He's basically trading two crypto coins with himself and sending out a press release.


The experiment may have been successful, but if it was, why don't we see underwater datacenters everywhere? The reason is probably similar to why we won't see space datacenters in the near future either.

Space has solar energy going for it. Underwater, you don't need to lug a 1,420-ton rocket with a datacenter payload to space.


Salt water absolutely murders things, and combined with constant movement almost anything will be torn apart in very little time. It's an extremely harsh environment compared to space, which is mostly nothing. If you can get past the solar extremes without Earth's shield, it's almost perfect for computers: a vacuum, an energy source available 24/7 at unlimited capacity, no dust, etc.


The vacuum is the problem. It might be cold, but it has terrible heat-transfer properties. The area of radiators it would take to dissipate a data center dwarfs absolutely anything we've ever sent to orbit.


Also solar wind, cosmic rays, etc. We don't have perfect shielding for that yet. Cooling would be tricky: it has to be completely radiative, which is very slow in space. A vacuum is a near-perfect insulator against conduction and convection, after all - look at how a thermos works.
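The radiator objection above is easy to put rough numbers on. A minimal Stefan-Boltzmann sketch; the inputs (1 GW of waste heat, a 300 K radiator surface, emissivity 0.9, radiating to deep space) are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law,
# P = emissivity * sigma * A * T^4, ignoring solar input and the
# ~3 K background. All inputs are assumed, illustrative values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(waste_heat_w, temp_k, emissivity=0.9):
    """Radiator area needed to reject waste_heat_w at surface temp_k."""
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

area = radiator_area_m2(1e9, 300.0)  # 1 GW rejected at 300 K
print(f"{area / 1e6:.1f} km^2 of radiator")  # roughly 2.4 km^2
```

Even under these generous assumptions (no sunlight on the radiator), a gigawatt-class facility needs square kilometers of radiator area.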


I can't see any reason to put them underwater rather than in a field somewhere. I think the space rationale is you may run out of fields.


Placing them underwater means you get free, unlimited cooling.

Exactly the opposite of space, where all cooling must happen through radiation, which is expensive/inefficient


It's not free cooling underwater; you still have to circulate the water and seal the components.

Makes far more sense to build on the beach than underwater, if all you want is unlimited sea water to pump around.


I understood that part of Microsoft's experiment was to see how being hermetically sealed would affect hardware durability. Submerging is a good way to demonstrate the seal, but that part might have been just showmanship.


Dropping a pod into the sea makes more sense than launching it into space, and Microsoft decided it wasn't worth doing.


There is a class of people who may seem smart until they start talking about a subject you know about. Hank Green is a great example of this.

For many on HN, Elon buying Twitter was a wake up call because he suddenly started talking about software and servers and data centers and reliability and a ton of people with experience with those things were like "oh... this guy's an idiot".

Data centers in space are exactly like this. Your comment (correctly) alludes to this.

Companies like Google, Meta, Amazon and Microsoft all have so many servers that parts are failing constantly. They fail so often at large scale that it's expected that something like a hard drive will fail while a single job is running.

So all of these companies build systems to detect failures, disable running on that node until it's fixed, alert someone to what the problem is, and bring the node back online once the problem is addressed. Everything will fail: hard drives, RAM, CPUs, GPUs, SSDs, power supplies, fans, NICs, cables, etc.

So all data centers have a number of technicians who are constantly fixing problems. IIRC Google's ratio tended to be about 10,000 servers per technician; good technicians could handle higher ratios. When a node goes offline it's not clear why, so techs would take known-good parts, basically replace all of them, figure out what the problem was later, dispose of any bad parts, and put tested good parts into the pool of known-good parts for a later incident.

Data centers in space lose all of this ability. So if you have a large number of orbital servers, they're going to be failing constantly with no ability to fix them. You can really only deorbit and replace them, and that gets really expensive.

Electronics and chips on satellites also aren't consumer grade. They're not even enterprise grade. They're orders of magnitude more reliable than that, because they have to deal with error correction that terrestrial components don't, due to cosmic rays and the solar wind. That's why they're a fraction of the power of something you can buy from Amazon but cost 1000x as much: they need to last years and not fail, something no home computer or data center server has to deal with.

Put it this way, a hardened satellite or probe CPU is like paying $1 million for a Raspberry Pi.

And anybody who has dealt with data centers knows this.
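The "parts are failing constantly" point above can be made concrete with a back-of-envelope sketch. The fleet size and failure rate here are illustrative assumptions, not any company's real numbers:

```python
# Expected hardware failures per day at data-center scale.
# Illustrative assumptions: 100,000 servers, each with roughly 10
# failure-prone components (drives, DIMMs, PSU, fans, NIC, ...),
# each component with a 2% annual failure rate.
servers = 100_000
components_per_server = 10
annual_failure_rate = 0.02  # per component, assumed

failures_per_day = servers * components_per_server * annual_failure_rate / 365
print(f"~{failures_per_day:.0f} component failures per day")  # ~55/day
```

That is dozens of hands-on repairs every single day, which is exactly the work an orbital facility cannot do.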


Great comment on hardware and maintenance costs. In comparison, Elon wrote: "My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space." It's a pity this reads like the entire acquisition of xAI is based on Elon's napkin math (maybe he checked it with Grok).


The deal they made values xAI at $230 Billion. It’s a made up number, with no trustworthy financial justification to back it up. It is set to provide a certain return to xAI’s investors (the valuation decides the amount you get per share), who in turn are bailing out the earlier acquisition of X (Twitter). All of this is basically a shell game where Elon is using one company to bail out another. It’s a way of reducing the risk of new ventures by spreading them out between his companies. It’s also really bad for SpaceX employees and investors, who are basically subsidizing other companies.

The thing is, everyone knows Elon is not a real CEO of any of these companies. There isn’t enough time to even be the CEO of one company and a parent. This guy has 10 companies and 10 children. He’s just holding the position and preventing others from being in that position, so he can enact changes like this. And his boards are all stacked with family members, close friends, and sycophants who won’t oppose his agenda.


As both are private companies none of this matters if the investors of both companies are happy.


Most of the investors don’t even have a choice. Nor do all the other shareholders like employees. And the boards of Musk companies are stacked with his yes men.


This was well known before the investments were made. If the investors were not happy with this they would not have invested.


Ah yes, my favourite kind of engineering: financial engineering


He's bailing out one of his failing ventures with one of his so far successful ones. The BS napkin math isn't the reason he's doing it. It's the excuse for doing it.


Or he's having another mental break because he knocked up yet another woman and is going to have yet another kid he can't remember the name of.


Can you provide a link for that quote, because that quote is absolute stupidity.


It's in the article that you're commenting on, https://www.spacex.com/updates#xai-joins-spacex.


Oh, ffs.


Haha. It's less than 1,000 words and would take less than 5 minutes to read.

I bet much less than half of the hundreds of HN commenters here bother to read it. Many are clearly unfamiliar with its content.


I can't, I don't want it in my head :/


Thanks for putting words to that; the paragraph which most stuck out to me as outlandish is (emphasis mine):

    The basic math is that launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, *with no ongoing operational or maintenance needs*.

I'm deeply disillusioned to arrive at this conclusion, but Occam's razor tells me this whole acquisition is more likely a play to inflate the perceived value of SpaceX before a planned IPO.
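For what it's worth, the quoted arithmetic itself does reproduce; it's the italicized "no maintenance" clause doing all the work. A one-liner to check it:

```python
# Reproducing the quoted "basic math": one million tons per year of
# satellites at 100 kW of compute per ton. The arithmetic checks out;
# the contested part is the "no ongoing operational or maintenance
# needs" assumption, not the multiplication.
tons_per_year = 1_000_000
kw_per_ton = 100

added_capacity_gw = tons_per_year * kw_per_ton / 1e6  # kW -> GW
print(f"{added_capacity_gw:.0f} GW of compute added per year")  # 100 GW
```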


"what if we move all our data center needs into my imagination, things are running so much smoother there"


For me, trying to apply liquid TIM to a CPU on a space station in a big-ass suit would be a total nightmare. Maybe robots could make it bearable, but the racks would get greasy fast from many failed attempts.


I'm pretty sure they don't harden compute in space anymore, that's one thing SpaceX pioneered with their cost-cutting approach early on.


Excellent comment.


[flagged]


> Twitter had basically no downtime since he bought it

I'm sorry, but what? Not only has it had multiple half-day outages and at least two full days of downtime, but just two weeks ago it had significant downtime.

https://www.thebiglead.com/is-x-down-twitter-suffers-major-o...


Significant downtime being like an hour, maybe, per year? Please. Microsoft has had outages all day for EMAIL service. It's not even comparable.


No, as I mentioned, full days in several cases, multiple 8+ hour outages.

Keep in mind this is in response to "basically no downtime"

If you told me Microsoft had "basically no downtime" I'd have the same reaction.


The sock puppet account is angry!


I went looking through your comments. 75% of them (and probably 90% in the last 2 years) were Elon related: Tesla, SpaceX, Grok, Twitter, DOGE, etc. Quite a lot of comments for 101 karma if I'm being real.

Why do you feel this kneejerk reaction to defend Elon and his companies? You'll never be him. He doesn't care about you. He'd use you for reactor shielding for an uptick in Tesla share price without a second's hesitation. This is cultish behavior.

Do you have any idea who you're defending? I'll give you just one example. A right-wing influencer named Dom Lucre uploaded CSAM to Twitter, a video. But he didn't just upload it. He watermarked it first, so he had it on his computer and then postprocessed it. It was, I believe, up for days. This was apparently a video so bad that mere possession should land you in prison. And the fact that the FBI didn't arrest him basically tells you he's an FBI asset. After taking days to ban him, Elon personally intervened to unban him. Why? Because reasons.

And this is the same man who it's becoming clear was deeply linked with Jeffrey Epstein, as was his brother [1].

Bringing this back to the original point: this is why Twitter lost 80% of its value after Elon acquired it. Advertisers fled because it became a shithole for CSAM and Nazis.

As for "basically no downtime" that's hilarious. I even found you commenting the classic anecdote "it was fine for me" (paraphrased) on one such incident when Twitter DDOSed itself [2].

Your cultish devotion here is pretty obvious eg [3]. I'm genuinely asking: what do you get out of all this?

[1]: https://www.axios.com/local/boulder/2026/02/02/kimbal-musk-j...

[2]: https://news.ycombinator.com/item?id=36555897

[3]: https://news.ycombinator.com/item?id=42836560


Lol, did you spot one of his alts?

But yeah, otherwise I agree that his conduct, within a corporate context and otherwise, does not merit the kind of public adulation he's getting.

I also remember (vividly at that) his comments on distributed systems when he bought twitter back in the day and was starting to take it over. I remember thinking to myself, if he's just spewing so much bullshit on this, and I can understand this because it's closer to my body of knowledge, what other such stuff is he pronouncing authoritatively on other domains I don't know so much about?


If people stop posting completely wrong things, I'll stop correcting them.


I've never once seen any CSAM or nazi things on Twitter except the highly publicized Ye postings - and Ye also did a Super Bowl commercial that linked to a nazi t-shirt, by the way, so go ahead and boycott the NFL too if you like. You simply don't use the service if you think these things. Also, I recall the day the internet was abuzz with how Twitter was down, obviously only because Elon haters wanted so badly to be right, but I used it all day uninterrupted and wouldn't have even noticed if I hadn't read an article about it. You are seriously being brainwashed to hate Elon, mostly by democrat mouthpieces who write hit pieces about how he waved wrong while saying thank you. I defend innocent and wrongly accused people like Elon because so many people are clearly making this up out of nowhere - you literally linked to an article here claiming he has ties to Epstein. Seriously, if you ever decide to exit your bubble you'll find out how many people are lying to you.


You may not have seen it, but everyone else did, including the French government, which has summoned Musk to court.

So your claim is that the Trump FBI fabricated evidence linking Musk to Epstein? Why do you think they'd do that?


It's not that the FBI fabricated evidence; it's that the article misrepresents the evidence entirely. Just having your name mentioned in an e-mail means absolutely nothing - what matters is the content of the e-mails. The right wing, including Elon, has wanted the Epstein files released for years, while the left wing has only wanted that for the last couple of months, since they realized they can just write lies to convince people that Trump being an informant on Epstein and helping the FBI is somehow a strong enough connection to also 'connect' him to it. That's like you calling the police on a robber and democrats saying you were 'connected' to the robbery.


> but they cost 1000x as much

Compute power has increased more than 1000x while the cost came down.

I recall paying $3000 for my first IBM PC.

> they need to last years and not fail

Not if they are cheap enough to build and launch. Quantity has a quality all its own.


Have you heard of cosmic radiation?


Cosmic rays take time to destroy them.


It's not only about destruction; it's also about reliability. Without proper shielding and error correction you're going to have lots and lots of reliability issues and data corruption. And if we're talking about AI - given the current reliability problems of Nvidia hardware, plus the radiation, plus the difficulty of cooling all that stuff in space - that's a big problem. And we still haven't started to talk about the energy generation.

I think there's a very interesting use case in edge computing (edge of space, if you wanna make the joke) that in fact some satellites are already doing, where they preprocess data before sending it back to Earth. But datacenter-level computing is nowhere near.

I have no numbers to back it up, but I feel it would be even easier to set up a Moon datacenter than an orbital datacenter (when talking about datacenters of that size).


We'll see!

Keep in mind that the current state of space electronics is centered around one-off, very expensive launches, where an electronics failure would be a financial disaster. (See JWST.)

Being able to rapidly launch cheap electronics may very well change the whole outlook on this.


People already do that (launch cheap, redundant, unshielded electronics) for LEO, but sounds like these data centers would pretty explicitly not be in LEO.

Also AI GPUs are the exact opposite of cheap electronics


Might be why he's also investing in building their own fabs - if he can keep the silicon costs low then that flips a lot of the math here.


Whoa there, space-faring sysadmin. You really want that off-world contract tho?


Haha, hard pass on the job. I prefer my oxygen at 1 atm.

I'm not a data center technician myself, but I have deep respect for those folks and the complexity they manage. It's quite surprising the market still buys Musk's claims day after day.


> It's quite surprising the market still buys Musk's claims day after day.

More disturbing than surprising.


I did some more reading and want to walk back my skepticism a bit. There is actually serious effort going into this, such as Google’s research on space-based AI infrastructure: https://research.google/blog/exploring-a-space-based-scalabl...

They highlight the exact reliability constraint I was thinking of: that replacing failed TPUs is trivial on Earth but impossible in space. Their solution is redundant provisioning, which moves the problem from "operationally impossible" to "extremely expensive."

You would effectively need custom, super-redundant motherboards designed to bypass dead chips rather than replace them. The paper also tackles the interconnect problem using specialized optics to sustain high bitrates, which is fascinating but seems incredibly difficult to pull off given that the constellation topology changes constantly. It might be possible, but the resulting hardware would look nothing like a regular datacenter.

Also, this would require lots of satellites to rival a regular DC, which is also very hard to justify. Let's see what the promised 2027 tests will reveal.
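The redundant-provisioning trade described above can be sketched numerically: with no hands-on repair, surviving capacity just decays, so you must launch extra nodes up front. The failure rate, design life, and capacity target below are illustrative assumptions, not figures from Google's paper:

```python
# Over-provisioning instead of repair: if failed nodes can never be
# replaced, expected surviving capacity decays geometrically, and the
# shortfall must be launched up front. Assumed inputs: 5% of nodes
# fail per year, 5-year design life, 1,000 working nodes required at
# end of life.
annual_failure_prob = 0.05
years = 5
target_alive = 1000

survival = (1 - annual_failure_prob) ** years  # ~0.77 after 5 years
launched = target_alive / survival             # nodes to launch up front
print(f"launch ~{launched:.0f} nodes for {target_alive} alive at year {years}")
```

Under these assumptions you launch roughly 30% extra mass just to stand still, which is the sense in which redundancy moves the problem from "impossible" to "expensive."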


I'd assume datacenters built for space would have different reliability standards. I mean, if a communication satellite (which already has a lot of electronic and computing components) can work unattended, then a satellite working as a server could too.


You are right. But in the future we'll be refueling the satellites anyway. Might as well maintain the servers using robots all in one go.


Right now that’s not the case. Satellites just store whatever fuel they need for orbital adjustments and by default, they fall back to earth and burn up at the end of their life. All the Starlink satellites are configured to fall back to earth within 5 years (the fuel is used to re-raise their orbit). The new proposed datacenters would sit in a higher orbit to avoid debris, allegedly, but that means it is even more expensive to get to them and refuel them, and the potential for future debris is far worse (since it wouldn’t fall back to earth and burn up for centuries or millennia).


But … but what if we had solar-powered AI SREs to fix the solar-powered AI satellites… /in space/?


Maintaining modern accelerators requires frequent hands-on intervention -- replacing hardware, reseating chips, and checking cable integrity.

Because these platforms are experimental and rapidly evolving, they aren't 'space-ready.' Space-grade hardware must be 'rad-hardened' and proven over years of testing.

By the time an accelerator is reliable enough for orbit, it’s several generations obsolete, making it nearly impossible to compete or turn a profit against ground-based clusters.


On the other hand, Tesla vehicles have similar hardware built into them, and don't require such hands-on intervention. (And that's the hardware that will be going up.)


Car-grade inference hardware is fundamentally different from data center-grade inference hardware, let alone the specialized, interconnected hardware used for training (like NVLink or complex optical fabrics). These are different beasts in terms of power density, thermal stress, and signaling sensitivity.

Beyond that, we don't actually know the failure rate of the Tesla fleet. I’ve never had a personal computer fail from use in my life, but that’s just anecdotal and holds no weight against the law of large numbers. When you operate at the scale of a massive cluster, "one-in-a-million" failures become a daily statistical certainty.

Claiming that because you don't personally see cars failing on the side of the road means they require zero intervention actually proves my original point: people who haven't managed data center reliability underestimate the sheer volume of "rare" failures that occur at scale.


https://x.com/elonmusk/status/2017792776415682639

For what it's worth, this project plans to use Tesla AI5/AI6 hardware for the first launches.


Beyond the points in the sibling comments: cars aren't exposed to the radiation of space...


Well, one car is... and it's a Tesla!


Thank you. The waste heat problem is so bad but no one gets around to mentioning the fact that you can't have AI grade chips and space at the same time.


Do they need to be maintained? If one compute node breaks, you just turn it off and don't worry about it. You just assume you'll have some amount of unrecoverable errors and build that into the cost/benefit analysis. As long as failures are in line with projections, it's baked in as a cost of doing business.

The idea itself may be sound, though that's unrelated to the question of whether Elon Musk can be relied on to be honest with investors about what their real failure projections and cost estimates are and whether it actually makes financial sense to do this now or in the near future.


AI clusters are heavily interconnected; the blast radius of a single component failure is much larger than when running independent nodes - failures would fragment the cluster beyond the point where it could be used meaningfully.

I can't get in detail about real numbers but it's not doable with current hardware by a large margin.


eh? They're not gonna lay cable in space. The laser links will be retargetable.


How are you doing PCI Express x16 with lasers, without fiber optics? Have you touched data center hardware in your life?


Lasers, space, super geniuses, and most importantly money. You're worrying too much about the details and not enough about the awesomeness.

But seriously, why are all the stans in these comments as unknowledgeable as Elon himself? Is that just what is required to stan for this type of garbage?


What if every installed twitter app just acted as a proxy for grok to post as millions of different elon stans? Diabolical.


This guy invented reusable rockets that land themselves. I'm sure xAI is not just one guy. Plenty of talented people work there.


I made a comment earlier that was rightly flagged for its tone, and I would like to restate the technical point more constructively.

The post author, Dan Piponi, clearly knows about fractals, but his post raises the question of whether asking Fermi questions in interviews is actually effective. I am skeptical that such questions would have prevented this type of bug.

I suspect the issue stems from small measurement imprecisions accruing over long distances, which is—in my view—tied to the fractal nature of roads traversing natural landscapes.

However, as others have pointed out, it may also be tied to road closures: if closed segments are set to a higher length internally (to discourage routing), these values might be getting summed up blindly over longer distances.

None of these issues would have been prevented by being good at estimating quantities alone.

Apologies again for the unconstructive tone of my previous comment.


Here's a Fermi-ish question: given the fractal nature of typical landscape, should a truck driver budget for 10 times as much time as would be predicted by using the known typical speed of the truck and the distance as the crow flies?

(Aside: the nearest grocery store to my house is 1 mile away but the shortest route by car, as measured by the car odometer, is 10 miles.)
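A hedged sketch of that Fermi question, with assumed numbers: an 80 km/h truck, and detour factors ranging from a typical road-network value of around 1.4 up to the 10x of the grocery-store anecdote above:

```python
# Fermi sketch: how much longer is the trip than the crow-flies
# estimate? The speed and detour factors are assumed, illustrative
# values; the 10x case mirrors the grocery-store anecdote.
def trip_time_hours(crow_flies_km, detour_factor, speed_kmh=80):
    """Travel time given straight-line distance and a road detour factor."""
    return crow_flies_km * detour_factor / speed_kmh

naive = trip_time_hours(100, 1.0)      # crow-flies estimate: 1.25 h
typical = trip_time_hours(100, 1.4)    # common detour index:  1.75 h
anecdote = trip_time_hours(100, 10.0)  # extreme local case:   12.5 h
print(f"{naive:.2f}h naive, {typical:.2f}h typical, {anecdote:.1f}h extreme")
```

For long hauls the detour factor tends to be modest; the 10x cases are local quirks of terrain and network topology, which is part of why the question makes a decent estimation exercise.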


A pure spirit is required to admit your own mistake. Don't be let down by an error; never lose the opportunity to fix it. May this serve as an example to everyone to keep improving this already wonderful but not perfect community.


This video is truly remarkable. I'm so grateful to artists like 2step for sharing this kind of work on YouTube. It reignites a passion for math that many of us might have forgotten, especially those of us who have been away from formal math education for a while.


*2swap


It's understandable to hold different perspectives on the video. However, simply dismissing it as "AI" without any supporting evidence or referring to the audio as "noise" is unproductive and doesn't foster a constructive discussion.


It's funny because you can't measure borders due to the coastline paradox.


I'm not aware of any two countries whose border is a natural coastline.


Same principle applies to rivers, though, and there are lots of river borders.


It isn't the same principle. Firstly, coastlines are more jagged because they are hit by waves perpendicularly, while rivers are shaped by water flowing along the banks, smoothing them. Secondly, the borders are typically defined by either the thalweg (greatest depth) or median line, either being smoother lines than the banks. Thirdly, river borders are then in practice defined by measuring the coordinates of particular sample points along the idealized line and then using straight lines or simple mathematical curves to connect these, forming a simple non-fractal boundary.


You could raise the exact same set of objections to the shoreline paradox.


But most borders are not defined as "on the shoreline", they are defined using something reliable.


Exactly. The coastline paradox is a mathematical curiosity, not a practical objection to measuring things. Coastlines are not infinitely long in practice. You define a system of measurement, then a length in that system.
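The "pick a system of measurement" point can be seen numerically with the Koch curve, the textbook coastline model: each refinement multiplies the length by 4/3, so the number you get depends entirely on the resolution you fix, yet any fixed resolution gives a perfectly definite answer.

```python
# Coastline paradox in one number: a Koch curve built on a 100 km
# base segment gains a factor of 4/3 in length at every refinement
# depth, so "the" length is only defined once you fix a resolution.
def koch_length(base_km, depth):
    """Length of a Koch curve on a base_km segment after `depth` refinements."""
    return base_km * (4 / 3) ** depth

for depth in (0, 3, 6, 12):
    print(f"depth {depth:2d}: {koch_length(100, depth):8.1f} km")
```

Real shorelines stop gaining length once the ruler reaches physical scales (grains of sand), which is the practical point being made above.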


What? None of those three applies to a shoreline.


Physical instantiations of a true fractal, shorelines included, are always limited. I'd go so far as to say that there is no such real object in the world.


I think I'm in agreement with you, but I'm not sure if I'm agreeing that there are no fractals in the world, or that there are no shorelines.

Anyway, true fractal shorelines definitely never put sugar on their porridge.


Doesn't matter in this context.


Eh. If you care about relative measurements it doesn't matter much. Pick a sensible resolution, stick with it, you got yourself a ranking.


It also makes sense to pick a resolution because coastlines change on an hourly basis (and a minute-by-minute basis for rivers during rainfall), so that these differences don't massively affect the measurement every single second.


Strong agree, they can even make the formatter configurable in pyproject if you want to use something else.


Anthropic also uses TPUs for inference.


Do they rent them from Google? Or are they a different brand?


Google provides them.


Ah cool I'll have to read up on that, I had thought that google was hoarding them.


That seems exactly why it happened.

Why should a platform allow sharing ways of violating its terms of service? Sure, any tech-savvy person will be able to figure it out, but businesses are businesses.

Should supermarkets allow you to resell coupons on their premises for a profit? Because he's 1. monetizing the video, 2. being sponsored by a third party in the video, and 3. showing ways of circumventing the platform's TOS.

He could remove that frame where he shows the yt plugin, but he's using this to farm engagement.


> On top of that, there is no easy way to create a template

Templates are just functions [0].

I think much of the frustration comes from typesetting being a harder problem than it seems at first. In general a typesetting system tries to abstract away how layout is recomputed depending on content.

Supporting contextual content -- cases where the content depend on other content, e.g. numbered lists, numbered figures, references, etc -- involves iterative rendering. This is evidentidly a complexity sinkhole and having a turing complete script language will bite you back when dealing with it. I recommend reding their documentation about it [1] where they explain how they propose solving this problem.

[0]: https://typst.app/docs/tutorial/making-a-template/

[1]: https://typst.app/docs/reference/context/#compiler-iteration...

