Worse is better for you when it meets your needs better.
I use a lot of my own software. Most of it is strictly worse, both in features and in bugs, than more intentional, planned projects. The reason I do it is that each of those tools solves my specific pain points in ways that make my life better.
A concrete example: I have a personal dashboard. It was written by Claude in its entirety. I've skimmed the code, but no more than that. I don't review individual changes. It works for me. It pulls in my calendar, my Fitbit data, my TODO list, and various custom reminders to work around my tendency to procrastinate; it surfaces data from my coding agents, provides a nice interface for me to browse various documentation I keep to hand, and a lot more.
I could write a "proper" dashboard system with cleanly pluggable modules. If I were to write it manually I probably would because I'd want something I could easily dip in and out of working on. But when I've started doing stuff like that in the past I quickly put it aside because it cost more effort than I got out of it. The benefit it provides is low enough that even a team effort would be difficult to make pay off.
Now that equation has fundamentally changed. If there's something I don't like, I tell Claude, and a few minutes - or more - later, I reload the dashboard and 90% of the time it's improved.
I have no illusions that the code is generic enough to be usable by others, and that's fine, because the cost of maintaining it in my time is so low that I have no need to share that burden with others.
I think this will change how a lot of software is written. A "dashboard toolkit", for example, would still have value to my "project", but as something for my agent to pull in and use to put together my dashboard faster.
A lot of "finished products" will be a lot less valuable because it'll become easier to get exactly what you want by having your agent assemble what is out there, and write what isn't out there from scratch.
It is already a way of sharing and improving software today. Not a major way, yet, but I don't agree with you that it would be a bad thing for it to become more common. To go back to my dashboard example: sharing a skill that contains some of the lessons learned, and that packages small parts, seems far more flexible and viable as a path for me to help others do the same than packaging something up in a way that would set the expectation that it was finished.
But also, note that skills can carry scripts with them, so they are definitely more than just a my_feature.md.
Microsoft did do the experiment (Project Natick) where they had "datacenters" in pods under the sea, with really good results. The idea was simply to ship enough extra capacity, but due to the environment, the failure rates were 1/8th of normal.
Still, dropping a pod into the sea makes more sense than launching it into space. At least cooling, power, connectivity and eventual maintenance is simpler.
The whole thing makes no sense and seems like it's just Musk doing financial manipulation again.
Exactly. He can croon about DOGE all day, but the reality is his entire fortune was built on feeding at the trough of government largess. That's why he talks about Mars all the time. He's not stupid enough to think we could actually live there, but damn if he couldn't make a couple trillion skimming off the top of the world's most expensive space program.
Right, let's not forget that he's selling it to himself in an all stock deal. He could have priced it at eleventy kajillion dollars and it would have had the same meaning.
He's basically trading two crypto coins with himself and sending out a press release.
The experiment may have been successful, but if it was, why don't we see underwater datacenters everywhere? The reason is probably similar to why we won't see space datacenters in the near future either.
Space has solar energy going for it. With underwater, you don't need to lug a 1,420-ton rocket with a datacenter payload into space.
Salt water absolutely murders things, and combined with constant movement, almost anything will be torn apart in very little time. It's an extremely harsh environment compared to space, which is mostly nothing at all. If you can get past the solar extremes without Earth's shield, it's almost perfect for computers: a vacuum, an energy source available 24/7 at enormous capacity, no dust, etc.
The vacuum is the problem. It might be cold, but it has terrible heat-transfer properties. The area of radiators it would take to dissipate a data center's heat dwarfs absolutely anything we've ever sent to orbit.
Also solar wind, cosmic rays, etc. We don't have perfect shielding for that yet. Cooling would be tricky, and it has to be entirely radiative, which is very slow in space. A vacuum blocks conduction and convection almost perfectly; look at how a thermos works.
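To put rough numbers on the radiator problem: the Stefan-Boltzmann law bounds how much heat a surface can radiate into vacuum. A back-of-envelope sketch (the 300 K radiator temperature and 0.9 emissivity are my assumptions, and this ignores absorbed sunlight entirely):

```python
# Back-of-envelope: radiator area needed to reject heat in vacuum,
# using the Stefan-Boltzmann law P = eps * sigma * A * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9):
    """Ideal radiating area (single-sided, ignoring absorbed sunlight)."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A modest 1 MW server pod vs. a 1 GW AI campus:
for p in (1e6, 1e9):
    print(f"{p:.0e} W -> {radiator_area_m2(p):,.0f} m^2")
```

Even with these generous assumptions, a gigawatt-class load needs on the order of square kilometers of radiator, which is the point the comment above is making.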
I understood that part of Microsoft's experiment was to see how being hermetically sealed would affect hardware durability. Submerging is a good way to demonstrate the seal, but that part might have been just showmanship.
There are a class of people who may seem smart until they start talking about a subject you know about. Hank Green is a great example of this.
For many on HN, Elon buying Twitter was a wake up call because he suddenly started talking about software and servers and data centers and reliability and a ton of people with experience with those things were like "oh... this guy's an idiot".
Data centers in space are exactly like this. Your comment (correctly) alludes to this.
Companies like Google, Meta, Amazon and Microsoft all have so many servers that parts are failing constantly. They fail so often on large scales that it's expected things like a hard drive will fail while a single job might be running.
So all of these companies build systems to detect failures, disable the node until it's fixed, alert someone to the problem, and bring the node back online once the problem is addressed. Everything will fail: hard drives, RAM, CPUs, GPUs, SSDs, power supplies, fans, NICs, cables, etc.
So all data centers have a number of technicians who are constantly fixing problems. IIRC Google's ratio tended to be about 10,000 servers per technician. Good technicians could handle higher ratios. When a node goes offline it's not immediately clear why. Techs would take known good parts and basically replace all of them, figure out what the problem was later, dispose of any bad parts, and put tested good parts back into the pool of known good parts for a later incident.
Data centers in space lose all of this ability. So if you have a large number of orbital servers, they're going to be failing constantly with no ability to fix them. You can really only deorbit them and replace them and that gets real expensive.
Electronics and chips on satellites also aren't consumer grade. They're not even enterprise grade. They're orders of magnitude more reliable than that, because they have to deal with error correction that terrestrial components don't, due to cosmic rays and the solar wind. That's why they're a fraction of the power of something you can buy from Amazon but cost 1000x as much: they need to last years and not fail, something no home computer or data center server has to deal with.
Put it this way, a hardened satellite or probe CPU is like paying $1 million for a Raspberry Pi.
And anybody who has dealt with data centers knows this.
Great comment on hardware and maintenance costs. In comparison, Elon wrote: "My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space."
It's a pity this reads like the entire acquisition of xAI is based on "Elon's napkin math" (maybe he checked it with Grok).
The deal they made values xAI at $230 Billion. It’s a made up number, with no trustworthy financial justification to back it up. It is set to provide a certain return to xAI’s investors (the valuation decides the amount you get per share), who in turn are bailing out the earlier acquisition of X (Twitter). All of this is basically a shell game where Elon is using one company to bail out another. It’s a way of reducing the risk of new ventures by spreading them out between his companies. It’s also really bad for SpaceX employees and investors, who are basically subsidizing other companies.
The thing is, everyone knows Elon is not a real CEO of any of these companies. There isn’t enough time to even be the CEO of one company and a parent. This guy has 10 companies and 10 children. He’s just holding the position and preventing others from being in that position, so he can enact changes like this. And his boards are all stacked with family members, close friends, and sycophants who won’t oppose his agenda.
Most of the investors don’t even have a choice. Nor do all the other shareholders like employees. And the boards of Musk companies are stacked with his yes men.
He's bailing out one of his failing ventures with one of his so far successful ones. The BS napkin math isn't the reason he's doing it. It's the excuse for doing it.
Thanks for putting words to that; the paragraph which most stuck out to me as outlandish is (emphasis mine):
The basic math is that launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, *with no ongoing operational or maintenance needs*.
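For what it's worth, the multiplication in that quote does check out; it's everything around it (launch mass, maintenance, cooling) that's doubtful:

```python
# Sanity-check only the arithmetic in the quoted claim, not its feasibility:
# 1,000,000 tons/year of satellites at 100 kW of compute per ton.
tons_per_year = 1_000_000
kw_per_ton = 100

added_gw = tons_per_year * kw_per_ton / 1e6  # kW -> GW
print(added_gw)  # 100.0 GW/year: the multiplication itself is fine
```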
I'm deeply disillusioned to arrive at this conclusion but the Occam's Razor in me feels this whole acquisition is more likely a play to increase the perceptual value of SpaceX before a planned IPO.
For me, trying to apply liquid TIM to a CPU in a space station in a big-ass suit would be a total nightmare. Maybe robots could make it bearable, but the racks would get greasy fast from the many failed attempts.
I went looking through your comments. 75% of them (and probably 90% in the last 2 years) were Elon related: Tesla, SpaceX, Grok, Twitter, DOGE, etc. Quite a lot of comments for 101 karma, if I'm being real.
Why do you feel this kneejerk reaction to defend Elon and his companies? You'll never be him. He doesn't care about you. He'd use you for reactor shielding for an uptick in Tesla share price without a second's hesitation. This is cultish behavior.
Do you have any idea who you're defending? I'll give you just one example. A right-wing influencer named Dom Lucre uploaded CSAM to Twitter, a video. But he didn't just upload it: he watermarked it first, so he had it on his computer and postprocessed it. It was up, I believe, for days. This was apparently a video so bad that mere possession should land you in prison. And the fact that the FBI didn't arrest him basically tells you he's an FBI asset. After taking days to ban him, Elon personally intervened to unban him. Why? Because reasons.
And this is the same man who it's becoming clear was deeply linked with Jeffrey Epstein, as was his brother [1].
Bringing this back to the original point: this is why Twitter lost 80% of its value after Elon acquired it. Advertisers fled because it became a shithole for CSAM and Nazis.
As for "basically no downtime" that's hilarious. I even found you commenting the classic anecdote "it was fine for me" (paraphrased) on one such incident when Twitter DDOSed itself [2].
Your cultish devotion here is pretty obvious eg [3]. I'm genuinely asking: what do you get out of all this?
But yeah, otherwise agree that his conduct, within a corporate context and otherwise, does not merit the kind of public adulation he's getting.
I also remember (vividly, at that) his comments on distributed systems when he bought Twitter back in the day and was starting to take it over. I remember thinking to myself: if he's spewing this much bullshit on a topic I can judge, because it's close to my body of knowledge, what else is he pronouncing on authoritatively in domains I don't know so much about?
I've never once seen any CSAM or Nazi things on Twitter except the highly publicized Ye postings, and Ye also did a Super Bowl commercial that linked to a Nazi t-shirt, by the way, so go ahead and boycott the NFL too if you like. You simply don't use the service if you believe these things. Also, I recall the day the internet was abuzz with how Twitter was down, obviously only because Elon haters wanted so badly to be right, but I used it all day uninterrupted and wouldn't have even noticed if I hadn't read an article about it. You are seriously being brainwashed to hate Elon, mostly by Democrat mouthpieces who write hit pieces about how he waved while saying "thank you" wrong. I defend innocent and wrongly accused people like Elon because so many people are clearly making this up out of nowhere. You literally linked to an article here claiming he has ties to Epstein; seriously, if you ever decide to exit your bubble you'll find out how many people are lying to you.
It's not that the FBI fabricated evidence, it's that the article misrepresents evidence entirely. Just having your name mentioned in an e-mail means absolutely nothing; what matters is the content of the e-mails. The right wing, including Elon, has wanted the Epstein files released for years, while the left wing has only wanted it for the last couple of months, since they realized they can just write lies to convince people that Trump being an informant on Epstein and helping the FBI is somehow a strong enough connection to also "connect" him to it. That's like you calling the police on a robber and Democrats saying you were "connected" to the robbery.
It's not only about destruction. It's also about reliability. Without proper shielding and error correction, you're going to have lots and lots of reliability issues and data corruption. And if we're talking about AI, given the current reliability problems of Nvidia hardware, plus the radiation, plus the difficulty of cooling all that stuff in space... that's a big problem. And we still haven't started to talk about energy generation.
I think there's a very interesting use case in edge computing (edge of space, if you want to make the joke) that in fact some satellites are already doing, where they preprocess data before sending it back to Earth. But datacenter-power-level computing is not even close.
I have no numbers to back it up, but I feel it would be even easier to set up a Moon datacenter than an orbital datacenter (when talking about datacenters of that size).
Keep in mind that the current state of space electronics is centered around one-off, very expensive launches, where an electronics failure would be a financial disaster (see JWST).
Being able to rapidly launch cheap electronics may very well change the whole outlook on this.
People already do that (launch cheap, redundant, unshielded electronics) for LEO, but sounds like these data centers would pretty explicitly not be in LEO.
Also AI GPUs are the exact opposite of cheap electronics
Haha, hard pass on the job. I prefer my oxygen at 1 atm.
I'm not a data center technician myself, but I have deep respect for those folks and the complexity they manage. It's quite surprising the market still buys Musk's claims day after day.
They highlight the exact reliability constraint I was thinking of: that replacing failed TPUs is trivial on Earth but impossible in space. Their solution is redundant provisioning, which moves the problem from "operationally impossible" to "extremely expensive."
You would effectively need custom, super-redundant motherboards designed to bypass dead chips rather than replace them. The paper also tackles the interconnect problem using specialized optics to sustain high bitrates, which is fascinating but seems incredibly difficult to pull off given that the constellation topology changes constantly. It might be possible, but the resulting hardware would look nothing like a regular datacenter.
Also, this would require lots of satellites to rival a regular DC, which is also very hard to justify. Let's see what the promised 2027 tests will reveal.
I'd assume datacenters built for space would have different reliability standards. I mean, if a communication satellite (which already has a lot of electronic and computing components) can work unattended, then a satellite working as a server could too.
Right now that’s not the case. Satellites just store whatever fuel they need for orbital adjustments and by default, they fall back to earth and burn up at the end of their life. All the Starlink satellites are configured to fall back to earth within 5 years (the fuel is used to re-raise their orbit). The new proposed datacenters would sit in a higher orbit to avoid debris, allegedly, but that means it is even more expensive to get to them and refuel them, and the potential for future debris is far worse (since it wouldn’t fall back to earth and burn up for centuries or millennia).
Maintaining modern accelerators requires frequent hands-on intervention -- replacing hardware, reseating chips, and checking cable integrity.
Because these platforms are experimental and rapidly evolving, they aren't 'space-ready.' Space-grade hardware must be 'rad-hardened' and proven over years of testing.
By the time an accelerator is reliable enough for orbit, it’s several generations obsolete, making it nearly impossible to compete or turn a profit against ground-based clusters.
On the other hand, Tesla vehicles have similar hardware built into them, and don't require such hands-on intervention. (And that's the hardware that will be going up.)
Car-grade inference hardware is fundamentally different from data center-grade inference hardware, let alone the specialized, interconnected hardware used for training (like NVLink or complex optical fabrics). These are different beasts in terms of power density, thermal stress, and signaling sensitivity.
Beyond that, we don't actually know the failure rate of the Tesla fleet. I’ve never had a personal computer fail from use in my life, but that’s just anecdotal and holds no weight against the law of large numbers. When you operate at the scale of a massive cluster, "one-in-a-million" failures become a daily statistical certainty.
Claiming that because you don't personally see cars failing on the side of the road means they require zero intervention actually proves my original point: people who haven't managed data center reliability underestimate the sheer volume of "rare" failures that occur at scale.
Thank you. The waste heat problem is so bad, yet hardly anyone mentions that you can't have AI-grade chips and space at the same time.
Do they need to be maintained? If one compute node breaks, you just turn it off and don't worry about it. You just assume you'll have some amount of unrecoverable errors and build that into the cost/benefit analysis. As long as failures are in line with projections, it's baked in as a cost of doing business.
The idea itself may be sound, though that's unrelated to the question of whether Elon Musk can be relied on to be honest with investors about what their real failure projections and cost estimates are and whether it actually makes financial sense to do this now or in the near future.
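A rough sketch of that cost/benefit framing, with made-up numbers: the 5% annual failure rate and the fleet sizes below are purely illustrative assumptions, not real projections.

```python
# Sketch of the "bake failures into the cost" reasoning. The failure
# rate and fleet sizes here are illustrative assumptions, not real data.
def surviving_nodes(n_nodes, annual_failure_rate, years):
    """Expected nodes still alive if failed nodes are simply switched off."""
    return n_nodes * (1 - annual_failure_rate) ** years

def overprovision(needed, annual_failure_rate, years):
    """Nodes to launch so that 'needed' are expected to survive 'years'."""
    return needed / (1 - annual_failure_rate) ** years

# e.g. a 5-year mission with a (made-up) 5% annual failure rate:
print(surviving_nodes(10_000, 0.05, 5))   # ~7,738 expected survivors
print(overprovision(10_000, 0.05, 5))     # ~12,923 must be launched
```

The open question is whether the real failure rates, once radiation and thermal stress are factored in, keep the overprovisioning factor anywhere near this tame.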
AI clusters are heavily interconnected, the blast radius for single component failure is much larger than running single nodes -- you would fragment it beyond recovery to be able to use it meaningfully.
I can't get in detail about real numbers but it's not doable with current hardware by a large margin.
Lasers, space, super geniuses, and most importantly money. You're worrying too much about the details and not enough about the awesomeness.
But seriously, why are all the stans in these comments as unknowledgeable as Elon himself? Is that just what is required to stan for this type of garbage?
I made a comment earlier that was rightly flagged for its tone, and I would like to restate the technical point more constructively.
The post author, Dan Piponi, clearly knows about fractals, but his post raises the question of whether asking Fermi questions in interviews is actually effective. I am skeptical that such questions would have prevented this type of bug.
I suspect the issue stems from small measurement imprecisions accruing over long distances, which is—in my view—tied to the fractal nature of roads traversing natural landscapes.
However, as others have pointed out, it may also be tied to road closures: if closed segments are set to a higher length internally (to discourage routing), these values might be getting summed up blindly over longer distances.
None of these issues would have been prevented by being good at estimating quantities alone.
Apologies again for the unconstructive tone of my previous comment.
Here's a Fermi-ish question: given the fractal nature of typical landscape, should a truck driver budget for 10 times as much time as would be predicted by using the known typical speed of the truck and the distance as the crow flies?
(Aside: the nearest grocery store to my house is 1 mile away but the shortest route by car, as measured by the car odometer, is 10 miles.)
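A quick sketch of how such a Fermi estimate might go. The detour factor here is an assumption (the circuity of road networks is often quoted as somewhere around 1.2 to 1.6), not a measured value, and the aside above shows extreme local cases exist:

```python
# Toy Fermi estimate for the truck question above. detour_factor is
# road distance divided by crow-flies distance: an assumed value.
def travel_hours(crow_flies_km, detour_factor=1.4, avg_speed_kmh=80):
    return crow_flies_km * detour_factor / avg_speed_kmh

naive = travel_hours(400, detour_factor=1.0)  # crow-flies at typical speed
realistic = travel_hours(400)                 # with an assumed detour factor
print(realistic / naive)  # 1.4: far from the 10x the question floats
```

So under these assumptions the fractal nature of the landscape costs a typical trip tens of percent, not an order of magnitude, though specific origin-destination pairs can be far worse.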
It takes a pure spirit to admit your own mistake. Don't be let down by an error; never lose the opportunity to fix it. May this be an example to everyone to keep improving this already wonderful, if not perfect, community.
This video is truly remarkable. I'm so grateful to artists like 2step for sharing this kind of work on YouTube. It reignites a passion for math that many of us might have forgotten, especially those of us who have been away from formal math education for a while.
It's understandable to hold different perspectives on the video. However, simply dismissing it as "AI" without any supporting evidence or referring to the audio as "noise" is unproductive and doesn't foster a constructive discussion.
It isn't the same principle. Firstly, coastlines are more jagged because they are hit by waves perpendicularly, while rivers are shaped by water flowing along the banks, smoothing them. Secondly, the borders are typically defined by either the thalweg (greatest depth) or median line, either being smoother lines than the banks. Thirdly, river borders are then in practice defined by measuring the coordinates of particular sample points along the idealized line and then using straight lines or simple mathematical curves to connect these, forming a simple non-fractal boundary.
Exactly. The coastline paradox is a mathematical curiosity, not a practical objection to measuring things. Coastlines are not infinite in length in practice. You define a system of measurement, and then a length within that system.
It also makes sense to pick a resolution because the coastline changes on an hourly basis (and on a minute basis, for rivers during rainfall), so that these variations don't massively affect the measurement from one moment to the next.
Why should a platform allow sharing ways of violating its terms of service? Sure, any tech-savvy person will be able to figure it out, but businesses are businesses.
Should supermarkets allow you to resell coupons on their premises for a profit? Because he's 1. monetizing the video, 2. being sponsored by a third party in the video, and 3. showing ways of circumventing the platform's TOS.
He could remove that frame where he shows the yt plugin, but he's using this to farm engagement.
> On top of that, there is no easy way to create a template
Templates are just functions [0].
I think much of the frustration comes from typesetting being a harder problem than it seems at first. In general a typesetting system tries to abstract away how layout is recomputed depending on content.
Supporting contextual content -- cases where content depends on other content, e.g. numbered lists, numbered figures, references, etc. -- involves iterative rendering. This is evidently a complexity sinkhole, and having a Turing-complete scripting language will bite you back when dealing with it. I recommend reading their documentation about it [1], where they explain how they propose solving this problem.
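A toy illustration of that iterative-rendering idea (this mirrors the concept, not how any real typesetter, Typst included, implements it): render repeatedly until forward references stop changing.

```python
def render(doc, numbers):
    """One rendering pass; 'numbers' is filled in as figures appear."""
    out, count = [], 0
    for kind, name in doc:
        if kind == "figure":
            count += 1
            numbers[name] = count
            out.append(f"Figure {count}: {name}")
        else:  # a reference; may point forward to a not-yet-seen figure
            out.append(f"see Figure {numbers.get(name, '?')}")
    return out

# A forward reference: the doc cites "plot" before that figure appears.
doc = [("ref", "plot"), ("figure", "map"), ("figure", "plot")]
numbers = {}
prev, cur = None, render(doc, numbers)
while cur != prev:  # re-render until the output reaches a fixed point
    prev, cur = cur, render(doc, numbers)
print(cur)  # ['see Figure 2', 'Figure 1: map', 'Figure 2: plot']
```

The sinkhole is that arbitrary user code can make each pass change the layout in new ways, so a real system has to bound or detect non-converging documents.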