Hacker News | aw1621107's comments

> Space Stations are in orbit - the space battleship doesn't have to be

I mean, you did say:

> space battleship - one that never comes down to the surface, just sits in orbit.

So I think it's understandable for people to take that at face value.

Furthermore, if it isn't in orbit, then where would it be?

> and a lot more of tug-boating it into the exact right spot, in the way of earth, so that earth hits the asteroid - not anything complicated like the asteroid hitting earth.

From an orbital mechanics standpoint I don't think there's actually a difference. You're changing an orbit either way.


If I were holding earth hostage with my Space Battleship - I would sit in a lunar orbit. Also, I am not kidding about tug-boating - if I fly up and match an asteroid's speed and velocity, why can't I just throw a tow strap on that, accelerate, and park it in an area that only has to be accurate enough for a planet to hit it - I don't need to stop it, or have it flying at the earth, it only needs to be in the way, moving a little slower than the earth.

What if I make that the space battleship's job? What if a drone can do that?

I'm not really worried about resupplying the space battleship holding earth hostage -> someone will "volunteer" to do that, because they want to live life.


> I would sit in a lunar orbit

Ah, so by "orbit" you were talking about orbit around Earth specifically?

> why can't I just throw a tow strap on that, accelerate, and park it in an area that only has to be accurate enough for a planet to hit it - I don't need to stop it, or have it flying at the earth, it only needs to be in the way, moving a little slower than the earth.

Again, from a high-level orbital mechanics perspective there is little difference between the two. You start with two non-intersecting orbits and you're changing one orbit to intersect the other at the same time and place. How you go about doing so is just a question of how much time/fuel you're willing to expend, for various values of "just".

That being said, assuming I'm interpreting you correctly what you propose is probably technically possible (e.g., change an asteroid's orbit to a slightly-larger-than-Earth-sized one), but it's also very fuel-intensive compared to skipping the "parking"/"in the way" part.

If you haven't tried it already I can't recommend Kerbal Space Program enough for experimenting with this kind of thing, especially if you are alright with playing with mods. Real Solar System (changes the in-game solar system to match our real-life one) and Principia (replaces the simplified patched conics system KSP uses for orbits with n-body gravity) would be particularly relevant here.


I absolutely will check out Kerbal - I have done nothing more than thought experiments - which I'm sure is obvious; it's obvious to me. I'm sure I am saying things exactly wrong - the idea is to save fuel and remove all of the difficulties that may arise with timing or aiming. Using more fuel is exactly the opposite of the intent.

I may be confused but I don't mean a "larger orbit than the earth" -> I mean the exact identical orbit, the exact path that earth takes around the sun -> ahead (or behind, it does not matter) of where we are, and instead of 365 days to circle the sun, the asteroid is moving at a rate that will take MORE days -> so the earth will smash into the asteroid, because it can't do anything else. I don't mean "park" in the sense that I stop its movement, nor would I select an asteroid with an orbit that couldn't be manipulated into position with little difficulty.

Like, imagine the solar system was a record on a record player (I've never used one either) and the earth is on a line/groove - a choice asteroid is moving in the same direction on an immediately adjacent line/groove - the asteroid only needs to move onto the earth's groove (anywhere on that specific groove the earth occupies on the record works) and then the asteroid is sped up or slowed down (not much though) on that exact orbit -> either will result in a collision with earth.

The only real way to stop such activities is with spaceships. That is my entire argument - you are saying that is less feasible than making a missile out of an asteroid? I appreciate the explanation fr

Tbh, it wasn't until the game Terra Invicta that I really considered the solar system, as it actually is. That game has no other relevance to this particular conversation - good game, very different kind of 4x that I recommend but unrelated.


> I mean the exact identical orbit, the exact path that earth takes around the sun -> ahead (or behind, it does not matter) of where we are and instead of 365 days to circle the sun, the asteroid is moving at a rate that will take MORE days

Unfortunately that's not really possible. To a first approximation, Earth's orbit is a circle with the Sun at its center, and the size of that circle is determined entirely by Earth's orbital speed around the Sun. Assuming you're also in a circular orbit, if you move at Earth's speed, the size of your orbit will be the same as Earth's. If you move faster or slower, your orbit will be smaller or larger, respectively, unless you wish to continuously burn fuel to maintain your distance from the Sun. That's why I said the asteroid's orbit must be slightly larger than Earth's for an Earth-catches-up-to-asteroid-in-similar-orbit scenario.

Obviously things get more complicated once you consider non-circular orbits, but the end result is similar - you can't continuously hang out in Earth's path while moving slower than the Earth around the Sun without burning a stupendous amount of fuel.
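A quick numerical sanity check of that speed/size relationship (the constants are standard textbook values, not from the thread):

```rust
// For a circular orbit, gravity supplies the centripetal acceleration:
// v^2 / r = GM / r^2, which rearranges to r = GM / v^2.
// So a slower orbital speed forces a larger circular orbit.
fn main() {
    let gm_sun: f64 = 1.327e20; // Sun's gravitational parameter, m^3/s^2

    let radius = |v: f64| gm_sun / (v * v); // circular-orbit radius in m

    let r_earth = radius(29_780.0); // Earth's orbital speed, ~29.78 km/s
    let r_slow = radius(29_000.0); // a slightly slower object

    println!("Earth-speed radius:  {:.3e} m", r_earth); // ~1.496e11 m, i.e. ~1 AU
    println!("slower-speed radius: {:.3e} m", r_slow);

    // Moving slower than Earth means settling into a *larger* circular orbit.
    assert!(r_slow > r_earth);
}
```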

> you are saying that is less feasible than making a missile out of an asteroid? I appreciate the explanation fr

I think it's more that I think that "making a missile" is likely to require less fuel since you only need to adjust the asteroid's orbit ~once (only need to get it on a collision course) instead of ~twice (get the asteroid on a near-collision course, then adjust it again for the "right" kind of collision).


I can't reply to your other comment - that is what I assumed you were saying but it does not make sense to me outside the process that naturally occurs - I'm assuming the sun's gravity simply can't move objects of such different mass at the same rate, and thereby the orbit and position change accordingly?

The speed doesn't have to be much different - 366 days and earth will eventually hit the asteroid - 364 days and it will eventually hit the earth.

Ahh, I'm still having a hard time figuring out why that would take more energy - I'm going to be researching this all morning tomorrow.

Thanks for the help!


> I'm assuming the sun's gravity simply can't move objects of such different mass at the same rate, and thereby the orbit and position change accordingly?

Kind of? An object moving in a circular motion at a constant speed must have an acceleration towards the center of the circle of (velocity^2)/(radius). This means that two objects in the same circular orbit moving at different speeds must be experiencing different accelerations towards the center of the circle.

In the simplified case of orbits around the Sun, that acceleration towards the center of the orbit is due to the Sun's gravity. However, gravity accelerates everything at a given distance at the same rate. As a result, you can't have two objects solely influenced by the Sun's gravity that orbit around the Sun with the same orbital shape but moving at different speeds. You'd need something in addition to the Sun's gravity to pull that off.
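Spelling that out (standard circular-orbit algebra, nothing specific to this thread):

```latex
\frac{v^2}{r} = \frac{G M_\odot}{r^2}
\quad\Longrightarrow\quad
v = \sqrt{\frac{G M_\odot}{r}}
```

So at a given distance r from the Sun, gravity alone permits exactly one circular-orbit speed.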

> The speed doesn't have to be much different - 366 days and earth will eventually hit asteroid - 364 days and it will eventually hit the earth.

Sure. When I said slightly-larger-than-Earth-sized orbit, I really meant it. Kepler's third law of planetary motion states (approximately) that (orbital period)^2 is proportional to (radius)^3. Assuming I did my math correctly, if your orbital period goes from 365 to 366 days your orbital radius gets ~0.18% larger, which is roughly a 274,000 km increase over the radius of Earth's orbit. That would fit inside the Moon's orbit (~385,000 km from the Earth)!
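For what it's worth, that arithmetic checks out; here's a quick sanity check (taking 1 AU ≈ 149.6 million km as the radius of Earth's orbit - my assumption, not from the comment):

```rust
// Kepler's third law: T^2 is proportional to r^3, so r is proportional
// to T^(2/3). Bumping the period from 365 to 366 days scales the radius
// by (366/365)^(2/3).
fn main() {
    let au_km = 1.496e8; // mean Earth-Sun distance, km
    let ratio = (366.0f64 / 365.0).powf(2.0 / 3.0);
    let delta_km = (ratio - 1.0) * au_km;

    println!(
        "radius increase: {:.3}% (~{:.0} km)",
        (ratio - 1.0) * 100.0,
        delta_km
    );
    // Comes out to roughly 0.18% and ~273,000-274,000 km - comfortably
    // inside the Moon's ~385,000 km orbital radius.
    assert!((delta_km - 274_000.0).abs() < 10_000.0);
}
```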

> Ahh, I'm still having a hard time figuring out why that would take more energy

At least the way I was thinking, the short answer is that one alteration to an orbit is likely to be cheaper than two, especially if you aren't particularly concerned in what manner the asteroid eventually collides with Earth.


> Could you share a situation where the behavior is necessary?

The effects mentioned in the article are not too uncommon in embedded systems, particularly if they are subject to more stringent standards (e.g., hard realtime, safety-critical, etc.). In such situations predictability is paramount, and that tends to correspond to proving the absence of the effects in the OP.


Ah, the embedded application. Very valid point. I'm guilty of forgetting about that discipline.

I do wonder if it is possible to bin certain features to certain, uh, distributions(?), of Rust? I'm having trouble articulating what I mean, but in essence: so users do not get tempted to use all these bells and whistles when they are aimed at a certain domain or application? Or are such language features beneficial for all applications?

For example, SIM cards are mini computers that actually implement the JVM, and you can write Java and run it on SIM cards (!). But only a subset of Java is allowed; not all features are available. In this case it is due to compute/resource restrictions, but something to a similar tune for Rust - is that possible?


I guess the no_std/alloc/std split is sort of like what you're talking about? It's not an exact match though; I think that split is more borne out of the lack of built-in support some targets have for particular features rather than trying to cordon off subsets of the language to try to prevent users from burning themselves.

On that note, I guess one could hypothetically limit certain effects to certain Rust subsets (for example, an "allocates" effect may require alloc, a "filesystem" effect may require std, etc.), but I'd imagine the general mechanism would need to be usable everywhere considering how foundational some effects can be.
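Purely to illustrate the idea (this is invented syntax - Rust has no effect system today, and the attribute names here are made up):

```rust
// Hypothetical, invented syntax -- not valid Rust today.
// The idea: individual effects gated on the no_std/alloc/std split.

#[effect(allocates)] // might only be expressible with the `alloc` crate
fn grow(v: &mut Vec<u8>) {
    v.push(0);
}

#[effect(filesystem)] // might only be expressible with `std`
fn read(path: &str) -> std::io::Result<String> {
    std::fs::read_to_string(path)
}
```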

> Or are such language features beneficial for all applications?

To (ab)use a Pixar quote, I suppose one can think of it as "not all applications may need these features, but these features should be usable anywhere".


> and they force [] libraries to support multiple modes at once,

I'm not entirely sure I agree? I don't think any library except for the standard library needs to "support multiple modes at once"; everything else just sets its own edition and can remain blissfully unaware of whatever edition its downstream consumer(s) are using.
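Concretely, the edition is just a per-crate setting in the manifest (crate name here is hypothetical):

```toml
# Each crate pins its own edition in Cargo.toml; its dependencies and
# dependents can use different editions and still build together.
[package]
name = "my-lib"
version = "0.1.0"
edition = "2021"
```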

> which is a different kind of maintenance tax than evolving C++ compilers and feature test macros impose.

I'm not sure I agree here either? Both Rust and C/C++ tooling and their standard libraries need to support multiple "modes" due to codebases not all using the same "mode", so to me the maintenance burden should be (abstractly) the same for the two.

> Require RFCs to include an interaction test matrix, compile time and code size measurements, and a pass from rust-analyzer and clippy

IIRC rustc already tracks various compilation-related benchmarks at perf.rust-lang.org. rustc also has edition-related warnings [0] (see the rust-YYYY-compatibility groups), so you don't even need clippy/rust-analyzer.

[0]: https://doc.rust-lang.org/rustc/lints/groups.html


In practice library authors must consider the editions used by downstream crates because public APIs cross crate boundaries. Even if a crate compiles under a single edition, exported APIs often avoid edition specific idioms that could cause friction for consumers compiled under older editions. This leads to a conservative design style where libraries effectively target the lowest common denominator of the ecosystem. The result is that authors informally maintain compatibility across editions even if the compiler technically allows them to ignore downstream edition choices.

Large Rust organizations often run mixed-edition workspaces because upgrading hundreds of crates simultaneously is impractical. Libraries in the workspace therefore interact across editions during migration periods. So while technically each crate chooses its edition, ecosystem reality introduces cross-edition friction.

Feature test macros in C and C++ primarily gate access to optional APIs or compiler capabilities. Rust editions can change language semantics rather than merely enabling features. Examples include changes to module path resolution, trait object syntax requirements such as dyn, or additions to the prelude. Semantic differences influence parsing, name resolution, and type checking in ways that exceed the scope of a conditional feature macro.

Tooling complexity is structurally different. Rust tools such as rustc, rust-analyzer, rustfmt, and clippy must understand edition dependent grammar and semantics simultaneously. The tooling stack therefore contains logic branches for multiple language modes. In contrast, feature test macros generally affect conditional compilation paths inside user code but do not require parsers or analysis tools to support different core language semantics.

Rust promises permanent support for previous editions, which implies that compiler infrastructure must preserve older semantics indefinitely. Over time this creates a cumulative maintenance burden similar to maintaining compatibility with many historical language versions.


> Even if a crate compiles under a single edition, exported APIs often avoid edition specific idioms that could cause friction for consumers compiled under older editions.

Do you have some concrete examples of this outside the expected bump to the minimum required Rust version? I'm coming up blank, and this sounds like it goes against one of the primary goals of editions (i.e., seamless interop) as well.

> So while technically each crate chooses its edition, ecosystem reality introduces cross-edition friction.

And this is related to the above; I can't think of any actual sources of friction in a mixed-edition project beyond needing to support new-enough rustc versions.

> Rust tools such as rustc, rust analyzer, rustfmt, and clippy must understand edition dependent grammar and semantics simultaneously.

I'm not entirely convinced here? Editions are a crate-wide property and crates are Rust's translation units, so I don't think there should be anything more "simultaneous" going on compared to -std=c++xx/etc. flags.

> Over time this creates a cumulative maintenance burden similar to maintaining compatibility with many historical language versions.

Sure, but that's more or less what I was saying in the first place!


> You can do basically the same thing with stackful coroutines.

...Minus the various tradeoffs that made stackful coroutines a nonstarter for Rust's priorities. For example, Rust wanted:

- Tight control over memory use (no required heap allocation, so segmented stacks are out)

- No runtime (so no stack copying and/or pointer rewriting)

- Transparent/zero-cost interop over C FFI (i.e., no need to copy a coroutine stack to something C-compatible when calling out to FFI)


"Tight control over memory use" sounds wrong considering every single allocation in rust is done through the global allocator. And pretty much everything in rust async is put into an Arc.

I don't understand what kind of use case they were optimizing for when they designed this system. I don't think they were optimizing only for embedded or similar applications where they don't use a runtime at all.

Using stackful coroutines, having a trait in std for runtimes and passing that trait around into async functions would be much better in my opinion, instead of having the compiler transform entire functions and having more and more and more complexity layered on top of it to solve the complexities that this decision created.


> "Tight control over memory use" sounds wrong considering every single allocation in rust is done through the global allocator.

In the case of Rust's async design, the answer is that that simply isn't a problem when your design was intentionally chosen to not require allocation in the first place.

> And pretty much everything in rust async is put into an Arc.

IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.

I think it would be nice if there were less reliance on specific executors in general, though.

> Don't think they were optimizing only for embedded or similar applications where they don't use a runtime at all.

I would say less that the Rust devs were optimizing for such a use case and more that they didn't want to preclude such a use case.

> having a trait in std for runtimes and passing that trait around into async functions

Yes, the lack of some way to abstract over/otherwise avoid locking oneself into specific runtimes is a known pain point that seems to be progressing at a frustratingly slow rate.

I could have sworn that that was supposed to be one of the improvements to be worked on after the initial MVP landed in the 2018 edition, but I can't seem to find a supporting blog post, so I'm not sure if I'm getting this confused with the myriad other sharp edges Rust's async design has.


> > And pretty much everything in rust async is put into an Arc.

> IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.

Well, if you're implementing an async rust executor, the current async system gives you exactly 2 choices:

1) Implement the `Wake` trait, which requires `Arc` [1], or

2) Create your own `RawWaker` and `RawWakerVTable` instances, which are gobsmackingly unsafe, including `void*` pointers and DIY vtables [2]

[1] https://doc.rust-lang.org/std/task/trait.Wake.html

[2] https://doc.rust-lang.org/std/task/struct.RawWaker.html
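For the curious, option 1 is pretty small; a minimal sketch using only std (the names here are mine):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// A toy waker that just records that it was woken. Implementing `Wake`
// requires putting the waker in an `Arc` -- that's where the Arc
// requirement mentioned above comes from.
struct FlagWaker {
    woken: AtomicBool,
}

impl Wake for FlagWaker {
    fn wake(self: Arc<Self>) {
        self.woken.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let flag = Arc::new(FlagWaker {
        woken: AtomicBool::new(false),
    });
    // std provides `impl From<Arc<W>> for Waker where W: Wake`.
    let waker = Waker::from(flag.clone());
    waker.wake();
    assert!(flag.woken.load(Ordering::SeqCst));
    println!("woken: {}", flag.woken.load(Ordering::SeqCst));
}
```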


Sure, but those are arguably more like implementation details as far as end users are concerned, aren't they? At least off the top of my head I'd imagine tokio would require Send + Sync for tasks due to its work-stealing architecture regardless of whether it uses Wake or RawWaker/RawWakerVTable internally.

I find it interesting that there's relatively recent discussion about adding LocalWaker back in [0] after it was removed [1]. Wonder what changed.

[0]: https://github.com/rust-lang/libs-team/issues/191

[1]: https://github.com/aturon/rfcs/pull/16


You can do rust async by moving instead of sharing data, for example

> I actually don’t see how this is any more beneficial than the existing no_panic macro

I think looking at the caveats listed in the no_panic docs should give you some ideas as to how a "proper" no_panic effect could improve on the macro.

Furthermore, a "proper" effect system should make working with effects nicer in general - for instance, right now writing functions that work independently of effects is not particularly ergonomic.

> The vast majority of rust programs don’t need such validation.

I think you also need to consider the niches which Rust wants to target. Rust is intended to be usable for very low-level/foundational/etc. niches where being able to track such effects is handy, if not outright required, so adding such support would be unblocking Rust for use in places the devs want it to be usable in.

> And for those that do, the Ferrocene project is maintaining a downstream fork of the compiler where this kind of feature would be more appropriate.

Given this bit from the Ferrocene website:

> Ferrocene is downstream from Rust

> It works with existing Rust infrastructure and the only changes made in the code were to cover testing requirements of ISO 26262, IEC 61508 and IEC 62304 qualification. All fixes are reported upstream for constant improvement.

I would suspect that such changes would be out of scope for the Ferrocene fork because that fork is more intended to be a qualified/certified Rust more than Rust + completely novel extensions.

> The compiler itself provides a powerful api via build.rs and proc macros which let downstream maintainers build their desired customization.

Given the complexity of the features listed this feels tantamount to asking each individual consumer to make their own fork which doesn't seem very likely to attract much interest. IIRC async even started off like that (i.e., using a macro), but that was painful enough and async thought to be useful enough to be promoted to a language feature.

I'm curious to what extent one can implement the described features using just build.rs/proc macros in the first place without effectively writing a new compiler.


> when mixing crates from various editions and how changes interact together.

Could you elaborate more on this? It's not obvious to me right now why (for example) Crate A using the 2024 edition and Crate B using the 2015 edition would require full access to both crates' source beyond the standard lack of a stable ABI.


Because in order to have standard library breaking changes across editions - if those types are exposed in a crate's public API, or change their semantics across editions - the compiler has to be able to translate between them when generating code.

See the Rust documentation on what editions are allowed to change, and the advanced migration guide for examples regarding manual code migration.

Not so much what has happened thus far, rather the limitations imposed in what is possible to actually break across editions.


Or put another way, a hypothetical feature that you made up in your head is the thing that requires source access. Editions do not let you change the semantics of types.

To be fair, Rust tooling does tend toward build-from-source. But this is for completely different reasons than the edition system: if you had a way to build a crate and then feed the binary into builds by future compilers, it would require zero additional work to link it into a crate using a different edition.


Exactly - hence why people should stop talking about editions as if they sort out all Rust evolution problems; in your own words, they don't allow changing type semantics.

I think you're too stuck on the current implementation. Work is going into investigating how to evolve the standard library over editions. The "easiest" win would be to have a way to do edition-dependent re-exports of types.

What I am stuck on is Rust folks advocating editions as the solution for everything in language evolution, when they clearly aren't.

What you're describing sounds more like a potential issue with editions if/when they allow breaking stdlib changes more than a problem with editions as they exist today, which is more what I took the original comment to be talking about.

Exactly because they don't allow it, they don't cover all scenarios regarding language evolution

OK, sure, but again what breaking changes editions do/don't currently allow is independent from what SkiFire13/I was responding to, which was the "requires full access to source code" bit.

How do you expect a compiler to be able to mix and match changes across editions between crates, if those happen to be changes in semantic behaviour?

Depends on the change. Obviously the compiler doesn't need to care about cross-edition compatibility between crates if the changes in question don't impact the public API. Otherwise, I'd expect the compiler to canonicalize the changes, and from what I understand that is precisely how edition changes are chosen/designed/implemented.

> because if the tariffs were found to be unlawful, it could easily refund them.

I think it's worth emphasizing that the US government argued not only that it could issue refunds, but that it would issue refunds and that it would not oppose an order to do so. In addition to the quote from the US government's opposition to a motion for preliminary injunction, there are these quotes mentioned in the opinion for the linked order [0]:

> [E]ven if future entries are liquidated, defendants do not intend to oppose the [c]ourt’s authority to order reliquidation.... Such reliquidation would result in a refund of all duties determined to be unlawfully assessed, with interest.

> Defendants “will not oppose the [c]ourt’s authority to order reliquidation of entries of merchandise subject to the challenged IEEPA duties and that they will refund any IEEPA duties found to have been unlawfully collected, after a final and unappealable decision has been issued finding the duties to have been unlawfully collected and ordering defendants to refund the duties.”

> “If tariffs imposed on plaintiffs during these appeals are ultimately held unlawful, then the government will issue refunds to plaintiffs, including any post-judgment interest that accrues.”

> For any plaintiff who is an importer, even if a stay is entered and defendants do not prevail on appeal, plaintiffs will assuredly receive payment on their refund with interest. ‘[T]here is virtually no risk’ to any importer that they ‘would not be made whole’ should they prevail on appeal. The most ‘harm’ that could incur would be a delay in collecting on deposits. This harm is, by definition, not irreparable. Plaintiffs will not lose their entitlement to a refund, plus interest, if the preliminary injunction is stayed, and they are guaranteed payment by defendants should the [c]ourt’s decision be upheld. And defendants do not oppose the reliquidation of any entries of goods subject to IEEPA duties paid by plaintiffs that are ultimately found to be unlawful after appeal.

> To the extent that any future entries are liquidated, the [c]ourt may order reliquidation of entries subject to the challenged de minimis exemption if the duties paid by Axle are, in a final and unappealable decision, found to have been unlawfully collected. Such reliquidation would result in a refund of all duties determined to be unlawfully assessed, with interest.

[0]: https://www.cit.uscourts.gov/sites/cit/files/25-154.pdf


> As sibling says, the Court very definitely did not order them to refund anything.

> You may see other judges rule that the refunds don't have to be paid, for any of several reasons.

I think the government might have a bit of an uphill battle given arguments they have previously made to courts. For example, consider this decision from the US Court of International Trade from 2025-12 [0]:

> However, as the Government notes in its response to Plaintiffs’ motion for a preliminary injunction here, it “[has] made very clear—both in this case and in related cases—that [it] will not object to the [c]ourt ordering reliquidation of plaintiffs’ entries subject to the challenged IEEPA duties if such duties are found to be unlawful.”

> <snip>

> Judicial estoppel would prevent the Government from taking an inconsistent approach after a final result in V.O.S. [] The Government has emphasized this point itself, citing to Sumecht NA, Inc. v. United States, which holds that “the Government would be judicially estopped from taking a contrary position” regarding a prior representation involving the availability of relief in the form of reliquidation. [] Having convinced this court to accept that importers who paid IEEPA tariffs will be able to receive refunds after reliquidation, and having benefited from the court’s subsequent conclusion that importers will not experience irreparable harm as a consequence of liquidation, the Government cannot later “assume a contrary position” to argue that refunds are not available after liquidation.

> <snip>

> Additionally, the panel in In re Section 301 Cases unanimously agreed—as we do now—that the USCIT has “the explicit power to order reliquidation and refunds where the government has unlawfully exacted duties.” [] The Government acknowledges that “a decision [to the contrary] would be inconsistent with years of [the court’s] precedent.”

Obviously all this doesn't prevent the government from appealing anyways, but they'll need to get creative to get around their previous representations.

[0]: https://www.cit.uscourts.gov/sites/cit/files/25-154.pdf


> It seems that it tries to solve the problem of excessive template instantiations

No, I don't think the way Rust implements dynamic dispatch has much, if anything, to do with trying to avoid code bloat. It's just a different way to implement dynamic dispatch with its own set of tradeoffs.


> AFAICT, you cannot turn a block scoped defer into the function one.

You kinda-sorta can by creating an array/vector/slice/etc. of thunks (?) in the outer scope and then `defer`ing iterating through/invoking those.
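The thread's language isn't Rust, but the idea can be sketched there too (names are mine; a guard whose Drop runs at function exit stands in for a function-level defer):

```rust
// Inner scopes register cleanup thunks instead of running them inline;
// the guard runs them all when it drops at the end of the function.
struct Deferred(Vec<Box<dyn FnOnce()>>);

impl Drop for Deferred {
    fn drop(&mut self) {
        // Run in reverse registration order, like typical defer semantics.
        for thunk in self.0.drain(..).rev() {
            thunk();
        }
    }
}

fn main() {
    let mut deferred = Deferred(Vec::new());
    {
        // Block scope: push thunks instead of deferring here directly.
        deferred.0.push(Box::new(|| println!("cleanup A")));
        deferred.0.push(Box::new(|| println!("cleanup B")));
    }
    println!("function body");
    // Prints "function body", then "cleanup B", then "cleanup A".
}
```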

