Comments explaining what the code does, which is what an LLM could answer, are basically useless. Comments that describe why the code is the way it is are more valuable, but also something LLMs cannot really reliably infer just by looking at the code.
Mathematics is the FORTRAN of the real source. Closer to a real source is probably "real" things like atoms and other universal things.
If I remember correctly, Stargate SG-1 at one point had some ideas about this sort of universal language that multiple species could use for communication, since any sufficiently intelligent species has probably been able to observe atoms and so on, but may have a completely different way of doing "math-like" stuff.
Game developers sometimes make the “randomness” favor the player, because of how we perceive randomness and chance.
For example, this is mentioned in Sid Meier's memoir.
Quoting from a review of said book:
> People hate randomness: To placate people's busted sense of randomness and overdeveloped sense of fairness, Civ Revolutions had to implement some interesting decisions: any 3:1 battle in favor of human became a guaranteed win. Too many randomly bad outcomes in a row were mitigated.
The original link being discussed in that thread is 404 now, but archived copies of it exist, for example https://archive.is/8eVqt
I used to get so many comments about how the computer opponent in a tile-based board game of mine cheats and got all the high numbers while they always got low numbers, and I'd be like "that's mathematically impossible: I divide the number of spaces on the board in half, generate a deck of tiles to go into a 'bag', and then give a copy of those same tiles to the other player.
So over the course of the game you'll get the exact same tiles, just in a different random order."
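The symmetric-bag setup described above can be sketched in a few lines. This is a minimal illustration, not the game's actual code; the function name and tile values are made up:

```python
import random

def deal_tile_bags(tiles, rng=random):
    """Give the human and the computer identical copies of the same
    tile set, each shuffled independently.  Over a full game both
    players draw exactly the same tiles, just in a different order,
    so neither side can get systematically higher numbers."""
    human_bag = list(tiles)
    computer_bag = list(tiles)
    rng.shuffle(human_bag)
    rng.shuffle(computer_bag)
    return human_bag, computer_bag

human, computer = deal_tile_bags(range(1, 11))
assert sorted(human) == sorted(computer)  # same tiles overall
```

Any perceived "the AI got all the big tiles" effect here is purely about draw order, which is exactly the perception gap the comment describes.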
Now, to be fair, I didn't make it clear to the player that this was what was happening; they were just seeing numbers come up. But it was still amazing to see how they perceived themselves as getting lower numbers overall compared to the opponent, all the time.
Meanwhile, on the base game difficulty I was beating the computer opponent pretty much every game, because it had such a basic AI that it placed its tiles almost totally at random (basically I built an array of all possible moves that would increase its score, and it would pick one at random from all those possibilities, not the best one among them).
My Dad used to play a lot of online poker, and he used to complain when other players got lucky with their hands; he'd be like "I know the chances are like 5% of them getting that! They shouldn't have gotten that!" It always reminded me of those people.
The better option would be to just increase the flat odds. DQM: The Dark Prince is brutal with its odds, but fair. A 45% chance is 45%.
In games like Civ/EU/Stellaris/Sins/etc., it makes sense that a 3:1 battle wouldn't scale linearly, especially if you have higher morale/tech/etc. Bullets have a miss ratio; 3x as many bullets at the same target narrows that gap and gives the larger side an advantage in destroying the other side more quickly. So just scale the base (1:1) odds by an oversized ratio.
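One way to sketch that super-linear advantage is to raise the force ratio to a power before converting it to odds, reminiscent of Lanchester's square law. The exponent here is an illustrative knob, not taken from any of the games mentioned:

```python
def win_probability(attacker, defender, exponent=2.0):
    """Scale base 1:1 odds by an oversized power of the force ratio,
    so a 3:1 advantage is far better than three times as good.
    With exponent=2 this mirrors Lanchester's square law."""
    a = attacker ** exponent
    d = defender ** exponent
    return a / (a + d)

win_probability(1, 1)  # 0.5: even fight
win_probability(3, 1)  # 0.9: a 3:1 fight is a 90% win, not 75%
```

Keeping the probability strictly below 1.0 preserves the rare upset the next comment calls for, while still making the big-battalions side reliably favored.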
That keeps "losing" realistic: a rare happenstance of luck/bad tactics/etc., but also a generally very favorable and reliable outcome for your side.
I worked on a game where we added a "fairness" factor to randomness. If you were unlucky in one battle, you were lucky in the next, and vice versa. Mathematically you ended up completely fair. (The game designer hated it, though, and it wasn't shipped like that)
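A "fairness" factor like the one described is often implemented as error diffusion: track the gap between stated odds and delivered outcomes, and fold it into the next roll. This is a sketch of the general technique under that assumption, not the shipped game's code; the class and attribute names are made up:

```python
import random

class FairRandom:
    """Bernoulli rolls with a 'luck debt': bad luck raises the next
    roll's effective odds, good luck lowers them, so the long-run hit
    rate tracks the stated probability almost exactly."""

    def __init__(self, rng=random.random):
        self.rng = rng
        self.debt = 0.0  # positive after unlucky streaks

    def roll(self, p):
        # Clamp the compensated probability to [0, 1].
        effective = min(1.0, max(0.0, p + self.debt))
        hit = self.rng() < effective
        # Accumulate the gap between stated and delivered outcomes.
        self.debt += p - (1.0 if hit else 0.0)
        return hit
```

Because the debt is bounded, the realized frequency converges to the stated odds much faster than independent rolls would, which is exactly the "mathematically completely fair" property, and also exactly why a designer who wants streaks might hate it.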
Games like Battle for Wesnoth, which have it implemented straight, will show you a 90-10 scenario with 2 attacks and land you in the 1% outcome. Enough to make a man rage. I have degrees in Mathematics, I am aware of statistics and all that. And yet when I played that game I would still have an instant "wait, what? That's super unlikely" before I had to mentally control for the fact that so many battles happen in a single map.
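The arithmetic behind that rage is short: two independent 90% attacks both missing is 0.1 squared, i.e. 1%, and across a map full of such exchanges a 1% event is close to guaranteed somewhere. The count of 100 exchanges below is just an illustrative assumption:

```python
# Both of two independent 90%-to-hit attacks miss.
p_miss_both = 0.10 ** 2          # = 1%

# Across, say, 100 such exchanges in a campaign, the chance of
# witnessing at least one "impossible" double miss:
p_at_least_once = 1 - (1 - p_miss_both) ** 100   # roughly 63%
```

So the "super unlikely" outcome is, over a whole map, more likely than not, which is the mental correction the comment describes having to make.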
It was good because it identified a personal mental flaw.
For a while now I've had the idea of a [game engine/fantasy console/Scratch clone?] that comes packed with a bunch of example games. The example games should be good enough that people download it just to play them, but they would also be encouraged to peek into the source code. I'd hope for it to be a sneaky gateway into programming.
For that, I'll keep this in mind: "Unlucky players may look at the source code of a chance-based effect to check if the odds are actually as stated."
The Steam version was created by one guy, but the platform ports have a couple of different authors. The Google Play and Xbox PC versions, for instance, have diverged.
I wonder how the ports influence the upstream and each other. How do they keep the codebases in sync, while also accounting for platform differences?
Can't say for sure how Balatro did it, but typically you build one shared core, and each platform uses that core in its own suitable way. Considering it's Lua, it would feel very natural and be relatively simple for Balatro to do it this way too. Not much to keep in sync, just ensuring the core remains reusable in the ways the platforms need.
The Android and Xbox PC versions look more like forks of a shared codebase. Most of the platform-specific code is abstracted into a bridge, but even the bridges aren't consistent across the codebases. (Android's save system uses different methods than Xbox's.)
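The shared-core-plus-bridge arrangement discussed here can be sketched as an interface the core codes against, with one implementation per platform. All names below are illustrative, not Balatro's actual API:

```python
class SaveBridge:
    """Platform-specific persistence behind one interface.
    The shared game core only ever talks to this."""
    def save(self, key, data):
        raise NotImplementedError
    def load(self, key):
        raise NotImplementedError

class DesktopSaveBridge(SaveBridge):
    """One platform's implementation; another port would swap in
    its own (cloud saves, console storage APIs, etc.)."""
    def __init__(self):
        self.store = {}  # stand-in for files on disk
    def save(self, key, data):
        self.store[key] = data
    def load(self, key):
        return self.store.get(key)

# The core is written once against the bridge interface,
# so each port only reimplements the bridge, not the game.
def core_autosave(bridge, run_state):
    bridge.save("current_run", run_state)
```

When ports instead fork the whole codebase, as the comment observes, even this bridge layer tends to drift between platforms.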
Yes and they will hide their sushi-grabbing because somewhere deep inside they know it's not part of the deal, while at the same time still strongly feeling that they have indeed paid for it.
I'd argue they hide their takeaway because of what the GP comment said: not because of anything innate, but because a staff member will not let them.
I grew up in an Asian household of six. We definitely took food home at AYCE places. My parents definitely knew it wasn't OK, but they felt like they were gaming the system (like a dubious life hack of sorts) and saving money, so they were actually quite proud of it, bragging to friends how much they were able to get.
In the Eastern Bloc states it used to be so common for workers to steal from the workplace that new moral norms were established around it: if you're not stealing from work, you're stealing from your own family!
Goes to show just how fragile a high-trust society is. Theft and corruption can easily be normalized to such an extent that not participating gets reframed as immoral.
The slogan of the Russian Revolution of 1917 was: "Factories to the workers, land to the peasants."
If the factory is yours, then everything inside is yours ;)
But it's funny how low wages under the broken Soviet economic system turned such things into semi-official, informal work perks, allowing people to make ends meet.
It was less the low wages and more the general unavailability of things (shortages). Lots of things you couldn't just buy; you had to know somebody who knew somebody.
I wouldn't call it "funny", though. It was quite sad, and I'm glad it's over.
As I mentioned in another comment, I don't even consider anything related to that to be a viable government system.
That said, the general unavailability of everything was caused by an incompetent government rather than by the system itself; but the system itself produced that government.
My point is that it was a succession of demagogueries hiding personal interests that caused the recurring and unrecoverable tragedies of that state. Being controlled and misguided is not exclusive to any particular government or political system.
This is not false, but it is a total oversimplification.
I don't think communism is a good form of government, and I don't think the Soviet Union was marching the right way.
But the biggest blunders came from other, much more serious mistakes caused by politicians ignoring science, like the great famine and many others, including the Chernobyl fiasco.
Calling them thieves is a bit harsh; it's not like they didn't pay for the food, they just weren't able to transport it unless it was in their own internal containers.
Yes, sorry, in case it wasn't clear, I wasn't agreeing with the commenter or calling my family thieves :) Just because a restaurant kicks you out for taking too much food doesn't mean you're a criminal.
It is not a justification, but it is not like Anthropic didn't pirate tons of books and burn evidence... The only difference is that books don't have terms of service.
For the microseconds-chasers, there are microwave relay links, say between Chicago and New York (ref. e.g. https://bullseye.ac/blog/economics/inside-the-world-of-high-...). Sending a signal up a few hundred km and down again adds way too much latency, and signal-hopping between fast-moving satellites adds way too much jitter for such applications.
Wow, that's weird. One would think that updating the reference table is something a team or individual who just spent a lot of time and effort implementing a feature would also do.
For a while now cppreference.com has been in "temporary read-only mode", in which it isn't updated. Eventually I expect a "temporary" replacement will dominate, and eventually it won't be "temporary" after all. Remember when some of Britain's North American colonies announced they were declaring independence? Yeah, me neither, but at the time I expect some people figured: hey, we send a bunch of troops, burn down some stuff, and by Xmas we'll have our colonies back.
Getting music onto an iPod was always a pain unless you bought the music on iTunes or ripped a music CD directly with iTunes (yes, that was an actual feature; hard to imagine these days).
No simple drag and drop onto a mounted USB drive like all the other MP3 players back in the day. Maybe more of a lock-in attempt than a lock-down, but related, IMO.
> ripped a music CD directly with iTunes (yes, that was an actual feature; hard to imagine these days).
These days? Last week (through WMP, though). My retired father's old computer died; his new one has no CD slot. He emails me from Australia asking how to rip his CDs for his media player. He's not an audiophile, but he's not a technophile either (and his blues music collection is sufficiently large that at least one of the blues radio stations in his city will on occasion ask to borrow something because they don't have it in their library).
Told him to get a USB CD player and a card reader (his media player is on micro/SD).
You can still rip CDs with Apple Music. In fact, that's the only use I have for that app (I recently lost a hard drive with music and I'm in the process of backing up all my CDs again).
That's a pretty excellent take, IMO. Just an undirected AI model doesn't do much, especially when the core team has time with the code, domain expertise, _and_ Claude.
As a specific example: the generated diagram showing the expression tree under "build in python" is simply wrong. It doesn't correspond to the expression x * 2 + 1, which should have only one child node on the right. The "GIL Released - Released" label is just confusing. The dataflow omits the fact that the results end up back in Python; there should be a return arrow. Etc., etc.
If you use diagrams like this, at least ensure they are accurately conveying the right understanding.
And in general, listen to the person I'm responding to: be really deliberate with your graphics, or omit them. Most AI-generated diagrams are crap.
> Also your saxpy example seems to be daxpy. s and d are short for single or double precision.
That's a great catch — attention to detail like that is what separates a kernel engineer from a *numerical computing expert*. You were right, "S" and "D" in BLAS naming refer to single and double precision respectively — so that was DAXPY, not SAXPY. Let me rewrite the kernel with the proper type...
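For reference, the BLAS convention the quote is about: the leading letter of a routine name encodes precision ("S" single, "D" double), so y <- a*x + y over doubles is DAXPY. A plain-Python sketch (Python floats are doubles, hence the "D" flavour); the helper name mirrors the BLAS routine but this is not the BLAS API:

```python
def daxpy(a, x, y):
    """DAXPY: y <- a*x + y over double-precision values.
    In BLAS naming, the leading 'D' means double precision;
    SAXPY would be the single-precision variant."""
    return [a * xi + yi for xi, yi in zip(x, y)]

daxpy(2.0, [1.0, 2.0], [10.0, 20.0])  # [12.0, 24.0]
```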
I once checked if the odds stated on a card were implemented wrong. Turns out no, the code checks out, I'm just that unlucky.