Since they bought bitcoin while their stock was worth ~2-4x what it is today, I’d say the “arbitrage paper certificates for digital 1s and 0s” play worked out pretty well overall.
Bought btc for $10k and $51k (about 60/40 respectively) and it’s trading for $65k 5 years later. Dunno what other buying/selling they may have done.
From Wikipedia:
> In October 2020, Square put approximately 1% of their total assets ($50 million) in Bitcoin (4,709 bitcoins), citing Bitcoin's "potential to be a more ubiquitous currency in the future" as their main reasoning.[52] The company purchased approximately 3,318 bitcoins in February 2021 for a cost of around $170 million, bringing Square's total holdings to around 8,027 bitcoins (equivalent to around US$500 million in 2021, around US$481 million as of July 2024).[53]
You have to compare it to what else they could have done with the money, such as investing in their own growth, or even giving it back to shareholders if they had no good ideas what to do with the money.
I did! If they had invested it in themselves it would have been a 50-75% loss; same with doing a buyback (returning the cash to stockholders) at a high stock price.
Dunno what better proxy I could use for how it would have gone other than their actual stock price. Unless we think the next best idea they didn't invest in would have outperformed all the things they did invest in. But that's very speculative.
Instead they got a blended 300% gain on btc.
Should have sold the entire company for cash and bought bitcoin at the timelines they did.
Maybe if they'd invested in themselves they would have been able to expand (e.g. they could have hired more sales people or spent more on advertising).
If they truly were unable to find a reliable investment then they should have given the money back to shareholders instead of speculating on a non-productive non-asset with awful negative externalities.
Most C compilers let you use variable-length arrays (VLAs) on the stack. However, they're problematic, and mature code bases usually disable them (-Wvla with -Werror) because if the size is derived from user input, it's exploitable.
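A minimal sketch of why that warning exists (the function names and the summing task are invented for illustration): with a VLA, an attacker-controlled size silently grows the stack frame, potentially past the guard page; with a fixed cap, the untrusted size can be rejected before anything is allocated.

```c
#include <stddef.h>
#include <string.h>

/* Dangerous pattern: `n` sizes a stack allocation at runtime. A huge
 * attacker-supplied `n` can jump the stack past its guard page ("stack
 * clash"); there is no error path here that could catch it. */
static int sum_readings_vla(const int *src, size_t n) {
    int buf[n];                       /* VLA: frame size decided at runtime */
    memcpy(buf, src, n * sizeof buf[0]);
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += buf[i];
    return total;
}

/* Safer pattern once -Wvla is an error: fixed-size buffer known at
 * compile time, with oversized input rejected up front. */
static int sum_readings_capped(const int *src, size_t n, int *out) {
    enum { MAX_READINGS = 1024 };
    int buf[MAX_READINGS];            /* frame size fixed at compile time */
    if (n > MAX_READINGS)
        return -1;                    /* untrusted size rejected, never allocated */
    memcpy(buf, src, n * sizeof buf[0]);
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += buf[i];
    *out = total;
    return 0;
}
```

Both behave identically for sane sizes; only the second has a defined answer for an insane one.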
Although it's quite a flawed novel compared to brilliant space opera like Hyperion, I have a bit of a soft spot for Carrion Comfort. I think it'd make a great movie!
It obviously owes a lot to Stephen King’s IT. But it stands on its own merits…and I give it extra credit because it was set in my home town. (https://en.wikipedia.org/wiki/Summer_of_Night)
I would also rate this above Hyperion. Like Hyperion book 1, it crossed into the horror genre quite well; the rest of the Hyperion books were a little too preachy, but a good series nevertheless. RIP Dan.
I think the interesting thing about having protection in software is you can do things differently, and possibly better. Computers of yesteryear had protection at the individual object level (e.g. https://en.wikipedia.org/wiki/Burroughs_Large_Systems). This was too expensive to do in 1970s hardware and so performance suffered. Maybe it could be done better in software with modern optimizing compilers and perhaps a few bits of hardware acceleration here and there? There's definitely an interesting research project to be done.
Since we're talking about defining our own processor, that means we need to define one with cheaper traps.
Expanding on what I wrote above about "bits of hardware acceleration", maybe adding a few primitives to the instruction set that make page table walking easier would help.
And with a trusted compiler architecture you don't need to keep the ISA stable between iterations, since it's assumed that all code gets compiled at the last minute for the current ISA.
Taking this to an extreme, the whole idea of a TLB sounds like hardware protection too?
As a thought experiment, imagine an extremely simple ISA and memory interface where you would do address translation or even cache management in software if you needed it... the different cache tiers could just be different NUMA zones that you manage yourself.
You might end up with something that looks more like a GPU or super-ultra-hyper-threading to get throughput masking the latency of software-defined memory addressing and caching?
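To make the thought experiment concrete, here's a purely illustrative sketch (all names, sizes, and the two-level layout are invented for this example) of address translation done as ordinary code: the kind of walk a trusted compiler could inline, or a cheap software trap could run, on an ISA that leaves translation to software.

```c
#include <stdint.h>

#define PAGE_BITS  12   /* 4 KiB pages (example choice) */
#define LEVEL_BITS 10   /* 1024 entries per table level  */

/* A two-level software page table: the top level holds pointers to
 * leaf tables; leaf entries hold page-aligned frame addresses, 0 = unmapped. */
typedef struct {
    uint64_t *l1[1u << LEVEL_BITS];
} sw_page_table;

/* Translate a virtual address to a "physical" one.
 * Returns 0 on an unmapped address -- the software equivalent of a
 * page fault, where a trap handler would take over. */
static uint64_t sw_translate(const sw_page_table *pt, uint64_t vaddr) {
    uint64_t i1 = (vaddr >> (PAGE_BITS + LEVEL_BITS)) & ((1u << LEVEL_BITS) - 1);
    uint64_t i2 = (vaddr >> PAGE_BITS) & ((1u << LEVEL_BITS) - 1);
    const uint64_t *leaf = pt->l1[i1];
    if (!leaf || !leaf[i2])
        return 0;                                   /* unmapped: software fault */
    return leaf[i2] | (vaddr & ((1u << PAGE_BITS) - 1));  /* frame | offset */
}
```

On today's machines a TLB hides exactly this walk in hardware; in the software-defined version, a memoized lookup in front of `sw_translate` would play the TLB's role, and you'd lean on throughput (as in the GPU comparison above) to hide its latency.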
To some extent (not capabilities), Haiku fits the bill here (https://en.wikipedia.org/wiki/Haiku_%28operating_system%29). Applications are bundles (but not WASM of course). The UI is very clean. The whole OS is also elegant and very fast on modern hardware.
Yes, because BeOS was way ahead of its time. A completely new OS, doing all system things so much more efficiently that it could afford to waste CPU time on high-level actions, like moving windows in real time while they were playing videos.
On 1993's hardware, that was impossible with Windows or OS/2.