The Thunderbolt ports on the current Mac lineup provide dramatically less total bandwidth, if that matters for a given use case. Thunderbolt 5's PCIe tunneling is roughly the equivalent of PCIe Gen 4 x4. So even if all 4 of the Thunderbolt 5 ports on a Mac Studio could run at full speed simultaneously, that's still only the equivalent of a single Gen 4 x16 slot. That's less than half the PCIe bandwidth of a basic consumer x86 CPU, to say nothing of the Xeon that was in the previous Intel Mac Pro or a modern Epyc/Threadripper (Pro).
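Rough numbers as a sanity check (a minimal sketch: the per-lane rates are approximate published figures, and the "desktop CPU" lane layout is an assumed typical desktop Ryzen, not any specific product):

```java
// Back-of-the-envelope bandwidth comparison, using approximate link rates.
public class BandwidthSketch {
    public static void main(String[] args) {
        double gen4Lane = 2.0; // PCIe Gen 4: ~2 GB/s per lane, per direction
        double gen5Lane = 4.0; // PCIe Gen 5: ~4 GB/s per lane, per direction

        double tb5Tunnel = 4 * gen4Lane;  // TB5 PCIe tunneling ~= Gen 4 x4: ~8 GB/s
        double fourPorts = 4 * tb5Tunnel; // all four Mac Studio TB5 ports: ~32 GB/s
        double gen4x16   = 16 * gen4Lane; // a single Gen 4 x16 slot: ~32 GB/s
        // Assumed desktop layout: Gen 5 x16 GPU slot + Gen 5 x4 NVMe: ~80 GB/s
        double desktop   = 16 * gen5Lane + 4 * gen5Lane;

        System.out.printf("4x TB5 ports:      ~%.0f GB/s%n", fourPorts);
        System.out.printf("one Gen4 x16 slot: ~%.0f GB/s%n", gen4x16);
        System.out.printf("desktop CPU lanes: ~%.0f GB/s%n", desktop);
    }
}
```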
This is a big reason why things like eGPUs kinda suck. Thunderbolt is fast for external I/O, but it's quite pathetic compared to internal PCI-E.
As others have pointed out here, testing has shown that dropping from x16 to x4 costs most GPUs around 1% to 10% of performance on common workloads - hardly pathetic. In many (gaming) cases it would be unnoticeable.
Their compact solution doesn't cover all needs; they just decided they didn't care about some of those needs. The Intel Mac Pro was the last Apple offering with high-end GPU capabilities. That's now a market segment they simply aren't supporting at all. They didn't figure out how to do it compactly, they just abandoned it wholesale.
Similarly, if your use case depends on a whole lot of fast storage (e.g., the 4x NVMe-to-PCIe x16 bifurcation boards), well, that's also something Apple just doesn't support anymore. They didn't figure out something else. They didn't do super-innovative engineering for it. They just walked away from those markets completely, which they're allowed to do, of course. It's just not exactly inspiring or "deserves credit" worthy.
When they introduced the cheese-grater Mac Pro, the new high-end GPUs were a showcase feature of it, complete with the bespoke "Duo" variants and the special power-connector doohickey (MPX, iirc?). So I'd consider that an attempt to re-enter that market, at least.
I use Nix for my homelab servers, and I'm essentially using AI as my IT support staff. I don't need to ask AI for help installing Hyprland, that's trivial as you say, but setting up nginx port forwarding? Samba configs? k3s or k8s? Yeah, individually any one of those things isn't very hard. But instead of spending 30 minutes reading through config examples and figuring out where it's set up, I can instead spend 30 seconds just telling the AI what I want, skimming the output to see if it looks reasonable, and then doing a good ol' `git commit` of the config file & kicking off the "now go do it" nix build command.
And, critically, at no point does the LLM ever have access to sudo, a shell, etc. It just works with plain text files that aren't even on the machine I'm deploying to.
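For flavor, here's the kind of file that workflow produces: a minimal hypothetical NixOS snippet (the hostname and port are made up), using the standard services.nginx module options:

```nix
# Hypothetical example of an LLM-drafted config I review, commit, and deploy.
# The model only ever edits this text; it never gets a shell on the server.
services.nginx = {
  enable = true;
  virtualHosts."media.example.lan" = {
    # Reverse-proxy requests to a local service on port 8096.
    locations."/".proxyPass = "http://127.0.0.1:8096";
  };
};
```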
yeah I'm using mobile Firefox and its feature support has an awfully high overlap with Safari's. Almost like a bunch of the stuff Chrome supports isn't actually a standard at all yet...
javac, for better or worse, is aggressively against doing optimizations, to the point of producing ridiculously bad code. The belief is that the JIT will do a better job fixing it up if it's handed bytecode that's as close as possible to the original source. But this only helps if (a) the code ever gets JIT'd at all (rarely true for, e.g., class initializers), and (b) the JIT has the budget to do that optimization. Although JITs have the advantage of runtime information, they're also under immense pressure to produce optimizations as fast as possible, so they rarely do the kind of deep optimization an offline compiler can.
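A contrived sketch of point (a), with a made-up class and values: work done in a static initializer runs exactly once, typically under the interpreter, so whatever javac declined to optimize never gets fixed up:

```java
// Illustration only: <clinit> runs once, so the JIT rarely sees it.
// (HotSpot can OSR-compile a very hot loop mid-run, but a short one
// like this typically runs fully interpreted, paying for javac's
// naive bytecode at full price.)
public class InitCost {
    static final int[] TABLE = new int[4096];

    static {
        // javac emits this loop essentially as written, unoptimized.
        for (int i = 0; i < TABLE.length; i++) {
            TABLE[i] = (i * 31) ^ (i >>> 7);
        }
    }

    public static void main(String[] args) {
        System.out.println(TABLE[1234]); // force class initialization
    }
}
```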
The problem is it's quite easy to poke holes in a sandbox when you're outside the sandbox looking in, especially when the user is granting you special permissions they don't understand. These apps aren't doing things like manipulating the heap of the banking app, they are instead just taking advantage of useful but powerful features like screen mirroring to read what the app is rendering.
Your 9900K at 5 GHz really does do less work than a Ryzen 9800X3D at 5 GHz. A lot less (~1700 single-core Geekbench vs ~3300, and just about any other benchmark will tell the same story). Clock speed alone doesn't mean anything.
> 8 Cores and 16 processing threads, based on AMD "Zen 5" architecture
which is the same thread geometry as my 9900K.
My main concerns at the time were:
1. More cores for running large workloads on k8s since I had just upgraded to 128G RAM
2. More thread level parallelism for my C++ code
Naively I thought that, ceteris paribus and assuming good L1 cache utilization, having more physical cores with a higher clock rate would be the ticket for 2.
Does the 9800X3D have a wider pipeline or is it some other microarchitectural feature that makes it faster?
You don't even need to go into the pipeline details. The 9800X3D has 4x the L2 cache, 6x the L3 cache, and 2x the memory bandwidth of the now 8-year-old i9 9900K. 3D V-Cache is pretty cool.
I purposely picked a CPU with the same thread geometry as your 9900K to avoid calls of "apples & oranges" or whatever. If you want more threads, the 9950X is right there in the same socket. Or a Core Ultra 9 285K. Either of which will run circles around a 9900K in code compilation.
I think my i9 was released right around the first Spectre and Meltdown hardware mitigations, but I seem to remember even more recent vulns in that family… so that could also be a factor.
If M5 has 9-18 cores and takes ~20W, then that's ~1-2W per CPU core. If these are 200-300W, and have ~100-200 CPU cores, then guess what? That's also ~1-2W per CPU core.
Xeons, Epycs, whatever this is - they are all also typically optimized for power efficiency. That's how they can fit so many CPU cores in 200-300W.
The CPU is capable. The 8GB of RAM not so much. If this had even just the 12GB of the A19 Pro that'd be a huge upgrade. Unless the RAM shortage gets developers to actually start giving a shit about RAM efficiency, but that seems unlikely to happen honestly.
Especially not when a certified MacBook Air refurb straight from Apple isn't that much more, if you're not able to get the $500 EDU pricing on the Neo. $850 gets you a 16GB RAM / 512GB M4 Air, which is significantly better than the $700 Neo in every way.
Honestly, the 8GB is not really an issue. Unlike basically every other computer in this price range, Apple puts real storage in their machines, which makes well-tuned swap essentially transparent. I'd also bet they have very performant hardware engines for memory (de)compression.
A few years ago, my parents asked me to find a laptop for my sisters, for university use. We targeted this price range. It's shocking but pretty much all laptops from Dell, HP, etc. come with some form of eMMC storage. And I'm not even speaking about the other specs like display or build quality. We ended up buying second-hand M1 and M2 MacBook Airs, and both my sisters and I are very happy with them.
(also, as the "tech support guy" of the family, I'm oh my so happy about them not running windows)
The SSD in the Neo only manages around 1,500 MB/s in sequential benchmarks; it's not an impressive drive.
> It's shocking but pretty much all laptops from Dell, HP, etc. come with some form of eMMC storage.
I just went to Dell's website and picked a random $400 laptop, and it had an NVMe SSD. The $650 Dell 14 Essential is also NVMe. Both are M.2, so they're easily upgraded, replaced, or have data recovery done on them. The only eMMC options I'm seeing are the $300 Chromebooks? Which is nowhere close to "pretty much all laptops." In fact, it'd be "pretty much none of the laptops."
> The SSD in the Neo only manages around 1,500 MB/s in sequential benchmarks; it's not an impressive drive.
That's sequential, which isn't really what you want for swap, but it's already a good start. I agree that it's not impressive, but it's leagues ahead of a SATA SSD. And for swapping on an 8GB machine it's more than enough (when the swap pattern is sequential, though): 8GB at ~1.5 GB/s means you could swap the whole system memory in around five seconds, which is impressive.
> The only eMMC options I'm seeing are the $300 Chromebooks? Which is nowhere close to "pretty much all laptops." In fact, it'd be "pretty much none of the laptops."
Then it's good the situation has improved, genuinely! Less e-waste on the store shelves. I'm pretty sure Windows is nigh unusable on eMMC. And yes, those were sold alongside Chromebooks, but at the markup of a "real computer" despite having roughly the same internals.
Another factor, though, is availability in different markets. I'm in France, and the offerings are perhaps worse than in the US? (quite likely, in fact). Add to that the usual markup, where US companies tend to price 1 USD as 1 EUR at best, and we get worse machines in the equivalent price range.
> you could swap the whole system memory in around five seconds, which is impressive.
As a user, a five-second hang is unusable. Also, critically, swap consumes the life of the drive. Since the Neo's isn't user-replaceable, a 3-5 year lifespan before death is actually a non-trivial compromise, although time will tell on that one, I suppose.
Should be fast enough to swap a browser page back in, I guess. Overall you're right that it's the wrong device for memory-hungry applications, but those aren't the target audience.
Not sure why you're taking the RK3588 as a milestone for ARM, when it's a low-end chip using core designs that were already old when it was released. The Cortex-A76 is from 2018, so if that's the yardstick then the K3 is about 8 years behind. Even at the time the A76 was released, Apple was significantly ahead with its own ARM CPUs.
> SpacemiT K3 is on par with Rockchip RK3588. So, about 4 years behind ARM.
That'd be ~7 years behind, not 4. Cortex A76 came out in late 2018. Also what benchmarks are you looking at?
> Tenstorrent Atlantis (first Ascalon silicon) should ship in Q2/Q3 and be twice as fast. About as fast as Ryzen5. So, about 5 years behind AMD.
Which Ryzen 5? The first Ryzen 5 came out in 2017, which was a lot more than 5 years ago.
> But even the K3 has faster AI than Apple Silicon or Qualcomm X Elite.
Which isn't RISC-V. Might as well brag about a RISC-V CPU with an RTX 5090 being faster at CUDA than a Nintendo Switch. That's a coprocessor that has nothing to do with the ISA or CPU core.
> Current trend-lines suggest ARM64 and RISC-V performance parity before 2030.
L. O. fucking. L. That's not how this works. That's not how any of this works.