Can't wait for it to be complete trash in Windows and to keep suffering the same issues I've had with every Snapdragon-powered Windows machine, including the 2023 dev box. The GPU driver will crash under heavy web rendering loads (Twitter GIF-heavy threads), and it will probably be undercooled, kneecapping the P cores and pushing applications onto the E cores as soon as it warms up, never letting them go back. Although I will note the MS ARM dev box at least was adequately cooled, if poorly vented, so that was a rare occurrence.
On top of that, Linux support will be non-existent, leaving you to struggle as Microsoft's most underserved user population.
Good luck to anyone who takes the dive; even as an ARM enthusiast, I gave up on Qualcomm products as a whole.
Did they show any GPU benchmarks or otherwise give an indication that the Linux support will be relatively full-featured? Because what I've seen so far is just a Geekbench CPU score on Linux that was significantly higher than the Windows score because fan control wasn't working on Linux.
Dunno myself, but I do remember that the company behind the initial chip design was targeting the server market, so that's probably the reason for the Linux support. I kinda doubt the GPU has any Linux support; it's just a regular Qualcomm-developed GPU.
By upstream support, do you mean the relevant device trees and drivers have been contributed to the kernel already? (I'm asking because there are some listed here https://kernelnewbies.org/LinuxChanges#Linux_6.5.ARM but they seem to be a different model)
"It will probably be undercooled causing P cores to be kneecapped push applications to the E cores as soon as it warms up, and never let them go back."
I disagree; this article shows how it compares in their own benchmarks.
In particular, you can add a 7840U laptop to the comparisons to see how the 23W Qualcomm part stacks up against the current low-power AMD CPU at a comparable wattage.
It almost reads like a Snapdragon X marketing flyer. Is this page legitimate? Reading through the article and scanning a few benchmarks, I see that it's still behind most of today's chips but is in "striking distance". What does that even mean? Is the chip going to become sentient, attack the M2/AMD/Intel chips and get its performance revenge?
I think it means that instead of being 3-4 years behind they will only be 2 years behind. It's hard to say though because Apple has been a hell of a moving target.
> It's hard to say though because Apple has been a hell of a moving target.
Have they?
They were stuck on Intel's incremental schedule for years and years, then they made a gigantic leap with the M1 series, and it has been incremental again since then. Bigger increments than 7th gen > 8th gen Intel, but it's a bit ridiculous to pretend they're making 8th gen Intel > M1 leaps every year.
It's important to remember that they are 3-4 years ahead of their ARM competitors (i.e. mobile). They were the first to reach sufficient volume (plus years of planning for SW and HW) for their mobile CPU tech to be scaled up to their laptop/desktop offerings, outpacing AMD/Intel on a per-watt basis (and being at least competitive on an absolute basis).
For example, they have a unified 800GB/s memory bandwidth with zero-copy sharing between GPU and CPU. That's not something anyone else has managed yet. They're not making huge leaps every year, but they're making enough to keep the advantage of their previous leap. That leap was a result of process node and architectural improvements, and they're continuing to buy up manufacturing capacity from TSMC to retain their 6-12 month edge on other competitors.
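For a sense of scale, here is a rough peak-bandwidth calculation (a minimal sketch; the 1024-bit LPDDR5-6400 configuration for the top-end unified part and dual-channel DDR5-5600 for a typical x86 laptop are illustrative assumptions, not figures from this thread):

```python
# Peak bandwidth = transfers/sec * bytes per transfer.
def peak_bandwidth_gbps(transfer_rate_mts: float, bus_width_bits: int) -> float:
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# Assumed top-end unified-memory config: LPDDR5-6400 on a 1024-bit bus.
apple_top_end = peak_bandwidth_gbps(6400, 1024)      # ~819 GB/s
# Assumed typical x86 laptop: dual-channel (128-bit) DDR5-5600.
typical_x86_laptop = peak_bandwidth_gbps(5600, 128)  # ~90 GB/s

print(f"Top-end unified part: ~{apple_top_end:.0f} GB/s")
print(f"Dual-channel DDR5 laptop: ~{typical_x86_laptop:.0f} GB/s")
```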
They have certainly built an amazing architecture.
I'll say that I stated it rather too grudgingly, a bit out of annoyance at the extreme hyperbole surrounding the M-series processors ("it's better than an i9 + 3080 system! Next year it'll beat an i9 + 4090!!").
Funnily enough I plan to switch to an M-series Macbook as soon as Asahi Linux is completely out of beta.
I replaced a 16" i9 MacBook Pro with an M1 MacBook Air right after it was released. The Air was at least as fast as the MBP running full tilt with my workloads. The Air has no fan so it's completely silent and I've rarely experienced any thermal throttling on it. The i9 MBP's fans would spin up if you looked at it sideways and running full out sounded like a jet engine. The Air also lasts all day on the battery at a fraction of the weight and a fraction of the thickness of the MBP it replaced.
The M-series chips beat a lot of Intel and AMD offerings in some form factors. They're not the greatest chips of all time, though, and there are Intel and AMD offerings that beat them in single-thread performance or offer more multi-thread performance at some price points.
At the high end I think Intel and AMD (plus GPU) are more competitive with Apple's kit so long as the OEM makes nice hardware. At the low end the entry level M2s have ridiculous capability compared to x86 machines. Maybe Qualcomm will actually be in the running with the Snapdragon X if they can ditch the Windows millstone.
In other words, if you are willing to compromise, Apple isn't so bad a deal. Well, if I'm willing to compromise I'll go all the way down to dirt cheap, and Apple doesn't even compete there.
The M2 has nothing special against similarly priced competition unless for some reason you ABSOLUTELY need to run your powerful hardware in the desert with zero power available. But I am told most people run their workloads at home with plenty of cheap power available. I must be wrong.
Granted, I think more eyes are on this Snapdragon release as it might have some Apple/Nuvia special sauce in this batch. Although I'm not sure how motivated Gerard Williams and co. are to be designing mobile/laptop chips again, given that the whole point of their leaving was to do server chips...
You are a fool for believing the 800GB/s memory bandwidth meme; it's only true for the most expensive chip. The chips most people can buy are nothing special and actually quite underwhelming when you consider the price.
GPU/CPU integration is also a meme that keeps delivering zero results. Supposedly so much better, yet everything runs slower than the competition. It consumes less power? It had better, with a node advantage bought with monopoly money; it would be a shame to both suck and use more resources.
And then you pretend they make any progress at all, when the truth is that once you account for node upgrades, frequency boosts and similar improvements they are not making any progress at all. But I guess the double standard is the de facto status of the Apple fanboy.
Apple is doing something interesting, but it's not at all a mystery how they're doing it:
> outpace on a per watt basis with AMD/Intel (and be at least competitive with on an absolute basis)
They're the first on TSMC's new process nodes, so whenever a new node comes out, people compare the new Apple chip on the new node to the previous-generation AMD chip on the previous node. Compared that way, the newer node outperforms the older one, as expected. Then AMD releases a chip on the new node and the advantage basically disappears, or comes down to different trade-offs, e.g. selling chips with more cores which consequently have a higher TDP in exchange for higher multi-thread performance.
Intel hasn't been competitive with either of them on power consumption for some time because Intel's fabrication is less power efficient than TSMC's.
But if Apple's advantage is just outbidding everyone at TSMC, that only lasts until TSMC builds more fabs (the lead times for which get shorter as the COVID issues subside), or someone else makes a better process.
> For example, they have a unified 800GB/s memory bandwidth with zero copy sharing between GPU and CPU. That’s not something anyone else has managed yet.
It's also not something which is at all difficult to do. It's a straightforward combination of two known technologies. Integrated GPUs have unified memory and discrete GPUs have high memory bandwidth.
There is no secret to attaching high bandwidth memory to a CPU with an integrated GPU, it's just a trade off that traditionally isn't worth it. CPUs typically have more memory than GPUs but if it's unified and you want to use the fast memory you're either going to get less of it or pay more, and CPU applications that actually benefit from that amount of memory bandwidth are uncommon.
One of the rare applications that does benefit from it is LLMs, and it's possible that's enough to create market demand, but there's no real question of whether they can figure out how to make that -- everybody knows how. It's only a question of whether customers want to pay for it.
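To make that concrete, here's a rough back-of-envelope for why LLM inference cares so much about bandwidth (a minimal sketch; the 7B-parameter fp16 model and the bandwidth figures are illustrative assumptions, not numbers from this thread):

```python
# Single-stream LLM token generation is roughly bound by how fast the weights
# can be streamed from memory: each generated token reads (about) all of the
# weights once, so tokens/sec <= bandwidth / model size.
def tokens_per_sec_upper_bound(params_billions: float,
                               bytes_per_param: float,
                               bandwidth_gbps: float) -> float:
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gbps * 1e9 / model_bytes

# Illustrative 7B model in fp16 (~14 GB of weights).
for name, bw in [("dual-channel DDR5 (~90 GB/s)", 90),
                 ("unified 800 GB/s part", 800)]:
    print(f"{name}: <= {tokens_per_sec_upper_bound(7, 2, bw):.0f} tokens/s")
```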
And what we may see is something better -- high bandwidth memory as a unified L4 cache. So then your CPU+iGPU gets the amount of HBM traditionally found on a discrete GPU and the amount of DRAM traditionally attached to a CPU and you get the best of both worlds with no increase in cost over a CPU + discrete GPU. But currently nobody offers that -- the closest is the Xeon Max which has HBM but no integrated GPU.
And none of these are why Qualcomm isn't competitive with Apple -- they're not competitive with AMD or Intel either. It's not the fabs or the trade offs. Their designs just aren't as good. But all that means is they should hire more/better engineers.
> Thus, it is becoming increasingly clear that the Snapdragon X Elite's single-core performance can rival the best of current offerings from Intel, AMD, and Apple.
The word "laptop" is missing from this sentence.
And since laptops have soldered CPUs, I think this would not help the adoption of an exotic architecture with no software support. (Supports DirectX and OpenGL - he just forgot to mention the version numbers.)
I would love to see computers other than x86, but at the moment they are limited to routers, laptops and things like the Raspberry Pi or Banana Pi.
Whatever warts they are hiding with this chip, I’m still excited at the state of compute hardware right now. With Intel, AMD, Nvidia, Apple, Qualcomm, and even Huawei all pumping out high performance CPUs, GPUs, and SoCs at a rapid clip, I can’t think of a time when there was more competition going on.