matdehaast's comments (Hacker News)

Spotify are probably reacting to https://annas-archive.li/blog/backing-up-spotify.html where basically the whole archive was downloaded

that was later.

It was known publicly later.

Not from Tigerbeetle, but having looked at his code this is what I saw https://news.ycombinator.com/item?id=45896559


I'm a bit worried you think instantiating a new client for every request is common practice. If you did that with Postgres or MySQL clients, you would see the same performance degradation.

PHP introduced mysqli and PDO to deal with this, specifically because recreating client connections per request is known to be expensive.


OK, your comment made me double-check our benchmarking script in Go. I can confirm we didn't instantiate a new client with each request.

For transparency here's the full Golang benchmarking code and our results if you want to replicate it: https://gist.github.com/KelseyDH/c5cec31519f4420e195114dc9c8...

We shared the code with the TigerBeetle team (who were very nice and responsive btw), and they didn't raise any issues with the script we wrote against their client. They did have many comments about the real-world performance of PostgreSQL in comparison, which is fair.


Thanks for the code and clarification. I'm surprised the TB team didn't pick it up, but your individual-transfer test is a pretty poor representation. All you are testing there is how many batches you can complete per second, giving the client no time to batch the transfers, because calling createTransfer in Go blocks synchronously.

For example, it is as if you created an HTTP server that only allows one concurrent request, or a queue where only one worker ever does work. Is that your workload? Because I don't know of many workloads that are completely synchronous with a single worker.

To get a better representation for individual_transfers, I would use a sync.WaitGroup:

  var wg sync.WaitGroup
  var mu sync.Mutex
  completedCount := 0

  for i := 0; i < len(transfers); i++ {
    wg.Add(1)
    go func(index int, transfer Transfer) {
      defer wg.Done()

      res, _ := client.CreateTransfers([]Transfer{transfer})
      for _, err := range res {
        if err.Result != 0 {
          log.Printf("Error creating transfer %d: %s", err.Index, err.Result)
        }
      }

      mu.Lock()
      completedCount++
      if completedCount%100 == 0 {
        fmt.Printf("%d\n", completedCount)
      }
      mu.Unlock()
    }(i, transfers[i])
  }

  wg.Wait()
  fmt.Printf("All %d transfers completed\n", len(transfers))

This will actually allow the client to batch the requests internally and be more representative of real workloads. Note that this is not the same as batching manually yourself: you could call CreateTransfers concurrently on the client from multiple call sites and it would still auto-batch them.


Appreciate your kind words, Kelsey!

I searched the recent history of our community Slack but it seems it may have been an older conversation.

We typically do code review work only for our customers so I’m not sure if there was some misunderstanding.

Perhaps the assumption was that because we didn’t say anything when you pasted the code, we must have reviewed it?

Per my other comment, your benchmarking environment is also a factor. For example, were you running on EBS?

These are all things that our team would typically work with you on to accelerate you, so that you get it right the first time!


Yeah, it was back in February in your community Slack; I did receive a fairly thorough response from you and others about it. However, there were no technical critiques of the Go benchmarking code then, just points about how our PostgreSQL comparison would fall short in real OLTP workloads (which is fair).


Yes, thanks!

I don’t think we reviewed your Go benchmarking code at the time, and the absence of technical critiques probably should not have been taken as explicit sign-off.

IIRC we were more concerned at the deeper conceptual misunderstanding, that one could “roll your own” TB over PG with safety/performance parity, and that this would somehow be better than just using open source TB, hence the discussion focused on that.


I've had billing issues, and they only got resolved a couple of weeks later.


It feels like Intel and AMD are asleep at the wheel with their mobile lineups. I've been looking at non-Apple equivalents with similar performance/power to the M lineup, and it seems they all lag by 20%+.

For $800 the M4 Air just seems like one of the best tech deals around.


The reduced horsepower relative to M-series isn't a problem for me as much as efficiency is. Both Intel and AMD seem to struggle with building a CPU that doesn't guzzle battery without also seriously restricting performance.

This really sucks. The nice thing about high end (Mx Pro/Max) MBPs is that if you need desktop-like power, it's there, but they can also do a pretty good job pretending to be MacBook Airs and stretch that 100Wh battery far further than is possible with similarly powerful x86 laptops.

This affects ultraportables too, though. A MacBook Air performs well in bursts and only becomes limited in sustained tasks, but competing laptops don't even do burst very well and still need active cooling to boot.

On the desktop front I think AMD has been killing it but both companies need to start from scratch for laptops.


> building a CPU that doesn't guzzle battery

It may be the software problem as well. On Windows I regularly need to find which new app started to eat battery like crazy. Usually it ends up being something third-party related to hardware, like Alienware app constantly making WMI requests (high CPU usage of svchost.exe hosting a WMI provider, disabling Alienware service helped), Intel Killer Wi-Fi software doing something when I did not even know it was installed on my PC (disabling all related services helped), Dell apps doing something, MSI apps doing something... you get the idea.

It seems like a class of problems you simply can't have on macOS because of the closed ecosystem.

Without all this stuff my Intel 155H works pretty decently, although I'm sure it is far from the M-series in terms of performance.


The Mac ecosystem isn’t as closed as you’re alluding to. You can easily download unsigned binaries and run them. Furthermore, if you’re looking for a battery hog, look no further than Microsoft Defender, Jamf Protect, and Elasticbeat. All 3 of those are forcibly installed on my work laptop and guzzle up CPU and battery.


> You can easily download unsigned binaries and run them

It's definitely becoming less easy over time. First you had to click approve in a dialog box, then you had to right-click -> open -> approve, now you have to attempt (and fail) to run the app -> then go into System Settings -> Security -> Approve.

I wanted to install a 3rd-party kernel extension recently, and I had to reboot into the recovery partition and disable part of System Integrity Protection.

I don't think we're all that far from MacOS being as locked-down as iOS on the software installation front...


Yep, they will lock all that down. It's been coming for years. Tech companies have learned to do their anti-consumer work slowly and progressively over time instead of dropping it all at once. The whole frog in boiling water thing...

Microsoft is working towards this too. They wish so bad that they were Apple.


> You can easily download unsigned binaries and run them

Of course, but I assume you don't really need to install third-party apps to control hardware. In my case Alienware and Dell bloat came from me setting up an Alienware monitor. MSI bloat came from setting up MSI GPU. Intel Killer stuff just got automatically installed by Windows Update, it seems.

> Microsoft Defender

This one I immediately disable after Windows installation so no problems here :)

On work we get CrowdStrike Falcon, it seems pretty tame for now. Guess it depends on IT-controlled scan settings though.


Re: Microsoft Defender, I’m actually talking about defender on macOS. It is a multi platform product. I hear infosec is pretty happy with it. Me? It uses 100% CPU even when I’m doing nothing. I’m not happy.


Try some of the steps on this page [1]. In particular, enabling real-time protection stats and then adding exclusions for the processes causing the most file scans can help.

1. https://learn.microsoft.com/en-us/defender-endpoint/mac-supp...


I’m not in control, I’m just a user, but thanks. I have talked to the owners on occasion and plan to keep bringing it up so they can investigate.


What's mad is that you would have thought Microsoft would use the Surface devices to show hardware manufacturers what could be done if you put some effort in, but I've heard so many horror stories from Surface owners about driver issues.


Windows doesn't do it any favors, for sure. However, running Linux with every tweak under the sun for better battery life still leaves a large gap between x86 laptops and MacBooks, and while there's probably some low-hanging optimization to be had there, I think the real problem is that x86 CPUs just can't idle as low as M-series can, which is exacerbated by the CPU not being able to finish its work and reach idle as quickly.


I wonder if Windows and Linux just can't yet handle heterogeneous CPUs as well as macOS does. Intel chose an interesting direction here, going straight from one to three kinds of cores in one chip. I almost never see LPE cores being used on Windows, and on Linux there is obscure software like Intel LPMD, which I tried but could not notice any battery life improvement from.


I'm a bit out of my depth here, but I believe a significant contributing factor is how early Apple made multi-CPU Macs available, the earliest being the summer 2000 revision of the PowerMac G4 tower (dual 500MHz PPC G4s), pre-dating the release of OS X. They made it easy for devs to take advantage of those cores in OS X, because this yielded performance boosts that were difficult to match in the x86 world, which was still heavily single-CPU.

Because the OS and apps running on it were already taking advantage of multithreading, making them efficiency core friendly was easy since devs only had to mark already-encapsulated tasks as eligible for running on efficiency cores, so adoption was quick and deep.

Meanwhile on Windows there are still piles of programs that have yet to enter the Core 2 Duo era, let alone advance any further.


> Apple made multi-CPU Macs available, with the earliest being the summer 2000 revision of PowerMac G4 tower

Earlier. I did some multiprocessing work on an SMP PowerPC Mac in 1997.


Would this have been with MacOS 7’s Multiprocessing Services? I managed to play with an SMP Mac clone (DayStar Genesis MP), but all I really could do in the end is use some plugins for Photoshop.


That does not ring a bell. I believe it was macOS 8 or 9.


> The reduced horsepower relative to M-series isn't a problem for me as much as efficiency is

Same here. I actually don't care for macOS much, and I'm one of those weirdos who actually likes Windows (with WSL).

I tried the Surface Laptop 7 with the Snapdragon X Elite, and it's... OK. It still spins up the fans quite a bit and runs hotter than my 14" M4 Pro. It's noticeably slower than the MacBook too, and doesn't wake instantly from sleep (though it's a lot better than Wintel laptops used to be).

So I've been on Apple Silicon macs for the last 4.5 years because there's just no other option out there that even comes close. I'm actually away from my desk a lot, battery life matters to me. I just want a laptop with great performance AND great battery life, silent, runs cool, high quality screen and touchpad, and decent speakers and microphone.

MacBooks are literally the only computer on the market that checks all boxes. Even if I wanted to/preferred to run Windows or Linux instead, I can't because there just isn't equivalent hardware out there.


It’s even worse: Parallels on MacBooks runs Windows better than a dedicated Windows laptop…


Before the MacBook ARM switch, from 2015 onwards I used to run Linux via Parallels, and it ran better than any Linux I ever ran natively on a modern laptop. After installing the Parallels tools you had 2D/3D/video acceleration, clipboard sharing, Wifi/Ethernet bridging, and most importantly, seamless and stable suspend/resume.


And if you don’t want to pay a subscription, VMware hasn’t broken a sweat in a long time either, and it’s very polished at this point.

I recently tried VirtualBox and it’s finally catching up, seems to work without any problems but I didn’t test it enough to find out the quirks.


VMWare's desktop hypervisors were announced to be free to use a little while ago:

* https://blogs.vmware.com/cloud-foundation/2024/11/11/vmware-...

You need to register an account/e-mail address for a free account:

* https://knowledge.broadcom.com/external/article?articleNumbe...

After which you can download VMware Fusion Pro and/or VMware Workstation Pro:

* https://knowledge.broadcom.com/external/article/368667/downl...

This seems to be a perpetual licence (?), so as long as it can run on the underlying OS you can continue to use it. Not sure if there's any 'phone home' functionality to track things (like has been seen with Oracle VirtualBox).


I doubt that's true except in your mind. What are you comparing it to? An old-ass Windows laptop?


I moved from a Lenovo ThinkPad P1 Gen 2 Core i9 32GB (2020) to a MacBook Pro M1 Max 32GB (2021), and the experience in Parallels beats the experience on the Lenovo machine.


> On the desktop front I think AMD has been killing it but both companies need to start from scratch for laptops.

IMO Apple is killing it with the mac mini too. Obviously not if you're gaming (that has a lot to do with the OS though), but if you're OK with the OS, it's a powerhouse for the size, noise, and energy required.


Yeah, for most "normal" users the Mini is pretty ideal. It's got enough power to be overkill for most folks while being the least intrusive a desktop could possibly be: it's tiny, it doesn't have a power brick, it doesn't make any noise, and it's hardly going to impact your power bill at all.


>it doesn't make any noise

You can hear the fan at full load, especially on the M4 Pro. I really wish Apple went with a larger case and fan for that chip, which would allow quieter cooling.

Also, many units are affected by idle (power supply) buzzing: https://discussions.apple.com/thread/255853533?sortBy=rank

The Mac Mini is quieter than a typical PC, but it's not literally silent like, say, a smartphone.


That might be a recent phenomenon caused by the inevitable heat of the CPU getting closer and closer to its limit? Like explained in this video: https://youtu.be/AOlXmv9EiPo

My Mac Mini M2 never makes any noise; even when I run FFmpeg the fans don’t spike. It just gets slightly warmer. Still, unless I’m doing these CPU-bound activities, every time I touch it it’s cold as if it was turned off, which is very different from my previous Intel one that was always either warm or super hot.


Even if you are into gaming, between native builds and Crossover, it’s quite capable. It’s not going to match a top of the line Windows build with a dedicated GPU, but it’s shockingly capable.


I've been running a Mac mini as a gaming machine for years; an eGPU is much cheaper than building a whole new desktop tower.


Apple Silicon Macs don't support eGPUs. (At the moment anyway.)


This is just my perspective, but whatever is leading them to do so, the focus on supporting the Windows environment is extremely hamstringing. Apple effectively controls the whole hardware and software stack of any given device; AMD/Intel don't even really control the main board, let alone the efficiencies across all the compatibility layers.

No wonder the Ferrari of computers is more efficient and effective than a cobbled-together junkyard monstrosity... ok, I'll be more generous... the Chrysler of computers.

I don't want to suggest that Apple is ideal with its soldered-down restrictions, or that modularity should be done away with, but the reality is that standards need to be tightened down A LOT if the PC market really wants to compete. I for one have no problem skipping the hassle of non-Apple products because I can afford it. If Apple got its botoxed, manicured head out of its rear end and started offering products at competitive prices, it would likely dominate the majority of the computing market, which would then atrophy and effectively die out over time.

Let's hope that Apple remains pretentious and stubbornly greedy, so that we at least have choice and the PC sector gets a chance to put its standards in order, maybe even funding a gold-standard functional Linux distro that could hold its own against macOS without drooling all over itself.


If you are OK with the closed apple ecosystem, sure, but I mean, 20% is not that much for 99% of the population.

Don't get me wrong, I really admire what apple has done with the M CPUs, but I personally prefer the freedom of being able to install linux, bsd, windows, and even weirder OSes like Haiku.


> but I mean, 20% is not that much for 99% of the population.

As long as you're ok being tethered to the wall, and even then, guzzling power.

The whole point of Apple Silicon is that its performance is exactly the same on battery as tethered to the wall AND it delivers that performance with unmatched power efficiency.

It's the same on pure desktop. Look at the performance per watt of the Mac Mini. It's just nuts how power-efficient it is. Most people's monitors will use more power than the Mac Mini.


My “fancy” Windows work laptop has 45 minutes of battery life, while my M3 MacBook Pro will go 14 hours compiling C++ or running JavaScript and Docker images, and do so twice as fast as my work laptop could. I’d say you get what you pay for, but my work laptop was around the same price as my M3.

I wouldn’t be opposed to going back to Linux. But once you stop looking for power sockets all the time and start treating your laptop like a device you can just use all day at any moment, it’s hard to go back.


That's because your company's security department has virus scanners scanning every bit of code (including 99% of the virus scanner itself).


My company literally has four different apps “protecting” me now, including two different malware scanners. Neovim runs like it’s a 286. That said, before they’d installed everything it still wasn’t any faster than my Mac.


I was just looking at an HP laptop with a snapdragon X processor that claimed 34 hours of battery life while watching video.

It'd be tempting if I had any idea what the software compatibility story would be like. For example, the company I'm contracting with now requires a device monitor for SOC2 compliance (ensuring OS patches are applied and hard drive encryption remains on). They don't even want to do it, but their customers won't work with them without it.

Surprise surprise, a quick check of the device monitor company's website shows they don't support ARM architecture devices at all.


It may still work. The Prism emulation is pretty good, almost on par with Rosetta 2.

I have the Surface Laptop 7 with the X Elite in it. The only thing I've run into that outright didn't run was MS SQL Server.

It's not my main machine, that is still an M4 Macbook pro but I hop on it occasionally to keep up with what Windows is doing or if I need to help someone with something windows specific. I've got WSL2, Docker, VSCode, etc. all running just fine.

It's decent, but not amazing. Feels a little slower than my M2 Air I have but not much, most of that is probably just windows being windows.

Would be nice to be able to get Linux running on one of these


Sadly, I'm doing dotnet work, including a legacy webforms codebase. Not running mssql server directly, but lots of other tools- visual studio, sql server profiler, sql server management studio, that sort of thing. EVEN IF all of that worked, I have already verified from the company that supplies the device management software that they don't support non-x86 architectures.


Bummer. They are neat little laptops, and with the X elite 2 (assuming they end up in some windows laptops and aren't exclusively for the new android chromebooks) it's about the closest we'll get to a MacBook on Windows for now.

I wish Microsoft put more pressure on vendors to support ARM.


The last Snapdragon X Elite claims really didn't pan out though.

Which left me bitter quite honestly as I was looking forward to them a lot.


I keep hearing this, but I'd venture that a majority of those making the claim will most likely end up on Windows full time anyway. Which is not materially worse than macOS, no matter how much Apple is shooting itself in the foot.


With WSL2, Windows is better; sad but true.


Hahahahhahahahahhahahahahaha

No.

Even if it was better than lima (and the builtin posix/unix environment), which: it ain’t, it doesn’t nearly make a dent in the mandatory online account, copilot shit and all the rest.


This is a very subjective take.

If you like Windows, you’ll find it better with WSL2. In fact, I see many developers at my org who claim they’ll switch to Windows (from Mac) when we make it available internally.

However, if you love Mac, you'll never find Windows palatable no matter what.

And then there’s all shades of gray.


You may like Windows better, but WSL2 is just a virtual machine, with all the downsides (slower, no Docker) that brings. In fact, on my Windows PC I still use WSL1 for that reason.


It does not appear to me that Macs are closed in the sense that iOS is. It is possible, at least, to install Linux on Apple Silicon Macs.

There are certainly many more options on the PC side, but it's not because Apple actively blocks users from running another OS.


As far as I understand, the only Linux you can install on an M-series CPU is Asahi Linux. Apple is not actively blocking anything, but it is also doing nothing to help Linux get ported.


A big issue there is that there’s a massive backlog of patches to land in the kernel and Asahi are currently working on reducing that.

Once that’s done, any distro should be able to work.


You have no idea how much work everyone inside the kernel and iBoot teams at Apple put into making it possible to run Linux on those MacBooks!


20% is just the performance difference. They noted the low cost for an Air model as well. What would an equivalent be at that price point? Would it have the same passive cooling and weight features?


How about running your Linux and Windows etc virtually on a Mac? From what I've understood, people say it works great. But I haven't any experience myself.


Agreed. Even as an enthusiast if I could take the performance hit and keep the M4's battery life, I'd do it in a heartbeat just for the ability to run linux.


The huge majority of people don't really care about whether an ecosystem is closed or not. Power users, such as developers, actively choose MacBooks, and those are the users most likely to care.

You really think an average person shopping for a computer at Bestbuy cares about installing a different OS on their machine?


I'd like to have haiku as a boot option, but how well does it work on modern laptop hardware?


> I personally prefer the freedom of being able to install linux, bsd, windows, and even weirder OSes like Haiku.

I certainly don't think that matters to the vast majority of the population


The majority of the population is running a $300 laptop from Amazon. They certainly aren't popping used car money every 2-3 years like the real enthusiasts are.


> They certainly aren't popping used car money every 2-3 years like the real enthusiasts are.

Sorry, I don't get the reference. What sort of expenses are you referring to? For the price of a used car you can get pretty much any workstation money can buy.


Depends how you frame it; in my eyes, I'd be paying $1.4k USD after sales tax (at least here in the EU) for a laptop with a measly 16 gigs of RAM… I could buy two normal laptops that outperform it for the price of one!


The 16/256 M4 MacBook Air costs 850€ including 19% VAT here in DE, so your numbers are way off or you live somewhere with very unfortunate pricing in the EU.


Certainly not from the Apple Store, where it's 1.199,00€.

The cheapest I found was about 1000€. Buying a one-off offer from some random webshop means you would have to deal with them for repairs or warranty issues.

And yeah, effectively 200GB max of non-upgradeable SSD storage certainly makes cheap offers likely, because that's borderline unusable for almost everyone who needs more than a web browser.


If you use the $1.4k USD laptop for 2 years, that works out to around $2 / day. If it’s a Mac, it probably has some resale value at the end of the time bringing the cost down closer to $1 / day.

For a work machine, that’s pretty easy to justify.


I buy MacBooks with AppleCare as a self-employed contractor (so no VAT, and the expense is tax-deductible). I sell them after 2-3 years (before the AppleCare expires!) on eBay, and you would actually make a profit if you didn't tax-declare it. Which I personally do not do, as that would be tax evasion. But I've heard of people who don't, and basically use MacBooks for free or at a profit. Especially since Germany recently changed the write-off period from 3 years to 12 months.


I wouldn't expect an Apple product to last that long… This is from my personal experience and also family members who tried Apple, so your mileage may vary, I just wouldn't trust it.

Ignoring that though, if work machine means an Excel machine, then it's probably overspending IMO. If work machine means workstation, then you'd probably rather want one of the >1.6k models with more working memory… or just don't go Apple.


Every iMac, Mac Pro, MacBook Air, Mac Mini and MacBook Pro I’ve had (for me and family) has been indestructible.

A few months ago Spotify on an ancient Intel Mac mini in the living room started complaining that the new version of Spotify is no longer compatible with that Mac. Then I ran OpenCore and updated macOS to a much newer version, and Spotify is happy. Now I’ll get even more years out of that machine.


Interesting, most people have the opposite experience. 4 year old M1 here still amazing perf and OS support.


Seems like you had a run of bad luck - I hand my old Macs off to family members, and multiple are well past the 10 year mark at this point.

Probably need to get the batteries replaced somewhere past the 5 year mark, but otherwise the durability is unmatched.


Is there a laptop that can outperform a macbook M4? Genuine question.


Honestly asking, can you use either of them for a decade?

From replacement parts and physical endurance perspective, I mean.


I don't think it's realistic to expect any tech made in the twenties to last a decade…


I still expect Mac and Thinkpad hardware to last a decade, sans their batteries. A good desktop PC made from better parts will also endure without much effort.


> For $800 the M4 Air just seems like one of the best tech deals around.

Only if you don't mind macOS.


I understand the point you're making, but FWIW I run Windows and Linux under Parallels and it works great. Colima/Lima is excellent, too: https://github.com/abiosoft/colima

Windows on ARM performance is near native when run under macOS. `virtiofs` mounts aren't nearly as fast as native Linux filesystem access, but Colima/Lima can be very fast if you (for example) move build artifacts and dependency caches inside the VM.


There's also UTM, available free for download or you can make a donation to the devs by purchasing it from the App Store.


> Colima/Lima is excellent

Except when you need something like UDP ports, for example. I tried it for 2-3 weeks, but I always encountered similar issues. At the end I just started to use custom Alpine VMs with UTM, and run Docker inside them. All networking configured with pf.


> I understand the point you're making, but FWIW I run Windows and Linux under Parallels and it works great.

See, that's where the MacOS shitshow begins: Parallels costs €189.99 and it looks like they are pushing towards subscriptions. I am not in the ecosystem, but Parallels is the only hypervisor I've ever seen recommended.

Another example is Little Snitch. Beloved and recommended Firewall. Just 59€! (IIRC, MacOS doesn't even respect user network configuration, when it comes to Apple services, e.g. bypassing VPN setups...)

Now, don't get me wrong, I am certain there are ways around it, but Apple people really need to introspect what it commonly means to run a frictionless MacOS. It's pretty ridiculous, especially coming from Linux.

I mean c'mon... paying for a firewall and hypervisor? Even running proprietary binaries for these kind of OS-level features seems moderately insane.


And therein lies the problem. Apple has managed to push a hardware advantage into something that makes a difference.


I used to not, but it’s getting worse and worse.

Still better than all the alternatives for someone like me that has to straddle clients expecting MS Office, gives me a *nix out of the box, and can run Logic, Reaper, MainStage.


> gives me a *nix out of the box, and can run Logic, Reaper, MainStage

Reaper has a native Linux client. Logic and MainStage... are you serious? :D


>$800 the M4 Air just seems like one of the best tech deals

Not at all, you are stuck with a machine that has only 256GB of SSD (not upgradeable), a 60Hz LCD screen, and only 2 I/O ports.

The M4 may be the best mobile CPU, but that does not mean every machine with it is the best.


As an owner of a MacBook Pro M4, I am actually looking to sell and downgrade to the Air, because during my day-to-day development I can't get close to utilizing the M4. Regarding the limitations: 256GB is enough for me, because 95% of the time I use my MacBook docked into my Thunderbolt dock, where I have an external drive (also for Time Machine backups), so there is plenty of space for big files that I rarely use or can do without on the go. That also solves the limitation of I/O ports, because the dock brings plenty of USB ports (plus the ones from my screens, which are connected to the dock).

Not sure why 60Hz is a limitation, I've been on 60Hz for 20 years+, outside of gaming I do not see any value in going higher.


I went from a base M1 Air to a mid-spec’d M4 Pro. My main reasons were the 14” screen size, and the lack of a wedge shape for the new Airs (I loved the wedge). IMO, 14” is the perfect size for a laptop that actually sees time in your lap. My 13” Air was fine, but the extra inch is just enough to make a difference, while not feeling overly bulky.

Re: refresh rate, it’s nice, but I wouldn’t miss it most of the time. I have my external monitor for my work M3 running at 120 Hz because I can, not because I need it.


Not everyone has the luxury of having a rigid setup, that you can just plug in your dock and good to go. In fact, if that is your usecase, mac mini is a much better option.


Well, the price difference between an M4 MacBook Pro and an Air buys you several docking stations.

Also, a Mac Mini is not mobile, and even if I took the Mini, it has neither a screen nor a keyboard. The thing is, I don't need the laptop WHILE I'm traveling, I need it at the destination. And because that is less than 5% of my usage, there is no issue in carrying the external 2.5" HDD that I keep on my dock. Your personal use case may vary.


I use a MacBook Air 15" as my full-stack main development machine. It is light and portable on the go. At home I just plug it into a docking station with 10GigE and output to a 48" OLED monitor - a beautiful setup.


Just curious what brand of monitor and docking station you're using here.


Not the OP, but I use my MacBook Pro M4 with a 5-year-old Dell TB19 Thunderbolt dock (not the USB-C one!). I have 2x 1080p screens aged 10+ years connected to the dock, use the MacBook as a 3rd display, and use the laptop's keyboard as my daily driver. I've used the very same setup with my 2019 MBP, so I assume the M4 Air can handle it too.


I have had really good experience with OWC docking stations - rock solid compared to Dell ones I've had in the past: https://www.owc.com/solutions/connectivity

I won't recommend my monitor because it has auto-dimming you cannot turn off. Good but not great.


> it seems they all lag about 20%

So they are about one generation behind? That's not bad really. What's the AMD or Intel chip equivalent to the M2 or M3? Is somebody making a fanless laptop with it?


Apple is also always at least one generation ahead on the process/lithography too because they buy out all of the initial capacity. That alone accounts for a decent chunk of the difference.

I don't think the market is there for fanless non-Mac laptops. Most people would rather have a budget system (no $ for proper passive cooling) or more powerful system.


> Most people would rather have a budget system

The low end of the market is for sure bigger but I think Apple has shown that the higher end can be profitable too. Dell, HP, Lenovo, and the other big laptop makers aren't afraid of having a thousand different SKUs. They could add one more for a higher end machine that's fanless.


Most of those SKUs are component swaps. A proper passively cooled laptop would require a completely different chassis design to act as an extension of the heatsink.

I bought a MacBook Air because it was cheaper and met my needs. Being passively cooled was just a nice bonus.


As someone who has been buying ThinkPads for the past 20 years, Lenovo needs to spend more time working on thermals anyway. If I don't power my laptop down before stuffing it in my backpack, I'll have a hot, almost-dead machine by the time I get to my destination.


I wonder how much of that issue is related to crappy lid open sensors. AFAIK most of them work by sensing a magnet placed in the frame. MacBooks don't do this so your laptop doesn't sleep randomly when a magnet passes over your laptop. It's dumb that they use a single magnet instead of one on each side, but it sure is cheaper.


I think it's 75% Microsoft's fault with modern sleep. They want the machine to go into a low power state rather than sleep so that it can still receive email and other notifications just like a phone does. But since the thermals are total garbage on most PCs, that means even the lower power modes need active cooling which doesn't really work when the machine is in a bag. After a few minutes all the fans are blasting.


> I've been looking at non-apple equivalents that have similar performance/power as the M lineup and it seems they all lag about 20%+.

Most of it can be explained away by TSMC. If you compare Apple's 5nm parts (M2) with AMD's 4nm parts, you'll see a performance swing of about the same magnitude, but in favor of AMD. The M5 is 3rd-gen 3nm.


AMD asleep? I don't think this is accurate.


If you read the first half of the sentence then yeah.... The complete sentence clarifies "with their mobile lineup"


The 395 Max is in laptops and slots between the M3 and M4 on Geekbench scores. That's not top of the line, but a decent result IMHO.


I find this usually doesn't matter as much as you seem to suggest.

I've been running linux laptops with AMD/intel for years, and while some focus on more battery life would be welcome, the cpu never bothered me.

My primary limitation is available RAM (esp. when debugging React Native against a complete local Docker setup), which unfortunately, on both AMD/Intel but far more so on Apple, is usually tied to the higher-compute CPUs, which drives up the cost of the laptop (not even including the extra cost of RAM on Apple).

The only really CPU-intensive processes I run locally on a laptop are Rust builds, and even then I would prioritize RAM over CPU if possible, because Rust builds are fast enough, especially when compiling incrementally (and I almost never do LTO release-like builds locally unless doing benchmarking or profiling work).


> Rust builds are fast enough

I love Rust but I think you might be the first person to say that!


Each Ryzen generation increased performance significantly, so AMD is definitely not asleep at the wheel.


I'm talking specifically about their mobile lineup, not desktop. And more specifically the performance to power efficiency the M series is getting. It is more than 2 generations behind.


Each Ryzen mobile generation also improved efficiency, particularly the last one (AI PRO 300).

Intel, on the other hand, started a few generations ago with an edge in efficiency, and now they're behind; they are definitely the one that fell asleep.

The fact that ARM may have unreachable efficiency doesn't mean that AMD, as x86 producer, is doing nothing.


For the same price you have an AMD Ryzen AI 9 365, which has 50% higher performance on cpubenchmark.


Think you are mistaken. The M4 beats the Ryzen AI 365 in both single and multicore benchmarks



PassMark is an outdated benchmark not well optimized for ARM. Even so, the single-thread marks are 3864 (AI 365) vs 4550 (M4).

OTOH, Geekbench correlates (0.99) with SPEC, the industry-standard CPU benchmark suite that enterprise companies such as AWS use to judge CPU performance.

https://medium.com/silicon-reimagined/performance-delivered-...


I see you are citing a 6-month-old post which itself isn't really well sourced, doesn't really reach consensus, and doesn't have a definitive answer.

https://news.ycombinator.com/item?id=43287208

The article in question doesn't mention subpar ARM optimizations.


Hmm, why would you need to optimize a benchmark for something? Generally it's the other way round.


> Hmm, why would you need to optimize a benchmark for something? Generally it's the other way round.

It has always gone both ways. This is why there exist(ed) quite a lot of people with serious doubts about whether [some benchmark] actually measures the performance of, e.g., the CPU or the quality of the compiler.

The "truce" that was adopted concerning these very heated discussions was that a great CPU is of much less value if programmers are incapable of making use of its power.

Examples that evidence the truth of this "truce" [pun intended]:

- Sega Saturn (very hard to make use of its power)

- PlayStation 3 (Cell processor)

- Intel Itanium, which (besides some other problems) needed a super-smart compiler (which never existed) so that programs could make use of its potential

- in the last years: claims that specific AMD GPUs are as fast as or even faster than NVidia GPUs (also: for the same cost) for GPGPU tasks. Possibly true, but CUDA makes it easier to make use of the power of the NVidia GPU.


> - Intel Itanium, which (besides some other problems) needed a super-smart compiler (which never existed) so that programs could make use of its potential

Well, no such thing is possible. Memory access and branch prediction patterns are too dynamic for a compiler to be able to schedule basically anything ahead of time.

A JIT with a lot of instrumentation could do somewhat better, but it'd be very expensive.


Interesting, on Geekbench they have very different scoring

AI 9 365: 2515/12552, M4: 3763/14694 (single/multi)

https://browser.geekbench.com/processors/amd-ryzen-ai-9-365 https://browser.geekbench.com/v6/cpu/11020192


395 is still higher on Geekbench at 2781/17644


Double lol

https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_ai_9_365...

Edit: Looks like OP stealth edited "double" to "50%". Still lol.


I edited 15sec after posting but you got me on speed.

It's interesting; I've seen that often on trending posts. There is enough traffic that any variation of a comment will have readers.


Not just GCP, most of Google's services are out of action.


I'm on a meet, in cal, editing a dozen docs, in GCP, pushing commits and launching containers; it's not clear yet what exactly is going on but it's certainly intermittent and sparse, at least so far


stop it. you're overloading their system by doing three things at once. let the rest of us have a turn.


I think OpenAI has captured the non-tech mindshare. I noticed just this week that my friend group and girlfriend no longer say they'll Google something, but rather "let me ask ChatGPT"…


ChatGPT is going to be more like "jeep". My relatively old grandma just calls all SUVs jeeps. Meanwhile, she doesn't even drive an SUV.

I doubt anyone from Gen Z, or any gen really, is attached to ChatGPT specifically. They'll all just use whatever is easiest and requires the lowest amount of input energy to get a semi-passable output.


Yeah, I even sometimes use "ChatGPT" to refer to the general concept when talking to friends and family.


It is pretty funny how such a clumsy programmer-name ended up catching on regardless. At this point even if they move on to a totally different model architecture they'll have to keep calling it GPT forever because that's the name everyone knows.


ChatGDP? Yeah I use it


I spent 2 seconds clicking on your bio and saw this account was created 4 hours ago.

Makes me wonder why you felt the need to create a burner account.

This isn't to say anything one way or another about you; it's just my 2 seconds of reading about you.


I find it a very interesting approach to what she is doing.

The problem with most philanthropic organizations is that they come to rely on a constant stream of money. I've heard the Gates Foundation has to be very intentional with how it deploys capital, because whole ecosystems come to rely on that money in an unsustainable way. So when they have met their goals, or decided it's not working, and pull the funding, those that relied on it basically collapse overnight. Which could lead to even worse outcomes.

With her approach, I do wonder if this will occur with many of the organizations she is giving large amounts of money to.

EDIT: Reminds me of the saying "Give a person a fish and you feed him for a day. Teach him how to fish and you feed him for a lifetime".


It depends on the terms of what she does. Organizations like the Gates Foundation want control over what is done and provide a stream of money. That creates continuous dependence on funding.

IMO this is deliberate - it means you can "give away" the money but keep the power and the status which are the only benefits for having that much money.

If she is making a series of large one off donations that problem does not exist.


Part of the (slightly vague and confusingly written) message is that she's also investing in for-profit organisations that have overlapping goals with the charities.


IMHO, I don't think this is her responsibility.


It is a donor's responsibility if they're seeking better outcomes.


If she cares, which she most probably does, she or the experienced people in charge should factor this into project planning.

But there are limits, and nobody sees the future; don't expect impossible miracles just because they would be nice.


If she just wants it gone/the responsibility off her chest she could send it to one of the really big re-granters.


Memory usage for that synthetic web server benchmark is massive: a 90% reduction!


I have a web app that allocates too much memory relative to what it actually needs and upgrading to .NET 9 reduced its usage by two thirds!

A pretty good result for just changing a dropdown in the project settings tab.
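
For anyone curious, that "dropdown" is just editing the target framework in the project file. A minimal sketch of the change (the SDK and extra properties here are assumptions for a generic web app, not my actual project):

```xml
<!-- Hypothetical minimal .csproj for a web app; the project-settings
     dropdown just rewrites the TargetFramework value below. -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <!-- Changed from net8.0 to net9.0 to pick up the runtime/GC improvements -->
    <TargetFramework>net9.0</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```

After that, a regular `dotnet build` picks up the .NET 9 runtime (assuming the SDK is installed).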

