zapnuk's comments

The Framework Laptop is more expensive than a MacBook Air with all-around worse hardware. For a Framework 13 I'd have to pay 1900€ for a 16GB setup; for 1450€ I get an MBA with 24GB of RAM. It's similar with Dell or Lenovo, who get smoked in performance comparisons.

It might still be worth it for those who highly value open source and repairability, but in terms of value I think it's safe to say that Apple is currently in a league of their own, even if the latest OS update is a flop.

Also, the MacBook has improved repairability. While it's still not great, it's better than it was a few years ago.


> The Framework Laptop is more expensive than a MacBook Air with all-around worse hardware.

Is it though? I'd agree the hardware is less capable, but any MacBook is really just one 'top case' repair away from being more expensive. A RAM failure means a motherboard replacement, and the display is similarly expensive to replace.

So I would agree that it is more expensive to purchase a Framework laptop than a MacBook, but I also feel it is more expensive to own a MacBook than a Framework. Also, I just replaced the screen on my FW13, not because it was broken, but because they now offer one with 4x the pixels. That's not something I could have done with a MacBook.


What is the probability of those things failing during the time you have the MacBook? I've had Apple portables since they were called PowerBooks, and the only problem I've had that wasn't caused by violence was a swelling battery, which cost me something like $120 to replace; not a big deal. If you add 5% to the price, that's probably about your expected cost for repairs or premature replacements, assuming you don't have a habit of damaging your equipment.

If you'd rather not take a low risk of a big repair/replacement bill and you don't mind helping Big Fruit make a bit more of a profit, you can pay them $50-150/year (depending on model) to take that risk. Multiply that by the number of years you expect to own the device to come up with a "real" cost including repairs/replacements.
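
A back-of-the-envelope sketch of that bet, with illustrative numbers only (the ~5% self-insure estimate and the AppleCare midpoint come from the comments above, not from Apple's actual failure data):

    # Rough expected-cost comparison: self-insuring vs. paying for AppleCare.
    # All numbers are illustrative assumptions, not real failure-rate data.
    price = 1450                          # purchase price
    years = 5                             # expected years of ownership
    self_insure = price * 0.05            # ~5% of price as expected repair cost
    applecare = 100 * years               # $50-150/year, taking the midpoint
    print(self_insure, applecare)         # -> 72.5 500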


My Framework 13 is a bit long in the tooth. I can pay 529 EUR to get a new mainboard and keep the same case/battery/speakers/camera/keyboard/mouse/screen/etc. Or, I can replace the keyboard for 32 EUR.

It's not just repairs: to upgrade a Mac, you have to throw away all that perfectly working hardware just to get a new mainboard.


> I can pay 529 EUR to get a new mainboard and keep the same case/battery/speakers/camera/keyboard/mouse/screen/etc.

Or you can spend 50 euros more and get an entirely new laptop that is not only much more powerful than your old Framework but is almost as repairable: the Neo.

At some point your argument begins to work against you; you should have just talked about the keyboard repair being cheap, not about how you can get a new motherboard for "only" 530 euros.


> Or you can spend 50 euros more and get an entirely new laptop that is not only much more powerful than your old Framework but is almost as repairable: the Neo.

You forgot to mention: it's less powerful than his old FW 13 with the new mainboard/CPU.


I assume he's referring to the AMD AI 340 for 530 euros.[0]

The MacBook Neo has 31% faster ST speed and is a bit slower on MT.[1]

I wouldn't call the Neo less powerful than his 530-euro upgrade. In fact, I'd much rather have the faster ST speed in this kind of laptop. Most of the apps you run on this class of laptop will be ST-bound anyway.

You can literally get a brand-new MacBook Neo at Apple EDU pricing for the price of a slower AMD motherboard upgrade. This is why Framework is an absolutely terrible deal overall. I'm not even convinced that Framework is better for the environment, since Apple laptops last extremely long and very often have second- and third-hand buyers.

[0]https://frame.work/nl/en/products/mainboard-amd-ai300?v=FRAN...

[1]https://browser.geekbench.com/v6/cpu/compare/17360869?baseli...


> What is the probability of those things failing during the time you have the MacBook?

and

> ... you can pay them $50-150/year (depending on model) to take that risk.

These things are related: Apple knows the in-field failure rate of their hardware, and they price that failure rate into their AppleCare costs. On my iPad Pro, that's $90/year.

That said, it is entirely a 'bet' on your part as to whether or not you're in a position to cover the cost of repair/replacement in the event of damage. That depends on a lot of factors, including how much you can tolerate not having the equipment for a while, etc.


The downside of an Apple is generally that you can't improve the hardware by replacing it piecemeal as new hardware comes out.

That was my goal buying a Framework… to get to refresh hardware regularly as better stuff came out rather than waiting 10 years to buy a new laptop.

Will it work that way in reality? No idea, but I thought it was at least interesting enough to take a gamble.


I can configure a 1400€ Framework 13 with a bring-my-own SSD + Linux.

I can drop it down to 1050€ without the RAM if I take the RAM from my older laptop.

Upgrading or fixing this is very easy, and I can take the RAM/SSD with me over multiple generations of laptops.

I can't do that on a MacBook; if anything breaks there (screen, SSD, RAM, keyboard, bulging battery...) I might as well buy another.

Then there's the issue of macOS... you're stuck with it, and if you don't like it, that's a dealbreaker.

There's also the issue of waste... I can make a router/firewall from an old Framework mobo. I can't do that with a MacBook.


Sure, a power user can bring their own RAM/SSD. But again, they pay almost as much and get worse system performance.

Normal users don't benefit from anything you listed. They have to buy a notebook with all components, and thus currently pay more for Linux/Windows hardware compared to Apple.

Also, RAM isn't backwards compatible. I literally had this problem with my old DDR4 not fitting the newer DDR5 slots when my DDR5 acted up.


> Normal users don't benefit from anything you listed.

They can get their technical friends to set up a laptop for them and benefit from what I mentioned.

> They have to buy a notebook with all components

Sure, the first time they do that; then they can reuse.

> and thus currently pay more for Linux/Windows hardware compared to Apple.

Sure, the first time they do that. And besides Framework, there are plenty of other cheaper options with pretty good specs.

> Also, RAM isn't backwards compatible. I literally had this problem with my old DDR4 not fitting the newer DDR5 slots when my DDR5 acted up.

Of course... but once you have something with DDR5, it should last you a long time, same as DDR4 did.

Now... you missed another point: some people just don't like or want macOS. As nice as the hardware might be, it's not acceptable software-wise.

As for normal people, they'll just buy whatever is cheapest, if they even bother, since phones/tablets have already taken over.

I'm not sure laptops will have a market other than power users going forward...


It's not just Tahoe; macOS is simply insufferable for many users. You can pitch Apple Silicon to gamers, warship captains, or datacenter users, but they won't care when the dust settles. It's a device for people who want a Mac, and if you want a PC, server, or homelab, then you gotta get different hardware. It's entirely a software limitation, imposed by Apple.

I don't value open source or repairability that much. I just want to develop server software, and on macOS I always end up with the same janky VM-based workflow I suffer through on Windows. On the desktop I have no reason to waste my time with macOS, and I don't use a laptop often enough to justify reincorporating macOS into my life.


That's just one interpretation of a skill.

A skill can also act as an abstraction layer over many tools (implemented as an MCP server) to save context tokens.

Skills offer a short description of their use and thus occupy only a few hundred tokens in the context, compared to the thousands of tokens needed if all the tools were in the context.

When the LLM decides that a skill is useful, we can dynamically load the skill's tools into the context (using a `load_skill` meta-tool).
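
A minimal sketch of that pattern (the skill registry, tool schemas, and function names here are hypothetical illustrations, not any specific MCP SDK's API):

    # Hypothetical sketch of the load_skill pattern: the model's context holds
    # one meta-tool plus a one-line description per skill; the full tool
    # schemas stay out of the context until a skill is actually requested.
    SKILLS = {
        "pdf": {
            "description": "Work with PDF files (read, split, fill forms).",
            "tools": [  # full schemas, loaded on demand
                {"name": "pdf_read", "parameters": {"path": "string"}},
                {"name": "pdf_split", "parameters": {"path": "string", "pages": "string"}},
            ],
        },
    }

    def skill_index() -> str:
        # Costs only a few hundred tokens: one line per skill plus the meta-tool hint.
        lines = [f"- {name}: {s['description']}" for name, s in SKILLS.items()]
        return "Available skills:\n" + "\n".join(lines) + "\nCall load_skill(name) to get a skill's tools."

    def load_skill(name: str) -> list[dict]:
        # The meta-tool: returns one skill's full tool schemas, which the
        # agent loop then appends to the model's tool list for later turns.
        return SKILLS[name]["tools"]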


Gemini 3 was:

1. Unreliable in GitHub Copilot: lots of 500 and 4XX errors. Unusable in the first 2 months.

2. Not available in Vertex AI (Europe). We have data-residency requirements. Funnily enough, Anthropic is on point with releasing their models to Vertex AI; we already use Opus and Sonnet 4.6.

I hope Google gets their stuff together and understands that not everyone wants to (or can) use their global endpoint. We'd like to try their models.


That's like being proud of not using Google or Stack Overflow and only reading manuals, or of using Notepad instead of an IDE (or an editor with language-server support).

A $10 GitHub Copilot or $20 ChatGPT/Claude subscription gets you a long way.

And if an employer isn't willing to spend this little money to improve their workers' productivity, they're pretty dumb.

There are valid concerns, like privacy and OSS licenses. But a lack of value or productivity gain isn't one of them.


Gemini 2.0 Flash is and was a godsend for many small tasks and OCR.

There needs to be a greater distinction between models used for human chat, programming agents, and software integration; the last is where we at least benefited from the Gemini Flash models.


Seems like their whole business model was based on the fact that Tailwind was difficult to use, and now with LLMs we have a simple way to use it well enough.

They, and other companies, should instead depend on corporate users. Don't let multi-billion-revenue companies use your tech for free.

Seems like many companies learned this a bit late; we get the same news every few years (Docker, MongoDB, Terraform, Elastic).


> Seems like their whole business model was based on the fact that Tailwind was difficult to use

Uhhh, no... People already struggle with CSS. No one would use Tailwind if it made things even more difficult. I've used and loved Tailwind for 5+ years without ever having any components written for me. At worst it's as difficult as CSS (centering a div is not any easier, you just write it in a different place), and in some areas, like responsiveness (media queries such as screen-size breakpoints), the syntax is way easier to read and write.

The problem their business model was solving is first that good design is hard, and second that even if you can design something that looks good, you might not be good at implementing it in CSS. They did those things for you, and you can copy-paste it straight into your app with a single block of code thanks to Tailwind.

You're right that LLMs essentially solved this same issue in a more flexible way that most people would prefer, and it's just one feature of many.


Nah. Plenty of people struggle with Tailwind, or at least were interested in shortcuts. That's exactly what Tailwind Plus offers. In some ways Tailwind is like matplotlib/pandas/numpy: incredibly powerful, but some methods/classes are difficult to remember, so you keep googling the same things.

It doesn't matter anyway whether their customers are people looking for shortcuts or people looking for "the best designs".

Their problem was and is that Tailwind is used by many of the most profitable companies in the world for free.

That's so unbelievably stupid. You have corporations paying millions for MS 365 subscriptions, Confluence, and other software, and basically nothing for a totally optional UI library. If using Tailwind saves 10 engineering hours per month, then it's worth paying a few hundred dollars for a license.
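
Spelled out with illustrative numbers (the loaded hourly rate and the license price below are assumptions, not Tailwind's actual pricing):

    # Illustrative back-of-the-envelope value calculation; all inputs assumed.
    hours_saved_per_month = 10
    loaded_hourly_rate = 100            # $/engineering hour, assumed
    license_cost_per_year = 300         # "a few hundred $", assumed
    yearly_value = hours_saved_per_month * loaded_hourly_rate * 12
    print(yearly_value / license_cost_per_year)   # -> 40.0, i.e. a ~40x return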

Given that their team isn't big, they don't even need that many customers. Add a bit of consulting at a decent hourly rate and they should be golden.

The more I think about it, the more I blame the CEO for poor decisions.


I assume it's still x86-64?

What actually makes it an AI platform? Some tight integration of an Intel Arc GPU, similar to Apple's M-series processors?

They claim 2-5x performance for some AI workloads. But aren't they still limited by memory, the same limitation as always in consumer hardware?

I don't think it matters much whether you're limited by an Nvidia GPU with a ~16GB max or some new Intel processor with similar memory.

Nice to have more options though. I kinda wish the Intel Arc GPU would be developed into an alternative for self-hosted LLMs. 70B models can be quite good but are still difficult/slow to run self-hosted.
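
For a rough sense of why memory is the bottleneck, here's a back-of-the-envelope sketch (weights only; real usage adds KV cache and runtime overhead on top):

    # Approximate weight memory for a 70B-parameter model at common precisions.
    params = 70e9
    for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"{name}: ~{gb:.0f} GB")  # fp16 ~140 GB, int8 ~70 GB, int4 ~35 GB

Even at 4-bit quantization, the weights alone are around 35 GB, which is why a ~16GB consumer GPU (or an NPU platform with similar memory) can't hold a 70B model.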


These processors have an NPU (Neural Processing Unit), which is supposed to accelerate some small local neural networks. Nvidia RTX GPUs have much more powerful NPUs, so it's more about laptops without a discrete GPU.


And as far as I can see, it's a total waste of silicon. Anything running on it will be so underpowered that it doesn't matter anyway. It'd be better to dedicate the transistors to the GPU.

The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.


If you ask NVIDIA, inference should always run on the GPU. If you ask anybody else designing chips for consumer devices, they say there's a benefit to having a low-power NPU that's separate from the GPU.


Okay, yeah, and those manufacturers' opinions are both obvious reflections of their market positions, independent of the merits. What do people who actually run inference say?

(Also, the NPUs usually aren't any more separate from the GPU than tensor cores are separate from an Nvidia GPU; they are integrated with the CPU and iGPU.)


If you're running an LLM, there's a benefit in shifting prompt pre-processing to the NPU. More generally, anything that's memory-throughput-limited should stay on the GPU, while the NPU can aid compute-limited tasks to at least some extent.

The general problem with NPUs for memory-limited tasks is either that the throughput available to them is too low to begin with, or that they're usually constrained to formats that require wasteful padding/dequantizing when read (at least for newer models), whereas a GPU just does that in local registers.


Depends on how big the NPU is and how much power/memory the inference model needs.


But, like... what, for example? As a normal Windows PC user, what kind of software can I run that will benefit from that NPU at all?


We don't ask that question. In reality, everything is done in the cloud. Maybe they package some camera app that applies Snapchat-like filters with the NPU, but that's about the extent of it.

Jokes aside: they really seem to do some things like live captions and translations. Pretty sure you could also do these things on the iGPU or CPU at a higher power draw.

https://blogs.windows.com/windows-insider/2024/12/18/releasi...



No, for sure, but AFAIK you get all of those features even if you don't have an NPU. And even if you do have one, it's unclear to me which of them actually use the NPU for extra power or whether they all just run on the CPU. The thing that's missing for me is "this is the thing you can only do on a Copilot+ PC and it's not available otherwise".


Try searching for something like "My mouse pointer is too small":

https://x.com/rfleury/status/2007964012923994364


Incredible. 100% typical Microsoft, though. I'm a "veteran" Windows/Xbox developer and none of this surprises me.


They're going to find a way to accelerate the Windows start menu with it.


Oh boy. Instead of building an efficient index or optimizing the start menu or its built-in web browser, they're adding more power usage so the computer can randomly guess what I want returned, since they still can't figure out how to return search results for what you actually typed.


God I hope so


It's another way Microsoft has tried to cater to OEMs as a means of bringing PC sales back to the glory days of exponential growth, especially under the Copilot+ PC branding, which is nowadays still siloed into Windows on ARM.

In fairness, NPUs can use fewer hardware resources than a general-purpose discrete GPU and are thus better for laptop workloads. However, we all know that if a discrete GPU is available, there is no technical reason not to use it, assuming enough local memory is available.

Ah, and NPUs are yet another thing GNU/Linux folks have to reverse-engineer, as on Windows/Android/Apple OSes they are exposed via OS APIs, and there is as yet no industry standard for them.



That is not an industry standard that works across vendors in an OS- and GPU-agnostic way, which is why Khronos has started a new standardization effort.

https://www.khronos.org/events/building-the-foundation-for-a...


Windows Recall?


1) tick AI checkbox 2) ??? 3) profit


Are we calling tensor cores NPUs now?


How did we end up with Tensor Cores and a Tensor SoC from two different companies?


The same way we ended up with both Groq and Grok branded LLMs

Maybe these people aren't that creative....


That's the whole problem: no consistency. Some configurations work, others don't, even though they should be way more capable.

That's not even limited to Linux or gaming. A few weeks ago I tried to apply the latest Windows update to my 2018 Lenovo ThinkPad. It complained about insufficient space (I had 20GB free). I then used a USB drive as swap (required by Windows) and tried to install the update. I gave up after an hour without progress...

Hardware+OS really seems unfixable in some cases. I'm 100% getting a MacBook next time. At least with Apple I can schedule a support appointment.


For gaming, macOS does not seem like a great choice. I have friends on macOS and, at least on Steam, very few games run on that platform.

Additionally, when I was using macOS for work, I also ran into unexpected issues whenever I wanted to do anything a bit more special (think packages installed via Homebrew, compiling something from source, etc.).

So for me the options are: either use a locked-down device where you can't do anything other than what the designers thought of, and if you are lucky it will be good, OR use something where you have complete freedom and take on the responsibility of tweaking when things don't work. macOS tries to be the first option (but in my opinion does not succeed as much as it claims to), Linux is the second option (but it is harder than it could be in many cases), and Windows tries to do both (and is worse than the other two alternatives).


No one is claiming that it's a bad move.

It's just an anti-competitive move that could be very bad for the consumer as it makes the inference market less competitive.


For me it was about 8 years ago. Back then, TF was already bloated and had two weaknesses: its bet on static compute graphs made writing code verbose and made debugging difficult.

The few people I knew back then used Keras instead. I switched to PyTorch, which was more "batteries included", for my next project.
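
A minimal sketch of the difference (contrasting the TF 1.x-era static-graph style, via the compat shim available in TF 2.x, with PyTorch's eager execution):

    # TF 1.x style: first build a static graph, then run it in a session.
    # Intermediate values only exist inside sess.run(), which is what made
    # debugging painful.
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()
    x = tf.placeholder(tf.float32, shape=(None, 3))
    y = tf.reduce_sum(x * 2.0)
    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

    # PyTorch: ordinary eager code; any intermediate tensor can be
    # inspected with a plain print() or a debugger.
    import torch
    a = torch.tensor([[1.0, 2.0, 3.0]])
    b = (a * 2.0).sum()
    print(b)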

