Hacker News | revolvingthrow's comments

Anthropic’s $20 plan gives you such a pittance of tokens that it’s borderline unusable for anything more than a few scripts or a toy app. If $20 is all you have, you’d do _much_ better going with ChatGPT.

Codex on the $20 ChatGPT plan goes much further than Claude's $20 plan, but it's still not enough if you plan to work full-time with it.

My usage is in the $60 tier, but that tier doesn't exist, so I have to cough up $100. And then I get all shaky if I don't use up my weekly quota.

Do you mostly just hit the session limits? If so, I know it's not ideal, but you could wait an hour or two for them to reset. Not sure if that would work for you, but just a suggestion.

I get to about 80% on a single session and cap out an hour before the reset if I’m working on two.

But I like having that forced hour to stop; it’s a moment to take a breath.

It depends on the kind of work though, some things are more token intensive.


That's simply not true at all.

I really wish the benchmarks were even slightly trustworthy for AI models. ~120B is the largest size I can run locally, so naturally I grabbed the 122B Qwen3.5, which had great benchmarks, and… frankly, the model is garbage, worse than GLM-4.5 Air IMO. But then, Qwen famously benchmaxxes.

And here we have another release. The benchmarks are just a tiny bit worse than Qwen3.5's (for far fewer tokens). Am I to take it that the model is worse? Or does Qwen’s benchmaxxing mean a slightly worse result from a non-Qwen model actually indicates a better model? I’d rather not spend hours testing things myself for every noteworthy release.

Ah well. Mistral has been fairly decent, so it's worth taking a look. Obviously they’re behind the big 3, but in my experience their small models are probably the best you can get for several months after each release. I’m not sure how it works as a sales funnel for their paid models, same as with Chinese models - people likely just go for Google/OpenAI/Anthropic in this case - but I’m thankful for their existence.


So far it's better for equivalent Qwen 3.5 workloads, and much less expensive. As you mention Qwen spends way too much time/tokens reasoning, so it ends up being more expensive than you'd think based on its model card (also IME, flaky).

I actually think this model is a Big Deal because there's a whole world out there of people building on top of Qwen and other Chinese models, and now Mistral has just released one of the best generalist FOSS models in its price/size range at an excellent price ($0.60/1M output is a steal). Mistral could potentially grab a lot of that.

Personally, I am going to build off of it and invest in their ecosystem now, with this model, because it's definitely worth paying for at the current price. Whether Mistral or some other venture comes out with the next big thing in that category is anybody's guess, but now that labs are starting to converge on more rapid release cycles, I'm hoping Mistral won't be far behind.

The main thing for me, though, is that for small-model use cases, it just doesn't make sense to pay a ton for Haiku/Gemini and other expensive small models that you can't self-host or fine-tune or generally build upon. They cost too much and can't be tinkered with. Also, the cases where you'd want the incrementally better performance of something like Haiku over Mistral, but not enough to consider the benefits of tuning or self-hosting inference, are few for me. At the same time, if you're going to invest in building on top of someone else's product, you want them to be a trustworthy, long-term partner.

I'm excited to give them a shot


Eh, I found several interesting things from various tracking tools. Take a nap? Sleep is destroyed this night. Exercise in the evening? Same. Not something I’d pay attention to without noticing the chart afterwards.

There’s also the motivation factor. I’m not sure of the total %, but I certainly did some exercising just to fill the daily goal. Nothing life-changing, but for the price of a cheapo apple watch se once every 5 years or so, more than worth it.

It’s not unlike simplistic time tracking on my iphone. I spent a lot of time on bullshit websites. Obviously I knew it was happening, but the sheer magnitude was surprising. It’s akin to acute pain letting you know there’s a health problem vs something brewing in the background that you are vaguely aware of, but have no motivation to truly care about - one is far more noticeable than the other


It is difficult to put into words how much I dislike macOS 26. I held out on upgrading for a long time since there were so many horror stories, but to my surprise, both iOS and iPadOS 26 aren’t really any different from 18. Maybe because you don’t really do any proper work on them? The graphical differences aren’t anything major when the apps fill the whole viewport anyway.

But macOS? Good lord. I can only hope 27 will unfuck things somewhat, there are so many small annoyances and all of them add to a constant sense of unhappiness throughout the day. I’m really tempted to downgrade back to Sequoia. At least the M4 will be good enough for years if this truly is the new path Apple will take.


I usually end my rides at 24-28 kmh average and outside of cyclists and e-bikes the amount of people that ride faster than me is pretty much zero. 70 is completely absurd, especially for "youth".

I don’t really care if a bunch of reckless kids want to gamble with their lives, but the place to do it is clearly on the street, not on the sidewalk. Those ebikes are pretty much dirt bikes, and nobody sane argues those should slalom between pedestrians, just like cars or motorcycles don’t drive on sidewalks. I don’t want to constantly be on the lookout for a 60 kmh vehicle careening right into me whenever I’m outside, which increasingly happens with food delivery "bikes" as well. There’s no place for them on crowded beaches either.

Wanna go as fast as cars? Cool, do it on the streets, it’s what they’re built for. Helmet (or even clothing) optional, I suppose, it really isn’t my problem.


It doesn’t. I’m not sure it outperforms GPT-3.


You're not being serious, are you? Even 1.5-year-old Mistral and Meta models outperform GPT-3.


GPT-3, not 3.5? I think I would even prefer Qwen3.5 0.8B over GPT-3.


I don't understand the target audience of the iPad Air.

The base iPad is "a really big iPhone, with a few laptop-esque features". It's reasonably cheap for what it offers, especially if you want a highly mobile media-consumption device and handwritten input.

Then there's the iPad Pro, which is wildly overpriced for its specs -- the M4 iPad Pro has half!! the RAM that the cheaper M4 MacBook Air has, which is laughable for a 'pro' anything, especially if you have Apple Intelligence enabled - you get what, 3GB of usable RAM once you take the OS and Apple Intelligence into account? Yet, aside from the crazy sticker price, the hardware is a lot better - the 120 Hz OLED display looks amazing and is way brighter, the speakers are quite an upgrade, there's a full-blown Thunderbolt port for an external display, and so on. The OS is still toy-like, and the RAM is pitiful, but there is a place for an iPad Pro.

And then there's the Air, which is... the base iPad with an M-series chip and pretty much nothing else? The display is barely any better than the base iPad's, the storage and RAM are pitiful, the speakers are from the baseline iPad, and so on. Just about the only saving grace of the M4 one announced here is 12GB of RAM, which is the absolute lowest these really ought to have, and it puts into perspective how utterly miserly Apple was about RAM pre-AI. I don't understand the value proposition - if you want the baseline, you buy the much cheaper base model; if you want more, you get the Pro, right?

To be fair, the asking price is far less than the Pro's, but the upgrades over the base model seem so minuscule that I just don't know.


Larger screen option, much better screen, better pencil support - not better support, but a much better pencil (this is HUGE for my daughter for example).

It's crazy to me that someone can look at a $350 device and a $1000 device and say there's not room for something in the middle...


> I don't understand the target audience of ipad air.

For me — 13" laptop replacement with cellular connectivity.

If a 13" version of the base iPad existed, I'd probably get that, but as-is the iPad Air is the cheapest 13" iPad.


I live in Asia and I see all students using iPads instead of laptops. The limitations of the OS are really not felt by the general public. Whatever you listed doesn't even make sense to them, they buy things based on what they can afford. Every iPad works the same to them.


You're not wrong, but I hate the idea of an entire generation growing up without ever using a full-powered computer. (Full-powered is the wrong word; more like fully capable computer.)

We have an entire generation who only knows how to interact with "usability optimized" interfaces with zero friction and zero learning curve.

Not knowing how to use a regular computer creates a barrier to entry for programming and other computing industries that didn’t exist before.


I get it, there's now definitely a hurdle between "I got a computing device" and "I can program on it". I don’t think it's a huge hurdle though, you can and will be able to upgrade or just buy a full computer when and if you encounter its limitations.

Your car can't compete in races, but it doesn't affect you because you're probably not interested in racing. You're more interested in comfort and price.


Manual vs. automatic is a better comparison.

Driving a manual isn’t a required life experience by any means. But the overwhelming majority of people who know how to drive manual appreciate the knowledge and experience. (And it’s not necessarily more expensive, if anything Apple products are typically more expensive)


Because it has a large screen, and my wife uses it as her only computer, with a regular $30 Bluetooth keyboard and mouse.


It's the cheapest iPad that supports pressure sensitivity on the better apple pencil.


The Air has a better display (laminated, AR coating, P3 colors).


My kids (4 and 6) like to use the iPad Air with the pencil.


I guess my buddies using laptops in electrical engineering 10 years ago also got dumber? Ought to have done programming and CAD with pen and paper.

I wish I had a laptop earlier - or even better, a tablet with a good pen and attachable keyboard. I’m struggling to think of a disadvantage vs dead tree [note]books. Doodle right on the pdf textbook, dump things to remember into some flashcards app, have notes as searchable files / the ability to share them with everybody, or just a calendar of what’s happening when so you’re not surprised by a test that was announced when you blew off school for a day to do stupid teenager things.

The only actual issue is that computers are excellent slaves but terrible masters, and it’s a lot easier to get distracted by doom or TikTok when you’ve got a computer you’re actively using. Yet surely this is solvable? Given how annoyingly locked down the average company-issued dev machine is, surely it’s possible to restrict one for students during school time? It should certainly be much easier than controlling private smartphones.


When I first started learning C in uni many years ago, we were forced to use vi and the command line, despite there being functional IDEs.

The argument then was that IDEs cause cognitive offloading and you don't actually learn to the fullest extent. Forcing us to do everything manually helped us understand how the compiler works, how to debug errors, etc.

This is what current systems are doing. There is a good article that explains it much better

https://papers.cnl.salk.edu/PDFs/Memory%20Paradox_%20Why%20O...

> Oakley, B., Johnston, M., Chen, K.-Z., Jung, E., & Sejnowski, T. (2025). “The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI.” In The Artificial Intelligence Revolution: Challenges and Opportunities (Springer Nature, forthcoming).


I thought at first that you said it's easier to get distracted by Doom, as a comment on how this problem is quite old.


Did you read the article?


I think he might have gotten too distracted.

Yeah, in an academic setting, in higher education, it might make sense like he mentioned. Still a personal preference; for me a laptop will never beat taking notes by hand on paper.


I wish the Excel clones were better. LibreOffice’s UI is extremely dated IMO, to the point that it doesn’t even let you make a damn table, but at least what’s there works correctly. OnlyOffice is not only missing some pretty basic functionality such as preferences (???), it also inexplicably deleted a single sheet out of a multi-sheet file on two occasions on macOS, and it generally has some peculiar functionality and UX here and there.


I'm only a light user of office programs, both at work and at home. I have access to M365, but for my personal usage I prefer LibreOffice over MS Office, especially when it comes to spreadsheets. I generally don't mind the UI of the MS suite, but I find it's getting increasingly bloated and slow, and sometimes updates move UI elements around for no benefit that I can perceive. I haven't experienced the same with LibreOffice; it's lighter than MS Office, and I find it easier to find the options I'm looking for, which I know exist but don't always remember _where_ they live because of how infrequently I use them.

With Excel in particular, there is something I can't put my finger on that I just don't get along with. It's unintuitive in a way that I can't describe, but which I notice about half the time I use it. Sometimes clicking doesn't do what I expect it to do, clipboard contents are lost all the time, scrolling resets or jumps around for reasons I don't understand. I don't have the same issues with LibreOffice Calc, which is why I choose it for my personal work. In fact, I think Google Sheets is the most pleasant to use of the options I've tried, which is something I thought I'd never say about a web-based alternative to a native app...


Regarding Excel's weird warts... Microsoft knows all about them, but they're stuck with them for backwards compatibility. The business world has a billion Excel scripts and macros made by barely technical users that all inadvertently depend on details like the scrolling and clipboard behavior. Trying to improve those would break all of it. Same as with all the weirdness in JavaScript: Microsoft has to just call it a feature and live with it.


That's not 100% true.

Excel 2003 did 95% of what "modern" Excel does without most of those issues...


What I've experienced with Excel is that while editing a cell, it suddenly jumps to another cell (IIRC when you press arrow keys out of text-editor reflex). To avoid this behavior, click the cell, then click the cell content field and edit there.


I'm surprised at all the comments deriding LibreOffice's interface on here. It's never given me any trouble (even when making tables) and I've been using it preferentially for 20 years over MSOffice, even when schools or employers are actively paying for my Microsoft subscription. In fact, LibreOffice does something very important a lot better than MSOffice: importing CSV files correctly across locales.


> LibreOffice's interface on here. It's never given me any trouble (even when making tables)

LibreOffice Calc doesn’t have tables in the sense of Excel’s Insert > Table. People have been looking for it and asking for it for fourteen years in this thread: https://ask.libreoffice.org/t/creating-tables-in-calc/1433


Agreed. LibreOffice's sane WIMP interface is a feature, not a bug, when the alternative is to use those horrid ribbon-like interfaces.


IronCalc to the rescue?

https://www.ironcalc.com/


Thanks for the mention! That's indeed the plan



Yeah. This is the curse of any legacy software that doesn't enforce strict separation of logic and UI. Any larger change to the UI requires an awful lot of manpower that open-source projects usually don't have.

I wonder if it would be possible to extract the spreadsheet data model and logic into a library completely separate from the UI. This would enable a diversity of UIs, and also interoperability between different tools.


Each time someone sends me an .xls or .xlsx file, I'm scared that opening it in LibreOffice will mess up its formatting or that I'll miss something important. I always revert to Google Sheets instead.


> LibreOffice’s UI is extremely dated imo

It feels so bland and hard to read. Maybe that's because of Java. How did Excel 5.0 look so good?


There is no Java in core LibreOffice, it just has some weird Java-based extension system because of its Sun history.

LibreOffice uses an extremely dated, also messy, homegrown UI toolkit and has resisted the idea of switching to something last (really) updated this millennium (sic).


I am currently trying to wrangle this UI. It is called VCL and it was initially created in the early 1990s.

If you want to see my efforts, I have a number of branches on github:

Phase 5 can be located here:

https://github.com/chrissherlock/libreoffice-experimental/tr...

Check the source here:

https://github.com/chrissherlock/libreoffice-experimental/tr...

in particular keep an eye on:

https://github.com/chrissherlock/libreoffice-experimental/tr...


I think I've tried every spreadsheet program still being maintained at this point. Try gnumeric, it's a clear cut above everything else.

Mandatory Excel rant: Excel can't be trusted with data destined for publication. It's bloated, buggy as hell, user hostile, and has set genetics research back with its utterly braindead autocorrect. The default plot options are the exact polar opposite of how data are presented in science, and almost impossible to make serviceable. Everything Excel touches ends up looking like a hastily thrown together 6th grade science project. Libreoffice is also riddled with serious bugs and also loses data, but hey it's free and not a decades old flagship product from a multi billion dollar tech company.


> Try gnumeric, it's a clear cut above everything else.

Gnumeric rocks; it even features Monte Carlo simulation built in. I have it installed on my personal machine, but a major limitation is that they stopped providing Windows builds, at least as of the last time I checked, so I can't use it at work.


>> Libreoffice is also riddled with serious bugs and also loses data

As a user of LibreOffice for years, methinks you're spreading FUD.


I'm also a LibreOffice user and have been since its inception. It's good software and I recommend it to people. The fact is just that Gnumeric is better than Calc, and not just in terms of features or feel either. I have personally lost data in Calc spreadsheets that Gnumeric handles without issue.


Hey, I'm very interested in this because LibreOffice annoys me and I can't explain why. It's not the "dated look" that everybody complains about; but I suspect it's related to UX somehow.

Could you articulate why Gnumeric is better than everything else?



From your second link

> Therefore, the problem is not necessarily with Excel. Equally, the problem is not with the IEEE 754 standard either. It’s just the complex nature of the world of mathematics and computing that we live in.


The IEEE 754 standard covers decimal floating point arithmetic, too. Decimal floating point avoids issues like 0.1 + 0.1 + 0.1 not being equal to 0.3 despite usually being displayed as 0.3. Maybe it's reasonable to use that instead?

Some earlier spreadsheets such as Multiplan used it (but not in the IEEE variety) because it was all soft-float for most users anyway.
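A quick Python illustration of the difference (Python's `decimal` module follows the General Decimal Arithmetic spec, which is closely aligned with IEEE 754's decimal formats):

```python
from decimal import Decimal

# Binary (IEEE 754 double) floats can't represent 0.1 exactly,
# so the tiny rounding errors accumulate across additions.
binary_sum = 0.1 + 0.1 + 0.1
print(binary_sum == 0.3)   # False
print(binary_sum)          # 0.30000000000000004

# Decimal floating point represents 0.1 exactly,
# so the sum comes out to exactly 0.3.
decimal_sum = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(decimal_sum == Decimal("0.3"))  # True
```

The trade-off is speed: binary floats are done in hardware on virtually every CPU, while decimal arithmetic is usually software-emulated, which is presumably part of why spreadsheets moved away from it once hardware FPUs became universal.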


Excel is the one that decided to use only 15 digits. It is an Excel problem...


I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you're writing software for a defibrillator, especially in the current "nothing matters more than next-quarter results" climate.

