softfalcon's comments | Hacker News

This... so much this.

> too many SKUs and models - it takes a paragraph to figure out how 2 Dell laptops from the same release year differ.

And yet, I just watched a YouTube video where a "PC guy" was like, "adding the Neo just completely confuses the Apple product line. Are we heading towards having too many Apple options that confuse the buyer here?"

I get it: other than price, the Neo and Air are a bit confusing product-wise. But have they looked at how Asus, Lenovo, and Dell structure their product lines? The disparity between PC and Apple laptops is absolutely wild.

I run both PCs and Mac devices in our house; we use whatever fits the job. Recommending PC laptops for family members feels like a total crapshoot though. Every time, I do all I can to find the right device for their needs and there are just so many trade-offs. Maybe I get all the right specs, ensure it doesn't thermal throttle, keyboard/trackpad are A-OK... but the webcam is trash. Ooof... now Mom is complaining about how no one can see her properly on her bridge club call.

I brought up how the Neo might do to the PC industry what the Air did to Ultrabooks back in the day. The copy-paste hate I got on YouTube/The Verge, "hahaha, wut, with 8 GB of RAM? lmao, lol, you Apple bot?!", was expected, but also disappointing. There is clearly a market segment happy to keep putting up with the mess that Dell/Lenovo are selling (anything but a Mac).

Wild how tribal we are to our corporate computer overlords.

The era where something like Framework with its fully customizable, repairable, modular laptops becomes the standard can't come soon enough.

For the time being, I'll let Apple/PC continue to duke it out. Hope some competition helps in the long run. :shrug:


> I get it: other than price, the Neo and Air are a bit confusing product-wise. But have they looked at how Asus, Lenovo, and Dell structure their product lines? The disparity between PC and Apple laptops is absolutely wild.

Yep.

I'm a long-time ThinkPad user, but I have no idea how Lenovo's ThinkPad T series differs from the ThinkPad E series or ThinkPad L series or ThinkPad X series, and their website certainly isn't going to tell me. I keep on buying T series because I'm honestly afraid of trying anything else.

To say nothing of Lenovo's non-ThinkPad laptop brands, including IdeaPad, Legion, Yoga, ThinkBook (!), and LOQ.

I really don't know what laptop to recommend to a friend. One friend showed me specs for an Asus they found at Best Buy, and it looked okay, so I said "It's probably fine." Turns out it was shoddily made and overpriced: they had to send it back not once but twice because the wifi and then the camera didn't work out of the box, and then a few months later the hinge broke.

I am not a Mac fan, but it's easy to recommend them because you at least know they are universally well-built machines.


> I have no idea how Lenovo's ThinkPad T series differs from ...

My personal rundown and how they get assigned:

E - Educational / Lower office personnel spec

L - Office personnel you hate spec, but don't offer the E because they might complain.

T - Give this to all the technicians because they can't take care of anything and it will typically survive.

P - Give this to the engineers who believe having an RTX GPU will actually help them so that they are happy, and to the CAD operators who actually need it.

X - Smaller/Ultrabooks before the term got started; now somewhat of a blurry line because the T series has gotten lighter/thinner. But the X1 Carbon sure is a great way to spend a ton of money on a light laptop when a T-series would suffice.

Personally I stick to older used X series (currently an X250) because I just enjoy a small laptop and they are dirt cheap now.


This still doesn't tell me how they differ. What are the factual objective measurable differences between E/L/T/P?

I was assigned an E14 once. Compared to a T14:

The case is all thick ABS.

It weighs like 2.4 kg, and the weight is unbalanced.

USB-C charging only works at 20 V, nothing less.

While charging it overheats and spins up the fans.

It came with a TN screen with terrible viewing angles that could not be used in a brightly lit room. I didn't use the laptop for two months while I waited for a replacement screen from AliExpress.

The keyboard is much thinner, and the TrackPoint drifts easily.

Camera quality is worse; somehow it cannot handle sunlit scenes. Microphone and speakers are similar to the T14.

It stopped receiving firmware updates after two years.

It uses about 0.5 W while suspended, so its tiny 48 Wh battery typically doesn't last the weekend with the lid closed.

The motherboard has design issues: a missing protection diode on the headphone jack's microphone input ended up frying the CPU due to a ground loop. Meanwhile the T14 has eaten the same ground loop, and even 48 V passive PoE in an accident, and dealt with it by rebooting. A T450 from 2015 is still running.


Interesting, I own an E14 and it charges on a 12 V PD profile from a stock Ugreen power bank. Maybe they differ across models?

Spoiler: they are all identical hardware, but marketed differently.

I think I got it:

- E is for economy

- L is for loser

- T is for tank

- P is for power

- X is for executive


Fine, but how is anyone supposed to divine all that nuance from a single letter?

As much as I hate Apple, they really do have product names down to a science.


The Neo and Air are quite simple when you look at them from the bottom up. The Air is the "nice" Neo for basically $500 more: backlit keyboard, MagSafe, Thunderbolt 4, M5, way faster SSD speeds, double the RAM, a larger display, Force Touch trackpad.

> "hahaha, wut, with 8 GB of RAM? lmao, lol, you Apple bot?!"

And it would seem they never learn either. I saw the same comments when the M1 Air came out, and then they quickly shut up when people were pushing those little base model Airs well beyond what anyone thought they were capable of.

The same thing is happening with the Neo now. It feels like an M1 moment all over again for the PC OEM industry.

If you aren't a gamer, there is zero reason at this point to consider any other laptop besides a MacBook. Apple now has one for every price point. This Neo is going to destroy the consumer PC space. Dell, HP, and Acer are probably sweating right now.


They're not sweating at all; they'll do what they always do. They'll release a new model to compete in time for Christmas 2026. They'll call it the ASUS Nuevo X856G-L or the Acer Nova 9500X or the Alienware Morpheus ZS and that will be it. They won't even consolidate their line at the $600 price point; just one more model, bro!

Their sales will continue tapering off and they'll do what they always do: reduce investment, fire some designers and engineers, keep old models out there even longer, and move out of Apple's way by selling even more $380 laptops for $400 while Apple siphons even more profit by selling a $400 laptop at $600.

That's how PCs die.


Are you absolutely sure they don't want us to add the capacity for them with a pathway for further government subsidies?

Almost everything in tech has been subsidized in one way or another via tax avoidance schemes or outright lobbying and manipulation of the market.

Why would this be any different?


So... have we now confirmed that the only thing preventing us from running macOS off our iPhones is a software limitation?

(I'm being facetious; if the hardware were open, someone would have already written a custom boot loader for this :P)


What happens when I send an extremely high throughput of data and the scheduler decides to pause garbage collection because there are too many interrupts to my process sending network events? (a common way network data is handed off to an application in many Linux distros)

Are there any concerns that the extra array overhead will make the application even more vulnerable to out-of-memory errors while it holds off on GC to process the big stream (or multiple streams)?

I am mostly curious; maybe this is not a problem for JS engines, but I have sometimes seen GC get paused on high-throughput systems in Go, C#, and Java, which causes a lot of headaches.


Yeah I don't think that's generally a problem for JS engines because of the incremental garbage collector.

If you make all your memory usage patterns possible for the incremental collector to collect, you won't experience noticeable hangups because the incremental collector doesn't stop the world. This was already pretty important for JS since full collections would (do) show up as hiccups in the responsiveness of the UI.


Interesting, thanks for the info, I'll do some reading on what you're saying. I agree, you're right about JS having issues with hiccups in the UI due to scheduling on a single process thread.

Makes a lot of sense, cool that the garbage collector can run independently of the call stack and function scheduler.


OP doesn’t know what he’s talking about. Creating an object per byte is insane if you care about performance. It’ll be fine if you do 1000 objects once, or if this isn’t particularly performance sensitive. But the GC running concurrently doesn’t change anything about that, not to mention that he’s wrong: the scavenger phase for the young generation (which is typically where you find byte arrays being processed like this) is stop-the-world. Certain phases of the old generation collection are concurrent, but notably finalization (deleting all the objects) is also stop-the-world, as is compaction (rearranging where the objects live).

This whole approach is going to add orders of magnitude of overhead, and the GC can’t do anything about it because you’d still be allocating the object, setting it up, etc. Your only hope would be the JIT seeing through this kind of insanity and rewriting it to elide those objects, but that’s not something I’m aware of any AOT optimizer being able to do, let alone a JIT engine that needs to balance generating code quickly against fully optimal behavior.

Don’t take my word for it - write a simple benchmark to illustrate the problem. You can also look throughout the comment thread that OP is just completely combative with people who clearly know something and point out problems with his reasoning.
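For what it's worth, here is a rough sketch of the kind of micro-benchmark being suggested (not from the thread itself): it compares summing a Uint8Array through a throwaway wrapper object per byte against reading the array directly. The wrapper object and timing harness are made up for illustration, and a clever JIT may scalar-replace the wrapper if it can prove it never escapes, so treat any numbers as illustrative only.

```javascript
// Hypothetical micro-benchmark sketch; assumes Node.js or a browser console
// where performance.now() is available. Numbers will vary wildly by engine.
const data = new Uint8Array(10_000_000).map(() => Math.floor(Math.random() * 256));

// Variant 1: allocate a small wrapper object per byte (the pattern being criticized).
function sumWithObjects(bytes) {
  let total = 0;
  for (const b of bytes) {
    const wrapped = { value: b }; // one short-lived object per byte
    total += wrapped.value;
  }
  return total;
}

// Variant 2: read the typed array directly, with no per-byte allocation.
function sumDirect(bytes) {
  let total = 0;
  for (let i = 0; i < bytes.length; i++) total += bytes[i];
  return total;
}

for (const [name, fn] of [["objects", sumWithObjects], ["direct", sumDirect]]) {
  const start = performance.now();
  fn(data);
  console.log(name, (performance.now() - start).toFixed(1), "ms");
}
```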


Even if you stop the world while you sweep the infant generation, the whole point of the infant generation is that it's tiny. Most of the memory in use is going to be in the other generations and isn't going to be swept at all: the churn will be limited to the infant generation. That's why, in real usage, the GC overhead is, I would say, around 15% (and why the collections are spaced regularly and are quick enough to not be noticeable).


I've been long on JS but I've never heard claims like this. Could you back it up somehow, or at least give a solid source for the _around 15%_ figure? Also, when you say _quick enough to not be noticeable_, what situation are you referring to? I thought GC overhead stacks up until it eventually affects UI responsiveness when handling continuous IO or rendering loads; I recently did some perf work for such cases, and optimizing the number of objects did make things better, and the console definitely showed some GC improvements. You're making me nervous enough to go back and check again.


Yeah I mean don't take my word, play around with it! Here's a simple JSFiddle that makes an iterator of 10,000,000 items, each with a step object that cannot be optimized except through efficient minor GC. Try using your browser's profiler to look at the costs of running it! My profiler says 40% of the time is spent inside `next()` and only 1% of the time is spent on minor GCs. (I used the Firefox profiler. Chrome was being weird and not showing me any data from inside the fiddle iframe).
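The fiddle itself isn't linked in the thread, but a test along the lines described, an iterator over 10,000,000 items that allocates a fresh step object on every iteration, might look roughly like this sketch (the shape of the step object is made up):

```javascript
// Rough reconstruction of the kind of test described above (not the actual fiddle).
// Drives an iterator whose every step allocates a short-lived object, so almost
// all of the garbage should be handled by the minor GC.
function* steps(n) {
  for (let i = 0; i < n; i++) {
    // Each call to next() produces a fresh { value, done } pair plus this step
    // object, all of which die young.
    yield { index: i, payload: i * 2 };
  }
}

let checksum = 0;
for (const step of steps(10_000_000)) {
  checksum += step.payload;
}
console.log(checksum);
```

Running something like this under a browser profiler (Firefox's, as the comment describes) is the way to see how much time lands in minor GC versus in `next()` itself.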


To me this is just a "fake test". As I said, real-world cases involve consistent IO loads and/or rendering loops. For example, in my case I need to load tons of pixel data and decode it in workers, and at the same time use canvas to render the decoded images and huge chunks of array data. Those are real-world high loads: there are tons of objects created during the process, far fewer than in the "fake test", yet optimizing the object counts still made a huge difference to the final performance.

Let's talk about this in another, more general case: virtual windowing. Anyone who has implemented something, hit a performance bottleneck, and then found that virtual windowing could help has two problems to solve. The first is UI responsiveness as more and more stuff gets created and rendered; the object count is usually way less than 10,000,000, yet you can still hit the wall.

I think I might be too negative about it, but I just want to share the real-world cases here.


JSFiddle link missing.



Thanks for this. I was feeling similarly reading the original post.

I was trying to keep an open mind; it's easy to be wrong with all that's going on in the industry right now.

Thanks for clarifying some of the details back to what I was originally thinking.


> Adding types on top of that isn't a protocol concern but an application-level one.

I agree with this.

I have had to handle raw byte streams at lower levels for a lot of use-cases (usually optimization, or when developing libs for special purposes).

It is quite helpful to have the choice of how I handle the raw chunks of data that get queued up and out of the network layer to my application.

Maybe this is because I do everything from C++ to JavaScript, but I feel like the abstractions for cleanly getting a stream of byte arrays are already so many steps away from actual network packet retrieval, serialization, and parsing that I am a bit baffled folks want to abstract this concern away even more than we already do.

I get it, we all have our focuses (and they're ever growing in Software these days), but maybe it's okay to still see some of the bits and bytes in our systems?


My concern isn't with how you write your network layer. Use buffers in there, of course.

But what if you just want to do a simple decoding transform to get a stream of Unicode code points from a stream of bytes? If your definition of a stream is that it has UInt8 values, that simply isn't possible. And there are still gonna be waaay too many code points to fall back to an async iterator of code points.
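For context, a hedged sketch of what exists today: the standard `TextDecoderStream` can turn a byte stream into a stream of string chunks, and iterating each chunk yields code points, but that is still "one string chunk at a time" rather than a true per-code-point stream, which is the gap being described. It assumes a runtime where `ReadableStream` is async-iterable (recent Firefox or Node):

```javascript
// Sketch: bytes -> text chunks -> code points, using the Streams API.
// `byteStream` is assumed to be a ReadableStream of Uint8Array chunks,
// e.g. a fetch response body.
async function countCodePoints(byteStream) {
  const textChunks = byteStream.pipeThrough(new TextDecoderStream());
  let count = 0;
  for await (const chunk of textChunks) {
    // Iterating a JS string yields whole code points (not UTF-16 code units).
    for (const codePoint of chunk) {
      count += 1;
    }
  }
  return count;
}
```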


I think we're having a completely different conversation now. The parent comment I originally replied to has been edited so much that I think the context of what I was referring to is now gone.

Also, I wasn't talking about building network layers, I was explicitly referring to things that use a network layer... That is, an application receiving streams of enumerable network data.

I also agree with what you're saying: we don't want UInt8, we want bits and bytes.

I'm really confused as to why the parent comment was edited so heavily. Oh well, that's social media for you.


Not the person originally replying, but as someone who avoids JS I have to ask whether the abstraction you provide may have additional baggage as far as framing, etc.

Ironically, naively, I'd expect something more like a callback where you would specify how your input gets written to a buffer, but again I'm definitely losing a lot of nuance from not having done JS in a long while.


> I'd expect something more like a callback where you would specify how your input gets written to a buffer

Yes, and this is the way it works in JS currently.

The reason I'm commenting is that it appeared folks were advocating to stop doing this (despite the fact that it seems to work just fine).
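For readers outside JS land, a hedged sketch of the "hand the engine a buffer to fill" style being described, using the Streams API's BYOB ("bring your own buffer") reader. It assumes `stream` is a readable byte stream (e.g. constructed with `type: "bytes"`); a non-byte stream would throw on `getReader({ mode: "byob" })`:

```javascript
// Sketch: the consumer supplies the buffer and the engine writes into it.
async function readIntoBuffer(stream) {
  const reader = stream.getReader({ mode: "byob" });
  let buffer = new ArrayBuffer(16 * 1024);
  let received = 0;

  while (true) {
    // Hand the engine a view to fill; the underlying ArrayBuffer is transferred
    // and comes back attached to the returned view.
    const { value, done } = await reader.read(new Uint8Array(buffer));
    if (done) break;
    received += value.byteLength;
    buffer = value.buffer; // reuse the same backing buffer for the next read
  }
  return received;
}
```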


This reminds me of the Unreal Tournament: Xan episode from the Secret Level series.

Link for those curious or confused as to what I'm talking about: https://www.youtube.com/watch?v=1F-rAW3vXOU

Forcing AI to fight in an arena for our entertainment, what could go wrong? (This was tongue-in-cheek; I am fully aware LLMs currently don't have conscious thoughts or emotions.)


I also want what you're describing. It seems like the ideal "data-in-out" pipeline for purely compute based shaders.

I've brought it up several times when talking with folks who work down at the chip level optimizing these operations, and all I can say is that there are a lot of unforeseen complications to what we're suggesting.

It's not that we can't have a GPU that does these things; it's apparently more a combination of previous and current architectural decisions that work against it. For instance, an Nvidia GPU is focused on providing the hardware optimizations necessary to do either LLM compute or graphics acceleration, both essentially proprietary technologies.

The proprietariness isn't why it's obtuse though; you can make a chip go super-duper fast for specific tasks, or more general for all kinds of tasks. Somewhere, folks are making a trade-off between backwards compatibility and supporting new hardware-accelerated tasks.

Neither of these is a "general purpose compute and data flow" focus. As such, you get a GPU that is only sorta configurable for what you want to do. Which, in my opinion, explains your "GPU programming seems to be both super low level, but also high level" comment.

That's been my experience. I still think what you're suggesting is a great idea and would make GPUs a more open compute platform for a wider variety of tasks, while also simplifying things a lot.


This is true, but what the parent comment is getting at is that we really just want to be able to address graphics memory the same way it's exposed in CUDA, for example: you can just have pointers to GPU memory in structures visible to the CPU, without the song and dance of descriptor set bindings.
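To make the contrast concrete, here is a sketch of that song and dance in a binding-based API (WebGPU from JS, in a browser that exposes `navigator.gpu`), where even one storage buffer has to go through layout and bind group objects before a shader can see it:

```javascript
// Acquire a device (top-level await, assumes WebGPU support).
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Create the buffer itself.
const buffer = device.createBuffer({
  size: 1024,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
});

// Describe how the shader will see it...
const layout = device.createBindGroupLayout({
  entries: [
    { binding: 0, visibility: GPUShaderStage.COMPUTE, buffer: { type: "storage" } },
  ],
});

// ...and bind the actual resource to that slot.
const bindGroup = device.createBindGroup({
  layout,
  entries: [{ binding: 0, resource: { buffer } }],
});

// Later, inside a compute pass: pass.setBindGroup(0, bindGroup);
// The CUDA-style model described above is closer to handing the kernel a raw
// device pointer to that memory, with no layout/bind-group step.
```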


Ah, yeah. I see what you're saying. That is more of an open vs proprietary platform problem.

I am fairly sure that Nvidia intentionally wants to keep addressable memory as a feature only for CUDA (among many other features).

Having CUDA be far superior to other shader code methods is good for vendor lock-in to their hardware, their software, their drivers, etc.

It is really sad seeing that the addressing is possible, but they won't open it up to everyone.


> My method was simply to think. To think hard and long... This method never failed me. I always felt that deep prolonged thinking was my superpower. I might not be as fast or naturally gifted as the top 1%, but given enough time, I was confident I could solve anything.

This mindset is a healthy and good one. It is built on training yourself, learning, and practicing a discipline of problem solving without giving up.

Persistence is something we build, not something we have. It must be maintained. Persistence is how most good in the world has been created.

Genius is worthless without the will to see things through.


I don't mean to come across as far too cynical, but in what world has a software license ever stopped the greedy and powerful from pillaging the IP of other people smaller and weaker than them?

In my opinion, libertarianism in software is a hollow dream that leads people to make foolish decisions that can't be protected. This makes it easy for corporations to exploit and quash any barely audible opposition.

Almost as if by plan, the libertarian mindset has eroded and weakened open source protections, defanging and declawing it every step of the way.


Exactly

