Hacker News | microtherion's comments

I wonder whether that was the inspiration for the extensive use of green in the interiors of Severance.

I've been wondering about that as well. It appears he & Vox Day were friends/collaborators and may have shared some political views: https://postcardsfromtheageofreason.com/2026/03/01/bpr1/

good athlete != good person

good actor != good person

good writer != good person

good programmer != good person

good person != nice person

nice person != talented person

TANJ TANSTAAFL SLATFATF


He may have been a bit of a Milkshake Duck

stable as in "close the stable doors after the horse has bolted"

It's sort of ironic that at the time, there were many complaints that Apple made its devices thin at the expense of more important features. Now that M series MacBooks are thicker again, there are complaints that they are too thick.


I owned an i9 MBP with a discrete GPU. It absolutely was too thin. The CPU and GPU ran hot, it throttled like crazy. It would drain battery while USB-C docked while idling. Worst laptop I've ever owned.

The M1 Max I replaced it with was the opposite. I don't think I heard the fans for the first month. But it was much larger.

Based on the fanless Air, I strongly suspect an M1 Max in the old chassis would have been totally fine for non synthetic workloads and an M1 Pro would probably have been fine in all scenarios.

But I think they over corrected on the chassis design when they were shipping borderline faulty products and haven't walked it back yet.


I speculate they gave themselves a lot of thermal engineering margin to bump up TDP with the M-series MBP design (or perhaps they underestimated how good the M-series chips were going to be). The battery being at the TSA limit of 100Wh is quite nice as well. Another benefit is that it now differentiates the "Pro" line from the rest of the laptop lineup quite significantly. For most people the Air has enough power now, and it's plenty thin and light. The Pro line is for "true" pros with actually intense workflows.

I'm a dev and the MBP line is definitely overkill for me. The 15" MBA handles everything I can throw at it.


To me, the speech sounds impressively expressive, but there is something off about the audio quality that I can't quite put my finger on.

The "Anger Speech" has an obvious lisp (Maybe a homage to Elmer Fudd?). But I hear a similar, but more subtle, speech impediment in the "Adoration Speech". The "Fearful Speech" might have a slight warble to it. And the "Long Speech" is difficult to evaluate because the speaker has vocal fry to an extent that I find annoying.


> speaker has vocal fry to an extent that I find annoying.

Was it trained on Sam Altman?


There's a subtle modulation that happens on all of the samples. It sounds almost like some kind of harmonic or phase shift? This is something I notice with every AI generated speech out there.


This is bound to be a question that will be increasingly harder to answer. For instance, Apple processors have at least two different neural accelerators/matrix coprocessors (ANE and AMX) in addition to the integrated GPU. Do these count as "CPU"?


I think the answer is rather simple and boring -- only the CPU type commonly used in cheap cloud machines counts. This still is x86 only.

The ones at home, such as Apple's, don't count for serious workflows that must run reliably.


Personally, I love synthesis that can be generated on the client machine, in real time. For some applications, like screen readers, this is a really important feature.

Of course, the big interest these days is in cloud based assistants, where synthesizing on the server and piggybacking on the rest of the answer is quite reasonable.


> Not only is the moon further, you also need to use more fuel to land on it

And take off again, if reusable spacecraft are meant to be used.


I'm quite skeptical of Tesla's reliability claims. But for exactly that reason, I welcome a company like Lemonade betting actual money on those claims. Either way, this is bound to generate some visibility into the actual accident rates.


One thing that was unclear to me from the stats cited on the website is whether the quoted 52% reduction in crashes is when FSD is in use, or overall. This matters because people are much more likely to use FSD in situations where driving is easier. So, if the reduction is just during those times, I'm not even sure that would be better than a human driver.

As an example, let's say most people use FSD on straight US Interstate driving, which is very easy. That could artificially make FSD seem safer than it really is.

My prior on this is supervised FSD ought to be safer, so the 52% number kind of surprised me, however it's computed. I would have expected more like a 90-95% reduction in accidents.


I think this might be right, but it does two interesting things:

1) it lets Lemonade reward you for taking safer driving routes (or living in a safer area to drive, whatever that means)

2) it (for better or worse) encourages drivers to use it more. This will improve Tesla's training data but also might negatively impact the FSD safety record (an interesting experiment!)


> ...but also might negatively impact the fsd safety record (an interesting experiment!)

As a father of kids in a neighborhood with a lot of Teslas, how do I opt out of this experiment?


Do your kids randomly run into the road? I was worried about that, but mine just don't run into the road for some reason; they are quite careful about it, seemingly by default, after having "getting bumped into by a car" explained to them. I'm not sure if this is something people are just paranoid about because the consequences are so bad, or if some kids really do just run out into the road randomly.


Some kids really do just run into the road seemingly randomly. Other kids run in with a clear purpose, not at all randomly, and sometimes (perhaps very rarely, but it only takes once and bad luck) forget to look both ways. Kids are not cookie cutter copies that all behave the same way in the same circumstances (even with the same training).


> Some kids really do just run into the road seemingly randomly. ... sometimes (perhaps very rarely, but it only takes once and bad luck) forget to look both ways.

Just this week I was telling my law school contract-drafting class that part of our job as lawyers and drafters is to try to "child-proof" our contracts, because sometimes clients' staff understandably don't fully appreciate the possible consequences of 'running into the street,' no matter how good an idea it might seem at the time.


I'm more worried about Teslas hitting my kids when they're on bicycles, or Teslas swerving off the road into yards. Regardless, it sure would be nice if technology controlling multi-ton vehicles on public roads were subject to regulations, or at least had clearly defined liability.


Kids will randomly run into the road. They might run after a ball or a dog so that it doesn't end up on the other side or run over, or they are simply too excited to remember your stern road safety talk.

The first thing I was taught when I learned to drive was: if you see a ball on the road, you stop immediately. This valuable lesson has saved one kid (and my sanity) with me at the wheel.


This guy couldn't follow that rule https://www.youtube.com/watch?v=7E_FtC1BLH0


Yes it does happen. Otherwise smart kids will do dumb stuff sometimes. Like see their friend across the road, but at that moment someone on a motorcycle is accelerating out of their driveway, kid runs across, dead



Same way you opt out of having drunk drivers drive home along your street and pass out while driving, or drivers having a stroke or other blood clot while driving and crashing into parked cars.


The insurance industry is a commercial prediction market.

It is often an indicator of true honesty, providing there is no government intervention. Governments intervene in insurance/risk markets when they do not like the truth.

I tried to arrange insurance for an obese western expatriate several years ago in an Asian country, and the (western) insurance company wrote a letter back saying the client was morbidly obese and statistically likely to die within 10 years, and they should lose x weight before they could consider having insurance.


I could see prediction markets handling insurance in the future. They could probably arrive at fairer prices, but it would have to be done right to avoid bad incentives. Interesting to think about how that might work.


> providing there is no government intervention.

You mean like forcing people to buy it and then shaping what products can and can't be offered with a spiderweb of complex rules?


The clearest example is the state of California preventing insurance companies from increasing annual premiums when risks increase. Please understand I have no political opinion about this. As a result, a lot of insurers have completely withdrawn, and it's now not possible for many people to insure their houses properly.

https://www.theguardian.com/us-news/2023/may/27/state-farm-h...

With no government intervention, the price of all fire insurance in California would increase materially to reflect the genuine risk of wildfire damage.


> quite skeptical of Tesla's reliability claims

I'm sceptical of Robotaxi/Cybercab. I'm less sceptical that FSD, supervised, is safer than fully-manual control.


Where I live isn't particularly challenging to drive (rural Washington), but I'm constantly disengaging FSD for doing silly and dangerous things.

Most notably my driveway meets the road at a blind y intersection, and my Model 3 just blasts out into the road even though you cannot see cross traffic.

FSD stresses me out. It's like I'm monitoring a teenager with their learner's permit. I can probably count the number of trips where I haven't had to take over on one hand.


> I'm constantly disengaging FSD for doing silly and dangerous things.

You meant “I disable FSD because it does silly things”

I read “I disable FSD so I can do silly things”


Exactly. Every bad situation I’ve been in with FSD was when I misread the situation and disengaged it during a maneuver that it was handling safely


It feels unlikely that blindly entering cross traffic, as described in the previous post, is going to be a safe maneuver, though.


I use it for 90% of my driving in Austin and it’s incredible


Do you have HW3 or HW4?


The newest FSD on HW4 was very good in my opinion. Multiple 45min+ drives where I don’t need to touch the controls.

Still not paying $8k for it. Or $100 per month. Maybe $50 per month.


It's your sanity (and money) ¯\_(ツ)_/¯


HW3, unfortunately. Missed the HW4 refresh by a couple of months.


It's edging into the intersection to get a better view on the camera. It's further than you would normally pull out, but it will NOT pull into traffic.


It's not edging; it enters the street at a consistent speed (usually >10mph) from my driveway. The area is heavily wooded, and I don't think it "sees" the cross direction until it's already in the road. Or perhaps the lack of signage or a curb makes it think it has the right of way.

My neighbor joked that I should install a stop sign at the end of my driveway to make it safer.


Or just manually drive in your own driveway.

The fact that it doesn't handle some specific person's driveway well is far from a condemnation of the system. I'm far more concerned about it mishandling things on "proper" roads at speed.


The software probably has a better idea of their car’s dimensions than a human driver, so will be able to get a better view of traffic by pulling out at just the right distance.


Having handed over control of my vehicles to FSD many times, I’ve yet to come away from the experience feeling that my vehicle was operating in a safer regime for the general public than within my own control.


Keeping 1-2 car lengths of stopping distance is likely worth over a 50% reduction in at-fault damages.


You can get this with just a fairly dumb radar cruise control system, though.


I think you greatly overestimate humans


The problem IMO is the transition period. A mostly safe system will make the driver feel at ease, but when an emergency occurs and the driver must take over, it's likely that they won't be paying full attention.


We aren’t talking about the average human here.

On average you include sleep deprived people, driving way over the speed limit, at night, in bad weather, while drunk, and talking to someone. FSD is very likely situationally useful.

But you can know most of those adverse conditions don’t apply when you engage FSD on a given trip. As such the standard needs to be extremely high to avoid increased risks when you’re sober, wide awake, the conditions are good, and you have no need to speed.


> On average you include sleep deprived people, driving way over the speed limit, at night, in bad weather, while drunk, and talking to someone. FSD is very likely situationally useful.

Are those people also able to supervise FSD like the law and Tesla expect them to? That's also a question.


FSD will pull over and stop if it detects the driver has passed out. Can the law do that automatically?


> you greatly overestimate humans

Tesla's FSD still goes full-throttle dumbfuck from time to time. Like, randomly deciding it wants to speed into an intersection despite the red light having done absolutely nothing. Or swerving because of glare that you can't see, and a Toyota Corolla could discern with its radars, but which hits the cameras and so fires up the orange cat it's simulating on its CPU.


Yeah, even Corollas have better sensors than a Tesla for driving in fog. It's embarrassing.


> I'm less sceptical that FSD, supervised, is safer than fully-manual control.

I'm very skeptical that the average human driver properly supervises FSD or any other "full" self driving system.


Supervised FSD — automating 99.9% of driving and expecting drivers to be fully alert for the other 0.1% — appears to go against everything we know about human attention.


this ^^


> betting actual money on those claims

Insurance companies can let marketing influence rates to some degree, with programs that tend to be tacked on after the initial rate is set. This self driving car program sounds an awful lot like safe driver programs such as GEICO Clean Driving Record, State Farm Good Driver Discount, Progressive Safe Driver, Progressive Snapshot, and Allstate Drivewise. The risk assessment seems to be less thorough than the general underwriting process, and to fall within some sort of risk margin, so to me it seems gimmicky and not a true innovation at this point.


Lemonade will have some actual claim data to support this already, not relying on the word of Tesla.


They don’t bet money on just “I’m quite skeptical because I hate the man”, but on actual data provided by the company.

That’s the difference.


The skepticism and hate is based on observing decades of shameless dishonesty, which is itself a form of data provided by the company: https://motherfrunker.ca/fsd/


Still doesn’t change my point: as of today, being skeptical based on outdated data or historical series is just nonsense. I mean, insurance quotes work in a totally different way.


Do you drive a HW4? I’m 90% FSD on my total car miles


It's all a part of focusing on their core business, like… paying a $28M bribe to Melania Trump.


The other huge, huge difference is that one of the Steves has demonstrated he was able to build a successful product without the other's assistance.


You could say that about the iPod or the iPhone, which Woz wasn't involved in, but when you do the math, there's only one Woz, and he was essential in defining the company in the 20th century; look how many people it took to "replace" him when it came to Jobs "alone" defining the company in the 21st century.


You could also say it about the Mac, which Woz was, at best, peripherally involved in. Not saying that Jobs created these products "alone" — he obviously did not. But he was a key contributor.

Meanwhile, Woz has been involved in all sorts of products, including a cryptocurrency, and I can't think of a single one that got significant traction.


Another thing that people fail to remember is that Woz designed the Apple II, which is what made Apple a highly profitable company for many years, but instead of embracing that success, Jobs repeatedly tried to kill and replace the Apple II with the Lisa, then the Macintosh, and drove Apple into financial trouble. Apple would have done better, at that time, by simply building more advanced and backwards compatible followups to the Apple II, which is what consumers actually wanted (the original Macintosh was an expensive piece of shit).

The Apple II had 7 expansion slots and was easy to open and service yourself. It was a machine designed for hackers, and it was highly flexible. Jobs kept trying to push his all-in-one closed design when it made no sense. He did unfortunately succeed eventually. What Jobs did after his return was to turn Apple into a "luxury brand", where iPhones are perceived a bit like Prada handbags. One thing I will give Apple is that there is still no PC equivalent to Apple laptops. That can probably only really happen if mainstream PC manufacturers fully embrace Linux.


As Henry Ford is (spuriously) claimed to have said: "If I'd asked my customers what they wanted, they'd have said a faster horse."

Apple did build Apple II models, up to and including the Apple IIgs. They had a good run. And the line was not without its flops — the Apple III was a notorious disaster, though allegedly more due to Jobs than Wozniak.

But none of the pure 8-bit PC vendors survived the 1980s. One of the better qualities of Jobs was that he was not afraid of the company disrupting itself — foregoing the short term success of the Apple II line in favor of the Mac, which in the long run was vastly superior. The same situation played out with the iPhone disrupting the iPod.

