
Lines (or amount) of code written is a terrible, easily gamed metric.

Some of the very best engineers write the fewest lines of code, often making fewer yet better abstractions that fulfill more business use cases more simply. They might also help others write less code.

Some of the most prolific coders are great. And some are terrible. The challenge is discerning the two. Coaching a prolific yet bad coder to slow down can be as challenging as coaching up a slower / not yet confident coder. And the slower coder will do vastly less damage in the interim.


I understand this as theoretical possibility, and it could lead you astray if you're trying to tell a 40% developer from a 60% developer... but in practice I have never met a superstar IC who doesn't also ship a ton of code (even while they help out others). If you fire the 10% of ICs who push the least code, to within a rounding error you're extremely unlikely to regret it.


Your best engineers are facilitators, not coding monkeys.


Those are not engineers then, they're managers.

It's strange to see this confusion when it comes to software. Any other discipline and you wouldn't say engineers that don't do any engineering are the best engineers.


They are still ICs, not people managers.


Project management != people management.

Either way, IC means primarily a producer of work product, not a manager.


Project management != manager. And your PM/PO/whatever won't coach your junior devs...


That’s usually what poor engineers tell themselves.


Your best engineers are both.


You can't do both unless you're the only one on your team.

The more people, the less code you'll write.

Software engineering's goal is to solve human problems; you can't escape human relations, which eventually means facilitating things.


That may be true but people here aren't talking about firing the bottom 10% of ICs, they're talking about firing "50% of Google".


Am surprised to see the pushback asserting this will cause prices to rise. This speed was practical on Comcast's cable system back in 2007 (when we got it), and probably earlier. There is a wealth of evidence that higher bandwidth has minimal to marginal cost for typical residential usage.

On to the actual proposal, it raises the minimums from:

* Down: 25 Mbps --> 100 Mbps

* Up: 3 Mbps --> 20 Mbps

As head of a household of 5, I carefully monitored and managed our internet usage during covid, and upgraded from Comcast to AT&T fiber (1 Gbps symmetric). I can say with high certainty that the previous minimum (25/3) would be severely inadequate for a family of 5. This bump to 100/20, while not amazing, is a good step in the right direction. It would make 5 people working/learning remotely at least _possible_.

TLDR: The tech is cheap and decades old. There should be minimal cost to ISPs. The best arguments in this thread, imho, are that we should strive for even higher.
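A back-of-envelope sketch of why 25/3 is inadequate for five concurrent remote workers; the per-stream rates below are illustrative assumptions, not measured values:

```python
# Rough per-person bandwidth check. The ~2.5 Mbps per HD video call
# is an assumed figure for illustration.
HD_CALL_DOWN = 2.5   # Mbps down per video call (assumed)
HD_CALL_UP = 2.5     # Mbps up per video call (assumed)

def household_need(people, down_per=HD_CALL_DOWN, up_per=HD_CALL_UP):
    """Total Mbps needed if everyone is on a call at once."""
    return people * down_per, people * up_per

down, up = household_need(5)
print(f"5 simultaneous calls need ~{down}/{up} Mbps")
```

Under these assumptions, the old 3 Mbps upstream floor can't carry even two calls, while the new 20 Mbps floor covers five streams with headroom.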


The tax is proportional to the number of days in California, and is only on wealth above $30M (married) or $15M (single).

How many people do you think have $30M and spend >= 60 days a year in California, that do not benefit from business/property/services/educated employees in California? Very, very few.


but do people who spent >= 60 days in california one time continue to accrue benefits from their visit over the next decade? that's the egregious part.


The tax is based on the fraction of time spent in CA, both days and years. The intent (for years) is likely to avoid people striking it rich in startups/movies/etc and then immediately leaving the state to avoid a tax on the windfall. The intent (for days) is probably to avoid a high-net-worth CEO "moving" to a nearby state and flying in frequently to conduct in-state business. I don't know many working people who spend 60 days/year on vacation :-)

From the bill itself: http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml...

"... percentage of days in the year such taxpayer was present"

"... the portion of a taxpayer’s wealth subject to the tax imposed by this part shall be multiplied by a fraction, the numerator of which shall be years of residence in California over the 10 last years, and the denominator of which shall be 10"

So taking a contrived example of someone most affected who visited for the briefest of 60 day visits one time in 10 years, we would have 60/365 * 1/10 * 0.4% = .006%/yr tax. That seems pretty reasonable for a CEO/movie-star/whatever who visits (most likely to earn more money based on in-state activities). Even over the ten year period this amounts to 0.06% which is TINY. For someone with $100M net worth, over ten years it would amount to $42K of state tax (note that the first $30M is not taxed).
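The arithmetic above, computed without the intermediate rounding, can be sketched as follows (the function mirrors the comment's contrived example; parameters are illustrative, not tax advice):

```python
# Sketch of the proration described above: $30M exemption (married),
# 0.4% rate, prorated by days in CA and by years resident of the
# last 10. Numbers follow the comment's contrived example.
def ca_wealth_tax(net_worth, days_in_ca, years_resident_of_last_10,
                  exemption=30e6, rate=0.004):
    taxable = max(net_worth - exemption, 0)
    fraction = (days_in_ca / 365) * (years_resident_of_last_10 / 10)
    return taxable * rate * fraction

annual = ca_wealth_tax(100e6, days_in_ca=60, years_resident_of_last_10=1)
print(f"annual: ${annual:,.0f}, over 10 years: ${10 * annual:,.0f}")
```

The $42K figure comes from rounding the ten-year fraction down to 0.06% before multiplying; the unrounded total is closer to $46K, still tiny relative to a $100M net worth.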


We moved about three years ago and replaced 50+ bulbs with 2 1/2 brands of LEDs.

Philips - around 40 regular-socket, BR30 recessed, and 4x Hue bulbs. Not a single failure or flicker. Am particularly a fan of the warm glow for nicer dimming at night.

Hyperikon. They were the only brand with the smaller PAR16 bulbs at the time, at least in warmer color-temp. They look absolutely fantastic as down-wash lights on walls. However, 2/8 are out or intermittent after just 3 years of low usage (they were in a low traffic area).

Random brands from promos / utility give-aways. Used three of these in the garage, and 2/3 already burned out.

TLDR: Quality matters, and high quality bulbs definitely last IME.


Also a fan of warm glow bulbs for any LEDs that get dimmed. FYI Ikea also has non-smart bulbs with warmer colour temperature as they dim.


Skimmed a bit and found some snippets; based on them, I can't take this paper seriously, as it dismisses unsupervised learning / language models over large datasets. Yes, sec 4.3.4 briefly discusses recent work in this area, but it dismisses that work by cherry-picking the least positive result of many.

"Only if we have a sufficiently large collection of input-output tuples, in which the outputs have been appropriately tagged, can we use the data to train a machine so that it is able, given new inputs sufficiently similar to those in the training data, to predict corresponding outputs"

This ignores recent work with large language models that do generalize, zero-shot, to novel tasks.

"supervised learning with core technology end-to-end sequence-to-sequence deep networks using LSTM (section 4.2.5) with several extensions and variations, including use of GANs"

This reads like something generated from an LM (e.g. GPT-2):

* Where is any mention of attention or the Transformer?

* GANs? Have any recent works used GANs successfully for text? There are a few, e.g. CycleGAN, but not widespread afaict.


>> This ignores recent work with large language models that do generalize, zero-shot, to novel tasks.

Which work is that?


OpenAI trained a large (1.5B parameter) Transformer model called GPT-2 on a diverse set of pages from the web. From their paper, GPT-2 "achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting"

Blog entry with link to the paper: https://openai.com/blog/better-language-models/


Thank you for the link.

I'm not sure I'm convinced by OpenAI's claim that their model performs zero-shot learning. It depends on what exactly they mean by zero-shot learning. My understanding, from reading the linked article (again; I remember it from when it was first published) is that, although their GPT-2 model was not trained on task-specific datasets, there was no attempt to ensure that testing instances for the various tasks they used to evaluate its zero-shot accuracy were not included in the training set. The training set was a large corpus of 40 gigs of internet text. The test set for e.g. the Winograd Schema challenge was a set of 140 Winograd schemas (i.e. short sentences followed by a shorter question), so it's very likely that the training set had comprehensive coverage of the testing set, for this task anyway. I don't know about the other tasks.
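The contamination worry described above can be checked mechanically: flag any test instance that shares a long word n-gram with the training corpus. A minimal sketch (the 8-gram window is an illustrative choice; the sample sentences are made up):

```python
# Minimal train/test overlap (contamination) check: flag a test
# instance if any of its word n-grams also appears in the training
# corpus. The n=8 window size is an illustrative assumption.
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(test_instance, train_ngrams, n=8):
    """True if the test instance shares any n-gram with training data."""
    return bool(ngrams(test_instance, n) & train_ngrams)

train = ngrams("the trophy would not fit in the brown suitcase "
               "because it was too big")
schema = "The trophy would not fit in the brown suitcase because it was too small."
print(contaminated(schema, train))
```

At web scale this would be done with Bloom filters rather than in-memory sets, but the principle is the same: without such a check, "zero-shot" scores on a 140-item test set are hard to interpret.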

