This is a good illustration of the core problem we have in "anomaly detection" in data science. Often we are presented with a challenge that, if solved, would negate the presence of the challenge itself: we have to look for events that aren't explained or predicted to exist by our current understanding of the given system. To find them, we collect all events and evaluate their likelihood under our best model, taking the least likely as our "anomalous" events. Then, once found, we have to explain them. But explaining them requires understanding the system well enough to predict the existence of those events. If we understood it that well, we could have produced a better model, and that model would have rated those events as more likely, so they wouldn't have shown up as anomalies in the first place. This contradiction seems to be inherent to the whole concept of anomaly detection.
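A minimal sketch of the loop I'm describing, assuming a multivariate Gaussian as a stand-in for "our best model" (the model choice, the event stream, and the inspection budget k are all illustrative assumptions):

    import numpy as np
    from scipy.stats import multivariate_normal

    # Collect all events and score their likelihood under the current best model
    # (a standard Gaussian stands in for "our best model" here).
    rng = np.random.default_rng(0)
    events = rng.normal(size=(1000, 2))        # stand-in event stream
    scores = multivariate_normal.logpdf(events, mean=np.zeros(2), cov=np.eye(2))

    # Take the least-likely events as our "anomalies".
    k = 10                                     # assumed inspection budget
    anomalies = events[np.argsort(scores)[:k]]

    # The catch: explaining these anomalies amounts to building a better model,
    # and under that better model the same events would no longer score as unlikely.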


It's not a contradiction. The anomalous events tell you to improve your model, so while your model yesterday was insufficient, your model tomorrow will not be. If you're wondering why yesterday's model wasn't already the best possible, it's because you make guesses about what's important and what's not - guesses which are refined by correcting your model in the presence of anomalous events.


If you start with a weak model that doesn't contain all the knowledge you have available, your anomalies will contain many irrelevant or already known things. If you start with a strong model representing the best current understanding, then correcting the model is not so straightforward.


Suggesting that any model that's not 100% consistent with all known information is weak clearly misses the point. Models which can be automated in a reasonable timeframe on limited hardware beat those that can't.

The goal is to find interesting things in the data, not simply take years of data and return “everything looks normal.”


There is one way in which building more housing would not solve the problem: if most renters and homebuyers are basically irrational, systematically overvaluing the location of their residence when better options exist. I would argue that this is actually the case in the SF Bay Area.

For example, I live in Pittsburg, almost at the very end of the yellow line (it was recently extended to Antioch). I work in SF making a decent salary. My commute is pretty long, nearly an hour one way, sometimes longer. My rent? Well, for a two-bedroom, two-and-a-half-bath condo with a garage, porch, and backyard, it costs me roughly $2100/month, and we don't have rent control here. My rent has been raised only twice in the 3 years I've lived here, and only by about $50 each time. The same amount of space in San Francisco, by my best estimate, would cost somewhere between $4k and $5k a month. Also, that space would probably be in a much, much older building, in a denser and more dangerous place than my town.

So, one could immediately ask: if I'm able to work in SF with an SF-commensurate salary and pay this little in rent, why is no one else doing this? As far as I can tell, even people living in SF are very lucky to have less than a 30 min commute one way. So I tack on an extra hour per day of commute - let's say roughly 20-25 hrs per month.

People do vary in their subjective valuation of a long commute. I probably do consider it to be less of a problem than most people. But do people really consider it to be worth nearly $2k-$3k a month? For someone earning near what I do, that works out to more than my average hourly salary for every hour spent on the train. Again, people vary in how they value time spent suboptimally, but I would be surprised if it were worth that much.
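To make the arithmetic concrete, here's the back-of-the-envelope version, using midpoints of the ranges above (rough figures, not exact):

    # Implied hourly price of avoiding the extra commute.
    rent_pittsburg = 2100         # $/month
    rent_sf = 4500                # $/month, midpoint of the $4k-$5k estimate
    extra_hours = 22.5            # extra commute hours/month, midpoint of 20-25

    implied_hourly = (rent_sf - rent_pittsburg) / extra_hours
    print(round(implied_hourly))  # ~107: what you'd effectively pay per commute hour avoided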

Also consider that housing is continuing to be built on the other side of the mountains in the East Bay, and that in Pittsburg and especially in Antioch it is still possible to get a lot of space very cheap. It's not unreasonable to expect transportation to get better over time, either.


> People do vary in their subjective valuation of a long commute. I probably do consider it to be less of a problem than most people. But do people really consider it to be worth nearly $2k-$3k a month?

Yes. Two hours of commute every day makes me extremely impatient and unhappy. And grumpy with coworkers and just generally with fellow humans. Even 30 min one-way, especially on BART, would make me unhappy. 30 min on a bike is OK though.

If it were on a nice train, with seating and desks (to work on a laptop), it'd be a bit different. But BART definitely isn't that.

Before I started to mostly WFH, I had a ~7 min bike ride to work, which was fairly nice.

> It's not unreasonable to expect transportation to get better over time, either.

Do you see any realistic signs that BART into the city will become meaningfully faster or more pleasant in the next ~10 years?


Just FWIW, I have a very similar living space in the city (admittedly older) for $3500. Personally, I would consider the $1400/mo worth it to have my 20-30 minute commute over an hour commute, because to me the extra ~60 minutes a day at home is worth a lot.

I don’t think it’s morally better or anything like that, just worth it to me.

My assumption, based on market rents, is that there are enough people who feel it's "worth it" to drive up the rent; otherwise you are right that more people would be moving out to the edges.


It really depends on a number of factors.

An hour including a short country walk on either side, via a big comfy train with air conditioning, leg room, quiet passengers, etc., can be manageable. Nice, even. Time to read a book, sit with a laptop, look out of the window, unwind after a day's work, etc.

An hour via a cramped, warm, standing-room-only tube train, with multiple changes, and walking along busy roads on either side - hellish.

The latter, for me, would be a temporary measure, no matter the money. I wouldn't consider that place a home. It's a pit stop - I'd consider myself on secondment, basically, saving up to go and live somewhere that isn't a bolthole.


"So, one could immediately ask, if I'm able to work in SF with an SF-commensurate salary, and pay this low in rent, why is no one else doing this? As far as I can tell, people even living in SF are very lucky to have less than a 30min commute one way. So I tack on an extra hour per day of commute, or, lets say, roughly 20-25hrs per month. "

To me this increase in commute time means going from having time for extracurricular activities during the week to losing the whole work week.


Two points I forgot to add:

One, there seem to be a lot of people who commute via car/BART/Caltrain from places on the south Peninsula, in San Jose, or in the southeast bay (roughly the Fremont area). In these places, rent is lower than in SF but not notably lower, especially on the Peninsula. And traffic/commute times coming from there are notoriously horrible.

Two, if people are rational about real estate values, it suggests that they truly don't expect any expansion at all to happen in the greater Bay Area. Not only that, but they don't expect work-from-home or telecommuting to become more commonplace. Either of those would substantially lower the cost of distant living: expansion by increasing the (future) value of real estate investments made now, and WFH by reducing the ongoing cost in lost hours. I can think of arguments for why we should expect more WFH, and to a lesser degree more expansion, but that is a whole other debate.


It's interesting that for his pitch to work (if you invest in OpenAI, you will get up to 100x returns), assuming they do build AGI, it still requires that their AGI acquire a very stable, virtually guaranteed advantage of large magnitude. This all but requires that they share nothing they discover, especially since they apparently plan on using it to make strategic investments that beat the market by a huge margin. That would mean they obtain information (about the economy, world affairs, technology, the future, etc.) not possessed by anyone else; otherwise that information would already be reflected in the market. Any information leakage, whether of their AI or of whatever it learns about the world, would compromise that advantage.

In other words, what Altman says about "we can't only let one group of investors have that" can't be true, or at least isn't sincere. The more investors who have access to it, the more evenly its returns get distributed across society (which would be a good thing, obviously), but that lowers the incentive for the initial investment. They will want to keep it contained within a small group of investors for as long as possible.


Yeah, there's a big assumption about the nature of an AGI breakthrough, mainly that it will be a snowball of runaway value. Why assume this? Is it because we think AlphaGo/Zero can produce human-like cognition? Why wouldn't it be a long, incremental process of X thousand small breakthroughs, over say a decade, where the result is something like average human-level intelligence: maybe the most important invention of our time, but not superintelligence, and not "runaway".

(Then after another X years (or decades) you might figure out superintelligence, if regulations haven't intervened by then.)

If the trajectory is incremental as described, it seems untenable that OpenAI could keep some major monopolistic advantage on AGI, without being completely un-open/sealed off for decade(s).


Going from 0 human-intelligence-level pieces of software to 1 is the hardest part. Once you have 1, you can duplicate it as much as you want, given resources. It can also be pointed inward to improve its own effectiveness.

Actually, there are a lot of good arguments for logistic growth. The only ones for linear or sublinear growth I've heard are not strong; they mostly rest on the implicit assumption "those alarmists and their exponential growth! They probably didn't even consider that it could be slower, more incremental growth" rather than on actual fully-fledged arguments.

There’s also a meta-argument that I have yet to hear reproduced in anti-alarmist sentiments. Which case demands more attention, if it does happen? If there’s a 5% chance of the growth being exponential, how much attention should we devote to that case, where the impact is much higher than linear or sublinear growth. This is such a big deal - it’s like Pascal’s wager but with a real occurrence that I believe most would admit has at least a small chance of happening.

Apologies for any brashness coming across. I’m still figuring out how to communicate effectively about a thing I feel a lot of emotions when thinking about.


I didn't say linear, but playing devil's advocate: isn't that roughly how humans learn? We start out as "pre-intelligent" little creatures who slowly, methodically, with the help of others, develop aptitudes and learn about the world. Learning in fact continues in this manner your entire life, should you keep at it: slow, incremental progress requiring teachers, peers, trial and error, crises, a third of your life spent unconscious in sleep, etc., in the absence of which no learning at all may happen. And the bot may have greater computational constraints than humans under current technology, as the brain is far more efficient than any computer today.

I'm not convinced that each of the arcs, elementary intelligence --> average intelligence --> super intelligence, wouldn't be painstaking and roughly linear.


>It can also be pointed inward to improve its own effectiveness.

Assumption. Intelligence (which isn't defined) may be something that can grow without bound; it may be something that plateaus just above the brightest human yet (again, this is ill-defined - "IQ is a number, there are higher numbers, so intelligence must be able to grow" is about as much thought as some people put into it); or maybe it can grow without bound, but the effort required grows too.


Use "capability" instead of "intelligence" then. Defined as "ability to solve any problem dwighttk has ever dreamt up."

There's pretty much no reason to believe capability peaks roughly above the brightest human.

Our brains aren't yet even integrated with hardware-optimized algorithm solvers onto which we could offload minimax or tree-search problems, simple game-theoretic situations, or any number of things a computer system is much better and faster at than a human.

It's just another one of those things that you can believe if you want to not spend time worrying about the ethics problem.


The implication is that Altman belongs to the 'hard takeoff' sect of AI religion that believes a feedback cycle of recursive self-improvement kicks in, so that the first AI to surpass human levels of intelligence is also the last.


You think so? I think if he thought hard takeoff was a real risk, he would be devoted to actually making their research "open", in the original sense of the word from when Musk was in charge. According to hard takeoff theory, it is more likely to occur due to "hardware overhangs": big organizations accrue lots of compute infrastructure over time, but AI progress occurs in sudden jumps, so all that infrastructure is just sitting there waiting to be eaten up by a hungry AI. These sudden jumps are more likely if AI research is generally undertaken in secret, with leaks or espionage producing a staggered spread of knowledge. This is at least my understanding.


I'm not up on all the epicycles in hard takeoff theory, so I'd be happy to hear more from you or other people who have followed it into more fantastical territory.


If OpenAI ends up becoming some sort of hedge fund, I'd be very, very disappointed...


Or AGI would provide so much benefit that it does not need to be exclusive to OpenAI for the company to accrue significant economic returns.


Yes, this is the scenario where the benefits are widely distributed. But in that case the value of the entire planet goes up; you would not necessarily need to make an investment in OpenAI in order to reap the benefits. Unless you think that scenario is more likely to happen with OpenAI than via the usual academic research + shared industry research (and I don't see why that would be the case).


Something along this line: Google increases the value of the whole internet by a lot, but you also benefited more by investing in it in the early days.


In the normal case of one innovation in the free market, there is a punctuated moment of growth followed by a plateau. The owners of the innovation get the biggest share of the rewards, but after the plateau the benefits become more distributed. They would need to keep innovating if they want to gain further.

In the case of AGI, there is no plateau after the initial acceleration. With recursive self-improvement, there is just an exponential explosion of growth. Unless this event is initiated in a way specifically intended to distribute the rewards, the feedback loop would prevent external pressures from incentivizing any kind of distribution of rewards.
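A toy sketch of the difference, with purely illustrative constants (this is the shape of the argument, not a prediction):

    # One-off innovation: logistic-style growth that plateaus at a ceiling.
    def one_off(steps, ceiling=10.0, rate=0.5):
        value = 1.0
        for _ in range(steps):
            value += rate * value * (1 - value / ceiling)
        return value

    # Recursive self-improvement: each gain is proportional to current
    # capability, so growth compounds without a plateau.
    def recursive(steps, rate=0.5):
        value = 1.0
        for _ in range(steps):
            value += rate * value
        return value

    for t in (5, 10, 20):
        print(t, round(one_off(t), 1), round(recursive(t), 1))
    # one_off flattens out near the ceiling; recursive keeps compounding.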


So you're saying that OpenAI is producing a public good in which the whole of society has a stake?


The "hivemind" argument seems to predict that as society scales up (either through massive population growth or through faster and better interconnectedness, such as through the internet) that as a result we should be seeing much faster gains in technological progress especially in the last few decades or so. However there are quite a few observations that a lot of this progress has sort of slowed down compared to the early 20th century (see the arguments for "technological stagnation"). At the very least, technological progress hasn't increased linearly with population growth and better communication. In other words, the rate of technological progress looks more discontinuous and not obviously a function of societal coherence.


I'd argue that what we're seeing is both a centralization of computational infrastructure and a massive infiltration of everyday life by digital (computational) technology, often for no reason other than that it's a way to make money. We certainly have not stagnated with respect to hardware or software, but just keep trying to scale to the stars. I'm cynical in that I believe the "AI first" movement is basically offloading the next phase of computational progress onto computers themselves, because we're either too tired, have run out of ideas, can't keep up, or it's necessary as a competitive business strategy, or it's simply cheaper.

Technology keeps giving people nice gizmos and plenty of flashy entertainment, but it's mostly self-serving, technology begetting yet more technology without clear purpose, as evidenced by stupid shit like Bluetooth toothbrushes (https://www.lookfantastic.de/oral-b-pro4000-x-action-toothbr...) and the insane number of cryptocurrency Ponzi schemes. Toothbrushes that need a freaking network connection and bits worth absolutely nothing. The crescendo of our civilization! Well done. If there's an overmind, it's feeding us shit.

(Sorry again, I'm cynical.)


I don't think your interpretation of that prediction is accurate. Rather, it would be that there are "bursts" of technological progress followed by slower or no gains while the world "catches up." That seems to more accurately follow the history of technological progress.

I think a better interpretation would be that those "bursts" happen at tighter intervals, and if you look at the course of history that seems to be the case.

For example, the period between wide adoption of horses/plows in agriculture (1700s) and wide adoption of internal combustion (1940s) was roughly 250 years. From internal combustion to wide adoption of transistors (1970s) was about 30 years, transistors to the internet about 20 years, and the internet to the next candidate burst, deep learning (2012), about 15 years.
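Putting those rough gaps side by side (the dates are approximate, so take the ratios loosely):

    # Approximate gaps between adoption "bursts", in years.
    gaps = [250, 30, 20, 15]   # plows->combustion, ->transistors, ->internet, ->deep learning
    ratios = [a / b for a, b in zip(gaps, gaps[1:])]
    print(ratios)  # roughly [8.3, 1.5, 1.3]: intervals shrink, though not at a constant rate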

Not sure if that's a perfect fit but I think it represents a pretty compelling case.


I think that bursts of technological progress follow more from the model of individualized intelligence, whereas continuous progress follows from the distributed, networked model of intelligence.

A promoter of the distributed model of intelligence might argue that Einstein was only able to produce the general theory of relativity because of the knowledge already contained within society, such as the mathematics and physics that had already been built up to that time. All the stuff from Euclid to Newton to Gauss to Poincare and Minkowski that Einstein's work relied upon.

Does that imply that Einstein wasn't really smart? If you narrow your focus to just the innovation Einstein made, where did that come from? Did it come from the "hivemind" or was Einstein himself doing something special that allowed him to develop the insight?

More individualized intelligence would predict that we would see smaller intervals between bursts as society increases in size and connectedness (more chances for Einsteins to appear, more likelihood that they can work together). But if intelligence is somehow an emergent process from the network of all humans itself, then as society grows we shouldn't see many bursts at all, just a fairly continuous increase in knowledge as little bits and pieces get absorbed and distributed.


I think you're making too many assumptions about the inner workings of "the brain".

If we look at actual brains - including Einstein's - are they not bursty? Don't people have periods of greater intellectual output with lulls in between? Seems to match pretty well.

