crystal_revenge's comments | Hacker News

> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

What makes you believe this? The current trend in all major providers seems to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

> Slot machines have variable reward schedules by design

LLMs from all major providers are tuned using RLHF, optimizing them in ways we don't entirely understand to keep you engaged.

These are incredibly naive assumptions. Anthropic/OpenAI/etc don't care if you get your "answer solved quickly", they care that you keep paying and that all their numbers go up. They aren't doing this as a favor to you and there's no reason to believe that these systems are optimized in your interest.

> I built things obsessively before LLMs. I'll build things obsessively after.

The core argument of the "gambling hypothesis" is that many of these people aren't really building things. To be clear, I certainly don't know if this is true of you in particular, it probably isn't. But just because this doesn't apply to you specifically doesn't mean it's not a solid argument.


> The current trend in all major providers seems to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

Well stated


There’s a line to be trod between returning the best result immediately and forcing multiple attempts. Google got caught red-handed reducing search quality to increase ad impressions; there's no reason to think the AI companies (of which Google is one) won't slowly gravitate to the same.

My (possibly dated) understanding is that OpenAI/Anthropic are charging less than it costs right now to run inference. They are losing money while they build the market.

Assuming that is still true, then they absolutely have an incentive to keep your tokens/requests to the absolute minimum required to solve your problem and wow you.


> What makes you believe this?

Simply, cut-throat competition. Given that multiple nations are funding different AI labs, quality of output and speed are among the most important things.


Dating apps also have cut-throat competition and none of them are optimised for minimising the time you spend on the app.

~90% of them are owned by Match

They don’t, they’re all owned by Match group

sigh We're doing this lie again? Quality of Outcome is not, has never been, and if the last 40 years are anything to go on will never be a core or even tangential goal. Dudes are trying to make the stock numbers go up and get paid. That's it. That's all it ever is.

You're just being pedantic and cynical.

The goal of any business is, in principle, profit; by your terms all of them are misaligned.

The fact of the matter is that customers are receiving value, and that value has been a good proxy for which companies will grow to be successful and which will fail.


I mean, yeah. All businesses are misaligned, unless a fluke aligns the profit motive with the consumers for a brief period.

I'm being neither pedantic nor cynical. Do you need a refresher on value proposition vs. actual outcomes over the last few decades of breathlessly hyped tech bubbles? Executive summary: the portions of the tech industry that attract the most investment consistently produce the worst outcomes; the more cash, the shittier the result. It's also worth noting that "value" is defined as anything you can manipulate someone to pay for.

When he says:

> You're just being pedantic and cynical.

What he means is that your point does not align with his narrow world view, and he's labelling you as a pedant and a cynic to justify writing off your opinion altogether.

It's a projection of his fragile world view. Don't take it personally.


Hey man people either get it or they don't. We're doomed.

How is nation-states funding private corporations "cut-throat competition"?

Ok, to be very honest, I wrote that in the middle of having a couple of drinks. I guess what I mean is, countries are funding AI labs because it can turn into a “winner-takes-all” competition. Unless the country starts blocking the leading providers.

Private companies will turn towards the best, fastest, cheapest (or some average of them). Country borders don’t really matter. All labs are fighting to get the best thing out to the public for that reason, because winning comes with money, status, prestige, and actually changing the world. These kinds of incentives are rare.


> countries are funding AI labs because it can turn into a “winner-takes-all” competition.

Winner takes what exactly? They can rip off react apps quicker than everyone else? How terrifying.


What does this even mean? Are you disputing the fact that AI labs are competing with each other because they are funded by nation-states?

Why do you have to compete if you can just say "but China!" and get billions more dollars from the government?

He's disputing the idea that nationally funded business initiatives are competitive.

Cut-throat competition between nations is usually called war. In war, gathering as much information as possible on everyone is certainly a strategic priority. Selling psyops about how many benefits will come for everyone willing to join the one-sided industrial dependency is also a thing. Giving a significant boost to potentially adversarial actors is not a thing.

That said, the universe doesn't obligate us to think the cosmos is all about competition. Cooperation is always a viable path, often with far more long-term benefits at scale.

Competition is superfluous, self-inflicted masochism.


> The current trend in all major providers seems to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

I was surprised when I saw that Cursor added a feature to set the number of agents for a given prompt. I figured it might be a performance thing - fan out complex tasks across multiple agents that can work on the problem in parallel and get a combined solution. I was extremely disappointed when I realized it's just "repeat the same prompt to N separate agents, let each one take a shot and then pick a winner". Especially when some tasks can run for several minutes, rapidly burning through millions of tokens per agent.

At that point it's just rolling dice. If an agent goes so far off-script that its result is trash, I would expect that to mean I need to rework the instructions and context I gave it, not that I should try the same thing again and hope that entropy fixes it. But editing your prompt offline doesn't burn tokens, so it's not what makes them money.


Cursor and others have a subagent feature, which sounds like what you wanted. However, there has to be some decision making around how to divide up a prompt into tasks. This is decided by the (parent) model currently.

The best-of-N feature is a bit like rolling N dice instead of one. But it can be quite useful if you use different models with different strengths and weaknesses (e.g. Claude/GPT-5/Gemini), rather than assigning all to N instances of Claude, for example. I like to use this feature in ask mode when diving into a codebase, to get an explanation a few different ways.


The bill is unrelated to their cost. If they can produce an answer in 1/10th of the tokens, they can charge 10x more per token, likely even more.

That is simply not true; token price is largely determined by the token price of rival services (even before their own operational costs). If everybody else charges about $1 per million tokens, then they will also charge about $1 per million tokens (or slightly above/below) regardless of how many answers per token they can provide.

This applies when there is a large number of competitors.

Now companies are fighting for the attention of a finite number of customers, so they keep their prices in line with those around them.

I remember when Google started with PPC - because few companies were using it, it cost a fraction of recent prices.

And the other issue to solve is the future lack of electricity for land-based data centers. If everyone wants to use LLMs… but data center capacity is finite due to available power -> token prices can go up. But IMHO devs will find an innovative, less energy-demanding approach to tokens… so token prices will probably stay low.


Opus 4.6 costs about 5-10x of GLM 5.

It only matters if the rivals have the same performance. Opus pricing is 50x Deepseek's, and >100x that of small models. It should only match rivals if the performance is the same, and if they can produce a model with 10x lower token usage, they can charge 10x more.

Gemini Flash's price increased by something like 5x, IIRC, when it got better.


I bet that the actual "performance" of all the top-tier providers is so similar that branding has a bigger impact on whether you think Claude or ChatGPT performs better.

I don't know if "performance" is relevant in this context, where these "tools" are marketed to non-technical developers (read: "vibe coders") who are by definition unable to verify the quality of the code produced by their LLMs.

I think branding is the entire game.

My illiterate, LLM-addict cousin is convinced that Claude is the answer to the ultimate question of life, the universe, and everything.

Criticisms of the code he (read: Claude) generates are not relevant to him -- Claude is the most intelligent being to ever exist, therefore, to critique its output is a naive waste of breath.


Performance or perception of performance

Potato, potahto. Tomato, tomahto.


What businesses charge for a product is completely unrelated to what it costs them.

They charge what the market will bear.

If "what the market will bear" is lower than the cost of production then they will stop offering it.


Companies make a loss on purpose all the time.

Not forever. If that's their main business then they will eventually have to profit or they die.

> and equity is a lottery ticket.

What's surprised me is that most of the younger coworkers I've had, having grown up largely through boom times, absolutely believe that this equity has tremendous value. I've had multiple younger coworkers talk about how excited they are to have so much equity in companies that have no visible path to a liquidity event.


There's a mythology in this business and it's highly seductive. You need to get burned once or twice by the reality before you learn.

> "AI is completely useless."

This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.

AI is a superior solution to the problem Stack Overflow attempted to solve, and it's really great at quickly building bespoke, but fragile, tools for some niche problem. However, I have yet to see a single instance of it being used to sustainably maintain a production code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect, PRs that are largely AI generated.


> There's a mass psychosis happening

There absolutely is but I'm increasingly realizing that it's futile to fight it.

The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

Even if you restrict yourself to small, open models, there is so much left unexplored in messing with their internals. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself only to an API endpoint, isn't there something more clever we can be doing than badly re-implementing code that already exists on GitHub through vibe coding?

But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.


Yes, it's been odd to observe the parallels with the web3 craze.

You asked people what their project was for and you'd get a response that made sense to no one outside of that bubble, and if you pressed on it, people would get mad.

The bizarre thing is that this time around, these tools do have a bunch of real utility, but it's become almost impossible online to discuss how to use the tech properly, because that would require acknowledging some limitations.


Very similar to web3! On paper the web3 craze sounded very exciting: yes, I absolutely would love an alternate web of truly decentralized services.

I've been pretty consistently skeptical of the crypto world, but with web3 I was really hoping to be wrong. What's wild is that there was not a single truly distributed, interesting/useful service at all to come out of all that hype. I spent a fair bit of time diving into the details of Ethereum and very quickly realized the "world computer" there (again, a wonderful idea) wasn't really feasible for anything practical (I mean, other than creating clever ways to scam people).

Right now in the LLM space I see a lot of people focused on building old things in new ways. I've realized that not only do very few people work with local models (where they can hack around and customize more), but a surprisingly small number of people write code that even calls an LLM through an API for some specific task that previously wasn't possible (regular ol' software built using calls to an LLM has loads of potential). It's still largely "can some variation on a chat bot do this thing I used to do for me".

As a contrast, in the early web, plenty of people were hosting their own websites and messing around with all the basic tools available to see what novel thing they could create. I mean, "Hamster Dance" was its own sort of slop, but the first time you saw it you engaged with it. Snarg.net still stands out as novel in its experiments with "what is an interface".


>As a contrast, in the early web, plenty of people were hosting their own website, and messing around with all the basic tools available to see what novel thing they could create

I'm hoping that the centralized platforms, already full of slop and now imploding under LLM fuel, will overflow and lead to a renaissance of sorts for the small and open web, niche communities, and decoupling from big tech.

It's already gaining traction among the young, as far as I can see.


> The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

I think we all do???

Even if I'm not coding a lot, I use it every day for small tasks. There is not much to code in my job, IT in a small traditional-goods export business. The tasks range from deciphering coded EDI messages (D.96A as text or XML, for example), summarizing a bunch of said messages (DESADV, ORDERSP, INVOIC), and finding missing items, to Excel formula creation for non-trivial questions and the occasional Python script, e.g. to concatenate data some supplier sent in a certain way.

AI is so strange because it is BOTH incredibly useful and incredibly random and stupid. As an example of the latter, see a comment I made earlier today in my history: the AI does not tell me when it uses a heuristic and does not provide an accurate result. EVERY result it shows me, it presents as final and authoritative and perfect. Even when, after questioning, it suddenly "admits" that it actually skipped a few steps and that's not the correct final result.

Once AI gets some actual "I" I'm sure the revolution some people are commenting about will actually happen, but I fear that's still some way off. Until then, lots of sudden hallucinations and unexpected wrong results - unexpected because normal people believe the computer when it claims it successfully finished the task and presents a result as correct.

Until then, it's daily highs and lows with little in between: either it brilliantly solves some task, or it fails, and the failure includes not telling you about it.

A junior engineer will at least learn, but the AI stays pretty constant in how it fails and does not actually learn anything. The maker providing a new model version is not the AI learning.


There's a good reason for that. The end result of exploring what they can actually do isn't very exciting or marketable

"I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini


There was a pre-LLM version of this called "battledecks" or "PowerPoint Karaoke"[0] where a presenter is given a deck of slides they've never seen and has to present it. With a group of good public speakers it can be loads of fun (and it's really impressive the degree to which some people can pull it off!)

0. https://en.wikipedia.org/wiki/PowerPoint_karaoke


There is a Jackbox game called "Talking Points" that's like this: the players come up with random ideas for presentations, your "assistant" (one of the other players) picks what's on each slide while you present: https://www.youtube.com/watch?v=gKnprQpQONw

That is very cool. Thanks for posting this - I think I’m going to put on a PowerPoint karaoke night. This will rule! :)

If you like this, search on YouTube for "Harry Mack". Mindblowing

> might need many years to become stable again

People really fail to grasp the significance of this part.

One of our most common apocalyptic fantasies lays this out quite well: nuclear annihilation. The common narrative is about the post-apocalyptic world and rebuilding. But this presumes a new normal has been established.

With climate change we will continue to experience more extreme changes at a faster rate over time with no chance of a "new normal" in our lives.

It took hundreds of thousands of years for humans to develop agriculture. It's no coincidence that this development happened during one of the most stable periods of climate the planet has ever seen. People love to wax poetic on human adaptability, but we were effectively playing on "easy" mode.

While the other side of climate change might be a more hostile earth, the transition period will be worse because you can't adapt. In our lifetimes we may live to see a period of record heat waves in Europe, followed by a transition of Europe to a climate that is dramatically colder (and who knows, maybe back again).

The other major problem is as stability decreases so does our ability to predict the future. It's hard to even know what we might be facing in the coming years, but high variance is usually not great for complex life.


> the transition period will be worse because you can't adapt.

As far as agriculture goes we can adapt but the cost would be exorbitant. Vertical farming is technically doable.


> to introduce early stage students to Scheme before Python, or deep learning before calculus.

This book is part of the classic "Little" series of books starting with "The Little Schemer", which, despite its name and style, is certainly not a beginner's book.

The later books: "Seasoned Schemer", "Reasoned Schemer", "The Little Typer" and "The Little Prover" are all very advanced books. They share the same style of illustrations and Socratic method, but you will absolutely need to work through them slowly and carefully to get value out of them.

The "Little" books are generally targeted to an audience of computer language nerds and pretty much assume you have a solid understanding of programming, familiarity with scheme and the books come from a time when every serious engineer had basic calc knowledge.

These are classics (and I was really impressed with "The Little Learner"), but they are very serious and challenging texts that, outside the first book, are aimed at advanced readers (and for those readers are true delights).


I think you're overstating the difficulty of Seasoned and Reasoned; both are comfortably undergraduate texts (though Reasoned is a challenging text if it's your first time with that style of programming). We used to teach Seasoned to high school students (advanced high school students, but I'd put them on par with motivated 1st/2nd year college students, not more advanced than that). I'd agree with the other three, though; they are advanced texts.

I'm always a bit surprised by the degree that non-tech people don't understand how much they're being openly and transparently manipulated in various ways. Most of my work has been statistical/quantitative in nature from complex A/B testing setups to dynamic pricing algorithms. Yet so many of the most benign parts of my work in the past unnerve some people.

Measuring human behavior and exploiting it for some hope at profit has been an obvious part of my job description for many years. Yet I've had friends and acquaintances that are shocked when they accidentally realize they're part of an A/B test: "Wait, Amazon doesn't show the same thing to everyone!?" I've seen reddit conversations where people are horrified at the idea of custom pricing models (something so mundane it could easily be an interview question). I had a friend once claim that a basic statement about what I did at work was a "conspiracy theory" because clearly companies don't really have that much control.

To your point, at work the fact that we're manipulating people algorithmically isn't remotely a secret. Nobody in the room at any of my past jobs has felt a modicum of shame about optimization. The worst part is I have drawn a line multiple times at past jobs (typically to my own detriment), so there are things that even someone as comfortable with this as I am finds go too far. Ironically, I've found it's hard to get non-technical people to care about these because you have to understand the larger context to see just how dangerous they are.

I have ultimately decided to avoid working in the D2C space because inevitably you realize you aren't providing any real value to your user (despite internal sloganeering to the contrary) and very often causing real harm. In the B2B space you're working with customers who you have a real business relationship with, so crass manipulation to move the needle for one month isn't worth the long term harm.


I'm always a bit surprised that people that work in tech can be so passive in regards to their civic duty. Instead of going to lawmakers and legislators and trying to stop their employers from destroying society they just quietly watch from the sidelines.

And if lawmakers are silent, then publicising, collecting, and sharing this knowledge to all ends of the earth through mainstream and independent journalism, paying a few hundred dollars out of their Silicon Valley salaries to put up billboards shining a light on the misdeeds of tech companies towards their own customers and society.

Per your examples, when the average person is made aware of the injustice, stalking, and tracking of them, they are not in any way happy with it, and want things to change.


> Per your examples, when the average person is made aware of the injustice, stalking, and tracking of them, they are not in any way happy with it, and want things to change.

Maybe I'm too cynical, but I don't think the average person will care as much as you think. Even on HN you get plenty of folks who think that tracking is ok or even a benefit since it provides a more personalized experience.

> I'm always a bit surprised that people that work in tech can be so passive in regards to their civic duty. Instead of going to lawmakers and legislators and trying to stop their employers from destroying society they just quietly watch from the sidelines.

Snowden was almost 15 years ago. The only punishment meted out as a result of him coming forward was for Snowden himself. Why would anyone else assume that coming forward would result in substantive change?


> Many of our child-free friends are going to go through a lot of loneliness when they're old

I've seen this "kids are insurance against loneliness" logic repeated often, but I don't believe it bears out in reality. I personally know plenty of child-free older couples who remain quite happy and social. I also know plenty of parents whose kids don't speak to them anymore or whose children have lives on the other side of the country/world. Anecdotally, the loneliest older people I know are the ones who have put it upon their children to keep them from loneliness.

> And despite all that, we love them and we want to have them

As a parent I always find it funny that we need to add this to every statement of frustration about family life (I'm not critiquing you, I also say this every time I mention any frustration about parenting). It is worth recognizing that saying the contrary is fundamentally taboo. I find this to be another under-discussed challenge of parenting: you can never even entertain the idea that "maybe this wasn't what I wanted"


<< I find this to be another under-discussed challenge of parenting: you can never even entertain the idea that "maybe this wasn't what I wanted"

You can absolutely think it as long as it stops there. There is a reason. At that point in the game, your needs and wants are supposed to be subordinate to those of the kids' long-term survival. I could maybe understand this sentiment, oh, 50 years ago, when you could plausibly claim you had no idea that child rearing is not exactly easy, but unless a person is almost completely detached from society, it is near impossible to miss the "pregnancy will ruin your life" propaganda.

Consequences. They exist. Some are life altering and expected to last a long time.


Some of my friends and family who had kids at a young(er) age - and by that, I mean late twenties or early thirties - seemed totally oblivious to the hardships of parenthood.

You’d think by your thirties you’d do some basic research. Most people just have kids because it’s just “what you’re supposed to do” and don’t give much thought beyond that.

I don’t know what they thought to themselves, but outwardly they projected rainbows and unicorns until reality eventually hit them.


> You’d think by your thirties you’d do some basic research.

I've often had thoughts like this, but had to just accept that people often don't do basic research. For another example, consider how many people work full-time (160+ hours per month) to make money, but have never bothered to take even a 2-hour course on how to manage it well. They spend all that time making money but no time on how to use it wisely. And then they make obvious mistakes. Unnecessary debt, lack of investments, complain that they never learned this stuff in school, etc. Not trying to sound judgmental, but I always found that surprising.


In my case I made money to not have to think about it; the whole managing-money thing was/is repulsive to me.

I only started learning about that when I had kids, so I have something to leave them when I die.


>You can absolutely think it as long as it stops there.

If that's the attitude, it renders virtually every discussion about the topic moot, and the people in question had better stop trying to give life advice to anyone else.

My wife and I don't want kids and we've heard our fair share of (unsolicited) opinions on the topic from people who clearly weren't always happy. I've only ever known one woman I worked with, a brilliant scientist, who told me straight up that she regretted having children and wished she could have focused on her research.

If that's not something you can honestly say without being berated then clearly the 'propaganda' still works mostly in one direction.


I agree with you but I don't really see what the alternative is. If you openly go around stating that you regret having children, what are people supposed to say? It's better to keep those thoughts to yourself because there, quite frankly, is nothing helpful anyone could say even if they wanted to. Not to mention that it would be unfair to the kids if they got the feeling that you regret having them.

This!

Unfortunately, my soul brought me into that bucket :-( I can speak about it with other (male) friends without children, and with some women without children. But if you ever say that you didn't want children when others with children are around, they see you as an alien.

Especially, it gets hard if you are single (which is what happened to me now) and you meet new women and tell them that the only reason for the last breakup was that I couldn't bear the stress with our children.

I knew from the beginning of my life that it would totally crush me if this happened - coming from a "not supportive family" makes it really hard to >actually want children<, esp. if the same stories now repeat :-(


What even is this? People will say what they will say when they listen to it. It’s not like you are complaining to the kids.

This is a hilariously narrow view of family life.

Life is a lot more complicated, and there's essentially limitless possibility between living a life you feel is solely about "paying consequences" and "completely abandoning all responsibility" (which, btw, is still an option. Not great, but neither is the former)

But I do appreciate you providing an object lesson in just how taboo it is to even entertain the thought publicly!


You can entertain it. You just did. But don't expect a standing ovation, is my very subtle point.

<< This is a hilariously narrow view of family life.

Quite the contrary, it allows for a very broad range of outcomes, because it deals with reality of the human condition.

<< which, btw, is still an option. Not great, but neither is the former

Everything always is. Why, tomorrow I could quit my job and start a bar in Hawaii. As arguments go, this one was pretty weak.

<< Life is a lot more complicated and there's essentially limitless possibility between

Why am I getting this feeling that you completely misread what I wrote.


"But don't expect standing ovation is my very subtle point."

that's the exact reasoning why parents who complain about how hard it is to be a parent get no sympathy from you. You blew a load in somebody (or had a load blown into you) and another human popped out. That's a choice you made for yourself, nobody forced you to, and there is a big giant swath of people out there who couldn't care less.


I sense there is some confusion, but I can't pin point where it is coming from. Is it possible you are not replying the person you thought you were?

>I find this to be another under-discussed challenge of parenting: you can never even entertain the idea that "maybe this wasn't what I wanted"

Because there's no point in thinking about it. Your wife will ask if you want to leave, your children will hate you, society will hate you, it will make you feel depressed, and meanwhile it won't accomplish anything. It's a dialogue only for yourself; once you acknowledge that, it becomes far less challenging to deal with and you can move forward with dealing with challenges in solvable ways.


My mother's friends have to fund vacations for their adult children and grandchildren in order to spend time with them. They won't let her stay at their home.

My mother was giddy when my father died; so I have strong boundaries in our relationship.

My brother moved to Colorado after the service and never returned.

I'm not convinced having children is the answer alone. (I say as a childless 35yo)


> They wont let her stay at their home.

There are many reasons this could be the case. The internet (and Reddit in particular) abounds with AITA-type discussions around boundaries within families.


Being a parent is orthogonal to being someone people want to spend time with. Unless I knew for sure I was not in the latter group, I wouldn’t use it as a justification for not having kids.

About your first point, I understand why it happens, but I get frustrated at these debates nowadays. Neither side can talk about their experiences without having to add something that invalidates the other side's choice. They cannot fathom that the other side may prefer the disadvantages of their choice over the disadvantages of yours. Maybe it's the human condition to try to point out how the other side will regret their choices in order to validate our own life decisions.

Well said. I appreciate people on both sides that can simply acknowledge having kids is great for some people, and not having them is great for others, and the world is big enough for all of us.

Indeed. There was a CBC radio episode last year that had parents discussing regrets. It felt weird to hear people saying these things out loud.

https://www.cbc.ca/player/play/audio/9.6661746


Being able to hook up with random strangers on apps might be fun in your 20s and 30s. When you're old and wrinkly, it's not going to be the same. I hate to say it, but this is especially true for women entering their twilight years. A lot of childless people in our generation are headed to a very sad and lonely future.

COVID was exceptionally hard on these people. A lot of the weirdness of the COVID years was just people going crazy in isolation. Trading random stocks, or ordering crazy nonsense off of Amazon. Being alone is literally psychological abuse and a lot of them were subjected to it for months at a time.


My wife and I are both old and wrinkly and happily child-free. Child-free people aren't just hedonists.

So why do you have children? Can't synthesize a reason?

I have kids because I wanted to.

And life is beautiful.


The problem with the concept of "the singularity" is that it has a hidden assumption that computation has no relationship to energy. Which, once unmasked, is a pretty outlandish claim.

There is a popular illusion that somehow technological progress is a pure function of human ingenuity, and that the more efficient we can make technology the faster we can make even better technological improvement. But history of technology has always been the history of energy usage.

Prior to the emergence of Homo sapiens, "humans" learned to cook food by releasing energy stored in wood. Cooking food is often considered a prerequisite for the development of the massive, energy-consuming brain of Homo sapiens.

After that it took hundreds of thousands of years for Earth's climate to become stable enough to make agriculture feasible. We see almost no technological progress until we start harvesting enormous amounts of solar energy through farming. Not long after this we see the development of mathematics and writing since humans now had surplus energy and they could spend some of it on other things.

You can follow this pattern through the development and extraction of coal, oil, etc. You can look at the advancement of technology in the last 100 years alongside our use of fossil fuels and expansion of energy capabilities with renewables (which historically have only been used to supplement, not replace, non-renewables).

But technological progress has always been a function of energy, and more specifically, going back to cooking food, computational/cognitive ability similarly demands increasingly high energy consumption.

All evidence seems to suggest that we increasingly need more energy for incrementally smaller return on computation.

So for something like the singularity to happen, we would also need incredible changes in available energy (there's also a more nuanced argument that you also need smooth energy gradients but that's more discussion than necessary). Computation is not going to rapidly expand without also requiring tremendously large increases in energy.

Further it's entirely reasonable that there is some practical limit to just how "smart" a thing can be based on the energy requirements to get there. That is, you can't reasonably harvest enough energy to create intelligence on the level we imagine (the same way there is a limit to how tall a mountain can be on earth due to gravity).

Like most mystical thinking, ignoring what we know about thermodynamics tends to be a fundamental axiom.


There are hard limits for how much energy we can provide to computation, but we are not even close to what we can do in a non-suicidal way. In addition to expanding renewables, we could also expand nuclear and start building Thorium reactors - this alone ensures at least an extra order of magnitude in capacity compared to Uranium.

As for the compute side, we are running inference on GPUs which are designed for training. There are enormous inefficiencies in data movement in these platforms.

If we play our cards right we might have autonomous robots mining lunar resources and building more autonomous robots so they can mine even more. If we manage to bootstrap a space industry on the Moon with primarily autonomous operations and full ISRU, we are on our way to build space datacenters that might actually be economically viable.

There is a lot of stuff that needs to happen before we have a Dyson ring or a Matrioshka brain around the Sun, but we don’t need to break any laws of physics for that.


> we are not even close to what we can do in a non-suicidal way.

I'm honestly not sure how anyone can be remotely aware of the other major consequence of our energy consumption, climate change, and make this statement.

We're far more likely to be extinct in 100 years than see any of your sci-fi proposals come to fruition.

But I guess that gets to the point of all of this discussion: belief in a technological singularity is just a different flavor of the religious tools we have used to avert existential dread for thousands of years. You need to believe these things are true so there's nothing that can ever be said to convince you otherwise.


There are several books which explore this concept, viewing history through the lens of energy systems available to and utilised by humans.

Vaclav Smil has written two of these, Energy and Civilization (2017) and Energy in World History (1994). They cover much the same ground, though with different emphases.

<https://vaclavsmil.com/book/energy-and-civilization-a-histor...>

<https://vaclavsmil.com/book/energy-in-world-history/>

Manfred Weissenbacher's Sources of Power (2009) more specifically addresses political and military implications of different power systems.

<https://www.bloomsbury.com/us/sources-of-power-9780313356261...>

In the past year there's a new book on the topic, Energy's History: Toward a Global Canon, by Daniela Russ and Thomas Turnbull, though I've yet to read it.

<https://www.sup.org/books/politics/energys-history>

There's a review here: <https://networks.h-net.org/group/reviews/20131545/priest-rus...>.


Don’t forget the practical ability to dissipate waste heat on top of producing energy. That’s an upper limit to all energy use unless we decide boiling ourselves is fine, or find a way to successfully ignore thermodynamics, as you say.

If we ever get that far, that would be the most compelling argument for datacenters in space.

Heat rejection is far more challenging in vacuum.

We can always build a sunshade at the Earth-Sun L1 point. Make it a Sun-facing PV panel with radiators pointing away from us and we can power a lot of compute there (useful life might be limited, but compute modules can be recycled and replaced, and nothing needs to be launched from Earth in this case).
