Shrezzing's comments | Hacker News

Most likely it'd be burned in a bioenergy power station.

Drax in the UK [1] is a good case study for this (assuming they get it all up and running), though they're not using algae. Right now they grow trees and burn them in pellet form. It's currently considered sustainable because it adds no new carbon to the above-ground system (whereas coal/gas/oil do add to the above-ground carbon). Their next phase is to attempt to capture the post-combustion emissions from their chimney stacks, at which point they'll have a non-biodegradable mass of carbon to bury somewhere.

[1] https://www.drax.com/sustainability/sustainable-bioenergy/


Even in that baroque interpretation of sustainability it's surely unsustainable once you include the emissions produced by harvesting, packaging and transporting the fuel.


It's less sustainable than renewables, but it fills a practical requirement for peaker-style power plants (which meet sudden demand) without resorting to coal/oil/gas, which are considerably less sustainable. Drax already existed as a coal/gas facility, so transitioning it to a biomass-with-carbon-capture facility is a net benefit to the environment, even when considering the packaging and transportation of the wood, because those steps were required for its coal predecessor.


Ironically, "Marxist" is also a classic motte-and-bailey word.


The argument for why they're being exploited is basically the labor theory of value, which is a pretty big part of Marxist theory.

https://en.wikipedia.org/wiki/Labor_theory_of_value


> The world relentlessly marches forward. However, I've learned human resilience is AMAZING. You'll be surprised at what you are capable of when life asks for it.

What I'm about to say obviously pales in comparison to raising a child with autism, but entering an ultramarathon/triathlon is quite a good way to experience something like this first-hand in a safe environment. The amount a human can actually "go through" when it's asked of them is entirely remarkable.


A friend of mine ran a marathon with 0 training in large part to spite me for laughing at him for saying he would.

Not as extreme as an ultra, but still an unimaginable feat for most people.

I wouldn’t have believed it if he didn’t wake me up to rub it in my face when he got back.


I used to attempt things like that too (long races or tough mudders with little to no training). After doing my third tough mudder and getting injured, I realized it's just stupid and a great way to cause a serious, possibly lifelong, injury.


Most people in somewhat reasonable shape could "run" a marathon on zero training. They might take 7 hours and have some minor injuries by the end, but they could finish.


To add a personal anecdote, I once walked 25 miles in about 8-9 hours* with a heavy bag, and then showered & walked to work afterwards. I was kind of out of it, but definitely not falling-down-tired or anything like that.

*I severely misjudged the length of the trip before setting out, thought it was about 5-6 miles.


It must have been a nice day out.


I'd say if you're under 40 it should be possible unless you're in really poor shape, but then again it shouldn't be a problem to get in shape.

As you get older, recovery time is an issue: it takes a long time to get over a hard workout unless you've always done it, and even so it's still not like when you were younger.


Wow. Could they walk the next day? That's seriously impressive.


Yes! This is so true and requires much of the same skill set.


'56 is too early, given how much of east Africa was under British colonial control into the 60s, and how much of S/E Asia was still looking for independence. It's likely the population of the empire was still above 100mn at the time. I'd say '56 is more like the start of the very rapid decline of the empire.

It highlighted both to the colonised and the colonisers that the empire was way over-extended.


Another "beginning of the end" moment might be the Ugandan Asians incident of 1972: the Empire had "free movement" of subjects, but only so long as very few of them used it to come to Britain.


Those who had (the right to) British passports were allowed in, right? So similar to the recent situation with regard to BN(O) people in HK.

There was a lot of free movement within the empire other than to the UK too. Many people left India, in particular, for west Africa, SE Asia. Some of my ancestors moved to Sri Lanka.


To add, free trade too.


You know, that's another interesting data point.

My grandfather was a "home child", basically a war orphan indentured, his contract sold to a farmer in Canada, while his brother went to Australia, never to be seen again.

But at the time, even for normal moves to Canada or other places, people were worried that their children would not be Subjects of The Empire.

So promises were made that if subjects moved to a colony, their grandchildren would be British. This was still a pledge in the '20s when my grandfather arrived in Canada, and thus I am eligible for a UK "Ancestry Visa".

This only works if your grandfather was born in the UK and went to a colony. And my point?

Well, eventually the last person capable of exercising this right will be gone. Maybe 30 years?

It is another point in the end of empire.


> while his brother went to Australia, never to be seen again.

There's a tale that likely ended in tears.

British war orphans sent to Australia largely fell into the clutches of the Christian Brothers . . .

* https://kelsolawyers.com/au/paedophile_offenders/brother-kea...

* https://www.theguardian.com/uk-news/2017/mar/02/child-migran...

* https://www.bbc.com/news/uk-39078652


Oh I know, sadly.

We've never been able to track him down, or their sister down (she was still at the orphanage, too young to ship off when they were broken up).

My grandfather was lucky in Canada. He worked dawn to dusk, but was fed well, sheltered from the elements, and learned how to manage a farm. He came out of it reasonably well.

People often say there is a history of treating Natives poorly in Canada and the other colonies. Yet we did it to ourselves, too.

Especially the churches.


Likewise in Australia, I am eligible for a UK passport since my father was born there, even though he emigrated in 1949 and I wasn't born until 1975. It was a lot of fun back in the 2000s when the UK was still part of the EU, since the passport allowed me to live/work anywhere in the EU.


> This only works if your grandfather was born in the UK and went to a colony

Anyone with any grandparent born in the UK is eligible, whether that grandparent went to a colony or not.


Good to know, thanks


I remember some documentary where they discussed the victory march at the end of WWII. They called it "The last march of The Empire".

Indian, Canadian, etc etc troops marching in step. Within a decade so many gone.

But I agree I think, that the 50s seem too soon.

Still, that last march is an important symbol.


OK, so how about "fall of empire" as taking place 1956-1984?

(3 decades sounds long to me, but it would allow a royal wedding and the recovery of Las islas Malvinas to be the last gasp of empire?)


There are quite a lot of other constraints too. Lots of goods aren't allowed to sit side by side; for example, explosive goods cannot sit within n containers of hazardous chemicals. Because goods codification is so low-fidelity, lots of things which aren't actually explosive/hazardous can't be stored in close proximity, because we can't differentiate them from things which are.
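As a toy illustration of that separation rule (the slot layout, class names, and 1-D geometry here are all invented for the sketch; real stowage plans are three-dimensional and far richer):

```javascript
// Hypothetical check: no container coded "explosive" may sit within
// n slots of one coded "hazardous". Slots are a flat array for simplicity.
function violatesSeparation(slots, n) {
  const positions = cls =>
    slots.flatMap((c, i) => (c === cls ? [i] : []));
  return positions("explosive").some(e =>
    positions("hazardous").some(h => Math.abs(e - h) <= n));
}

// Explosive at slot 1, hazardous at slot 3: within 2 slots, so it violates.
console.log(violatesSeparation(["general", "explosive", "general", "hazardous"], 2));
```

The low-fidelity codification problem shows up here directly: anything mislabelled "explosive" or "hazardous" triggers the rule just as a genuinely dangerous container would.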


I'm not sure if this is concrete fact, or just a theory, but you can continue the line up Norway's western coast too. Then in the other direction, the line was broken, but restarts & progresses from Nova Scotia down through the Appalachians in North America.


IIRC they were all part of the same Pangaean range, which also included the east coast of Greenland and the Atlas Mountains.


>employees are the most expensive thing a SaaS business has.

I'm pretty sure for the overwhelming majority of (successful) SaaS businesses, the most expensive part is the marketing & advertising budget. 30-50% isn't uncommon, because the returns on successful sign-ups are enormous.


Not so. Early-stage funding goes to hiring.


The paper discusses this, and the approach taken in the paper implements a number-flip stage, so numbers are formatted with their least significant digit first.
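For concreteness, a minimal sketch of that flip (the exact training-string format is an assumption here; only the digit reversal is from the paper):

```javascript
// Write a number least-significant-digit first, so a model generating
// left-to-right can emit the carry-bearing low digits before the high ones.
function flip(n) {
  return String(n).split("").reverse().join("");
}

// "127 + 45 = 172" would appear in the training data as "721 + 54 = 271".
console.log(`${flip(127)} + ${flip(45)} = ${flip(127 + 45)}`);
```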


Since models are very good at writing very short computer programs, and computer programs are very good at mathematical calculations, would it not be considerably more efficient to train them to recognise a "what is x + y" type problem, and respond with the answer to "write and execute a small javascript program to calculate x + y, then share the result"?
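The "small javascript program" in question could be as little as this (using BigInt so the sketch stays exact beyond the 2^53 limit of plain numbers; the function name and string-in/string-out interface are just illustrative choices):

```javascript
// What an LLM's tool-call answer to "what is x + y" might look like:
// delegate the arithmetic to the runtime instead of predicting digits.
function add(x, y) {
  return (BigInt(x) + BigInt(y)).toString();
}

console.log(add("98765432109876543210", "12345678901234567890"));
// "111111111011111111100"
```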


From a getting-answers perspective, yes; from an understanding-LLMs perspective, no. If you read the abstract you can see how this goes beyond arithmetic and helps with long-form reasoning.


But that's not all that relevant to the question "can LLMs do math". People don't really need ChatGPT to replace a calculator. They are interested in whether the LLM has learned higher reasoning skills from its training on language (especially since we know it has "read" more math books than any human could in a lifetime). Responding with a program that reuses the + primitive in JS proves no such thing. Even responding with a description of the addition algorithm doesn't prove that it has "understood" maths if it can't actually run that algorithm itself; it's essentially looking up a memorized definition. The only real proof is having the LLM itself perform the addition (without any special-case logic).

This question is of course relevant only in a research sense, in seeking to understand to what extent and in what ways the LLM is acting as a stochastic parrot vs gaining a type of "understanding", for lack of a better word.


That's a fair summary of why the research is happening. Thanks.


That's in fact what ChatGPT does ... because 99% accurate math is not useful to anyone.


This is a cromulent approach, though it would be far more effective to have the LLM generate computer-algebra-system instructions.

The problem is that it's not particularly useful: As the problem complexity increases, the user will need to be increasingly specific in the prompt, rapidly approaching being fully exact. There's simply no point to it if your prompt has to (basically) spell out the entire program.

And at that point, the user might as well use the backing system directly, and we should just write a convenient input DSL for that.


Yes, this is what external tools/plugins/api calls are all about.


>deductive reasoning is just drawing specific conclusion from general patterns. something I would argue this models can do

That the models can't see a corpus of 1-5 digit addition and then generalise that to n-digit addition is an indicator that their reasoning capacities are very poor and inefficient.

Young children take a single textbook and a couple of days' worth of tuition to achieve a generalised understanding of addition. Models train for the equivalent of hundreds of years, across (nearly) the totality of human achievement in mathematics, and still struggle with 10-digit addition.

This is not suggestive of an underlying capacity to draw conclusions from general patterns.


> Young children take a single textbook & couple of days worth of tuition to achieve generalised understanding of addition

Maybe you did! Most young children cannot actually do bigint arithmetic reliably or at all after a couple days worth of tuition!


I think the “train for hundreds of years” argument is misleading. It's based on parallel compute time and how long the same training would take sequentially on a single GPU. That assumes an equivalence with human thought based on the model's tokens-per-second rate, which is a bad measurement: it varies depending on hardware, and the closest human comparison would be the act of writing or speaking, yet we obviously process and produce far more information at a much higher rate than we can speak or write. Imagine if you had to verbally direct each motion of your body; it would take an absurd amount of time to do anything, depending on the specificity you had to work with.

The work done in this paper is very interesting, and your dismissal of “it can’t see a corpus and then generalize to n digits” is not called for. They are training models from scratch in 24 hours per model, using only 20 million samples. It’s hard to equate that to an activity a single human could do. It’s as though you had piles of accounting ledgers filled with sums, and no other information or knowledge of mathematics, numbers, or the world, and you discovered how to do addition from that information alone. It should also be noted that there is no textbook or tutor helping them do this.

There is a form of generalization if it can derive an algorithm based on a maximum length of 20 digit operands that also works for 120 digits. Is it the same algorithm we use by limiting ourselves to adding two digits at a time? Probably not but it may emulate some of what we are doing.


>There is no textbook or tutor helping them do this either it should be noted.

For this particular paper there isn't, but all of the large frontier models do have textbooks (we can assume they have almost all modern textbooks). They also have formal proofs of addition in Principia Mathematica, alongside nearly every math paper ever produced. And still, they demonstrate an incapacity to deal with relatively trivial addition - even though they can give you a step-by-step breakdown of how to correctly perform that addition with the columnar-addition approach. This juxtaposition seems transparently at odds with the idea of an underlying understanding & deductive reasoning in this context.
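The columnar approach the models can recite but not reliably execute is tiny when written down. A sketch (the string-based interface is an illustrative choice; the point is that the same loop handles operands of any length):

```javascript
// Columnar addition: walk both decimal strings right-to-left,
// adding digit pairs and propagating a single carry.
function columnarAdd(a, b) {
  let result = "", carry = 0;
  for (let i = a.length - 1, j = b.length - 1; i >= 0 || j >= 0 || carry; i--, j--) {
    const sum = (i >= 0 ? +a[i] : 0) + (j >= 0 ? +b[j] : 0) + carry;
    result = (sum % 10) + result;
    carry = sum >= 10 ? 1 : 0;
  }
  return result;
}

console.log(columnarAdd("999", "1")); // "1000"
```

Generalising from short examples to this loop, with no length limit, is exactly the leap the models fail to make.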

>There is a form of generalization if it can derive an algorithm based on a maximum length of 20 digit operands that also works for 120 digits. Is it the same algorithm we use by limiting ourselves to adding two digits at a time? Probably not but it may emulate some of what we are doing.

The paper is technically interesting, but I think it's reasonable to definitively conclude the model had not created an algorithm that is remotely as effective as columnar addition. If it had, it would be able to perform addition on n-size integers. Instead it has created a relatively predictable result that, when given lots of domain-specific problems, transformers get better at approximating the results of those domain-specific problems, and that when faced with problems significantly beyond its training data, its accuracy degrades.

That's not a useless result. But it's not the deductive reasoning that was being discussed in the thread, at least if you add the (relatively uncontroversial) caveat that deductive reasoning should lead to correct conclusions.

