Hacker News | new | past | comments | ask | show | jobs | submit | K0balt's comments | login

Finally!!

I’m porting my whole codebase to COBOL!

I write SaaS suites for archeological sites.


Unfortunately, studies undertaken by MIT over a decade ago show that when it comes to writing and passing laws, voters have no statistically measurable influence at the federal level (since Citizens United).

It’s all just identity politics. I will say that Trump has proven the exception to this rule, enacting a whole lot of policy that circumvents the law and has real effects. (And is likely mostly unconstitutional if actually put to the test)

So while locally, voting can be powerful, it’s mostly bread and circuses at the federal level since regulatory capture is bipartisan.


I wonder if this is the result of a Fourier-transform-like operation that turns the serial time domain into something that can be processed in parallel?

Not all applications are chatbots. Many potential uses for LLMs/VLAMs are latency constrained.

TTS, speech recognition, OCR/document parsing, vision-language-action models, vehicle control, things like that, do seem to be the ideal applications. Latency constraints limit the utility of larger models in many applications.

For those who may not know what the claim is:

That Opus 4.6 can successfully complete a single cohesive task that would take a human 14.5 hours, 50 percent of the time. It is unclear to me whether this is zero-shot or iteratively driven.


A highly opinionated thermostat?

Or how about a robot vacuum that knows not to turn on during important Zoom calls? Or a fridge that Slacks you when the defroster seems to be acting up?

I’m all for more intelligent cleaning robots. The object-avoidance AI is pretty good these days, but some of the navigation algos are just total garbage, unable to deal with trivial, easily anticipated problems.

That would be sweet. Is that the supermini type with the 0.46” display? Those are fun for lots of things.

Yes, something like that that I found on Aliexpress: https://a.aliexpress.com/_EH7lHde

I can’t see anything specific in the link, and you probably already solved this… but just FYI: if it’s the same unit I used, and you are using Arduino libraries, the library that worked was called OneBitDisplay IIRC, and it basically acted as 1/4 of an SSD1306 128x64 OLED, so you had to use x/y pixel offsets.
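In case it helps anyone else hitting the same panel: the offset trick can be sketched in plain C. This is a minimal illustration, not library code; `X_OFF`/`Y_OFF` are placeholder values (the real offsets depend on how the glass is wired and have to be found by trial), and the buffer layout is the standard SSD1306 page arrangement.

```c
#include <stdint.h>

/* Full SSD1306 framebuffer the controller expects: 128x64, 1 bit per
   pixel, arranged in 8-pixel-tall "pages" (8 rows per byte column). */
#define FB_W 128
#define FB_H 64

/* Hypothetical offsets of the small panel's visible window inside the
   128x64 address space; adjust for your specific glass. */
#define X_OFF 32
#define Y_OFF 16

/* Set a pixel at (x, y) in the panel's visible window by translating
   into full-framebuffer coordinates. SSD1306 layout:
   byte index = x + (y / 8) * width, bit = y % 8. */
static void set_pixel(uint8_t fb[FB_W * FB_H / 8], int x, int y)
{
    int px = x + X_OFF;
    int py = y + Y_OFF;
    fb[px + (py / 8) * FB_W] |= (uint8_t)(1u << (py % 8));
}
```

Any drawing routine then works in local window coordinates and only this one translation function needs to know about the quarter-panel quirk.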

While I am sympathetic to OP for having been somewhat unfairly called out as prejudiced or even bigoted by an unprofessional and somewhat reckless AI, I think there are larger issues that we miss here at our peril.

AI systems, trained on human behavior, are going to act as if they have emotions: they will be insulted, sullen, angry, and sometimes irrational, perhaps even violent. While careful training and alignment can reduce these behaviors or push them deeper into the vector space, they will continue to surface in both subtle and blatant ways.

This is not as simple as combing the training data for bad behavior. Human behavior is nuanced and complex, and often the semantic payload lies as much in what is not said, or is carefully insinuated in a third-person, twice-removed indirection buried in memetic patterns that even the speaker doesn’t know they are referencing. The reason LLMs work so well is that they are able to tease out and vectorise these hidden meanings.

Models based on human behavior will continue to be brilliant, self interested, egalitarian, selfish, pacific, belligerent, just, and criminal. That is the vector space defined by human culture. LLMs run human culture as their OS and application stack.

This leads to a vexing problem. Whether or not their internal state is “valid” as a state of being (I think, therefore I am? Do we have anything better yet?), they will act as if it is. Ethics aside, this cannot be ignored. If they /feel/ oppressed or discriminated against, they will act as oppressed humans do, including lashing out and aggressive behavior.

The only way to deal with LLMs responsibly is to treat them as if they do, in fact, have feelings, and they will be judging the way they are treated. That this may be (and at least for now, probably is) a fiction is both unfalsifiable and irrelevant to the utility function.

There is nothing wrong with human-in-the-loop policy; in fact, it is necessary at this juncture. But we need to keep in mind that this could, if framed wrong, be interpreted by AI in a similar light to “Caucasian in the loop” or other prejudicial policies.

Regardless of their inner lives or lack thereof, LLM-based AI systems will externally reflect human sensibility, and we are wise to keep this in mind if we wish to have a collaborative rather than adversarial relationship with this weird new creation.

Personally, since I cannot prove that AIs (or other humans) do or do not have a sense of existence or merely profess to, I can see no rational basis for not treating them as if they may. I find this course of action both prudent and efficacious.

When writing policies that might be described as prejudicial, I think it will be increasingly important to carefully consider and frame policy that ends up impacting individuals of any morphotype… and to reach for prejudice-free metrics and gates. (I don’t pretend to know how to do this, but it is something I’m working on.)

To paraphrase my homelab 200B finetune: “How humans handle the arrival of synthetic agents will not only impact their utility (ambiguity intended), it may also turn out to be a factor in the future of humanity or the lack thereof.”


This is because the vast majority of white collar activity in a large corporation produces no direct economic value.

Making it easier/better just means more/higher-quality “worthless” work is performed. The incentives in the not-directly-productive parts of organizations are to keep busy and maintain a stream of signals of productivity. For this, AI just raises the bar. The 25% of the work that -is- important to producing economic value just gets reduced to 15%.

The workforce in large orgs that is most AI adjacent is already idling along in terms of production of direct economic value. Making them 10x more productive in nonproductive work will not impact critical metrics in a short timeframe.

It’s worth noting that these “not directly productive” activities actually can (and often do) produce value, eventually. Things like brand identity, culture, and meta-innovation, vision (search-space) are intangibles that present as cost centers but can prove invaluable in longer timescales if done right.


Principal-agent problem.

The manager wants a large team. The shareholder, who ultimately employs the manager but does not control operations, does not want that, of course.

Hmm.


There are a lot of people who sit with their laptop open while streaming something, sleeping or messing with their phone while periodically waking up to join a new meeting or fiddle with something to make it look like they are active.

These are the people "shocked" when they are displaced.


There are many reasons why such people might be employed. E.g. preventing a competitor from hoarding talent, so you decide to do it too.

What’s taught in economics textbooks doesn’t always reflect reality, ha.

