
If the “amount of semantic ablation” in a generated phrase/sentence/paragraph can be measured and compared, then a looped process (an agent) could be built that tries to decrease it.
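
A minimal sketch of that loop, assuming hypothetical generate() and score_semantic_ablation() functions (neither is a real API - you’d plug in a model call and whatever metric you can define):

    # Hypothetical: regenerate until the (assumed) semantic-ablation score stops improving.
    def generate(prompt: str, feedback: str = "") -> str:
        """Placeholder for a call to any text-generation model."""
        raise NotImplementedError

    def score_semantic_ablation(text: str) -> float:
        """Placeholder metric: lower means less semantic ablation."""
        raise NotImplementedError

    def refine(prompt: str, max_rounds: int = 5) -> str:
        best = generate(prompt)
        best_score = score_semantic_ablation(best)
        for _ in range(max_rounds):
            candidate = generate(prompt, feedback=f"Score was {best_score:.3f}; be more specific.")
            score = score_semantic_ablation(candidate)
            if score >= best_score:  # no improvement, stop looping
                break
            best, best_score = candidate, score
        return best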

It might come up with something original - I mean there has to be tons of interesting connections in the training data that no one’s seen before.

But maybe it’d just end up shouting at you.


Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.

Once the tools help the AI to get feedback on what its first attempt got right and wrong, then we will see the benefits.

And the models people use en masse - e.g. free-tier ChatGPT - need to cross some threshold of capability on the tasks they don’t do well enough on today.

There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.


Maybe a rename to Barra. Everyone will still get the pun :)

... or Baccaruda or Baba-rara-cucu-dada (https://youtu.be/2tvIVvwXieo)

Or bacaruda.

Great to see important projects like Gentoo showing it can be done

This “Great Uncoupling” is well underway and will take us toward a less monocultural Internet.


> This “Great Uncoupling” is well underway and will take us toward a less monocultural Internet.

Gentoo's GitHub mirrors have only ever existed to make contributing easier for -I expect- newbies. The official repos have -AFAIK- always been hosted by the Gentoo folks. FTFA:

  This [work] is part of the gradual mirror migration away from GitHub, as already mentioned in the 2025 end-of-year review.

  These [Codeberg] mirrors are for convenience for contribution and we continue to host our own repositories, just like we did while using GitHub mirrors for ease of contribution too.
And from the end-of-year review mentioned in TFA [0]

  Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg. ... Gentoo continues to host its own primary git, bugs, etc infrastructure and has no plans to change that.
we learn that the primary reason for moving is Github attempting to force its shitty LLM onto folks who don't want to use it.

So yeah, the Gentoo project has long been "decoupled" or "showing it can be done" or whatever.

[0] <https://www.gentoo.org/news/2026/01/05/new-year.html>


Rather than let results be random, iteratively and continuously add more and more guardrails and grounding.

Tests; linting; guidance in response to key events (Claude Code hooks are great for this); automatically passing the agent’s plan to another model invocation and feeding that model’s feedback back, so you don’t have to point out the same flaws in plans over and over; custom scripts that scan your codebase for antipatterns (they can walk the AST or be regex based - ask your agent to write them!).
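
For the custom-scripts part, a minimal sketch of a regex-based antipattern scanner (the patterns are just examples - swap in whatever your agent keeps getting wrong):

    #!/usr/bin/env python3
    """Tiny regex-based antipattern scanner - example patterns only."""
    import re
    import sys
    from pathlib import Path

    ANTIPATTERNS = {
        r"except\s*:\s*pass": "bare except that swallows errors",
        r"\bprint\(": "stray print() - use logging instead",
        r"TODO|FIXME": "unresolved TODO/FIXME",
    }

    def scan(root: str = ".") -> int:
        findings = 0
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                for pattern, message in ANTIPATTERNS.items():
                    if re.search(pattern, line):
                        print(f"{path}:{lineno}: {message}")
                        findings += 1
        return findings

    if __name__ == "__main__":
        sys.exit(1 if scan() else 0)  # nonzero exit so a hook or CI step can fail on findings

Run it from CI or a Claude Code hook so the findings loop straight back to the agent.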

Codify everything you’re looping back to your agent about and make it a guardrail. Give your agent the tools it needs to give itself grounding.

An agent without guardrails or grounding is like a person unconnected to their senses: disconnected from the world, all you do is dream - in a dream anything can happen, there’s nothing to ensure realism. When you look at it that way it’s a miracle coding agents produce anything useful at all :)


> The era spawning from the 1950s throughout the 1980s can be considered the golden era of telecommunication

I’m not so sure! These days we have FaceTime and dozens of other video and voice call services on our bodies 24/7 - and it’s so competitive among them that they are ALL free! We live in a golden age in a great many ways!

It’s awesome to learn about the engineering and history that got us to this point.


Bandwidth between Los Angeles and New York, very approximately:

1915: 1 kHz - telegraph lines

1925: 10 kHz - a handful of voice channels

1935: 100 kHz - several frequency multiplexed carrier lines across the desert

1945: 200 kHz - a few more lines - war time expansion restrictions

1955: 5 MHz - coaxial cable and microwave links

1965: 20 MHz - coast to coast simultaneous television and tens of thousands of voice channels

1975: 100 MHz - scaling up

1985: 10 GHz - analog to digital phase change - fibre optic and satellite

I still remember the first time I spoke to a person so far away the sun had set and risen in between.


I think the meaning of this is that it was a golden era for infrastructure build out.

There are certainly impressive things like Starlink today. But back then, cheaper, easier-to-maintain infrastructure deployed to everyone was more common.

There's a lot of 60s infrastructure still in operation today. Some of it barely maintained (see the Camp Fire).


Oh wow, the whole collection is 2.28TB. Super practical to archive!

This could be used for a truly eye-opening art installation: a screen that, as you walk by it, tells you when you were last there.

Even wilder would be to buy data on you in real time and display that.


The Hollywood movie Minority Report has a scene where an advertising display personalizes the ad by addressing you by name. https://www.youtube.com/watch?v=7bXJ_obaiYQ

Great to see people thinking about this. But it feels like a step on the road to something simpler.

For example, web accessibility has potential as a starting point for making actions automatable, with the advantage that the automatable things are visible to humans, so are less likely to drift / break over time.
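
To make that concrete: Playwright, for instance, can already target elements by their accessibility role and name, so anything properly exposed to assistive tech is also scriptable. A rough sketch (the URL and control names below are made up):

    # Sketch: driving a page purely through its accessibility semantics.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/orders")  # hypothetical page
        page.get_by_role("textbox", name="Search orders").fill("invoice 42")
        page.get_by_role("button", name="Search").click()
        print(page.get_by_role("row").all_inner_texts())  # read results via table semantics
        browser.close()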

Any work happening in that space?


In theory you could use a protocol like this, one where the tools are specified in the page, to build a human readable but structured dashboard of functionality.

I'm not sure if this is really all that much better than, say, a Swagger API. The JS interface is double-edged in that it has access to your cookies and such.


As someone heavily involved in a11y testing and improvement: the status quo, for better or worse, is to do it the other way around. Most people use automated, LLM-based tooling with Playwright to improve accessibility.

I certainly do - it’s wonderful that making your site accessible is a single prompt away!

There is a proposed extension in the repo, getting some traction, that automatically converts forms into tools. There is trouble in linking this to a11y though, since it could incentivize sites to make really bad decisions for human consumers of those surfaces.

We're building an app that automatically generates machine/human-readable JSON by parsing semantic HTML tags; a reverse proxy then serves that JSON to agents instead of the HTML.
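
Not their actual code, just a rough sketch of the parsing half of that idea (element choices are illustrative), using BeautifulSoup:

    # Sketch: map semantic HTML elements to a JSON summary an agent can consume.
    import json
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def page_to_json(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        doc = {
            "title": soup.title.string if soup.title else None,
            "nav": [a.get_text(strip=True) for a in soup.select("nav a")],
            "headings": [h.get_text(strip=True) for h in soup.select("main h1, main h2")],
            "forms": [
                {
                    "action": form.get("action"),
                    "method": form.get("method", "get"),
                    "fields": [i.get("name") for i in form.find_all("input") if i.get("name")],
                }
                for form in soup.find_all("form")
            ],
        }
        return json.dumps(doc, indent=2)

The reverse proxy then just content-negotiates: requests identified as coming from an agent get the JSON, everything else gets the original HTML.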

Chris Shank & Orion Reed doing some very nice work with accessibility trees. https://bsky.app/profile/chrisshank.com/post/3m3q23xpzkc2u

I tried to play along at home a bit with the Rust accesskit crate. But man, I just could not get Orca or other basic tools to run, could not find a starting point. Highly discouraging. I thought for sure my browser would expose accessibility trees I could just look at & tweak! But I don't even know if that's true or not yet! Very sad personal experience with this.


If anyone is looking for ideas for these projects - it’d be great to be able to run macOS applications on Linux…

Someone could have a swarm of agents build “Wine for macOS apps”.

