If the “amount of semantic ablation” in a generated phrase, sentence, or paragraph can be measured and compared, then a looped process (an agent) could be built that tries to decrease it.
It might come up with something original - I mean there has to be tons of interesting connections in the training data that no one’s seen before.
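A minimal sketch of what that loop could look like - everything here is hypothetical, in particular `measure_ablation` and `generate_variants`, which stand in for whatever metric and model calls would actually be used:

```python
def measure_ablation(text: str) -> float:
    """Hypothetical scorer: lower = less semantic ablation."""
    raise NotImplementedError

def generate_variants(text: str, n: int = 8) -> list[str]:
    """Hypothetical: ask a model for n rewrites of the text."""
    raise NotImplementedError

def refine(text: str, rounds: int = 5) -> str:
    """Greedy loop: each round, keep whichever variant scores lowest."""
    best, best_score = text, measure_ablation(text)
    for _ in range(rounds):
        for candidate in generate_variants(best):
            score = measure_ablation(candidate)
            if score < best_score:
                best, best_score = candidate, score
    return best
```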
Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.
Once the tools help the AI get feedback on what its first attempt got right and wrong, we'll see the benefits.
And the models people use en masse - e.g. free-tier ChatGPT - need to reach some threshold of capability where they do really well on the tasks they don't handle well enough today.
There’s a tipping point where using a model for a task stops creating more work than it saves, but we aren’t there yet.
> This “Great Uncoupling” is well underway and will take us toward a less monocultural Internet.
Gentoo's GitHub mirrors have only ever existed to make contributing easier - for newcomers, I expect. The official repos have, AFAIK, always been hosted by the Gentoo folks. FTFA:
> This [work] is part of the gradual mirror migration away from GitHub, as already mentioned in the 2025 end-of-year review.
> These [Codeberg] mirrors are for convenience for contribution and we continue to host our own repositories, just like we did while using GitHub mirrors for ease of contribution too.
And from the end-of-year review mentioned in TFA [0]
> Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg. ... Gentoo continues to host its own primary git, bugs, etc infrastructure and has no plans to change that.
we learn that the primary reason for moving is GitHub attempting to force its shitty LLM onto folks who don't want to use it.
So yeah, the Gentoo project has long been "decoupled" or "showing it can be done" or whatever.
Rather than letting results be random, iteratively add more and more guardrails and grounding.
Tests; linting; guidance in response to key events (Claude Code hooks are great for this); automatically passing the agent’s plan to another model invocation and feeding its critique back, so you don’t have to point out the same flaws in plans over and over; custom scripts that scan your codebase for antipatterns (they can walk the AST or be regex-based - ask your agent to write them!).
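As a rough sketch of that last idea (the antipatterns picked here are just examples), an AST-walking check in Python might look like:

```python
import ast
import sys

def check_file(path: str) -> list[str]:
    """Flag a couple of example antipatterns: bare `except:` clauses
    and mutable default arguments."""
    findings = []
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"{path}:{node.lineno}: bare except")
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(f"{path}:{default.lineno}: mutable default argument")
    return findings

if __name__ == "__main__":
    problems = [f for p in sys.argv[1:] for f in check_file(p)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # non-zero exit = a failure the agent can see
```

Wired into a hook or CI, a non-zero exit becomes feedback the agent gets automatically, instead of a comment you have to type yet again.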
Codify everything you’re looping back to your agent about and make it a guardrail. Give your agent the tools it needs to give itself grounding.
An agent without guardrails or grounding is like a person cut off from their senses: disconnected from the world, all it can do is dream - and in a dream anything can happen; there’s nothing to enforce realism. When you look at it that way, it’s a miracle coding agents produce anything useful at all :)
> The era spawning from the 1950s throughout the 1980s can be considered the golden era of telecommunication
I’m not so sure! These days we have FaceTime and dozens of other video and voice call services on our bodies 24/7 - and the competition among them is so fierce that they are ALL free! We live in a golden age in a great many ways!
It’s awesome to learn about the engineering and history that got us to this point.
Great to see people thinking about this. But it feels like a step on the road to something simpler.
For example, web accessibility has potential as a starting point for making actions automatable, with the advantage that the automatable things are visible to humans, so they're less likely to drift or break over time.
In theory you could use a protocol like this, one where the tools are specified in the page, to build a human-readable but structured dashboard of functionality.
I'm not sure this is really all that much better than, say, a Swagger API.
The JS interface is a double-edged sword in that it has access to your cookies and such.
As someone heavily involved in a11y testing and improvement: the status quo, for better or worse, is to do it the other way around. Most people use automated, LLM-based tooling with Playwright to improve accessibility.
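For a flavour of the kind of automated check such a pipeline starts from - plain Playwright here, with the LLM step left out, and the URL and heuristics as placeholders:

```python
from playwright.sync_api import sync_playwright

def audit(url: str) -> list[str]:
    """Collect a few easy-to-detect accessibility issues; an LLM pass
    could then propose fixes for each finding."""
    findings = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        # Images with no alt attribute at all.
        for img in page.locator("img:not([alt])").all():
            findings.append(f"img missing alt: {img.get_attribute('src')}")
        # Inputs with neither aria-label nor id (heuristic only: a wrapping
        # <label> would still make these fine).
        for inp in page.locator("input:not([aria-label]):not([id])").all():
            findings.append(f"possibly unlabelled input: {inp.get_attribute('name')}")
        browser.close()
    return findings

if __name__ == "__main__":
    print("\n".join(audit("https://example.com")))
```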
There is a proposed extension in the repo, which is getting some traction, that automatically converts forms into tools. Linking this to a11y is tricky, though, since it could incentivize sites to make really bad decisions for the human consumers of those surfaces.
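To give a feel for the idea (my own illustration, not the actual proposal), a form-to-tool conversion might map form fields onto a JSON-Schema-style tool definition roughly like this:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def form_to_tool(form_html: str) -> dict:
    """Turn an HTML form into a rough tool definition an agent could call.
    The output field names are guesses at what such a spec might contain."""
    form = BeautifulSoup(form_html, "html.parser").find("form")
    properties = {}
    for field in form.find_all(["input", "select", "textarea"]):
        name = field.get("name")
        if not name or field.get("type") == "submit":
            continue
        properties[name] = {
            "type": "number" if field.get("type") == "number" else "string",
            "description": field.get("placeholder", ""),
        }
    return {
        "name": form.get("id") or "submit_form",
        "endpoint": form.get("action", ""),
        "method": (form.get("method") or "GET").upper(),
        "inputSchema": {"type": "object", "properties": properties},
    }
```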
We're building an app that automatically generates machine- and human-readable JSON by parsing semantic HTML tags; a reverse proxy then serves that JSON to agents instead of the HTML.
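Roughly the shape of that transformation as I understand the description (the JSON fields are my guesses):

```python
import json
from bs4 import BeautifulSoup

def page_to_json(html: str) -> str:
    """Reduce a page to a JSON outline built from its semantic tags,
    which a reverse proxy could serve to agents in place of the HTML."""
    soup = BeautifulSoup(html, "html.parser")
    doc = {
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "nav": [
            {"text": a.get_text(strip=True), "href": a.get("href")}
            for nav in soup.find_all("nav")
            for a in nav.find_all("a")
        ],
        "sections": [
            {"heading": h.get_text(strip=True), "level": int(h.name[1])}
            for h in soup.find_all(["h1", "h2", "h3"])
        ],
        "main_text": soup.main.get_text(" ", strip=True) if soup.main else None,
    }
    return json.dumps(doc, indent=2)
```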
I tried to play along at home a bit and play with the Rust accesskit crate. But man, I just could not get Orca or other basic tools to run - couldn't even find a starting point. Highly discouraging. I thought for sure my browser would expose accessibility trees I could just look at and tweak! But I don't even know if that's true or not yet. Very sad personal experience with this.
> It might come up with something original - I mean there has to be tons of interesting connections in the training data that no one’s seen before.
But maybe it’d just end up shouting at you.