That's a good start I'd say, but I agree with you that detection is not trivial. I wonder if there's enough value in distinguishing between AI agents (with a full browser) and humans. What use cases would it enable?
As for the distinguishing part, it's hard to tell whether it can be done reliably at all: agent browsers are still new and changing constantly, and it's up to them whether they identify themselves correctly (same as with crawlers/bots, where the main trustworthy signal is still the source IP address).
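To illustrate the IP-based angle: the usual way to verify a crawler's claimed identity (since User-Agent strings are trivially spoofable) is forward-confirmed reverse DNS. This is a rough sketch, not a production implementation; the allowed hostname suffixes here are just placeholders, and whatever domains an agent-browser vendor actually uses would need to be looked up.

```python
import socket

def hostname_matches(host: str, allowed_suffixes: tuple) -> bool:
    """Pure check: does the reverse-DNS hostname belong to a known crawler domain?"""
    return host.endswith(allowed_suffixes)

def verify_crawler_ip(ip: str, allowed_suffixes: tuple) -> bool:
    """Forward-confirmed reverse DNS:
    1. reverse-resolve the source IP to a hostname,
    2. check the hostname is under a domain the crawler operator publishes,
    3. forward-resolve that hostname and confirm it maps back to the same IP.
    """
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # step 1: reverse lookup
    except (socket.herror, socket.gaierror):
        return False
    if not hostname_matches(host, allowed_suffixes):  # step 2
        return False
    try:
        # step 3: forward-confirm to defeat forged PTR records
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False
    return ip in forward_ips

# Example (hypothetical suffixes for some agent-browser vendor):
# verify_crawler_ip("203.0.113.7", (".agentbrowser.example",))
```

The same pattern is what Google documents for verifying Googlebot; whether AI agent browsers will ever publish comparable IP ranges or DNS namespaces is exactly the open question above.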
There could be use cases, such as tracking whether your content was scraped/ingested by an AI, or perhaps a future pay-per-request model for LLMs, etc.