I am working on an app that uses old-school predictive AI (i.e., PyTorch and scikit-learn) to predict professional sports outcomes in a way that is useful for DFS fans. Here is the landing page.
We seek to empower DFS fans through education about predicting professional sports athlete outcomes. We do that through strategy advice, hot player tips, optimized lineups, and pick’em-style, game-friendly player props. We're not trying to take away your control or do your thinking for you; we are just here to support you in making better decisions. Let the app do the number crunching so you can get back to competitive play that gets results and is also fun.
That's a type of fatigue that is not new, but I hear you: context-switching fatigue has increased tenfold with the introduction of agentic AI coding tools. Here are some more types of fatigue that have increased with the adoption of LLMs for writing code.
There are plenty of articles on review fatigue, including https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which I published recently. The focus there is less on the impact on the developer and more on the impact on the organization, since letting bugs reach production will trigger a return to high-ceremony releases and release anxiety.
The OP article talks about AI fatigue, of which review fatigue is one part. I would sum up the other parts like this: the agentic AI workflow is so focused on optimizing for productivity that it burns the human out.
The remedy is also not new for office work: take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it, but because it would slow the process down and let the human recover while still feeling invested.
I think all of this is why I don’t really experiment with an LLM anymore. I just use it to ideate/rewrite things in different styles so I can turn rough drafts into finished pieces. It’s essentially just an editor to bounce ideas off of. Using it that way is the only way I find myself being actually productive and not annoyed with it.
Maybe this is why I’m different. I love reviewing code; it’s a great way to learn about a system and get new ideas. Diffs are great for seeing how things are interconnected.
1. Since the same AI writes both the code and the unit tests, it stands to reason that both could be influenced by the same hallucinations.
2. Having a dev on call reduces time to restore service because the dev is familiar with the code. If developers stop reviewing code, they won't be familiar with it and won't be as effective. I am currently unaware of any viable agentic AI substitute for a dev on call capability.
3. There may be legal or compliance standards regarding due diligence which won't get met if developers are no longer familiar with the code.
How are these libraries curated? I ask because Clojure Land includes Donkey https://clojure.land/?q=donkey which was abandoned a couple of years ago.
Not sure about your information architecture. What is the difference between the web frameworks and web server abstraction tags?
This next question is more for the Clojure community. From https://clojure.land/?tags=Web%20Frameworks we see 34 web frameworks. That seems like a lot to me. Why is there so much "scratching your own itch"? Is it because people don't like Ring?
I created Clojure Land. The idea is to be more comprehensive than curated and to be able to discover rather than recommend any particular project.
If a repo has been archived on GitHub, then I show a lock icon on the card. Some projects that have been completely abandoned have been hidden from the list, but I've been reluctant to be the gatekeeper of which projects should be removed. I was kind of hoping that if a project owner considered their project dead, they would create a PR against the Clojure Land repo to remove or hide it from the list.
Most of the tags came from the sections on https://www.clojure-toolbox.com/ and I'll admit they are a bit arbitrary; I would happily accept any help to organize them better.
Most of those are not really "frameworks" as such (with some exceptions, like Biff or Fulcro), but rather libraries, or curated collections of libraries. Most Clojure people tend to roll their own set of libraries, rather than use actual frameworks.
The conversation here seems to be more focused on coding from scratch. What I noticed when I looked at this last year was that LLMs were bad at enhancing already existing code (e.g. unit tests) that used annotations (a.k.a. decorators) for dependency injection. Has anyone here attempted that with the more recent models? If so, what were your findings?
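For concreteness, here is a minimal sketch of the decorator-based dependency-injection style I mean. All names (`provides`, `inject`, the registry) are hypothetical and not from any particular framework; the point is that the wiring happens at decoration time, indirection that models had trouble following when asked to extend existing tests.

```python
# A toy service registry: factories register themselves under a name,
# and @inject resolves them and passes them in as leading arguments.
registry = {}

def provides(name):
    """Register the decorated factory under a service name."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

def inject(*names):
    """Resolve registered services and prepend them to the call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            deps = [registry[n]() for n in names]
            return fn(*deps, *args, **kwargs)
        return inner
    return wrap

@provides("db")
def make_db():
    # Stand-in for a real database connection.
    return {"players": ["Jones", "Smith"]}

@inject("db")
def list_players(db):
    return db["players"]

print(list_players())  # the "db" argument is supplied by the decorator
```

A unit test for `list_players` has to know that the visible signature takes a `db` argument the caller never passes, which is exactly the context a model needs to pick up from the surrounding code.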
My experience is the opposite. The latest Claude seems to excel in my personal medium-sized (20-50k loc) codebases with strong existing patterns and a robust structure from which it can extrapolate new features or documentation. Claude Code is getting much better at navigating code paths across many large files in order to provide nuanced and context-aware suggestions or bug fixes.
When left to its own devices on tasks with little existing reference material to draw from, however, the quality and consistency suffers significantly and brittle, convoluted structures begin to emerge.
This is just my limited experience, though, and I almost never attempt to, for example, vibe-code an entire greenfield MVP.
I used a graph database in https://www.exploravention.com/products/askarch/ because software architects typically need to understand the dependencies of a complex software system before they can suitably lead that technology. A dependency graph is a good data structure for reasoning about dependencies, and a graph database is a natural choice for storing dependency graphs. See https://www.infoq.com/articles/architecting-rag-pipeline/ for more details on the architecture of this AI product. The graph database works very well for this use case.
If you are considering a graph database for AI-based search and you are not already familiar with graph database technology, then be advised that graph databases are not relational databases. If your mental model is nodes = tables and edges = joins, then you will be in for some nasty surprises. Expect to do some learning, and some unlearning, before proceeding with that choice.
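To make the mismatch concrete, here is a toy dependency graph in plain Python (the module names are made up). Computing the full transitive dependency set is a single traversal in graph terms, whereas the nodes-as-tables mindset would reach for a recursive self-join:

```python
# Adjacency-list view of a (hypothetical) module dependency graph.
deps = {
    "web": ["auth", "db"],
    "auth": ["db", "crypto"],
    "db": [],
    "crypto": [],
}

def transitive_deps(node, graph):
    """Depth-first walk collecting everything reachable from `node`."""
    seen = set()
    stack = [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# "crypto" is not a direct dependency of "web"; the traversal finds it
# through "auth" with no extra joins or query rewrites.
print(sorted(transitive_deps("web", deps)))
```

In SQL this query needs a recursive CTE over an edges table; in a graph store it is the native access pattern, which is the sort of difference the unlearning is about.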
I don't think the cognitive models are that distinct; it's just a different way of storing relations. In any case, not distinct enough to warrant 'unlearning' relational approaches. While I find graph-based approaches more natural for some problems, we can stretch the relational paradigm quite a bit.
I agree. Permit me to rephrase. From this learning adventure https://www.infoq.com/articles/architecting-rag-pipeline/ I came to understand what many now call context rot. If you want quality answers, you still need relevance reranking and filtering no matter how big your context window becomes. Whether that happens up front in a one-shot prompt or iteratively through a long agentic session is merely an implementation detail.
Location: Bay Area of California
Remote: yes
Willing to relocate: no
Technologies: https://www.exploravention.com/services/
Résumé/CV: https://www.linkedin.com/in/gengstrand/
Email: https://www.exploravention.com/subscriptions/subscribe/
Like the rest of online mass media, HN covers generative AI a lot, but there is still plenty of value in predictive AI. Both forms provide plenty of technical challenges for the AI engineer. I miss the days when you could get a stack trace when debugging an issue.
https://www.higherscoresdfs.com/dfs/spa/welcome/