Usually you indeed scan this file sequentially, doing some filtering or transformation. Since that transformation runs for every record, the speed of the tool (e.g. jq) really matters.
Databases are not used in this case because they add complexity overhead compared to plain-text files. The ability to use unix pipelines and tools (such as grep) is a bonus.
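A minimal sketch of that pipeline style, assuming a hypothetical line-delimited JSON log (the file name and field layout are illustrative): grep acts as a cheap pre-filter, so only matching lines ever reach a slower per-record tool.

```shell
# Create a tiny sample log, one JSON object per line (illustrative data).
printf '%s\n' \
  '{"level":"info","msg":"started"}' \
  '{"level":"error","msg":"disk full"}' > events.jsonl

# Sequential scan: grep discards non-matching lines before any heavier
# per-record processing; here we just count the surviving records.
grep '"level":"error"' events.jsonl | wc -l
```

The same pre-filter can feed jq (`grep ... | jq ...`) so the expensive per-record parsing only touches the records that matter.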
In those cases, querying unindexed files seems like quite a thinko, even if you can fit it all in RAM.
If you only scan that monstrous file sequentially, then you don't need jq, jj, or any other "powerful" tool; just read and write it sequentially.
If you need complex scans and queries, I suspect a database is better suited.
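As a hedged sketch of the database route, assuming SQLite and illustrative table/column names (`events`, `level`, `msg`): once the data is loaded and indexed, repeated queries no longer pay the full-scan cost.

```shell
# Illustrative data in CSV form (level,msg per line).
printf 'error,disk full\ninfo,started\n' > events.csv

# Load it into SQLite and build an index on the query column.
sqlite3 events.db <<'SQL'
CREATE TABLE events(level TEXT, msg TEXT);
.mode csv
.import events.csv events
CREATE INDEX idx_level ON events(level);
SQL

# An indexed lookup instead of a sequential scan of the whole file.
sqlite3 events.db "SELECT msg FROM events WHERE level = 'error';"
```

For one-off scans the import step is pure overhead, which is the trade-off the thread is about; the database only wins once you query the same data repeatedly.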