The trick here is to take some source of information online that's updated frequently and turn it into a historical record of every change made to that source: you set up a GitHub repository and drop in a YAML file that configures a scheduled action.
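For anyone curious what that YAML file looks like, here's a minimal sketch of a scheduled scraping workflow; the URL, filename, and cron schedule are placeholders, swap in whatever source you're tracking. It would live in the repo at .github/workflows/scrape.yml:

```yaml
name: Scrape latest data

on:
  push:
  workflow_dispatch:
  schedule:
    # run every 20 minutes (placeholder schedule)
    - cron: '6,26,46 * * * *'

permissions:
  contents: write

jobs:
  scheduled:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fetch latest data
        # hypothetical source URL; replace with the page or API you want to track
        run: curl -s https://example.com/data.json -o data.json
      - name: Commit and push if it changed
        run: |
          git config user.name "Automated"
          git config user.email "actions@users.noreply.github.com"
          git add -A
          timestamp=$(date -u)
          # if nothing changed, git commit fails and we exit cleanly
          git commit -m "Latest data: ${timestamp}" || exit 0
          git push
```

Each run fetches the source, and a commit only lands when the content actually changed, so the commit history becomes your change log.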
Achieving the same thing with a time series database would require a whole lot more work, I think: you'd need to run that database somewhere, then run code that scrapes and writes to it on a schedule.
If you already have a time series database running and a machine that runs cron, I guess it wouldn't be too much work to put that in place.
Git scraping also lets you easily track changes made to textual content, which I don't think would fit neatly in a time series database.
I mean, you could use SQLite the wrong way and treat it as a time series database, which would save you from having to run a database server, and I'm sure you could cobble together some sort of hosting for the file and glue it to a web cron service. The GitHub approach seems quite a bit more straightforward, but then your data lives in git instead of something else.