Maybe I'm using SQLite wrong, but I have a website scraper. It's written in Python, so it uses processes for concurrency to sidestep the GIL.
On my laptop, 16 processes accessing the same SQLite db file can do something like 5,000 write transactions/second. That seems like plenty for many purposes?
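A minimal sketch of that kind of benchmark — the table name and schema are made up for illustration, and each process opens its own connection, since sqlite3 connections shouldn't be shared across processes:

```python
# Rough benchmark: N processes committing small write transactions
# against one shared SQLite file. Numbers will vary by machine and disk.
import multiprocessing
import os
import sqlite3
import tempfile
import time


def worker(db_path: str, n_txns: int) -> None:
    # timeout=30 makes sqlite3 retry on SQLITE_BUSY lock contention.
    conn = sqlite3.connect(db_path, timeout=30)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    for i in range(n_txns):
        # Each INSERT + commit is its own write transaction.
        conn.execute(
            "INSERT INTO pages (url, body) VALUES (?, ?)",
            (f"https://example.com/{os.getpid()}/{i}", "..."),
        )
        conn.commit()
    conn.close()


def benchmark(n_procs: int = 4, n_txns: int = 250) -> float:
    db_path = os.path.join(tempfile.mkdtemp(), "scrape.db")
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE pages (url TEXT, body TEXT)")
    conn.close()

    start = time.perf_counter()
    procs = [
        multiprocessing.Process(target=worker, args=(db_path, n_txns))
        for _ in range(n_procs)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.perf_counter() - start
    return (n_procs * n_txns) / elapsed


if __name__ == "__main__":
    print(f"{benchmark():.0f} write txns/sec")
```

WAL mode is what makes this work at all: readers don't block the writer, and writers queue on a single lock rather than failing outright (given a busy timeout).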
A caveat: it's in WAL mode, with synchronous set to NORMAL. That means SQLite isn't fsyncing the WAL on each individual transaction commit; instead, commits get synced in batches, roughly every 200ms. If you pull the power plug on the server, you can lose the committed writes from that window. Terminating the process is fine — the writes are in the OS page cache and the database stays consistent.
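The configuration is two pragmas. One wrinkle worth knowing: journal_mode=WAL is persistent (it's recorded in the database file), while synchronous=NORMAL is per-connection, so every process has to set it. A minimal sketch, using a throwaway file path:

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database; it doesn't apply to :memory:.
db_path = os.path.join(tempfile.mkdtemp(), "scrape.db")
conn = sqlite3.connect(db_path)

# Persistent: stored in the database file, inherited by later connections.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# Per-connection: each process must set this itself.
conn.execute("PRAGMA synchronous=NORMAL")
```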
Yeah, yeah, we're giving up some amount of durability. In practice, not very much (200ms of transactions) and only very rarely (when the server dies). That seems like a fine trade-off for many backend processes and many web apps. You could tune this, as well: for the transactions that you _really_ want durability on, run a checkpoint before rendering your response to the user.
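A sketch of that tuning, with a hypothetical `orders` table standing in for whatever you actually care about: `PRAGMA wal_checkpoint(FULL)` blocks until the WAL content has been synced into the database file, so by the time it returns, the commit is durable.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "important.db")
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

conn.execute("INSERT INTO orders (total) VALUES (?)", (99.95,))
conn.commit()  # under synchronous=NORMAL, this may not be fsynced yet

# Force a full checkpoint before telling the user "saved".
# Returns (busy, wal_frames, checkpointed_frames); busy == 0 means
# the checkpoint completed.
busy, wal_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(FULL)"
).fetchone()
```

The nice part is that this is a per-transaction decision: the bulk scraping writes stay fast, and only the handful of transactions that truly need durability pay the fsync cost.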