Great article! I found the section on "GC Assists" very informative:
> You can think of this like a tax that your goroutine must pay for allocating during a GC cycle, except that this tax must be paid upfront before the allocation can actually happen.
Sum types would add much-needed compile-time safety to returned errors and optional values. We often see nil pointer dereferences at runtime because pointers are used as optional types.
It's very unlikely they store their passwords in plaintext. They were probably logging login requests and didn't realize the whole request body was being logged. This can happen if you have a proxy/load balancer with logging enabled.
This is not what happened. The tokens were mapped to user IDs, and when people signed in, the database created new user records, which could end up with the same IDs as old, deleted accounts. When they restored the DB, those tokens pointed at other users and granted access to those users' accounts. Quite an unfortunate situation. It may have been mostly avoidable if UUIDs had been used instead of incrementing IDs, but hindsight is 20/20.
The part I don't get is how a new user could have the same ID as an old (truncated) user, since "our system created new records for them, with primary keys generated from the existing sequence (PostgreSQL does not reset id sequences on truncate)."
Do they mean that the only potentially exposed accounts are those that signed up after the database was restored?
Yeah, they must mean new accounts; if not, then I'm lost. I guess it could have reset the autoincrement, but they said it didn't. The only other thing I can think of is that the signed token put in localStorage is sent to the server as something like "someuser|sometoken": the server inspects sometoken, sees that it checks out, and then takes the client at its word that it's someuser.
HTTP 1.1 + JSON support for Twirp opens up a lot of doors too. It's easy for the browser to hit a Twirp service natively, without the need for large packages such as https://github.com/improbable-eng/grpc-web.
I think you would be better off using more well-known statistical methods for betting (e.g. multivariate regression, t-tests, etc.). These methods are less of a black box than ML (neural networks, etc.), so you'll be able to understand why they chose specific bets and tweak them more easily.
ML is often just stats under the hood -- to be clear, I'm targeting neural networks specifically with this comment.