We bank with Mercury, and I couldn't get hold of them during business hours after receiving this email at 5:30 PM CET on a Friday (unsurprisingly). The email seems odd to me: Mercury shows one transaction posted as of March 9 for employee payments and another posted as of March 10 for tax payments. Had I been successful in contacting Mercury, would they even have been able to reverse them?
Rippling should be working with the FDIC to sort out these in-flight payments, not asking their customers to do it. When I asked their support team what the additional instructions are, I couldn't get an answer. This situation sure doesn't look great.
Does the set of validators really need to be static (or fixed-size)? I may be missing something obvious, but it seems like we can also support a dynamic set. Consider the following scheme:
- In addition to transaction data, each validator stores three sets: CURRENT, the current validator set; IN_PENDING, the set of clients who are to join the validator set; OUT_PENDING, the set of validators who are to leave the validator set.
- Validators support four additional requests: v_nominate, v_initialize, v_remove, v_eject.
- When a client wants to join the validator set, it sends the v_nominate request to all validators. Validators who agree add the client to IN_PENDING, sign the tuple (CURRENT, IN_PENDING) and reply.
- If the candidate client receives 2 * f + 1 signatures, where f = floor((max |CURRENT| - 1) / 3) (the maximum is taken over the CURRENT sets in the responses), it sends the v_initialize request to all validators (along with the signatures). Validators receiving this request remove the candidate from IN_PENDING and add it to CURRENT.
- When a validator wants to remove a validator (possibly itself) from the set, it sends the v_remove request to all validators. Validators who agree add the outgoing validator to OUT_PENDING, sign the tuple (CURRENT, OUT_PENDING) and reply.
- If the validator that originated the removal request receives 2 * f + 1 signatures, where f = floor((max |CURRENT| - 1) / 3) (the maximum is taken over the CURRENT sets in the responses), it sends the v_eject request to all validators (along with the signatures). Validators receiving this request remove the outgoing validator from both OUT_PENDING and CURRENT.
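For concreteness, the join path above can be sketched as follows. Everything here is illustrative: the class and method names are mine, and the "signatures" are plain sender ids rather than real cryptographic signatures.

```python
# Sketch of the dynamic-membership join path described above.
# Names are illustrative; "signatures" are simplified to sender ids.

def quorum(n):
    # f = (n - 1) // 3 Byzantine faults tolerated; quorum is 2f + 1.
    f = (n - 1) // 3
    return 2 * f + 1

class Validator:
    def __init__(self, name, current):
        self.name = name
        self.current = set(current)      # CURRENT
        self.in_pending = set()          # IN_PENDING
        self.out_pending = set()         # OUT_PENDING

    def v_nominate(self, candidate):
        # An agreeing validator records the candidate and "signs" the pair.
        self.in_pending.add(candidate)
        return (self.name, frozenset(self.current), frozenset(self.in_pending))

    def v_initialize(self, candidate, sigs):
        # Candidate presents its signatures; f is maximized over responses.
        n = max(len(cur) for (_, cur, _) in sigs)
        assert len(sigs) >= quorum(n), "not enough signatures"
        self.in_pending.discard(candidate)
        self.current.add(candidate)

# Usage: four validators admit a fifth.
names = ["v1", "v2", "v3", "v4"]
vs = [Validator(n, names) for n in names]
sigs = [v.v_nominate("v5") for v in vs]   # 4 >= quorum(4) == 3
for v in vs:
    v.v_initialize("v5", sigs)
print(sorted(vs[0].current))  # ['v1', 'v2', 'v3', 'v4', 'v5']
```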
Wouldn't arguments similar to the ones in the article also work for showing consensus on these sets?
It should be relatively easy to port it to other programming languages.
Compared to regular counting Bloom filters, it has some advantages (e.g. it uses half the space, lookup is much faster, and there is no risk of counter overflow) and one disadvantage: add and remove are slower (currently). Cuckoo filters need less space, but otherwise the trade-offs are the same.
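For readers comparing against the baseline: a plain counting Bloom filter looks roughly like this (not the project's code; all names here are mine). The saturating counters show where the overflow risk in the regular structure comes from.

```python
# Baseline counting Bloom filter, for comparison only -- NOT the project's
# implementation. Small fixed-width counters illustrate the overflow risk:
# deletes are only safe if no counter ever saturates.
import hashlib

class CountingBloom:
    def __init__(self, m=1024, k=4, counter_max=15):  # 4-bit counters
        self.m, self.k, self.counter_max = m, k, counter_max
        self.counters = [0] * m

    def _slots(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for s in self._slots(item):
            if self.counters[s] < self.counter_max:
                self.counters[s] += 1  # saturates; a real overflow would break remove()

    def remove(self, item):
        for s in self._slots(item):
            if self.counters[s] > 0:
                self.counters[s] -= 1

    def maybe_contains(self, item):
        # May return a false positive, never a false negative (absent saturation).
        return all(self.counters[s] > 0 for s in self._slots(item))

bf = CountingBloom()
bf.add("hello")
print(bf.maybe_contains("hello"))  # True
bf.remove("hello")
print(bf.maybe_contains("hello"))  # False
```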
For more questions, just open an issue on that project (I'm one of the authors).
I was thinking about the signature issue as well. In flat space (i.e. the Minkowski metric), this would imply a constant four-potential with an imaginary zeroth component, which I cannot make sense of.
IIUC the authors are saying that if we associate the metric with the four-potential via an outer product, they get a picture coherent with the current understanding of how electromagnetism "works" in GR under certain circumstances.
I can somewhat see how to interpret the mathematics in free space. But what about when there are massive bodies in the picture? They will result in a non-flat metric... does that imply they create their own electromagnetism?
So if we have a theory expressive enough to make statements about ordinary (Peano) arithmetic, we can always form a self-referential statement within the framework of this theory which we cannot prove or disprove. So far, so good. Here is my question: what happens if we restrict/weaken the theory to preclude self-referential statements? Obviously, we will lose the ability to express certain arithmetic statements which correspond to self-referential statements in the original theory. But what else? Is that the only class of statements we lose? Also, are there any other kinds of statements that still make the theory incomplete?
It is very hard to detect self-referential statements and restricting yourself to "non-self-referential statements" might be quite severe.
Consider a set of "domino" tiles, each having a top and a bottom. Each top and bottom carries a word, and these words can only use the letters "a" and "b".
You can duplicate domino tiles and lay tiles in a row, so that the tops line up and the bottoms line up.
Now, given a finite set of such tiles, can you say whether there is an alignment so that all tops concatenated, read from right to left, equal all bottoms concatenated?
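The existence question can be brute-forced up to a bound (the general problem is undecidable, so no bound suffices for every input). A sketch, checking the concatenations directly, since reversing both strings preserves equality:

```python
# Bounded brute-force search for a matching tile alignment.
# Undecidability means no depth bound works in general: this can find
# matches, but can never certify that none exists.
from collections import deque

def pcp_match(tiles, max_len=8):
    """tiles: list of (top, bottom) word pairs over {'a', 'b'}.
    Returns a sequence of tile indices whose tops and bottoms
    concatenate to the same string, or None if no match is found
    within max_len tiles."""
    q = deque([()])
    while q:
        seq = q.popleft()
        top = "".join(tiles[i][0] for i in seq)
        bot = "".join(tiles[i][1] for i in seq)
        if seq and top == bot:
            return list(seq)
        if len(seq) >= max_len:
            continue
        # Prune: one string must be a prefix of the other to be extendable.
        if top.startswith(bot) or bot.startswith(top):
            for i in range(len(tiles)):
                q.append(seq + (i,))
    return None

# A classic solvable instance: tiles 2, 1, 2, 0 give
# "bba"+"ab"+"bba"+"a" on top and "bb"+"aa"+"bb"+"baa" on the bottom,
# both equal to "bbaabbbaa".
print(pcp_match([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))  # [2, 1, 2, 0]
```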
In fact, given such a set of tiles S, you can easily create a formula P(S) that is true iff such a valid alignment does not exist. Obviously this formula is true for some sets of tiles and false for others.
Now the funny thing: given a (correct) fixed theory T in which you can state P(S) for every S, and in which proofs can be computationally checked, there must be infinitely many S for which P(S) is true but cannot be proved in T. Thus any such theory T is incomplete.
Where is the self-reference?
This problem is also known as the Post correspondence problem (PCP). The halting problem, which is undecidable, can be reduced to it. If T were complete, you could enumerate all proofs and check whether they correctly prove P(S) or its negation. By completeness you would eventually find a proof of one or the other, and thus you could decide the halting problem.
We lose the ability to reason about sets of unbounded size. As long as we restrict ourselves to some bounded subset of the integers, Gödel can't do the trick with his numbering. Equivalently, on the CS side, we must restrict ourselves to total functions; that is, all valid programs must provably halt after some bounded number of steps.
You are talking about Sprecher's modification to the original Kolmogorov-Arnold theorem, right? This version, and its implications, have been a lingering question for me for quite a while. Are you aware of any research on 3-layer networks where the unknown transfer function is also learnable? I suspect such an approach does not result in good models (otherwise we would have known about them!), but I cannot articulate why. Where exactly does the K-A reasoning fail when we try to apply it in practice?
Yes. I haven't touched this since 1995, so I had to refresh my memory. I was indeed talking about Sprecher's modification. Back when I studied this, the proofs I found were not constructive.
I was unaware, but apparently Griebel gave a constructive proof in 2009 (linked from the Wikipedia article about the K-A representation theorem). I would have to read it, and hope I am not too rusty to understand it, before I could really ponder your question...
But I could offer two places I would have looked:
1. The approximation is of a continuous function, and such approximations (e.g. Chebyshev, Bernstein) usually require that you be able to sample the function at specific points - but learning usually gives you training data that does not correspond to those specific points. It's possible that the construction fails there somehow.
2. The approximation is too hard in practice. This is too often the case for Breiman's beautiful ACE (Alternating Conditional Expectations), which, if you squint hard enough, looks like a two-layer network where each neuron has its own transfer function. The algorithm is incredibly simple in theory, but very hard to use in practice.
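For the curious, here is a toy version of the ACE alternation with a single predictor. It uses crude binned conditional means in place of the smoother from the original paper, so it is a sketch of the idea only, not a usable implementation; all names are mine.

```python
# Toy ACE (Alternating Conditional Expectations) with one predictor.
# Conditional expectations are estimated by naive equal-width binning,
# not the supersmoother used in the real algorithm.
import math
import random

def binned_cond_mean(by, values, nbins=10):
    # Estimate E[values | by] within equal-width bins of `by`.
    lo, hi = min(by), max(by)
    width = (hi - lo) / nbins or 1.0
    sums, counts = [0.0] * nbins, [0] * nbins
    idx = [min(int((b - lo) / width), nbins - 1) for b in by]
    for i, v in zip(idx, values):
        sums[i] += v
        counts[i] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return [means[i] for i in idx]

def ace(x, y, iters=20):
    theta = [v - sum(y) / len(y) for v in y]   # start: centered response
    phi = [0.0] * len(x)
    for _ in range(iters):
        phi = binned_cond_mean(x, theta)       # phi(x)  <- E[theta | x]
        theta = binned_cond_mean(y, phi)       # theta(y) <- E[phi | y]
        sd = math.sqrt(sum(t * t for t in theta) / len(theta)) or 1.0
        theta = [t / sd for t in theta]        # renormalize to unit variance
    return phi, theta

# On y = x^2 plus noise, phi should come out roughly quadratic in x:
# large near x = -1 and x = 1, small near x = 0.
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [xi * xi + random.gauss(0, 0.05) for xi in x]
phi, theta = ace(x, y)
```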
No, but s/he might tell you that it is safe to count on God being an important concept to many people for the foreseeable future. In the same vein, gold will likely be perceived as a good store of value for the foreseeable future, which is what matters in this context.
> God being an important concept to many people for the foreseeable future
Just like those tribes in the Amazon: they will exist, but most of humanity will fork off from that, and it's already happening. As someone else in this thread said: "You seem to be under the impression that only 'better' things exist. That is wrong." I agree with this 100%: old things will continue to exist, but they will be left behind.
Clear, concise and to the point. Looking forward to reading the rest. If you finish the whole series and fix/improve the posts in this sequence using the feedback from here, I think it might serve as one of the go-to pages for technically inclined people who are interested in learning about Bitcoin and blockchains.
That's generally true. The trouble is that your post more or less does to the thread what you're predicting that people will do to the thread—the polarity of the high-order bit doesn't matter much. It's best to abstain.
I don't understand whether they use windowing as a fixed computational step that is active at both training and scoring time, or whether they use sliding windows only to chop up the training data.
Also, I wonder if they checked how a feed-forward NN that operates on the contents of a sliding window (e.g. as in the first approach above) compares with their RNN results. I am curious about this, as it would give us a hint whether the RNN's internal state encodes something that is not a simple transformation of the window contents. If this turns out to be the case, I'd then be interested in figuring out what the internal state "means"; i.e. whether there is anything there that we humans can recognize.
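For the first approach, reshaping the series into (window, next value) pairs is all the feed-forward baseline needs; a minimal sketch (the function name is mine):

```python
# Turn a univariate series into supervised (window, next-value) pairs,
# i.e. the input a window-based feed-forward net would see. Whether the
# RNN's internal state carries more than some transformation of this
# window is exactly the question above.
def sliding_windows(series, window):
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])  # model input: last `window` points
        ys.append(series[i + window])    # target: the next point
    return xs, ys

xs, ys = sliding_windows([1, 2, 3, 4, 5, 6], window=3)
print(xs)  # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(ys)  # [4, 5, 6]
```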
I wasn't very sure what the sliding window part was about either. I think they were just saying that they trained on a sliding window using the "output window" as part of their loss function.
A feed-forward NN wouldn't do much, because it doesn't hold a state variable, which you need in order to capture context in time-series data. There are probably some pieces of the state you'd be able to interpret, but the majority of it would mean nothing to us.
It makes very little sense to me that their sliding window does not appear to contain previous holidays. I'm no expert, but I'm pretty sure holidays are a seasonal trend, and it would benefit them to train their models on previous holidays rather than on the 3 months before a holiday, right?