For any value 0 < a < n, a^((n-1)/2) = (a^6)^144 = 1^144 = 1 mod 7.
For any value 0 < a < n, a^((n-1)/2) = (a^12)^72 = 1^72 = 1 mod 13.
For any value 0 < a < n, a^((n-1)/2) = (a^18)^48 = 1^48 = 1 mod 19.
Thus 1729 will pass the Lehmann test for all a. (Unsurprisingly 1729 is a Carmichael number; not all Carmichael numbers will pass all Lehmann tests, but any number which passes all Lehmann tests will be Carmichael.)
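The arithmetic above is easy to check exhaustively; a minimal sketch in Python (864 = 1728/2 and the factorisation 1729 = 7·13·19 are from the comment above):

```python
from math import gcd

n = 1729               # 7 * 13 * 19, a Carmichael number
e = (n - 1) // 2       # 864 = 6*144 = 12*72 = 18*48

# Every base coprime to n satisfies a^((n-1)/2) == 1 (mod n),
# so 1729 passes the Lehmann-style check for every such a.
assert all(pow(a, e, n) == 1 for a in range(1, n) if gcd(a, n) == 1)
print("1729 fools every coprime base")
```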
And this gives us an infinite-under-random-primes class, too: If 12n+7, 24n+13, and 36n+19 are all primes, then their product will pass all Lehmann tests.
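That family is easy to probe numerically; a hedged sketch (searching small n, then spot-checking each product against a run of coprime bases):

```python
from math import gcd

def is_prime(m):
    # plain trial division; fine at this size
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# search small n for the pattern described in the comment
for n in range(50):
    p, q, r = 12*n + 7, 24*n + 13, 36*n + 19
    if is_prime(p) and is_prime(q) and is_prime(r):
        N = p * q * r
        # every coprime base should give a^((N-1)/2) == 1 (mod N)
        assert all(pow(a, (N - 1) // 2, N) == 1
                   for a in range(2, 500) if gcd(a, N) == 1)
        print(n, N)
```

n = 0 recovers 1729 itself; the next hit in this range is n = 25, giving 307 · 613 · 919.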
I'm pretty sure that the Alford-Granville-Pomerance proof that there are at least x^(2/7) Carmichael numbers below x can be extended to give the same order of result for "Lehmann-Carmichael" numbers, but I'm not going to do it at 5:30 AM while struggling with a 102F fever.
Thanks! However, I'm not quite convinced. I ran the given Scheme implementation with k=10000 and it correctly reported all the numbers at http://www.kobepharma-u.ac.jp/~math/notes/note02.html less than 4294967087 as composite. (4294967087 is the largest random number Racket can give.)
However, running it with k=50 gave a random set of them each time. So perhaps the probability data I gave was not right.
I can't tell you what your code is doing wrong, but I promise 1729 should pass the test you described.
EDIT: And of course I forgot about non-relatively-prime values. But those are asymptotically sparse; you're seeing a random set of pseudoprimes for k=50 because with that many trials you have a good chance of catching small primes, but for larger Carmichael numbers you'll need a very large number of trials to get good odds.
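For 1729 specifically the non-coprime bases aren't rare at all, which is why small k already catches it; a quick sketch of the witness fraction:

```python
from math import gcd

n = 1729  # 7 * 13 * 19
witnesses = sum(1 for a in range(1, n) if gcd(a, n) > 1)
print(witnesses / (n - 1))  # 432/1728 = 0.25: a single random base
                            # shares a factor with 1729 a quarter of the time
```

So with k=50 the odds of never hitting a non-coprime base are 0.75^50, which is vanishingly small; for larger Carmichael numbers (with larger prime factors) that fraction shrinks, which is the asymptotic sparsity mentioned above.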
The Solovay–Strassen and Miller–Rabin tests, which are fast and accurate, but not easy to implement: the former requires understanding the Legendre and Jacobi symbols (arcane mathematical concepts) and the latter requires an efficient factorisation algorithm.
It's very frustrating how often this comes up: an answer is well known but "hard to implement". The answer here should clearly have been "so then I used Solovay–Strassen and Miller–Rabin". You'd think there would be SOME sample code out there, or that translating the maths wouldn't be too difficult, yet I often run into this problem myself. The divide between mathematical notation (or perhaps the "language of CS papers") and actual code is distressing.
Miller-Rabin isn't that hard to implement - it just needs some reasonable maths skills related to modular arithmetic, and a copy of Knuth for fast versions of some algorithms helps a lot as well.
I agree. Implementing the Miller-Rabin algorithm is one of the homework assignments in the Computer Algebra course we give here. In fact, once you've got the necessary concepts from group theory and modular arithmetic down, the algorithm itself is actually quite simple (it's the correctness proof that has a subtle and clever idea).
When the author was talking about the "Project Triangle" in the context of primality testing, I thought he was going to bring up AKS!
I too am very frustrated with this problem. I would love a library to implement temporal-difference learning, but I can't implement it from the papers describing it.
I'm not sure, an easy biased answer from our side is that papers should include pseudo code closer to what we actually use to code (the flip side being we should learn more math). However, a particularly frustrating example was one paper that did actually code up what they were talking about to test it, but didn't include the code in the paper at all (instead just graphs of the results). I think there is a general feeling that the code is all "implementation details" and not as important as the theory/proof/etc, which is very unfortunate.
I can't guarantee the correctness of the implementation, but a friend (burke on HN, I believe) and I wrote a version of the Miller-Rabin primality test in Ruby & C that you may want to use as reference.
IIRC numbers under 341,550,071,728,321 could be accurately tested for primality on the "instant" timescale.
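A sketch of what that instant, deterministic check could look like, assuming the standard result that the witness bases 2, 3, 5, 7, 11, 13, 17 suffice for all n below 341,550,071,728,321:

```python
def miller_rabin_det(n):
    """Deterministic Miller-Rabin, valid for n < 341,550,071,728,321."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n == p:
            return True
        if n % p == 0:
            return False
    # write n - 1 as 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness to compositeness
    return True

print(miller_rabin_det(27644437))   # True: the article's benchmark prime
print(miller_rabin_det(1729))       # False: Carmichael number caught
```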
Another vote for AKS. Our cryptography lecturer skimmed over the Fermat and Miller-Rabin tests, then just said "AKS is better" and left it at that. I believe the best approach is to do a few iterations of a non-deterministic test to quickly rule out some numbers and then start on a deterministic one. Wikipedia's description of the algorithm still looks pretty simple, and being deterministic makes me feel all warm and fuzzy inside.
The documentation and download links for Plan at http://plan.dpk.org.uk seem to be broken, so I'm not sure if this has bigints.
Having implemented AKS in Python and Sage, I can say it gets pretty slow - obviously nowhere near as fast as a good probabilistic test, even one run to a staggering degree of certainty. That said, most of the algorithm is very quick. It might be smart to run its basic compositeness tests, do a pseudoprime test, and only do the polynomial mod operation if the number passes.
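The cheap front half of that could be sketched like this (the perfect-power check is AKS step 1; the Fermat rounds are the pseudoprime screen; `cheap_composite` is hypothetical glue around an AKS polynomial core not shown here):

```python
def is_perfect_power(n):
    """AKS step 1: is n == a^b for some integers a >= 2, b >= 2?"""
    b = 2
    while (1 << b) <= n:       # smallest possible base is 2
        lo, hi = 2, n
        while lo <= hi:        # binary search for an integer b-th root
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
        b += 1
    return False

def cheap_composite(n):
    """Quick screens before paying for the polynomial step."""
    if n % 2 == 0:
        return n != 2
    if is_perfect_power(n):
        return True
    # a few Fermat rounds; Carmichael numbers slip through,
    # but the expensive AKS core behind this would catch them
    return any(pow(a, n - 1, n) != 1 for a in (2, 3, 5))
```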
Plan isn't done yet. You can see the preliminary Ruby implementation at http://github.com/dpkendal/Plan, but I'm now working on a "final" version in RPython to be compiled by PyPy. The download and docs links are there to show what the website will look like eventually.
Elliptic Curve Primality Proving is way faster in practice, and although its running time is not deterministic, it does give a proof that a number is prime.
"On my machine, testing the prime number 27,644,437 took about 1.2 milliseconds with a k value of 50 (far higher than you’ll probably need for your applications.)"
I'm sorry, but WHAT?
Writing the dumbest implementation of trial-division that I could, in C++ on my laptop, in Debug mode, took 0.0247046 ms to determine definitively if 27,644,437 is prime.
My dumbest-possible implementation, in Debug mode, is 48.5 times faster, and doesn't get confused by Carmichael numbers.
Move along, folks, nothing to see here.
"The trial-division method... gets slow quickly."
That may be true, but using the number 27,644,437 as a benchmark will not get you anywhere near that "slow" point.
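For scale, here is the same dumbest-possible trial-division idea as a Python sketch (not the commenter's C++, just the equivalent): sqrt(27,644,437) is about 5,258, so only a few thousand divisions are needed.

```python
def trial_division(n):
    """Definitive primality test by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(trial_division(27644437))  # True, and it never gets
print(trial_division(1729))      # confused by Carmichael numbers
```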
I'm fairly sure that most of that 48x difference will come from the fact that you're using C++. I tested it in Scheme, a much slower language. (Actually, it was in Arc, which compiles to Scheme, but is still even slower.)
> The Solovay–Strassen and Miller–Rabin tests, which are fast and accurate, but not easy to implement: the former requires understanding the Legendre and Jacobi symbols (arcane mathematical concepts) and the latter requires an efficient factorisation algorithm.
The author is a bit confused. Miller-Rabin, when testing n, only requires factoring to the extent of determining the largest power of 2 that divides n-1. In other words, you have to factor n-1 into the form 2^s d, where d is odd. This is trivial to do efficiently.
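The "factoring" involved really is just stripping out powers of two, e.g.:

```python
def split_pow2(m):
    """Write m as 2**s * d with d odd: the only 'factoring'
    Miller-Rabin needs, applied to n - 1."""
    s = 0
    while m % 2 == 0:
        m //= 2
        s += 1
    return s, m

print(split_pow2(1728))  # (6, 27): 1728 == 2**6 * 27
```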