Yesterday I noted the somewhat amazing fact that the latest-discovered (Mersenne) prime number had almost 4.5 million MORE digits than the previously largest known prime! Part of this is simply because Mersenne primes are only one sub-category of ALL primes, and it probably also relates to how the GIMPS project (which searches only for Mersennes) operates. But still quite stunning.
I was thinking of writing a short post about Mersenne versus other primes, but heck, you can get that info from the Web... More interestingly, Patrick Honner took up the cause of actually getting a handle on how many primes may have gone as yet undetected between the newly found prime and the previous record-holder.
Mr. Honner did the legwork, starting with Bertrand's Postulate and in the end calculating that there must be "at least 14,772,551 primes between the largest and second-largest known primes!" (2^57,885,161 - 1 and 2^43,112,609 - 1)
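Bertrand's Postulate (for every n > 1 there is at least one prime p with n < p < 2n) makes the lower bound a one-line calculation. Here's a minimal sketch of the counting step — my own reconstruction of the argument, not code from Mr. Honner's post:

```python
# Bertrand's Postulate: each doubling interval (2^k, 2^(k+1)) contains at
# least one prime, and the intervals are disjoint.

p_prev = 43_112_609   # exponent of the previous record prime, 2^43112609 - 1
p_new  = 57_885_161   # exponent of the newly found prime,     2^57885161 - 1

# The intervals (2^k, 2^(k+1)) for k = p_prev, ..., p_new - 2 all lie
# strictly between the two record primes, giving one guaranteed prime each:
guaranteed = p_new - p_prev - 1
print(guaranteed)  # 14772551
```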
His first commenter takes the argument/logic even further and concludes that there should be around 1.45 * 10^17,425,162 prime numbers between the two known primes, which, as he notes, "is staggeringly larger" than even Patrick's number.
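That figure comes from the Prime Number Theorem, which says the count of primes up to x is roughly x/ln(x). A quick sketch of the arithmetic (my own back-of-the-envelope version, working in log10 since the numbers are astronomically large; the prime count below the smaller record prime is negligible at this scale):

```python
import math

p = 57_885_161  # exponent of the larger record prime, 2^57885161 - 1

# Prime Number Theorem estimate: pi(2^p) ~ 2^p / (p * ln 2).
# Take log10 of that: p*log10(2) - log10(p * ln 2).
log10_count = p * math.log10(2) - math.log10(p * math.log(2))
exponent = math.floor(log10_count)
mantissa = 10 ** (log10_count - exponent)
print(f"roughly {mantissa:.2f} * 10^{exponent}")  # roughly 1.45 * 10^17425162
```

Which matches the commenter's number nicely.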
Anyway… read all about it:
http://mrhonner.com/2013/02/06/how-many-primes-did-we-miss/
As an aside, I noticed that on the Wikipedia page for Mersenne primes it mentions that in 2003 the GIMPS project produced a "false positive" for the 40th Mersenne prime, which was shown to be invalid upon failing verification. If someone can explain how such "false positives" can come about (is it due to computer chip glitches or some chance algorithmic flaw or what?) I'd be curious to hear in the comments (...just verifying these GIMPS numbers must be a bit of a story unto itself as well!)...
1 comment:
Hi Sketchy!
I think the source of such a false positive lies in the implementation of the Lucas-Lehmer test. At http://www.mersenne.org/various/math.php you may read:
As mentioned in the last paragraph, GIMPS uses floating point FFTs written in highly pipelined, cache friendly assembly language. Since floating point computations are inexact, after every iteration the floating point values are rounded back to integers. The discrepancy between the proper integer result and the actual floating point result is called the convolution error. If the convolution error ever exceeds 0.5 then the rounding step will produce incorrect results - meaning a larger FFT should have been used. One of GIMPS' error checks is to make sure the maximum convolution error does not exceed 0.4. Unfortunately, this error check is fairly expensive and is not done on every squaring.