The old boys are back in town...
Now expect a lot more with AMD's Ryzen chips, especially Threadripper. :) Finally there's good competition.
Humanity's collection of very large prime numbers just grew by one member: 919444^1048576 + 1. The newly-found number lands in twelfth place on the list of largest known primes and, set down in full, would be 6,253,210 digits long (number one on the large primes list, 2^74207281 - 1, is 22,338,618 digits long). The number …
I assume you get academic brownie points and bragging rights over your enormous shiny rig but what real value to science is yet another prime number?
I am assuming that however large you go there will always be a larger one.
"I am assuming that however large you go there will always be a larger one."
Well yes, the set of primes is infinite and was proven to be over 2000 years ago by Euclid.
As to why, well here are just a number of reasons:
https://primes.utm.edu/notes/faq/why.html
https://arstechnica.com/science/2013/02/ask-ars-why-spend-time-and-money-finding-new-prime-numbers/
Damn, so much for SSH. That's my server wide open to the world.
I find this stuff interesting but do not have the specific maths background to understand it all, despite three A levels in Maths :)
What puzzles me is how we've found the 12th largest number. My naive thinking would be that we slowly increment the highest prime rather than finding one in the middle of other primes.
Any simple answers to my stoopid question?
Thanks
Rob
It is not a stupid question; it is a good question.
The answer is that primes become rarer and rarer as numbers grow, and there will always be large gaps that contain no primes at all:
https://en.wikipedia.org/wiki/Prime_gap
I believe it is more efficient to pick a particular number you suspect might be prime (one of a special form, like the Mersenne numbers) and test just that one, instead of checking many numbers in a row. There is no formula that points straight at a prime, and no one knows whether primes are more frequent in some "areas" than others. So we basically test a promising candidate - which takes a long time to do.
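To make those gaps concrete, here is a minimal Python sketch (function names are mine, and it uses a simple sieve of Eratosthenes, so it only reaches modest limits) showing that the largest gap between consecutive primes grows as you go further out:

```python
def primes_up_to(limit):
    """Return all primes <= limit via the sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # strike out all multiples of i starting from i*i
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [n for n, is_p in enumerate(sieve) if is_p]

def largest_gap(limit):
    """Largest difference between consecutive primes below the limit."""
    ps = primes_up_to(limit)
    return max(b - a for a, b in zip(ps, ps[1:]))

for limit in (100, 10_000, 1_000_000):
    print(limit, largest_gap(limit))   # gaps: 8, 36, 114
```

The record gap keeps growing: 8 below a hundred, 114 below a million, and (as the Wikipedia article above notes) arbitrarily large eventually.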
Well, you can make massive jumps quite easily, but slowly incrementing is possibly harder, particularly as this seems to be looking for Generalised Fermat primes, rather than just a regular prime.
I'd have thought it would be relatively easy to beat this number as the "largest prime" given a sufficiently large and complete set of primes. If the current largest prime is 22+ million digits long, then you just collect a sufficiently large set of primes, multiply them all together and subtract 1 - bingo.
I suspect it may be more complicated than this, though... :(
> then you just collect a sufficiently large set of primes, multiply them all together and subtract 1 - bingo.
I thought that theorem only works if you multiply *all* primes together then subtract one. Otherwise, making new primes would just be a leapfrogging process.
Finding huge Mersenne primes is (comparatively) easy because there is the Lucas-Lehmer primality test.
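The Lucas-Lehmer test is simple enough to sketch in a few lines of Python (a toy version; real searches use heavily optimised big-number arithmetic, and the test only applies when the exponent p is itself an odd prime):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2^p - 1, p an odd prime.
    M_p is prime iff s_{p-2} == 0, where s_0 = 4 and
    s_{k+1} = s_k^2 - 2 (mod M_p)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_prime_small(n):
    """Naive trial division, just to filter the exponents."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

mersenne_exps = [p for p in range(3, 131)
                 if is_prime_small(p) and lucas_lehmer(p)]
print(mersenne_exps)   # [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```

Note how p = 11 fails: 2^11 - 1 = 2047 = 23 * 89, so a prime exponent is necessary but not sufficient.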
For numbers of the form C^N+1 where C is factored (here: 919444 = 2 * 2 * 53 * 4337 and N=2^20) one can try to find a primitive root (of the multiplicative group). In other words, Pratt's primality certificate is doable.
For "random" numbers of this size (random meaning not of some rather special form), proving primality is way out of reach. Still, a huge number "surviving" several rounds of the Miller-Rabin (compositeness!) test can safely be assumed to be prime.
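A minimal sketch of a Miller-Rabin round in Python (my own simplified version; note the asymmetry: False means definitely composite, True only means "probable prime"):

```python
import random

def miller_rabin(n, rounds=20):
    """Return False if n is definitely composite, True if n is a
    probable prime (a composite survives each round with chance < 1/4)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: n is composite
    return True

print(miller_rabin(2**127 - 1))   # True  (a Mersenne prime)
print(miller_rabin(2**127 + 1))   # False (divisible by 3)
```

For a record-sized number each round is a single modular exponentiation, which is why a handful of rounds is cheap compared with an actual primality proof.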
I quite agree, but I don't think you need to go that far to get a set that, when multiplied together, has more than 22 million digits - if fag packet calcs are moderately accurate, the first few million primes should do the trick, and there's at least one website with over 37 billion in a list (which I think claims to be complete).
> then you just collect a sufficiently large set of primes, multiply them all together and subtract 1 - bingo.
That doesn't work, but it sounds a bit like the proof that there are an unlimited number of primes.
The proof that there are an unlimited number of primes goes something like this:
Suppose you think there are only a limited number of primes, and you have a list of all the primes. Take all the primes you know about, multiply them all together, then ADD one, to get a new number P. This will be bigger than all the primes you know about. You can't divide P by any of the primes you know about, because you'll always get a remainder of one. So either P is a prime that you didn't know about, or it's composite and its factors are primes you didn't know about. Either way, your list of primes was incomplete.
This proves that any (finite-length) list of primes will always be incomplete, so there have to be infinitely many primes.
Unfortunately, this process is not a great way of finding new primes, because of the step at the end where you have to figure out if P is prime or not, and factor it if it's not a prime. Factoring big numbers is a very hard problem. (If you can solve that problem, you can break the RSA encryption algorithm).
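Euclid's construction, and the way it fails as a prime generator, can be sketched like this (naive trial division, small cases only, helper names are mine):

```python
from math import prod

# P = (product of the first n primes) + 1 is coprime to all of them,
# but P itself need not be prime.
first_primes = [2, 3, 5, 7, 11, 13]
for n in range(1, len(first_primes) + 1):
    P = prod(first_primes[:n]) + 1
    # smallest factor of P by naive trial division
    f = next(d for d in range(2, P + 1) if P % d == 0)
    print(P, "prime" if f == P else f"composite, smallest factor {f}")
```

The first five products-plus-one (3, 7, 31, 211, 2311) happen to be prime, but the sixth, 30031, is composite: 30031 = 59 * 509, and neither factor was in the original list.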
arse, hadn't appreciated that it could be a composite of primes greater than the biggest prime in your set - case in point is {2, 3, 5, 7, 11, 13}, which results in 30031 when you use the multiply-and-add-one method, and 30031 = 59 * 509.
Perhaps I will graciously let them off and concur that what they are doing is really hard :)
> then you just collect a sufficiently large set of primes, multiply them all together and subtract 1 - bingo.
I thought about this yesterday and realised that it didn't work. Consider the product of the first n primes less one. You're basically using a prime sieve, but trying to generate new primes from it rather than doing primality testing.
For a sieve to work on a number n, you have to have a table of all possible prime factors up to sqrt(n). However, the list of primes and products grow at different rates, so you're going to come across a product whose square root is greater than the largest prime in your list, breaking the ability of the sieve to test whether the number is a prime or not.
This happens quite quickly since 2 * 3 * 5 - 1 is 29, and sqrt(29) > 5. Then comes 2 * 3 * 5 * 7 -1 = 209, which is composite with factors 11 and 19, thus kicking the whole idea in the head.
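A quick check of that counterexample in Python (variable names are mine):

```python
from math import isqrt, prod

primes = [2, 3, 5, 7]
Q = prod(primes) - 1                 # 209
# The sieve fails: sqrt(209) is about 14, which exceeds 7, the
# largest prime in the list, so the list can't certify Q.
print(Q, isqrt(Q), max(primes))
# And indeed Q is composite, with both factors outside the list:
print([d for d in range(2, Q) if Q % d == 0])   # [11, 19]
```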
"What puzzles me is how we've found the 12th largest number. My naive thinking would be that we slowly increment the highest prime rather than finding one in the middle of other primes."
It's due to the way special-form primes, such as the Mersenne primes, are used to simplify the search.
Mersenne primes are a subset of all primes and take the form 2^n - 1. This makes it easy to search for a new, higher one, because you only need to increase n and check whether the calculated value of 2^n - 1 is prime.
However, increasing n means that you can skip over primes. For example, the first four Mersenne primes are: M2 = (2^2 - 1) = 4 - 1 = 3; M3 = (2^3 - 1) = 8 - 1 = 7; M5 = 31; and M7 = 127. We've quite quickly found a 'largest' prime of 127, but in the process we skipped over 11, 13, 17 etc.
The same thing has happened in this story but for a much larger number.
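The skipping described above can be sketched in Python (toy trial division, my own helper names):

```python
def is_prime(n):
    """Naive trial division, fine for tiny numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

mersennes = [2**p - 1 for p in (2, 3, 5, 7)]
print(mersennes)                     # [3, 7, 31, 127]

# Primes between 3 and 127 that the Mersenne search never visits:
skipped = [q for q in range(4, 128) if is_prime(q) and q not in mersennes]
print(skipped[:6])                   # [5, 11, 13, 17, 19, 23]
```

Each step of n jumps the candidate value roughly twofold, so almost all ordinary primes fall in between, exactly as with the much larger generalised Fermat candidates in the story.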
Am I safe to assume that the following idea is already part of how primes are determined to be prime?..
Candidate number = N.
Divide N by 2; this is the upper limit of the factoring process, call it Nx.
Start dividing N by Y (3, 4, 5 etc.), all the time reducing Nx to N divided by the current value of Y.
At some point Y = Nx, and if you haven't found N to be divisible by anything, then you have a prime #.
(Apologies to any mathematicians out there, I failed A level :) )
> N/2, this is the upper limit of the factoring process
You can improve that to sqrt(N).
Still this is probably the worst (most inefficient) way to prove primality.
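The sqrt(N) cutoff looks like this as a sketch (still hopeless for record-sized numbers, but fine for small ones):

```python
from math import isqrt

def is_prime_trial(n):
    """Trial division up to sqrt(n): if n = a * b with a <= b,
    then a <= sqrt(n), so testing divisors up to isqrt(n) suffices."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, isqrt(n) + 1, 2):   # odd divisors only
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 50) if is_prime_trial(n)])
```

Even with the sqrt cutoff this is exponential in the number of digits of N, which is why the serious tests mentioned elsewhere in the thread (Lucas-Lehmer, Miller-Rabin, ECPP) exist at all.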
"Still this is probably the worst (most inefficient) way to prove primality."
Cool, do I get a prize?
> Cool, do I get a prize?
Yes, the number n of English pounds that is both a third power x^3 (x a non-negative integer) and also the sum of two third powers y^3 + z^3 (again, both y and z non-negative integers).
LOL - I have no idea what that actually means, but I'm guessing it's approximating zero somehow :)
> but I'm guessing it's approximating zero somehow :)
By Fermat's Last Theorem there are only the trivial solutions: either x = 1 with one of y, z equal to 1 and the other zero, or all of x, y, z zero. As I promised ("x a non-negative integer"), you get 1 pound.
Generalised Fermat Number = a^(2^n) + 1
or, more generally, a^(2^n) + b^(2^n)
http://mathworld.wolfram.com/GeneralizedFermatNumber.html
919444^1048576 + 1 = 919444^(2^20) + 1 = (53×4337×2^2)^(2^20) + 1
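You can sanity-check the article's digit count without ever materialising the 6-million-digit number, using logarithms (a sketch; the floor-plus-one digit formula is standard, and double precision is easily accurate enough here):

```python
import math

# Number of decimal digits of a^k is floor(k * log10(a)) + 1.
digits = math.floor(2**20 * math.log10(919444)) + 1
print(digits)   # 6253210, matching the article's 6,253,210

# The factorisation of the base quoted above:
assert 2 * 2 * 53 * 4337 == 919444
```

(The "+ 1" on the end of the prime itself doesn't change the digit count, since 919444^1048576 is nowhere near a power of ten.)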
So an engineer had an idea: that all odd numbers were prime. He set out to prove his theorem.
1 is prime.
3 is prime.
5 is prime.
7 is prime.
9 is - well, observation error. Throw that one out.
11 is prime.
13 is prime.
That's a big enough sample size. Proven!
Applying government logic there would be a law that says 9 isn't an odd number.
> 1 is prime.
Nope: prime numbers P have exactly two factors, 1 and P (which have to be distinct).
Considering 1 prime destroys the uniqueness of the factorization into primes. You very much do not want that!
"So an engineer had an idea,"
Please, don't call the software coders at Microsoft "engineers". That's an affront to the folks who have fancy embossing stamps.
Yes, I jumped to the conclusion you meant "Microsoft engineers" due to the simple fact that your slightly flawed proof looks like it may have been foundational to Microsoft's more general assertion that "The general public loves the Windows 8/10 UI". GIGO and all that...
Bah, yer telling it wrong. Mainly because I work with engineers, scientists and mathematicians, I have a lot of the 'three types' jokes.
Premise is that all odd are prime.
Mathematician goes:
3 is prime, 5 is prime, 7 is prime, 9 is not prime. 9 is an exception, therefore premise is false.
Scientist goes:
3 is prime, 5 is prime, 7 is prime, 9 is not prime, 11 is prime, 13 is prime. 9 is likely to be an observation error, premise appears to be correct, and I need a six figure grant to improve the accuracy of measurement equipment.
Engineer goes:
3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime, 13 is prime....
That is not how engineering is done.
That's how they do climate science.
This is all well past the math I studied, but if this is the 12th largest prime number, isn't there the possibility that we could find two new "lower ranked" primes that are the prime factors to a "higher ranked" prime?
Say we "know" Prime2 is a prime, but discover Prime12 and Prime7, and they are the two prime factors of Prime2, so that Prime12 x Prime7 = Prime2. We just invalidated Prime2 as a prime number.
We know 11 is a prime number because we know all of the prime numbers below it. If we assumed 221 was prime without knowing that 13 and 17 are primes, we would be wrong because of incomplete information. If we subsequently discovered that 13 is prime, we would then invalidate 221 as a prime.
Or is there a better proof that someone has come up with?
Once a number N has been proved prime (assuming the proof algorithm is correct and there are no machine malfunctions), you do not need to worry about finding (non-trivial) factors of N; it cannot happen.
Such is the nature of a proof 8^)
So there is a way to prove a prime without checking if every lower prime number is a factor of it. Cool!
Lots of tests. Miller-Rabin, Lucas-Lehmer (for Mersenne primes), ECPP (which is deterministic) etc.
> without checking if every lower prime
Yes. Algorithmic number theory is one lovely branch of mathematics.
> Miller-Rabin...
Careful. Miller-Rabin is a compositeness test. I am sure you know, but let's avoid such misleading statements. ---- pedant's pint ---->
This is a generalised Fermat prime, which is a prime of the form a^n + 1. Fairly obviously, a has to be even for this to be prime (for odd a, a^n + 1 is an even number greater than 2). Less obvious is that n has to be a power of two. There are probably an infinite number of GFPs overall, but only a finite number for each base a.
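A toy illustration for the smallest exponent case, primes of the form a^2 + 1 (naive primality check of my own; note every base in the output is even):

```python
def is_prime(n):
    """Naive trial division, fine for tiny numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Even bases a <= 30 for which a^2 + 1 is prime:
gfp_bases = [a for a in range(2, 31) if is_prime(a**2 + 1)]
print(gfp_bases)   # [2, 4, 6, 10, 14, 16, 20, 24, 26]
```

The record prime in the article is the same shape several storeys up: base 919444 with exponent 2^20 instead of 2^1.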
Man, breaks automated tests at 00:30.