Note: this script requires gmpy2. If you are using Python 3.9.5 x64 on Windows 10 21H1 like me, you need to download a pre-built gmpy2 wheel from https://www.lfd.uci.edu/~gohlke/pythonlibs/.
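For reference, installing the downloaded wheel is just a local pip install; the filename below is only a placeholder for whichever version you grab:

pip install gmpy2-X.Y.Z-cp39-cp39-win_amd64.whl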
This is a script that approximates pi using 9 implementations (really only 7 distinct algorithms, since two of them are implemented in two different ways each).
In this script I used Liu Hui's π algorithm, the Leibniz formula for π, summing the area of a disk, Madhava's series, Archimedes' method, an arctangent series, and the Bailey–Borwein–Plouffe formula.
As these methods are all well-known and have been implemented countless times before me, I will not explain their principles and details.
This question is about making the implementations as efficient and accurate as possible.
The code:
import gmpy2
import sys
from gmpy2 import mpfr, sqrt, atan, ceil, factorial
from itertools import product

gmpy2.get_context().precision = 339


def cyclotomy(n):
    # Liu Hui's algorithm: double the sides of an inscribed polygon n times,
    # starting from a hexagon with side length 1 in the unit circle.
    s = 6
    M = 1
    m = M  # guard so n == 0 still returns a value (the hexagon estimate, 3)
    for _ in range(n):
        G = mpfr(sqrt(1 - M ** 2 / 4))
        j = mpfr(1 - G)
        m = mpfr(sqrt((M / 2) ** 2 + j ** 2))
        s *= 2
        M = m
    return '{0:.100f}'.format(mpfr(s * m / 2))


def Leibniz(n):
    # Leibniz series: pi = 4 - 4/3 + 4/5 - 4/7 + ...
    pi = mpfr(0.0)
    d = 1
    for i in range(1, n):
        a = 2 * (i % 2) - 1
        pi += mpfr(a * 4 / d)
        d += 2
    return '{0:.100f}'.format(mpfr(pi))


def sumarea(r):
    # Disk area: count lattice points (x, y) in one quadrant with x^2 + y^2 <= r^2.
    r_sq = r**2
    quadrant_sq = product((x**2 for x in range(r+1)), repeat=2)
    q = sum(a+b <= r_sq for a, b in quadrant_sq)
    return '{0:.100f}'.format(mpfr((4*q - 4*r + 1) / r_sq))


def pixelpi(r):
    # Same disk-area idea, but sum the rounded-up column heights instead.
    r_sq = r**2
    s = sum(ceil(sqrt(r_sq - x**2)) for x in range(r+1))
    return '{0:.100f}'.format(mpfr((4*s - 4*r + 1) / r_sq))


def Madhava(n):
    # Madhava series: pi = sqrt(12) * sum((-1/3)^k / (2k + 1)).
    a = mpfr(0.0)
    for i in range(n):
        a += mpfr((-1.0/3.0)**i/(2.0*i+1.0))
    return '{0:.100f}'.format(mpfr(sqrt(12)*a))


def Archimedes(n):
    # Archimedes' method: pa and pb are the perimeters of inscribed and
    # circumscribed hexagons, refined by harmonic and geometric means.
    pa = 6
    pb = 4 * sqrt(3)
    for _ in range(n):
        pb = mpfr(2 * pa * pb / (pa + pb))
        pa = mpfr(sqrt(pa * pb))
    return '{0:.100f}'.format(mpfr((pa + pb) / 4))


def Arctangent(n):
    # Arctangent series: pi = sum(2^(k+1) * (k!)^2 / (2k + 1)!).
    term = lambda x: mpfr((2 ** (x + 1) * factorial(x) ** 2) / factorial(2 * x + 1))
    return '{0:.100f}'.format(mpfr(sum(term(i) for i in range(n))))


def ArctanPi(n):
    # Nested-radical form: pi ~ 2^(n+1) * atan(sqrt(2 - t(n-1)) / t(n)),
    # where t(1) = sqrt(2) and t(k) = sqrt(2 + t(k-1)), computed recursively.
    term = lambda x: mpfr(sqrt(2)) if x == 1 else mpfr(sqrt(2 + term(x - 1)))
    if n >= 2:
        return '{0:.100f}'.format(mpfr(2 ** (n + 1) * atan(sqrt(2 - term(n - 1)) / term(n))))


def BBP(n):
    # Bailey-Borwein-Plouffe base-16 series.
    term = lambda x: mpfr((4 / (8 * x + 1) - 2 / (8 * x + 4) - 1 / (8 * x + 5) - 1 / (8 * x + 6)) * 16 ** -x)
    return '{0:.100f}'.format(mpfr(sum(term(i) for i in range(n))))


def main(a, b):
    # Dispatch on the method index a and run it with argument b.
    switcher = {
        0: cyclotomy,
        1: Leibniz,
        2: sumarea,
        3: pixelpi,
        4: Madhava,
        5: Archimedes,
        6: Arctangent,
        7: ArctanPi,
        8: BBP
    }
    return switcher.get(a)(b)


if __name__ == '__main__':
    print(main(*map(int, sys.argv[1:])))
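The script is run with two integers: the method index (0 to 8, in the order of the switcher) and the number of iterations (or the radius, for the two area-based methods). For example, assuming the file is saved as pi.py:

python pi.py 0 166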
So far I have found that the algorithm based on Liu Hui's converges the fastest (except for the recursive arctangent function, which is prone to error): it yields the first 100 decimal places of pi at the 166th iteration with 339-bit precision, though any higher input produces an inaccurate result due to rounding (the last two digits change from "79" to "80").
But at the 166th iteration we are calculating the perimeter of a regular polygon with 6 * 2 ^ 166 sides to get the result.
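To illustrate the rounding issue, here is a minimal sketch (not part of the script above) that reruns the same side-doubling iteration at a higher working precision (400 bits is an arbitrary choice) and compares it against gmpy2's built-in const_pi() at that precision, so you can check whether the extra bits stop the last digits from flipping; cyclotomy_hp is just a placeholder name:

import gmpy2
from gmpy2 import mpfr, sqrt

# Rerun the Liu Hui iteration at a higher (arbitrary) working precision and
# compare against gmpy2's built-in constant at the same precision.
gmpy2.get_context().precision = 400

def cyclotomy_hp(n):
    s = 6
    M = mpfr(1)
    m = M
    for _ in range(n):
        G = sqrt(1 - M ** 2 / 4)
        j = 1 - G
        m = sqrt((M / 2) ** 2 + j ** 2)
        s *= 2
        M = m
    return s * m / 2

print('{0:.100f}'.format(cyclotomy_hp(170)))
print('{0:.100f}'.format(gmpy2.const_pi()))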
The third and fourth methods use the same logic, but the third counts all the cells inside the circle, which is always an integer, and this is somehow more accurate than the fourth.
The fourth method sums the maximum y value at each x value; because it uses square roots, y isn't always an integer, and because of rounding this is less accurate than the third, though much faster. I have found that rounding y up makes the result most accurate.
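For comparison, here is a small sketch (not part of the script above) that produces the same count as the third method, but column by column with exact integer square roots via math.isqrt, so it runs in O(r) time with no rounding in the count itself; lattice_pi is just a placeholder name:

from math import isqrt

def lattice_pi(r):
    # Count lattice points (x, y) with 0 <= x, y <= r and x^2 + y^2 <= r^2,
    # the same quantity the third method counts, using exact integer square
    # roots. Wrap the final division in mpfr(...) as in the original if you
    # want the 100-decimal formatting; the count q is exact either way.
    r_sq = r * r
    q = sum(isqrt(r_sq - x * x) + 1 for x in range(r + 1))
    return (4 * q - 4 * r + 1) / r_sq

print(lattice_pi(1_000_000))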
The seventh and eighth methods use the same logic, but the eighth uses recursion. It achieves its most accurate result at the fifth iteration (input 6), gives very inaccurate results for inputs 164 to 169, gives completely wrong results at input 170, yields 4.0000 at input 172 and -0.0000 for any number after that, and the maximum input is 997 in the terminal and 998 in the Python shell; beyond that it throws RecursionError: maximum recursion depth exceeded in comparison.
These issues are caused by the way the computer works (precision errors and recursion limits), and I am unable to fix them.
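For reference, the same nested radical can also be built with a loop instead of recursion, which at least removes the RecursionError (it does not cure the precision loss); this is only a sketch, and arctan_pi_iterative is a placeholder name:

import gmpy2
from gmpy2 import mpfr, sqrt, atan

gmpy2.get_context().precision = 339

def arctan_pi_iterative(n):
    # Same nested-radical formula as ArctanPi, with t(k) built iteratively:
    # t(1) = sqrt(2), t(k) = sqrt(2 + t(k-1)),
    # result = 2^(n+1) * atan(sqrt(2 - t(n-1)) / t(n)).
    if n < 2:
        raise ValueError('n must be at least 2')
    t_prev = sqrt(mpfr(2))     # t(1)
    t_curr = sqrt(2 + t_prev)  # t(2)
    for _ in range(3, n + 1):
        t_prev, t_curr = t_curr, sqrt(2 + t_curr)
    return '{0:.100f}'.format(mpfr(2 ** (n + 1) * atan(sqrt(2 - t_prev) / t_curr)))

print(arctan_pi_iterative(6))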
Beyond that there are no issues to report.
I would like to know how I can improve the efficiency and accuracy of these implementations, and how they can be optimized. Any help will be appreciated.