Thursday, March 28, 2013

Calculating a series on the computer.

In calculus we have started a new section, 10.3, on convergent and divergent series.  I decided to calculate these series out on a computer and see for myself whether they converge or diverge.  I know that pure mathematicians frown upon using numeric methods for this sort of thing, but I find that you can often see patterns you would miss without numeric tools grinding through huge numbers of results.

The program I wrote calculates partial sums all the way up to an absurdly large maximum value.  Because of the huge number of results, I decided to sample them at the powers of 10.

I stopped the programs after about 5 minutes because the interval between results became too long: each sampled value takes 10 times longer to produce than the one before it.  Up to about 100,000,000 the computer is absurdly fast, but after that it slowly grinds to a halt.  It would be interesting to break these sums up into threads and push on to much larger values (a sketch of that idea is at the end of this post).

This first program calculates the sum of 1/n:

#include <stdio.h>

int main(void)
{
    long double i = 1;        /* current term index; persists across samples */
    long double j;            /* sampling point: the next power of 10 */
    long double result = 0;   /* running sum of 1/n */
    long double inter = 0;    /* last term added, i.e. 1/(i-1) after the inner loop */

    printf("\n");

    /* sample the partial sum at each power of 10, up to 10^12 */
    for (j = 1; j < 10000000000000; j *= 10) {
        /* continue summing from wherever the last sample left off */
        for (; i < j; i++) {
            inter = 1.0 / i;
            result = result + inter;
        }
        printf("1.0 / %.0Lf = %.20Lf", i, inter);
        printf("  += %.20Lf\n", result);
    }
    return 0;
}
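
To reproduce this, something like the following should work with any C compiler that supports long double (the file name is my own):

cc -O2 harmonic.c -o harmonic && ./harmonic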

1.0 / 1 = 0.00000000000000000000  += 0.00000000000000000000
1.0 / 10 = 0.11111111111111111111  += 2.82896825396825396851
1.0 / 100 = 0.01010101010101010101  += 5.17737751763962026144
1.0 / 1000 = 0.00100100100100100100  += 7.48447086055034491430
1.0 / 10000 = 0.00010001000100010001  += 9.78750603604438229946
1.0 / 100000 = 0.00001000010000100001  += 12.09013612986342790703
1.0 / 1000000 = 0.00000100000100000100  += 14.39272572286572335516
1.0 / 10000000 = 0.00000010000001000000  += 16.69531126585985137713
1.0 / 100000000 = 0.00000001000000010000  += 18.99789640385390624040
1.0 / 1000000000 = 0.00000000100000000100  += 21.30048150134794616301
1.0 / 10000000000 = 0.00000000010000000001  += 23.60306659479200872866

This series diverges, as the ever-increasing sums show.  But I noticed that if you look at the difference between successive sampled sums, that difference converges to ln(10).  I have no clue what this means at this point.

n              sum             difference from previous sum
1              0
10             2.828968254     2.828968254
100            5.1773775176    2.3484092637
1000           7.4844708606    2.3070933429
10000          9.787506036     2.3030351755
100000         12.0901361299   2.3026300938
1000000        14.3927257229   2.302589593
10000000       16.6953112659   2.302585543
100000000      18.9978964039   2.302585138
1000000000     21.3004815013   2.3025850975
10000000000    23.6030665948   2.3025850934
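
That difference column can be generated directly.  Here is a minimal sketch that reruns the sum and prints the gap between successive samples (the output layout and variable names are my own):

#include <stdio.h>

int main(void)
{
    long double i = 1, result = 0, prev = 0, j;

    /* at each power of 10, print n, the partial sum, and the change since the last sample */
    for (j = 1; j <= 10000000000.0L; j *= 10) {
        for (; i < j; i++)
            result += 1.0L / i;
        printf("%.0Lf %.10Lf %.10Lf\n", j, result, result - prev);
        prev = result;
    }
    return 0;
}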

-----------

The second program is the same as the first, except the intermediate result inter divides by i*i instead of just i.  This represents the sum of 1/(n^2), which does converge, to the value pi^2/6.  You wouldn't think that something as innocent looking as 1/n^2 would hold the value of pi inside itself.
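
Concretely, the only change is the term computed in the inner loop:

     inter = 1.0 / (i * i);

Running it gives: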

1.0 / 1 = 0.00000000000000000000  += 0.00000000000000000000
1.0 / 10 = 0.01234567901234567901  += 1.53976773116654069027
1.0 / 100 = 0.00010203040506070809  += 1.63488390018489286538
1.0 / 1000 = 0.00000100200300400501  += 1.64393356668155980261
1.0 / 10000 = 0.00000001000200030004  += 1.64483406184805976450
1.0 / 100000 = 0.00000000010000200003  += 1.64492406679822627375
1.0 / 1000000 = 0.00000000000100000200  += 1.64493306684772645277
1.0 / 10000000 = 0.00000000000001000000  += 1.64493396684822148644
1.0 / 100000000 = 0.00000000000000010000  += 1.64493405684822654295
1.0 / 1000000000 = 0.00000000000000000100  += 1.64493406584812120830
1.0 / 10000000000 = 0.00000000000000000001  += 1.64493406664905016438
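
For comparison, pi^2/6 = 1.6449340668482264..., so after 10^10 terms the computed sum agrees to about 8 decimal places.

As for the threading idea above: I haven't tried it, but here is a minimal sketch of what it might look like with POSIX threads, splitting the range of terms into equal chunks and summing each chunk on its own thread (the thread count, chunk scheme, and all names here are my own assumptions):

#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4

/* the range of terms a worker should sum, and where to put its result */
struct chunk {
    long long lo, hi;     /* sum 1/n for n in [lo, hi) */
    long double partial;
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    long double s = 0;
    for (long long n = c->lo; n < c->hi; n++)
        s += 1.0L / n;
    c->partial = s;
    return NULL;
}

int main(void)
{
    long long N = 1000000000;   /* sum the first N terms */
    pthread_t tid[NTHREADS];
    struct chunk c[NTHREADS];
    long long step = N / NTHREADS;

    /* split [1, N+1) into NTHREADS roughly equal chunks */
    for (int t = 0; t < NTHREADS; t++) {
        c[t].lo = 1 + t * step;
        c[t].hi = (t == NTHREADS - 1) ? N + 1 : c[t].lo + step;
        pthread_create(&tid[t], NULL, sum_chunk, &c[t]);
    }

    /* wait for all workers and combine their partial sums */
    long double result = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        result += c[t].partial;
    }
    printf("sum of 1/n for n = 1..%lld: %.20Lf\n", N, result);
    return 0;
}

Compile with something like cc -O2 -pthread threaded_sum.c.  One caveat: floating-point addition is not associative, so adding the chunks up in a different order can change the last few digits compared to the serial program.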