Saturday, April 5, 2014

Linux has just passed the tipping point.

It was funny to watch MS try to fight the Linux Hydra: for every head they chopped off, three more popped out. But now it is all over but the crying. Linux has won on servers, on low-end netbooks, on phones, on embedded devices, and on the new low-cost microcontroller boards that are currently taking over the desktop. Every attempt by Windows to enter new markets has failed, with their foray into gaming costing them much more than they made from gamers.

MS clusters account for only 2 of the top 500 supercomputers. Most cloud systems run Linux, with about 80% of web/cloud service traffic on the Internet served from Linux. One service alone, Netflix on Linux, accounts for about a third of all Internet traffic. And every time you use Google, at roughly 25% of daily traffic, you are using one of the largest Linux clusters in the world.

MS was forced to keep XP alive five years longer than they had planned in order to fight off Linux on netbooks. Even that didn't work: there are Chromebooks from many vendors that do not run Windows, and many people are very happy with these small, low-cost, full-featured netbooks.

MS lost the mobile/embedded market before they even got started, with all the flavors of Android overwhelming even the former market leader, the iPhone, like an avalanche. You can't compete against 100 low-cost competitors, each advancing some aspect of Android and having to share that work with everyone else. The only way to compete is to join. Zune was a horrible loss.

Android and other flavors of Linux run on most embedded devices because of the low cost, the ease of porting to new microcontrollers, and the full feature set. If your TV can play videos, it is probably running some flavor of Linux. If you have a standalone media player, it is probably running Linux.

Now single-board computers like the BeagleBone Black and Raspberry Pi cost less than $50; putting an MS OS on those boards would more than triple the cost, and quite frankly these boards are already about as much computer as the average person actually needs. With Moore's law, every two years these kinds of boards will only get faster and cheaper. Many routers even run Linux.

In 20 years you will be able to buy a $5 computer as powerful as the average desktop machine today. Will MS Windows still cost $100 then? My guess is that to stay relevant and competitive, MS will have to price Windows at about 50 cents per machine. Can they stay in business at that price point?

The Steam Box running Linux is going to do to gaming what Android did to the mobile phone market. Even I am planning on buying one. Goodbye PS4 and Xbox One. You can already emulate most games more than about five years old on the Raspberry Pi.

The only place Windows still has a major market share is the desktop, and only because they were convicted of illegally using monopoly power to maintain that share. They are currently trying to convert 500 million users of XP, an OS that came out in 2001, over to Windows 8. In fact, about 40% of desktops, both corporate and home, are still running XP.

I don't believe the conversion rate is going to be very high. I foresee at least half of these users switching their machines to Linux instead of buying a new machine to run the latest version of Windows. The only thing keeping Microsoft afloat right now is the little bit it is making from Windows 8, the least popular operating system they have ever released.

So all in all, Microsoft is pretty much running on empty. Only the billions they have in the bank are keeping them rolling along, and that will run out in just a few more years.

Goodbye Microsoft.

Solving Systems of Non-Linear Equations.

Got Newton's method working with the guess given to it by Steepest Descent.

I had to extend the matrix library to add 2-norm and transpose functions.
matrix.c    matrix.h

The code for the new library is here: 06_SystemsNonLinearEq.c    06_SystemsNonLinearEq.h

Finally the example program to define the function is here:  06_SystemsNonLinearEq_HW.c

The system of three equations with three unknowns, followed by the nine partial derivatives with respect to X1, X2, and X3.
Steepest Descent gets close globally, but then converges only linearly; it would take hundreds or thousands of iterations to find the value to a few decimal places:

initial  0             0             0
0        0.0112181743  0.0100963569 -0.5227407743
1       -0.0392975036  0.0960761555 -0.5230125588

Once you have that initial guess you can use Newton's method, which converges in just a few iterations:

0  0.5002972122  0.0144515939 -0.5210250069
1  0.5000081616  0.0009138476 -0.5235749873
2  0.5000000378  0.0000041458 -0.5235986672
3  0.5000000000  0.0000000001 -0.5235987756
4  0.5000000000 -0.0000000000 -0.5235987756
If your initial value isn't close, Newton's method may never converge to a value.

Most of the time in textbooks the initial guess is just provided for you; our teacher put in the extra two days of work to have us figure out the guess ourselves, because nobody will give you that initial guess in the real world.

What I need to do now is compute the error and stop only once I have met the required accuracy.


I fixed a couple of issues with symbols in converting from math to the C programming language. It wasn't clear that the quadratic always uses 3 points; I was setting it to OrderSize points instead. Fixed that.


Feeding the solution back into the formulas as an accuracy and final sanity check. :D
Modified the program so you can set how many iterations of steepest descent you want, with none as an option. Ran into a homework problem where steepest descent just ran off into the wild blue yonder.


This is one of my homework problems, due on Tuesday. It is a second-order non-linear system of equations. As you can see in main(), the system_new function defines the order (2) and the title. Then the Jacobian functions are defined from the partial derivatives of those functions with respect to either x1 or x2. When you take the derivative of a function with respect to one variable, every other variable is treated as if it were a constant.

Monday, March 24, 2014

N-order systems of equations.

I updated my previous second-order and first-order problem solvers to work with N-order systems of equations, where 0 < N < 100. At N = 1 you are doing Runge-Kutta order-4 estimation.

You just have to set up all the equations and then pass them in along with an initial value for each one.  Your functions will be passed an array of long doubles in u[] from 0 to n-1.

Source code is here:

First you create an object, setting its start, end, steps per unit, and title.
Next you add orders one at a time, with the initial value and function to use at that order.
Once all the orders are entered, call the calculate function to generate the results.
Then call the print function to output the results.
There are also methods to get the results programmatically.
Finally, dispose of the object and you are done.

Thinking about adding an actual-result function that, if present, will be printed alongside the results as the first column.

Friday, March 21, 2014

Estimating Systems of Higher-Order Equations

Evolving the Runge-Kutta method to solve higher-order equations is an interesting challenge. To get the first program working I did it in stages. First the entire problem was in main, and I was happy just to get u1 and u2 sorted out. After that I got one iteration of K1_1 through K4_2 sorted out, all eight values. I found a couple of problems with my formulas and fixed them, so that I got the same values as the example from the book.

The main loop was next. I was able to loop through all 10 iterations without any problems because I took my time on that first iteration. The printed results matched the output from the example in the book. All of that took about 4 hours. It is tough to translate from a math book to a computer program, but I guess one gets faster at it with practice.

After that I wrapped the code in a C object and created an interface to the function.  I followed the same interface as the Runge-Kutta module I previously did.

I solved a homework problem as well, and printed out the results from the actual function.

The source code:

As I find problems in the code I will update the git repository.

Left to do:

This solves a specific order (order 2) of higher-order problems. To be truly general, it should be turned into a general module where you can add a function and its matching initial value one at a time, then tell it to solve all the levels, no matter how many there are.

In that case it should solve when you tell it to print, or when you try to pull a value out of the solved array. It should not allocate the work arrays or the temporary k values until you know how many order levels you need.

Thursday, March 20, 2014

4th Order Runge-Kutta numerical method.

Wrote a C object to manage the code and data around generating Runge-Kutta 4th-order interpolation of data. It is a fairly efficient algorithm; I estimate it will process about 3 million points a second on a single core of a slow netbook. I tried to use the same variable names in the interface that most books and papers on this topic seem to use.

I do auto-ranging based on a step count per number unit. So if your function runs from 0 to 5 and you ask for 5 steps per unit, you end up with 5*5+1 = 26 max steps. The +1 comes from the initial value at the start of the range; after that you do 5 steps in the first interval, 5 steps in the second, and so on.

Source code is here:

In the example program you pass in the starting value, the start and end points, the steps per unit, and the function you are interpolating.
The object does the rest with the interface below:

The Object Interface to the Runge-Kutta module.
Once you create an object you can retrieve all of its properties and data values.
When you are done, you dispose of the object to free the memory.

The output for the example from the book: the exact same values.

Saturday, March 15, 2014

The recent Apple SSL issue.

The duplicated "goto fail" line caused the check on the next line to be unreachable, a fact that should have been seen in compiler warnings, in code reviews, and in the IDE itself.

Tuesday, March 11, 2014

GNU MP arbitrary precision library.

Tonight I am learning GMP, the GNU arbitrary precision math library. It is tough to work only with functions instead of plain math expressions; it takes multiple lines of obscure code to do what a single C expression can. Instead of saying:
bb = floor(n * sqrt(.5) + 1);
You have to say:
  mpf_t n, bb, srpf;
  mpf_init2(n, 256);       /* 256 bits of precision */
  mpf_set_ui(n, 10);
  mpf_pow_ui(n, n, 12);    /* n = 10^12 */
  mpf_init2(bb, 256);
  mpf_set_ui(bb, 5);
  mpf_div_ui(bb, bb, 10);  /* bb = 0.5 */
  mpf_init2(srpf, 256);
  mpf_sqrt(srpf, bb);      /* srpf = sqrt(0.5) */

  mpf_mul(bb, srpf, n);    /* bb = floor(n * sqrt(0.5) + 1) */
  mpf_add_ui(bb, bb, 1);
  mpf_floor(bb, bb);

Why deal with this extra complexity? Because it is amazingly easy to run into precision problems with just the 64 bits the standard data types give you. Simple things, like working with numbers in the 10^16 range, computing 100!, or finding the 5000th Fibonacci number, will give you wrong values out of your functions. The 5000th Fibonacci number runs you out of bits in the exponent of a long double.

I am playing around with the problems on the Project Euler site and running into a lot of situations I cannot solve in plain C. The problem sets use numbers large enough to overflow even 64-bit integers, and floating point is fuzzy enough that two values that should match don't, because their lowest-precision bits differ when they shouldn't.