Saturday, December 1, 2012

Linux from Scratch on the Raspberry Pi

The Linux From Scratch project is a way to build your own custom Linux distribution.  Building a distribution this way teaches you how every part of the system works together, and it would also be a good way to become intimately familiar with all of the hardware in the Raspi.  The Raspi would be an excellent system to learn this on.

http://www.linuxfromscratch.org/

My first Linux box, in 1993, was a 33 MHz 386 with no math coprocessor, 5 MB of RAM, and a 60 MB Linux partition on a 120 MB hard drive.  Over a period of six months I managed to fit most of the Softlanding Linux System onto it, installing things one at a time from 1.44 MB floppy disks, then optimizing each package, compressing what I could, and removing unneeded files.  A Raspi is so far beyond that system that it is amazing to think about now.

http://en.wikipedia.org/wiki/Softlanding_Linux_System

One of the problems with a traditional Linux distro is that it uses the glibc library, which, while complete, is also very heavy and large.  On a system with 4 GB of RAM running at 3.8 GHz this is not an issue, but when you have a 256 MB system with 128 MB reserved for the graphics, every single byte counts.  One solution is to build against the uClibc library instead, which reduces the size of both the system and the compiled executables.

http://en.wikipedia.org/wiki/UClibc

I have linked several programs against this smaller libc, and the difference in compiled program size is amazing: executables shrink to between a quarter and a tenth of their original size.  You also want to compile programs for size instead of speed, and use an executable compressor such as UPX:

http://linux.die.net/man/1/upx
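Putting those two ideas together, here is a minimal sketch of the build recipe.  The uClibc cross-compiler name below is an assumption (a Buildroot-built toolchain provides something along those lines), the sizes you get will vary, and hello.c is only a stand-in for a real program:

    /* hello.c -- a trivial stand-in program for comparing build sizes.
     *
     * Assumed build steps (the uClibc toolchain name is an assumption):
     *
     *   gcc -Os -o hello-glibc hello.c                            # native glibc build
     *   arm-linux-uclibcgnueabi-gcc -Os -o hello-uclibc hello.c   # uClibc build
     *   strip hello-glibc hello-uclibc                            # drop symbol tables
     *   upx --best hello-uclibc                                   # compress in place
     *   ls -l hello-glibc hello-uclibc                            # compare the results
     *
     * -Os tells the compiler to optimize for size rather than speed.
     * (UPX may refuse to pack a binary this tiny; the payoff is on
     * larger programs and libraries.)
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from a small libc");
        return 0;
    }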

Because the Raspi is 4 to 10 times slower than a modern $600 desktop machine, and has a very slow disk subsystem, making the files much smaller means you can load them much faster.  This is not a criticism of the Raspi: you are comparing a $25 computer against a $600 computer, and on a dollar-for-dollar basis the Raspi wins by any economic measure.  By dynamically linking against smaller libraries and compressing the resulting executables you can get amazingly fast performance, because the processor is much faster than the file subsystem.  The same could be done on a desktop machine, but nobody bothers, because there the disk subsystem is so fast that you don't gain much.

For the level just above the OS, you need a small, capable set of core utilities and a shell, which is exactly what BusyBox provides.

http://www.busybox.net/

BusyBox sits in the bin directory as a single file.  Commands such as "cat" or "ls" are then symlinked to that single executable, and depending on the name it was invoked by, it acts just like the command itself.  Most of the commands and options of a normal UNIX system are supported.  Once this single executable has been loaded, it stays in memory, so you can run commands again and again without loading a different executable each time.
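A rough sketch of that trick, just to illustrate the idea (this is not BusyBox's actual code, and the applet names are made up): one binary looks at the name it was invoked under and behaves accordingly, so the symlinks are free.

    /* multicall.c -- illustrative multi-call binary in the style of BusyBox.
     * Build and install (paths are illustrative):
     *   gcc -Os -o multicall multicall.c
     *   ln -s multicall hello
     *   ln -s multicall shout
     */
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        /* Decide which "applet" to run from the name we were invoked as. */
        char *name = basename(argv[0]);

        if (strcmp(name, "hello") == 0) {
            puts("hello");
        } else if (strcmp(name, "shout") == 0) {
            for (int i = 1; i < argc; i++)
                printf("%s! ", argv[i]);
            putchar('\n');
        } else {
            fprintf(stderr, "unknown applet: %s\n", name);
            return 1;
        }
        return 0;
    }

Running ./hello prints a greeting, while ./shout some words echoes its arguments, even though both names point at the same file on disk.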

There are a variety of small versions of many programs that can easily be put onto a box; they are commonly found on Linux-based routers to provide ssh access, web servers, and the like.  If you just need to share a few files or run a couple of CGI scripts, instead of Apache you can install Boa:

http://www.boa.org/

Instead of the normal ssh server you run Dropbear:

https://matt.ucc.asn.au/dropbear/dropbear.html

Once you get to the graphics system, the X Window System is actually fairly lightweight at this point.  If it were compiled against uClibc and then compressed in place with UPX, you could get impressive size reductions there as well.

Then on top of X you need a toolkit such as Qt or GTK+ (the library underneath GNOME) so that applications can be built.  These often come with large collections of applications.  This is where it becomes more interesting: there may be a better fit for a system like the Raspi than the two most popular choices.

One of the things I would like to do is analyze which libraries are commonly used by the majority of applications, then standardize on a couple of dozen libraries that handle common tasks such as processing HTML and XML, compression, encryption, file retrieval, and so on.  Instead of shipping a dozen different libraries that all do the same thing, you port any program you want to run on the box to that much smaller standard set.  This substantially reduces the memory footprint of the system and speeds up program loading.
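A crude way to start that survey, sketched below: run ldd over the binaries you care about and print the shared libraries each one needs; piping the output through sort | uniq -c | sort -n then ranks the libraries by how many programs depend on them.  This is only an illustration of the idea, not a finished tool.

    /* libsurvey.c -- print the shared libraries each given binary depends on.
     * Usage: ./libsurvey /usr/bin/* | sort | uniq -c | sort -n
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        char cmd[600], line[512], lib[256];

        for (int i = 1; i < argc; i++) {
            /* Ask the dynamic linker which libraries this binary needs. */
            snprintf(cmd, sizeof cmd, "ldd '%s' 2>/dev/null", argv[i]);
            FILE *p = popen(cmd, "r");
            if (!p)
                continue;
            while (fgets(line, sizeof line, p)) {
                /* ldd lines look like "  libm.so.6 => /lib/libm.so.6 (0x...)" */
                if (sscanf(line, " %255s", lib) == 1 && strstr(lib, ".so"))
                    puts(lib);
            }
            pclose(p);
        }
        return 0;
    }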

You also want to dynamically link as many of the libraries in the system as possible.  Statically linking libraries into an executable makes the program more portable, because the library code is built into the executable, but it also makes the program larger and forces each program to load its own copy of each library into memory.  With a couple dozen programs running you can easily waste megabytes of RAM on redundant copies of libraries.
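You can see that sharing from inside any running process.  As a small sketch, dl_iterate_phdr() (provided by glibc; uClibc carries an implementation too, though that is worth verifying on a real image) lists the shared objects mapped into the program, and each of those mappings is shared with every other process using the same library, whereas a static build would carry its own private copy of that code.

    /* listlibs.c -- print the shared objects mapped into this process.
     * Build: gcc -Os -o listlibs listlibs.c
     */
    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    static int show(struct dl_phdr_info *info, size_t size, void *data)
    {
        /* The main executable reports an empty name. */
        printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
        return 0;   /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(show, NULL);
        return 0;
    }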

Once I have a system up and running, it would be very interesting to look at how to get to a login prompt as quickly as possible, and then to the desktop as quickly as possible once the user has logged in.  We may be able to load common libraries into memory in the background so they are ready when a user successfully logs in, and start other services on the box, such as the web server, in the background over the next couple of minutes.
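A hedged sketch of the background-preload part: a tiny helper that asks the kernel to read a list of files (shared libraries, the applications a user usually starts) into the page cache with posix_fadvise(), so the first real use after login does not have to wait on the SD card.  Which files belong on that list would have to come from measuring what actually gets used.

    /* warmcache.c -- hint the kernel to pre-read files into the page cache.
     * Usage: ./warmcache /lib/libsomething.so /usr/bin/someapp ...
     * (the paths above are placeholders, not a recommended list)
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        for (int i = 1; i < argc; i++) {
            int fd = open(argv[i], O_RDONLY);
            if (fd < 0) {
                perror(argv[i]);
                continue;
            }
            struct stat st;
            if (fstat(fd, &st) == 0)
                /* Ask for asynchronous readahead of the whole file. */
                posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
            close(fd);
        }
        return 0;
    }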

Back in the late 1980s I used Commodore 64 and 128 computers.  These machines were slow: start loading a program, walk down to the bottom of the lane to check the mail, drink a cup of tea, come back, and still have to wait for the program to finish loading.  There was a cartridge you could plug into the machine so that, once a program had loaded into memory, you could press a button and a menu would pop up, letting you save the entire state of the computer at that point.  After that you could speed-load that saved state in just a few seconds, instead of the multiple minutes many of the games took to load.

It would be very interesting if we could snapshot the computer at the login prompt, so that we could speed-load the system to that point in just a second or two instead of up to a minute: almost instant-on.  As the user is logging in, other things can be loaded in the background.  Of course, if the system were updated, a new snapshot would have to be taken.
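The closest thing the stock kernel offers is hibernation (suspend-to-disk): write the contents of RAM out to swap and restore that image on the next boot instead of running a full init.  A minimal sketch of triggering it, assuming the kernel was built with hibernation support and a swap partition is configured, neither of which is a given on a Raspberry Pi image of this era:

    /* snapshot.c -- ask the kernel to hibernate (snapshot RAM to swap).
     * Must run as root; needs CONFIG_HIBERNATION and a resume= swap device.
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/power/state", "w");
        if (!f) {
            perror("/sys/power/state");
            return 1;
        }
        /* Writing "disk" triggers suspend-to-disk; the machine powers off
         * and the saved image is restored on the next boot. */
        if (fputs("disk\n", f) == EOF || fflush(f) == EOF)
            perror("write");
        fclose(f);
        return 0;
    }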
