Monday, April 30, 2012

A massively multi-player game is effectively a supercomputer.

Ponder the fact that all the computers in a massively multi-player game together form one of the largest supercomputers in the world; I wonder how much work could be offloaded from the servers to the clients.  Make all the clients use a BitTorrent-like protocol to download updates and textures from machines on nearby networks.

Sunday, April 29, 2012

Data Processing Pattern.

I've done a lot of data processing over the years and have come to the following understanding of how data processing works at a general level.  This concept is what I am planning on using for several batch and message processing projects I wish to create.

Data level

Data can come from many sources.  The program has to open a file, a database connection, a serial port, a network port, or other device and begin reading in a stream of data.  At this level the data is an almost meaningless stream of single bytes.

Format level

These bytes are organized in a specific pattern known as a format.  There are many different formats that the data can be organized around.

Fixed length.  Each field can be in a strict order, each with a fixed length, so that each record you read will be the sum of those fixed fields.  Typically there will be a special byte with end-of-line significance, usually a newline or a carriage return, but with this format the record separator is optional.  This is how the IP and TCP headers come in a data packet at layers 3 and 4.  Each byte and even each bit can have a specific positional meaning.  You can often spot this format by setting a text editor to 80 columns: suddenly the beginnings of last names all line up right down the page at column 20, each name followed by spaces until another column starts, all lined up at column 32.

Delimited.  Each field is followed by a delimiter, typically a comma or a tab, and there is an end-of-record marker that separates records from each other, typically a newline or a carriage return as with the fixed length format above.  The fields usually still have a maximum length, or a range of values, but this is not visible from the format itself.  You can usually spot this format by seeing the commas or the tabs in the data, with every record containing the same count of commas or tabs.

Mixed.  A message, or record, can be a combination of the above.  The fields can mostly be delimited with commas or tabs, but a few fields may have contents with a fixed internal layout.  HL7 is an example of a mixed format.

Grammar.  This used to be much more difficult than it is now; these days it typically means XML.  In the past people would create many different formats for data that was contextual in nature.  If you are trying to parse text that comes from a command line, or a language like English, or a program file written in C or Java, then your parser will have to understand a combination of positional text whose meaning is determined by the initial state and the order of the commands.
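
As a rough sketch of the format level, here is how one comma-delimited record might be read in C.  The record layout, field names, and sizes are invented for illustration.

#include <stdio.h>
#include <string.h>

/* Hypothetical record: lastname,firstname,zip -- comma delimited,
   newline terminated.  Field names and sizes are made up. */
struct person {
        char last[33];
        char first[33];
        char zip[11];
};

/* Format level: split one delimited line into fields.
   Returns 1 on success, 0 on a malformed record. */
static int parse_person(char *line, struct person *p)
{
        char *last  = strtok(line, ",");
        char *first = strtok(NULL, ",");
        char *zip   = strtok(NULL, ",\n");

        if (!last || !first || !zip)
                return 0;

        snprintf(p->last,  sizeof p->last,  "%s", last);
        snprintf(p->first, sizeof p->first, "%s", first);
        snprintf(p->zip,   sizeof p->zip,   "%s", zip);
        return 1;
}

int main(void)
{
        char line[256];
        struct person p;

        /* Data level: read the byte stream one line at a time. */
        while (fgets(line, sizeof line, stdin)) {
                /* Format level: give groups of bytes meaning. */
                if (parse_person(line, &p))
                        printf("last=%s first=%s zip=%s\n",
                               p.last, p.first, p.zip);
        }
        return 0;
}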

Conceptual Layer

 At this point you have read in the stream of bytes, given groups of those bytes meaning and stored the data into a record or other data object in your program.   A reference to this data can be passed around to represent that stored set of meaning.  

Translation and Routing

Often the data you received has fields in the wrong order, or you have a set of numbers from 1-5 that actually represents a user name.  This layer will take the incoming data and create a new record in the new format, transferring and transforming the data from one data object to the other.  Or an XML file you parsed has 100 records that need to be pulled out of the object and sent on as 100 individual records to the next layer, so this layer would have a loop that lets you take one data value and create as many objects as you need.  A single message might be split into multiple outbound messages.
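
A minimal sketch of that translation step, with invented inbound and outbound record layouts and a made-up code-to-name table:

#include <stdio.h>

/* Invented inbound record: fields arrive in the "wrong" order and the
   user is coded as a number from 1-5. */
struct inbound {
        int  user_code;
        char last[33];
        char first[33];
};

/* Invented outbound record expected by the next layer. */
struct outbound {
        char first[33];
        char last[33];
        char user_name[33];
};

static const char *user_names[] = {
        "", "alice", "bob", "carol", "dave", "eve"   /* codes 1-5 */
};

/* Translate one inbound record into one outbound record:
   reorder the fields and turn the numeric code into a name. */
static void translate(const struct inbound *in, struct outbound *out)
{
        snprintf(out->first, sizeof out->first, "%s", in->first);
        snprintf(out->last,  sizeof out->last,  "%s", in->last);

        if (in->user_code >= 1 && in->user_code <= 5)
                snprintf(out->user_name, sizeof out->user_name, "%s",
                         user_names[in->user_code]);
        else
                snprintf(out->user_name, sizeof out->user_name, "unknown");
}

int main(void)
{
        struct inbound in = { 3, "Smith", "Pat" };
        struct outbound out;

        translate(&in, &out);
        printf("%s %s (%s)\n", out.first, out.last, out.user_name);
        return 0;
}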

Data Store

The data coming out of the translation layer will need to be mapped to a set of outbound data objects.  One stream might go one direction, while another set of messages goes to another table.  This operation has to be tied to a database transaction, so that either all the data is applied to the database, or none of it is.  Or you can have an exception log that others have to check and correct later.

Data View

In order to see this data you can map a view onto one or more data objects and see records in the data view.  The data view can represent the underlying data objects in many ways.  It also only has to retrieve what it needs to fill the current set of records in the view, so a dataview for a million element database might only have to load in the first 10 elements.  This data view could even be aliased across to another computer and still only has to cache a little data to represent many records.
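
A rough sketch of that idea, with an invented view structure and a pretend back end standing in for the real query:

#include <stdio.h>

#define WINDOW_ROWS 10

/* The view holds only a small window of rows, not the whole table. */
struct data_view {
        long total_rows;                 /* rows that exist in the store */
        long window_start;               /* first row currently cached   */
        char window[WINDOW_ROWS][64];    /* only the visible rows        */
};

/* Pretend back end: fabricate row 'n' on demand.  A real view would run
   a query with a LIMIT/OFFSET or a cursor instead. */
static void fetch_row(long n, char *buf, size_t len)
{
        snprintf(buf, len, "row %ld", n);
}

/* Load just the rows the view needs to display right now. */
static void scroll_to(struct data_view *v, long first_row)
{
        int i;

        v->window_start = first_row;
        for (i = 0; i < WINDOW_ROWS; i++)
                fetch_row(first_row + i, v->window[i], sizeof v->window[i]);
}

int main(void)
{
        struct data_view v = { 1000000, 0, {{0}} };

        scroll_to(&v, 0);     /* only 10 of the million rows are loaded */
        printf("%s ... %s\n", v.window[0], v.window[WINDOW_ROWS - 1]);
        return 0;
}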




The 7 layers of the OSI Network Model

[Diagram: Application, Presentation, Session, Transport, Network, Data Link, Physical]

In order for two computer systems to transfer data between them, the information has to be wrapped up at each layer and transmitted across the physical wire or over radio waves.  On the receiving side each layer is unwrapped in turn so that the correct process gets the right information into the correct place.
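
A toy illustration of the wrapping idea, using invented headers rather than the real protocol layouts:

#include <stdio.h>
#include <string.h>

/* These headers are made up for illustration; real IP and TCP headers
   have many more fields and strict bit layouts. */
struct toy_transport_hdr { unsigned short src_port, dst_port; };
struct toy_network_hdr   { unsigned int   src_addr, dst_addr; };

int main(void)
{
        const char *payload = "application data";
        unsigned char frame[256];
        size_t off = 0;

        struct toy_network_hdr   net = { 0x0A000001, 0x0A000002 };
        struct toy_transport_hdr tcp = { 49152, 80 };

        /* Each layer wraps the data handed down from the layer above. */
        memcpy(frame + off, &net, sizeof net);          off += sizeof net;
        memcpy(frame + off, &tcp, sizeof tcp);          off += sizeof tcp;
        memcpy(frame + off, payload, strlen(payload));  off += strlen(payload);

        printf("built a %zu byte frame: network + transport + payload\n", off);

        /* The receiver peels the headers off again in the opposite order. */
        return 0;
}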

There is no guarantee that any packet will get across the physical media.  So at level 4 and above you can manage the transport layer to ensure that the receiving computer has gotten all the information you are trying to send, or at least you will know the data was not received.

Most applications will also send acknowledgements across the connection so that the application can be sure that the data has been placed into a database on the other side of the connection.

How to lay out project files for multi-platform/target code bases.

There are often many ways to look at how to build and manage the project files, as well as the output from the project: the intermediary and release files. It is better to keep the intermediaries out of the source tree, to keep it clean and small. You can accidentally check in files that should not be included if you are not careful, so it is best to keep them out of the tree altogether. Then there is the actual output from your work: the libraries, modules and programs you wish to release. If these are in their own folder, organized how they need to be, then you can just zip or tar.bz2 them up easily. You can even add a target level at that point and use a cross compiler to release for many platforms in one go. You may also need a special build step for code that uses the compiler itself during the build.

There are three levels of indirection that apply to the hard-set calculations of where the paths go. Level 1 is: where does the project go when you save out a new one you create? Level 2 is: where is the project file in relation to the root of the project, i.e. where is the source code? Level 3 is: where do my intermediates and builds go in relation to the project root?

/Builds
    /Targets
        /Debug
        /Release
        /Clang
/Intermediates
    /Targets
        /Debug
        /Release
        /Clang
/MainProject-SDK
/LoadableModules/ProjectRoot
    /platform
        /os
            /buildsystem
                /target
                    /ProjectFile

Where target is removed if a certain OS only ever has a single target.

Simple ipv4 dot notation parser I wrote once.

ip.c an ipv4 dot notation parser.
#include <stdio.h>
#include <ctype.h>

/*

This is a routine designed to parse the ipv4 dot notation.

Given a string with a dot notation value between
0.0.0.0 and 255.255.255.255
make sure this is a valid string of that format.

*/

static char ** string;
static char * str;

/* Consume one to three digits and check that the value is 0-255. */
int
parseNum(){

       int value = 0;

       if (*string && isdigit((unsigned char)*string[0])){

               value = value*10+*string[0]-'0';

               str = *string;
               str++;
               string = &str;

               if (*string && isdigit((unsigned char)*string[0])){

                       value = value*10+*string[0]-'0';

                       str = *string;
                       str++;
                       string = &str;

                       if (*string && isdigit((unsigned char)*string[0])){

                               value = value*10+*string[0]-'0';

                               str = *string;
                               str++;
                               string = &str;
                       }
               }
       } else {
               return 0;
       }

       if (value < 256)
               return 1;
       else
               return 0;
}

/* Consume a single '.' separator. */
int
parseDot(){

       if (*string && *string[0] == '.') {

               str = *string;
               str++;
               string = &str;

               return 1;
       } else
               return 0;
}

/* Succeed only if the whole input has been consumed. */
int
endOfString(){

       if (*string && *string[0] == 0)
               return 1;
       else
               return 0;
}

int
parseDotNotationV4 (char * stringtoparse){

       string = &stringtoparse;

       if (parseNum()){
        if (parseDot()){
         if (parseNum()){
          if (parseDot()){
           if (parseNum()){
            if (parseDot()){
             if (parseNum()){
               if (endOfString()){
                /* Good dot notation value */
                return 1;
               }
             }
            }
           }
          }
         }
        }
       }

       /* failure */
       return 0;
}

int
printfParseDotNotationV4(char * string){

       int returnVal = parseDotNotationV4(string);

       printf("%s %d\n", string, returnVal);

       return returnVal;
}

int main (){

       int i,j,k,l;
       char string[1024];

       /* These four are malformed and should all print a 0 result. */
       printfParseDotNotationV4("255255.255.255");
       printfParseDotNotationV4("2");
       printfParseDotNotationV4("255.255.255.255.6");
       printfParseDotNotationV4("255.255.256.255");

       /* Every address generated here is valid, so any failure is a bug. */
       for(i=0; i<256; i=i+4){
        for(j=0; j<256; j=j+4){
         for(k=0; k<256; k=k+4){
          for(l=0; l<256; l=l+4){
               sprintf(string, "%d.%d.%d.%d", i, j, k, l);
               if (!parseDotNotationV4(string))
                       printf("%s failed when it should have been valid.\n", string);
          }
         }
        }
       }

       return 0;
}

Saturday, April 28, 2012

How I used Linux as a Multi Line Fax Server for a Pharmacy Order System

Summer of 1995.

My boss came to me and said "Jim, I need a fax machine in our new out-of-state office that can refax the orders to us here at the main office." 
     I promised that I would look into the possibilities immediately.  I started by calling my main parts vendor but was unable to find anything at the time for either Windows 95 or Windows NT that was a multi-line fax server and could also refax the received faxes to our main office. 
     I had been using a small utility with my home Linux system and had configured efax to work with one modem line.  Sure enough the main controlling program was just a shell script and was extremely well documented with comments.  In fact there were more lines of comments than of code. 
     After messing around that night faxing things back and forth from my home and the office I decided that efax could do the job.  I went to my boss and told him that there were no commercial products that would do what he wanted, but that I had found a solution that would work.  What he wanted to do was a minor problem with UNIX because controlling banks of serial devices is one of the reasons that UNIX was written by AT&T in the first place. 
     He was hesitant to go with a UNIX solution until I assured him that a company called Caldera was working with Novell to provide a UNIX operating system that would run on regular IBM compatible hardware and be interoperable with our network.  He gave me the go ahead.
     Since the system was going to be in another state I decided to go with new, powerful equipment so as to have as few problems as possible. I got the quotes that I needed and ordered a copy of Caldera 1.0,  100 MHz Pentium, 16 MB RAM, 1.2 GB HD, mini tower case, NE2000 combo network card, 4x EIDE CD-ROM player, 8 port ISA Cyclades serial board with octopus cables, 4 USR Sportster 14.4 external modems, and an HP 5 laser printer.  The grand total came up to about $2000.00.
     All the parts came in by the second day and I assembled the hardware in a few minutes.  I then placed the Caldera CD-ROM in the player and the boot disk in the floppy drive and quickly ran through the setup to have an operational OS in less than 10 minutes.  This was the first time that I had ever used Redhat and I haven't used anything else since then for any full installs. 
     The system didn't recognize the Cyclades serial card.  No problem, I was an old hand at reconfiguring and installing the kernel.  It still didn't recognize the Cyclades serial card.  I double-checked everything that I had done and it was correct.  I called Cyclades technical support and they knew exactly what was wrong and directed me to a patch on the internet.  I got the patch and had full instructions.  I applied the patch, set up the /etc/rc.d/rc.serial file to recognize the new ports,  created the new devices in the /dev directory with a supplied script and remade the kernel.
     Even with those problems I had the entire system operational inside of two hours.  It even installed the Samba print and file server and the Apache web server, and they were running with no configuration on my part.
     Now came the tough part.  I copied the /usr/bin/fax script to /usr/bin/faxa.  Then I modified the /usr/bin/faxa script as follows: 
I changed the 
     DEV=modem 
to 
     DEV=ttyS0 
I changed the 
     NAME="" 
to 
     NAME="name of our company" 
I corrected the phone number to report the phone number that it was going to answer at.  I also fixed the log name and received fax names to give each a unique name by prepending an 'a' to these names. 
     Then I made sure that /usr/bin/faxa worked in that it would answer the line, receive the fax, print the fax locally, retransmit the fax to our bank of fax machines in the office and then move the file to a done directory.
     Finally I copied /usr/bin/faxa to /usr/bin/faxb, /usr/bin/faxc and /usr/bin/faxd and corrected each of these new files so that  DEV= ttyS1, ttyS2 and ttyS3 respectively.  They each prepended the log file name and received fax file names with 'b', 'c' and 'd' respectively. 
     I spent the next two days testing the system to ensure that it would work without any hardware or software problems.  The following day I made the 3 hour drive and installed the system.  I had to modify the print portion of the /usr/bin/fax[a|b|c|d] scripts in order to fix a glitch in the information that was being sent to the system from some old fax machines.  The efax machine was rescaling the faxes so that one page faxed would fit on one printed page.   We were getting a postage stamp printed page in the upper left corner with a long thin horizontal line clear across the page.  I modified the faxa, faxb, faxc and faxd scripts to trim the page to 8.5x11 and cleared up the problem. 
     In less than a week I had researched, developed and implemented a multi-line fax server.  I never saw the utilization go below 85% on that box and there was always plenty of free memory.  And the thing that amazed me is that we were only scratching the surface of the power that having a Linux system could provide.
     The system worked with only a few minor user glitches during the next nine months that I worked at that company.  Once the phone lines became messed up.  The users forgot to put paper in the printer upon occasion or didn't change the toner until two days after the printer ran out.  "Oh, you mean it's _supposed_ to have stuff printed on the page?" 
     Even a complete unexpected power down would only cause the system to come back online, reprint and refax what it hadn't moved to the done directory, and then continue on answering the fax lines.  Please note that this is the default behavior of the operating system and efax;  I didn't have to do anything special to get this robustness.
     Next I tell you how I automated putting the received faxes into a web server. 

Lua scripting language in nmap overview.


Nmap, Wireshark, and other programs use Lua as a scripting language to enable users to extend the functionality of the program.  Lua seems to be very popular in security programs, about as popular as Tcl is in the hospital environment.

This is the nmap scripting chapter from the nmap book;
http://nmap.org/book/nse.html

From the link:

 The Nmap Scripting Engine (NSE) is one of Nmap's most powerful and flexible features. It allows users to write (and share) simple scripts to automate a wide variety of networking tasks. Those scripts are then executed in parallel with the speed and efficiency you expect from Nmap. Users can rely on the growing and diverse set of scripts distributed with Nmap, or write their own to meet custom needs.

In the end, Lua excelled in all of our criteria. It is small, distributed under the liberal MIT open source license, has coroutines for efficient parallel script execution, was designed with embeddability in mind, has excellent documentation, and is actively developed by a large and committed community. Lua is now even embedded in other popular open source security tools including the Wireshark sniffer and Snort IDS. 

Here is a link to a pdf that discusses extending both nmap and wireshark:
http://nmap.org/presentations/Sharkfest10/sharkfest10-slides-fyodor.pdf

This is a video presentation of the lua interface in nmap by David Fifield:
The Nmap scripting engine

David had been the maintainer of the scripting engine for two years when that video was made.

And "Mastering The Nmap Scripting Engine (Blackhat 2010)" Video:
http://www.securitytube.net/video/1223


Learning the Lua Language

This is a general tutorial for the language:
http://lua-users.org/wiki/LuaTutorial

From the link:
This tutorial is aimed at all newcomers to the language Lua. We start off with where to find relevant introductory material and then progress to using the language with tutorials in the TutorialDirectory. The style is directed at newcomers to scripting languages, as well as newcomers to Lua. Common uses of Lua are:

  • A configuration language for applications.
  • A standalone scripting language.
  • An embedded language in applications to modify runtime behavior.

There is a link from the above overview to a list of very good, detailed tutorials here:
http://lua-users.org/wiki/TutorialDirectory

You can also do functional programming in Lua using lisp like structures
http://en.wikibooks.org/wiki/Lua_Functional_Programming/Functions

And it is one of the few languages to support the power of closures:
http://www.lua.org/pil/6.1.html

The complete language reference is available from here:
http://www.lua.org/pil/index.html

And that Lua language book is available from the language's main web site:
http://www.lua.org/


Clang static analyzer.

From the website: http://clang-analyzer.llvm.org/

Clang Static Analyzer

The Clang Static Analyzer is a source code analysis tool that finds bugs in C and Objective-C programs.

Currently it can be run either as a standalone tool or within Xcode. The standalone tool is invoked from the command-line, and is intended to be run in tandem with a build of a codebase.

The analyzer is 100% open source and is part of the Clang project. Like the rest of Clang, the analyzer is implemented as a C++ library that can be used by other tools and applications.

What is Static Analysis?

The term "static analysis" is conflated, but here we use it to mean a collection of algorithms and techniques used to analyze source code in order to automatically find bugs. The idea is similar in spirit to compiler warnings (which can be useful for finding coding errors) but to take that idea a step further and find bugs that are traditionally found using run-time debugging techniques such as testing.

Static analysis bug-finding tools have evolved over the last several decades from basic syntactic checkers to those that find deep bugs by reasoning about the semantics of code. The goal of the Clang Static Analyzer is to provide an industrial-quality static analysis framework for analyzing C and Objective-C programs that is freely available, extensible, and has a high quality of implementation.



This means that this is an open source program that is designed to analyze code without running the code.  Clang reads the source code of the program you are working with in order to tell you what problems the rule set finds in the code.

Several open source projects are interested in using Clang to make their code base better.  In my opinion, Clang is only part of an overall solution that includes unit testing of interfaces combined with high level integration testing of the overall released project.  But it is better than no testing at all.

Since the primary target for Clang's analyzer is OS X, the download instructions on that site assume a Mac.  For other platforms, such as my Ubuntu system, the web site led me to this page: http://clang.llvm.org/get_started.html#build

 I followed those directions and got an executable to begin testing against source code after just a few hours.

http://clang-analyzer.llvm.org/scan-build.html

I did have to do a

sudo make install

in both the llvm and  build directories.  It installed everything in

/usr/local/bin
/usr/local/lib

Once everything is working you build your project with

scan-build make

This does the build by replacing the normal compiler with a special compiler that both compiles and analyzes the code.  At the end of the run, scan-build reports where it stored the results so you can review them.


My plan is to get clang all built and begin testing against a few small projects to see what it says, then scale up to complete open source projects.

The page recommends that the build be done in debug mode with assertions enabled to help control the way the program is analyzed.


I had to use the command:

/usr/local/bin/scan-build ./configure
/usr/local/bin/scan-build -v -V make

in order to make everything work correctly with the way I had installed the software.

It took hours to build nmap using this method.

Almost immediately I got this error:

nmap.cc:2044:3: warning: Value stored to 'num_host_exp_groups' is never read
  num_host_exp_groups = 0;

which would not be found by the normal compiler warnings, because the variable was used before this point.

2041   /* Free host expressions */
2042   for(i=0; i < num_host_exp_groups; i++)
2043     free(host_exp_group[i]);
2044   num_host_exp_groups = 0;
2045   free(host_exp_group);

Line 2044 really has no effect, because the value stored there is never read again.
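
As a minimal, made-up example (not from nmap), this is the kind of dead store the analyzer flags:

#include <stdio.h>

/* sum() works correctly, but the final write to 'count' is a dead store:
   the analyzer reports "Value stored to 'count' is never read". */
static int sum(int *values, int count)
{
        int total = 0;
        int i;

        for (i = 0; i < count; i++)
                total += values[i];

        count = 0;      /* dead store: 'count' is never read after this */
        return total;
}

int main(void)
{
        int v[] = { 1, 2, 3 };
        printf("%d\n", sum(v, 3));
        return 0;
}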

This is the output from the error report:

[screenshot of the scan-build HTML report for this warning]

Then it took about 10-20 minutes to check each of the rest of the files.

The second thing it found was:

[screenshot of the scan-build report; this warning was in utils.cc]

It is a little scary that buflen appears to be checked only after the buffer has been written to.

osscan.cc:543:7: warning: Value stored to 'state' is never read
      state = 0;
      ^       ~

osscan2.cc:491:11: warning: Value stored to 'foundgood' is never read
          foundgood = true;
          ^           ~~~~
osscan2.cc:663:11: warning: Value stored to 'foundgood' is never read
          foundgood = true;
          ^           ~~~~
osscan2.cc:2248:7: warning: Value stored to 'seq_stddev' is never read
      seq_stddev = 0;




The report index can even be sorted by a few of its fields.

I cleared out all the .o files instead of just nmap's, and the analyzer found these warnings:

linear.cpp:1092:9: warning: ‘loss_old’ may be used uninitialized in this function
linear.cpp:1090:9: warning: ‘Gnorm1_init’ may be used uninitialized in this function
linear.cpp:1376:9: warning: ‘Gnorm1_init’ may be used uninitialized in this function
linear.cpp:1805:15: warning: Call to 'malloc' has an allocation size of 0 bytes
        int *start = Malloc(int,nr_class);
                     ^~~~~~~~~~~~~~~~~~~~
linear.cpp:21:32: note: expanded from macro 'Malloc'
#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
                               ^      ~~~~~~~~~~~~~~~~
linear.cpp:2000:30: warning: Assigned value is garbage or undefined
                                        model_->w[j*nr_class+i] = w[j];
                                                                ^ ~~~~

This one actually seems more serious, if true.

Evidently nr_class is 0 there, which causes malloc to allocate no bytes to "start".
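
A stripped-down sketch of that failure mode, with invented values rather than the real liblinear code: when the element count is zero the allocation is zero bytes, and anything later code reads out of that array is garbage or out of bounds.

#include <stdio.h>
#include <stdlib.h>

#define Malloc(type,n) (type *)malloc((n)*sizeof(type))

int main(void)
{
        int nr_class = 0;                    /* imagine this arrives as 0  */
        int *start = Malloc(int, nr_class);  /* allocation size of 0 bytes */

        /* Any start[i] accessed here would be touching memory we don't own;
           the analyzer warns about the zero-size allocation up front. */
        printf("allocated %d ints at %p\n", nr_class, (void *)start);

        free(start);
        return 0;
}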


I set the -k option to keep going, and am letting everything run for as long as it needs to run.

About 6 hours later I checked and it had finished up.  The reports are very complete:

[screenshot of one of the full scan-build reports]

This is an example of one of the errors, but the NULL did appear to be checked in the preceding lines.  The documentation talks about using asserts to remove some of these warnings in a debug build.


Overall this looks like an interesting tool to use in addition to other testing tools.

Because each bug report shows the path through the code to the failing branch, the analyzer makes a copy of the source file for each bug report.

The compressed size of the reports was 3.7 MB, and the uncompressed size was 47 MB.

Friday, April 27, 2012

Learning nmap


Original document from 1997 that describes why nmap was created:

http://nmap.org/nmap_doc.html
The Art of Port Scanning - by Fyodor

The above article is interesting because it illustrates some of the philosophical underpinnings of the innocuous network scanning tool and the background of where the tool came from. 

Then the main web site documentation is available here:

http://nmap.org/docs.html

Which goes into detail on how to use the tools and what the options are.

A good tutorial can be found here:

http://www.certforums.com/forums/computer-security/18698-nmap-tutorial-part-one.html


This documentation needs to be a _little_ more fleshed out.  I looked at a bug someone posted with output from the --packet-trace option, which looks similar to this:

SENT (0.1281s) TCP 192.168.0.21:53940 > 74.207.254.18:443 S  ttl=59 id=23310 iplen=44 seq=3166348013 win=1024 <mss 1460>
RCVD (0.1440s) TCP 74.207.254.18:80 > 192.168.0.21:53940  R  ttl=53 id=0 iplen=40     seq=3166348013 win=0

And I could find nowhere an explanation of what the fields were representing.   I know a little about tcp/ip, so I know that ttl stands for time to live, iplen is the length of the ip packet, and sequence is the packet sequence number that is assigned to every packet by the sender of the packet. However I am not positive what is meant by id, win, or what the letters  'R', 'S', and 'A' represent.

If you are going to be doing much development on the software then you need to download the software and begin reading the code. 

Instructions for getting and compiling the code are here: http://nmap.org/book/install.html

For me on an Ubuntu box it is as simple as this:

svn co https://svn.nmap.org/nmap

cd nmap

./configure

make

I actually just downloaded the source tarballs for the current and development versions and did the ./configure; make on them and they compiled just fine.    The tarballs for source and compiled versions are available here: http://nmap.org/download.html  (the source is in the second section down).

If you are not part of the core team then you can't check changes back into the main branch.  The changes you make will just be for your own use.  If you would like you could post diffs to the main dev mailing lists for discussion and inclusion to the main code base. This nmap dev mailing list archive is here: http://seclists.org/nmap-dev/  And you can subscribe to the list here: 

And even if you were part of the core team it would probably be bad to check things in directly to the main branch without extensive testing and having things reviewed by others.

I am still in the process of creating a branch to work in within the main repository.   If I remember correctly, creating a branch in svn is the same as making a low-cost copy internally to a new location inside the svn database, which should be a command similar to this:

http://svnbook.red-bean.com/en/1.0/re07.html

svn copy SRC DST

Still working out exactly what the SRC and DST parts will be.  I'm thinking it will be this:


svn mkdir https://svn.nmap.org/nmap-exp/username

svn copy https://svn.nmap.org/nmap https://svn.nmap.org/nmap-exp/username/main -m "branching for username gsoc work"

and then I need to do a check out this way:

svn co https://svn.nmap.org/nmap-exp/username/main
 

I had a little scare when I created the svn directory: it defaulted to the user name on my system.  I hit return and it then asked for a username and password, which it seems to have cached for that host, which is nice.  I also had to install autoconf to get the compile to work.

A little page of basic svn commands is helpful: http://www.linuxfromscratch.org/blfs/edguide/chapter03.html

I am a little rusty on my source code patches, so I am reading this to get re-familiarized: http://jungels.net/articles/diff-patch-ten-minutes.html

and of course man diff and man patch are your friends.

Frequently Asked Questions About Ports In The Internet Protocol.




I wrote this in the 90's to help a few team mates understand how the Internet worked.  It is very basic.

Everything you ever wanted to know about ports but were afraid to ask.

TCP is a network protocol, built on top of IP, that guarantees reliable delivery of data across the network. 

In order for two processes to communicate using TCP/IP they must perform the following steps:

The server creates a tcp socket.
The server binds a local address and port number to that socket.
The server then listens for a connection on the socket

The client creates a tcp socket.
The client then connects to a server on a port.  (The client port is chosen at random.)

The server accepts the connection and the socket on the client and server are now called 'fully specified sockets'
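
A bare-bones sketch of the server side of those steps in C, with error handling left out and the port number made up:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        /* The server creates a tcp socket. */
        int server_fd = socket(AF_INET, SOCK_STREAM, 0);

        /* The server binds a local address and port number to that socket. */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(32123);           /* example port */
        bind(server_fd, (struct sockaddr *)&addr, sizeof addr);

        /* The server then listens for a connection on the socket. */
        listen(server_fd, 5);

        /* The server accepts the connection; the pair of sockets is now
           fully specified: server address/port, protocol, client
           address/port (the client port was chosen at random). */
        struct sockaddr_in client;
        socklen_t len = sizeof client;
        int conn_fd = accept(server_fd, (struct sockaddr *)&client, &len);

        printf("client connected from port %d\n", ntohs(client.sin_port));

        close(conn_fd);
        close(server_fd);
        return 0;
}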

---

Fully specified sockets are those that form a set of five values:

Server       Port   Protocol  Client       Port
192.168.0.1  32123    tcp     192.168.5.6  65030

These values form a unique combination that exactly matches this and only this connection on the entire Internet.

Known ports are considered to be those ports where a server can accept multiple connections from many clients.  Telnet has a known port of 23.  Any server which accepts multiple connections on a single port is considered to have a known port, because this is the port that is known by all its clients.

Two telnet sessions from a client to the server would look like this:

Server       Port   Protocol  Client       Port
192.168.0.1  23       tcp     192.168.5.6  41049
192.168.0.1  23       tcp     192.168.5.6  41071

Note that even though the telnet server has accepted two connections on port 23, each client connection was randomly given a different port number, and this slight difference is all that is needed to uniquely differentiate the two connections from each other.

This is not to be confused with the /etc/services file, which reflects the ports assigned by the Internet Assigned Numbers Authority.   Many of these ports are known ports, but that is because they accept multiple connections on a single port, not because they are in the services file.

Cloverleaf doesn't use a known port to accept connections.  Each process that wants to connect to a tcp/ip port on Cloverleaf will get its own port.  Adding to the complexity is the fact that Cloverleaf has production, test and training environments.  The number of interfaces that we have will only grow with time.  In order to manage this complexity we need the flexibility to assign port numbers on the Cloverleaf servers in a logical manner that is maintainable and ensures that we can quickly and easily troubleshoot any networking problems.

Ports are only bound to a particular socket on the server.  So it is perfectly acceptable for a port to be used for one purpose on an application server and for an entirely different purpose on another server, such as the interface server.  In fact, restricting the use of a port on a machine for a service that the machine will never provide is counterproductive.  In just a few years we would run out of blocks of numbers that we are allowed to use.  Such a network-wide restriction on ports would not be enforceable and would not be maintainable.

We are perfectly willing to fully publish our entire port number specification as a network-reachable document on the Novell server.

Thursday, April 26, 2012

Evolution Report


There are thousands of mythologies that explain how life was created and got onto the earth. Just about every group of people that ever lived made up some explanation about the creation of life. Because their memory of history only extended back to the memory of the oldest living person in the tribe, they had no perspective on how the Earth had changed on vastly longer time scales. Memory can only last as long as someone remembers the story. And as anyone knows that has ever played a game of telephone, the story grows in the telling. The big points stay similar, but the details are filled in.

These creation stories were comforting for the people who shared them. But as the dark ages ended, Europeans began to chart the course of the stars in the heavens, to systematically dig into the earth, to explore the world on ships, and most importantly of all, to methodically record the information they were finding and share it with others. The information they were finding didn't match the creation myths up to that point. Instead of the Earth being thousands of years old, life was millions of years old, and then billions of years old. Instead of the Earth being the center of the universe, the Earth circled around the Sun, and it wasn't even the biggest planet.

This new evidence completely disproved all the ancient myths. The old ways of thinking were so entrenched that anyone who disagreed with them was stoned or burned to death. A new more systematic way of asking questions about everything was developed called “The Scientific Method.” This allowed people to ask questions about the universe and experiment to see if the question was true or not. Instead of taking things on faith this new method required people to repeat the experiment before they accepted the theory as true. And new theories could be presented to refine or replace older theories, without anyone being burned to death.

Many theories were proposed that explained bits and pieces of the history of life. One theory took precedence over all the rest. A man named Darwin took a voyage on a ship and studied a group of islands with an amazing diversity of life. He noticed many patterns, and over the course of years he worked on a book. Another man named Wallace independently came up with nearly identical theories to Darwin. So Darwin was forced to abandon work on his planned masterwork and quickly publish a shorter book called “On the Origin of Species.”

It took decades of experiments, combining the study of fossils with the study of living species, to show this controversial work to be correct. Over the years since then the theory has been refined, but never disproven.

The way evolution works is that living populations have environmental pressures. Their offspring will either be fit to survive in that environment or not. Because there are limited resources the offspring that can get the most resources and have the most offspring will have the most descendants, crowding out the less fit. This pressure from other species and your own species is a form of environmental pressure as well.

It is important to realize that individuals in a population do not evolve. The offspring of a pairing can have a lot of different combinations of genes, which make them more or less fit. Sometimes there is an error when the DNA is duplicated. Most of the time this causes the offspring to be completely unfit. But the mutation can result in an individual that is more fit for the environment it finds itself in.

The pressures of all the species in an area all evolving together is called macro-evolution. When one species becomes more fit for the environment it puts more pressure on the other species competing for the same resources.

A new species forms when individuals from the same species stop breeding together, either because of behavior changes, or because of geographic separation. At some point in the future the small changes accumulate or the number of genes changes and makes it impossible for the individuals to breed together anymore.

We talked about theories about how life evolves, but where did life originally come from? Nobody knows for sure, and it is impossible to prove one way or another. The most popular theory is that the early atmosphere rained out organic compounds into the early seas and ponds. Then primitive replicating molecules began duplicating. Eventually a cell wall formed by mistake, and this became the first cell, which out-competed all the existing self-replicating molecules.

This cell replicated itself and mutations caused it to fit into every available ecological niche. Cells began to invade other cells in parasitic relationships. Eventually these relationships became symbiotic and beneficial to both cells. This happened several times: once with mitochondria, a second time with the nucleus replacing the previous arrangement, and a third time with chloroplasts. This increase in complexity allowed the development of multi-cellular life. This multi-cellular life got washed up onto land during high tide, became adapted for land bit by bit, and then spread from the shore inland.

If you look at all the stellar systems in our galaxy, and all the galaxies in the sky, it is almost a certainty that life evolved on more than just one planet. Even if you say that only 1 in a million suns has a planet in the right place, and 1 in a million of those planets develop life, that still leaves millions of places that life could develop. And life may just be completely adaptable so that it can form in conditions well beyond what we expect, including in interstellar space far from any sun. We have found life on Earth in ice fields, miles deep in the earth, miles deep under the ocean around volcanic vents, in hot springs, and even in nuclear reactors. It may be possible that the first life on Earth came from the comets that rained down onto the forming planet.

Describe how you have reacted to a failure.

30 minutes to write this for the class final.

The difference between the wise man and the fool is not that the wise man succeeds at everything on the first try.  The difference is that the wise man only sees someone fail that particular way one time.

As a computer programmer I was often called upon to code solutions to problems that nobody at that company had dealt with before. Failure had to be factored into every project, and a solution to each failure found before the scheduled deadline. You couldn’t just look up the answer and implement it. You first had to figure out exactly what the question was, because often the person asking for something wasn’t very specific about what they wanted.  This was called a functional specification and is a negotiation between everything that someone could possibly want, and what was technically feasible to implement in the limited time with the limited resources available.

Only once you knew exactly what someone was asking for could you write up a technical specification about how you would implement a solution given the limitations of your computing environment.  After 3-4 weeks you would have a good idea of the question and a solution to the problem and it was time to attempt to implement the solution.

Often this is where programmers like me would come into the project.  We would be handed the functional and technical specifications and told we had 4 weeks to implement the code.  We were told that this would be our number one priority, along with the dozen other number one priorities we also had at the same time.   

Often I would bring up the project at team meetings and get input from everyone there about how they would like the project to be implemented.  I would give my ideas on their projects as well.  Once I had an idea of how to implement the code I would break the code up into interfaces and code each piece.  If more than one programmer was working on the project we would each work behind one of these interfaces and if we did our piece correctly the code would just match up and run in a couple of weeks.

Now, we programmers expect to write the code completely wrong, so at each interface we would first spend a couple of hours writing the interface and a test harness to test the code.  The more time we put into this, the more robust the interface would be later.

Then we would implement the code behind the interface, in a fill in the blank way.  It was a cycle.  Write code, test it, see how complete and functional it is, repeat.  At this point it is common to run into problems, figure out a solution, and have to go back and amend the technical and functional specifications.  Sometimes the project is made longer to fix the problem.  Sometimes the project is broken into a phase I and II.  Sometimes everything goes right and you make the deadline with functional code that actually works.

You then go into testing.  Quality assurance gets the code and the specifications and tests everything.  We developers would get bug reports and fix the problems, rolling out new releases to test.

At the end of the project we always won, despite having numerous failures along the way.  Any system that doesn’t implement feedback to correct failure is a broken system.  Failure is part and parcel of life.  The only failure that counts is the failure that isn’t corrected.

Wednesday, April 25, 2012

Creating Linux USB startup drives

One of the things I had to do with the netbook when I installed Easy Peasy Ubuntu onto it was to create a startup disk that ran from a USB port.  Here is a tutorial on how to create a boot thumb drive from nearly any Linux install disk.

http://www.howtoforge.com/creating-usb-startup-disks-from-various-linux-distributions-with-unetbootin

How to Forge -  Creating USB startup disks from various linux distributions with unetbootin

Mini travel trailer for van.

I am building a mini travel trailer to pull behind my van.

The reason I am doing this is to be able to camp and travel in comfort and style.  If I get a short term job some place I want to be able to come in and work right away.  I may also homestead a small plot of land with the trailer.

I wanted the trailer to be just wide enough so that a full sized futon would fit in the front of the van.  It should have enough room to store clothing for 2 people, have a shower, a toilet and a kitchen.  It should provide enough power to heat the water, heat the interior, power a fridge, LED interior lights, a small TV and a laptop computer.  It should also be able to charge up a battery to run a drill or a cutter.


I had never built anything of this scale before.  So everything was new to me.

At first I was aiming to build a 2000 pound trailer, but the cost for the 3500 pound parts was just a little more, and even if my trailer only weighed 2000 pounds that would add a large safety margin into the equation.

I got 3 twelve foot and three eight foot sections of 2x2x1/4 angle iron.  This was around $150.  I only had three small pieces of waste metal when I was finished.

I laid out two of the 12 foot pieces as side rails and two of the eight foot pieces as end rails.  I got 4 buckets all the same size to work on.  The side rails went on the buckets.  I clamped the end rails under the side rails.  The axle was lifted onto the side rails and the width of the end rails adjusted to 61 inches on the outside, so that the spring mounts were just inside the frame.

Everything was tack welded and then the frame was squared by measuring both diagonal directions.  The frame was welded together by my step dad.

The front spring shackles were put on just past the mid point to lean some of the weight of the trailer onto the hitch.  The rear spring shackle was put back 25 inches from the front shackle so that the swing arms just broke past the halfway point.  The frame and shackles were drilled for grade 5 3/8 inch bolts.

The front A frame was welded to the receiver.  It was positioned equally on each side so that it made two triangles on the front corner.

The frame was primed and painted with Rust-Oleum spray paint.

Top Deck.

My first plan was to build the deck on top of 2x4 framing inside the angle iron, but this puts the deck up 3 1/2 inches higher than it needs to be.

So the plan now is to build inside the angle iron frame and go under it with 2x4's in 3 places to bring the weight back onto the trailer frame.

Database Design - homework


  1. The four steps of database design are discovery phase, plan the tables, normalize, and test the database using sample data.
  1. Data duplication is entering two or more records in the database about the same entity with a slight variation. You have to delete or merge duplicate records by hand. Data redundancy is the same data in the database repeatedly. You remove this redundancy by normalizing the database. The reason you want to try to only store data a minimum number of times is to assist in keeping the data up to date and consistent.
  1. Scope creep is when new features and requirements are added to a project after the project has begun.
     
  2. When you are assigning a data type and size to a field you have to know the storage requirements for that field. You need to know the size and range of data will be stored in the field.
  1. Text fields store 255 characters and are used for a short collection of text, codes such as phone numbers, email addresses, and zip codes. Memo fields store up to 1GB of data and display up to 64,000 characters and are used for formatted text, and to accumulate logs in append mode.

  2. Currency fields are number fields with a currency sign in front of them. Currency fields also default to 2 decimal places.

  3. You should never store calculated fields in your database. For instance, if you know someone is 32 years old, you don't store their age, you store their date of birth. That way you can generate a report now or 5 years from now that includes the age of that person and it will be correct in both cases without having to go in to the database annually and recalculate that persons age.

  4. When you have a field in a form that has a small set of non changing possible values you create a drop down box to allow people to quickly set the value in that field. This prevents things like someone entering a state name that doesn't exist. You could have drop downs for ice cream at an ice cream shop or for picking the name of someone to assign a bug to in a bug tracking database.

  5. What are the three general rules about naming objects in a database? Names have to be less than 64 characters, but should be much shorter than that. Names cannot include a period, exclamation point, accent grave, or brackets. Names cannot include spaces.

  6. The four main database objects are Tables, Queries, Forms, and Reports. Tables hold data, queries ask questions about the data, forms allow you to enter and display data and to act as a switchboard to your program, and reports allow you to retrieve and format data from the database in an attractive way.

  7. Select, Action and Crosstab queries are the three types. Select queries ask questions about the data in the tables in the database and display a dataset. Action queries change the data in the database. Crosstab queries calculate data from a table and display it in a dataset.

  8. Redundant data entry, Error prone and difficult to update when information changes. If the information does change and you don't change all the occurrences then you have introduced data inconsistencies to the data.

  9. A primary key is a field or set of fields that uniquely identifies a record. If this key is included in another table it is called a foreign key. By linking tables together this way you create a relation between the tables that links the records in one table to the records in another table.

  10. A is one to many. Any one customer will have one or more orders. B is one to one. States only have a single capital. C is many to many. There are many college students in a class and each of the students take multiple classes.

  11. Entity integrity is enforced by using a primary key, which requires that there be only one record in the table with that key and that the key is not null. Referential integrity is when the values in the foreign keys of a table match the primary key values in the table they refer to. Yes, you should enforce referential integrity in a database so that you don't get records with values that don't relate to the other tables correctly.

  12. Deletion anomalies occur when you delete a record and the cascading delete removes data from related tables because that was the records last matching record set. Update anomalies occur when there is duplicate data in the database and an update only changes some of the data. Insertion anomalies occur when you can't insert a record into a table unless you enter data into another table first.

  13. Normalizing databases results in a smaller database, reduces the occurrence of inconsistencies, and reduces the occurrence of all three types of anomalies. The first normal form removes repeating groups. The second normal form removes partial functional dependencies. The third normal form removes transitive dependencies, so that every field in a record depends on the key and nothing but the key.

  14. A determinant is a field, or set of fields whose value determines the value in another field. A partial determinant is where the value depends on a subset of a key. A transitive dependency is related to another field that has a partial or full determinant dependency.

Tuesday, April 24, 2012

5‐Paragraph Essay





This document gives a general overview of the 5 paragraph essay. It is as brief as possible and should be used in conjunction with several specific good and bad examples in order to teach how to use this tool to improve writing techniques.

Outline

  1. Introduction (with thesis statement)
  2. Body Paragraph #1 (with topic sentence)
  3. Body Paragraph #2 (with topic sentence)
  4. Body Paragraph #3 (with topic sentence)
  5. Conclusion



Introduction

  • Is the first paragraph of your essay. 
  • Introduces your topic to your reader. 
  • Tells the reader exactly what the rest of the essay is about. 
  • Concludes with a clear, strong thesis statement. 

Body Paragraph #1

  • Open with first topic sentence.
  • Corresponds to the first point in the essay map. 

Body Paragraph #2

  • Open with second topic sentence.
  • Corresponds to the second point in the essay map. 

Body Paragraph #3

  • Open with third topic sentence.
  • Corresponds to the third point in the essay map. 

Conclusion

  •  In your conclusion, reflect on the main points you made in the paper. 
  •  Highlight the most important information. 
  •  Do not introduce new points.
  •  Do not simply re‐state your thesis statement and/or the main points from the essay. 
  •  Leave your reader with something interesting to think about.  


What is a Thesis Statement?

  • A single, clear, concise sentence. 
  • The final sentence of the introduction. 
  • Contains the topic of your essay, and your opinion on the topic. 
  • It often includes an “essay map” that lists the three main points you plan to make in the paper.

What is a Topic Sentence?

  • A topic sentence is a single sentence at the beginning of a paragraph that tells your reader what the paragraph is going to be about.
  • A topic sentence is similar to the thesis statement, but it works only on the paragraph‐level, whereas the thesis statement covers the whole essay. 
  • Each topic sentence should directly reflect one of the points made in the thesis statement.

Body Paragraph 

  • Will focus on a single idea, reason, or example that supports your thesis. 
  • Discuss only one point per body paragraph.
  • Begins with a clear topic sentence (a mini thesis that states the main idea of the paragraph)
  • Has as much discussion or explanation as is necessary to explain the point. 
  • Use details and specific examples to make your ideas clear and convincing
  • Five lines minimum per paragraph.

Transitions

  • Connect your paragraphs to one another, especially the main body ones. 
  • Do not jump from one idea to the next.
  • You need a transition between each paragraph. 
  • Use the end of one paragraph and/or the beginning of the next to show the relationship between the two ideas.
Think about words and phrases that compare and contrast.
  • Does first tell us a pro and the second a con? ("on the other hand . . .")
  • Does second tell us something of greater significance? ("more importantly . . .")
  • An earlier historical example? ("even before [topic of paragraph 1], [topic of paragraph 2]")
  • A different kind of consideration? (money versus time).

General rules - mostly just for college writing, but may apply to more formal business writing.


  • Do not use contractions.
  • Do not use first-person pronouns such as "I" "me" "my."
  • Do not use second-person pronouns such as "you" "your" "yours."
  • Do not engage in personal stories, meaning stories of your own life experiences, or the experiences of friends, family, and so on.
  • Do not begin sentences with conjunctions: but, and, or, nor, for, so, yet.
  • Write sentences in the form of statements.
  • Avoid any form of direct address to the reader, such as "think about the fact that . . ."
  • Avoid too casual of a prose style, such as sentences that begin with words like "well, sure, now, yes, no."
  • Do not use phrases such as, "a lot," "lots" or "lots of," which can usually be replaced with one of the following words: many, most, much, often.
  • Do not use exclamation points, for they are almost always unnecessary!!!!!!!!!!!!!!!!
  • Periods and commas should be inside of quotation marks, but other forms of punctuation go outside of quotation marks.
  • Do not use the word "okay" when words like "acceptable" could be used instead.
  • Do not use the word "nowdays," "nowadays," or any slight variation thereof.

Various commands and scripts



    I am mainly just posting this for my own use.  A good place to keep short scripts or commands that I figure out that are tricky to work out the first time.  As time goes on this will grow.

    Please add your own command line shortcuts and commands below in the comments.

    To scan a page:

    scanimage -d artec_eplus48u:libusb:001:008  -y 350 -x 216 --mode gray --resolution 300  > test.pnm

    --

    Create a movie from images:

    mencoder "mf://*.JPG" -mf fps=10 -o test.avi -ovc lavc -lavcopts vcodec=msmpeg4v2:vbitrate=800  -vf crop=2000:1500:400:400,scale=720:-2

    Transcode a movie from my Xacti to a smaller avi format that I can play:

    mencoder ./SANY0121.MP4 -oac mp3lame -ovc lavc -o RollingDice.avi  -vf scale=640:400

    To scale to exactly half of 1080p do this scale:
    mencoder ./SANY0002.MP4 -oac mp3lame -ovc lavc -o L.avi  -vf scale=960:540

    The above give very small files, but they have moving blocks in areas with low contrast.  To give a very detailed picture at the cost of about 1.7 times the file size do this:

    mencoder ./SANY0121.MP4 -oac mp3lame -ovc lavc -o TumblinDiceN.avi  -vf scale=960:540 nr=2000 -lavcopts vcodec=mpeg4:mbd=1:vbitrate=2000

    --

    To download a bunch of similar files from a website with only the number differing:

    for i in `seq 0 9`; do for j in `seq 1 3`; do wget -c http://dl.fullcirclemagazine.org/issue$j$i\_en.pdf; done; done

    --

    How to reduce a 25,000x25,000 tif file down to a manageable size.

    tifftopnm wac_nearside.tif | pnmscale .25 | pnmtojpeg > wac_nearside.jpg
    tifftopnm wac_nearside.tif | pnmscale .5 | pnmtojpeg --quality=97 > wac_nearside.jpg
    tifftopnm wac_nearside.tif | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside.jpg
    tifftopnm wac_nearside.tif | pnmsmooth -size 10 10 | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside2.jpg
    tifftopnm wac_nearside.tif | pnmsmooth -size 10x10 | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside2.jpg
    tifftopnm wac_nearside.tif | pnmsmooth -size 5 5 | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside2.jpg
    tifftopnm wac_nearside.tif | pnmsmooth -size 9 9 | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside2.jpg
    jpegtopnm wac_nearside2.jpg | pnmsmooth -size 9 9 | pnmscale .25 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside3.jpg
    jpegtopnm wac_nearside3.jpg | pnmsmooth -size 9 9 | pnmscale .25 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside4.jpg
    jpegtopnm wac_nearside3.jpg | pnmsmooth -size 9 9 | pnmscale .5 | pnmtojpeg --quality=97 --smooth=10 > wac_nearside4.jpg
    jpegtopnm wac_nearside3.jpg | pnmscale .5 | pnmtojpeg --quality=97 --smooth=2 > wac_nearside4.jpg
    jpegtopnm wac_nearside4.jpg | pnmscale .5 | pnmtojpeg --quality=97 --smooth=2 > wac_nearside5.jpg


    --

    How to clean out Firefox so it runs faster:


     cd ~/.mozilla/firefox/_your_profile_.default
      for i in *.sqlite; do echo $i; echo "VACUUM;" | sqlite3 $i; done


    --

    Split a book into chapters, then convert the chapters to html files.

    csplit -f Chapter_ -ks Schmitz\,\ James\ H.\ -\ The_Witches_of_Karres.txt '/^ Chapter/' {50}
    rename 's/Chapter_([0-9][0-9])/Chapter_\1.txt/' *
    find . -name "Chapter_*.txt" -exec ~/bin/txt2html.sh {} \;



    --

    Removing extra file url lines in text files, in place.

    perl -pi.orig -e 's/^file:\/\/\/.*+//e' s
    perl -pi.orig -e 's/^file:\/\/\/.*+//e' pb
    perl -pi.orig -e 's/^[ ]*+file:\/\/\/.*+//e' pb
    perl -pi.orig -e 's/^[ ]{0,}file:\/\/\/.*+//e' pb


    --

    Sleep better: change the screen to a reddish tint at dusk and back to normal at dawn.

    ./xflux -z 45715 -k 3000

    http://stereopsis.com/flux/

    --

    How to back up disks/folders:

    rsync -a -v --progress  /media/Action/Action/* /media/SciFi/Action/
    rsync -a --progress /media/truecrypt4/* /media/truecrypt1/

    I set up rsnapshot to back up /etc/, /home/, /opt/, and /usr/local/ to /media/Comedy/rsnapshot.

    I configured it not to create the snapshot root, so it won't run a backup unless the USB drive (Comedy) is actually mounted.

    I had to add exclusions for a couple of paths: .gvfs because of a strange error, and Downloads because it is just too huge and ever-changing.


    exclude    /home/*/Downloads/
    exclude    /home/*/.gvfs
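
    A minimal rsnapshot.conf sketch along those lines (the retention intervals here are examples rather than my exact settings, and rsnapshot requires tabs between fields):

    # excerpt from /etc/rsnapshot.conf -- fields are tab-separated
    snapshot_root   /media/Comedy/rsnapshot/
    no_create_root  1

    interval        daily   7
    interval        weekly  4

    exclude         /home/*/Downloads/
    exclude         /home/*/.gvfs

    backup          /etc/           localhost/
    backup          /home/          localhost/
    backup          /opt/           localhost/
    backup          /usr/local/     localhost/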


    http://rsnapshot.org/howto/1.2/rsnapshot-HOWTO.en.html
    http://rsnapshot.org/

    --

    Compiling an SDL program:

    gcc c4.c -I /usr/include/SDL -lSDL

    If you get a strange error about a symbol not found, then the module is probably a C++ program, and you compile it with the C++ driver instead so the C++ runtime gets linked in:

    g++ c4.c -I /usr/include/SDL -lSDL



    -----

    Convert txt file into an epub:

     ebook-convert "./Hitchhikers/Adams, Douglas - Hitchhiker's Trilogy 3 - Life, the Universe, and Everything.txt" "./Hitchhikers/Adams, Douglas - Hitchhiker's Trilogy 3 - Life, the Universe, and Everything.epub" --no-default-epub-cover --pretty-print --asciiize
    find . -name "*.txt" -exec ~/bin/txt2epub.sh "{}" \;

    txt2epub.sh:

    #!/bin/bash
    # Convert one .txt file into an .epub next to the source file.

    echo
    echo "$1"
    myfile="$1"
    path=`dirname "$1"`
    name=`basename "$1" .txt`
    echo "path $path"
    echo "name $name"
    echo ebook-convert \"$path/$name.txt\" \"`pwd`/$path/$name.epub\" --no-default-epub-cover --pretty-print --asciiize
    ebook-convert "$path/$name.txt" "`pwd`/$path/$name.epub" --no-default-epub-cover --pretty-print --asciiize




    Transcode a video file:

    mencoder svd-tat720p.mkv -ni -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200:keyint=120:vqmin=4:vqmax=24:autoaspect    -oac mp3lame -vf scale=720:310,denoise3d -ffourcc DX50 -o svd-tat720p2.avi


    Convert a pdf into image files:

    gs -SDEVICE=jpeg -r100x100 -sPAPERSIZE=letter -sOutputFile=x/x%04d.jpg -dNOPAUSE -dBATCH -- "filename.pdf"
    cd x/
    mogrify -shave 70x10 *
    mogrify -resize 70% *
    cd ../filename.pdf_images/
    mogrify -transpose *
    mogrify -rotate -90 *


    Extract image files from a pdf:

    #!/bin/bash

    # Extracts image files from PDF files
    # For more information see www.boekhoff.info

    mkdir ./"$1_images"
    pdfimages -j "$1" ./"$1_images"/"$1"

    exit 0


    Write a text file out to a wav file:

    espeak -v f2 -s 100 -f filename.txt -w filename.wav


    --

    Convert a wav file to an mp3

    ffmpeg -i  file.wav -acodec libmp3lame  -ab 96k file.mp3


    --




    Mass renaming files on disk:

    rename 's/\(2\)\.zip/.cbz/' *.zip
    rename 's/\.zip/.cbz/' *.zip
    rename 's/--#//' *.zip
    rename 's/--\#//' *.zip
    rename 's/* --\#*//' *.zip
    rename 's/The Walking Dead --#//' *.zip
    rename 's/The Walking Dead --#/The Walking Dead /' *.cbz

    How to rename author names from
    First Last
    First Middle Last
    First M. Last
    F. M. Last

    To

    Last, First
    Last, First Middle
    Last, First M.
    Last, F. M.

    rename 's/([A-Za-z]+) ([A-Za-z]+)/$2, $1/'  *
    rename 's/([A-Za-z.]+) ([A-Za-z.]+) ([A-Za-z]+)/$3, $1 $2/'  *

    Comix couldn't open .cbr files:


    Converting a file from daa to iso:

    daa2iso Programming\ Books\ Collection\(Total\ 19\ Books\).daa Programming\ Books\ Collection\(Total\ 19\ Books\).iso



    --


    To ignore robots.txt with wget do this:

    wget -e robots=off --wait 1 -rc http://www.zx81.nl/files.htm

    Regret


    by James M. Rogers.


    We live life.

    We make choices.

    We hurt people.

    We feel pain.

    Life goes on.

    And on.

    And on.

    We meet people.

    We fall in love.

    Love ends.

    Life goes on.

    And on.

    And on.

    We miss what we no longer have.

    We miss what we think we should have.

    We fear that even if we find love.

    That we will just be hurt again.

    Life goes on.

    And on.

    And on.

    Live in the moment.

    Live in the now.

    Seek to make those around you happy.

    Surround yourself with those who make you happy.

    Be true to yourself.

    Life goes on.

    And on.

    And on.

    The application programming framework

    I wrote this up a few years ago, thinking about how to implement a way to program complex systems.  Instead of writing programs the way previous programming languages do, you create assembly lines of objects through which the data flows as messages.  This turns everything around.  Multi-threading, multi-processor, and cloud processing should be able to be introduced at the system level, without adding any complexity to the "programs" that people have already written.

    --

    The application programming framework is a general framework to manage the memory and messaging between objects.

    Everything in the framework is an object.  Everything from a simple number to the most complex protocol is an object: TCP is an object, a file is an object.

    Let us say that one wanted to create a web server.

    One would create a work unit object to contain everything.  A view, if you will.

    Inside the view you create a TCP object, a stream-to-http object, and an http processor object.  Everything would be configured at this point.  Then the inputs and outputs of the objects are connected together.

    The starting and stopping actions of the work unit are defined.

    Finally the objects would be started in reverse order of data flow: first the http processor, then the stream-to-http object, and finally the tcp port.

    After everything is started, the tcp port accepts connections.  Each connection generates a connect message with a session id, which is sent to the stream-to-http object so it can set up a data structure in expectation of more data to come.

    Any data that comes in on a session is sent to the stream-to-http object as a stream object with the session id embedded in it.  The http data object it generates is associated with the session id as well, and is sent on to the http processor, which queues up all the http requests.

    When the work unit is deactivated, the tcp object stops listening for new connections, the queue is flushed and all current work is finished; then the http processor is disabled, the stream-to-http object is deactivated, and finally the tcp object is deactivated.  After a timeout, everything is shut down even if some work isn't finished.

    The key here is that it should take only a very simple script to create, configure, and connect these objects together and then to control their startup and shutdown behavior; a sketch of what that might look like follows.  With the proper base objects it should be easy to create a secure web service or an rss feed.
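
    Below is a minimal sketch of that wiring script, in Python.  The class names (TcpPort, StreamToHttp, HttpProcessor) are hypothetical stand-ins and the network side is faked; this only shows the create/configure/connect/start shape, not a real implementation.

    # Hypothetical sketch -- these classes stand in for framework objects.

    class Obj:
        """Root object: everything has outputs, start, and stop."""
        def __init__(self, name):
            self.name = name
            self.outputs = []              # downstream objects

        def connect(self, downstream):
            self.outputs.append(downstream)

        def send(self, message):
            for out in self.outputs:
                out.receive(message)

        def receive(self, message):
            pass

        def start(self):
            print(f"{self.name} started")

        def stop(self):
            print(f"{self.name} stopped")

    class TcpPort(Obj):
        def simulate_connection(self, session_id, data):
            # A real tcp object would generate these messages from the network.
            self.send({"session": session_id, "type": "connect"})
            self.send({"session": session_id, "type": "data", "bytes": data})

    class StreamToHttp(Obj):
        def __init__(self, name):
            super().__init__(name)
            self.sessions = {}

        def receive(self, message):
            sid = message["session"]
            if message["type"] == "connect":
                self.sessions[sid] = b""   # expect more data on this session
            elif message["type"] == "data":
                self.sessions[sid] += message["bytes"]
                # Pretend each chunk is one complete http request.
                self.send({"session": sid, "request": self.sessions[sid].decode()})

    class HttpProcessor(Obj):
        def __init__(self, name):
            super().__init__(name)
            self.queue = []

        def receive(self, message):
            self.queue.append(message)     # queue up all the http requests
            print(f"{self.name}: queued request from session {message['session']}")

    # The work unit script: create, configure, connect, then start back-to-front.
    tcp = TcpPort("tcp")
    s2h = StreamToHttp("stream2http")
    http = HttpProcessor("http")

    tcp.connect(s2h)
    s2h.connect(http)

    for obj in (http, s2h, tcp):
        obj.start()

    tcp.simulate_connection(1, b"GET / HTTP/1.0\r\n\r\n")

    for obj in (tcp, s2h, http):           # shut down front-to-back
        obj.stop()

    Because every object shares the same connect/receive shape, swapping a file-reading object in for the tcp object, to test processing from a file, would just mean constructing a different source and connecting it to the same downstream objects.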

    -- -- --

    There are multiple levels. 

    The memory.

    The objects.


    -- -- --


    The first Object must be hand crafted, because there are no facilities to create objects until the object class exists.

    Once the Object class is present, you can subclass from it to create new classes.

    The Object class is the root class.

    /grokthink
        /stream
            /tcp
            /file
            /command
        /packet
            /udp
        /transform
            /stream2http
            /http2stream
            /number2string
            /string2number
            /stream2hl6
            /hl62stream
            /hl6tohl7
            /hl7tohl6
        /data
            /string
            /number
            /xml
            /hl7
            /hl6
        /process
            /http


    I want to be able to easily swap out any subclass of the same type, so that you can, for example, swap a file object in for a tcp connection, making it easy to test processing from a file.


    -- -- --

    Versioning

    If you declare a path only to this point, you will get the highest version available.

    Let us say that we have tcp.

    Inside that object's path is version info.

    /grokthink/stream/tcp/1/0/0
    /grokthink/stream/tcp/1/1/0
    /grokthink/stream/tcp/1/1/1
    /grokthink/stream/tcp/1/5/0
    /grokthink/stream/tcp/1/6/3
    /grokthink/stream/tcp/2/0/0
    /grokthink/stream/tcp/2/1/0
    /grokthink/stream/tcp/2/2/0
    /grokthink/stream/tcp/2/2/1

    It goes /major/minor/build

    If you don't specify a version, then you get the highest version within the same major number you used when you built something.  You can specify just a major number, a major and minor number, or a complete path to a specific version.
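
    As a small sketch of that resolution rule (the data and function here are made up for illustration, with versions stored as (major, minor, build) tuples):

    # Resolve a partial version spec to the highest matching (major, minor, build).
    def resolve(versions, major=None, minor=None):
        candidates = [v for v in versions
                      if (major is None or v[0] == major)
                      and (minor is None or v[1] == minor)]
        return max(candidates) if candidates else None

    tcp_versions = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 5, 0), (1, 6, 3),
                    (2, 0, 0), (2, 1, 0), (2, 2, 0), (2, 2, 1)]

    print(resolve(tcp_versions, major=1))    # built against major 1 -> (1, 6, 3)
    print(resolve(tcp_versions, 1, 1))       # major and minor pinned -> (1, 1, 1)
    print(resolve(tcp_versions))             # no constraint at all -> (2, 2, 1)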

    -- -- --

    Installing versions.

    "GrokThink" is the name for our company.  The URL to the GrokThink object repository is stored as a property in the GrokThink Object.

    An interface is designed to check this repository for new versions and ask if the user wants to install them.

    Build number changes are for internal use, to differentiate between versions for QA.  Minor number upgrades are controlled by QA: once a bug fix is done, the build number is reset to 0 and the minor number is incremented.  Typically these changes are minor bug fixes or a feature addition that will not change the behavior of previous functionality.  The key here is that the fix or feature addition should not change the behavior of objects previously used to build prior services or applications.

    Major number changes are made for incompatible changes to an object.  Often you will change the interface or fix a bug in such a way that it no longer works with older objects.  At that point the major number should be incremented and the minor and build numbers set to 0.

    Development goes like this:
    1. A developer gets a bug against a specific major version.
    2. The bug report includes a new unit test to demonstrate the bug.
    3. The developer checks out the source; this increments the build number.
       This also locks the object so nobody else can work on it.
    4. The developer fixes the bug.
    5. The developer runs the unit tests against the object, adding in the new bug unit test.
    6. Once all the tests pass, the developer can check the fix into the code system.
       A description of what was fixed should be attached.
    7. This creates a diff from the old code; the new code and diff go into the system for another developer to double-check.
    8. The second developer approves or disapproves the fix.  If disapproved, the two developers negotiate a fix, and the second developer approves it once the changes are made.
    9. The final fixed release is sent to QA.
    10. QA approves the release and the release is put into the published area of the site.
    11. End users' installations of the software can now download and install the new object.

    The development system comes with the software to perform all of these actions and to publish to a public web server for your own company.  The company name is the name of the web server that the repository resides on, which prevents naming conflicts between different companies as time goes on.  A redirect can be added to override the repository location if the website name where the repository resides changes in the future.

    -- -- --


    How to Merge in CVS.

    This document describes the procedure for merging a change from the main development branch to the PubFinal branch.




    Commit the changes that you made.

    cvs update .
    cvs commit cmdtool*

    Tag the files that you want to merge over.

    cvs tag bug873 cmdtool.c cmdtool.h cmdtoolpb.c

    Checkout the PubFinal branch of VNOS in a temporary location.

    mkdir tempPubFinal
    cd tempPubFinal
    cvs checkout -r PubFinal VNOS/widgets/cmdtool

    Then merge the fix into the branch.

    cd VNOS/widgets/cmdtool
    cvs update -j bug873

    Fix any conflicts and then commit the changes to PubFinal.

    cvs commit .

    Finally, throw away the copy of PubFinal so you don't accidentally start working in the wrong branch.

    cd ../../../
    rm -rf tempPubFinal

    And you are done.



    If it had been just one file, you could have used its revision number to merge the change into PubFinal.

    Instead of:

    cvs update -j bug873

    You could have used:

    cvs update -j 1.49 cmdtool.c

    This assumes cmdtool.c was the only file you changed and 1.49 was the revision that contained your check-in.