ELSA in action: sample session

A quick overview of ELSA

To use ELSA, we first need to load the fork history module.

    
    root# insmod /home/guill/devel/module/fork_history.ko
    root# lsmod
    Module                  Size  Used by
    fork_history            3212  0
    autofs                 17536  0
    e100                   35328  0
    mii                     5248  1 e100
    e1000                  87296  0
    
As we can see, the module is loaded. Now, we can launch the job daemon.
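As an aside, for readers who have never written a loadable module, here is a minimal skeleton of the kind of object insmod loads. This is not the fork_history source; the real module additionally hooks fork events and makes them available to userspace.

    /* minimal.c: illustration only, not the fork_history source. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init minimal_init(void)
    {
        printk(KERN_INFO "minimal: loaded\n");
        return 0;           /* 0 means the module loaded successfully */
    }

    static void __exit minimal_exit(void)
    {
        printk(KERN_INFO "minimal: unloaded\n");
    }

    module_init(minimal_init);
    module_exit(minimal_exit);
    MODULE_LICENSE("GPL");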
    
Now we can launch the job daemon.

    [guill ~/elsa_project/job_daemon]$ ./jobd

In the syslog you should see:

    Oct 26 10:18:25 account jobd[7770]: Userspace ELSA daemon is alive
    
We can also check that the message queue has been created.
    
    [guill ~/elsa_project]$ ipcs
    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes      nattch     status

    ------ Semaphore Arrays --------
    key        semid      owner      perms      nsems

    ------ Message Queues --------
    key        msqid      owner      perms      used-bytes   messages
    0xfeedbeef 360448     guill      660        0            0
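The queue with key 0xfeedbeef and mode 660 shown by ipcs is a System V message queue. Creating one with msgget(2) looks like this; the key and permissions match the output above, the rest is a sketch:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    #define ELSA_KEY 0xfeedbeef     /* key reported by ipcs above */

    int main(void)
    {
        /* Create the queue if it does not exist, mode rw-rw---- (660). */
        int msqid = msgget(ELSA_KEY, IPC_CREAT | 0660);
        if (msqid < 0) {
            perror("msgget");
            return 1;
        }
        printf("message queue id: %d\n", msqid);
        return 0;
    }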
    
Now it's time to launch the control interface, called elsa; it uses the curses library. Here is a screenshot of the interface.

As you can see on the image, you can display the list of jobs (the 'g' and 'G' keys), add a process to a job and start or stop the recording of forks.

Everything is now ready for a first example of what can be done with Enhanced Linux System Accounting.
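At its core, an interface of this kind is a curses key-dispatch loop. The sketch below is not elsa's actual source; it only illustrates the pattern, using the keys described in this article:

    #include <curses.h>

    int main(void)
    {
        int ch, running = 1;

        initscr();                  /* enter curses mode */
        cbreak();
        noecho();

        while (running) {
            mvprintw(0, 0, "g: show jobs   G: sort by job   q: quit");
            refresh();
            ch = getch();
            switch (ch) {
            case 'g': /* display every job */       break;
            case 'G': /* display sorted by job */   break;
            case 'q': running = 0;                  break;
            }
        }

        endwin();                   /* restore the terminal */
        return 0;
    }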

A real test

The test consists of creating three jobs, each of which is the compilation of a different Linux kernel.

The first screenshot (Figure 1) shows four terminals, among which you can recognize the elsa interface. Pressing the 'g' key displays all the jobs on the screen; pressing 'G' instead sorts the list by job. If there is not enough space on the screen, some information is overwritten (this needs a fix). In our example, we see that three jobs have been created and that each container holds a bash. The other three terminals are the shells from which we will launch the three compilations. Therefore, at this point, we add the pid of the bash running in each of the three shells, with a jid of 0, and then start the fork recording.

A jid set to 0 means that jobd must create a new job. If we try to add a pid to a job ID that does not exist, the action is not performed. Starting the jobd recording means that jobd will be informed of every fork that occurs on our Linux box. Here is the complete log of our test:
    
    Oct 26 10:18:25 account jobd[7770]: Userspace ELSA daemon is alive
    Oct 26 10:22:35 account jobd[7770]: process 24411 added in job 1
    Oct 26 10:22:40 account jobd[7770]: process 5606 added in job 2
    Oct 26 10:22:45 account jobd[7770]: process 5650 added in job 3
    Oct 26 10:23:11 account jobd[7770]: start recording fork
    Oct 26 10:36:33 account jobd[7770]: stop recording fork
    Oct 26 10:38:40 account jobd[7770]: Arrgh... I'm dying after having manages
    Oct 26 10:38:40 account jobd[7770]:  3 jobs
    

Figure 1

In Figure 2, you can see the three compilations running in the three terminals. The display of a job's contents isn't automatic: if you want to see the processes that were added to a job, you need to hit the 'G' key.

Figure 2

When the three compilations are done (see Figure 3), we can refresh the jobs display. As you can see, many processes have been added automatically by jobd. Now, to do the per-group accounting, we need to stop all recording (both accounting and fork history) and then run the analyzer, as sketched below.
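Stopping the kernel-side accounting is a single acct(2) call with a NULL argument, while the fork recording is stopped through the interface, as the log above shows. A sketch of the former:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* acct(NULL) disables BSD process accounting; enabling it
         * again would be acct("/path/to/acct_sample.dat").
         * Requires root (CAP_SYS_PACCT). */
        if (acct(NULL) < 0) {
            perror("acct");
            return 1;
        }
        return 0;
    }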

This leaves us with two files: the accounting file, called acct_sample.dat, which holds the per-process records (BSD acct_v3 format), and the file that describes the groups of processes, called fh_sample.dat.

Figure 3

We can now call the analyzer. It is a C program that parses the accounting file and stores its records in a table. It then walks through the groups of processes and computes the per-job accounting. Here is the output:

    
    [0] guill$ ./elsa/analyzer acct_sample.dat fh_sample.dat
    Use acct_sample.dat has the accounting file
    Use fh_sample.dat has the fork history file

      * JOBS ACCOUNTING RESULTS *

    jobID#1: 8863 process(es) in this job
        Elapsed time: 409517.00
        User time: 24168
        System Time: 1412
        Minor Pagefaults: 27573
        Major Pagefaults: 294
        Number of swaps: 0


    jobID#2: 9174 process(es) in this job
        Elapsed time: 414403.00
        User time: 24391
        System Time: 1427
        Minor Pagefaults: 10748
        Major Pagefaults: 276
        Number of swaps: 0


    jobID#3: 9204 process(es) in this job
        Elapsed time: 416685.00
        User time: 23624
        System Time: 1388
        Minor Pagefaults: 4833
        Major Pagefaults: 269
        Number of swaps: 0
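To make the analyzer's first phase concrete, here is a sketch of reading the BSD acct_v3 records mentioned above, using struct acct_v3 from <linux/acct.h>. The per-job aggregation pass is omitted since it depends on the fork history format; only the record parsing and comp_t decoding are shown:

    #include <stdio.h>
    #include <linux/acct.h>         /* struct acct_v3, comp_t */

    /* comp_t packs a 13-bit mantissa with a 3-bit base-8 exponent. */
    static unsigned long decode_comp_t(comp_t c)
    {
        return (unsigned long)(c & 0x1fff) << (((c >> 13) & 0x7) * 3);
    }

    int main(int argc, char **argv)
    {
        struct acct_v3 rec;
        FILE *f = fopen(argc > 1 ? argv[1] : "acct_sample.dat", "rb");

        if (!f) { perror("fopen"); return 1; }
        while (fread(&rec, sizeof(rec), 1, f) == 1) {
            /* The real analyzer stores each record in a table, then
             * charges it to a job by walking the fork history file. */
            printf("pid %u utime %lu stime %lu minflt %lu majflt %lu\n",
                   rec.ac_pid,
                   decode_comp_t(rec.ac_utime),
                   decode_comp_t(rec.ac_stime),
                   decode_comp_t(rec.ac_minflt),
                   decode_comp_t(rec.ac_majflt));
        }
        fclose(f);
        return 0;
    }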