
Restricting number of slots per server


Slot Limits

You normally want to prevent over-subscription of cores on execution hosts by limiting the slots allocated on a host to its processor count, where "processors" may mean hardware threads rather than physical cores.

If you only have a single queue, you can get away with specifying the slot counts in the queue configuration for each host (qconf -mq <queue>). You can also do it by host group:

slots 0,[@hexcore=12],[@quadcore=8]...

With multiple queues on the same hosts, per-queue slot counts are not enough: each queue contributes its own slots, and their sum can over-subscribe the host.

An easy way for an inhomogeneous cluster is with the following RQS (added with qconf -arqs), although it may lead to slow scheduling in a large cluster:

{
   name         host-slots
   description  "restrict slots to core count"
   enabled      TRUE
   limit        hosts {*} to slots=$num_proc
}

This is probably the best solution if num_proc, the processor count, varies because hardware threads can be turned on and off.
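To check the num_proc value a host actually reports, you can grep its load values (the host name node001 is hypothetical here):

$ qconf -se node001 | grep num_proc

num_proc appears in the load_values line of the execution host configuration, which is the value the $num_proc dynamic limit consults.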

Alternatively, with a host group for each hardware type, you can use a set of limits like

   
limit        hosts {@hexcore} to slots=12
limit        hosts {@quadcore} to slots=8 

which will avoid the possible scheduling inefficiency of the $num_proc dynamic limit.
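Wrapped in a complete ruleset (the name and description here are illustrative), that looks like:

{
   name         host-slots
   description  "restrict slots to core count per host group"
   enabled      TRUE
   limit        hosts {@hexcore} to slots=12
   limit        hosts {@quadcore} to slots=8
}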

Finally, and possibly the most foolproof way in normal situations, is to set the slots complex on each host, e.g.

$ # For each known core count n, set slots=n on every execution host
$ # whose load values report num_proc=n (here 8- and 16-core hosts).
$ for n in 8 16; do qconf -mattr exechost complex_values slots=$n \
   `qconf -sobjl exechost load_values "*num_proc=$n*"`; done
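You can then confirm that the complex was set on a given host (again, node001 is a hypothetical host name):

$ qconf -se node001 | grep complex_values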


Old News ;-)

Common Uses - GridWiki

RQS

Max user jobs on a particular execution host

{
 name max_per_host
 enabled TRUE
 limit users dag hosts chrisdag-laptop to slots=2
}
The RQS Design Specification document contains the following more complicated example:
  1. All users together should never take more than 20 slots.
  2. All users together may take at most 5 slots across all Linux hosts.
  3. Every user is restricted to one slot per Linux host, except user "roland", who may take 2; slots on all other hosts are set to 0.

In that case, the rulesets would look like this; note that "@linux" is a predefined host group:

{
 name maxujobs
 limit users * to slots=20
}

{
 name max_linux
 limit users * hosts @linux to slots=5
}

{
 name max_per_host
 limit users roland hosts {@linux} to slots=2
 limit users {*} hosts {@linux} to slots=1
 limit users * hosts * to slots=0
}
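Rulesets like these can also be loaded from a file instead of typed into an interactive editor; a sketch, with a hypothetical file name:

$ qconf -Arqs /tmp/max_per_host.rqs

Use qconf -srqs to display all configured rulesets, or qconf -srqs <name> for a single one.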

[gridengine users] queue slots per machine

On 03.01.2012 at 11:34, Ben De Luca wrote:

> I wonder if I am misremembering, but is there a way to configure a
> queue to have the same number of slots as there are NCOR (or even
> NCPU) per machine? I seem to remember doing this, though I may have
> set it with a host list somehow. I am running SGE 8.0.0e (Son of
> Grid Engine).

If you want to have it generic, you could define 999 slots and use an RQS per host to limit it to match the number of cores automatically, but it will make the output longer and give (IMO) confusing output for qstat.

To define it in the queue definition:

$ qconf -sq all.q
...
slots 1,[node01=4],[node02=8],[@hexacore=6]

-- Reuti (reuti at staff.uni-marburg.de), Tue Jan 3 16:42:46 UTC 2012

Dave Love (d.love at liverpool.ac.uk) replied on Wed Jan 4 17:39:03 UTC 2012:
Reuti <reuti at staff.uni-marburg.de> writes:

>>> In `qstat -f` you see the number of resv/used/tot. slots. All slots
>>> show up with the value of the queue configuration if I'm not mistaken,
>>> reading 999 then.
>>
>> I see this for a couple of nodes:
>>
>> ---------------------------------------------------------------------------------
>> parallel@node063               BIPC  0/4/4          3.82     lx-amd64
>>   122440 0.67089 NVT5job2a  ********     r     01/03/2012 16:50:25     4
>> ---------------------------------------------------------------------------------
>> parallel@node064               BIPC  0/0/4          0.00     lx-amd64
>
> Not for me, I have 999 in the queue configuration and get:
>
> $ qstat -f
> queuename                      qtype resv/used/tot. load_avg arch          states
> ---------------------------------------------------------------------------------
> a@b                            BIPC  0/0/999        0.01     lx24-x86
> ---------------------------------------------------------------------------------

Oh, right.  I have

  $ qconf -sq parallel | grep slots
  slots                 0,[@quadcore=8],[@octcore=16],[@dualcore=4],[@hexcore=12],[@ibsdr4=4]

as opposed (I assume) to your 999 there.

I'll clarify the doc.

On 03.01.2012 at 17:38, Dave Love wrote:

> [Beware that's not released.]
>
> I.e.
>
>  $ qconf -srqs host-slots
>  {
>     name         host-slots
>     description  "restrict slots to core count"
>     enabled      TRUE
>     limit        hosts {*} to slots=$num_proc
>  }
>
> How is it confusing, exactly?
>
> In case it's not clear, it isn't the same as just doing this with
> multiple queues on the host.

In `qstat -f` you see the number of resv/used/tot. slots. All slots show up with the value of the queue configuration if I'm not mistaken, reading 999 then.

-- Reuti


-pe high 10-20 

requests a high-priority "parallel environment" that spans several machines, in this case any number of CPUs between 10 and 20 (inclusive); a single number requests exactly that many CPUs. Note that if you request more CPUs than you actually have high-priority access to, your job will hang in the queue. See Submitting OpenMP Jobs or Submitting MPI Jobs.

-q *@machineName-n*

requests a specific machine or machine group

-l slots=2 

requests that the job be given 2 slots (i.e., 2 CPUs) instead of 1. You MUST use this if your program is multi-threaded; you should NOT use it otherwise.

-l mem_free=1.5G

requests that only machines with 1.5 GB (= 1536 MB) of free memory or more be used for this job, i.e., the job requires a lot of memory and thus is not suitable for all hosts. Note that 1G is equal to 1024M. (How do I determine how much memory my program needs? See the FAQ.)
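Several of these options can be combined on a single qsub line; a sketch with hypothetical queue, host, and script names:

$ qsub -q all.q@node001 -l mem_free=1.5G -l slots=2 -cwd myjob.sh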

MPI

Applications that use Message Passing Interface (MPI) consist of multiple tasks that rely on a communication infrastructure. The orte (Open Run-Time Environment) parallel environment supports Open MPI applications.

   % qsub -pe orte 4 runme

In the example above, the runme script calls mpirun to start the tasks, which GridEngine distributes across the 4 slots it granted (possibly on separate machines).
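A minimal runme script for this setup could look like the following sketch (the executable name is hypothetical); under SGE tight integration, mpirun needs no -np or -hostfile arguments because it inherits the granted slots from the parallel environment:

#!/bin/bash
# mpirun picks up hosts and slot counts from the orte parallel environment
mpirun ./mpi-executable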

There is an example MPI application to help you get started. Also there is much more information at open-mpi.org.

Sun Grid Engine Plugin — StarCluster 0.95.5 documentation

Advanced Options

The SGE plugin has advanced options that some users may wish to tune for their needs. In order to use these advanced options you must first define the SGE plugin in your config:

[plugin sge]
setup_class = starcluster.plugins.sge.SGEPlugin

Disabling Job Execution on Master Node

By default StarCluster configures the master node as an execution host which means that the master node can accept and run jobs. In some cases you may not wish to run jobs on the master due to resource constraints. For example, if you’re generating a lot of NFS traffic in your jobs you may wish to completely dedicate the master to serving NFS rather than both running jobs and serving NFS.

To disable the master node being used as an execution host set master_is_exec_host=False in your sge plugin config:

[plugin sge]
setup_class = starcluster.plugins.sge.SGEPlugin
master_is_exec_host = False

Now whenever a new cluster is created with the SGE plugin enabled the master will not be configured as an execution host.
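On a running cluster you can verify which hosts are configured as execution hosts with qconf; with master_is_exec_host = False the master should not be listed:

$ qconf -sel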

Setting the Number of Slots Per Host

By default StarCluster configures each execution host in the cluster with a number of job ‘slots’ equal to the number of processors on the host. If you’d like to manually set the number of slots on each execution host set slots_per_host=<num_slots_per_host> in your SGE plugin config:

[plugin sge]
setup_class = starcluster.plugins.sge.SGEPlugin
slots_per_host = 10
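To confirm the setting took effect, inspect the slots attribute of the queue (all.q is the queue shown in the qstat output later on this page):

$ qconf -sq all.q | grep slots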

Sun Grid Engine Quick-Start

The following sections give an overview of how to submit jobs, monitor job and host status, and how to use the SGE parallel environment.

Submitting Jobs

A job in SGE represents a task to be performed on a node in the cluster and contains the command line used to start the task. A job may have specific resource requirements but in general should be agnostic to which node in the cluster it runs on as long as its resource requirements are met.

Note

All jobs require at least one available slot on a node in the cluster to run.

Submitting jobs is done using the qsub command. Let’s try submitting a simple job that runs the hostname command on a given cluster node:

sgeadmin@master:~$ qsub -V -b y -cwd hostname
Your job 1 ("hostname") has been submitted

Notice that the qsub command, when successful, will print the job number to stdout. You can use the job number to monitor the job’s status and progress within the queue as we’ll see in the next section.
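If you need the job number in a script, qsub's -terse option prints only the job ID; a minimal sketch:

jobid=$(qsub -terse -V -b y -cwd hostname)
echo "submitted job $jobid"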

Monitoring Jobs in the Queue

Now that our job has been submitted, let’s take a look at the job’s status in the queue using the command qstat:

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue  slots ja-task-ID
-----------------------------------------------------------------------------------------
      1 0.00000 hostname   sgeadmin     qw    09/09/2009 14:58:00            1
sgeadmin@master:~$

From this output, we can see that the job is in the qw state which stands for queued and waiting. After a few seconds, the job will transition into a r, or running, state at which point the job will begin executing:

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue  slots ja-task-ID
-----------------------------------------------------------------------------------------
1 0.00000 hostname   sgeadmin     r     09/09/2009 14:58:14                1
sgeadmin@master:~$
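While a job is still pending in the qw state, you can ask SGE for the job's details, including scheduler messages, with qstat -j and the job number from above:

sgeadmin@master:~$ qstat -j 1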

Once the job has finished, the job will be removed from the queue and will no longer appear in the output of qstat:

sgeadmin@master:~$ qstat
sgeadmin@master:~$

Now that the job has finished let’s move on to the next section to see how we view a job’s output.

Viewing a Job’s Output

Sun Grid Engine creates stdout and stderr files in the job’s working directory for each job executed. If any additional files are created during a job’s execution, they will also be located in the job’s working directory unless explicitly saved elsewhere.

The job’s stdout and stderr files are named after the job with the extension ending in the job’s number.

For the simple job submitted above we have:

sgeadmin@master:~$ ls hostname.*
hostname.e1 hostname.o1
sgeadmin@master:~$ cat hostname.o1
node001
sgeadmin@master:~$ cat hostname.e1
sgeadmin@master:~$

Notice that Sun Grid Engine automatically named the job hostname and created two output files: hostname.e1 and hostname.o1. The e stands for stderr and the o for stdout. The 1 at the end of the files’ extension is the job number. So if the job had been named my_new_job and was job #23 submitted, the output files would look like:

my_new_job.e23 my_new_job.o23
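If you would rather collect output elsewhere, qsub's -o and -e options set the stdout and stderr paths explicitly; a sketch with a hypothetical log directory:

$ qsub -o /home/sgeadmin/logs/ -e /home/sgeadmin/logs/ -V -b y -cwd hostname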

Monitoring Cluster Usage

After a while you may be curious to view the load on Sun Grid Engine. To do this, we use the qhost command:

sgeadmin@master:~$ qhost
HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
-------------------------------------------------------------------------------
global                  -               -     -       -       -       -       -
master                  lx24-x86        1  0.00    1.7G   62.7M  896.0M     0.0
node001                 lx24-x86        1  0.00    1.7G   47.8M  896.0M     0.0

The output shows the architecture (ARCH), number of CPUs (NCPU), current load (LOAD), total and used memory (MEMTOT, MEMUSE), and total and used swap space (SWAPTO, SWAPUS) for each node.

You can also view the average load (load_avg) per node using the ‘-f’ option to qstat:

sgeadmin@master:~$ qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
all.q@master.c                 BIP   0/0/1          0.00     lx24-x86
---------------------------------------------------------------------------------
all.q@node001.c                BIP   0/0/1          0.00     lx24-x86

Creating a Job Script

In the ‘Submitting Jobs’ section we submitted a single command, hostname. This is useful for simple jobs, but for more complex jobs where we need to incorporate some logic we can use a so-called job script. A job script is essentially a bash script that contains some logic and executes any number of external programs/scripts:

#!/bin/bash
echo "hello from job script!"
echo "the date is" `date`
echo "here's /etc/hosts contents:"
cat /etc/hosts
echo "finishing job :D"

As you can see, this script simply executes a few commands (such as echo, date, cat, etc.) and exits. Anything printed to the screen will be put in the job’s stdout file by Sun Grid Engine.

Since this is just a bash script, you can put any form of logic necessary in the job script (i.e. if statements, while loops, for loops, etc.) and you may call any number of external programs needed to complete the job.

Let’s see how you run this new job script. Save the script above to /home/sgeadmin/jobscript.sh on your StarCluster and execute the following as the sgeadmin user:

sgeadmin@master:~$ qsub -V jobscript.sh
Your job 6 ("jobscript.sh") has been submitted

Now that the job has been submitted, let’s call qstat periodically until the job has finished since this job should only take a second to run once it’s executed:

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue           slots ja-task-ID
---------------------------------------------------------------------------------------------------
      6 0.00000 jobscript. sgeadmin     qw    09/09/2009 16:18:43                     1

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue           slots ja-task-ID
---------------------------------------------------------------------------------------------------
      6 0.00000 jobscript. sgeadmin     qw    09/09/2009 16:18:43                     1

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue           slots ja-task-ID
---------------------------------------------------------------------------------------------------
      6 0.00000 jobscript. sgeadmin     qw    09/09/2009 16:18:43                     1

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue           slots ja-task-ID
---------------------------------------------------------------------------------------------------
      6 0.00000 jobscript. sgeadmin     qw    09/09/2009 16:18:43                     1

sgeadmin@master:~$ qstat
job-ID  prior   name       user         state submit/start at     queue           slots ja-task-ID
---------------------------------------------------------------------------------------------------
      6 0.55500 jobscript. sgeadmin     r     09/09/2009 16:18:57 all.q@node001.c     1

sgeadmin@master:~$ qstat
sgeadmin@master:~$

Now that the job is finished, let’s take a look at the output files:

sgeadmin@master:~$ ls jobscript.sh*
jobscript.sh jobscript.sh.e6 jobscript.sh.o6
sgeadmin@master:~$ cat jobscript.sh.o6
hello from job script!
the date is Wed Sep 9 16:18:57 UTC 2009
here's /etc/hosts contents:
# Do not remove the following line or programs that require network functionality will fail
127.0.0.1 localhost.localdomain localhost
10.252.167.143 master
10.252.165.173 node001
finishing job :D
sgeadmin@master:~$ cat jobscript.sh.e6
sgeadmin@master:~$

We see from looking at the output that the stdout file contains the output of the echo, date, and cat statements in the job script and that the stderr file is blank, meaning there were no errors during the job's execution. Had something failed, such as a 'command not found' error, those errors would have appeared in the stderr file.

Deleting a Job from the Queue

What if a job is stuck in the queue, is taking too long to run, or was simply started with incorrect parameters? You can delete a job from the queue using the qdel command in Sun Grid Engine. Below we launch a simple ‘sleep’ job that sleeps for 10 seconds so that we can kill it using qdel:

sgeadmin@master:~$ qsub -b y -cwd sleep 10
Your job 3 ("sleep") has been submitted
sgeadmin@master:~$ qdel 3
sgeadmin has registered the job 3 for deletion

After running qdel you’ll notice the job is gone from the queue:

sgeadmin@master:~$ qstat
sgeadmin@master:~$

OpenMPI and Sun Grid Engine

Note

OpenMPI must be compiled with SGE support (--with-sge) to make use of the tight integration between OpenMPI and SGE as documented in this section. This is the case on all of StarCluster's public AMIs.

OpenMPI supports tight integration with Sun Grid Engine. This integration allows Sun Grid Engine to handle assigning hosts to parallel jobs and to properly account for parallel jobs.

OpenMPI Parallel Environment

StarCluster by default sets up a parallel environment, called “orte”, that has been configured for OpenMPI integration within SGE and has a number of slots equal to the total number of processors in the cluster. You can inspect the SGE parallel environment by running:

sgeadmin@ip-10-194-13-219:~$ qconf -sp orte
pe_name            orte
slots              16
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
accounting_summary FALSE

This is the default configuration for a two-node, c1.xlarge cluster (16 virtual cores).

Parallel Environment Allocation Rule

Notice the allocation_rule setting in the output of the qconf command in the previous section. This rule defines how to assign slots to a job. By default StarCluster uses the fill_up allocation rule. This rule causes SGE to greedily take all available slots on as many cluster nodes as needed to fulfill the slot requirements of a given job. For example, if a user requests 8 slots and a single node has 8 slots available, that job will run entirely on one node. If only 5 slots are available on one node and 3 on another, the job takes all 5 slots on the first node and the remaining 3 on the other.

The allocation rule can also be configured to distribute the slots around the cluster as evenly as possible by using the round_robin allocation_rule. For example, if a job requests 8 slots, it will go to the first node, grab a slot if available, move to the next node and grab a single slot if available, and so on wrapping around the cluster nodes again if necessary to allocate 8 slots to the job.

Finally, setting the allocation_rule to an integer causes the parallel environment to take exactly that many slots from each host when allocating the job. For example, if the allocation_rule is set to 1, then all slots have to reside on different hosts. If the special value $pe_slots is used, then all slots for the parallel job must be allocated on a single host in the cluster.

You can change the allocation rule for the orte parallel environment at any time using:

$ qconf -mp orte

This will open up vi (or any editor defined in the EDITOR environment variable) and let you edit the parallel environment settings. To change from fill_up to round_robin in the above example, change the allocation_rule line from:

allocation_rule    $fill_up

to:

allocation_rule    $round_robin

You can also change the rule to the pe_slots mode:

allocation_rule    $pe_slots

or specify a fixed number of slots per host to assign when allocating the job:

allocation_rule    1

After making the change and saving the file you can verify your settings using:

sgeadmin@ip-10-194-13-219:~$ qconf -sp orte
pe_name            orte
slots              16
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
accounting_summary FALSE

Submitting OpenMPI Jobs using a Parallel Environment

The general workflow for running MPI code is:

  1. Compile the code using mpicc, mpicxx, mpif77, mpif90, etc.
  2. Copy the resulting executable to the same path on all nodes or to an NFS-shared location on the master node

Note

It is important that the path to the executable is identical on all nodes for mpirun to correctly launch your parallel code. The easiest approach is to copy the executable somewhere under /home on the master node since /home is NFS-shared across all nodes in the cluster.

  1. Run the code on X number of machines using:

    $ mpirun -np X -hostfile myhostfile ./mpi-executable arg1 arg2 [...]
    

where the hostfile looks something like:

$ cat /path/to/hostfile
master  slots=2
node001 slots=2
node002 slots=2
node003 slots=2

However, when using an SGE parallel environment with OpenMPI you no longer have to specify the -np, -hostfile, -host, etc. options to mpirun. This is because SGE will automatically assign hosts and processors to be used by OpenMPI for your job. You also do not need to pass the --byslot and --bynode options to mpirun given that these mechanisms are now handled by the fill_up and round_robin modes specified in the SGE parallel environment.

Instead of using the above formulation, create a simple job script that contains a very simplified mpirun call:

$ cat myjobscript.sh
mpirun /path/to/mpi-executable arg1 arg2 [...]

Then submit the job using the qsub command and the orte parallel environment automatically configured for you by StarCluster:

$ qsub -pe orte 24 ./myjobscript.sh

The -pe option specifies which parallel environment to use and how many slots to request. The above example requests 24 slots (or processors) using the orte parallel environment. The parallel environment automatically takes care of distributing the MPI job amongst the SGE nodes using the allocation_rule defined in the environment's settings.

You can also do this without a job script like so:

$ cd /path/to/executable
$ qsub -b y -cwd -pe orte 24 mpirun ./mpi-executable arg1 arg2 [...]
