I am an experienced computer programmer, with expertise in numerical analysis, information processing, and web development. My favorite language for general-purpose use, including scripting and task automation, is Perl. For compute-intensive work, I like Fortran 90/95.
Since being awarded a private ten-node (80 CPUs, 160 GB RAM) cluster as research startup at Sam Houston, I have taken an interest in high-performance and high-throughput computing. I have become proficient in the use of the Message Passing Interface (MPI) protocols for inter-process communication in massively parallel computations, and in the use of batch schedulers such as Platform LSF. Both are now integral to the success of my research collaboration with the group led by Dimitri Nanopoulos at Texas A&M.
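The two techniques come together in a job submission script: LSF allocates the slots, and MPI launches the parallel ranks across them. A minimal sketch of such a script follows; the queue defaults, slot count, and executable name (`./my_mpi_program`) are illustrative assumptions, not the actual production setup.

```shell
#!/bin/bash
# Hypothetical LSF submission script for an MPI job on a small cluster.
#BSUB -J mpi_job          # job name
#BSUB -n 16               # request 16 slots (MPI ranks)
#BSUB -o mpi_job.%J.out   # stdout file; %J expands to the LSF job ID
#BSUB -e mpi_job.%J.err   # stderr file

# Launch the (assumed Fortran) MPI executable across the allocated slots.
mpirun -np 16 ./my_mpi_program
```

Submission would then be `bsub < script.lsf`, with the scheduler queuing the job until 16 slots are free.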
The "Brazos" CMS Site Monitor: I am a collaborating member of a project entitled "Discovery of Dark Matter using High Performance Computing and LHC Data at Texas A&M", for which a Norman Hackerman Advanced Research Program grant in the amount of $100,000 was awarded to Co-PIs David Toback and Guy Almes of Texas A&M. My role in this project has been the development of new web-based tools for monitoring the CMS Tier 3 clusters involved in distributed Grid data analysis for the Large Hadron Collider (LHC). My students Jacob Hill and Micael Kowalczyk have assisted in that work. A summary presentation of that work is available here. The working development version of our website is available here, and the installation source package repository is available below.
I have posted selected program packages below in tar-gzipped format for free download.
- Cut LHCO
(For counting and selection cuts on PGS .lhco files.)
- (For establishing cross-section expected confidence limits with correlated errors; an extension of work by John Conway.)
- (For monitoring single CMS data analysis sites.)
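Since a PGS .lhco file is a plain-text table with one detector object per line, the kind of counting cut Cut LHCO performs can be sketched with standard text tools. This is only an illustration, assuming the usual LHCO column layout in which the second column is the object type (4 = jet) and the fifth is transverse momentum in GeV; the filename is hypothetical and the real package is far more capable.

```shell
# Count jet objects (type 4) with pT > 20 GeV in a PGS .lhco file.
# Header/trigger lines have fewer fields and are skipped by the NF guard.
awk 'NF >= 5 && $2 == 4 && $5 > 20 { n++ } END { print n+0 }' events.lhco
```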