Theory and Phenomenology
My principal research training is as a theoretical particle physicist,
having studied under Dimitri V. Nanopoulos, a distinguished professor
at Texas A&M University. I have worked on Grand Unified
Theories (GUTs), particularly the flipped SU(5) GUT.
My research straddles the line between theory and phenomenology, spanning
topics as diverse as string model building (in the free fermionic
and intersecting D-brane constructions) and the simulation of collider-level
supersymmetry signatures at the Large Hadron Collider (LHC). Large-scale
computing is a persistent element across all of my research.
Recently, I have collaborated with Dimitri Nanopoulos' research group at Texas
A&M to co-author (joined also by our colleagues Tianjun Li and James Maxin) a
series of papers describing the properties of a model which we have named F-SU(5).
This model is a phenomenologically favorable combination of the flipped SU(5)
GUT with a pair of exotic TeV-scale vector-like field multiplets, motivated by
F-theory model building, and with the dynamically established boundary
conditions of no-scale supergravity. In particular, the model connects directly
to experiments studying dark matter, proton decay, and rare processes such as
flavor-changing neutral current transitions and contributions to the anomalous
magnetic moment of the muon. Moreover, it makes very specific predictions for
observations at the LHC, including a Higgs boson mass near 125 GeV, the
manifestation of supersymmetry in ultra-high jet multiplicity events (at least
nine jets), and possibly even direct detection of the hypothesized vector-like multiplets.
I am fortunate to have not one but two simultaneously active research
collaborations. The second is with the group led by high-energy experimentalist
Professor David Toback at Texas A&M. This partnership has presented a variety of
opportunities to me that are atypical for a physicist trained in theory and phenomenology.
In particular, in early 2011 I had the opportunity to perform a week of hands-on
service work in the control room of the CDF experiment (recently decommissioned
from collider-mode operation) at the Fermilab Tevatron in Batavia, Illinois. Most
recently, I have been deeply engaged (along with recent SHSU physics graduates Jacob
Hill and Mike Kowalczyk) in the construction of an autonomous web-based monitoring utility
designed to report on the health of computing clusters that operate in support of the data
analysis agenda of the LHC.
Particle collisions occur within the LHC with such intensity and frequency that
processing the resulting data can only be handled by a truly worldwide computing grid of
enormous flexibility, speed, and reliability. Texas A&M hosts a "Tier 3" (end-line data
consumer) site, one of about 45 spread around the planet. Each is a unique entity, composed of
extremely complicated, interdependent hardware and software under local management. It is not
surprising that the extraordinary networking and performance requirements on each installation
result in not-infrequent system failures, particularly at newly established sites. Successful operation
and optimization of a Tier 3 site thus requires intimately detailed, near real-time feedback on how
system components are behaving at a given moment, and how this compares to design goals and historical
norms. Our monitor is designed to provide this essential information efficiently, in a
comprehensive, unified, and streamlined format that is specialized for a single-site view and
available immediately upon request. You can download a summary presentation on this monitoring project
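The central idea of comparing a current system reading against its historical norm can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not code from our monitor; the component names and thresholds are invented for the example.

```python
# Sketch of historical-norm anomaly flagging for site components:
# poll each component, compare its current reading against the mean
# and spread of past readings, and flag large deviations for the
# unified status report.

from statistics import mean, stdev

def flag_anomalies(history, current, n_sigma=3.0):
    """Return names of components whose current reading deviates from
    the historical mean by more than n_sigma standard deviations.

    history: dict mapping component name -> list of past readings
    current: dict mapping component name -> latest reading
    """
    flagged = []
    for name, readings in history.items():
        mu = mean(readings)
        sigma = stdev(readings)
        if sigma > 0 and abs(current[name] - mu) > n_sigma * sigma:
            flagged.append(name)
    return flagged

# Example: invented per-component metrics from previous polling cycles
history = {
    "worker-node-load": [0.8, 1.0, 0.9, 1.1, 1.0],
    "storage-io-wait":  [0.10, 0.12, 0.11, 0.09, 0.10],
}
current = {"worker-node-load": 1.05, "storage-io-wait": 0.95}
print(flag_anomalies(history, current))  # → ['storage-io-wait']
```

A real deployment would of course gather these readings automatically and track far more component types, but the comparison against historical norms reduces to this kind of per-component check.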