Kenneth Christie Young

About This Member:

EDUCATION

1959-1961       University of Arizona

1963-1965       Arizona State University  (B.S. Chemistry, 1965)

1965-1973       University of Chicago     (M.S. Geophysics, 1967)

1973-1974       Post-doctoral fellow, National Center for Atmospheric Research

1974-1979       Assistant Professor, Dept of Atmospheric Sciences, Univ of Arizona

1979-1994       Associate Professor, Dept of Atmospheric Sciences, Univ of Arizona


My interest in weather and climate dates back to elementary school, and I have been collecting and analyzing climate data ever since.  I attended the University of Arizona after high school because it had a meteorology program.  However, I was expelled from the UofA for failing ROTC (then mandatory for all male students) three times, despite maintaining a B average.  I then joined the US Air Force so I could attend weather observer school.  I loved the work but had problems obeying orders, so I was “expelled” from the USAF as well.  I returned to finish my BS in Chemistry at Arizona State University and then earned my MS and PhD at the University of Chicago (Geophysical Sciences).

My early research focused on precipitation processes and led to my textbook (Young, K.C., 1993: Microphysical Processes in Clouds. Oxford University Press, New York, 427 pp.).  I was involved with the National Hail Research Experiment in the late 1970s and became persona non grata with the weather modification community for pointing out that drilling a well and pumping out the ground water was more effective than cloud seeding at making rain.

In the late 1980s, I became interested in artificial intelligence and developed a direct solution for building a neural net model (bypassing the tedious, repetitive training).  I used this to develop a model to forecast summer rain in Tucson and extracted what the model had “learned”.  The key features identified by the model matched the findings in a PhD thesis being written at the same time by another professor's student in our department.
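
The direct solution itself isn't spelled out here, so the sketch below is only an assumption about the general idea: fix the hidden-layer weights of a one-hidden-layer net and solve the output weights in closed form by least squares, with no iterative training loop.  All data, sizes, and names are invented.

    import numpy as np

    # Hypothetical sketch, not the original method: one least-squares
    # solve replaces the many passes of iterative training.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))               # 200 cases x 5 predictors (made up)
    y = (X[:, 0] * X[:, 1] > 0).astype(float)   # synthetic rain/no-rain target

    W_hidden = rng.normal(size=(5, 20))         # fixed random hidden-layer weights
    H = np.tanh(X @ W_hidden)                   # hidden-layer activations

    # "Training" in one step: least-squares fit of the output weights.
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

    pred = H @ w_out                            # model output
    print(np.mean((pred > 0.5) == y))           # in-sample hit rate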

In the spring of 1994, one of my graduate students came to me with a problem from one of the agriculture departments: what is the best time to seed grasslands in the summer?  They had tried Markov chains, but those require uncorrelated variables.  I used a variation of my neural net model to develop a method for chaining correlated variables. (Young, K.C., 1994: A multivariate chain model for simulating climatic parameters from daily data. J. Appl. Meteor., 33, 661-671.)
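
Young (1994) gives the actual formulation; as a rough stand-in, the sketch below chains correlated variables with a first-order multivariate autoregressive step, deriving its matrices from the lag-0 and lag-1 covariances of the (made-up) daily data, so the simulated values keep both the cross-correlations and the day-to-day persistence.

    import numpy as np

    # Illustrative stand-in, not the published model.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(1000, 3))           # made-up daily series, 3 variables
    data[:, 1] += 0.6 * data[:, 0]              # inject some cross-correlation
    data = (data - data.mean(0)) / data.std(0)  # standardize each variable

    x0, x1 = data[:-1], data[1:]
    M0 = data.T @ data / len(data)              # lag-0 covariance matrix
    M1 = x1.T @ x0 / len(x0)                    # lag-1 covariance matrix

    A = M1 @ np.linalg.inv(M0)                  # carries yesterday into today
    B = np.linalg.cholesky(M0 - A @ M1.T)       # shapes the random shocks

    z = np.zeros(3)
    simulated = []
    for _ in range(365):                        # one synthetic year
        z = A @ z + B @ rng.normal(size=3)
        simulated.append(z)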

I also used this model to reconstruct streamflow from tree ring data. (Young, K.C., 1994: Reconstructing streamflow time series in central Arizona using monthly precipitation and tree ring records. J. Climate, 7, 361-364.)  In this research, I realized that the statistical methods used to reconstruct past climate from proxy data inherently reduce the variance of the resulting time series.  I wanted to run a harmonic analysis on the time series but needed a relatively uniform variance to do so, so I developed a method for restoring the variance.
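
The restoration method itself is in the papers; the sketch below only illustrates the underlying problem, a regression-based reconstruction that comes out too smooth, together with the crudest possible fix, rescaling to the calibration-period standard deviation.  The "streamflow" and "proxy" series are invented.

    import numpy as np

    # Minimal illustration of the variance-loss problem, not the actual method.
    rng = np.random.default_rng(2)
    flow = rng.normal(1000.0, 200.0, size=100)          # made-up observed streamflow
    proxy = 0.8 * (flow - flow.mean()) / flow.std() + rng.normal(0, 0.6, size=100)

    slope, intercept = np.polyfit(proxy, flow, 1)       # calibrate proxy -> flow
    recon = intercept + slope * proxy
    print(flow.std(), recon.std())                      # reconstruction is too smooth

    restored = flow.mean() + (recon - recon.mean()) * (flow.std() / recon.std())
    print(restored.std())                               # variance restored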

This led to a similar study for the California Department of Water Resources in 1995 that showed dominant 95- and 49.5-year cycles in the streamflow in northern California.  One hundred years of independent back-testing gave confidence levels of 99.5% to 99.9% on the various watersheds.  The study predicted that the next major drought in California would begin in 2014 and last until 2047.  The cycles also predicted a minor wet period for 2017-2018 before the drought cycle deepens.
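
As a hypothetical illustration of this kind of cycle hunting (not the study's actual analysis), the sketch below runs a simple FFT periodogram on a synthetic annual series containing 95- and 49.5-year cycles; the record length limits the frequency resolution, so the recovered periods are approximate.

    import numpy as np

    # Synthetic 950-year annual series with two embedded cycles plus noise.
    rng = np.random.default_rng(3)
    years = np.arange(950)
    series = (np.sin(2 * np.pi * years / 95.0)
              + 0.5 * np.sin(2 * np.pi * years / 49.5)
              + rng.normal(0, 0.5, size=years.size))

    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(years.size)                 # cycles per year
    top = np.argsort(power[1:])[-2:] + 1                # two strongest nonzero peaks
    print(np.sort(1.0 / freqs[top]))                    # roughly 95 and 49.5 years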

I spent several years developing AI tools, one of which was commercialized.  This was a linear indexing method that I applied to finding near duplicates in mailing lists.  Traditionally, a key made up of characters from the name and address would be created for each record, and then the keys would be sorted.  The time required to sort a list of n items is proportional to n log n; that is, when the list gets really long, the time required to do the sort becomes prohibitive.  In a test on a 100,000-name mailing list where the traditional approach took 20 minutes on a PC, the algorithm I developed took 20 seconds and found twice as many duplicates.
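
The commercial algorithm's details aren't given here, so the sketch below uses hash bucketing as a stand-in to show how candidate duplicates can be grouped in a single O(n) pass instead of an O(n log n) key sort; the key recipe and the records are hypothetical.

    from collections import defaultdict

    def make_key(name, address):
        # Coarse match key built from leading characters of name and address,
        # so near duplicates tend to land in the same bucket.
        return (name[:4] + address[:6]).lower().replace(" ", "").replace(",", "")

    def duplicate_groups(records):
        buckets = defaultdict(list)
        for name, address in records:        # one linear pass over the list
            buckets[make_key(name, address)].append((name, address))
        return [g for g in buckets.values() if len(g) > 1]

    mailing_list = [
        ("Smith, John", "123 Main St"),
        ("Smith, Jon",  "123 Main Street"),  # near duplicate, same bucket
        ("Doe, Jane",   "42 Oak Ave"),
    ]
    print(duplicate_groups(mailing_list))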

I also developed a program to match drivers and riders in a hypothetical city-wide car-pooling system using a hill-climber algorithm.  And finally, I developed an automated home appraisal program that would return specific comparable properties, just as a human appraiser would.
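
The original program isn't available, but a toy version of the hill-climbing idea might look like the sketch below: start from an arbitrary driver-rider assignment and keep any random swap of two riders that lowers the total pickup distance.  All coordinates are made up.

    import random

    random.seed(4)
    drivers = [(random.random(), random.random()) for _ in range(20)]
    riders = [(random.random(), random.random()) for _ in range(20)]

    def total_distance(assignment):
        # Sum of straight-line pickup distances; driver i collects assignment[i].
        return sum(((dx - rx) ** 2 + (dy - ry) ** 2) ** 0.5
                   for (dx, dy), (rx, ry) in zip(drivers, assignment))

    match = riders[:]                        # initial assignment: arbitrary order
    best = total_distance(match)
    for _ in range(20000):                   # hill climb on random pairwise swaps
        i, j = random.randrange(20), random.randrange(20)
        match[i], match[j] = match[j], match[i]
        cost = total_distance(match)
        if cost < best:
            best = cost                      # keep the improving swap
        else:
            match[i], match[j] = match[j], match[i]   # undo the swap
    print(best)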