
Roxburgh, A.J.,
"On Computing the Discrete Fourier Transforms," a report
submitted to fulfill the requirements of
55:198 Individual Investigations: Electrical and Computer
Engineering. University of Iowa, Iowa City, IA 52242
Summer Session, 2009. Newly revised and updated, December 9,
2013.
ABSTRACT:
The
development of time-efficient small-N discrete
Fourier transform (DFT) algorithms has received considerable
attention due to the ease with which they combine,
“building block” style, to yield time-efficient large
transforms. This paper reports on the discovery that
efficient computational algorithms for the small-N
DFT developed during the 19th century bear more than a
passing resemblance to similar-sized modern-day
algorithms, including the same nested (+)(×)(+)
structure, similar flow graphs, and a comparable number
of arithmetic operations. This suggests that despite the
formal sophistication of more recent approaches to the
development of efficient small-N DFT algorithms,
the key underlying principles are still the symmetry and
periodicity properties of the sine and cosine basis
functions of the Fourier transform. While the earlier
methods explicitly manipulated the DFT operator on the
level of these properties, the presentday methods
(typically based on the cyclic convolution properties of
the DFT operator) tend to hide this more basic level of
reality from view. All reduced-arithmetic DFT algorithms
take advantage of how easily the DFT operator
can be factored. From the matrix point of view, an efficient
DFT algorithm results when we factor the DFT operator
into a product of sparse matrices containing mostly ones
and zeros. Given that there are innumerable
factorizations, it is interesting that modern-day
algorithms, developed using number-theoretic techniques
quite removed from the trigonometric identities and
simple algebraic techniques used by the pioneers of
discrete signal analysis, should be so similar in form
to the early algorithms.
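The sparse-factorization viewpoint can be made concrete with a toy example (my own sketch, not drawn from the report): the 4-point DFT matrix written as a product of three sparse factors, an even-odd permutation, a pair of 2-point butterflies, and a combining stage containing the single twiddle factor.

```python
import numpy as np

# Sketch: the 4-point DFT operator factored into sparse matrices of
# mostly 0 and +/-1, plus one twiddle factor w = exp(-2*pi*i/4) = -i.
w = np.exp(-2j * np.pi / 4)
F2 = np.array([[1, 1], [1, -1]])      # 2-point DFT: a single butterfly
P4 = np.eye(4)[[0, 2, 1, 3]]          # even-odd input permutation
B4 = np.array([[1, 0, 1, 0],
               [0, 1, 0, w],
               [1, 0, -1, 0],
               [0, 1, 0, -w]])        # combining butterflies with twiddle

F4_factored = B4 @ np.kron(np.eye(2), F2) @ P4

# Compare against the dense 4-point DFT matrix exp(-2*pi*i*k*n/4)
n = np.arange(4)
F4 = np.exp(-2j * np.pi * np.outer(n, n) / 4)
assert np.allclose(F4_factored, F4)
```

Each sparse factor touches each input only a constant number of times, so the factored form needs fewer multiplications than the dense matrix-vector product; larger building-block transforms combine in the same way.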

Infrared Repeater System.
United States Patent Application
US 2010/0258729 A1.
Published Oct 14, 2010. Filed April 13, 2009. Inventors:
Alastair Roxburgh, Richard Lenser, Bill Cawlfield.
ABSTRACT:
An infrared sensor includes a photodiode receiving
an infrared signal. A first amplifier is connected to
the photodiode. A second amplifier is connected to the
first amplifier. A DC servo is connected in a feedback
loop between the output of the second amplifier and the
positive side of the first amplifier. An
analog-to-digital signal converter is connected to the
second amplifier. An output driver is connected to the
analog-to-digital signal converter. The infrared sensor
may receive and retransmit an infrared signal and may be
incorporated in an infrared repeater system.
SUMMARY:
A reduced number of capacitors in the
signal path lowers the "Q" and reduces phase distortion.
This allows IR signals to be repeated with less ringing,
overshoot, and distortion, all of which are important
for accurately passing the new high-density IR codes.
Examples of these codes are RCMM and XMP, which
use 4-ary and 8-ary symbol coding, respectively, rather
than the conventional binary (2-ary) format; the
corresponding code symbols are 2 and 3 bits
long. Avoiding intersymbol interference with these IR
codes requires timing precision 4x better
than binary for RCMM, and 8x better for XMP. It
is difficult to maintain sufficiently ringing- and
overshoot-free accuracy in circuits whose
modulation passband response has too high a "Q" value (i.e., a response of too high an order, caused primarily
by too many coupling-capacitor poles in the
frequency response). The new "Hi-Fi" IR repeater
architecture solves this problem.

Baker, A.B., McLeod, C.N., Roxburgh, A.J., and Bannister, P.,
"Descending
aortic flow contribution to intrathoracic impedance: development and preliminary
testing of a dual impedance model," Journal of Clinical Monitoring and
Computing, 22, pp. 11-22, 2008.
OBJECTIVE:
Impedance measurement of cardiac output has struggled to
become established partly because there have been only a
few attempts to establish a sound theoretical basis for
this measurement. Our objective is to demonstrate that
there is valuable aortic flow information available from
an intrathoracic impedance signal which may eventually
be useful in the measurement of cardiac output by
impedance technology.
METHODS:
A model, using dual impedance measurement
electrodes and the change in impedance when blood flows,
has been developed based on an intrathoracic impedance
model of the descending aorta and esophagus. Using this
model as the basis for measurement by an esophageal
probe, we solve for the velocity of blood
flow in the descending aorta.
RESULTS:
Five patients were studied. Only three patients
had suitable signals for analysis, but the aortic flow
profiles from these three patients were consistent and
realistic.
CONCLUSION:
Aortic blood flow information may be
obtained from the intrathoracic impedance signal using
this dual impedance method.

Roxburgh, A.J.,
On computing the discrete Fourier
transform. Research paper (55:198 Individual Investigations:
Electronic and Computer Engineering, University of Iowa), 1990 (pp. i-viii,
1-30).
ABSTRACT:
The development of time-efficient small-N
discrete Fourier transform (DFT) algorithms has received
considerable attention due to the ease with which they may
be combined, “building block” style, to yield
time-efficient large transforms. This paper reports on
the discovery that efficient computational algorithms
for the small-N DFT developed during the 19th century
bear more than a passing resemblance to similar-sized
modern-day algorithms, including the same nested
(+)(×)(+)
structure, similar flow graphs, and a comparable number
of arithmetic operations. This suggests that despite the
formal sophistication of more recent approaches to the
development of efficient small-N DFT algorithms,
the key underlying principles are still the symmetry and
periodicity properties of the sine and cosine basis
functions. While the earlier methods explicitly
manipulated the DFT operator on the level of these
properties, the presentday methods (typically based on
the cyclic convolution properties of the DFT operator)
tend to hide this more basic level of reality from view.
All reduced-arithmetic DFT algorithms take advantage of
how easily the DFT operator can be factored. From the
matrix point of view, an efficient DFT algorithm results
when we factor the DFT operator into a product of sparse
matrices containing mostly ones and zeros. Given that there are
innumerable factorizations, it is interesting that
modern-day algorithms, developed using number-theoretic
techniques quite removed from the trigonometric
identities and simple algebraic techniques used by the
pioneers of discrete signal analysis, should be so
similar in form to the early algorithms.

Roxburgh,
A.J. (team leader), Aminzay S.Q., Khumalo, T., Nguyen, D.,
Binary Lookahead Carry Adder (BLCA). Project Final
Report (55:142 Introduction to VLSI Design,
Electronic and Computer Engineering Department, University
of Iowa), 1988 (pp. i-iii, 1-20).
OBJECTIVE:
Create
a floor plan and interconnections for cell design layouts (Magic).
Pad design: input protection, output buffering.
Types of BLCA: 4-bit, 8-bit, 16-bit, 32-bit.
Simulation: functional verification (Esim); timing
analysis (Crystal).
DESIGN:
Carry expression: C_i = G_i, where
(G_i, P_i) = (g_i, p_i) o (G_{i-1}, P_{i-1}) for 2 ≤ i ≤ n,
(G_1, P_1) = (g_1, p_1),
and (g, p) o (g', p') = (g + p·g', p·p').
Floor plan generation: used a corrected and verified C program
(appendix A).
Theoretical equations: time ~ log2(n), where
n = adder size; area ~ n log2(n).
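As an illustrative sketch only (Python rather than the project's C and Magic tooling), the (g, p) combining operator above can be applied serially to generate the carries; the hardware evaluates the same operator as a parallel prefix in ~log2(n) levels, but the algebra is identical. The LSB-first bit-vector convention is my own.

```python
# Sketch of the lookahead-carry algebra; not the project's actual code.
def combine(a, b):
    """(g, p) o (g', p') = (g + p.g', p.p'), with + = OR and . = AND."""
    g, p = a
    g2, p2 = b
    return (g | (p & g2), p & p2)

def carries(a_bits, b_bits):
    """Carry C_i out of each bit position; inputs are LSB-first 0/1 lists."""
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]  # (generate, propagate)
    acc = gp[0]
    out = [acc[0]]                      # C_1 = g_1
    for i in range(1, len(gp)):
        acc = combine(gp[i], acc)       # (G_i, P_i) = (g_i, p_i) o (G_{i-1}, P_{i-1})
        out.append(acc[0])              # C_i = G_i
    return out

# 11 + 6 = 17: the carries out of bits 1..4 are 0, 1, 1, 1
assert carries([1, 1, 0, 1], [0, 1, 1, 0]) == [0, 1, 1, 1]
```

Because the operator is associative, the serial loop above can be replaced by a balanced tree of `combine` calls, which is what gives the log2(n) delay cited in the results.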
RESULT:
Completed simulation of the 4-, 8-, 16-, and 32-bit BLCA.
Achieved reasonable propagation times (table 1, appendix D),
and showed that time delay closely followed log2(n).
Chip area is ~ 2n log2(n) + n (table 2).

Roxburgh, A.J.,
The simple Fourier transform,
Thesis, M.Sc., University of Otago, 1987 (249 pp.).
ABSTRACT:
A new algorithm for numerically evaluating the discrete
Fourier transform (DFT) is developed. The algorithm,
which yields results of high precision, is also
computationally efficient in the context of typical
8-bit microprocessor instruction sets. Because it gives
a particularly simple DFT implementation for such
microprocessors, the algorithm has been named the simple
Fourier transform, or SFT. Central to the SFT algorithm
is the implementation of multiplication using a lookup
table of squares. However, due to a mathematical
simplification, the number of squarings required is
smaller than might be expected, each multiplication
essentially being reduced to a single ADD-and-SQUARE
macro-operation. Thus, even though most simple
microprocessors lack a built-in multiply instruction,
the slower alternative of software multiplication need
not be considered for DFT processing. The SFT algorithm
is extended with a Hann (sine-squared) data window
applied as a spectral convolution, which, due to further
simplification of the arithmetic, requires no additional
computation time. This modified form of the SFT has been
named the SFT-Hann algorithm. Good performance for
real-input narrowband spectral analysis makes the SFT
and SFT-Hann algorithms useful for a variety of low-end
signal processing applications. Versions of these
algorithms written for the Z80 microprocessor are
examined, and compared with several other discrete
Fourier transform programs. In order to verify the
methods used, as well as to make them more widely
accessible, several illustrative programs written in
BASIC are also presented.
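The table-of-squares idea can be sketched via the quarter-square identity (this is my illustration of the general technique; the thesis's actual table layout is not reproduced here):

```python
# Multiplication by lookup in a table of squares (quarter-square method).
# Table sized for 8-bit operands (an assumption for this sketch);
# floor(n^2/4) keeps every entry exact, since (a+b) and (a-b) always
# have the same parity.
SQ = [n * n // 4 for n in range(511)]   # indices 0..510 cover a+b for 8-bit a, b

def mul(a, b):
    """a*b = floor((a+b)^2/4) - floor((a-b)^2/4)."""
    return SQ[a + b] - SQ[abs(a - b)]

assert mul(7, 13) == 91
assert mul(255, 255) == 65025
```

On a processor without a MUL instruction, each product thus costs an add, a subtract, and two table lookups; the thesis reports a further simplification that reduces each multiplication to a single ADD-and-SQUARE macro-operation, which this sketch does not attempt to reproduce.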

Baker, A.B. and Roxburgh, A.J.,
"Computerised EEG
monitoring for carotid endarterectomy," Anaesthesia and Intensive Care,
14(1), pp. 32-36, Feb., 1986.
ABSTRACT:
A prospective study was undertaken in twenty patients
undergoing carotid endarterectomy, using computerised EEG monitoring in the form of a
density-modulated spectral array, spectral edge
frequency, and integrated EEG power, to monitor
cerebral ischaemia. This form of monitoring proved to be
easy to use and understand. Because ischaemic EEG events
longer than one minute were not necessarily followed by
postoperative deficits, the definition of significant
events that would cause ischaemia may need to be
modified.

Roxburgh, A.J., Baker, A.B., Bannister, P., and McLeod, C.,
"Aortic blood flow from intrathoracic impedance," Proc. Univ. Otago Med. Sch.,
63, pp. 73-74, 1985.
ABSTRACT:
Intrathoracic electrical impedance change may be caused
as much by aortic blood flow as by aortic movement,
contradicting a statement by Mitchell and Newbower (1979) that all of the impedance change is due
to movement alone. With two impedance analyzers,
however, the aortic movement component of the
intrathoracic impedance may be cancelled out, enabling a
more accurate measurement of the aortic blood flow. Dual
impedance data was analyzed for one cardiac cycle from a
patient following cardiac surgery, which, using a
simplified anatomical model, gave a stroke volume of 57
ml. Cardiac output measured simultaneously by thermal
dilution gave a stroke volume of 59 ml.

Baker, A.B., Roxburgh, A.J., and McLeod, C.,
"Intrathoracic impedance plethysmography and aortic blood
flow," Proc. Univ.
Otago Med. Sch., 62, pp. 69-70, 1984.
ABSTRACT:
Mitchell and Newbower (1979) produced a
theoretical model which shows that any change in the
intrathoracic electrical impedance is unlikely to be
correlated with stroke volume, due to the inability to
distinguish aortic movement from blood flow. Their model
did not take into account the increase in the electrical
conductivity of blood that occurs when blood flows,
which can be as high as 25%, as reported by Coulter
(1949), Visser (1981), and others. This study has
refined these models to generate an equation that
defines the relationship between blood velocity and
other components of the intrathoracic impedance. From
blood velocity, stroke volume may be derived.

Baker, A.B. and Roxburgh, A.J.,
"Intrathoracic impedance plethysmography and cardiac output," Proc. Univ. Otago Med. Sch., 62,
pp. 12-14, 1984.
ABSTRACT:
Oesophageal catheter probes provide an established
method of measuring variables such as ECG, temperature,
heart and breath sounds, diaphragmatic EMG, and
electrical impedance. One advantage of the
intrathoracic electrical impedance measured by the
oesophageal probe is that it gives a pulsatile component
of 5-10%, compared with 0.2% for the transthoracic
method. The aim of this study was to document the better
cardiac-related signal-to-noise ratio from the
intrathoracic method, as a first step toward
better impedance-based measurements of cardiac output.

Roxburgh, A.J. and Baker, A.B.,
"The use of disposable ECG
electrodes for intraoperative electroencephalography," Proc. Univ. Otago Med.
Sch., 61, pp. 51-53, 1983.
ABSTRACT:
Following on from the suggestion to use disposable
electrocardiograph (ECG) electrodes for intraoperative
electroencephalography (EEG), as a time-saving and
reliability measure, and from a recent theoretical prediction
that two widely spaced EEG electrodes attached to the
frontal and mastoid regions of the scalp will be
sufficiently sensitive to detect diffuse events, as well
as major focal events such as ischaemia, we decided to
compare the suitability of various electrodes for EEG by
measuring the electrical impedance of such a
widely spaced pair of electrodes. Low impedance is an
important factor in EEG measurements, but is not
typically specified for ECG electrodes. Standard gold
cup electrodes were compared with two varieties of
disposable Ag/AgCl ECG electrodes, and
stainless steel 27 gauge needles. In terms of electrical
impedance at EEG frequencies, one brand of disposable
ECG electrodes performed as well as the traditional gold
cup EEG electrodes.

Roxburgh, A.J.,
"Spectral edge frequency: a comparison of methods," Proc. Univ. Otago Med. Sch., 61, pp. 49-51, 1983.
ABSTRACT:
Owing to its value in the detection of cerebral
ischaemia, spectral edge frequency (SEF) stands
out as the single most useful univariate descriptor of
the electroencephalogram (EEG) power spectrum. Previous
work used the cumulative power method to define a
significant upper spectral edge; however, this correlates
poorly with visual estimates. Rampil et al (1980)
improved the detection of the spectral edge by using a
recursively-filtered, template-matching algorithm, but
found that it did not provide reliable detection for the
human EEG.

Roxburgh, A.J. and Baker, A.B.,
"A standard for display of EEG data using the density
modulated spectral array," Proc. Univ. Otago Med.
Sch., 60, pp. 81-83, 1982.
ABSTRACT:
The density-modulated spectral array (DSA) is one of the
more recently developed techniques for automated
processing and display of clinical EEG data. Compared
with earlier display methods, the DSA offers improved
legibility of spectral patterns, yet is more easily
integrated into existing patient monitoring systems.
This report presents a concise description of the DSA
system currently in use at Dunedin Hospital.

Roxburgh, A.J., Dobbinson, T.L., and Baker, A.B.,
"Monitoring ischaemic EEG events with the DSA display," Proc. Univ. Otago
Med. Sch., 60, pp. 46-47, 1982.
ABSTRACT:
The density-modulated spectral array (DSA) is a
relatively new technique for displaying the EEG power
spectrum in a compact pictorial form, which seems useful
for detecting cerebral ischaemic and hypoxic events.
This report describes preliminary trials using the DSA
at Dunedin Hospital.

Roxburgh, A.J. and Baker, A.B.,
"Linear grey-scale raster displays on a thermal strip-chart
recorder," Proc. Univ. Otago Med. Sch.,
60, pp. 16-18, 1982.
ABSTRACT:
The term "raster" derives from the scanning pattern used
in television. Density-modulated raster displays plotted
on a thermal strip-chart recorder have for several years
been used to display EEG spectral data (the
density-modulated spectral array, or DSA). The DSA shows
frequency and power (pen position and grey density,
respectively) versus time. The
raster grey density is varied by changing the pen
scanning speed, thereby varying the amount of heat
applied to the chart paper. An inherent nonlinearity in
the grey scale is compensated for with a simple
correction, derived in the paper, that is found to
obey a square law. Retrace-speed limitations restrict the
available contrast ratio to about 5:1, which causes some
loss of data at small values of the density variable;
however, the parabolic density map can be offset to
compensate.

Holmes, C.McK. and Roxburgh, A.J.,
"A computer simulation of gas concentrations in the circle
system," Proc. Computing in Anesthesia
Symposium, Santa Monica, CA, 1982.
ABSTRACT:
The complex interaction of factors governing the
concentration of gases in an anesthetic circle system
are not easily understood by medical students, interns
and residents. Even when the inhalational components of
an anesthetic are nitrous oxide and oxygen only, it is
not a simple two-component model, due to the presence
initially of air in the lungs and circuit. The
interacting factors are many; however, in the clinical
situation some of these factors cannot be varied, and
others can be altered only within safety limits. To this
end a computer simulation has been devised, in which all
of the variables may be changed at will, and the effects
observed by the student. The program, which uses the
nitrous oxide uptake rate found by Severinghaus, is
written in Applesoft BASIC. The user can initially set
the flows of the nitrous oxide and oxygen, the oxygen
consumption, and stop time. Further, the initial values
of circuit volume and nitrous oxide uptake may each be
halved or doubled. At the stop time the user may exit the
program or continue with the same or altered variables.
The results are displayed in numerical and graphical
form.
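For reference, the Severinghaus nitrous oxide uptake rate is commonly quoted as falling off roughly as 1000/√t ml/min (t in minutes, for a 70 kg adult); whether the program used this exact constant and form is my assumption, as the abstract does not say. A minimal sketch of the kind of calculation involved:

```python
import math

# Hedged sketch: Severinghaus-style N2O uptake, ~1000/sqrt(t) ml/min.
# The constant 1000 and this exact form are assumptions on my part,
# not details taken from the original Applesoft BASIC program.
def n2o_uptake_ml_per_min(t_min, scale=1000.0):
    return scale / math.sqrt(t_min)

# Cumulative uptake over the first 10 minutes by simple summation
dt = 0.1
total_ml = sum(n2o_uptake_ml_per_min(k * dt) * dt for k in range(1, 101))
```

A simulation built on such a rule lets the student see how uptake, fresh-gas flows, and circuit volume interact over time, which is the point the paper makes.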

Roxburgh, A.J. and Holmes, C.McK.,
"A computerized anesthesia record for the smaller hospital," Proc. Computing in Anesthesia
Symposium, Santa Monica, CA, 1982.
ABSTRACT:
Placeholder
(under construction).

Smith, N.T., Roxburgh, A.J., and Quinn, M.L.,
"Continual measurement of airway resistance; use of a
microprocessor-controlled ventilator," Anesthesiology,
53:s389, 1980.
ABSTRACT:
A microcomputer-controlled ventilator which can generate
virtually any type of waveform has been developed. To
allow the continual measurement of airway
resistance during ventilation, it was programmed to
superimpose high-frequency square waves upon the regular
flow pattern: a square wave, half sine wave, ramp,
or reverse ramp. We determined that the maximum
difference in the high-frequency amplitude between high
and low resistance was seen with a high frequency of 5
Hz. Normal changes in compliance did not change the
high-frequency amplitude.

Edwards, P.J., Hurst, R.B., Roxburgh, A.J., and Stanley,
G.R., Data acquisition and processing. Otago Wind Energy Resource Survey
Phase II. Report No. 2, New Zealand Energy Research and Development Committee,
April 1979. NZERDC P13, ISSN 0110-5388.
ABSTRACT:
This report describes methods of data acquisition,
processing and analysis used in implementing the NZ Wind
Energy Resource Survey in Otago. Field operation of
wind-run and wind-speed anemometers, electronic wind
speed integrators and wind speed recorders is described.
The recovery of field recorded data in computer
compatible form and its subsequent analysis to provide
wind energy parameters is also described. Examples of
these analyses are given. Computer program listings are
given in the internal version of this report, available
from the Department of Physics, University of Otago,
Dunedin, New Zealand.

Roxburgh, A.J., Edwards, P.J., and Hurst, R.B.,
"Acquisition and analysis of Otago wind energy data," Proc. N.Z.
Meteorological Service Symposium on Meteorology and Energy, Wellington, New
Zealand, Oct 11-12, 1977. Proc. New Zealand Meteorological Service, May 25,
1978.
ABSTRACT:
This paper describes the acquisition and analysis of
wind data by the University of Otago as part of the Wind
Energy Resource Survey of New Zealand. Field operation
of both wind-run and wind-speed anemometers by the Otago
University Physics Department is detailed, together with
calibration data. A wind speed recording system is
described with particular reference to the continuous
data format used. The format allows flexible readout in
computer compatible form, in analog and numeric printer
chart form, or allows direct analysis of the recovered
analog windspeed variable using special hardware.

Hurst, R.B., Edwards, P.J., and Roxburgh, A.J.,
"Characterisation of wind energy sites," Proc. N.Z. Meteorological Service
Symposium on Meteorology and Energy, Wellington, New Zealand, Oct 11-12, 1977.
Proc. New Zealand Meteorological Service, May 25, 1978,
pp. 57-68.
ABSTRACT:
The Otago University Physics Department, as part of its
involvement in a national survey of wind energy
resources, has logged a large quantity (approximately 10
loggeryears) of wind speed data on magnetic tape from a
selection of Otago sites. The blocks of data are
continuous and up to a time 30 days in length. The
recording format allows digitization with time
resolutions of 2 s chosen when the tape is read out.
Times of 64 seconds and 112 seconds have often been
used, to give convenient speed resolutions of 0.1 or 0.2
m/s (depending on the variety of logger). However, time
resolution down to a few seconds is attainable. Access
to computing facilities is available directly (PDP11) or
via punched paper tape. This paper describes some of the
analysis carried out to date on this data to extract
statistical information relevant to wind power
generation.

Edwards, P.J., Hurst, R.B., and Roxburgh, A.J.,
"Aerogenerator performance at representative Otago sites," Proc. N.Z.
Meteorological Service Symposium on Meteorology and Energy, Wellington, New
Zealand, Oct 11-12, 1977. Proc. New Zealand Meteorological Service, May 25,
1978, pp. 85-92.
ABSTRACT:
Electricity supply authorities in New Zealand have more
difficulty in meeting load demands in late winter and
early spring than at other times in the year. Thus, aerogeneration on a large scale will be of most
value if it can provide a reliable source during this
period. Of course, it is unrealistic to expect to be
able to provide a full-time base load from the area
covered by the Otago survey, measuring approximately 150
km by 100 km, but when similar studies become available
from other parts of New Zealand then the ability of wind
energy to provide winter base load can be assessed. Six
special aspects of winter winds in Otago are examined in
this report.

Edwards, P.J. and Roxburgh, A.J.,
"A low cost meteorological data logging system for remote
sites," Proc. World
Meteorological Organization TECIMO Conf., Hamburg, July 1977. University of Otago Physics
Department publication Astrophys 77/4.
SUMMARY:
This
paper describes the design of a low-cost, low-power magnetic
tape cassette data recorder, its use at remote sites, and
the associated data readout facilities. A conventional
magnetic cassette transport system with a low-power, slow-speed
DC motor is used. In the single data mode, clock pulses
derived from a crystal-controlled oscillator are recorded on
one track, and event pulses on the second track. A four-channel
head may be used to provide three data
channels with a time resolution of one second and a channel
bandwidth of 3 Hz, for a one-month recording period on a standard
C90 cassette. Power drain is 60 mW. The recorder has been
successfully used with solarimeters, anemometers, and
tipping-bucket rain gauges. Longer record duration is obtained
with proportionally reduced time resolution and frequency
response. The readout facilities include analog chart
recording, paper tape punching, and character printing as
well as direct access to a minicomputer.

Hurst R.B., Roxburgh A.J., and Edwards P.J.,
Computer
program for atmospheric turbidity determination, University of Otago Physics
Department publication Astrophys 77/5 (document produced in
part-fulfillment of the N.Z. Meteorological Service Turbidity Contract), 1977.
ABSTRACT:
It
has been proposed (Edwards, P.J. and Othman, M.,
Southern Stars, Journal of the Royal Astronomical
Society of New Zealand, 26:8, p.184, 1976)
that measurements be made of atmospheric stellar
extinction at selected astronomical observatories, for
the purpose of estimating atmospheric turbidity. This
report describes data reduction and a Burroughs 6700
computer programme developed to process the data from
these astronomical observations.
The astronomical observations consist of photoelectric
measurements at several wavelengths of light from a
known star (i.e., with known right ascension and
declination, and known spectral characteristics). These
measurements, made for a range of zenith angles (and
hence for a range of air-path lengths), allow
determination of the atmospheric extinction at the
wavelength in question. The principal processes
contributing to this extinction are Rayleigh scattering,
ozone absorption, and aerosol scattering. The extinction
due to Rayleigh scattering and ozone absorption alone
may be estimated for a model atmosphere. Such an
estimate is generally less than the extinction actually
measured, the difference being attributed to turbidity.
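The data reduction described amounts to a straight-line (Bouguer-law) fit of stellar magnitude against air mass, m(X) = m0 + kX, with extinction coefficient k. A minimal sketch with invented numbers (the actual Burroughs 6700 programme is of course more elaborate):

```python
import numpy as np

# Sketch of a Bouguer-law extinction fit; the data values are invented.
z_deg = np.array([20.0, 35.0, 50.0, 60.0, 70.0])  # zenith angles
X = 1.0 / np.cos(np.radians(z_deg))               # plane-parallel air mass
m = np.array([9.12, 9.18, 9.29, 9.42, 9.66])      # apparent magnitudes
k, m0 = np.polyfit(X, m, 1)   # slope k = extinction (mag per air mass)
```

Subtracting the Rayleigh-scattering and ozone contributions predicted for a model atmosphere from the fitted k leaves the residual attributed to turbidity, as the report describes.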

Cherry
N.J., Edwards P.J., and Roxburgh A.J., "Low-cost
instrumentation for a wind energy survey," Proc. 22nd
International Instrumentation Symposium, San Diego, May
25-27, 1976.
ABSTRACT:
An
observational programme for a wind energy survey is
being carried out in several areas of New Zealand. The
instrumentation required, excluding the anemometer
assembly, was developed locally. Wind-run, or mean wind
speed, is obtained by counting one pulse per revolution
of the anemometer on a modified pocket calculator. The
current drain is reduced to an average of less than 5 mA
by turning the display off when it is not being read.
This is a low cost system with the additional advantage
of being able to run hundreds of meters of cable to the
display in a location remote from the mast.
Mean wind speeds over averaging periods of an hour or
submultiples of an hour down to a minute or less are
recorded electronically on standard reel-to-reel or
cassette tape decks or recorders. Two systems are in
use. The first records a frequency proportional to the
wind speed on channel one and a clock pulse train on
channel two. Mean wind speeds over time intervals as
short as three to five seconds or as long as one month
can be retrieved. The second system uses a standard
cassette tape recorder to record data in an incremental
digital form, using 12-bit binary numbers recorded in
biphase audio tones. Averaging periods of 1, 2, 5, 15,
30, or 60 minutes may be selected. The density of data
on the tape is increased by recording the data in blocks
of 100 numbers out of a memory unit. All systems can be
powered from the mains or from 12 V dc batteries.

Roxburgh, A.J.,
The construction of apparatus producing
kilowatt nanosecond pulses at 337.1 nm for the study of organic laser dye
characteristics. Thesis, Post Grad. Dip. Sci., University of Otago, 1972 (54
pp.). (Short title: N2 Laser at 337.1 nm for the Study of Organic Laser Dyes.)
ABSTRACT:
This work concerns the modification of a superradiant
nitrogen laser built by Manson (1972). Project stages:
1) Redesign the laser discharge channel using
demountable glass components with integral metal shield
casing and cutoff waveguide for beam exit, to reduce RFI
in nearby equipment by 40 to 50 dB; 2) Optimize the
power output of the laser using a high-pressure (6 atm)
spark gap instead of the original atmospheric-pressure
one, together with a more powerful 60 kV, 120 W
adjustable power supply and an end-tapered 30 MΩ,
60 W charging resistor chain; 3) Verify power output
using a self-built microstrip-line PIN photodiode UV
detector with a 1 ns sampling oscilloscope, and a
commercial radiometer (peak powers of 1.4 kW at 25 Hz
repetition rates were obtained); 4) Attempt to pump a
quartz cell filled with 0.0001 M rhodamine 6G dye into
lasing.
