Re: Fundamental Parameter Approach versus Empirical

bergmann ( bergmann@Rcs1.urz.tu-dresden.de )
Fri, 26 Jun 1998 10:58:54 +0200 (MET DST)

At 03:42 PM 6/25/98 +0200, Armel wrote:

>In the more exact version of this method,
>the profiles should never be modelled, but experimentally measured
>(using two patterns, one from a defect-free sample and the other
>from the defective sample) and then deconvoluted. No modelling
>at all is the best approach...
>
>In this sense, claiming that the fundamental approach is better than
>the empirical one is nonsense, because neither is as good as measuring
>profile shapes. This third approach was also applied to the Rietveld
>method in the so-called learnt-profile methods used in XRS-82 or ARIT1 (...).

Armel,
we fully agree: a measured ideal profile is a much better starting point
for deconvolution than a simulated one. Therefore, BGMN also offers the
option to use experimental data (in the sense of learnt profiles). But:

>Of course, the problem with these learnt profiles is that they apply
>perfectly only at the angular positions where they were parametrized.
>Models come back into play to extrapolate them over the whole powder pattern.

The question is: What is (or can be) more precise for lab X-ray devices?
The extrapolation from about 20-30 degrees 2theta down to, say, 8 degrees,
or a fundamental parameters approach including all of its uncertainties?
A little thought experiment:
We would need a defect-free sample with about 15-25 intense, isolated
reflections from 5 to 140 degrees 2theta for copper radiation, in a 1 micron
powder. To include the effects of sample thickness/penetration depth in the
'ideal' profile, this reference sample should have the same linear
attenuation coefficient and the same packing density as all the samples to
be analyzed. If we fulfill all these conditions, the learnt-profile method
can serve as a universal substitute for all fundamental and empirical
approaches and may reach the same accuracy as classical profile analysis.
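[The extrapolation question above can be made concrete with one common width parametrization, the Caglioti relation FWHM^2 = U tan^2(theta) + V tan(theta) + W: fit it to the learnt widths, then evaluate it outside their angular range. The numbers below are synthetic stand-ins, not values from any real instrument.]

```python
import numpy as np

# Synthetic 'learnt' peak widths, generated here from a Caglioti law
# FWHM^2 = U tan^2(theta) + V tan(theta) + W (illustrative values only)
U_true, V_true, W_true = 0.004, -0.002, 0.006
two_theta = np.array([20., 30., 45., 60., 80., 100., 120., 140.])
t = np.tan(np.radians(two_theta / 2))
fwhm = np.sqrt(U_true * t**2 + V_true * t + W_true)

# Fit the model to the learnt widths (linear least squares in U, V, W)
A = np.column_stack([t**2, t, np.ones_like(t)])
U, V, W = np.linalg.lstsq(A, fwhm**2, rcond=None)[0]

def fwhm_model(tt):
    """Extrapolate the peak width to any 2theta, e.g. below 20 degrees."""
    tq = np.tan(np.radians(tt / 2))
    return np.sqrt(U * tq**2 + V * tq + W)

width_at_8 = fwhm_model(8.0)  # outside the learnt range
```

[Whether such an extrapolated width is more or less trustworthy at 8 degrees than a fundamental-parameters prediction is exactly the open question.]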

>
>Fundamental parameters and empirical profiles are both approximate
>approaches. The BGMN program uses another terminology, which
>is "Exact Peak Shape Model". Sorry, I don't like it anymore,
>because a model is by definition not exact.
>
You are right. The mathematical theory of optimum experimental design
includes the notion of the "true model". Any model has some degrees
of freedom, mostly called "parameters". The true model fulfills
the following statement: there exists one set of parameters
for which the model is identical to the expectation value of the
(error-prone) measured values at all possible measuring points. Of course,
the true set of parameters will remain unknown forever, because of the
random errors of the measured values. The question is: do we favour
parameters from first principles (e.g. geometric values), which may lead
to systematic errors, or do we use parameters fitted to some reference
measurement, which may suffer from random errors and correlation?
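[A tiny Monte Carlo sketch of that last point, with a straight line standing in for any parametrized model: every noisy realization of the measurement yields a slightly different fitted parameter set, and only the expectation of the estimates approaches the (in practice unknowable) true parameters. All numbers are invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

# The 'true model': a straight line with true parameters that a real
# experimenter would never know exactly.
a_true, b_true = 2.0, 1.0
x = np.linspace(0.0, 10.0, 50)
A = np.column_stack([x, np.ones_like(x)])

# Each noisy measurement yields a different fitted parameter set
estimates = []
for _ in range(200):
    y = a_true * x + b_true + rng.normal(0.0, 0.5, x.size)
    estimates.append(np.linalg.lstsq(A, y, rcond=None)[0])
estimates = np.array(estimates)

mean_ab = estimates.mean(axis=0)  # approaches (a_true, b_true)
spread = estimates.std(axis=0)    # never zero: the true set stays hidden
```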

Reinhard Kleeberg & Jörg Bergmann