GLMMs

Dear Arthur,

Could you please provide a reference on the methodology used for your
binomial GLMM? I've just read a paper by Engel et al. (1995) discussing
your Gilmour et al. (1985) procedure, but I'm not sure whether this is
what you use in ASREML. I can't find a reference to your exact procedure
in the manual, but I guess by now you are no longer using your early
approach?

I have also read the various answers regarding aspects of running GLMMs
(on-line and in the manual), and have a fairly general understanding of
what's going on (I think). However, I am a bit confused as to exactly what
information the deviance value is telling you (and why you calculate it
twice per iteration when estimating the variance components), and how this
is related to dispersion. This probably just reflects my appalling
understanding of GLMMs in general, but I would appreciate a layman's
description of what exactly is going on (and I can then read your reference
material at leisure until I understand it better :-)).
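
Just so you can see where I am coming from: for 0/1 data I have been
thinking of the deviance as the usual binomial deviance of the records
against the fitted probabilities, along the lines of the little sketch
below. The function and the toy numbers are purely mine for illustration;
I am not assuming this is literally what ASREML computes internally.

    import numpy as np

    def binary_deviance(y, p):
        # Deviance of 0/1 responses y against fitted probabilities p.
        # For binary data the saturated model fits every record exactly,
        # so the deviance is just -2 times the fitted log-likelihood.
        y = np.asarray(y, dtype=float)
        p = np.asarray(p, dtype=float)
        return -2.0 * np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    # toy data: 6 records and some fitted probabilities
    y = [1, 0, 1, 1, 0, 0]
    p = [0.7, 0.2, 0.6, 0.8, 0.3, 0.4]
    D = binary_deviance(y, p)
    print(D, D / (len(y) - 2))   # deviance, and deviance/DF assuming 2 fitted parameters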

For example, it is my understanding that when your variance heterogeneity
factor (deviance/DF) is approximately 1 (as occurs under an animal model)
there is no need to allow for overdispersion, because overdispersion cannot
arise when the cluster size is 1 (i.e. don't re-run using !disp). Further,
the deviance is supposed to represent the goodness of fit of replacing the
y values with fitted values. I would have thought this also equated with
maximising the information contained in the working variables for parameter
estimation, but I guess it doesn't account for the larger measurement error
associated with a small cluster size (i.e. an animal) compared with larger
cluster sizes (e.g. a sire)? Hence, even though the deviance is smallest and
disp is approximately 1 under an animal model, this in no way tells you that
the estimate of (for example) the additive variance is going to be the best
among alternative models. In fact, the additive variance under an animal
model is strongly biased downwards - not very helpful if you want to fit an
additive-maternal model. Further, this unaccounted-for additive variance can
be picked up by additional random effects which have inherently high
sampling correlations with the additive effects (e.g. maternal), resulting
in spurious maternal effects. It is also not possible to compare the
log-likelihoods of additive versus additive-maternal models (my simulations
often show the log-likelihood changing in the wrong direction when more
parameters are added). Is this something to do with not being able to use
the deviance as an appropriate measure of fit when the cluster size is 1
(although cluster size presumably varies for the different effects: i.e. 1
for animal effects but large for sire effects)?
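
To check that I am at least thinking about cluster size the right way, here
is a toy calculation I did: the same records treated as sire-level binomial
counts (larger clusters) versus as individual 0/1 observations (cluster size
1), with the heterogeneity factor taken as deviance/DF in both cases. The
grouping, the single fitted probability and the numbers are all made up by
me, just to illustrate the calculation, not to represent ASREML's actual
working variables.

    import numpy as np
    from scipy.special import xlogy   # xlogy(a, b) = a*log(b), returning 0 when a = 0

    def binomial_deviance(y, n, p):
        # Deviance for counts y out of n against fitted probabilities p.
        sat = xlogy(y, y / n) + xlogy(n - y, 1.0 - y / n)   # saturated-model terms
        fit = xlogy(y, p) + xlogy(n - y, 1.0 - p)           # fitted-model terms
        return 2.0 * np.sum(sat - fit)

    # toy data: 4 sires with 10 offspring each, fitted with one overall probability
    y = np.array([2.0, 8.0, 3.0, 7.0])      # successes per sire
    n = np.array([10.0, 10.0, 10.0, 10.0])  # cluster sizes
    p = y.sum() / n.sum()                   # overall mean (0.5)

    D = binomial_deviance(y, n, p)
    print(D / (len(y) - 1))      # about 3.7: extra-binomial variation between sires shows up

    # the same records written out as forty 0/1 observations (cluster size 1)
    y01 = np.concatenate([np.r_[np.ones(int(k)), np.zeros(int(m - k))] for k, m in zip(y, n)])
    D01 = binomial_deviance(y01, np.ones_like(y01), p)
    print(D01 / (len(y01) - 1))  # about 1.4 here; with a single fitted p this value is the
                                 # same however the 1's cluster within sires, so the
                                 # extra-binomial variation above is invisible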

Have I got the right understanding of what deviance, variance heterogeneity
and dispersion mean in the context of interpreting output from ASREML? Is
the reported deviance already a scaled deviance, or is your variance
heterogeneity (VH) value the scaled deviance (D/disp)? What if the implied
dispersion is incorrect? It seems to me that after re-running with !disp
(sire model), the error variance was altered but the VH value was pretty
much the same anyway (i.e. no improvement in fit - just rescaling?). Is this
always the case with !disp, and if so, why bother rescaling the weights if
the fit of the model is not improved?
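
That is, my picture of the rescaling is roughly this (made-up numbers, just
to show the arithmetic I have in mind, not your actual output):

    # my reading of the scaled deviance, with made-up numbers
    D    = 1250.0        # residual deviance from (say) a sire-model run
    df   = 1000          # residual degrees of freedom
    disp = D / df        # heterogeneity factor used as the dispersion (1.25)

    D_scaled = D / disp  # scaled deviance
    print(D_scaled / df) # = 1.0 by construction: the weights are rescaled,
                         # but the relative fit of the model is unchanged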

Also, regarding models where we want to estimate maternal effects (for
example), what are the best options for 0/1 data?

Anyway, no doubt I have just displayed how little I understand about GLMMs,
but hopefully others will also benefit from your reply :-).

Hope you are enjoying your visit overseas (or are you back now?)...

Cheers

Kim

Kim Bunter (M.Rur.Sc)
PhD Student
Animal Genetics and Breeding Unit
University of New England
Armidale, NSW, 2351
AUSTRALIA

Ph (ISD): +61-2-67733788
Fax (ISD): +61-2-67733266
email: kbunter@metz.une.edu.au
--
Asreml mailinglist archive: http://www.chiswick.anprod.csiro.au/lists/asreml