Unfortunately, no. If I set it up this way I get the same standard
error.
Eric McGhee | Research Fellow | PPIC | 415-291-4439
Any opinions expressed in this message are those of the author alone and
do not necessarily reflect any position of the Public Policy Institute
of California.
-----Original Message-----
From: Kosuke Imai [mailto:kimai@Princeton.Edu]
Sent: Tuesday, May 25, 2010 11:42 AM
To: Eric McGhee
Cc: zelig@lists.gking.harvard.edu
Subject: RE: [zelig] simple question
Maybe the problem is that you are creating the interaction terms
separately. In R, you should do the following:
y ~ x1 + x2 + x1:x2
if you want to include an interaction term in addition to the main
terms.
setx() assumes such an input.
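For instance, a Zelig logit workflow with the interaction specified inside the formula might look like this (a minimal sketch assuming Zelig 3.x; `mydata`, `y`, `x1`, and `x2` are placeholders):

```r
# Minimal sketch, assuming Zelig 3.x; all names here are placeholders.
library(Zelig)

# Put the interaction in the formula itself so setx() can see it:
z.out <- zelig(y ~ x1 + x2 + x1:x2, model = "logit", data = mydata)

# setx() then computes the x1:x2 term automatically from the values you set;
# covariates you don't mention default to their means:
x.out <- setx(z.out, x1 = 1)
s.out <- sim(z.out, x = x.out)
summary(s.out)
```

Equivalently, `y ~ x1 * x2` expands to the same three terms.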
Kosuke
--
Department of Politics
Princeton University
http://imai.princeton.edu
On Tue, 25 May 2010, Eric McGhee wrote:
I'm taking this off-list, because I have a feeling my questions are not
of general interest. However, I hope you can help me through a few more
questions.
The standard error of the estimate I described below seems enormously
high to me. Taken literally, it suggests that a dichotomous variable
that once had a standard error of 0.013 (since the mean of the variable
is 0.504 and the number of cases in the data set is 1518) suddenly has a
standard error of 0.1446 when translated into probabilities through a
logit model. A margin of error of 0.1446*1.96=0.283 makes statistical
significance basically impossible. Even apparently enormous shifts in
the mean are reduced to mush.
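[The 0.013 figure is just the binomial standard error of a sample proportion, which can be checked directly:]

```r
# Standard error of a proportion: sqrt(p * (1 - p) / n)
p <- 0.504
n <- 1518
se <- sqrt(p * (1 - p) / n)
se  # approximately 0.0128
```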
The irony is that if I set all vars to their sample means (a possibly
unrealistic extrapolation), the problem goes away. I get a very
manageable standard error. So a prediction I'm less certain about looks
more solid.
Some background: I'm trying to replicate a procedure in the literature
that simulates "full information" by first interacting all the
independent variables in the model with a variable indicating how well
informed a respondent is, and then generating predicted values with all
respondents set equal to the fully-informed condition. Even though this
literature has been careful to generate bootstrapped errors and the
like, I've never seen them produce a standard error this large.
I feel like I must be missing something. Isn't there any way to
generate conditional predictions without losing all precision in the
estimates?
Thanks again for any help you can provide.
Best,
Eric
-----Original Message-----
From: Kosuke Imai [mailto:kimai@Princeton.Edu]
Sent: Tuesday, May 25, 2010 10:52 AM
To: Eric McGhee
Cc: zelig@lists.gking.harvard.edu
Subject: RE: [zelig] simple question
"sd" is the standard deviation of posterior distribution, which is
equivalent to standard error asymptotically. So, you can interpret it
as
standard error.
Kosuke
-
Zelig Mailing List, served by Harvard-MIT Data Center
Send messages: zelig@lists.gking.harvard.edu
[un]subscribe Options: http://lists.gking.harvard.edu/?info=zelig
Zelig program information:
http://gking.harvard.edu/zelig/