I'm not sure whether your issue is optimal matching or propensity score
matching. However, propensity score matching is designed to achieve
balance only in expectation, and only if the right assumptions hold. It's
not designed to reduce imbalance in-sample, and usually does a pretty bad job
at it. See this paper <http://j.mp/jCpWmk>, for example.
Gary
--
*Gary King* - Albert J. Weatherhead III University Professor - Director,
IQSS - Harvard University
GKing.Harvard.edu <http://gking.harvard.edu/> - King(a)Harvard.edu -
@kinggary<http://twitter.com/kinggary>- 617-500-7570 - Asst 495-9271 -
Fax 812-8581
On Wed, Feb 1, 2012 at 7:03 AM, Shane Phillips <phillips.shane(a)gmail.com> wrote:
Good morning!
I used MatchIt to run a simulation comparing three different propensity
score matching techniques: 1-to-1 nearest neighbor including exact matching
on two dichotomous variables, 1-to-1 nearest neighbor with a 0.1 SD
caliper, and 1-to-1 optimal matching. After conducting 1000 runs of
1600 cases each (134 treated cases and 1466 possible control cases),
optimal showed the lowest average standardized mean difference, but there
was MUCH more variability in the standardized mean difference values than
in the other two methods. How can I explain this? All of the methods used
the same data. There was not much competition for controls. The nearest
neighbor methods used the default order settings. Please help!
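For reference, the balance statistic compared across runs above can be computed as follows. This is a minimal sketch in Python, not MatchIt's exact implementation (MatchIt's summary() reports these values directly, and conventions for the denominator vary between treated-group and pooled standard deviations):

```python
import math

def standardized_mean_difference(treated, control):
    """Standardized mean difference for one covariate:
    (treated mean - control mean) / treated-group SD.
    This mirrors the usual balance diagnostic; MatchIt's summary()
    computes a similar quantity for each covariate after matching."""
    mean_t = sum(treated) / len(treated)
    mean_c = sum(control) / len(control)
    # Sample variance of the treated group (n - 1 denominator).
    var_t = sum((x - mean_t) ** 2 for x in treated) / (len(treated) - 1)
    return (mean_t - mean_c) / math.sqrt(var_t)
```

Averaging this statistic over many simulated runs gives the mean SMD reported above; its spread across runs is the variability in question.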
Thanks,
Shane Phillips
--
MatchIt mailing list served by HUIT
List Address: matchit(a)lists.gking.harvard.edu
Subscribe/Unsubscribe:
http://lists.gking.harvard.edu/mailman/listinfo/ei
MatchIt Software and Documentation:
http://gking.harvard.edu/matchit/
Browse/Search List Archive:
http://lists.gking.harvard.edu/mailman/private/matchit/