Quantile Regression Methodology

Quantile regression is based on the minimization of weighted absolute deviations (also known as the L_1 method) to estimate conditional quantile (percentile) functions (Koenker and Bassett, 1978; Koenker and Hallock, 2001). For the median (quantile = 0.5), symmetric weights are used, and for all other quantiles (e.g., 0.1, 0.2, ..., 0.9) asymmetric weights are employed. In contrast, classical OLS regression (also known as the L_2 method) estimates conditional mean functions. Unlike OLS, quantile regression is not limited to explaining the mean of the dependent variable. It can be employed to explain the determinants of the dependent variable at any point of its distribution. For hedonic price functions, quantile regression makes it possible to examine statistically the extent to which housing characteristics are valued differently across the distribution of housing prices.
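As a purely illustrative sketch (simulated data and made-up variable names, not the data or specification used in this paper), the following snippet contrasts an OLS fit with quantile regression fits at several quantiles of a hedonic-style price equation:

```python
# Hypothetical example: compare OLS (conditional mean) with quantile
# regression (conditional quantiles) for a simulated hedonic price equation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
sqft = rng.uniform(800, 4_000, n)        # hypothetical living area
age = rng.uniform(0, 60, n)              # hypothetical house age
# Heteroskedastic errors, so the characteristic is valued differently
# at different points of the price distribution.
price = 40_000 + 90 * sqft - 500 * age + sqft * rng.normal(0, 15, n)

X = sm.add_constant(np.column_stack([sqft, age]))

print("OLS:", sm.OLS(price, X).fit().params.round(2))
for q in (0.1, 0.5, 0.9):
    print(f"q={q}:", sm.QuantReg(price, X).fit(q=q).params.round(2))
```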
One may argue that the same goal may be accomplished by segmenting the dependent variable, such as house price, into subsets according to its unconditional distribution and then applying OLS on the subsets, as done, for example, in Newsome and Zietz (1992). However, as clearly argued by Heckman (1979), this "truncation of the dependent variable" may create biased parameter estimates and should be avoided. Since quantile regression employs the full data set, a sample selection problem does not arise.
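The selection problem can be seen in a small simulation (hypothetical data, unrelated to this study): OLS estimated on a subset selected by the level of the dependent variable attenuates the slope, whereas median regression on the full sample recovers it.

```python
# Hypothetical simulation: truncating on the dependent variable biases OLS,
# while quantile (median) regression on the full sample recovers the slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)        # true slope = 2
X = sm.add_constant(x)

low = y < np.quantile(y, 0.5)            # keep only the "cheap" half of y
print("OLS, lower half of y:", sm.OLS(y[low], X[low]).fit().params[1].round(2))
print("Median regression, full sample:",
      sm.QuantReg(y, X).fit(q=0.5).params[1].round(2))
```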
Quantile regression generalizes the concept of an unconditional quantile to a quantile that is conditioned on one or more covariates. Least squares minimizes the sum of the squared residuals,

$$
\min_{\{b_j\}_{j=0}^{k}} \; \sum_i \left( y_i - \sum_{j=0}^{k} b_j x_{j,i} \right)^{2} ,
$$
where $y_i$ is the dependent variable at observation $i$, $x_{j,i}$ the $j$th regressor variable at observation $i$, and $b_j$ an estimate of the model's $j$th regression coefficient. By contrast, quantile regression minimizes a weighted sum of the absolute deviations,

$$
\min_{\{b_j\}_{j=0}^{k}} \; \sum_i h_i \left| y_i - \sum_{j=0}^{k} b_j x_{j,i} \right| ,
$$
where the weight $h_i$ is defined as $h_i = 2q$ if the residual for the $i$th observation is strictly positive, or as $h_i = 2 - 2q$ if the residual for the $i$th observation is negative or zero. The variable $q$ ($0 < q < 1$) is the quantile to be estimated or predicted.
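To make the objective concrete, the sketch below (simulated data; the factor of 2 in $h_i$ does not affect the minimizer) minimizes the weighted sum of absolute deviations numerically and compares the result with the quantile regression routine in statsmodels; the two agree up to the optimizer's tolerance.

```python
# Minimize sum_i h_i |y_i - x_i'b| with h_i = 2q (positive residual) or
# h_i = 2 - 2q (residual <= 0), and compare with statsmodels' QuantReg.
import numpy as np
from scipy.optimize import minimize
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_t(3, n)
X = np.column_stack([np.ones(n), x])

def wad(b, q):
    resid = y - X @ b
    h = np.where(resid > 0, 2 * q, 2 - 2 * q)   # weights as defined in the text
    return np.sum(h * np.abs(resid))

q = 0.9
direct = minimize(wad, x0=np.zeros(2), args=(q,), method="Nelder-Mead")
print("direct minimization:", direct.x.round(3))
print("QuantReg           :", sm.QuantReg(y, X).fit(q=q).params.round(3))
```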
The standard errors of the coefficient estimates are obtained by bootstrapping, as suggested by Gould (1992, 1997). They are significantly less sensitive to heteroskedasticity than the standard error estimates based on the method suggested by Rogers (1993).³

³ The quantile regressions employ the "sqreg" command in Stata with the seed set to 1001.

Quantile regression analyzes the similarity or dissimilarity of regression coefficients at different points of the distribution of the dependent variable, which is sales price in our case. It does not consider spatial autocorrelation that may be present in the data. Because similarly priced houses are unlikely to all be clustered geographically, one cannot expect that quantile regression will remove the need to account for spatial autocorrelation.

In this paper, spatial autocorrelation is incorporated into the quantile regression framework through the addition of a spatial lag variable. The spatial lag variable is defined as Wy, where W is a spatial weight matrix of size T×T, T is the number of observations, and y is the dependent variable vector of size T×1. Any spatial weight matrix can be employed, for example, one based on the ith nearest neighbor method, contiguity, or some other scheme. In the present application, a contiguity matrix is used.⁴
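A minimal sketch of this construction, assuming a contiguity (neighbor) list is already available for the T observations; the adjacency below is invented for illustration, and W is row-standardized, which is a common but not the only convention. The resulting Wy can then enter the regression as an additional regressor.

```python
# Build a row-standardized contiguity matrix W from a neighbor list and
# form the spatial lag Wy (here: the average price of contiguous houses).
import numpy as np

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}  # hypothetical
T = len(neighbors)

W = np.zeros((T, T))
for i, nbrs in neighbors.items():
    W[i, nbrs] = 1.0
W = W / W.sum(axis=1, keepdims=True)     # row-standardize

y = np.array([210_000.0, 195_000.0, 250_000.0, 320_000.0, 305_000.0])  # prices, T x 1
Wy = W @ y                               # spatial lag variable
print(Wy)
```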
Adding a spatial lag to an OLS regression is well known to cause inference problems owing to the endogeneity of the spatial lag (Anselin, 2001). This is no different for quantile regression than for OLS. We follow the approach suggested by Kim and Muller (2004) to deal with this endogeneity problem in quantile regression. As instruments we employ the regressors and their spatial lags.⁵ However, instead of using a density function estimator for the derivation of the standard errors, we follow the well-established route of bootstrapping the standard errors (Greene, 2000, pp. 400-401).⁶

4. Data and Estimation Results

This study uses multiple listing service (MLS) data from the Orem/Provo, Utah