MIRR - Mary Immaculate Research Repository


    Limitations of the least squares estimators; a teaching perspective

    Citation

    O’Driscoll, D. and Ramirez, D.E. (2016). "Limitations of the Least Squares Estimators; A Teaching Perspective", Athens: ATINER'S Conference Paper Series, No: STA2016-2074.
    View/Open
Conference Paper (1.344 MB)
    Date
    2016
    Author
    O'Driscoll, Diarmuid
    Ramirez, Donald E.
    Peer Reviewed
    No
    Abstract
    The standard linear regression model can be written as Y = Xβ + ε, with X a full rank n × p matrix and L(ε) = N(0, σ²Iₙ). The least squares estimator is β̂ = (X′X)⁻¹X′Y, with variance-covariance matrix Cov(β̂) = σ²(X′X)⁻¹, where Var(εᵢ) = σ². The diagonal terms of the matrix Cov(β̂) are the variances of the least squares estimators β̂ᵢ, 0 ≤ i ≤ p−1, and the Gauss-Markov theorem states that β̂ is the best linear unbiased estimator. However, the OLS solution requires that (X′X)⁻¹ be computed accurately, and ill-conditioning can lead to very unstable solutions. Tikhonov, A.N. (1943) first introduced the idea of regularisation to solve ill-posed problems by introducing additional information which constrains (bounds) the solutions. Specifically, Hoerl, A.E. (1959) added a constraint to the least squares problem as follows: minimize ‖Y − Xβ‖² subject to the constraint ‖β‖² = r² for fixed r, and dubbed this procedure ridge regression. This paper gives a brief overview of ridge regression and examines the performance of three different types of ridge estimators: the ridge estimators of Hoerl, A.E. (1959), the surrogate estimators of Jensen, D.R. and Ramirez, D.E. (2008), and the raise estimators of Garcia, C.B., Garcia, J. and Soto, J. (2011).
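    The short Python/NumPy sketch below (not from the paper; the near-collinear design, the noise level and the penalty value k are made-up illustrative choices) contrasts the OLS estimator β̂ = (X′X)⁻¹X′Y with a ridge estimator in the familiar penalised form (X′X + kI)⁻¹X′Y, the Lagrangian counterpart of the constrained problem described in the abstract, on an ill-conditioned design.

    import numpy as np

    rng = np.random.default_rng(0)

    # Near-collinear design: the second column is almost a copy of the first,
    # so X'X is badly conditioned.
    n, p = 50, 2
    x1 = rng.normal(size=n)
    x2 = x1 + 1e-4 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    beta_true = np.array([1.0, 2.0])
    y = X @ beta_true + 0.1 * rng.normal(size=n)

    XtX = X.T @ X
    print("condition number of X'X:", np.linalg.cond(XtX))

    # OLS: beta_hat = (X'X)^(-1) X'y -- unbiased (Gauss-Markov) but very unstable here.
    beta_ols = np.linalg.solve(XtX, X.T @ y)

    # Ridge in penalised form: beta_hat(k) = (X'X + k*I)^(-1) X'y.
    # k = 1.0 is an arbitrary illustrative value, not a recommendation from the paper.
    k = 1.0
    beta_ridge = np.linalg.solve(XtX + k * np.eye(p), X.T @ y)

    print("OLS estimate:  ", beta_ols)
    print("ridge estimate:", beta_ridge)

    On a run like this the OLS components tend to blow up with opposite signs because of the near-collinearity, while the ridge estimate stays near the scale of the true coefficients; the price is the bias introduced by the penalty.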
    Keywords
    Limitations
    Least squares estimators
    Teaching perspective
    Language (ISO 639-3)
    eng
    Publisher
    Athens Institute for Education and Research
    License URI
    https://www.atiner.gr/papers/STA2016-2074.pdf
    URI
    http://hdl.handle.net/10395/2537
    ISSN
    2241-2891
    Collections
    • Mathematics and Computer Studies (Conference proceedings)
