
Multidimensional Gains for Stochastic Approximation


dc.contributor.author Saab, Samer S.
dc.contributor.author Shen, Dong
dc.date.accessioned 2019-07-24T09:51:18Z
dc.date.available 2019-07-24T09:51:18Z
dc.date.copyright 2019 en_US
dc.date.issued 2019-07-24
dc.identifier.issn 2162-237X en_US
dc.identifier.uri http://hdl.handle.net/10725/11135
dc.description.abstract This paper deals with an iterative Jacobian-based recursion technique for the root-finding problem of a vector-valued function whose evaluations are contaminated by noise. Instead of a scalar step size, we use an iterate-dependent matrix gain to effectively weigh the different elements of the noisy observations. The analytical development of the matrix gain is built on an iterate-dependent linear function perturbed by additive zero-mean white noise, where the dimension of the function is M ≥ 1 and the dimension of the unknown variable is N ≥ 1. Necessary and sufficient conditions pertaining to algorithm stability and convergence of the estimate error covariance matrix are presented for the case M ≥ N. Two algorithms, both assuming full knowledge of the Jacobian, are proposed: one for the case M ≥ N and a second for the opposite case M < N. Recursive algorithms are proposed for generating the optimal iterate-dependent matrix gain, aiming at per-iteration minimization of the mean square estimation error. We show that the proposed algorithm satisfies the presented conditions for stability and convergence of the covariance, and that the convergence rate of the estimation error covariance is inversely proportional to the number of iterations. For the case M < N, contraction of the error covariance is guaranteed; this underdetermined setting can be helpful in training neural networks. Numerical examples illustrate the performance of the proposed multidimensional gain on nonlinear functions. en_US
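For context, the sketch below illustrates the general form of a matrix-gain stochastic approximation update of the kind described in the abstract. It is not the authors' algorithm: the decaying pseudo-inverse gain, the example function, the noise level, and the iteration count are all illustrative assumptions, whereas the paper derives the optimal iterate-dependent gain.

```python
import numpy as np

# Minimal sketch of a matrix-gain stochastic approximation iteration for
# root finding of a vector-valued function f: R^N -> R^M (here M = 3 >= N = 2)
# whose evaluations are corrupted by zero-mean noise. The gain choice below
# (a decaying Jacobian pseudo-inverse) is an assumed, illustrative form,
# not the optimal iterate-dependent gain derived in the paper.

def f(x):
    # Hypothetical nonlinear function with root at x = (1, 2).
    return np.array([x[0] - 1.0,
                     x[1] - 2.0,
                     (x[0] - 1.0) * (x[1] - 2.0)])

def jacobian(x):
    # Full knowledge of the Jacobian is assumed, as in the paper.
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1] - 2.0, x[0] - 1.0]])

rng = np.random.default_rng(0)
noise_std = 0.1
x = np.array([4.0, -3.0])                            # initial estimate

for k in range(1, 2001):
    y = f(x) + noise_std * rng.standard_normal(3)    # noisy observation of f(x)
    A = np.linalg.pinv(jacobian(x)) / k              # iterate-dependent matrix gain (assumed form)
    x = x - A @ y                                    # multidimensional-gain update

print("estimate:", x)                                # should approach the root (1, 2)
```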
dc.language.iso en en_US
dc.title Multidimensional Gains for Stochastic Approximation en_US
dc.type Article en_US
dc.description.version Published en_US
dc.author.school SOE en_US
dc.author.idnumber 199690250 en_US
dc.author.department Electrical and Computer Engineering en_US
dc.description.embargo N/A en_US
dc.relation.journal IEEE Transactions on Neural Networks and Learning Systems en_US
dc.article.pages 1-14 en_US
dc.identifier.doi http://dx.doi.org/10.1109/TNNLS.2019.2920930 en_US
dc.identifier.ctation Saab, S. S., & Shen, D. (2019). Multidimensional gains for stochastic approximation. IEEE Transactions on Neural Networks and Learning Systems. en_US
dc.author.email ssaab@lau.edu.lb en_US
dc.identifier.tou http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php en_US
dc.identifier.url https://ieeexplore.ieee.org/abstract/document/8751995 en_US
dc.orcid.id https://orcid.org/0000-0003-0124-8457 en_US
dc.author.affiliation Lebanese American University en_US

