Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Exploiting the special structure of function models in surveying and mapping that are linear combinations of nonlinear functions, this paper proposes a separable nonlinear least-squares solution method based on the Moore-Penrose generalized inverse and three-dimensional (cubic) matrices. The method first applies the variable projection algorithm to eliminate the linear parameters of the separable nonlinear model, converting the original nonlinear optimization problem in two classes of parameters into a least-squares problem in the nonlinear parameters only. Then, the first derivative of the least-squares objective function is computed from the differential of the Moore-Penrose generalized inverse and cubic-matrix theory, and the Levenberg-Marquardt (LM) method of nonlinear optimization is used to obtain optimal estimates of the nonlinear parameters. Finally, optimal estimates of the linear parameters are obtained by ordinary least squares. In experiments on exponential-model fitting and airborne LiDAR full-waveform parameter estimation, compared with the traditional approach that does not separate the parameters, the proposed method shows low sensitivity to the initial values of the unknowns and avoids the ill-conditioning caused by the linear parameters during iteration; the algorithm is stable. It offers an approach to solving separable nonlinear least-squares problems in surveying and mapping and broadens the applications of the separable nonlinear least-squares method.
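The variable projection step described above can be sketched as follows. This is a toy two-exponential example with function names of my own; the paper differentiates the Moore-Penrose inverse analytically, whereas this sketch lets SciPy's LM solver difference the reduced residual numerically.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy separable model: y ≈ c1*exp(-a1*t) + c2*exp(-a2*t).
# The c's enter linearly, the a's nonlinearly through the basis.
t = np.linspace(0.0, 4.0, 50)
a_true = np.array([0.5, 2.0])
c_true = np.array([3.0, 1.5])
y = np.exp(-np.outer(t, a_true)) @ c_true

def basis(a):
    # Columns are the nonlinear basis functions evaluated at t.
    return np.exp(-np.outer(t, a))

def projected_residual(a):
    # Variable projection: for fixed a, the optimal linear parameters
    # are c(a) = Phi(a)^+ y (Moore-Penrose pseudoinverse), so the
    # residual depends on the nonlinear parameters a only.
    Phi = basis(a)
    c = np.linalg.pinv(Phi) @ y
    return Phi @ c - y

# Solve the reduced problem with a Levenberg-Marquardt solver,
# then recover the linear parameters by least squares.
sol = least_squares(projected_residual, x0=[0.1, 1.0], method='lm')
c_hat = np.linalg.pinv(basis(sol.x)) @ y
```

On this noiseless toy problem the reduced residual drives to zero and both parameter sets are recovered, even from a fairly crude starting value.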

2.
Array algebra forms the general base of fast transforms and multilinear algebra, making rigorous solutions of a large number (millions) of parameters computationally feasible. Loop inverses are operators solving the problem of general matrix inverses. Their derivation starts from the inconsistent linear equations by a parameter exchange X = A0 L0, where X is a set of unknown observables and A0 forms a basis of the so-called "problem space". The resulting full-rank design matrix of parameters L0 and its ℓ-inverse reveal properties speeding the computational least-squares solution expressed in observed values L. The loop inverses are found by back substitution expressing X^ in terms of L. If p = rank(A) ≤ n, this chain operator creates the pseudoinverse A^+. The idea of loop inverses and array algebra started in the late 60's from the further specialized case p = n = rank(A), where the loop inverse A0^(-1)(A A0^(-1))^ℓ reduces to the ℓ-inverse A^ℓ = (A^T A)^(-1) A^T. The physical interpretation of the design matrix A A0^(-1) as an interpolator associated with the parameters L0, and the consideration of its multidimensional version, has resulted in extended rules of matrix and tensor calculus and mathematical statistics called array algebra.

3.
Summary. Within the potential theory of the Poisson-Laplace equation, the boundary value problem of physical geodesy is classified as free and nonlinear. For solving this typical nonlinear boundary value problem, four different types of nonlinear integral equations corresponding to singular density distributions within single and double layers are presented. The characteristic problem of free boundaries, the problem of free surface integrals, is exactly solved by metric continuation. Even in the linear approximation of the fundamental relations of physical geodesy, the basic integral equations become nonlinear because of the special features of free surface integrals.

4.
Summary. The standard Mollweide projection of the sphere S_R^2, which is of pseudocylindrical, equiareal type, is generalized to the biaxial ellipsoid E_{A,B}^2. Within the class of pseudocylindrical mapping equations (1.8) of E_{A,B}^2 (semimajor axis A, semiminor axis B) it is shown, by solving the general eigenvalue problem (Tissot analysis), that only equiareal mappings, and no conformal mappings, exist. The mapping equations (2.1), which generalize those from S_R^2 to E_{A,B}^2, lead under the equiareal postulate to a generalized Kepler equation (2.21), which is solved by Newton iteration, for instance (Table 1). Two variants of the ellipsoidal Mollweide projection, namely (2.16), (2.17) versus (2.19), (2.20), are presented which guarantee that parallel circles (coordinate lines of constant ellipsoidal latitude) are mapped onto straight lines in the plane, while meridians (coordinate lines of constant ellipsoidal longitude) are mapped onto ellipses of variable axes. The theorem collects the basic results. Six computer-graphical examples illustrate the first pseudocylindrical map projection of E_{A,B}^2 of generalized Mollweide type.
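The ellipsoidal Kepler equation (2.21) itself is not reproduced in the abstract; as a hedged illustration of the Newton step it mentions, here is the familiar spherical Mollweide auxiliary-angle equation 2t + sin 2t = π sin φ solved the same way (function names are mine, and the paper's ellipsoidal equation differs from this spherical special case).

```python
import math

def mollweide_theta(phi, tol=1e-13, max_iter=50):
    """Solve the Kepler-type equation 2t + sin(2t) = pi*sin(phi)
    for the auxiliary angle t by Newton iteration (spherical case)."""
    rhs = math.pi * math.sin(phi)
    t = phi  # the latitude itself is a good starting value
    for _ in range(max_iter):
        f = 2.0 * t + math.sin(2.0 * t) - rhs
        if abs(f) < tol:       # guard: derivative vanishes at the poles
            break
        t -= f / (2.0 + 2.0 * math.cos(2.0 * t))
    return t

def mollweide_xy(lam, phi, R=1.0):
    """Spherical Mollweide mapping: parallels go to straight lines,
    meridians to ellipses."""
    t = mollweide_theta(phi)
    x = (2.0 * math.sqrt(2.0) / math.pi) * R * lam * math.cos(t)
    y = math.sqrt(2.0) * R * math.sin(t)
    return x, y
```

Convergence is quadratic away from the poles; near φ = ±90° the guard on the residual stops the iteration before the vanishing derivative causes trouble.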

5.
The boundary value problem in physical geodesy is nowadays mostly presented with the use of an advanced stochastic model by Krarup-Moritz. This model includes a primary Gauss-Markov model and an adjoining Wiener-Hopf model. Degenerations of the Wiener-Hopf section are found in the singular auto-covariance matrix of the residuals. The non-singular inverse of the auto-covariance matrix of the signal is proved to be a generalized inverse of the singular auto-covariance matrix of the residuals. The joint model is given a non-stochastic evaluation for a case with a spherical external surface (using a non-singular inverse). These findings will not prevent a successful application of the model, which has important merits, especially when using suitable a priori values for the stochastic parameters in the covariance functions. A method for quadratic unbiased estimation of a priori variances is presented in an introductory section. It is meant to be of value when using a solution of the boundary value problem with the collocation technique based on the classical Gauss-Markov solution (Bjerhammar 1963).

6.
Summary. Satellite gradiometry is one of the methods for improving our knowledge of the global Earth gravity field at high resolution: by means of micro-accelerometers on board a low-orbiting spacecraft, linear combinations of the gravity tensor components ∂²U/∂x_i∂x_j are measured in a satellite-fixed reference frame {x_i}. Based on this technique, a project named GRADIO is presently under study in France and could fly in 1990 at the earliest. After the scientific objectives of the experiment are reviewed, the measurement specifications are given as derived from a series of analytical studies. The platform and its technical characteristics are then defined: the critical feasibility problems (scale factors of the micro-accelerometers, spacecraft attitude control and restitution) are pointed out, together with some ideas for their solution which are under analysis and require further study.

7.
Data processing is the foundation and core of the "digital nation", "digital city", "digital mine" and similar scientific engineering projects now being vigorously promoted in many countries. The data involved are multi-source, multi-dimensional, multi-type, multi-temporal, and multi-precision, with nonlinear characteristics; the parameter estimation models for processing them are mostly complex nonlinear function models whose parameters include both non-random and random parameters. Such generalized nonlinear data processing should be carried out with the theory and methods of generalized nonlinear dynamic least squares. This paper proposes a new solution model and method that separates the problem and converts it into a general single-variable nonlinear least-squares problem. First, an optimal initial value close to the true value is obtained by linear approximation of the nonlinear fitting model; then the parameter estimates are obtained by the nonlinear least-squares solution method. The method simplifies the original high-dimensional equations, requires no second derivatives, greatly reduces the computational difficulty, and greatly reduces the number of iterations and the computational workload.

8.
The main goal of this paper is to show that the solution obtained by adjusting a free network via the inner adjustment constraint method is the minimum norm solution. The latter is a special case of the class of "minimum trace" solutions, where the trace of the variance-covariance matrix for the adjusted parameters is a minimum. The derivations are carried out in terms of pseudo-inverses, the various other forms of generalized inverses having been left out of consideration.

9.
Timely monitoring of the spatiotemporal variation of photosynthetic/non-photosynthetic vegetation cover in arid and semi-arid regions provides important information for guiding desertification-control engineering and for studying vegetation degradation mechanisms. Taking Nitraria shrubs, a typical vegetation type in Minqin, Gansu, as the study object, mixed spectra, endmember spectra, and abundance information were obtained through controlled ground spectral experiments, and linear and nonlinear spectral mixture models (including kernel-based nonlinear and bilinear mixture models) were compared for estimating photosynthetic and non-photosynthetic vegetation cover. The models were unmixed by fully constrained least squares to obtain the endmember abundances and their accuracies for each sample, and the best spectral mixture model for estimating photosynthetic and non-photosynthetic vegetation cover was determined from the root mean square error (RMSE) of the model decomposition together with the ground-validation accuracy; reference endmember abundances were obtained by classifying digital images with a neural network classification (NNC) algorithm. The results show that: (1) a four-endmember model that introduces a shadow endmember effectively improves the accuracy of spectral unmixing, and of photosynthetic and non-photosynthetic vegetation cover estimation, relative to the traditional three-endmember model (photosynthetic/non-photosynthetic vegetation and bare soil); (2) for Nitraria shrubs, multiple-scattering mixing effects among photosynthetic vegetation, non-photosynthetic vegetation, bare soil, and shadow exist but are not significant; the kernel-based nonlinear spectral mixture model with a nonlinearity parameter performed slightly worse than the linear spectral mixture model, so nonlinear spectral mixture models have no clear advantage over linear ones for estimating photosynthetic and non-photosynthetic vegetation cover of Nitraria shrubs; (3) the linear spectral mixture model with four endmembers (photosynthetic/non-photosynthetic vegetation, bare soil, and shadow) can accurately estimate photosynthetic and non-photosynthetic vegetation cover of Nitraria shrubs, with an RMSE of 0.1177 for photosynthetic and 0.0835 for non-photosynthetic vegetation cover.
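The fully constrained least-squares unmixing mentioned above can be sketched as follows. This is my own minimal illustration on synthetic spectra, not the paper's implementation: non-negativity comes from NNLS, and the sum-to-one constraint is enforced softly by a heavily weighted appended row, a common trick.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, x, weight=1e3):
    """Fully constrained least-squares unmixing (a sketch).
    E: (bands, endmembers) endmember spectra; x: (bands,) mixed pixel.
    Abundances are non-negative (NNLS) and approximately sum to one
    (heavily weighted row of ones appended to the system)."""
    bands, m = E.shape
    A = np.vstack([E, weight * np.ones((1, m))])
    b = np.concatenate([x, [weight]])
    a, _ = nnls(A, b)
    return a

# Synthetic check: mix three "endmembers" with known abundances.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 0.9, size=(20, 3))
a_true = np.array([0.5, 0.3, 0.2])
x = E @ a_true
a_hat = fcls_unmix(E, x)
```

On exact synthetic data the known abundances are recovered and the estimated fractions sum to one to within the soft-constraint tolerance.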

10.
11.
Several pre-analysis measures which help to expose the behavior of L1-norm minimization solutions are described. The pre-analysis measures are primarily based on familiar elements of the linear programming solution to L1-norm minimization, such as slack variables and the reduced-cost vector. By examining certain elements of the linear programming solution in a probabilistic light, it is possible to derive the cumulative distribution function (CDF) associated with univariate L1-norm residuals. Unlike traditional least squares (LS) residual CDFs, it is found that L1-norm residual CDFs fail to follow the normal distribution in general, and instead are characterized by both discrete and continuous (i.e. piecewise) segments. It is also found that an L1 equivalent to LS redundancy numbers exists and that these L1 equivalents are a byproduct of the univariate L1 residual CDF. Probing deeper into the linear programming solution, it is found that certain combinations of observations which are capable of tolerating large-magnitude gross errors can be predicted by comprehensively tabulating the signs of slack variables associated with the L1 residuals. The developed techniques are illustrated on a two-dimensional trilateration network. Received: 6 July 2001 / Accepted: 21 February 2002
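The linear-programming formulation underlying the L1 analysis can be sketched as follows; this minimal example (the function name and the outlier demo are my own, not the paper's trilateration network) shows the standard split-residual LP and L1's tolerance of a single gross error.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Minimise ||b - A x||_1 via linear programming (a sketch).
    Split the residual r = u - v with u, v >= 0 (the slack variables)
    and minimise sum(u + v) subject to A x + u - v = b; x is free."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method='highs')
    x = res.x[:n]
    r = res.x[n:n + m] - res.x[n + m:]
    return x, r

# A straight-line fit with one gross error: five of six points lie
# exactly on y = 1 + 2t, one observation is corrupted by +10.
A = np.vstack([np.ones(6), np.arange(6.0)]).T
b = A @ np.array([1.0, 2.0])
b[3] += 10.0
x_hat, r = l1_fit(A, b)
```

The L1 solution passes through the five consistent points and isolates the entire gross error in the single corrupted residual, which is exactly the robustness property the pre-analysis measures quantify.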

12.
A Direct Solution Method for the Generalized Nonlinear Dynamic Least-Squares Problem
Data are the foundation for constructing digitization projects such as the "Digital Earth" and "digital nation" [1]. Such data are multi-source, multi-dimensional, multi-type, multi-temporal, and multi-precision, with nonlinear characteristics [2], and their processing should use the new generalized nonlinear dynamic least-squares method [3][4]. The core of this data-processing method is the functional model for parameter estimation of the generalized nonlinear dynamic least-squares problem and its solution; so far little research has been done on this at home or abroad. Building on the authors' earlier iterative solution of parameter estimation for the generalized nonlinear dynamic least-squares functional model [5], this paper proposes a direct solution method: the problem is separated, the number of unknown parameters is halved, and the parameters are solved for directly. This greatly reduces the dimension of the problem and greatly reduces the computational difficulty and workload; to the authors' knowledge it is the first such method proposed at home or abroad, and it is faster, more effective, and more rigorous than the iterative method. It opens a new avenue for processing multi-source, multi-type, multi-temporal data and greatly broadens the applicability of the generalized nonlinear dynamic least-squares method.

13.
《测量评论》(Survey Review), 2013, 45(9)
Abstract

The following method will be found better and quicker than the usual logarithmic process in computing the co-ordinates of intersected points in minor triangulation and traverse work. Let A and B be two stations whose co-ordinates (x1, y1), (x2, y2) are known. Let P be an intersected point whose co-ordinates (x, y) we wish to determine. Let α and β be the observed angles at A and B respectively.
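The setup above can be sketched in code. This is my own illustration of the geometry, not the article's logarithm-free computing scheme: the rays from A and B are built by rotating the direction of AB by the observed angles and then intersected, assuming the convention that P lies to the left of the direction A → B.

```python
import math

def intersect(xa, ya, xb, yb, alpha, beta):
    """Coordinates of an intersected point P from two known stations.
    alpha, beta: angles (radians) observed at A and B between the line
    AB and the directions to P; P assumed left of the direction A->B."""
    dx, dy = xb - xa, yb - ya
    d = math.hypot(dx, dy)
    ux, uy = dx / d, dy / d
    # Ray at A: unit vector A->B rotated counter-clockwise by alpha.
    r1 = (ux * math.cos(alpha) - uy * math.sin(alpha),
          ux * math.sin(alpha) + uy * math.cos(alpha))
    # Ray at B: unit vector B->A rotated clockwise by beta.
    r2 = (-ux * math.cos(beta) - uy * math.sin(beta),
          ux * math.sin(beta) - uy * math.cos(beta))
    # Solve A + t*r1 = B + s*r2 for t by Cramer's rule.
    det = -r1[0] * r2[1] + r1[1] * r2[0]
    t = (-dx * r2[1] + dy * r2[0]) / det
    return xa + t * r1[0], ya + t * r1[1]
```

For stations A(0, 0) and B(10, 0) with α = β = 45°, the rays meet at (5, 5), the apex of the isosceles triangle, as expected.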

14.
Existing research on DEM vertical accuracy assessment uses mainly statistical methods, in particular variance and RMSE, which are both based on the error propagation theory in statistics. This article demonstrates that error propagation theory is not applicable because the critical assumption behind it cannot be satisfied. In fact, the non-random, non-normal, and non-stationary nature of DEM error makes it very challenging to apply statistical methods. This article presents approximation theory as a new methodology and illustrates its application to DEMs created by linear interpolation using contour lines as the source data. Applying approximation theory, a DEM's accuracy is determined by the largest error at any point (not samples) in the entire study area. The error at a point is bounded by max(|δ_node| + M2·h²/8), where |δ_node| is the error in the source data used to interpolate the point, M2 is the maximum norm of the second-order derivative, which can be interpreted as curvature, and h is the length of the line on which linear interpolation is conducted. The article explains how to compute each term and illustrates how this new methodology based on approximation theory effectively facilitates DEM accuracy assessment and quality control.
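The bound can be checked numerically. A small sketch of my own, assuming a 1-D profile f(x) = x² (so M2 = 2), for which the linear-interpolation bound M2·h²/8 is attained exactly at the segment midpoint:

```python
import numpy as np

def linear_interp_error_bound(delta_node, M2, h):
    """Pointwise error bound for linear interpolation:
    |error| <= |delta_node| + M2 * h**2 / 8, where delta_node is the
    source-data error, M2 bounds the second derivative (curvature),
    and h is the length of the interpolation segment."""
    return abs(delta_node) + M2 * h**2 / 8.0

# Sanity check on f(x) = x**2 with exact nodes (delta_node = 0):
h = 0.5
x = np.linspace(0.0, h, 1001)
f = x**2
lin = (f[-1] - f[0]) / h * x + f[0]      # chord from (0, f(0)) to (h, f(h))
max_err = np.max(np.abs(f - lin))        # largest interpolation error
bound = linear_interp_error_bound(0.0, 2.0, h)
```

For a quadratic the worst error sits at the midpoint and equals the bound, which is why halving h quarters the interpolation error.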

15.
白杨, 王盼, 赵鹏飞, 郭建忠, 王家耀. 遥感学报 (National Remote Sensing Bulletin), 2022, 26(5): 988-1001
Identifying the main factors controlling changes in local ozone formation sensitivity is a prerequisite for formulating effective ozone pollution control strategies. Using the satellite-observed OMI FNR indicator (the Ratio of the tropospheric columns of Formaldehyde to Nitrogen dioxide, HCHO/NO2), summer ozone formation sensitivity (OFS) in Henan Province was classified into VOC-limited, transitional (jointly controlled), and NOx-limited regimes. Based on the Geodetector method, the relationships of meteorological conditions, anthropogenic precursor emissions, and their interactions with OFS were quantified. The study reveals: (1) summer OFS in Henan is dominated by the transitional regime, within which ozone pollution is severe, second only to the VOC-limited regime. From 2005 to 2015, FNR fluctuated downward and OFS shifted toward the transitional regime, mainly driven by NOx emission reductions; after 2016, FNR increased and OFS tended toward the NOx-limited regime. (2) Anthropogenic emissions are the main driver of OFS change, explaining on average 40.5% of the FNR variation (q = 0.405). If emissions of CO, PM2.5, NOx, and non-methane volatile organic compounds (NMVOC) increase, FNR decreases, summer OFS in Henan shifts toward the VOC-limited regime, and sensitivity to NOx reductions declines. (3) Surface net solar radiation (SSR, q = 0.321) and total column water (TCW, q = 0.302) are the main meteorological drivers of summer OFS change in Henan. As SSR increases, FNR decreases, making ozone formation more sensitive to VOCs. The influence of TCW is more complex: when TCW < 40 kg/m², increasing TCW decreases FNR and ozone formation becomes more VOC-sensitive; when TCW > 40 kg/m², increasing TCW increases FNR and ozone formation becomes more NOx-sensitive. (4) Interactions between factors drive the spatial distribution of OFS more strongly than any single factor alone, with interactions between anthropogenic precursors and meteorological factors dominating. These results improve understanding of the photochemical processes of ozone formation and provide a basis for formulating sound emission-reduction measures.

16.
This paper first discusses the linearization algorithm, an iterative algorithm, and a nonlinear algorithm accounting for second-order terms for nonlinear least-squares collocation (fitting and prediction), and derives the corresponding formulas. It is proved that the theory of nonlinear collocation is an extension of linear collocation theory, and that linear collocation is a special case of nonlinear collocation.

17.
When standard boundary element methods (BEM) are used to solve the linearized vector Molodensky problem, we are confronted with two problems: (1) the absence of O(|x|^-2) terms in the decay condition is not taken into account, since the single-layer ansatz, which is commonly used as a representation of the disturbing potential, is of order O(|x|^-1) as x → ∞. This implies that the standard theory of Galerkin BEM is not applicable, since the injectivity of the integral operator fails. (2) The N×N stiffness matrix is dense, with N typically of the order 10^5. Without fast algorithms, which provide suitable approximations to the stiffness matrix by a sparse one with O(N(log N)^s), s ≥ 0, non-zero elements, high-resolution global gravity field recovery is not feasible. Solutions to both problems are proposed. (1) A proper variational formulation taking the decay condition into account is based on a closed subspace of co-dimension 3 of the space of square-integrable functions on the boundary surface. Instead of imposing the constraints directly on the boundary element trial space, they are incorporated into the variational formulation by penalization with a Lagrange multiplier. The conforming discretization yields an augmented linear system of equations of dimension (N+3)×(N+3). The penalty term guarantees the well-posedness of the problem and gives precise information about the incompatibility of the data. (2) Since the upper-left N×N submatrix of the augmented system is the stiffness matrix of the standard BEM, the approach allows all techniques for generating sparse approximations to the stiffness matrix, such as wavelets, fast multipole methods, panel clustering, etc., to be used without any modification. A combination of panel clustering and the fast multipole method is used to solve the augmented linear system of equations in O(N) operations.
The method is based on an approximation of the kernel function of the integral operator by a degenerate kernel in the far field, which is provided by a multipole expansion of the kernel function. Numerical experiments show that the fast algorithm is superior to the standard BEM algorithm in terms of CPU time by about three orders of magnitude for N = 65,538 unknowns; similar savings hold for the storage requirements. About 30 iterations are necessary to solve the linear system of equations using the generalized minimum residual method (GMRES). The number of iterations is almost independent of the number of unknowns, which indicates good conditioning of the system matrix. Received: 16 October 1999 / Accepted: 28 February 2001

18.
Gravity field estimation in geodesy, through linear(ized) least-squares algorithms, operates under the assumption of Gaussian statistics for the estimable part of preselected models. The causal nature of the gravity field is implicitly involved in its geodetic estimation and introduces the need to include prior model information, as in geophysical inverse problems. Within the geodetic concept of stochastic estimation, the prior information can be in linear form only, meaning that only data linearly depending on the estimates can be used effectively. The consequences of the inverse gravimetric problem in geodetic gravity field estimation are discussed in the context of the various approaches (in model and data spaces) which share the common goal of bringing the statistics of these two spaces into agreement. With a simple numerical example of free-air anomaly (FAA) prediction, it is shown that prior information affects the accuracy of estimates at least as much as the number of input data. Received: 25 April 1994; Accepted: 15 October 1996

19.
The three-dimensional (3-D) resection problem is usually solved by first obtaining the distances connecting the unknown point P{X, Y, Z} to the known points Pi{Xi, Yi, Zi}, i = 1, 2, 3, through the solution of the three nonlinear Grunert equations, and then using the obtained distances to determine the position {X, Y, Z} and the 3-D orientation parameters. Starting from the work of the German J. A. Grunert (1841), the Grunert equations have been solved in several substitutional steps, and the desire, as evidenced by several publications, has been to reduce this number of steps. Similarly, the 3-D ranging step for position determination, which follows the distance determination step, involves the solution of three nonlinear ranging ('Bogenschnitt') equations solved in several substitution steps. It is illustrated how the algebraic technique of Groebner bases solves explicitly both the nonlinear Grunert distance equations and the nonlinear 3-D ranging ('Bogenschnitt') equations in a single step, once the equations have been converted into algebraic (polynomial) form. In particular, the algebraic tool of the Groebner basis provides symbolic solutions to the problem of 3-D resection. The various forward and backward substitution steps inherent in the classical closed-form solutions of the problem are avoided. Similar to the Gauss elimination technique in linear systems of equations, the Groebner basis eliminates several variables in a multivariate system of nonlinear equations in such a manner that the end product normally consists of a univariate polynomial whose roots can be determined by existing programs, e.g. the roots command in Matlab.
Acknowledgments. The first author wishes to acknowledge the support of JSPS (Japan Society for the Promotion of Science) for the financial support that enabled the completion of the write-up of the paper at Kyoto University, Japan. The author is further grateful for the warm welcome and the good working atmosphere provided by his hosts, Professors S. Takemoto and Y. Fukuda of the Department of Geophysics, Graduate School of Science, Kyoto University, Japan.
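The elimination behaviour described above can be demonstrated with SymPy's `groebner`; this is an assumption of the sketch (the paper's actual Grunert system is not reproduced here), using a toy 2-D two-circle 'Bogenschnitt' analogue instead. Under lexicographic order the basis contains a univariate polynomial, mirroring the Gauss-elimination analogy.

```python
from sympy import symbols, groebner

# Toy 2-D analogue of the ranging step: intersect two distance circles
# of radius 5 centred at (0, 0) and (6, 0).
x, y = symbols('x y')
F = [x**2 + y**2 - 25,
     (x - 6)**2 + y**2 - 25]

# Lex order with x > y eliminates x, leaving a univariate polynomial
# in y whose roots give the two intersection points (3, ±4).
G = groebner(F, x, y, order='lex')
```

The reduced basis comes out as {x - 3, y² - 16}: x is pinned symbolically and the remaining univariate polynomial y² - 16 can be handed to any root finder, exactly the single-step structure the abstract describes.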

20.
A Generalized Nonlinear Dynamic Processing Model and Its Solution Method
Addressing the characteristics of the large-scale joint data processing problems encountered in constructing the "Digital Earth", "digital nation", "digital city", "digital mine" and similar scientific engineering projects that national leaders and scientists are strongly promoting — data that are multi-source, multi-dimensional, multi-type, multi-temporal, and multi-precision, with nonlinear characteristics — this paper establishes a generalized nonlinear dynamic joint data-processing model and the corresponding generalized nonlinear least-squares model. Given the large scale and high dimensionality of the model, a solution algorithm is constructed by combining the idea of the "variable rotation" (alternating variable) method of multivariate function optimization with the memoryless Newton method. The algorithm decomposes the large-scale optimization problem into two optimization problems of lower scale, reducing the problem size; the memoryless Newton method reduces the storage required, making the algorithm particularly suitable for large-scale problems.
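The block decomposition ("variable rotation" alternation) can be sketched on a toy problem of my own; here both parameter blocks enter linearly so each sweep is a cheap least-squares solve, and the memoryless Newton component of the paper's algorithm is not reproduced.

```python
import numpy as np

# Split the parameter vector into two blocks (p, q) and minimise
# ||y - A p - B q||^2 over each block in turn, holding the other
# fixed: two small lstsq solves per sweep instead of one large
# joint system.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 3))
B = rng.normal(size=(40, 2))
p_true = np.array([1.0, -2.0, 0.5])
q_true = np.array([3.0, 4.0])
y = A @ p_true + B @ q_true

p = np.zeros(3)
q = np.zeros(2)
for _ in range(100):
    # Minimise over p with q fixed, then over q with p fixed.
    p, *_ = np.linalg.lstsq(A, y - B @ q, rcond=None)
    q, *_ = np.linalg.lstsq(B, y - A @ p, rcond=None)
```

Because the joint objective is convex and [A B] has full column rank, the alternation converges to the unique joint least-squares solution; its speed depends on the principal angles between the column spaces of the two blocks.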
