Asked by: Dexter S  Asked: 9/18/2020  Last edited by: Dexter S  Updated: 9/19/2020  Views: 235
Transfer of approximation from double precision to single precision
Q:
I want to convert a double-precision approximation of a given function into a single-precision C implementation (the target device provides only a single-precision ALU).
Generating a high-accuracy approximation (e.g., max error 0.1e-12) in double precision is not too complicated. I have used Maple's minimax function, but I have also found some example implementations that use double precision.
However, as soon as I convert such an approximation into a single-precision routine, I face a loss of accuracy when I simply cast the coefficients to float. My goal is an approximation (in single precision) that is accurate to about +/-5 ulp. Simply casting the coefficients to float does not seem to do the job. I have learned about splitting constants such as pi/2 into a rounded part and an error part, and I suspect there is some trick for transferring the coefficients (the core computation of such an approximation is usually a polynomial, and I want to focus on polynomials in this question) that I do not know yet.
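The kind of constant splitting I mean looks roughly like this (the literals are just the usual high/low split of pi/2 for float and are meant purely as an illustration):

```c
/* Sketch of the split-constant idea: represent pi/2 as its correctly
 * rounded float value plus the rounding error. Literals are illustrative. */
static const float PIO2_HI =  1.57079637e+00f;  /* pi/2 rounded to float */
static const float PIO2_LO = -4.37113900e-08f;  /* pi/2 - PIO2_HI        */

/* Compute pi/2 - x with less error than the naive single-precision
 * expression (float)(pi/2) - x. */
float half_pi_minus(float x)
{
    return (PIO2_HI - x) + PIO2_LO;
}
```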
I would appreciate any hints or references on how such a transfer is meant to be implemented. I have studied some papers on float precision, but have not made much progress over the past two weeks.
Thanks in advance!
A:
The usual way to generate polynomial minimax approximations is the Remez exchange algorithm, published by the Russian mathematician Evgeny Remez in 1934. It is a numerical procedure that typically involves ill-conditioned systems of equations, so it is usually implemented with the help of an arbitrary-precision library. For example, in the implementation of the Remez algorithm that I use, I configured the library for 1024-bit precision.
For reasonably well-behaved functions, the various variants of the Remez algorithm can find approximations that come very close to the mathematical minimax polynomial. The problem, as noted in the question, is what happens when the coefficients of the generated polynomial are moved into finite-precision floating-point computation. One often finds that the minimax property of the approximation is impaired, sometimes severely. Two sources of error are at work. First, the generated coefficients cannot be represented exactly in the finite-precision floating-point format. Second, the evaluation of the polynomial uses finite-precision arithmetic rather than mathematical operations with infinite precision.
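To make these two effects concrete, here is what a naive transfer looks like in code (the coefficients and the target function are placeholders, not an actual minimax fit): every cast rounds a coefficient to the nearest float, and every operation of the Horner evaluation rounds again.

```c
/* Sketch of a naive double-to-float transfer (placeholder coefficients,
 * NOT an actual minimax fit). Both error sources are visible here:
 *  (1) each (float) cast rounds a coefficient to the nearest binary32 value,
 *  (2) each operation in the Horner evaluation rounds its result. */
#include <math.h>

/* coefficients of a hypothetical double-precision approximation
 * p(x) = x * (C0 + C1*x^2 + C2*x^4 + C3*x^6) ~= sin(x) near zero */
static const double C0 =  1.0;
static const double C1 = -1.6666666666666666e-01;
static const double C2 =  8.3333333333333332e-03;
static const double C3 = -1.9841269841269841e-04;

float sinf_naive(float x)
{
    float s = x * x;
    float r = (float)C3;            /* coefficient rounded to float   */
    r = fmaf(r, s, (float)C2);      /* one rounding per Horner step   */
    r = fmaf(r, s, (float)C1);
    r = fmaf(r, s, (float)C0);
    return r * x;
}
```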
The first problem is the easier one to address. As one can see from some quick experiments, simply rounding the coefficients to the finite-precision format doesn't accomplish the desired near-minimax result. By using a finite-precision format, we basically transform from an N-dimensional continuous space to an N-dimensional discrete lattice, and to do this properly, we need to find the closest lattice points. This is a solvable but hard problem, which is usually made easier through the use of heuristics (a toy sketch of such a rounding search follows the references below). Relevant literature:
N. Brisebarre, J.-M. Muller, and A. Tisserand, "Computing machine-efficient polynomial approximations". ACM Transactions on Mathematical Software, Vol. 32, No. 2, June 2006, pp. 236-256. (online)
Nicolas Brisebarre and Sylvain Chevillard, "Efficient polynomial L∞-approximations", In 18th IEEE Symposium on Computer Arithmetic, June 2007, pp. 169-176 (online)
Florent de Dinechin and Christoph Lauter, "Optimizing polynomials for floating-point implementation", ArXiv preprint 2008 (online)
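To illustrate the lattice idea in its simplest form (a toy sketch, not the algorithms from the papers above): for each coefficient, consider its two neighboring float values and exhaustively pick the combination that minimizes the maximum error on a sample grid. The target function, degree, and sampling density below are placeholders.

```c
/* Toy "closest lattice point" search: for each double coefficient, consider
 * the two adjacent float values and exhaustively pick the combination that
 * minimizes the maximum error on a sample grid. Only feasible for small
 * degrees, since it tries 2^N combinations. Evaluation is done in double,
 * so this addresses only the coefficient-representation error. */
#include <math.h>
#include <float.h>

#define N        4      /* number of coefficients (illustrative)  */
#define SAMPLES  1000   /* grid resolution for the error estimate */

static double ref(double x) { return exp(x); }   /* hypothetical target function */

static double max_err(const float c[], double lo, double hi)
{
    double worst = 0.0;
    for (int i = 0; i <= SAMPLES; i++) {
        double x = lo + (hi - lo) * i / SAMPLES;
        double p = c[N - 1];
        for (int k = N - 2; k >= 0; k--) p = p * x + c[k];  /* Horner in double */
        double e = fabs(p - ref(x));
        if (e > worst) worst = e;
    }
    return worst;
}

void round_coeffs(const double C[], float best[], double lo, double hi)
{
    float cand[N][2];
    for (int i = 0; i < N; i++) {
        cand[i][0] = (float)C[i];                  /* nearest float            */
        cand[i][1] = nextafterf(cand[i][0],        /* float on the other side  */
                                (double)cand[i][0] < C[i] ? FLT_MAX : -FLT_MAX);
    }
    double best_err = DBL_MAX;
    for (unsigned m = 0; m < (1u << N); m++) {     /* all 2^N rounding choices */
        float c[N];
        for (int i = 0; i < N; i++) c[i] = cand[i][(m >> i) & 1];
        double e = max_err(c, lo, hi);
        if (e < best_err) {
            best_err = e;
            for (int i = 0; i < N; i++) best[i] = c[i];
        }
    }
}
```

The heuristics in the cited papers exist precisely because this exhaustive enumeration stops being practical beyond small degrees.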
The Sollya tool uses these techniques from the literature for its fpminimax command. It is worth checking out in addition to Maple's and Mathematica's facilities for generating minimax polynomial approximations, as it often results in superior approximations in my experience.
The second problem, how to account for evaluation with finite-precision floating-point computation and how to adjust the coefficients of a polynomial approximation accordingly, is still a subject of research. Some initial results:
Tor Myklebust, "Computing accurate Horner form approximations to special functions in finite precision arithmetic", ArXiv manuscript 2015 (online)
Denis Arzelier, Florent Bréhard, Mioara Joldes, "Exchange algorithm for evaluation and approximation error-optimized polynomials", In 26th IEEE Symposium on Computer Arithmetic, June 2019, pp. 30-37 (online)
Note that the first publication came about due to a question I asked on Stackoverflow.
For my own use I am using a heuristic search for finding approximations optimized to account for both representational error in the coefficients and evaluation error in the polynomial evaluation. It can be loosely described as a form of simulated annealing. I have also looked into the use of genetic programming, but the preliminary results did not look promising, so I stopped pursuing this approach.
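A loose sketch of what such a search can look like (greatly simplified; the target function, degree, and parameters are placeholders): the objective evaluates the polynomial with float arithmetic, exactly as the deployed code would, so it captures both coefficient representation error and evaluation rounding error, while coefficients are nudged one ulp at a time. A genuine simulated-annealing scheme would also sometimes accept worse candidates; the greedy loop below only shows the structure.

```c
/* Loose sketch of a heuristic coefficient search (greatly simplified): the
 * objective evaluates the polynomial in float, so both representation error
 * and evaluation rounding error are measured. Coefficients are perturbed by
 * one ulp at a time; real simulated annealing would also occasionally accept
 * worse candidates to escape local minima. All parameters are illustrative. */
#include <math.h>
#include <stdlib.h>
#include <float.h>

#define N        4
#define SAMPLES  1000

static double ref(double x) { return exp(x); }   /* hypothetical target function */

static float horner_f(const float c[], float x)
{
    float r = c[N - 1];
    for (int k = N - 2; k >= 0; k--) r = fmaf(r, x, c[k]);
    return r;
}

static double objective(const float c[], double lo, double hi)
{
    double worst = 0.0;
    for (int i = 0; i <= SAMPLES; i++) {
        double x = lo + (hi - lo) * i / SAMPLES;
        double e = fabs((double)horner_f(c, (float)x) - ref(x));
        if (e > worst) worst = e;
    }
    return worst;
}

void refine_coeffs(float c[], double lo, double hi, int iters)
{
    double best = objective(c, lo, hi);
    for (int it = 0; it < iters; it++) {
        float trial[N];
        for (int k = 0; k < N; k++) trial[k] = c[k];
        int i = rand() % N;                       /* pick one coefficient ...   */
        trial[i] = nextafterf(trial[i],           /* ... and move it by one ulp */
                              (rand() & 1) ? FLT_MAX : -FLT_MAX);
        double e = objective(trial, lo, hi);
        if (e <= best) {                          /* greedy acceptance          */
            best = e;
            for (int k = 0; k < N; k++) c[k] = trial[k];
        }
    }
}
```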