Life may toss us ill-conditioned problems, but it is too short to settle for unstable algorithms. - D.P. O'Leary
Measures of error
If $x$ is a number and $\hat x$ is its approximation, then there are two notions of error: the absolute error $|x - \hat x|$, and the relative error $|x - \hat x|/|x|$.
Since the relative error is invariant under scaling ($x \mapsto \alpha x$), we will mostly be interested in relative error.
Significant digits
The significant digits in a number are the first nonzero digit and all succeeding digits. Thus $1.7320$ has five significant digits, while $0.0491$ has only three. It is not transparent to me why this definition is sensible.
Correct Significant digits --- a first stab
We can naively define: $\hat x$ agrees with $x$ to $p$ significant digits if $\hat x$ and $x$ round to the same number when both are rounded to $p$ significant digits. This definition is seriously problematic. Consider the numbers $x = 0.9949$ and $y = 0.9951$.
Here, $y$ has one and three correct significant digits relative to $x$, but an incorrect second significant digit, since the roundings to two significant digits ($x \to 0.99$, $y \to 1.0$) do not agree even in the first significant digit.

Correct Significant digits --- the correct definition
We say that $\hat x$ agrees with $x$ to $p$ significant digits if $|x - \hat x|$ is less than half a unit in the $p$th significant digit of $x$.
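For example, with the numbers above, $|x - y| = 2 \times 10^{-4}$. Half a unit in the third significant digit of $x = 0.9949$ is $0.5 \times 10^{-3}$, so $y$ agrees with $x$ to three significant digits; half a unit in the fourth is $0.5 \times 10^{-4} < 2 \times 10^{-4}$, so not to four. Unlike the naive definition, this one is monotonic: agreement to $p$ digits implies agreement to every $p' < p$.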
Accuracy vs. precision
Accuracy refers to the absolute or relative error of an approximate quantity: how close our computed answer is to the true answer. Precision is the accuracy with which the basic arithmetic operations +, -, *, / are performed; for floating point, it is measured by the unit round-off (we have not met this yet). Accuracy is not limited by precision: by using fixed precision arithmetic, we can emulate arbitrary precision arithmetic. The problem is that often this emulation is too expensive to be useful.
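As a taste of what such emulation looks like (a sketch of mine, not from the text): Knuth's TwoSum recovers the exact rounding error of a single floating-point addition using only double arithmetic; double-double libraries build their extra precision out of this primitive.

#include <stdio.h>

// Knuth's TwoSum: returns s = fl(a + b) and err such that s + err == a + b
// exactly, using only ordinary double-precision operations.
void two_sum(double a, double b, double *s, double *err) {
    *s = a + b;
    double b_virtual = *s - a;                 // portion of b that made it into s
    double a_virtual = *s - b_virtual;         // portion of a that made it into s
    *err = (a - a_virtual) + (b - b_virtual);  // what rounding discarded
}

int main() {
    double s, err;
    two_sum(1.0, 1e-17, &s, &err);
    printf("s = %g, err = %g\n", s, err);  // s = 1, err = 1e-17: nothing was lost
}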
Backward, Forward errors
Let $y = f(x)$, where $f: \mathbb{R} \rightarrow \mathbb{R}$. Let us compute $\hat y$ as an approximation to $y$, in an arithmetic of precision $u$. How do we measure the quality of $\hat y$? The forward error is the error in the output, $|y - \hat y|$. The backward error is the smallest $|\delta x|$ such that $\hat y = f(x + \delta x)$: the smallest perturbation of the input for which our computed answer is the exact answer.
There are two reasons we prefer backward error. First, it interprets rounding errors as perturbations in the data; the data is often uncertain anyway (from measurement, or from earlier computations), so a backward error no larger than that uncertainty is as good as we can hope for. Second, it separates the algorithm from the problem: combined with the conditioning of $f$ (perturbation theory, which is often already known), a backward error bound yields a forward error bound.
Backward stable
A method for computing $y = f(x)$ is called backward stable if it produces a $\hat y$ with small backward error. That is, we need a small $\delta x$ such that $\hat y = f(x + \delta x)$.
Mixed forward-backward error
We assume that addition and subtraction are backward stable, where $u$ is the unit round-off: the bound on the relative error with which our arithmetic operations can be performed:

$$\mathrm{fl}(x \pm y) = x(1 + \Delta) \pm y(1 + \Delta), \quad \text{for some } |\Delta| \leq u$$
Another type of error we can consider is that of the form:
$$\hat y + \Delta y = f(x + \Delta x)$$
That is, for a small perturbation $\Delta y$ in the output, we can get a backward error of $\Delta x$. This is called mixed forward-backward error. We say that an algorithm with mixed forward-backward error is stable iff:
$$\begin{aligned}
&\hat y + \Delta y = f(x + \Delta x) \\
&|\Delta y|/|\hat y| < \epsilon \\
&|\Delta x|/|x| < \eta \\
&\text{$\epsilon, \eta$ are small}
\end{aligned}$$
This definition of stability is useful when rounding errors are the dominant form of errors.
Conditioning
The relationship between forward and backward error is governed by conditioning: the sensitivity of the solution to perturbations of the data. Suppose we have an approximate solution $\hat y = f(x + \delta x)$. Then:
$$\begin{aligned}
\hat y - y &= f(x + \delta x) - f(x) = f'(x)\,\delta x + O((\delta x)^2) \\
(\hat y - y)/y &= \left(x f'(x)/f(x)\right) (\delta x/x) + O((\delta x)^2) \\
(\hat y - y)/y &= c(x)\,(\delta x/x) + O((\delta x)^2) \\
c(x) &\equiv \left| x f'(x)/f(x) \right|
\end{aligned}$$
The quantity $c(x)$ is the scaling factor that takes the relative change in the input to the relative change in the output. Note that this is a property of the function $f$, not of any particular algorithm.
Example: $\log x$
If $f(x) = \log x$, then $c(x) = |x (\log x)' / \log x| = |1/\log x|$. This quantity is very large for $x \simeq 1$. So a small relative change in $x$ can produce a drastic relative change in $\log x$ around $1$.
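A quick numerical illustration (the constants are my own): perturb an $x$ near $1$ by a relative $10^{-10}$ and watch the relative change in $\log x$ get amplified by roughly $c(x) = |1/\log x| \approx 10^4$:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 1.0001;      // close to 1, so c(x) = |1/log(x)| is about 1e4
    double dx = x * 1e-10;  // a relative input perturbation of 1e-10
    double rel_out = (log(x + dx) - log(x)) / log(x);
    printf("c(x):    %g\n", fabs(1.0 / log(x)));  // ~1e4
    printf("rel in:  %g\n", dx / x);              // 1e-10
    printf("rel out: %g\n", rel_out);             // ~1e-6 == c(x) * (rel in)
}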
Rule of thumb
We now gain access to the useful rule:
$$\text{forward error} \lesssim \text{condition number} \times \text{backward error}$$
Forward stable
If a method produces answers with forward errors of similar magnitude to those produced by a backward stable method, then it is called forward stable. Backward stability implies forward stability, but not vice-versa: by the rule of thumb above, a backward stable method has forward error bounded by (condition number) $\times$ (small backward error), which is the best we can expect; but a method can produce an answer with small forward error without that answer being $f$ evaluated at any nearby input, so it need not be backward stable.
Cancellation
Consider the following program:
#include <cmath>
#include <stdio.h>
int main() {
    double x = 12e-9;
    double c = cos(x);
    double one_sub_c = 1 - c;
    double denom = x*x;
    double yhat = one_sub_c / denom;
    printf("x: %20.16f\n"
           "cx: %20.16f\n"
           "one_sub_c: %20.16f\n"
           "denom: %20.16f\n"
           "yhat: %20.16f\n",
           x, c, one_sub_c, denom, yhat);
}
which produces the output:
x: 0.0000000120000000
cx: 0.9999999999999999
one_sub_c: 0.0000000000000001
denom: 0.0000000000000001
yhat: 0.7709882115452477
This is clearly wrong, because we know that $(1 - \cos x)/x^2 \leq 1/2$. The reason for this terrible result is that $c$ carries a rounding error of order $10^{-16}$, while the true value of $1 - \cos x$ is itself of order $10^{-16}$: the subtraction is exact, but it promotes the rounding error in $c$ into the leading term of the result.
In general:
$$\begin{aligned}
&x \equiv 1 + \epsilon \qquad \text{(error of order $\epsilon$)} \\
&y \equiv 1 - x = -\epsilon \qquad \text{(value of order $\epsilon$)}
\end{aligned}$$
That is, subtracting values close to each other (in this case, $1$ and $x$) converts an error order of magnitude into a value order of magnitude. Alternatively: it brings earlier errors into prominence as values.
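One standard fix (my sketch, not part of the original program) is to avoid the subtraction entirely via the identity $1 - \cos x = 2\sin^2(x/2)$:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 12e-9;
    // Naive: 1 - cos(x) cancels catastrophically.
    double naive = (1 - cos(x)) / (x * x);
    // Stable: 1 - cos(x) == 2*sin(x/2)^2, with no subtraction of nearby values.
    double s = sin(x / 2);
    double stable = (2 * s * s) / (x * x);
    printf("naive:  %20.16f\n", naive);   // garbage (~0.77)
    printf("stable: %20.16f\n", stable);  // ~0.5000000000000000
}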
Analysis of subtraction
We can consider the subtraction:
$$\begin{aligned}
&x = a - b; \qquad \hat x = \hat a - \hat b \\
&\hat a = a(1 + \Delta a) \\
&\hat b = b(1 + \Delta b) \\
&\left|\frac{x - \hat x}{x}\right|
= \left|\frac{-a\,\Delta a + b\,\Delta b}{a - b}\right|
= \frac{|a\,\Delta a - b\,\Delta b|}{|a - b|}
\leq \frac{\max(|\Delta a|, |\Delta b|)\,(|a| + |b|)}{|a - b|}
\end{aligned}$$
This quantity will be large when $|a - b| \ll |a| + |b|$: that is, when there is heavy cancellation in the subtraction that computes $x$.
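For concreteness (numbers mine): take $a = 1.23456$, $b = 1.23455$, with $|\Delta a|, |\Delta b| \leq 10^{-6}$. Then the bound is $10^{-6} \cdot (|a| + |b|)/|a - b| \approx 10^{-6} \cdot 2.47 / 10^{-5} \approx 0.25$: even though both inputs are accurate to six digits, a quarter of the computed $\hat x$ may be noise.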
Underflow
#include <cmath>
#include <stdio.h>
int main() {
    double x = 1000;
    for(int i = 0; i < 60; ++i) {
        x = sqrt(x);
    }
    for(int i = 0; i < 60; ++i) {
        x = x*x;
    }
    printf("x: %10.20f\n", x);
}
This produces the output:
./sqrt-pow-1-12
...
x: 1.00000000000000000000
That is, even though the composed function is the identity, the answer collapses to 1. What is happening? Each square root pulls $x$ toward $1$: after $k$ square roots, $x = 1000^{1/2^k} \approx 1 + \ln(1000)/2^k$. Once $\ln(1000)/2^k$ falls below the unit round-off ($\approx 1.1 \times 10^{-16}$, which happens around $k = 56$), the computed $x$ rounds to exactly $1.0$, and squaring $1.0$ sixty times still gives $1.0$.
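To watch the collapse happen (a diagnostic sketch of mine): test after each square root whether $x$ has already rounded to exactly $1.0$:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 1000;
    for(int i = 0; i < 60; ++i) {
        x = sqrt(x);
        if (x == 1.0) {  // ln(1000)/2^k has dropped below the unit round-off
            printf("x rounded to exactly 1 after %d square roots\n", i + 1);
            break;
        }
    }
}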
Computing $(e^x - 1)/x$
One way to evaluate this function is as follows:
double f(double x) { return x == 0 ? 1 : (pow(M_E, x) - 1) / x; }
This can suffer from catastrophic cancellation in the numerator. When $x$ is close to $0$, $e^x$ is close to $1$, and $e^x - 1$ will magnify the error in $e^x$.
double f(double x) {
const double y = pow(M_E, x);
return y == 1 ? 1 : (y - 1) / log(y);
}
This algorithm seems crazy, but there's insight in it. We can show that the errors cancel! The idea is that while neither $(y - 1)$ nor $\log y$ is particularly accurate, the errors accumulated in them almost completely cancel out, leaving a good value:
$$\begin{aligned}
&\text{assume } \hat y = 1 \\
&1 = \hat y \equiv e^x(1 + \delta) \\
&\log 1 = \log(e^x) + \log(1 + \delta) \\
&x = -\log(1 + \delta) \\
&x = -\delta + O(\delta^2)
\end{aligned}$$

So when $\hat y = 1$, $x$ is of order $\delta$, and the true value $(e^x - 1)/x = 1 + x/2 + O(x^2) = 1 + O(\delta)$: returning $1$ is accurate to within $O(\delta)$.
If $\hat y \neq 1$:
$$\hat f = \frac{(\hat y - 1)(1 + \epsilon_1)}{\log(\hat y)\,(1 + \epsilon_2)}\,(1 + \epsilon_3), \qquad |\epsilon_i| \leq u$$

The $\epsilon_i$ contribute only $O(u)$. The key point is that numerator and denominator are computed from the same $\hat y$: the function $g(y) = (y - 1)/\log y$ is well conditioned near $y = 1$, so the error $\delta$ in $\hat y$ barely perturbs $g(\hat y)$ away from $g(e^x) = f(x)$. The naive formula instead divides the corrupted $\hat y - 1$ by the exact $x$, amplifying the error to order $\delta/x$.
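A quick comparison of the two (my sketch; std::expm1 from &lt;cmath&gt; computes $e^x - 1$ without the cancellation, so expm1(x)/x serves as a reference):

#include <cmath>
#include <stdio.h>

double f_naive(double x) { return x == 0 ? 1 : (pow(M_E, x) - 1) / x; }

double f_kahan(double x) {
    const double y = pow(M_E, x);
    return y == 1 ? 1 : (y - 1) / log(y);
}

int main() {
    double x = 1e-9;
    printf("naive: %.17f\n", f_naive(x));    // cancellation: trailing digits wrong
    printf("kahan: %.17f\n", f_kahan(x));    // agrees with the reference
    printf("expm1: %.17f\n", expm1(x) / x);  // reference value of (e^x - 1)/x
}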
IEEE floating point fun: +0 and -0 for complex analysis
Rather than think of +0 and -0 as distinct numerical values, think of their sign bit as an auxiliary variable that conveys one bit of information (or misinformation) about any numerical variable that takes on 0 as its value.
We have two types of zeroes, +0 and -0, in IEEE-754. These are used in some cases. The most famous is that $1/{+0} = +\infty$, while $1/{-0} = -\infty$. Here, we proceed to discuss some complex-analytic considerations.
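First, a two-line demonstration of the signed zeroes themselves (my sketch; this behavior is required by IEEE-754):

#include <stdio.h>

int main() {
    double pz = +0.0, nz = -0.0;
    printf("pz == nz: %d\n", pz == nz);  // 1: the two zeroes compare equal
    printf("1/+0: %f\n", 1.0 / pz);      // inf
    printf("1/-0: %f\n", 1.0 / nz);      // -inf
}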
Therefore, implementers of compilers and run-time libraries bear a heavy burden of attention to detail if applications programmers are to realize the full benefit of the IEEE style of complex arithmetic. That benefit deserves some discussion here if only to reassure implementers that their assiduity will be appreciated.
$$\sqrt{-1 + 0i} = +0 + i \qquad \sqrt{-1 - 0i} = +0 - i$$
These will ensure that $\sqrt{z^*} = (\sqrt{z})^*$:
$$\texttt{copysign}(1, +0) = +1 \qquad \texttt{copysign}(1, -0) = -1$$
These will ensure that $\texttt{copysign}(x, 1/x) = x$ when $x = \pm\infty$. An example is provided where the two one-sided limits differ:
$$\begin{aligned}
&f(x + i0) = \lim_{y \to 0^+} f(x + iy) \\
&f(x - i0) = \lim_{y \to 0^-} f(x + iy)
\end{aligned}$$
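This is observable directly in C++ (my sketch; it assumes an implementation with C99 Annex G-style complex semantics, as in glibc): the sign of the zero imaginary part selects the side of the branch cut of the square root along the negative real axis:

#include <complex>
#include <stdio.h>

int main() {
    std::complex<double> above(-1.0, +0.0);  // approaches -1 from above the cut
    std::complex<double> below(-1.0, -0.0);  // approaches -1 from below the cut
    std::complex<double> ra = std::sqrt(above);
    std::complex<double> rb = std::sqrt(below);
    printf("sqrt(-1+0i) = %g + %gi\n", ra.real(), ra.imag());  // 0 + 1i
    printf("sqrt(-1-0i) = %g + %gi\n", rb.real(), rb.imag());  // 0 + -1i
}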
Complex-analytic considerations
The principal branch of a complex function is a way to select one branch of a complex function, which tends to be multi-valued. A classical example is the argument function, where $\arg(r e^{i\theta}) = \theta$. However, this is ambiguous: we can map $\theta \mapsto \theta + 2\pi$ and still have the same complex number. So we need to fix some standard: for $\arg$ one usually picks the principal branch $-\pi < \theta \leq \pi$ (some texts use $0 \leq \theta < 2\pi$). In general, we need to carefully handle what happens to the function at the discontinuity.
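For $\arg$ itself, the C library's atan2 implements the principal branch $(-\pi, \pi]$, and the signed zero distinguishes the two sides of its cut (my sketch; this behavior is specified by C99 Annex F):

#include <cmath>
#include <stdio.h>

int main() {
    // arg(z) for z approaching the negative real axis from above and below:
    printf("atan2(+0, -1) = %f\n", atan2(+0.0, -1.0));  // +pi
    printf("atan2(-0, -1) = %f\n", atan2(-0.0, -1.0));  // -pi
}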
What deserves to be undermined is blind faith in the power of Algebra. We should not believe that the equivalence class of expressions that all describe the same complex analytic function can be recognized by algebraic means alone, not even if relatively uncomplicated expressions are the only ones considered.