From 33471c1f0de82a611769fa476f984e7f6ec69f4a Mon Sep 17 00:00:00 2001
From: David Rotermund <54365609+davrot@users.noreply.github.com>
Date: Wed, 17 Jan 2024 12:12:49 +0100
Subject: [PATCH] Update README.md

Signed-off-by: David Rotermund <54365609+davrot@users.noreply.github.com>
---
 matlab/2/README.md | 20 +++-----------------
 1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/matlab/2/README.md b/matlab/2/README.md
index e5d155d..dc631b8 100644
--- a/matlab/2/README.md
+++ b/matlab/2/README.md
@@ -85,7 +85,7 @@ Real numbers are, by their nature, analogue quantities. Hence we would expect th
 
 $\mbox{floating-point number} = \mbox{mantissa} \cdot \mbox{basis}^{\mbox{exponent}} $
 
-Thereby, the precision, with which the real number can be represented, is determined by the number of available bits."Simple precision" requires 4 Bytes, for *double precision* 8~Bytes are needed. The latter is the default configuration in Matlab. The IEEE format of double precision uses 53-Bits for the mantissa, 11-Bits for the exponent and for the basis the remaining 2. One Bit of the mantissa respectively the exponent are used for the sign of the quantity. Thus, the exponent can vary between$-1024$ and $+1023$. The mantissa always represents a value in the interval $[1, 2[$ in the IEEE notation. Here, the $52$ Bits are utilized to add up fractions of exponents of 2. The value of the mantissa yields
+Thereby, the precision with which the real number can be represented is determined by the number of available bits. *Single precision* (i.e. float32) requires 4 Bytes; *double precision* (i.e. float64) needs 8 Bytes. The latter is the default configuration in Python and Matlab. The IEEE format of double precision uses 53 Bits for the mantissa (the leading bit is always 1 and is therefore not stored explicitly), 11 Bits for the exponent, and the basis 2; one Bit holds the sign of the quantity. The exponent is stored in biased form and can vary between $-1022$ and $+1023$. The mantissa always represents a value in the interval $[1, 2[$ in the IEEE notation. Here, the $52$ stored Bits are utilized to add up negative powers of 2. The value of the mantissa yields
 
 $$mantissa = 1 + \sum_{i=1}^{52} b_i 2^{-i}$$
 
 , with $b_i=1$ , if the $i$-th bit in the mantissa is set.
 
@@ -108,7 +108,7 @@ An even bigger problem can be illustrated by the calculation of the factorial. T
 
 $n! = n\cdot(n-1)\cdot(n-2)\cdot\ldots3\cdot 2\cdot 1 $
 
-In Matlab, it can be easily verified by using the function factorial(n), that the factorial for $n>170$ can not be represented, even with double precision numbers. A way out is provided by the use of logarithms, since the logarithm of a bigger number still gives moderately small values, e.g. $\log_{10}(10^{100}) = 100$. It ensues that
+In Python or Matlab, it can easily be verified with the function factorial(n) that the factorial for $n>170$ cannot be represented as a double-precision number (Python's math.factorial returns an exact integer, but converting such a result to float overflows). A way out is provided by the use of logarithms, since the logarithm of a large number is still a moderately small value, e.g. $\log_{10}(10^{100}) = 100$. It follows that
 
 $\ln(n!) = \ln(n) + \ln(n-1) + \ldots + \ln(3) + \ln(2) + \ln(1) $
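+
+As a minimal sketch of this trick (assuming only the Python standard library), the logarithm of the factorial can be accumulated term by term, or obtained directly from the log-gamma function, since $\ln(n!) = \ln\Gamma(n+1)$:
+
+```python
+import math
+
+n = 200  # float(math.factorial(200)) would overflow float64
+
+# sum the logarithms instead of multiplying the factors
+log_fact = sum(math.log(k) for k in range(1, n + 1))
+print(log_fact)            # -> 863.23...
+
+# the same value via the log-gamma function
+print(math.lgamma(n + 1))  # -> 863.23...
+```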
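+
+The mantissa-exponent decomposition described further above can likewise be inspected directly; a minimal sketch, again assuming only the Python standard library:
+
+```python
+import math
+
+# frexp returns m and e with x = m * 2**e and 0.5 <= |m| < 1;
+# rescaling by one factor of 2 gives the IEEE-style mantissa in [1, 2[.
+m, e = math.frexp(6.75)
+print(m * 2, e - 1)  # -> 1.6875 2, i.e. 6.75 = 1.6875 * 2**2
+```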
@@ -132,18 +132,6 @@ From these examples we learn that range errors can usually be circumvented with
 
 Rounding errors stem from the finite precision of the mantissa.
 The following program illustrates this fact:
 
-```matlab
-x = 1;
-
-while (1+x ~= 1)
-    x = x/2;
-end
-
-x = x*2;
-```
-
-Python version:
-
 ```python
 x: float = 1.0
 
 while 1 + x != 1:
     x /= 2
 
 x *= 2
 print(x)
@@ -155,7 +143,5 @@
 # -> 2.220446049250313e-16
 ```
 
-One might think that this constitutes an infinite loop. To the contrary, the loop will be left in finite time. The result for double precision is $x \approx 2\times 10^{-16}$ (= Matlab variable eps). eps is the smallest number with $1+$eps$>1$, and is the \quoting{machine accuracy}. Rounding errors of this order of magnitude occur on a regular basis. For example, Matlab calculates $\sin{\pi} \approx 1.2246\times 10^{-16}$. It shall be mentioned hat the machine accuracy for double precision is exactly eps $= 2^{-52}$,
-
-since 52 bits (plus one bit for the sign) are used for the mantissa. This rounding error might appear to be small and negligible. However, if further calculations are performed with rounded numbers, the rounding errors can accumulate with each calculation and grow to a significant value.
+One might think that this constitutes an infinite loop. On the contrary, the loop terminates in finite time. The result for double precision is $x \approx 2\times 10^{-16}$ (in Python: `import sys; print(sys.float_info.epsilon)`). eps is the smallest number with $1+$eps$>1$ and is called the "machine accuracy". Rounding errors of this order of magnitude occur on a regular basis. For example, Python calculates $\sin(\pi) \approx 1.2246\times 10^{-16}$ (`import math; print(math.sin(math.pi))` -> 1.2246467991473532e-16). It shall be mentioned that the machine accuracy for double precision is exactly eps $= 2^{-52}$, since 52 bits (plus one bit for the sign) are used for the mantissa. This rounding error might appear to be small and negligible. However, if further calculations are performed with rounded numbers, the rounding errors can accumulate with each calculation and grow to a significant value.
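+
+A short sketch of such accumulation (again using only the Python standard library): adding $0.1$ ten times does not reproduce $1.0$ exactly, whereas `math.fsum` compensates for the intermediate rounding errors:
+
+```python
+import math
+
+values = [0.1] * 10
+
+print(sum(values))         # -> 0.9999999999999999
+print(sum(values) == 1.0)  # -> False: the rounding errors have accumulated
+print(math.fsum(values))   # -> 1.0 (compensated summation)
+```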