[PATCH] Fix common typos.
Ondřej Bílka
neleai at seznam.cz
Sun Jul 21 21:31:40 CEST 2013
Hi,
One of the things that benefits from economy of scale is fixing typos. Below
is a patch that fixes 99 typos, and it was fast to generate.
This is part of a project that I am writing,
https://github.com/neleai/stylepp
whose aim is to solve various style issues effectively.
The method I used is as follows:
There is a script, stylepp_fix_spell, which reads a dictionary file,
then finds all occurrences of its entries in C comments and in html, texi and txt files
and fixes them.
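As an illustration, here is a minimal sketch of that kind of dictionary-driven
replacement. It is not the actual stylepp_fix_spell (in particular it does not
restrict itself to comments and documentation files the way the real script
does); it assumes GNU sed and a dictionary of "misspelling correction" pairs,
one pair per line:

  #!/bin/sh
  # Hypothetical sketch only, not stylepp_fix_spell.
  # Usage: sh fix_spell.sh dictionary file...
  dict=$1; shift
  while read -r wrong right; do
    [ -z "$wrong" ] && continue            # skip blank lines
    sed -i "s/\b$wrong\b/$right/g" "$@"    # whole-word, in-place replacement
  done < "$dict"

For example, sh fix_spell.sh dictionary_wiki doc/*.texi would apply the
Wikipedia typo list mentioned below to the texi documentation.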
As the dictionary file I used the following lists of fixes for common mistakes:
A list of common typos made on Wikipedia:
https://github.com/neleai/stylepp/blob/master/maintained/dictionary_wiki
A list of correct capitalizations of names:
https://github.com/neleai/stylepp/blob/master/maintained/dictionary_names
Then I generated a list of likely typos with
stylepp_spellcheck
and found the likely misspellings with
stylepp_make_dictionary.
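For the spellchecking step, the rough idea can be sketched as follows (again a
hypothetical sketch, not the actual stylepp_spellcheck; it assumes GNU grep and
aspell are available): collect the words appearing in the tree and keep those
the spellchecker does not recognize as candidate typos.

  # Hypothetical sketch only, not stylepp_spellcheck: list words of four or
  # more letters from C, texi and html files that aspell does not know.
  grep -rhoE --include='*.c' --include='*.texi' --include='*.html' \
      '[A-Za-z]{4,}' . | sort -u | aspell list | sort

Pairing the candidates it reports with their corrections gives dictionary
entries like the ones listed next.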
acknowledgements acknowledgments
amortised amortized
analysed analyzed
apppears appears
assings assigns
cancelation cancellation
cancelling canceling
chosing choosing
complementt complement
conservativeky conservatively
coproccessor coprocessor
daplicating duplicating
declaratinos declarations
emphasised emphasized
factorisation factorization
functon function
granilarity granularity
initialised initialized
inlcude include
matris matrix
measurments measurements
mechnanism mechanism
minimise minimize
multipliciaton multiplication
optimise optimize
organisation organization
parameterized parametrized
probabilty probability
programmes programmers
recognise recognize
recognised recognized
recognises recognizes
recognising recognizing
recurrency recurrence
rumoured rumored
simplicty simplicity
slighty slightly
sligthly slightly
sligtly slightly
trucate truncate
uninitialised uninitialized
unnecssary unnecessary
vaues values
---
demos/expr/expr.c | 2 +-
doc/fdl-1.3.texi | 2 +-
doc/gmp.texi | 16 ++++++++--------
doc/projects.html | 6 +++---
doc/tasks.html | 12 ++++++------
errno.c | 2 +-
gmp-impl.h | 12 ++++++------
invalid.c | 2 +-
longlong.h | 6 +++---
mini-gmp/mini-gmp.c | 4 ++--
mini-gmp/tests/t-cmp_d.c | 2 +-
mpn/cray/ieee/addmul_1.c | 2 +-
mpn/cray/ieee/mul_1.c | 2 +-
mpn/cray/sub_n.c | 2 +-
mpn/generic/divis.c | 2 +-
mpn/generic/gcdext_1.c | 2 +-
mpn/generic/get_str.c | 2 +-
mpn/generic/hgcd.c | 2 +-
mpn/generic/hgcd_appr.c | 2 +-
mpn/generic/hgcd_jacobi.c | 2 +-
mpn/generic/hgcd_step.c | 2 +-
mpn/generic/invertappr.c | 2 +-
mpn/generic/mod_1.c | 2 +-
mpn/generic/mode1o.c | 2 +-
mpn/generic/mu_div_qr.c | 4 ++--
mpn/generic/mu_divappr_q.c | 2 +-
mpn/generic/mullo_n.c | 2 +-
mpn/generic/rootrem.c | 4 ++--
mpn/generic/sbpi1_div_sec.c | 2 +-
mpn/generic/set_str.c | 2 +-
mpn/generic/toom_interpolate_5pts.c | 2 +-
mpn/sparc64/mode1o.c | 4 ++--
mpn/x86/fat/fat.c | 6 +++---
mpz/2fac_ui.c | 2 +-
mpz/bin_uiui.c | 2 +-
mpz/combit.c | 2 +-
mpz/jacobi.c | 2 +-
mpz/oddfac_1.c | 2 +-
mpz/prodlimbs.c | 2 +-
mpz/ui_pow_ui.c | 2 +-
nextprime.c | 2 +-
printf/doprnt.c | 2 +-
scanf/doscan.c | 4 ++--
tests/amd64check.c | 2 +-
tests/arm32check.c | 2 +-
tests/cxx/t-istream.cc | 2 +-
tests/devel/try.c | 4 ++--
tests/mpn/t-get_d.c | 4 ++--
tests/mpq/t-cmp.c | 2 +-
tests/mpq/t-cmp_ui.c | 2 +-
tests/mpz/t-cmp_d.c | 2 +-
tests/mpz/t-get_d.c | 2 +-
tests/tests.h | 2 +-
tests/x86check.c | 2 +-
tune/freq.c | 6 +++---
tune/noop.c | 2 +-
tune/speed.c | 2 +-
tune/speed.h | 2 +-
tune/time.c | 16 ++++++++--------
tune/tuneup.c | 2 +-
60 files changed, 99 insertions(+), 99 deletions(-)
diff --git a/demos/expr/expr.c b/demos/expr/expr.c
index 1f4af6c..1548f04 100644
--- a/demos/expr/expr.c
+++ b/demos/expr/expr.c
@@ -531,7 +531,7 @@ mpexpr_evaluate (struct mpexpr_parse_t *p)
values.
The use of sp as a duplicate of SP will help compilers that can't
- otherwise recognise the various uses of SP as common subexpressions. */
+ otherwise recognize the various uses of SP as common subexpressions. */
TRACE (printf ("apply control: nested %d, \"%s\" 0x%X, %d args\n",
p->control_top, CP->op->name, CP->op->type, CP->argcount));
diff --git a/doc/fdl-1.3.texi b/doc/fdl-1.3.texi
index 8805f1a..f9e30c1 100644
--- a/doc/fdl-1.3.texi
+++ b/doc/fdl-1.3.texi
@@ -252,7 +252,7 @@ publisher of the version it refers to gives permission.
@item
For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve
the Title of the section, and preserve in the section all the
-substance and tone of each of the contributor acknowledgements and/or
+substance and tone of each of the contributor acknowledgments and/or
dedications given therein.
@item
diff --git a/doc/gmp.texi b/doc/gmp.texi
index 61b20fd..a7fce4d 100644
--- a/doc/gmp.texi
+++ b/doc/gmp.texi
@@ -1320,8 +1320,8 @@ use of a 64-bit chip.
@sp 1
@need 1000
-@item Sparc V9 (@samp{sparc64}, @samp{sparcv9}, @samp{ultrasparc*})
-@cindex Sparc V9
+@item SPARC V9 (@samp{sparc64}, @samp{sparcv9}, @samp{ultrasparc*})
+@cindex SPARC V9
@cindex Solaris
@cindex Sun
@table @asis
@@ -1591,15 +1591,15 @@ possibly with appropriate compiler options (like @samp{-mcpu=common} for
workstations) is accepted by @file{config.sub}, but is currently equivalent to
@option{--disable-assembly}.
-@item Sparc CPU Types
-@cindex Sparc
+@item SPARC CPU Types
+@cindex SPARC
@samp{sparcv8} or @samp{supersparc} on relevant systems will give a
significant performance increase over the V7 code selected by plain
@samp{sparc}.
-@item Sparc App Regs
-@cindex Sparc
-The GMP assembly code for both 32-bit and 64-bit Sparc clobbers the
+@item SPARC App Regs
+@cindex SPARC
+The GMP assembly code for both 32-bit and 64-bit SPARC clobbers the
``application registers'' @code{g2}, @code{g3} and @code{g4}, the same way
that the GCC default @samp{-mapp-regs} does (@pxref{SPARC Options,, SPARC
Options, gcc, Using the GNU Compiler Collection (GCC)}).
@@ -1737,7 +1737,7 @@ The system @command{sed} prints an error ``Output line too long'' when libtool
builds @file{libgmp.la}. This doesn't seem to cause any obvious ill effects,
but GNU @command{sed} is recommended, to avoid any doubt.
-@item Sparc Solaris 2.7 with gcc 2.95.2 in @samp{ABI=32}
+@item SPARC Solaris 2.7 with gcc 2.95.2 in @samp{ABI=32}
@cindex Solaris
A shared library build of GMP seems to fail in this combination, it builds but
then fails the tests, apparently due to some incorrect data relocations within
diff --git a/doc/projects.html b/doc/projects.html
index 35caf59..4a8baf3 100644
--- a/doc/projects.html
+++ b/doc/projects.html
@@ -151,7 +151,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/.
<p> Add more functions to the set of fat functions.
- <p> The speed of multipliciaton is today highly dependent on combination
+ <p> The speed of multiplication is today highly dependent on combination
functions like <code>addlsh1_n</code>. A fat binary will never use any such
functions, since they are classified as optional. Ideally, we should use
them, but making the current compile-time selections of optional functions
@@ -323,12 +323,12 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/.
<li> <strong>Factorial</strong>
- <p> Rewrite for simplicty and speed. Work is in progress.
+ <p> Rewrite for simplicity and speed. Work is in progress.
<li> <strong>Binomial Coefficients</strong>
- <p> Rewrite for simplicty and speed. Work is in progress.
+ <p> Rewrite for simplicity and speed. Work is in progress.
<li> <strong>Prime Testing</strong>
diff --git a/doc/tasks.html b/doc/tasks.html
index da4dfe0..9d4de7a 100644
--- a/doc/tasks.html
+++ b/doc/tasks.html
@@ -213,7 +213,7 @@ either already been taken care of, or have become irrelevant.
<li> <code>mpn_add_1</code>, <code>mpn_sub_1</code>, <code>mpn_add</code>,
<code>mpn_sub</code>: Internally use <code>__GMPN_ADD_1</code> etc
instead of the functions, so they get inlined on all compilers, not just
- gcc and others with <code>inline</code> recognised in gmp.h.
+ gcc and others with <code>inline</code> recognized in gmp.h.
<code>__GMPN_ADD_1</code> etc are meant mostly to support application
inline <code>mpn_add_1</code> etc and if they don't come out good for
internal uses then special forms can be introduced, for instance many
@@ -353,7 +353,7 @@ either already been taken care of, or have become irrelevant.
explicit masks or small types like <code>short</code> and
<code>int</code> ought to work.
<li> Sparc64 HAL R1 <code>popc</code>: This chip reputedly implements
- <code>popc</code> properly (see gcc sparc.md). Would need to recognise
+ <code>popc</code> properly (see gcc sparc.md). Would need to recognize
it as <code>sparchalr1</code> or something in configure / config.sub /
config.guess. <code>popc_limb</code> in gmp-impl.h could use this (per
commented out code). <code>count_trailing_zeros</code> could use it too.
@@ -445,7 +445,7 @@ either already been taken care of, or have become irrelevant.
<li> IRIX 6 MIPSpro compiler has an <code>__inline</code> which could perhaps
be used in <code>__GMP_EXTERN_INLINE</code>. What would be the right way
to identify suitable versions of that compiler?
-<li> IRIX <code>cc</code> is rumoured to have an <code>_int_mult_upper</code>
+<li> IRIX <code>cc</code> is rumored to have an <code>_int_mult_upper</code>
(in <code><intrinsics.h></code> like Cray), but it didn't seem to
exist on some IRIX 6.5 systems tried. If it does actually exist
somewhere it would very likely be an improvement over a function call to
@@ -558,7 +558,7 @@ either already been taken care of, or have become irrelevant.
<code>long double</code> is an ANSI-ism, so everything involving it would
need to be suppressed on a K&R compiler.
<br>
- There'd be some work to be done by <code>configure</code> to recognise
+ There'd be some work to be done by <code>configure</code> to recognize
the format in use, MPFR has a start on this. Often <code>long
double</code> is the same as <code>double</code>, which is easy but
pretty pointless. A single float format detector macro could look at
@@ -628,7 +628,7 @@ either already been taken care of, or have become irrelevant.
will be "3-1307" in the current switch, but need to verify that. (On
OSF, current configfsf.guess identifies ev7 using psrinfo, we need to do
it ourselves for other systems.)
-<li> Alpha OSF: Libtool (version 1.5) doesn't seem to recognise this system is
+<li> Alpha OSF: Libtool (version 1.5) doesn't seem to recognize this system is
"pic always" and ends up running gcc twice with the same options. This
is wasteful, but harmless. Perhaps a newer libtool will be better.
<li> ARM: <code>umul_ppmm</code> in longlong.h always uses <code>umull</code>,
@@ -704,7 +704,7 @@ either already been taken care of, or have become irrelevant.
Consider making these variant <code>mpz_set_str</code> etc forms
available for <code>mpz_t</code> too, not just <code>mpz_class</code>
etc.
-<li> <code>mpq_class operator+=</code>: Don't emit an unnecssary
+<li> <code>mpq_class operator+=</code>: Don't emit an unnecessary
<code>mpq_set(q,q)</code> before <code>mpz_addmul</code> etc.
<li> Put various bits of gmpxx.h into libgmpxx, to avoid excessive inlining.
Candidates for this would be,
diff --git a/errno.c b/errno.c
index e5e160d..e63d01b 100644
--- a/errno.c
+++ b/errno.c
@@ -29,7 +29,7 @@ int gmp_errno = 0;
/* The deliberate divide by zero triggers an exception on most systems. On
- those where it doesn't, for example power and powerpc, use abort instead.
+ those where it doesn't, for example power and PowerPC, use abort instead.
Enhancement: Perhaps raise(SIGFPE) (or the same with kill()) would be
better than abort. Perhaps it'd be possible to get the BSD style
diff --git a/gmp-impl.h b/gmp-impl.h
index 3119688..2f83a93 100644
--- a/gmp-impl.h
+++ b/gmp-impl.h
@@ -341,7 +341,7 @@ union tmp_align_t {
/* Return "a" rounded upwards to a multiple of "m", if it isn't already.
"a" must be an unsigned type.
This is designed for use with a compile-time constant "m".
- The POW2 case is expected to be usual, and gcc 3.0 and up recognises
+ The POW2 case is expected to be usual, and gcc 3.0 and up recognizes
"(-(8*n))%8" or the like is always zero, which means the rounding up in
the WANT_TMP_NOTREENTRANT version of TMP_ALLOC below will be a noop. */
#define ROUND_UP_MULTIPLE(a,m) \
@@ -745,7 +745,7 @@ __GMP_DECLSPEC void __gmp_default_free (void *, size_t);
On athlon-unknown-freebsd4.9 with gcc 3.3.3, regparm cannot be used with
-p or -pg profiling, since that version of gcc doesn't realize the
.mcount calls will clobber the parameter registers. Other systems are
- ok, like debian with glibc 2.3.2 (mcount doesn't clobber), but we don't
+ OK, like debian with glibc 2.3.2 (mcount doesn't clobber), but we don't
bother to try to detect this. regparm is only an optimization so we just
disable it when profiling (profiling being a slowdown anyway). */
@@ -1829,11 +1829,11 @@ __GMP_DECLSPEC void mpn_copyd (mp_ptr, mp_srcptr, mp_size_t);
/* Zero n limbs at dst.
- For power and powerpc we want an inline stu/bdnz loop for zeroing. On
+ For power and PowerPC we want an inline stu/bdnz loop for zeroing. On
ppc630 for instance this is optimal since it can sustain only 1 store per
cycle.
- gcc 2.95.x (for powerpc64 -maix64, or powerpc32) doesn't recognise the
+ gcc 2.95.x (for powerpc64 -maix64, or powerpc32) doesn't recognize the
"for" loop in the generic code below can become stu/bdnz. The do/while
here helps it get to that. The same caveat about plain -mpowerpc64 mode
applies here as to __GMPN_COPY_INCR in gmp.h.
@@ -3403,7 +3403,7 @@ __GMP_DECLSPEC extern const unsigned char binvert_limb_table[128];
#endif
/* bswap is available on i486 and up and is fast. A combination rorw $8 /
- roll $16 / rorw $8 is used in glibc for plain i386 (and in the linux
+ roll $16 / rorw $8 is used in glibc for plain i386 (and in the Linux
kernel with xchgb instead of rorw), but this is not done here, because
i386 means generic x86 and mixing word and dword operations will cause
partial register stalls on P6 chips. */
@@ -3831,7 +3831,7 @@ __GMP_DECLSPEC double mpn_get_d (mp_srcptr, mp_size_t, mp_size_t, long) __GMP_AT
#endif
/* On m68k, x86 and amd64, gcc (and maybe other compilers) can hold doubles
- in the coprocessor, which means a bigger exponent range than normal, and
+ in the coproccessor, which means a bigger exponent range than normal, and
depending on the rounding mode, a bigger mantissa than normal. (See
"Disappointments" in the gcc manual.) FORCE_DOUBLE stores and fetches
"d" through memory to force any rounding and overflows to occur.
diff --git a/invalid.c b/invalid.c
index 24c6f13..1db8a69 100644
--- a/invalid.c
+++ b/invalid.c
@@ -34,7 +34,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
#include "gmp-impl.h"
-/* Incidentally, kill is not available on mingw, but that's ok, it has raise
+/* Incidentally, kill is not available on mingw, but that's OK, it has raise
and we'll be using that. */
#if ! HAVE_RAISE
#define raise(sig) kill (getpid(), sig)
diff --git a/longlong.h b/longlong.h
index d969c74..9e28d6b 100644
--- a/longlong.h
+++ b/longlong.h
@@ -1678,7 +1678,7 @@ extern UWtype __MPN(udiv_qrnnd) (UWtype *, UWtype, UWtype, UWtype);
#endif /* __sparclite__ */
#endif /* __sparc_v8__ */
#endif /* __sparc_v9__ */
-/* Default to sparc v7 versions of umul_ppmm and udiv_qrnnd. */
+/* Default to SPARC v7 versions of umul_ppmm and udiv_qrnnd. */
#ifndef umul_ppmm
#define umul_ppmm(w1, w0, u, v) \
__asm__ ("! Inlined umul_ppmm\n" \
@@ -1820,7 +1820,7 @@ extern UWtype __MPN(udiv_qrnnd) (UWtype *, UWtype, UWtype, UWtype);
: "g" ((USItype) (x))); \
} while (0)
#endif
-#endif /* vax */
+#endif /* VAX */
#if defined (__z8000__) && W_TYPE_SIZE == 16
#define add_ssaaaa(sh, sl, ah, al, bh, bl) \
@@ -1969,7 +1969,7 @@ extern UWtype mpn_udiv_qrnnd_r (UWtype, UWtype, UWtype, UWtype *);
/* If we still don't have umul_ppmm, define it using plain C.
For reference, when this code is used for squaring (ie. u and v identical
- expressions), gcc recognises __x1 and __x2 are the same and generates 3
+ expressions), gcc recognizes __x1 and __x2 are the same and generates 3
multiplies, not 4. The subsequent additions could be optimized a bit,
but the only place GMP currently uses such a square is mpn_sqr_basecase,
and chips obliged to use this generic C umul will have plenty of worse
diff --git a/mini-gmp/mini-gmp.c b/mini-gmp/mini-gmp.c
index 3d193cf..e5844f1 100644
--- a/mini-gmp/mini-gmp.c
+++ b/mini-gmp/mini-gmp.c
@@ -1348,7 +1348,7 @@ mpz_init (mpz_t r)
}
/* The utility of this function is a bit limited, since many functions
- assings the result variable using mpz_swap. */
+ assign the result variable using mpz_swap. */
void
mpz_init2 (mpz_t r, mp_bitcnt_t bits)
{
@@ -3200,7 +3200,7 @@ mpz_bin_uiui (mpz_t r, unsigned long n, unsigned long k)
/* Numbers are treated as if represented in two's complement (and
infinitely sign extended). For a negative values we get the two's
- complement from -x = ~x + 1, where ~ is bitwise complementt.
+ complement from -x = ~x + 1, where ~ is bitwise complement.
Negation transforms
xxxx10...0
diff --git a/mini-gmp/tests/t-cmp_d.c b/mini-gmp/tests/t-cmp_d.c
index c08e3a5..9dc2423 100644
--- a/mini-gmp/tests/t-cmp_d.c
+++ b/mini-gmp/tests/t-cmp_d.c
@@ -172,7 +172,7 @@ check_low_z_one (void)
/* FIXME: It'd be better to base this on the float format. */
#if defined (__vax) || defined (__vax__)
-#define LIM 127 /* vax fp numbers have limited range */
+#define LIM 127 /* VAX fp numbers have limited range */
#else
#define LIM 512
#endif
diff --git a/mpn/cray/ieee/addmul_1.c b/mpn/cray/ieee/addmul_1.c
index 158a79c..77b1d73 100644
--- a/mpn/cray/ieee/addmul_1.c
+++ b/mpn/cray/ieee/addmul_1.c
@@ -84,7 +84,7 @@ mpn_addmul_1 (mp_ptr rp, mp_srcptr up, mp_size_t n, mp_limb_t vl)
mp_limb_t cyrec = 0;
/* Look for places where rp[k] == 0 and cy[k-1] == 1 or
rp[k] == 1 and cy[k-1] == 2.
- These are where we got a recurrency carry. */
+ These are where we got a recurrence carry. */
for (i = 1; i < n; i++)
{
r = rp[i];
diff --git a/mpn/cray/ieee/mul_1.c b/mpn/cray/ieee/mul_1.c
index 4dc2fd9..2c2c160 100644
--- a/mpn/cray/ieee/mul_1.c
+++ b/mpn/cray/ieee/mul_1.c
@@ -76,7 +76,7 @@ mpn_mul_1 (mp_ptr rp, mp_srcptr up, mp_size_t n, mp_limb_t vl)
{
mp_limb_t cyrec = 0;
/* Look for places where rp[k] is zero and cy[k-1] is non-zero.
- These are where we got a recurrency carry. */
+ These are where we got a recurrence carry. */
for (i = 2; i < n; i++)
{
r = rp[i];
diff --git a/mpn/cray/sub_n.c b/mpn/cray/sub_n.c
index 90a5f1b..590b908 100644
--- a/mpn/cray/sub_n.c
+++ b/mpn/cray/sub_n.c
@@ -63,7 +63,7 @@ mpn_sub_n (mp_ptr rp, mp_srcptr up, mp_srcptr vp, mp_size_t n)
{
mp_limb_t cyrec = 0;
/* Look for places where rp[k] contains just ones and cy[k-1] is
- non-zero. These are where we got a recurrency borrow. */
+ non-zero. These are where we got a recurrence borrow. */
for (i = 1; i < n; i++)
{
r = rp[i];
diff --git a/mpn/generic/divis.c b/mpn/generic/divis.c
index e6d08f7..4881075 100644
--- a/mpn/generic/divis.c
+++ b/mpn/generic/divis.c
@@ -41,7 +41,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
there's no addback, but it would need a multi-precision inverse and so
might be slower than the plain method (on small sizes at least).
- When D must be normalized (shifted to low bit set), it's possible to supress
+ When D must be normalized (shifted to low bit set), it's possible to suppress
the bit-shifting of A down, as long as it's already been checked that A has
at least as many trailing zero bits as D. */
diff --git a/mpn/generic/gcdext_1.c b/mpn/generic/gcdext_1.c
index 3bb4d21..20d818e 100644
--- a/mpn/generic/gcdext_1.c
+++ b/mpn/generic/gcdext_1.c
@@ -55,7 +55,7 @@ mpn_gcdext_1 (mp_limb_signed_t *sp, mp_limb_signed_t *tp,
V = s1 u + s0 v
where U, V are the inputs (without any shared power of two),
- and the matris has determinant ± 2^{shift}.
+ and the matrix has determinant ± 2^{shift}.
*/
mp_limb_t s0 = 1;
mp_limb_t t0 = 0;
diff --git a/mpn/generic/get_str.c b/mpn/generic/get_str.c
index e17497c..a604ca0 100644
--- a/mpn/generic/get_str.c
+++ b/mpn/generic/get_str.c
@@ -102,7 +102,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
/* The x86s and m68020 have a quotient and remainder "div" instruction and
- gcc recognises an adjacent "/" and "%" can be combined using that.
+ gcc recognizes an adjacent "/" and "%" can be combined using that.
Elsewhere "/" and "%" are either separate instructions, or separate
libgcc calls (which unfortunately gcc as of version 3.0 doesn't combine).
A multiply and subtract should be faster than a "%" in those cases. */
diff --git a/mpn/generic/hgcd.c b/mpn/generic/hgcd.c
index f51bbde..5f8503b 100644
--- a/mpn/generic/hgcd.c
+++ b/mpn/generic/hgcd.c
@@ -98,7 +98,7 @@ mpn_hgcd (mp_ptr ap, mp_ptr bp, mp_size_t n,
success = 1;
}
- /* NOTE: It apppears this loop never runs more than once (at
+ /* NOTE: It appears this loop never runs more than once (at
least when not recursing to hgcd_appr). */
while (n > n2)
{
diff --git a/mpn/generic/hgcd_appr.c b/mpn/generic/hgcd_appr.c
index bb8536a..618ece2 100644
--- a/mpn/generic/hgcd_appr.c
+++ b/mpn/generic/hgcd_appr.c
@@ -135,7 +135,7 @@ mpn_hgcd_appr (mp_ptr ap, mp_ptr bp, mp_size_t n,
then shift left extra bits using mpn_shiftr. */
/* NOTE: In the unlikely case that n is large, it would be
preferable to do an initial subdiv step to reduce the size
- before shifting, but that would mean daplicating
+ before shifting, but that would mean duplicating
mpn_gcd_subdiv_step with a bit count rather than a limb
count. */
ap--; bp--;
diff --git a/mpn/generic/hgcd_jacobi.c b/mpn/generic/hgcd_jacobi.c
index 728755a..177b8be 100644
--- a/mpn/generic/hgcd_jacobi.c
+++ b/mpn/generic/hgcd_jacobi.c
@@ -60,7 +60,7 @@ hgcd_jacobi_hook (void *p, mp_srcptr gp, mp_size_t gn,
below the given size s. Return new size for a and b, or 0 if no
more steps are possible.
- If hgcd2 succeds, needs temporary space for hgcd_matrix_mul_1, M->n
+ If hgcd2 succeeds, needs temporary space for hgcd_matrix_mul_1, M->n
limbs, and hgcd_mul_matrix1_inverse_vector, n limbs. If hgcd2
fails, needs space for the quotient, qn <= n - s + 1 limbs, for and
hgcd_matrix_update_q, qn + (size of the appropriate column of M) <=
diff --git a/mpn/generic/hgcd_step.c b/mpn/generic/hgcd_step.c
index 740c56b..fbb0792 100644
--- a/mpn/generic/hgcd_step.c
+++ b/mpn/generic/hgcd_step.c
@@ -51,7 +51,7 @@ hgcd_hook (void *p, mp_srcptr gp, mp_size_t gn,
below the given size s. Return new size for a and b, or 0 if no
more steps are possible.
- If hgcd2 succeds, needs temporary space for hgcd_matrix_mul_1, M->n
+ If hgcd2 succeeds, needs temporary space for hgcd_matrix_mul_1, M->n
limbs, and hgcd_mul_matrix1_inverse_vector, n limbs. If hgcd2
fails, needs space for the quotient, qn <= n - s limbs, for and
hgcd_matrix_update_q, qn + (size of the appropriate column of M) <=
diff --git a/mpn/generic/invertappr.c b/mpn/generic/invertappr.c
index 6430d2e..747c420 100644
--- a/mpn/generic/invertappr.c
+++ b/mpn/generic/invertappr.c
@@ -37,7 +37,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
#include "gmp-impl.h"
#include "longlong.h"
-/* FIXME: The iterative version splits the operand in two slighty unbalanced
+/* FIXME: The iterative version splits the operand in two slightly unbalanced
parts, the use of log_2 (or counting the bits) underestimate the maximum
number of iterations. */
diff --git a/mpn/generic/mod_1.c b/mpn/generic/mod_1.c
index 66c332e..0474c8b 100644
--- a/mpn/generic/mod_1.c
+++ b/mpn/generic/mod_1.c
@@ -59,7 +59,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
#endif
#if TUNE_PROGRAM_BUILD && !HAVE_NATIVE_mpn_mod_1_1p
-/* Duplicates declaratinos in tune/speed.h */
+/* Duplicates declarations in tune/speed.h */
mp_limb_t mpn_mod_1_1p_1 (mp_srcptr, mp_size_t, mp_limb_t, mp_limb_t [4]);
mp_limb_t mpn_mod_1_1p_2 (mp_srcptr, mp_size_t, mp_limb_t, mp_limb_t [4]);
diff --git a/mpn/generic/mode1o.c b/mpn/generic/mode1o.c
index e8978a4..dae9384 100644
--- a/mpn/generic/mode1o.c
+++ b/mpn/generic/mode1o.c
@@ -153,7 +153,7 @@ mpn_modexact_1c_odd (mp_srcptr src, mp_size_t size, mp_limb_t d,
{
/* With high<=d the final step can be a subtract and addback. If c==0
then the addback will restore to l>=0. If c==d then will get l==d
- if s==0, but that's ok per the function definition. */
+ if s==0, but that's OK per the function definition. */
l = c - s;
if (c < s)
diff --git a/mpn/generic/mu_div_qr.c b/mpn/generic/mu_div_qr.c
index b7aaa70..be13994 100644
--- a/mpn/generic/mu_div_qr.c
+++ b/mpn/generic/mu_div_qr.c
@@ -65,7 +65,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
#include "gmp-impl.h"
-/* FIXME: The MU_DIV_QR_SKEW_THRESHOLD was not analysed properly. It gives a
+/* FIXME: The MU_DIV_QR_SKEW_THRESHOLD was not analyzed properly. It gives a
speedup according to old measurements, but does the decision mechanism
really make sense? It seem like the quotient between dn and qn might be
what we really should be checking. */
@@ -166,7 +166,7 @@ mpn_mu_div_qr2 (mp_ptr qp,
#if 1
/* This alternative inverse computation method gets slightly more accurate
- results. FIXMEs: (1) Temp allocation needs not analysed (2) itch function
+ results. FIXMEs: (1) Temp allocation needs not analyzed (2) itch function
not adapted (3) mpn_invertappr scratch needs not met. */
ip = scratch;
tp = scratch + in + 1;
diff --git a/mpn/generic/mu_divappr_q.c b/mpn/generic/mu_divappr_q.c
index 0e9afa3..389afe3 100644
--- a/mpn/generic/mu_divappr_q.c
+++ b/mpn/generic/mu_divappr_q.c
@@ -92,7 +92,7 @@ mpn_mu_divappr_q (mp_ptr qp,
#if 1
/* This alternative inverse computation method gets slightly more accurate
- results. FIXMEs: (1) Temp allocation needs not analysed (2) itch function
+ results. FIXMEs: (1) Temp allocation needs not analyzed (2) itch function
not adapted (3) mpn_invertappr scratch needs not met. */
ip = scratch;
tp = scratch + in + 1;
diff --git a/mpn/generic/mullo_n.c b/mpn/generic/mullo_n.c
index 8c39b2b..e890999 100644
--- a/mpn/generic/mullo_n.c
+++ b/mpn/generic/mullo_n.c
@@ -74,7 +74,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
ML(n) = 2*ML(an) + M((1-a)n) => k*M(n) = 2*k*M(n)*a^e + M(n)*(1-a)^e
- Given a value for e, want to minimise the value of k, i.e. the
+ Given a value for e, want to minimize the value of k, i.e. the
function k=(1-a)^e/(1-2*a^e).
With e=2, the exponent for schoolbook multiplication, the minimum is
diff --git a/mpn/generic/rootrem.c b/mpn/generic/rootrem.c
index b366412..635996e 100644
--- a/mpn/generic/rootrem.c
+++ b/mpn/generic/rootrem.c
@@ -332,7 +332,7 @@ mpn_rootrem_internal (mp_ptr rootp, mp_ptr remp, mp_srcptr up, mp_size_t un,
/* 7: current buffers: {sp,sn}, {qp,qn} */
- ASSERT_ALWAYS (bn >= qn); /* this is ok since in the case qn > bn
+ ASSERT_ALWAYS (bn >= qn); /* this is OK since in the case qn > bn
above, q is set to 2^b-1, which has
exactly bn limbs */
@@ -394,7 +394,7 @@ mpn_rootrem_internal (mp_ptr rootp, mp_ptr remp, mp_srcptr up, mp_size_t un,
mpn_sub (rp, rp, rn, qp, qn);
MPN_NORMALIZE (rp, rn);
}
- /* otherwise we have rn > 0, thus the return value is ok */
+ /* otherwise we have rn > 0, thus the return value is OK */
/* 11: current buffers: {sp,sn}, {rp,rn}, {wp,wn} */
}
diff --git a/mpn/generic/sbpi1_div_sec.c b/mpn/generic/sbpi1_div_sec.c
index aaa1b4b..afef3db 100644
--- a/mpn/generic/sbpi1_div_sec.c
+++ b/mpn/generic/sbpi1_div_sec.c
@@ -48,7 +48,7 @@ with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
remainders, which we reduce later, as described above.
In order to keep quotients from getting too big, corresponding to a negative
- partial remainder, we use an inverse which is sligtly smaller than usually.
+ partial remainder, we use an inverse which is slightly smaller than usual.
*/
#if OPERATION_sbpi1_div_qr_sec
diff --git a/mpn/generic/set_str.c b/mpn/generic/set_str.c
index fd3c595..0368b84 100644
--- a/mpn/generic/set_str.c
+++ b/mpn/generic/set_str.c
@@ -236,7 +236,7 @@ mpn_dc_set_str (mp_ptr rp, const unsigned char *str, size_t str_len,
if (hn == 0)
{
- /* Zero +1 limb here, to avoid reading an allocated but uninitialised
+ /* Zero +1 limb here, to avoid reading an allocated but uninitialized
limb in mpn_incr_u below. */
MPN_ZERO (rp, powtab->n + sn + 1);
}
diff --git a/mpn/generic/toom_interpolate_5pts.c b/mpn/generic/toom_interpolate_5pts.c
index 8416b64..958e30c 100644
--- a/mpn/generic/toom_interpolate_5pts.c
+++ b/mpn/generic/toom_interpolate_5pts.c
@@ -141,7 +141,7 @@ mpn_toom_interpolate_5pts (mp_ptr c, mp_ptr v2, mp_ptr vm1,
1 0 1 0 0; v1
0 1 0 1 0; vm1
0 0 0 0 1] v0
- Some vaues already are in-place (we added vm1 in the correct position)
+ Some values already are in-place (we added vm1 in the correct position)
| vinf| v1 | v0 |
| vm1 |
One still is in a separated area
diff --git a/mpn/sparc64/mode1o.c b/mpn/sparc64/mode1o.c
index 5ec97c5..dfb34fc 100644
--- a/mpn/sparc64/mode1o.c
+++ b/mpn/sparc64/mode1o.c
@@ -118,7 +118,7 @@ mpn_modexact_1c_odd (mp_srcptr src, mp_size_t size, mp_limb_t d, mp_limb_t orig_
{
/* With high s <= d the final step can be a subtract and addback.
If c==0 then the addback will restore to l>=0. If c==d then
- will get l==d if s==0, but that's ok per the function
+ will get l==d if s==0, but that's OK per the function
definition. */
l = c - s;
@@ -162,7 +162,7 @@ mpn_modexact_1c_odd (mp_srcptr src, mp_size_t size, mp_limb_t d, mp_limb_t orig_
{
/* With high s <= d the final step can be a subtract and addback.
If c==0 then the addback will restore to l>=0. If c==d then
- will get l==d if s==0, but that's ok per the function
+ will get l==d if s==0, but that's OK per the function
definition. */
l = c - s;
diff --git a/mpn/x86/fat/fat.c b/mpn/x86/fat/fat.c
index bb42eb9..165c03f 100644
--- a/mpn/x86/fat/fat.c
+++ b/mpn/x86/fat/fat.c
@@ -387,9 +387,9 @@ __gmpn_cpuvec_init (void)
case 0x0f: /* k8 */
case 0x11: /* "fam 11h", mix of k8 and k10 */
- case 0x13: /* unknown, conservativeky assume k8 */
- case 0x16: /* unknown, conservativeky assume k8 */
- case 0x17: /* unknown, conservativeky assume k8 */
+ case 0x13: /* unknown, conservatively assume k8 */
+ case 0x16: /* unknown, conservatively assume k8 */
+ case 0x17: /* unknown, conservatively assume k8 */
TRACE (printf (" k8\n"));
CPUVEC_SETUP_k7;
CPUVEC_SETUP_k7_mmx;
diff --git a/mpz/2fac_ui.c b/mpz/2fac_ui.c
index 2fd7c7f..60ccc87 100644
--- a/mpz/2fac_ui.c
+++ b/mpz/2fac_ui.c
@@ -62,7 +62,7 @@ mpz_2fac_ui (mpz_ptr x, unsigned long n)
mp_limb_t *factors, prod, max_prod, j;
TMP_SDECL;
- /* FIXME: we might alloc a fixed ammount 1+FAC_2DSC_THRESHOLD/FACTORS_PER_LIMB */
+ /* FIXME: we might alloc a fixed amount 1+FAC_2DSC_THRESHOLD/FACTORS_PER_LIMB */
TMP_SMARK;
factors = TMP_SALLOC_LIMBS (1 + n / (2 * FACTORS_PER_LIMB));
diff --git a/mpz/bin_uiui.c b/mpz/bin_uiui.c
index d86fb29..73c4e90 100644
--- a/mpz/bin_uiui.c
+++ b/mpz/bin_uiui.c
@@ -149,7 +149,7 @@ typedef mp_limb_t (* mulfunc_t) (mp_limb_t);
static const mulfunc_t mulfunc[] = {mul1,mul2,mul3,mul4,mul5,mul6,mul7,mul8};
#define M (numberof(mulfunc))
-/* Number of factors-of-2 removed by the corresponding mulN functon. */
+/* Number of factors-of-2 removed by the corresponding mulN function. */
static const unsigned char tcnttab[] = {0, 1, 1, 2, 2, 4, 4, 6};
#if 1
diff --git a/mpz/combit.c b/mpz/combit.c
index 34f4967..4f45e65 100644
--- a/mpz/combit.c
+++ b/mpz/combit.c
@@ -56,7 +56,7 @@ mpz_combit (mpz_ptr d, mp_bitcnt_t bit_index)
{
/* We toggle a zero bit, subtract from the absolute value. */
MPN_DECR_U (dp + limb_index, dsize - limb_index, bit);
- /* The absolute value shrinked by at most one bit. */
+ /* The absolute value shrunk by at most one bit. */
dsize -= dp[dsize - 1] == 0;
ASSERT (dsize > 0 && dp[dsize - 1] != 0);
SIZ (d) = -dsize;
diff --git a/mpz/jacobi.c b/mpz/jacobi.c
index 0a8fb29..ea963d3 100644
--- a/mpz/jacobi.c
+++ b/mpz/jacobi.c
@@ -175,7 +175,7 @@ mpz_jacobi (mpz_srcptr a, mpz_srcptr b)
/* In the case of even B, we conceptually shift out the powers of two first,
and then divide A mod B. Hence, when taking those powers of two into
account, we must use alow *before* the division. Doing the actual division
- first is ok, because the point is to remove multiples of B from A, and
+ first is OK, because the point is to remove multiples of B from A, and
multiples of 2^k B are good enough. */
if (asize > bsize)
mpn_tdiv_qr (bp, ap, 0, asrcp, asize, bsrcp, bsize);
diff --git a/mpz/oddfac_1.c b/mpz/oddfac_1.c
index e1ce119..02f6546 100644
--- a/mpz/oddfac_1.c
+++ b/mpz/oddfac_1.c
@@ -267,7 +267,7 @@ mpz_2multiswing_1 (mpz_ptr x, mp_limb_t n, mp_ptr sieve, mp_ptr factors)
If n is too small, flag is ignored, and an ASSERT can be triggered.
TODO: FAC_DSC_THRESHOLD is used here with two different roles:
- - to decide when prime factorisation is needed,
+ - to decide when prime factorization is needed,
- to stop the recursion, once sieving is done.
Maybe two thresholds can do a better job.
*/
diff --git a/mpz/prodlimbs.c b/mpz/prodlimbs.c
index 8676887..dc411f9 100644
--- a/mpz/prodlimbs.c
+++ b/mpz/prodlimbs.c
@@ -38,7 +38,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
/* Computes the product of the j>1 limbs pointed by factors, puts the
* result in x. It assumes that all limbs are non-zero. Above
- * Karatsuba's threshold it uses a binary splitting startegy, to gain
+ * Karatsuba's threshold it uses a binary splitting strategy, to gain
* speed by the asymptotically fast multiplication algorithms.
*
* The list in {factors, j} is overwritten.
diff --git a/mpz/ui_pow_ui.c b/mpz/ui_pow_ui.c
index 4a0f7bd..382a7ac 100644
--- a/mpz/ui_pow_ui.c
+++ b/mpz/ui_pow_ui.c
@@ -36,7 +36,7 @@ mpz_ui_pow_ui (mpz_ptr r, unsigned long b, unsigned long e)
#endif
{
#ifdef _LONG_LONG_LIMB
- /* i386 gcc 2.95.3 doesn't recognise blimb can be eliminated when
+ /* i386 gcc 2.95.3 doesn't recognize blimb can be eliminated when
mp_limb_t is an unsigned long, so only use a separate blimb when
necessary. */
mp_limb_t blimb = b;
diff --git a/nextprime.c b/nextprime.c
index f0b01d6..2ee193c 100644
--- a/nextprime.c
+++ b/nextprime.c
@@ -36,7 +36,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
3. For primes p >= SIEVESIZE, i.e., typically the majority of primes, we
perform more than one division per sieving write. That might dominate the
- entire run time for the nextprime function. A incrementally initialised
+ entire run time for the nextprime function. An incrementally initialized
remainder table of Pi(65536) = 6542 16-bit entries could replace that
division.
*/
diff --git a/printf/doprnt.c b/printf/doprnt.c
index c1ee0a2..4b898a3 100644
--- a/printf/doprnt.c
+++ b/printf/doprnt.c
@@ -73,7 +73,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
/* printf is convenient because it allows various types to be printed in one
fairly compact call, so having gmp_printf support the standard types as
well as the gmp ones is important. This ends up meaning all the standard
- parsing must be duplicated, to get a new routine recognising the gmp
+ parsing must be duplicated, to get a new routine recognizing the gmp
extras.
With the currently favoured handling of mpz etc as Z, Q and F type
diff --git a/scanf/doscan.c b/scanf/doscan.c
index 2c5b1d9..f6c266c 100644
--- a/scanf/doscan.c
+++ b/scanf/doscan.c
@@ -67,7 +67,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
/* General:
- It's necessary to parse up the format string to recognise the GMP
+ It's necessary to parse up the format string to recognize the GMP
extra types F, Q and Z. Other types and conversions are passed
across to the standard sscanf or fscanf via funs->scan, for ease of
implementation. This is essential in the case of something like glibc
@@ -161,7 +161,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
Consideration was given to using separate code for gmp_fscanf and
gmp_sscanf. The sscanf case could zip across a string doing literal
- matches or recognising digits in gmpscan, rather than making a
+ matches or recognizing digits in gmpscan, rather than making a
function call fun->get per character. The fscanf could use getc
rather than fgetc too, which might help those systems where getc is a
macro or otherwise inlined. But none of this scanning and converting
diff --git a/tests/amd64check.c b/tests/amd64check.c
index 7c313f3..88da895 100644
--- a/tests/amd64check.c
+++ b/tests/amd64check.c
@@ -63,7 +63,7 @@ const char *regname[6] = {"rbx", "rbp", "r12", "r13", "r14", "r15"};
#define DIR_BIT(rflags) (((rflags) & (1<<10)) != 0)
-/* Return 1 if ok, 0 if not */
+/* Return 1 if OK, 0 if not */
int
calling_conventions_check (void)
diff --git a/tests/arm32check.c b/tests/arm32check.c
index 5e8f837..d665d2f 100644
--- a/tests/arm32check.c
+++ b/tests/arm32check.c
@@ -67,7 +67,7 @@ mp_limb_t calling_conventions_values[29] =
#define GOT_CALLEE_SAVES 17
#define JUNK_PARAMS 25
-/* Return 1 if ok, 0 if not */
+/* Return 1 if OK, 0 if not */
int
calling_conventions_check (void)
diff --git a/tests/cxx/t-istream.cc b/tests/cxx/t-istream.cc
index 6cd806b..367f25e 100644
--- a/tests/cxx/t-istream.cc
+++ b/tests/cxx/t-istream.cc
@@ -42,7 +42,7 @@ bool option_check_standard = false;
// On some versions of g++ 2.96 it's been observed that putback() may leave
// tellg() unchanged. We believe this is incorrect and presumably the
-// result of a bug, since for instance it's ok in g++ 2.95 and g++ 3.3. We
+// result of a bug, since for instance it's OK in g++ 2.95 and g++ 3.3. We
// detect the problem at runtime and disable affected checks.
bool putback_tellg_works = true;
diff --git a/tests/devel/try.c b/tests/devel/try.c
index 48d7f5e..1412422 100644
--- a/tests/devel/try.c
+++ b/tests/devel/try.c
@@ -92,7 +92,7 @@ the GNU MP Library test suite. If not, see http://www.gnu.org/licenses/. */
hard to see what iterations are actually done.
Perhaps specific setups and loops for each style of function under test
- would be clearer than a parameterized general loop. There's lots of
+ would be clearer than a parametrized general loop. There's lots of
stuff common to all functions, but the exceptions get messy.
When there's no overlap, run with both src>dst and src<dst. A subtle
@@ -3312,7 +3312,7 @@ try_init (void)
#if HAVE_SYSCONF
if ((pagesize = sysconf (_SC_PAGESIZE)) == -1)
{
- /* According to the linux man page, sysconf doesn't set errno */
+ /* According to the Linux man page, sysconf doesn't set errno */
fprintf (stderr, "Cannot get sysconf _SC_PAGESIZE\n");
exit (1);
}
diff --git a/tests/mpn/t-get_d.c b/tests/mpn/t-get_d.c
index a98472f..93b59e6 100644
--- a/tests/mpn/t-get_d.c
+++ b/tests/mpn/t-get_d.c
@@ -72,7 +72,7 @@ check_onebit (void)
/* FIXME: It'd be better to base this on the float format. */
#if defined (__vax) || defined (__vax__)
- int limit = 127; /* vax fp numbers have limited range */
+ int limit = 127; /* VAX fp numbers have limited range */
#else
int limit = 511;
#endif
@@ -420,7 +420,7 @@ check_rand (void)
if (mant_bits == 0)
return;
- /* Allow for vax D format with exponent 127 to -128 only.
+ /* Allow for VAX D format with exponent 127 to -128 only.
FIXME: Do something to probe for a valid exponent range. */
exp_min = -100 - mant_bits;
exp_max = 100 - mant_bits;
diff --git a/tests/mpq/t-cmp.c b/tests/mpq/t-cmp.c
index 9aaed6a..20e4723 100644
--- a/tests/mpq/t-cmp.c
+++ b/tests/mpq/t-cmp.c
@@ -44,7 +44,7 @@ ref_mpq_cmp (mpq_t a, mpq_t b)
}
#ifndef SIZE
-#define SIZE 8 /* increasing this lowers the probabilty of finding an error */
+#define SIZE 8 /* increasing this lowers the probability of finding an error */
#endif
int
diff --git a/tests/mpq/t-cmp_ui.c b/tests/mpq/t-cmp_ui.c
index 5b606f7..571e43f 100644
--- a/tests/mpq/t-cmp_ui.c
+++ b/tests/mpq/t-cmp_ui.c
@@ -44,7 +44,7 @@ ref_mpq_cmp_ui (mpq_t a, unsigned long int bn, unsigned long int bd)
}
#ifndef SIZE
-#define SIZE 8 /* increasing this lowers the probabilty of finding an error */
+#define SIZE 8 /* increasing this lowers the probability of finding an error */
#endif
int
diff --git a/tests/mpz/t-cmp_d.c b/tests/mpz/t-cmp_d.c
index cc86340..c6d611e 100644
--- a/tests/mpz/t-cmp_d.c
+++ b/tests/mpz/t-cmp_d.c
@@ -165,7 +165,7 @@ check_low_z_one (void)
/* FIXME: It'd be better to base this on the float format. */
#if defined (__vax) || defined (__vax__)
-#define LIM 127 /* vax fp numbers have limited range */
+#define LIM 127 /* VAX fp numbers have limited range */
#else
#define LIM 512
#endif
diff --git a/tests/mpz/t-get_d.c b/tests/mpz/t-get_d.c
index c9f2a90..9d4ddcc 100644
--- a/tests/mpz/t-get_d.c
+++ b/tests/mpz/t-get_d.c
@@ -32,7 +32,7 @@ check_onebit (void)
double got, want;
/* FIXME: It'd be better to base this on the float format. */
#if defined (__vax) || defined (__vax__)
- int limit = 127 - 1; /* vax fp numbers have limited range */
+ int limit = 127 - 1; /* VAX fp numbers have limited range */
#else
int limit = 512;
#endif
diff --git a/tests/tests.h b/tests/tests.h
index 1075e82..10be3ea 100644
--- a/tests/tests.h
+++ b/tests/tests.h
@@ -88,7 +88,7 @@ int calling_conventions_check (void);
#define CALLING_CONVENTIONS_CHECK() (calling_conventions_check())
#else
#define CALLING_CONVENTIONS(function) (function)
-#define CALLING_CONVENTIONS_CHECK() 1 /* always ok */
+#define CALLING_CONVENTIONS_CHECK() 1 /* always OK */
#endif
diff --git a/tests/x86check.c b/tests/x86check.c
index 8fa0f06..8f542d2 100644
--- a/tests/x86check.c
+++ b/tests/x86check.c
@@ -69,7 +69,7 @@ const char *regname[] = {"ebx", "ebp", "esi", "edi"};
#define DIR_BIT(eflags) (((eflags) & (1<<10)) != 0)
-/* Return 1 if ok, 0 if not */
+/* Return 1 if OK, 0 if not */
int
calling_conventions_check (void)
diff --git a/tune/freq.c b/tune/freq.c
index f1092e2..e890ef2 100644
--- a/tune/freq.c
+++ b/tune/freq.c
@@ -271,7 +271,7 @@ freq_sysctlbyname_tsc_freq (int help)
}
-/* Apple powerpc Darwin 1.3 sysctl hw.cpufrequency is in hertz. For some
+/* Apple PowerPC Darwin 1.3 sysctl hw.cpufrequency is in hertz. For some
reason only seems to be available from sysctl(), not sysctlbyname(). */
static int
@@ -344,7 +344,7 @@ freq_sysctl_hw_model (int help)
}
-/* /proc/cpuinfo for linux kernel.
+/* /proc/cpuinfo for Linux kernel.
Linux doesn't seem to have any system call to get the CPU frequency, at
least not in 2.0.x or 2.2.x, so it's necessary to read /proc/cpuinfo.
@@ -360,7 +360,7 @@ freq_sysctl_hw_model (int help)
alpha 2.2.18pre21 - "cycle frequency [Hz]" is 0 on at least one system,
"BogoMIPS" seems near enough.
- powerpc 2.2.19 - "clock" is the frequency, bogomips is something weird
+ PowerPC 2.2.19 - "clock" is the frequency, bogomips is something weird
*/
static int
diff --git a/tune/noop.c b/tune/noop.c
index 7c7f1b5..81cfcd9 100644
--- a/tune/noop.c
+++ b/tune/noop.c
@@ -1,6 +1,6 @@
/* Noop routines.
- These are in a separate file to stop gcc recognising do-nothing functions
+ These are in a separate file to stop gcc recognizing do-nothing functions
and optimizing away calls to them. */
/*
diff --git a/tune/speed.c b/tune/speed.c
index ff04b41..3b3e31b 100644
--- a/tune/speed.c
+++ b/tune/speed.c
@@ -1338,7 +1338,7 @@ main (int argc, char *argv[])
{
#if HAVE_GETRUSAGE
{
- /* This doesn't give data sizes on linux 2.0.x, only utime. */
+ /* This doesn't give data sizes on Linux 2.0.x, only utime. */
struct rusage r;
if (getrusage (RUSAGE_SELF, &r) != 0)
perror ("getrusage");
diff --git a/tune/speed.h b/tune/speed.h
index b68993f..9331de2 100644
--- a/tune/speed.h
+++ b/tune/speed.h
@@ -787,7 +787,7 @@ int speed_routine_count_zeros_setup (struct speed_params *, mp_ptr, int, int);
TMP_MARK; \
SPEED_TMP_ALLOC_LIMBS (wp, s->size, s->align_wp); \
\
- /* (don't have a mechnanism to specify zp alignments) */ \
+ /* (don't have a mechanism to specify zp alignments) */ \
for (i = 0; i < K; i++) \
SPEED_TMP_ALLOC_LIMBS (zp[i], s->size, 0); \
\
diff --git a/tune/time.c b/tune/time.c
index a9e684e..b88c050 100644
--- a/tune/time.c
+++ b/tune/time.c
@@ -1,4 +1,4 @@
-/* Time routines for speed measurments.
+/* Time routines for speed measurements.
Copyright 1999, 2000, 2001, 2002, 2003, 2004, 2010, 2011, 2012 Free Software
Foundation, Inc.
@@ -113,7 +113,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
Not done:
- Solaris gethrtime() seems no more than a slow way to access the Sparc V9
+ Solaris gethrtime() seems no more than a slow way to access the SPARC V9
cycle counter. gethrvtime() seems to be relevant only to light weight
processes, it doesn't for instance give nanosecond virtual time. So
neither of these are used.
@@ -139,7 +139,7 @@ along with the GNU MP Library. If not, see http://www.gnu.org/licenses/. */
On PowerPC the timebase registers could be used, but would have to do
something to find out the speed. On 6xx chips it's normally 1/4 bus
speed, on 4xx chips it's either that or an external clock. Measuring
- against gettimeofday might be ok. */
+ against gettimeofday might be OK. */
#include "config.h"
@@ -289,7 +289,7 @@ static const int have_mftb = 1;
#else
#define MFTB(a) mftb_function (a)
#endif
-#else /* ! powerpc */
+#else /* ! PowerPC */
static const int have_mftb = 0;
#define MFTB(a) \
do { \
@@ -457,11 +457,11 @@ cycles_works_p (void)
if (result != -1)
goto done;
- /* FIXME: On linux, the cycle counter is not saved and restored over
+ /* FIXME: On Linux, the cycle counter is not saved and restored over
* context switches, making it almost useless for precise cputime
* measurements. When available, it's better to use clock_gettime,
* which seems to have reasonable accuracy (tested on x86_32,
- * linux-2.6.26, glibc-2.7). However, there are also some linux
+ * linux-2.6.26, glibc-2.7). However, there are also some Linux
* systems where clock_gettime is broken in one way or the other,
* like CLOCK_PROCESS_CPUTIME_ID not implemented (easy case) or
* kind-of implemented but broken (needs code to detect that), and
@@ -469,7 +469,7 @@ cycles_works_p (void)
* fallback.
*
* So we need some code to disable the cycle counter on some but not
- * all linux systems. */
+ * all Linux systems. */
#ifdef SIGILL
{
RETSIGTYPE (*old_handler) (int);
@@ -965,7 +965,7 @@ sgi_works_p (void)
return result;
}
- /* address of least significant 4 bytes, knowing mips is big endian */
+ /* address of least significant 4 bytes, knowing MIPS is big endian */
sgi_addr = (unsigned *) ((char *) virtpage + offset
+ size/8 - sizeof(unsigned));
result = 1;
diff --git a/tune/tuneup.c b/tune/tuneup.c
index 20f9161..0d24a97 100644
--- a/tune/tuneup.c
+++ b/tune/tuneup.c
@@ -1828,7 +1828,7 @@ tune_gcdext_dc (void)
/* In tune_powm_sec we compute the table used by the win_size function. The
cutoff points are in exponent bits, disregarding other operand sizes. It is
- not possible to use the one framework since it currently uses a granilarity
+ not possible to use the one framework since it currently uses a granularity
of full limbs.
*/
--
1.8.3.2