Calling __fpclassify() directly gives the correct result, but fpclassify(double) does not. It looks like the fpclassify macro picks up the wrong double-underscore variant, one that checks for the wrong bit pattern.
gcc 7.1.0 (mingw-builds) x86_64-w64-mingw32
Demo:
#include <math.h>
#include <stdio.h>

const char* to_string(int x)
{
    switch(x)
    {
    case FP_INFINITE:  return "Inf";
    case FP_NAN:       return "NaN";
    case FP_NORMAL:    return "normal";
    case FP_SUBNORMAL: return "subnormal";
    case FP_ZERO:      return "zero";
    default:           return "?";
    }
}

#define classify(x) do_classify(#x, x)

void do_classify(const char* msg, double x)
{
    if(fpclassify(x) == __fpclassify(x))
        printf("%-12s ---> %s\n", msg, to_string(fpclassify(x)));
    else
        printf("%-12s ---> %s vs %s\n", msg, to_string(fpclassify(x)), to_string(__fpclassify(x)));
}

int main(int argc, char**)
{
    // so far so good
    classify(5.0);
    classify(0.0);
    classify(-0.0);
    // now comes...
    classify(1.0/0.0);
    classify(2.0*DBL_MAX);
    classify(0.0/0.0);
    classify(log(-1.0));
    // be sure this is no weird literal/optimizer artefact
    // compiler cannot know that argc will be 1
    volatile double d = (double) argc * DBL_MAX;
    classify(d); // OK
    d *= 2.0;
    classify(d); // not OK
    d = log(argc - 2.0);
    classify(d); // not OK
    return 0;
}
Output:
5.0 ---> normal
0.0 ---> zero
-0.0 ---> zero
1.0/0.0 ---> normal vs Inf
2.0*DBL_MAX ---> normal vs Inf
0.0/0.0 ---> normal vs NaN
log(-1.0) ---> normal vs NaN
d ---> normal
d ---> normal vs Inf
d ---> normal vs NaN
What is the result of your preprocessed source (gcc -E)? After including the missing float.h (for DBL_MAX) and adding the parameter argv to the code above so that it's actually ANSI C: