Setting the buffer size to 0 used to switch the buffering mode. That was a bad idea, since it wasn't actually called for: switching modes is what the 'bufmode' parameter is intended for. Now a size of 0 selects the default buffer size (BUFSIZ), which roughly follows what the BSD libc did: there, a zero buffer size merely delayed the allocation of a buffer until the first read/write access occurred.
gets() now copies as much data from the read buffer as possible in one go, and falls back on the __getc() macro only when necessary. This should improve performance on long lines, or crash faster if the destination buffer happens to be too short. This is probably wasted on gets(), but you never know...
If the buffer mode is set to "no buffering" then fread() will always bypass the buffer and call read() instead.
If there is enough data waiting to be read from the buffer, fread() will now copy it directly, refilling the buffer as needed.
If the read buffer happens to be empty, buffering is enabled for the stream, and the number of bytes to read is at least as large as the buffer size, then fread() will directly call read(), which should improve performance significantly.
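The three fread() cases above can be sketched roughly as follows. The struct layout and the function name stream_read() are hypothetical, for illustration only; the real FILE internals differ.

```c
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical stream layout, not the real FILE structure. */
struct stream {
    int    fd;
    char  *buf;      /* read buffer, NULL when unbuffered */
    size_t bufsize;  /* capacity of buf */
    size_t pos, len; /* consumed / valid bytes in buf */
};

/* Sketch of the fread() fast paths described above. */
static size_t stream_read(struct stream *s, char *dst, size_t want)
{
    size_t copied = 0;
    ssize_t n;

    /* "No buffering": always bypass the buffer and call read(). */
    if (s->buf == NULL) {
        n = read(s->fd, dst, want);
        return n > 0 ? (size_t)n : 0;
    }

    /* Drain whatever is already waiting in the buffer. */
    if (s->len > s->pos) {
        size_t avail = s->len - s->pos;
        size_t take = want < avail ? want : avail;
        memcpy(dst, s->buf + s->pos, take);
        s->pos += take;
        copied += take;
        want -= take;
    }

    /* Buffer now empty and the request is at least one buffer's
     * worth: read() straight into the caller's memory. */
    if (want >= s->bufsize) {
        n = read(s->fd, dst + copied, want);
        return copied + (n > 0 ? (size_t)n : 0);
    }

    /* Small remainder: refill the buffer, then copy from it. */
    if (want > 0) {
        n = read(s->fd, s->buf, s->bufsize);
        if (n > 0) {
            s->pos = 0;
            s->len = (size_t)n;
            size_t take = want < s->len ? want : s->len;
            memcpy(dst + copied, s->buf, take);
            s->pos = take;
            copied += take;
        }
    }
    return copied;
}
```

The point of the large-request path is that the data never touches the stream buffer at all, saving one memcpy() per buffer's worth of data.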
fgets() now copies as much data from the read buffer as possible, falling back on the __getc() macro only as a last resort. This should help greatly when reading long lines, since the per-byte overhead of calling __getc() goes away.
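The buffered fast path amounts to something like the helper below: find the next newline with memchr() and copy everything up to and including it in a single memcpy(), instead of fetching bytes one at a time. copy_line() is a hypothetical name, not the actual library code.

```c
#include <stddef.h>
#include <string.h>

/* Copy at most size-1 bytes from the stream's read buffer into dst,
 * stopping after the first newline, and NUL-terminate the result.
 * Returns the number of bytes consumed from buf. */
static size_t copy_line(char *dst, size_t size, const char *buf, size_t avail)
{
    size_t n = (size - 1 < avail) ? size - 1 : avail;
    const char *nl = memchr(buf, '\n', n);

    if (nl != NULL)
        n = (size_t)(nl - buf) + 1;   /* stop after the newline */
    memcpy(dst, buf, n);
    dst[n] = '\0';
    return n;
}
```

A real fgets() would loop, refilling the buffer and appending, until a newline is seen, EOF is hit, or size-1 bytes have been stored.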
Piping has the consequence that the exit status of the first command in
the pipeline is ignored; the shell reports only the status of the last
one. As the first command is the compiler invocation in our case, make
will not exit with an error code even if the compilation failed. While
there are shell-specific solutions (such as bash's 'pipefail' option),
disabling LOG_COMMAND seems to be the most general one.
Added an integer overflow check to calloc().
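The check boils down to refusing the allocation when nmemb * size would wrap around, instead of silently allocating a too-small block. A minimal sketch, with checked_calloc() as an illustrative name rather than the actual library code:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* calloc()-style allocator that rejects overflowing requests. */
void *checked_calloc(size_t nmemb, size_t size)
{
    /* If nmemb > SIZE_MAX / size, nmemb * size overflows size_t. */
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;
    void *p = malloc(nmemb * size);
    if (p != NULL)
        memset(p, 0, nmemb * size);
    return p;
}
```

Without the check, a request like checked_calloc(SIZE_MAX / 2, 3) would wrap to a small product, hand the caller an undersized block, and set up a heap overflow.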
Tiny change to getopt_long() so that the value pointed to by the "longindex" parameter is always initialized to an invalid index position (-1) instead of 0. A value of 0 can break some callers, most notably GNU wget, since it is indistinguishable from a legitimate match on the first long option.
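For context, this is how "longindex" reports which long option matched; the table below is a made-up example, assuming a GNU-style getopt_long(). With the change above, -1 unambiguously means "no long option matched", whereas 0 is also the index of the first table entry.

```c
#include <getopt.h>
#include <stddef.h>

/* Example long-option table: "output" sits at index 1, so a caller
 * seeing longindex == 1 knows --output was matched. */
static const struct option longopts[] = {
    { "verbose", no_argument,       NULL, 'v' },
    { "output",  required_argument, NULL, 'o' },
    { NULL, 0, NULL, 0 }
};
```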