PR libgfortran/83649 Chunk large reads and writes
It turns out that Linux never reads or writes more than
2147479552 bytes (0x7ffff000, i.e. INT_MAX rounded down to a page
boundary) in a single syscall. For writes this is not a problem, as
libgfortran already contains a loop around write() to handle short
writes. But for reads we cannot simply loop until the requested
number of bytes arrives, since the loop would hang on a short read
when reading from a terminal. Also, there are reports that macOS
fails I/O requests larger than 2 GB. Thus, work around these issues
by doing large reads and writes in chunks.
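In outline, the read side of the fix looks like the sketch below.
This is a standalone illustration, not the committed raw_read from
io/unix.c; the name chunked_read and the bare-fd signature are
simplifications, while MAX_CHUNK matches the ChangeLog:

#include <sys/types.h>
#include <unistd.h>

/* Largest byte count Linux will handle in one read()/write().  */
#define MAX_CHUNK 2147479552

/* Read NBYTE bytes into BUF, issuing at most MAX_CHUNK bytes per
   read() call.  Unlike the write loop, a short read ends the loop
   instead of retrying, so a terminal read cannot block forever.  */
static ssize_t
chunked_read (int fd, void *buf, ssize_t nbyte)
{
  if (nbyte <= MAX_CHUNK)
    return read (fd, buf, nbyte);

  char *p = buf;
  ssize_t done = 0;
  while (done < nbyte)
    {
      ssize_t to_read = nbyte - done;
      if (to_read > MAX_CHUNK)
        to_read = MAX_CHUNK;
      ssize_t trans = read (fd, p + done, to_read);
      if (trans == -1)
        return -1;            /* Propagate the error.  */
      done += trans;
      if (trans < to_read)
        break;                /* Short read: EOF or terminal input.  */
    }
  return done;
}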
The following testcase from the PR, which transfers a kind=1 array
of 2**31+1 bytes (just past the syscall limit), fails on trunk but
works with the patch:
program largewr
  integer(kind=1) :: a(2_8**31+1)
  a = 0
  a(size(a, kind=8)) = 1
  open(10, file="largewr.dat", access="stream", form="unformatted")
  write (10) a
  close(10)
  a(size(a, kind=8)) = 2
  open(10, file="largewr.dat", access="stream", form="unformatted")
  read (10) a
  if (a(size(a, kind=8)) == 1) then
    print *, "All is well"
  else
    print *, "Oh no"
  end if
end program largewr
Regtested on x86_64-pc-linux-gnu, committed to trunk.
libgfortran/ChangeLog:
2018-01-02  Janne Blomqvist  <jb@gcc.gnu.org>

	PR libgfortran/83649
	* io/unix.c (MAX_CHUNK): New define.
	(raw_read): For reads larger than MAX_CHUNK, loop.
	(raw_write): Write no more than MAX_CHUNK bytes per iteration.
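For reference, the write side per the last entry above: the existing
short-write loop stays, and each iteration is merely capped at
MAX_CHUNK. Again a standalone sketch reusing MAX_CHUNK and the
includes from the read sketch, not the committed raw_write:

#include <errno.h>

/* Write NBYTE bytes from BUF, at most MAX_CHUNK per write() call.
   Short writes are retried, as libgfortran's write loop already did;
   the chunk cap is the only new ingredient.  */
static ssize_t
chunked_write (int fd, const void *buf, ssize_t nbyte)
{
  const char *p = buf;
  ssize_t left = nbyte;
  while (left > 0)
    {
      ssize_t chunk = left > MAX_CHUNK ? MAX_CHUNK : left;
      ssize_t trans = write (fd, p, chunk);
      if (trans == -1)
        {
          if (errno == EINTR)
            continue;         /* Interrupted: retry.  */
          return -1;
        }
      p += trans;
      left -= trans;
    }
  return nbyte;
}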
From-SVN: r256074