This is still wrong.
The main goal of this API is to avoid allocating lots of new objects on every call to Read.
So what happens is that whoever is responding to your Read call will write at the start of the buffer, overwriting whatever old data was there.
The Read API is designed for streaming algorithms such as ciphers (for example, AES works on blocks of 16 bytes).
Such an algorithm works by eating X bytes at a time, then spitting them back out. What is nice is that, memory-wise, if you need blocks of, let's say, 4096 bytes, you can allocate 4096 bytes once and keep overwriting them, reusing the same buffer each time (and not allocating anything new).
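To make that concrete, here is a minimal sketch of that reuse pattern (process is a hypothetical stand-in for whatever consumes each chunk, rw is your reader):
buf := make([]byte, 4096) // allocated once, reused for every Read
for {
	n, err := rw.Read(buf) // the responder overwrites buf from its start each time
	process(buf[:n])       // hypothetical consumer that only looks at the bytes just read
	if err == io.EOF {
		break
	}
	if err != nil {
		log.Println("- Error reading from the stream:", err)
		return
	}
}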
So the issue you currently have is that Read just overwrites all the data starting from the start of the buffer. To fix that, you need to pass Read a slice of the buffer starting where you would like it to continue:
read := make([]byte, 9999999)
nRead := 0
for {
	n, err := rw.Read(read[nRead:]) // See here
	fmt.Println(DEBUG, n)
	nRead += n
	if err != nil {
		if err == io.EOF {
			break
		} else {
			log.Println("- Error reading from the stream:", err)
			return
		}
	}
}
fmt.Println(DEBUG, nRead)
read = read[:nRead]
Then I have other issues with this code, mainly that it is very memory-wasteful.
For information, in Go, when you create a subslice of a slice, they both share the same underlying storage array (this is exploited in my fix above, because all modifications done to the subslice inside Read land in the read buffer, just at an offset).
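As a quick illustration of that sharing (the values are made up just for the example):
parent := []byte("hello world")
sub := parent[6:]           // sub shares parent's backing array
sub[0] = 'W'                // writing through the subslice...
fmt.Println(string(parent)) // ...is visible through parent: "hello World"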
But that also means that in the last step, read = read[:nRead], even though read is now 1% of what it was previously, Go isn't able to shrink the underlying array, so the whole ~10MB buffer is kept in memory until you stop using read completely.
To release it, add a copy step:
{ // Scoping to avoid keeping nBuf around
	nBuf := make([]byte, nRead) // Make a new buffer which fits the data exactly
	copy(nBuf, read)            // Copy the read buffer into nBuf; no need to slice read, if both buffers aren't the same length copy uses the shorter of the two
	read = nBuf
}
// At this point the old 10MB buffer will be freed upon the next GC
But even with this, this code can't handle inputs longer than 9999999 bytes and is not very optimised.
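For reference, a rough sketch of how you could drop that fixed limit yourself, by growing the buffer with append as it fills (this is roughly what io.ReadAll below does internally, so you shouldn't need to write it by hand):
read := make([]byte, 0, 4096) // start small, grow on demand
for {
	if len(read) == cap(read) {
		// Buffer is full: append forces a reallocation to a larger backing array
		read = append(read, 0)[:len(read)]
	}
	n, err := rw.Read(read[len(read):cap(read)])
	read = read[:len(read)+n]
	if err != nil {
		if err == io.EOF {
			break
		}
		log.Println("- Error reading from the stream:", err)
		return
	}
}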
The way you probably want to do this
If you want the data in order to parse it into JSON, just know that encoding/json has a streaming implementation; you can use it this way:
import "encoding/json"
// ...
// Later in your code
decoder := json.NewDecoder(rw) // Create a cipher json decoder
var result ObjectTypeToDecodeInto
decoder.Decode(&result)
// If you want to stream multiple json objects you can also do it perfectly (on the server side, just append them side by side) :
var otherResult OtherObjectTypeToDecodeInto
decoder.Decode(&otherResult)
If you just want the complete bytes, then use io.ReadAll; it handles the bigger edge cases and has been optimised to not eat 10MB of memory while working:
import "io" // Use "io/ioutil" if you are using go < 1.16 (go 1.15.x and older)
// ...
read, err := io.ReadAll(rw) // Read all there is to read
if err != nil {
// No need to check for io.EOF, ReadAll returns nil upon success
log.Println("- Error reading from the stream:", err)
return
}
// Do whatever you want with read
But if you can, you probably don't want to do this, as it requires holding the complete buffer in memory at once, whereas most things can just hold the part of the data they are working on (like with encoding/json#Decoder).