Hi,
The most critical factor for bandwidth is whether the camera is sending an MJPEG stream or full frames (any other stream format). For 1280x720@100, assuming RGB24 full frames, the bandwidth shown in Kinovea should be something like (1280*720*3*100)/(1024*1024) = 263.67MB/s. This is how many bytes per second pass through the primary ring buffer (not the same as the delay buffer; see the diagram below).
So I'm assuming the 1920x1080@50fps stream giving 20MB/s was MJPEG? (That's assuming the camera respects the requested framerate: in low light, some cameras automatically increase the exposure time and lower the framerate; other cameras let you select options that aren't actually supported and send low-fps streams instead.)
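For reference, here is the same arithmetic as a tiny Python sketch (nothing Kinovea-specific, just the formula above); it also shows why 20MB/s can't be an uncompressed 1080p50 stream:

```python
def uncompressed_mbps(width: int, height: int, fps: float, bytes_per_pixel: int = 3) -> float:
    """Bandwidth of a raw RGB24 stream in MB/s (1 MB = 1024*1024 bytes)."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

print(uncompressed_mbps(1280, 720, 100))  # ~263.67 MB/s
print(uncompressed_mbps(1920, 1080, 50))  # ~296.63 MB/s raw, so 20MB/s implies compression
```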
When using an MJPEG stream, the recording mode "Camera" passes the MJPEG frames straight to the output, bypassing decompression/compression, so this is the fastest way to record. Frames are decoded on a separate thread for display.
When using an RGB stream, the frames are used uncompressed for display/compositing and then compressed for output, and this takes some time. My understanding is that this is the current bottleneck.
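As a rough sketch of the two paths (my simplification, not the actual Kinovea code; `decode`, `encode` and the queue/thread names are hypothetical stand-ins):

```python
import queue
import threading

def decode(jpeg: bytes) -> bytes:
    """Hypothetical JPEG -> RGB24 decoder (stand-in for the real one)."""
    return jpeg * 3  # pretend decompression expands the data

def encode(rgb: bytes) -> bytes:
    """Hypothetical RGB24 -> compressed encoder (CPU-heavy in reality)."""
    return rgb[: len(rgb) // 3]  # pretend compression shrinks it back

output: list[bytes] = []            # stand-in for the file writer
display_queue: queue.Queue = queue.Queue()

def record_mjpeg_camera_mode(compressed: bytes) -> None:
    # MJPEG source, "Camera" mode: compressed bytes go straight to the
    # output; a copy is queued for the display thread to decode.
    output.append(compressed)
    display_queue.put(compressed)

def record_rgb_stream(rgb: bytes) -> None:
    # RGB source: the frame is directly usable for display/compositing,
    # but must be encoded before writing -- the expensive step.
    output.append(encode(rgb))

def display_worker() -> None:
    while (item := display_queue.get()) is not None:
        decode(item)                # decoded only for on-screen display

t = threading.Thread(target=display_worker, daemon=True)
t.start()
record_mjpeg_camera_mode(b"\xff\xd8 fake jpeg bytes")
record_rgb_stream(b"\x00" * (1280 * 720 * 3))
display_queue.put(None)             # sentinel to stop the display thread
t.join()
```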
I'm not sure about the truncation, I'll have to do my own experiments. Using a 0-second delay vs. a longer delay shouldn't really make a difference in terms of performance; the frames are sourced from the buffer in the same way. However, there might be a lapse of time where the buffer isn't full yet and there is no frame at the expected delay, in which case nothing is output. Maybe this could explain the truncation and framerate difference?
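As a guess at the mechanism (simplified, not the actual Kinovea code), if the delayed read works something like this, then until the buffer has accumulated `delay` worth of frames the read lands on an empty slot and nothing is written, truncating the start of the recording:

```python
from collections import deque

class DelayBuffer:
    """Toy delay buffer: a ring of the most recent frames."""
    def __init__(self, capacity_frames: int):
        self.frames = deque(maxlen=capacity_frames)

    def push(self, frame: str) -> None:
        self.frames.append(frame)

    def read(self, delay_frames: int):
        # Frame captured `delay_frames` ago, or None if the buffer
        # hasn't been running long enough yet -- nothing to output.
        if len(self.frames) <= delay_frames:
            return None
        return self.frames[-1 - delay_frames]

buf = DelayBuffer(capacity_frames=500)
buf.push("frame-0")
print(buf.read(0))    # 'frame-0': zero delay always has the newest frame
print(buf.read(100))  # None: 100 frames of delay requested, only 1 buffered so far
```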
The next version will have a mode for recording without any compression whatsoever. For now this only applies to the "Camera" recording mode. It increases performance for high-bandwidth cameras that send RGB frames, like USB 3.0 industrial cameras and possibly newer high-end webcams. It takes a ton of disk space though.
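To give an idea of the disk cost, a quick back-of-the-envelope with the 720p100 figures from above:

```python
def raw_gb_per_minute(width: int, height: int, fps: float, bytes_per_pixel: int = 3) -> float:
    """Disk usage of uncompressed RGB24 recording, in GB per minute."""
    return width * height * bytes_per_pixel * fps * 60 / (1024 ** 3)

print(raw_gb_per_minute(1280, 720, 100))  # ~15.4 GB for each minute of footage
```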
I'm looking at options for improving the workflow. I think this self-training use-case is very interesting and Kinovea should support it. The difficult part is that bottlenecks vary depending on people's machines and cameras, and over time as hardware and connection standards evolve.
Here is a diagram of the flow of frames during capture and recording that I did a while ago [ edit: removed the link, this is very much obsolete now ]. Hopefully it's not too cryptic. In the diagram, the "performance path" is what you get when you select recording mode "Camera", and the WYSIWYG path is when you select "Display".