This avoids iteration through the video index for the "interior" recordings of
a virtual file. It cuts the time to generate the size of a ~8-hour / 15 fps
file from about 60 ms to about 10 ms. I expect better savings on a Raspberry
Pi 2, for longer recordings, and for higher frame rates. The total time here
can be significant; on one ~day-long recording on the Pi, it was several
seconds.
I'm optimistic this will help with that.
It'd also be possible to optimize DecodeVar32 itself (perhaps by unrolling
the loop, as sketched below), but it's better to remove a call than to
optimize one.
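For what it's worth, a loop-unrolled decoder might look something like the
following. This assumes a protobuf-style base-128 varint (7 data bits per
byte, least-significant group first, continuation bit in the high bit); both
the encoding and the signature are assumptions, not necessarily what
DecodeVar32 actually looks like:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical unrolled base-128 varint decoder. Returns the number of bytes
// consumed, or 0 on truncated/invalid input.
size_t DecodeVar32Unrolled(const uint8_t* p, size_t len, uint32_t* out) {
  if (len < 1) return 0;
  uint32_t b = p[0];
  uint32_t v = b & 0x7f;
  if (!(b & 0x80)) { *out = v; return 1; }

  if (len < 2) return 0;
  b = p[1];
  v |= (b & 0x7f) << 7;
  if (!(b & 0x80)) { *out = v; return 2; }

  if (len < 3) return 0;
  b = p[2];
  v |= (b & 0x7f) << 14;
  if (!(b & 0x80)) { *out = v; return 3; }

  if (len < 4) return 0;
  b = p[3];
  v |= (b & 0x7f) << 21;
  if (!(b & 0x80)) { *out = v; return 4; }

  if (len < 5) return 0;
  b = p[4];
  v |= (b & 0x0f) << 28;  // only 4 bits remain for a 32-bit value
  if (!(b & 0x80)) { *out = v; return 5; }
  return 0;  // overlong encoding
}
```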
To add the fast path, we need a new field "video_sync_samples" in the
recording table to calculate the length of the stss table. Storage cost should
be minimal; I think typically two bytes in SQLite's record format (serial type
1, value < 128), described here: <https://www.sqlite.org/fileformat2.html>.
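With that field, the stss length is a closed-form computation rather than a
walk over the index. A minimal sketch (the helper name is hypothetical; the
byte counts follow ISO/IEC 14496-12):

```cpp
#include <cstdint>

// stss is a full box: 8-byte box header, 4 bytes of version/flags, a 4-byte
// entry_count, then one 4-byte sample number per sync sample.
int64_t StssBoxBytes(int64_t video_sync_samples) {
  return 8 + 4 + 4 + 4 * video_sync_samples;  // = 16 + 4n
}
```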
* Fix the mdat box size, which wasn't including the length of the box header
itself. (The "mp4file" tool nicely diagnosed this corruption.) The sketch
after this list illustrates the correct accounting.
* Fix the stsc box. The first number of each entry is meant to be a chunk
index, not a sample index. This was causing strange behavior in basically
any video player for multi-recording videos.
* Populate etag and last-modified so that Range: requests can work properly.
The etag must be changed every time the generated file format changes.
There's a serial number constant for this purpose and a test meant to help
catch such problems.
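Here's a minimal sketch of the layout the mdat and stsc fixes imply. The
helper names and byte-appending style are hypothetical, not the actual
generator code; the field layout follows ISO/IEC 14496-12 (big-endian):

```cpp
#include <cstdint>
#include <string>

// Append a big-endian 32-bit value, as MP4 box fields require.
void AppendU32(std::string* out, uint32_t v) {
  out->push_back(static_cast<char>((v >> 24) & 0xff));
  out->push_back(static_cast<char>((v >> 16) & 0xff));
  out->push_back(static_cast<char>((v >> 8) & 0xff));
  out->push_back(static_cast<char>(v & 0xff));
}

// mdat: the size field covers the 8-byte (size, type) header plus the
// payload. (Assumes the total fits in 32 bits; otherwise the 64-bit
// "largesize" form would be needed.)
void AppendMdatHeader(std::string* out, uint64_t payload_bytes) {
  AppendU32(out, static_cast<uint32_t>(8 + payload_bytes));
  out->append("mdat");
}

// One stsc entry per recording: the first field is a 1-based *chunk* index,
// not a sample index.
void AppendStscEntry(std::string* out, uint32_t first_chunk,
                     uint32_t samples_per_chunk) {
  AppendU32(out, first_chunk);       // chunk index
  AppendU32(out, samples_per_chunk); // samples in each such chunk
  AppendU32(out, 1);                 // sample_description_index
}
```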
This is still pretty rough. For example, there's no test coverage of virtual
files based on multiple recordings. The etag and last-modified code are stubs.
And various other conditions aren't tested at all. But it does appear to work
in a test that does a round-trip from a .mp4 file, so it should be a decent
starting point.
This code isn't exactly pretty, particularly the hardcoded lengths, but it
does work. I'll have a different mechanism for calculating the length and
nesting structure for the more dynamic parts of the moov atom. This way is
convenient when generating a single string of mostly static data.
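To illustrate that style (not the actual constant the code uses): a mostly
static box can be emitted as one string literal with its length hardcoded
into the bytes. A plausible ftyp box, for example:

```cpp
// Illustrative only: length 0x14 = 20 bytes is baked into the literal.
static const char kFtyp[] =
    "\x00\x00\x00\x14" "ftyp"  // box size and type
    "isom"                     // major brand
    "\x00\x00\x00\x00"         // minor version
    "isom";                    // one compatible brand
// (Append with sizeof(kFtyp) - 1 to skip the trailing NUL.)
```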