* CameraDayKey::bounds (used to generate the start and end times of days in
the returned JSON) returned UTC bounds, which didn't match the local-time
days the recordings were actually mapped into. So fetching a day with its
given bounds would return something different. Test and fix it. (See the
sketch after this list.)
* Several time-related tests weren't calling testutil::init(), so they weren't
fixing the time zone to the expected America/Los_Angeles. If the machine
time is set to something else, they would break.
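
As a rough illustration of the first fix (a sketch with chrono/chrono-tz,
not the project's own time types or the real CameraDayKey::bounds
signature): a day's bounds should land on local midnight in the camera's
zone, which is America/Los_Angeles once testutil::init() has pinned it.

    // Hypothetical helper, for illustration only. US DST transitions happen
    // at 02:00, so local midnight is never ambiguous here.
    use chrono::{NaiveDate, TimeZone};
    use chrono_tz::America::Los_Angeles;

    fn day_bounds(date: NaiveDate) -> (i64, i64) {
        let start = Los_Angeles
            .from_local_datetime(&date.and_hms_opt(0, 0, 0).unwrap())
            .single()
            .unwrap();
        let next = date.succ_opt().unwrap();
        let end = Los_Angeles
            .from_local_datetime(&next.and_hms_opt(0, 0, 0).unwrap())
            .single()
            .unwrap();
        (start.timestamp(), end.timestamp())
    }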
I think this is an ffmpeg bug, which I plan to report. In the meantime, this
makes the tests pass. Long-term, even if ffmpeg fixes this, I probably don't
want to continue doing acceptance tests against whatever version of ffmpeg
happens to be installed - my real targets of interest are the latest versions
of Chrome, Firefox, Safari, QuickTime, and VLC.
This was causing Firefox to fail to play multipart .mp4s which trimmed away a
prefix. In the developer console, it said NS_ERROR_DOM_MEDIA_METADATA_ERR
without giving any RESULT_DETAIL, making it a pain to diagnose. Given that the
stss is supposed to be needed for seeking, I'm surprised it didn't have any
immediately obvious impact on Chrome or VLC. Maybe they just took longer to
seek than otherwise necessary.
The bug was that when keeping track of the "next frame num" while constructing
the .mp4, I appended the number in the underlying recording, not the number
post-trimming. That meant the following segments used the wrong numbers. In
some cases, the resulting frame numbers exceeded the total number of samples
in the generated .mp4, which seems to be what Firefox was complaining about.
Running the result
through "ffmpeg -i bad.mp4 -c copy -f mp4 good.mp4" just trimmed away the most
obviously invalid ones, leaving others that didn't point to the frames they
meant to. That was enough to make Firefox start playing the file. /shruggie
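
The gist of the fix, as a sketch with made-up names (not the actual
mp4::Segment builder): advance the running frame number by the frames
actually kept after trimming, not by the recording's full frame count.

    // Hypothetical accounting, for illustration only.
    struct SegmentFrames {
        first_frame_num: u32, // 1-based within the generated .mp4
        frames: u32,          // frames kept after trimming to the requested range
    }

    fn assign_frame_nums(segments: &mut [SegmentFrames]) {
        let mut next_frame_num = 1;
        for s in segments.iter_mut() {
            s.first_frame_num = next_frame_num;
            // The bug was equivalent to adding the untrimmed recording's frame
            // count here, so later segments' sync-sample numbers pointed past
            // (or at) the wrong samples.
            next_frame_num += s.frames;
        }
    }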
The existing tests were all with a single segment, so I added a new one to
catch this. I also added a Debug implementation to recording::Segment and
mp4::Segment.
This was totally broken in commit 1cf27c18. It would serve bytes from the
beginning of the sample file in question, not from the start of the given
range.
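
The fix amounts to honoring the range's start offset. A minimal sketch with
a hypothetical helper (not the actual serving path):

    use std::fs::File;
    use std::io::{Read, Seek, SeekFrom};
    use std::ops::Range;

    // Read exactly the requested byte range of a sample file. The regression
    // was equivalent to skipping the seek, so every range came back starting
    // at byte 0.
    fn read_range(f: &mut File, range: Range<u64>) -> std::io::Result<Vec<u8>> {
        f.seek(SeekFrom::Start(range.start))?;
        let mut buf = vec![0u8; (range.end - range.start) as usize];
        f.read_exact(&mut buf)?;
        Ok(buf)
    }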
it should be exactly 1, but was slightly more because the fraction was
incorrectly 1 rather than 0. I'm not sure if any actual players care about
this, but it was something I noticed when looking into strange edit list
behavior.
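
For context (my reading; the text doesn't name the exact field, but the
.mp4 rate fields such as the edit list's media rate are 16.16 fixed point):
exactly 1 is integer part 1 with fractional part 0, so writing 1 in the
fraction yields 1 + 1/65536.

    // Hypothetical writer for a 16.16 fixed-point rate field.
    fn append_rate(buf: &mut Vec<u8>, integer: u16, fraction: u16) {
        buf.extend_from_slice(&integer.to_be_bytes());
        buf.extend_from_slice(&fraction.to_be_bytes());
    }

    // append_rate(&mut buf, 1, 0); // exactly 1
    // append_rate(&mut buf, 1, 1); // the old, slightly-too-large value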
This is intended to support HTML5 Media Source Extensions, which I expect to
be the most practical way to make a good web UI with a proper scrub bar and
such.
This feature has had very limited testing on Chrome and Firefox, and that was
not entirely successful. More work is needed before it's usable, but this
seems like a helpful progress checkpoint.
This significantly improves safety of the ffmpeg interface. The complex
ABIs aren't accessed directly from Rust. Instead, I have a small C
wrapper which uses the ffmpeg C API and the C headers at compile-time to
determine the proper ABI in the same way any C program using ffmpeg
would, so that the ABI doesn't have to be duplicated in Rust code.
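
Roughly the shape of it (a sketch; the wrapper function name and signature
here are made up, not the real wrapper's interface):

    // The C side is compiled against the installed ffmpeg headers, e.g.:
    //
    //   /* wrapper.c */
    //   #include <libavcodec/avcodec.h>
    //   int64_t moonfire_ffmpeg_packet_pts(const AVPacket *pkt) { return pkt->pts; }
    //
    // so struct layout comes from the headers rather than from a
    // hand-maintained Rust redeclaration. Rust sees only an opaque pointer
    // and a plain integer.
    use std::os::raw::c_void;

    extern "C" {
        fn moonfire_ffmpeg_packet_pts(pkt: *const c_void) -> i64;
    }

    pub fn packet_pts(pkt: *const c_void) -> i64 {
        unsafe { moonfire_ffmpeg_packet_pts(pkt) }
    }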
I've tested with ffmpeg 2.x and ffmpeg 3.x; it seems to work properly
with both where before ffmpeg 3.x caused segfaults.
It still depends on ABI compatibility between the compiled and running
versions. C programs need this, too, and normal shared library
versioning practices provide this guarantee. Log both versions on startup
anyway, to make any mismatch easier to diagnose.
Fixes #7
The benchmarks don't get compiled with the standard "cargo test";
they require "cargo +nightly bench --features=nightly", so I didn't notice
they were broken in the previous commit. Now fixed.
SQLite3 has gotten some noticeable speed improvements in recent releases. This
is a modest improvement on reasonably new platforms, and a pretty significant
one on Raspbian Jessie (which has an incredibly old SQLite3).
This makes a huge difference in the reported time - 863 usec rather than 6
milliseconds on my laptop. Part of the difference is in reqwest client setup
(it apparently initializes an SSL_CTX that is never used in this test), part
fresh connections vs keepalive, part I don't know what. None of it seems
relevant to the logic I want to test.
serve_generated_bytes is >3X faster. One caveat is that the reactor thread may
stall when reading from the memory-mapped slice. Moonfire NVR is basically a
single-user program, so that may not be so bad, but we'll see.
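
A rough sketch of the approach (the memmap crate here is just an
illustrative choice):

    use memmap::Mmap;
    use std::fs::File;
    use std::ops::Range;

    // Map the sample file once and hand out subslices for each chunk served.
    // Touching a page that isn't resident yet faults and blocks the calling
    // (reactor) thread while the kernel reads it in; that's the caveat above.
    fn map_file(f: &File) -> std::io::Result<Mmap> {
        unsafe { Mmap::map(f) }
    }

    fn slice_of(map: &Mmap, range: Range<usize>) -> &[u8] {
        &map[range]
    }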
It had an Arc which in hindsight isn't necessary; the actual video index
generation is fast anyway. This saves a couple pointers per cache entry and
the overhead of chasing them. LruCache itself also has some extra pointers on
it but that's something to address another day.
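
Roughly the shape of the change (illustrative types, not the exact ones):

    use lru_cache::LruCache;

    // Before: roughly LruCache<Key, Arc<Box<[u8]>>>, an extra allocation plus
    // a pointer chase on every hit. After: store the generated index bytes
    // directly; regenerating on a miss is cheap anyway.
    type Key = i64; // illustrative

    struct IndexCache {
        cache: LruCache<Key, Box<[u8]>>,
    }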
This reduces the working set by another 960 bytes for a typical one-hour recording, improving cache efficiency a bit more.
8 bytes from SampleIndexIterator:
* reduce the three "bytes" fields to two. Doing so as "bytes_key" vs
"bytes_nonkey" slowed it down a bit, perhaps because the "bytes" is
needed right away and requires a branch. But "bytes" vs "bytes_other"
seems fine. Looks like it can do this with cmovs in parallel with other
stuff.
* stuff "is_key" into the "i" field.
8 bytes from recording::Segment itself:
* make "frames" and "key_frame" u16s
* stuff "trailing_zero" into "video_sample_entry_id"
There were Nagle's algorithm delays in both the "fresh_client" and
"reuse_client" versions of the .mp4 serving benchmark. Now performance is much
more consistent.
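
One way to get rid of those delays is TCP_NODELAY; a minimal sketch (where
exactly the call belongs, client or server side, depends on the benchmark
setup):

    use std::net::TcpStream;

    // Disable Nagle's algorithm so small writes go out immediately instead
    // of waiting to be coalesced with the next one.
    fn connect(addr: &str) -> std::io::Result<TcpStream> {
        let stream = TcpStream::connect(addr)?;
        stream.set_nodelay(true)?;
        Ok(stream)
    }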
* don't store sizes of mp4-format sample indexes; recalculate them.
* keep SampleIndexIterator position as a u32 rather than a usize.
This is 960 bytes for a 60-minute mp4; another small cache usage improvement.
For a one-hour recording, this is about 2 KiB, so a decent chunk of a
Raspberry Pi 2's L1 cache. Generating the Slices and searching/scanning
it should be a bit faster and pollute the cache less.
This is a pretty small optimization now when transferring a decent chunk
of the moov or mdat, but it's easy enough to do. It will be much more
noticeable if I end up interleaving the captions between each key frame.
This came up when I tried using the "bundled" feature of rusqlite. Its build
script passes -DSQLITE_DEFAULT_FOREIGN_KEYS=1, which caused a test to fail.
Fix the bug that this option revealed, and set the pragma so we'll catch
such problems in the future even when using a system library not compiled in
this way.
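
A minimal sketch of setting the pragma via rusqlite (not necessarily where
the real code does it):

    use rusqlite::Connection;

    // Turn on foreign key enforcement for this connection so a stock SQLite
    // behaves like one compiled with -DSQLITE_DEFAULT_FOREIGN_KEYS=1.
    fn open(path: &str) -> rusqlite::Result<Connection> {
        let conn = Connection::open(path)?;
        conn.execute_batch("pragma foreign_keys = on")?;
        Ok(conn)
    }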
This was completely wrong: it overflowed on large filesystems and
double-counted the used bytes.
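
A sketch of doing the arithmetic in 64 bits via statvfs (hypothetical
helper; the real accounting also folds in bytes committed to recordings,
which is where the double-counting crept in):

    use std::ffi::CStr;
    use std::io;

    // Total and available bytes on the filesystem holding the sample
    // directory. Widen to u64 before multiplying; the old math overflowed
    // on large filesystems.
    fn fs_bytes(path: &CStr) -> io::Result<(u64, u64)> {
        unsafe {
            let mut s: libc::statvfs = std::mem::zeroed();
            if libc::statvfs(path.as_ptr(), &mut s) != 0 {
                return Err(io::Error::last_os_error());
            }
            let total = s.f_frsize as u64 * s.f_blocks as u64;
            let avail = s.f_frsize as u64 * s.f_bavail as u64;
            Ok((total, avail))
        }
    }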
The new logic is still imperfect in that if there are a bunch of files in the
process of being deleted (moved from recording to reserved_sample_files but
not yet unlinked), they'll be taken out of the total capacity. Maybe it should
stat everything in the sample file directory instead of relying on the
recording table. It's definitely an improvement, though.
This page was noticeably slower than necessary because the recording_cover
index wasn't actually covering the query. Both the schema for new databases
and the upgrade query were broken (and not even in the same way).
No new schema version to correct this, at least for now. I'll probably have
another reason to change the schema soon anyway and can throw this in.