Commit Graph

30 Commits

Author SHA1 Message Date
Scott Lamb 8f792aeb2d live stream frame-by-frame rather than GOP-by-GOP (#59)
This should reduce live stream latency by two seconds when my cameras
are at their default setting (I-frame interval = 2 * frame rate)!

I was under the impression that every HTML5 Media Source Extensions
media segment had to start with a Random Access Point. This used to
be true, but apparently changed quite a while ago:
https://bugs.chromium.org/p/chromium/issues/detail?id=229412

Support generating segments that don't start with a key frame, and
plumb this through the mp4 media segment generation logic. Add some
extra error checking in mp4 slice handling, as my first attempts had a
mismatch between expected and actual lengths that silently returned
corrupted .m4s files.

Also pull in everything from the most recent key frame onward along
with the first live segment to reduce startup latency. Live view is
quite a bit more pleasant now.
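
A minimal sketch of the startup trick, assuming a hypothetical Frame
type rather than the real mp4 plumbing: the first live segment starts
at the most recent key frame, and later segments then follow frame by
frame.

    // Hypothetical Frame type; the real logic is plumbed through the mp4
    // media segment generation.
    struct Frame {
        is_key: bool,
    }

    // Returns the index the first live segment should start at: the most
    // recent key frame, so the browser can begin decoding immediately.
    fn first_live_segment_start(buffered: &[Frame]) -> Option<usize> {
        buffered.iter().rposition(|f| f.is_key)
    }

    fn main() {
        let buffered = [
            Frame { is_key: true },
            Frame { is_key: false },
            Frame { is_key: true },  // <- the first live segment starts here
            Frame { is_key: false },
        ];
        assert_eq!(first_live_segment_start(&buffered), Some(2));
    }
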
2020-08-07 15:56:57 -07:00
Scott Lamb b9c08b18a4 fix live view
This broke with the media vs wall duration split, part of #34.
2020-08-07 10:16:06 -07:00
Scott Lamb 036e8427e6 complete wall/media time split (for #34) 2020-08-06 22:01:59 -07:00
Scott Lamb cb97ccdfeb start splitting wall and media duration for #34
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
2020-08-04 21:44:01 -07:00
Scott Lamb 476bd86b12 Merge branch 'master' into new-schema 2020-07-12 19:22:38 -07:00
Scott Lamb 959defebca track "assumed" filesystem usage (#89)
As described in #89, we need to refactor a bit before we can get the
actual filesystem block size. Assuming 4096 for now. Small steps.
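
A sketch of what the assumed accounting could look like; the helper and
constant names here are hypothetical, not from the actual code:

    /// Assumed filesystem block size, pending the refactoring from #89.
    const ASSUMED_BLOCK_SIZE: u64 = 4096;

    /// Charges a sample file for a whole number of assumed blocks.
    fn assumed_usage(len: u64) -> u64 {
        (len + ASSUMED_BLOCK_SIZE - 1) / ASSUMED_BLOCK_SIZE * ASSUMED_BLOCK_SIZE
    }

    fn main() {
        assert_eq!(assumed_usage(1), 4096);
        assert_eq!(assumed_usage(4096), 4096);
        assert_eq!(assumed_usage(4097), 8192);
    }
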
2020-07-12 17:15:41 -07:00
Scott Lamb f3ddbfe22a track cumulative duration and runs
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":

In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out of sequence.
Imagine what happens when the user then plays forward to the end of that
out-of-sequence segment: they should see the chronologically next
segment, or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely briefly see the wrong segment before the seek completes. That's
janky. Additionally, the "canplaythrough" event will behave strangely.

In "segments" mode, playback respects the timestamps we set:

* The obvious choice is to use wall clock timestamps. This is fine if
  they're known to be fixed and correct. They're not. The
  currently-recording segment may be "unanchored", meaning its start
  timestamp is not yet fixed. Older timestamps may overlap if the system
  clock was stepped between runs. The latter isn't /too/ bad from a user
  perspective, though it's confusing as a developer. We probably will
  only end up showing the more recent recording for a given
  timestamp anyway. But the former is quite annoying. It means we have
  to throw away part of the SourceBuffer that we may want to seek back to
  (causing UI pauses when that happens) or keep our own spare copy of it
  (memory bloat). I'd like to avoid the whole mess.

* Another approach is to use timestamps that are guaranteed to be in
  the correct order but that may have gaps. In particular, a timestamp
  of (recording_id * max_recording_duration) + time_within_recording.
  But again seeking isn't instantaneous. In my experiments, there's a
  visible pause between segments that drives me nuts.

* Finally, the approach that led me to this schema change. Use
  timestamps that place each segment after the one before, possibly with
  an intentional gap between runs (to force a wait where we have an
  actual gap). This should make the browser's natural playback behavior
  work properly: it never goes to an incorrect place, and it only waits
  when/if we want it to. We have to maintain a mapping between its
  timestamps and segment ids but that's doable.

This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.

Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.

The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
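
A sketch of the final approach, using hypothetical types (the real
schema change instead stores cumulative durations and runs per stream):
each segment's timestamp places it immediately after its predecessor,
with a deliberate gap between runs, and the caller keeps the mapping
from timestamps back to recording ids.

    // Hypothetical summary of a recording, in 90 kHz media-time units.
    struct Recording {
        id: i64,
        media_duration_90k: i64,
        starts_run: bool, // true if this recording starts a new run
    }

    // Assigns each segment a timestamp immediately after its predecessor,
    // with an intentional gap between runs so the browser pauses where
    // there's a real gap. Returns (recording id, start timestamp) pairs.
    fn assign_timestamps(recordings: &[Recording], gap_90k: i64) -> Vec<(i64, i64)> {
        let mut out = Vec::with_capacity(recordings.len());
        let mut pos = 0;
        for r in recordings {
            if r.starts_run && !out.is_empty() {
                pos += gap_90k; // force a wait at an actual gap
            }
            out.push((r.id, pos));
            pos += r.media_duration_90k;
        }
        out
    }

    fn main() {
        let recs = [
            Recording { id: 0, media_duration_90k: 90_000, starts_run: true },
            Recording { id: 1, media_duration_90k: 90_000, starts_run: false },
            Recording { id: 2, media_duration_90k: 90_000, starts_run: true },
        ];
        // Three 1-second segments; a 0.5-second gap before the second run.
        assert_eq!(
            assign_timestamps(&recs, 45_000),
            vec![(0, 0), (1, 90_000), (2, 225_000)]
        );
    }
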
2020-06-09 16:17:32 -07:00
Scott Lamb 9d7cdc0954 cleanup: s/unflushed sample/unindexed sample/
db/writer.rs used the word "unflushed" in two ways:

* something which has been communicated to the LockedDatabase object but
  not yet committed to disk with SQLite.

* a video sample (aka video frame) which has been written to the sample
  file but not yet included in the video index. This happens because the
  duration of a frame isn't known until the following frame. These are
  always also unflushed in the other sense of the word (as unfinished
recordings are never committed). But they can't be seen by clients at
all, whereas indexed but uncommitted video frames can.

Replace the latter with "unindexed" to make things clearer, along with
a couple of other minor style cleanups.
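
A sketch of why a sample stays unindexed, with hypothetical types
rather than the real db/writer.rs ones: a frame's duration is the PTS
delta to the following frame, so its index entry can only be written
once that frame arrives.

    struct UnindexedSample {
        pts_90k: i64,
        len: u32,
        is_key: bool,
    }

    struct Writer {
        unindexed: Option<UnindexedSample>,
        index: Vec<(i64, u32, bool)>, // (duration_90k, len, is_key)
    }

    impl Writer {
        fn write_sample(&mut self, s: UnindexedSample) {
            if let Some(prev) = self.unindexed.take() {
                // The previous sample's duration is now known: index it.
                self.index.push((s.pts_90k - prev.pts_90k, prev.len, prev.is_key));
            }
            self.unindexed = Some(s); // unindexed until the next frame arrives
        }
    }

    fn main() {
        let mut w = Writer { unindexed: None, index: Vec::new() };
        w.write_sample(UnindexedSample { pts_90k: 0, len: 1000, is_key: true });
        w.write_sample(UnindexedSample { pts_90k: 3000, len: 200, is_key: false });
        assert_eq!(w.index, vec![(3000, 1000, true)]);
    }
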
2020-04-14 23:09:34 -07:00
Scott Lamb 00991733f2 use Blake3 instead of SHA-1 or Blake2b
Benefits:

* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
  (#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
  support this old algorithm, and the pure-Rust sha1 crate is painfully
  slow. OpenSSL might still be a better choice than ring/rustls for TLS
  but it's nice to have the option.

For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
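
For reference, incremental hashing with the real blake3 crate looks
roughly like this (the chunked input is illustrative):

    // Cargo.toml: blake3 = "1". Hasher supports incremental updates, which
    // suits hashing sample file data as it is written.
    fn main() {
        let mut hasher = blake3::Hasher::new();
        hasher.update(b"sample file data, chunk 1");
        hasher.update(b"sample file data, chunk 2");
        let hash = hasher.finalize();
        println!("{}", hash.to_hex());
    }
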
2020-03-20 21:46:53 -07:00
Scott Lamb e5b83c21e1 schema version 6 with pixel aspect ratio
This makes anamorphic sub streams display correctly, even ones from old
Hikvision cameras that don't properly set the aspect ratio at the H.264
layer.
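
A sketch of the arithmetic: the mp4 pasp box carries an
hSpacing:vSpacing pixel aspect ratio, and display width scales by that
ratio while height is unchanged. The figures below are illustrative,
not from a specific camera.

    fn display_width(coded_width: u32, h_spacing: u32, v_spacing: u32) -> u32 {
        (coded_width * h_spacing + v_spacing / 2) / v_spacing // rounded
    }

    fn main() {
        // A 704x480 coded frame with a 40:33 pixel aspect ratio displays
        // about 853 pixels wide (16:9).
        assert_eq!(display_width(704, 40, 33), 853);
    }
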
2020-03-19 21:40:59 -07:00
Scott Lamb 317a620e6e upgrade copyright notices
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
  every file rather than whoever created that file. Have one AUTHORS
  file listing everyone.
* Consistently call it a "security camera network video recorder" rather
  than "security camera digital video recorder".
2020-03-01 22:53:41 -08:00
Scott Lamb 0a29f62fd3 better logs during normal operation
* don't log every time we delete stuff; leave it for the flush
* when flushing, break apart counts by sample file dir and include
  human-readable sizes
2019-09-26 16:09:58 -07:00
Scott Lamb e52e725958 s/std::fs::read_dir/nix::dir::Dir/ in a few spots
This is nicer in a few ways:

   * I can use openat so there's no possibility of any kind of a race
     involving scanning a different directory than the one used in
     other ways (locking, metadata file, adding/removing sample files)
   * filename() doesn't need to allocate memory, so it's a bit more
     efficient
   * dogfooding - I wrote nix::dir.
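
The pattern looks roughly like this with the nix API of that era (exact
signatures have shifted in later nix releases):

    use std::os::unix::io::RawFd;

    use nix::dir::Dir;
    use nix::fcntl::{open, OFlag};
    use nix::sys::stat::Mode;

    // Lists a directory relative to an already-open fd, so there's no race
    // against the path changing meaning between operations.
    fn list(fd: RawFd) -> nix::Result<()> {
        let mut dir = Dir::openat(fd, ".", OFlag::O_DIRECTORY | OFlag::O_RDONLY,
                                  Mode::empty())?;
        for entry in dir.iter() {
            println!("{:?}", entry?.file_name()); // &CStr: no allocation
        }
        Ok(())
    }

    fn main() -> nix::Result<()> {
        let fd = open(".", OFlag::O_DIRECTORY | OFlag::O_RDONLY, Mode::empty())?;
        let r = list(fd);
        nix::unistd::close(fd)?;
        r
    }
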
2019-07-12 11:07:14 -07:00
Scott Lamb bb227491b6 use nix to remove many uses of unsafe 2019-07-11 21:59:01 -07:00
Scott Lamb 7fe9d34655 cargo fix --all
* it added "dyn" to trait objects
* it changed "..." in patterns to "..="

cargo --version says: "cargo 1.37.0-nightly (545f35425 2019-05-23)"
2019-06-14 08:47:11 -07:00
Scott Lamb c271cfa2b5 make Writer enforce maximum recording duration
My installation recently somehow ended up with a recording with a
duration of 503793844 90,000ths of a second (about 93 minutes), way
over the maximum of 5
minutes. (Looks like the machine was pretty unresponsive at the time
and/or having network problems.)

When this happens, the system really spirals. Every flush afterward (12
per minute with my installation) fails with a CHECK constraint failure
on the recording table. It never gives up on that recording. /var/log
fills pretty quickly as this failure is extremely verbose (a stack
trace, and a line for each byte of video_index). Eventually the sample
file dirs fill up too as it continues writing video samples while GC is
stuck. The video samples are useless anyway; given that they're not
referenced in the database, they'll be deleted on next startup.

This ensures the offending recording is never added to the database, so
we don't get the same persistent problem. Instead, writing to the
recording will fail. The stream will drop and be retried. If the
underlying condition that caused a too-long recording (many
non-key-frames, or the camera returning a crazy duration, or the
monotonic clock jumping far forward, or something) has gone away,
the system should recover.
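
A sketch of the guard with hypothetical names (the real check lives in
the writer's sample accounting): 5 minutes at 90 kHz is 27,000,000
units, and an append that would cross the limit fails instead of ever
reaching the database.

    /// 5 minutes in 90 kHz units (hypothetical name for the limit).
    const MAX_RECORDING_DURATION_90K: i64 = 5 * 60 * 90_000; // 27,000,000

    struct Recording {
        duration_90k: i64,
    }

    // Refuses to extend a recording past the maximum, so an over-long
    // recording is never handed to the database (where it would fail the
    // CHECK constraint on every later flush). The write error instead
    // drops the stream, which is then retried.
    fn append_sample(r: &mut Recording, sample_duration_90k: i64) -> Result<(), String> {
        let new_duration = r.duration_90k + sample_duration_90k;
        if new_duration > MAX_RECORDING_DURATION_90K {
            return Err(format!(
                "recording would grow to {} 90 kHz units, over the maximum {}",
                new_duration, MAX_RECORDING_DURATION_90K
            ));
        }
        r.duration_90k = new_duration;
        Ok(())
    }

    fn main() {
        let mut r = Recording { duration_90k: MAX_RECORDING_DURATION_90K - 1 };
        assert!(append_sample(&mut r, 3_000).is_err());
    }
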
2019-01-29 08:26:36 -08:00
Scott Lamb 3ba3bf2b18 backend support for live stream (#59)
This is so far completely untested, for use by a new UI prototype.

It creates a new URL endpoint which sends one video/mp4 media segment
per key frame, with the dependent frames included. This means there will
be about one key frame interval of latency (typically about a second).
This seems hard to avoid, as mentioned in issue #59.
2019-01-21 15:58:52 -08:00
Scott Lamb b9e6a6461f fix streamer tests broken by 4cc796f
That commit rewrote the db/writer.rs tests. But the flush op they
used before was also used by src/streamer.rs tests. Reintroduce it.
2019-01-06 07:07:04 -08:00
Scott Lamb 4cc796f697 properly test fix for #64
I went with the third idea in 1ce52e3: have the tests run each iteration
of the syncer explicitly. These are messy tests that know tons of
internal details, but I think they're less confusing and racy than if I
had the syncer running in a separate thread.
2019-01-04 16:11:58 -08:00
Scott Lamb 1ce52e334c fix #64 (extraneous flushes)
Now each syncer has a binary heap of the times it plans to do a flush.
When one of those times arrives, it rechecks if there's something to do.
Seems more straightforward than rechecking each stream's first
uncommitted recording, especially with the logic to retry failed flushes
every minute.

Also improved the info! log for each flush to show the actual
recordings being flushed, for better debuggability.
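
A sketch of the heap bookkeeping with std::collections::BinaryHeap; the
types are hypothetical stand-ins for the real Syncer:

    use std::cmp::Reverse;
    use std::collections::BinaryHeap;
    use std::time::Instant;

    // A min-heap of the times at which a flush is planned.
    struct Syncer {
        planned_flushes: BinaryHeap<Reverse<Instant>>,
    }

    impl Syncer {
        fn plan_flush(&mut self, at: Instant) {
            self.planned_flushes.push(Reverse(at));
        }

        // Pops every planned time that has arrived, rechecking whether
        // there's actually something to flush (an earlier flush may have
        // covered it), and returns the next future time to sleep until.
        fn next_planned(&mut self, now: Instant) -> Option<Instant> {
            while let Some(&Reverse(t)) = self.planned_flushes.peek() {
                if t > now {
                    return Some(t);
                }
                self.planned_flushes.pop();
                // ...recheck streams here and flush if something is due...
            }
            None
        }
    }

    fn main() {
        let mut syncer = Syncer { planned_flushes: BinaryHeap::new() };
        let now = Instant::now();
        syncer.plan_flush(now);
        assert_eq!(syncer.next_planned(now), None); // arrived: popped, rechecked
    }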

No new tests right now. :-( They're tricky to write. One problem is that
it's hard to get the timing right: a different flush has to happen
after Syncer::save's database operations and before Syncer::run calls
SimulatedClocks::recv_timeout with an empty channel[*], advancing the
time. I've thought of a few ways of doing this:

   * adding a new SyncerCommand to run something, but it's messy (have
     to add it from the mock of one of the actions done by the save),
     and Box<dyn FnOnce() + 'static> not working (see
     rust-lang/rust#28796) makes it especially annoying.

   * replacing SimulatedClocks with something more like MockClocks.
     Lots of boilerplate. Maybe I need to find a good general-purpose
     Rust mock library. (mockers sounds good but I want something that
     works on stable Rust.)

   * bypassing the Syncer::run loop, instead manually running iterations
     from the test.

Maybe the last way is the best for now. I'm likely to try it soon.

[*] actually, it's calling Receiver::recv_timeout directly;
Clocks::recv_timeout is dead code now? oops.
2019-01-04 13:47:44 -08:00
Scott Lamb 55fa458288 fix confusing variable name + comment 2019-01-04 06:16:25 -08:00
Scott Lamb b5387af3d4 lose "extern crate" everywhere (Rust 2018 edition) 2018-12-28 21:59:39 -06:00
Scott Lamb 699ec87968 upgrade to 2018 Rust edition
This is mostly just "cargo fix --edition" + Cargo.toml changes.
There's one fix for upgrading to NLL in db/writer.rs:
Writer::previously_opened wouldn't build with NLL because of a
double-borrow the previous borrow checker somehow didn't catch.
Restructure to avoid it.

I'll put elective NLL changes in a following commit.
2018-12-28 14:59:06 -06:00
Scott Lamb 131c5e0640 Fix "no garbage row for <id>" flush failure loops
Add some comments along the way.

Fixes #63.
2018-12-01 00:03:43 -08:00
Scott Lamb d7a94956eb deflake writer tests
There was a race condition here because it wasn't waiting for the db
flush to complete. This made write_path_retries sometimes not reflect
the consequence of the flush, causing an assertion failure. I assume it
was also responsible for gc_path_retries timeouts under travis-ci.
2018-08-07 21:58:40 -05:00
Scott Lamb 91636d3193 refine flush_if_sec behavior
The new behavior eliminates a couple unpleasant edge cases in which it
would never flush:

* if all recording stops, whatever was unflushed would stay that way
* if every recording attempt produces a 0-duration recording (such as if the
  camera sends only one frame and thus no PTS delta can be calculated),
  the list of recordings to flush would continue to grow
2018-03-23 15:16:43 -07:00
Scott Lamb addeb9d2f6 add a TimerGuard around db locks & ops
I moved the clocks member from LockedDatabase to Database to make this happen,
so the new DatabaseGuard (replacing a direct MutexGuard<LockedDatabase>) can
access it before acquiring the lock.

I also made the type of clock a type parameter of Database (and so several
other things throughout the system). This allowed me to drop the Arc<>, but
more importantly it means that the Clocks trait doesn't need to stay
object-safe. I plan to take advantage of that shortly.
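
A sketch of the resulting shape, with hypothetical definitions (the
real TimerGuard times database operations more generally): the clocks
sit outside the mutex so timing can start before the lock is acquired,
and the clock type is a parameter rather than a trait object.

    use std::sync::{Mutex, MutexGuard};
    use std::time::Instant;

    trait Clocks {
        fn now(&self) -> Instant;
    }

    struct RealClocks;
    impl Clocks for RealClocks {
        fn now(&self) -> Instant { Instant::now() }
    }

    struct LockedDatabase; // the state actually protected by the mutex

    struct Database<C: Clocks> {
        clocks: C, // outside the mutex: usable before acquiring the lock
        locked: Mutex<LockedDatabase>,
    }

    struct DatabaseGuard<'a>(MutexGuard<'a, LockedDatabase>);

    impl<C: Clocks> Database<C> {
        fn lock(&self) -> DatabaseGuard<'_> {
            let before = self.clocks.now(); // start timing pre-acquisition
            let guard = self.locked.lock().unwrap();
            let elapsed = self.clocks.now() - before;
            if elapsed.as_millis() > 100 {
                eprintln!("waited {:?} for the db lock", elapsed);
            }
            DatabaseGuard(guard)
        }
    }

    fn main() {
        let db = Database { clocks: RealClocks, locked: Mutex::new(LockedDatabase) };
        let _guard = db.lock();
    }
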
2018-03-23 13:31:23 -07:00
Scott Lamb 4c8daa6d24 save timestamps along with opens 2018-03-10 16:15:36 -08:00
Scott Lamb 03809eee8e clean up leftover throwaway logging 2018-03-09 10:46:34 -08:00
Scott Lamb d6fa470713 tests and fixes for Writer and Syncer
* separate these out into a new file, writer.rs, as dir.rs was getting
  unwieldy.
* extract traits for the parts of SampleFileDir and std::fs::File they needed;
  set up mock implementations.
* move clock.rs to a new base crate to be accessible from the db crate.
* add tests that exercise all the retry paths.
* bugfix: account for the new recording's bytes when calculating how much to
  delete.
* bugfix: when retrying an unlink failure in collect_garbage, we shouldn't
  warn about all the recordings no longer existing. Do this by retrying each
  step rather than the whole procedure again.
* avoid double-panic scenarios, which I hit while tweaking the mocks. These
  are quite annoying to debug as Rust doesn't print information about either
  panic. I ended up using lldb to get a backtrace. Better to be cautious about
  what we're doing when already panicking.
* give more context on raw::insert_recording errors, which I hit as well while
  tweaking the new tests.
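
A sketch of the trait-extraction pattern with a hypothetical SampleFile
trait (the real code extracts traits for SampleFileDir and
std::fs::File): production delegates to the OS, while tests inject
failures to exercise the retry paths deterministically.

    use std::io;

    trait SampleFile {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize>;
        fn sync_all(&self) -> io::Result<()>;
    }

    // Production implementation: delegate to the OS.
    impl SampleFile for std::fs::File {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            io::Write::write(self, buf)
        }
        fn sync_all(&self) -> io::Result<()> {
            std::fs::File::sync_all(self)
        }
    }

    // Mock implementation: fail the first `failures` writes, then succeed.
    struct MockFile {
        failures: usize,
    }

    impl SampleFile for MockFile {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            if self.failures > 0 {
                self.failures -= 1;
                return Err(io::Error::new(io::ErrorKind::Other, "injected failure"));
            }
            Ok(buf.len())
        }
        fn sync_all(&self) -> io::Result<()> {
            Ok(())
        }
    }

    fn main() {
        let mut f = MockFile { failures: 1 };
        assert!(f.write(b"frame").is_err()); // first attempt fails
        assert!(f.write(b"frame").is_ok());  // retry succeeds
    }
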
2018-03-07 04:42:46 -08:00