Commit Graph

25 Commits

Author SHA1 Message Date
Scott Lamb 6f9612738c pass prev duration and runs through API layer
Builds on f3ddbfe, for #32 and #59.
2020-06-09 22:06:03 -07:00
Scott Lamb f3ddbfe22a track cumulative duration and runs
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing a HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":

In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user then plays forward and reaches the end
of that out-of-sequence segment. The user should see the chronologically
next segment, or a pause for loading if it's unavailable. The best
approximation of this is to track the mapping of timestamps to segments
and insert a VTTCue with an enter/exit handler that seeks to the right
position. But seeking isn't instantaneous; the user will likely briefly
glimpse the wrong segment (whatever is next in buffer order) before the
seek completes. That's janky. Additionally, the "canplaythrough" event
will behave strangely.
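
To make that concrete, here's a minimal sketch of the "sequence"-mode
workaround in browser TypeScript. The MIME string, cue times, and seek
target are illustrative stand-ins, not Moonfire's actual values:

    const video = document.querySelector('video') as HTMLVideoElement;
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', () => {
      const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');
      sb.mode = 'sequence';  // each appended segment plays right after the last
      // ... append segments as they arrive ...
    });

    // Patch over an out-of-sequence insertion with a hidden cue whose
    // "enter" handler jumps to where the chronologically next segment was
    // buffered. The seek isn't instantaneous, hence the jank noted above.
    const track = video.addTextTrack('metadata');
    const cue = new VTTCue(9.99, 10.0, 'seek');  // illustrative cue times
    cue.onenter = () => { video.currentTime = 42.0; };  // illustrative target
    track.addCue(cue);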

In "segments" mode, playback respects the timestamps we set:

* The obvious choice is to use wall clock timestamps. This is fine if
  they're known to be fixed and correct. They're not. The
  currently-recording segment may be "unanchored", meaning its start
  timestamp is not yet fixed. Older timestamps may overlap if the system
  clock was stepped between runs. The latter isn't /too/ bad from a user
  perspective, though it's confusing as a developer. We probably will
  only end up showing the more recent recording for a given
  timestamp anyway. But the former is quite annoying. It means we have
  to throw away part of the SourceBuffer that we may want to seek back to
  (causing UI pauses when that happens) or keep our own spare copy of it
  (memory bloat). I'd like to avoid the whole mess.

* Another approach is to use timestamps that are guaranteed to be in
  the correct order but that may have gaps. In particular, a timestamp
  of (recording_id * max_recording_duration) + time_within_recording.
  But again seeking isn't instantaneous. In my experiments, there's a
  visible pause between segments that drives me nuts.

* Finally, the approach that led me to this schema change. Use
  timestamps that place each segment after the one before, possibly with
  an intentional gap between runs (to force a wait where we have an
  actual gap). This should make the browser's natural playback behavior
  work properly: it never goes to an incorrect place, and it only waits
  when/if we want it to. We have to maintain a mapping between its
  timestamps and segment ids, but that's doable; see the sketch just
  after this list.
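
A sketch of that approach in browser TypeScript (names and the gap size
are illustrative, each segment's internal timestamps are assumed to start
at zero, and real code must also wait for "updateend" between appends):

    const RUN_GAP_SECONDS = 1.0;  // hypothetical: forces a wait between runs

    interface MappedSegment {
      bufferStart: number;  // position on the synthetic timeline
      durationSec: number;
      recordingId: number;  // for translating seeks back to recordings
    }
    const timeline: MappedSegment[] = [];

    function append(sb: SourceBuffer, seg: ArrayBuffer, durationSec: number,
                    recordingId: number, newRun: boolean) {
      const prev = timeline[timeline.length - 1];
      const start = prev === undefined
          ? 0
          : prev.bufferStart + prev.durationSec +
            (newRun ? RUN_GAP_SECONDS : 0);
      sb.timestampOffset = start;  // place segment on the synthetic timeline
      sb.appendBuffer(seg);
      timeline.push({bufferStart: start, durationSec, recordingId});
    }

    // Translating a player position back to a recording id is a scan (or
    // binary search) over the same mapping:
    function recordingAt(timeSec: number): number | undefined {
      const s = timeline.find((seg) =>
          timeSec >= seg.bufferStart &&
          timeSec < seg.bufferStart + seg.durationSec);
      return s?.recordingId;
    }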

This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.

Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.

The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
2020-06-09 16:17:32 -07:00
Scott Lamb c6fe1b66a1 add a preliminary schema for object detection (#30) 2020-03-25 18:10:37 -07:00
Scott Lamb 00991733f2 use Blake3 instead of SHA-1 or Blake2b
Benefits:

* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
  (#70). SHA-1 basically forced OpenSSL usage; ring deliberately doesn't
  support this old algorithm, and the pure-Rust sha1 crate is painfully
  slow. OpenSSL might still be a better choice than ring/rustls for TLS
  but it's nice to have the option.

For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
2020-03-20 21:46:53 -07:00
Scott Lamb 066c086050 style: use rusqlite's {named_,}params! everywhere 2020-03-19 20:46:25 -07:00
Scott Lamb 317a620e6e upgrade copyright notices
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
  every file rather than whoever created that file. Have one AUTHORS
  file listing everyone.
* Consistently call it a "security camera network video recorder" rather
  than "security camera digital video recorder".
2020-03-01 22:53:41 -08:00
Scott Lamb 7fe9d34655 cargo fix --all
* it added "dyn" to trait objects
* it changed "..." in patterns to "..="

cargo --version says: "cargo 1.37.0-nightly (545f35425 2019-05-23)"
2019-06-14 08:47:11 -07:00
Scott Lamb b629fe6ac1 upgrade rusqlite, bump required Rust to 1.33
The new rusqlite requires the transpose_result feature, stabilized in
this Rust version.
2019-05-31 16:19:04 -07:00
Scott Lamb 4cc796f697 properly test fix for #64
I went with the third idea in 1ce52e3: have the tests run each iteration
of the syncer explicitly. These are messy tests that know tons of
internal details, but I think they're less confusing and racy than if I
had the syncer running in a separate thread.
2019-01-04 16:11:58 -08:00
Scott Lamb b5387af3d4 lose "extern crate" everywhere (Rust 2018 edition) 2018-12-28 21:59:39 -06:00
Scott Lamb 699ec87968 upgrade to 2018 Rust edition
This is mostly just "cargo fix --edition" + Cargo.toml changes.
There's one fix for upgrading to NLL in db/writer.rs:
Writer::previously_opened wouldn't build with NLL because of a
double-borrow the previous borrow checker somehow didn't catch.
Restructure to avoid it.

I'll put elective NLL changes in a following commit.
2018-12-28 14:59:06 -06:00
Scott Lamb 35e6891221 update all Rust deps 2018-12-01 15:20:19 -08:00
Scott Lamb 131c5e0640 Fix "no garbage row for <id>" flush failure loops
Add some comments along the way.

Fixes #63.
2018-12-01 00:03:43 -08:00
Scott Lamb 8c52c36b51 upgrade a few deps 2018-08-24 22:06:14 -07:00
Scott Lamb f81d699c8c new recording_integrity table
A couple of rarely-used fields move here, and I expect I'll add more.
Redo the check command to just put everything in RAM for simplicity.
2018-03-09 13:37:30 -08:00
Scott Lamb d6fa470713 tests and fixes for Writer and Syncer
* separate these out into a new file, writer.rs, as dir.rs was getting
  unwieldy.
* extract traits for the parts of SampleFileDir and std::fs::File that
  Writer and Syncer need; set up mock implementations.
* move clock.rs to a new base crate to be accessible from the db crate.
* add tests that exercise all the retry paths.
* bugfix: account for the new recording's bytes when calculating how much to
  delete.
* bugfix: when retrying an unlink failure in collect_garbage, we shouldn't
  warn about all the recordings no longer existing. Do this by retrying each
  step rather than the whole procedure again.
* avoid double-panic scenarios, which I hit while tweaking the mocks. These
  are quite annoying to debug as Rust doesn't print information about either
  panic. I ended up using lldb to get a backtrace. Better to be cautious about
  what we're doing when already panicking.
* give more context on raw::insert_recording errors, which I hit as well while
  tweaking the new tests.
2018-03-07 04:42:46 -08:00
Scott Lamb b78ffc3808 view in-progress recordings!
The time from recorded to viewable was previously 60-120 sec for the first
recording of an RTSP session, 0-60 sec otherwise. Now it's one frame.
2018-03-02 15:40:32 -08:00
Scott Lamb 45f7b30619 allow listing and viewing uncommitted recordings
There may be considerable lag between being fully written and being committed
when using the flush_if_sec feature. Additionally, this is a step toward
listing and viewing recordings before they're fully written. That's a
considerable delay: 60 to 120 seconds for the first recording of a run,
0 to 60 seconds for subsequent recordings.

These recordings aren't yet included in the information returned by
/api/?days=true. They probably should be, but small steps.
2018-03-02 11:38:11 -08:00
Scott Lamb b17761e871 move list_recordings_by_* logic into raw.rs
I want to start having the db.rs version augment this with the uncommitted
recordings, and it's nice to have the separation of the raw db vs augmented
versions. Also, this fits with the general theme of shrinking db.rs a bit.

I had to put the raw video_sample_entry_id into the rows rather than
the video_sample_entry Arc. In hindsight, this is better anyway: the common
callers don't need to do the btree lookup and arc clone on every row. I think
I'd originally done it that way only because I was quite new to Rust and
didn't understand that db could be used from within the row callback given
that both borrows are immutable.
2018-03-01 20:59:05 -08:00
Scott Lamb b2a8b3c216 update "moonfire-nvr check" for new schema 2018-03-01 17:07:42 -08:00
Scott Lamb fbe1231af0 move open_id from recording_playback to recording
I want to be able to use it in etags without having to do a full scan of the
recording_playback table in advance, which would greatly increase time to
first byte. I probably will even use it in URLs to ensure the segments they
point to
are stable. I haven't actually done this yet - it will wait until I implement
serving unflushed recordings - but I want to get the schema set up properly.
2018-02-28 20:52:43 -08:00
Scott Lamb 81e1ec97b3 more sanity checking for delete_oldest_recordings 2018-02-23 14:05:07 -08:00
Scott Lamb 8d9939603e fix repeated deletions within a flush
When list_oldest_recordings was called twice with no intervening flush, it
returned the same rows twice. This led to trying to delete the same rows
twice, and to all following flushes failing with a "no such recording x/y"
message. Now, return
each row only once, and track how many bytes have been returned.

I think dir.rs's logic is still wrong for how many bytes to delete when
multiple recordings are flushed at once (it ignores the bytes added by the
first when computing the bytes to delete for the second), but this is
progress.
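
An illustrative model of the fix, in TypeScript for consistency with the
sketches above (the real logic lives in the Rust db layer, and the names
here are hypothetical):

    interface Row { id: number; sampleFileBytes: number; }

    class OldestRecordings {
      private nextIndex = 0;  // first row not yet returned to a caller
      constructor(private rows: Row[]) {}

      // Returns each row at most once, even across calls with no
      // intervening flush, stopping once `neededBytes` worth of new rows
      // have been claimed.
      list(neededBytes: number): Row[] {
        const out: Row[] = [];
        let claimed = 0;
        while (claimed < neededBytes && this.nextIndex < this.rows.length) {
          const row = this.rows[this.nextIndex++];
          claimed += row.sampleFileBytes;
          out.push(row);
        }
        return out;
      }
    }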
2018-02-23 13:49:57 -08:00
Scott Lamb bf45ae6011 extend recording_playback with an open_id
As noted in schema.sql, this can be used for disambiguation. It also may be
useful in diagnosing data integrity problems.

Also, sneak in a couple of minor improvements: better diagnostics in a
couple of places, and a fix to the 1->2 upgrade procedure.
2018-02-22 21:46:41 -08:00
Scott Lamb b037c9bdd7 knob to reduce db commits (SSD write cycles)
This improves the practicality of having many streams (including the doubling
of streams by having main + sub streams for each camera). With these tuned
properly, extra streams don't cause any extra write cycles in normal or error
cases. Consider the worst case in which each RTSP session immediately sends a
single frame and then fails. Moonfire retries every second, so this would
formerly cause one commit per second per stream. (flush_if_sec=0 preserves
this behavior.) Now the commits can be arbitrarily infrequent by setting
higher values of flush_if_sec.

WARNING: this isn't production-ready! I hacked up dir.rs to make tests pass
and "moonfire-nvr run" work in the best-case scenario, but it doesn't handle
errors gracefully. I've been debating what to do when writing a recording
fails. I considered "abandoning" the recording, then either reusing or
skipping its id (in the latter case, marking the file as garbage if it
can't be unlinked immediately). I now think there's no point in abandoning
a recording.
If I can't write to that file, there's no reason to believe another will work
better. It's better to retry that recording forever, and perhaps put the whole
directory into an error state that stops recording until those writes go
through. I'm planning to redesign dir.rs to make this happen.
2018-02-22 16:35:34 -08:00