With Rust 1.48.0, I was getting panics like "attempted to leave type
`linked_hash_map::Node<std::string::String,
raw_statement::RawStatement>` uninitialized, which is invalid". Looks
like this is due to a bug in linked-hash-map 0.5.2, fixed in 0.5.3.
Along the way, I see that rusqlite no longer uses this crate; it
switched to hashlink. Upgrade rusqlite and match that change for my own
use (video index LRU cache).
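For reference, hashlink's LruCache is close to a drop-in replacement
for the old linked-hash-map-based cache. A minimal sketch (key/value
types here are illustrative; the real video index cache differs):

    use hashlink::LruCache;

    fn main() {
        // Capacity-bounded map with linked-hash-map iteration order.
        let mut cache: LruCache<String, Vec<u8>> = LruCache::new(8);
        cache.insert("index key".to_owned(), vec![0u8; 4]);
        cache.get("index key"); // marks the entry as recently used
    }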
I'm still pulling in linked-hash-map via serde_yaml. I didn't even
realize I was using serde_yaml, but libpasta wants it. Seems a little
silly to me but that's something I might explore another day.
This should reduce live stream latency by two seconds when my cameras
are at their default setting (I-frame interval = 2 * frame rate, i.e.,
one key frame every two seconds), since segments no longer have to
wait for the next key frame!
I was under the impression that every HTML5 Media Source Extensions
media segment had to start with a Random Access Point. This used to
be true, but apparently changed quite a while ago:
https://bugs.chromium.org/p/chromium/issues/detail?id=229412
Support generating segments that don't start with a key frame, and
plumb this through the mp4 media segment generation logic. Add some
extra error checking in mp4 slice handling, as my first attempts had a
mismatch between expected and actual lengths that silently returned
corrupted .m4s files.
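The extra check is roughly of this shape (names hypothetical; the real
code lives in the mp4 slice logic):

    // Fail loudly instead of serving a truncated/corrupted .m4s file
    // when the generated bytes don't match the declared slice length.
    fn check_slice_len(expected: u64, actual: usize) -> Result<(), String> {
        if actual as u64 != expected {
            return Err(format!(
                "slice length mismatch: expected {} bytes, got {}",
                expected, actual));
        }
        Ok(())
    }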
Also pull in everything from the most recent key frame onward along
with the first live segment to reduce startup latency. Live view is
quite a bit more pleasant now.
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
This is a quick fix to a problem that gives a confusing/poor initial
experience, as in this thread:
https://groups.google.com/g/moonfire-nvr-users/c/WB-TIW3bBZI/m/Gqh-L6I9BgAJ
I don't think it's a permanent solution. In particular, when we
implement an event stream (#40), I don't want to have a separate event
for every frame, so having the days map change that often won't work.
The client side will then likely manipulate the days map to include a
special entry for a growing recording, representing "from this time to
now".
* Get rid of unused video_sample_entry rows. h264_reader rejected some
of these; perhaps they were corrupted by some long-fixed bug.
* Use an i64 for cum_duration_90k (oops); an i32 of 90 kHz units
overflows after (2^31 - 1) / 90,000 = about 23,861 seconds, only 6.6
hours of recording, so this was guaranteed to fail on any real setup.
* Add some context to those errors for debugging.
For posterity, a video_sample_entry that failed:
sqlite> select id, hex(sha1), width, height, rfc6381_codec, hex(data) from video_sample_entry where id = 9;
9|B3607B06107E779F57D062331FB54B59E964B9BC|1920|1080|avc1.640028|000000B26176633100000000000000010000000000000000000000000000000007800438004800000048000000000000000100000000000000000000000000000000000000000000000000000000000000000018FFFF0000005C6176634301640028FFE1002967640028AC1B1A80780227E5C05B808080A000007D0000186A1D0C0029FF5DE5C6860014FFAEF2E140010020A886052ACA0500769C28476EFE104A8000F08781320819888E894B5200000000
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely briefly see the wrong segment before the seek takes effect.
That's janky. Additionally, the "canplaythrough" event will behave
strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing as a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back to
(causing UI pauses when that happens) or keep our own spare copy of it
(memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change. Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids but that's doable.
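A sketch of that bookkeeping, with hypothetical names (the real code
differs; this just shows the placement rule and the reverse lookup):

    use std::collections::BTreeMap;

    struct SegmentMap {
        next_start_90k: i64,
        by_start_90k: BTreeMap<i64, u32>, // start timestamp -> segment id
    }

    impl SegmentMap {
        /// Places a segment after everything before it, with a deliberate
        /// gap when a new run starts, and returns its start timestamp.
        fn push(&mut self, id: u32, duration_90k: i64, new_run: bool) -> i64 {
            const RUN_GAP_90K: i64 = 90_000; // one second; arbitrary here
            if new_run {
                self.next_start_90k += RUN_GAP_90K;
            }
            let start = self.next_start_90k;
            self.by_start_90k.insert(start, id);
            self.next_start_90k += duration_90k;
            start
        }

        /// Returns the id of the last segment starting at or before the
        /// given playback position, if any.
        fn segment_at(&self, pos_90k: i64) -> Option<u32> {
            self.by_start_90k.range(..=pos_90k).next_back().map(|(_, &id)| id)
        }
    }

    fn main() {
        let mut m = SegmentMap { next_start_90k: 0, by_start_90k: BTreeMap::new() };
        assert_eq!(m.push(0, 2 * 90_000, false), 0);         // 2-second segment
        assert_eq!(m.push(1, 2 * 90_000, true), 3 * 90_000); // next run: gap first
        assert_eq!(m.segment_at(90_000), Some(0));
    }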
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
Both a "cargo update" and a bump of major versions of a few deps.
I left a few alone:
* base64 because some of the deps depend on 0.11 (and 0.9), so I don't
want to pull in a third version (0.12).
* ring because libpasta depends on this version and I don't want to pull
in two of them.
* time because it's not trivial. Last I checked, time 0.2 couldn't even
do what I wanted at all.
I also enabled tokio's parking_lot feature, since I pull that crate in
anyway.
This reduces the binary size noticeably on my macOS machine (#70):
                              unstripped  stripped
1. before switching to clap   11.1 MiB    6.7 MiB
2. after switching to clap    11.4 MiB    6.9 MiB
3. without regex              10.1 MiB    5.9 MiB
A couple reasons for this:
* the docopt crate is "unlikely to see significant future evolution",
and the wider docopt project is "mostly unmaintained at this point".
clap/structopt is more full-featured, has more natural subcommand
support, etc.
* it may allow me to shrink the binary (#70). This change alone seems
to be a slight regression, but it's a step toward getting rid of
regex, which is pretty large. And I feel less ridiculous now that I
don't have two parsing crates anyway; prettydiff was pulling in
structopt.
There are some behavior changes here:
* misc --help output changes and the like, as you'd expect when
switching argument-parsing libraries
* I properly used PathBuf and OsString for stuff that theoretically
could be non-UTF-8. I haven't tested that it actually made any
difference. I'm also still storing the sample file dirname as "text"
in the database to avoid causing a diff when not doing a schema
change.
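For flavor, the structopt style now in use looks roughly like this
(the subcommand and flag names here are illustrative, not the actual
CLI):

    use std::path::PathBuf;
    use structopt::StructOpt;

    #[derive(StructOpt)]
    enum Args {
        /// Runs the server.
        Run {
            /// Directory holding the SQLite database.
            #[structopt(long, parse(from_os_str))]
            db_dir: PathBuf, // PathBuf/OsString: safe for non-UTF-8 paths
        },
    }

    fn main() {
        match Args::from_args() {
            Args::Run { db_dir } => println!("db dir: {:?}", db_dir),
        }
    }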
db/writer.rs used the word "unflushed" in two ways:
* something which has been communicated to the LockedDatabase object but
not yet committed to disk with SQLite.
* a video sample (aka video frame) which has been written to the sample
file but not yet included in the video index. This happens because the
duration of a frame isn't known until the following frame. These are
always also unflushed in the other sense of the word (as unfinished
recordings are never committed). But they can't be seen by clients at
all, whereas indexed but uncommitted video frames can.
Replace the latter with "unindexed" to make things more clear. And a
couple minor other style cleanups.
Benefits:
* Blake3 is faster. This is most noticeable for the hashing of the
sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
(#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
support this old algorithm, and the pure-Rust sha1 crate is painfully
slow. OpenSSL might still be a better choice than ring/rustls for TLS
but it's nice to have the option.
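The replacement hashing itself is simple; a minimal sketch of the
blake3 crate's one-shot API:

    fn main() {
        // Hash the sample file data in one call; blake3::Hasher exists
        // for the incremental case.
        let hash = blake3::hash(b"sample file data");
        println!("{}", hash.to_hex());
    }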
For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
every file rather than whoever created that file. Have one AUTHORS
file listing everyone.
* Consistently call it a "security camera network video recorder" rather
than "security camera digital video recorder".
The reqwest one is particularly notable because it means not having two
versions of hyper/http/tokio/futures/bytes. It also drops a number of
transitive deps; with some work I think I could stop depending on regex
now.
This addresses a deprecation warning on nightly (the deprecation will
ship in Rust 1.38). Use parking_lot instead, which in theory is faster
(although I doubt it's significant here).
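The appeal in miniature: parking_lot's Mutex doesn't poison, so there's
no Result to unwrap on every lock:

    use parking_lot::Mutex;

    fn main() {
        let m = Mutex::new(0i32);
        *m.lock() += 1; // std::sync::Mutex would need .lock().unwrap()
        assert_eq!(*m.lock(), 1);
    }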
Now the test actually has a recording and garbage with matching files.
This caught a few problems in the upgrade procedure:
* it didn't work with foreign keys enabled because the new recording
table was set up after the new camera table, and the old recording
table was destroyed after the old camera table. And now I enable
foreign keys all the time. Reorder the procedure to fix.
* the pathname manipulation in the v2 to v3 procedure has been
incorrect since my introduction of nix: I gave it a &[u8] that included
the trailing NUL where I should have used CStr::from_bytes_with_nul
(see the sketch after this list).
* it wasn't removing garbage files. It'd be most natural to do this
in the v2 to v3 upgrade (with the rename) but I historically removed
the table when upgrading to v2. I can't redefine the schema now, so
do it unnaturally.
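The NUL-handling fix in the second bullet, in miniature (the uuid-like
filename is made up):

    use std::ffi::CStr;

    fn main() {
        // A byte slice that already ends in NUL must be converted with
        // CStr::from_bytes_with_nul, not handed over as ordinary bytes.
        let bytes = b"0123456789abcdef0123456789abcdef\0";
        let name = CStr::from_bytes_with_nul(bytes).expect("valid C string");
        assert_eq!(name.to_bytes().len(), 32); // NUL excluded
    }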
I'm considering also renaming all uuid-like files on upgrade to v4/v5
to clean up this mess automatically for installations that have
already done this upgrade.
The immediate motivation is that Cargo.lock referred to a commit version
in a PR branch of my nix fork that no longer exists. (I didn't know, but
it makes sense, that "git push -f" not only forcibly updates the branch
to refer to a new commit but also gets rid of orphaned commits.) Use a
moonfire branch that I'll keep stable until I'm ready to move on.
I also updated parking_lot and rusqlite to new major versions (nothing
in the interface that I care about has changed) and did a full cargo
update.
This is nicer in a few ways:
* I can use openat so there's no possibility of any kind of a race
involving scanning a different directory than the one used in
other ways (locking, metadata file, adding/removing sample files)
* filename() doesn't need to allocate memory, so it's a bit more
efficient
* dogfooding - I wrote nix::dir.
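A sketch of the openat-based scan (error handling elided; the flags
are what I'd expect to use, not necessarily the exact ones in the
code):

    use nix::dir::Dir;
    use nix::fcntl::OFlag;
    use nix::sys::stat::Mode;
    use std::os::unix::io::RawFd;

    fn scan(dir_fd: RawFd) -> nix::Result<()> {
        // Open relative to the held fd, so the scan can't race with the
        // directory being renamed/replaced out from under us.
        let mut d = Dir::openat(dir_fd, ".",
                                OFlag::O_DIRECTORY | OFlag::O_RDONLY,
                                Mode::empty())?;
        for e in d.iter() {
            let e = e?;
            let _name = e.file_name(); // &CStr: no allocation
        }
        Ok(())
    }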
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
Newer SQLite library versions (such as what you get when using
--features=bundled) actually enforce foreign keys. Unfortunately there's
no way to drop foreign key constraints, so you have to transitively
recreate all the tables with foreign key constraints on the table you're
recreating.
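The dance, sketched with rusqlite (the table and columns here are
illustrative, not the real schema):

    fn drop_fk(conn: &mut rusqlite::Connection) -> rusqlite::Result<()> {
        let tx = conn.transaction()?;
        tx.execute_batch(r#"
            create table camera2 (
              id integer primary key,
              host text
              -- same columns as before, minus the unwanted "references"
              -- clause; tables with foreign keys into camera need the
              -- same treatment, transitively
            );
            insert into camera2 select id, host from camera;
            drop table camera;
            alter table camera2 rename to camera;
        "#)?;
        tx.commit()
    }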
My dad's "GW-GW4089IP" cameras use separate ports for the main and sub
streams:
rtsp://192.168.1.110:5050/H264?channel=0&subtype=0&unicast=true&proto=Onvif
rtsp://192.168.1.110:5049/H264?channel=0&subtype=1&unicast=true&proto=Onvif
Previously I could get one of the streams to work by including :5050 or
:5049 in the host field of the camera. But not both. Now make the
camera's host field reflect the ONVIF port (which is also non-standard
on these cameras, :85). It's not directly used yet but probably will be
sooner or later. Make each stream know its full URL.
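In other words, the shape of the change is roughly this (field names
hypothetical):

    struct Camera {
        onvif_host: String, // e.g. "192.168.1.110:85"; not used directly yet
    }

    struct Stream {
        rtsp_url: String, // full URL, so main/sub can use different ports
    }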
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)
This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:
* non-administrator users who can view but not access camera passwords
or change any state
* workers that update signal state based on cameras' built-in motion
detection or a security system's events but don't need to view videos
* control over what can be done without authenticating
Currently session permissions are just copied from user permissions,
but you can also imagine a distinction between admin and regular
sessions, chosen via a checkbox when signing in. This would match the
standard Unix workflow of using a non-administrative session most of
the time.
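A hypothetical shape for these, to make the possibilities above
concrete (the real flag set surely differs):

    #[derive(Clone, Default)]
    struct Permissions {
        view_video: bool,          // watch recordings / live streams
        read_camera_configs: bool, // includes camera passwords
        update_signals: bool,      // for motion/security-system workers
    }

    struct Session {
        permissions: Permissions, // currently copied from the user at login
    }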
Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).