Besides being clearer about what belongs to which, this helps with
Docker caching. The server and ui parts are only rebuilt when their
respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the server
and ui build process can be reused when layer-wide caching fails.
This replaces the previous Dockerfile, which was a single stage for
building and deployment.
The new one is a multi-stage build. Its "dev" target has the full
development environment; its "deploy" target is slimmer. It supports
cross-compiled builds via BuildKit, e.g. to prepare a build suitable for
a Raspberry Pi:
docker buildx build --load --platform=linux/arm64/v8 --tag=moonfire-nvr --progress=plain --target=deploy -f docker/Dockerfile .
Coming next: updating the installation docs.
The code was confused about whether this was relative to the start of
the recording or the desired range of the recording. When the two
differ (that is, when the beginning of the segment is skipped), the
duration was incorrect. The thread would panic when skipping beyond
the start of the next second.
Also add some missing info to Segment's Debug impl.
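A minimal sketch of the distinction, with hypothetical field names (the
real Segment struct is richer): the duration has to be measured against
the start of the desired range, not the start of the recording.

    // Hypothetical sketch only; field names are illustrative.
    // Times are in 90 kHz units, relative to the start of the recording.
    struct Segment {
        desired_start_90k: i32,
        desired_end_90k: i32,
    }

    impl Segment {
        fn duration_90k(&self) -> i32 {
            // Wrong: `self.desired_end_90k` alone, which measures from the
            // recording's start and overstates the duration whenever the
            // beginning of the segment is skipped.
            self.desired_end_90k - self.desired_start_90k
        }
    }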
This brings most things reasonably up-to-date. libpasta's deps are
dragging a bit, keeping us on an older ring to avoid duplication,
and causing us to use three versions of base64. And I need to update
a few of my companion crates' parking_lot dep to match tokio.
With Rust 1.48.0, I was getting panics like "attempted to leave type
`linked_hash_map::Node<std::string::String,
raw_statement::RawStatement>` uninitialized, which is invalid". Looks
like this is due to a bug in linked-hash-map 0.5.2, fixed in 0.5.3.
Along the way, I see that rusqlite no longer uses this crate; it
switched to hashlink. Upgrade rusqlite and match that change for my own
use (video index LRU cache).
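The LRU behavior on top of hashlink looks roughly like this (a sketch
with made-up types and sizes, not the actual video index cache):

    use hashlink::LinkedHashMap;

    // Minimal LRU sketch over hashlink::LinkedHashMap, which keeps
    // insertion order: re-inserting moves an entry to the back, and
    // pop_front() evicts the least recently used entry.
    struct VideoIndexCache {
        map: LinkedHashMap<i64, Vec<u8>>, // hypothetical: id -> index blob
        capacity: usize,
    }

    impl VideoIndexCache {
        fn new(capacity: usize) -> Self {
            Self { map: LinkedHashMap::new(), capacity }
        }

        fn get(&mut self, id: i64) -> Option<&Vec<u8>> {
            let v = self.map.remove(&id)?;
            self.map.insert(id, v); // mark as most recently used
            self.map.get(&id)
        }

        fn put(&mut self, id: i64, index: Vec<u8>) {
            self.map.remove(&id);
            self.map.insert(id, index);
            while self.map.len() > self.capacity {
                self.map.pop_front(); // evict
            }
        }
    }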
I'm still pulling in linked-hash-map via serde_yaml. I didn't even
realize I was using serde_yaml, but libpasta wants it. Seems a little
silly to me but that's something I might explore another day.
This works around the problem in which the first frame has an incorrect
pts. Before, we waited for the second key frame, which added annoying
latency to starting the stream.
I don't think the buffering does anything helpful for Moonfire NVR
anyway.
This should reduce live stream latency by two seconds when my cameras
are at their default setting (I frame interval = 2 * frame rate)!
I was under the impression that every HTML5 Media Source Extensions
media segment had to start with a Random Access Point. This used to
be true, but apparently changed quite a while ago:
https://bugs.chromium.org/p/chromium/issues/detail?id=229412
Support generating segments that don't start with a key frame, and
plumb this through the mp4 media segment generation logic. Add some
extra error checking in mp4 slice handling, as my first attempts had a
mismatch between expected and actual lengths that silently produced
corrupted .m4s files.
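The extra check is roughly of this shape (a sketch; the real slice types
and error plumbing differ):

    // Sketch of the added sanity check, not the actual mp4 slice code: if
    // the bytes generated for a slice don't match its declared length,
    // return an error rather than silently serving a corrupt .m4s.
    fn check_slice_len(declared_len: u64, generated: &[u8]) -> Result<(), String> {
        if generated.len() as u64 != declared_len {
            return Err(format!(
                "slice length mismatch: declared {} bytes, generated {}",
                declared_len,
                generated.len()
            ));
        }
        Ok(())
    }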
Also pull everything from the most recent key frame onward along with the
first live segment to reduce startup latency. Live view is quite a bit
more pleasant now.
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
This is a quick fix to a problem that gives a confusing/poor initial
experience, as in this thread:
https://groups.google.com/g/moonfire-nvr-users/c/WB-TIW3bBZI/m/Gqh-L6I9BgAJ
I don't think it's a permanent solution. In particular, when we
implement an event stream (#40), I don't want to have a separate event
for every frame, so having the days map change that often won't work.
The client side will likely manipulate the days map then to include a
special entry for a growing recording, representing "from this time to
now".
* Get rid of unused video_sample_entry rows. h264_reader rejected some
of these; perhaps they were corrupted by some long-fixed bug.
* Use an i64 for cum_duration_90k (oops); an i32 overflows with only 6.6 hours
of recording, so this was guaranteed to fail on any real setup (see the
arithmetic sketch below).
* Add some context to those errors for debugging.
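To spell out the i64 point, durations are in 90 kHz units, so an i32
overflows in well under a day:

    // i32::MAX ticks at 90 kHz is only about 6.6 hours.
    const TICKS_PER_HOUR: i64 = 90_000 * 60 * 60; // 324,000,000
    fn main() {
        println!("{:.2}", i32::MAX as f64 / TICKS_PER_HOUR as f64); // 6.63
    }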
For posterity, a video_sample_entry that failed:
sqlite> select id, hex(sha1), width, height, rfc6381_codec, hex(data) from video_sample_entry where id = 9;
9|B3607B06107E779F57D062331FB54B59E964B9BC|1920|1080|avc1.640028|000000B26176633100000000000000010000000000000000000000000000000007800438004800000048000000000000000100000000000000000000000000000000000000000000000000000000000000000018FFFF0000005C6176634301640028FFE1002967640028AC1B1A80780227E5C05B808080A000007D0000186A1D0C0029FF5DE5C6860014FFAEF2E140010020A886052ACA0500769C28476EFE104A8000F08781320819888E894B5200000000
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user then plays forward again to the end of
the segment that was appended immediately before the out-of-sequence one:
playback runs straight into the out-of-sequence segment, but the user
should see the chronologically next segment or a pause for loading if
it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely first see a brief flash of the out-of-sequence segment they'd
previously seeked to. That's janky. Additionally, the "canplaythrough"
event will behave strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing as a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back to
(causing UI pauses when that happens) or keep our own spare copy of it
(memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change. Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids but that's doable.
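A sketch of how the third approach might compute a segment's starting
timestamp (names and the gap size here are illustrative, not the actual
schema or API):

    // Illustrative only: place each recording right after the cumulative
    // media duration of everything before it, adding an intentional gap
    // for each new run so playback waits where there's a real gap.
    struct RecordingPos {
        cum_media_duration_90k: i64, // media duration of all prior recordings
        cum_runs: i64,               // number of runs before this one
    }

    const RUN_GAP_90K: i64 = 90_000; // hypothetical one-second gap per run

    fn start_timestamp_90k(pos: &RecordingPos) -> i64 {
        pos.cum_media_duration_90k + pos.cum_runs * RUN_GAP_90K
    }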
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
I've never seen this happen in Chrome; each load/reload starts fresh,
so infinite is never selected. But on Firefox it seems to remember the
setting across reloads, triggering this bug.
This bug was introduced in 58152e8: it started parsing/normalizing the
HTML form into a JavaScript field on change. It didn't handle the
initial load properly. Prior to that commit, fetch() simply read
directly from the HTML form, so it didn't care about initial vs update.
This uses "async fn" throughout rather than a mix of async and the older
futures style. And it takes advantage of the "self: Arc<Self>" syntax
to avoid having a ServiceInner. It was confusing to have some methods
on Service and some on ServiceInner; now that distinction is gone.
One downside is there's a little more atomic reference-counting. Before,
service_fn essentially took an &Arc<Self>, which meant it could call
Arc::clone where its use of self actually outlived the future (see
stream_live_m4s) but didn't need to otherwise. After, it calls
an async fn that takes Arc<Self>. Using &Arc<Self> is apparently
possible (as of Rust 1.41) but using that with "async fn" means the
returned future is tied to its lifetime. The workaround is to use
async blocks as described here:
<https://rust-lang.github.io/async-book/03_async_await/01_chapter.html>
but that's really ugly: it brings back the explicit Future reference,
requires futures::future::Either in some cases, and introduces another
level of indenting. I think it's better to just pay the arc costs which
are probably negligible, or at least cheaper than the boxing was before.
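Roughly, the trade-off looks like this (a sketch with made-up names, not
the actual Service):

    use std::future::Future;
    use std::sync::Arc;

    struct Service {
        name: String,
    }

    impl Service {
        // Owned receiver: the returned future owns its reference, so it
        // isn't tied to any borrow. Costs one Arc clone per call.
        async fn handle(self: Arc<Self>) -> String {
            format!("handled by {}", self.name)
        }

        // Borrowed alternative: writing this as an async fn taking
        // &Arc<Self> would tie the future to the borrow's lifetime. The
        // workaround is an explicit async block, which brings the Future
        // type back into the signature.
        fn handle_borrowed(this: &Arc<Self>) -> impl Future<Output = String> {
            let this = Arc::clone(this);
            async move { format!("handled by {}", this.name) }
        }
    }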
Oh, and I made this compile on Rust 1.40 again, as it claims to.
http-serve accidentally used the &Arc<Self> thing, which broke this.
Update to a freshly-pushed commit which doesn't do that.