As noted in mylog's 2b1085c:
Looks like both the GNU tools' --color argument and cargo's
CARGO_TERM_COLOR expect always/never rather than on/off. Match that.
Might as well understand off/no/false and on/yes/true also.
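For illustration, a minimal sketch of that tolerant parsing (the enum
and impl here are hypothetical, not the actual flag-handling code):

```rust
use std::str::FromStr;

/// Hypothetical color-mode flag, for illustration only.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ColorMode {
    Auto,
    Always,
    Never,
}

impl FromStr for ColorMode {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "auto" => Ok(ColorMode::Auto),
            "always" | "on" | "yes" | "true" => Ok(ColorMode::Always),
            "never" | "off" | "no" | "false" => Ok(ColorMode::Never),
            _ => Err(format!("unknown color mode {:?}", s)),
        }
    }
}
```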
* add more description to the troubleshooting guide
* adjust the log format to match more recent glog
* include a config for the lnav tool, which will help colorize,
browse, and search the logs.
Next up: install an ffmpeg log callback for consistency.
This eases build setup. Where Yarn requires a separate package
repository, npm is available in the standard one. Yarn's package
repository signature recently expired, and apparently it will expire
again in a year. Avoid dealing with that.
Fixes #110.
Inspired by the poor error message here:
https://github.com/scottlamb/moonfire-nvr/issues/107#issuecomment-777587727
* print the friendlier Display version of the error rather than Debug,
  as sketched after this list. E.g., "EROFS: Read-only filesystem"
  rather than "Sys(EROFS)". Do this
everywhere: on command exit, on syncer retries, and on stream
retries.
* print the most immediate problem and additional lines for each
cause.
* print the backtrace, or an advertisement for RUST_BACKTRACE=1 when
  the backtrace is unavailable.
* also mention RUST_BACKTRACE=1 in the troubleshooting guide.
* add context in various places, including pathnames. There are surely
  many more places where it'd be helpful, but this is a start.
* allow subcommands to return failure without an Error.
In particular, "moonfire-nvr check" does its own error printing
because it wants to print all the errors it finds. Printing "see
earlier errors" with a meaningless stack trace seems like it'd just
confuse. But I also want to get rid of the misleading "Success" at
the end and 0 return to the OS.
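A rough sketch of that printing, in terms of a generic
std::error::Error rather than the error type the codebase actually
uses:

```rust
use std::error::Error;

/// Prints the error's Display form, one line per cause, and a
/// RUST_BACKTRACE hint. Illustrative; the real code also prints the
/// backtrace itself when one is available.
fn print_error(err: &(dyn Error + 'static)) {
    eprintln!("E {}", err);
    let mut cause = err.source();
    while let Some(c) = cause {
        eprintln!("caused by: {}", c);
        cause = c.source();
    }
    if std::env::var_os("RUST_BACKTRACE").is_none() {
        eprintln!("(set RUST_BACKTRACE=1 to see backtraces)");
    }
}
```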
* give a rule of thumb for update time in the documentation
* log the SQLite3 version, which can affect performance
* do the vacuum in non-WAL mode (see the sketch after this list), to
  correctly set the page size and to avoid very slow behavior on older
  SQLite3 versions. Larger page sizes are generally faster (including
  subsequent vacuum operations).
This won't help much for the first vacuum after this change, but it
will help afterward.
* likewise, set the page size properly on "moonfire-nvr init".
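The vacuum ordering looks roughly like this (a sketch via rusqlite;
the 16 KiB page size is only an example value, not necessarily what
the upgrade picks):

```rust
/// SQLite applies a new page_size on VACUUM only in rollback (non-WAL)
/// journal mode, hence the journal_mode dance around it.
fn vacuum_with_page_size(conn: &rusqlite::Connection) -> rusqlite::Result<()> {
    conn.execute_batch(
        "pragma journal_mode = delete;
         pragma page_size = 16384;
         vacuum;
         pragma journal_mode = wal;",
    )
}
```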
Besides being clearer about what belongs to which, this helps with
Docker caching. The server and ui parts are only rebuilt when their
respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the
server and ui build processes can be reused when layer-wide caching
fails.
This brings most things reasonably up-to-date. libpasta's deps are
dragging a bit, keeping us on an older ring to avoid duplication,
and causing us to use three versions of base64. And I need to update
a few of my companion crates' parking_lot dep to match tokio.
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely see a brief flash of the wrong segment before the seek
completes. That's janky. Additionally, the "canplaythrough" event will
behave strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing for a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back
to (causing UI pauses when that happens) or keep our own spare copy of
it (memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change (see the
sketch after this list). Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids but that's doable.
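A sketch of that third approach; every name here is illustrative, not
the actual schema or API:

```rust
/// Maps contiguous media timestamps back to recording ids.
struct SegmentMap {
    /// (starting timestamp in 90 kHz units, recording id), sorted by
    /// timestamp.
    entries: Vec<(u64, i32)>,
    next_ts: u64,
}

/// An assumed one second of 90 kHz units between runs, forcing a wait
/// where there's an actual gap.
const RUN_GAP_90K: u64 = 90_000;

impl SegmentMap {
    /// Assigns the next segment a timestamp directly after the previous
    /// one, plus an intentional gap when a new run starts. The returned
    /// value would become the timestamp offset for this segment's media.
    fn push(&mut self, recording_id: i32, media_duration_90k: u64, new_run: bool) -> u64 {
        if new_run && !self.entries.is_empty() {
            self.next_ts += RUN_GAP_90K;
        }
        let ts = self.next_ts;
        self.entries.push((ts, recording_id));
        self.next_ts += media_duration_90k;
        ts
    }
}
```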
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
Benefits:
* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data (see the sketch after this list).
* We no longer need OpenSSL, which helps with shrinking the binary size
(#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
support this old algorithm, and the pure-Rust sha1 crate is painfully
slow. OpenSSL might still be a better choice than ring/rustls for TLS
but it's nice to have the option.
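For instance, hashing chunked sample file data with the blake3 crate
looks roughly like this (illustrative, not the actual writer code):

```rust
/// Incrementally hashes data that arrives in chunks, as sample file
/// bytes do.
fn hash_chunks<'a>(chunks: impl IntoIterator<Item = &'a [u8]>) -> blake3::Hash {
    let mut hasher = blake3::Hasher::new();
    for chunk in chunks {
        hasher.update(chunk);
    }
    hasher.finalize()
}
```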
For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay (perhaps even
desirable) if an existing init segment changes for fixes like e5b83c2.
* simplify it. Go from six checked-in config files + one local one to
three checked-in configs + command-line options. I find it less
confusing to have the options plumbed through fewer layers.
* support developing against an https production server, as described in
guide/developing-ui.md.
* fix the source map. As far as I can tell, the sourceMap parameter in
  prod.config.js evaluated to false when run with the production config,
  and anyway UglifyJS seems to be incompatible with the specified
  cheap-module-source-map. Use source-map instead.
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
This doesn't take much advantage of async fns so far. For example, the
with_{form,json}_body functions are still designed to be used with
future combinators when it'd be more natural to call them from async
fns now. But it's a start.
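A contrived illustration of the difference, with hypothetical helpers
rather than anything from the actual handlers:

```rust
use futures::future::{self, TryFutureExt};

/// Combinator style: the chain of closures obscures the control flow.
fn double_combinators(
    input: &str,
) -> impl futures::TryFuture<Ok = i64, Error = std::num::ParseIntError> {
    future::ready(input.parse::<i64>()).and_then(|n| future::ok(n * 2))
}

/// The same logic as an async fn: plain sequential statements and `?`.
async fn double_async(input: &str) -> Result<i64, std::num::ParseIntError> {
    let n = input.parse::<i64>()?;
    Ok(n * 2)
}
```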
Similarly, this still uses the old version of reqwest. Small steps.
Requires Rust 1.40 now. (1.39 is a requirement of async fns, and 1.40
is a requirement of http-serve 0.2.0.)
* in markdown files, use code fences rather than indented blocks.
This is harder to screw up (one of them was off by a space, so it
didn't render properly) and allows me to add info strings.
* uniformly use "useradd" to create the user and group in all three
places (install-manual.md, script-functions.sh, Dockerfile) rather
than addgroup + adduser. Create a full home dir, which I suspect was
the problem in #67. Don't allow customizing the group name; it's
always the same as the user's.
* install the sqlite3 package so that the "moonfire-nvr sql" command
works properly.
* remove "setup_db" function, which was out of place. Since the
creation of the "moonfire-nvr init" command, this has to happen
after installation of the binary. install.md gives instructions for
this part anyway, so remove it from the script.
* give a proper command to create the db dir. It was creating it
within the current directory, not within /var/lib/moonfire-nvr.
Don't bother creating the sample directory; "moonfire-nvr config"
will do this.
* when setting owners on a newly created directory, use a single
"install -d" command rather than "mkdir" + "chown".
* address confusion about whether sample file dirs need to be
precreated. (Only when Moonfire NVR doesn't have write permissions
on the parent.)
* always just install the packaged version of ffmpeg rather than
building our own. This has been usable since Debian/Raspbian 9
Stretch; Debian/Raspbian 10 Buster is out now so there's no excuse
for still running Debian/Raspbian 8 Jessie.
* don't chown the UI directory; it can be owned by root as with
the binary.
* in scripts/install.sh, don't enable/start the service yet. It hasn't
been configured.
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.