Besides being clearer about what belongs to which, this helps with
docker caching. The server and ui parts are only rebuilt when their
respective subdirectories change.
Extend this a bit further by making the webpack build independent of
the target architecture, and by adding cache dirs so parts of the
server and ui build process can be reused when layer-wide caching fails.
This brings most things reasonably up-to-date. libpasta's deps are
dragging a bit, keeping us on an older ring to avoid duplication,
and causing us to use three versions of base64. And I need to update
a few of my companion crates' parking_lot dep to match tokio.
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user then plays forward again to the end
of the segment that was appended just before the out-of-sequence one.
The user should see the chronologically next segment, or a pause for
loading if it's unavailable. The best approximation of this is to track
the mapping of timestamps to segments and insert a VTTCue with an
enter/exit handler that seeks to the right position. But seeking isn't
instantaneous; before the seek completes, the user will likely get a
brief glimpse of the segment they previously seeked to. That's janky.
Additionally, the "canplaythrough" event will behave strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing as a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back
to (causing UI pauses when that happens) or keep our own spare copy of
it (memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change. Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids, but that's doable (sketched below).
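To make that concrete, here's a rough sketch of the bookkeeping the
third approach needs. It's in Rust for convenience, but the real
mapping would presumably live in the UI's JavaScript; the names, types,
and time units here are invented for illustration, not actual Moonfire
NVR code.

/// Illustration only: tracks where each segment was placed on the
/// SourceBuffer's timeline.
#[derive(Default)]
struct SegmentMap {
    /// (start timestamp, duration, segment id), in append order.
    entries: Vec<(i64, i64, u64)>,
    /// Timestamp at which the next segment will be placed.
    next_start: i64,
}

impl SegmentMap {
    /// Places a segment after everything appended so far, optionally
    /// leaving an intentional gap first (e.g. between runs) so playback
    /// stalls where there's a real gap. Returns the start timestamp to
    /// use (e.g. via the SourceBuffer's timestampOffset) when appending.
    fn push(&mut self, id: u64, duration: i64, gap_before: i64) -> i64 {
        let start = self.next_start + gap_before;
        self.entries.push((start, duration, id));
        self.next_start = start + duration;
        start
    }

    /// Maps a playback timestamp back to a segment id, if it falls
    /// within one (and not within an intentional gap).
    fn segment_at(&self, ts: i64) -> Option<u64> {
        // Linear scan for clarity; a binary search would do for real use.
        self.entries
            .iter()
            .find(|&&(start, dur, _)| ts >= start && ts < start + dur)
            .map(|&(_, _, id)| id)
    }
}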
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
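A trivial way to picture the new definition (field access simplified;
this is not the real struct): with ids starting at 0, the count of
recordings ever created doubles as the next recording's id.

/// Illustration only.
struct Stream {
    cum_recordings: i32, // formerly next_recording_id, which started at 1
}

impl Stream {
    fn assign_recording_id(&mut self) -> i32 {
        let id = self.cum_recordings;
        self.cum_recordings += 1;
        id
    }
}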
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
Benefits:
* Blake3 is faster. This is most noticeable for the hashing of the
sample file data (see the sketch below).
* we no longer need OpenSSL, which helps with shrinking the binary size
(#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
support this old algorithm, and the pure-Rust sha1 crate is painfully
slow. OpenSSL might still be a better choice than ring/rustls for TLS
but it's nice to have the option.
For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
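For reference, the shape of the hashing with the blake3 crate looks
roughly like this. It's a minimal sketch of the API in use, not the
actual Moonfire NVR writer code.

// Minimal sketch of incremental hashing with the blake3 crate.
fn hash_chunks<'a>(chunks: impl IntoIterator<Item = &'a [u8]>) -> blake3::Hash {
    let mut hasher = blake3::Hasher::new();
    for chunk in chunks {
        hasher.update(chunk);
    }
    hasher.finalize()
}

fn main() {
    let hash = hash_chunks(vec![&b"frame 1"[..], &b"frame 2"[..]]);
    println!("{}", hash.to_hex());
}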
* simplify it. Go from six checked-in config files + one local one to
three checked-in configs + command-line options. I find it less
confusing to have the options plumbed through fewer layers.
* support developing against a https production server, as described in
guide/developing-ui.md.
* fix the source map. The sourceMap parameter in prod.config.js, as far
as I can tell, evaluated to false when run with the production config,
and anyway UglifyJS seems to be incompatible with the specified
cheap-module-source-map. Use source-map instead.
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
This doesn't take much advantage of async fns so far. For example, the
with_{form,json}_body functions are still designed to be used with
future combinators, even though it'd now be more natural to call them
from async fns. But it's a start (see the sketch below).
Similarly, this still uses the old version of reqwest. Small steps.
Requires Rust 1.40 now. (async fn requires 1.39, and http-serve 0.2.0
requires 1.40.)
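To illustrate the difference between the two styles, here's a toy
sketch. The types and names are hypothetical; they don't match the real
with_{form,json}_body signatures.

use std::future::Future;

struct Request;
struct Form;
struct Error;

// Combinator-friendly style: hand back a future for the caller to chain
// with .map()/.and_then()-style combinators.
fn with_form_body(req: Request) -> impl Future<Output = Result<Form, Error>> {
    async move {
        let _ = req; // pretend to read and parse the request body here
        Ok::<Form, Error>(Form)
    }
}

// The same helper is more natural to consume from an async fn now:
async fn handle(req: Request) -> Result<(), Error> {
    let _form = with_form_body(req).await?;
    // ...use the parsed form...
    Ok(())
}

fn main() {
    // Driving handle() needs an executor (tokio, futures::executor, ...);
    // omitted to keep the sketch dependency-free.
}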
* in markdown files, use code fences rather than indented blocks.
This is harder to screw up (one of them was off by a space, so it
didn't render properly) and allows me to add info strings.
* uniformly use "useradd" to create the user and group in all three
places (install-manual.md, script-functions.sh, Dockerfile) rather
than addgroup + adduser. Create a full home dir; I suspect the lack of
one was the problem in #67. Don't allow customizing the group name;
it's always the same as the user.
* install the sqlite3 package so that the "moonfire-nvr sql" command
works properly.
* remove "setup_db" function, which was out of place. Since the
creation of the "moonfire-nvr init" command, this has to happen
after installation of the binary. install.md gives instructions on
this part anyway so remove it from the script.
* give a proper command to create the db dir. It was creating it
within the current directory, not within /var/lib/moonfire-nvr.
Don't bother creating sample directory; "moonfire-nvr config"
will do this.
* when setting owners on a newly created directory, use a single
"install -d" command rather than "mkdir" + "chown".
* address confusion about whether sample file dirs need to be
precreated. (Only when Moonfire NVR doesn't have write permissions
on the parent.)
* always just install the packaged version of ffmpeg rather than
building our own. This has been usable since Debian/Raspbian 9
Stretch; Debian/Raspbian 10 Buster is out now so there's no excuse
for still running Debian/Raspbian 8 Jessie.
* don't chown the UI directory; it can be owned by root as with
the binary.
* in scripts/install.sh, don't enable/start the service yet. It hasn't
been configured.
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)
This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:
* non-administrator users who can view but not access camera passwords
or change any state
* workers that update signal state based on cameras' built-in motion
detection or a security system's events but don't need to view videos
* control over what can be done without authenticating
Currently session permissions are just copied from user permissions, but
you can also imagine admin sessions vs. non-admin ones, selected via a
checkbox when signing in. This would match the standard Unix workflow of
using a non-administrative session most of the time.
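Roughly, the shape of it looks like the sketch below. The field and
type names are invented for illustration; the real definition lives in
the schema and will evolve.

// Invented for illustration; not the real permissions definition.
#[derive(Clone, Default)]
struct Permissions {
    view_video: bool,
    read_camera_configs: bool,
    update_signals: bool,
}

struct Session {
    permissions: Permissions,
}

// Currently a new session just copies the user's permissions; one could
// imagine narrowing them here (e.g. an "admin session" checkbox at
// sign-in).
fn new_session(user_permissions: &Permissions) -> Session {
    Session { permissions: user_permissions.clone() }
}

fn main() {
    let user = Permissions { view_video: true, ..Permissions::default() };
    let session = new_session(&user);
    assert!(session.permissions.view_video);
    assert!(!session.permissions.update_signals);
}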
Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).
travis-ci pointed out that the dependency bump broke 1.31:
Compiling docopt v1.1.0
error[E0658]: imports can only refer to extern crate names passed with `--extern` on stable channel (see issue #53130)
--> /home/travis/.cargo/registry/src/github.com-1ecc6299db9ec823/docopt-1.1.0/src/parse.rs:48:5
|
48 | use regex;
| ^^^^^
|
Looks like uniform_paths was stabilized in 1.32, and I verified locally
that 1.32 builds.
The 091217b workaround of telling ffmpeg to only request the video
stream works perfectly fine for now. I'll revisit when adding audio
support (#34).
Fixes #36
Apparently with docopt, --require-auth=false doesn't work, so booleans
with a default value of true can't be turned off. Toggle the default to
false to deal with this for now. I'd prefer the default be true, but
I'd also prefer not to use a negative --no-require-auth or
--allow-unauthenticated flag. I think I'll switch from docopt to clap
in the near future; it seems to be what the cool kids use.
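For context, the docopt pattern in question looks roughly like this.
The usage string and struct are trimmed and hypothetical, not the real
ones.

use serde::Deserialize;

const USAGE: &str = "
Usage: moonfire-nvr run [options]

Options:
  --require-auth   Require authentication for API requests.
";

#[derive(Debug, Deserialize)]
struct Args {
    // false unless --require-auth is given; there's no way to pass
    // --require-auth=false, hence flipping the default.
    flag_require_auth: bool,
}

fn main() {
    let args: Args = docopt::Docopt::new(USAGE)
        .and_then(|d| d.deserialize())
        .unwrap_or_else(|e| e.exit());
    println!("{:?}", args);
}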
The guide is not as quick to follow and amateur-friendly as I'd like. A
few things that might improve matters:
* complete #27 (built-in https+letsencrypt), so that when not sharing
the port, users don't need to use nginx or certbot.
* more ubiquitous IPv6 (out of my control but should happen over
time) to reduce the need to share the port
* embed a dynamic DNS client
* support UPnP Internet Gateway Device Control Protocol (if common
routers have this enabled? probably not for security reasons.)
It's progress, though. Enough that I think I'll merge the auth branch
into master shortly.
Some caveats:
* it doesn't record the peer IP yet, which makes it harder to verify
sessions are valid. This is a little annoying to do in hyper now
(see hyperium/hyper#1410). The direct peer might not be what we want
right now anyway because there's no TLS support yet (see #27). In
the meantime, the sane way to expose Moonfire NVR to the Internet is
via a proxy server, and recording the proxy's IP is not useful.
Maybe better to interpret an RFC 7239 Forwarded header (and/or
the older X-Forwarded-{For,Proto} headers); see the sketch after this
list.
* it doesn't ever use Secure (https-only) cookies, for a similar reason.
It's not safe to use even with a TLS proxy until this is fixed.
* there's no "moonfire-nvr config" support for inspecting/invalidating
sessions yet.
* in debug builds, logging in is crazy slow. See libpasta/libpasta#9.
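Not done in this commit, but for reference, the naive version of
reading a forwarded-for value from hyper's request headers would look
something like this. It's a sketch only; a real version would parse
the RFC 7239 Forwarded header properly, handle multiple hops, and only
trust the value when it comes from a known proxy.

// Naive sketch only; see the caveats above.
fn peer_from_headers(req: &hyper::Request<hyper::Body>) -> Option<std::net::IpAddr> {
    req.headers()
        .get("x-forwarded-for")?   // e.g. "203.0.113.7, 10.0.0.1"
        .to_str()
        .ok()?
        .split(',')
        .next()?                   // take the left-most (client) entry
        .trim()
        .parse()
        .ok()
}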
Some notes:
* I removed the JavaScript "no-use-before-define" lint, as some of
the functions form a cycle.
* Fixed #20 along the way. I needed to add support for properly
returning non-OK HTTP statuses to signal unauthorized and such.
* I removed the Access-Control-Allow-Origin header support, which was
at odds with the "SameSite=lax" in the cookie header. The "yarn
start" method for running a local proxy server accomplishes the same
thing as the Access-Control-Allow-Origin support in a more secure
manner.
Fixes #60
The reqwest dependency is significant because the old version required
an old version of openssl, complicating compilation on newer platforms.
reqwest also pulled in old/duplicate versions of hyper, tokio, etc.
Nice to drop a lot of that cruft.
I left rusqlite and uuid alone because they had breaking changes I
didn't want to mess with at the moment.
Bumped the minimum Rust version to 1.30.0, as required by the
new encoding_rs crate (and perhaps other things).
1.26 doesn't work with the updated rusqlite:
error[E0658]: use of unstable library feature 'duration_extras' (see issue #46507)
--> /home/travis/.cargo/registry/src/github.com-1ecc6299db9ec823/rusqlite-0.14.0/src/busy.rs:26:49
|
26 | .and_then(|t| t.checked_add(timeout.subsec_millis().into()))
| ^^^^^^^^^^^^^
1.25 also fails with the upgraded reffers because u128 isn't stable:
error[E0658]: 128-bit type is unstable (see issue #35118)
--> /home/travis/.cargo/git/checkouts/reffers-rs-0d00fc7f893338b3/49a4d75/src/rc_bitmask.rs:194:34
|
194 | rc_bit_mask_internal!(primitive, u128, 42, 42, 42);
| ^^^^
travis-ci pointed out that building with 1.21 broke with a recent dep
upgrade (8c52c36). reffers now uses nested groups of imports, a feature
introduced in Rust 1.25. Prior to 1.25, it fails as follows:
error: expected one of `,` or `as`, found `::`
--> /home/travis/.cargo/git/checkouts/reffers-rs-0d00fc7f893338b3/49a4d75/src/arc.rs:6:46
|
6 | use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
| ^^ expected one of `,` or `as` here