This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a follow-up commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when playback then proceeds forward and reaches
the end of that out-of-sequence segment. The user should see the
chronologically next segment, or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely glimpse the wrong segment before the seek completes. That's
janky. Additionally, the "canplaythrough" event will behave strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing as a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back to
(causing UI pauses when that happens) or keep our own spare copy of it
(memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again, seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change. Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids, but that's doable (sketched below).
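To make that concrete, here's a minimal sketch of such a mapping
(invented names, 90 kHz units assumed; the real code may look quite
different):

    // Hypothetical sketch only; names and units are illustrative.
    const RUN_GAP_90K: i64 = 90_000; // force a 1 s wait between runs

    struct SegmentMap {
        /// (playback timestamp, segment id), sorted by timestamp.
        entries: Vec<(i64, i64)>,
        next_ts: i64,
    }

    impl SegmentMap {
        /// Places a segment immediately after the previous one, with an
        /// intentional gap at run boundaries. Returns the timestamp the
        /// caller should use when appending to the SourceBuffer.
        fn append(&mut self, segment_id: i64, dur_90k: i64, new_run: bool) -> i64 {
            if new_run {
                self.next_ts += RUN_GAP_90K;
            }
            let ts = self.next_ts;
            self.entries.push((ts, segment_id));
            self.next_ts += dur_90k;
            ts
        }

        /// Maps a playback timestamp back to a segment id. (A sketch: it
        /// doesn't notice timestamps that fall into an intentional gap.)
        fn segment_at(&self, ts: i64) -> Option<i64> {
            match self.entries.binary_search_by_key(&ts, |&(t, _)| t) {
                Ok(i) => Some(self.entries[i].1),
                Err(0) => None,
                Err(i) => Some(self.entries[i - 1].1),
            }
        }
    }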
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
This uses "async fn" throughout rather than a mix of async and the older
futures style. And it takes advantage of the "self: Arc<Self>" syntax
to avoid having a ServiceInner. It was confusing to have some methods
on Service and some on ServiceInner; now that distinction is gone.
One downside is there's a little more atomic reference-counting. Before,
service_fn essentially took an &Arc<Self>, so it called Arc::clone only
where its use of self outlived the immediate call (see stream_live_m4s)
and skipped the clone otherwise. After, it calls
an async fn that takes Arc<Self>. Using &Arc<Self> is apparently
possible (as of Rust 1.41) but using that with "async fn" means the
returned future is tied to its lifetime. The workaround is to use
async blocks as described here:
<https://rust-lang.github.io/async-book/03_async_await/01_chapter.html>
but that's really ugly: it brings back the explicit Future reference,
requires futures::future::Either in some cases, and introduces another
level of indenting. I think it's better to just pay the Arc costs, which
are probably negligible, or at least cheaper than the boxing was before.
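As an illustration of the pattern (a sketch, not the real Service):

    use std::sync::Arc;

    struct Service {}

    impl Service {
        // Taking self by Arc means the returned future owns its own
        // reference and can outlive the caller's borrow. The cost is
        // one refcount bump per call.
        async fn stream_live_m4s(self: Arc<Self>) {
            // ... use self freely across .await points ...
        }
    }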
Oh, and I made this compile on Rust 1.40 again, as it claimed to.
http-serve accidentally used the &Arc<Self> thing, which broke this.
Update to a freshly-pushed commit which doesn't do that.
* use content-hashed paths for static resources (except the top-level
request), with immutable Cache-Control headers (see the sketch after
this list). This should improve cache behavior in both directions:
avoiding preventable HTTP requests and forcing immediate refresh when
needed. I had some staleness when browsing with my phone.
* set up the favicons properly while I'm at it (closes #50). I used the
convenient favicons-webpack-plugin to build everything from a .svg.
I've hit an error similar to lovell/sharp#1593 at least once though so
I might change my mind about that part if it continues to be
problematic.
* use http-serve's new directory traversal code for static file serving.
This removes the odd behavior where files that weren't present at
server startup couldn't be served. (I wasn't comfortable switching to
the content-hashed paths before doing this.) It also means the static
files can be served compressed. JSON API responses were already served
compressed, so this closes #25.
* for a given API URL, decide if we want it to be cached or not
server-side. Stop using jQuery's kludgy cache-defeating _=<timestamp>
URL parameter. I might start setting ETags on some of these and serving
304 Not Modified responses when the content is genuinely unmodified.
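The header sketch mentioned above, e.g. with hyper (a sketch, not the
actual serving code):

    use hyper::{header, Body, Response};

    // Content-hashed assets never change, so the client may cache them
    // for a year and skip revalidation entirely.
    fn immutable_response(body: Body) -> Response<Body> {
        Response::builder()
            .header(header::CACHE_CONTROL, "public, max-age=31536000, immutable")
            .body(body)
            .expect("valid response")
    }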
Both a "cargo update" and a bump of major versions of a few deps.
I left a few alone:
* base64 because some of the deps depend on 0.11 (and 0.9), so I don't
want to pull in a third version (0.12).
* ring because libpasta depends on this version and I don't want to pull
in two of them.
* time because it's not trivial. Last I checked, time 0.2 couldn't even
do what I wanted at all.
I also made tokio use parking_lot, since I pull it in anyway.
This reduces the binary size noticeably on my macOS machine (#70):
                              unstripped    stripped
   1 before switching to clap   11.1 MiB     6.7 MiB
   2 after switching to clap    11.4 MiB     6.9 MiB
   3 without regex              10.1 MiB     5.9 MiB
A couple reasons for this:
* the docopt crate is "unlikely to see significant future evolution",
and the wider docopt project is "mostly unmaintained at this point".
clap/structopt is more full-featured, has more natural subcommand
support, etc.
* it may allow me to shrink the binary (#70). This change alone seems
to be a slight regression, but it's a step toward getting rid of
regex, which is pretty large. And I feel less ridiculous now that I
don't have two argument-parsing crates; prettydiff was already pulling
in structopt.
There are some behavior changes here:
* misc --help output changes and such as you'd expect from switching
argument-parsing libraries
* I properly used PathBuf and OsString for stuff that theoretically
could be non-UTF-8. I haven't tested that it actually made any
difference. I'm also still storing the sample file dirname as "text"
in the database to avoid causing a diff when not doing a schema
change.
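For reference, the structopt pattern in question looks roughly like this
(the field and default are illustrative, not the real definitions):

    use std::path::PathBuf;
    use structopt::StructOpt;

    #[derive(StructOpt)]
    struct Args {
        /// Database directory. PathBuf/OsString tolerate non-UTF-8
        /// names, unlike a plain String.
        #[structopt(long, parse(from_os_str))]
        db_dir: PathBuf,
    }

    // let args = Args::from_args();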
When compiled with cargo build --features=analytics and enabled via
moonfire-nvr run --object-detection, this runs object detection on every
sub stream frame through an Edge TPU (a Coral USB accelerator) and logs
the result.
This is a very small step toward a working system. It doesn't actually
record the result in the database or send it out on the live stream yet.
It doesn't support running object detection at a lower frame rate than
the sub streams come in at either. To address those problems, I need to
do some refactoring. Currently moonfire_db::writer::Writer::write is the
only place that knows the duration of the frame it's about to flush,
before it gets added to the index or sent out on the live stream. I
don't want to do the detection from there; I'd prefer the moonfire_nvr
crate. So I either need to introduce an analytics callback or move a
bunch of that logic to the other crate.
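Such a callback might be shaped something like this (purely
hypothetical; nothing like it exists yet):

    // Invented names, for illustration only.
    pub trait FrameProcessor: Send {
        /// Called with each complete frame before it's added to the
        /// index or sent out on the live stream.
        fn frame_complete(&mut self, stream_id: i32, frame: &[u8], duration_90k: i32);
    }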
Once I do that, I need to add database support (although I have some
experiments for that in moonfire-playground) and API support, then some
kind of useful frontend.
Note edgetpu.tflite is taken from the Apache 2.0-licensed
https://github.com/google-coral/edgetpu,
test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite. The
following page says it's fine to include Apache 2.0 stuff in GPLv3
projects:
https://www.apache.org/licenses/GPL-compatibility.html
Benefits:
* Blake3 is faster. This is most noticeable for the hashing of the
sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
(#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
support this old algorithm, and the pure-Rust sha1 crate is painfully
slow. OpenSSL might still be a better choice than ring/rustls for TLS
but it's nice to have the option.
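The swap itself is tiny; a sketch with the blake3 crate (the helper name
is illustrative):

    // Replaces the former SHA-1 digest of sample file data.
    fn hash_sample_data(data: &[u8]) -> blake3::Hash {
        blake3::hash(data)
    }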
For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
I want to start returning the pixel aspect ratio of each video sample
entry. It's silly to duplicate it for each returned recording, so
let's instead return a videoSampleEntryId and send the full information
about each VSE only once.
This change doesn't actually handle pixel aspect ratio server-side yet.
Most likely I'll require a new schema version for that, to store it as a
new column in the database. Codec-specific logic in the database layer
is awkward and I'd like to avoid it. I did a similar schema change to
add the rfc6381_codec.
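A rough sketch of the reshaped response, expressed as serde types (field
names are assumptions, not the final API):

    // Recordings reference a VSE by id; the entries themselves are
    // listed once elsewhere in the response.
    #[derive(serde::Serialize)]
    #[serde(rename_all = "camelCase")]
    struct Recording {
        video_sample_entry_id: i32,
        // ... per-recording fields ...
    }

    #[derive(serde::Serialize)]
    struct VideoSampleEntry {
        id: i32,
        width: u16,
        height: u16,
        // pixel aspect ratio to come with a future schema change
    }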
I also adjusted ui-src/lib/models/Recording.js in a few ways:
* fixed a couple mismatches between its field name and the key defined
in the API. Consistency aids understanding.
* dropped all the getters in favor of just setting the fields (with
type annotations) as described here:
https://google.github.io/styleguide/jsguide.html#features-classes-fields
* where the wire format used undefined (to save space), translate it to
a more natural null or false.
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
every file rather than whoever created that file. Have one AUTHORS
file listing everyone.
* Consistently call it a "security camera network video recorder" rather
than "security camera digital video recorder".
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
This is effective both for Chrome's "Save As" dialog and for curl -OJ.
It makes the filename like 20190717135519-driveway-main.mp4 rather than
view.mp4 (Chrome) or view.mp4?s=33-36&ts=true (curl).
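Concretely, this amounts to attaching a Content-Disposition header; a
sketch (the helper is illustrative, not the real code):

    // Builds "attachment; filename=\"20190717135519-driveway-main.mp4\""
    // from the recording start time, camera, and stream name.
    fn content_disposition(start: &str, camera: &str, stream: &str) -> String {
        format!("attachment; filename=\"{}-{}-{}.mp4\"", start, camera, stream)
    }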
The reqwest one is particularly notable because it means not having two
versions of hyper/http/tokio/futures/bytes. It also drops a number of
transitive deps; with some work I think I could stop depending on regex
now.
This doesn't take much advantage of async fns so far. For example, the
with_{form,json}_body functions are still designed to be used with
future combinators when it'd be more natural to call them from async
fns now. But it's a start.
Similarly, this still uses the old version of reqwest. Small steps.
Requires Rust 1.40 now. (1.39 is a requirement of async fn, and 1.40 is
a requirement of http-serve 0.2.0.)
The codec -> codecpar move was sufficiently long ago (libavformat
57.5.0 on 2016-04-11) that I think we can just get away with requiring
the new version. Let's try it.
But if someone complains, AVCodecParameters and AVCodecContext look
sufficiently similar we could probably just use one or the other based on
the version we're compiling with.
This addresses a deprecation warning on nightly (the deprecation will
land in Rust 1.38).
Use parking_lot instead, which in theory is faster (although I doubt
it's significant here).
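For what it's worth, the swap is mechanical; e.g. parking_lot's Mutex
has the same shape as std::sync::Mutex, minus poisoning:

    use parking_lot::Mutex;

    fn demo() {
        let m = Mutex::new(0u32);
        // No .unwrap(): parking_lot's lock() can't fail with poisoning.
        *m.lock() += 1;
    }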
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
My dad's "GW-GW4089IP" cameras use separate ports for the main and sub
streams:
rtsp://192.168.1.110:5050/H264?channel=0&subtype=0&unicast=true&proto=Onvif
rtsp://192.168.1.110:5049/H264?channel=0&subtype=1&unicast=true&proto=Onvif
Previously I could get one of the streams to work by including :5050 or
:5049 in the host field of the camera. But not both. Now make the
camera's host field reflect the ONVIF port (which is also non-standard
on these cameras, :85). It's not directly used yet but probably will be
sooner or later. Make each stream know its full URL.
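A hedged sketch of the resulting shape (field names assumed, not the
actual schema):

    // The camera keeps the ONVIF host; each stream carries a complete
    // RTSP URL rather than deriving one from the camera's host field.
    struct Camera {
        onvif_host: String, // e.g. "192.168.1.110:85"
    }

    struct Stream {
        rtsp_url: String, // e.g. "rtsp://192.168.1.110:5050/H264?..."
    }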
This delegates to the "sqlite3" CLI but has a couple benefits over using
sqlite3 directly:
* safer because it does the same locking as other moonfire-nvr invocations
* more convenient because it takes the same argument format as other
moonfire-nvr subcommands:
  * --db-dir rather than the full path including the /db suffix
  * the same --db-dir default value
  * --read-only rather than file:...?mode=ro
Use like "moonfire-nvr sql" or "moonfire-nvr sql --read-only".
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)
This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:
* non-administrator users who can view but not access camera passwords
or change any state
* workers that update signal state based on cameras' built-in motion
detection or a security system's events but don't need to view videos
* control over what can be done without authenticating
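For illustration, permissions might be modeled as simple booleans (these
names are guesses, not the actual representation):

    // Hypothetical sketch; the real representation may differ.
    #[derive(Clone, Default)]
    struct Permissions {
        view_video: bool,
        read_camera_configs: bool, // includes passwords, so admin-only
        update_signals: bool,      // for motion/security-system workers
    }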
Currently session permissions are just copied from user permissions, but
you can also imagine admin sessions vs not, as a checkbox when signing
in. This would match the standard Unix workflow of using a
non-administrative session most of the time.
Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).