This splits wall and media durations in the schema and playback path.
The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
* Get rid of unused video_sample_entry rows. h264_reader rejected some
of these; perhaps they were corrupted by some long-fixed bug.
* Use an i64 for cum_duration_90k (oops); an i32 overflows with only 6.6 hours
of recording, so this was guaranteed to fail on any real setup (see the sanity
check at the end of this note).
* Add some context to those errors for debugging.
For posterity, a video_sample_entry that failed:
sqlite> select id, hex(sha1), width, height, rfc6381_codec, hex(data) from video_sample_entry where id = 9;
9|B3607B06107E779F57D062331FB54B59E964B9BC|1920|1080|avc1.640028|000000B26176633100000000000000010000000000000000000000000000000007800438004800000048000000000000000100000000000000000000000000000000000000000000000000000000000000000018FFFF0000005C6176634301640028FFE1002967640028AC1B1A80780227E5C05B808080A000007D0000186A1D0C0029FF5DE5C6860014FFAEF2E140010020A886052ACA0500769C28476EFE104A8000F08781320819888E894B5200000000
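A quick sanity check of that i32 overflow claim (a standalone sketch; the
_90k suffix refers to the 90 kHz time base):

    fn main() {
        // Maximum total duration an i32 can represent in 90 kHz units:
        // i32::MAX / 90_000 = 23_860 seconds, or roughly 6.6 hours.
        let max_hours = i32::MAX as f64 / 90_000.0 / 3_600.0;
        println!("i32 cum_duration_90k overflows after {:.1} hours", max_hours);
    }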
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":
In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will likely
first briefly see the segment they'd seeked to before. That's janky.
Additionally, the "canplaythrough" event will behave strangely.
In "segments" mode, playback respects the timestamps we set:
* The obvious choice is to use wall clock timestamps. This is fine if
they're known to be fixed and correct. They're not. The
currently-recording segment may be "unanchored", meaning its start
timestamp is not yet fixed. Older timestamps may overlap if the system
clock was stepped between runs. The latter isn't /too/ bad from a user
perspective, though it's confusing as a developer. We probably will
only end up showing the more recent recording for a given
timestamp anyway. But the former is quite annoying. It means we have
to throw away part of the SourceBuffer that we may want to seek back to
(causing UI pauses when that happens) or keep our own spare copy of it
(memory bloat). I'd like to avoid the whole mess.
* Another approach is to use timestamps that are guaranteed to be in
the correct order but that may have gaps. In particular, a timestamp
of (recording_id * max_recording_duration) + time_within_recording.
But again seeking isn't instantaneous. In my experiments, there's a
visible pause between segments that drives me nuts.
* Finally, the approach that led me to this schema change. Use
timestamps that place each segment after the one before, possibly with
an intentional gap between runs (to force a wait where we have an
actual gap). This should make the browser's natural playback behavior
work properly: it never goes to an incorrect place, and it only waits
when/if we want it to. We have to maintain a mapping between its
timestamps and segment ids, but that's doable (a sketch follows).
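A minimal sketch of that placement rule (the names here are my shorthand,
not necessarily the schema's): start each recording at its stream's
cumulative media duration so far, plus a fixed artificial gap for each run
that completed before it.

    /// Hypothetical one-second gap inserted between runs, in 90 kHz units.
    const RUN_GAP_90K: i64 = 90_000;

    /// Starting position of a recording on the SourceBuffer timeline, given
    /// the total duration of all prior recordings in the stream and the
    /// number of runs completed before it.
    fn timeline_start_90k(prior_duration_90k: i64, prior_runs: i64) -> i64 {
        prior_duration_90k + prior_runs * RUN_GAP_90K
    }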
This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.
Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.
The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
I've never seen this happen in Chrome; each load/reload starts fresh,
so "infinite" is never selected. But on Firefox it seems to remember the
setting across reloads, triggering this bug.
This bug was introduced in 58152e8: it started parsing/normalizing the
HTML form into a JavaScript field on change. It didn't handle the
initial load properly. Prior to that commit, fetch() simply read
directly from the HTML form, so it didn't care about initial vs update.
This uses "async fn" throughout rather than a mix of async and the older
futures style. And it takes advantage of the "self: Arc<Self>" syntax
to avoid having a ServiceInner. It was confusing to have some methods
on Service and some on ServiceInner; now that distinction is gone.
One downside is there's a little more atomic reference-counting. Before,
service_fn essentially took an &Arc<Self>, which meant it only had to call
Arc::clone where its use of self actually outlived the future (see
stream_live_m4s). After, it calls
an async fn that takes Arc<Self>. Using &Arc<Self> is apparently
possible (as of Rust 1.41) but using that with "async fn" means the
returned future is tied to its lifetime. The workaround is to use
async blocks as described here:
<https://rust-lang.github.io/async-book/03_async_await/01_chapter.html>
but that's really ugly: it brings back the explicit Future reference,
requires futures::future::Either in some cases, and introduces another
level of indenting. I think it's better to just pay the Arc costs, which
are probably negligible, or at least cheaper than the boxing was before.
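A minimal sketch of the pattern described above (the struct and method here
are illustrative, not the actual Service):

    use std::sync::Arc;

    struct Service {
        // shared state that previously lived in a separate ServiceInner
    }

    impl Service {
        // Taking `self: Arc<Self>` means the returned future owns its
        // reference and isn't tied to a borrow's lifetime, at the cost of
        // an extra atomic refcount bump per call.
        async fn serve(self: Arc<Self>, path: String) -> String {
            format!("handled {}", path)
        }
    }

The caller then clones the Arc once per request, which is where the extra
reference-counting mentioned above comes from.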
Oh, and I made this compile on Rust 1.40 again, as it claims to.
http-serve accidentally used the &Arc<Self> thing, which broke this.
Update to a freshly-pushed commit which doesn't do that.
* use content-hashed paths for static resources (except the top-level
request), with immutable Cache-Control headers. This should improve
cache behavior in both directions: avoid preventable HTTP requests and
cause immediate refresh when needed. I had some staleness when
browsing with my phone.
* set up the favicons properly while I'm at it (closes #50). I used the
convenient favicons-webpack-plugin to build everything from a .svg.
I've hit an error similar to lovell/sharp#1593 at least once though so
I might change my mind about that part if it continues to be
problematic.
* use http-serve's new directory traversal code for static file serving.
This removes the odd behavior where files that weren't present at
server startup couldn't be served. (I wasn't comfortable switching to
the content-hashed paths before doing this.) It also means the static
files can be served compressed. JSON API responses were already served
compressed, so this closes #25.
* decide server-side, for a given API URL, whether we want it to be cached.
Stop using jQuery's kludgy cache-defeating _=<timestamp> URL parameter. I
might start setting ETags on some of these things and could serve 304 Not
Modified responses if the content is genuinely unmodified. (A sketch of the
two cache policies follows this list.)
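A rough sketch of those two policies (using the http crate; the header
values are illustrative, not necessarily what the server sends):

    use http::{header, Response};

    /// Content-hashed static resources never change at a given path, so
    /// clients may cache them indefinitely.
    fn static_response(body: Vec<u8>) -> Response<Vec<u8>> {
        Response::builder()
            .header(header::CACHE_CONTROL,
                    "public, max-age=31536000, immutable")
            .body(body)
            .unwrap()
    }

    /// API responses that shouldn't be reused without revalidation.
    fn api_response(body: Vec<u8>) -> Response<Vec<u8>> {
        Response::builder()
            .header(header::CACHE_CONTROL, "no-cache")
            .body(body)
            .unwrap()
    }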
nav div changes:
* make it togglable (on all devices) by hamburger button
* on narrow devices, make it closed by default and
be at the top rather than on the left
other changes:
* open zoomed by default
* trim some arguably less important columns on narrow displays,
and reduce some horizontal padding
* always show videos full-screen on narrow displays
Improves #68 significantly
Both a "cargo update" and a bump of major versions of a few deps.
I left a few alone:
* base64 because some of the deps depend on 0.11 (and 0.9), so I don't
want to pull in a third version (0.12).
* ring because libpasta depends on this version and I don't want to pull
in two of them.
* time because it's not trivial. Last I checked, time 0.2 couldn't even
do what I wanted at all.
I also made tokio use parking_lot, since I pull it in anyway.
This reduces the binary size noticeably on my macOS machine (#70):
                                 unstripped  stripped
    1. before switching to clap    11.1 MiB   6.7 MiB
    2. after switching to clap     11.4 MiB   6.9 MiB
    3. without regex               10.1 MiB   5.9 MiB
A couple reasons for this:
* the docopt crate is "unlikely to see significant future evolution",
and the wider docopt project is "mostly unmaintained at this point".
clap/structopt is more full-featured, has more natural subcommand
support, etc.
* it may allow me to shrink the binary (#70). This change alone seems
to be a slight regression, but it's a step toward getting rid of
regex, which is pretty large. And I feel less ridiculous now that I
don't have two parsing crates anyway; prettydiff was pulling in
structopt.
There are some behavior changes here:
* misc --help output changes and such as you'd expect from switching
argument-parsing libraries
* I properly used PathBuf and OsString for stuff that theoretically
could be non-UTF-8 (a sketch follows this list). I haven't tested that it
actually made any difference. I'm also still storing the sample file dirname
as "text" in the database to avoid causing a diff when not doing a schema
change.
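A minimal sketch of that argument handling with structopt (the argument name
is made up):

    use std::path::PathBuf;
    use structopt::StructOpt;

    #[derive(StructOpt)]
    struct Args {
        /// Using PathBuf rather than String keeps non-UTF-8 paths
        /// representable.
        #[structopt(long, parse(from_os_str))]
        sample_file_dir: PathBuf,
    }

    fn main() {
        let args = Args::from_args();
        println!("using {}", args.sample_file_dir.display());
    }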
db/writer.rs used the word "unflushed" in two ways:
* something which has been communicated to the LockedDatabase object but
not yet committed to disk with SQLite.
* a video sample (aka video frame) which has been written to the sample
file but not yet included in the video index. This happens because the
duration of a frame isn't known until the following frame. These are
always also unflushed in the other sense of the word (as unfinished
recordings are never committed). But they can't be seen by clients at
all, whereas indexed but uncommitted video frames can.
Replace the latter with "unindexed" to make things clearer, and make a
couple of other minor style cleanups.
When compiled with cargo build --features=analytics and enabled via
moonfire-nvr run --object-detection, this runs object detection on every
sub stream frame through an Edge TPU (a Coral USB accelerator) and logs
the result.
This is a very small step toward a working system. It doesn't actually
record the result in the database or send it out on the live stream yet.
It doesn't support running object detection at a lower frame rate than
the sub streams come in at, either. To address those problems, I need to
do some refactoring. Currently moonfire_db::writer::Writer::Write is the
only place that knows the duration of the frame it's about to flush,
before it gets added to the index or sent out on the live stream. I
don't want to do the detection from there; I'd prefer the moonfire_nvr
crate. So I either need to introduce an analytics callback or move a
bunch of that logic to the other crate.
Once I do that, I need to add database support (although I have some
experiments for that in moonfire-playground) and API support, then some
kind of useful frontend.
Note edgetpu.tflite is taken from the Apache 2.0-licensed
https://github.com/google-coral/edgetpu,
test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite. The
following page says it's fine to include Apache 2.0 stuff in GPLv3
projects:
https://www.apache.org/licenses/GPL-compatibility.html
Benefits:
* Blake3 is faster. This is most noticeable for the hashing of the
sample file data (a sketch of the incremental hashing follows this list).
* we no longer need OpenSSL, which helps with shrinking the binary size
(#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
support this old algorithm, and the pure-Rust sha1 crate is painfully
slow. OpenSSL might still be a better choice than ring/rustls for TLS
but it's nice to have the option.
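A minimal sketch of that incremental hashing with the blake3 crate (the
chunking shown here is made up):

    /// Hashes sample file data incrementally, chunk by chunk.
    fn hash_chunks(chunks: &[&[u8]]) -> blake3::Hash {
        let mut hasher = blake3::Hasher::new();
        for &chunk in chunks {
            hasher.update(chunk);
        }
        hasher.finalize()
    }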
For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
I want to start returning the pixel aspect ratio of each video sample
entry. It's silly to duplicate it for each returned recording, so
let's instead return a videoSampleEntryId per recording and return the
full information about each VSE just once.
This change doesn't actually handle pixel aspect ratio server-side yet.
Most likely I'll require a new schema version for that, to store it as a
new column in the database. Codec-specific logic in the database layer
is awkward and I'd like to avoid it. I did a similar schema change to
add the rfc6381_codec.
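A rough sketch of the reshaped response (besides videoSampleEntryId, the
struct and field names here are guesses, not the actual API):

    use serde::Serialize;
    use std::collections::BTreeMap;

    #[derive(Serialize)]
    #[serde(rename_all = "camelCase")]
    struct Recording {
        /// References an entry in the top-level map below instead of
        /// duplicating its contents in every recording.
        video_sample_entry_id: i32,
        // ...other per-recording fields...
    }

    #[derive(Serialize)]
    #[serde(rename_all = "camelCase")]
    struct VideoSampleEntry {
        width: u16,
        height: u16,
        // pixel aspect ratio would go here once the server tracks it
    }

    #[derive(Serialize)]
    #[serde(rename_all = "camelCase")]
    struct RecordingsResponse {
        recordings: Vec<Recording>,
        video_sample_entries: BTreeMap<i32, VideoSampleEntry>,
    }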
I also adjusted ui-src/lib/models/Recording.js in a few ways:
* fixed a couple mismatches between its field name and the key defined
in the API. Consistency aids understanding.
* dropped all the getters in favor of just setting the fields (with
type annotations) as described here:
https://google.github.io/styleguide/jsguide.html#features-classes-fields
* where the wire format used undefined (to save space), translate it to
a more natural null or false.
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
every file rather than whoever created that file. Have one AUTHORS
file listing everyone.
* Consistently call it a "security camera network video recorder" rather
than "security camera digital video recorder".
These apparently were silent until 92c532d mass-upgraded deps; it seems
eslint previously returned status 0 despite errors and now returns 1.
Most of these were handled by its "--fix" option; I manually took care
of the remaining two:
/Users/slamb/git/moonfire-nvr/ui-src/lib/views/RecordingsView.js
140:1 error This line has a length of 82. Maximum allowed is 80 max-len
/Users/slamb/git/moonfire-nvr/ui-src/lib/views/StreamSelectorView.js
72:1 error This line has a length of 82. Maximum allowed is 80 max-len
Looks like I basically had to do this to keep up. With nodejs version 12
(current LTS), the version of fsevents I installed wouldn't build. A
"yarn upgrade" by itself resulted in a new problem as described in #69.
Conversely, the new versions don't install with nodejs 8. So I bit the
bullet and upgraded all the dev dependency stuff and nodejs at once.
nodejs 10 seems capable of running either the old or the new, fwiw.
I'm a little sad that this seems to have made the UI bundle 5% larger.
Before, "yarn build" said 350 KiB. After, 369 KiB. A little bit in
several places. For example, jquery-ui.bundle.js went from 156 KiB (in
2 chunks) to 160 KiB (in 1 chunk) for some reason.
Apparently webpack builds in "terser", which is what the cool kids use.
Just go with the default and simplify the configuration as well as
installing fewer node modules.
"babel-minify-webpack-plugin" wasn't actually being used, and
babel-minify is still in beta anyway.
uglifyjs, according to https://github.com/webpack/webpack/issues/7923,
doesn't support ES6 and depends on a package which is no longer
maintained.
* simplify it. Go from six checked-in config files + one local one to
three checked-in configs + commandline options. I find it less
confusing to have the options plumbed through fewer layers.
* support developing against a https production server, as described in
guide/developing-ui.md.
* fix the source map. The sourceMap parameter in prod.config.js, as far
as I can tell, evaluated to false when run with the production config, and
anyway UglifyJS seems to be incompatible with the specified
cheap-module-source-map. Use source-map instead.
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
Let's follow the Google Style Guide, in which private variables are
simply suffixed with "_". It's a sign, not a cop, but that's fine.
I'd rather keep things simple, and code review should suffice for
catching uses of a private variable outside the class.
This is effective both for Chrome's "Save As" dialog and for curl -OJ.
It makes the filename like 20190717135519-driveway-main.mp4 rather than
view.mp4 (Chrome) or view.mp4?s=33-36&ts=true (curl).
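A rough sketch of the header that produces this (the attachment disposition
and exact filename handling are illustrative):

    use http::{header, Response};

    /// Both Chrome's Save As dialog and curl -OJ honor this header when
    /// choosing a filename.
    fn set_download_filename(resp: &mut Response<Vec<u8>>, filename: &str) {
        let value = format!("attachment; filename=\"{}\"", filename);
        resp.headers_mut()
            .insert(header::CONTENT_DISPOSITION, value.parse().unwrap());
    }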
The reqwest one is particularly notable because it means not having two
versions of hyper/http/tokio/futures/bytes. It also drops a number of
transitive deps; with some work I think I could stop depending on regex
now.