Commit Graph

674 Commits

Author SHA1 Message Date
Scott Lamb
877e455686 upgrade some deps including parking_lot
Now there's a single parking_lot version, yay.
2020-11-23 22:46:05 -08:00
Scott Lamb
8812dab639 update libpasta/ring
This uses my (hopefully short-lived) libpasta fork.
2020-11-23 21:47:56 -08:00
Scott Lamb
269db57a53 update various deps
This brings most things reasonably up-to-date. libpasta's deps are
dragging a bit, keeping us on an older ring to avoid duplication,
and causing us to use three versions of base64. And I need to update
a few of my companion crates' parking_lot dep to match tokio.
2020-11-23 00:31:38 -08:00
Scott Lamb
8512199d85 Merge branch 'master' into new-schema 2020-11-22 20:40:16 -08:00
Scott Lamb
642eb4f60b update min Rust version to 1.42.0
rusqlite 0.24.1 uses matches!, which was introduced in this version.
So 1682553 doesn't work on prior versions.

https://travis-ci.org/github/scottlamb/moonfire-nvr/jobs/745309747
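
For reference, a minimal sketch of the macro in question (values are
made up, not code from this repo):

    fn main() {
        // `matches!` expands to a match expression that returns a bool.
        // It was stabilized in Rust 1.42, hence the new minimum version.
        let col: Option<i64> = Some(3);
        assert!(matches!(col, Some(n) if n > 0));
        assert!(!matches!(col, None));
    }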
2020-11-22 18:45:54 -08:00
Scott Lamb
16825537b9 upgrade rusqlite, linked-hash-map/hashlink
With Rust 1.48.0, I was getting panics like "attempted to leave type
`linked_hash_map::Node<std::string::String,
raw_statement::RawStatement>` uninitialized, which is invalid". Looks
like this is due to a bug in linked-hash-map 0.5.2, fixed in 0.5.3.

Along the way, I see that rusqlite no longer uses this crate; it
switched to hashlink. Upgrade rusqlite and match that change for my own
use (the video index LRU cache; see the sketch below).

I'm still pulling in linked-hash-map via serde_yaml. I didn't even
realize I was using serde_yaml, but libpasta wants it. Seems a little
silly to me but that's something I might explore another day.
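
For flavor, a minimal LRU sketch assuming hashlink's LruCache API (which
mirrors the old lru-cache crate); this is not the actual video index
cache in this repo:

    use hashlink::LruCache;

    fn main() {
        // Keep at most 8 parsed indexes; the least recently used entry is evicted first.
        let mut cache: LruCache<i64, Vec<u8>> = LruCache::new(8);
        cache.insert(1, b"fake video index".to_vec());
        if let Some(index) = cache.get_mut(&1) {
            println!("cached {} bytes for recording 1", index.len());
        }
    }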
2020-11-22 17:37:55 -08:00
Scott Lamb
d83bb1bf4d better error msg when db dir open fails
The previous poor error message was reported by Marlon Hendred:
https://groups.google.com/g/moonfire-nvr-users/c/X9k2cOCJUDQ/m/1y1ikrWoAAAJ
2020-10-17 16:56:42 -07:00
Scott Lamb
cc93768a9a
Merge pull request #97 from IronOxidizer/master
Added pwa meta tags
2020-09-07 20:54:56 -07:00
main
2d5c2a4de8 Added pwa meta tags 2020-09-07 16:56:20 -04:00
Scott Lamb
75dce88a20 use "-fflags nobuffer" on ffmpeg rtsp stream
This works around the problem in which the first frame has an incorrect
pts. Before, we waited for the second key frame, which added annoying
latency to starting the stream.

I don't think the buffering does anything helpful for Moonfire NVR
anyway.
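
Roughly the idea, sketched with the ffmpeg-next crate rather than this
project's own ffmpeg bindings; "fflags"/"nobuffer" are real ffmpeg option
names, but the surrounding code is illustrative only:

    use ffmpeg_next as ffmpeg;

    fn open_rtsp(url: &str) -> Result<ffmpeg::format::context::Input, ffmpeg::Error> {
        ffmpeg::init()?;
        let mut opts = ffmpeg::Dictionary::new();
        // Don't let the demuxer buffer initial input; hand frames over
        // as soon as they arrive.
        opts.set("fflags", "nobuffer");
        ffmpeg::format::input_with_dictionary(&url, opts)
    }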
2020-08-14 21:21:59 -07:00
Scott Lamb
8f792aeb2d live stream frame-by-frame rather than GOP-by-GOP (#59)
This should reduce live stream latency by two seconds when my cameras
are at their default setting (I frame interval = 2 * frame rate)!

I was under the impression that every HTML5 Media Source Extensions
media segment had to start with a Random Access Point. This used to
be true, but apparently changed quite a while ago:
https://bugs.chromium.org/p/chromium/issues/detail?id=229412

Support generating segments that don't start with a key frame, and
plumb this through the mp4 media segment generation logic. Add some
extra error checking in mp4 slice handling, as my first attempts had a
mismatch between expected and actual lengths that silently returned
corrupted .m4s files.

Also pull everything from the most recent key frame onward, along with the
first live segment to reduce startup latency. Live view is quite a bit
more pleasant now.
2020-08-07 15:56:57 -07:00
Scott Lamb
b9c08b18a4 fix live view
This broke with the media vs wall duration split, part of #34.
2020-08-07 10:16:06 -07:00
Scott Lamb
036e8427e6 complete wall/media time split (for #34) 2020-08-06 22:01:59 -07:00
Scott Lamb
cb97ccdfeb start splitting wall and media duration for #34
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
2020-08-04 21:44:01 -07:00
Scott Lamb
459615a616 include all recordings in days map (fixes #57)
This is a quick fix to a problem that gives a confusing/poor initial
experience, as in this thread:
https://groups.google.com/g/moonfire-nvr-users/c/WB-TIW3bBZI/m/Gqh-L6I9BgAJ

I don't think it's a permanent solution. In particular, when we
implement an event stream (#40), I don't want to have a separate event
for every frame, so having the days map change that often won't work.
The client side will likely manipulate the days map then to include a
special entry for a growing recording, representing "from this time to
now".
2020-07-18 12:13:08 -07:00
Scott Lamb
5515db0513
Merge pull request #91 from jackchallen/master
Add note about initializing empty DB
2020-07-18 07:35:30 -07:00
Jack Challen
9370732ed9 Add note about initializing empty DB
The first time moonfire's run, it needs an (empty) db. The docs
appear to miss that step.
Made surrounding documentation slightly more explicit.
2020-07-18 09:26:58 +01:00
Scott Lamb
c5a4af15ff fix --ui-dir parsing
Thanks Jack Challen for reporting the breakage:
https://groups.google.com/g/moonfire-nvr-users/c/WB-TIW3bBZI/m/Gqh-L6I9BgAJ
2020-07-17 20:08:33 -07:00
Scott Lamb
476bd86b12 Merge branch 'master' into new-schema 2020-07-12 19:22:38 -07:00
Scott Lamb
959defebca track "assumed" filesystem usage (#89)
As described in #89, we need to refactor a bit before we can get the
actual filesystem block size. Assuming 4096 for now. Small steps.
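
Presumably the bookkeeping amounts to rounding each file's length up to
a multiple of the assumed 4096-byte block size; a minimal sketch (the
helper name is hypothetical):

    // Round a sample file's byte length up to the assumed 4096-byte block size.
    fn assumed_fs_usage(len: u64) -> u64 {
        const BLOCK: u64 = 4096;
        (len + BLOCK - 1) / BLOCK * BLOCK
    }

    fn main() {
        assert_eq!(assumed_fs_usage(1), 4096);
        assert_eq!(assumed_fs_usage(8192), 8192);
    }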
2020-07-12 17:15:41 -07:00
Scott Lamb
42a6f4d091 API change: cameraConfigs should include rtsp urls 2020-06-22 15:41:14 -07:00
Scott Lamb
840524ec83 fix a couple v5->v6 schema upgrade problems
* Get rid of unused video_sample_entry rows. h264_reader rejected some
  of these; perhaps they were corrupted by some long-fixed bug.
* Use an i64 for cum_duration_90k (oops); an i32 of 90 kHz ticks overflows
  with only about 6.6 hours of recording (see the arithmetic below), so this
  was guaranteed to fail on any real setup.
* Add some context to those errors for debugging.
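
The overflow arithmetic, as a quick sanity check:

    fn main() {
        // i32::MAX = 2_147_483_647 ticks; at 90_000 ticks per second that's
        // about 23_860 seconds, i.e. roughly 6.6 hours of cumulative duration.
        let max_seconds = i32::MAX as f64 / 90_000.0;
        println!("i32 overflows after {:.1} hours", max_seconds / 3600.0);
    }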

For posterity, a video_sample_entry that failed:

sqlite> select id, hex(sha1), width, height, rfc6381_codec, hex(data) from video_sample_entry where id = 9;
9|B3607B06107E779F57D062331FB54B59E964B9BC|1920|1080|avc1.640028|000000B26176633100000000000000010000000000000000000000000000000007800438004800000048000000000000000100000000000000000000000000000000000000000000000000000000000000000018FFFF0000005C6176634301640028FFE1002967640028AC1B1A80780227E5C05B808080A000007D0000186A1D0C0029FF5DE5C6860014FFAEF2E140010020A886052ACA0500769C28476EFE104A8000F08781320819888E894B5200000000
2020-06-10 19:38:45 -07:00
Scott Lamb
6f9612738c pass prev duration and runs through API layer
Builds on f3ddbfe, for #32 and #59.
2020-06-09 22:06:03 -07:00
Scott Lamb
f3ddbfe22a track cumulative duration and runs
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing a HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":

In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely first see, briefly, the segment they seeked to before the
corrective seek takes effect. That's janky. Additionally, the
"canplaythrough" event will behave strangely.

In "segments" mode, playback respects the timestamps we set:

* The obvious choice is to use wall clock timestamps. This is fine if
  they're known to be fixed and correct. They're not. The
  currently-recording segment may be "unanchored", meaning its start
  timestamp is not yet fixed. Older timestamps may overlap if the system
  clock was stepped between runs. The latter isn't /too/ bad from a user
  perspective, though it's confusing as a developer. We probably will
  only end up showing the more recent recording for a given
  timestamp anyway. But the former is quite annoying. It means we have
  to throw away part of the SourceBuffer that we may want to seek back to
  (causing UI pauses when that happens) or keep our own spare copy of it
  (memory bloat). I'd like to avoid the whole mess.

* Another approach is to use timestamps that are guaranteed to be in
  the correct order but that may have gaps. In particular, a timestamp
  of (recording_id * max_recording_duration) + time_within_recording.
  But again seeking isn't instantaneous. In my experiments, there's a
  visible pause between segments that drives me nuts.

* Finally, the approach that led me to this schema change. Use
  timestamps that place each segment after the one before, possibly with
  an intentional gap between runs (to force a wait where we have an
  actual gap). This should make the browser's natural playback behavior
  work properly: it never goes to an incorrect place, and it only waits
  when/if we want it to. We have to maintain a mapping between its
  timestamps and segment ids, but that's doable (sketched below).

This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.

Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.

The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
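
A hedged sketch of the third approach's mapping, with hypothetical field
names (the real plumbing through the API and UI lands in later commits):
each recording's player-facing start time is the cumulative media
duration of everything before it, plus an intentional gap whenever a new
run starts.

    // Illustrative only; not the real schema or API types.
    struct Recording {
        cum_media_duration_90k: i64, // total duration of earlier recordings in the stream
        run_index: i64,              // how many runs started before this one
    }

    // Force a 1-second wait at each run boundary so playback pauses where
    // there is a real gap in the recordings.
    const GAP_PER_RUN_90K: i64 = 90_000;

    fn player_start_secs(r: &Recording) -> f64 {
        (r.cum_media_duration_90k + r.run_index * GAP_PER_RUN_90K) as f64 / 90_000.0
    }

    fn main() {
        let r = Recording { cum_media_duration_90k: 5 * 60 * 90_000, run_index: 1 };
        println!("segment starts at {:.1}s in the SourceBuffer", player_start_secs(&r));
    }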
2020-06-09 16:17:32 -07:00
Scott Lamb
6b5359b7cb use SQLite3 "extra synchronous" pragma
See
https://github.com/scottlamb/moonfire-nvr/issues/84#issuecomment-640862000
https://www.sqlite.org/pragma.html#pragma_synchronous
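
Roughly what the setting amounts to, sketched with rusqlite (the actual
call site and connection setup in this repo may differ):

    use rusqlite::Connection;

    fn open_db(path: &str) -> rusqlite::Result<Connection> {
        let conn = Connection::open(path)?;
        // EXTRA is FULL plus an additional directory sync when the rollback
        // journal is unlinked: a bit slower, more durable after power loss.
        conn.execute_batch("PRAGMA synchronous = EXTRA;")?;
        Ok(conn)
    }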
2020-06-08 13:35:45 -07:00
Scott Lamb
74fe33ec36 fix lint error
eslint is strict about jsdoc and line breaks on even a one-line nested
function, which is annoyingly verbose. Use an arrow function instead.
2020-06-08 10:41:35 -07:00
Scott Lamb
1fe5ef8e94 fix #79: errors when "infinite" selected on load
I've never seen this happen in Chrome; each load/reload starts fresh,
so infinite is never selected. But on Firefox it seems to remember the
setting across reloads, triggering this bug.

This bug was introduced in 58152e8: it started parsing/normalizing the
HTML form into a Javascript field on change. It didn't handle the
initial load properly. Prior to that commit, fetch() simply read
directly from the HTML form, so it didn't care about initial vs update.
2020-06-08 10:36:01 -07:00
Scott Lamb
2b7a3a31e2
Merge pull request #81 from scottlamb/dependabot/npm_and_yarn/websocket-extensions-0.1.4
Bump websocket-extensions from 0.1.3 to 0.1.4
2020-06-06 09:44:23 -07:00
dependabot[bot]
6888b7cb1c
Bump websocket-extensions from 0.1.3 to 0.1.4
Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-node) from 0.1.3 to 0.1.4.
- [Release notes](https://github.com/faye/websocket-extensions-node/releases)
- [Changelog](https://github.com/faye/websocket-extensions-node/blob/master/CHANGELOG.md)
- [Commits](https://github.com/faye/websocket-extensions-node/compare/0.1.3...0.1.4)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-06 16:29:51 +00:00
Scott Lamb
6187aa64cf Merge branch 'master' into new-schema 2020-06-03 15:47:10 -07:00
Scott Lamb
04ab8cdc7d more readable async web code
This uses "async fn" throughout rather than a mix of async and the older
futures style. And it takes advantage of the "self: Arc<Self>" syntax
to avoid having a ServiceInner. It was confusing to have some methods
on Service and some on ServiceInner; now that distinction is gone.

One downside is there's a little more atomic reference-counting. Before,
service_fn essentially took an &Arc<Self>, which means it could call
Arc::clone where its use of self actually outlived the future (see
stream_live_m4s) but didn't need to otherwise. After, it calls
an async fn that takes Arc<Self>. Using &Arc<Self> is apparently
possible (as of Rust 1.41) but using that with "async fn" means the
returned future is tied to its lifetime. The workaround is to use
async blocks as described here:
<https://rust-lang.github.io/async-book/03_async_await/01_chapter.html>
but that's really ugly: it brings back the explicit Future reference,
requires futures::future::Either in some cases, and introduces another
level of indenting. I think it's better to just pay the arc costs which
are probably negligible, or at least cheaper than the boxing was before.

Oh, and I made this compile on Rust 1.40 again, as it claimed to.
http-serve accidentally used the &Arc<Self> thing, which broke this.
Update to a freshly-pushed commit that doesn't do that.
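
The shape of the change, as a minimal sketch (illustrative only; the real
Service serves hyper requests):

    use std::sync::Arc;

    struct Service {
        name: String,
    }

    impl Service {
        // Taking `self: Arc<Self>` means the returned future owns its
        // reference, so there's no separate ServiceInner and no boxing.
        async fn handle(self: Arc<Self>, path: String) -> String {
            format!("{} served {}", self.name, path)
        }
    }

    #[tokio::main]
    async fn main() {
        let svc = Arc::new(Service { name: "moonfire".to_owned() });
        // Each request clones the Arc; that's the small extra refcounting cost.
        let resp = tokio::spawn(Arc::clone(&svc).handle("/api/".to_owned()))
            .await
            .unwrap();
        println!("{}", resp);
    }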
2020-05-30 21:34:37 -07:00
Scott Lamb
45abeb22de overhaul HTTP serving and caching
* use content-hashed paths for static resources (except the top-level
  request), with immutable Cache-Control headers (sketched below). This should improve
  cache behavior in both directions: avoid preventable HTTP requests and
  cause immediate refresh when needed. I had some staleness when
  browsing with my phone.

* set up the favicons properly while I'm at it (closes #50). I used the
  convenient favicons-webpack-plugin to build everything from a .svg.
  I've hit an error similar to lovell/sharp#1593 at least once though so
  I might change my mind about that part if it continues to be
  problematic.

* use http-serve's new directory traversal code for static file serving.
  This removes the odd behavior where files that weren't present at
  server startup couldn't be served. (I wasn't comfortable switching to
  the content-hashed paths before doing this.) It also means the static
  files can be served compressed. JSON API responses were already served
  compressed, so this closes #25.

* for a given API URL, decide if we want it to be cached or not
  server-side. Stop using jQuery's kludgy cache-defeating _=<timestamp>
  URL parameter. I might start setting etags on some of these things
  and could serve 304 Not Modified responses if it's genuinely
  unmodified.
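
For the content-hashed static resources, the interesting part is just an
aggressive immutable Cache-Control header; a sketch using the http crate
(the real responses are built by http-serve):

    use http::Response;

    fn cached_asset(body: Vec<u8>) -> Response<Vec<u8>> {
        // A content-hashed URL changes whenever the file changes, so the
        // browser may cache the response essentially forever.
        Response::builder()
            .header("Cache-Control", "public, max-age=31536000, immutable")
            .body(body)
            .expect("valid response")
    }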
2020-05-29 21:20:15 -07:00
Scott Lamb
88fe6e5135 fix description of MOONFIRE_DEV_HOST 2020-05-08 19:29:47 -07:00
Scott Lamb
ae5b840fae update all js deps
This addresses this error travis-ci builds have been hitting:

Error: Cannot find module '@babel/compat-data/corejs3-shipped-proposals'
https://travis-ci.org/github/scottlamb/moonfire-nvr/jobs/684922930
https://github.com/nodejs/node/issues/32852#issuecomment-613655150
2020-05-08 19:25:45 -07:00
Scott Lamb
150556e105 fix js lint errors 2020-05-08 18:44:02 -07:00
Scott Lamb
c7c0d5a6c1 Merge branch 'master' into new-schema 2020-05-08 16:33:49 -07:00
Scott Lamb
e177cbd042 improve mobile-friendliness (#68)
nav div changes:
* make it togglable (on all devices) by hamburger button
* on narrow devices, make it closed by default and
  be at the top rather than on the left

open zoomed by default

trim some arguably less important columns on narrow displays,
and reduce some horizontal padding

always show videos full-screen on narrow displays
2020-05-04 23:36:24 -07:00
Scott Lamb
1b464fd555 nav div changes for narrow devices
* make it togglable (on all devices) by hamburger button
* on narrow devices, make it closed by default and
  be at the top rather than on the left

Improves #68 significantly
2020-05-04 18:06:12 -07:00
Scott Lamb
be479a1ffe consistently indent css by 2 spaces
matching the Google CSS style guide
2020-05-04 16:46:26 -07:00
Scott Lamb
482d8a3074 use mylog::Format::from_str 2020-04-21 22:19:17 -07:00
Scott Lamb
474d96803c Merge branch 'master' into new-schema 2020-04-19 22:53:42 -07:00
Scott Lamb
de56739571 upgrade deps
Both a "cargo update" and a bump of major versions of a few deps.
I left a few alone:

* base64 because some of the deps depend on 0.11 (and 0.9), so I don't
  want to pull in a third version (0.12).
* ring because libpasta depends on this version and I don't want to pull
  in two of them.
* time because it's not trivial. Last I checked, time 0.2 couldn't even
  do what I wanted at all.

I also made tokio use parking_lot, since I pull it in anyway.
2020-04-19 22:19:57 -07:00
Scott Lamb
618d0d71be Merge branch 'master' into new-schema 2020-04-17 23:33:46 -07:00
Scott Lamb
af9e568344 replace regex use with nom
This reduces the binary size noticeably on my macOS machine (#70):

                             unstripped  stripped
1  before switching to clap    11.1 MiB   6.7 MiB
2  after switching to clap     11.4 MiB   6.9 MiB
3  without regex               10.1 MiB   5.9 MiB
2020-04-17 21:53:52 -07:00
Scott Lamb
e8eb764b90 switch from docopt to structopt
A couple reasons for this:

* the docopt crate is "unlikely to see significant future evolution",
  and the wider docopt project is "mostly unmaintained at this point".
  clap/structopt is more full-featured, has more natural subcommand
  support, etc.

* it may allow me to shrink the binary (#70). This change alone seems
  to be a slight regression, but it's a step toward getting rid of
  regex, which is pretty large. And I feel less ridiculous now that I
  don't have two parsing crates anyway; prettydiff was pulling in
  structopt.

There are some behavior changes here:

* misc --help output changes and such as you'd expect from switching
  argument-parsing libraries

* I properly used PathBuf and OsString for stuff that theoretically
  could be non-UTF-8. I haven't tested that it actually made any
  difference. I'm also still storing the sample file dirname as "text"
  in the database to avoid causing a diff when not doing a schema
  change.
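
The flavor of the new argument parsing, as a hypothetical sketch (the
subcommand and flag here are illustrative, not the actual moonfire-nvr
options):

    use std::path::PathBuf;
    use structopt::StructOpt;

    #[derive(StructOpt, Debug)]
    #[structopt(about = "example of structopt-style subcommands")]
    enum Args {
        /// Run the server.
        Run {
            /// Directory holding the SQLite database (not necessarily UTF-8).
            #[structopt(long, parse(from_os_str))]
            db_dir: PathBuf,
        },
    }

    fn main() {
        println!("{:?}", Args::from_args());
    }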
2020-04-17 21:53:37 -07:00
Scott Lamb
9d7cdc0954 cleanup: s/unflushed sample/unindexed sample/
db/writer.rs used the word "unflushed" in two ways:

* something which has been communicated to the LockedDatabase object but
  not yet committed to disk with SQLite.

* a video sample (aka video frame) which has been written to the sample
  file but not yet included in the video index. This happens because the
  duration of a frame isn't known until the following frame arrives. These
  are always also unflushed in the other sense of the word (as unfinished
  recordings are never committed). But they can't be seen by clients at
  all, whereas indexed but uncommitted video frames can.

Replace the latter with "unindexed" to make things more clear. And a
couple minor other style cleanups.
2020-04-14 23:09:34 -07:00
Scott Lamb
3ed397bacd first step toward object detection (#30)
When compiled with cargo build --features=analytics and enabled via
moonfire-nvr run --object-detection, this runs object detection on every
sub stream frame through an Edge TPU (a Coral USB accelerator) and logs
the result.

This is a very small step toward a working system. It doesn't actually
record the result in the database or send it out on the live stream yet.
It doesn't support running object detection at a lower frame rate than
the sub streams come in at either. To address those problems, I need to
do some refactoring. Currently moonfire_db::writer::Writer::Write is the
only place that knows the duration of the frame it's about to flush,
before it gets added to the index or sent out on the live stream. I
don't want to do the detection from there; I'd prefer the moonfire_nvr
crate. So I either need to introduce an analytics callback or move a
bunch of that logic to the other crate.

Once I do that, I need to add database support (although I have some
experiments for that in moonfire-playground) and API support, then some
kind of useful frontend.

Note edgetpu.tflite is taken from the Apache 2.0-licensed
https://github.com/google-coral/edgetpu,
test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite. The
following page says it's fine to include Apache 2.0 stuff in GPLv3
projects:
https://www.apache.org/licenses/GPL-compatibility.html
2020-04-13 23:03:49 -07:00
Scott Lamb
ad13935ed6 use extracted ffmpeg library 2020-03-28 00:59:25 -07:00
Scott Lamb
c6fe1b66a1 add a preliminary schema for object detection (#30) 2020-03-25 18:10:37 -07:00
Scott Lamb
00991733f2 use Blake3 instead of SHA-1 or Blake2b
Benefits:

* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data (see the sketch below).
* we no longer need OpenSSL, which helps with shrinking the binary size
  (#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
  support this old algorithm, and the pure-Rust sha1 crate is painfully
  slow. OpenSSL might still be a better choice than ring/rustls for TLS
  but it's nice to have the option.

For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
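
The new hashing, roughly (a minimal blake3 sketch, not the repo's actual
integrity-checking code):

    fn main() {
        // Hash sample file data incrementally and print the hex digest.
        let mut hasher = blake3::Hasher::new();
        hasher.update(b"first chunk of sample file data");
        hasher.update(b"second chunk");
        let hash = hasher.finalize();
        println!("blake3 digest: {}", hash.to_hex());
    }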
2020-03-20 21:46:53 -07:00