Commit Graph

58 Commits

Author SHA1 Message Date
Scott Lamb
7fbbd82ae7 remove stale warning about time handling 2020-12-22 19:51:50 -08:00
Scott Lamb
8512199d85 Merge branch 'master' into new-schema 2020-11-22 20:40:16 -08:00
Scott Lamb
8f792aeb2d live stream frame-by-frame rather than GOP-by-GOP (#59)
This should reduce live stream latency by two seconds when my cameras
are at their default setting (I frame interval = 2 * frame rate)!

I was under the impression that every HTML5 Media Source Extensions
media segment had to start with a Random Access Point. This used to
be true, but apparently changed quite a while ago:
https://bugs.chromium.org/p/chromium/issues/detail?id=229412

Support generating segments that don't start with a key frame, and
plumb this through the mp4 media segment generation logic. Add some
extra error checking in mp4 slice handling, as my first attempts had a
mismatch between expected and actual lengths that silently returned
corrupted .m4s files.

Also pull in everything from the most recent key frame onward along with
the first live segment to reduce startup latency. Live view is quite a bit
more pleasant now.
2020-08-07 15:56:57 -07:00
Scott Lamb
b9c08b18a4 fix live view
This broke with the media vs wall duration split, part of #34.
2020-08-07 10:16:06 -07:00
Scott Lamb
036e8427e6 complete wall/media time split (for #34) 2020-08-06 22:01:59 -07:00
Scott Lamb
cb97ccdfeb start splitting wall and media duration for #34
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
2020-08-04 21:44:01 -07:00
Scott Lamb
459615a616 include all recordings in days map (fixes #57)
This is a quick fix to a problem that gives a confusing/poor initial
experience, as in this thread:
https://groups.google.com/g/moonfire-nvr-users/c/WB-TIW3bBZI/m/Gqh-L6I9BgAJ

I don't think it's a permanent solution. In particular, when we
implement an event stream (#40), I don't want to have a separate event
for every frame, so having the days map change that often won't work.
The client side will then likely manipulate the days map to include a
special entry for a growing recording, representing "from this time to
now".
2020-07-18 12:13:08 -07:00
Scott Lamb
476bd86b12 Merge branch 'master' into new-schema 2020-07-12 19:22:38 -07:00
Scott Lamb
959defebca track "assumed" filesystem usage (#89)
As described in #89, we need to refactor a bit before we can get the
actual filesystem block size. Assuming 4096 for now. Small steps.
2020-07-12 17:15:41 -07:00
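
To illustrate the accounting 959defebca describes, here is a minimal sketch assuming a fixed 4096-byte block size; the names are hypothetical and the real logic lives in the Rust db crate.

```typescript
// Sketch only: "assumed" filesystem usage with a fixed block size. The names
// here are illustrative, not the actual Moonfire NVR code.
const ASSUMED_BLOCK_SIZE = 4096;

/** Bytes a sample file of `len` bytes is assumed to occupy on disk. */
function assumedUsage(len: number): number {
  return Math.ceil(len / ASSUMED_BLOCK_SIZE) * ASSUMED_BLOCK_SIZE;
}

// A 1-byte file still counts as one full block:
// assumedUsage(1) === 4096; assumedUsage(8192) === 8192; assumedUsage(0) === 0.
```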
Scott Lamb
42a6f4d091 API change: cameraConfigs should include rtsp urls 2020-06-22 15:41:14 -07:00
Scott Lamb
6f9612738c pass prev duration and runs through API layer
Builds on f3ddbfe, for #32 and #59.
2020-06-09 22:06:03 -07:00
Scott Lamb
f3ddbfe22a track cumulative duration and runs
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":

In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely briefly see the segment they seeked to before the correction
takes effect. That's janky. Additionally, the "canplaythrough" event
will behave strangely.

In "segments" mode, playback respects the timestamps we set:

* The obvious choice is to use wall clock timestamps. This is fine if
  they're known to be fixed and correct. They're not. The
  currently-recording segment may be "unanchored", meaning its start
  timestamp is not yet fixed. Older timestamps may overlap if the system
  clock was stepped between runs. The latter isn't /too/ bad from a user
  perspective, though it's confusing as a developer. We probably will
  only end up showing the more recent recording for a given
  timestamp anyway. But the former is quite annoying. It means we have
  to throw away part of the SourceBuffer that we may want to seek back
  to (causing UI pauses when that happens) or keep our own spare copy of
  it (memory bloat). I'd like to avoid the whole mess.

* Another approach is to use timestamps that are guaranteed to be in
  the correct order but that may have gaps. In particular, a timestamp
  of (recording_id * max_recording_duration) + time_within_recording.
  But again seeking isn't instantaneous. In my experiments, there's a
  visible pause between segments that drives me nuts.

* Finally, the approach that led me to this schema change. Use
  timestamps that place each segment after the one before, possibly with
  an intentional gap between runs (to force a wait where we have an
  actual gap). This should make the browser's natural playback behavior
  work properly: it never goes to an incorrect place, and it only waits
  when/if we want it to. We have to maintain a mapping between its
  timestamps and segment ids but that's doable.

This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.

Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.

The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
2020-06-09 16:17:32 -07:00
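
To make the third approach above concrete, here is a browser-side sketch of "segments" mode playback. It is only an illustration under assumptions: the fetch helper, the segment metadata shape, and the init-segment handling are hypothetical, not the project's actual UI code.

```typescript
// Sketch only: place each fetched .m4s after the previous one, with an
// intentional gap between runs, by maintaining our own timestamp mapping.
const updated = (b: SourceBuffer) =>
  new Promise<void>((r) => b.addEventListener("updateend", () => r(), { once: true }));

async function playRun(
  video: HTMLVideoElement,
  mimeType: string,              // e.g. 'video/mp4; codecs="avc1.640028"'
  initSegmentUrl: string,        // hypothetical init segment endpoint
  segments: { url: string; durationSec: number; gapBeforeSec: number }[],
) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise<void>((r) =>
    mediaSource.addEventListener("sourceopen", () => r(), { once: true }));

  const buf = mediaSource.addSourceBuffer(mimeType);
  buf.mode = "segments"; // respect the timestamps we assign, not append order

  // The initialization segment must be appended before any media segment.
  buf.appendBuffer(await (await fetch(initSegmentUrl)).arrayBuffer());
  await updated(buf);

  let offsetSec = 0; // our mapping from segment order to presentation time
  for (const seg of segments) {
    offsetSec += seg.gapBeforeSec;   // force a wait where there's a real gap
    buf.timestampOffset = offsetSec; // safe: buf isn't mid-update here
    buf.appendBuffer(await (await fetch(seg.url)).arrayBuffer());
    await updated(buf);
    offsetSec += seg.durationSec;
  }
  mediaSource.endOfStream();
}
```

In practice the UI would also keep the reverse mapping from `timestampOffset` ranges back to recording ids so that seeks can be translated, as the commit message notes.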
Scott Lamb
00991733f2 use Blake3 instead of SHA-1 or Blake2b
Benefits:

* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
  (#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
  support this old algorithm, and the pure-Rust sha1 crate is painfully
  slow. OpenSSL might still be a better choice than ring/rustls for TLS
  but it's nice to have the option.

For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
2020-03-20 21:46:53 -07:00
Scott Lamb
3968bfe912 reorganize /recordings JSON response
I want to start returning the pixel aspect ratio of each video sample
entry. It's silly to duplicate it for each returned recording, so
let's instead return a videoSampleEntryId and then put all the
information about each VSE once.

This change doesn't actually handle pixel aspect ratio server-side yet.
Most likely I'll require a new schema version for that, to store it as a
new column in the database. Codec-specific logic in the database layer
is awkward and I'd like to avoid it. I did a similar schema change to
add the rfc6381_codec.

I also adjusted ui-src/lib/models/Recording.js in a few ways:

* fixed a couple mismatches between its field name and the key defined
  in the API. Consistency aids understanding.
* dropped all the getters in favor of just setting the fields (with
  type annotations) as described here:
  https://google.github.io/styleguide/jsguide.html#features-classes-fields
* where the wire format used undefined (to save space), translate it to
  a more natural null or false.
2020-03-13 21:41:02 -07:00
Scott Lamb
317a620e6e upgrade copyright notices
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
  every file rather than whoever created that file. Have one AUTHORS
  file listing everyone.
* Consistently call it a "security camera network video recorder" rather
  than "security camera digital video recorder".
2020-03-01 22:53:41 -08:00
Scott Lamb
92266612b5 switch to websocket for live stream (#59)
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
2020-02-29 14:39:16 -08:00
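
As a rough companion to 92266612b5, the following is a sketch of feeding one WebSocket per stream into a SourceBuffer. The endpoint path and message framing are assumptions (the real endpoint may wrap each segment in a small header); this is not the actual UI code.

```typescript
// Sketch only: treats each binary WebSocket message as a bare .m4s media
// segment and appends it in arrival order. One socket per stream sidesteps
// Chrome's six-connections-per-host limit that multipart hanging GETs hit.
function watchLive(video: HTMLVideoElement, wsUrl: string, mimeType: string,
                   initSegment: ArrayBuffer) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  mediaSource.addEventListener("sourceopen", () => {
    const buf = mediaSource.addSourceBuffer(mimeType);
    buf.mode = "sequence"; // live view only: play segments as they arrive

    const queue: ArrayBuffer[] = [initSegment];
    const pump = () => {
      if (!buf.updating && queue.length > 0) buf.appendBuffer(queue.shift()!);
    };
    buf.addEventListener("updateend", pump);

    const ws = new WebSocket(wsUrl); // live-stream endpoint path assumed
    ws.binaryType = "arraybuffer";
    ws.onmessage = (ev) => { queue.push(ev.data as ArrayBuffer); pump(); };
    pump();
  }, { once: true });
}
```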
Scott Lamb
73f7cdd261 use application/json for login and logout 2020-01-09 16:24:03 -08:00
Scott Lamb
d61b5e1bdd Use fixed-size directory meta files
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.

Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
2019-07-04 23:30:37 -05:00
Scott Lamb
a9f64798d6 store full rtsp urls
My dad's "GW-GW4089IP" cameras use separate ports for the main and sub
streams:

rtsp://192.168.1.110:5050/H264?channel=0&subtype=0&unicast=true&proto=Onvif
rtsp://192.168.1.110:5049/H264?channel=0&subtype=1&unicast=true&proto=Onvif

Previously I could get one of the streams to work by including :5050 or
:5049 in the host field of the camera. But not both. Now make the
camera's host field reflect the ONVIF port (which is also non-standard
on these cameras, :85). It's not directly used yet but probably will be
sooner or later. Make each stream know its full URL.
2019-06-30 23:54:52 -05:00
Scott Lamb
644ea4e6ea expose signal id in api
...and update api.md, which described a format matching neither the old
nor the new behavior.
2019-06-20 12:10:23 -07:00
Scott Lamb
fda7e4ca2b add concept of user/session permissions
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)

This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:

* non-administrator users who can view but not access camera passwords
  or change any state
* workers that update signal state based on cameras' built-in motion
  detection or a security system's events but don't need to view videos
* control over what can be done without authenticating

Currently session permissions are just copied from user permissions, but
you can also imagine admin sessions vs not, as a checkbox when signing
in. This would match the standard Unix workflow of using a
non-administrative session most of the time.

Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).
2019-06-19 15:34:20 -07:00
Scott Lamb
7dd98bb76a db crate support for updating signals (#28)
This is a definite work in progress. In particular,

* there's no src/web.rs support yet so it can't be used,
* the code is surprisingly complex, and there are almost no tests so far.
  I want to at least get complete branch coverage.
* I may still go back to time_sec rather than time_90k to save RAM and
  flash.

I simplified the approach a bit from the earlier goal in design/api.md.
In particular, there's no longer the separate concept of "observation"
vs "prediction". Now the predictions are just observations that extend a
bit beyond now. They may be flushed prematurely and I'll try living with
that to avoid making things even more complex.
2019-06-13 22:25:55 -07:00
Scott Lamb
d232ca55fa document proposed API for updating signals (#28) 2019-06-07 10:19:38 -07:00
Scott Lamb
6f2c63ffac read-only signals support (#28)
This is mostly untested and useless by itself, but it's a starting
point. In particular:

* there's no way to set up signals or add/remove/update events yet
  except by manual changes to the database.
* if you associate a signal with a camera then remove the camera,
  hitting /api/ will error out.
2019-06-06 16:20:44 -07:00
Scott Lamb
3ba3bf2b18 backend support for live stream (#59)
This is so far completely untested, for use by a new UI prototype.

It creates a new URL endpoint which sends one video/mp4 media segment
per key frame, with the dependent frames included. This means there will
be about one key frame interval of latency (typically about a second).
This seems hard to avoid, as mentioned in issue #59.
2019-01-21 15:58:52 -08:00
Scott Lamb
eb8a51aecb add a url for getting debug info about a .mp4 file
and add a unit test of path decoding along the way
2018-12-29 13:09:16 -06:00
Scott Lamb
422cd2a75e preliminary web support for auth (#26)
Some caveats:

  * it doesn't record the peer IP yet, which makes it harder to verify
    sessions are valid. This is a little annoying to do in hyper now
    (see hyperium/hyper#1410). The direct peer might not be what we want
    right now anyway because there's no TLS support yet (see #27).  In
    the meantime, the sane way to expose Moonfire NVR to the Internet is
    via a proxy server, and recording the proxy's IP is not useful.
    Maybe better to interpret an RFC 7239 Forwarded header (and/or
    the older X-Forwarded-{For,Proto} headers).

  * it doesn't ever use Secure (https-only) cookies, for a similar reason.
    It's not safe to use even with a tls proxy until this is fixed.

  * there's no "moonfire-nvr config" support for inspecting/invalidating
    sessions yet.

  * in debug builds, logging in is crazy slow. See libpasta/libpasta#9.

Some notes:

  * I removed the Javascript "no-use-before-defined" lint, as some of
    the functions form a cycle.

  * Fixed #20 along the way. I needed to add support for properly
    returning non-OK HTTP statuses to signal unauthorized and such.

  * I removed the Access-Control-Allow-Origin header support, which was
    at odds with the "SameSite=lax" in the cookie header. The "yarn
    start" method for running a local proxy server accomplishes the same
    thing as the Access-Control-Allow-Origin support in a more secure
    manner.
2018-11-27 11:08:33 -08:00
Scott Lamb
5bba71345c few small markdown tweaks 2018-08-24 21:04:13 -07:00
Scott Lamb
c5345c1e11 simplify and fix installation instructions
* install.md, install-manual.md, and easy-install.md had a lot of
  redundancy. Rework them so the common prefix and suffix are in
  install.md and it's clear when to navigate back and forth. This
  removes some very stale references to prep.sh and cameras.sql in
  install-manual.md (which never should have mentioned these scripts
  anyway).

* remove all the SAMPLE_MEDIA_DIR, SAMPLE_FILE_DIR, and
  SAMPLE_FILE_PATH stuff from the scripts. This was too complicated
  (one variable will suffice) and inconsistent in terminology (a
  couple "samples dir" occurrences slipped through review; they
  should have been "sample file dir"). It also wasn't really useful
  enough because the procedure for a mount point is manual anyway,
  and because some installs will have multiple sample file dirs
  anyway.

* in the mount point procedure, fix the paths to be consistent. Also
  describe the "nofail" and "Requires=" config I have on my machine.

* fix some incorrect info about how to use "moonfire-nvr config" and
  describe "flush_if_sec".
2018-08-24 20:45:46 -07:00
Scott Lamb
65e68d3255 update design docs for new-schema branch changes 2018-03-24 20:51:30 -07:00
Scott Lamb
dfee66c84b support additional recording_integrity timestamps
These are not actually populated by the code yet. I'm trying to get the
v3 schema frozen as soon as possible; actually using the fields can come
later.

Add some explanation of their value in time.md, along with some general
musing on leap seconds, and a correction on the frequency error of my cameras.
2018-03-21 22:32:41 -07:00
Scott Lamb
88051a1188 adjust startup timings again
I forgot to drop the cache before grabbing the numbers earlier today.
2018-03-20 22:37:45 -07:00
Scott Lamb
bdf52d743b adjust some timings in schema.md
The new numbers are taken from my odroid setup. In particular, the size check
is noticeably slower than what I'd gathered before, enough to show that it
shouldn't be performed on startup.
2018-03-20 08:46:48 -07:00
Scott Lamb
b78ffc3808 view in-progress recordings!
The time from recorded to viewable was previously 60-120 sec for the first
recording of a RTSP session, 0-60 sec otherwise. Now it's one frame.
2018-03-02 15:40:32 -08:00
Scott Lamb
45f7b30619 allow listing and viewing uncommitted recordings
There may be considerable lag between being fully written and being committed
when using the flush_if_sec feature. Additionally, this is a step toward
listing and viewing recordings before they're fully written. That's a
considerable delay: 60 to 120 seconds for the first recording of a run,
0 to 60 seconds for subsequent recordings.

These recordings aren't yet included in the information returned by
/api/?days=true. They probably should be, but small steps.
2018-03-02 11:38:11 -08:00
Scott Lamb
dc402bdc01 schema version 2: support sub streams
This allows each camera to have a main and a sub stream. Previously there was
a field in the schema for the sub stream's url, but it didn't do anything. Now
you can configure individual retention for main and sub streams. They show up
grouped in the UI.

No support for upgrading from schema version 1 yet.
2018-02-03 22:15:54 -08:00
Scott Lamb
315f3594c2 add a basic Javascript UI
The Javascript is pretty amateurish, I'm sure, but at least it's something to
iterate from. It's already much more pleasant for browsing through videos in
several ways:

* more responsive to load only a day at a time rather than 90+ days
* much easier to see the same time segment on several cameras
* more pleasant to have the videos load as a popup rather than a link
  that blows away your position in an enormous list
* exposes the fancier .mp4 generation options: splitting at lengths
  other than the default, trimming to an arbitrary start and end time,
  including a subtitle track with timestamps.

There's a slight regression in functionality: I didn't match the former
top-level page, which showed how much of its disk allocation each camera
used and the total duration of video. This is exposed in the JSON API, so
it shouldn't be too hard to add back.
2017-10-21 21:54:27 -07:00
Scott Lamb
6eda26a9cc support run splitting in json api 2017-10-17 09:00:05 -07:00
Scott Lamb
1e4d7d5ad9 make json api more idiomatic
* camelCase
* lose the "days":null in the overall cameras dict
2017-10-09 21:58:44 -07:00
Scott Lamb
7673a00bd9 serve 'video/mp4; codecs="avc1.xxxxxx"' mime type
This can be used when constructing an HTML5 SourceBuffer.
2017-10-03 23:25:58 -07:00
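
A small sketch of how a client might use that mime type; the codec string below is illustrative, as the server computes the real one per video sample entry.

```typescript
// Sketch only: check the exact type before constructing a SourceBuffer with it.
const mimeType = 'video/mp4; codecs="avc1.640028"'; // illustrative codec string
if (!MediaSource.isTypeSupported(mimeType)) {
  throw new Error(`this browser can't play ${mimeType}`);
}
const mediaSource = new MediaSource();
mediaSource.addEventListener("sourceopen", () => {
  mediaSource.addSourceBuffer(mimeType); // same string the server returned
}, { once: true });
```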
Scott Lamb
04e9f3f160 support segmented mp4s
This is intended to support HTML5 Media Source Extensions, which I expect to
be the most practical way to make a good web UI with a proper scrub bar and
such.

This feature has had very limited testing on Chrome and Firefox, and that was
not entirely successful. More work is needed before it's usable, but this
seems like a helpful progress checkpoint.
2017-10-01 15:29:22 -07:00
Scott Lamb
063708c9ab try again to fix time.md diagram
This time, I've given up on svg and am using png. The inline svg seems to be
totally stripped out by github's markdown->html conversion, and img links
don't work because .svg files are served with an incorrect Content-Type.
2016-12-26 21:41:19 -08:00
Scott Lamb
8ee44efcf2 try to fix some time.md formatting 2016-12-26 21:39:00 -08:00
Scott Lamb
f8f7c755ff attempt to fix svg linking 2016-12-26 21:00:42 -08:00
Scott Lamb
5a6cd4e590 new design doc describing approach to time
This is more sophisticated than the current implementation. It's an attempt
to address the problems created by the 9 seconds/day of drift I'm seeing for
long-running streams.
2016-12-26 20:55:43 -08:00
Scott Lamb
eee887b9a6 schema version 1
The advantages of the new schema are:

* overlapping recordings can be unambiguously described and viewed.
  This is a significant problem right now; the clock on my cameras appears to
  run faster than the (NTP-synchronized) clock on my NVR. Thus, if an
  RTSP session drops and is quickly reconnected, there's likely to be
  overlap.

* less I/O is required to view mp4s when there are multiple cameras.
  This is a pretty dramatic difference in the number of database read
  syscalls with pragma page_size = 1024 (605 -> 39 in one test),
  although I'm not sure how much of that maps to actual I/O wait time.
  That's probably as dramatic as it is due to overflow page chaining.
  But even with larger page sizes, there's an improvement. It helps to
  stop interleaving the video_index fields from different cameras.

There are changes to the JSON API to take advantage of this, described
in design/api.md.

There's an upgrade procedure, described in guide/schema.md.
2016-12-20 22:08:18 -08:00
Scott Lamb
86dd36d7a5 version the sqlite3 database schema
See guide/schema.md for instructions on upgrading past this commit.
2016-12-20 15:44:04 -08:00
Scott Lamb
d083797e42 Coalesce adjacent recordings for efficiency 2016-05-10 17:37:53 -07:00
Scott Lamb
b27df92cac {start,end}_time_usec should be ..._time_90k 2016-05-10 17:10:42 -07:00
Scott Lamb
3aac88aa35 Fixes to design doc markdown. 2016-05-03 05:20:23 -07:00