* in markdown files, use code fences rather than indented blocks.
Fences are harder to get wrong (one of the indented blocks was off by
a space, so it didn't render properly) and allow info strings.
* uniformly use "useradd" to create the user and group in all three
places (install-manual.md, script-functions.sh, Dockerfile) rather
than addgroup + adduser; see the sketch after this list. Create a full
home dir, which I suspect was the problem in #67. Don't allow
customizing the group name; it's always the same as the user.
* install the sqlite3 package so that the "moonfire-nvr sql" command
works properly.
* remove "setup_db" function, which was out of place. Since the
creation of the "moonfire-nvr init" command, this has to happen
after installation of the binary. install.md gives instructions on
this part anyway so remove it from the script.
* give a proper command to create the db dir, as shown below. It was
being created within the current directory, not within
/var/lib/moonfire-nvr. Don't bother creating the sample directory;
"moonfire-nvr config" will do this.
* when setting owners on a newly created directory, use a single
"install -d" command rather than "mkdir" + "chown".
* address confusion about whether sample file dirs need to be
precreated. (Only when Moonfire NVR doesn't have write permissions
on the parent.)
* always just install the packaged version of ffmpeg rather than
building our own. This has been usable since Debian/Raspbian 9
Stretch; Debian/Raspbian 10 Buster is out now so there's no excuse
for still running Debian/Raspbian 8 Jessie.
* don't chown the UI directory; it can be owned by root as with
the binary.
* in scripts/install.sh, don't enable/start the service yet. It hasn't
been configured.
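Roughly, the user/group, db dir, and package bullets above boil down
to commands like these (a sketch; package names are Debian/Raspbian's,
and the actual scripts may differ in details):

# create the user and a same-named group, with a real home dir
useradd --user-group --create-home --home-dir /var/lib/moonfire-nvr moonfire-nvr

# create the db dir with the right owner in one step (no mkdir + chown)
install -d -o moonfire-nvr -g moonfire-nvr /var/lib/moonfire-nvr/db

# install packaged dependencies rather than building ffmpeg from source
apt-get install ffmpeg sqlite3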
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on
a completely full disk.
Newer SQLite library versions (such as what you get when using
--features=bundled) actually enforce foreign keys. Unfortunately there's
no way to drop foreign key constraints, so you have to transitively
recreate all the tables with foreign key constraints on the table you're
recreating.
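For example, rebuilding one table to drop a foreign key looks roughly
like this (hypothetical table names; any table whose own foreign keys
reference the rebuilt table then needs the same treatment,
transitively):

sqlite3 /var/lib/moonfire-nvr/db/db <<'EOF'
BEGIN;
-- recreate "child" without its foreign key on "parent"
CREATE TABLE child2 (id INTEGER PRIMARY KEY, parent_id INTEGER NOT NULL);
INSERT INTO child2 SELECT id, parent_id FROM child;
DROP TABLE child;
ALTER TABLE child2 RENAME TO child;
COMMIT;
EOF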
My dad's "GW-GW4089IP" cameras use separate ports for the main and sub
streams:
rtsp://192.168.1.110:5050/H264?channel=0&subtype=0&unicast=true&proto=Onvif
rtsp://192.168.1.110:5049/H264?channel=0&subtype=1&unicast=true&proto=Onvif
Previously I could get one of the streams to work by including :5050
or :5049 in the camera's host field, but not both. Now make the host
field reflect the ONVIF port (:85, which is also non-standard on these
cameras). It's not directly used yet but probably will be sooner or
later. Make each stream know its full URL.
This delegates to the "sqlite3" CLI but has a couple benefits over using
sqlite3 directly:
* safer because it does the same locking as other moonfire-nvr invocations
* more convenient because it takes the same argument format as other
moonfire-nvr subcommands:
* --db-dir rather than full path including /db suffix
* has the --db-dir default value
* --read-only rather than file:...?mode=ro
Use like "moonfire-nvr sql" or "moonfire-nvr sql --read-only".
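For comparison, assuming the default --db-dir of
/var/lib/moonfire-nvr/db, these two should behave about the same,
except that only the first takes the lock:

# same locking and argument handling as other moonfire-nvr subcommands
moonfire-nvr sql --read-only

# the raw equivalent
sqlite3 'file:/var/lib/moonfire-nvr/db/db?mode=ro'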
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)
This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:
* non-administrator users who can view video but can't access camera
passwords or change any state
* workers that update signal state based on cameras' built-in motion
detection or a security system's events but don't need to view videos
* control over what can be done without authenticating
Currently session permissions are just copied from user permissions,
but you could also imagine admin vs. non-admin sessions, chosen via a
checkbox when signing in. This would match the standard Unix workflow
of using a non-administrative session most of the time.
Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).
This is a definite work in progress. In particular,
* there's no src/web.rs support yet so it can't be used,
* the code is surprisingly complex, and there are almost no tests so
far. I want to at least get complete branch coverage.
* I may still go back to time_sec rather than time_90k to save RAM and
flash.
I simplified the approach a bit from the earlier goal in design/api.md.
In particular, there's no longer the separate concept of "observation"
vs "prediction". Now the predictions are just observations that extend a
bit beyond now. They may be flushed prematurely and I'll try living with
that to avoid making things even more complex.
This is mostly untested and useless by itself, but it's a starting
point. In particular:
* there's no way to set up signals or add/remove/update events yet
except by manual changes to the database.
* if you associate a signal with a camera then remove the camera,
hitting /api/ will error out.
travis-ci pointed out that the dependency bump broke 1.31:
Compiling docopt v1.1.0
error[E0658]: imports can only refer to extern crate names passed with `--extern` on stable channel (see issue #53130)
--> /home/travis/.cargo/registry/src/github.com-1ecc6299db9ec823/docopt-1.1.0/src/parse.rs:48:5
|
48 | use regex;
| ^^^^^
|
Looks like uniform_paths was stabilized in 1.32, and I verified
locally that 1.32 builds.
Looks like a bug got introduced with the great UI rewrite: when you
add a (start or end) time constraint and then remove it, the change
isn't reflected. Within CalendarTSRange, null meant "keep the existing
value", and || was used to check for null. Because JavaScript's ||
treats the empty string as falsy, clearing a field kept the existing
value instead of removing the constraint as it should. This was
unnecessarily clever; stop doing that.
Also keep the console logging in the deployed config; it's harmless and
eases debugging.
The 091217b workaround of telling ffmpeg to only request the video
stream works perfectly fine for now. I'll revisit when adding audio
support (#34).
Fixes #36
My installation recently somehow ended up with a recording with a
duration of 503793844 90,000ths of a second (about 93 minutes), way
over the maximum of 5 minutes. (Looks like the machine was pretty
unresponsive at the time and/or having network problems.)
When this happens, the system really spirals. Every flush afterward (12
per minute with my installation) fails with a CHECK constraint failure
on the recording table. It never gives up on that recording. /var/log
fills pretty quickly as this failure is extremely verbose (a stack
trace, and a line for each byte of video_index). Eventually the sample
file dirs fill up too as it continues writing video samples while GC is
stuck. The video samples are useless anyway; given that they're not
referenced in the database, they'll be deleted on next startup.
This ensures the offending recording is never added to the database, so
we don't get the same persistent problem. Instead, writing to the
recording will fail. The stream will drop and be retried. If the
underlying condition that caused a too-long recording (many
non-key-frames, or the camera returning a crazy duration, or the
monotonic clock jumping forward extremely, or something) has gone away,
the system should recover.
This is so far completely untested, for use by a new UI prototype.
It creates a new URL endpoint which sends one video/mp4 media segment
per key frame, with the dependent frames included. This means there will
be about one key frame interval of latency (typically about a second).
This seems hard to avoid, as mentioned in issue #59.
Use version 1 of the mvhd, tkhd, and mdhd boxes to support 64-bit
durations. 2^32 units / 90,000 units/sec / 60 sec/min / 60 min/hr ~=
13.25 hrs.
Compatibility: looks like Chrome, Firefox, VLC, and ffmpeg all support
version 1 with no problem.
I went with the third idea in 1ce52e3: have the tests run each iteration
of the syncer explicitly. These are messy tests that know tons of
internal details, but I think they're less confusing and racy than if I
had the syncer running in a separate thread.
Now each syncer has a binary heap of the times it plans to do a flush.
When one of those times arrives, it rechecks if there's something to do.
Seems more straightforward than rechecking each stream's first
uncommitted recording, especially with the logic to retry failed flushes
every minute.
Also improved the info! log for each flush to show the actual
recordings being flushed, for better debuggability.
No new tests right now. :-( They're tricky to write. One problem is that
it's hard to get the timing right: a different flush has to happen
after Syncer::save's database operations and before Syncer::run calls
SimulatedClocks::recv_timeout with an empty channel[*], advancing the
time. I've thought of a few ways of doing this:
* adding a new SyncerCommand to run something, but it's messy (have
to add it from the mock of one of the actions done by the save),
and Box<dyn FnOnce() + 'static> not working (see
rust-lang/rust#28796) makes it especially annoying.
* replacing SimulatedClocks with something more like MockClocks.
Lots of boilerplate. Maybe I need to find a good general-purpose
Rust mock library. (mockers sounds good but I want something that
works on stable Rust.)
* bypassing the Syncer::run loop, instead manually running iterations
from the test.
Maybe the last way is the best for now. I'm likely to try it soon.
[*] actually, it's calling Receiver::recv_timeout directly;
Clocks::recv_timeout is dead code now? oops.
This no longer requires installing ffmpeg manually, so there should be
significantly less data to cache (faster runs). The build step itself
should also be faster when the cache is unavailable/stale.
Also sneak in a change from the "pkg-config" package to "pkgconf" in
the scripts and Travis CI. They didn't match the manual instructions;
make them all consistent. Both seem to work fine, but I gather pkgconf
is the newer thing; its roadmap (below) notes that distros are moving
toward it.
https://github.com/pkgconf/pkgconf/wiki/Roadmap
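Concretely, the now-consistent install line is something like
(assuming apt):

sudo apt-get install pkgconf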
Fixes #46. If there are no video_sample_entries, it returns
InvalidArgument, which gets mapped to an HTTP 400. Various other
failures turn into non-500s as well.
There are many places that can & should be using typed errors, but it's
a start.