When compiled with cargo build --features=analytics and enabled via
moonfire-nvr run --object-detection, this runs object detection on every
sub stream frame through an Edge TPU (a Coral USB accelerator) and logs
the result.
This is a very small step toward a working system. It doesn't yet record
the result in the database or send it out on the live stream, and it
doesn't support running object detection at a lower frame rate than the
sub streams arrive at. To address those problems, I need to do some
refactoring. Currently moonfire_db::writer::Writer::write is the only
place that knows the duration of the frame it's about to flush, before it
gets added to the index or sent out on the live stream. I don't want to
do the detection from there; I'd prefer the moonfire_nvr crate. So I
either need to introduce an analytics callback or move a bunch of that
logic to the other crate.
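If the callback route wins, it might look roughly like the sketch below.
The FrameInfo struct, the FrameAnalytics trait, and the hook point are all
hypothetical names for illustration, not existing moonfire_db API:

    // Hypothetical sketch of an analytics callback that moonfire_nvr could
    // register with moonfire_db, so detection runs at the one point where
    // the frame's duration is already known.

    /// A frame handed to the analytics hook just before it's indexed.
    pub struct FrameInfo<'a> {
        pub stream_id: i32,
        pub data: &'a [u8],    // encoded frame bytes
        pub duration_90k: i32, // duration in 90 kHz units, known at flush time
        pub is_key: bool,
    }

    /// Implemented in the moonfire_nvr crate, e.g. by the Edge TPU detector.
    pub trait FrameAnalytics: Send + Sync {
        fn process(&self, frame: &FrameInfo);
    }

    /// Roughly where moonfire_db's writer would invoke it.
    fn flush_frame(analytics: Option<&dyn FrameAnalytics>, frame: &FrameInfo) {
        if let Some(a) = analytics {
            a.process(frame); // could also skip frames here to run at a lower rate
        }
        // ...then add to the index / send to the live stream as before...
    }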
Once I do that, I need to add database support (although I have some
experiments for that in moonfire-playground) and API support, then some
kind of useful frontend.
Note: edgetpu.tflite is taken from the Apache 2.0-licensed
https://github.com/google-coral/edgetpu repository
(test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite). The
following page says it's fine to include Apache 2.0 material in GPLv3
projects:
https://www.apache.org/licenses/GPL-compatibility.html
* As discussed in #48, say "The Moonfire NVR Authors" at the top of
every file rather than whoever created that file. Have one AUTHORS
file listing everyone.
* Consistently call it a "security camera network video recorder" rather
than "security camera digital video recorder".
The codec -> codecpar move was sufficiently long ago (libavformat
57.5.0 on 2016-04-11) that I think we can just get away with requiring
the new version. Let's try it.
But if someone complains, AVCodecParameters and AVCodecContext look
sufficiently similar that we could probably just use one or the other
based on the version we're compiling with.
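If we ever do need to support both, a build-script probe could set a cfg
flag to choose at compile time. This is only a sketch of that fallback
(the repository doesn't currently do this), assuming the pkg-config crate
and a hypothetical ffmpeg_has_codecpar cfg:

    // build.rs sketch: probe the installed libavformat and expose a cfg flag
    // so Rust code can pick the codecpar path or the old codec path.
    fn main() {
        // Link against whatever libavformat pkg-config finds.
        pkg_config::probe_library("libavformat").unwrap();

        // codecpar appeared in libavformat 57.5.0 (2016-04-11).
        let has_codecpar = pkg_config::Config::new()
            .cargo_metadata(false)
            .atleast_version("57.5.0")
            .probe("libavformat")
            .is_ok();
        if has_codecpar {
            println!("cargo:rustc-cfg=ffmpeg_has_codecpar");
        }
    }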
This addresses a deprecation warning on nightly (which will reach stable
in Rust 1.38). Use parking_lot instead, which in theory is faster
(although I doubt the difference is significant here).
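The swap is mostly mechanical. A minimal illustration (not actual
Moonfire NVR code):

    use parking_lot::Mutex;

    fn main() {
        let m = Mutex::new(0u64);
        // parking_lot's lock() returns the guard directly rather than a
        // Result, so the .unwrap() needed with std::sync::Mutex goes away.
        *m.lock() += 1;
        assert_eq!(*m.lock(), 1);
    }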
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.
Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
This is mostly just "cargo fix --edition" + Cargo.toml changes.
There's one fix for upgrading to NLL in db/writer.rs:
Writer::previously_opened wouldn't build with NLL because of a
double-borrow that the previous borrow checker somehow didn't catch.
Restructure to avoid it.
I'll put elective NLL changes in a following commit.
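To illustrate the general shape of that kind of double-borrow and the
restructure (a made-up example, not the actual Writer code):

    struct Writer { uuids: Vec<u8>, flushes: usize }

    impl Writer {
        fn bump(&mut self) { self.flushes += 1; }

        fn flush_last(&mut self) {
            // Rejected form: holding a borrow of self.uuids across a &mut
            // self call makes the shared and mutable borrows overlap (E0502):
            //     let last = self.uuids.last().unwrap();
            //     self.bump();
            //     println!("last byte: {}", last);
            //
            // Restructured form: copy the value out first, so no borrow is
            // held across the call.
            let last = *self.uuids.last().unwrap();
            self.bump();
            println!("last byte: {}", last);
        }
    }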
I think this is an ffmpeg bug, which I plan to report. In the meantime, this
makes the tests pass. Long-term, even if ffmpeg fixes this, I probably don't
want to continue doing acceptance tests against whatever version of ffmpeg
happens to be installed - my real targets of interest are the latest versions
of Chrome, Firefox, Safari, QuickTime, and VLC.
This significantly improves safety of the ffmpeg interface. The complex
ABIs aren't accessed directly from Rust. Instead, I have a small C
wrapper which uses the ffmpeg C API and the C headers at compile-time to
determine the proper ABI in the same way any C program using ffmpeg
would, so that the ABI doesn't have to be duplicated in Rust code.
I've tested with ffmpeg 2.x and ffmpeg 3.x; it seems to work properly
with both where before ffmpeg 3.x caused segfaults.
It still depends on ABI compatibility between the compiled and running
versions. C programs need this, too, and normal shared library versioning
practices provide this guarantee. But log both versions on startup to
help diagnose any problems with this.
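A sketch of that startup logging: avformat_version() is the real
libavformat call, but moonfire_ffmpeg_compiled_libavformat_version is a
hypothetical name for a symbol the C wrapper would export by baking in
LIBAVFORMAT_VERSION_INT.

    use std::os::raw::c_uint;
    use log::{info, warn};

    extern "C" {
        fn avformat_version() -> c_uint;
        fn moonfire_ffmpeg_compiled_libavformat_version() -> c_uint;
    }

    fn log_ffmpeg_versions() {
        // Versions pack major/minor/micro as (major << 16) | (minor << 8) | micro.
        let (compiled, running) = unsafe {
            (moonfire_ffmpeg_compiled_libavformat_version(), avformat_version())
        };
        info!("libavformat: compiled with {:#x}, running against {:#x}",
              compiled, running);
        if compiled >> 16 != running >> 16 {
            warn!("compiled and running libavformat major versions differ; \
                   expect ABI trouble");
        }
    }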
Fixes #7
This test is copied from the C++ implementation. It ensures the timestamps are
calculated accurately from the pts rather than using ffmpeg's estimated
duration. The Rust implementation was doing the easy-but-inaccurate thing, so
fix that to make the test pass.
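The accurate approach, roughly: a frame's duration is the gap between its
pts and the next frame's pts, so nothing is finalized for a frame until
its successor arrives. An illustrative (not actual) version:

    // Illustrative only: derive each frame's duration from successive pts
    // values instead of trusting the demuxer's duration estimate.
    struct PendingFrame {
        pts_90k: i64,
        data: Vec<u8>,
    }

    #[derive(Default)]
    struct DurationCalc {
        prev: Option<PendingFrame>,
    }

    impl DurationCalc {
        /// Returns the previous frame along with its now-known duration.
        fn push(&mut self, frame: PendingFrame) -> Option<(PendingFrame, i64)> {
            let out = self.prev.take().map(|p| {
                let duration_90k = frame.pts_90k - p.pts_90k;
                (p, duration_90k)
            });
            self.prev = Some(frame);
            out
        }
    }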
Additionally, I did this with a code structure that should ensure the Rust
code never drops a Writer without indicating to the syncer that its uuid is
abandoned. Such a bug essentially leaks the partially-written file, although a
restart would cause it to be properly unlinked and marked as such. There are
no tests (yet) that exercise this scenario, though.
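The shape of that guarantee is roughly a Drop impl; the field and message
names below are illustrative, not the actual moonfire db types:

    use std::sync::mpsc::Sender;
    use uuid::Uuid;

    enum SyncerMsg {
        Abandon(Uuid),
    }

    struct Writer {
        uuid: Uuid,
        to_syncer: Sender<SyncerMsg>,
        closed: bool,
    }

    impl Writer {
        fn close(mut self) {
            // ...flush the sample file and record it normally...
            self.closed = true;
        }
    }

    impl Drop for Writer {
        fn drop(&mut self) {
            if !self.closed {
                // Best effort: the syncer may already be gone during shutdown.
                let _ = self.to_syncer.send(SyncerMsg::Abandon(self.uuid));
            }
        }
    }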
I should have submitted/pushed more incrementally but just played with it on
my computer as I was learning the language. The new Rust version more or less
matches the functionality of the current C++ version, although there are many
caveats listed below.
Upgrade notes: when moving from the C++ version, I recommend dropping and
recreating the "recording_cover" index in SQLite3 to pick up the addition of
the "video_sync_samples" column:
$ sudo systemctl stop moonfire-nvr
$ sudo -u moonfire-nvr sqlite3 /var/lib/moonfire-nvr/db/db
sqlite> drop index recording_cover;
sqlite> create index ...rest of command as in schema.sql...;
sqlite> ^D
Some known visible differences from the C++ version:
* .mp4 generation queries SQLite3 differently. Before, it would just get
all video indexes in a single query. Now it leads with a query that
should be satisfiable by the covering index (assuming the index has been
recreated as noted above), then queries individual recordings' indexes as
needed to fill an LRU cache; see the sketch after this list. I believe
this is roughly the same speed for the initial hit (which generates the
moov part of the file) and significantly faster when seeking. I would
have done it a while ago in the C++ version but didn't want to track down
an LRU cache library; one was easier to find in Rust.
* On startup, the Rust version cleans up old reserved files. This is as in the
design; the C++ version was just missing this code.
* The .html recording list output is a little different. It's in ascending
order, so the segment shorter than an hour is the most recent rather than
the oldest. This is less ergonomic, but it was easy. I could fix it or
just wait to obsolete it with some fancier JavaScript UI.
* Command-line argument parsing and logging have changed formats due to
different underlying libraries.
* The JSON output doesn't quite match the spec / C++ implementation yet.
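The two-phase .mp4 query mentioned above has roughly the following shape
(names simplified; assumes a recent version of the lru crate, which the
real code may not use):

    use std::num::NonZeroUsize;
    use lru::LruCache;

    /// Caches each recording's serialized video index, keyed by recording id.
    struct IndexCache {
        cache: LruCache<i64, Vec<u8>>,
    }

    impl IndexCache {
        fn new(capacity: usize) -> Self {
            IndexCache { cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()) }
        }

        /// Returns the recording's video index, querying the database (via
        /// `fetch`) only on a cache miss.
        fn get(&mut self, id: i64, fetch: impl FnOnce(i64) -> Vec<u8>) -> &[u8] {
            if !self.cache.contains(&id) {
                let index = fetch(id); // e.g. SELECT the index blob for this id
                self.cache.put(id, index);
            }
            self.cache.get(&id).unwrap()
        }
    }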
Additional caveats:
* I haven't done any proof-reading of prep.sh + install instructions.
* There's a lot of code quality work to do: adding (back) comments and test
coverage, developing a good Rust style.
* The ffmpeg foreign function interface is particularly sketchy. I'd
eventually like to switch to something based on autogenerated bindings.
I'd also like to use pure Rust code where practical, but once I do on-NVR
motion detection I'll need to use existing C/C++ libraries for speed (H.264
decoding + OpenCL-based analysis).