Commit Graph

52 Commits

Author SHA1 Message Date
Scott Lamb
45f7b30619 allow listing and viewing uncommitted recordings
There may be considerable lag between being fully written and being committed
when using the flush_if_sec feature. Additionally, this is a step toward
listing and viewing recordings before they're fully written. Waiting until
they're fully written is itself a considerable delay: 60 to 120 seconds for the
first recording of a run, 0 to 60 seconds for subsequent recordings.

These recordings aren't yet included in the information returned by
/api/?days=true. They probably should be, but small steps.
2018-03-02 11:38:11 -08:00
Scott Lamb
b17761e871 move list_recordings_by_* logic into raw.rs
I want to start having the db.rs version augment this with the uncommitted
recordings, and it's nice to have the separation of the raw db vs augmented
versions. Also, this fits with the general theme of shrinking db.rs a bit.

I had to put the raw video_sample_entry_id into the rows rather than
the video_sample_entry Arc. In hindsight, this is better anyway: the common
callers don't need to do the btree lookup and arc clone on every row. I think
I'd originally done it that way only because I was quite new to Rust and
didn't understand that db could be used from within the row callback, given
that both borrows are immutable.
2018-03-01 20:59:05 -08:00
Scott Lamb
fb4d88d3e2 make db::dir::Writer equally stubborn
Every recording it starts must be sent to the syncer with at least one sample
written. It will retry forever (panicking only if the channel is down). This
avoids a situation in which a stuck writer prevents something in the uncommitted
VecDeque from ever being synced, and thus any further recordings from being
flushed.
2018-02-28 12:32:52 -08:00
Scott Lamb
843e1b49c8 take FnMut closures by reference
I mistakenly thought these had to be monomorphized. (The FnOnce still
does, until rust-lang/rfcs#1909 is implemented.) Turns out this way works
fine. It should result in less compile time / code size, though I didn't check
this.
2018-02-23 09:19:42 -08:00
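
A minimal sketch of the idea in the commit above: an FnMut taken as
`&mut dyn FnMut(...)` is dispatched dynamically, so the callee is compiled once
rather than being monomorphized per closure type. Names here are illustrative,
not Moonfire NVR's actual signatures.

```rust
// Illustrative only: the callee takes the closure by reference (dynamic
// dispatch), so it is compiled once no matter how many closure types callers
// pass in.
fn for_each_row(f: &mut dyn FnMut(i32)) {
    for i in 0..3 {
        f(i);
    }
}

fn main() {
    let mut sum = 0;
    for_each_row(&mut |i| sum += i);
    assert_eq!(sum, 0 + 1 + 2);
}
```
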
Scott Lamb
b037c9bdd7 knob to reduce db commits (SSD write cycles)
This improves the practicality of having many streams (including the doubling
of streams by having main + sub streams for each camera). With these tuned
properly, extra streams don't cause any extra write cycles in normal or error
cases. Consider the worst case in which each RTSP session immediately sends a
single frame and then fails. Moonfire retries every second, so this would
formerly cause one commit per second per stream. (flush_if_sec=0 preserves
this behavior.) Now the commits can be arbitrarily infrequent by setting
higher values of flush_if_sec.

WARNING: this isn't production-ready! I hacked up dir.rs to make tests pass
and "moonfire-nvr run" work in the best-case scenario, but it doesn't handle
errors gracefully. I've been debating what to do when writing a recording
fails. I considered "abandoning" the recording, then either reusing or skipping
its id (in the latter case, marking the file as garbage if it can't be
unlinked immediately). I now think there's no point in abandoning a recording.
If I can't write to that file, there's no reason to believe another will work
better. It's better to retry that recording forever, and perhaps put the whole
directory into an error state that stops recording until those writes go
through. I'm planning to redesign dir.rs to make this happen.
2018-02-22 16:35:34 -08:00
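
A hedged sketch of the knob's intent as described above, not the actual syncer
logic: commits happen only once the oldest uncommitted data is at least
flush_if_sec old, and flush_if_sec=0 keeps the old commit-immediately behavior.
Names are assumptions.

```rust
// Illustrative only; field and function names are invented for this sketch.
struct StreamConfig {
    flush_if_sec: u64,
}

fn should_commit(cfg: &StreamConfig, oldest_uncommitted_age_sec: u64) -> bool {
    oldest_uncommitted_age_sec >= cfg.flush_if_sec
}

fn main() {
    let tuned = StreamConfig { flush_if_sec: 120 };
    assert!(!should_commit(&tuned, 30)); // keep buffering; no write cycle yet
    assert!(should_commit(&tuned, 150)); // old enough; commit now

    let legacy = StreamConfig { flush_if_sec: 0 };
    assert!(should_commit(&legacy, 0)); // flush_if_sec=0: commit immediately
}
```
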
Scott Lamb
31adbc1e9f initial split of database to a separate crate
It should reduce compile time / memory usage to put quite a bit of the code
into a separate crate. I also intend to limit visibility of some things to
only within the db crate, but that's for a future change. This is the smallest
move that will compile.
2018-02-20 23:15:39 -08:00
Scott Lamb
d84e754b2a replace homegrown Error with failure crate
This reduces boilerplate, making it a bit easier for me to split the db stuff
out into its own crate.
2018-02-20 22:46:14 -08:00
Scott Lamb
253f3de399 reorganize the sample file directory
The filenames now represent composite ids (stream id + recording id) rather
than a separate uuid system with its own reservation scheme, for a few benefits:

  * This provides more information when there are inconsistencies.

  * This avoids the need for managing the reservations during recording. I
    expect this to simplify delaying flushing of newly written sample files.
    Now the directory has to be scanned at startup for files that never got
    written to the database, but that's acceptably fast even with millions of
    files.

  * Less information to keep in memory and in the recording_playback table.

I'd considered using one directory per stream, which might help if the
filesystem has trouble coping with huge directories. But that would mean each
dir has to be fsync()ed separately (more latency and/or more multithreading).
So I'll stick with this until I see concrete evidence of a problem that this
would solve.

Test coverage of the error conditions is poor. I plan to do some restructuring
of the db/dir code, hopefully making steps toward testability along the way.
2018-02-20 10:11:10 -08:00
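
A hedged sketch of the naming scheme described above; the exact bit layout and
formatting in Moonfire NVR may differ. The point is that the filename is derived
directly from the stream and recording ids rather than from a reserved uuid.

```rust
// Illustrative only: pack the stream id into the high 32 bits and the
// recording id into the low 32 bits, then format as 16 hex digits.
fn composite_id(stream_id: u32, recording_id: u32) -> u64 {
    ((stream_id as u64) << 32) | (recording_id as u64)
}

fn sample_file_name(stream_id: u32, recording_id: u32) -> String {
    format!("{:016x}", composite_id(stream_id, recording_id))
}

fn main() {
    assert_eq!(sample_file_name(1, 2), "0000000100000002");
}
```
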
Scott Lamb
89b6bccaa3 support multiple sample file directories
This is still pretty basic support. There's no config UI support for
renaming/moving the sample file directories after they are created, and no
error checking that the files are still in the expected place. I can imagine
sysadmins getting into trouble trying to change things. I hope to address at
least some of that in a follow-up change to introduce a versioning/locking
scheme that ensures databases and sample file dirs match in some way.

A bonus change that kinda got pulled along for the ride: a dialog pops up in
the config UI while a stream is being tested. The experience was pretty bad
before; there was no indication the button worked at all until it was done,
sometimes many seconds later.
2018-02-11 23:04:02 -08:00
Scott Lamb
dc402bdc01 schema version 2: support sub streams
This allows each camera to have a main and a sub stream. Previously there was
a field in the schema for the sub stream's url, but it didn't do anything. Now
you can configure individual retention for main and sub streams. They show up
grouped in the UI.

No support for upgrading from schema version 1 yet.
2018-02-03 22:15:54 -08:00
Scott Lamb
6902be1981 upgrade deps 2018-01-30 22:05:39 -08:00
Scott Lamb
8caa2e5d0e crate rename: http-(entity|file) -> http-serve 2018-01-23 11:08:21 -08:00
Scott Lamb
5c8970fe8a update dependencies 2017-11-16 23:01:09 -08:00
Scott Lamb
9041eeb907 fix panic when requesting zero segment duration
The recording::Segment was constructing a segment with no frames in it, which
was causing a panic when appending a zero-length stts to the Slices. Fix this
in a couple ways:

* Slices::append should return Err rather than panic. No reason to crash the
  whole program when we have trouble serving a single .mp4 request.
* recording::Segment shouldn't produce zero-frame segments
2017-10-17 08:55:21 -07:00
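
A hedged sketch of the first bullet above, with made-up types rather than the
real Slices/Slice API: appending a zero-length slice becomes an error return,
so one bad .mp4 request can't take down the whole process.

```rust
// Illustrative only; the real mp4::Slices is generic and more involved.
#[derive(Debug)]
struct Slice {
    len: u64,
}

#[derive(Debug, Default)]
struct Slices {
    slices: Vec<Slice>,
    len: u64,
}

impl Slices {
    fn append(&mut self, s: Slice) -> Result<(), String> {
        if s.len == 0 {
            // Formerly a panic; now the caller turns this into an HTTP error.
            return Err(format!("zero-length slice after {} bytes", self.len));
        }
        self.len += s.len;
        self.slices.push(s);
        Ok(())
    }
}

fn main() {
    let mut slices = Slices::default();
    assert!(slices.append(Slice { len: 16 }).is_ok());
    assert!(slices.append(Slice { len: 0 }).is_err());
}
```
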
Scott Lamb
1d08698d0c debug, fix panic with zero-duration recording
I had an assert that fired in this case, dating back to when I hadn't plumbed
Result returns through much of .mp4 construction. Now I have, so there's no
excuse for having an assert here. Change to an error return, and tweak it to
not fire in the zero-duration case.

Also fix a problem in the test harness; I hadn't finished converting it for
multi-recording tests, and it was returning the wrong recording.

Because of that, I seem to have stumbled across a related problem in which
asking for zero duration of a non-zero duration recording will return a
recording::Segment with no frames, which will cause panics because its
corresponding .mp4 slices are zero-length. I just adjusted the panic message
here; I'll follow up with changes to address that.
2017-10-17 06:14:47 -07:00
Scott Lamb
711f7b3409 fix with-editlist hash
I missed this because I was running with ffmpeg 3 and had grown to expect this
test to fail. Quick fix on that coming shortly.
2017-10-09 21:00:45 -07:00
Scott Lamb
af282c309e fix corrupt stss on segments after trimmed segment
This was causing Firefox to fail to play multipart .mp4s which trimmed away a
prefix. In the developer console, it said NS_ERROR_DOM_MEDIA_METADATA_ERR
without giving any RESULT_DETAIL, making it a pain to diagnose. Given that the
stss is supposed to be needed for seeking, I'm surprised it didn't have any
immediately obvious impact on Chrome or VLC.  Maybe they just took longer to
seek than otherwise necessary.

The bug was that when keeping track of the "next frame num" while constructing
the .mp4, I advanced it by the frame count of the underlying recording, not the
count post-trimming. That meant following segments used the wrong numbers. In
some cases, this caused sample numbers to exceed the total number of samples in
the generated .mp4, which seems to be what Firefox was complaining about.
Running the result
through "ffmpeg -i bad.mp4 -c copy -f mp4 good.mp4" just trimmed away the most
obviously invalid ones, leaving others that didn't point to the frames they
meant to. That was enough to make Firefox start playing the file. /shruggie

The existing tests were all with a single segment, so I added a new one to
catch this. I also added a Debug implementation to recording::Segment and
mp4::Segment.
2017-10-09 06:32:43 -07:00
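
A hedged sketch of the bug's shape (types and names invented for illustration):
the stss sample numbers must advance by the frames each segment contributes
after trimming, not by the underlying recording's frame count.

```rust
// Illustrative only. Sample numbers in the stss box are 1-based and global
// across the whole generated .mp4.
struct Segment {
    frames_after_trim: u32,
    key_frame_offsets: Vec<u32>, // 0-based, within the trimmed segment
}

fn stss_sample_numbers(segments: &[Segment]) -> Vec<u32> {
    let mut out = Vec::new();
    let mut next_frame_num = 1;
    for s in segments {
        for &k in &s.key_frame_offsets {
            out.push(next_frame_num + k);
        }
        // The bug was advancing by the underlying recording's frame count here,
        // so every segment after a trimmed one pointed at the wrong samples.
        next_frame_num += s.frames_after_trim;
    }
    out
}

fn main() {
    let segments = [
        Segment { frames_after_trim: 10, key_frame_offsets: vec![0] },
        Segment { frames_after_trim: 30, key_frame_offsets: vec![0, 15] },
    ];
    assert_eq!(stss_sample_numbers(&segments), vec![1, 11, 26]);
}
```
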
Scott Lamb
919e9a6deb remove extraneous debug logging 2017-10-04 22:55:29 -07:00
Scott Lamb
cb18ba44d8 fix /view.mp4 with rel_start
This was totally broken in commit 1cf27c18. It would serve bytes from the
beginning of the sample file in question, not from the start of the given
range.
2017-10-04 22:51:16 -07:00
Scott Lamb
5ea2c2fed1 fix media rate in edit list
It should be exactly 1, but was slightly more because the fraction was
incorrectly 1 rather than 0. I'm not sure if any actual players care about
this, but it was something I noticed when looking into strange edit list
behavior.
2017-10-04 00:03:33 -07:00
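
For reference, the edit list's media rate is a 16.16 fixed-point value split
into integer and fraction halves; "exactly 1" means integer 1 and fraction 0,
i.e. 0x00010000. A minimal sketch of writing it, with a made-up helper name:

```rust
// Illustrative only: serialize the elst media rate as big-endian 16.16
// fixed point. Writing fraction=1 instead of 0 made the rate slightly > 1.
fn push_media_rate(buf: &mut Vec<u8>) {
    let media_rate_integer: u16 = 1;
    let media_rate_fraction: u16 = 0;
    buf.extend_from_slice(&media_rate_integer.to_be_bytes());
    buf.extend_from_slice(&media_rate_fraction.to_be_bytes());
}

fn main() {
    let mut buf = Vec::new();
    push_media_rate(&mut buf);
    assert_eq!(buf, [0x00, 0x01, 0x00, 0x00]);
}
```
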
Scott Lamb
7673a00bd9 serve 'video/mp4; codecs="avc1.xxxxxx"' mime type
This can be used when constructing an HTML5 SourceBuffer.
2017-10-03 23:25:58 -07:00
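
A hedged sketch of building such a string per RFC 6381: the six hex digits
after "avc1." are the H.264 profile_idc, constraint-flags byte, and level_idc
taken from the sample entry. The function name and formatting here are
assumptions, not Moonfire NVR's API.

```rust
// Illustrative only.
fn mp4_mime_type(profile_idc: u8, constraint_flags: u8, level_idc: u8) -> String {
    format!(
        "video/mp4; codecs=\"avc1.{:02x}{:02x}{:02x}\"",
        profile_idc, constraint_flags, level_idc
    )
}

fn main() {
    // e.g. H.264 High profile, level 4.0
    assert_eq!(
        mp4_mime_type(0x64, 0x00, 0x28),
        "video/mp4; codecs=\"avc1.640028\""
    );
}
```

On the browser side, this is the string passed to MediaSource.isTypeSupported()
and addSourceBuffer().
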
Scott Lamb
04e9f3f160 support segmented mp4s
This is intended to support HTML5 Media Source Extensions, which I expect to
be the most practical way to make a good web UI with a proper scrub bar and
such.

This feature has had very limited testing on Chrome and Firefox, and that was
not entirely successful. More work is needed before it's usable, but this
seems like a helpful progress checkpoint.
2017-10-01 15:29:22 -07:00
Scott Lamb
11420df065 update deps (particularly hyper) + fix warnings 2017-09-21 21:51:58 -07:00
Scott Lamb
857a66f29c use my own ffmpeg crate
This significantly improves safety of the ffmpeg interface. The complex
ABIs aren't accessed directly from Rust. Instead, I have a small C
wrapper which uses the ffmpeg C API and the C headers at compile-time to
determine the proper ABI in the same way any C program using ffmpeg
would, so that the ABI doesn't have to be duplicated in Rust code.
I've tested with ffmpeg 2.x and ffmpeg 3.x; it seems to work properly
with both where before ffmpeg 3.x caused segfaults.

It still depends on ABI compatibility between the compiled and running
versions. C programs need this, too, and normal shared library
versioning practices provide this guarantee. But I log both versions on
startup to help diagnose problems with this.

Fixes #7
2017-09-20 21:06:06 -07:00
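
A hedged sketch of the wrapper pattern described above; the function name is
hypothetical. Rust declares only a small, stable extern interface, and the C
side includes the real ffmpeg headers so the C compiler determines the ABI.

```rust
// Illustrative only; this must be linked against the C wrapper object, which
// #includes the real ffmpeg headers (<libavformat/avformat.h>, etc.).
use std::os::raw::c_int;

extern "C" {
    // Hypothetical name: defined in the small C wrapper, not in Rust.
    fn moonfire_ffmpeg_init() -> c_int;
}

pub fn init() -> Result<(), String> {
    let rc = unsafe { moonfire_ffmpeg_init() };
    if rc != 0 {
        return Err(format!("ffmpeg wrapper init failed: {}", rc));
    }
    Ok(())
}
```
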
Scott Lamb
ac43e7fe17 fix bench --features=nightly, broken by upgrade
The benchmarks don't get compiled with the standard "cargo test";
they require "cargo +nightly bench --features=nightly", so I didn't notice
they were broken in the previous commit. Now fixed.
2017-06-11 19:40:36 -07:00
Scott Lamb
bebd6ee79a update dependencies
* The mylog update fixes a couple bad bugs.
* Otherwise, just keep up with the Rust ecosystem.
2017-06-11 12:57:55 -07:00
Scott Lamb
30cda85a2e shrink mp4::Segment by another 24 bytes on 32-bit
This is 1,440 bytes for a 60-segment .mp4, so another modest cache
improvement.
2017-03-27 20:55:58 -07:00
Scott Lamb
c3cffb510b make an assert more informative
I got this error but didn't understand how it happened.
2017-03-05 00:58:06 -08:00
Scott Lamb
1cf27c189f upgrade to async hyper
serve_generated_bytes is >3X faster. One caveat is that the reactor thread may
stall when reading from the memory-mapped slice. Moonfire NVR is basically a
single-user program, so that may not be so bad, but we'll see.
2017-03-02 19:29:28 -08:00
Scott Lamb
618709734a trim the recording playback cache a bit
It had an Arc which in hindsight isn't necessary; the actual video index
generation is fast anyway. This saves a couple pointers per cache entry and
the overhead of chasing them. LruCache itself also has some extra pointers on
it but that's something to address another day.
2017-02-28 23:28:25 -08:00
Scott Lamb
ce363162f4 trim 16 bytes from each recording::Segment
This reduces the working set by another 960 bytes for a typical one-hour recording, improving cache efficiency a bit more.

8 bytes from SampleIndexIterator:
   * reduce the three "bytes" fields to two. Doing so as "bytes_key" vs
     "bytes_nonkey" slowed it down a bit, perhaps because the "bytes" is
     needed right away and requires a branch. But "bytes" vs "bytes_other"
     seems fine. Looks like it can do this with cmovs in parallel with other
     stuff.
   * stuff "is_key" into the "i" field.

8 bytes from recording::Segment itself:
   * make "frames" and "key_frame" u16s
   * stuff "trailing_zero" into "video_sample_entry_id"
2017-02-27 21:14:06 -08:00
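
A hedged sketch of the "stuff is_key into the i field" trick; field names and
widths are illustrative, not the real SampleIndexIterator layout.

```rust
// Illustrative only: keep a bool in the top bit of a 32-bit index so the
// struct needs one fewer field (and no padding for a separate bool).
#[derive(Copy, Clone, Default)]
struct IndexPos {
    i_and_is_key: u32,
}

impl IndexPos {
    fn set(&mut self, i: u32, is_key: bool) {
        debug_assert!(i < 1 << 31);
        self.i_and_is_key = i | ((is_key as u32) << 31);
    }
    fn i(&self) -> u32 {
        self.i_and_is_key & 0x7fff_ffff
    }
    fn is_key(&self) -> bool {
        self.i_and_is_key & 0x8000_0000 != 0
    }
}

fn main() {
    let mut p = IndexPos::default();
    p.set(1234, true);
    assert_eq!(p.i(), 1234);
    assert!(p.is_key());
}
```
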
Scott Lamb
acb6f8d809 isolated benchmark of building stts/stss/stsz 2017-02-26 20:10:02 -08:00
Scott Lamb
f3b17a4bd8 switch to a hyper vendor branch with Nagle fix
There were Nagle's algorithm delays in both the "fresh_client" and
"reuse_client" versions of the .mp4 serving benchmark. Now performance is much
more consistent.
2017-02-26 19:05:05 -08:00
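
For context, the usual fix for Nagle-induced latency is setting TCP_NODELAY on
the socket; a minimal sketch with the standard library (the hyper branch in
question presumably does the equivalent internally):

```rust
// Illustrative only: disable Nagle's algorithm so small writes (HTTP headers,
// small .mp4 slices) aren't delayed waiting to coalesce with later data.
use std::net::TcpStream;

fn connect(addr: &str) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect(addr)?;
    stream.set_nodelay(true)?;
    Ok(stream)
}
```
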
Scott Lamb
f24daba299 shrink mp4::Segment 128 -> 112 bytes (on 64-bit)
* don't store sizes of mp4-format sample indexes; recalculate them.
* keep SampleIndexIterator position as a u32 rather than a usize.

This is 960 bytes for a 60-minute mp4; another small cache usage improvement.
2017-02-26 00:02:49 -08:00
Scott Lamb
21212be18a use associated types for Slice
This is more readable; in particular, it avoids the need for the awkward
PhantomData in the Slices struct.
2017-02-25 18:54:52 -08:00
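
A hedged sketch of the pattern (names illustrative, not the actual mp4.rs
types): with the context as an associated type on the Slice trait, the
container no longer needs an extra type parameter held only through
PhantomData.

```rust
// Illustrative only.
trait Slice {
    type Ctx;
    fn end(&self) -> u64;
    fn write(&self, ctx: &Self::Ctx, out: &mut Vec<u8>);
}

struct Slices<S: Slice> {
    slices: Vec<S>, // S carries its Ctx type; no PhantomData<Ctx> field needed
}

impl<S: Slice> Slices<S> {
    fn write_all(&self, ctx: &S::Ctx, out: &mut Vec<u8>) {
        for s in &self.slices {
            s.write(ctx, out);
        }
    }
}
```
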
Scott Lamb
2d0c78a6d8 style improvements
* remove stuttering: mp4::Mp4Foo -> mp4::Foo
* stop using a &MutexGuard<Foo> where a &Foo will do
2017-02-24 21:33:26 -08:00
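
A minimal sketch of the second point above: take &Foo and let deref coercion
turn a MutexGuard<Foo> into it at the call site (type names are invented).

```rust
use std::sync::Mutex;

struct Db {
    recordings: u64,
}

// Takes &Db, not &MutexGuard<Db>, so it doesn't care how the caller locks.
fn summarize(db: &Db) -> String {
    format!("{} recordings", db.recordings)
}

fn main() {
    let db = Mutex::new(Db { recordings: 3 });
    let guard = db.lock().unwrap();
    // Deref coercion: &MutexGuard<Db> -> &Db.
    assert_eq!(summarize(&guard), "3 recordings");
}
```
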
Scott Lamb
0a683b0846 Shrink mp4 file slices from 16 to 8 bytes each
For a one-hour recording, this is about 2 KiB, so a decent chunk of a
Raspberry Pi 2's L1 cache. Generating the Slices and searching/scanning
it should be a bit faster and pollute the cache less.

This is a pretty small optimization now when transferring a decent chunk
of the moov or mdat, but it's easy enough to do. It will be much more
noticeable if I end up interleaving the captions between each key frame.
2017-02-21 19:37:36 -08:00
Scott Lamb
da4e439b9c benchmark camera page, fix broken schema
This page was noticeably slower than necessary because the recording_cover
index wasn't actually covering the query. Both the schema for new databases
and the upgrade query were broken (and not even in the same way).

No new schema version to correct this, at least for now. I'll probably have
another reason to change the schema soon anyway and can throw this in.
2017-02-12 20:37:03 -08:00
Scott Lamb
b3a7795407 update to latest http-entity 2017-01-28 20:10:21 -08:00
Scott Lamb
87de4b4f5c update several dependencies
I left serde alone because uuid hasn't been updated for the new version.
2017-01-27 20:58:04 -08:00
Scott Lamb
c96f306e18 fix up the benchmarks
These are currently the only thing which require a nightly Rust. I haven't run
them since adding the feature gates. The feature gates were slightly broken,
and the actual benchmarks had bitrotted a bit. Fix these things. Also put them
into a separate submodule from the regular tests, so that not as many
feature gates (#[cfg(feature="nightly")]) are required.
2017-01-08 14:22:35 -08:00
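
A hedged sketch of the layout described above (module and bench names are
illustrative): one #[cfg(feature = "nightly")] on the submodule covers every
benchmark inside it, plus a crate-root attribute enabling the unstable test
crate only when the feature is on.

```rust
// Illustrative only. At the crate root (e.g. lib.rs):
#![cfg_attr(feature = "nightly", feature(test))]

#[cfg(feature = "nightly")]
mod bench {
    extern crate test;

    #[bench]
    fn sum_1k(b: &mut test::Bencher) {
        b.iter(|| (0..1000u32).sum::<u32>());
    }
}
```

Run with "cargo +nightly bench --features=nightly", as noted in the commit
above.
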
Scott Lamb
c7443436a5 skip the first rotation
This is as described in design/time.md.
2016-12-29 13:07:25 -08:00
Scott Lamb
d001e4893c new logic for calculating a recording's start time
This is as described in design/time.md. Other aspects of that design
(including using the monotonic clock and adjusting the durations to compensate
for camera clock frequency error) are not implemented yet. No new tests yet.
Just trying to get some flight miles on these ideas as soon as I can.
2016-12-28 20:56:08 -08:00
Scott Lamb
eee887b9a6 schema version 1
The advantages of the new schema are:

* overlapping recordings can be unambiguously described and viewed.
  This is a significant problem right now; the clock on my cameras appears to
  run faster than the (NTP-synchronized) clock on my NVR. Thus, if an
  RTSP session drops and is quickly reconnected, there's likely to be
  overlap.

* less I/O is required to view mp4s when there are multiple cameras.
  This is a pretty dramatic difference in the number of database read
  syscalls with pragma page_size = 1024 (605 -> 39 in one test),
  although I'm not sure how much of that maps to actual I/O wait time.
  That's probably as dramatic as it is due to overflow page chaining.
  But even with larger page sizes, there's an improvement. It helps to
  stop interleaving the video_index fields from different cameras.

There are changes to the JSON API to take advantage of this, described
in design/api.md.

There's an upgrade procedure, described in guide/schema.md.
2016-12-20 22:08:18 -08:00
Scott Lamb
fee4141dc6 replace resource.rs with new http-entity crate
This crate is a slightly-more-polished and MIT-licensed version of
resource.rs. So far it has one advantage: running the tests doesn't
require RUST_TEST_THREADS=1.
2016-12-20 18:29:45 -08:00
Scott Lamb
8e499aa070 compile with stable Rust
The benchmarks now require "cargo bench --features=nightly". The
extra #[cfg(nightly)] switches in the code needed for it are a bit
annoying; I may move the benches to a separate directory to avoid this.
But for now, this works.
2016-12-09 22:04:35 -08:00
Scott Lamb
1865427f75 fully implement json handling as in spec
This is a significant milestone; now the Rust branch matches the C++ branch's
features.

In the process, I switched from using serde_derive (which requires nightly
Rust) to serde_codegen (which does not). It was easier than I thought it'd
be. I'm getting close to no longer requiring nightly Rust.
2016-12-08 21:28:50 -08:00
Scott Lamb
678500bc88 stop using a couple unstable features
It would be nice to build on stable Rust. In particular, I'm hitting
compiler bugs in Rust nightly, such as this one:
https://github.com/rust-lang/rust/issues/38177
I imagine beta/stable compilers would be less problematic.

These two features were easy to get rid of:

* alloc was used to get a Box<[u8]> of uninitialized memory.
  Looks like that's possible with Vec instead.

* box_syntax wasn't actually used at all. (Maybe a leftover from something.)

The remaining features are:

* plugin, for clippy.
  https://github.com/rust-lang/rust/issues/29597
  I could easily gate it with a "nightly" cargo feature.

* proc_macro, for serde_derive.
  https://github.com/rust-lang/rust/issues/35900
  serde does support stable rust, although it's annoying.
  https://serde.rs/codegen-stable.html
  I might just wait a bit; this feature looks like it's getting close to
  stabilization.
2016-12-07 21:05:49 -08:00
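
A minimal sketch of the replacement for the first bullet above: allocate
through Vec and convert to Box<[u8]>. This zero-fills the buffer rather than
leaving it uninitialized, trading a small cost for building on stable.

```rust
fn make_buffer(len: usize) -> Box<[u8]> {
    vec![0u8; len].into_boxed_slice()
}

fn main() {
    let buf = make_buffer(4096);
    assert_eq!(buf.len(), 4096);
    assert!(buf.iter().all(|&b| b == 0));
}
```
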
Scott Lamb
8df0eae567 add a basic test of Streamer, fix it
This test is copied from the C++ implementation. It ensures the timestamps are
calculated accurately from the pts rather than using ffmpeg's estimated
duration. The Rust implementation was doing the easy-but-inaccurate thing, so
fix that to make the test pass.

Additionally, I did this with a code structure that should ensure the Rust
code never drops a Writer without indicating to the syncer that its uuid is
abandoned. Such a bug essentially leaks the partially-written file, although a
restart would cause it to be properly unlinked and marked as such. There are
no tests (yet) that exercise this scenario, though.
2016-12-06 18:41:44 -08:00
Scott Lamb
eb2dadd4f0 test and fix .mp4 generation code
* new, more thorough tests based on a "BoxCursor" which navigates the
  resulting .mp4. This tests everything the C++ code was testing on
  Mp4SamplePieces. And it goes beyond: it tests the actual resulting .mp4
  file, not some internal logic.

* fix recording::Segment::foreach to properly handle a truncated ending.
  Before, this was causing a panic.

* get rid of the separate recording::Segment::init method. This was some of
  the first Rust I ever wrote, and I must have thought I couldn't loan it my
locked database. I can, and that's cleaner. Now Segments are never
  half-initialized. Less to test, less to go wrong.

* fix recording::Segment::new to treat a trailing zero duration on a segment
  with a non-zero start in the same way as it does with a zero start. I'm
  still not sure what I'm doing makes sense, but at least it's not
  surprisingly inconsistent.

* add separate, smaller tests of recording::Segment

* address a couple TODOs in the .mp4 code and add missing comments

* change a couple panics on database corruption into cleaner error returns

* increment the etag version given the .mp4 output has changed
2016-12-02 20:40:55 -08:00