Commit Graph

536 Commits

Author SHA1 Message Date
Scott Lamb 6eda26a9cc support run splitting in json api 2017-10-17 09:00:05 -07:00
Scott Lamb 9041eeb907 fix panic when requesting zero segment duration
The recording::Segment was constructing a segment with no frames in it, which
was causing a panic when appending a zero-length stts to the Slices. Fix this
in a couple ways:

* Slices::append should return Err rather than panic. No reason to crash the
  whole program when we have trouble serving a single .mp4 request.
* recording::Segment shouldn't produce zero-frame segments
2017-10-17 08:55:21 -07:00
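A minimal sketch of the append-should-return-Err idea above, using hypothetical `Slice`/`Slices` types rather than the real mp4 ones:

```rust
use std::fmt::Debug;

/// Hypothetical stand-in for the real slice type; only the length matters here.
trait Slice: Debug {
    fn len(&self) -> u64;
}

#[derive(Debug)]
struct Slices<S: Slice> {
    slices: Vec<S>,
    len: u64,
}

impl<S: Slice> Slices<S> {
    /// Returns Err rather than panicking, so one bad .mp4 request can be
    /// answered with an error instead of crashing the whole server.
    fn append(&mut self, slice: S) -> Result<(), String> {
        if slice.len() == 0 {
            return Err(format!("refusing to append zero-length slice {:?}", slice));
        }
        self.len += slice.len();
        self.slices.push(slice);
        Ok(())
    }
}
```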
Scott Lamb 1d08698d0c debug, fix panic with zero-duration recording
I had an assert that fired in this case, dating back to when I hadn't plumbed
Result returns through much of .mp4 construction. Now I have, so there's no
excuse for having an assert here. Change to an error return, and tweak it to
not fire in the zero-duration case.

Also fix a problem in the test harness; I hadn't finished converting it for
multi-recording tests, and it was returning the wrong recording.

Because of that, I seem to have stumbled across a related problem in which
asking for zero duration of a non-zero duration recording will return a
recording::Segment with no frames, which will cause panics because its
corresponding .mp4 slices are zero-length. I just adjusted the panic message
here; I'll follow up with changes to address that.
2017-10-17 06:14:47 -07:00
Scott Lamb 1b1ae0bf0a fix link to users list, mention github tracker 2017-10-16 22:20:29 -07:00
Scott Lamb 2bdb2eca5d fix a couple time problems
* CameraDayKey::bounds (used to generate the start and end times of days in
  the returned JSON) returned UTC, not matching what recordings were mapped
  into that day. So fetching a day with its given bounds would return
  something different. Test and fix it.

* Several time-related tests weren't calling testutil::init(), so they weren't
  fixing the time zone to the expected America/Los_Angeles. If the machine
  time is set to something else, they would break.
2017-10-11 20:08:26 -07:00
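The time-zone pinning mentioned above can be done roughly like this; a sketch of what a testutil::init()-style helper might look like (assumes the `libc` crate), not the actual Moonfire NVR code:

```rust
use std::sync::Once;

static INIT: Once = Once::new();

/// Pins the process time zone so date-bucketing tests are deterministic
/// regardless of the machine's local zone.
fn init_for_tests() {
    INIT.call_once(|| {
        // `set_var` is unsafe in newer editions; `tzset` makes the C library
        // (and anything built on localtime()) pick up the change.
        unsafe {
            std::env::set_var("TZ", "America/Los_Angeles");
            libc::tzset();
        }
    });
}
```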
Scott Lamb bbe04f909c fix ClockAdjuster logic
Small negative deltas made every_minus_1 negative, which caused underflow
errors in debug builds. Fix this and test more comprehensively.
2017-10-09 22:13:38 -07:00
Scott Lamb 1e4d7d5ad9 make json api more idiomatic
* camelCase
* lose the "days":null in the overall cameras dict
2017-10-09 21:58:44 -07:00
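For reference, serde can express both conventions declaratively; a sketch with illustrative field names (assumes `serde` with the derive feature), not the exact API structs:

```rust
use std::collections::BTreeMap;

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct Camera {
    uuid: String,
    short_name: String, // serialized as "shortName"
    retain_bytes: i64,  // serialized as "retainBytes"

    /// Omitted entirely when absent rather than serialized as "days": null.
    #[serde(skip_serializing_if = "Option::is_none")]
    days: Option<BTreeMap<String, i64>>,
}
```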
Scott Lamb 5bb3dde74e work around #10 with advanced_editlist=false
I think this is an ffmpeg bug, which I plan to report. In the meantime, this
makes the tests pass. Long-term, even if ffmpeg fixes this, I probably don't
want to continue doing acceptance tests against whatever version of ffmpeg
happens to be installed - my real targets of interest are the latest versions
of Chrome, Firefox, Safari, QuickTime, and VLC.
2017-10-09 21:44:48 -07:00
Scott Lamb 711f7b3409 fix with-editlist hash
I missed this because I was running with ffmpeg 3 and had grown to expect this
test to fail. Quick fix on that coming shortly.
2017-10-09 21:00:45 -07:00
Scott Lamb af282c309e fix corrupt stss on segments after trimmed segment
This was causing Firefox to fail to play multipart .mp4s which trimmed away a
prefix. In the developer console, it said NS_ERROR_DOM_MEDIA_METADATA_ERR
without giving any RESULT_DETAIL, making it a pain to diagnose. Given that the
stss is supposed to be needed for seeking, I'm surprised it didn't have any
immediately obvious impact on Chrome or VLC.  Maybe they just took longer to
seek than otherwise necessary.

The bug was that when keeping track of the "next frame num" while constructing
the .mp4, I appended the number in the underlying recording, not the number
post-trimming. That meant following segments used the wrong numbers. In some
cases, the frame number exceeded the total number of samples in the generated
.mp4, which seems to be what Firefox was complaining about. Running the result
through "ffmpeg -i bad.mp4 -c copy -f mp4 good.mp4" just trimmed away the most
obviously invalid ones, leaving others that didn't point to the frames they
meant to. That was enough to make Firefox start playing the file. /shruggie

The existing tests were all with a single segment, so I added a new one to
catch this. I also added a Debug implementation to recording::Segment and
mp4::Segment.
2017-10-09 06:32:43 -07:00
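The gist of the numbering fix, in illustrative form (hypothetical `TrimmedSegment` type, not the real recording/mp4 Segment structs):

```rust
/// Hypothetical per-segment summary: how many frames survive trimming and
/// which of them (0-based, post-trim) are key frames.
struct TrimmedSegment {
    frames_after_trim: u32,
    key_frame_indices: Vec<u32>,
}

/// Produces the 1-based sample numbers for the stss box. The bug was
/// advancing `next_frame_num` by the untrimmed recording's frame count, so
/// later segments pointed at samples that don't exist in the generated .mp4.
fn stss_sample_numbers(segments: &[TrimmedSegment]) -> Vec<u32> {
    let mut out = Vec::new();
    let mut next_frame_num = 1u32; // sample numbers are 1-based in ISO BMFF
    for s in segments {
        for &i in &s.key_frame_indices {
            out.push(next_frame_num + i);
        }
        next_frame_num += s.frames_after_trim; // post-trim count, not the original
    }
    out
}
```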
Scott Lamb 919e9a6deb remove extraneous debug logging 2017-10-04 22:55:29 -07:00
Scott Lamb cb18ba44d8 fix /view.mp4 with rel_start
This was totally broken in commit 1cf27c18. It would serve bytes from the
beginning of the sample file in question, not from the start of the given
range.
2017-10-04 22:51:16 -07:00
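A minimal illustration of the intended behavior (illustrative function, not the actual serving code):

```rust
use std::ops::Range;

/// Returns the requested portion of a memory-mapped sample file. The broken
/// version effectively served `&mmap[..range.len()]`, i.e. bytes from the
/// beginning of the file rather than from the start of the given range.
fn file_slice(mmap: &[u8], range: Range<usize>) -> &[u8] {
    &mmap[range]
}
```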
Scott Lamb 57985079cc bugfix: in /recordings, end_id should be inclusive 2017-10-04 06:36:30 -07:00
Scott Lamb 5ea2c2fed1 fix media rate in edit list
it should be exactly 1, but was slightly more because the fraction was
incorrectly 1 rather than 0. I'm not sure if any actual players care about
this, but it was something I noticed when looking into strange edit list
behavior.
2017-10-04 00:03:33 -07:00
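For context, an elst entry stores the media rate as two 16-bit fields, integer and fraction; a rate of exactly 1 is integer 1, fraction 0. A sketch of writing it (hypothetical helper, not the project's serialization code):

```rust
/// Appends the rate fields of one elst entry (they follow segment_duration
/// and media_time in the box layout). Writing 1 in the fraction field, as the
/// bug did, yields a rate of 1 + 1/65536 instead of exactly 1.
fn push_media_rate(buf: &mut Vec<u8>, integer: u16, fraction: u16) {
    buf.extend_from_slice(&integer.to_be_bytes());
    buf.extend_from_slice(&fraction.to_be_bytes());
}

// A normal edit: push_media_rate(&mut buf, 1, 0);
```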
Scott Lamb bd4104b446 add start_id and end_id to .../recordings json
This was added to the API documentation in eee887b9 but never actually
implemented then. It's necessary to actually fetch the .mp4 in question.
2017-10-04 00:00:56 -07:00
Scott Lamb 7673a00bd9 serve 'video/mp4; codecs="avc1.xxxxxx"' mime type
This can be used when constructing an HTML5 SourceBuffer.
2017-10-03 23:25:58 -07:00
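The codecs parameter follows RFC 6381: "avc1." plus the profile, constraint-flags, and level bytes in hex. A sketch of building the served value (hypothetical helper):

```rust
/// Builds the served MIME type from the three bytes that follow the
/// configuration version in the track's avcC box: profile_idc, the
/// constraint-flags byte, and level_idc (RFC 6381 "avc1.PPCCLL" form).
fn mp4_mime_type(profile_idc: u8, constraint_flags: u8, level_idc: u8) -> String {
    format!(
        "video/mp4; codecs=\"avc1.{:02X}{:02X}{:02X}\"",
        profile_idc, constraint_flags, level_idc
    )
}

// mp4_mime_type(0x64, 0x00, 0x1f) == r#"video/mp4; codecs="avc1.64001F""#
```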
Scott Lamb 9eff91f7da fix test clip's mvhd timebase
I manually fixed the offending timebases and durations with a hex editor.
This addresses most of the failing tests described in #10.
2017-10-02 20:43:37 -07:00
Scott Lamb cbd8f7d3d2 Reorganize and expand documentation 2017-10-01 22:02:39 -07:00
Scott Lamb 04e9f3f160 support segmented mp4s
This is intended to support HTML5 Media Source Extensions, which I expect to
be the most practical way to make a good web UI with a proper scrub bar and
such.

This feature has had very limited testing on Chrome and Firefox, and that was
not entirely successful. More work is needed before it's usable, but this
seems like a helpful progress checkpoint.
2017-10-01 15:29:22 -07:00
Scott Lamb cb689b2ec8 Linux/arm compilation fix
libc::c_char is u8 rather than i8 there, unlike Linux/x86_64 or OS X.
Use correct type to compile on all platforms.
2017-09-23 21:12:17 -07:00
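The portable pattern is to spell the type as `libc::c_char` (or cast to it) rather than hard-coding `i8`; a hypothetical example assuming the `libc` crate:

```rust
use std::ffi::CStr;

/// Converts a NUL-terminated C buffer to a String. Casting the pointer to
/// libc::c_char (u8 on Linux/arm, i8 on Linux/x86_64 and OS X) instead of
/// hard-coding *const i8 lets this compile on every platform.
///
/// Safety: `buf` must contain a NUL terminator.
unsafe fn c_buf_to_string(buf: &[u8]) -> String {
    let ptr = buf.as_ptr().cast::<libc::c_char>();
    CStr::from_ptr(ptr).to_string_lossy().into_owned()
}
```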
Scott Lamb 45b508dff6 fix moonfire_ffmpeg::Error formatting
This would return a string with trailing nuls up to the buffer size (64
bytes) which would cause problems later. One in particular: in
"moonfire-nvr config", if testing a camera failed, displaying the error would
panic with the backtrace below.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: NulError(33, [69, 114, 114, 111, 114, 58, 32, 102, 102, 109, 112, 101, 103, 58, 32, 67, 111, 110, 110, 101, 99, 116, 105, 111, 110, 32, 114, 101, 102, 117, 115, 101, 100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])', src/libcore/result.rs:860:4
stack backtrace:
   0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace
   1: std::panicking::default_hook::{{closure}}
   2: std::panicking::default_hook
   3: std::panicking::rust_panic_with_hook
   4: std::panicking::begin_panic_new
   5: std::panicking::begin_panic_fmt
   6: rust_begin_unwind
   7: core::panicking::panic_fmt
   8: core::result::unwrap_failed
   9: <core::result::Result<T, E>>::unwrap
  10: <&'a str as ncurses::ToCStr>::to_c_str
  11: ncurses::addstr
  12: ncurses::mvaddstr
  13: <cursive::backend::curses::n::Concrete as cursive::backend::Backend>::print_at
  14: cursive::printer::Printer::print
  15: <cursive::views::text_view::TextView as cursive::view::View>::draw::{{closure}}
  16: cursive::view::scroll::ScrollBase::draw
  17: <cursive::views::text_view::TextView as cursive::view::View>::draw
  18: <cursive::views::dialog::Dialog as cursive::view::View>::draw
  19: <cursive::views::layer::Layer<T> as cursive::view::view_wrapper::ViewWrapper>::wrap_draw
  20: cursive::view::view_wrapper::<impl cursive::view::View for T>::draw
  21: <cursive::views::shadow_view::ShadowView<T> as cursive::view::view_wrapper::ViewWrapper>::wrap_draw
  22: cursive::view::view_wrapper::<impl cursive::view::View for T>::draw
  23: <cursive::views::stack_view::StackView as cursive::view::View>::draw::{{closure}}
  24: cursive::printer::Printer::with_color::{{closure}}
  25: <cursive::backend::curses::n::Concrete as cursive::backend::Backend>::with_color
  26: cursive::printer::Printer::with_color
  27: <cursive::views::stack_view::StackView as cursive::view::View>::draw
  28: cursive::Cursive::draw
  29: cursive::Cursive::step
  30: cursive::Cursive::run
  31: moonfire_nvr::cmds::config::run
  32: moonfire_nvr::cmds::Command::run
  33: moonfire_nvr::main
  34: __rust_maybe_catch_panic
  35: std::rt::lang_start
  36: main
2017-09-22 06:47:08 -07:00
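The fix amounts to cutting the buffer at the first NUL before converting it to a Rust string; a sketch (the 64-byte size is taken from the note above):

```rust
/// Converts a fixed-size, NUL-padded C buffer into a Rust String, stopping at
/// the first NUL instead of keeping the padding (which is what later made
/// CString construction fail inside ncurses, per the backtrace above).
fn error_buf_to_string(buf: &[u8; 64]) -> String {
    let len = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
    String::from_utf8_lossy(&buf[..len]).into_owned()
}
```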
Scott Lamb 71e2930ced use the latest libc crate to build on OS X 2017-09-22 06:20:40 -07:00
Scott Lamb 11420df065 update deps (particularly hyper) + fix warnings 2017-09-21 21:51:58 -07:00
Scott Lamb 857a66f29c use my own ffmpeg crate
This significantly improves safety of the ffmpeg interface. The complex
ABIs aren't accessed directly from Rust. Instead, I have a small C
wrapper which uses the ffmpeg C API and the C headers at compile-time to
determine the proper ABI in the same way any C program using ffmpeg
would, so that the ABI doesn't have to be duplicated in Rust code.
I've tested with ffmpeg 2.x and ffmpeg 3.x; it seems to work properly
with both where before ffmpeg 3.x caused segfaults.

It still depends on ABI compatibility between the compiled and running
versions. C programs need this, too, and normal shared library
versioning practices provide this guarantee. But log both versions on
startup for diagnosing problems with this.

Fixes #7
2017-09-20 21:06:06 -07:00
Scott Lamb 8ff1d0dcb8 workaround config crash since cursive 0.5.1
https://github.com/gyscos/Cursive/issues/144
2017-07-02 22:03:16 -07:00
Scott Lamb ac43e7fe17 fix bench --features=nightly, broken by upgrade
The benchmarks don't get compiled with the standard "cargo test";
they require "cargo +nightly bench --features=nightly", so I didn't notice
they were broken in the previous commit. Now fixed.
2017-06-11 19:40:36 -07:00
Scott Lamb bebd6ee79a update dependencies
* The mylog update fixes a couple bad bugs.
* Otherwise, just keep up with the Rust ecosystem.
2017-06-11 12:57:55 -07:00
Scott Lamb 30cda85a2e shrink mp4::Segment by another 24 bytes on 32-bit
This is 1,440 bytes for a 60-segment .mp4, so another modest cache
improvement.
2017-03-27 20:55:58 -07:00
Scott Lamb b83f1f923f specify git path to mylog 2017-03-26 00:03:24 -07:00
Scott Lamb bfc0e2abe8 use my own logging package
This supports formats that I find more useful: one that mimics the Google
glog package, and one that is similar but adapted for the systemd journal.
2017-03-26 00:01:48 -07:00
Scott Lamb c3cffb510b make an assert more informative
I got this error but didn't understand how it happened.
2017-03-05 00:58:06 -08:00
Scott Lamb a9ca38fa4e disable lto build
On my Raspberry Pi 2, LLVM doesn't have enough memory to link with this option
anymore.
2017-03-04 07:53:27 -08:00
Scott Lamb d263ee1bcf upgrade rusqlite and support bundled library
SQLite3 has gotten some noticeable speed improvements in recent releases. This
is a modest improvement on reasonably new platforms, and a pretty significant
one on Raspbian Jessie (which has an incredibly old SQLite3).
2017-03-03 22:42:13 -08:00
Scott Lamb 4806c62ca1 reuse reqwest client in serve_camera_html bench
This makes a huge difference in the reported time - 863 usec rather than 6
milliseconds on my laptop. Part of the difference is in reqwest client setup
(it apparently initializes an SSL_CTX that is never used in this test), part
fresh connections vs keepalive, part I don't know what. None of it seems
relevant to the logic I want to test.
2017-03-03 22:26:29 -08:00
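The general shape of the change, illustrated with today's reqwest blocking API (the 2017 API differed, but the point is the same: construct the client once):

```rust
// Assumes the `reqwest` crate with its "blocking" feature.
fn fetch_many(urls: &[&str]) -> Result<Vec<String>, reqwest::Error> {
    // Client setup (TLS context, connection pool) is paid once; later
    // requests to the same host reuse keep-alive connections.
    let client = reqwest::blocking::Client::new();
    urls.iter()
        .map(|u| client.get(*u).send()?.text())
        .collect()
}
```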
Scott Lamb 1cf27c189f upgrade to async hyper
serve_generated_bytes is >3X faster. One caveat is that the reactor thread may
stall when reading from the memory-mapped slice. Moonfire NVR is basically a
single-user program, so that may not be so bad, but we'll see.
2017-03-02 19:29:28 -08:00
Scott Lamb 618709734a trim the recording playback cache a bit
It had an Arc which in hindsight isn't necessary; the actual video index
generation is fast anyway. This saves a couple pointers per cache entry and
the overhead of chasing them. LruCache itself also has some extra pointers on
it but that's something to address another day.
2017-02-28 23:28:25 -08:00
Scott Lamb 045ee95820 shrink RecordingPlayback by one pointer 2017-02-27 23:30:53 -08:00
Scott Lamb ce363162f4 trim 16 bytes from each recording::Segment
This reduces the working set by another 960 bytes for a typical one-hour
recording, improving cache efficiency a bit more.

8 bytes from SampleIndexIterator:
   * reduce the three "bytes" fields to two. Doing so as "bytes_key" vs
     "bytes_nonkey" slowed it down a bit, perhaps because the "bytes" is
     needed right away and requires a branch. But "bytes" vs "bytes_other"
     seems fine. Looks like it can do this with cmovs in parallel with other
     stuff.
   * stuff "is_key" into the "i" field.

8 bytes from recording::Segment itself:
   * make "frames" and "key_frame" u16s
   * stuff "trailing_zero" into "video_sample_entry_id"
2017-02-27 21:14:06 -08:00
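The "stuff a flag into an existing field" trick, in isolation (illustrative names, not the exact SampleIndexIterator layout):

```rust
/// The key-frame flag rides in the low bit of the index field, so the struct
/// needs no separate bool (and its padding).
#[derive(Copy, Clone, Default)]
struct PackedPos(u32);

impl PackedPos {
    fn new(i: u32, is_key: bool) -> Self {
        debug_assert!(i < (1 << 31));
        PackedPos((i << 1) | (is_key as u32))
    }
    fn i(self) -> u32 { self.0 >> 1 }
    fn is_key(self) -> bool { (self.0 & 1) != 0 }
}
```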
Scott Lamb 15609ddb8e improve build_index performance by 5-10%
I just switched a couple inner loop ?s back to try!(...) to work around
https://github.com/rust-lang/rust/issues/37939
2017-02-26 20:21:46 -08:00
Scott Lamb acb6f8d809 isolated benchmark of building stts/stss/stsz 2017-02-26 20:10:02 -08:00
Scott Lamb f3b17a4bd8 switch to a hyper vendor branch with Nagle fix
There were Nagle's algorithm delays in both the "fresh_client" and
"reuse_client" versions of the .mp4 serving benchmark. Now performance is much
more consistent.
2017-02-26 19:05:05 -08:00
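For context, the usual fix for such delays is disabling Nagle's algorithm on the socket (TCP_NODELAY), which is presumably what the vendored branch does; a generic sketch using std:

```rust
use std::net::{TcpListener, TcpStream};

/// Accepts a connection with Nagle's algorithm disabled so small writes
/// (HTTP headers, short moov pieces) go out immediately rather than waiting
/// on ACKs for earlier segments.
fn accept_no_delay(listener: &TcpListener) -> std::io::Result<TcpStream> {
    let (stream, _addr) = listener.accept()?;
    stream.set_nodelay(true)?;
    Ok(stream)
}
```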
Scott Lamb f24daba299 shrink mp4::Segment 128 -> 112 bytes (on 64-bit)
* don't store sizes of mp4-format sample indexes; recalculate them.
   * keep SampleIndexIterator position as a u32 rather than a usize.

This is 960 bytes for a 60-minute mp4; another small cache usage improvement.
2017-02-26 00:02:49 -08:00
Scott Lamb 21212be18a use associated types for Slice
This is more readable; in particular, it avoids the need for the awkward
PhantomData in the Slices struct.
2017-02-25 18:54:52 -08:00
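A distilled version of the change (not the real Slices API): a generic parameter used only in method signatures forces PhantomData onto the struct, while an associated type does not:

```rust
use std::marker::PhantomData;

// Before: the context type C appears only in method signatures, so the
// container must hold a PhantomData<C> to use the parameter at all.
trait SliceOld<C> {
    fn end(&self, ctx: &C) -> u64;
}
struct SlicesOld<C, S: SliceOld<C>> {
    slices: Vec<S>,
    _ctx: PhantomData<C>,
}

// After: the context is an associated type of the trait, and the container
// only needs to name the slice type.
trait Slice {
    type Ctx;
    fn end(&self, ctx: &Self::Ctx) -> u64;
}
struct Slices<S: Slice> {
    slices: Vec<S>,
}
```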
Scott Lamb 2d0c78a6d8 style improvements
* remove stuttering: mp4::Mp4Foo -> mp4::Foo
* stop using a &MutexGuard<Foo> where a &Foo will do
2017-02-24 21:33:26 -08:00
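The second point in miniature (hypothetical `State` type): deref coercion turns the guard into a plain reference at the call site:

```rust
use std::sync::{Mutex, MutexGuard};

struct State {
    count: u64,
}

// Taking &State keeps the function independent of how the caller locks it.
fn report(state: &State) -> u64 {
    state.count
}

fn caller(m: &Mutex<State>) -> u64 {
    let guard: MutexGuard<State> = m.lock().unwrap();
    // Deref coercion turns &guard into &State here, so nothing needs to
    // accept a &MutexGuard<State>.
    report(&guard)
}
```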
Scott Lamb 0a683b0846 Shrink mp4 file slices from 16 to 8 bytes each
For a one-hour recording, this is about 2 KiB, so a decent chunk of a
Raspberry Pi 2's L1 cache. Generating the Slices and searching/scanning
it should be a bit faster and pollute the cache less.

This is a pretty small optimization now when transferring a decent chunk
of the moov or mdat, but it's easy enough to do. It will be much more
noticeable if I end up interleaving the captions between each key frame.
2017-02-21 19:37:36 -08:00
Scott Lamb c6813bd886 clean up adjust_day implementation
Entry::Occupied has a remove_entry method that allows this to be a little more
readable, and a little more efficient (one btree traversal rather than two).
mmstick on reddit pointed this out:

https://www.reddit.com/r/rust/comments/5tzw5q/how_are_if_else_scopes_are_handled_in_rust/dds0r37/?context=3
2017-02-15 19:16:34 -08:00
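A sketch in the same shape as adjust_day (hypothetical map contents), showing remove_entry on the occupied entry:

```rust
use std::collections::btree_map::Entry;
use std::collections::BTreeMap;

/// Adds `delta` to a day's total, dropping the key when it reaches zero.
/// `remove_entry()` reuses the entry located by `entry()`, so there's one
/// btree traversal instead of an `entry()` followed by a separate `remove()`.
fn adjust(map: &mut BTreeMap<String, i64>, day: String, delta: i64) {
    match map.entry(day) {
        Entry::Vacant(e) => {
            e.insert(delta);
        }
        Entry::Occupied(mut e) => {
            *e.get_mut() += delta;
            if *e.get() == 0 {
                e.remove_entry();
            }
        }
    }
}
```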
Scott Lamb 13c6af45a1 avoid heap allocation reading uuid from sqlite
As described here:
https://github.com/jgallagher/rusqlite/issues/158#issuecomment-277884643
2017-02-13 19:36:05 -08:00
Scott Lamb 5d727a9c83 enforce foreign keys, swap delete order
This came up when I tried using the "bundled" feature of rusqlite. Its build
script passes -DSQLITE_DEFAULT_FOREIGN_KEYS=1, which caused a test to fail.
Fix the bug that this option revealed, and set the pragma so we'll catch
such problems in the future even when using a system library not compiled in
this way.
2017-02-12 20:56:04 -08:00
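Roughly what this looks like with rusqlite (placeholder table names, not the exact schema):

```rust
use rusqlite::Connection;

fn open(path: &str) -> rusqlite::Result<Connection> {
    let conn = Connection::open(path)?;
    // Enforce foreign keys even when the linked SQLite wasn't built with
    // -DSQLITE_DEFAULT_FOREIGN_KEYS=1, so ordering mistakes fail everywhere.
    conn.execute_batch("pragma foreign_keys = on;")?;
    Ok(conn)
}

// With enforcement on, child rows must be deleted before their parent.
fn delete_recording(conn: &Connection, id: i64) -> rusqlite::Result<()> {
    conn.execute("delete from recording_playback where composite_id = ?1", [id])?;
    conn.execute("delete from recording where composite_id = ?1", [id])?;
    Ok(())
}
```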
Scott Lamb b7957edb5a fix "Edit retention" fs capacity calculation
This was completely wrong: it overflowed on large filesystems and
double-counted the used bytes.

The new logic is still imperfect in that if there are a bunch of files in the
process of being deleted (moved from recording to reserved_sample_files but
not yet unlinked), they'll be taken out of the total capacity. Maybe it should
stat everything in the sample file directory instead of relying on the
recording table. It's definitely an improvement, though.
2017-02-12 20:45:46 -08:00
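A sketch of an overflow-safe capacity query via libc::statvfs (assumes the `libc` crate and Unix; the actual retention math additionally consults the recording table, per the note above):

```rust
use std::ffi::CString;
use std::io;

/// Returns (total_bytes, available_bytes) for the filesystem holding `path`,
/// doing the block math in u64 so large filesystems can't overflow it.
fn fs_capacity(path: &str) -> io::Result<(u64, u64)> {
    let c_path = CString::new(path)
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "path contains NUL"))?;
    let mut s: libc::statvfs = unsafe { std::mem::zeroed() };
    if unsafe { libc::statvfs(c_path.as_ptr(), &mut s) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok((
        s.f_frsize as u64 * s.f_blocks as u64,
        s.f_frsize as u64 * s.f_bavail as u64,
    ))
}
```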
Scott Lamb da4e439b9c benchmark camera page, fix broken schema
This page was noticeably slower than necessary because the recording_cover
index wasn't actually covering the query. Both the schema for new databases
and the upgrade query were broken (and not even in the same way).

No new schema version to correct this, at least for now. I'll probably have
another reason to change the schema soon anyway and can throw this in.
2017-02-12 20:37:03 -08:00