Commit Graph

34 Commits

Author SHA1 Message Date
Scott Lamb 422cd2a75e preliminary web support for auth (#26)
Some caveats:

  * it doesn't record the peer IP yet, which makes it harder to verify
    sessions are valid. This is a little annoying to do in hyper now
    (see hyperium/hyper#1410). The direct peer might not be what we want
    right now anyway because there's no TLS support yet (see #27).  In
    the meantime, the sane way to expose Moonfire NVR to the Internet is
    via a proxy server, and recording the proxy's IP is not useful.
    Maybe better to interpret an RFC 7239 Forwarded header (and/or
    the older X-Forwarded-{For,Proto} headers); see the sketch after
    this list.

  * it doesn't ever use Secure (https-only) cookies, for a similar reason.
    It's not safe to use even with a TLS proxy until this is fixed.

  * there's no "moonfire-nvr config" support for inspecting/invalidating
    sessions yet.

  * in debug builds, logging in is crazy slow. See libpasta/libpasta#9.
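
For illustration only, interpreting such a header might look roughly like the
std-only Rust sketch below. The function name and the choice of
X-Forwarded-For are assumptions; this is not what the server does today.

    use std::net::IpAddr;

    /// Hypothetical helper: take the first (client-most) entry of an
    /// X-Forwarded-For value such as "203.0.113.9, 10.0.0.2" and parse it.
    /// A real version would only trust this header when the direct peer is
    /// a known proxy, and might prefer the RFC 7239 Forwarded form.
    fn client_ip_from_x_forwarded_for(value: &str) -> Option<IpAddr> {
        value
            .split(',')
            .next()          // first hop is the original client
            .map(str::trim)
            .and_then(|s| s.parse().ok())
    }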

Some notes:

  * I removed the Javascript "no-use-before-define" lint, as some of
    the functions form a cycle.

  * Fixed #20 along the way. I needed to add support for properly
    returning non-OK HTTP statuses to signal unauthorized and such.

  * I removed the Access-Control-Allow-Origin header support, which was
    at odds with the "SameSite=lax" in the cookie header. The "yarn
    start" method for running a local proxy server accomplishes the same
    thing as the Access-Control-Allow-Origin support in a more secure
    manner.
2018-11-27 11:08:33 -08:00
Scott Lamb 5bba71345c few small markdown tweaks 2018-08-24 21:04:13 -07:00
Dolf Starreveld 5727adf3df Settings can now be taken from a separate file with local override.
* Various settings live in the settings-nvr.js module
* settings-nvr-local.js can override settings-nvr.js
* settings-nvr-local.js is not checked into version control
* Both files can be straight maps, or functions returning maps
* webpack env and args are available to those functions
2018-03-08 18:26:41 -08:00
Scott Lamb 6b369a4cf0 3rd try at screenshot in README 2017-10-21 22:03:12 -07:00
Scott Lamb 43874c6a43 second try at screenshot in README 2017-10-21 21:57:04 -07:00
Scott Lamb 315f3594c2 add a basic Javascript UI
The Javascript is pretty amateurish I'm sure but at least it's something to
iterate from. It's already much more pleasant for browsing through videos in
several ways:

* more responsive to load only a day at a time rather than 90+ days
* much easier to see the same time segment on several cameras
* more pleasant to have the videos load as a popup rather than a link
  that blows away your position in an enormous list
* exposes the fancier .mp4 generation options: splitting at lengths
  other than the default, trimming to an arbitrary start and end time,
  including a subtitle track with timestamps.

There's a slight regression in functionality: I didn't match the former
top-level page, which showed how much of its disk allocation each camera
had used and the total duration of video. This is exposed in the JSON API,
so it shouldn't be too hard to add back.
2017-10-21 21:54:27 -07:00
Scott Lamb 1b1ae0bf0a fix link to users list, mention github tracker 2017-10-16 22:20:29 -07:00
Scott Lamb cbd8f7d3d2 Reorganize and expand documentation 2017-10-01 22:02:39 -07:00
Scott Lamb 857a66f29c use my own ffmpeg crate
This significantly improves safety of the ffmpeg interface. The complex
ABIs aren't accessed directly from Rust. Instead, I have a small C
wrapper which uses the ffmpeg C API and the C headers at compile-time to
determine the proper ABI in the same way any C program using ffmpeg
would, so that the ABI doesn't have to be duplicated in Rust code.
I've tested with ffmpeg 2.x and ffmpeg 3.x; it seems to work properly
with both where before ffmpeg 3.x caused segfaults.

It still depends on ABI compatibility between the compiled and running
versions. C programs need this, too, and normal shared library
versioning practices provide this guarantee. But log both versions on
startup for diagnosing problems with this.

Fixes #7
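
As a rough sketch of the shape of this approach (the shim functions below are
made up for illustration, not the crate's actual interface): a small C file is
compiled against whatever ffmpeg headers are installed, and Rust binds only
that stable surface, never ffmpeg's own structs.

    use std::ffi::c_void;
    use std::os::raw::{c_char, c_int};

    // Hypothetical bindings to a hand-written C shim. Layout differences
    // between ffmpeg 2.x and 3.x stay on the C side of this boundary.
    extern "C" {
        /// Made-up shim function: open an input and hand back an opaque handle.
        fn moonfire_ffmpeg_open_input(url: *const c_char,
                                      out: *mut *mut c_void) -> c_int;

        /// Made-up shim function: the ffmpeg version the shim was built
        /// against, useful to log on startup next to the running version.
        fn moonfire_ffmpeg_compiled_version() -> *const c_char;
    }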
2017-09-20 21:06:06 -07:00
Scott Lamb bfc0e2abe8 use my own logging package
This supports formats that I find more useful; one that mimics the Google
glog package, and one that is similar but adapted for the systemd journal.
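
For reference, a glog-style line looks roughly like the one built below. This
is only an illustration of the format (using the chrono crate and the process
id), not the logging crate's actual implementation.

    use chrono::Local;

    /// Builds something like: I0326 00:01:48.123456  4242 streamer.rs:87] hi
    fn glog_line(severity: char, file: &str, line: u32, msg: &str) -> String {
        format!("{}{} {:5} {}:{}] {}",
                severity,
                Local::now().format("%m%d %H:%M:%S%.6f"),
                std::process::id(),
                file,
                line,
                msg)
    }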
2017-03-26 00:01:48 -07:00
Scott Lamb 3314673b8f fix ncurses package names 2017-02-05 21:58:41 -08:00
Scott Lamb 625c6f807b small dep README.md/prep.sh fixes
These are things that weren't quite right in the commit
c82f038bef.
2017-02-05 20:55:41 -08:00
Scott Lamb f97e232131 upgrade dependencies
Rust 1.15+ now supports serde codegen on stable without the build.rs.
Update to serde 0.9 and uuid crate 0.4 to match.
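
For context, the stable-Rust codegen path means deriving the traits directly
instead of generating code from build.rs. A minimal sketch (the struct here is
made up, not the actual schema):

    // Crate root, with serde and serde_derive in Cargo.toml; no build.rs.
    #[macro_use]
    extern crate serde_derive;
    extern crate serde;

    /// Hypothetical example type.
    #[derive(Serialize, Deserialize)]
    struct CameraConfig {
        uuid: String,
        description: String,
    }

    fn main() {}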
2017-02-05 20:13:51 -08:00
Scott Lamb c82f038bef new "moonfire-nvr config" subcommand
This is an ncurses-based user interface for configuration. This fills a major
usability gap: the system can be configured without manual SQL commands.
2017-02-05 19:58:41 -08:00
Scott Lamb 168cd743f4 new command to initialize a database 2017-01-17 14:21:13 -08:00
Scott Lamb 3af9aeee96 use xsv-style subcommands like "moonfire-nvr run"
This makes it easier to understand which options are valid with each
command.

Additionally, there's more separation of implementations. The most
obvious consequence is that "moonfire-nvr ts ..." no longer uselessly
locks/opens a database.
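
The general dispatch shape, sketched with std only (the subcommand set and
helper functions below are placeholders, not the actual module layout):

    use std::env;
    use std::process::exit;

    // Placeholder entry points; the real ones live in separate modules, so
    // e.g. the `ts` command never needs to open the database.
    fn run() -> i32 { 0 }
    fn init() -> i32 { 0 }
    fn ts(_args: &[String]) -> i32 { 0 }

    fn main() {
        let args: Vec<String> = env::args().collect();
        let code = match args.get(1).map(String::as_str) {
            Some("run") => run(),
            Some("init") => init(),
            Some("ts") => ts(&args[2..]),
            _ => {
                eprintln!("usage: moonfire-nvr <run|init|ts> [options]");
                2
            }
        };
        exit(code);
    }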
2017-01-17 12:51:56 -08:00
Scott Lamb 586902d30f Merge branch 'new-schema' 2017-01-01 22:59:49 -08:00
Scott Lamb 462d2b9a01 merge rust branch into master
The Rust branch still isn't perfect - in particular, the ffmpeg
dependency is very picky - but it's the future of the project.
2017-01-01 18:38:45 -08:00
Scott Lamb eee887b9a6 schema version 1
The advantages of the new schema are:

* overlapping recordings can be unambiguously described and viewed.
  This is a significant problem right now; the clock on my cameras appears to
  run faster than the (NTP-synchronized) clock on my NVR. Thus, if an
  RTSP session drops and is quickly reconnected, there's likely to be
  overlap.

* less I/O is required to view mp4s when there are multiple cameras.
  This is a pretty dramatic difference in the number of database read
  syscalls with pragma page_size = 1024 (605 -> 39 in one test),
  although I'm not sure how much of that maps to actual I/O wait time.
  That's probably as dramatic as it is due to overflow page chaining.
  But even with larger page sizes, there's an improvement. It helps to
  stop interleaving the video_index fields from different cameras.

There are changes to the JSON API to take advantage of this, described
in design/api.md.

There's an upgrade procedure, described in guide/schema.md.
2016-12-20 22:08:18 -08:00
Scott Lamb fee4141dc6 replace resource.rs with new http-entity crate
This crate is a slightly-more-polished and MIT-licensed version of
resource.rs. So far it has one advantage: running the tests doesn't
require RUST_TEST_THREADS=1.
2016-12-20 18:29:45 -08:00
Scott Lamb 86dd36d7a5 version the sqlite3 database schema
See guide/schema.md for instructions on upgrading past this commit.
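
A sketch of what such a check can look like at startup, using the rusqlite
crate; the table and column names here are assumptions, not necessarily the
actual schema:

    use rusqlite::Connection;

    // Whatever schema version this binary expects; bump with each upgrade.
    const EXPECTED_SCHEMA_VERSION: i32 = 0;

    /// Hypothetical startup check: refuse to run against a database whose
    /// schema version doesn't match, pointing the user at the upgrade guide.
    fn check_schema_version(conn: &Connection) -> Result<(), String> {
        let ver: i32 = conn
            .query_row("select max(id) from version", [], |row| row.get(0))
            .map_err(|e| format!("unable to read schema version: {}", e))?;
        if ver != EXPECTED_SCHEMA_VERSION {
            return Err(format!(
                "schema version is {}, expected {}; see guide/schema.md",
                ver, EXPECTED_SCHEMA_VERSION));
        }
        Ok(())
    }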
2016-12-20 15:44:04 -08:00
Scott Lamb 8e499aa070 compile with stable Rust
The benchmarks now require "cargo bench --features=nightly". The
extra #[cfg(nightly)] switches in the code needed for it are a bit
annoying; I may move the benches to a separate directory to avoid this.
But for now, this works.
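
A sketch of the feature-gating pattern (the commit's exact arrangement may
differ; this shows benches behind a `nightly` cargo feature so stable builds
skip them entirely):

    // Crate root: only enable the unstable `test` crate when built with
    // `cargo bench --features=nightly`.
    #![cfg_attr(feature = "nightly", feature(test))]

    #[cfg(feature = "nightly")]
    extern crate test;

    #[cfg(feature = "nightly")]
    mod bench {
        use test::Bencher;

        #[bench]
        fn decode_index(b: &mut Bencher) {
            b.iter(|| {
                // ...benchmark body elided...
            });
        }
    }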
2016-12-09 22:04:35 -08:00
Scott Lamb 0a7535536d Rust rewrite
I should have submitted/pushed more incrementally but just played with it on
my computer as I was learning the language. The new Rust version more or less
matches the functionality of the current C++ version, although there are many
caveats listed below.

Upgrade notes: when moving from the C++ version, I recommend dropping and
recreating the "recording_cover" index in SQLite3 to pick up the addition of
the "video_sync_samples" column:

    $ sudo systemctl stop moonfire-nvr
    $ sudo -u moonfire-nvr sqlite3 /var/lib/moonfire-nvr/db/db
    sqlite> drop index recording_cover;
    sqlite> create index ...rest of command as in schema.sql...;
    sqlite> ^D

Some known visible differences from the C++ version:

* .mp4 generation queries SQLite3 differently. Before it would just get all
  video indexes in a single query. Now it leads with a query that should be
  satisfiable by the covering index (assuming the index has been recreated as
  noted above), then queries individual recording's indexes as needed to fill
  a LRU cache. I believe this is roughly similar speed for the initial hit
  (which generates the moov part of the file) and significantly faster when
  seeking. I would have done it a while ago with the C++ version but didn't
  want to track down a lru cache library. It was easier to find with Rust.

* On startup, the Rust version cleans up old reserved files. This is as in the
  design; the C++ version was just missing this code.

* The .html recording list output is a little different. It's in ascending
  order, so the segment shorter than an hour is the most recent one rather
  than the oldest. This is less ergonomic, but it was easy. I could fix it
  or just wait to obsolete it with some fancier JavaScript UI.

* commandline argument parsing and logging have changed formats due to
  different underlying libraries.

* The JSON output isn't quite right (matching the spec / C++ implementation)
  yet.
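
A minimal stand-in for that cache, keyed by recording id and holding raw
video_index bytes; this hand-rolled version only illustrates the access
pattern (the real code can use an existing LRU crate, and the types here
are assumptions):

    use std::collections::{HashMap, VecDeque};

    /// Tiny illustrative LRU cache: recording id -> video_index bytes.
    struct IndexCache {
        capacity: usize,
        map: HashMap<i64, Vec<u8>>,
        order: VecDeque<i64>, // front = least recently used
    }

    impl IndexCache {
        fn new(capacity: usize) -> Self {
            IndexCache { capacity, map: HashMap::new(), order: VecDeque::new() }
        }

        /// Returns the cached index, or loads it via `load` (e.g. a SQLite
        /// query for just that recording), evicting the least recently used
        /// entry if the cache is full.
        fn get_or_load<F>(&mut self, id: i64, load: F) -> &Vec<u8>
        where F: FnOnce(i64) -> Vec<u8> {
            if self.map.contains_key(&id) {
                self.order.retain(|&k| k != id);
            } else {
                if self.map.len() == self.capacity {
                    if let Some(evict) = self.order.pop_front() {
                        self.map.remove(&evict);
                    }
                }
                self.map.insert(id, load(id));
            }
            self.order.push_back(id);
            self.map.get(&id).expect("entry present")
        }
    }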

Additional caveats:

* I haven't done any proof-reading of prep.sh + install instructions.

* There's a lot of code quality work to do: adding (back) comments and test
  coverage, developing a good Rust style.

* The ffmpeg foreign function interface is particularly sketchy. I'd
  eventually like to switch to something based on autogenerated bindings.
  I'd also like to use pure Rust code where practical, but once I do on-NVR
  motion detection I'll need to use existing C/C++ libraries for speed (H.264
  decoding + OpenCL-based analysis).
2016-11-25 14:34:00 -08:00
Scott Lamb 0f75e4f94a fix a couple README formatting mistakes 2016-11-25 13:03:58 -08:00
Scott Lamb 98720911c7 update the README to reflect the SQLite schema 2016-11-25 13:00:27 -08:00
Scott Lamb 0aadf227c1 Benchmark & speed up SampleIndexIterator
I'm seeing what is possible performance-wise in the current C++ before
trying out Go and Rust implementations.

* use the google benchmark framework and some real data.

* use release builds - I hadn't done this in a while, and there were a
  few compile errors that manifested only in release mode. Update the
  readme to suggest using a release build.

* optimize the varint decoder and SampleIndexIterator to branch less (see
  the sketch after this list).

* enable link-time optimization for release builds.

* add some support for feedback-directed optimization. Ideally "make"
  would automatically produce the "generate" build outputs with a
  different object/library/executable suffix, run the generate
  benchmark, and then produce the "use" builds. This is not that fancy;
  you have to run an arcane command:

  alias cmake='cmake -DCMAKE_BUILD_TYPE=Release'
  cmake -DPROFILE_GENERATE=true -DPROFILE_USE=false .. && \
  make recording-bench && \
  src/recording-bench && \
  cmake -DPROFILE_GENERATE=false -DPROFILE_USE=true .. && \
  make recording-bench && \
  perf stat -e cycles,instructions,branches,branch-misses \
      src/recording-bench --benchmark_repetitions=5

  That said, the results are dramatic - at least 50% improvement. (The
  results weren't stable before as small tweaks to the code caused a
  huge shift in performance, presumably something something branch
  alignment something something.)
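
For readers unfamiliar with the encoding, the decoder below is a generic
LEB128-style varint sketch in Rust; the actual on-disk format and the
branch-reduction tricks in the C++ code may differ:

    /// Decode one variable-length integer: 7 bits per byte, high bit set on
    /// all but the final byte. Returns the value and bytes consumed, or None
    /// if the input is truncated or malformed. (Illustrative only.)
    fn decode_varint(data: &[u8]) -> Option<(u64, usize)> {
        let mut value = 0u64;
        let mut shift = 0u32;
        for (i, &b) in data.iter().enumerate() {
            if shift >= 64 {
                return None; // too many continuation bytes
            }
            value |= u64::from(b & 0x7f) << shift;
            if b & 0x80 == 0 {
                return Some((value, i + 1));
            }
            shift += 7;
        }
        None // ran out of bytes mid-integer
    }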
2016-05-19 22:53:23 -07:00
Scott Lamb 5dd0dca51f Add a simple JSON API.
This is a work in progress. There are no tests yet.
2016-04-23 13:55:36 -07:00
Dolf Starreveld 770fe1512a Updated README to explain and show use of "cameras.sql" 2016-02-09 00:02:07 -08:00
Dolf Starreveld e7456643cd Added prep.sh script for automated builds
* Changed README.md commensurately
* Add cameras.sql to .gitignore to not commit personal camera data
* Change CMakeLists.txt to explicitly refer to hand-built libevent dirs
2016-02-07 22:59:29 -08:00
Scott Lamb 3b0dc5368e Write using the shiny new schema
There's a lot of work left to do on this:

* important latency optimization: the recording threads block
  while fsync()ing sample files, which can take 250+ ms. This
  should be moved to a separate thread to happen asynchronously
  (see the sketch after this list).

* write cycle optimizations: several SQLite commits per camera per minute.

* test coverage: this drops testing of the file rotation, and
  there are several error paths worth testing.

* ffmpeg oddities to investigate:

  * the out-of-order first frame's pts
  * measurable delay before returning packets
  * it sometimes returns an initial packet it calls a "key" frame that actually
    has an SEI recovery point NAL but not an IDR-coded slice NAL, even though
    in the input these always seem to come together. This makes playback
    starting from this recording not work at all on Chrome. The symptom is
    that it loads a player-looking thing with the proper dimensions but
    playback never actually starts.

  I imagine these are all related but haven't taken the time to dig through
  ffmpeg code and understand them. The right thing anyway may be to ditch
  ffmpeg for RTSP streaming (perhaps in favor of the live555 library), as
  it seems to have other omissions like making it hard/impossible to take
  advantage of Sender Reports. In the meantime, I attempted to mitigate
  problems by decreasing ffmpeg's probesize.

* handling overlapping recordings: right now if there's too much time drift or
  a time jump, you can end up with recordings that the UI won't play without
  manual database changes. It's not obvious what the right thing to do is.

* easy camera setup: currently you have to manually insert rows in the SQLite
  database and restart.

but I think it's best to get something in to iterate from.
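
A minimal sketch of the fix described in the first item above: hand each
just-written file to a dedicated syncer thread over a channel, so recording
threads never block on fsync (names here are illustrative):

    use std::fs::File;
    use std::sync::mpsc;
    use std::thread;

    /// Spawns a thread that fsyncs whatever files it's sent. The sender side
    /// lives in the recording threads, which keep writing without waiting.
    fn spawn_syncer() -> mpsc::Sender<File> {
        let (tx, rx) = mpsc::channel::<File>();
        thread::spawn(move || {
            for f in rx {
                if let Err(e) = f.sync_all() {
                    eprintln!("fsync failed: {}", e);
                }
                // the handle is dropped (closed) here, after data is durable
            }
        });
        tx
    }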

This deletes a lot of code, including:

* the ffmpeg video sink code (instead now using a bit of extra code in Stream
  on top of the SampleFileWriter, SampleIndexEncoder, and MoonfireDatabase
  code that's been around for a while)

* FileManager (in favor of new code using the database)

* the old UI

* RealFile and friends

* the dependency on protocol buffers, which was used for the config file
  (though I'll likely have other reasons for using protocol buffers later)

* even some utilities like IsWord that were just for validating the config
2016-02-03 23:22:37 -08:00
Scott Lamb 4c7eed293f Construct HTTP responses incrementally.
This isn't as much of a speed-up as you might imagine; most of the large HTTP
content was mmap()ed files which are relatively efficient. The big improvement
here is that it's now possible to serve large files (4 GiB and up) on 32-bit
machines. This actually works: I was just able to browse a 25-hour, 37 GiB
.mp4 file on my Raspberry Pi 2 Model B. It takes about 400 ms to start serving
each request, which isn't exactly zippy but might be forgivable for such a
large file. I still intend for the common request from the web interface to be
for much smaller fragmented .mp4 files.

Speed could be improved later through caching. Right now my test code is
creating a fresh VirtualFile from a database query on each request, even
though it hasn't changed. The tricky part will be doing cache invalidation
cleanly if it does change---new recordings are added to the requested time
range, recordings are deleted, or existing recordings' timestamps are changed.

The downside to the approach here is that it requires libevent 2.1 for
evhttp_send_reply_chunk_with_cb. Unfortunately, Ubuntu 15.10 and Debian Jessie
still bundle libevent 2.0. There are a few possible improvements here:

1. fall back to assuming chunks are added immediately, so that people with
   libevent 2.0 get the old bad behavior and people with libevent 2.1 get the
   better behavior. This is kind of lame, though; it's easy to go through
   the whole address space pretty fast, particularly when the browsers send
   out requests so quickly that there may be some unintentional concurrency.

2. alter the FileSlice interface to return a pointer/destructor rather than
   add something to the evbuffer. HttpServe would then add each chunk via
   evbuffer_add_reference, and it'd supply a cleanupfn that (in addition to
   calling the FileSlice-supplied destructor) notes that this chunk has been
   fully sent. For all the currently-used FileSlices, this shouldn't be too
   hard, and there are a few other reasons it might be beneficial:

   * RealFileSlice could call madvise() to control the OS buffering
   * RealFileSlice could track when file descriptors are open and thus
     FileManager's unlink() calls don't actually free up space
   * It feels dirty to expose libevent stuff through the otherwise-nice
     FileSlice interface.

3. support building libevent 2.1 statically in-tree if the OS-supplied
   libevent is unsuitable.

I'm tempted to go with #2, but probably not right now. More urgent to commit
support for writing the new format and the wrapper bits for viewing it.
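
To make #2 concrete, the interface shape would be roughly the following
(written as Rust purely for illustration; the actual code is C++ against
libevent): each chunk comes with a completion callback the HTTP layer invokes
once the bytes have really been sent.

    /// Illustrative reshaping of FileSlice along the lines of option 2: the
    /// serving layer pulls chunks and reports completion, instead of the
    /// slice pushing data into an evbuffer itself.
    trait ChunkSource {
        /// Next chunk plus a callback to run once it has been fully written
        /// to the client, or None at end of stream.
        fn next_chunk(&mut self) -> Option<(Vec<u8>, Box<dyn FnOnce() + Send>)>;
    }

    /// Trivial in-memory implementation, just to show the contract.
    struct VecChunks(std::vec::IntoIter<Vec<u8>>);

    impl ChunkSource for VecChunks {
        fn next_chunk(&mut self) -> Option<(Vec<u8>, Box<dyn FnOnce() + Send>)> {
            self.0.next().map(|chunk| {
                let done: Box<dyn FnOnce() + Send> = Box::new(|| {
                    // a real impl might unmap a region, close a descriptor,
                    // or mark sample-file space as reclaimable here
                });
                (chunk, done)
            })
        }
    }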
2016-01-14 22:41:49 -08:00
Scott Lamb 29696688b5 Small Uuid class wrapping libuuid.
This will be used to generate the names of sample files,
as well as camera ids.
2016-01-12 09:46:21 -08:00
Scott Lamb 320c4afa94 Fix incorrect path, commandline in README.md.
From Dolf Starreveld <dolf@starreveld.com>.
2016-01-04 22:06:55 -08:00
Scott Lamb c9eda8ac15 Initial commit, with basic functionality. 2016-01-01 22:06:47 -08:00