Commit Graph

227 Commits

Scott Lamb
fbe1231af0 move open_id from recording_playback to recording
I want to be able to use it in etags without having to do a full scan of the
recording_playback table in advance, which would greatly increase time to
first byte. I'll probably even use it in URLs to ensure the segments they
point to are stable. I haven't actually done this yet - that will wait until
I implement serving unflushed recordings - but I want to get the schema set
up properly.
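As a sketch (the function name and exact format here are invented), an etag
that incorporates the open might look like:

```rust
/// Builds an etag for a recording. Including the open_id means the etag
/// changes if the recording id is ever reused, e.g. after a backup/restore.
/// (Hypothetical sketch; the actual format may differ.)
fn recording_etag(composite_id: i64, open_id: u32) -> String {
    format!("\"{:016x}:{:08x}\"", composite_id, open_id)
}
```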
2018-02-28 20:52:43 -08:00
Scott Lamb
fb4d88d3e2 make db::dir::Writer equally stubborn
Every recording it starts must be sent to the syncer with at least one
sample written. It will retry forever (unless the channel is down, in which
case it panics). This avoids the situation in which a stuck Writer prevents
something in the uncommitted VecDeque from ever being synced and thus blocks
any further recordings from being flushed.
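One piece of that contract, sketched with a plain std::sync::mpsc channel
and invented names (the real types differ):

```rust
use std::sync::mpsc::Sender;

enum SyncerCommand { AsyncSaveRecording(i64) }

struct Writer {
    id: i64,
    wrote_sample: bool,
    tx: Sender<SyncerCommand>,
}

impl Drop for Writer {
    fn drop(&mut self) {
        // Once a sample has been written, the recording must reach the
        // syncer no matter how the Writer is torn down; a dead channel
        // means the syncer thread is gone, so panic.
        if self.wrote_sample {
            self.tx.send(SyncerCommand::AsyncSaveRecording(self.id))
                .expect("syncer channel is down");
        }
    }
}
```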
2018-02-28 12:32:52 -08:00
Scott Lamb
b1d71c4e8d improve Syncer's robustness
The new approach is to retry forever rather than panic. The assumption is
that if a given operation is failing, a following operation is unlikely to
succeed, so it's simpler to just keep trying the earlier one than to come up
with ways to undo it and proceed with later operations.

I still need to apply this approach to the Writer class. It currently unwraps
(crashes) or just gives up on a recording without ever sending it to the
Syncer. Given that recordings are all synced in order, that means further ones
can never be synced.
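A minimal sketch of the retry-forever pattern (the interval and logging here
are assumptions):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Runs `f` until it succeeds. If a failing operation would only be
/// followed by more failures, retrying in place is simpler than unwinding.
fn retry_forever<T, E: std::fmt::Display>(f: &mut dyn FnMut() -> Result<T, E>) -> T {
    loop {
        match f() {
            Ok(t) => return t,
            Err(e) => {
                eprintln!("operation failed: {}; retrying in 1 sec", e);
                sleep(Duration::from_secs(1));
            }
        }
    }
}
```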
2018-02-28 11:07:55 -08:00
Scott Lamb
b790075ca2 fix flush_if_sec updates not hitting db 2018-02-23 14:49:10 -08:00
Scott Lamb
81e1ec97b3 more sanity checking for delete_oldest_recordings 2018-02-23 14:05:07 -08:00
Scott Lamb
8d9939603e fix repeated deletions within a flush
When list_oldest_recordings was called twice with no intervening flush, it
returned the same rows twice. This led to trying to delete the same recording
twice, and all following flushes failed with a "no such recording x/y"
message. Now, return each row only once, and track how many bytes have been
returned.

I think dir.rs's logic is still wrong for how many bytes to delete when
multiple recordings are flushed at once (it ignores the bytes added by the
first when computing the bytes to delete for the second), but this is
progress.
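A sketch of the shape of the fix (names and types simplified; rows are
(composite id, sample file bytes) pairs):

```rust
struct DeletionState {
    /// Composite id just past the last row already returned.
    next_id: i64,
    /// Bytes in rows already returned but not yet committed as deleted.
    bytes_returned: u64,
}

/// Returns rows to delete, never repeating one across calls, and stopping
/// once enough bytes have been returned (counting earlier, unflushed calls).
fn list_oldest(state: &mut DeletionState, rows: &[(i64, u64)],
               bytes_needed: u64) -> Vec<(i64, u64)> {
    let mut out = Vec::new();
    for &(id, len) in rows {
        if id < state.next_id { continue; }  // returned by an earlier call
        if state.bytes_returned >= bytes_needed { break; }
        state.bytes_returned += len;
        state.next_id = id + 1;
        out.push((id, len));
    }
    out
}
```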
2018-02-23 13:49:57 -08:00
Scott Lamb
843e1b49c8 take FnMut closures by reference
I mistakenly thought these had to be monomorphized. (The FnOnce still does,
until rust-lang/rfcs#1909 is implemented.) It turns out taking them by
reference works fine. It should result in less compile time / code size,
though I didn't verify this.
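The distinction, sketched with today's dyn syntax:

```rust
// One copy of this function is compiled per distinct closure type F:
fn for_each_generic<F: FnMut(i32)>(mut f: F) {
    for i in 0..3 { f(i); }
}

// Exactly one copy is compiled; calls go through a vtable:
fn for_each_dyn(f: &mut dyn FnMut(i32)) {
    for i in 0..3 { f(i); }
}

fn main() {
    let mut sum = 0;
    for_each_generic(|i| sum += i);
    for_each_dyn(&mut |i| sum += i);
    assert_eq!(sum, 6);
}
```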
2018-02-23 09:19:42 -08:00
Scott Lamb
d9841fd634 fix compilation error in benchmark
This needs a separate run of "cargo +nightly bench --features=nightly", so I
missed it in a couple of previous commits. I should probably set up
travis-ci...
2018-02-22 21:57:43 -08:00
Scott Lamb
bf45ae6011 extend recording_playback with an open_id
As noted in schema.sql, this can be used for disambiguation. It also may be
useful in diagnosing data integrity problems.

Also sneak in a couple of minor improvements: better diagnostics in a couple
of places and a fix to the 1->2 upgrade procedure.
2018-02-22 21:46:41 -08:00
Scott Lamb
b037c9bdd7 knob to reduce db commits (SSD write cycles)
This improves the practicality of having many streams (including the
doubling of streams from having main + sub streams for each camera). With
flush_if_sec tuned properly, extra streams don't cause any extra write cycles
in normal or error cases. Consider the worst case, in which each RTSP session
immediately sends a single frame and then fails: Moonfire retries every
second, so this would formerly cause one commit per second per stream.
(flush_if_sec=0 preserves this behavior.) Now the commits can be arbitrarily
infrequent by setting higher values of flush_if_sec.

WARNING: this isn't production-ready! I hacked up dir.rs to make tests pass
and "moonfire-nvr run" work in the best-case scenario, but it doesn't handle
errors gracefully. I've been debating what to do when writing a recording
fails. I considered "abandoning" the recording, then either reusing or
skipping its id (in the latter case, marking the file as garbage if it can't
be unlinked immediately). I now think there's no point in abandoning a
recording: if I can't write to that file, there's no reason to believe
another will work better. It's better to retry that recording forever, and
perhaps put the whole directory into an error state that stops recording
until those writes go through. I'm planning to redesign dir.rs to make this
happen.
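A sketch of the knob's effect on the flush decision (field names invented
here):

```rust
use std::time::{Duration, Instant};

struct FlushPolicy {
    flush_if_sec: u64,
    /// When the oldest uncommitted recording completed, if any.
    oldest_uncommitted: Option<Instant>,
}

impl FlushPolicy {
    /// Commits only once the oldest uncommitted recording has waited at
    /// least flush_if_sec; with flush_if_sec=0, this is the old
    /// commit-per-recording behavior.
    fn should_flush(&self, now: Instant) -> bool {
        match self.oldest_uncommitted {
            Some(t) => now.duration_since(t) >= Duration::from_secs(self.flush_if_sec),
            None => false,
        }
    }
}
```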
2018-02-22 16:35:34 -08:00
Scott Lamb
31adbc1e9f initial split of database to a separate crate
Putting quite a bit of the code into a separate crate should reduce compile
time / memory usage. I also intend to limit the visibility of some things to
within the db crate, but that's for a future change. This is the smallest
move that will compile.
2018-02-20 23:15:39 -08:00
Scott Lamb
d84e754b2a replace homegrown Error with failure crate
This reduces boilerplate, making it a bit easier for me to split the db stuff
out into its own crate.
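A sketch of the win, assuming failure 0.1's Error and bail!:

```rust
use failure::{bail, Error};

const EXPECTED_VERSION: i32 = 2; // illustrative value

fn check_schema_version(ver: i32) -> Result<(), Error> {
    // Any error implementing failure::Fail also converts via `?`, so a
    // homegrown enum wrapping rusqlite/io/etc. errors is unnecessary.
    if ver != EXPECTED_VERSION {
        bail!("unsupported schema version {}", ver);
    }
    Ok(())
}
```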
2018-02-20 22:46:14 -08:00
Scott Lamb
253f3de399 reorganize the sample file directory
The filenames now represent composite ids (stream id + recording id) rather
than a separate uuid system with its own reservation logic, for a few
benefits:

  * This provides more information when there are inconsistencies.

  * This avoids the need for managing the reservations during recording. I
    expect this to simplify delaying flushing of newly written sample files.
    Now the directory has to be scanned at startup for files that never got
    written to the database, but that's acceptably fast even with millions of
    files.

  * Less information to keep in memory and in the recording_playback table.

I'd considered using one directory per stream, which might help if the
filesystem has trouble coping with huge directories. But that would mean each
dir has to be fsync()ed separately (more latency and/or more multithreading).
So I'll stick with this until I see concrete evidence of a problem it would
solve.

Test coverage of the error conditions is poor. I plan to do some restructuring
of the db/dir code, hopefully making steps toward testability along the way.
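A sketch of the composite id and filename scheme (the exact bit layout is an
assumption):

```rust
#[derive(Copy, Clone, PartialEq, Eq)]
struct CompositeId(i64);

impl CompositeId {
    /// Stream id in the high 32 bits, recording id in the low 32.
    fn new(stream_id: i32, recording_id: i32) -> Self {
        CompositeId(((stream_id as i64) << 32) | (recording_id as i64))
    }

    /// The sample file's name within the directory: 16 hex digits, so a
    /// stray file immediately identifies its stream and recording.
    fn filename(self) -> String {
        format!("{:016x}", self.0)
    }
}

fn main() {
    assert_eq!(CompositeId::new(1, 42).filename(), "000000010000002a");
}
```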
2018-02-20 10:11:10 -08:00
Scott Lamb
e7f5733f29 new database/sample file dir interlock scheme
The idea is to avoid the problems described in src/schema.proto; those
possibilities have bothered me for a while. A bonus is that (in a future
commit) it can replace the sample file uuid scheme with
<camera_uuid>-<stream_type>/<recording_id> names, for several advantages:

  * on data integrity problems (specifically, extra sample files), more
    information to use to understand what happened.
  * no more reserving sample files prior to using them. This avoids some extra
    database transactions on startup (now there are two extra in total rather
    than one extra per stream). It also simplifies an upcoming change I want
    to make in which some streams are not flushed immediately, reducing the
    write load significantly (maybe one commit per minute in total rather
    than one per stream per minute).
  * get rid of eight bytes per playback cache entry in RAM (and nine bytes
    per recording_playback row on flash).

The implementation is still pretty rough in places:

  * Lack of tests.
  * Poor code organization. In particular, SampleFileDirectory::write_meta
    shouldn't be exposed beyond db. I'm thinking about moving db.rs and
    SampleFileDirectory to a new crate, moonfire_nvr_db. This would improve
    compile times as well.
  * No tooling for renaming a sample file directory.
  * Config subcommand still panics in conditions that can be reasonably
    expected to happen.
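Back to the scheme itself: a sketch of the interlock idea (field names are
assumptions; the real definition lives in schema.proto):

```rust
/// Stored both in the database and in a meta file in each sample file dir.
/// On startup, each side checks the other's uuid and open sequence, so a
/// dir attached to the wrong db (or one that missed writes) is detected
/// rather than silently trusted.
struct DirMeta {
    db_uuid: [u8; 16],
    dir_uuid: [u8; 16],
    /// Identifies the last read-write open of the database that fully
    /// synced this dir; None if the dir has never been fully synced.
    last_complete_open: Option<u32>,
}
```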
2018-02-14 23:35:52 -08:00
Scott Lamb
89b6bccaa3 support multiple sample file directories
This is still pretty basic support. There's no config UI support for
renaming/moving the sample file directories after they are created, and no
error checking that the files are still in the expected place. I can imagine
sysadmins getting into trouble trying to change things. I hope to address at
least some of that in a follow-up change to introduce a versioning/locking
scheme that ensures databases and sample file dirs match in some way.

A bonus change that kinda got pulled along for the ride: a dialog pops up in
the config UI while a stream is being tested. The experience was pretty bad
before; there was no indication the button worked at all until it was done,
sometimes many seconds later.
2018-02-11 23:04:02 -08:00
Scott Lamb
6f309e432f store rfc6381_codec in the database
This avoids having codec-specific logic to synthesize it in db.rs. It's not
too much of a problem now with only H.264 support, but it'd be a pain when
supporting H.265 and other codecs.
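For H.264, synthesizing it means formatting three SPS bytes as RFC 6381's
avc1.PPCCLL; a sketch:

```rust
/// profile_idc, the constraint-flags byte, and level_idc, each as two hex
/// digits. E.g. High profile level 4.0 => "avc1.640028".
fn rfc6381_codec_h264(profile_idc: u8, constraint_flags: u8, level_idc: u8) -> String {
    format!("avc1.{:02x}{:02x}{:02x}", profile_idc, constraint_flags, level_idc)
}
```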
2018-02-05 11:57:59 -08:00
Scott Lamb
cc6579b211 upgrader for v1->v2 2018-02-03 22:19:02 -08:00
Scott Lamb
57c44b5e35 schema 2: add a "record" bool to streams 2018-02-03 22:19:02 -08:00
Scott Lamb
dc402bdc01 schema version 2: support sub streams
This allows each camera to have a main and a sub stream. Previously there was
a field in the schema for the sub stream's url, but it didn't do anything. Now
you can configure individual retention for main and sub streams. They show up
grouped in the UI.

No support for upgrading from schema version 1 yet.
2018-02-03 22:15:54 -08:00
Scott Lamb
0d69f4f49b use add_camera in tests, not direct db inserts
This is a wash in terms of lines of code now, but it makes it a bit easier to
maintain as I make changes to the schema (such as separating out streams from
cameras), and it helps ensure the tests reflect reality.
2018-02-03 21:56:04 -08:00
Scott Lamb
c43fb80639 warn if a streamer op takes too long
My odroid setup has been occasionally (about once a week) losing about 15
seconds of recordings on all cameras. I'm not sure why. So I'm labelling all
the likely suspect spots and logging if any of them takes longer than a
second. I think this will give me more information, hopefully narrowing it
down to network or local disk I/O.
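A sketch of the labelling (helper name invented here):

```rust
use std::time::{Duration, Instant};

/// Runs a suspect operation, logging if it takes over a second.
fn time_op<T>(label: &str, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    let elapsed = start.elapsed();
    if elapsed > Duration::from_secs(1) {
        eprintln!("{} took {:?}!", label, elapsed);
    }
    out
}

// e.g.: let frame = time_op("getting next frame", || stream.next());
```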
2018-01-31 14:20:30 -08:00
Scott Lamb
6902be1981 upgrade deps 2018-01-30 22:05:39 -08:00
Scott Lamb
d192d98129 more fixes to the upgrade guide 2018-01-30 21:14:53 -08:00
Scott Lamb
9a8ed69047 fix more formatting in upgrade guide 2018-01-30 17:01:44 -08:00
Scott Lamb
7566b6a38b fix several formatting problems in upgrade guide 2018-01-30 17:01:03 -08:00
Scott Lamb
529aad9982 upgrade deps, including http-serve bugfix
http-serve was including the wrong Content-Encoding: header on gzipped json
responses, so Chrome wouldn't understand them at all.
2018-01-30 16:51:32 -08:00
Scott Lamb
2c62d977b0 gzip json responses, handle HEAD properly 2018-01-23 11:24:40 -08:00
Scott Lamb
8caa2e5d0e crate rename: http-(entity|file) -> http-serve 2018-01-23 11:08:21 -08:00
Scott Lamb
10550bc35f simplify ffmpeg wrapper crate
This was using raw pointers plus PhantomData to enforce lifetimes. It's
simpler to use a reference, which also enforces non-null.
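The before/after shape, with stand-in types:

```rust
use std::marker::PhantomData;

struct AvPacket;      // stand-ins for the ffmpeg-sys types
struct InputContext;

// Before: a raw pointer, with PhantomData tying its lifetime to the input
// context. Nullable, and the lifetime bookkeeping is manual.
struct PacketOld<'a> {
    ptr: *mut AvPacket,
    _owner: PhantomData<&'a InputContext>,
}

// After: a reference carries the same lifetime and is non-null by
// construction.
struct PacketNew<'a> {
    pkt: &'a mut AvPacket,
}
```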
2017-11-30 14:40:31 -08:00
Scott Lamb
5c8970fe8a update dependencies 2017-11-16 23:01:09 -08:00
Scott Lamb
16ed7f73ba remove redundant panic 2017-11-16 22:54:44 -08:00
Scott Lamb
b9ebb74a58 explicitly test ffmpeg library compatibility
Makes the problem in #11 more obvious.
2017-10-24 07:26:18 -07:00
Scott Lamb
99f10dfba6 fix instructions for building UI
"yarn build" doesn't actually fetch packages. When starting from a clean
client, you have to run "yarn" (with no arguments) first.
2017-10-23 21:10:58 -07:00
Scott Lamb
2c026e1b6b minor cleanups to ffmpeg build setup
* the "lib: {}" print didn't do anything. It turns out that the pkg-config
  crate emits the necessary metadata for linking automatically. I had the
  wrong format and didn't notice because something else did it correctly.

* gcc::Config is deprecated; the new name is Build.

* and the crate is now called cc, version 1.0.

Stuff found while looking at #11. Still haven't figured that issue out.
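A build.rs sketch with the renamed crate (the .c filename is illustrative):

```rust
fn main() {
    // gcc::Config::new() is now cc::Build::new() in cc 1.0.
    cc::Build::new()
        .file("src/ffmpeg_shim.c")
        .compile("ffmpeg_shim");
    // pkg-config's emitted metadata covers the ffmpeg link flags; no manual
    // "cargo:rustc-link-lib" printing is needed for that.
}
```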
2017-10-23 21:07:07 -07:00
Scott Lamb
8de7e391f8 populate timeZoneName as expected by UI
This works by a nasty hack, but it seems to work well enough for now.
Fingers crossed.
2017-10-21 23:57:13 -07:00
Scott Lamb
2966cf59b0 couple more js fixes
* make "yarn build" cmd work on first run.
  (it was installing a hardlinked file where the dir should go, yuck)

* remove an obsolete ui/index.html; it's ui-src/index.html now
2017-10-21 23:01:35 -07:00
Scott Lamb
084c44f110 update to http-file with Linux/arm fix 2017-10-21 22:44:04 -07:00
Scott Lamb
92962b9ee6 fix broken Cargo.toml
I'd temporarily pointed this to a local path for development and didn't notice
it was still in place when committing. Back to the git path that works for
everyone.
2017-10-21 22:35:24 -07:00
Scott Lamb
6b369a4cf0 3rd try at screenshot in README 2017-10-21 22:03:12 -07:00
Scott Lamb
43874c6a43 second try at screenshot in README 2017-10-21 21:57:04 -07:00
Scott Lamb
315f3594c2 add a basic Javascript UI
The Javascript is pretty amateurish, I'm sure, but at least it's something
to iterate from. It's already much more pleasant for browsing through videos
in several ways:

* more responsive to load only a day at a time rather than 90+ days
* much easier to see the same time segment on several cameras
* more pleasant to have the videos load as a popup rather than a link
  that blows away your position in an enormous list
* exposes the fancier .mp4 generation options: splitting at lengths
  other than the default, trimming to an arbitrary start and end time,
  including a subtitle track with timestamps.

There's a slight regression in functionality: I didn't match the former
top-level page, which showed how much of its disk allocation each camera used
and the total duration of video. This is exposed in the JSON API, so it
shouldn't be too hard to add back.
2017-10-21 21:54:27 -07:00
Scott Lamb
6eda26a9cc support run splitting in json api 2017-10-17 09:00:05 -07:00
Scott Lamb
9041eeb907 fix panic when requesting zero segment duration
The recording::Segment was constructing a segment with no frames in it, which
was causing a panic when appending a zero-length stts to the Slices. Fix this
in a couple ways:

* Slices::append should return Err rather than panic. No reason to crash the
  whole program when we have trouble serving a single .mp4 request.
* recording::Segment shouldn't produce zero-frame segments
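A sketch of the first fix (types simplified):

```rust
struct Slices {
    len: u64,
    lens: Vec<u64>,
}

impl Slices {
    /// Refuses empty slices with an error instead of panicking, so one bad
    /// .mp4 request can't take down the whole server.
    fn append(&mut self, len: u64) -> Result<(), String> {
        if len == 0 {
            return Err("can't append zero-length slice".to_owned());
        }
        self.len += len;
        self.lens.push(len);
        Ok(())
    }
}
```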
2017-10-17 08:55:21 -07:00
Scott Lamb
1d08698d0c debug, fix panic with zero-duration recording
I had an assert that fired in this case, dating back to when I hadn't
plumbed Result returns through much of .mp4 construction. Now I have, so
there's no excuse for having an assert here. Change it to an error return,
and tweak it not to fire in the zero-duration case.

Also fix a problem in the test harness; I hadn't finished converting it for
multi-recording tests, and it was returning the wrong recording.

Because of that, I seem to have stumbled across a related problem: asking
for zero duration of a non-zero-duration recording returns a
recording::Segment with no frames, which causes panics because its
corresponding .mp4 slices are zero-length. I just adjusted the panic message
here; I'll follow up with changes to address that.
2017-10-17 06:14:47 -07:00
Scott Lamb
1b1ae0bf0a fix link to users list, mention github tracker 2017-10-16 22:20:29 -07:00
Scott Lamb
2bdb2eca5d fix a couple time problems
* CameraDayKey::bounds (used to generate the start and end times of days in
  the returned JSON) returned UTC, not matching what recordings were mapped
  into that day. So fetching a day with its given bounds would return
  something different. Test and fix it.

* Several time-related tests weren't calling testutil::init(), so they weren't
  fixing the time zone to the expected America/Los_Angeles. If the machine
  time is set to something else, they would break.
2017-10-11 20:08:26 -07:00
Scott Lamb
bbe04f909c fix ClockAdjuster logic
Small negative deltas caused every_minus_1 to be negative, which caused
underflow errors in debug builds. Fix this and test more comprehensively.
2017-10-09 22:13:38 -07:00
Scott Lamb
1e4d7d5ad9 make json api more idiomatic
* camelCase
* lose the "days":null in the overall cameras dict
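A sketch of both changes via serde attributes (struct is illustrative):

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct TopLevelCamera {
    short_name: String,  // serialized as "shortName"

    // Omitted entirely when None, rather than "days": null.
    #[serde(skip_serializing_if = "Option::is_none")]
    days: Option<u32>,
}
```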
2017-10-09 21:58:44 -07:00
Scott Lamb
5bb3dde74e work around #10 with advanced_editlist=false
I think this is an ffmpeg bug, which I plan to report. In the meantime, this
makes the tests pass. Long-term, even if ffmpeg fixes this, I probably don't
want to continue doing acceptance tests against whatever version of ffmpeg
happens to be installed - my real targets of interest are the latest versions
of Chrome, Firefox, Safari, QuickTime, and VLC.
2017-10-09 21:44:48 -07:00
Scott Lamb
711f7b3409 fix with-editlist hash
I missed this because I was running with ffmpeg 3 and had grown to expect this
test to fail. Quick fix on that coming shortly.
2017-10-09 21:00:45 -07:00