mass markdown reformatting

Add tables of contents (using the VS Code Markdown All-In-One extension)
and reformat lists to consistently use 4-space indents. No content
changes.
Scott Lamb 2021-04-01 12:10:43 -07:00
parent 74b13a0fbf
commit 4d4d78ba64
10 changed files with 539 additions and 470 deletions


@@ -21,7 +21,7 @@
        // I find Prettier's markdown style jarring, including converting `*`
        // bullets to `-` and two-column indents. It's not customizable either.
        // Don't use it.
-        "editor.defaultFormatter": null
+        "editor.defaultFormatter": "yzhang.markdown-all-in-one"
    },
    // Rust-specific overrides.
@@ -41,5 +41,7 @@
        "editor.defaultFormatter": "matklad.rust-analyzer"
        //"editor.defaultFormatter": null
    },
-    "rust-analyzer.inlayHints.enable": false
+    "rust-analyzer.inlayHints.enable": false,
+    "markdown.extension.list.indentationSize": "inherit",
+    "markdown.extension.toc.unorderedList.marker": "*"
}


@@ -1,7 +1,27 @@
-# Moonfire NVR API
+# Moonfire NVR API <!-- omit in toc -->

Status: **current**.

+* [Objective](#objective)
+* [Detailed design](#detailed-design)
+    * [`POST /api/login`](#post-apilogin)
+    * [`POST /api/logout`](#post-apilogout)
+    * [`GET /api/`](#get-api)
+    * [`GET /api/cameras/<uuid>/`](#get-apicamerasuuid)
+    * [`GET /api/cameras/<uuid>/<stream>/recordings`](#get-apicamerasuuidstreamrecordings)
+    * [`GET /api/cameras/<uuid>/<stream>/view.mp4`](#get-apicamerasuuidstreamviewmp4)
+    * [`GET /api/cameras/<uuid>/<stream>/view.mp4.txt`](#get-apicamerasuuidstreamviewmp4txt)
+    * [`GET /api/cameras/<uuid>/<stream>/view.m4s`](#get-apicamerasuuidstreamviewm4s)
+    * [`GET /api/cameras/<uuid>/<stream>/view.m4s.txt`](#get-apicamerasuuidstreamviewm4stxt)
+    * [`GET /api/cameras/<uuid>/<stream>/live.m4s`](#get-apicamerasuuidstreamlivem4s)
+    * [`GET /api/init/<id>.mp4`](#get-apiinitidmp4)
+    * [`GET /api/init/<id>.mp4.txt`](#get-apiinitidmp4txt)
+    * [`GET /api/signals`](#get-apisignals)
+    * [`POST /api/signals`](#post-apisignals)
+        * [Request 1](#request-1)
+        * [Request 2](#request-2)
+        * [Request 3](#request-3)

## Objective

Allow a JavaScript-based web interface to list cameras and view recordings.

@@ -704,7 +724,7 @@ Response:
}
```

-### Request 3
+#### Request 3

5 seconds later, the client observes motion has ended. It leaves the prior
data alone and predicts no more motion.


@@ -1,4 +1,4 @@
-# Moonfire NVR Storage Schema
+# Moonfire NVR Storage Schema <!-- omit in toc -->

Status: **current**.
@@ -6,42 +6,56 @@ This is the initial design for the most fundamental parts of the Moonfire NVR
storage schema. See also [guide/schema.md](../guide/schema.md) for more
administrator-focused documentation.

+* [Objective](#objective)
+    * [Cameras](#cameras)
+    * [Hard drives](#hard-drives)
+* [Overview](#overview)
+* [Detailed design](#detailed-design)
+    * [SQLite3](#sqlite3)
+    * [Duration of recordings](#duration-of-recordings)
+    * [Lifecycle of a sample file directory](#lifecycle-of-a-sample-file-directory)
+    * [Lifecycle of a recording](#lifecycle-of-a-recording)
+    * [Verifying invariants](#verifying-invariants)
+    * [Recording table](#recording-table)
+    * [`video_index`](#video_index)
+    * [<a href="on-demand"></a>On-demand `.mp4` construction](#on-demand-mp4-construction)

## Objective

Goals:

* record streams from modern ONVIF/PSIA IP security cameras
* support several cameras
* maintain full fidelity of incoming compressed video streams
* record continuously
* support on-demand serving in different file formats / protocols
    (such as standard .mp4 files for arbitrary timespans, fragmented .mp4 files
    for MPEG-DASH or HTML5 Video Source Extensions, MPEG-TS files for HTTP Live
    Streaming, and "trick play" RTSP)
* annotate camera timelines with metadata
    (such as motion detection, security alarm events, etc)
* retain video segments with ~1-minute granularity based on metadata
    (e.g., extend retention of motion events)
* take advantage of compact, inexpensive, low-power, commonly-available
    hardware such as the $35 [Raspberry Pi 2 Model B][pi2]
* support high- and low-bandwidth playback
* support near-live playback (~second old), including "trick play"
* allow verifying database consistency with an `fsck` tool

Non-goals:

* record streams from older cameras: JPEG/MJPEG USB "webcams" and analog
    security cameras/capture cards
* allow users to directly access or manipulate the stored data with standard
    video or filesystem tools
* support H.264 features not used by common IP camera encoders, such as
    B-frames and Periodic Infra Refresh.
* support recovering the last ~minute of video after a crash or power loss

Possible future goals:

* record audio and/or other types of timestamped samples (such as
    [Xandem][xandem] tomography data).

### Cameras
@@ -51,12 +65,12 @@ streams. They have many customizable settings, such as resolution, frame rate,
compression quality, maximum bitrate, I-frame interval. A typical setup might be
as follows:

* the high-quality "main" stream as 1080p/30fps, 3000 kbps.
    This stream is well-suited to local viewing or forensics.
* the low-bandwidth "sub" stream as 704x480/10fps, 100 kbps.
    This stream may be preferred for mobile/remote viewing, when viewing several
    streams side-by-side, and for real-time computer vision (such as salient
    motion detection).

The dual pre-encoded H.264 video streams provide a tremendous advantage over
older camera models (which provided raw video or JPEG-encoded frames) because
@@ -73,13 +87,17 @@ different quality settings as well.

Decode:

```
$ time ffmpeg -y -threads 1 -i input.mp4 \
      -f null /dev/null
```

Combo (Decode + encode with libx264):

```
$ time ffmpeg -y -threads 1 -i input.mp4 \
      -c:v libx264 -preset ultrafast -threads 1 -f mp4 /dev/null
```

| Processor | 1080p30 decode | 1080p30 combo | 704x480p10 decode | 704x480p10 combo |
@@ -115,8 +133,10 @@ only capable of 50 random accesses per second, and each one takes time that
otherwise could be used to transfer 2+ MB. The constrained resource, *disk
time fraction*, can be bounded as follows:

```
disk time fraction <= (seek rate) / (50 seeks/sec) +
                      (bandwidth) / (100 MB/sec)
```
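As a sanity check, the bound can be evaluated numerically. A minimal sketch in Python (the example workload figures are illustrative, not from this document):

```python
def disk_time_fraction(seeks_per_sec: float, mb_per_sec: float) -> float:
    """Upper bound on disk utilization, per the inequality above:
    (seek rate) / (50 seeks/sec) + (bandwidth) / (100 MB/sec)."""
    return seeks_per_sec / 50.0 + mb_per_sec / 100.0

# Example: a workload issuing 1 seek/sec while transferring 8 MB/sec
# stays within a 10% disk-time budget.
print(disk_time_fraction(1, 8))  # 0.1
```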
## Overview

@@ -127,19 +147,20 @@ together.

Each recording is stored in two places:

* a sample file directory, intended to be stored on spinning disk.
    Each file in this directory is simply a concatenation of the compressed,
    timestamped video samples (also called "packets" or encoded frames), as
    received from the camera. In MPEG-4 terminology (see [ISO
    14496-12][iso-14496-12]), this is the contents of a `mdat` box for a
    `.mp4` file representing the segment. These files do not contain framing
    data (start and end byte offsets of samples) and thus are not meant to be
    decoded on their own.
* the `recording` table in a [SQLite3][sqlite3] database, intended to be
    stored on flash if possible. A row in this table contains all the
    metadata associated with the segment, including the sample-by-sample
    contents of the MPEG-4 `stbl` box. At 30 fps, a row is expected to
    require roughly 4 KB of storage (2 bytes per sample, plus some fixed
    overhead).

Putting the metadata on flash means metadata operations can be fast
(sub-millisecond random access, with parallelism) and do not take precious
@@ -167,17 +188,18 @@ All metadata, including the `recording` table and others, will be stored in
the SQLite3 database using [write-ahead logging][sqlite3-wal]. There are
several reasons for this decision:

* No user administration required. SQLite3, unlike its heavier-weight friends
    MySQL and PostgreSQL, can be completely internal to the application. In
    many applications, end users are unaware of the existence of an RDBMS, and
    Moonfire NVR should be no exception.
* Correctness. It's relatively easy to make guarantees about the state of an
    ACID database, and SQLite3 in particular has a robust implementation.
    (See [Files Are Hard][file-consistency].)
* Developer ease and familiarity. SQL-based RDBMSs are quite common and
    provide a lot of high-level constructs that ease development. SQLite3 in
    particular is ubiquitous. Contributors are likely to come with some
    understanding of the database, and there are many resources to learn
    more.

Total database size is expected to be roughly 4 KB per minute at 30 fps, or
1 GB for six camera-months of video. This will easily fit on a modest flash
@@ -189,40 +211,42 @@ to be a performance bottleneck.

There are many constraints that influenced the choice of 1 minute as the
duration of recordings.

* Per-recording metadata size. There is a fixed component to the size of each
    row, including the starting/ending timestamps, sample file UUID, etc.
    This should not cause the database to be too large to fit on low-cost
    flash devices. As described in the previous section, with 1 minute
    recordings the size is quite modest.
* Disk seeks. Sample files should be large enough that even during
    simultaneous recording and playback of several streams, the disk seeks
    incurred when switching from one file to another should not be
    significant. At the extreme, a sample file per frame could cause an
    unacceptable 240 seeks per second just to record 8 30 fps streams. At one
    minute recording time, 16 recording streams (2 per each of 8 cameras) and
    4 playback streams would cause on average 20 seeks per minute, or under
    1% disk time.
* Internal fragmentation. Common Linux filesystems have a block size of 4 KiB
    (see `statvfs.f_frsize`). Up to this much space per file will be wasted
    at the end of each file. At the bitrates described in "Background", this
    is an insignificant .02% waste for main streams and .5% waste for sub
    streams.
* Number of "slices" in .mp4 files. As described [below](#on-demand),
    `.mp4` files will be constructed on-demand for export. It should be
    possible to export an hours-long segment without too much overhead. In
    particular, it must be possible to iterate through all the recordings,
    assemble the list of slices, and calculate offsets and total size. One
    minute seems acceptable; though we will watch this as work proceeds.
* Crashes. On program crash or power loss, ideally it's acceptable to simply
    discard any recordings in progress rather than add a checkpointing scheme.
* Granularity of retention. It should be possible to extend retention time
    around motion events without forcing retention of too much additional
    data or copying bytes around on disk.
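The seek arithmetic in the "Disk seeks" bullet can be reproduced directly (a small sketch using the figures above):

```python
# Extreme case: one sample file per frame, recording 8 streams at 30 fps.
seeks_per_sec = 8 * 30
print(seeks_per_sec)  # 240, the "unacceptable" figure above

# With one-minute recordings: 16 recording streams (2 per each of 8
# cameras) plus 4 playback streams each touch a new file about once per minute.
seeks_per_minute = 16 + 4
print(seeks_per_minute)  # 20

# Against the 50 seeks/sec budget from the hard-drive discussion, that is
# (20/60) / 50 of disk time: well under 1%.
print(f"{(seeks_per_minute / 60) / 50:.2%}")  # 0.67%
```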
The design avoids the need for the following constraints:

* Dealing with events crossing segment boundaries. This is meant to be
    invisible.
* Serving close to live. It's possible to serve a recording as it is being
    written.
### Lifecycle of a sample file directory

@@ -230,19 +254,20 @@ One major disadvantage to splitting the state in two (the SQLite3 database in
flash and the sample file directories on spinning disk) is the possibility of
inconsistency. There are many ways this could arise:

* a sample file directory's disk is unexpectedly not mounted due to hardware
    failure or misconfiguration.
* the administrator mixing up the mount points of two filesystems holding
    different sample file directories.
* the administrator renaming a sample file directory without updating the
    database.
* the administrator restoring the database from backup but not the sample file
    directory, or vice versa.
* the administrator providing two sample file directory paths pointed at the
    same inode via symlinks or non-canonical paths. (Note that flock(2) has a
    design flaw in which multiple file descriptors can share a lock, so the
    current locking scheme is not sufficient to detect this otherwise.)
* database and sample file directories forked from the same version, opened
    the same number of times, then crossed.
To combat this, each sample file directory has some metadata in its database
row and in a stored file called `meta`. These track uuids associated with the
database
@@ -323,74 +348,74 @@ This is a sub-procedure used in several places below.

Precondition: the directory's lock is held with `LOCK_EX` (exclusive) and
there is an existing metadata file.

1. Open the metadata file.
2. Rewrite the fixed-length data atomically.
3. `fdatasync` the file.

*Open the database as read-only*

1. Lock the database directory with `LOCK_SH` (shared).
2. Open the SQLite database with `SQLITE_OPEN_READ_ONLY`.

*Open the database as read-write*

1. Lock the database directory with `LOCK_EX` (exclusive).
2. Open the SQLite database with `SQLITE_OPEN_READ_WRITE`.
3. Insert a new `open` table row with the new sequence number and uuid.

*Create a sample file directory*

Precondition: database open read-write.

1. Lock the sample file directory with `LOCK_EX` (exclusive).
2. Verify there is no metadata file or `last_complete_open` is unset.
3. Write new metadata file with a fresh `dir_uuid` and a `in_progress_open`
    matching the database's current open.
4. Add a matching row to the database with `last_complete_open_id` matching
    the current open.
5. Update the metadata file to move `in_progress_open` to
    `last_complete_open`.

*Open a sample file directory read-only*

Precondition: database open (read-only or read-write).

1. Lock the sample file directory with `LOCK_SH` (shared).
2. Verify the metadata file matches the database:
    * database uuid matches.
    * dir uuid matches.
    * if the database's `last_complete_open` is set, it must match the
      directory's `last_complete_open` or `in_progress_open`.
    * if the database's `last_complete_open` is absent, the directory's
      must be as well.

*Open a sample file directory read-write*

Precondition: database open read-write.

1. Lock the sample file directory with `LOCK_EX` (exclusive).
2. Verify the metadata file matches the database (as above).
3. Update the metadata file with `in_progress_open` matching the current
    open.
-3. Update the database row with `last_complete_open_id` matching the current
+4. Update the database row with `last_complete_open_id` matching the current
    open.
-4. Update the metadata file with `last_complete_open` rather than
+5. Update the metadata file with `last_complete_open` rather than
    `in_progress_open`.
-5. Run the recording startup procedure for this directory.
+6. Run the recording startup procedure for this directory.

*Close a sample file directory*

1. Drop the sample file directory lock.

*Delete a sample file directory*

1. Remove all sample files (of all three categories described below:
    `recording` table rows, `garbage` table rows, and files with recording
    ids >= their stream's `cum_recordings`); see "delete a recording"
    procedure below.
2. Rewrite the directory metadata with `in_progress_open` set to the current
    open, `last_complete_open` cleared.
3. Delete the directory's row from the database.
### Lifecycle of a recording

@@ -398,15 +423,15 @@ Because a major part of the recording state is outside the SQL database, care
must be taken to guarantee consistency and durability. Moonfire NVR maintains
three invariants about sample files:

1. `recording` table rows have sample files on disk with the indicated size
    and SHA-1 hash.
2. Exactly one of the following statements is true for every sample file:
    * It has a `recording` table row.
    * It has a `garbage` table row.
    * Its recording id is greater than or equal to the `cum_recordings`
      for its stream.
3. After an orderly shutdown of Moonfire NVR, there is a `recording` table row
    for every sample file, even if there have been previous crashes.

The first invariant provides certainty that a recording is properly stored. It
would be prohibitively expensive to verify hashes on demand (when listing or
@@ -423,31 +448,31 @@ These invariants are updated through the following procedure:

*Create a recording:*

1. Write the sample file, aborting if `open(..., O_WRONLY|O_CREAT|O_EXCL)`
    fails with `EEXIST`.
2. `fsync()` the sample file.
3. `fsync()` the sample file directory.
4. Insert the `recording` row, marking its size and SHA-1 hash in the process.
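In Python-flavored pseudocode, the create procedure might look like the sketch below. This is illustrative only (Moonfire NVR itself is not written this way, and the table columns shown are a simplification); the key point is the ordering: both `fsync()` calls happen before the database insert, so a row never refers to a file that could vanish in a crash.

```python
import hashlib
import os
import sqlite3


def create_recording(dir_path: str, name: str, data: bytes,
                     db: sqlite3.Connection) -> None:
    """Illustrative sketch of the "create a recording" steps above."""
    # 1. Write the sample file; O_EXCL makes open() fail with EEXIST if a
    #    file by this name already exists.
    fd = os.open(os.path.join(dir_path, name),
                 os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # 2. fsync() the sample file.
    finally:
        os.close(fd)

    # 3. fsync() the sample file directory so the new entry is durable.
    dir_fd = os.open(dir_path, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

    # 4. Only then insert the row, recording size and hash. (Hypothetical,
    #    simplified columns; the real schema appears later in this document.)
    db.execute(
        "insert into recording (sample_file, size, sha1) values (?, ?, ?)",
        (name, len(data), hashlib.sha1(data).hexdigest()))
    db.commit()
```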
*Delete a recording:*

1. Replace the `recording` row with a `garbage` row.
2. `unlink()` the sample file, warning on `ENOENT`. (This would indicate
    invariant #2 is false.)
3. `fsync()` the sample file directory.
4. Delete the `garbage` row.
*Startup (crash recovery):*

1. Acquire a lock to guarantee this is the only Moonfire NVR process running
    against the given database. This lock is not released until program
    shutdown.
2. Query `garbage` table and `cum_recordings` field in the `stream` table.
3. `unlink()` all the sample files associated with garbage rows, ignoring
    `ENOENT`.
4. For each stream, `unlink()` all the existing files with recording ids >=
    `cum_recordings`.
5. `fsync()` the sample file directory.
6. Delete all rows from the `garbage` table.
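The recovery steps can be sketched in the same style (again illustrative: table and column names are hypothetical simplifications, and the locking step and the `cum_recordings` scan are elided). The ordering mirrors deletion: files are unlinked and the directory synced before the `garbage` rows disappear, so a crash mid-recovery leaves invariant #2 intact.

```python
import os
import sqlite3


def startup_cleanup(dir_path: str, db: sqlite3.Connection) -> None:
    """Illustrative sketch of steps 2-6 of the startup procedure above."""
    # 2. Query the garbage table.
    names = [r[0] for r in db.execute("select sample_file from garbage")]

    # 3. unlink() garbage sample files, ignoring ENOENT: a prior crash may
    #    have happened after the unlink but before the row was deleted.
    for name in names:
        try:
            os.unlink(os.path.join(dir_path, name))
        except FileNotFoundError:
            pass

    # (4. unlink() of files with ids >= cum_recordings is elided here.)

    # 5. fsync() the directory so the deletions survive a crash...
    dir_fd = os.open(dir_path, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

    # 6. ...and only then delete the garbage rows.
    db.execute("delete from garbage")
    db.commit()
```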
The procedures can be batched: while for a given recording, the steps must be
strictly ordered, multiple recordings can be proceeding through the steps
@@ -471,9 +496,9 @@ problem.

There should be a means to verify the invariants above. There are three
possible levels of verification:

1. Compare presence of sample files.
2. Compare size of sample files.
3. Compare hashes of sample files.

Consider a database with 6 camera-months of recordings at 3.1 Mbps (for
both main and sub streams). There would be 0.5 million files, taking 5.9 TB.
@@ -487,15 +512,17 @@ The times are roughly:

The `readdir()` and `fstat()` times can be tested simply:

```
$ mkdir testdir
$ cd testdir
$ seq 1 $[60*24*365*6/12*2] | xargs touch
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
$ time ls -1 -f | wc -l
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
$ time ls -1 -f --size | wc -l
```

(The system calls used by `ls` can be verified through strace.)
The hash verification time is easiest to calculate: reading 5.9 TB at 100
MB/sec takes about 8 hours. On some systems, it will be even slower. On
@ -515,42 +542,44 @@ the background at low priority.
The snippet below is a illustrative excerpt of the SQLite schema; see The snippet below is a illustrative excerpt of the SQLite schema; see
`schema.sql` for the authoritative, up-to-date version. `schema.sql` for the authoritative, up-to-date version.
```sql
-- A single, typically 60-second, recorded segment of video.
create table recording (
  id integer primary key,
  open_id integer references open (id),
  camera_id integer references camera (id) not null,

  sample_file_uuid blob unique not null,
  sample_file_blake3 blob,
  sample_file_size integer,

  -- The starting time and duration of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null,
  duration_90k integer,

  video_samples integer,
  video_sample_entry_id blob references visual_sample_entry (id),
  video_index blob,

  ...
);

-- A concrete box derived from an ISO/IEC 14496-12 section 8.5.2
-- VisualSampleEntry box. Describes the codec, width, height, etc.
create table visual_sample_entry (
  id integer primary key,

  -- The width and height in pixels; must match values within
  -- `sample_entry_bytes`.
  width integer,
  height integer,

  -- A serialized SampleEntry box, including the leading length and box
  -- type (avcC in the case of H.264).
  data blob
);
```
As mentioned by the `start_time_90k` field above, recordings use a 90 kHz time
base. This matches the RTP timestamp frequency used for H.264 and other video
@ -579,21 +608,21 @@ only with certain firmware versions (see [thread][hikvision-sr]). Most likely
it will be useful to have any available clock/timing information for
diagnosing problems, such as the following:
* the NVR's wall clock time
* the NVR's NTP server sync status
* the NVR's uptime
* the camera's time as of the RTP play response
* the camera's time as of any RTCP Sender Reports, and the corresponding RTP
    timestamps
#### `video_index`
The `video_index` field conceptually holds three pieces of information about
the samples:
1. the duration (in 90kHz units) of each sample
2. the byte size of each sample
3. which samples are "sync samples" (aka key frames or I-frames)
These correspond to [ISO/IEC 14496-12][iso-14496-12] `stts` (TimeToSampleBox,
section 8.6.1.2), `stsz` (SampleSizeBox, section 8.7.3), and `stss`
@ -614,16 +643,18 @@ This encoding is chosen so that values will be near zero, and thus the varints
will be at their most compact possible form. An index might be written by the
following pseudocode:
```
prev_duration = 0
prev_bytes_key = 0
prev_bytes_nonkey = 0
for each frame:
    duration_delta = duration - prev_duration
    bytes_delta = bytes - (is_key ? prev_bytes_key : prev_bytes_nonkey)
    prev_duration = duration
    if is_key: prev_bytes_key = bytes else: prev_bytes_nonkey = bytes
    PutVarint((Zigzag(duration_delta) << 1) | is_key)
    PutVarint(Zigzag(bytes_delta))
```
See also the example below:
@ -643,10 +674,10 @@ See also the example below:
A major goal of this format is to support on-demand serving in various formats,
including two types of `.mp4` files:
* unfragmented `.mp4` files, for traditional video players.
* fragmented `.mp4` files for MPEG-DASH or HTML5 Media Source Extensions
    (see [Media Source ISO BMFF Byte Stream Format][media-bmff]), for
    a browser-based user interface.
This does not require writing new `.mp4` files to disk. In fact, HTTP range
requests (for "pseudo-streaming") can be satisfied on `.mp4` files aggregated
@ -654,38 +685,6 @@ from several segments. The implementation details are outside the scope of this
document, but this is possible in part due to the use of an on-flash database
to store metadata and the simple, consistent format of sample indexes.
### Copyright
This file is part of Moonfire NVR, a security camera network video recorder.
Copyright (C) 2016 The Moonfire NVR Authors
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
In addition, as a special exception, the copyright holders give
permission to link the code of portions of this program with the
OpenSSL library under certain conditions as described in each
individual source file, and distribute linked combinations including
the two.
You must obey the GNU General Public License in all respects for all
of the code used other than OpenSSL. If you modify file(s) with this
exception, you may extend this exception to your version of the
file(s), but you are not obligated to do so. If you do not wish to do
so, delete this exception statement from your version. If you delete
this exception statement from all source files in the program, then
also delete it here.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
[pi2]: https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
[xandem]: http://www.xandemhome.com/
[hikcam]: http://overseas.hikvision.com/us/Products_accessries_10533_i7696.html

View File

@ -1,4 +1,4 @@
# Moonfire NVR Time Handling <!-- omit in toc -->

Status: **current**.
@ -7,6 +7,19 @@ Status: **current**.
>
> — Segal's law
* [Objective](#objective)
* [Background](#background)
* [Overview](#overview)
* [Detailed design](#detailed-design)
* [Caveats](#caveats)
    * [Stream mismatches](#stream-mismatches)
    * [Time discontinuities](#time-discontinuities)
    * [Leap seconds](#leap-seconds)
        * [Use `clock_gettime(CLOCK_TAI, ...)` timestamps](#use-clock_gettimeclock_tai--timestamps)
        * [Use a leap second table when calculating differences](#use-a-leap-second-table-when-calculating-differences)
        * [Use smeared time](#use-smeared-time)
* [Alternatives considered](#alternatives-considered)
## Objective

Maximize the likelihood Moonfire NVR's timestamps are useful.
@ -14,20 +27,20 @@ Maximize the likelihood Moonfire NVR's timestamps are useful.
The timestamp corresponding to a video frame should roughly match timestamps
from other sources:
* another video stream from the same camera. Given a video frame from the
    "main" stream, a video frame from the "sub" stream with a similar
    timestamp should have been recorded near the same time, and vice versa.
    This minimizes confusion when switching between views of these streams,
    and when viewing the "main" stream timestamps corresponding to a motion
    event gathered from the less CPU-intensive "sub" stream.
* on-camera motion events from the same camera. If the video frame reflects
    the motion event, its timestamp should be roughly within the event's
    timespan.
* streams from other cameras. Recorded views from two cameras of the same
    event should have similar timestamps.
* events noted by the owner of the system, neighbors, police, etc., for the
    purpose of determining chronology, to the extent those persons use
    accurate clocks.
Two recordings from the same stream should not overlap. This would make it
impossible for a user interface to present a simple timeline for accessing all
@ -35,28 +48,28 @@ recorded video.
Durations should be useful over short timescales:
* If an object's motion is recorded, distance travelled divided by the
    duration of the frames over which this motion occurred should reflect the
    object's average speed.
* Motion should appear smooth. There shouldn't be excessive frame-to-frame
    jitter due to such factors as differences in encoding time or network
    transmission.
This document describes an approach to achieving these goals when the
following statements are true:
* the NVR's system clock is within a second of correct on startup. (True
    when NTP is functioning or when the system has a real-time clock battery
    to preserve a previous correct time.)
* the NVR's system time does not experience forward or backward "step"
    corrections (as opposed to frequency correction) during operation.
* the NVR's system time advances at roughly the correct frequency. (NTP
    achieves this through frequency correction when operating correctly.)
* the cameras' clock frequencies are off by no more than 500 parts per
    million (roughly 43 seconds per day).
* the cameras are geographically close to the NVR, so in most cases network
    transmission time is under 50 ms. (Occasional delays are to be expected,
    however.)
When one or more of those statements are false, the system should degrade
gracefully: preserve what properties it can, gather video anyway, and when
@ -81,40 +94,40 @@ so such problems are to be expected.
Moonfire NVR typically has access to the following sources of time
information:
* the local `CLOCK_REALTIME`. Ideally this is maintained by `ntpd`:
    synchronized on startup, and frequency-corrected during operation. A
    hardware real-time clock and battery keep accurate time across restarts
    if the network is unavailable on startup. In the worst case, the system
    has no real-time clock or no battery and a network connection is
    unavailable. The time is far in the past on startup and is never
    corrected or is corrected via a step while Moonfire NVR is running.
* the local `CLOCK_MONOTONIC`. This should be frequency-corrected by `ntpd`
    and guaranteed to never experience "steps", though its reference point is
    unspecified.
* the local `ntpd`, which can be used to determine if the system is
    synchronized to NTP and quantify the precision of synchronization.
* each camera's clock. The ONVIF specification mandates cameras must
    support synchronizing clocks via NTP, but in practice cameras appear to
    use SNTP clients which simply step time periodically and provide no
    interface to determine if the clock is currently synchronized. This
    document's author owns several cameras with clocks that run roughly 20
    ppm fast (2 seconds per day) and are adjusted via steps.
* the RTP timestamps from each of a camera's streams. As described in
    [RFC 3550 section 5.1](https://tools.ietf.org/html/rfc3550#section-5.1),
    these are monotonically increasing with an unspecified reference point.
    They can't be directly compared to other cameras or other streams from
    the same camera. Empirically, budget cameras don't appear to do any
    frequency correction on these timestamps.
* in some cases, RTCP sender reports, as described in
    [RFC 3550 section 6.4](https://tools.ietf.org/html/rfc3550#section-6.4).
    These correlate RTP timestamps with the camera's real time clock.
    However, these are only sent periodically, not necessarily at the
    beginning of the session. Some cameras omit them entirely depending on
    firmware version, as noted in
    [this forum post](https://www.cctvforum.com/topic/40914-video-sync-with-hikvision-ipcams-tech-query-about-rtcp/).
    Additionally, Moonfire NVR currently uses ffmpeg's libavformat for RTSP
    protocol handling; this library exposes these reports in a limited
    fashion.
The camera records video frames as in the diagram below:
@ -224,14 +237,14 @@ wall clock, and thus calculate the camera's time as of the first frame.
The _start time_ of the first recording could be either its local start time
or its camera start time, determined via the following rules:
1. if there is no camera start time (due to the lack of an RTCP sender
    report), the local start time wins by default.
2. if the camera start time is before 2016-01-01 00:00:00 UTC, the local
    start time wins.
3. if the local start time is before 2016-01-01 00:00:00 UTC, the camera
    start time wins.
4. if the times differ by more than 5 seconds, the local start time wins.
5. otherwise, the camera start time wins.
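The rules above can be sketched directly. This is illustrative Python; the function name and the use of plain seconds-since-epoch timestamps are assumptions for the example, not Moonfire NVR's actual interface.

```python
from typing import Optional

EPOCH_2016 = 1451606400  # 2016-01-01 00:00:00 UTC, as a Unix timestamp

def choose_start_time(local: int, camera: Optional[int] = None) -> int:
    """Pick a recording's start time from its local and camera start times."""
    if camera is None:             # rule 1: no RTCP sender report
        return local
    if camera < EPOCH_2016:        # rule 2: camera time is clearly bogus
        return local
    if local < EPOCH_2016:         # rule 3: local time is clearly bogus
        return camera
    if abs(local - camera) > 5:    # rule 4: large disagreement; trust local
        return local
    return camera                  # rule 5: otherwise trust the camera

# A camera 3 seconds ahead of a sane local clock wins:
print(choose_start_time(1600000000, 1600000003))  # 1600000003
```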
These rules are a compromise. When a system starts up without NTP or a clock
battery, it typically reverts to a time in the distant past. Therefore times
@ -259,10 +272,10 @@ happened](https://github.com/scottlamb/moonfire-nvr/issues/9#issuecomment-322663
Moonfire NVR will continue to use the initial wall clock time for as long as
the recording lasts. This can result in some unfortunate behaviors:
* a recording that lasts for months might have an incorrect time all the
    way through because `ntpd` took a few minutes on startup.
* two recordings that were in fact simultaneous might be recorded with very
    different times because a time jump happened between their starts.
It might be better to use the new time (assuming that ntpd has made a
correction) retroactively. This is unimplemented, but the
@ -299,18 +312,18 @@ Timestamps in the TAI clock system don't skip leap seconds. There's a system
interface intended to provide timestamps in this clock system, and Moonfire
NVR could use it. Unfortunately this has several problems:
* `CLOCK_TAI` is only available on Linux. It'd be preferable to handle
    timestamps in a consistent way on other platforms. (At least on macOS,
    Moonfire NVR's current primary development platform.)
* `CLOCK_TAI` is wrong on startup and possibly adjusted later. The offset
    between TAI and UTC is initially assumed to be 0. It's corrected when/if
    a sufficiently new `ntpd` starts.
* We'd need a leap second table to translate this into calendar time. One
    would have to be downloaded from the Internet periodically, and we'd need
    to consider the case in which the available table is expired.
* `CLOCK_TAI` likely doesn't work properly with leap smear systems. Where
    the leap smear prevents a time jump for `CLOCK_REALTIME`, it likely
    introduces one for `CLOCK_TAI`.
#### Use a leap second table when calculating differences
@ -345,4 +358,4 @@ Schema versions prior to 6 used a simpler database schema which didn't
distinguish between "wall" and "media" time. Instead, the durations of video
samples were adjusted for clock correction. This approach worked well for
video. It couldn't be extended to audio without decoding and re-encoding to
adjust sample lengths and pitch.

View File

@ -1,4 +1,4 @@
# Building Moonfire NVR <!-- omit in toc -->
This document has notes for software developers on building Moonfire NVR from
source code for development. If you just want to install precompiled
@ -10,13 +10,12 @@ tracker](https://github.com/scottlamb/moonfire-nvr/issues) or
[mailing list](https://groups.google.com/d/forum/moonfire-nvr-users) when
stuck. Please also send pull requests to improve this doc.
* [Downloading](#downloading)
* [Docker builds](#docker-builds)
    * [Release procedure](#release-procedure)
* [Non-Docker setup](#non-docker-setup)
    * [Running interactively straight from the working copy](#running-interactively-straight-from-the-working-copy)
    * [Running as a `systemd` service](#running-as-a-systemd-service)
## Downloading
@ -151,19 +150,19 @@ Linux VM and filesystem overlay.
To build the server, you will need the following C libraries installed:
* [ffmpeg](http://ffmpeg.org/) version 2.x or 3.x, including `libavutil`,
    `libavcodec` (to inspect H.264 frames), and `libavformat` (to connect to
    RTSP servers and write `.mp4` files).

    Note ffmpeg library versions older than 55.1.101, along with all versions
    of the competing project [libav](http://libav.org), don't support socket
    timeouts for RTSP. For reliable reconnections on error, it's strongly
    recommended to use ffmpeg library versions >= 55.1.101.
* [SQLite3](https://www.sqlite.org/).
* [`ncursesw`](https://www.gnu.org/software/ncurses/), the UTF-8 version of
    the `ncurses` library.
To build the UI, you'll need [node and npm](https://nodejs.org/en/download/).

View File

@ -1,4 +1,8 @@
# Working on UI development <!-- omit in toc -->
* [Getting started](#getting-started)
* [Overriding defaults](#overriding-defaults)
* [A note on `https`](#a-note-on-https)
The UI is presented from a single HTML page (index.html) and any number
of JavaScript files, CSS files, images, etc. These are "packed" together

View File

@ -1,4 +1,11 @@
# Installing Moonfire NVR <!-- omit in toc -->
* [Downloading, installing, and configuring Moonfire NVR with Docker](#downloading-installing-and-configuring-moonfire-nvr-with-docker)
    * [Dedicated hard drive setup](#dedicated-hard-drive-setup)
    * [Completing configuration through the UI](#completing-configuration-through-the-ui)
    * [Starting it up](#starting-it-up)
## Downloading, installing, and configuring Moonfire NVR with Docker
This document describes how to download, install, and configure Moonfire NVR
via the prebuilt Docker images available for x86-64, arm64, and arm. If you
@ -102,7 +109,7 @@ $ nvr init
This will create a directory `/var/lib/moonfire-nvr/db` with a SQLite3 database
within it.
### Dedicated hard drive setup
If a dedicated hard drive is available, set up the mount point:
@ -139,7 +146,7 @@ mount lines. It will look similar to this:
--mount=type=bind,source=/media/nvr/sample,destination=/media/nvr/sample
```
### Completing configuration through the UI
Once your system is set up, it's time to initialize an empty database
and add the cameras and sample directories. You can do this
@ -159,28 +166,28 @@ In the user interface,
2. add cameras under "Cameras and streams".
    * See the [wiki](https://github.com/scottlamb/moonfire-nvr/wiki) for notes
      about specific camera models.
    * There's a "Test" button to verify your settings directly from the
      add/edit camera dialog.
    * Be sure to assign each stream you want to capture to a sample file
      directory and check the "record" box.
    * `flush_if_sec` should typically be 120 seconds. This causes the database
      to be flushed when the first instant of one of this stream's completed
      recordings is 2 minutes old. A "recording" is a segment of a video
      stream that is 60–120 seconds when first establishing the stream, about
      60 seconds midstream, and shorter when an error or server shutdown
      terminates the stream. Thus, a value just below 60 will cause the
      database to be flushed once per minute per stream in the steady state.
      A value around 180 will cause the database to be flushed once every 3
      minutes per stream, or less frequently if other streams cause flushes
      first. Lower values cause less video to be lost on power loss. Higher
      values reduce wear on the SSD holding the SQLite database, particularly
      when you have many cameras and when you record both the "main" and
      "sub" streams of each camera.
3. Assign disk space to your cameras back in "Directories and retention".
    Leave a little slack between the total limit and the filesystem capacity,
@ -202,7 +209,7 @@ In the user interface,
4. Add a user for yourself (and optionally others) under "Users". You'll need
    this to access the web UI once you enable authentication.
### Starting it up
Note that at this stage, Moonfire NVR's web interface is **insecure**: it
doesn't use `https` and doesn't require you to authenticate

View File

@ -1,4 +1,12 @@
# Moonfire NVR Schema Guide <!-- omit in toc -->
* [Upgrading](#upgrading)
* [Procedure](#procedure)
* [Unversioned to version 0](#unversioned-to-version-0)
* [Version 0 to version 1](#version-0-to-version-1)
* [Version 1 to version 2 to version 3](#version-1-to-version-2-to-version-3)
* [Version 3 to version 4 to version 5](#version-3-to-version-4-to-version-5)
* [Version 6](#version-6)
This document has notes about the Moonfire NVR storage schema. As described in
[README.md](../README.md), this consists of two kinds of state:
@@ -26,42 +34,46 @@ read-only mode prior to deleting the old database.
First ensure there is sufficient space available for four copies of the
SQLite database:
* copy 1: the copy to upgrade
* copy 2: a backup you manually create so that you can restore if you
  discover a problem while running the new software against the upgraded
  database in read-only mode. If disk space is tight, you can save this
  to a different filesystem than the primary copy.
* copies 3 and 4: internal copies made and destroyed by Moonfire NVR and
  SQLite during the upgrade:
    * during earlier steps, possibly duplicate copies of tables, which
      may occupy space both in the main database and the journal
    * during the final vacuum step, a complete database copy
If disk space is tight, and you are _very careful_, you can skip these
copies with the `--preset-journal=off --no-vacuum` arguments to
the updater. If you aren't confident in your ability to do this, *don't
do it*. If you are confident, take additional safety precautions anyway:
* double-check you have the full backup described above. Without the
  journal any problems during the upgrade will corrupt your database
  and you will need to restore.
* ensure you re-enable journalling via `pragma journal_mode = wal;`
  before using the upgraded database, or any problems after the
  upgrade will corrupt your database. The upgrade procedure should do
  this automatically, but you will want to verify by hand that you are
  no longer in the dangerous mode.
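
As a quick check (not part of the documented procedure; this assumes the
default database path used elsewhere in this guide), you can ask SQLite
directly which journal mode is active:

```
$ sudo -u moonfire-nvr sqlite3 /var/lib/moonfire-nvr/db/db 'pragma journal_mode;'
wal
```

If it prints anything other than `wal`, re-enable journalling before
starting the server again.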
Next ensure Moonfire NVR is not running and does not automatically restart if
the system is rebooted during the upgrade. If you followed the Docker
instructions, you can do this as follows:
```
$ nvr stop
```
Then back up your SQLite database. If you are using the default path, you can
do so as follows:
```
$ sudo -u moonfire-nvr cp /var/lib/moonfire-nvr/db/db{,.pre-upgrade}
```
By default, the upgrade command will reset the SQLite `journal_mode` to
`delete` prior to the upgrade. This works around a problem with
@@ -112,17 +124,17 @@ $ nvr run
Hopefully your system is functioning correctly. If not, there are two options
for restore; neither is easy:
* go back to your old database. There will be two classes of problems:
    * If the new system deleted any recordings, the old system will
      incorrectly believe they are still present. You could wait until all
      existing files are rotated away, or you could try to delete them
      manually from the database.
    * If the new system created any recordings, the old system will not
      know about them and will not delete them. Your disk may become full.
      You should find some way to discover these files and manually delete
      them.
* undo the changes by hand. There's no documentation on this; you'll need
  to read the code and come up with a reverse transformation.
The `nvr check` command will show you what problems exist on your system.
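
For example, with the `nvr` Docker wrapper script used in these
instructions:

```
$ nvr check
```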
@@ -136,9 +148,9 @@ will also accept a version 0 database.
Version 0 makes two changes:
* it adds schema versioning, as described above.
* it adds a column (`video_sync_samples`) to a database index to speed up
  certain operations.
There's a special procedure for this upgrade. The good news is that a backup
is unnecessary; there's no risk with this procedure.
@@ -150,8 +162,10 @@ Then use `sqlite3` to manually edit the database. The default
path is `/var/lib/moonfire-nvr/db/db`; if you've specified a different
`--db_dir`, use that directory with a suffix of `/db`.
```
$ sudo -u moonfire-nvr sqlite3 /var/lib/moonfire-nvr/db/db
sqlite3>
```
At the prompt, run the following commands:

View File

@@ -1,4 +1,16 @@
# Securing Moonfire NVR and exposing it to the Internet <!-- omit in toc -->
* [The problem](#the-problem)
* [VPN or port forwarding?](#vpn-or-port-forwarding)
* [Overview](#overview)
* [1. Install a webserver](#1-install-a-webserver)
* [2. Configure a static internal IP](#2-configure-a-static-internal-ip)
* [3. Set up port forwarding](#3-set-up-port-forwarding)
* [4. Configure a public DNS name](#4-configure-a-public-dns-name)
* [5. Install a TLS certificate](#5-install-a-tls-certificate)
* [6. Reconfigure Moonfire NVR](#6-reconfigure-moonfire-nvr)
* [7. Configure the webserver](#7-configure-the-webserver)
* [Verify it works](#verify-it-works)
## The problem

View File

@@ -1,27 +1,26 @@
# Troubleshooting <!-- omit in toc -->
Here are some tips for diagnosing various problems with Moonfire NVR. Feel free
to open an [issue](https://github.com/scottlamb/moonfire-nvr/issues) if you
need more help.
* [Viewing Moonfire NVR's logs](#viewing-moonfire-nvrs-logs)
    * [Flushes](#flushes)
    * [Panic errors](#panic-errors)
    * [Slow operations](#slow-operations)
    * [Camera stream errors](#camera-stream-errors)
* [Problems](#problems)
    * [Server errors](#server-errors)
        * [`Error: pts not monotonically increasing; got 26615520 then 26539470`](#error-pts-not-monotonically-increasing-got-26615520-then-26539470)
        * [Out of disk space](#out-of-disk-space)
        * [Database or filesystem corruption errors](#database-or-filesystem-corruption-errors)
    * [Configuration interface problems](#configuration-interface-problems)
        * [`moonfire-nvr config` displays garbage](#moonfire-nvr-config-displays-garbage)
    * [Browser user interface problems](#browser-user-interface-problems)
        * [Live stream always fails with `ws close: 1006`](#live-stream-always-fails-with-ws-close-1006)
    * [Errors in kernel logs](#errors-in-kernel-logs)
        * [UAS errors](#uas-errors)
        * [Filesystem errors](#filesystem-errors)
## Viewing Moonfire NVR's logs