# Introduction

Moonfire NVR is an open-source security camera network video recorder, started
by Scott Lamb <slamb@slamb.org>. Currently it is basic: it saves
H.264-over-RTSP streams from IP cameras to disk as .mp4 files and provides a
simple HTTP interface for listing and viewing fixed-length segments of video.
It does not decode, analyze, or re-encode video frames, so it requires little
CPU. It handles six 720p/15fps streams on a [Raspberry Pi
2](https://www.raspberrypi.org/products/raspberry-pi-2-model-b/), using roughly
5% of the machine's total CPU.

This is version 0.1, the initial release. Until version 1.0, there will be no
compatibility guarantees: configuration and storage formats may change from
version to version.

I hope to add features such as salient motion detection. It's way too early to
make promises, but it seems possible to build a full-featured
hobbyist-oriented multi-camera NVR that requires nothing but a cheap machine
with a big hard drive. I welcome help; see [Getting help and getting
involved](#help) below. There are many exciting techniques we could use to
make this possible:

* avoiding CPU-intensive H.264 encoding in favor of simply continuing to use
  the camera's already-encoded video streams. Cheap IP cameras these days
  provide pre-encoded H.264 streams in both "main" (full-sized) and "sub"
  (lower resolution, compression quality, and/or frame rate) varieties. The
  "sub" stream is more suitable for fast computer vision work as well as
  remote/mobile streaming. Disk space these days is quite cheap (with 3 TB
  drives costing about $100), so we can afford to keep many camera-months of
  both streams on disk.
* decoding and analyzing only select "key" video frames (see
  [Wikipedia](https://en.wikipedia.org/wiki/Video_compression_picture_types)).
* off-loading expensive work to a GPU. Even the Raspberry Pi has a
  surprisingly powerful GPU.
* using [HTTP Live Streaming](https://en.wikipedia.org/wiki/HTTP_Live_Streaming)
  rather than requiring custom browser plug-ins.
* taking advantage of cameras' built-in motion detection. This is the most
  obvious way to reduce motion detection CPU, but it's a last resort because
  these cheap cameras' proprietary algorithms are awful compared to those
  described on [changedetection.net](http://changedetection.net): they have
  high false-positive and false-negative rates, are hard to experiment with
  (as opposed to rerunning against saved video files), and don't provide any
  information beyond whether motion exceeded the threshold.

# Downloading

See the [github page](https://github.com/scottlamb/moonfire-nvr) (in case
you're not reading this text there already). You can download the bleeding
edge version from the command line via git:

    $ git clone https://github.com/scottlamb/moonfire-nvr.git

# Building from source

There are no binary packages of Moonfire NVR available yet, so it must be
built from source. It requires several packages to build:

* [CMake](https://cmake.org/) version 3.1.0 or higher.
* a C++11 compiler, such as [gcc](https://gcc.gnu.org/) 4.7 or higher.
* [ffmpeg](http://ffmpeg.org/), including `libavutil`,
  `libavcodec` (to inspect H.264 frames), and `libavformat` (to connect to
  RTSP servers and write `.mp4` files). Note that ffmpeg versions older than
  55.1.101, along with all versions of the competing project
  [libav](http://libav.org), do not support socket timeouts for RTSP; for
  reliable reconnections on error, it's strongly recommended to use
  ffmpeg >= 55.1.101. (A version check sketch follows this list.)
* [libevent](http://libevent.org/) 2.1, for the built-in HTTP server.
  (This might be replaced with the more full-featured
  [nghttp2](https://github.com/tatsuhiro-t/nghttp2) in the future.)
  Unfortunately, the libevent 2.0 bundled with current Debian releases is
  unsuitable.
* [gflags](http://gflags.github.io/gflags/), for command line flag parsing.
* [glog](https://github.com/google/glog), for debug logging.
* [gperftools](https://github.com/gperftools/gperftools), for debugging.
* [googletest](https://github.com/google/googletest), for automated testing.
  This will be automatically downloaded during the build process, so it's
  not necessary to install it beforehand.
* [re2](https://github.com/google/re2), for parsing with regular expressions.
* libuuid from [util-linux](https://en.wikipedia.org/wiki/Util-linux).
* [SQLite3](https://www.sqlite.org/).
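
To confirm an installed ffmpeg is new enough, one option (a sketch, assuming
`pkg-config` metadata is present) is to query the version of `libavformat`,
which appears to be what the 55.1.101 figure above refers to:

    $ pkg-config --modversion libavformat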

On Ubuntu 15.10 or Raspbian Jessie, the following command will install most
prerequisites (see also the `Build-Depends` field in `debian/control`):

    $ sudo apt-get install \
        build-essential \
        cmake \
        libavcodec-dev \
        libavformat-dev \
        libavutil-dev \
        libgflags-dev \
        libgoogle-glog-dev \
        libgoogle-perftools-dev \
        libre2-dev \
        sqlite3 \
        libsqlite3-dev \
        pkgconf \
        uuid-runtime \
        uuid-dev

libevent 2.1 will have to be installed from source. In the future, this
dependency may be replaced, or support may be added for automatically building
libevent in-tree to avoid the inconvenience.
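
As a sketch of building libevent 2.1 from source (assuming the usual autotools
flow of the upstream repository, with autoconf, automake, and libtool
installed; its own documentation is authoritative):

    $ git clone https://github.com/libevent/libevent.git
    $ cd libevent
    $ ./autogen.sh
    $ ./configure
    $ make
    $ sudo make install
    $ sudo ldconfig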

uuid-runtime is only necessary if you wish to use the `uuidgen` command to
generate uuids for your cameras (see below). If you obtain them elsewhere, you
can skip this package.

You can continue to follow the build/install instructions below for a manual
build and install, or alternatively you can run the prep script `prep.sh`:

    $ cd moonfire-nvr
    $ ./prep.sh

The script takes the following command line options, should you need them (an
example invocation follows the list):

* `-E`: forcibly purge all existing libevent packages. You would only do this
  if there is some apparent conflict (see the remarks above about building
  libevent from source).
* `-f`: force a build even if the binary appears to be installed. This can be
  useful on repeat builds.
* `-S`: skip updating and installing dependencies through apt-get. This too
  can be useful on repeated builds.
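
For example, a repeated build that skips the apt-get steps and forces a
rebuild would be:

    $ ./prep.sh -S -f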

You can edit variables at the start of the script to influence names and
directories, but the defaults should suffice in most cases. For details, refer
to the script itself. We will mention just one option, needed when you follow
the suggestion to separate the database and samples between flash storage and
a hard disk. If you have the hard disk mounted on, let's say, `/media/nvr`,
and you want to store the video samples inside a directory named `samples`
there, you would set:

    SAMPLES_DIR=/media/nvr/samples

The script will perform all necessary steps to leave you with a fully built,
installed moonfire-nvr binary and a running system service. The only thing
you'll have to do manually is add your camera configuration(s) to the
database. For instructions, you can skip to "[Camera configuration and hard
drive mounting](#cameras)".

Once prerequisites are installed, Moonfire NVR can be built as follows:

    $ mkdir build
    $ cd build
    $ cmake ..
    $ make
    $ sudo make install

Alternatively, if you do have a sufficiently new apt-installed libevent
installed, you may be able to prepare a `.deb` package:

    $ sudo apt-get install devscripts dh-systemd
    $ debuild -us -uc

# Further configuration

Moonfire NVR should be run under a dedicated user. It keeps two kinds of
state:

* a SQLite database, typically <1 GiB. It should be stored on flash if
  available.
* the "sample file directory", which holds the actual samples/frames of H.264
video. This should be quite large and typically is stored on a hard drive.

Both are intended to be accessed only by Moonfire NVR itself. However, the
interface for adding new cameras is not yet written, so you will have to
manually create the database and insert cameras with the `sqlite3` command line
tool prior to starting Moonfire NVR.

Manual commands would look something like this:

    $ sudo addgroup --system moonfire-nvr
    $ sudo adduser --system moonfire-nvr --home /var/lib/moonfire-nvr
    $ sudo mkdir -p /var/lib/moonfire-nvr
    $ sudo chown moonfire-nvr /var/lib/moonfire-nvr
    $ cd /var/lib/moonfire-nvr
    $ sudo -u moonfire-nvr -H mkdir db sample
    $ sudo -u moonfire-nvr sqlite3 ~moonfire-nvr/db/db < path/to/schema.sql

## <a name="cameras"></a>Camera configuration and hard drive mounting

If a dedicated hard drive is available, set up the mount point:

    $ sudo vim /etc/fstab
    $ sudo mount /var/lib/moonfire-nvr/sample
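
An `/etc/fstab` entry might look like the following sketch; the device name
and filesystem type here are assumptions, so substitute your own drive (or,
better, its filesystem UUID as reported by `blkid`):

    # <device>  <mountpoint>                  <type> <options>         <dump> <pass>
    /dev/sdb1   /var/lib/moonfire-nvr/sample  ext4   defaults,noatime  0      2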

Once setup is complete, it is time to add camera configurations to the
database. However, the interface for adding new cameras is not yet written,
so you will have to manually insert camera configurations with the `sqlite3`
command line tool prior to starting Moonfire NVR.

Before setting up a camera, it may be helpful to test settings with the
`ffmpeg` command line tool:

    $ ffmpeg \
          -i "rtsp://admin:12345@192.168.1.101:554/Streaming/Channels/1" \
          -c copy \
          -map 0:0 \
          -rtsp_transport tcp \
          -flags:v +global_header \
          test.mp4

Once you have a working `ffmpeg` command line, insert the camera config as
follows. See the schema SQL file's comments for more information.

Note that the sum of `retain_bytes` for all cameras combined should be
somewhat less than the available bytes on the sample file directory's
filesystem, as the currently-writing sample files are not included in
this sum. Be sure also to subtract out the filesystem's reserve for root
(typically 5%).
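
As a worked sizing sketch with hypothetical numbers: a 1 TiB filesystem with
the default 5% root reserve leaves about 972 GiB usable; keeping a few tens of
gigabytes of headroom for currently-writing sample files gives a total
`retain_bytes` budget on the order of 900 GiB to divide among the cameras. The
actual free space can be checked with:

    $ df -B1 /var/lib/moonfire-nvr/sample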

In the following example, we generate a uuid, which is later used to uniquely
identify this camera. You will generate a new one for each camera you insert
using this method.

    $ uuidgen | sed -e 's/-//g'
    b47f48706d91414591cd6c931bf836b4
    $ sudo -u moonfire-nvr sqlite3 ~moonfire-nvr/db/db
    sqlite3> insert into camera (
        ...>     uuid, short_name, description, host, username, password,
        ...>     main_rtsp_path, sub_rtsp_path, retain_bytes) values (
        ...>     X'b47f48706d91414591cd6c931bf836b4', 'driveway',
        ...>     'Longer description of this camera', '192.168.1.101',
        ...>     'admin', '12345', '/Streaming/Channels/1',
        ...>     '/Streaming/Channels/2', 104857600);
    sqlite3> ^D
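
To double-check the inserted row (the `hex()` function renders the blob uuid
readably), a quick query such as this may help:

    $ sudo -u moonfire-nvr sqlite3 ~moonfire-nvr/db/db \
          "select short_name, hex(uuid), retain_bytes from camera;"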

## System Service

Moonfire NVR can be run as a systemd service. If you used `prep.sh`, this has
been done for you. If not, create
`/etc/systemd/system/moonfire-nvr.service`:

    [Unit]
    Description=Moonfire NVR
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/moonfire-nvr \
        --sample_file_dir=/var/lib/moonfire-nvr/sample \
        --db_dir=/var/lib/moonfire-nvr/db \
        --http_port=8080
    Type=simple
    User=moonfire-nvr
    Nice=-20
    Restart=on-abnormal
    CPUAccounting=true
    MemoryAccounting=true
    BlockIOAccounting=true

    [Install]
    WantedBy=multi-user.target

Note that the HTTP port currently has no authentication; it should not be
directly exposed to the Internet.

Complete the installation through `systemctl` commands:

    $ sudo systemctl daemon-reload
    $ sudo systemctl start moonfire-nvr.service
    $ sudo systemctl status moonfire-nvr.service
    $ sudo systemctl enable moonfire-nvr.service

See the [systemd](http://www.freedesktop.org/wiki/Software/systemd/)
documentation for more information. The [manual
pages](http://www.freedesktop.org/software/systemd/man/) for `systemd.service`
and `systemctl` may be of particular interest.

# Troubleshooting

While Moonfire NVR is running, logs will be written to `/tmp/moonfire-nvr.INFO`.
Also available are `/tmp/moonfire-nvr.WARNING` and `/tmp/moonfire-nvr.ERROR`,
which contain only messages of warning severity and above, and only error
messages, respectively.
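
To watch the log in real time, e.g. while testing a new camera configuration:

    $ tail -f /tmp/moonfire-nvr.INFO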
# <a name="help"></a> Getting help and getting involved
Please email the
[moonfire-nvr-users]([https://groups.google.com/d/forum/moonfire-nvr-users)
mailing list with questions, bug reports, feature requests, or just to say
you love/hate the software and why.

I'd welcome help with testing, development (in C++, JavaScript, and HTML), user
interface/graphic design, and documentation. Please email the mailing list
if interested. Patches are welcome, but I encourage you to discuss large
changes on the mailing list first to save effort.

C++ code should be written using C++11 features, should follow the [Google C++
style guide](https://google.github.io/styleguide/cppguide.html) for
consistency, and should be automatically tested where practical. But don't
worry about this too much; I'm much happier to work with you to refine a rough
draft patch than never see your contribution at all!

# License

This file is part of Moonfire NVR, a security camera digital video recorder.
Copyright (C) 2016 Scott Lamb <slamb@slamb.org>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

In addition, as a special exception, the copyright holders give
permission to link the code of portions of this program with the
OpenSSL library under certain conditions as described in each
individual source file, and distribute linked combinations including
the two.

You must obey the GNU General Public License in all respects for all
of the code used other than OpenSSL. If you modify file(s) with this
exception, you may extend this exception to your version of the
file(s), but you are not obligated to do so. If you do not wish to do
so, delete this exception statement from your version. If you delete
this exception statement from all source files in the program, then
also delete it here.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.