In particular, this returns all the extra configuration data that will be
necessary to actually instantiate streams from the database rather than the
soon-to-be-removed configuration file.
Before, I had a gross hardcoded path in moonfire-db.cc plus a hacky
Recording::sample_file_path (essentially StrCat(sample_file_dir, "/", uuid)).
Now, things expect to take a File* to the sample file directory and use
openat(2); see the sketch after the list below. Several things had to change
in the process:
* RealFileSlice now takes a File* dir.
* File has an Open that returns an fd (for RealFileSlice's benefit).
* BuildMp4 is now in WebInterface rather than MoonfireDatabase. The latter
only manages the SQLite database, so it shouldn't know anything about the
sample file directory.
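Here's a sketch of the new pattern; the helper below is hypothetical (the real
code goes through the File abstraction), but the openat(2) call is the heart
of it:

    #include <fcntl.h>
    #include <string>

    // Hypothetical helper: open a sample file by uuid relative to an
    // already-open directory file descriptor. No path concatenation; the
    // directory fd anchors the lookup.
    int OpenSampleFile(int sample_file_dir_fd, const std::string &uuid) {
      return openat(sample_file_dir_fd, uuid.c_str(), O_RDONLY);
    }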
This reverts commit ad4beac464.
That commit wasn't as advertised; I had several other changes mixed into my
working copy. I'd also copied a working copy from one path to another, and
it turns out the cmake build subdir was still referring to the original, so
I hadn't realized this commit didn't even build. :(
I didn't properly update the new duration calculation when switching from
ascending to descending order.
Also, on the Pi, 1-hour recordings are noticeably faster to load.
* Schema revisions. The most dramatic is the addition of a covering index on
(camera_id, start_time_90k) that avoids the need to make sparse accesses
into the recording table (where the desired data is intermixed with both
the large blobs and rows from other cameras). A query over a year's data
previously took many seconds (6+ even in a form without the video_index)
and now is roughly 10X faster (a sketch of the index follows this list).
Queries over a couple of weeks should now be too fast to notice.
Other changes shrink the rows, such as storing duration_90k instead of
end_time_90k (more compact varint encoding) and video_sample_entry_id
(typically 1 byte) instead of video_sample_entry_sha1 (20 bytes).
And more CHECK constraints for good measure.
* Caching of expensive computations and logic to keep them up to date.
The top-level web view previously went through the entire recording table,
which was even slower. Now it is served from a small map in RAM.
* Expanded the scope of operations to cover (hopefully) everything needed for
recording into the SQLite database.
* Added tests of MoonfireDatabase. These are basic tests that don't
exercise a lot of error cases, but at least they exist.
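To make the covering index concrete, here's a sketch with sqlite3_exec. The
trailing column list is illustrative, not the exact schema; the point is that
naming every column the scan reads lets SQLite answer the query from the
index alone, never touching the fat recording rows:

    #include <sqlite3.h>

    void CreateRecordingCoverIndex(sqlite3 *db) {
      char *errmsg = nullptr;
      int ret = sqlite3_exec(db,
                             "create index recording_cover on recording ("
                             "    camera_id,"
                             "    start_time_90k,"
                             "    duration_90k,"
                             "    video_sample_entry_id)",
                             nullptr, nullptr, &errmsg);
      if (ret != SQLITE_OK) {
        // log errmsg here...
        sqlite3_free(errmsg);
      }
    }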
The main MoonfireDatabase functionality still missing is support for quickly
seeing what calendar days have data over the full timespan of a camera. This
is more data to compute and cache.
On my laptop, with a month's data, a test query would take 0.1 to 0.2 seconds
before. Now it takes 0.001 to 0.004 seconds.
I improved this by creating and taking advantage of an index on start time.
It's a little more complicated than that because the desired timespan is
specified in terms of a recording's start and end time, not start time alone.
I defined a maximum duration of a recording (5 minutes) and specified this
with an extra condition in the query so that the end time can be used to
narrow the valid range of start times.
"explain query plan select ..." output confirms it's using the index with
both > and < comparisons:
0|0|0|SEARCH TABLE recording USING INDEX recording_start_time_90k (start_time_90k>? AND start_time_90k<?)
0|1|1|SEARCH TABLE video_sample_entry USING INDEX sqlite_autoindex_video_sample_entry_1 (sha1=?)
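The shape of the query, as a sketch (column and parameter names are
illustrative): a recording overlapping [start, end) must satisfy
start_time_90k < end and start_time_90k + duration_90k > start. The second
condition alone can't use the start-time index, but the 5-minute cap implies
start_time_90k > start - max_duration, which can:

    #include <cstdint>

    // 5 minutes in 90 kHz units; recordings are capped at this duration.
    constexpr int64_t kMaxRecordingDuration90k = 5 * 60 * 90000;

    const char kListRecordingsSql[] =
        "select id, start_time_90k, duration_90k "
        "from recording "
        "where start_time_90k > :start_time_90k - :max_duration_90k "
        "and start_time_90k < :end_time_90k "
        "and start_time_90k + duration_90k > :start_time_90k "
        "order by start_time_90k";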
I also refactored ListMp4Recordings out of BuildMp4File to make the measurement
easier.
This is almost certain to have performance problems with large databases,
but it's a useful starting point.
No tests yet. It shouldn't be too hard to add some for moonfire-db.h, but
I'm impatient to fake up enough data to check on the performance and see
what needs to change there first.
This wraps libevent's evhttp_parse_query_str and friends. It's easier to use
than the raw libevent stuff because it handles initialization (formerly not
done properly in profiler.cc) and cleans up with RAII.
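A minimal sketch of the RAII idea (not necessarily the class's actual shape):

    #include <sys/queue.h>
    #include <event2/http.h>
    #include <event2/keyvalq_struct.h>

    class QueryParameters {
     public:
      explicit QueryParameters(const char *query_str) {
        TAILQ_INIT(&params_);  // initialized even if parsing fails below.
        ok_ = evhttp_parse_query_str(query_str, &params_) == 0;
      }
      ~QueryParameters() { evhttp_clear_headers(&params_); }
      QueryParameters(const QueryParameters &) = delete;
      void operator=(const QueryParameters &) = delete;

      bool ok() const { return ok_; }
      // Returns nullptr if the parameter is absent.
      const char *Get(const char *key) const {
        return evhttp_find_header(&params_, key);
      }

     private:
      struct evkeyvalq params_;
      bool ok_ = false;
    };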
I wrote the old interface before playing much with SQLite. Now that I've
played around with it a bit, I found many ways to make the interface more
pleasant and foolproof:
* it opens the database in a mode that honors foreign keys and
returns extended result codes.
* it forces locking to avoid SQLITE_BUSY and
sqlite3_{changes,last_insert_rowid} race conditions.
* it supports named bind parameters.
* it defers some errors until Step() to reduce caller verbosity.
* it automatically handles calling reset, which was quite easy to forget.
* it remembers the Step() return value, which makes the row loop ever so
slightly more pleasant.
* it tracks transaction status.
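As a self-contained sketch of two of those ideas, named binds and automatic
reset (illustrative only, not the wrapper's real interface):

    #include <sqlite3.h>

    class RunningStatement {
     public:
      explicit RunningStatement(sqlite3_stmt *stmt) : stmt_(stmt) {}
      ~RunningStatement() { sqlite3_reset(stmt_); }  // can't be forgotten.

      // Named parameters, e.g. BindInt64(":camera_id", 5), stay correct
      // as statements evolve.
      void BindInt64(const char *name, sqlite3_int64 value) {
        int index = sqlite3_bind_parameter_index(stmt_, name);
        if (index == 0 ||
            sqlite3_bind_int64(stmt_, index, value) != SQLITE_OK) {
          failed_ = true;  // error deferred; reported from Step().
        }
      }

      // Remembers the last return value for less verbose row loops.
      int Step() {
        if (failed_) return status_ = SQLITE_ERROR;
        return status_ = sqlite3_step(stmt_);
      }
      int status() const { return status_; }

     private:
      sqlite3_stmt *stmt_ = nullptr;
      int status_ = SQLITE_OK;
      bool failed_ = false;
    };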
This isn't as much of a speed-up as you might imagine; most of the large HTTP
content was served from mmap()ed files, which are relatively efficient. The
big improvement
here is that it's now possible to serve large files (4 GiB and up) on 32-bit
machines. This actually works: I was just able to browse a 25-hour, 37 GiB
.mp4 file on my Raspberry Pi 2 Model B. It takes about 400 ms to start serving
each request, which isn't exactly zippy but might be forgivable for such a
large file. I still intend for the common request from the web interface to be
for much smaller fragmented .mp4 files.
Speed could be improved later through caching. Right now my test code is
creating a fresh VirtualFile from a database query on each request, even
though it hasn't changed. The tricky part will be doing cache invalidation
cleanly if it does change: new recordings are added to the requested time
range, recordings are deleted, or existing recordings' timestamps are changed.
The downside to the approach here is that it requires libevent 2.1 for
evhttp_send_reply_chunk_with_cb. Unfortunately, Ubuntu 15.10 and Debian Jessie
still bundle libevent 2.0. There are a few possible improvements here:
1. fall back to assuming chunks are added immediately, so that people with
libevent 2.0 get the old bad behavior and people with libevent 2.1 get the
better behavior. This is kind of lame, though; it's easy to go through
the whole address space pretty fast, particularly when browsers send out
requests so quickly that there may be some unintentional concurrency.
2. alter the FileSlice interface to return a pointer/destructor rather than
add something to the evbuffer. HttpServe would then add each chunk via
evbuffer_add_reference, and it'd supply a cleanupfn that (in addition to
calling the FileSlice-supplied destructor) notes that this chunk has been
fully sent. For all the currently-used FileSlices, this shouldn't be too
hard, and there are a few other reasons it might be beneficial:
* RealFileSlice could call madvise() to control the OS buffering
* RealFileSlice could track when file descriptors are open, and thus when
FileManager's unlink() calls won't actually free up space
* It feels dirty to expose libevent stuff through the otherwise-nice
FileSlice interface.
3. support building libevent 2.1 statically in-tree if the OS-supplied
libevent is unsuitable.
I'm tempted to go with #2, but probably not right now. More urgent to commit
support for writing the new format and the wrapper bits for viewing it.
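For the record, option #2's zero-copy flow might look roughly like this;
everything here other than the libevent calls is hypothetical:

    #include <cstddef>
    #include <event2/buffer.h>

    struct ChunkState {
      // would hold the FileSlice-supplied destructor plus bookkeeping for
      // the "fully sent" notification.
    };

    // libevent invokes this once the referenced bytes have been drained.
    void OnChunkDone(const void *data, size_t datalen, void *extra) {
      ChunkState *state = static_cast<ChunkState *>(extra);
      // ...run the slice's destructor, note the chunk as sent, perhaps
      // schedule the next one...
      delete state;
    }

    void AddChunk(struct evbuffer *out, const void *data, size_t len,
                  ChunkState *state) {
      // Zero-copy: the evbuffer holds a reference rather than copying.
      evbuffer_add_reference(out, data, len, OnChunkDone, state);
    }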
This avoids iterating through the video index for the "interior" recordings of
a virtual file. It cuts the time to generate the size of a ~8-hour / 15 fps
file from about 60 ms to about 10 ms. I expect better savings on a Raspberry
Pi 2, for longer recordings, and for higher frame rates. The total time here
can be significant; on one ~day-long recording on the Pi, it was several
seconds. I'm optimistic this will help with that.
It'd also be possible to optimize DecodeVar32 (perhaps by unrolling the loop)
but better to remove a call than to optimize one.
To add the fast path, we need a new field "video_sync_samples" in the
recording table to calculate the length of the stss table. Storage cost should
be minimal; I think typically two bytes in SQLite's record format (serial type
1, value < 128), described here: <https://www.sqlite.org/fileformat2.html>.
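The closed form this enables, from the ISO BMFF stss layout (a sketch of my
arithmetic, not the project's code):

    #include <cstdint>

    // stss: size (4) + type (4) + version/flags (4) + entry_count (4),
    // then one 4-byte sample number per sync sample.
    uint64_t StssBoxSize(uint64_t video_sync_samples) {
      return 16 + 4 * video_sync_samples;
    }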
* Fix the mdat box size, which was not properly including the length of the
box header itself; see the sketch after this list. (The "mp4file" tool nicely
diagnosed this corruption.)
* Fix the stsc box. The first number of each entry is meant to be a chunk
index, not a sample index. This was causing strange behavior in basically
any video player for multi-recording videos.
* Populate etag and last-modified so that Range: requests can work properly.
The etag must be changed every time the generated file format changes.
There's a serial number constant for this purpose and a test meant to help
catch such problems.
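The mdat sketch promised above (my arithmetic from the ISO BMFF box layout,
not the actual code):

    #include <cstdint>

    // A box's size field counts the entire box, including the 8-byte
    // header (4-byte size + 4-byte "mdat" type). Boxes over 4 GiB instead
    // use size=1 plus a 64-bit largesize, a 16-byte header.
    uint64_t MdatBoxSize(uint64_t sample_data_bytes) {
      return 8 + sample_data_bytes;
    }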
This is still pretty rough. For example, there's no test coverage of virtual
files based on multiple recordings. The etag and last-modified code are stubs.
And various other conditions aren't tested at all. But it does appear to work
in a test that does a round-trip from a .mp4 file, so it should be a decent
starting point.
This code isn't exactly pretty, particularly the hardcoded lengths, but it
does work. I'll have a different mechanism for calculating the length and
nesting structure for the more dynamic parts of the moov atom. This way is
convenient when generating a single string of mostly static data.