Both a "cargo update" and a bump of major versions of a few deps.
I left a few alone:
* base64 because some of the deps depend on 0.11 (and 0.9), so I don't
want to pull in a third version (0.12).
* ring because libpasta depends on this version and I don't want to pull
in two of them.
* time because the upgrade isn't trivial. Last I checked, time 0.2
couldn't do what I needed at all.
I also made tokio use parking_lot, since I pull it in anyway.
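Enabling that is just a Cargo.toml feature flag; a minimal sketch (the
version and any other enabled features are illustrative):

    [dependencies]
    tokio = { version = "0.2", features = ["parking_lot"] }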
This reduces the binary size noticeably on my macOS machine (#70):
                               unstripped  stripped
1. before switching to clap    11.1 MiB    6.7 MiB
2. after switching to clap     11.4 MiB    6.9 MiB
3. without regex               10.1 MiB    5.9 MiB
A couple reasons for this:
* the docopt crate is "unlikely to see significant future evolution",
and the wider docopt project is "mostly unmaintained at this point".
clap/structopt is more full-featured, has more natural subcommand
support, etc.
* it may allow me to shrink the binary (#70). This change alone seems
to be a slight regression, but it's a step toward getting rid of
regex, which is pretty large. And I feel less ridiculous now that I
no longer have two argument-parsing crates; prettydiff was already
pulling in structopt.
There are some behavior changes here:
* miscellaneous --help output changes and the like, as you'd expect
from switching argument-parsing libraries
* I properly used PathBuf and OsString for arguments that could
theoretically be non-UTF-8 (see the sketch below). I haven't tested
that it actually made any difference. I'm also still storing the
sample file dirname as "text" in the database to avoid causing a
diff when not doing a schema change.
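To illustrate the PathBuf point, a minimal structopt sketch (the struct
and flag names are hypothetical, not the actual definitions):

    use std::path::PathBuf;
    use structopt::StructOpt;

    #[derive(StructOpt)]
    struct Args {
        /// Directory holding the database; `from_os_str` keeps the raw
        /// OS encoding rather than requiring valid UTF-8.
        #[structopt(long, parse(from_os_str))]
        db_dir: PathBuf,
    }

    fn main() {
        let args = Args::from_args();
        println!("{}", args.db_dir.display());
    }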
The multipart stream / hanging GET approach worked in a prototype for a
single stream, but Chrome has a per-host limit of six connections. If I
try streaming all my cameras at once, I hit that limit. I can't open all
the streams, much less additional connections to load init segments and
such. WebSockets apparently have a much higher limit of 256.
The reqwest one is particularly notable because it means not having two
versions of hyper/http/tokio/futures/bytes. It also drops a number of
transitive deps; with some work I think I could stop depending on regex
now.
This doesn't take much advantage of async fns so far. For example, the
with_{form,json}_body functions are still designed to be used with
future combinators, when it'd now be more natural to call them from
async fns. But it's a start.
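As a toy sketch of the difference (not code from this repo; tokio is
used here only to drive the futures):

    use futures::future::{self, Future, FutureExt};

    // Combinator style: the logic is threaded through closures.
    fn doubled_combinator() -> impl Future<Output = i32> {
        future::ready(21).map(|n| n * 2)
    }

    // async fn style: the same logic reads top to bottom.
    async fn doubled_async() -> i32 {
        let n = future::ready(21).await;
        n * 2
    }

    #[tokio::main]
    async fn main() {
        assert_eq!(doubled_combinator().await, doubled_async().await);
    }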
Similarly, this still uses the old version of reqwest. Small steps.
Requires Rust 1.40 now. (async fns require 1.39, and http-serve 0.2.0
requires 1.40.)
I'm getting deprecation warnings for std::sync::ONCE_INIT, and I'm
not sure when std::sync::Once::new() became a const fn. Just as easy to
switch to parking_lot.
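The replacement is mechanical; a minimal sketch using parking_lot's
Once, whose new() is a const fn usable in a static:

    use parking_lot::Once;

    static INIT: Once = Once::new();

    fn init() {
        INIT.call_once(|| {
            // one-time setup goes here
        });
    }

    fn main() {
        init();
        init(); // the closure above runs only once
    }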
The immediate motivation is that Cargo.lock referred to a commit
in a PR branch of my nix fork that no longer exists. (I didn't know, but
it makes sense, that "git push -f" not only forcibly updates the branch
to refer to a new commit but also gets rid of orphaned commits.) Use a
moonfire branch that I'll keep stable until I'm ready to move on.
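Pinning to a named branch rather than a bare commit looks roughly like
this in Cargo.toml (the URL is illustrative, not the fork's actual
location):

    [dependencies]
    nix = { git = "https://github.com/example/nix", branch = "moonfire" }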
I also updated parking_lot and rusqlite to new major versions (nothing
in the interface that I care about has changed) and did a full cargo
update.
(I also considered the names "capabilities" and "scopes", but I think
"permissions" is the most widely understood.)
This is increasingly necessary as the web API becomes more capable.
Among other things, it allows:
* non-administrator users who can view video but can neither access
camera passwords nor change any state
* workers that update signal state based on cameras' built-in motion
detection or a security system's events but don't need to view videos
* control over what can be done without authenticating
Currently session permissions are just copied from user permissions,
but you could also imagine offering admin vs. non-admin sessions as a
checkbox when signing in. This would match the standard Unix workflow
of using a non-administrative session most of the time.
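As a rough illustration only (these field names are hypothetical, not
the actual schema), the model is a set of capability flags copied from
the user to the session at login:

    /// Capability flags; a session gets a copy of its user's flags.
    #[derive(Clone, Copy, Default)]
    struct Permissions {
        view_video: bool,
        read_camera_configs: bool, // includes camera passwords
        update_signals: bool,      // for motion/security-system workers
        admin: bool,               // change state, manage users, etc.
    }

    fn session_permissions(user: &Permissions) -> Permissions {
        // Currently just a copy; an "admin session?" checkbox at
        // sign-in could clear `admin` here instead.
        *user
    }

    fn main() {
        let user = Permissions { view_video: true, ..Default::default() };
        assert!(session_permissions(&user).view_video);
    }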
Relevant to my current signals work (#28) and to the addition of an
administrative API (#35, including #66).
This is so far completely untested, for use by a new UI prototype.
It creates a new URL endpoint which sends one video/mp4 media segment
per key frame, with the dependent frames included. This means there will
be about one key frame interval of latency (typically about a second).
This seems hard to avoid, as mentioned in issue #59.
Now each syncer has a binary heap of the times it plans to do a flush.
When one of those times arrives, it rechecks if there's something to do.
Seems more straightforward than rechecking each stream's first
uncommitted recording, especially with the logic to retry failed flushes
every minute.
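A toy sketch of the heap (simplified names, not the actual types):

    use std::cmp::Reverse;
    use std::collections::BinaryHeap;
    use std::time::{Duration, Instant};

    struct Syncer {
        // Reverse turns std's max-heap into a min-heap, so the soonest
        // planned flush is always at the top.
        planned_flushes: BinaryHeap<Reverse<Instant>>,
    }

    impl Syncer {
        fn plan_flush(&mut self, at: Instant) {
            self.planned_flushes.push(Reverse(at));
        }

        /// How long to wait before rechecking whether there's
        /// something to do, if any flush is planned.
        fn next_timeout(&self, now: Instant) -> Option<Duration> {
            let Reverse(at) = self.planned_flushes.peek()?;
            Some(at.saturating_duration_since(now))
        }
    }

    fn main() {
        let mut s = Syncer { planned_flushes: BinaryHeap::new() };
        s.plan_flush(Instant::now() + Duration::from_secs(60));
        println!("{:?}", s.next_timeout(Instant::now()));
    }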
Also improved the info! log for each flush to show the actual
recordings being flushed, for better debuggability.
No new tests right now. :-( They're tricky to write. One problem is that
it's hard to get the timing right: a different flush has to happen
after Syncer::save's database operations and before Syncer::run calls
SimulatedClocks::recv_timeout with an empty channel[*], advancing the
time. I've thought of a few ways of doing this:
* adding a new SyncerCommand to run something. It's messy (the command
has to be added from the mock of one of the actions done by the
save), and Box<dyn FnOnce() + 'static> not being callable (see
rust-lang/rust#28796) makes it especially annoying.
* replacing SimulatedClocks with something more like MockClocks.
Lots of boilerplate. Maybe I need to find a good general-purpose
Rust mock library. (mockers sounds good but I want something that
works on stable Rust.)
* bypassing the Syncer::run loop, instead manually running iterations
from the test.
Maybe the last way is the best for now. I'm likely to try it soon.
[*] actually, it's calling Receiver::recv_timeout directly;
Clocks::recv_timeout is dead code now? oops.
Some caveats:
* it doesn't record the peer IP yet, which makes it harder to verify
sessions are valid. This is a little annoying to do in hyper now
(see hyperium/hyper#1410). The direct peer might not be what we want
right now anyway because there's no TLS support yet (see #27). In
the meantime, the sane way to expose Moonfire NVR to the Internet is
via a proxy server, and recording the proxy's IP is not useful. It
might be better to interpret an RFC 7239 Forwarded header (and/or
the older X-Forwarded-{For,Proto} headers); see the sketch after
this list.
* it doesn't ever use Secure (https-only) cookies, for a similar
reason. It's not safe to use them even with a TLS proxy until this
is fixed.
* there's no "moonfire-nvr config" support for inspecting/invalidating
sessions yet.
* in debug builds, logging in is crazy slow. See libpasta/libpasta#9.
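A sketch of the header interpretation mentioned above (illustrative
only; real code must check that the direct peer is a trusted proxy
before believing the header):

    use std::net::IpAddr;

    /// Picks the client IP out of an X-Forwarded-For header by taking
    /// the final entry, which is appended by the proxy nearest to us.
    fn forwarded_client_ip(header: &str) -> Option<IpAddr> {
        header.rsplit(',').next()?.trim().parse().ok()
    }

    fn main() {
        let h = "203.0.113.7, 198.51.100.2";
        assert_eq!(forwarded_client_ip(h), "198.51.100.2".parse().ok());
    }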
Some notes:
* I removed the Javascript "no-use-before-define" lint, as some of
the functions form a cycle.
* Fixed #20 along the way. I needed to add support for properly
returning non-OK HTTP statuses to signal unauthorized and such.
* I removed the Access-Control-Allow-Origin header support, which was
at odds with the "SameSite=lax" in the cookie header. The "yarn
start" method for running a local proxy server accomplishes the same
thing as the Access-Control-Allow-Origin support in a more secure
manner.
Fixes #60
The reqwest dependency is significant because the old version required
an old version of openssl, complicating compilation on newer platforms.
reqwest also pulled in old/duplicate versions of hyper, tokio, etc.
Nice to drop a lot of that cruft.
I left rusqlite and uuid alone because they had breaking changes I
didn't want to mess with at the moment.
Bumped the minimum Rust version to 1.30.0, as required by the
new encoding_rs crate (and perhaps other things).
I want to use hyper::server::Request::bytes_mut(), so an update is
needed. Update everything at once. Most notably, the http-serve update
starts using the http crate types for some things. (More to come.)
* separate these out into a new file, writer.rs, as dir.rs was getting
unwieldy.
* extract traits for the parts of SampleFileDir and std::fs::File that the
writers need; set up mock implementations (see the sketch after this list).
* move clock.rs to a new base crate to be accessible from the db crate.
* add tests that exercise all the retry paths.
* bugfix: account for the new recording's bytes when calculating how much to
delete.
* bugfix: when retrying an unlink failure in collect_garbage, we shouldn't
warn about all the recordings no longer existing. Do this by retrying each
step rather than the whole procedure again.
* avoid double-panic scenarios, which I hit while tweaking the mocks. These
are quite annoying to debug as Rust doesn't print information about either
panic. I ended up using lldb to get a backtrace. Better to be cautious about
what we're doing when already panicking.
* give more context on raw::insert_recording errors, which I hit as well while
tweaking the new tests.
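A rough sketch of the trait extraction (names are illustrative, not
the actual definitions in writer.rs):

    use std::io;

    /// The few directory operations the writer needs, so tests can
    /// substitute a mock for the real SampleFileDir.
    trait DirWriter {
        type File: FileWriter;
        fn create_file(&self, id: u64) -> io::Result<Self::File>;
        fn sync(&self) -> io::Result<()>;
        fn unlink_file(&self, id: u64) -> io::Result<()>;
    }

    /// The few operations the writer needs from std::fs::File.
    trait FileWriter {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize>;
        fn sync_all(&self) -> io::Result<()>;
    }

    impl FileWriter for std::fs::File {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            io::Write::write(self, buf)
        }
        fn sync_all(&self) -> io::Result<()> {
            std::fs::File::sync_all(self)
        }
    }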
It should reduce compile time / memory usage to put quite a bit of the code
into a separate crate. I also intend to limit visibility of some things to
only within the db crate, but that's for a future change. This is the smallest
move that will compile.
The idea is to avoid the problems described in src/schema.proto; those
possibilities have bothered me for a while. A bonus is that (in a future
commit) it can replace the sample file uuid scheme in favor of using
<camera_uuid>-<stream_type>/<recording_id> for several advantages:
* on data integrity problems (specifically, extra sample files), more
information to use to understand what happened.
* no more reserving sample files prior to using them. This avoids some extra
database transactions on startup (now there are two extra in total
rather than one extra per stream). It also simplifies an upcoming change I
want to make in which some streams are not flushed immediately, reducing
the write load significantly (maybe one per minute total rather than one
per stream per minute).
* get rid of eight bytes per playback cache entry in RAM (and nine bytes
per recording_playback row on flash).
The implementation is still pretty rough in places:
* Lack of tests.
* Poor code organization. In particular, SampleFileDirectory::write_meta
shouldn't be exposed beyond db. I'm thinking about moving db.rs and
SampleFileDirectory to a new crate, moonfire_nvr_db. This would improve
compile times as well.
* No tooling for renaming a sample file directory.
* Config subcommand still panics in conditions that can be reasonably
expected to happen.
* the "lib: {}" print didn't do anything. It turns out that the pkg-config
crate emits the necessary metadata for linking automatically. I had the
wrong format and didn't notice because something else did it correctly.
* gcc::Config is deprecated; the new name is Build.
* and the crate is now called cc, version 1.0.
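The rename itself is mechanical; a minimal build.rs sketch (the source
file name is hypothetical):

    // build.rs
    fn main() {
        // gcc::Config::new() became cc::Build::new() in cc 1.0.
        cc::Build::new()
            .file("src/shim.c")
            .compile("shim");
    }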
Stuff found while looking at #11. Still haven't figured that issue out.
I'd temporarily pointed this to a local path for development and didn't notice
it was still in place when committing. Back to the git path that works for
everyone.
The Javascript is pretty amateurish, I'm sure, but at least it's something
to iterate from. It's already much more pleasant for browsing through
videos in several ways:
* more responsive to load only a day at a time rather than 90+ days
* much easier to see the same time segment on several cameras
* more pleasant to have the videos load as a popup rather than a link
that blows away your position in an enormous list
* exposes the fancier .mp4 generation options: splitting at lengths
other than the default, trimming to an arbitrary start and end time,
including a subtitle track with timestamps.
There's a slight regression in functionality: I didn't match the former
top-level page, which showed how much of its disk allocation each camera
had used and the total duration of video. This is exposed in the JSON API,
so it shouldn't be too hard to add back.