After a frustrating search for a suitable channel to use for shutdown
(tokio::sync::watch::Receiver and
futures::future::Shared<tokio::sync::oneshot::Receiver> didn't look
quite right) in which I rethought my life decisions, I finally just made
my own (server/base/shutdown.rs). We can easily poll it or wait for it
in async or sync contexts. Most importantly, it's convenient; not that
it really matters here, but it's also efficient.
We now do a slightly better job of propagating a "graceful" shutdown
signal, and this channel will give us tools to improve it over time.
* Shut down even when writer or syncer operations are stuck. Fixes #117
* Not done yet: streamers should shut down instantly rather than waiting
  for a connection attempt or the next frame. I'll probably implement
  that when removing --rtsp-library=ffmpeg; the code should be cleaner
  then.
* Not done yet: fix a couple places that sleep for up to a second when
they could shut down immediately. I just need to do the plumbing for
mock clocks to work.
I also implemented an immediate shutdown mode, activated by a second
signal. I think this will mitigate the streamer wait situation.
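For illustration, here's a minimal sketch of the kind of channel this
describes (the names and structure below are made up for this example, not
the actual server/base/shutdown.rs API): an atomic flag gives cheap
synchronous polling, and a tokio::sync::Notify provides the async wait.

```rust
// Hypothetical sketch only; the real API lives in server/base/shutdown.rs.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tokio::sync::Notify;

struct Inner {
    requested: AtomicBool,
    notify: Notify,
}

pub struct Sender(Arc<Inner>);

#[derive(Clone)]
pub struct Receiver(Arc<Inner>);

pub fn channel() -> (Sender, Receiver) {
    let inner = Arc::new(Inner {
        requested: AtomicBool::new(false),
        notify: Notify::new(),
    });
    (Sender(inner.clone()), Receiver(inner))
}

impl Sender {
    /// Requests shutdown, waking every waiter.
    pub fn shutdown(&self) {
        self.0.requested.store(true, Ordering::SeqCst);
        self.0.notify.notify_waiters();
    }
}

impl Receiver {
    /// Cheap non-blocking check, usable from sync code (e.g. a writer loop).
    pub fn should_shut_down(&self) -> bool {
        self.0.requested.load(Ordering::SeqCst)
    }

    /// Resolves once shutdown has been requested, usable from async code.
    pub async fn wait(&self) {
        let notified = self.0.notify.notified();
        tokio::pin!(notified);
        // Register interest before checking the flag so a shutdown() racing
        // with this call can't be missed; the flag is always set before
        // notify_waiters() is called.
        notified.as_mut().enable();
        if self.should_shut_down() {
            return;
        }
        notified.await;
    }
}
```

Writer/syncer threads can poll should_shut_down() in their loops; async
tasks can select! on wait().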
* upgrade to Retina 0.3.1 which automatically tears down sessions
* wait out stale sessions before reconnecting
* wait for teardown to complete before shutting down
This adds some pressure on #117: it will keep waiting for the stale
session to expire even if the user has requested shutdown. I'll try
to address that next.
While I'm here, return a clean error if a non-initial video frame
includes a parameter change, rather than doing something crazy (#42).
It's still broken under ffmpeg, it's untested, and it's not as clean
as seamlessly starting a new recording with the new parameters, but
it's better than nothing.
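The shape of the check is roughly this (a sketch with invented type names,
not the real Moonfire NVR structures): remember the parameters from the
first frame and bail out cleanly if a later frame disagrees.

```rust
// Illustrative sketch; type and field names here are invented.
#[derive(PartialEq, Eq)]
struct VideoParameters {
    sps: Vec<u8>,
    pps: Vec<u8>,
}

struct StreamState {
    established: Option<VideoParameters>,
}

impl StreamState {
    /// Accepts the first frame's parameters; returns a clean error if a
    /// later frame changes them rather than trying to handle it mid-recording.
    fn on_parameters(&mut self, new: VideoParameters) -> Result<(), String> {
        match &self.established {
            None => {
                self.established = Some(new);
                Ok(())
            }
            Some(old) if *old == new => Ok(()),
            Some(_) => Err(
                "unsupported mid-stream video parameter change; \
                 restart the stream to pick up the new parameters"
                    .to_owned(),
            ),
        }
    }
}
```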
This isn't well-tested and doesn't yet support an initial connection
timeout. But in a quick test, it successfully returns video!
I'd like to do some more aggressive code restructuring for zero-copy
and to have only one writer thread per sample file directory (rather
than the syncer thread + one writer thread per RTSP stream). But I'll
likely wait until I drop support for ffmpeg entirely.
This is (slightly) complicating the switch from ffmpeg to retina
as the RTSP client. And it's not really that close to what I want
to end up with for analytics:
* I'd prefer the analytics happen in a separate process for
several reasons
* Feeding the entire frame to the object detector doesn't produce
good results.
* It doesn't do anything with the results yet anyway.
Reading from the mmap()ed region in the tokio threads could cause
them to stall:
* That could affect UI serving when there were concurrent
UI requests (i.e., not just requests that needed the reads in
question anyway).
* If there's a faulty disk, it could cause the UI to totally hang.
Better to not mix disks between threads.
* Soon, I want to handle RTSP from the tokio threads (#37). Similarly,
we don't want RTSP streaming to block on operations from unrelated
disks.
I went with just one thread per disk, which I think is sufficient.
But it'd be possible to use a fixed-size pool instead, which might improve
latency when some pages are already cached.
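A rough sketch of the per-disk thread shape (hypothetical names, and the
real code deals in mmap()ed chunks rather than whole-file reads): async
callers send a request plus a oneshot reply channel, and the blocking work
stays on that disk's own thread.

```rust
// Hypothetical shape only; the real request type is richer than this.
use std::path::PathBuf;
use std::sync::mpsc;
use std::thread;

struct ReadRequest {
    path: PathBuf,
    reply: tokio::sync::oneshot::Sender<std::io::Result<Vec<u8>>>,
}

/// Spawns one blocking reader thread for a single sample file directory.
/// A stalled disk then stalls only this thread, never a tokio worker.
fn spawn_dir_reader() -> mpsc::Sender<ReadRequest> {
    let (tx, rx) = mpsc::channel::<ReadRequest>();
    thread::spawn(move || {
        for req in rx {
            let result = std::fs::read(&req.path);
            let _ = req.reply.send(result); // the requester may have given up.
        }
    });
    tx
}

// From async code (sketch):
//   let (reply_tx, reply_rx) = tokio::sync::oneshot::channel();
//   reader.send(ReadRequest { path, reply: reply_tx })?;
//   let data = reply_rx.await??;
```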
I also dropped the memmap dependency. I had to compute the page
alignment anyway to get mremap to work, and Moonfire NVR is already
Unix-specific, so the memmap and memmap2 crates weren't adding much
value.
Fixes #88
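The alignment math itself is small; a hedged sketch of doing it directly
against libc (not the actual implementation) looks something like this:

```rust
// Sketch of the alignment math only. mmap requires a page-aligned file
// offset, so round the caller's offset down and grow the length to
// compensate; the requested bytes then start `shift` bytes into the mapping.
use std::os::unix::io::RawFd;

unsafe fn map_range(fd: RawFd, offset: u64, len: usize) -> std::io::Result<(*mut u8, usize, usize)> {
    let page_size = libc::sysconf(libc::_SC_PAGESIZE) as u64; // a power of two.
    let aligned_offset = offset & !(page_size - 1);
    let shift = (offset - aligned_offset) as usize;
    let map_len = len + shift;
    let ptr = libc::mmap(
        std::ptr::null_mut(),
        map_len,
        libc::PROT_READ,
        libc::MAP_SHARED,
        fd,
        aligned_offset as libc::off_t,
    );
    if ptr == libc::MAP_FAILED {
        return Err(std::io::Error::last_os_error());
    }
    // Caller reads starting at ptr.add(shift) and munmap(ptr, map_len)s when done.
    Ok((ptr as *mut u8, map_len, shift))
}
```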
* my dad's GW4089IP cameras use 720x480
* some Reolink cameras use 640x352
* I'm playing with rotated cameras (16x9 -> 9x16)
I'd prefer to calculate pasp from a configured camera aspect ratio
rather than hardcode the assumption that these are 16x9, but that
requires a schema change. This is an improvement for now.
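For reference, the arithmetic behind a hardcoded-16x9 pasp looks like the
sketch below (illustrative only): scale 16:9 by the coded size and reduce.

```rust
// Illustrative arithmetic: derive the pasp box's hSpacing/vSpacing from
// the coded size under the hardcoded 16x9 display assumption.
fn pasp_for_16x9(coded_width: u32, coded_height: u32) -> (u32, u32) {
    fn gcd(mut a: u32, mut b: u32) -> u32 {
        while b != 0 {
            let t = a % b;
            a = b;
            b = t;
        }
        a
    }
    // pixel aspect ratio = (16 * coded_height) : (9 * coded_width), reduced.
    let h_spacing = 16 * coded_height;
    let v_spacing = 9 * coded_width;
    let d = gcd(h_spacing, v_spacing);
    (h_spacing / d, v_spacing / d)
}

// pasp_for_16x9(720, 480) == (32, 27)
// pasp_for_16x9(640, 352) == (44, 45)
```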
Chrome appears to time out at 60 seconds of inactivity otherwise.
I think it's better to keep the stream open, even if the camera is
broken.
The implementation looks awkward, but that might be the state of Rust
async right now.
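The general shape is something like the sketch below (heavily hedged: the
stream and send types are stand-ins, not the real ones): select between the
camera's frames and a periodic keepalive tick so the connection never sits
silent for a full minute.

```rust
// Hedged sketch of the select! shape, not the real live-stream handler.
use futures::{Stream, StreamExt};
use std::time::Duration;

async fn forward_with_keepalive(
    mut frames: impl Stream<Item = Vec<u8>> + Unpin,
    mut send: impl FnMut(Vec<u8>),
) {
    // Well under Chrome's ~60-second idle timeout.
    let mut keepalive = tokio::time::interval(Duration::from_secs(30));
    keepalive.tick().await; // the first tick fires immediately; skip it.
    loop {
        tokio::select! {
            frame = frames.next() => match frame {
                Some(f) => send(f),
                None => break, // the camera stream ended outright.
            },
            // If the camera is broken, frames just stays pending and these
            // pings keep the browser's connection alive.
            _ = keepalive.tick() => send(Vec::new()),
        }
    }
}
```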
I'm tired of all the boilerplate, so use the new
GPL-3.0-linking-exception license identifier instead in all the server
components.
I left the ui stuff alone because I'm just going to replace it (#111).
Add a checker for the header because it's easy to forget.
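The checker could be as simple as something like this (a hypothetical
sketch; the real one may be a script or structured quite differently): walk
the source tree and flag files whose first few lines don't mention the
identifier.

```rust
// Hypothetical header-check sketch.
use std::path::{Path, PathBuf};

fn check_headers(dir: &Path) -> std::io::Result<Vec<PathBuf>> {
    const EXPECTED: &str = "GPL-3.0-linking-exception";
    let mut missing = Vec::new();
    for entry in std::fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            missing.extend(check_headers(&path)?);
        } else if path.extension().map_or(false, |e| e == "rs") {
            let contents = std::fs::read_to_string(&path)?;
            // The license header should appear in the first few lines.
            let header: String = contents.lines().take(5).collect::<Vec<_>>().join("\n");
            if !header.contains(EXPECTED) {
                missing.push(path);
            }
        }
    }
    Ok(missing)
}
```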
This was mostly straightforward. The most confusing part was the Sync
bound change on body streams. I copied what hyper did and it seemed to
work. /shruggie
Besides being clearer about what belongs to which, this helps with
Docker caching. The server and ui parts are only rebuilt when their
respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the server
and ui build process can be reused when layer-wide caching fails.