While I'm here, return a clean error if a non-initial video frame
includes a parameter change, rather than doing something crazy (#42).
It's still broken under ffmpeg, it's untested, and it's not as clean
as seamlessly starting a new recording with the new parameters, but
it's better than nothing.
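A minimal sketch of the check (the names and types here are made up,
not the actual writer code):

    struct VideoSampleEntry; // stands in for codec parameters (SPS, PPS, etc.)

    struct Writer {
        video_sample_entry: Option<VideoSampleEntry>,
    }

    impl Writer {
        /// Accepts parameters only before the first frame; errors cleanly after.
        fn process_parameters(&mut self, new: VideoSampleEntry) -> Result<(), String> {
            if self.video_sample_entry.is_some() {
                // Seamlessly starting a new recording with the new parameters
                // would be nicer; for now, fail with a clear message.
                return Err("unsupported mid-stream video parameter change".to_owned());
            }
            self.video_sample_entry = Some(new);
            Ok(())
        }
    }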
This isn't well-tested and doesn't yet support an initial connection
timeout. But in a quick test, it successfully returns video!
I'd like to do some more aggressive code restructuring for zero-copy
and to have only one writer thread per sample file directory (rather
than the syncer thread + one writer thread per RTSP stream). But I'll
likely wait until I drop support for ffmpeg entirely.
This is (slightly) complicating the switch from ffmpeg to retina
as the RTSP client. And it's not really that close to what I want
to end up with for analytics:
* I'd prefer the analytics happen in a separate process, for
several reasons.
* Feeding the entire frame to the object detector doesn't produce
good results.
* It doesn't do anything with the results yet anyway.
Reading from the mmap()ed region in the tokio threads could cause
them to stall:
* That could affect UI serving when there were concurrent
UI requests (i.e., not just requests that needed the reads in
question anyway).
* If there's a faulty disk, it could cause the UI to hang entirely.
Better not to share threads between disks.
* Soon, I want to handle RTSP from the tokio threads (#37). Similarly,
we don't want RTSP streaming to block on operations from unrelated
disks.
I went with just one thread per disk, which I think is sufficient.
But it'd be possible to use a fixed-size pool instead, which might
improve latency when some pages are already cached.
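Roughly the shape of it (a sketch with made-up types, assuming tokio;
the real code differs): async code enqueues a request over a channel
and awaits a oneshot reply, while the disk's dedicated thread does the
potentially-stalling work.

    use tokio::sync::oneshot;

    /// A read request handed to a disk's dedicated thread.
    struct ReadRequest {
        offset: u64,
        len: usize,
        reply: oneshot::Sender<std::io::Result<Vec<u8>>>,
    }

    /// The disk's one blocking thread: it does the possibly-stalling reads
    /// so tokio worker threads never block on this disk.
    fn reader_thread(rx: std::sync::mpsc::Receiver<ReadRequest>) {
        while let Ok(req) = rx.recv() {
            let _ = req.reply.send(read_from_mmap(req.offset, req.len));
        }
    }

    /// Copies bytes out of the mmap()ed region; may fault pages in from disk.
    fn read_from_mmap(_offset: u64, _len: usize) -> std::io::Result<Vec<u8>> {
        unimplemented!()
    }

    /// From async code: enqueue a request and await the reply without blocking.
    async fn read(
        tx: &std::sync::mpsc::Sender<ReadRequest>,
        offset: u64,
        len: usize,
    ) -> std::io::Result<Vec<u8>> {
        let (reply, rx) = oneshot::channel();
        tx.send(ReadRequest { offset, len, reply }).expect("reader thread alive");
        rx.await.expect("reader thread replied")
    }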
I also dropped the memmap dependency. I had to compute the page
alignment anyway to get mremap to work, and Moonfire NVR is already
Unix-specific, so there wasn't much value in the memmap or memmap2
crates.
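The alignment math is small enough to do by hand with the libc crate;
something like this sketch (not the actual code):

    use std::os::unix::io::AsRawFd;

    /// Rounds `offset` down to a page boundary, as mmap and mremap require.
    fn page_align(offset: u64) -> u64 {
        // SAFETY: sysconf(_SC_PAGESIZE) has no preconditions.
        let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as u64;
        offset & !(page_size - 1)
    }

    /// Maps `len` bytes of `file` starting at `offset`. Returns the base of
    /// the mapping and the padding before the requested data. (Sketch only;
    /// real code must also track the full length for munmap and mremap.)
    fn map(file: &std::fs::File, offset: u64, len: usize)
           -> std::io::Result<(*mut libc::c_void, usize)> {
        let aligned = page_align(offset);
        let padding = (offset - aligned) as usize;
        // SAFETY: mapping a shared read-only region of an open file.
        let ptr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                len + padding,
                libc::PROT_READ,
                libc::MAP_SHARED,
                file.as_raw_fd(),
                aligned as libc::off_t,
            )
        };
        if ptr == libc::MAP_FAILED {
            return Err(std::io::Error::last_os_error());
        }
        Ok((ptr, padding))
    }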
Fixes #88
* my dad's GW4089IP cameras use 720x480
* some Reolink cameras use 640x352
* I'm playing with rotated cameras (16x9 -> 9x16)
I'd prefer to calculate pasp from a configured camera aspect ratio
rather than hardcode the assumption that these are 16x9, but that
requires a schema change. This is an improvement for now.
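For reference, the hardcoded calculation is just reducing
(16 * height) : (9 * width); e.g. 720x480 gets a pasp of 32:27 and
640x352 gets 44:45. A sketch (the function name is made up):

    /// Greatest common divisor, via Euclid's algorithm.
    fn gcd(mut a: u32, mut b: u32) -> u32 {
        while b != 0 {
            let r = a % b;
            a = b;
            b = r;
        }
        a
    }

    /// Pixel aspect ratio (hSpacing, vSpacing) for a coded width x height
    /// frame displayed at 16:9, as stored in the pasp box.
    fn pasp_16x9(width: u32, height: u32) -> (u32, u32) {
        let (h, v) = (16 * height, 9 * width);
        let g = gcd(h, v);
        (h / g, v / g)
    }

    // pasp_16x9(720, 480) == (32, 27); pasp_16x9(640, 352) == (44, 45)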
Chrome appears to time out at 60 seconds of inactivity otherwise.
I think it's better to keep the stream open, even if the camera is
broken.
The implementation looks awkward, but that might be the state of Rust
async right now.
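Roughly what the awkward part looks like (a sketch with made-up
helpers, assuming tokio): select between the next frame and a
keepalive timer.

    use futures::StreamExt;
    use std::time::Duration;

    type Frame = Vec<u8>; // stands in for an encoded video frame

    async fn send_frame(_f: Frame) { /* write the frame to the client */ }
    async fn send_keepalive() { /* write a small no-op message */ }

    /// Forwards frames, inserting keepalives while the camera is idle (or
    /// broken) so the browser doesn't give up on the connection.
    async fn serve(mut frames: impl futures::Stream<Item = Frame> + Unpin) {
        let mut keepalive = tokio::time::interval(Duration::from_secs(30));
        loop {
            tokio::select! {
                f = frames.next() => match f {
                    Some(f) => send_frame(f).await,
                    None => break, // stream ended
                },
                _ = keepalive.tick() => send_keepalive().await,
            }
        }
    }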
I'm tired of all the boilerplate, so use the new
GPL-3.0-linking-exception license identifier instead in all the server
components.
I left the ui stuff alone because I'm just going to replace it (#111).
Add a checker for the header because it's easy to forget.
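The checker can be as dumb as scanning each source file for the SPDX
line; a sketch (the paths and exact license expression may differ):

    use std::{fs, path::Path};

    const HEADER: &str =
        "SPDX-License-Identifier: GPL-3.0-or-later WITH GPL-3.0-linking-exception";

    /// Fails if any .rs file directly in `dir` is missing the license header.
    fn check_headers(dir: &Path) -> Result<(), String> {
        for entry in fs::read_dir(dir).map_err(|e| e.to_string())? {
            let path = entry.map_err(|e| e.to_string())?.path();
            if path.extension().map_or(false, |e| e == "rs") {
                let text = fs::read_to_string(&path).map_err(|e| e.to_string())?;
                if !text.contains(HEADER) {
                    return Err(format!("{}: missing license header", path.display()));
                }
            }
        }
        Ok(())
    }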
This was mostly straightforward. The most confusing part was the Sync
bound change on body streams. I copied what hyper did and it seemed to
work. /shruggie
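For the record, the change amounts to an extra Sync on the boxed body
stream's trait object, along these lines (illustrative types, not the
exact ones):

    use bytes::Bytes;
    use futures::stream::Stream;
    use std::pin::Pin;

    type BoxedError = Box<dyn std::error::Error + Send + Sync>;

    // Before: dyn Stream<...> + Send. After: + Send + Sync, matching hyper.
    type BodyStream =
        Pin<Box<dyn Stream<Item = Result<Bytes, BoxedError>> + Send + Sync>>;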
Besides making it clearer which files belong to which component, this
helps with Docker caching. The server and UI parts are only rebuilt
when their respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the
server and UI build processes can be reused when layer-wide caching
fails.