As written in the changelog: Live streams formerly worked around a
Firefox pixel aspect ratio bug by forcing all videos to 16:9, which
dramatically distorted 9:16 camera views. Playback didn't apply the
workaround, so anamorphic videos looked correct on Chrome but slightly
stretched on Firefox. Now both live streams and playback are fully
correct on all browsers.
While I'm here, return a clean error if a non-initial video frame
includes a parameter change, rather than doing something crazy (#42).
It's still broken under ffmpeg, it's untested, and it's not as clean
as seamlessly starting a new recording with the new parameters, but
it's better than nothing.
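For context, here's a minimal sketch of the kind of check intended, with a
hypothetical `VideoParams` type standing in for the real video sample entry
comparison:

```rust
/// Hypothetical stand-in for the stream's coded parameters; the real code
/// compares H.264 video sample entries.
#[derive(Clone, PartialEq, Eq)]
struct VideoParams {
    width: u16,
    height: u16,
    extra_data: Vec<u8>,
}

/// Accepts parameters on the first frame; any later change yields a clean
/// error instead of a corrupt recording.
fn check_params(current: &mut Option<VideoParams>, new: &VideoParams) -> Result<(), String> {
    match current {
        None => {
            *current = Some(new.clone());
            Ok(())
        }
        Some(cur) if cur == new => Ok(()),
        Some(_) => Err("unsupported mid-stream video parameter change".to_owned()),
    }
}
```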
This isn't well-tested and doesn't yet support an initial connection
timeout. But in a quick test, it successfully returns video!
I'd like to do some more aggressive code restructuring for zero-copy
and to have only one writer thread per sample file directory (rather
than the syncer thread + one writer thread per RTSP stream). But I'll
likely wait until I drop support for ffmpeg entirely.
This is (slightly) complicating the switch from ffmpeg to retina
as the RTSP client. And it's not really that close to what I want
to end up with for analytics:
* I'd prefer the analytics to happen in a separate process, for
several reasons.
* Feeding the entire frame to the object detector doesn't produce
good results.
* It doesn't do anything with the results yet anyway.
Reading from the mmap()ed region in the tokio threads could cause
them to stall:
* That could affect UI serving whenever there were concurrent
UI requests, not just the requests that needed the reads in
question.
* If there's a faulty disk, it could cause the UI to totally hang.
Better not to mix disks between threads.
* Soon, I want to handle RTSP from the tokio threads (#37). Similarly,
we don't want RTSP streaming to block on operations from unrelated
disks.
I went with just one thread per disk, which I think is sufficient.
But it'd be possible to use a fixed-size pool instead, which might improve
latency when some pages are already cached.
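A rough sketch of that shape, with hypothetical names (the real code is
structured differently): each sample file directory gets its own reader
thread, and the tokio side hands it requests over a channel, so a slow or
faulty disk can only stall its own thread.

```rust
use std::sync::mpsc;
use std::thread;

use tokio::sync::oneshot;

/// A read request handed to a per-directory reader thread (illustrative).
struct ReadRequest {
    offset: u64,
    len: usize,
    reply: oneshot::Sender<std::io::Result<Vec<u8>>>,
}

/// Spawns one reader thread for a sample file directory. The returned sender
/// lets tokio tasks request reads without ever touching the disk themselves.
fn spawn_disk_reader() -> mpsc::Sender<ReadRequest> {
    let (tx, rx) = mpsc::channel::<ReadRequest>();
    thread::spawn(move || {
        for req in rx {
            // The mmap-backed copy (and any page fault) happens here, on this
            // disk's dedicated thread, not on the tokio reactor.
            let result = read_bytes(req.offset, req.len);
            let _ = req.reply.send(result);
        }
    });
    tx
}

/// Placeholder for the actual mmap-backed read.
fn read_bytes(_offset: u64, _len: usize) -> std::io::Result<Vec<u8>> {
    Ok(Vec::new())
}
```

A tokio task then sends a `ReadRequest` and `.await`s the oneshot receiver,
so only the task that actually needs those bytes waits on that disk.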
I also dropped the memmap dependency. I had to compute the page
alignment anyway to get mremap to work, and Moonfire NVR is already
Unix-specific, so there wasn't much value in the memmap or memmap2
crates.
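The alignment math itself is tiny; roughly this (a sketch, not the exact
code):

```rust
/// Page-alignment helper of the sort needed when calling mmap/mremap by hand
/// rather than through the memmap crate. Sketch only.
fn page_align(offset: u64, len: usize) -> (u64, usize) {
    // SAFETY: sysconf(_SC_PAGESIZE) has no preconditions.
    let page = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as u64;

    // Round the file offset down to a page boundary and grow the length to
    // cover the skipped bytes, rounded up to a whole number of pages.
    let aligned_offset = offset & !(page - 1);
    let extra = (offset - aligned_offset) as usize;
    let page = page as usize;
    let aligned_len = (len + extra + page - 1) & !(page - 1);
    (aligned_offset, aligned_len)
}
```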
Fixes #88
* my dad's GW4089IP cameras use 720x480
* some Reolink cameras use 640x352
* I'm playing with rotated cameras (16x9 -> 9x16)
I'd prefer to calculate pasp from a configured camera aspect ratio
rather than hardcode the assumption that these are 16x9, but that
requires a schema change. This is an improvement for now.
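For reference, the `pasp` box's hSpacing/vSpacing follow from the coded
dimensions and a display aspect ratio; a sketch of the calculation under the
hardcoded 16x9 assumption:

```rust
/// Computes the `pasp` box's (hSpacing, vSpacing) for coded dimensions
/// displayed at the given aspect ratio. Sketch only.
fn pasp(width: u32, height: u32, dar_w: u32, dar_h: u32) -> (u32, u32) {
    fn gcd(mut a: u32, mut b: u32) -> u32 {
        while b != 0 {
            let t = a % b;
            a = b;
            b = t;
        }
        a
    }
    let h_spacing = dar_w * height;
    let v_spacing = dar_h * width;
    let g = gcd(h_spacing, v_spacing);
    (h_spacing / g, v_spacing / g)
}

// pasp(720, 480, 16, 9) == (32, 27); pasp(640, 352, 16, 9) == (44, 45)
```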
The immediate motivation is to address these CI failures with nightly:
https://github.com/scottlamb/moonfire-nvr/runs/2593322801?check_suite_focus=true
```
Compiling lock_api v0.4.2
error[E0557]: feature has been removed
--> /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/lock_api-0.4.2/src/lib.rs:91:42
|
91 | #![cfg_attr(feature = "nightly", feature(const_fn))]
| ^^^^^^^^ feature has been removed
|
= note: split into finer-grained feature gates
```
Strangely, they don't occur locally with "rustc 1.54.0-nightly
(fe72845f7 2021-05-16)" but do on CI with the exact same version?!?
I don't get it, but lock_api 0.4.4 is advertised as being updated for
the latest nightly, so I expect this will address the problem anyway.
I saw this error once:
```
Apr 27 21:01:33 nuc moonfire-nvr[188570]: s-reolink-sub
moonfire_nvr::streamer] reolink-sub: sleeping for Duration { secs: 1,
nanos: 0 } after error: CHECK constraint failed: video_sample_entry
```
and would like to understand it better.
* API change: in update signals, allow setting a start time relative
to now. This is an accuracy improvement in the case where the client
has been retrying an initial request for a while. Kind of an obscure
corner case, but easy enough to address. Also use a more convenient
enum representation (sketched after this list).
* in update signals, choose `now` before acquiring the database lock.
If lock acquisition takes a long time, this more accurately reflects
the time the caller intended.
* in general, make Time and Duration (de)serializable and use them
in json types. This makes the types more self-describing, with
better debug printing on both the server side and in the client
library (in moonfire-playground). To make this work, base has to
import serde, which initially seemed like poor layering to me, but
serde is imported by some pretty foundational Rust crates for this
reason. I'll go with it.
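The enum mentioned above is roughly this shape (variant and field names here
are illustrative, not the actual API):

```rust
use serde::Deserialize;

/// Illustrative only: lets a client give a signal's start time either as an
/// absolute timestamp or as an offset from the server's `now`, which is more
/// accurate when the client has been retrying the request for a while.
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
enum StartTime {
    /// An absolute time, in the API's 90 kHz units.
    Epoch(i64),
    /// An offset (typically negative) from the server's current time. The
    /// server should pick `now` before acquiring the database lock.
    Now(i64),
}
```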
Chrome appears to time out at 60 seconds of inactivity otherwise.
I think it's better to keep the stream open, even if the camera is
broken.
The implementation looks awkward, but that might just be the state of
Rust async right now.
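The general shape is something like the following, a sketch only with
hypothetical message and channel types and an arbitrary 30-second period
(well under Chrome's apparent 60-second timeout); the actual implementation
differs:

```rust
use std::time::Duration;
use tokio::sync::mpsc;

/// Hypothetical message type for the outgoing connection.
enum Message {
    Frame(Vec<u8>),
    KeepAlive,
}

/// Forwards frames as they arrive and periodically sends a keepalive, so the
/// browser doesn't hit its idle timeout even if the camera stops producing
/// frames entirely.
async fn stream_with_keepalive(
    mut frames: mpsc::Receiver<Vec<u8>>,
    out: mpsc::Sender<Message>,
) {
    let mut keepalive = tokio::time::interval(Duration::from_secs(30));
    loop {
        tokio::select! {
            f = frames.recv() => match f {
                Some(f) => {
                    if out.send(Message::Frame(f)).await.is_err() {
                        break; // client went away
                    }
                }
                None => break, // shutting down
            },
            _ = keepalive.tick() => {
                if out.send(Message::KeepAlive).await.is_err() {
                    break;
                }
            }
        }
    }
}
```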
The CI nightly builds had been broken with the following error:
```
error: custom inner attributes are unstable
--> /home/runner/work/moonfire-nvr/moonfire-nvr/server/target/debug/build/moonfire-db-415ce696a754c614/out/schema.rs:10:4
|
10 | #![rustfmt::skip]
| ^^^^^^^^^^^^^
|
= note: `#[deny(soft_unstable)]` on by default
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #64266 <https://github.com/rust-lang/rust/issues/64266>
```
I'd thought this was a mistake, given that #[rustfmt::skip] is still
advertised on rustfmt's GitHub page, but maybe not. Looks like
rust-protobuf's newest version uses
`#![cfg_attr(rustfmt, rustfmt::skip)]` to avoid this error.
Also fix a warning on nightly about an extraneous semicolon.
As noted in mylog's 2b1085c:
Looks like both the GNU tools' --color argument and cargo's
CARGO_TERM_COLOR expect always/never rather than on/off. Match that.
Might as well understand off/no/false and on/yes/true also.
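Roughly the parsing this implies (a sketch, not mylog's actual code):

```rust
/// Interprets a color setting: the GNU tools / CARGO_TERM_COLOR spellings
/// (always/never) plus the on/yes/true and off/no/false forms. Sketch only.
fn parse_color(value: &str) -> Result<bool, String> {
    match value.to_ascii_lowercase().as_str() {
        "always" | "on" | "yes" | "true" => Ok(true),
        "never" | "off" | "no" | "false" => Ok(false),
        v => Err(format!("unrecognized color setting {:?}", v)),
    }
}
```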
This picks up moonfire-ffmpeg's 4b13378:
support ffmpeg's multi-call log messages
This should fix this annoying log output:
```
W20210310 13:17:09.060 s-garage_west-main moonfire_ffmpeg::rtsp] 0xaf300950: RTP H.264 NAL unit type 29
W20210310 13:17:09.060 s-garage_west-main moonfire_ffmpeg::rtsp] 0xaf300950: is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
```
so it looks like this instead:
```
W20210310 13:17:09.060 s-garage_west-main moonfire_ffmpeg::rtsp] 0xaf300950: RTP H.264 NAL unit type 29 is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
```
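The underlying issue is that ffmpeg's log callback can be invoked several
times for one logical message; the fix buffers fragments until a newline
arrives. A simplified sketch of that buffering (not moonfire-ffmpeg's actual
code):

```rust
/// Accumulates ffmpeg log fragments until a complete line (ending in '\n')
/// is available, so multi-call messages are logged as one line. Sketch only.
#[derive(Default)]
struct LogLineBuffer {
    partial: String,
}

impl LogLineBuffer {
    /// Appends one callback invocation's text; returns a finished line once
    /// the terminating newline has been seen.
    fn push(&mut self, fragment: &str) -> Option<String> {
        self.partial.push_str(fragment);
        if self.partial.ends_with('\n') {
            let mut line = std::mem::take(&mut self.partial);
            line.pop(); // drop the trailing newline
            Some(line)
        } else {
            None
        }
    }
}
```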
* add more description to the troubleshooting guide
* adjust the log format to match more recent glog
* include a config for the lnav tool, which will help colorize,
browse, and search the logs.
Next up: install an ffmpeg log callback for consistency.
This was mostly straightforward. The most confusing part was the Sync
bound change on body streams. I copied what hyper did and it seemed to
work. /shruggie
Besides being clearer about what belongs to which, this helps with
Docker caching: the server and ui parts are only rebuilt when their
respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the server
and ui build process can be reused when layer-wide caching fails.