* switch the config interface over to use Retina and make the test
button honor `rtsp_transport = udp`.
* adjust the threading model of the Retina streaming code.
Before, it spawned a background future that read from the runtime and
wrote to a channel. Other calls read from this channel.
After, it does work directly from within the block_on calls (no
channels).
The immediate motivation was that the config interface didn't have
another runtime handy, and passing in a current-thread runtime
deadlocked. I later learned this is a difference between
Runtime::block_on and Handle::block_on: the former will drive IO and
timers; the latter will not. (A small sketch of the difference follows
this list.)
But this is also more efficient: it avoids a lot of thread hand-offs,
both the context switches and the extra spinning that tokio appears to
do, as mentioned here:
https://github.com/scottlamb/retina/issues/5#issuecomment-871971550
This may not be the final word on the threading model. Eventually
I may not have per-stream writing threads at all. But I think it will
be easier to look at this after getting rid of the separate
`moonfire-nvr config` subcommand in favor of a web interface.
* in tests, read `.mp4` files via the `mp4` crate rather than ffmpeg.
The annoying part is that the `mp4` crate doesn't parse edit lists; oh
well. (A minimal read example follows this list.)
* simplify the `Opener` interface. Formerly, it'd take either an RTSP
URL or a path to a `.mp4` file, and they'd share some code because
they both sometimes used ffmpeg. Now, they're totally different
libraries (`retina` vs `mp4`). Pull the latter out to a `testutil`
module with a different interface that exposes more of the `mp4`
stuff. Now `Opener` is just for RTSP.
* simplify the h264 module. It had a lot of logic to deal with Annex B;
Retina doesn't use this encoding. (A generic illustration of that kind
of start-code handling follows this list.)
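As a minimal sketch of the Runtime::block_on / Handle::block_on
difference mentioned above (my illustration against tokio 1.x, not code
from this repository):

```rust
use std::time::Duration;

fn main() {
    // A current-thread runtime: futures only make progress while something
    // on this thread drives the runtime.
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    // Runtime::block_on drives the IO driver and timers itself, so this
    // sleep completes.
    rt.block_on(tokio::time::sleep(Duration::from_millis(10)));

    // Handle::block_on does not drive IO or timers. With a current-thread
    // runtime that nothing else is driving, this would never wake up, which
    // matches the deadlock described above.
    // rt.handle().block_on(tokio::time::sleep(Duration::from_millis(10)));
}
```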
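And a minimal sketch of reading a `.mp4` file with the `mp4` crate
(again my illustration; the exact `Mp4Reader` API, e.g. what `tracks()`
returns, varies between crate versions):

```rust
use std::fs::File;
use std::io::BufReader;

fn dump(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let f = File::open(path)?;
    let size = f.metadata()?.len();
    let mp4 = mp4::Mp4Reader::read_header(BufReader::new(f), size)?;
    println!("duration: {:?}", mp4.duration());
    for (id, track) in mp4.tracks() {
        println!("track {}: {:?}", id, track.track_type()?);
    }
    Ok(())
}
```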
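For context, a generic illustration (not the removed code itself) of the
kind of Annex B handling the h264 module no longer needs: splitting a
byte stream on 00 00 01 / 00 00 00 01 start codes to recover individual
NAL units.

```rust
/// Splits an Annex B byte stream into NAL units. Illustrative sketch only.
fn split_annex_b(data: &[u8]) -> Vec<&[u8]> {
    let mut nals = Vec::new();
    let mut start: Option<usize> = None;
    let mut i = 0;
    while i + 3 <= data.len() {
        if data[i..i + 3] == [0, 0, 1] {
            if let Some(s) = start {
                // A four-byte start code (00 00 00 01) leaves one extra zero
                // byte at the end of the previous NAL; trim it.
                let end = if i > s && data[i - 1] == 0 { i - 1 } else { i };
                nals.push(&data[s..end]);
            }
            start = Some(i + 3);
            i += 3;
        } else {
            i += 1;
        }
    }
    if let Some(s) = start {
        nals.push(&data[s..]);
    }
    nals
}
```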
Fixes #36. Fixes #126.
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
These fields are not actually populated by the code yet. I'm trying to get the
v3 schema frozen as soon as possible; actually using the fields can come
later.
Add some explanation of their value in time.md, along with some general
musing on leap seconds, and a correction on the frequency error of my cameras.
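As a rough sketch of how a playback path might use the split durations
(my illustration with invented names, not the actual schema or code):
scale a media-time offset within a recording by the ratio of the
recording's stored wall and media durations.

```rust
/// Hypothetical helper: maps an offset within one recording from media time
/// to wall time, assuming both total durations are stored in 90 kHz units.
fn media_to_wall(media_off_90k: i64, media_duration_90k: i64, wall_duration_90k: i64) -> i64 {
    if media_duration_90k == 0 {
        return 0;
    }
    // Widen to i128 so the intermediate product can't overflow on long recordings.
    ((media_off_90k as i128 * wall_duration_90k as i128) / media_duration_90k as i128) as i64
}
```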
This time, I've given up on svg and am using png. The inline svg seems to be
totally stripped out by github's markdown->html conversion, and img links
don't work because .svg files are served with an incorrect Content-Type.
This is more sophisticated than the current implementation. It's an attempt
to address the problems created by the 9 seconds/day of drift I'm seeing for
long-running streams.
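For scale, 9 seconds over an 86,400-second day works out to a frequency
error of roughly 9 / 86,400 ≈ 104 ppm.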