* support cross-compiling an x86-64 target on an arm64 host. This,
it turns out, is a matter of *removing* an unnecessary dependency:
aarch64-linux-gnu-pkg-config exists but x86_64-linux-gnu-pkg-config
doesn't, and neither is actually needed. Added a comment explaining
where ${gcc_target}-pkg-config comes from now.
* documentation tweaks
* improve debug output a bit
* switch the config interface over to use Retina and make the test
button honor rtsp_transport = udp. (A sketch of requesting UDP
transport follows this item.)
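A minimal sketch of what honoring UDP transport might look like with
Retina's client API; names like Transport::Udp and
SessionOptions::transport are from memory of the version current at
the time and may differ in newer releases, so treat this as an
assumption-laden sketch rather than the actual code:

    use retina::client::{Session, SessionOptions, Transport};

    // Hypothetical test-button helper: describe the stream over UDP.
    async fn test_connection(url: url::Url) -> Result<(), retina::Error> {
        let opts = SessionOptions::default().transport(Transport::Udp);
        let session = Session::describe(url, opts).await?;
        println!("found {} streams", session.streams().len());
        Ok(())
    }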
* adjust the threading model of the Retina streaming code.
Before, it spawned a background future that read from the runtime and
wrote to a channel. Other calls read from this channel.
After, it does work directly from within the block_on calls (no
channels).
The immediate motivation was that the config interface didn't have
another runtime handy, and passing in a current-thread runtime
deadlocked. I later learned this is a difference between
Runtime::block_on and Handle::block_on: the former will drive IO and
timers; the latter will not. (A sketch of the difference follows this
item.)
But this is also more efficient: it avoids a lot of thread hand-offs,
both the context switches and the extra spinning that tokio appears
to do, as mentioned here:
https://github.com/scottlamb/retina/issues/5#issuecomment-871971550
This may not be the final word on the threading model. Eventually
I may not have per-stream writing threads at all. But I think it will
be easier to look at this after getting rid of the separate
`moonfire-nvr config` subcommand in favor of a web interface.
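A minimal sketch of the Runtime::block_on vs Handle::block_on
difference, using a tokio timer as a stand-in for real IO (the builder
and sleep calls are standard tokio; the scenario is illustrative):

    use std::time::Duration;

    fn main() {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();

        // Runtime::block_on drives the runtime's IO and timer drivers
        // itself, so this completes even on a current-thread runtime.
        rt.block_on(tokio::time::sleep(Duration::from_millis(10)));

        // Handle::block_on only polls the future; nothing drives the
        // timer driver, so the same call through a Handle never wakes:
        // rt.handle().block_on(tokio::time::sleep(Duration::from_millis(10)));
    }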
* in tests, read `.mp4` files via the `mp4` crate rather than ffmpeg
(a sketch of the crate's reader follows this item). The annoying part
is that the `mp4` crate doesn't parse edit lists; oh well.
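A minimal sketch of reading a file with the `mp4` crate; the path and
printed fields are illustrative, and Mp4Reader::read_header is the
crate's entry point as I understand it:

    use std::fs::File;
    use std::io::BufReader;

    fn dump(path: &str) -> mp4::Result<()> {
        let f = File::open(path)?;
        let size = f.metadata()?.len();
        let reader = mp4::Mp4Reader::read_header(BufReader::new(f), size)?;
        println!("tracks: {}, duration: {:?}",
                 reader.tracks().len(), reader.duration());
        Ok(())
    }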
* simplify the `Opener` interface. Formerly, it'd take either an RTSP
URL or a path to a `.mp4` file, and the two paths shared some code
because both sometimes used ffmpeg. Now they use totally different
libraries (`retina` vs `mp4`). Pull the latter out to a `testutil`
module with a different interface that exposes more of the `mp4`
stuff. Now `Opener` is just for RTSP (see the sketch below).
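A hypothetical sketch of the narrowed shape; all names here are
illustrative, not the actual moonfire-nvr definitions (`url::Url` is
from the url crate):

    // Opener now knows only about RTSP; `.mp4` handling lives in the
    // testutil module with its own, richer interface.
    pub struct VideoFrame {
        pub is_key: bool,
        pub data: Vec<u8>,
    }

    pub trait Stream: Send {
        fn next(&mut self) -> Result<VideoFrame, Box<dyn std::error::Error>>;
    }

    pub trait Opener: Send + Sync {
        fn open(&self, url: url::Url) -> Result<Box<dyn Stream>, Box<dyn std::error::Error>>;
    }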
* simplify the h264 module. It had a lot of logic to deal with Annex B
encoding, which Retina doesn't use. (The two framings are contrasted
in the sketch below.)
Fixes #36. Fixes #126.
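For context, a quick contrast of the two framings (plain Rust,
illustrative only):

    // Annex B separates NAL units with start codes:
    //   00 00 00 01 <NAL> 00 00 00 01 <NAL> ...
    // The .mp4/AVC framing instead prefixes each NAL unit with a
    // big-endian length, so no start-code scanning is needed:
    fn length_prefixed(nal: &[u8]) -> Vec<u8> {
        let mut out = Vec::with_capacity(4 + nal.len());
        out.extend_from_slice(&(nal.len() as u32).to_be_bytes());
        out.extend_from_slice(nal);
        out
    }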
* run node 12, 14, and 16 (next to be supported) on CI. This will catch
node version-specific problems like the one solved in dad9bdc.
* mention 12 and 14 in the build instructions and link to instructions
for installing those versions.
* follow suit in the Dockerfile, installing version 14. This addresses
a "Cannot find module 'worker_threads'" error introduced in
39a63e0, which (inadvisedly) upgraded gzipper 4->5 in addition to
the material-ui upgrade.
* use utf-8 encoding rather than ascii in the live part parser. Those
node builds apparently don't support ascii; they must use "small-icu"
or have ICU disabled, as described here:
https://nodejs.org/api/util.html#util_encodings_supported_when_node_js_is_built_with_the_small_icu_option
Most importantly, in build-ui.bash, fix an extra "&&". It meant that
if the build command failed, the script would proceed anyway and cause
confusion. This happened to me: I ran the script without the emulation
installed, and both the build-server and deploy stages had problems,
but because of the "&&" the deploy target didn't actually return
failure. After I fixed the emulation problem, a bad cached layer
remained.
Also save the build output from the dev stage.
I'm tired of all the boilerplate, so use the new
GPL-3.0-linking-exception license identifier instead in all the server
components.
I left the ui stuff alone because I'm just going to replace it (#111).
Add a checker for the header because it's easy to forget.
This eases build setup. Where Yarn requires a separate package
repository, npm is available in the standard one.
yarn's package repository signature recently expired and apparently
will expire again in a year; avoid dealing with that.
Fixes #110.
Besides being clearer about what belongs to which component, this
helps with docker caching. The server and ui parts are only rebuilt
when their respective subdirectories change.
Extend this a bit further by making the webpack build not depend on
the target architecture, and by adding cache dirs so parts of the
server and ui build processes can be reused when layer-wide caching
fails.
This replaces the previous Dockerfile, which was a single stage for
building and deployment.
The new one is a multi-stage build. Its "dev" target has the full
development environment; its "deploy" target is slimmer. It supports
cross-compiled builds via BuildKit, e.g. to prepare a build suitable
for a Raspberry Pi:

    docker buildx build --load --platform=linux/arm64/v8 \
        --tag=moonfire-nvr --progress=plain --target=deploy \
        -f docker/Dockerfile .
Coming next: updating the installation docs.