switch from log to tracing

I think this is a big improvement in readability.

I removed the `lnav` config, which is a little sad, but I don't think it
supports this structured logging format well. Still seems worthwhile on
balance.
Scott Lamb 2023-02-15 23:14:54 -08:00
parent db2e0f1d39
commit ebcdd76084
38 changed files with 632 additions and 344 deletions

View File

@ -13,6 +13,7 @@ even on minor releases, e.g. `0.7.5` -> `0.7.6`.
## unreleased ## unreleased
* new log formats using `tracing`. This will allow richer context information.
* bump minimum Rust version to 1.65. * bump minimum Rust version to 1.65.
* expect camelCase in `moonfire-nvr.toml` file, for consistency with the JSON * expect camelCase in `moonfire-nvr.toml` file, for consistency with the JSON
API. You'll need to adjust your config file when upgrading. API. You'll need to adjust your config file when upgrading.

View File

@ -34,9 +34,13 @@ While Moonfire NVR is running, logs will be written to stderr.
Try `nvr logs` or `docker logs moonfire-nvr`. Try `nvr logs` or `docker logs moonfire-nvr`.
* When running through systemd, stderr will be redirected to the journal. * When running through systemd, stderr will be redirected to the journal.
Try `sudo journalctl --unit moonfire-nvr` to view the logs. You also Try `sudo journalctl --unit moonfire-nvr` to view the logs. You also
likely want to set `MOONFIRE_FORMAT=google-systemd` to format logs as likely want to set `MOONFIRE_FORMAT=systemd` to format logs as
expected by systemd. expected by systemd.
*Note:* Moonfire's log format has recently changed significantly. You may
encounter the older format in the issue tracker or (despite best efforts)
documentation that hasn't been updated.
Logging options are controlled by environment variables: Logging options are controlled by environment variables:
* `MOONFIRE_LOG` controls the log level. Its format is similar to the * `MOONFIRE_LOG` controls the log level. Its format is similar to the
@ -45,80 +49,50 @@ Logging options are controlled by environment variables:
`MOONFIRE_LOG=info` is the default. `MOONFIRE_LOG=info` is the default.
`MOONFIRE_LOG=info,moonfire_nvr=debug` gives more detailed logging of the `MOONFIRE_LOG=info,moonfire_nvr=debug` gives more detailed logging of the
`moonfire_nvr` crate itself. `moonfire_nvr` crate itself.
* `MOONFIRE_FORMAT` selects the output format. The two options currently * `MOONFIRE_FORMAT` selects an output format. It defaults to an output meant
accepted are `google` (the default, like the Google for human consumption. It can be overridden to either of the following:
[glog](https://github.com/google/glog) package) and `google-systemd` (a * `systemd` uses [sd-daemon logging prefixes](https://man7.org/linux/man-pages/man3/sd-daemon.3.html)
variation for better systemd compatibility). * `json` outputs one JSON-formatted log message per line, for machine
* `MOONFIRE_COLOR` controls color coding when using the `google` format. consumption.
It accepts `always`, `never`, or `auto`. `auto` means to color code if
stderr is a terminal.
* Errors include a backtrace if `RUST_BACKTRACE=1` is set. * Errors include a backtrace if `RUST_BACKTRACE=1` is set.
If you use Docker, set these via Docker's `--env` argument. If you use Docker, set these via Docker's `--env` argument.
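
As a rough sketch of how these variables could be consumed (the commit's actual setup lives in the new `tracing_setup` module, which is not reproduced in this diff), a `tracing-subscriber` configuration honoring `MOONFIRE_LOG` and the `json` override might look like this; the function name is illustrative:

```rust
// Illustrative only; assumes tracing-subscriber 0.3 with the "env-filter"
// and "json" features, as enabled in the Cargo.toml changes below.
use tracing_subscriber::{fmt, prelude::*, EnvFilter};

fn init_logging() {
    // MOONFIRE_LOG uses EnvFilter's directive syntax, e.g. "info,moonfire_nvr=debug".
    let filter = EnvFilter::try_from_env("MOONFIRE_LOG")
        .unwrap_or_else(|_| EnvFilter::new("info"));
    match std::env::var("MOONFIRE_FORMAT").as_deref() {
        // `json`: one JSON object per line, for machine consumption.
        Ok("json") => tracing_subscriber::registry()
            .with(filter)
            .with(fmt::layer().json().with_writer(std::io::stderr))
            .init(),
        // Anything else: the default human-readable output on stderr.
        _ => tracing_subscriber::registry()
            .with(filter)
            .with(fmt::layer().with_writer(std::io::stderr))
            .init(),
    }
}
```

The `systemd` format would need a custom event formatter that emits sd-daemon `<N>` priority prefixes; the stock `fmt` layer does not provide one.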
With the default `MOONFIRE_FORMAT=google`, log lines are in the following With `MOONFIRE_FORMAT` left unset, log events look as follows:
format:
```text ```text
I20210308 21:31:24.255 main moonfire_nvr] Success. 2023-02-15T22:45:06.999329 INFO s-courtyard-sub streamer{stream="courtyard-sub"}: moonfire_nvr::streamer: opening input url=rtsp://192.168.5.112/cam/realmonitor?channel=1&subtype=1&unicast=true&proto=Onvif
LYYYYmmdd HH:MM:SS.FFF TTTT PPPPPPPPPPPP] ...
L = level:
E = error; when color mode is on, the message will be bright red.
W = warn; " " " " " " " " " " yellow.
I = info
D = debug
T = trace
YYYY = year
mm = month
dd = day
HH = hour (using a 24-hour clock)
MM = minute
SS = second
FFF = fractional portion of the second
TTTT = thread name (if set) or tid (otherwise)
PPPP = log target (usually a module path)
... = message body
``` ```
This example contains the following elements:
* the timestamp (`2023-02-15T22:45:06.999329`) in the system's local zone.
* the log level (`INFO`) is one of `TRACE`, `DEBUG`, `INFO`, `WARN`, or
`ERROR`.
* the thread name (`s-courtyard-sub`), see explanation below.
* the "spans" (`streamer{stream="courtyard-sub"}`), which contain
context information for a group of messages. In this case there is a single
span `streamer` with a single field `stream`. There can be multiple
spans; they are listed starting from the root. Each may have fields.
* the target (`moonfire_nvr::streamer`), which generally corresponds to a Rust
module name.
* the log message (`opening input`), a human-readable string
* event fields (`url=...`)
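
For orientation, a minimal `tracing` snippet that produces a line shaped like the example above; the span supplies the `stream` field and the event supplies `url`:

```rust
use tracing::{info, info_span};

fn example() {
    // Everything logged while this guard is alive carries the
    // `streamer{stream="courtyard-sub"}` span context; the thread name and
    // timestamp come from the subscriber.
    let _span = info_span!("streamer", stream = "courtyard-sub").entered();

    // The target defaults to the current module path; `url` becomes an event field.
    info!(url = "rtsp://192.168.5.112/...", "opening input");
}
```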
Moonfire NVR names a few important thread types as follows: Moonfire NVR names a few important thread types as follows:
* `main`: during `moonfire-nvr run`, the main thread does initial setup then * `main`: during `moonfire-nvr run`, the main thread does initial setup then
just waits for the other threads. In other subcommands, it does everything. just waits for the other threads. In other subcommands, it does everything.
* `s-CAMERA-TYPE` (one per stream, where `TYPE` is `main`, `sub`, or `ext`): * `s-CAMERA-TYPE` (one per stream, where `TYPE` is `main`, `sub`, or `ext`):
these threads write video to disk. these threads write video to disk.
* `sync-PATH` (one per sample file directory): These threads call `fsync` to * `sync-DIR_ID` (one per sample file directory): These threads call `fsync` to
* commit sample files to disk, delete old sample files, and flush the * commit sample files to disk, delete old sample files, and flush the
database. database.
* `r-PATH` (one per sample file directory): These threads read sample files * `r-DIR_ID` (one per sample file directory): These threads read sample files
from disk for serving `.mp4` files. from disk for serving `.mp4` files.
* `tokio-runtime-worker` (one per core, unless overridden with * `tokio-runtime-worker` (one per core, unless overridden with
`--worker-threads`): these threads handle HTTP requests and read video `--worker-threads`): these threads handle HTTP requests and read video
data from cameras via RTSP. data from cameras via RTSP.
* `logger`: this thread writes the log buffer to `stderr`. Logging is
asynchronous; other threads don't wait for log messages to be written
unless the log buffer is full.
You can use the following command to teach [`lnav`](http://lnav.org/) Moonfire
NVR's log format:
```console
$ lnav -i misc/moonfire_log.json
```
`lnav` versions prior to 0.9.0 print a (harmless) warning message on startup:
```console
$ lnav -i git/moonfire-nvr/misc/moonfire_log.json
warning:git/moonfire-nvr/misc/moonfire_log.json:line 2
warning: unexpected path --
warning: /$schema
warning: accepted paths --
warning: /(?<format_name>\w+)/ -- The definition of a log file format.
info: installed: /home/slamb/.lnav/formats/installed/moonfire_log.json
```
You can avoid this by removing the `$schema` line from `moonfire_log.json`
and rerunning the `lnav -i` command.
Below are some interesting log lines you may encounter. Below are some interesting log lines you may encounter.
@ -128,16 +102,16 @@ During normal operation, Moonfire NVR will periodically flush changes to its
SQLite3 database. Every flush is logged, as in the following info message: SQLite3 database. Every flush is logged, as in the following info message:
``` ```
I20210308 23:14:18.388 sync-/media/14tb/sample moonfire_db::db] Flush 3810 (why: 120 sec after start of 1 minute 14 seconds courtyard-main recording 3/1842086): 2021-03-08T23:14:18.388000 sync-2 syncer{path=/media/14tb/sample}:flush{flush_count=2 reason="120 sec after start of 1 minute 14 seconds courtyard-main recording 3/1842086"}: moonfire_db::db: flush complete:
/media/6tb/sample: added 98M 864K 842B in 8 recordings (4/1839795, 7/1503516, 6/1853939, 1/1838087, 2/1852096, 12/1516945, 8/1514942, 10/1506111), deleted 111M 435K 587B in 5 (4/1801170, 4/1801171, 6/1799708, 1/1801528, 2/1815572), GCed 9 recordings (6/1799707, 7/1376577, 4/1801168, 1/1801527, 4/1801167, 4/1801169, 10/1243252, 2/1815571, 12/1418785). /media/6tb/sample: added 98M 864K 842B in 8 recordings (4/1839795, 7/1503516, 6/1853939, 1/1838087, 2/1852096, 12/1516945, 8/1514942, 10/1506111), deleted 111M 435K 587B in 5 (4/1801170, 4/1801171, 6/1799708, 1/1801528, 2/1815572), GCed 9 recordings (6/1799707, 7/1376577, 4/1801168, 1/1801527, 4/1801167, 4/1801169, 10/1243252, 2/1815571, 12/1418785).
/media/14tb/sample: added 8M 364K 643B in 3 recordings (3/1842086, 9/1505359, 11/1516695), deleted 0B in 0 (), GCed 0 recordings (). /media/14tb/sample: added 8M 364K 643B in 3 recordings (3/1842086, 9/1505359, 11/1516695), deleted 0B in 0 (), GCed 0 recordings ().
``` ```
This log message is packed with debugging information: This log message is packed with debugging information:
* the date and time: `20210308 23:14:18.388`. * the date and time: `2021-03-08T23:14:18.388`.
* the name of the thread that prompted the flush: `sync-/media/14tb/sample`. * the name of the thread that prompted the flush: `sync-2`.
* a sequence number: `3810`. This is handy for checking how often Moonfire NVR * a flush count: `2`. This is handy for checking how often Moonfire NVR
is flushing. is flushing.
* a reason for the flush: `120 sec after start of 1 minute 14 seconds courtyard-main recording 3/1842086`. * a reason for the flush: `120 sec after start of 1 minute 14 seconds courtyard-main recording 3/1842086`.
This was a regular periodic flush at the `flush_if_sec` for the stream, This was a regular periodic flush at the `flush_if_sec` for the stream,
@ -178,9 +152,7 @@ file a bug if you see one. It's helpful to set the `RUST_BACKTRACE`
environment variable to include more information. environment variable to include more information.
``` ```
E20210304 11:09:29.230 main s-peck_west-main] panic at 'src/moonfire-nvr/server/db/writer.rs:750:54': should always be an unindexed sample 2021-03-04T11:09:29.230291 ERROR s-peck_west-main streamer{stream="peck_west-main"}: panic: should always be an unindexed sample location=src/moonfire-nvr/server/db/writer.rs:750:54 backtrace=...
(set environment variable RUST_BACKTRACE=1 to see backtraces)"
``` ```
In this case, a stream thread (one starting with `s-`) panicked. That stream In this case, a stream thread (one starting with `s-`) panicked. That stream
@ -195,11 +167,11 @@ It's normal to see these warnings on startup and occasionally while running.
Frequent occurrences may indicate a performance problem. Frequent occurrences may indicate a performance problem.
``` ```
W20201129 12:01:21.128 s-driveway-main moonfire_base::clock] opening rtsp://admin:redacted@192.168.5.108/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif took PT2.070715796S! 2020-11-29T12:01:21.128725 WARN s-driveway-main streamer{stream="driveway-main"}: moonfire_base::clock: opening rtsp://admin:redacted@192.168.5.108/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif took PT2.070715796S!
W20201129 12:32:15.870 s-west_side-sub moonfire_base::clock] getting next packet took PT10.158121387S! 2020-11-29T12:32:15.870658 WARN s-west_side-sub streamer{stream="west_side-sub"}: moonfire_base::clock: getting next packet took PT10.158121387S!
W20201228 12:09:29.050 s-back_east-sub moonfire_base::clock] database lock acquisition took PT8.122452 2020-12-28T12:09:29.050464 WARN s-back_east-sub streamer{stream="back_east-sub"}: moonfire_base::clock: database lock acquisition took PT8.122452
W20201228 21:22:32.012 main moonfire_base::clock] database operation took PT39.526386958S! 2020-12-28T21:22:32.012811 WARN main moonfire_base::clock: database operation took PT39.526386958S!
W20201228 21:27:11.402 s-driveway-sub moonfire_base::clock] writing 37 bytes took PT20.701894190S! 2020-12-28T21:27:11.402259 WARN s-driveway-sub streamer{stream="driveway-sub"}: moonfire_base::clock: writing 37 bytes took PT20.701894190S!
``` ```
### Camera stream errors ### Camera stream errors
@ -211,7 +183,7 @@ quickly enough. In the latter case, you'll likely see a
`getting next packet took PT...S!` message as described above. `getting next packet took PT...S!` message as described above.
``` ```
W20210309 00:28:55.527 s-courtyard-sub moonfire_nvr::streamer] courtyard-sub: sleeping for PT1S after error: Stream ended 2021-03-09T00:28:55.527078 WARN s-courtyard-sub streamer{stream="courtyard-sub"}: moonfire_nvr::streamer: sleeping for PT1S after error: Stream ended
(set environment variable RUST_BACKTRACE=1 to see backtraces) (set environment variable RUST_BACKTRACE=1 to see backtraces)
``` ```
@ -251,7 +223,7 @@ If Moonfire NVR runs out of disk space on a sample file directory, recording
will be stuck and you'll see log messages like the following: will be stuck and you'll see log messages like the following:
``` ```
W20210401 11:21:07.365 s-driveway-main moonfire_base::clock] sleeping for PT1S after error: No space left on device (os error 28) 2021-04-01T11:21:07.365 WARN s-driveway-main streamer{stream="driveway-main"}: moonfire_base::clock: sleeping for PT1S after error: No space left on device (os error 28)
``` ```
If something else used more disk space on the filesystem than planned, just If something else used more disk space on the filesystem than planned, just

View File

@ -1,47 +0,0 @@
{
"$schema": "https://lnav.org/schemas/format-v1.schema.json",
"moonfire_log": {
"title": "Moonfire Log",
"description": "The Moonfire NVR log format.",
"timestamp-format": [
"%Y%m%d %H:%M:%S.%L",
"%m%d %H%M%S.%L"
],
"url": "https://github.com/scottlamb/mylog/blob/master/src/lib.rs",
"regex": {
"std": {
"pattern": "^(?<level>[EWIDT])(?<timestamp>\\d{8} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}|\\d{4} \\d{6}\\.\\d{3}) +(?<thread>[^ ]+) (?<module_path>[^ \\]]+)\\] (?<body>(?:.|\\n)*)"
}
},
"level-field": "level",
"level": {
"error": "E",
"warning": "W",
"info": "I",
"debug": "D",
"trace": "T"
},
"opid-field": "thread",
"value": {
"thread": {
"kind": "string",
"identifier": true
},
"module_path": {
"kind": "string",
"identifier": true
}
},
"sample": [
{
"line": "I20210308 21:31:24.255 main moonfire_nvr] Success."
},
{
"line": "I0308 213124.255 main moonfire_nvr] Older style."
},
{
"line": "W20210303 22:20:53.081 s-west_side-main moonfire_base::clock] getting next packet took PT8.153173367S!"
}
]
}
}

server/Cargo.lock generated
View File

@ -1069,6 +1069,15 @@ version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e2e65a1a2e43cfcb47a895c4c8b10d1f4a61097f9f254f183aee60cad9c651d" checksum = "3e2e65a1a2e43cfcb47a895c4c8b10d1f4a61097f9f254f183aee60cad9c651d"
[[package]]
name = "matchers"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558"
dependencies = [
"regex-automata",
]
[[package]] [[package]]
name = "md-5" name = "md-5"
version = "0.10.5" version = "0.10.5"
@ -1133,12 +1142,12 @@ dependencies = [
"failure", "failure",
"futures", "futures",
"libc", "libc",
"log",
"nom", "nom",
"serde", "serde",
"serde_json", "serde_json",
"slab", "slab",
"time 0.1.45", "time 0.1.45",
"tracing",
] ]
[[package]] [[package]]
@ -1157,9 +1166,7 @@ dependencies = [
"hashlink", "hashlink",
"itertools", "itertools",
"libc", "libc",
"log",
"moonfire-base", "moonfire-base",
"mylog",
"nix", "nix",
"num-rational", "num-rational",
"odds", "odds",
@ -1176,6 +1183,8 @@ dependencies = [
"tempfile", "tempfile",
"time 0.1.45", "time 0.1.45",
"tokio", "tokio",
"tracing",
"ulid",
"url", "url",
"uuid", "uuid",
] ]
@ -1189,6 +1198,7 @@ dependencies = [
"bpaf", "bpaf",
"byteorder", "byteorder",
"bytes", "bytes",
"chrono",
"cursive", "cursive",
"failure", "failure",
"fnv", "fnv",
@ -1204,7 +1214,6 @@ dependencies = [
"moonfire-base", "moonfire-base",
"moonfire-db", "moonfire-db",
"mp4", "mp4",
"mylog",
"nix", "nix",
"nom", "nom",
"num-rational", "num-rational",
@ -1227,6 +1236,12 @@ dependencies = [
"tokio-tungstenite", "tokio-tungstenite",
"toml", "toml",
"tracing", "tracing",
"tracing-core",
"tracing-futures",
"tracing-log",
"tracing-subscriber",
"tracing-test",
"ulid",
"url", "url",
"uuid", "uuid",
] ]
@ -1259,16 +1274,6 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96a1fe2275b68991faded2c80aa4a33dba398b77d276038b8f50701a22e55918" checksum = "96a1fe2275b68991faded2c80aa4a33dba398b77d276038b8f50701a22e55918"
[[package]]
name = "mylog"
version = "0.1.0"
source = "git+https://github.com/scottlamb/mylog#71bbb14fc8d6047b37e2275027cfa96535f9c556"
dependencies = [
"chrono",
"libc",
"log",
]
[[package]] [[package]]
name = "ncurses" name = "ncurses"
version = "5.101.0" version = "5.101.0"
@ -1304,6 +1309,16 @@ dependencies = [
"minimal-lexical", "minimal-lexical",
] ]
[[package]]
name = "nu-ansi-term"
version = "0.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84"
dependencies = [
"overload",
"winapi",
]
[[package]] [[package]]
name = "num" name = "num"
version = "0.4.0" version = "0.4.0"
@ -1425,6 +1440,12 @@ version = "1.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f61fba1741ea2b3d6a1e3178721804bb716a68a6aeba1149b5d52e3d464ea66" checksum = "6f61fba1741ea2b3d6a1e3178721804bb716a68a6aeba1149b5d52e3d464ea66"
[[package]]
name = "overload"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]] [[package]]
name = "owning_ref" name = "owning_ref"
version = "0.4.1" version = "0.4.1"
@ -1664,6 +1685,9 @@ name = "regex-automata"
version = "0.1.10" version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132" checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
dependencies = [
"regex-syntax",
]
[[package]] [[package]]
name = "regex-syntax" name = "regex-syntax"
@ -1922,6 +1946,15 @@ dependencies = [
"digest", "digest",
] ]
[[package]]
name = "sharded-slab"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "900fba806f70c630b0a382d0d825e17a0f19fcd059a2ade1ff237bcddf446b31"
dependencies = [
"lazy_static",
]
[[package]] [[package]]
name = "signal-hook" name = "signal-hook"
version = "0.3.14" version = "0.3.14"
@ -2093,6 +2126,16 @@ dependencies = [
"syn 1.0.107", "syn 1.0.107",
] ]
[[package]]
name = "thread_local"
version = "1.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fdd6f064ccff2d6567adcb3873ca630700f00b5ad3f060c25b5dcfd9a4ce152"
dependencies = [
"cfg-if",
"once_cell",
]
[[package]] [[package]]
name = "time" name = "time"
version = "0.1.45" version = "0.1.45"
@ -2261,6 +2304,84 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24eb03ba0eab1fd845050058ce5e616558e8f8d8fca633e6b163fe25c797213a" checksum = "24eb03ba0eab1fd845050058ce5e616558e8f8d8fca633e6b163fe25c797213a"
dependencies = [ dependencies = [
"once_cell", "once_cell",
"valuable",
]
[[package]]
name = "tracing-futures"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97d095ae15e245a057c8e8451bab9b3ee1e1f68e9ba2b4fbc18d0ac5237835f2"
dependencies = [
"futures",
"futures-task",
"pin-project",
"tracing",
]
[[package]]
name = "tracing-log"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ddad33d2d10b1ed7eb9d1f518a5674713876e97e5bb9b7345a7984fbb4f922"
dependencies = [
"lazy_static",
"log",
"tracing-core",
]
[[package]]
name = "tracing-serde"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc6b213177105856957181934e4920de57730fc69bf42c37ee5bb664d406d9e1"
dependencies = [
"serde",
"tracing-core",
]
[[package]]
name = "tracing-subscriber"
version = "0.3.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6176eae26dd70d0c919749377897b54a9276bd7061339665dd68777926b5a70"
dependencies = [
"matchers",
"nu-ansi-term",
"once_cell",
"regex",
"serde",
"serde_json",
"sharded-slab",
"smallvec",
"thread_local",
"tracing",
"tracing-core",
"tracing-log",
"tracing-serde",
]
[[package]]
name = "tracing-test"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a2c0ff408fe918a94c428a3f2ad04e4afd5c95bbc08fcf868eff750c15728a4"
dependencies = [
"lazy_static",
"tracing-core",
"tracing-subscriber",
"tracing-test-macro",
]
[[package]]
name = "tracing-test-macro"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "258bc1c4f8e2e73a977812ab339d503e6feeb92700f6d07a6de4d321522d5c08"
dependencies = [
"lazy_static",
"quote",
"syn 1.0.107",
] ]
[[package]] [[package]]
@ -2294,6 +2415,15 @@ version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba" checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba"
[[package]]
name = "ulid"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13a3aaa69b04e5b66cc27309710a569ea23593612387d67daaf102e73aa974fd"
dependencies = [
"rand",
]
[[package]] [[package]]
name = "unchecked-index" name = "unchecked-index"
version = "0.2.2" version = "0.2.2"
@ -2373,6 +2503,12 @@ dependencies = [
"serde", "serde",
] ]
[[package]]
name = "valuable"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d"
[[package]] [[package]]
name = "vcpkg" name = "vcpkg"
version = "0.2.15" version = "0.2.15"

View File

@ -40,7 +40,6 @@ itertools = "0.10.0"
libc = "0.2" libc = "0.2"
log = { version = "0.4" } log = { version = "0.4" }
memchr = "2.0.2" memchr = "2.0.2"
mylog = { git = "https://github.com/scottlamb/mylog" }
nix = "0.26.1" nix = "0.26.1"
nom = "7.0.0" nom = "7.0.0"
password-hash = "0.4.2" password-hash = "0.4.2"
@ -62,12 +61,19 @@ tracing = { version = "0.1", features = ["log"] }
url = "2.1.1" url = "2.1.1"
uuid = { version = "1.1.2", features = ["serde", "std", "v4"] } uuid = { version = "1.1.2", features = ["serde", "std", "v4"] }
once_cell = "1.17.0" once_cell = "1.17.0"
tracing-subscriber = { version = "0.3.16", features = ["env-filter", "json"] }
tracing-core = "0.1.30"
tracing-log = "0.1.3"
chrono = "0.4.23"
ulid = "1.0.0"
tracing-futures = { version = "0.2.5", features = ["futures-03", "std-future"] }
[dev-dependencies] [dev-dependencies]
mp4 = { git = "https://github.com/scottlamb/mp4-rust", branch = "moonfire" } mp4 = { git = "https://github.com/scottlamb/mp4-rust", branch = "moonfire" }
num-rational = { version = "0.4.0", default-features = false, features = ["std"] } num-rational = { version = "0.4.0", default-features = false, features = ["std"] }
reqwest = { version = "0.11.0", default-features = false, features = ["json"] } reqwest = { version = "0.11.0", default-features = false, features = ["json"] }
tempfile = "3.2.0" tempfile = "3.2.0"
tracing-test = "0.2.4"
[profile.dev.package.scrypt] [profile.dev.package.scrypt]
# On an Intel i3-6100U @ 2.30 GHz, a single scrypt password hash takes 7.6 # On an Intel i3-6100U @ 2.30 GHz, a single scrypt password hash takes 7.6

View File

@ -16,9 +16,9 @@ path = "lib.rs"
failure = "0.1.1" failure = "0.1.1"
futures = "0.3" futures = "0.3"
libc = "0.2" libc = "0.2"
log = "0.4"
nom = "7.0.0" nom = "7.0.0"
serde = { version = "1.0", features = ["derive"] } serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0" serde_json = "1.0"
slab = "0.4" slab = "0.4"
time = "0.1" time = "0.1"
tracing = "0.1.37"

View File

@ -5,13 +5,13 @@
//! Clock interface and implementations for testability. //! Clock interface and implementations for testability.
use failure::Error; use failure::Error;
use log::warn;
use std::mem; use std::mem;
use std::sync::Mutex; use std::sync::Mutex;
use std::sync::{mpsc, Arc}; use std::sync::{mpsc, Arc};
use std::thread; use std::thread;
use std::time::Duration as StdDuration; use std::time::Duration as StdDuration;
use time::{Duration, Timespec}; use time::{Duration, Timespec};
use tracing::warn;
use crate::shutdown::ShutdownError; use crate::shutdown::ShutdownError;
@ -54,9 +54,8 @@ where
shutdown_rx.check()?; shutdown_rx.check()?;
let sleep_time = Duration::seconds(1); let sleep_time = Duration::seconds(1);
warn!( warn!(
"sleeping for {} after error: {}", err = crate::error::prettify_failure(&e),
sleep_time, "sleeping for 1 s after error"
crate::error::prettify_failure(&e)
); );
clocks.sleep(sleep_time); clocks.sleep(sleep_time);
} }

View File

@ -25,8 +25,6 @@ futures = "0.3"
h264-reader = "0.6.0" h264-reader = "0.6.0"
hashlink = "0.8.1" hashlink = "0.8.1"
libc = "0.2" libc = "0.2"
log = "0.4"
mylog = { git = "https://github.com/scottlamb/mylog" }
nix = "0.26.1" nix = "0.26.1"
num-rational = { version = "0.4.0", default-features = false, features = ["std"] } num-rational = { version = "0.4.0", default-features = false, features = ["std"] }
odds = { version = "0.4.0", features = ["std-vec"] } odds = { version = "0.4.0", features = ["std-vec"] }
@ -46,6 +44,8 @@ url = { version = "2.1.1", features = ["serde"] }
uuid = { version = "1.1.2", features = ["serde", "std", "v4"] } uuid = { version = "1.1.2", features = ["serde", "std", "v4"] }
itertools = "0.10.0" itertools = "0.10.0"
once_cell = "1.17.0" once_cell = "1.17.0"
tracing = "0.1.37"
ulid = "1.0.0"
[build-dependencies] [build-dependencies]
protobuf-codegen = "3.0" protobuf-codegen = "3.0"

View File

@ -9,7 +9,6 @@ use crate::schema::Permissions;
use base::{bail_t, format_err_t, strutil, ErrorKind, ResultExt as _}; use base::{bail_t, format_err_t, strutil, ErrorKind, ResultExt as _};
use failure::{bail, format_err, Error, Fail, ResultExt as _}; use failure::{bail, format_err, Error, Fail, ResultExt as _};
use fnv::FnvHashMap; use fnv::FnvHashMap;
use log::info;
use protobuf::Message; use protobuf::Message;
use ring::rand::{SecureRandom, SystemRandom}; use ring::rand::{SecureRandom, SystemRandom};
use rusqlite::{named_params, params, Connection, Transaction}; use rusqlite::{named_params, params, Connection, Transaction};
@ -19,6 +18,7 @@ use std::fmt;
use std::net::IpAddr; use std::net::IpAddr;
use std::str::FromStr; use std::str::FromStr;
use std::sync::Mutex; use std::sync::Mutex;
use tracing::info;
static PARAMS: once_cell::sync::Lazy<Mutex<scrypt::Params>> = static PARAMS: once_cell::sync::Lazy<Mutex<scrypt::Params>> =
once_cell::sync::Lazy::new(|| Mutex::new(scrypt::Params::recommended())); once_cell::sync::Lazy::new(|| Mutex::new(scrypt::Params::recommended()));

View File

@ -13,10 +13,10 @@ use crate::recording;
use crate::schema; use crate::schema;
use failure::Error; use failure::Error;
use fnv::{FnvHashMap, FnvHashSet}; use fnv::{FnvHashMap, FnvHashSet};
use log::{error, info, warn};
use nix::fcntl::AtFlags; use nix::fcntl::AtFlags;
use rusqlite::params; use rusqlite::params;
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
use tracing::{error, info, warn};
pub struct Options { pub struct Options {
pub compare_lens: bool, pub compare_lens: bool,

View File

@ -6,7 +6,6 @@
use base::time::{Duration, Time, TIME_UNITS_PER_SEC}; use base::time::{Duration, Time, TIME_UNITS_PER_SEC};
use failure::Error; use failure::Error;
use log::{error, trace};
use smallvec::SmallVec; use smallvec::SmallVec;
use std::cmp; use std::cmp;
use std::collections::BTreeMap; use std::collections::BTreeMap;
@ -14,6 +13,7 @@ use std::convert::TryFrom;
use std::io::Write; use std::io::Write;
use std::ops::Range; use std::ops::Range;
use std::str; use std::str;
use tracing::{error, trace};
/// A calendar day in `YYYY-mm-dd` format. /// A calendar day in `YYYY-mm-dd` format.
#[derive(Copy, Clone, Eq, Ord, PartialEq, PartialOrd)] #[derive(Copy, Clone, Eq, Ord, PartialEq, PartialOrd)]

View File

@ -41,8 +41,6 @@ use failure::{bail, format_err, Error, ResultExt};
use fnv::{FnvHashMap, FnvHashSet}; use fnv::{FnvHashMap, FnvHashSet};
use hashlink::LinkedHashMap; use hashlink::LinkedHashMap;
use itertools::Itertools; use itertools::Itertools;
use log::warn;
use log::{error, info, trace};
use rusqlite::{named_params, params}; use rusqlite::{named_params, params};
use smallvec::SmallVec; use smallvec::SmallVec;
use std::cell::RefCell; use std::cell::RefCell;
@ -58,6 +56,8 @@ use std::string::String;
use std::sync::Arc; use std::sync::Arc;
use std::sync::{Mutex, MutexGuard}; use std::sync::{Mutex, MutexGuard};
use std::vec::Vec; use std::vec::Vec;
use tracing::warn;
use tracing::{error, info, trace};
use uuid::Uuid; use uuid::Uuid;
/// Expected schema version. See `guide/schema.md` for more information. /// Expected schema version. See `guide/schema.md` for more information.
@ -986,6 +986,8 @@ impl LockedDatabase {
/// ///
/// The public API is in `DatabaseGuard::flush()`; it supplies the `Clocks` to this function. /// The public API is in `DatabaseGuard::flush()`; it supplies the `Clocks` to this function.
fn flush<C: Clocks>(&mut self, clocks: &C, reason: &str) -> Result<(), Error> { fn flush<C: Clocks>(&mut self, clocks: &C, reason: &str) -> Result<(), Error> {
let span = tracing::info_span!("flush", flush_count = self.flush_count, reason);
let _enter = span.enter();
let o = match self.open.as_ref() { let o = match self.open.as_ref() {
None => bail!("database is read-only"), None => bail!("database is read-only"),
Some(o) => o, Some(o) => o,
@ -1161,7 +1163,7 @@ impl LockedDatabase {
if log_msg.is_empty() { if log_msg.is_empty() {
log_msg.push_str(" no recording changes"); log_msg.push_str(" no recording changes");
} }
info!("Flush {} (why: {}):{}", self.flush_count, reason, &log_msg); info!("flush complete: {log_msg}");
for cb in &self.on_flush { for cb in &self.on_flush {
cb(); cb();
} }

View File

@ -14,7 +14,6 @@ use crate::db::CompositeId;
use crate::schema; use crate::schema;
use cstr::cstr; use cstr::cstr;
use failure::{bail, format_err, Error, Fail}; use failure::{bail, format_err, Error, Fail};
use log::warn;
use nix::sys::statvfs::Statvfs; use nix::sys::statvfs::Statvfs;
use nix::{ use nix::{
fcntl::{FlockArg, OFlag}, fcntl::{FlockArg, OFlag},
@ -29,6 +28,7 @@ use std::ops::Range;
use std::os::unix::io::{AsRawFd, RawFd}; use std::os::unix::io::{AsRawFd, RawFd};
use std::path::Path; use std::path::Path;
use std::sync::Arc; use std::sync::Arc;
use tracing::warn;
/// The fixed length of a directory's `meta` file. /// The fixed length of a directory's `meta` file.
/// ///

View File

@ -54,9 +54,13 @@ impl Reader {
) )
.expect("PAGE_SIZE fits in usize"); .expect("PAGE_SIZE fits in usize");
assert_eq!(page_size.count_ones(), 1, "invalid page size {page_size}"); assert_eq!(page_size.count_ones(), 1, "invalid page size {page_size}");
let span = tracing::info_span!("reader", path = %path.display());
std::thread::Builder::new() std::thread::Builder::new()
.name(format!("r-{}", path.display())) .name(format!("r-{}", path.display()))
.spawn(move || ReaderInt { dir, page_size }.run(rx)) .spawn(move || {
let _guard = span.enter();
ReaderInt { dir, page_size }.run(rx)
})
.expect("unable to create reader thread"); .expect("unable to create reader thread");
Self(tx) Self(tx)
} }
@ -70,6 +74,7 @@ impl Reader {
} }
let (tx, rx) = tokio::sync::oneshot::channel(); let (tx, rx) = tokio::sync::oneshot::channel();
self.send(ReaderCommand::OpenFile { self.send(ReaderCommand::OpenFile {
span: tracing::Span::current(),
composite_id, composite_id,
range, range,
tx, tx,
@ -176,6 +181,8 @@ impl Drop for FileStream {
/// around between it and the [FileStream] to avoid maintaining extra data /// around between it and the [FileStream] to avoid maintaining extra data
/// structures. /// structures.
struct OpenFile { struct OpenFile {
span: tracing::Span,
composite_id: CompositeId, composite_id: CompositeId,
/// The memory-mapped region backed by the file. Valid up to length `map_len`. /// The memory-mapped region backed by the file. Valid up to length `map_len`.
@ -197,7 +204,7 @@ impl Drop for OpenFile {
fn drop(&mut self) { fn drop(&mut self) {
if let Err(e) = unsafe { nix::sys::mman::munmap(self.map_ptr, self.map_len) } { if let Err(e) = unsafe { nix::sys::mman::munmap(self.map_ptr, self.map_len) } {
// This should never happen. // This should never happen.
log::error!( tracing::error!(
"unable to munmap {}, {:?} len {}: {}", "unable to munmap {}, {:?} len {}: {}",
self.composite_id, self.composite_id,
self.map_ptr, self.map_ptr,
@ -218,6 +225,7 @@ struct SuccessfulRead {
enum ReaderCommand { enum ReaderCommand {
/// Opens a file and reads the first chunk. /// Opens a file and reads the first chunk.
OpenFile { OpenFile {
span: tracing::Span,
composite_id: CompositeId, composite_id: CompositeId,
range: std::ops::Range<u64>, range: std::ops::Range<u64>,
tx: tokio::sync::oneshot::Sender<Result<SuccessfulRead, Error>>, tx: tokio::sync::oneshot::Sender<Result<SuccessfulRead, Error>>,
@ -248,6 +256,7 @@ impl ReaderInt {
// the CloseFile operation. // the CloseFile operation.
match cmd { match cmd {
ReaderCommand::OpenFile { ReaderCommand::OpenFile {
span,
composite_id, composite_id,
range, range,
tx, tx,
@ -256,8 +265,11 @@ impl ReaderInt {
// avoid spending effort on expired commands // avoid spending effort on expired commands
continue; continue;
} }
let _guard = TimerGuard::new(&RealClocks {}, || format!("open {composite_id}")); let span2 = span.clone();
let _ = tx.send(self.open(composite_id, range)); let _span_enter = span2.enter();
let _timer_guard =
TimerGuard::new(&RealClocks {}, || format!("open {composite_id}"));
let _ = tx.send(self.open(span, composite_id, range));
} }
ReaderCommand::ReadNextChunk { file, tx } => { ReaderCommand::ReadNextChunk { file, tx } => {
if tx.is_closed() { if tx.is_closed() {
@ -265,16 +277,30 @@ impl ReaderInt {
continue; continue;
} }
let composite_id = file.composite_id; let composite_id = file.composite_id;
let span2 = file.span.clone();
let _span_enter = span2.enter();
let _guard = let _guard =
TimerGuard::new(&RealClocks {}, || format!("read from {composite_id}")); TimerGuard::new(&RealClocks {}, || format!("read from {composite_id}"));
let _ = tx.send(Ok(self.chunk(file))); let _ = tx.send(Ok(self.chunk(file)));
} }
ReaderCommand::CloseFile(_) => {} ReaderCommand::CloseFile(mut file) => {
let composite_id = file.composite_id;
let span = std::mem::replace(&mut file.span, tracing::Span::none());
let _span_enter = span.enter();
let _guard =
TimerGuard::new(&RealClocks {}, || format!("close {composite_id}"));
drop(file);
}
} }
} }
} }
fn open(&self, composite_id: CompositeId, range: Range<u64>) -> Result<SuccessfulRead, Error> { fn open(
&self,
span: tracing::Span,
composite_id: CompositeId,
range: Range<u64>,
) -> Result<SuccessfulRead, Error> {
let p = super::CompositeIdPath::from(composite_id); let p = super::CompositeIdPath::from(composite_id);
// Reader::open_file checks for an empty range, but check again right // Reader::open_file checks for an empty range, but check again right
@ -348,7 +374,7 @@ impl ReaderInt {
) )
} { } {
// This shouldn't happen but is "just" a performance problem. // This shouldn't happen but is "just" a performance problem.
log::warn!( tracing::warn!(
"madvise(MADV_SEQUENTIAL) failed for {} off={} len={}: {}", "madvise(MADV_SEQUENTIAL) failed for {} off={} len={}: {}",
composite_id, composite_id,
offset, offset,
@ -358,6 +384,7 @@ impl ReaderInt {
} }
Ok(self.chunk(OpenFile { Ok(self.chunk(OpenFile {
span,
composite_id, composite_id,
map_ptr, map_ptr,
map_pos: unaligned, map_pos: unaligned,
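
The `span` fields threaded through `ReaderCommand` and `OpenFile` above keep reads attached to the request that initiated them even though the work happens on a dedicated thread. A standalone sketch of that capture-and-reenter pattern, with hypothetical names rather than this module's types:

```rust
use std::sync::mpsc;
use tracing::{info, info_span, Span};

// Hypothetical worker: receives (span, work) pairs and logs inside the
// caller's span so events keep the originating request's context.
fn spawn_worker() -> mpsc::Sender<(Span, u64)> {
    let (tx, rx) = mpsc::channel::<(Span, u64)>();
    std::thread::spawn(move || {
        for (span, id) in rx {
            let _enter = span.enter();
            info!(id, "handled on worker thread");
        }
    });
    tx
}

fn caller(tx: &mpsc::Sender<(Span, u64)>) {
    let _request = info_span!("request", id = 7_u64).entered();
    // Capture the current span and ship it with the work item.
    tx.send((Span::current(), 7)).unwrap();
}
```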

View File

@ -7,9 +7,9 @@
use crate::coding::{append_varint32, decode_varint32, unzigzag32, zigzag32}; use crate::coding::{append_varint32, decode_varint32, unzigzag32, zigzag32};
use crate::db; use crate::db;
use failure::{bail, Error}; use failure::{bail, Error};
use log::trace;
use std::convert::TryFrom; use std::convert::TryFrom;
use std::ops::Range; use std::ops::Range;
use tracing::trace;
pub use base::time::TIME_UNITS_PER_SEC; pub use base::time::TIME_UNITS_PER_SEC;

View File

@ -11,12 +11,12 @@ use crate::{recording, SqlUuid};
use base::bail_t; use base::bail_t;
use failure::{bail, format_err, Error}; use failure::{bail, format_err, Error};
use fnv::FnvHashMap; use fnv::FnvHashMap;
use log::debug;
use rusqlite::{params, Connection, Transaction}; use rusqlite::{params, Connection, Transaction};
use std::collections::btree_map::Entry; use std::collections::btree_map::Entry;
use std::collections::{BTreeMap, BTreeSet}; use std::collections::{BTreeMap, BTreeSet};
use std::convert::TryFrom; use std::convert::TryFrom;
use std::ops::Range; use std::ops::Range;
use tracing::debug;
use uuid::Uuid; use uuid::Uuid;
/// All state associated with signals. This is the entry point to this module. /// All state associated with signals. This is the entry point to this module.

View File

@ -38,10 +38,7 @@ pub const TEST_VIDEO_SAMPLE_ENTRY_DATA: &[u8] =
/// * use a fast but insecure password hashing format. /// * use a fast but insecure password hashing format.
pub fn init() { pub fn init() {
INIT.call_once(|| { INIT.call_once(|| {
let h = mylog::Builder::new() // TODO: tracing setup.
.set_spec(&::std::env::var("MOONFIRE_LOG").unwrap_or_else(|_| "info".to_owned()))
.build();
h.install().unwrap();
env::set_var("TZ", "America/Los_Angeles"); env::set_var("TZ", "America/Los_Angeles");
time::tzset(); time::tzset();
crate::auth::set_test_config(); crate::auth::set_test_config();

View File

@ -8,11 +8,11 @@
use crate::db; use crate::db;
use failure::{bail, Error}; use failure::{bail, Error};
use log::info;
use nix::NixPath; use nix::NixPath;
use rusqlite::params; use rusqlite::params;
use std::ffi::CStr; use std::ffi::CStr;
use std::io::Write; use std::io::Write;
use tracing::info;
use uuid::Uuid; use uuid::Uuid;
mod v0_to_v1; mod v0_to_v1;

View File

@ -6,9 +6,9 @@
use crate::db; use crate::db;
use crate::recording; use crate::recording;
use failure::Error; use failure::Error;
use log::warn;
use rusqlite::{named_params, params}; use rusqlite::{named_params, params};
use std::collections::HashMap; use std::collections::HashMap;
use tracing::warn;
pub fn run(_args: &super::Args, tx: &rusqlite::Transaction) -> Result<(), Error> { pub fn run(_args: &super::Args, tx: &rusqlite::Transaction) -> Result<(), Error> {
// These create statements match the schema.sql when version 1 was the latest. // These create statements match the schema.sql when version 1 was the latest.

View File

@ -10,13 +10,13 @@ use crate::db::SqlUuid;
use crate::{dir, schema}; use crate::{dir, schema};
use cstr::cstr; use cstr::cstr;
use failure::{bail, Error, Fail}; use failure::{bail, Error, Fail};
use log::info;
use nix::fcntl::{FlockArg, OFlag}; use nix::fcntl::{FlockArg, OFlag};
use nix::sys::stat::Mode; use nix::sys::stat::Mode;
use protobuf::Message; use protobuf::Message;
use rusqlite::params; use rusqlite::params;
use std::io::{Read, Write}; use std::io::{Read, Write};
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
use tracing::info;
use uuid::Uuid; use uuid::Uuid;
const FIXED_DIR_META_LEN: usize = 512; const FIXED_DIR_META_LEN: usize = 512;

View File

@ -5,9 +5,9 @@
/// Upgrades a version 6 schema to a version 7 schema. /// Upgrades a version 6 schema to a version 7 schema.
use failure::{format_err, Error, ResultExt}; use failure::{format_err, Error, ResultExt};
use fnv::FnvHashMap; use fnv::FnvHashMap;
use log::debug;
use rusqlite::{named_params, params}; use rusqlite::{named_params, params};
use std::{convert::TryFrom, path::PathBuf}; use std::{convert::TryFrom, path::PathBuf};
use tracing::debug;
use url::Url; use url::Url;
use uuid::Uuid; use uuid::Uuid;

View File

@ -11,7 +11,6 @@ use base::clock::{self, Clocks};
use base::shutdown::ShutdownError; use base::shutdown::ShutdownError;
use failure::{bail, format_err, Error}; use failure::{bail, format_err, Error};
use fnv::FnvHashMap; use fnv::FnvHashMap;
use log::{debug, trace, warn};
use std::cmp::{self, Ordering}; use std::cmp::{self, Ordering};
use std::convert::TryFrom; use std::convert::TryFrom;
use std::io; use std::io;
@ -22,6 +21,7 @@ use std::sync::{mpsc, Arc};
use std::thread; use std::thread;
use std::time::Duration as StdDuration; use std::time::Duration as StdDuration;
use time::{Duration, Timespec}; use time::{Duration, Timespec};
use tracing::{debug, trace, warn};
/// Trait to allow mocking out [crate::dir::SampleFileDir] in syncer tests. /// Trait to allow mocking out [crate::dir::SampleFileDir] in syncer tests.
/// This is public because it's exposed in the [SyncerChannel] type parameters, /// This is public because it's exposed in the [SyncerChannel] type parameters,
@ -166,21 +166,30 @@ where
{ {
let db2 = db.clone(); let db2 = db.clone();
let (mut syncer, path) = Syncer::new(&db.lock(), shutdown_rx, db2, dir_id)?; let (mut syncer, path) = Syncer::new(&db.lock(), shutdown_rx, db2, dir_id)?;
syncer.initial_rotation()?; let span = tracing::info_span!("syncer", path = %path.display());
span.in_scope(|| {
tracing::info!("initial rotation");
syncer.initial_rotation()
})?;
let (snd, rcv) = mpsc::channel(); let (snd, rcv) = mpsc::channel();
db.lock().on_flush(Box::new({ db.lock().on_flush(Box::new({
let snd = snd.clone(); let snd = snd.clone();
move || { move || {
if let Err(e) = snd.send(SyncerCommand::DatabaseFlushed) { if let Err(err) = snd.send(SyncerCommand::DatabaseFlushed) {
warn!("Unable to notify syncer for dir {} of flush: {}", dir_id, e); warn!(%err, "Unable to notify syncer for dir {}", dir_id);
} }
} }
})); }));
Ok(( Ok((
SyncerChannel(snd), SyncerChannel(snd),
thread::Builder::new() thread::Builder::new()
.name(format!("sync-{}", path.display())) .name(format!("sync-{dir_id}"))
.spawn(move || while syncer.iter(&rcv) {}) .spawn(move || {
span.in_scope(|| {
tracing::info!("starting");
while syncer.iter(&rcv) {}
})
})
.unwrap(), .unwrap(),
)) ))
} }
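
The `span.in_scope(...)` calls above move a span created on the parent thread into the spawned worker, so everything the syncer logs carries its `syncer{path=...}` context. A minimal sketch of the same pattern with hypothetical names:

```rust
use tracing::{info, info_span};

// Hypothetical: create a span on the parent thread, move it into the worker,
// and run the worker loop inside it so its events carry the span's fields.
fn spawn_named_worker(dir_id: i32) -> std::thread::JoinHandle<()> {
    let span = info_span!("worker", dir_id);
    std::thread::Builder::new()
        .name(format!("sync-{dir_id}"))
        .spawn(move || {
            span.in_scope(|| {
                info!("starting");
                // ... worker loop would run here ...
            })
        })
        .expect("failed to spawn worker")
}
```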
@ -812,7 +821,7 @@ impl<'a, C: Clocks + Clone, D: DirWriter> Writer<'a, C, D> {
Ok(w) => w, Ok(w) => w,
Err(e) => { Err(e) => {
// close() will do nothing because unindexed_sample will be None. // close() will do nothing because unindexed_sample will be None.
log::warn!( tracing::warn!(
"Abandoning incompletely written recording {} on shutdown", "Abandoning incompletely written recording {} on shutdown",
w.id w.id
); );
@ -983,12 +992,12 @@ mod tests {
use crate::recording; use crate::recording;
use crate::testutil; use crate::testutil;
use base::clock::{Clocks, SimulatedClocks}; use base::clock::{Clocks, SimulatedClocks};
use log::{trace, warn};
use std::collections::VecDeque; use std::collections::VecDeque;
use std::io; use std::io;
use std::sync::mpsc; use std::sync::mpsc;
use std::sync::Arc; use std::sync::Arc;
use std::sync::Mutex; use std::sync::Mutex;
use tracing::{trace, warn};
#[derive(Clone)] #[derive(Clone)]
struct MockDir(Arc<Mutex<VecDeque<MockDirAction>>>); struct MockDir(Arc<Mutex<VecDeque<MockDirAction>>>);

View File

@ -110,7 +110,7 @@ fn get_camera(siv: &mut Cursive) -> Camera {
sample_file_dir_id, sample_file_dir_id,
}; };
} }
log::trace!("camera is: {:#?}", &camera); tracing::trace!("camera is: {:#?}", &camera);
camera camera
} }
@ -521,7 +521,7 @@ fn load_camera_values(
v.set_content(s.config.flush_if_sec.to_string()) v.set_content(s.config.flush_if_sec.to_string())
}); });
} }
log::debug!("setting {} dir to {}", t.as_str(), selected_dir); tracing::debug!("setting {} dir to {}", t.as_str(), selected_dir);
dialog.call_on_name( dialog.call_on_name(
&format!("{}_sample_file_dir", t), &format!("{}_sample_file_dir", t),
|v: &mut views::SelectView<Option<i32>>| v.set_selection(selected_dir), |v: &mut views::SelectView<Option<i32>>| v.set_selection(selected_dir),

View File

@ -9,12 +9,12 @@ use cursive::Cursive;
use cursive::{views, With}; use cursive::{views, With};
use db::writer; use db::writer;
use failure::Error; use failure::Error;
use log::{debug, trace};
use std::cell::RefCell; use std::cell::RefCell;
use std::collections::BTreeMap; use std::collections::BTreeMap;
use std::path::Path; use std::path::Path;
use std::rc::Rc; use std::rc::Rc;
use std::sync::Arc; use std::sync::Arc;
use tracing::{debug, trace};
use super::tab_complete::TabCompleteEditView; use super::tab_complete::TabCompleteEditView;

View File

@ -5,8 +5,8 @@
use cursive::traits::{Nameable, Resizable}; use cursive::traits::{Nameable, Resizable};
use cursive::views; use cursive::views;
use cursive::Cursive; use cursive::Cursive;
use log::info;
use std::sync::Arc; use std::sync::Arc;
use tracing::info;
/// Builds a `UserChange` from an active `edit_user_dialog`. /// Builds a `UserChange` from an active `edit_user_dialog`.
fn get_change( fn get_change(

View File

@ -4,8 +4,8 @@
use bpaf::Bpaf; use bpaf::Bpaf;
use failure::Error; use failure::Error;
use log::info;
use std::path::PathBuf; use std::path::PathBuf;
use tracing::info;
/// Initializes a database. /// Initializes a database.
#[derive(Bpaf, Debug)] #[derive(Bpaf, Debug)]

View File

@ -4,9 +4,9 @@
use db::dir; use db::dir;
use failure::{Error, Fail}; use failure::{Error, Fail};
use log::info;
use nix::fcntl::FlockArg; use nix::fcntl::FlockArg;
use std::path::Path; use std::path::Path;
use tracing::info;
pub mod check; pub mod check;
pub mod config; pub mod config;

View File

@ -11,8 +11,6 @@ use db::{dir, writer};
use failure::{bail, Error, ResultExt}; use failure::{bail, Error, ResultExt};
use fnv::FnvHashMap; use fnv::FnvHashMap;
use hyper::service::{make_service_fn, service_fn}; use hyper::service::{make_service_fn, service_fn};
use log::error;
use log::{info, warn};
use retina::client::SessionGroup; use retina::client::SessionGroup;
use std::net::SocketAddr; use std::net::SocketAddr;
use std::path::Path; use std::path::Path;
@ -20,6 +18,8 @@ use std::path::PathBuf;
use std::sync::Arc; use std::sync::Arc;
use std::thread; use std::thread;
use tokio::signal::unix::{signal, SignalKind}; use tokio::signal::unix::{signal, SignalKind};
use tracing::error;
use tracing::{info, warn};
use self::config::ConfigFile; use self::config::ConfigFile;
@ -342,15 +342,18 @@ async fn inner(
rotate_offset_sec, rotate_offset_sec,
streamer::ROTATE_INTERVAL_SEC, streamer::ROTATE_INTERVAL_SEC,
)?; )?;
info!("Starting streamer for {}", streamer.short_name()); let span = tracing::info_span!("streamer", stream = streamer.short_name());
let name = format!("s-{}", streamer.short_name()); let thread_name = format!("s-{}", streamer.short_name());
let handle = handle.clone(); let handle = handle.clone();
streamers.push( streamers.push(
thread::Builder::new() thread::Builder::new()
.name(name) .name(thread_name)
.spawn(move || { .spawn(move || {
let _enter = handle.enter(); span.in_scope(|| {
streamer.run(); let _enter_tokio = handle.enter();
info!("starting");
streamer.run();
})
}) })
.expect("can't create thread"), .expect("can't create thread"),
); );
@ -402,7 +405,7 @@ async fn inner(
move || { move || {
for streamer in streamers.drain(..) { for streamer in streamers.drain(..) {
if streamer.join().is_err() { if streamer.join().is_err() {
log::error!("streamer panicked; look for previous panic message"); tracing::error!("streamer panicked; look for previous panic message");
} }
} }
if let Some(mut ss) = syncers { if let Some(mut ss) = syncers {

View File

@ -5,11 +5,9 @@
#![cfg_attr(all(feature = "nightly", test), feature(test))] #![cfg_attr(all(feature = "nightly", test), feature(test))]
use bpaf::{Bpaf, Parser}; use bpaf::{Bpaf, Parser};
use log::{debug, error};
use std::ffi::OsStr; use std::ffi::OsStr;
use std::fmt::Write;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::str::FromStr; use tracing::{debug, error};
mod body; mod body;
mod cmds; mod cmds;
@ -19,6 +17,7 @@ mod mp4;
mod slices; mod slices;
mod stream; mod stream;
mod streamer; mod streamer;
mod tracing_setup;
mod web; mod web;
const DEFAULT_DB_DIR: &str = "/var/lib/moonfire-nvr/db"; const DEFAULT_DB_DIR: &str = "/var/lib/moonfire-nvr/db";
@ -61,37 +60,9 @@ fn parse_db_dir() -> impl Parser<PathBuf> {
.debug_fallback() .debug_fallback()
} }
/// Custom panic hook that logs instead of directly writing to stderr.
///
/// This means it includes a timestamp and is more recognizable as a serious
/// error (including console color coding by default, a format `lnav` will
/// recognize, etc.).
fn panic_hook(p: &std::panic::PanicInfo) {
let mut msg;
if let Some(l) = p.location() {
msg = format!("panic at '{l}'");
} else {
msg = "panic".to_owned();
}
if let Some(s) = p.payload().downcast_ref::<&str>() {
write!(&mut msg, ": {s}").unwrap();
} else if let Some(s) = p.payload().downcast_ref::<String>() {
write!(&mut msg, ": {s}").unwrap();
}
let b = failure::Backtrace::new();
if b.is_empty() {
write!(
&mut msg,
"\n\n(set environment variable RUST_BACKTRACE=1 to see backtraces)"
)
.unwrap();
} else {
write!(&mut msg, "\n\nBacktrace:\n{b}").unwrap();
}
error!("{}", msg);
}
fn main() { fn main() {
// If using the clock will fail, find out now *before* trying to log
// anything (with timestamps...) so we can print a helpful error.
if let Err(e) = nix::time::clock_gettime(nix::time::ClockId::CLOCK_MONOTONIC) { if let Err(e) = nix::time::clock_gettime(nix::time::ClockId::CLOCK_MONOTONIC) {
eprintln!( eprintln!(
"clock_gettime failed: {e}\n\n\ "clock_gettime failed: {e}\n\n\
@ -100,22 +71,7 @@ fn main() {
std::process::exit(1); std::process::exit(1);
} }
let mut h = mylog::Builder::new() tracing_setup::install();
.set_format(
::std::env::var("MOONFIRE_FORMAT")
.map_err(|_| ())
.and_then(|s| mylog::Format::from_str(&s))
.unwrap_or(mylog::Format::Google),
)
.set_color(
::std::env::var("MOONFIRE_COLOR")
.map_err(|_| ())
.and_then(|s| mylog::ColorMode::from_str(&s))
.unwrap_or(mylog::ColorMode::Auto),
)
.set_spec(&::std::env::var("MOONFIRE_LOG").unwrap_or_else(|_| "info".to_owned()))
.build();
h.clone().install().unwrap();
// Get the program name from the OS (e.g. if invoked as `target/debug/nvr`: `nvr`), // Get the program name from the OS (e.g. if invoked as `target/debug/nvr`: `nvr`),
// falling back to the crate name if conversion to a path/UTF-8 string fails. // falling back to the crate name if conversion to a path/UTF-8 string fails.
@ -127,13 +83,6 @@ fn main() {
.and_then(OsStr::to_str) .and_then(OsStr::to_str)
.unwrap_or(env!("CARGO_PKG_NAME")); .unwrap_or(env!("CARGO_PKG_NAME"));
let use_panic_hook = ::std::env::var("MOONFIRE_PANIC_HOOK")
.map(|s| s != "false" && s != "0")
.unwrap_or(true);
if use_panic_hook {
std::panic::set_hook(Box::new(&panic_hook));
}
let args = match args() let args = match args()
.fallback_to_usage() .fallback_to_usage()
.run_inner(bpaf::Args::current_args().set_name(progname)) .run_inner(bpaf::Args::current_args().set_name(progname))
@ -141,13 +90,9 @@ fn main() {
Ok(a) => a, Ok(a) => a,
Err(e) => std::process::exit(e.exit_code()), Err(e) => std::process::exit(e.exit_code()),
}; };
log::trace!("Parsed command-line arguments: {args:#?}"); tracing::trace!("Parsed command-line arguments: {args:#?}");
let r = { match args.run() {
let _a = h.async_scope();
args.run()
};
match r {
Err(e) => { Err(e) => {
error!("Exiting due to error: {}", base::prettify_failure(&e)); error!("Exiting due to error: {}", base::prettify_failure(&e));
::std::process::exit(1); ::std::process::exit(1);
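
The `tracing_setup` module referenced by `install()` above is not part of this excerpt. A hypothetical sketch of the panic-hook half of it, which would take over from the `panic_hook` deleted above (field names are assumptions, chosen to resemble the panic example in the troubleshooting guide):

```rust
// Hypothetical: a panic hook that routes through tracing instead of writing
// directly to stderr, so panics keep timestamps and span context.
fn install_panic_hook() {
    std::panic::set_hook(Box::new(|info| {
        let location = info
            .location()
            .map(|l| l.to_string())
            .unwrap_or_default();
        tracing::error!(%location, "panic: {}", info);
    }));
}
```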

View File

@ -65,7 +65,6 @@ use futures::stream::{self, TryStreamExt};
use futures::Stream; use futures::Stream;
use http::header::HeaderValue; use http::header::HeaderValue;
use hyper::body::Buf; use hyper::body::Buf;
use log::{debug, error, trace, warn};
use reffers::ARefss; use reffers::ARefss;
use smallvec::SmallVec; use smallvec::SmallVec;
use std::cell::UnsafeCell; use std::cell::UnsafeCell;
@ -78,6 +77,7 @@ use std::ops::Range;
use std::sync::Arc; use std::sync::Arc;
use std::sync::Once; use std::sync::Once;
use std::time::SystemTime; use std::time::SystemTime;
use tracing::{debug, error, trace, warn};
/// This value should be incremented any time a change is made to this file that causes different /// This value should be incremented any time a change is made to this file that causes different
/// bytes or headers to be output for a particular set of `FileBuilder` options. Incrementing this /// bytes or headers to be output for a particular set of `FileBuilder` options. Incrementing this
@ -2002,12 +2002,12 @@ mod tests {
use futures::stream::TryStreamExt; use futures::stream::TryStreamExt;
use http_serve::{self, Entity}; use http_serve::{self, Entity};
use hyper::body::Buf; use hyper::body::Buf;
use log::info;
use std::fs; use std::fs;
use std::ops::Range; use std::ops::Range;
use std::path::Path; use std::path::Path;
use std::pin::Pin; use std::pin::Pin;
use std::str; use std::str;
use tracing::info;
async fn fill_slice<E: http_serve::Entity>(slice: &mut [u8], e: &E, start: u64) async fn fill_slice<E: http_serve::Entity>(slice: &mut [u8], e: &E, start: u64)
where where

View File

@ -4,13 +4,15 @@
//! Tools for implementing a `http_serve::Entity` body composed from many "slices". //! Tools for implementing a `http_serve::Entity` body composed from many "slices".
use std::fmt;
use std::ops::Range;
use std::pin::Pin;
use crate::body::{wrap_error, BoxedError}; use crate::body::{wrap_error, BoxedError};
use base::format_err_t; use base::format_err_t;
use failure::{bail, Error}; use failure::{bail, Error};
use futures::{stream, stream::StreamExt, Stream}; use futures::{stream, stream::StreamExt, Stream};
use std::fmt; use tracing_futures::Instrument;
use std::ops::Range;
use std::pin::Pin;
/// Gets a byte range given a context argument. /// Gets a byte range given a context argument.
/// Each `Slice` instance belongs to a single `Slices`. /// Each `Slice` instance belongs to a single `Slices`.
@ -173,7 +175,7 @@ where
futures::future::ready(Some((Pin::from(body), (c, i + 1, 0, min_end)))) futures::future::ready(Some((Pin::from(body), (c, i + 1, 0, min_end))))
}, },
); );
Box::new(bodies.flatten()) Box::new(bodies.flatten().in_current_span())
} }
} }
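
`in_current_span()` (from `tracing_futures`) ties the flattened stream to the caller's span so events emitted while it is polled keep their context. A minimal future-based illustration of the same mechanism using `tracing`'s own `Instrument` extension:

```rust
use tracing::Instrument;

async fn handle() {
    let span = tracing::info_span!("request", id = 42_u64);
    async {
        // Emitted while the instrumented future is polled, so it carries
        // `request{id=42}` even if polled from another task or thread.
        tracing::info!("streaming body");
    }
    .instrument(span)
    .await;
}
```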

View File

@ -11,6 +11,7 @@ use retina::client::Demuxed;
use retina::codec::CodecItem; use retina::codec::CodecItem;
use std::pin::Pin; use std::pin::Pin;
use std::result::Result; use std::result::Result;
use tracing::Instrument;
use url::Url; use url::Url;
static RETINA_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(30); static RETINA_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(30);
@ -64,10 +65,15 @@ impl Opener for RealOpener {
.user_agent(format!("Moonfire NVR {}", env!("CARGO_PKG_VERSION"))); .user_agent(format!("Moonfire NVR {}", env!("CARGO_PKG_VERSION")));
let rt_handle = tokio::runtime::Handle::current(); let rt_handle = tokio::runtime::Handle::current();
let (inner, first_frame) = rt_handle let (inner, first_frame) = rt_handle
.block_on(rt_handle.spawn(tokio::time::timeout( .block_on(
RETINA_TIMEOUT, rt_handle.spawn(
RetinaStreamInner::play(label, url, options), tokio::time::timeout(
))) RETINA_TIMEOUT,
RetinaStreamInner::play(label, url, options),
)
.in_current_span(),
),
)
.expect("RetinaStream::play task panicked, see earlier error")??; .expect("RetinaStream::play task panicked, see earlier error")??;
Ok(Box::new(RetinaStream { Ok(Box::new(RetinaStream {
inner: Some(inner), inner: Some(inner),
@ -116,7 +122,7 @@ impl RetinaStreamInner {
options: Options, options: Options,
) -> Result<(Box<Self>, retina::codec::VideoFrame), Error> { ) -> Result<(Box<Self>, retina::codec::VideoFrame), Error> {
let mut session = retina::client::Session::describe(url, options.session).await?; let mut session = retina::client::Session::describe(url, options.session).await?;
log::debug!("connected to {:?}, tool {:?}", &label, session.tool()); tracing::debug!("connected to {:?}, tool {:?}", &label, session.tool());
let video_i = session let video_i = session
.streams() .streams()
.iter() .iter()
@ -169,7 +175,7 @@ impl RetinaStreamInner {
None => bail!("end of stream"), None => bail!("end of stream"),
Some(CodecItem::VideoFrame(v)) => { Some(CodecItem::VideoFrame(v)) => {
if v.loss() > 0 { if v.loss() > 0 {
log::warn!( tracing::warn!(
"{}: lost {} RTP packets @ {}", "{}: lost {} RTP packets @ {}",
&self.label, &self.label,
v.loss(), v.loss(),
@ -210,17 +216,19 @@ impl Stream for RetinaStream {
let inner = self.inner.take().unwrap(); let inner = self.inner.take().unwrap();
let (mut inner, frame, new_parameters) = self let (mut inner, frame, new_parameters) = self
.rt_handle .rt_handle
.block_on(self.rt_handle.spawn(tokio::time::timeout( .block_on(
RETINA_TIMEOUT, self.rt_handle.spawn(
inner.fetch_next_frame(), tokio::time::timeout(RETINA_TIMEOUT, inner.fetch_next_frame())
))) .in_current_span(),
),
)
.expect("fetch_next_frame task panicked, see earlier error") .expect("fetch_next_frame task panicked, see earlier error")
.map_err(|_| format_err!("timeout getting next frame"))??; .map_err(|_| format_err!("timeout getting next frame"))??;
let mut new_video_sample_entry = false; let mut new_video_sample_entry = false;
if let Some(p) = new_parameters { if let Some(p) = new_parameters {
let video_sample_entry = h264::parse_extra_data(p.extra_data())?; let video_sample_entry = h264::parse_extra_data(p.extra_data())?;
if video_sample_entry != inner.video_sample_entry { if video_sample_entry != inner.video_sample_entry {
log::debug!( tracing::debug!(
"{}: parameter change:\nold: {:?}\nnew: {:?}", "{}: parameter change:\nold: {:?}\nnew: {:?}",
&inner.label, &inner.label,
&inner.video_sample_entry, &inner.video_sample_entry,
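The reshuffled `block_on` calls above add `.in_current_span()` so that work spawned onto the tokio runtime from the stream's blocking thread keeps the caller's span. A rough sketch of that bridge pattern as a free function — hypothetical and simplified, not the actual `RetinaStream` code:

```rust
use std::time::Duration;
use tracing::Instrument;

type BoxError = Box<dyn std::error::Error + Send + Sync>;

/// Runs `fut` on the tokio runtime with a 30-second timeout from a blocking
/// (non-async) thread, keeping whatever tracing span is current on that thread.
fn run_with_timeout<T, F>(handle: &tokio::runtime::Handle, fut: F) -> Result<T, BoxError>
where
    F: std::future::Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    let joined = handle.block_on(
        handle.spawn(tokio::time::timeout(Duration::from_secs(30), fut).in_current_span()),
    );
    // First `?`: the spawned task panicked or was aborted.
    // Second `?`: the timeout elapsed before the future finished.
    Ok(joined??)
}
```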


@ -6,10 +6,10 @@ use crate::stream;
use base::clock::{Clocks, TimerGuard}; use base::clock::{Clocks, TimerGuard};
use db::{dir, recording, writer, Camera, Database, Stream}; use db::{dir, recording, writer, Camera, Database, Stream};
use failure::{bail, format_err, Error}; use failure::{bail, format_err, Error};
use log::{debug, info, trace, warn};
use std::result::Result; use std::result::Result;
use std::str::FromStr; use std::str::FromStr;
use std::sync::Arc; use std::sync::Arc;
use tracing::{debug, info, trace, warn, Instrument};
use url::Url; use url::Url;
pub static ROTATE_INTERVAL_SEC: i64 = 60; pub static ROTATE_INTERVAL_SEC: i64 = 60;
@ -78,7 +78,7 @@ where
match retina::client::Transport::from_str(&s.config.rtsp_transport) { match retina::client::Transport::from_str(&s.config.rtsp_transport) {
Ok(t) => Some(t), Ok(t) => Some(t),
Err(_) => { Err(_) => {
log::warn!( tracing::warn!(
"Unable to parse configured transport {:?} for {}/{}; ignoring.", "Unable to parse configured transport {:?} for {}/{}; ignoring.",
&s.config.rtsp_transport, &s.config.rtsp_transport,
&c.short_name, &c.short_name,
@ -116,22 +116,20 @@ where
/// the context of a multithreaded tokio runtime with IO and time enabled. /// the context of a multithreaded tokio runtime with IO and time enabled.
pub fn run(&mut self) { pub fn run(&mut self) {
while self.shutdown_rx.check().is_ok() { while self.shutdown_rx.check().is_ok() {
if let Err(e) = self.run_once() { if let Err(err) = self.run_once() {
let sleep_time = time::Duration::seconds(1); let sleep_time = time::Duration::seconds(1);
warn!( warn!(
"{}: sleeping for {} after error: {}", err = base::prettify_failure(&err),
self.short_name, "sleeping for 1 s after error"
sleep_time,
base::prettify_failure(&e)
); );
self.db.clocks().sleep(sleep_time); self.db.clocks().sleep(sleep_time);
} }
} }
info!("{}: shutting down", self.short_name); info!("shutting down");
} }
fn run_once(&mut self) -> Result<(), Error> { fn run_once(&mut self) -> Result<(), Error> {
info!("{}: Opening input: {}", self.short_name, self.url.as_str()); info!(url = %self.url, "opening input");
let clocks = self.db.clocks(); let clocks = self.db.clocks();
let handle = tokio::runtime::Handle::current(); let handle = tokio::runtime::Handle::current();
@ -139,29 +137,31 @@ where
loop { loop {
let status = self.session_group.stale_sessions(); let status = self.session_group.stale_sessions();
if let Some(max_expires) = status.max_expires { if let Some(max_expires) = status.max_expires {
log::info!( tracing::info!(
"{}: waiting up to {:?} for TEARDOWN or expiration of {} stale sessions", "waiting up to {:?} for TEARDOWN or expiration of {} stale sessions",
&self.short_name,
max_expires.saturating_duration_since(tokio::time::Instant::now()), max_expires.saturating_duration_since(tokio::time::Instant::now()),
status.num_sessions status.num_sessions
); );
handle.block_on(async { handle.block_on(
tokio::select! { async {
_ = self.session_group.await_stale_sessions(&status) => Ok(()), tokio::select! {
_ = self.shutdown_rx.as_future() => Err(base::shutdown::ShutdownError), _ = self.session_group.await_stale_sessions(&status) => Ok(()),
_ = self.shutdown_rx.as_future() => Err(base::shutdown::ShutdownError),
}
} }
})?; .in_current_span(),
)?;
waited = true; waited = true;
} else { } else {
if waited { if waited {
log::info!("{}: done waiting; no more stale sessions", &self.short_name); tracing::info!("done waiting; no more stale sessions");
} }
break; break;
} }
} }
let mut stream = { let mut stream = {
let _t = TimerGuard::new(&clocks, || format!("opening {}", self.url.as_str())); let _t = TimerGuard::new(&clocks, || format!("opening {}", self.url));
let options = stream::Options { let options = stream::Options {
session: retina::client::SessionOptions::default() session: retina::client::SessionOptions::default()
.creds(if self.username.is_empty() { .creds(if self.username.is_empty() {
@ -208,14 +208,14 @@ where
if !seen_key_frame && !frame.is_key { if !seen_key_frame && !frame.is_key {
continue; continue;
} else if !seen_key_frame { } else if !seen_key_frame {
debug!("{}: have first key frame", self.short_name); debug!("have first key frame");
seen_key_frame = true; seen_key_frame = true;
} }
let frame_realtime = clocks.monotonic() + realtime_offset; let frame_realtime = clocks.monotonic() + realtime_offset;
let local_time = recording::Time::new(frame_realtime); let local_time = recording::Time::new(frame_realtime);
rotate = if let Some(r) = rotate { rotate = if let Some(r) = rotate {
if frame_realtime.sec > r && frame.is_key { if frame_realtime.sec > r && frame.is_key {
trace!("{}: close on normal rotation", self.short_name); trace!("close on normal rotation");
let _t = TimerGuard::new(&clocks, || "closing writer"); let _t = TimerGuard::new(&clocks, || "closing writer");
w.close(Some(frame.pts), None)?; w.close(Some(frame.pts), None)?;
None None
@ -223,7 +223,7 @@ where
if !frame.is_key { if !frame.is_key {
bail!("parameter change on non-key frame"); bail!("parameter change on non-key frame");
} }
trace!("{}: close on parameter change", self.short_name); trace!("close on parameter change");
video_sample_entry_id = { video_sample_entry_id = {
let _t = TimerGuard::new(&clocks, || "inserting video sample entry"); let _t = TimerGuard::new(&clocks, || "inserting video sample entry");
self.db self.db
@ -288,11 +288,11 @@ mod tests {
use base::clock::{self, Clocks}; use base::clock::{self, Clocks};
use db::{recording, testutil, CompositeId}; use db::{recording, testutil, CompositeId};
use failure::{bail, Error}; use failure::{bail, Error};
use log::trace;
use std::cmp; use std::cmp;
use std::convert::TryFrom; use std::convert::TryFrom;
use std::sync::Arc; use std::sync::Arc;
use std::sync::Mutex; use std::sync::Mutex;
use tracing::trace;
struct ProxyingStream { struct ProxyingStream {
clocks: clock::SimulatedClocks, clocks: clock::SimulatedClocks,
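The log statements above stop prepending `self.short_name` and instead attach the URL and error as structured fields, presumably relying on a surrounding span that already carries the stream name (that span is set up elsewhere in this commit). A toy illustration of the field syntax — standalone, not the actual streamer code:

```rust
use tracing::{info, info_span, warn};

fn log_stream_events(stream: &str, url: &str) {
    // The stream name lives on a span field, so individual events need not repeat it.
    let span = info_span!("streamer", stream);
    let _enter = span.enter();
    info!(url = %url, "opening input"); // `%` records the value via its Display impl
    let err = std::io::Error::last_os_error();
    warn!(err = %err, "sleeping for 1 s after error");
}
```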

server/src/tracing_setup.rs (new file, 153 lines)

@ -0,0 +1,153 @@
// This file is part of Moonfire NVR, a security camera network video recorder.
// Copyright (C) 2023 The Moonfire NVR Authors; see AUTHORS and LICENSE.txt.
// SPDX-License-Identifier: GPL-v3.0-or-later WITH GPL-3.0-linking-exception.
//! Logic for setting up a `tracing` subscriber according to our preferences
//! and [OpenTelemetry conventions](https://opentelemetry.io/docs/reference/specification/logs/).
use tracing::error;
use tracing_core::{Event, Level, Subscriber};
use tracing_log::NormalizeEvent;
use tracing_subscriber::{
fmt::{format::Writer, time::FormatTime, FmtContext, FormatFields, FormattedFields},
layer::SubscriberExt,
registry::LookupSpan,
Layer,
};
struct FormatSystemd;
struct ChronoTimer;
impl FormatTime for ChronoTimer {
fn format_time(&self, w: &mut Writer<'_>) -> std::fmt::Result {
const TIME_FORMAT: &str = "%Y-%m-%dT%H:%M:%S%.6f";
write!(w, "{}", chrono::Local::now().format(TIME_FORMAT))
}
}
fn systemd_prefix(level: Level) -> &'static str {
if level >= Level::TRACE {
"<7>" // SD_DEBUG
} else if level >= Level::DEBUG {
"<6>" // SD_INFO
} else if level >= Level::INFO {
"<5>" // SD_NOTICE
} else if level >= Level::WARN {
"<4>" // SD_WARN
} else {
"<3>" // SD_ERROR
}
}
impl<S, N> tracing_subscriber::fmt::FormatEvent<S, N> for FormatSystemd
where
S: Subscriber + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
{
fn format_event(
&self,
ctx: &FmtContext<'_, S, N>,
mut writer: Writer<'_>,
event: &Event<'_>,
) -> std::fmt::Result {
let normalized_meta = event.normalized_metadata();
let meta = normalized_meta.as_ref().unwrap_or_else(|| event.metadata());
let prefix = systemd_prefix(*meta.level());
let thread = std::thread::current();
let thread_name = thread.name().unwrap_or("unnamed-thread");
write!(writer, "{prefix}{thread_name} ")?;
if let Some(scope) = ctx.event_scope() {
let mut seen = false;
for span in scope.from_root() {
write!(writer, "{}", span.metadata().name())?;
seen = true;
let ext = span.extensions();
if let Some(fields) = &ext.get::<FormattedFields<N>>() {
if !fields.is_empty() {
write!(writer, "{{{fields}}}")?;
}
}
writer.write_char(':')?;
}
if seen {
writer.write_char(' ')?;
}
}
write!(writer, "{}: ", meta.target())?;
ctx.format_fields(writer.by_ref(), event)?;
writeln!(writer)
}
}
/// Custom panic hook that logs instead of directly writing to stderr.
///
/// This means it includes a timestamp, follows [OpenTelemetry Semantic
/// Conventions for Exceptions](https://opentelemetry.io/docs/reference/specification/logs/semantic_conventions/exceptions/),
/// etc.
fn panic_hook(p: &std::panic::PanicInfo) {
let payload: Option<&str> = if let Some(s) = p.payload().downcast_ref::<&str>() {
Some(*s)
} else if let Some(s) = p.payload().downcast_ref::<String>() {
Some(s)
} else {
None
};
error!(
target: std::env!("CARGO_CRATE_NAME"),
location = p.location().map(tracing::field::display),
payload = payload.map(tracing::field::display),
backtrace = %std::backtrace::Backtrace::force_capture(),
"panic",
);
}
pub fn install() {
let filter = tracing_subscriber::EnvFilter::builder()
.with_default_directive(tracing_subscriber::filter::LevelFilter::INFO.into())
.with_env_var("MOONFIRE_LOG")
.from_env_lossy();
tracing_log::LogTracer::init().unwrap();
match std::env::var("MOONFIRE_FORMAT") {
Ok(s) if s == "systemd" => {
let sub = tracing_subscriber::registry().with(
tracing_subscriber::fmt::Layer::new()
.with_ansi(false)
.event_format(FormatSystemd)
.with_filter(filter),
);
tracing::subscriber::set_global_default(sub).unwrap();
}
Ok(s) if s == "json" => {
let sub = tracing_subscriber::registry().with(
tracing_subscriber::fmt::Layer::new()
.with_thread_names(true)
.json()
.with_filter(filter),
);
tracing::subscriber::set_global_default(sub).unwrap();
}
_ => {
let sub = tracing_subscriber::registry().with(
tracing_subscriber::fmt::Layer::new()
.with_timer(ChronoTimer)
.with_thread_names(true)
.with_filter(filter),
);
tracing::subscriber::set_global_default(sub).unwrap();
}
}
let use_panic_hook = ::std::env::var("MOONFIRE_PANIC_HOOK")
.map(|s| s != "false" && s != "0")
.unwrap_or(true);
if use_panic_hook {
std::panic::set_hook(Box::new(&panic_hook));
}
}
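`install()` selects a formatter based on `MOONFIRE_FORMAT`, forwards `log`-crate records through `LogTracer`, and optionally replaces the panic hook. A hypothetical call site, since the corresponding `main.rs` hunk is not shown in this excerpt:

```rust
// main.rs (sketch only; the real call site is elsewhere in this commit).
mod tracing_setup;

fn main() {
    // Must run before anything else logs or panics: it sets the global subscriber,
    // forwards `log`-crate records via `LogTracer`, and installs the panic hook.
    tracing_setup::install();
    tracing::info!("tracing initialized");
}
```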


@ -28,9 +28,10 @@ use http::header::{self, HeaderValue};
use http::{status::StatusCode, Request, Response}; use http::{status::StatusCode, Request, Response};
use http_serve::dir::FsDir; use http_serve::dir::FsDir;
use hyper::body::Bytes; use hyper::body::Bytes;
use log::{debug, warn};
use std::net::IpAddr; use std::net::IpAddr;
use std::sync::Arc; use std::sync::Arc;
use tracing::warn;
use tracing::Instrument;
use url::form_urlencoded; use url::form_urlencoded;
use uuid::Uuid; use uuid::Uuid;
@ -245,6 +246,7 @@ impl Service {
async fn serve_inner( async fn serve_inner(
self: Arc<Self>, self: Arc<Self>,
req: Request<::hyper::Body>, req: Request<::hyper::Body>,
authreq: auth::Request,
conn_data: ConnData, conn_data: ConnData,
) -> ResponseResult { ) -> ResponseResult {
let p = Path::decode(req.uri().path()); let p = Path::decode(req.uri().path());
@ -252,7 +254,15 @@ impl Service {
p, p,
Path::NotFound | Path::Request | Path::Login | Path::Logout | Path::Static Path::NotFound | Path::Request | Path::Login | Path::Logout | Path::Static
); );
let caller = self.authenticate(&req, &conn_data, always_allow_unauthenticated); let caller = self.authenticate(&req, &authreq, &conn_data, always_allow_unauthenticated);
if let Some(username) = caller
.as_ref()
.ok()
.and_then(|c| c.user.as_ref())
.map(|u| &u.name)
{
tracing::Span::current().record("auth.user", tracing::field::display(username));
}
// WebSocket stuff is handled separately, because most authentication // WebSocket stuff is handled separately, because most authentication
// errors are returned as text messages over the protocol, rather than // errors are returned as text messages over the protocol, rather than
@ -264,14 +274,16 @@ impl Service {
} }
let caller = caller?; let caller = caller?;
debug!("request on: {}: {:?}", req.uri(), p);
let (cache, mut response) = match p { let (cache, mut response) = match p {
Path::InitSegment(sha1, debug) => ( Path::InitSegment(sha1, debug) => (
CacheControl::PrivateStatic, CacheControl::PrivateStatic,
self.init_segment(sha1, debug, &req)?, self.init_segment(sha1, debug, &req)?,
), ),
Path::TopLevel => (CacheControl::PrivateDynamic, self.top_level(&req, caller)?), Path::TopLevel => (CacheControl::PrivateDynamic, self.top_level(&req, caller)?),
Path::Request => (CacheControl::PrivateDynamic, self.request(&req, caller)?), Path::Request => (
CacheControl::PrivateDynamic,
self.request(&req, &authreq, caller)?,
),
Path::Camera(uuid) => (CacheControl::PrivateDynamic, self.camera(&req, uuid)?), Path::Camera(uuid) => (CacheControl::PrivateDynamic, self.camera(&req, uuid)?),
Path::StreamRecordings(uuid, type_) => ( Path::StreamRecordings(uuid, type_) => (
CacheControl::PrivateDynamic, CacheControl::PrivateDynamic,
@ -289,8 +301,14 @@ impl Service {
unreachable!("StreamLiveMp4Segments should have already been handled") unreachable!("StreamLiveMp4Segments should have already been handled")
} }
Path::NotFound => return Err(not_found("path not understood")), Path::NotFound => return Err(not_found("path not understood")),
Path::Login => (CacheControl::PrivateDynamic, self.login(req).await?), Path::Login => (
Path::Logout => (CacheControl::PrivateDynamic, self.logout(req).await?), CacheControl::PrivateDynamic,
self.login(req, authreq).await?,
),
Path::Logout => (
CacheControl::PrivateDynamic,
self.logout(req, authreq).await?,
),
Path::Signals => ( Path::Signals => (
CacheControl::PrivateDynamic, CacheControl::PrivateDynamic,
self.signals(req, caller).await?, self.signals(req, caller).await?,
@ -321,6 +339,7 @@ impl Service {
} }
/// Serves an HTTP request. /// Serves an HTTP request.
///
/// An error return from this method causes hyper to abruptly drop the /// An error return from this method causes hyper to abruptly drop the
/// HTTP connection rather than respond. That's not terribly useful, so this /// HTTP connection rather than respond. That's not terribly useful, so this
/// method always returns `Ok`. It delegates to a `serve_inner` which is /// method always returns `Ok`. It delegates to a `serve_inner` which is
@ -331,10 +350,63 @@ impl Service {
req: Request<::hyper::Body>, req: Request<::hyper::Body>,
conn_data: ConnData, conn_data: ConnData,
) -> Result<Response<Body>, std::convert::Infallible> { ) -> Result<Response<Body>, std::convert::Infallible> {
Ok(self let id = ulid::Ulid::new();
.serve_inner(req, conn_data) let authreq = auth::Request {
when_sec: Some(self.db.clocks().realtime().sec),
addr: if self.trust_forward_hdrs {
req.headers()
.get("X-Real-IP")
.and_then(|v| v.to_str().ok())
.and_then(|v| IpAddr::from_str(v).ok())
} else {
conn_data.client_addr.map(|a| a.ip())
},
user_agent: req
.headers()
.get(header::USER_AGENT)
.map(|ua| ua.as_bytes().to_vec()),
};
let start = std::time::Instant::now();
// https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/http/
let span = tracing::info_span!(
"request",
%id,
net.sock.peer.uid = conn_data.client_unix_uid.map(tracing::field::display),
http.client_ip = authreq.addr.map(tracing::field::display),
http.method = %req.method(),
http.target = %req.uri(),
http.status_code = tracing::field::Empty,
auth.user = tracing::field::Empty,
);
tracing::debug!(parent: &span, "received request headers");
let response = self
.serve_inner(req, authreq, conn_data)
.instrument(span.clone())
.await .await
.unwrap_or_else(|e| e.0)) .unwrap_or_else(|e| e.0);
span.record("http.status_code", response.status().as_u16());
let latency = std::time::Instant::now().duration_since(start);
if response.status().is_server_error() {
tracing::error!(
parent: &span,
latency = format_args!("{:.6}s", latency.as_secs_f32()),
"sending response headers",
);
} else if response.status().is_client_error() {
tracing::warn!(
parent: &span,
latency = format_args!("{:.6}s", latency.as_secs_f32()),
"sending response headers",
);
} else {
tracing::info!(
parent: &span,
latency = format_args!("{:.6}s", latency.as_secs_f32()),
"sending response headers",
);
}
Ok(response)
} }
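`http.status_code` and `auth.user` are declared as `tracing::field::Empty` when the request span is created and filled in later with `Span::record`; a span field can only be recorded if it was named up front. A minimal sketch of that declare-then-record pattern, not the real handler:

```rust
use tracing::field::Empty;

fn handle_one_request() {
    // Fields must be declared when the span is created (even if Empty) so they
    // can be recorded later, once their values are known.
    let span = tracing::info_span!("request", http.status_code = Empty, auth.user = Empty);
    let _enter = span.enter();
    // ...authenticate, dispatch, build the response...
    span.record("auth.user", "admin"); // hypothetical username
    span.record("http.status_code", 200_u16);
}
```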
fn top_level(&self, req: &Request<::hyper::Body>, caller: Caller) -> ResponseResult { fn top_level(&self, req: &Request<::hyper::Body>, caller: Caller) -> ResponseResult {
@ -478,26 +550,12 @@ impl Service {
} }
} }
fn authreq(&self, req: &Request<::hyper::Body>) -> auth::Request { fn request(
auth::Request { &self,
when_sec: Some(self.db.clocks().realtime().sec), req: &Request<::hyper::Body>,
addr: if self.trust_forward_hdrs { authreq: &auth::Request,
req.headers() caller: Caller,
.get("X-Real-IP") ) -> ResponseResult {
.and_then(|v| v.to_str().ok())
.and_then(|v| IpAddr::from_str(v).ok())
} else {
None
},
user_agent: req
.headers()
.get(header::USER_AGENT)
.map(|ua| ua.as_bytes().to_vec()),
}
}
fn request(&self, req: &Request<::hyper::Body>, caller: Caller) -> ResponseResult {
let authreq = self.authreq(req);
let host = req let host = req
.headers() .headers()
.get(header::HOST) .get(header::HOST)
@ -561,13 +619,16 @@ impl Service {
fn authenticate( fn authenticate(
&self, &self,
req: &Request<hyper::Body>, req: &Request<hyper::Body>,
authreq: &auth::Request,
conn_data: &ConnData, conn_data: &ConnData,
unauth_path: bool, unauth_path: bool,
) -> Result<Caller, base::Error> { ) -> Result<Caller, base::Error> {
if let Some(sid) = extract_sid(req) { if let Some(sid) = extract_sid(req) {
let authreq = self.authreq(req); match self
.db
match self.db.lock().authenticate_session(authreq, &sid.hash()) { .lock()
.authenticate_session(authreq.clone(), &sid.hash())
{
Ok((s, u)) => { Ok((s, u)) => {
return Ok(Caller { return Ok(Caller {
permissions: s.permissions.clone(), permissions: s.permissions.clone(),


@ -6,8 +6,8 @@
use db::auth; use db::auth;
use http::{header, HeaderValue, Method, Request, Response, StatusCode}; use http::{header, HeaderValue, Method, Request, Response, StatusCode};
use log::{info, warn};
use memchr::memchr; use memchr::memchr;
use tracing::{info, warn};
use crate::json; use crate::json;
@ -18,14 +18,17 @@ use super::{
use std::convert::TryFrom; use std::convert::TryFrom;
impl Service { impl Service {
pub(super) async fn login(&self, mut req: Request<::hyper::Body>) -> ResponseResult { pub(super) async fn login(
&self,
mut req: Request<::hyper::Body>,
authreq: auth::Request,
) -> ResponseResult {
if *req.method() != Method::POST { if *req.method() != Method::POST {
return Err(plain_response(StatusCode::METHOD_NOT_ALLOWED, "POST expected").into()); return Err(plain_response(StatusCode::METHOD_NOT_ALLOWED, "POST expected").into());
} }
let r = extract_json_body(&mut req).await?; let r = extract_json_body(&mut req).await?;
let r: json::LoginRequest = let r: json::LoginRequest =
serde_json::from_slice(&r).map_err(|e| bad_req(e.to_string()))?; serde_json::from_slice(&r).map_err(|e| bad_req(e.to_string()))?;
let authreq = self.authreq(&req);
let host = req let host = req
.headers() .headers()
.get(header::HOST) .get(header::HOST)
@ -69,7 +72,11 @@ impl Service {
.unwrap()) .unwrap())
} }
pub(super) async fn logout(&self, mut req: Request<hyper::Body>) -> ResponseResult { pub(super) async fn logout(
&self,
mut req: Request<hyper::Body>,
authreq: auth::Request,
) -> ResponseResult {
if *req.method() != Method::POST { if *req.method() != Method::POST {
return Err(plain_response(StatusCode::METHOD_NOT_ALLOWED, "POST expected").into()); return Err(plain_response(StatusCode::METHOD_NOT_ALLOWED, "POST expected").into());
} }
@ -79,7 +86,6 @@ impl Service {
let mut res = Response::new(b""[..].into()); let mut res = Response::new(b""[..].into());
if let Some(sid) = extract_sid(&req) { if let Some(sid) = extract_sid(&req) {
let authreq = self.authreq(&req);
let mut l = self.db.lock(); let mut l = self.db.lock();
let hash = sid.hash(); let hash = sid.hash();
let need_revoke = match l.authenticate_session(authreq.clone(), &hash) { let need_revoke = match l.authenticate_session(authreq.clone(), &hash) {
@ -142,7 +148,7 @@ fn encode_sid(sid: db::RawSessionId, flags: i32) -> String {
mod tests { mod tests {
use db::testutil; use db::testutil;
use fnv::FnvHashMap; use fnv::FnvHashMap;
use log::info; use tracing::info;
use crate::web::tests::Server; use crate::web::tests::Server;


@ -7,7 +7,6 @@
use base::bail_t; use base::bail_t;
use db::recording::{self, rescale}; use db::recording::{self, rescale};
use http::{Request, StatusCode}; use http::{Request, StatusCode};
use log::trace;
use nom::bytes::complete::{tag, take_while1}; use nom::bytes::complete::{tag, take_while1};
use nom::combinator::{all_consuming, map, map_res, opt}; use nom::combinator::{all_consuming, map, map_res, opt};
use nom::sequence::{preceded, tuple}; use nom::sequence::{preceded, tuple};
@ -17,6 +16,7 @@ use std::cmp;
use std::convert::TryFrom; use std::convert::TryFrom;
use std::ops::Range; use std::ops::Range;
use std::str::FromStr; use std::str::FromStr;
use tracing::trace;
use url::form_urlencoded; use url::form_urlencoded;
use uuid::Uuid; use uuid::Uuid;


@ -12,6 +12,7 @@ use base::bail_t;
use futures::{Future, SinkExt}; use futures::{Future, SinkExt};
use http::{header, Request, Response}; use http::{header, Request, Response};
use tokio_tungstenite::{tungstenite, WebSocketStream}; use tokio_tungstenite::{tungstenite, WebSocketStream};
use tracing::Instrument;
use super::{bad_req, ResponseResult}; use super::{bad_req, ResponseResult};
@ -37,26 +38,33 @@ where
tungstenite::handshake::server::create_response_with_body(&req, hyper::Body::empty) tungstenite::handshake::server::create_response_with_body(&req, hyper::Body::empty)
.map_err(|e| bad_req(e.to_string()))?; .map_err(|e| bad_req(e.to_string()))?;
let (parts, _) = response.into_parts(); let (parts, _) = response.into_parts();
tokio::spawn(async move { let span = tracing::info_span!("websocket");
let upgraded = match hyper::upgrade::on(req).await { tokio::spawn(
Ok(u) => u, async move {
Err(e) => { let upgraded = match hyper::upgrade::on(req).await {
log::error!("WebSocket upgrade failed: {e}"); Ok(u) => u,
return; Err(err) => {
} tracing::error!(%err, "upgrade failed");
}; return;
let mut ws = tokio_tungstenite::WebSocketStream::from_raw_socket( }
upgraded, };
tungstenite::protocol::Role::Server, let mut ws = tokio_tungstenite::WebSocketStream::from_raw_socket(
None, upgraded,
) tungstenite::protocol::Role::Server,
.await; None,
if let Err(e) = handler(&mut ws).await { )
// TODO: use a nice JSON message format for errors. .await;
log::error!("WebSocket stream terminating with error {e}"); if let Err(err) = handler(&mut ws).await {
let _ = ws.send(tungstenite::Message::Text(e.to_string())).await; // TODO: use a nice JSON message format for errors.
tracing::error!(%err, "closing with error");
let _ = ws.send(tungstenite::Message::Text(err.to_string())).await;
} else {
tracing::info!("closing");
};
let _ = ws.close(None).await;
} }
}); .instrument(span),
);
Ok(Response::from_parts(parts, Body::from(""))) Ok(Response::from_parts(parts, Body::from("")))
} }
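Here a fresh `websocket` span is created before spawning and attached with `.instrument(span)` rather than `in_current_span()`. A stripped-down version of that pattern, assuming it runs inside a tokio runtime:

```rust
use tracing::Instrument;

fn spawn_session() {
    // A span created up front and attached explicitly; the sibling of the
    // `in_current_span()` calls used elsewhere in this change.
    let span = tracing::info_span!("websocket");
    tokio::spawn(
        async move {
            tracing::info!("closing"); // recorded inside the `websocket` span
        }
        .instrument(span),
    );
}
```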