The `from_environment` function in
`ApplicationDefaultCredentialsAuthenticator` had an `unwrap` call on an
io::Result after reading the service account key from file. File
operations are inherently fallible, and panicking on such a failure is
poor practice compared to propagating the IO error. Propagating that
error from `from_environment` is not practical, however, because the
returned Result type does not include IO errors, and changing the
function signature would be a semver-incompatible change.
This change instead defers reading the key file to a later function
call. `from_environment` now only reads the value of the
`GOOGLE_APPLICATION_CREDENTIALS` environment variable into a PathBuf,
and a later call to `ServiceAccountFlow::new` actually reads the file.
That constructor already returns an io::Result, so the read error folds
into it naturally, and because none of the changes touch public items,
the whole change is semver-compatible.
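A minimal sketch of the resulting shape (struct contents, signatures, and error types here are illustrative, not the crate's exact definitions):

```rust
use std::path::{Path, PathBuf};

struct ApplicationDefaultCredentialsAuthenticator {
    key_path: PathBuf,
}

impl ApplicationDefaultCredentialsAuthenticator {
    // Only resolves the environment variable; no file IO happens here,
    // so there is no io::Result left to unwrap and nothing to panic on.
    fn from_environment() -> Result<Self, std::env::VarError> {
        let key_path = std::env::var("GOOGLE_APPLICATION_CREDENTIALS")?.into();
        Ok(Self { key_path })
    }
}

struct ServiceAccountFlow {
    key_json: String,
}

impl ServiceAccountFlow {
    // This constructor already returned io::Result, so the deferred
    // file read folds its error in instead of panicking.
    fn new(key_path: &Path) -> std::io::Result<Self> {
        let key_json = std::fs::read_to_string(key_path)?;
        Ok(ServiceAccountFlow { key_json })
    }
}
```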
This adjusts the code and documentation for `--all-features` and
`--no-default-features` to work correctly. With `--no-default-features`
no `DefaultAuthenticator` is made available. Users are in control of
picking the `Connector` they want to use, and are not forced to stomach
a dependency on `rustls` or `hyper-tls` if their TLS implementation of
choice doesn't happen to match one of the two.
To indicate this, the unstable `doc_cfg` feature is used when building
documentation on docs.rs. That way the generated documentation carries
notices on these types like the following:
> This is supported on crate features hyper-rustls or hyper-tls only.
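The usual shape of that setup looks roughly like this (the `docsrs` cfg name and the exact feature gating follow the common convention rather than being copied verbatim from the crate):

```rust
// lib.rs: enable the unstable feature only when the `docsrs` cfg is set;
// docs.rs passes `--cfg docsrs` through the rustdoc-args crate metadata.
#![cfg_attr(docsrs, feature(doc_cfg))]

// A feature-gated item carries a doc(cfg(..)) annotation so rustdoc
// renders the "supported on crate features ... only" notice for it.
#[cfg(any(feature = "hyper-rustls", feature = "hyper-tls"))]
#[cfg_attr(
    docsrs,
    doc(cfg(any(feature = "hyper-rustls", feature = "hyper-tls")))
)]
pub struct DefaultAuthenticator;
```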
This functionality is also tested via additional coverage in the GitHub
Actions CI.
The API is simplified by only allowing a custom storage. To use one of the built-in storage mechanisms, there is already a special-purpose `persist_tokens_to_disk` method available.
Instead, the suggestion is to use interior mutability (RwLock in the example) to manage token state, which makes it easier to share authenticators between threads.
Allow users to build their own token storage system by implementing the `TokenStorage` trait. This allows use of more secure storage mechanisms like OS keychains, encrypted files, or secret-management tools.
Custom storage providers are boxed to avoid adding more generics to the API; the indirection cost only applies when a custom store is used.
I've added `anyhow` to allow easy handling of a wide range of errors from custom storage providers.
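A rough sketch of a custom provider (the trait signature is an approximation of the `TokenStorage` trait, and the in-memory RwLock store is a stand-in for a keychain or encrypted backend):

```rust
use async_trait::async_trait;
use std::collections::HashMap;
use tokio::sync::RwLock;

// Stand-in for the crate's token type.
#[derive(Clone)]
struct TokenInfo {
    access_token: String,
}

// Approximation of the crate's TokenStorage trait: async get/set keyed
// by the requested scopes, with anyhow::Result for provider errors.
#[async_trait]
trait TokenStorage: Send + Sync {
    async fn set(&self, scopes: &[&str], token: TokenInfo) -> anyhow::Result<()>;
    async fn get(&self, scopes: &[&str]) -> Option<TokenInfo>;
}

// Interior mutability (RwLock) lets a shared Authenticator read and
// write tokens from many threads without exclusive access.
struct MemoryStorage {
    tokens: RwLock<HashMap<String, TokenInfo>>,
}

#[async_trait]
impl TokenStorage for MemoryStorage {
    async fn set(&self, scopes: &[&str], token: TokenInfo) -> anyhow::Result<()> {
        self.tokens.write().await.insert(scopes.join(" "), token);
        Ok(())
    }

    async fn get(&self, scopes: &[&str]) -> Option<TokenInfo> {
        self.tokens.read().await.get(&scopes.join(" ")).cloned()
    }
}
```

An implementation like this would then be handed to the authenticator as a `Box<dyn TokenStorage>`, which is what keeps the extra generics out of the public API.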
It could be helpful when troubleshooting issues with various providers
if the user is able to turn on debug logging. The most critical logging
provided is of the requests and responses sent to and received from the
OAuth servers.
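Assuming the crate logs through the standard `log` facade, any compatible logger can surface those traces; for example with `env_logger`:

```rust
fn main() {
    // Honors RUST_LOG, e.g. `RUST_LOG=debug cargo run`, which prints
    // the requests and responses exchanged with the OAuth servers
    // during token acquisition and refresh.
    env_logger::init();

    // ... build the authenticator and request tokens as usual ...
}
```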
What was previously called Token is now TokenInfo and is merely an
internal implementation detail. The publicly visible type is now called
AccessToken and differs from TokenInfo by not including the refresh
token. This makes it a smaller type for users to pass around and
reduces the ways a refresh token can be leaked. Since the Authenticator
is responsible for refreshing tokens, there is no reason users should
need to concern themselves with refresh tokens.
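Schematically, the split looks like this (field names and types are illustrative rather than the crate's exact definitions):

```rust
use std::time::SystemTime;

// Internal only: everything the token endpoint returns, including the
// refresh token the Authenticator needs to mint new access tokens.
struct TokenInfo {
    access_token: String,
    refresh_token: Option<String>,
    expires_at: Option<SystemTime>,
}

// Public: what callers receive and pass around. Deliberately omits the
// refresh token so it cannot leak through user code.
pub struct AccessToken {
    value: String,
    expires_at: Option<SystemTime>,
}

impl From<TokenInfo> for AccessToken {
    fn from(info: TokenInfo) -> Self {
        AccessToken {
            value: info.access_token,
            expires_at: info.expires_at,
        }
    }
}
```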
Deferring disk writes is still probably a good idea, but unfortunately
there are some tradeoffs with Rust's async story that make it non-ideal.
Ideally we would defer writes but have a Drop impl on DiskStorage that
waited for all the deferred writes to complete. While it's trivial to
create a future that waits for all deferred writes to finish, it's not
currently possible to write a Drop impl that waits on a future.
It would be possible to write an inherent async fn that takes self by
value and waits for the writes, but that method would need to be
propagated all the way up to users of the library, and they would need
to remember to invoke it before dropping the Authenticator.
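For reference, that workable-but-rejected shape would look something like this (a hypothetical method, not part of the crate):

```rust
struct DiskStorage {
    // Writes that have been spawned but may not have completed yet.
    pending_writes: Vec<tokio::task::JoinHandle<std::io::Result<()>>>,
}

impl DiskStorage {
    // An inherent async fn taking self by value: awaiting it guarantees
    // every deferred write has finished. The drawback is that each layer
    // above DiskStorage would need to expose a similar method, and users
    // would have to remember to call it, because a Drop impl cannot
    // await this future.
    async fn close(self) -> std::io::Result<()> {
        for handle in self.pending_writes {
            handle.await.expect("write task panicked")?;
        }
        Ok(())
    }
}
```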
When bloom filters were added, the btreemap values changed to be a
vector of tokens to accommodate the possibility of bloom filter
collisions. The implementation naively pushed new tokens onto the vec
even when they were replacing previous tokens, meaning old tokens were
still kept around after a refresh had replaced them. To fix this
efficiently, the storage layer now tracks both a hash value and a bloom
filter along with each token. There is a map keyed by hash that points
to a reference-counted version of each token, and each token also
exists in a separate vector. Updates to existing tokens happen in
place; new entries are added to both data structures.
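A sketch of the resulting layout (type names and the Rc/RefCell combination are illustrative stand-ins for the actual storage types):

```rust
use std::cell::RefCell;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::rc::Rc;

type ScopeHash = u64; // full hash of a scope set
type ScopeFilter = u64; // bloom filter bits for a scope set

struct Token {
    access_token: String,
}

struct Storage {
    // Exact lookup: one entry per scope set, keyed by its full hash.
    by_hash: HashMap<ScopeHash, Rc<RefCell<Token>>>,
    // Scan list: bloom filters give a cheap "can't match" rejection for
    // lookups; collisions are possible, hence the full vector.
    by_filter: Vec<(ScopeFilter, Rc<RefCell<Token>>)>,
}

impl Storage {
    fn set(&mut self, hash: ScopeHash, filter: ScopeFilter, token: Token) {
        match self.by_hash.entry(hash) {
            // Scope set seen before: replace the token in place. Both
            // structures observe the update through the shared Rc, so
            // no stale token lingers after a refresh.
            Entry::Occupied(slot) => *slot.get().borrow_mut() = token,
            // New scope set: add the entry to both structures.
            Entry::Vacant(slot) => {
                let token = Rc::new(RefCell::new(token));
                slot.insert(Rc::clone(&token));
                self.by_filter.push((filter, token));
            }
        }
    }
}
```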