Being able to turn on debug logging can be helpful when troubleshooting
issues with various providers. The most critical logging provided is the
requests and responses sent to and received from the OAuth servers.
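For illustration, a minimal sketch of enabling that output, assuming the
crate emits its logs through the standard log facade (env_logger and the
RUST_LOG variable are just one common way to surface them):

    // Sketch only: assumes the library logs via the `log` facade.
    // Run with e.g. RUST_LOG=debug to see the OAuth request/response logs.
    fn main() {
        env_logger::init();
        // ... build the Authenticator and run token flows as usual ...
    }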
Verify that a second refresh can happen after the first. This adds
coverage ensuring that the refresh flow keeps the refresh token intact:
if a refresh discarded it, the second refresh would fail.
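A sketch of the kind of test this adds; the helper names and the token
method are hypothetical placeholders, not the crate's actual API:

    // Hypothetical test sketch; make_test_authenticator and force_expiry
    // are placeholder helpers, and the scope value is arbitrary.
    #[tokio::test]
    async fn second_refresh_succeeds() {
        let auth = make_test_authenticator().await;
        let _initial = auth.token(&["scope"]).await.expect("initial token");

        force_expiry(&auth); // pretend the cached token expired
        let _first = auth.token(&["scope"]).await.expect("first refresh");

        force_expiry(&auth);
        // If the first refresh had lost the refresh token, this would fail.
        let _second = auth.token(&["scope"]).await.expect("second refresh");
    }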
What was previously called Token is now TokenInfo and is merely an
internal implementation detail. The publicly visible type is now called
AccessToken and differs from TokenInfo by not including the refresh
token. This makes it a smaller type for users to pass around and reduces
the ways a refresh token may be leaked. Since the Authenticator is
responsible for refreshing tokens, there isn't any reason users should
need to concern themselves with refresh tokens.
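Roughly, the split looks like this (a sketch; the field names and the
expiry representation are assumptions, not the actual definitions):

    use std::time::Instant;

    // Internal only: everything needed to perform a refresh.
    struct TokenInfo {
        access_token: String,
        refresh_token: Option<String>,
        expires_at: Option<Instant>,
    }

    // Public: what callers pass around. Omitting the refresh token keeps
    // the type small and limits how a refresh token could leak.
    pub struct AccessToken {
        value: String,
        expires_at: Option<Instant>,
    }

    impl From<TokenInfo> for AccessToken {
        fn from(info: TokenInfo) -> Self {
            AccessToken {
                value: info.access_token,
                expires_at: info.expires_at,
            }
        }
    }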
Deferring disk writes is still probably a good idea, but unfortunately
there are some tradeoffs with Rust's async story that make it non-ideal.
Ideally we would defer writes but have a Drop impl on DiskStorage that
waited for all the deferred writes to complete. While it's trivial to
create a future that waits for all deferred writes to finish, it's not
currently possible to write a Drop impl that waits on a future.
It would be possible to write an inherent async fn that takes self by
value and waits for the writes, but that method would need to be
propagated all the way up to users of the library, and they would need
to remember to invoke it before dropping the Authenticator.
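A sketch of the tension; DiskStorage's real fields differ and the flush
name is hypothetical, this just contrasts the two shapes:

    struct DiskStorage {
        pending_writes: Vec<tokio::task::JoinHandle<()>>,
    }

    impl DiskStorage {
        // Easy to express: an async fn consuming self can await every
        // deferred write before the storage goes away.
        async fn flush(self) {
            for write in self.pending_writes {
                let _ = write.await;
            }
        }
    }

    // The equivalent Drop impl is not currently possible: drop() is a
    // synchronous fn and cannot await, and blocking on a runtime from
    // inside Drop risks deadlocks.
    // impl Drop for DiskStorage {
    //     fn drop(&mut self) { /* no way to .await the pending writes */ }
    // }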
Keeping the same tokens in both a Vec and a BTreeMap created more
overhead than was warranted. It makes much more sense to simply iterate
over the BTreeMap than to keep a separate Vec.
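Illustratively (the types here are placeholders), a lookup can just walk
the map's values rather than a mirrored Vec:

    use std::collections::BTreeMap;

    // Placeholder types; the point is only that BTreeMap iteration
    // replaces the parallel Vec that held the same tokens.
    fn find_token<'a>(
        tokens: &'a BTreeMap<u64, String>,
        matches: impl Fn(&str) -> bool,
    ) -> Option<&'a str> {
        tokens.values().map(String::as_str).find(|t| matches(t))
    }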
This keeps DiskStorage Sync + Send, and therefore Authenticator Sync +
Send. DiskStorage was thread-safe because JSONTokens wraps all of the
Rc<RefCell<T>> objects in a Mutex, but there's no way to prove to the
type system that none of the Rcs get cloned to an alias used outside the
Mutex, so it's not provably safe. I'll probably reevaluate the design
here, but in the meantime the double locking is fine.
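The constraint is visible directly in the type system; an illustrative
sketch:

    use std::cell::RefCell;
    use std::rc::Rc;
    use std::sync::{Arc, Mutex};

    fn require_send<T: Send>(_t: T) {}

    fn main() {
        let shared: Rc<RefCell<i32>> = Rc::new(RefCell::new(0));
        // Rc is !Send, so even wrapping it in a Mutex doesn't help: the
        // compiler can't rule out a cloned Rc escaping the lock.
        // require_send(Mutex::new(Rc::clone(&shared))); // compile error
        drop(shared);

        // Arc<Mutex<T>> is provably Send + Sync for T: Send, which is
        // why the extra layer of locking keeps things thread-safe.
        require_send(Arc::new(Mutex::new(0)));
    }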
When bloom filters were added, the BTreeMap values changed to be a
vector of tokens to accommodate the possibility of bloom filter
collisions. The implementation naively pushed new tokens onto the vec
even when they were replacing previous tokens, meaning old tokens were
kept around even after a refresh had replaced them. To fix this
efficiently, the storage layer now tracks both a hash value and a bloom
filter along with each token. There is a map keyed by hash for every
token that points to a reference-counted version of the token, and each
token also exists in a separate vector. Updates to existing tokens
happen in place; new entries are added to both data structures.
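A hypothetical sketch of the two-structure layout (the names, the String
token type, and the u64 stand-in for the bloom filter are all
placeholders):

    use std::cell::RefCell;
    use std::collections::{hash_map::Entry, HashMap};
    use std::rc::Rc;

    type Filter = u64; // stand-in for the per-token bloom filter bits

    struct TokenStore {
        by_hash: HashMap<u64, Rc<RefCell<String>>>,  // hash -> shared token
        ordered: Vec<(Filter, Rc<RefCell<String>>)>, // filter + same token
    }

    impl TokenStore {
        fn upsert(&mut self, hash: u64, filter: Filter, token: String) {
            match self.by_hash.entry(hash) {
                // An existing token is updated in place; the Vec sees the
                // change through the shared Rc, so no stale copy lingers.
                Entry::Occupied(entry) => *entry.get().borrow_mut() = token,
                // A new token goes into both structures.
                Entry::Vacant(entry) => {
                    let slot = Rc::new(RefCell::new(token));
                    self.ordered.push((filter, Rc::clone(&slot)));
                    entry.insert(slot);
                }
            }
        }
    }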