To reduce clutter and focus the user's attention, some ancillary APIs
have been moved to new submodules:
- server::limits contains what were previously called Throttler and
  ChannelFilter. Both of those names were very generic, whereas the
  behavior these types implement is quite specific (and also
  simplistic). The following renames have occurred:
  - ThrottlerStream => MaxRequestsPerChannel
  - Throttler => MaxRequests
  - ChannelFilter => MaxChannelsPerKey
- server::incoming contains the Incoming trait.
- server::tokio contains the tokio-specific helper types.
The 5 structs and 1 enum remaining in the base server module are all
core to the functioning of the server.
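For example, imports move along these lines (a sketch; the exact module
nesting and re-exports may differ):
```rust
// Illustrative paths after the reorganization:
use tarpc::server::incoming::Incoming;
use tarpc::server::limits::{MaxChannelsPerKey, MaxRequestsPerChannel};
```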
It's important for Channel decorators, like Throttler, to have access to
the Span. This means that the BaseChannel becomes responsible for
starting its own requests. In fact, this simplifies integration for
Channel users, as they can assume that any yielded requests are already
tracked.
This entails the following breaking changes:
- removed trait method Channel::start_request as it is now done
internally.
Because tarpc is a library, not an application, it should strive to
use structured errors in its API so that users have maximal flexibility
in how they handle errors. io::Error makes that hard, because it is a
kitchen-sink error type.
RPCs in particular only have 3 classes of errors:
- The connection breaks.
- The request expires.
- The server decides not to process the request.
(Of course, RPCs themselves can have application-specific errors, but
from the perspective of the RPC library, those can be classified as
successful responses.)
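A structured error type covering these classes might look like the
following sketch (the names are illustrative, not tarpc's exact API):
```rust
/// Illustrative sketch of a structured RPC error type.
#[derive(Debug)]
pub enum RpcError {
    /// The connection to the server broke before a response arrived.
    Disconnected,
    /// The request expired before the server responded.
    DeadlineExceeded,
    /// The server decided not to process the request.
    Server { detail: String },
}
```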
tarpc is now instrumented with tracing [1] primitives extended with
OpenTelemetry [2] traces. Using a tracing-opentelemetry subscriber
wired to a compatible exporter such as Jaeger [3], each RPC can be
traced through the client, server, and other dependencies downstream of
the server. Even for applications not connected to a distributed
tracing collector, the instrumentation can still be ingested by regular
loggers like env_logger [4].
# Breaking Changes
## Logging
Logged events are now structured using tracing. For applications using a
logger and not a tracing subscriber, these logs may look different or
contain information in a less consumable manner. The easiest solution is
to add a tracing subscriber that logs to stdout, such as
tracing_subscriber::fmt.
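For example, assuming the tracing-subscriber crate is a dependency,
this installs a subscriber that formats events to stdout:
```rust
fn main() {
    // Install a global subscriber that prints tracing events to stdout.
    tracing_subscriber::fmt::init();

    // ... run the tarpc client/server as before ...
}
```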
## Context
- Context no longer has parent_span, which was actually never needed,
because the context sent in an RPC is inherently the parent context.
For purposes of distributed tracing, the client side of the RPC has all
necessary information to link the span to its parent; the server side
need do nothing more than export the (trace ID, span ID) tuple.
- Context has a new field of type SamplingDecision, which has two
variants, Sampled and Unsampled. This field can be used by downstream
systems to determine whether a trace needs to be exported. If the
parent span is sampled, the expectation is that all child spans are
exported as well; to do otherwise could result in incomplete traces
being exported. Note that if an OpenTelemetry tracing subscriber is not
installed, the fallback context will still be used, but the Context's
sampling decision will always be inherited from the parent Context's
sampling decision.
- Context::scope has been removed. Context propagation is now done via
tracing's task-local spans. Spans can be propagated across tasks via
Span::in_scope. When a service receives a request, it attaches an
OpenTelemetry context to the local Span created before request handling,
and this context contains the request deadline. This span-local deadline
is retrieved by Context::current, but it cannot be modified so that
future Context::current calls contain a different deadline. However, the
deadline in the context passed into an RPC call will override it, so
users can retrieve the current context and then modify the deadline
field, as has been historically possible (see the sketch after this
list).
- Context propagation precedence changes: when an RPC is initiated, the
current Span's OpenTelemetry context takes precedence over the trace
context passed into the RPC method. If there is no current Span, then
the trace context argument is used as it has been historically. Note
that OpenTelemetry context propagation requires an OpenTelemetry
tracing subscriber to be installed.
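As noted above, the deadline can still be overridden per call. A
minimal sketch, assuming a generated client method named `hello` and a
`deadline` field of type SystemTime (the exact field type may vary by
version):
```rust
use std::time::{Duration, SystemTime};

// `client.hello` is a stand-in for any generated RPC method.
let mut ctx = tarpc::context::current();
ctx.deadline = SystemTime::now() + Duration::from_secs(5);
let response = client.hello(ctx, "world".to_string()).await?;
```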
## Server
- The server::Channel trait now has an additional required associated
type and method that return the underlying transport. This makes it
more ergonomic for users to retrieve transport-specific information,
like the peer's IP address. BaseChannel implements Channel::transport
by returning the underlying transport, and channel decorators like
Throttler simply delegate to the Channel::transport method of the
wrapped channel.
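A simplified sketch of the new surface (the real trait has additional
items and bounds):
```rust
/// Simplified sketch of the addition to server::Channel.
pub trait Channel {
    /// The underlying transport over which messages flow.
    type Transport;

    /// Returns a reference to the underlying transport, so callers can
    /// inspect transport-specific details (e.g. a TCP peer address).
    fn transport(&self) -> &Self::Transport;
}
```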
# References
[1] https://github.com/tokio-rs/tracing
[2] https://opentelemetry.io
[3] https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-jaeger
[4] https://github.com/env-logger-rs/env_logger
Previously, `Context::current` would always return a new context. Now,
it uses tokio task-local data to look for the current context. Tokio
task locals are not actually tied to a tokio executor; instead, they
provide data scoped to a future.
The basic pattern is:
```rust
let ctx = Context::new_root();
ctx.scope(async {
    let ctx2 = context::current();
    assert_eq!(ctx2.trace_context.span_id, ctx.trace_context.span_id);
})
.await;
```
`server::InFlightRequest::execute` uses `Context::scope` to set the
current context before executing a request, so calls to
`context::current` in request handlers will return the context provided
by the client. This does not propagate to new spawned tasks. To
propagate the client context to child tasks, the following pattern will
work:
```rust
tokio::spawn(context::current().scope(async { /* do work here */ }));
```
This commit also introduces a breaking change to Context serialization.
Previously, the deadline was serialized with only second-level
precision. Now, it is serialized with full fidelity, down to the
nanosecond.
👋 hello there! I'm a fellow Googler who works on projects that leverage GitHub Actions for CI/CD. Recently I noticed a large increase in our queue time, and I've tracked it down to the [limit of 180 concurrent jobs](https://docs.github.com/en/actions/reference/usage-limits-billing-and-administration) for an organization. To help be better citizens, I'm proposing changes across a few repositories that will reduce GitHub Actions hours and consumption. I hope these changes are reasonable and I'm happy to talk through them in more detail.
- Only run GitHub Actions for pushes and PRs against the main branch of the repository. If your team uses a forking model, this change will not affect you. If your team pushes branches to the repository directly, this changes actions to only run against the primary branches or if you open a Pull Request against a primary branch.
- For long-running jobs (especially tests), I added the "Cancel previous" workflow. This is very helpful to prevent a large queue backlog when you are doing rapid development and pushing multiple commits. Without this, GitHub Actions' default behavior is to run all actions on all commits.
There are other changes you could make, depending on your project (but I'm not an expert):
- If you have tests that should only run when a subset of code changes, consider gating your workflow to particular file paths. For example, we have some jobs that do Terraform linting, but [they only run when Terraform files are changed](c4f59fee71/.github/workflows/terraform.yml (L3-L11)).
Hopefully these changes are not too controversial, and hopefully you
can see how they reduce Actions consumption and make us better citizens
to fellow Googlers. If you have any questions, feel free to respond
here or ping me on chat. Thank you!
Previously, there were two loops:
- Expired in-flight requests are polled until Pending.
- New requests are polled until Pending.
Now there is one loop that alternates between polling expired requests
and new requests. This way, neither type of action can face starvation.
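A minimal sketch of the idea, assuming two fused streams of work items
(illustrative; tarpc's real loop polls its own internal state rather
than generic streams):
```rust
use std::{
    pin::Pin,
    task::{Context, Poll},
};

use futures::stream::{Stream, StreamExt};

/// Alternates which of two streams is polled first, so a busy stream
/// cannot starve the other. Both streams are assumed to be fused
/// (safe to poll after completion); for brevity, the combined stream
/// ends as soon as the second-polled source ends.
struct Alternating<A, B> {
    expired: A,
    incoming: B,
    expired_first: bool,
}

impl<T, A, B> Stream for Alternating<A, B>
where
    A: Stream<Item = T> + Unpin,
    B: Stream<Item = T> + Unpin,
{
    type Item = T;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
        // Flip the priority on every poll for fairness.
        self.expired_first = !self.expired_first;
        let this = &mut *self;
        if this.expired_first {
            if let Poll::Ready(Some(item)) = this.expired.poll_next_unpin(cx) {
                return Poll::Ready(Some(item));
            }
            this.incoming.poll_next_unpin(cx)
        } else {
            if let Poll::Ready(Some(item)) = this.incoming.poll_next_unpin(cx) {
                return Poll::Ready(Some(item));
            }
            this.expired.poll_next_unpin(cx)
        }
    }
}
```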
There is some important logic that is easy to overlook in the client and
server channels: streams of data to write to the transport should not be
polled until the transport is known to be ready to buffer a message. In
the case that a transport's buffer is full, it needs to be flushed to
make room for more messages.
Without this logic, start_send() could return an error when the buffer
is full, which would cause the entire Channel to error out.
Due to the importance of this logic, it's now factored out into its own
method that's easier to understand: fn ensure_writeable. There is one
in the client module and one in the server module.
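A hedged sketch of the shape of this logic (tarpc's real signature
differs; this just illustrates the idea):
```rust
use std::{
    pin::Pin,
    task::{Context, Poll},
};

use futures::{ready, Sink};

/// Illustrative version of ensure_writeable: never call start_send
/// until the transport says it can buffer a message, flushing first if
/// the buffer is full.
fn ensure_writeable<T, Item>(
    mut transport: Pin<&mut T>,
    cx: &mut Context<'_>,
) -> Poll<Result<(), T::Error>>
where
    T: Sink<Item>,
{
    if transport.as_mut().poll_ready(cx)?.is_ready() {
        // The transport can buffer another message right away.
        return Poll::Ready(Ok(()));
    }
    // The buffer is full: flush to make room, then check readiness again.
    ready!(transport.as_mut().poll_flush(cx))?;
    transport.poll_ready(cx)
}
```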
The deadline-exceeded response was largely redundant, because the client
shouldn't normally be waiting for such a response, anyway -- the normal
client will automatically remove the in-flight request when it reaches
the deadline.
This also allows for internalizing the expiration+cleanup logic entirely
within BaseChannel, without having it leak into the Channel trait and
requiring action taken by the Requests struct.
- Remove unnecessary Sync and Clone bounds.
- Merge client and client::channel modules.
- Run cargo clippy in the pre-push hook.
- Put DispatchResponse.cancellation in an Option. Previously, the
cancellation logic checked whether `complete == true`, but it's a bit
less error prone to put the Cancellation in an Option, so that the
request can't accidentally be cancelled.
- Remove some unnecessary pins/projections.
- Clean up docs a bit. rustdoc had some warnings that are now gone.
This required the breaking change of removing the Client trait. The
intent of the Client trait was to facilitate the decorator pattern by
allowing users to create their own Clients that added behavior on top of
the base client. Unfortunately, this trait had become a maintenance
burden, consistently causing issues with lifetimes and the lack of
generic associated types. Specifically, it meant that Client impls could
not use async fns, which is no longer tenable today.
Before this commit, each request future had its own timeout and would
communicate to the client Channel when a request was no longer being
listened to. Now, instead, the Channel tracks deadlines of in-flight
requests and completes requests with deadline-exceeded errors when they
expire.
This should be functionally equivalent to the previous way. It just cuts
down on the amount of two-way processing required. Unfortunately,
dropping a response future early still requires the client to send a
cancellation message to the Channel.
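One plausible shape for this tracking, sketched with tokio-util's
DelayQueue (illustrative; not necessarily tarpc's actual internals):
```rust
use std::time::Duration;
use tokio_util::time::delay_queue::{DelayQueue, Key};

/// Illustrative sketch: the dispatch task keeps one timer entry per
/// in-flight request and fails the request when the entry expires.
struct InFlightDeadlines {
    deadlines: DelayQueue<u64>, // yields request ids as they expire
}

impl InFlightDeadlines {
    fn track(&mut self, request_id: u64, timeout: Duration) -> Key {
        // The returned Key can be used to cancel the timer early if the
        // response arrives before the deadline.
        self.deadlines.insert(request_id, timeout)
    }
}
```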
Before this commit, deadlines were handled by a timeout future that
wrapped each request handler. However, request handlers can be dropped
before sending a response back to the channel, so they can't be relied
on for channel state cleanup. Additionally, clients can't be relied on
to send cancellation messages. It was therefore theoretically possible
for pathological behaviors to cause an unbounded growth in orphan
request data in the Channel.
With this change, as long as requests sent have reasonable deadlines,
then the channel will be able to clean itself up. It is still possible
for requests to be sent with very large deadlines, which would prevent
the channel from cleaning itself up.
1. Renames
Some of the items in this module were renamed to be less generic:
- Handler => Incoming
- ClientHandler => Requests
- ResponseHandler => InFlightRequest
- Channel::{respond_with => requests}
In the case of Handler: handler of *what*? Now it's a bit clearer that
this is a stream of Channels (aka *incoming* connections).
Similarly, ClientHandler was a stream of requests over a single
connection. Hopefully Requests better reflects that.
ResponseHandler was renamed InFlightRequest because it no longer
contains the serving function. Instead, it is just the request, plus
the response channel and an abort hook. As a result of this,
Channel::respond_with underwent a big change: it used to take the
serving function and return a ClientHandler; now it has been renamed
Channel::requests and does not take any args.
2. Execute methods
All methods that actually result in responses being generated
have been consolidated into methods named `execute`:
- InFlightRequest::execute returns a future that completes when a
response has been generated and sent to the server Channel.
- Requests::execute automatically spawns response handlers for all
requests over a single channel.
- Channel::execute is a convenience for `channel.requests().execute()`.
- Incoming::execute automatically spawns response handlers for all
requests over all channels.
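A hedged sketch of how these levels compose (MyService, its generated
serve method, and the exact bounds/arguments are illustrative):
```rust
// Per-request: maximum control over where each response is computed.
let mut requests = channel.requests();
while let Some(request) = requests.next().await {
    tokio::spawn(request?.execute(MyService.serve()));
}

// Per-channel: equivalent to channel.requests().execute(...).
channel.execute(MyService.serve()).await;

// Per-listener: spawns handlers for all requests on all channels.
incoming.execute(MyService.serve()).await;
```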
3. Removal of Server.
server::Server was removed, as it provided no value over the Incoming/Channel
abstractions. Additionally, server::new was removed, since it just
returned a Server.
```
warning: explicit lifetimes given in parameter types where they could be elided (or replaced with `'_` if needed by type declaration)
   --> tarpc/src/rpc/server/filter.rs:127:5
    |
127 |     fn channel<'a>(self: Pin<&'a mut Self>) -> Pin<&'a mut C> {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: `#[warn(clippy::needless_lifetimes)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes

warning: 1 warning emitted
```
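The fix is to elide the lifetimes, as clippy suggests:
```rust
fn channel(self: Pin<&mut Self>) -> Pin<&mut C> {
    // ...
}
```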