As part of this, I made an optional tokio feature which, when enabled,
adds utility functions that spawn on the default tokio executor. This
allows the runtime crate to be removed.
On the one hand, this makes the spawning utils slightly less generic. On
the other hand:
- The fns are just helpers and are easily rewritten by the user.
- Tokio is clearly the dominant futures executor, so most people will just
use these versions.
Send + 'static was baked in to make it possible to spawn futures onto
the default executor. We can accomplish the same thing by offering
helper fns that do the spawning, without requiring those bounds for the
rest of the functionality.
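A minimal sketch of the idea, assuming a hypothetical `respond` helper and the optional tokio feature gate described above (the real function names differ, and today's tokio::spawn is used for illustration):

```rust
use std::future::Future;

// Core functionality: works with any future; no Send + 'static required.
async fn respond<F: Future<Output = ()>>(response_fut: F) {
    response_fut.await
}

// Optional helper behind the tokio feature: the Send + 'static bounds live
// here only because tokio::spawn requires them.
#[cfg(feature = "tokio")]
fn spawn_respond<F>(response_fut: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    tokio::spawn(respond(response_fut));
}
```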
Fixes https://github.com/google/tarpc/issues/212
- fn serve -> Service::serve
- fn new_stub -> Client::new
This allows the generated function names to remain consistent across
service definitions while preventing collisions.
-- Connection Limits
The problem with having ConnectionFilter enabled by default is elaborated on in https://github.com/google/tarpc/issues/217. The gist is that not all servers want a policy based on `SocketAddr`. This PR allows customizing the behavior of ConnectionFilter, at the cost of it not being enabled by default. However, enabling it is as simple as one line:
incoming.max_channels_per_key(10, ip_addr)
The second argument is a key function that takes the user-chosen transport and returns some hashable, equatable, cloneable key. In the above example, it returns an `IpAddr`.
This also allows the addr fns to be removed from the `Transport` trait, making it simply an alias for `Stream + Sink`.
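A sketch of what a key fn like `ip_addr` could look like; `HasPeerAddr` is a hypothetical stand-in for however the user-chosen transport exposes its peer address:

```rust
use std::net::{IpAddr, SocketAddr};

// Hypothetical trait standing in for the transport's way of exposing its
// peer address; the actual transport type is chosen by the user.
trait HasPeerAddr {
    fn peer_addr(&self) -> SocketAddr;
}

// Key fn: maps a transport to a hashable, equatable, cloneable key.
// Here, channels are keyed by the peer's IP address.
fn ip_addr<T: HasPeerAddr>(transport: &T) -> IpAddr {
    transport.peer_addr().ip()
}
```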
-- Per-Channel Request Throttling
With respect to Channel's throttling behavior, the same argument applies. There isn't a one-size-fits-all solution to throttling requests, and the policy applied by tarpc is just one of potentially many solutions. As such, `Channel` is now a trait that offers a few combinators, one of which is throttling:
channel.max_concurrent_requests(10).respond_with(serve(Server))
This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:
incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))
-- Global Request Throttling
I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), the difference being that the previous behavior allowed you to spawn channels onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Given the existing options, I don't believe the benefit provided by this functionality held its own.
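For illustration, a minimal sketch of recovering a global limit with `buffer_unordered`; `drive_with_limit` and its signature are hypothetical, and it assumes a stream whose items are response futures:

```rust
use futures::stream::{Stream, StreamExt};
use std::future::Future;

// Drive at most `limit` response futures concurrently, all on the current
// task. Unlike the removed built-in limit, channels are not spread across
// threads.
async fn drive_with_limit<S, F>(responses: S, limit: usize)
where
    S: Stream<Item = F>,
    F: Future<Output = ()>,
{
    responses
        .buffer_unordered(limit)
        .for_each(|()| async {})
        .await;
}
```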
# New Crates
- crate rpc contains the core client/server request-response framework, as well as a transport trait.
- crate bincode-transport implements a transport that works almost exactly as tarpc works today (which is not to say it's wire-compatible).
- crate trace has some foundational types for tracing. This isn't really fleshed out yet, but it's useful for in-process log tracing, at least.
All crates are now at the top level; e.g., tarpc-plugins is now tarpc/plugins rather than tarpc/src/plugins. tarpc itself now has a *very* small code surface, as most functionality has been moved into the other, more granular crates.
# New Features
- deadlines: all requests specify a deadline, and a server will stop processing a response when past its deadline.
- client cancellation propagation: when a client drops a request, the client sends a message to the server informing it to cancel its response. This means cancellations can propagate across multiple server hops.
- trace context support, as mentioned above
- more server configuration for total connection limits, per-connection request limits, etc.
# Removals
- no more shutdown handle. I left it out for now because of time constraints and not being sure what the right solution is.
- all async now, no blocking stub or server interface. This helps with maintainability, and async/await makes async code much more usable. The service trait is accordingly renamed to Service, and the client to Client.
- no built-in transport. Tarpc is now transport agnostic (see bincode-transport for transitioning existing uses).
- going along with the previous bullet, having no preferred transport means no TLS support at this time. We could make a TLS transport or make bincode-transport compatible with TLS.
- a lot of examples were removed because I couldn't keep up with maintaining all of them. Hopefully the ones I kept are still illustrative.
- no more plugins!
# Open Questions
1. Should client.send() return `Future<Response>` or `Future<Future<Response>>`? The former appears more ergonomic, but it doesn't allow concurrent requests with a single client handle. The latter is less ergonomic but yields back control of the client once it has successfully sent out the request. Should we offer fns for both? (See the sketch after this list.)
2. Should the rpc service! fns take &mut self, &self, or self? The service needs to impl Clone anyway; technically we only need to clone it once per connection and can then leave it up to the user to decide whether to clone it per RPC. In practice, I think everyone doing nontrivial work will need to clone it per RPC.
3. Do the request/response structs look ok?
4. Is supporting server shutdown/lameduck important?
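To illustrate question 1, here is a sketch of the two shapes under discussion; `MyClient`, `Request`, and `Response` are stand-ins, not the actual generated types:

```rust
use std::future::Future;

struct MyClient;
struct Request;
struct Response;

impl MyClient {
    // Future<Response>: resolves when the response arrives. Ergonomic, but
    // the handle is tied up until then, so no concurrent requests through it.
    async fn send(&self, _req: Request) -> Response {
        Response
    }

    // Future<Future<Response>>: the outer future resolves once the request
    // has been sent, handing back control of the client along with a second
    // future that resolves to the response.
    fn send_deferred(
        &self,
        _req: Request,
    ) -> impl Future<Output = impl Future<Output = Response>> {
        async {
            // ...request is written to the transport here...
            async { Response }
        }
    }
}
```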
Fixes #178 #155 #124 #104 #83 #38
* Head off imminent breakage due to https://github.com/rust-lang/rust/pull/51285.
* Fix examples and documentation to use a recently-gated feature, `proc_macro_path_invoc`.
* Update dependency versions.
* Change `client::{future, sync}, server::{future, sync}` to `future::{client, server}, sync::{client, server}`
This reflects the most common usage pattern and allows for some types to not need to be fully qualified when used together (e.g. the previously-named `client::future::Options` and `server::future::Options` can now be `client::Options` and `server::Options`).
* sync::client: create a RequestHandler struct to encapsulate logic of processing client requests.
The largest benefit is that unit testing becomes easier, e.g. testing that the request processing stops when all request senders are dropped.
* Rename Serialize error variants to make sense.
* Rename tarpc_service_ConnectFuture__ => Connect (because it's public)
* Use tokio-proto's ClientProxy rather than reimplementing the same logic.
Curiously, this commit isn't a net reduction in LOC. Oh well.
* Factor the OS-specific functionality in listener() out into separate fns
* Remove service-fn dep
The basic strategy is to start a reactor on a dedicated thread running a request stream.
Requests are spawned onto the reactor, allowing multiple requests to be
processed concurrently. For example, if you clone the client to make requests
from multiple threads, they won't have to wait for each other's
requests to complete before theirs start being sent out.
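A rough sketch of that strategy, using a current-thread tokio runtime and a futures channel in place of the original reactor; `Request` and the channel shape are illustrative only:

```rust
use futures::channel::mpsc;
use futures::stream::StreamExt;
use std::thread;

struct Request; // hypothetical request type

// Start the dedicated thread; clones of the returned sender back client
// handles. Each request is spawned locally so a slow request doesn't block
// the ones behind it.
fn start_request_thread() -> mpsc::UnboundedSender<Request> {
    let (tx, mut rx) = mpsc::unbounded::<Request>();
    thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .build()
            .expect("failed to build runtime");
        let local = tokio::task::LocalSet::new();
        local.block_on(&rt, async move {
            while let Some(request) = rx.next().await {
                tokio::task::spawn_local(async move {
                    let _ = request; // ...dispatch the request here...
                });
            }
        });
    });
    tx
}
```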
Also, client rpcs only take &self now, which was also required for
clients to be usable in a service.
Also added a test to prevent regressions.
* Add server::Handle::shutdown
* Hybrid approach: lameduck + total shutdown when all clients disconnect.
* The future handle has addr() and shutdown(), but not run().