As part of this, I made an optional tokio feature which, when enabled,
adds utility functions that spawn on the default tokio executor. This
allows the `runtime` crate to be removed.
On the one hand, this makes the spawning utils slightly less generic. On
the other hand:
- The fns are just helpers and are easily rewritten by the user.
- Tokio is clearly the dominant futures executor, so most people will just
use these versions.
`Send + 'static` was baked in to make it possible to spawn futures onto
the default executor. We can accomplish the same thing by offering
helper fns that do the spawning, without requiring those bounds for the
rest of the functionality.
Fixes https://github.com/google/tarpc/issues/212
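A minimal sketch of the kind of helper this enables, assuming a tokio feature flag (the feature and function names here are illustrative, not the actual tarpc API):

```rust
use std::future::Future;

// With the optional tokio feature enabled, the Send + 'static bounds live on
// the spawning helper rather than on the core traits.
#[cfg(feature = "tokio")]
pub fn spawn<F>(fut: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    // Spawn onto the default tokio executor; users of other executors can
    // trivially write their own equivalent.
    tokio::spawn(fut);
}
```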
With this change, service definitions no longer need to be isolated in their own modules.
Given:
```rust
#[tarpc::service]
trait World { ... }
```
Before, this would generate the following items
------
- `trait World`
- `fn serve`
- `struct Client`
- `fn new_stub`
`// Implementation details below`
- `enum Request`
- `enum Response`
- `enum ResponseFut`
And now these items
------
- `trait World { ... fn serve }`
- `struct WorldClient ... impl WorldClient { ... async fn new }`
`// Implementation details below`
- `enum WorldRequest`
- `enum WorldResponse`
- `enum WorldResponseFut`
- `struct ServeWorld` (new manual closure impl because you can't use impl Trait in trait fns)
The old fns correspond to the new items as follows:
- `fn serve` -> `Service::serve`
- `fn new_stub` -> `Client::new`
This allows the generated function names to remain consistent across
service definitions while preventing collisions.
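For example, two services can now share a module without colliding, since every generated item is prefixed with the trait name (the method signatures below are just placeholders):

```rust
#[tarpc::service]
trait World {
    async fn hello(name: String) -> String;
}

#[tarpc::service]
trait Weather {
    async fn forecast(city: String) -> String;
}

// The first expands to WorldClient, WorldRequest, WorldResponse, ...; the
// second to WeatherClient, WeatherRequest, WeatherResponse, ...; no `mod`
// isolation is needed to keep them apart.
```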
-- Connection Limits
The problem with having `ConnectionFilter` default-enabled is elaborated on in https://github.com/google/tarpc/issues/217. The gist of it is that not all servers want a policy based on `SocketAddr`. This PR allows customizing the behavior of `ConnectionFilter`, at the cost of not having it enabled by default. However, enabling it is as simple as one line:
```rust
incoming.max_channels_per_key(10, ip_addr)
```
The second argument is a key function that takes the user-chosen transport and returns some hashable, equatable, cloneable key. In the above example, it returns an `IpAddr`.
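For instance, the key function could be as small as this (a sketch that assumes the chosen transport exposes the peer's `SocketAddr`):

```rust
use std::net::{IpAddr, SocketAddr};

// Map each connection's peer address to its IP. IpAddr is hashable,
// equatable, and cloneable, so it works as a channel key.
fn ip_addr(peer: SocketAddr) -> IpAddr {
    peer.ip()
}
```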
This also allows the addr fns to be removed from the `Transport` trait, which makes it simply an alias for `Stream + Sink`.
-- Per-Channel Request Throttling
With respect to `Channel`'s throttling behavior, the same argument applies. There isn't a one-size-fits-all solution to throttling requests, and the policy applied by tarpc is just one of potentially many solutions. As such, `Channel` is now a trait that offers a few combinators, one of which is throttling:
```rust
channel.max_concurrent_requests(10).respond_with(serve(Server))
```
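The combinator style follows the usual extension-trait pattern; roughly this shape (illustrative only, not tarpc's actual `Channel` definition):

```rust
// A trait whose combinators wrap `self` in adapters that enforce a policy.
trait Channel: Sized {
    /// Limit the number of requests that may be in flight at once.
    fn max_concurrent_requests(self, limit: usize) -> MaxConcurrentRequests<Self> {
        MaxConcurrentRequests { inner: self, limit }
    }
}

/// Adapter returned by `max_concurrent_requests`; it would enforce the limit
/// while driving the underlying channel.
struct MaxConcurrentRequests<C> {
    inner: C,
    limit: usize,
}
```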
This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:
```rust
incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))
```
-- Global Request Throttling
I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), with one difference: the previous behavior allowed you to spawn channels onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Given the existing options, I don't believe the benefit of this functionality justified keeping it.
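For reference, recovering a global cap might look roughly like this (a sketch; `channels` is assumed to be a stream of per-channel futures and `limit` an arbitrary cap):

```rust
use futures::{Future, Stream, StreamExt};

// Drive at most `limit` channel-handling futures concurrently, all on the
// current task; per-request handlers inside each channel can still be spawned.
async fn handle_with_global_limit<S, F>(channels: S, limit: usize)
where
    S: Stream<Item = F>,
    F: Future<Output = ()>,
{
    channels
        .buffer_unordered(limit)
        .for_each(|()| async {})
        .await;
}
```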
`DispatchResponse` was incorrectly marking itself as complete even when it
expired without receiving a response. This can cause a chain of
deleterious effects:
- Request cancellation won't propagate when request timers expire.
- Which causes client dispatch to have an inconsistent in-flight request
map containing stale IDs.
- Which can cause clients to hang rather than exit.
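For intuition, the invariant being restored is roughly the following (hypothetical type and field names, not tarpc's actual internals):

```rust
use std::sync::mpsc::Sender;

// Hypothetical stand-in for the response handle: it is only `complete` once a
// response actually arrived, so an expired handle still cancels on drop.
struct ResponseHandle {
    request_id: u64,
    complete: bool,
    cancellations: Sender<u64>,
}

impl Drop for ResponseHandle {
    fn drop(&mut self) {
        if !self.complete {
            // Deadline expired without a response: tell the dispatch task to
            // cancel the request and remove the stale ID from its in-flight map.
            let _ = self.cancellations.send(self.request_id);
        }
    }
}
```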