Make server methods more composable.

-- Connection Limits

The problem with having `ConnectionFilter` enabled by default is elaborated on in https://github.com/google/tarpc/issues/217. The gist of it is that not all servers want a connection-limiting policy keyed on `SocketAddr`. This PR allows customizing the behavior of `ConnectionFilter`, at the cost of it no longer being enabled by default. However, enabling it is a one-liner:

incoming.max_channels_per_key(10, ip_addr)

The second argument is a key function that takes the user-chosen transport and returns a key that is hashable, equatable, and cloneable. In the example above, it returns an `IpAddr`.
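
For a TCP-based transport, for instance, the key function can just extract the peer's IP. A minimal sketch of what `ip_addr` could look like, assuming the key function is handed a `tokio::net::TcpStream` reference (the exact argument type depends on the chosen transport):

use std::net::IpAddr;
use tokio::net::TcpStream;

// Hypothetical key function: group channels by the peer's IP address.
fn ip_addr(transport: &TcpStream) -> IpAddr {
    transport
        .peer_addr()
        .map(|addr| addr.ip())
        // Fall back to the unspecified address if the peer address is unavailable.
        .unwrap_or_else(|_| IpAddr::from([0, 0, 0, 0]))
}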

This also allows the address-related methods to be removed from the `Transport` trait, which reduces it to essentially an alias for `Stream + Sink`.
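
Roughly, the slimmed-down trait amounts to the usual trait-alias pattern; a sketch with illustrative names and bounds, not the crate's exact definition:

use futures::{Sink, Stream};

// Illustrative shape only: Transport adds nothing beyond Stream + Sink, and a
// blanket impl makes every matching type a Transport automatically.
pub trait Transport<SinkItem, Item>: Stream<Item = Item> + Sink<SinkItem> {}

impl<T, SinkItem, Item> Transport<SinkItem, Item> for T where
    T: Stream<Item = Item> + Sink<SinkItem>
{
}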

-- Per-Channel Request Throttling

The same argument applies to `Channel`'s throttling behavior. There is no one-size-fits-all solution to throttling requests, and the policy tarpc applied is just one of many possible ones. As such, `Channel` is now a trait that offers a few combinators, one of which throttles requests:

channel.max_concurrent_requests(10).respond_with(serve(Server))

This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:

incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))

-- Global Request Throttling

I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), the difference being that the previous behavior allowed you to spawn channels onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Considering the available alternatives, I don't believe this feature pulled its weight.
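
For reference, the general shape of the `buffer_unordered` pattern, sketched outside of tarpc (`handle_request` stands in for whatever turns an incoming item into a response future):

use futures::{stream, StreamExt};

// Generic illustration: at most 10 of the produced futures run concurrently,
// and all of them are polled on the single task driving this stream.
async fn capped() {
    stream::iter(0..100u32)
        .map(handle_request)
        .buffer_unordered(10)
        .for_each(|_response| async {})
        .await;
}

async fn handle_request(i: u32) -> u32 {
    // Stand-in for real per-request work.
    i
}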

Author: Tim Kuehn
Date:   2019-07-15 18:58:36 -07:00
parent 146496d08c
commit 1089415451
36 changed files with 1303 additions and 989 deletions

rpc/src/server/testing.rs (new file)

@@ -0,0 +1,125 @@
use crate::server::{Channel, Config};
use crate::{context, Request, Response};
use fnv::FnvHashSet;
use futures::future::{AbortHandle, AbortRegistration};
use futures::{Sink, Stream};
use futures_test::task::noop_waker_ref;
use pin_utils::{unsafe_pinned, unsafe_unpinned};
use std::collections::VecDeque;
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::SystemTime;
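
/// An in-memory stand-in for a real transport-backed channel: incoming
/// requests are popped off `stream`, outgoing responses are collected in
/// `sink`, and in-flight request ids are tracked in a plain set.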
pub(crate) struct FakeChannel<In, Out> {
    pub stream: VecDeque<In>,
    pub sink: VecDeque<Out>,
    pub config: Config,
    pub in_flight_requests: FnvHashSet<u64>,
}

impl<In, Out> FakeChannel<In, Out> {
    unsafe_pinned!(stream: VecDeque<In>);
    unsafe_pinned!(sink: VecDeque<Out>);
    unsafe_unpinned!(in_flight_requests: FnvHashSet<u64>);
}
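
// Incoming messages are simply drained from the backing VecDeque, relying on
// the Stream impl for VecDeque provided by this version of futures.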
impl<In, Out> Stream for FakeChannel<In, Out>
where
    In: Unpin,
{
    type Item = In;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
        self.stream().poll_next(cx)
    }
}
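
// Responses are pushed onto the backing VecDeque; its sink error type is
// uninhabited, so `match e {}` converts it into the io::Error this impl
// advertises. start_send also clears the response's request id from the
// in-flight set, mirroring what a real channel does when a response goes out.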
impl<In, Resp> Sink<Response<Resp>> for FakeChannel<In, Response<Resp>> {
    type Error = io::Error;

    fn poll_ready(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
        self.sink().poll_ready(cx).map_err(|e| match e {})
    }

    fn start_send(
        mut self: Pin<&mut Self>,
        response: Response<Resp>,
    ) -> Result<(), Self::Error> {
        self.as_mut()
            .in_flight_requests()
            .remove(&response.request_id);
        self.sink().start_send(response).map_err(|e| match e {})
    }

    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
        self.sink().poll_flush(cx).map_err(|e| match e {})
    }

    fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
        self.sink().poll_close(cx).map_err(|e| match e {})
    }
}
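
// Implements the Channel trait on top of the in-memory state. start_request
// hands back a fresh AbortRegistration whose paired AbortHandle is discarded,
// so test requests are never actually aborted.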
impl<Req, Resp> Channel for FakeChannel<io::Result<Request<Req>>, Response<Resp>>
where
    Req: Unpin + Send + 'static,
    Resp: Send + 'static,
{
    type Req = Req;
    type Resp = Resp;

    fn config(&self) -> &Config {
        &self.config
    }

    fn in_flight_requests(self: Pin<&mut Self>) -> usize {
        self.in_flight_requests.len()
    }

    fn start_request(self: Pin<&mut Self>, id: u64) -> AbortRegistration {
        self.in_flight_requests().insert(id);
        AbortHandle::new_pair().1
    }
}
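
// Test helper for enqueuing an incoming request with a fixed (epoch) deadline
// and a default trace context.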
impl<Req, Resp> FakeChannel<io::Result<Request<Req>>, Response<Resp>> {
    pub fn push_req(&mut self, id: u64, message: Req) {
        self.stream.push_back(Ok(Request {
            context: context::Context {
                deadline: SystemTime::UNIX_EPOCH,
                trace_context: Default::default(),
            },
            id,
            message,
        }));
    }
}
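
// Convenience constructor: an empty channel with default config and no
// in-flight requests.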
impl FakeChannel<(), ()> {
    pub fn default<Req, Resp>() -> FakeChannel<io::Result<Request<Req>>, Response<Resp>> {
        FakeChannel {
            stream: VecDeque::default(),
            sink: VecDeque::default(),
            config: Config::default(),
            in_flight_requests: FnvHashSet::default(),
        }
    }
}
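
/// Test-only extension for `Poll<Option<T>>`: `is_done` is true when the poll
/// result is `Ready(None)`, i.e. the stream has terminated.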
pub trait PollExt {
    fn is_done(&self) -> bool;
}

impl<T> PollExt for Poll<Option<T>> {
    fn is_done(&self) -> bool {
        match self {
            Poll::Ready(None) => true,
            _ => false,
        }
    }
}
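
/// Returns a `Context` backed by a no-op waker, for driving polls by hand in
/// tests.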
pub fn cx() -> Context<'static> {
    Context::from_waker(noop_waker_ref())
}