tarpc/json-transport/tests/pushback.rs
Tim Kuehn 1089415451 Make server methods more composable. (2019-07-15 19:01:46 -07:00)
-- Connection Limits

The problem with having `ConnectionFilter` enabled by default is elaborated on in https://github.com/google/tarpc/issues/217. The gist of it is that not all servers want a policy based on `SocketAddr`. This PR allows customizing the behavior of `ConnectionFilter`, at the cost of not having it enabled by default. However, enabling it is as simple as one line:

incoming.max_channels_per_key(10, ip_addr)

The second argument is a key function that takes the user-chosen transport and returns some hashable, equatable, cloneable key. In the above example, it returns an `IpAddr`.
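For illustration, such a key function might look like the following sketch. `JsonTransport` and its `peer_addr()` accessor are stand-ins for the concrete transport in use, and the exact signature `max_channels_per_key` expects is elided:

use std::net::IpAddr;

// Hypothetical key function: keys each channel by the peer's IP address.
// Any key that is Hash + Eq + Clone would work equally well.
fn ip_addr(transport: &JsonTransport) -> IpAddr {
    transport.peer_addr().expect("TCP transports have a peer").ip()
}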

This also allows the address-accessor fns to be removed from the `Transport` trait, which thereby becomes simply an alias for `Stream + Sink`.
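Conceptually, the alias is just a blanket trait, along these lines (a sketch of the shape, not the literal definition in the crate):

// Anything that is a Stream of inbound items and a Sink for outbound
// items qualifies as a Transport. Bounds here are illustrative.
trait Transport<SinkItem, Item>:
    Stream<Item = io::Result<Item>> + Sink<SinkItem, Error = io::Error>
{
}

impl<T, SinkItem, Item> Transport<SinkItem, Item> for T where
    T: Stream<Item = io::Result<Item>> + Sink<SinkItem, Error = io::Error>
{
}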

-- Per-Channel Request Throttling

With respect to `Channel`'s throttling behavior, the same argument applies. There isn't a one-size-fits-all solution to throttling requests, and the policy applied by tarpc is just one of potentially many solutions. As such, `Channel` is now a trait that offers a few combinators, one of which is throttling:

channel.max_concurrent_requests(10).respond_with(serve(Server))

This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:

incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))

-- Global Request Throttling

I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), with the difference being that the previous behavior allowed you to spawn channels onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Given the existing options, I don't believe the benefit this functionality provided justified keeping it.
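For example, driving all channels on one task with a cap of 10 handled concurrently could look like this (a sketch; `handle_channel` is a hypothetical async fn that drives a single channel to completion, spawning its per-request handlers):

incoming
    .map(|channel| handle_channel(channel))
    .buffer_unordered(10) // every channel is polled on this one task
    .for_each(|()| future::ready(()))
    .await;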

// Copyright 2019 Google LLC
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file or at
// https://opensource.org/licenses/MIT.

//! Tests client/server control flow.

#![feature(async_await)]
#![feature(async_closure)]

use futures::{
    compat::{Executor01CompatExt, Future01CompatExt},
    prelude::*,
};
use log::{error, info, trace};
use rand_distr::{Distribution, Normal};
use rpc::{
    client, context,
    server::{Channel, Server},
};
use std::{
    io,
    time::{Duration, Instant, SystemTime},
};
use tokio::timer::Delay;

pub trait AsDuration {
    /// Delay of 0 if self is in the past.
    fn as_duration(&self) -> Duration;
}

impl AsDuration for SystemTime {
    fn as_duration(&self) -> Duration {
        self.duration_since(SystemTime::now()).unwrap_or_default()
    }
}

async fn run() -> io::Result<()> {
    // Bind to an ephemeral port, dropping transports that fail to accept.
    let listener = tarpc_json_transport::listen(&"0.0.0.0:0".parse().unwrap())?
        .filter_map(|r| future::ready(r.ok()));
    let addr = listener.get_ref().local_addr();

    // Serve a single channel, responding to each request after a random delay.
    let server = Server::<String, String>::default()
        .incoming(listener)
        .take(1)
        .for_each(async move |channel| {
            let client_addr = channel.get_ref().peer_addr().unwrap();
            let handler = channel.respond_with(move |ctx, request| {
                // Sleep for a time sampled from a normal distribution with:
                // - mean: 1/2 the deadline.
                // - std dev: 1/2 the deadline.
                let deadline: Duration = ctx.deadline.as_duration();
                let deadline_millis = deadline.as_secs() * 1000 + deadline.subsec_millis() as u64;
                let distribution =
                    Normal::new(deadline_millis as f64 / 2., deadline_millis as f64 / 2.).unwrap();
                let delay_millis = distribution.sample(&mut rand::thread_rng()).max(0.);
                let delay = Duration::from_millis(delay_millis as u64);
                trace!(
                    "[{}/{}] Responding to request in {:?}.",
                    ctx.trace_id(),
                    client_addr,
                    delay,
                );
                let sleep = Delay::new(Instant::now() + delay).compat();
                async {
                    sleep.await.unwrap();
                    Ok(request)
                }
            });
            tokio_executor::spawn(handler.unit_error().boxed().compat());
        });
    tokio_executor::spawn(server.unit_error().boxed().compat());

    // Connect a client that allows at most 10 requests in flight and buffers
    // at most 10 pending requests.
    let mut config = client::Config::default();
    config.max_in_flight_requests = 10;
    config.pending_request_buffer = 10;
    let conn = tarpc_json_transport::connect(&addr).await?;
    let client = client::new::<String, String, _>(config, conn).await?;

    // Fire off 100 concurrent pings; requests beyond the limits experience
    // pushback.
    let clients = (1..=100u32).map(|_| client.clone()).collect::<Vec<_>>();
    for mut client in clients {
        let ctx = context::current();
        tokio_executor::spawn(
            async move {
                let trace_id = *ctx.trace_id();
                let response = client.call(ctx, "ping".into());
                match response.await {
                    Ok(response) => info!("[{}] response: {}", trace_id, response),
                    Err(e) => error!("[{}] request error: {:?}: {}", trace_id, e.kind(), e),
                }
            }
            .unit_error()
            .boxed()
            .compat(),
        );
    }
    Ok(())
}

#[test]
fn ping_pong() -> io::Result<()> {
    env_logger::init();
    rpc::init(tokio::executor::DefaultExecutor::current().compat());
    tokio::run(
        run()
            .map_ok(|_| println!("done"))
            .map_err(|e| panic!(e.to_string()))
            .boxed()
            .compat(),
    );
    Ok(())
}