Make server methods more composable.

-- Connection Limits

The problem with having ConnectionFilter default-enabled is elaborated on in https://github.com/google/tarpc/issues/217. The gist of it is that not all servers want a policy based on `SocketAddr`. This PR allows customizing the behavior of ConnectionFilter, at the cost of it no longer being enabled by default. Enabling it, however, is as simple as one line:

incoming.max_channels_per_key(10, ip_addr)

The second argument is a key function that takes the user-chosen transport and returns some hashable, equatable, cloneable key. In the above example, it returns an `IpAddr`.
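
With the TCP-based example transports, the key function can just pull the peer IP out of the underlying stream. Here's a rough sketch of what `ip_addr` could be, mirroring the updated example server (the exact `as_ref()` chain depends on the chosen transport):

```rust
// Sketch: allow at most 10 channels per peer IP. The closure is handed the
// user-chosen transport and must return a Hash + Eq + Clone key.
incoming.max_channels_per_key(10, |t| t.as_ref().peer_addr().unwrap().ip())
```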

This also allows the addr fns to be removed from the `Transport` trait, which means `Transport` has become simply an alias for `Stream + Sink`.
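
In other words, the trait now boils down to roughly the following shape (a sketch rather than the crate's exact definition, which also pins down the item and error types):

```rust
use futures::{Sink, Stream};

// Sketch: Transport is nothing but a Stream + Sink alias with a blanket impl,
// so any type that already implements both automatically qualifies.
pub trait Transport<SinkItem, Item>: Stream<Item = Item> + Sink<SinkItem> {}

impl<T, SinkItem, Item> Transport<SinkItem, Item> for T where
    T: Stream<Item = Item> + Sink<SinkItem>
{
}
```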

-- Per-Channel Request Throttling

With respect to `Channel`'s throttling behavior, the same argument applies: there isn't a one-size-fits-all solution to throttling requests, and the policy applied by tarpc is just one of potentially many. As such, `Channel` is now a trait that offers a few combinators, one of which is throttling:

channel.max_concurrent_requests(10).respond_with(serve(Server))
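
For a single connection, the full path from transport to served channel then looks roughly like this (a sketch based on the updated example server; `transport`, `HelloServer`, and `service::serve` come from that example):

```rust
// Sketch: wrap one accepted transport in a channel with the default config,
// cap it at 10 in-flight requests, and respond with the generated serve glue.
server::BaseChannel::with_defaults(transport)
    .max_concurrent_requests(10)
    .respond_with(service::serve(HelloServer))
    .await;
```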

This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:

incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))

-- Global Request Throttling

I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), with the difference that the previous behavior allowed channels to be spawned onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Considering the existing options, I don't believe the benefit provided by this functionality pulled its weight.
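
Concretely, recovering a global cap of, say, 10 concurrently served channels looks roughly like this (a sketch based on the updated example server, where `incoming` is a stream of channels):

```rust
use futures::{future, StreamExt};

// Sketch: respond on at most 10 channels at a time, all driven on this task;
// per-request handlers are still spawned by the channel itself.
incoming
    .map(|channel| channel.respond_with(service::serve(HelloServer)))
    .buffer_unordered(10)
    .for_each(|_| future::ready(()))
    .await;
```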
-- Example Diffs

Below are excerpts from the diff (36 changed files, with 1303 additions and 989 deletions), showing how the example client, the service definition, and the example server change:

@@ -12,7 +12,7 @@ use std::{io, net::SocketAddr};
 use tarpc::{client, context};
 
 async fn run(server_addr: SocketAddr, name: String) -> io::Result<()> {
-    let transport = bincode_transport::connect(&server_addr).await?;
+    let transport = json_transport::connect(&server_addr).await?;
 
     // new_stub is generated by the service! macro. Like Server, it takes a config and any
     // Transport as input, and returns a Client, also generated by the macro.

@@ -10,5 +10,9 @@
 // It defines one RPC, hello, which takes one arg, name, and returns a String.
 tarpc::service! {
     /// Returns a greeting for name.
-    rpc hello(name: String) -> String;
+    rpc hello(#[serde(default = "default_name")] name: String) -> String;
 }
+
+fn default_name() -> String {
+    "DefaultName".into()
+}

@@ -15,13 +15,13 @@ use futures::{
 use std::{io, net::SocketAddr};
 use tarpc::{
     context,
-    server::{Handler, Server},
+    server::{self, Channel, Handler},
 };
 
 // This is the type that implements the generated Service trait. It is the business logic
 // and is used to start the server.
 #[derive(Clone)]
-struct HelloServer;
+struct HelloServer(SocketAddr);
 
 impl service::Service for HelloServer {
     // Each defined rpc generates two items in the trait, a fn that serves the RPC, and
@@ -30,29 +30,39 @@ impl service::Service for HelloServer {
     type HelloFut = Ready<String>;
 
     fn hello(self, _: context::Context, name: String) -> Self::HelloFut {
-        future::ready(format!("Hello, {}!", name))
+        future::ready(format!(
+            "Hello, {}! You are connected from {:?}.",
+            name, self.0
+        ))
     }
 }
 
 async fn run(server_addr: SocketAddr) -> io::Result<()> {
-    // bincode_transport is provided by the associated crate bincode-transport. It makes it easy
-    // to start up a serde-powered bincode serialization strategy over TCP.
-    let transport = bincode_transport::listen(&server_addr)?;
-    // The server is configured with the defaults.
-    let server = Server::default()
-        // Server can listen on any type that implements the Transport trait.
-        .incoming(transport)
+    json_transport::listen(&server_addr)?
+        // Ignore accept errors.
+        .filter_map(|r| future::ready(r.ok()))
+        .map(server::BaseChannel::with_defaults)
+        // Limit channels to 1 per IP.
+        .max_channels_per_key(1, |t| t.as_ref().peer_addr().unwrap().ip())
         // serve is generated by the service! macro. It takes as input any type implementing
         // the generated Service trait.
-        .respond_with(service::serve(HelloServer));
-    server.await;
+        .map(|channel| {
+            let server = HelloServer(channel.as_ref().as_ref().peer_addr().unwrap());
+            channel.respond_with(service::serve(server))
+        })
+        // Max 10 channels.
+        .buffer_unordered(10)
+        .for_each(|_| futures::future::ready(()))
+        .await;
 
     Ok(())
 }
 
 fn main() {
     env_logger::init();
 
     let flags = App::new("Hello Server")
         .version("0.1")
         .author("Tim <tikue@google.com>")