Remove deprecated tokio-proto and replace with homegrown rpc framework (#199)

# New Crates

- crate rpc contains the core client/server request-response framework, as well as a transport trait (a hedged sketch of the idea follows below).
- crate bincode-transport implements a transport that works almost exactly as tarpc works today (though it is not wire-compatible).
- crate trace has some foundational types for tracing. This isn't fully fleshed out yet, but it's already useful for in-process log tracing.

All crates are now at the top level; e.g., tarpc-plugins is now tarpc/plugins rather than tarpc/src/plugins. tarpc itself now has a *very* small code surface, as most functionality has been moved into the other, more granular crates.
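To give a feel for the transport abstraction, here is a hedged sketch of what a transport trait along these lines could look like. The trait name, generic parameters, and methods are illustrative assumptions, not the rpc crate's actual definition:

```rust
use futures::{Sink, Stream};
use std::{io, net::SocketAddr};

/// A hypothetical sketch: a transport is a framed, bidirectional channel,
/// i.e. a Stream of inbound frames plus a Sink of outbound frames, with
/// enough addressing info to identify the peer. The real trait in the rpc
/// crate may differ in name, bounds, and methods.
pub trait Transport<In, Out>:
    Stream<Item = io::Result<In>> + Sink<Out, Error = io::Error>
{
    /// The address of the connected peer.
    fn peer_addr(&self) -> io::Result<SocketAddr>;

    /// The local address of this end of the connection.
    fn local_addr(&self) -> io::Result<SocketAddr>;
}
```

With this shape, anything that can frame messages over a byte stream (TCP, in-process channels, or a future TLS transport) can plug in without tarpc itself knowing about it.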

# New Features
- deadlines: every request specifies a deadline, and a server stops processing a request once its deadline has passed (see the sketch after this list).
- client cancellation propagation: when a client drops a request, it sends a message informing the server to cancel its response. This means cancellations can propagate across multiple server hops.
- trace contexts, as mentioned above, attached to every request.
- more server configuration for total connection limits, per-connection request limits, etc.
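As a rough illustration of the deadline feature, here is a hedged sketch of a call site that tightens the deadline before issuing an RPC. It assumes context::Context exposes a SystemTime deadline field, which may not match the exact field names in this commit:

```rust
use std::time::{Duration, SystemTime};
use tarpc::context;

// A hedged sketch, assuming context::Context exposes a SystemTime deadline
// field (the exact field name in this commit may differ). Drop this into the
// run() fn of the example below, where `client` is already connected.
let mut ctx = context::current();
ctx.deadline = SystemTime::now() + Duration::from_secs(1);

// If the server is still working once the deadline passes, it stops
// processing the request. If the client instead drops the response future,
// a cancellation message propagates to the server, and onward across any
// further server hops.
let hello = await!(client.hello(ctx, "Stim".to_string()))?;
println!("{}", hello);
```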

# Removals
- no more shutdown handle. I left it out for now due to time constraints and not being sure what the right solution is.
- all async now; there is no blocking stub or server interface. This helps with maintainability, and async/await makes async code much more usable. The service trait is accordingly renamed Service, and the client is renamed Client.
- no built-in transport. Tarpc is now transport agnostic (see bincode-transport for transitioning existing uses).
- following from the previous bullet, having no preferred transport means no TLS support at this time. We could add a TLS transport or make bincode-transport compatible with TLS.
- a lot of examples were removed because I couldn't keep up with maintaining all of them. Hopefully the ones I kept are still illustrative.
- no more plugins!

# Open Questions

1. Should client.send() return `Future<Response>` or `Future<Future<Response>>`? The former appears more ergonomic, but it doesn't allow concurrent requests with a single client handle. The latter is less ergonomic but yields back control of the client once the request has been sent out. Should we offer fns for both? (See the sketch after this list.)
2. Should the rpc fns in `service!` take `&mut self`, `&self`, or `self`? The service needs to impl Clone anyway; technically we only need to clone it once per connection, and we can leave it up to the user to decide whether to clone it per RPC. In practice, I think everyone doing nontrivial work will need to clone it per RPC.
3. Do the request/response structs look ok?
4. Is supporting server shutdown/lameduck important?
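To make question 1 concrete, here is a hedged sketch of how the two return shapes would read at a call site; the method names call and send are hypothetical stand-ins, not this commit's actual API:

```rust
// Shape A (hypothetical): Future<Response>. One await yields the response,
// but the client handle stays borrowed until the response arrives, so a
// single handle cannot have two requests in flight.
let resp = await!(client.call(ctx, req))?;

// Shape B (hypothetical): Future<Future<Response>>. The outer future
// resolves once the request is sent, returning control of the client to
// the caller; the inner future resolves to the response, allowing
// concurrent in-flight requests from a single client handle.
let pending = await!(client.send(ctx, req))?; // request is on the wire
let resp = await!(pending)?;                  // response arrives later
```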

Fixes #178 #155 #124 #104 #83 #38
Commit 905e5be8bb (parent 5e4b97e589), authored by Tim on 2018-10-16 11:26:27 -07:00, committed by GitHub.
73 changed files with 4690 additions and 5143 deletions.


@@ -0,0 +1,15 @@
#![feature(
    futures_api,
    pin,
    arbitrary_self_types,
    await_macro,
    async_await,
    proc_macro_hygiene,
)]

// This is the service definition. It looks a lot like a trait definition.
// It defines one RPC, hello, which takes one arg, name, and returns a String.
tarpc::service! {
    /// Returns a greeting for name.
    rpc hello(name: String) -> String;
}


@@ -0,0 +1,79 @@
#![feature(
    futures_api,
    pin,
    arbitrary_self_types,
    await_macro,
    async_await,
)]

use futures::{
    compat::TokioDefaultSpawner,
    future::{self, Ready},
    prelude::*,
};
use std::io;
use tarpc::{
    client, context,
    server::{self, Handler, Server},
};

// This is the type that implements the generated Service trait. It is the business logic
// and is used to start the server.
#[derive(Clone)]
struct HelloServer;

impl service::Service for HelloServer {
    // Each defined rpc generates two items in the trait: a fn that serves the RPC, and
    // an associated type representing the future output by the fn.
    type HelloFut = Ready<String>;

    fn hello(&self, _: context::Context, name: String) -> Self::HelloFut {
        future::ready(format!("Hello, {}!", name))
    }
}

async fn run() -> io::Result<()> {
    // bincode_transport is provided by the companion crate bincode-transport. It makes it
    // easy to start up a serde-powered bincode serialization strategy over TCP.
    let transport = bincode_transport::listen(&"0.0.0.0:0".parse().unwrap())?;
    let addr = transport.local_addr();

    // The server is configured with the defaults.
    let server = Server::new(server::Config::default())
        // The server can listen on any type that implements the Transport trait.
        .incoming(transport)
        // Close the listener stream after one client connects.
        .take(1)
        // serve is generated by the service! macro. It takes as input any type implementing
        // the generated Service trait.
        .respond_with(service::serve(HelloServer));

    tokio_executor::spawn(server.unit_error().boxed().compat());

    let transport = await!(bincode_transport::connect(&addr))?;

    // new_stub is generated by the service! macro. Like the server, it takes a config and
    // any Transport as input, and returns a Client, also generated by the macro.
    let mut client = await!(service::new_stub(client::Config::default(), transport))?;

    // The client has an RPC method for each RPC defined in service!. It takes the same args
    // as defined, with the addition of a Context, which is always the first arg. The Context
    // specifies a deadline and trace information which can be helpful in debugging requests.
    let hello = await!(client.hello(context::current(), "Stim".to_string()))?;
    println!("{}", hello);

    Ok(())
}

fn main() {
    tarpc::init(TokioDefaultSpawner);

    tokio::run(
        run()
            .map_err(|e| eprintln!("Oh no: {}", e))
            .boxed()
            .compat(),
    );
}