Commit Graph

9 Commits

Artem Vorotnikov
d8c7b9feb2 JSON transport: use Tokio resolver for connect() 2019-10-08 18:03:25 -07:00
Artem Vorotnikov
5ab3866d96 Add Unpin note 2019-10-08 17:15:17 -07:00
Artem Vorotnikov
184ea42033 Upgrade json-transport to Tokio 0.2 2019-10-08 17:15:17 -07:00
Artem Vorotnikov
46bcc0f559 tokio 0.2.0-alpha.4 2019-08-30 09:29:18 -07:00
Artem Vorotnikov
1d0bbcb36c Reformat all code using rustfmt 2019-07-23 03:44:16 +03:00
Tim Kuehn
537446a5c9 Remove use of unstable feature 'arbitrary_self_types'.
Turns out, this actually wasn't needed, with some minor refactoring.
2019-07-19 00:48:59 -07:00
Tim Kuehn
7b6e98da7b Replace transport integration tests with unit tests.
I want 'cargo test' to run faster.
2019-07-15 22:40:58 -07:00
Tim Kuehn
1089415451 Make server methods more composable.
-- Connection Limits

The problem with having ConnectionFilter default-enabled is elaborated on in https://github.com/google/tarpc/issues/217. The gist is that not all servers want a policy based on `SocketAddr`. This PR allows customizing the behavior of ConnectionFilter, at the cost of not having it enabled by default. However, enabling it is as simple as one line:

incoming.max_channels_per_key(10, ip_addr)

The second argument is a key function that takes the user-chosen transport and returns some hashable, equatable, cloneable key. In the above example, it returns an `IpAddr`.
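
For illustration only, here is roughly what such a key function could look like over a plain TCP connection; `std::net::TcpStream` stands in for the user-chosen transport, since the actual transport type is up to the server author:

    use std::net::{IpAddr, TcpStream};

    // Hypothetical key function: map each accepted connection to its peer IP.
    // TcpStream is only a stand-in for whatever transport the server actually uses.
    fn ip_addr(stream: &TcpStream) -> IpAddr {
        stream
            .peer_addr()
            .expect("a connected stream has a peer address")
            .ip()
    }

The returned `IpAddr` is hashable, comparable, and cloneable, so all channels originating from the same IP count against the same limit.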

This also allows the address functions to be removed from the `Transport` trait, which makes it effectively an alias for `Stream + Sink`.
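
Sketched very roughly in plain futures 0.3 terms (the exact item and error bounds in tarpc differ), the alias shape is just a trait with a blanket impl:

    use futures::{Sink, Stream};

    // Illustration only, not tarpc's literal definition: anything that is both a
    // Stream of incoming items and a Sink of outgoing items counts as a Transport.
    trait Transport<SinkItem, Item>: Stream<Item = Item> + Sink<SinkItem> {}

    impl<T, SinkItem, Item> Transport<SinkItem, Item> for T
    where
        T: Stream<Item = Item> + Sink<SinkItem>,
    {
    }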

-- Per-Channel Request Throttling

With respect to `Channel`'s throttling behavior, the same argument applies: there is no one-size-fits-all solution to throttling requests, and the policy applied by tarpc is just one of potentially many solutions. As such, `Channel` is now a trait that offers a few combinators, one of which is throttling:

channel.max_concurrent_requests(10).respond_with(serve(Server))

This functionality is also available on the existing `Handler` trait, which applies it to all incoming channels and can be used in tandem with connection limits:

incoming
    .max_channels_per_key(10, ip_addr)
    .max_concurrent_requests_per_channel(10)
    .respond_with(serve(Server))
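
To make the shape of these combinators concrete, here is an illustration-only reduction of the pattern in the `StreamExt` style; the names echo the ones above, but none of this is tarpc's literal code:

    // A combinator trait: a default method wraps `self` in an adapter that will
    // enforce the limit while delegating everything else to the inner channel.
    trait Channel: Sized {
        fn max_concurrent_requests(self, limit: usize) -> Throttled<Self> {
            Throttled { inner: self, limit }
        }
    }

    // Adapter holding the wrapped channel plus the in-flight request cap it enforces.
    struct Throttled<C> {
        inner: C,
        limit: usize,
    }

Because the limit lives in an adapter rather than in the channel itself, servers that don't want throttling simply never call the combinator, and alternative policies can be layered the same way.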

-- Global Request Throttling

I've entirely removed the overall request limit enforced across all channels. This functionality is easily recovered via [`StreamExt::buffer_unordered`](https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.1/futures/stream/trait.StreamExt.html#method.buffer_unordered), with the difference that the previous behavior allowed you to spawn channels onto different threads, whereas `buffer_unordered` means the `Channel`s are handled on a single thread (the per-request handlers are still spawned). Considering the existing options, I don't believe the benefit this functionality provided justified keeping it.
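
For reference, a minimal sketch of that `buffer_unordered` pattern in plain futures 0.3, with an integer stream standing in for the stream of incoming channels:

    use futures::{stream, StreamExt};

    // Illustration: handle at most `limit` channels concurrently on a single task.
    // The integers and the near-empty async block are placeholders for channels
    // and their handling logic; per-request handlers can still be spawned inside.
    async fn drive_channels(limit: usize) {
        stream::iter(0..100u32)
            .map(|channel| async move {
                let _ = channel; // respond to this channel's requests here
            })
            .buffer_unordered(limit)
            .for_each(|()| async {})
            .await;
    }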
2019-07-15 19:01:46 -07:00
Artem Vorotnikov
950ad5187c Add JSON transport (#219) 2019-05-20 18:45:41 -07:00