Have you ever wondered how to make a reverse proxy using Rust? Perhaps you started along this journey, then quickly came to realize that there aren’t any easy copy-pasta articles on the various parts of building a proxy. There are just too few stones to cobble together into something useful. Maybe after that you started digging into some of the HTTPS request and response libraries that exist in the Rust ecosystem, only to realize that you’re getting knee-deep into some of the hairier parts of software engineering down in Layer 6.

I have been here. Even today my bones shiver with excitement at the thought of doing some lower-level development in Rust. Places we’ve lived before in C and C++ can seem slightly farther out of reach in Rust, though that may just be because you have to leave the safety of Rust’s hearth and venture into unsafe territory.

Fear not. We can accomplish our objective without writing anything unsafe, or more accurately without even reaching into the OpenSSL FFI. I was looking forward to diving into this deep end, but it looks like we can succeed without getting our hands too dirty.

First, though, we’ll have to do a bit of research.

If you are instead the type that just wants to see the answer, jump to it.


Blueprints

Let’s set the stage on what we’re doing here. I want to build a reverse proxy. What does this look like?

Proxy Diagram

In this diagram, on the left-most side, we have our three clients all making requests to various subdomains of spec-trust.com. Our load balancer inside our cluster (between the dashed lines) terminates the first SSL connection and then routes plain HTTP traffic internally to an instance of our proxy. The proxy then begins another SSL connection with the product our clients are trying to reach, shown on the right-most side of the diagram. Those are just load balancers with load-balancey hostnames fronting some product; it doesn’t really matter what the product is.

Can you tell where the problem lies?

You may not all be SSL enthusiasts studying this complex protocol in your spare time, so let’s take a slight detour. I won’t pretend I’m an expert by any means, but we’ll at least lay a foundation on which we can demonstrate the problem that occurs in this scenario.

HTTPS, SSL, and TLS, oh my!

When our proxy makes an HTTPS request out to the destination, it begins by opening a socket. I won’t go too far into details that don’t directly deal with our problem, though. Since HTTPS is encrypted traffic, we then need to negotiate a TLS connection; CloudFlare does a pretty good job explaining all of the concepts I’m going to touch on here. Part of a secure connection is determining that we’re connecting to the endpoint that we expect. In order to verify that the host we’re connecting to is the correct one, we perform A Gentleperson’s Handshake (okay, okay, it’s the TLS handshake).

The TLS handshake consists of some negotiation over ciphers and other bits and bobs, and in the Client Hello we provide the server with the hostname that we expect to verify. This part of the handshake is called the Server Name Indication (SNI). The server can use this hostname to pick from among any certificates it possesses and present one that matches, if possible. The client then attempts to verify the certificate by examining its chain of trust: checking who signed the certificate, checking that it matches the expected hostname, and verifying each certificate in the chain all the way up to the root Certificate Authority.

This is a pretty high-level overview of what happens during a secure connection, but hopefully even from this you can see where the problem we have starts to rear its ugly head.

Pictures are worth a thousand words, though:

Server Name Indication Diagram

The problem with the existing Rust request libraries that we have available to us today is that they don’t allow us to easily customize the SNI of our TLS handshake. This is a hard requirement for our project. We have to maintain a completely secure, yet also transparent, connection to any of our customers. Solving this problem is the focus of this article.
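To put that requirement in code form, here’s a minimal sketch of the behavior we’re after, written directly against tokio and tokio_native_tls (my assumption for the TLS stack; the rest of this article is about getting hyper to do this for us):

use tokio::net::TcpStream;
use tokio_native_tls::TlsConnector;

async fn connect_with_custom_sni() -> Result<(), Box<dyn std::error::Error>> {
    let connector = TlsConnector::from(native_tls::TlsConnector::new()?);

    // The TCP socket dials the load balancer...
    let tcp = TcpStream::connect("one.lb.aws.amazon.com:443").await?;

    // ...but the TLS handshake presents, and verifies the certificate against,
    // the customer-facing hostname.
    let _tls = connector.connect("one.spec-trust.com", tcp).await?;

    Ok(())
}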

Someone’s already done this…right?

To be completely honest, I’ve taken for granted how well-developed other programming ecosystems are. I’ve worked in C++, C#, Java (and other JVM languages), Python, and more, and all of these languages have strong coverage of the tools required to build a wide variety of applications. Some of them are even privately funded and developed, though parts have been released into the open source world. I’m looking at you, .NET Core. Rust, though, has been seeing more interest lately, coming from the likes of Amazon as well as its acceptance as a Linux kernel language, at least for writing drivers.

Having become a Rust professional, I’ve gotten more exposure to complex situations in the products I work on. Several times now I’ve gotten to see how young the Rust ecosystem is and, while that’s frustrating when working on a tight timeline, it’s pretty exciting to be part of the growth! We may even get to give back to the community with some of the changes we’ve had to make while working with these libraries at SpecTrust!

Enough blathering about flashy, young programming languages. Let’s get started on building this proxy!

So, I began where we likely all start when pursuing new things: searching around the internet for anyone already doing stuff with proxies in Rust.

The one article I found proposed a solution that is actually very dangerous.

Sorry to call it out

Give it a read if you’d like; we’ll cover very similar ground here, though we’ll stay away from the one line that does all of the work in that article’s solution.

let mut conn = HttpsConnector::new()?;
conn.set_callback(move |c, _| {
    // Prevent native TLS lib from inferring and verifying a default SNI.
    c.set_use_server_name_indication(false);
    c.set_verify_hostname(false); // !!! OFFENDING LINE !!!

    // And set a custom SNI instead.
    c.set_hostname("somewhere.com")
});

The OFFENDING LINE above completely disables any and all hostname verification on certificates. This means that whatever we end up connected to when trying to reach one.spec-trust.com could present a perfectly valid certificate for somewhere-else.com and the client would still accept the connection as secure, though it most definitely is not. This allows any impersonator to present what looks like a completely secure connection to our clients. We’re not trying to connect to somewhere-else.com. We’re trying to connect to one.spec-trust.com and we want the certificate to prove it!

This solution also doesn’t cover our use case, so it wouldn’t work for our needs either way, because this code doesn’t allow us to verify a hostname that is different from the connection domain. When we call set_hostname above, the hyper library sets the hostname of the connection to this value as well. Recall from our diagram that we’re connecting to a load balancer hostname like one.lb.aws.amazon.com but we want to change the SNI to one.spec-trust.com.

The move down from higher-level libraries like reqwest into hyper will turn out to be a solid clue, though, so thank you, Martin!

You didn’t think it’d be that easy

The only info I could find on this topic was that article, which means we’re now at the point where we have to start diving into some of the existing libraries to see if we can get part of the functionality we’re looking for. It might be worth noting that setting the SNI isn’t the only option… we could also do the verification ourselves: after OpenSSL’s hostname verification fails, we could examine the certificate ourselves to see if it matches our intended hostname! In general, though, I don’t like to hand-roll verification work when we could leave it to a team of folks whose job is to guarantee the authentication and verification process is tried and true. We won’t look into this approach at all, but it was a consideration while searching for possible solutions.

The Archivist - Julie Dillon
Art by Julie Dillon

I pour up another cup of dark, smoky coffee and huddle beneath the gentle glow of my monitor. It’s time to start our research.

We’ll dig through existing libraries to see if we can make use of something before we implement anything ourselves. I won’t go into deep detail about these request libraries; you can look into them on your own time. They each have their own tradeoffs that can be worth exploring depending on your needs.

Up until this point I’ve been using the reqwest library to make HTTPS requests. This library gives us a lot of abstraction over the HTTP protocol and handles things like implicit Host headers, following redirects, keeping a cookie jar, and dealing with content types and content encodings. It’s really wonderful…but will it help us accomplish our objective?
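For context, the kind of one-liner we’ve been leaning on looks something like this (a sketch; the URL and path are just illustrative, and some of the conveniences depend on which reqwest features are enabled):

// reqwest handles redirects, Host headers, and content handling for us here.
let body = reqwest::get("https://one.spec-trust.com/health")
    .await?
    .text()
    .await?;
println!("{}", body);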

I set out by diving further into reqwest. reqwest is an HTTP library built on top of hyper (aha, hyper shows up here as well!) that is widely used and recommended. It would be great to stay inside of a solid abstraction, so let’s see if we can achieve that.

The TLS blurb here mentions

Various parts of TLS can also be configured or even disabled on the ClientBuilder.

Onward to the ClientBuilder. I don’t see much at the top of the Methods list that indicates anything useful we can control regarding TLS. There are some danger_accept_invalid_* functions, but judging by the name alone we likely don’t want to pursue those. There’s some stuff here about TLS library choices, use_native_tls and use_rustls_tls, but they don’t take any arguments; they must just be a way to select a backend. The last function in this group could be interesting, though: use_preconfigured_tls. Its documentation says:

Use a preconfigured TLS backend.

If the passed Any argument is not a TLS backend that reqwest understands, the ClientBuilder will error when calling build.

Advanced

This is an advanced option, and can be somewhat brittle. Usage requires keeping the preconfigured TLS argument version in sync with reqwest, since version mismatches will result in an “unknown” TLS backend.

If possible, it’s preferable to use the methods on ClientBuilder to configure reqwest’s TLS.

Huh, okay. Well, does that mean I can make anything that conforms to some interface act as a connector for reqwest? Click that [src] link and we can see that this only accepts direct instantiations of native_tls_crate::TlsConnector or rustls::ClientConfig.

Can we customize the connector?

Alright, well…can we make use of either of those? Taking a peek at reqwest’s Cargo.toml, we can see that native_tls_crate is just native_tls. Let’s look at the TlsConnector documentation example. It has some useful information that will come into play later on: there are two hostname uses in it, one to google.com:443 and the other to plain google.com! The example is reproduced (roughly) below. Great, can we control this now?
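Here’s a from-memory sketch of that documentation example; treat the details as approximate:

use native_tls::TlsConnector;
use std::io::{Read, Write};
use std::net::TcpStream;

let connector = TlsConnector::new()?;

// Hostname use #1: where the TCP socket actually connects.
let stream = TcpStream::connect("google.com:443")?;

// Hostname use #2: the name used for SNI and certificate verification.
let mut stream = connector.connect("google.com", stream)?;

stream.write_all(b"GET / HTTP/1.0\r\n\r\n")?;
let mut res = vec![];
stream.read_to_end(&mut res)?;
println!("{}", String::from_utf8_lossy(&res));

The key observation is that nothing forces those two hostname strings to match.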

Recall from the use_preconfigured_tls source that it must be a direct instantiation of this TlsConnector; we can’t pass our own implementation of some trait. I tried passing in my own connector…reqwest gets really upset about it. We move on to the TlsConnectorBuilder to see if there are any useful gems, but alas, our journey is yet again unfruitful. The only related method is use_sni, and it only accepts a boolean to toggle the feature on or off.
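To make that dead end concrete, here’s roughly the most we can do through this escape hatch (a sketch; it assumes reqwest is built with the native-tls feature):

// The most control reqwest gives us: hand it a fully-built native_tls connector.
let tls = native_tls::TlsConnector::builder()
    .use_sni(true) // a bare on/off switch -- no way to supply our own name
    .build()?;

let client = reqwest::Client::builder()
    .use_preconfigured_tls(tls)
    .build()?;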

Okay, so we can’t customize the connector to do some magic and let us alter the SNI hostname. I looked around in the reqwest source code for a call to connect(), but…the closest thing I could find was an http.call() call site. That kind of looks like what we’re trying to do, but it doesn’t give us access to the TlsConnector configuration. It looks like the road through reqwest dead-ends here.

Surf

My colleague recommended I take a look at surf. It’s a cool abstraction over HTTP clients! It allows you to switch between backends while keeping a uniform API to interact with them. This seems pretty useful, but as I dug into what the library does, I realized that the customization we need would happen in one of the HttpClient implementations we’d provide to it as an argument. I won’t go into much more detail, but I wanted to mention surf because it’s a nice abstraction over various backends; if you ever need to switch libraries because one doesn’t do exactly what you need, it can help.

The other great benefit that surf offers is the ability to create Middleware! I hadn’t really found this in reqwest, or at least not easily, so the ability to operate on requests and responses transparently could be useful in many projects.

Other libraries

Other libraries (isahc, curl-rust) didn’t seem as full-featured as reqwest or seemed more difficult to use, so I didn’t dig too far into them. Although using the curl FFI bindings would have been interesting, we’re trying to stay in Rust land as much as we can.

Getting into some Hyper technology

I’ve brought it up time and time again, but we’ve finally made it. What is this hyper thing I keep briefly mentioning?

hyper is a relatively low-level library, meant to be a building block for libraries and applications.

Great, so we know that we’ve officially dropped down a layer of abstraction. We’re no longer in that happy state where we’re blissfully unaware of the trials and tribulations of HTTP communication. We will have to do some work, but in exchange we’ll be offered a degree of control. Engineering is always a series of tradeoffs and here it’s no different.

Sacrifice

Moving into lower abstraction layers always involves sacrifice. What do we give up by moving from reqwest to hyper?

  1. Content-Type charset - text encoding is generally UTF-8, unless it’s not
  2. Content-Encoding - servers can compress bodies to save time on the wire
  3. Host headers - set automatically when executing requests
  4. Redirect handling - follows redirects for you, if you want it (see the sketch after this list for what doing it by hand looks like)
  5. more…
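To give a flavor of what “doing it ourselves” means, here’s the hand-rolled redirect handling the fourth item alludes to (a sketch; `client` and `req` are assumed to exist, and real code would cap the number of hops it follows):

use hyper::{header::LOCATION, Uri};

// Sketch: follow a single redirect by hand with a bare hyper client.
let res = client.request(req).await?;
if res.status().is_redirection() {
    if let Some(location) = res.headers().get(LOCATION) {
        let next: Uri = location.to_str()?.parse()?;
        // ...build and issue a fresh request against `next` here.
    }
}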

This isn’t a deal-breaking amount of stuff, but it’s definitely something to be aware of. Can we cover these gaps with libraries, or will we have to do more work ourselves?

It turns out that we started this project at the right time. There have been complaints about how difficult it is to deal with content encodings when using hyper directly, and there are now two libraries that can help us with problems (1) and (2) above: async_compression deals with the compression side of HTTP bodies, and encoding_rs deals with the charset text encodings! We won’t go into detail on using these libraries in this article, but it’s important to note that they exist in order to scope how much work this project will ultimately entail.
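A rough sketch of how those two crates could slot in once we have a response body in hand (the feature flags, the gzip assumption, and the windows-1252 charset are all illustrative):

use async_compression::tokio::bufread::GzipDecoder;
use tokio::io::AsyncReadExt;

// Sketch: collect the raw body, decompress it, then decode its charset.
let compressed = hyper::body::to_bytes(res.into_body()).await?;

let mut decoder = GzipDecoder::new(&compressed[..]);
let mut decompressed = Vec::new();
decoder.read_to_end(&mut decompressed).await?;

// encoding_rs handles the charset side (pretend the server declared windows-1252).
let (text, _encoding, _had_errors) = encoding_rs::WINDOWS_1252.decode(&decompressed);
println!("{}", text);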

Rust may be young, but its maturity is beginning to show! With libraries like these we don’t have to do much work to create arbitrary decoders and encoders in our project. Instead we can focus on the real question: can we use hyper to do custom SNI magic!?

reqwest…again

I haven’t gone too deep into these libraries’ internals because I’m still rather new to Rust. At this point I wanted to find some code that I could use as a reference for writing my own, to make sure I’m doing things correctly. I’ve done stuff with OpenSSL in the past, so I’m not starting from zero, but my Rust knowledge is comparatively weak. In the open source world, we stand on the shoulders of giants!

Back into the reqwest library to see if we can figure out how to use hyper to connect to a server. Documentation on doing complex things with hyper wasn’t abundant, so I searched around and found this section of code that uses a hyper_tls::HttpsConnector, our first piece of hyper in use! It may not seem so useful yet, but this connect() pattern is similar to the native_tls example we looked at earlier in this article.

So what is this HttpsConnector? Well, it turns out that when you’re building a hyper::Client from its Builder, you have to provide a connector that implements the Connect trait. This documentation claims:

This is really just an alias for the tower::Service trait, with additional bounds set for convenience inside hyper. You don’t actually implement this trait, but tower::Service<Uri> instead.

Okay, so we just have to be a Service<Uri> to be considered as a Connector for a hyper::Client!

BFF Secret Handshake

Let’s build this connector and get this special secret handshake going! What we’re going to do is essentially build a passthrough for much of the functionality we want to retain from hyper, but with a custom connection implementation. Inside that custom implementation is where we’ll perform our sleight of hand: using one hostname for the actual connection and a different one for the SNI and certificate verification.

Here’s a simple struct definition that houses the few things we need: a hash map that lets us translate from the requesting hostname to the destination hostname, e.g. one.spec-trust.com => one.lb.aws.amazon.com; a flag to force HTTPS-only traffic, which we’ll enable for release builds; a generic type T, which in practice will be hyper’s HttpConnector; and a TlsConnector. Pretty simple to start off.

#[derive(Clone)]
pub struct SpecHttpsConnector<T> {
    // requesting hostname => destination hostname,
    // e.g. one.spec-trust.com => one.lb.aws.amazon.com
    server_name_translations: HashMap<String, String>,

    // refuse to connect to plain-HTTP destinations when set
    force_https: bool,
    // the underlying transport connector (hyper's HttpConnector in practice)
    http: T,
    tls: TlsConnector,
}

Now we’ll implement Service<Uri>. This may be a bit overwhelming if you haven’t dealt much with futures. If you haven’t, I’d encourage you to explore them a bit! The topic isn’t too bad once you write a couple of your own to really solidify how the inner workings of the async machinery operate. I’ll plug one of my favorite Rust bloggers here: Understanding Futures by going way too deep.

impl<T> Service<Uri> for SpecHttpsConnector<T>
where
    T: Service<Uri>,
    T::Response: AsyncRead + AsyncWrite + Send + Unpin,
    T::Future: Send + 'static,
    T::Error: Into<BoxError>,
{
    type Response = MaybeHttpsStream<T::Response>;
    type Error = BoxError;
    type Future = HttpsConnecting<T::Response>;

    // Translate poll_ready errors into our domain
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        match self.http.poll_ready(cx) {
            Poll::Ready(Ok(())) => Poll::Ready(Ok(())),
            Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),
            Poll::Pending => Poll::Pending,
        }
    }

    fn call(&mut self, dst: Uri) -> Self::Future {
        let is_https = dst.scheme_str() == Some("https");
        // Early abort if HTTPS is forced but can't be used
        if !is_https && self.force_https {
            return err(ForceHttpsButUriNotHttps.into());
        }

        let host = dst
            .host()
            .unwrap_or("")
            .trim_matches(|c| c == '[' || c == ']')
            .to_owned();

        // if we have a server name translation for this host, use it!
        let customer_origin = self
            .server_name_translations
            .get(&host)
            .map(String::from)
            .unwrap_or_else(|| host.clone());

        // naive string replace to shift the host
        let new_dst: Uri = dst
            .to_string()
            .replace(host.as_str(), customer_origin.as_str())
            .try_into()
            // don't unwrap in production though
            .unwrap();

        // connecting to the translated hostname one.lb.aws.amazon.com!
        let connecting = self.http.call(new_dst);

        let tls = self.tls.clone();
        let fut = async move {
            let tcp = connecting.await.map_err(Into::into)?;
            let maybe = if is_https {

                // validating the hostname with the requesting host, one.spec-trust.com!
                let tls = tls.connect(&host, tcp).await?;

                MaybeHttpsStream::Https(tls)
            } else {
                MaybeHttpsStream::Http(tcp)
            };
            Ok(maybe)
        };
        HttpsConnecting(Box::pin(fut))
    }
}

There’s not a ton to unpack here. We check whether we have a translation for the incoming hostname, and if we do we use it as the connection destination; otherwise we just connect to whatever came in. We create the TCP connection by calling self.http.call(new_dst) and then use the original incoming hostname when we call tls.connect(&host, tcp). Effectively we’ll have a socket connected to the translated host, but a TLS handshake (SNI and certificate verification) that uses the incoming hostname. Voila!

For having come so far, it really doesn’t feel like much code at all. There’s some more going on behind the scenes with the MaybeHttpsStream enum and handling HTTPS vs. HTTP traffic, but it’s not especially relevant to this topic. It comes into play when we link in our encoding libraries, which is a story for another day.
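If you’re curious, the enum mirrors the one in hyper-tls and is roughly this shape (a sketch, not the exact definition):

use tokio_native_tls::TlsStream;

// Roughly the stream type our connector hands back to hyper:
// either a plain TCP stream or one wrapped in TLS.
pub enum MaybeHttpsStream<T> {
    Http(T),
    Https(TlsStream<T>),
}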

So does this even work!?

let mut server_name_translations = HashMap::new();
server_name_translations.insert(
    "one.spec-trust.com".to_string(),
    "one.lb.aws.amazon.com".to_string(),
);

let mut ssl = SpecHttpsConnector::new(server_name_translations);
ssl.https_only(true);
let client = hyper::Client::builder().build::<_, hyper::Body>(ssl);

// request the customer-facing hostname; the connector handles the translation
let uri: hyper::Uri = "https://one.spec-trust.com/".parse()?;
let request = hyper::Request::builder()
    .method(hyper::Method::GET)
    .uri(uri);
let res = client
    .request(request.body(Default::default()).unwrap())
    .await?;
dbg!(res);

If you execute this with one of your own domain and load balancer URL combinations, you’ll see that we make a request to your load balancer but provide the original hostname as the SNI in the TLS handshake. This really is nothing short of amazing!

we're doing the conversion!!
[src/connector.rs:155] &host = "one.spec-trust.com"
[src/connector.rs:156] &customer_origin = "demo-app-dev-spec-one-239847298.us-west-1.elb.amazonaws.com"
// clip

Above is some sample output from my application where you can see that we swap in the load-balancey hostname for the connection while still verifying SSL against the original hostname!

“What’s next?” or “Why libraries are nice.”

All of this may ring a distant bell if you’ve done significant reverse proxying work with nginx. nginx has a proxy directive, proxy_ssl_name, which essentially performs this magic behind the scenes. That’s a good place to go if you don’t need any other custom behavior and can let an existing product do the work for you!

In this article I touch on async_compression and encoding_rs but never go into where they’re used. The topic gets more and more complex as we move toward a full-fledged implementation that supports all of our clients’ needs. I do want to mention, though, that the existence of these libraries was important in choosing this solution, because they save us much-needed time by not reinventing the wheel wherever content translation comes up. Libraries are nice, and it benefits all of us to give back to them.

The next step is to build content encoding and decoding into our server so it properly deals with multiple text encodings and compression schemes when reading response data. That presents a whole different slew of challenges, because we’ll be linking up with an asynchronous runtime and dealing with Streams! It dives into the Tokio realm of things and conforming to the various interfaces that grant us behavior within that ecosystem!

Thank you for reading this and I hope it helps you on your way!