I Built a Localhost Tunneling Tool in TypeScript – Here’s What Surprised Me

For years, I relied on indispensable but proprietary tunneling tools like ngrok. They are fantastic for exposing local development servers to the public internet, making webhook development and cross-device testing a breeze. Yet, the software engineer in me was always curious about the "black box." How did it really work? Could I build an open-source version that was just as simple and effective?

This curiosity led me to create Tunnelmole, a localhost tunneling tool written entirely in TypeScript, on both the client and server side. The journey was more challenging and surprising than I anticipated. I dove in expecting to wrestle with network protocols and asynchronous code, which I did. But I also found myself grappling with unexpected challenges, from fighting off phishing scammers to hitting the limitations of modern JavaScript APIs.

This article shares the four most surprising lessons I learned while building a localhost tunnel from scratch. It's a story about technical discovery, unforeseen consequences, and the trade-offs between high-level abstractions and low-level control.

1. The Dark Side: Phishing Scammers Love Tunneling Tools

One of the first "success" metrics for Tunnelmole was, unfortunately, its adoption by malicious actors. Shortly after launching, I noticed a surge in usage that correlated with a spike in abuse reports. Scammers were using Tunnelmole to host phishing sites. They'd set up a fraudulent login page on their local machine, use Tunnelmole to get a public, HTTPS-enabled URL for it, and then use that URL in phishing campaigns.

This presented a significant problem. The service was designed for developers, but its core value proposition (a free, anonymous public URL) was a magnet for abuse. From the scammer's perspective, it was perfect: they could hide their server's true IP address behind Tunnelmole's infrastructure, making them harder to track down. When their fraudulent tunnelmole.net URL was reported, the abuse complaint would come to my hosting provider, not theirs.

Initially, I played a game of whack-a-mole, manually taking down abusive tunnels as reports came in. But that was never going to be a sustainable approach. I needed a systemic solution that would make Tunnelmole an unattractive platform for phishers without alienating legitimate developer users.

The solution came in two parts, focused on one goal: deanonymizing the origin of the tunnel.

Solution 1: The X-Forwarded-For Header

The first change was to ensure the X-Forwarded-For HTTP header was always present in requests passing through the tunnel. This standard header is used by proxies and load balancers to indicate the IP address of the client that initiated the request.

When a request comes into the Tunnelmole service, the service now adds this header, setting its value to the IP address of the Tunnelmole client user.

X-Forwarded-For: <IP address of the Tunnelmole client user>

This means that investigators can easily check this header and find the IP of the server hosting the malicious content.
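
On the service side, this amounts to merging one extra header into each request before it gets forwarded through the tunnel. Here's a minimal sketch of the idea, with illustrative names rather than the actual server code:

// Illustrative sketch, not the actual Tunnelmole server code.
// Merge an X-Forwarded-For header carrying the Tunnelmole client user's IP
// into the headers of a request before forwarding it through the tunnel.
import http from 'http';

function addForwardedForHeader(
    headers: http.IncomingHttpHeaders,
    tunnelClientIp: string
): http.IncomingHttpHeaders {
    return {
        ...headers,
        'x-forwarded-for': tunnelClientIp
    };
}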

Solution 2: IP in the URL

The second, more visible change was to embed the client's IP address directly into the randomly generated tunnel URLs. A typical Tunnelmole URL now looks like this:

https://xj38d-ip-111-111-111-111.tunnelmole.net

This makes it abundantly clear, even to a non-technical person, where the content is ultimately being served from. When an abuse report comes in, in addition to taking down the tunnel, I can reply with a simple, clear explanation: "The content is not hosted on our servers. As you can see from the URL, it originates from the IP address 111.111.111.111. Please direct your complaint to the hosting provider for that IP."
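
Generating a hostname in this format is simple. Here's a rough sketch of how it could be done; the random prefix logic and helper name are assumptions based on the URL format above, not the actual Tunnelmole code:

// Illustrative sketch: build a tunnel hostname that embeds the client's IP.
// The prefix generation and function name are assumptions, not the real code.
function buildTunnelHostname(clientIp: string, domain = 'tunnelmole.net'): string {
    const randomPrefix = Math.random().toString(36).slice(2, 7); // e.g. "xj38d"
    const ipSlug = clientIp.replace(/\./g, '-');                 // "111.111.111.111" -> "111-111-111-111"
    return `${randomPrefix}-ip-${ipSlug}.${domain}`;
}

// buildTunnelHostname('111.111.111.111') => "xj38d-ip-111-111-111-111.tunnelmole.net" (prefix varies)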

These two changes were game-changers. The abuse reports plummeted almost overnight. The scammers realized that Tunnelmole no longer offered them the anonymity they craved. If their origin IP was going to be exposed anyway, they might as well host the phishing site directly on their own server.

This experience also underscored the importance of domain separation. The main website is tunnelmole.com, while the tunnels themselves operate on tunnelmole.net. This was a deliberate choice to protect the reputation and SEO of the main domain. If malicious user-generated content hosted on a .tunnelmole.net subdomain caused that entire domain to be blacklisted, it wouldn't take the tunnelmole.com website down with it.

Having malicious content hosted on the main domain, even if I didn't put it there myself, would have serious SEO consequences. Google massively downranks any domain known to have ever hosted malicious content.

2. Abstraction Disappointment: fetch Does Not Give You the Pure HTTP Request

When it came time to write the client-side code that receives a request from the tunnel and forwards it to the user's local server, my first instinct was to reach for a modern, high-level API like fetch. It's the standard for making HTTP requests in browsers and Node.js, and it's what I'd used for years when interacting with APIs. My thinking was simple: take the incoming data from the WebSocket, construct a fetch request, and send it to localhost.

I quickly ran into a wall. High-level abstractions like fetch and even axios are designed for convenience, not for perfect, byte-for-byte proxying. They are "opinionated" and manipulate the underlying HTTP request in ways that are helpful for most application development but disastrous for a tunneling tool.

Here were the main problems:

  • Header Manipulation: fetch automatically lowercases all header names. This is usually fine, as HTTP header names are case-insensitive. However, a true tunnel should be transparent. It shouldn't alter the data passing through it. If a developer is debugging a case-sensitive header issue with a poorly-behaved client, the tunnel shouldn't hide the problem (see the short demonstration after this list).
  • Body Parsing: fetch wants to be helpful with the request and response bodies. It tries to parse them, stream them, and handle content encoding. But for a tunnel, the body is just an opaque bag of bytes. It could be a JSON payload, a multipart form upload, or something binary. I needed to grab the raw body as a Buffer and forward it verbatim. Trying to force fetch to "un-process" the body was clumsy and unreliable.
  • Lack of Low-Level Control: I couldn't get the raw, untouched HTTP request. The tool was always one step removed from the underlying socket.
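
You can see the header normalization for yourself using the Headers class that backs fetch (available globally in modern browsers and in Node.js 18+):

// Demonstration: the Headers class used by fetch lowercases header names
const headers = new Headers({
    'X-Custom-Header': 'abc',
    'Content-Type': 'application/json'
});

for (const [name, value] of headers) {
    console.log(name, value);
}
// Prints:
//   content-type application/json
//   x-custom-header abc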

After struggling with these libraries, I realized I was using the wrong tool for the job. I didn't need a convenient API for making requests; I needed a low-level tool for reconstructing them.

The solution was to go back to basics and use Node.js's built-in http module.

The http.request() method provides the granular control I needed. It allows you to set headers exactly as you receive them, write the request body directly from a Buffer, and manage the connection at a much lower level.

By working with the http module, requests could be treated as generic Buffer objects. This ensured that any type of data (JSON, HTML, images, binary files) could be proxied faithfully without being accidentally misinterpreted or modified by an overly helpful abstraction layer. The tunnel could finally be the transparent conduit it was meant to be.

The only downside is that the http module doesn't offer a nice async/await, Promise-based workflow, so I had to go back to using callbacks.
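
If you miss the async/await ergonomics, one common workaround is to wrap http.request in a Promise yourself. Here's a minimal sketch of that pattern (not the actual Tunnelmole code), which also collects the raw response body as a Buffer:

import http from 'http';

interface RawResponse {
    statusCode?: number;
    headers: http.IncomingHttpHeaders;
    body: Buffer;
}

// Minimal sketch: wrap http.request in a Promise so it can be awaited
function requestAsPromise(
    options: http.RequestOptions,
    body?: Buffer
): Promise<RawResponse> {
    return new Promise((resolve, reject) => {
        const request = http.request(options, (response) => {
            const chunks: Buffer[] = [];
            response.on('data', (chunk: Buffer) => chunks.push(chunk));
            response.on('end', () => resolve({
                statusCode: response.statusCode,
                headers: response.headers,
                body: Buffer.concat(chunks)
            }));
        });

        request.on('error', reject);

        if (body) {
            request.write(body);
        }
        request.end();
    });
}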

3. You Can and Should Send Typed JSON as WebSocket Messages

A tunnel works by establishing a persistent connection between the client (tmole) and the server (tunnelmole.net). I chose WebSockets for this, as they provide a full-duplex communication channel over a single TCP connection, perfect for this kind of proxying.
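
On the client side, establishing that persistent connection only takes a few lines with the ws package. This is a simplified sketch, and the endpoint URL is a placeholder rather than the real service address:

import WebSocket from 'ws';

// Simplified sketch: the client opens one long-lived WebSocket to the service.
// The URL below is a placeholder for illustration only.
const websocket = new WebSocket('wss://tunnel-service.example.com');

websocket.on('open', () => {
    // Once connected, ask the service to set up a tunnel
    // (the typed message format is described below)
    websocket.send(JSON.stringify({ type: 'initialize' }));
});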

The fundamental challenge with WebSockets is that they just transmit messages, either as strings or binary data. You are responsible for defining the structure and meaning of those messages. The naive approach would be to just send raw data and use a series of if/then/else statements or complex prefixes to figure out what each message represents. Is this the start of a connection? Is it an HTTP request from the server? An HTTP response from the client? This path leads to brittle, unmaintainable spaghetti code.

Instead, I decided to build a simple, explicit messaging "framework" on top of the WebSocket connection. Every message is a JSON object with a type property. This property dictates the shape of the message's payload and determines which handler function should process it.
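
In TypeScript, these messages can be modeled as a discriminated union on the type property. Below is a simplified sketch rather than the exact Tunnelmole type definitions; the field names mirror the handler code further down, and the literal type strings (other than forwarded-response, which appears in that code) are assumptions:

// Simplified sketch of the message types (not the exact Tunnelmole definitions)
interface InitializeMessage {
    type: 'initialize';
    // ...details of the tunnel being requested
}

interface ForwardedRequestMessage {
    type: 'forwarded-request';
    requestId: string;
    url: string;
    method: string;
    headers: Record<string, string | string[] | undefined>;
    body?: string; // Base64-encoded request body, if there is one
}

interface ForwardedResponseMessage {
    type: 'forwarded-response';
    requestId: string;
    statusCode?: number;
    headers: Record<string, string | string[] | undefined>;
    body: string; // Base64-encoded response body
}

type Message = InitializeMessage | ForwardedRequestMessage | ForwardedResponseMessage;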

On the server, when a message arrives from a client, a simple router looks at the type and dispatches it to the correct handler.

Here is a simplified look at the server-side dispatcher:

// Simplified message dispatcher on the server
websocket.on('message', (text: string) => {
    try {
        const message = JSON.parse(text);

        // A map of message types to handler functions
        const handler = messageHandlers[message.type];

        if (handler) {
            handler(message, websocket);
        } else {
            console.error(`No handler for message type: ${message.type}`);
        }
    } catch (error) {
        console.error('Failed to parse or handle message', error);
    }
});

This approach turns a chaotic stream of data into a structured, event-driven system. We have a dedicated handler for each type of message. For example, when a client first connects, it sends an initialize message, which is processed by the initializeHandler.
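
The messageHandlers map referenced in the dispatcher is just an object keyed by message type. Here's another simplified sketch; initializeHandler is the handler mentioned above, while forwardedResponseHandler is only an illustrative name:

import WebSocket from 'ws';

// Simplified sketch of the handler map used by the dispatcher above
const messageHandlers: { [type: string]: (message: any, websocket: WebSocket) => void } = {
    initialize: initializeHandler,
    'forwarded-response': forwardedResponseHandler
};

function initializeHandler(message: any, websocket: WebSocket): void {
    // Allocate a public hostname for the new tunnel and register the connection
}

function forwardedResponseHandler(message: any, websocket: WebSocket): void {
    // Look up the pending public request by message.requestId, then write the
    // Base64-decoded body and the headers back to the original requester
}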

The real power of this pattern becomes clear when handling the core tunneling logic. When a public request hits a user's URL (e.g., https://...tunnelmole.net/api/test), the server packages it into a ForwardedRequestMessage and sends it to the client over the WebSocket.

The client receives this message and its forwardedRequest handler fires. This handler is the bridge between the WebSocket world and the localhost world.

Here's a closer look at the client-side handler, which uses the http module we discussed earlier:

// From: src/message-handlers/forwarded-request.ts

import http from 'http';
import { ForwardedRequestMessage } from '../messages';
import { Options } from '../options';
import { HostipWebSocket } from '../websocket-wrapper';

export default async function forwardedRequest(
    forwardedRequestMessage: ForwardedRequestMessage, 
    websocket: HostipWebSocket, 
    options: Options
) {
    const { requestId, url, headers, method, body } = forwardedRequestMessage;
    
    // 1. Configure the local HTTP request options
    const requestOptions: http.RequestOptions = {
        hostname: 'localhost',
        port: options.port, // The user's local port (e.g., 3000)
        path: url,
        method,
        headers
    };
    
    // 2. Create and dispatch the request to the local server
    const request = http.request(requestOptions, (response) => {
        let responseBody = Buffer.alloc(0);
        
        // 3. Collect response data chunks as they stream in
        response.on('data', (chunk: Buffer) => {
            responseBody = Buffer.concat([responseBody, chunk]);
        });
        
        // 4. When the local response ends, send it back to the server
        response.on('end', () => {
            websocket.sendMessage({
                type: 'forwarded-response',
                requestId,
                statusCode: response.statusCode,
                headers: response.headers,
                // The body is Base64 encoded to safely transmit binary data in JSON
                body: responseBody.toString('base64')
            });
        });
    });
    
    request.on('error', (error) => {
        console.error(`Error forwarding request to localhost:${options.port}:`, error);
        // Inform the server that the request failed
        websocket.sendMessage({
            type: 'forwarded-response',
            requestId,
            statusCode: 502, // Bad Gateway
            headers: {'content-type': 'text/plain'},
            body: Buffer.from(`Tunnelmole: Error connecting to localhost:${options.port}`).toString('base64')
        });
    });

    // 5. Write the request body if it exists
    if (body) {
        // The body from the server is Base64 encoded
        request.write(Buffer.from(body, 'base64'));
    }
    
    request.end();
}

This typed, message-driven architecture is clean, self-documenting, and extensible. Adding a new capability to the tunnel is as simple as defining a new message type and writing a handler for it. It completely avoids the ambiguity of an unstructured data stream.

4. Node.js Will Not Hold Your Hand: Memory Leaks Are a Thing

Coming from a PHP background, I was accustomed to a stateless, "share-nothing" architecture. In PHP, every web request starts with a clean slate. Memory and resources are allocated, the script runs, a response is sent, and then everything is torn down. It's exceptionally difficult to create a memory leak that persists between requests unless you go out of your way to remove built-in safety limits and misconfigure things.

Node.js is a different beast entirely. It's a stateful, long-running process. This is one of its greatest strengths: it's fast and efficient because it doesn't have the overhead of bootstrapping and tearing down on every request. But this power comes with responsibility. Any object you create can potentially live for the entire lifetime of the process. If you forget to clean up, you will have a memory leak.

I learned this the hard way. The Tunnelmole service needs to keep track of every active client connection. I created a simple Proxy class to manage this. Here is an oversimplified version of the initial implementation:

// Oversimplified initial version of the connection manager
export default class Proxy {
    private static instance: Proxy;

    // An array to hold all active WebSocket connections
    connections: Array<Connection> = [];
    
    public addConnection(hostname: string, websocket: HostipWebSocket, /* ...other params */): void {
        const connection: Connection = {
            hostname,
            websocket,
            // ...other properties
        };

        this.connections.push(connection);
    } 

    // ... other methods like findConnectionByHostname() ...
}

This code works perfectly… for a while. It adds new connections to the connections array as clients connect. But what happens when a client disconnects? Nothing. The Connection object, including its now-defunct WebSocket object, remains in the connections array forever.

The array just grew and grew. With each new connection, the server's memory usage crept up. Eventually, the process would exhaust all available memory and crash. I had created a classic memory leak.

The fix, in hindsight, was obvious. I needed to hook into the WebSocket's close event and explicitly clean up the stale connection object.

First, I added a deleteConnection method to the Proxy class:

// In the Proxy class
public deleteConnection(clientId: string): void {
    this.connections = this.connections.filter(conn => conn.clientId !== clientId);
}

Then, I attached a listener to the close event for every new WebSocket connection:

// When a new WebSocket connection is established...
websocket.on('close', (code: number, reason: string) => {
    // Ensure the connection is fully terminated
    websocket.terminate();
    
    // Use the proxy to remove the connection from the active list
    proxy.deleteConnection(websocket.tunnelmoleClientId);
    
    console.log(`Connection ${websocket.tunnelmoleClientId} closed.`);
});

With this change in place, the memory leak vanished. The server's memory usage became stable, rising and falling naturally with the number of active users. This experience was a stark reminder that in a long-running environment like Node.js, you are the custodian of memory. You must be diligent about resource cleanup.

Conclusion

Building Tunnelmole from the ground up was an incredible learning experience that went far beyond writing network code. It forced me to confront the real-world operational challenges of running a public service, appreciate the trade-offs between different levels of API abstraction, and internalize the discipline required for state management in a long-running process.

The key takeaways were:

  1. Security Through Transparency: When building public tools, preventing abuse is as important as the core functionality. Sometimes, the best security measure is to remove the veil of anonymity that attracts bad actors.
  2. Use the Right Level of Abstraction: High-level APIs like fetch are powerful but have their limits. For tasks that require absolute control, like proxying, don't be afraid to drop down to lower-level APIs like Node's http module.
  3. Structure Your Data Streams: Don't treat WebSockets as an unstructured pipe. A simple, typed messaging protocol brings order to chaos and makes your application robust and extensible.
  4. Manage Your State Diligently: In stateful environments like Node.js, you are responsible for memory management. Always have a plan for cleaning up objects and resources that are no longer needed.

I set out to build an open-source alternative to proprietary software, and in the process I learned quite a lot about Node.js and the core of the HTTP protocol. If you're a developer who loves to peek inside the black box, I can't recommend a project like this enough.

If you're interested in checking out the result of this journey, you can try Tunnelmole now or dive into the full, non-oversimplified code on GitHub. It’s open source, and contributions are always welcome.