Mt. Gox is a core memory for many people inside the crypto space. Before the phrase “crypto exchange” was even coined, there was Mt. Gox, originally built to trade Magic: The Gathering cards.
We often take for granted the underlying infrastructure that now surrounds cryptocurrency. Today, if you want to buy Bitcoin, there are polished apps, custodial services, liquidity providers, and compliance departments. But in the beginning, if you wanted BTC, you were wiring money to a stranger on the internet and hoping they honoured the trade. Mt. Gox changed that. It filled the void. In doing so, it accidentally became the backbone of the Bitcoin economy.
Then it collapsed.
At the time, it felt existential. Bitcoin was still in its infancy - if it was going to die, this was the moment. The exchange’s owner pointed at the protocol, claiming a flaw in Bitcoin itself.
The truth was far less dramatic.
It wasn’t the protocol. It was incompetence.
By early 2014, Mt. Gox had quietly lost control of its own reserves. For years, Bitcoin had been siphoned out of its hot wallets - likely through a combination of theft, poor key management, and catastrophic internal accounting practices. The exchange did not even appear to realise the scale of the losses as they were happening. When withdrawals began failing, management blamed “transaction malleability,” a known quirk in Bitcoin’s early design that could alter a transaction ID without changing its contents. However, investigations later revealed that this was not the root cause of the collapse. Approximately 850,000 BTC were missing. The exchange had been insolvent for a long time. What failed was not Bitcoin’s consensus mechanism, but Mt. Gox’s operational security and governance.
This post is not about Mt. Gox, but it is about the biggest cybersecurity flaw facing the space today - the person using the computer.
Tor: The Ideal
Long the boogeyman of authority in all its forms, Tor is a TCP/IP overlay network which facilitates everything from private communication to intricate drug markets. What is lost on many people new to this space is that Tor was born out of a US Naval Research Laboratory project and later formalised into The Tor Project. It was designed to allow anonymous communication online, and is still funded in part by the same state-level actors many of its users believe they are hiding from.
Tor is not magic. It is layered routing with carefully constrained trust. When you open the Tor Browser, your client builds a circuit through three relays:
- Entry Guard – This is the first hop. Your ISP can see that you are connecting to a Tor node, but it cannot see your final destination. Guards are long-lived on purpose. By sticking to a small set of trusted entry nodes over time, Tor reduces the risk of eventually selecting a malicious first hop.
- Middle Relay – This node simply passes encrypted traffic along. It knows only the previous hop and the next hop, never both ends of the communication. Its role is separation. It prevents the entry guard and the exit node from being directly linked.
- Exit Node – This is where traffic leaves the Tor network and enters the public internet. The destination website sees the exit node’s IP address, not yours. However, the exit node can see unencrypted traffic. This is why HTTPS still matters on Tor.
The routing obfuscates both source and destination IP addresses, but it also implements layered encryption and ephemeral key exchange, providing properties similar to Perfect Forward Secrecy (a feature of encryption protocols that ensures past session keys are not compromised even if a server’s long-term private key is stolen). Tor implements this by wrapping traffic in three distinct layers before it leaves your machine:
- The outer layer can only be decrypted by the entry guard
- The next layer can only be decrypted by the middle relay
- The final layer can only be decrypted by the exit node
Each node peels off exactly one layer and learns only where to send the packet next. No single relay knows both who you are and where you are going.
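The peel-one-layer mechanic can be made concrete with a toy sketch in Python. This uses an invented XOR keystream cipher purely for illustration - Tor’s real circuits negotiate per-hop keys with the ntor handshake and use AES, and the relay names and keys below are made up:

```python
# Toy onion layering. NOT Tor's real cryptography - the cipher, keys,
# and relay names here are invented for illustration only.
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + a counter.
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# One symmetric key negotiated with each relay in the circuit.
keys = {"guard": b"k1", "middle": b"k2", "exit": b"k3"}

# Wrap innermost-first: the exit's layer goes on first, the guard's last.
cell = b"GET /index.html"
for hop in ["exit", "middle", "guard"]:
    cell = xor_cipher(cell, keys[hop])

# Each relay peels exactly one layer, in circuit order.
for hop in ["guard", "middle", "exit"]:
    cell = xor_cipher(cell, keys[hop])

assert cell == b"GET /index.html"
```

The point of the sketch is the structure, not the cipher: each hop holds exactly one key, so each hop can remove exactly one layer and nothing more.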
“A decentralised swarm of independent nodes all passing data between each other? Wow, that sounds a lot like a BitTorrent swarm.”
You wouldn’t be entirely wrong. BitTorrent swarms work by breaking a file into many small pieces and distributing those pieces across multiple peers. Each participant downloads and uploads chunks simultaneously, creating a decentralised distribution network. Coordination happens through BitTorrent trackers (or Distributed Hash Tables), which allow peers to discover each other and join the swarm.
Tor has something similar in the form of Directory Authorities. A small, hardcoded set of these authorities maintains a signed consensus document listing active relays. When your Tor client builds a circuit, it downloads this consensus to determine which relays are currently available.
This is one of the more controversial aspects of Tor. It introduces a small degree of centralisation in an otherwise decentralised system. That said, Directory Authorities do not see user traffic. They coordinate the network map - they do not observe the packets flowing through it.
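As a rough illustration of how a client picks distinct relays from the consensus, here is a toy selection sketch. The consensus entries, flags, and logic are invented for the example; real Tor path selection is bandwidth-weighted and applies many more constraints (families, subnets, exit policies):

```python
# Hypothetical consensus entries. Real consensus documents carry many
# more fields: bandwidth weights, exit policies, authority signatures.
import random

consensus = [
    {"nick": "relay%d" % i,
     "flags": {"Guard"} if i < 3 else ({"Exit"} if i < 6 else set())}
    for i in range(10)
]

def build_circuit(consensus):
    # Pick three distinct relays: a Guard-flagged entry, an
    # Exit-flagged exit, and any remaining relay as the middle.
    guards = [r for r in consensus if "Guard" in r["flags"]]
    exits = [r for r in consensus if "Exit" in r["flags"]]
    guard = random.choice(guards)
    exit_ = random.choice(exits)
    middles = [r for r in consensus if r not in (guard, exit_)]
    return guard, random.choice(middles), exit_

g, m, e = build_circuit(consensus)
assert len({g["nick"], m["nick"], e["nick"]}) == 3
```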
“But what about the Silk Road?”
That is the other half of Tor’s design.
Tor does not just anonymise the client. It can anonymise the server as well.
This is achieved through what are known as Onion Services (formerly “Hidden Services”). With these services, traffic never touches the clearnet. Instead, both client and server connect to introduction points within the Tor network. A rendezvous point is established, and communication happens entirely inside Tor.
In this model:
- The client does not know the server’s IP address
- The server does not know the client’s IP address
- There is no traditional exit node involved
All of this is a long-winded way of saying that Tor works because no single component has full knowledge of a connection. It distributes trust across multiple independent relays and relies on layered encryption to prevent correlation.
It is not invisibility.
It is compartmentalisation.
Don’t meet your heroes
The mythology of Tor is that it is unbreakable. The reality is that it is extremely good at what it was designed to do, and completely useless at protecting you from yourself. Most high-profile Tor arrests were not the result of the network being cryptographically broken. They were the result of operational security failures.
- Ross Ulbricht did not get caught because Tor failed. He was caught because of early forum posts under his real identity, reused email addresses, and small human mistakes that linked personas together.
- AlphaBay’s administrator did not get deanonymised through onion routing being cracked. He logged into an administrative interface from a non-Tor IP address.
- Even outside of criminal cases, researchers have demonstrated that writing style analysis, timing correlation, browser fingerprinting, and behavioural patterns can dramatically reduce anonymity sets.
Tor hides your IP address. It does not hide:
- Your writing style
- Your sleep schedule
- Your political opinions
- Your typing cadence
- Your reused usernames
- Your cryptocurrency reuse patterns
The network compartmentalises data. Humans recombine it.
This is not to say that the Tor architecture is free of vulnerabilities. Tor relays are operated by volunteers, with roughly 7,000–10,000 nodes active at any given time. Given its decentralised nature, you might assume this kind of network is susceptible to a “51% attack” like Bitcoin; it is not. It is, however, susceptible to what are known as Sybil attacks and traffic correlation attacks. If a single entity were able to operate a large enough portion of the network’s relays, the probability rises that a user’s entry and exit nodes are both controlled by that entity. In that scenario, statistical traffic analysis becomes possible: correlating timing and volume to deanonymise users. Tor mitigates this through guard node design, relay diversity, and circuit construction rules, but it is not immune in principle. Again, the network raises the cost of surveillance. It does not eliminate it.
I2P: The Parallel Internet
If Tor is an anonymity layer bolted onto the existing internet, I2P is an attempt to build a different one entirely. I2P, or the Invisible Internet Project, was designed not primarily as a way to anonymously browse the clearnet, but as a self-contained, encrypted network. The key difference is that while Tor allows communication with the clearnet via exit nodes, I2P was built to keep most (if not all) communication internal.
Instead of onion routing, I2P uses what is known as garlic routing. This is not just a quirky nod to Tor’s name; it refers to the architecture. In Tor, a single request is wrapped in multiple encryption layers and sent through a three-hop circuit. In I2P, multiple messages can be bundled together into a single encrypted “garlic clove”. This makes traffic analysis more difficult because observers cannot easily distinguish which encrypted payload corresponds to which logical message. It also differs in how tunnels are constructed: rather than building a single bidirectional circuit like Tor, I2P uses distinct inbound and outbound tunnels. When you communicate with a service in I2P:
- Your outbound traffic travels through relays you selected
- Your service’s reply travels back through a completely separate inbound tunnel
This asymmetry increases the resistance to certain correlation attacks. Additionally, every I2P user participates in routing traffic. Unlike Tor’s volunteer relay model, I2P grows more directly with its user base - each participant contributes to the network’s capacity.
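The bundling idea can be sketched structurally. This is not I2P’s actual wire format (which encrypts cloves with ElGamal/AES or the newer ECIES-X25519 schemes); it only shows how several logical messages become one opaque, fixed-size blob:

```python
# Toy garlic bundling: structure only, not I2P's real encryption
# or encoding. Destinations and sizes below are invented.
import json

BLOB_SIZE = 4096  # arbitrary fixed size chosen for the example

def build_garlic(cloves: list) -> bytes:
    # Bundle several logical messages ("cloves") into one payload,
    # then pad to a fixed size so count and length leak less.
    bundle = json.dumps(cloves).encode()
    if len(bundle) > BLOB_SIZE:
        raise ValueError("bundle too large for one blob")
    return bundle.ljust(BLOB_SIZE, b"\0")

cloves = [
    {"to": "webserver.i2p", "body": "GET /"},
    {"to": "peer.i2p", "body": "tunnel-test"},
    {"to": "self", "body": "delivery-status-ack"},
]
blob = build_garlic(cloves)

# An observer sees one fixed-size blob, not three separate messages.
assert len(blob) == BLOB_SIZE
```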
At first glance, this sounds even stronger than Tor.
- no exits
- internal-only services
- garlic routing
- decentralised tunnel building
A darker web, but cleaner; a purer implementation of anonymity… And this is where the illusion deepens.
Same human problem
I2P does not fail because its routing is weak. It fails for the same reason everything else fails:
- Anonymity sets matter - I2P’s user base is significantly smaller than Tor’s. Fewer participants means less statistical cover; the crowd you hide inside is thinner.
- Misconfiguration matters - Running services that leak identifying metadata defeats the purpose of garlic routing entirely
- Operational security still matters - Logging into a clearnet account with the same username you use in I2P is not something cryptography can fix
- Timing correlation still matters
- Behavioural fingerprinting still matters
- Writing style still matters
I2P reduces the attack surface; it does not strip out human nature. It can only obscure packets, not personality.
The Illusion of Privacy
This is the uncomfortable truth: Most of the time, the protocol is not what fails; we are.
Illusion 1: “If I use Tor, I am anonymous”
Tools like Tor and I2P obfuscate your traffic but do not hide your identity. This leaks through:
- Writing style
- Time-of-day activity patterns
- Language quirks
- Reused usernames
- Cryptocurrency reuse
- Social graph analysis
- Human Habit
You can route your packets through three continents, all with independent privacy laws, making sure none even scrape the borders of the Five Eyes, but you cannot route personality through the same system. Anonymity is not just network location; it is behaviour, and humans are remarkably consistent.
Illusion 2: “Decentralised means private”
Decentralisation describes architecture; privacy describes behaviour.
You can:
- Self-host everything
- Run your own Matrix server
- Run services over Tor
- Use I2P exclusively
and still:
- Reuse the same email address
- Log into real-world accounts
- Post identifiable information publicly
- Link your identities through convenience
Infrastructure does not override habit; your pipeline may be water-tight, but you can still leak through it.
Illusion 3: “My threat model demands this.”
For some people, yes, it GENUINELY does.
If you are:
- A journalist in an authoritarian state
- A whistleblower exposing corruption
- A political dissident
- A vulnerable minority under active surveillance
Tor is not an aesthetic quirk; it is a survival technique.
Most people reading this blog, however, are not in that position. We are technologists, hobbyists, privacy enthusiasts, and ideologically aligned with sovereignty. There is nothing wrong with that, but we need to be honest about it. Running Tor because you value exit options is different from running Tor because you are Edward Snowden. One is principle, and the other is necessity - conflating the two is ideologically dishonest.
Illusion 4: “PGP gives me privacy”
PGP does not grant you anonymity; it grants you pseudonymity, and they are very different things. It is not a cloak, it is a flag. It is a mathematically pure way of saying “this identity is consistently me”, and that is incredibly powerful… but it is not invisibility.
Cryptography protects content.
- It does not erase metadata
- It does not erase pattern
- It does not erase behaviour
You can sign and encrypt every bit of data you produce, but you are creating more correlation by doing this, not less.
Realism
I have gone by many handles online.
- Spud
- Alice McTavish
- Genesis Affair
- Hell Hound
- Night Marcher
- Gringo
- JBS
The handles I use most now are eddiequinn and ghostwire. Ghostwire started when I moved out of my parents’ house two years ago, bought my first server and got into torrenting and private trackers again. I made ghostwire with the intention of paying attention to OpSec. It is my piracy identity, and it should not cross over into my personal life - and yet I now have it pasted onto my GitHub, blog, GitLab, Matrix server, etc.
Privacy is not a browser, protocol, operating system, or convenient checklist you can tick off one by one. These help, but they are not what matters. Privacy is discipline.
It requires:
- Compartmentalisation across devices
- Distinct identities with no crossover
- Strict operational boundaries
- Refusal of convenience
- Continuous vigilance
Even with all of this, it is still just a numbers game.
Tor raises the cost of surveillance. I2P raises the cost of surveillance. They do not reduce it to zero.
The illusion is not that these tools fail; the illusion is that using them is enough.
In 2023, I went to HCPP in Prague. It was the first time my experience with fellow hackers ventured away from cyberspace and into meatspace, and whilst there, I was thrown headfirst into the crypto-maximalist anarchist subculture. I won’t go into that in depth here, as I already have a post about it on this blog, but one talk I attended still resonates with me: The Plastic Powder by Cody Wilson. For those of you who do not know who Cody Wilson is, he is the godfather of 3D printed guns. What Cody did effectively ruined his life, and so, in his view, sometimes “any port in a storm” is the right way to go about things. I git commit -m "blind commit" && git push sometimes, because I need to leave my desk now, and that is my port in that storm. He also said something akin to “we cannot win, we keep playing regardless”.
I will not make some grandiose statement here of ‘it is our moral duty to protect our privacy’ and ‘if we try hard enough we will be victorious’. We are up against something far more powerful and unyielding than nation-state actors: we are fighting profit incentive. You are less valuable than the data you produce, and the more tools we employ to obfuscate and poison the money spigot that is our digital lives, the harder those tools will be fought by actors whose entire existence depends on that well never running dry. We cannot win, and yet we keep playing regardless. As tech enthusiasts we all know that privacy is a farce, and yet some of us pursue it anyway, much to the ridicule and torment that comes with it. Some of us do it for fun, others as our own form of quiet rebellion, and some of us are just tumbling down the rabbit hole with too much inertia to actually stop. Despite the points I have made in this post, I do run Tor nodes, I will eventually mirror this site onto both I2P and Tor, and I will fight my personal battle to dam up my personal data reservoir - I cannot win, but I keep playing anyway.
Verify this post
This page is published as a PGP clearsigned document. You can verify it like this:
```shell
# Fetch the signing key, then verify the clearsigned document
gpg --keyserver hkps://keys.openpgp.org --recv-keys CA98D5946FA3A374BA7E2D8FB254FBF3F060B796
curl -fsSL 'https://eddiequinn.xyz/sigs/posts/2026/march/tor-i2p-and-the-illusion-of-privacy.txt' | gpg --verify
```