Even if you have encrypted your traffic with a VPN (or the Tor Network), advanced traffic analysis is a growing threat against your privacy. Therefore, we now introduce DAITA.
Through constant packet sizes, random background traffic and data pattern distortion we are taking the first step in our battle against sophisticated traffic analysis.
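To make that concrete, here's a rough Python sketch of what constant packet sizes plus random cover traffic can look like. This is purely illustrative, not Mullvad's actual implementation; the padded size, dummy count, and timing values are made up:

```python
import os
import random
import time

PADDED_SIZE = 1500  # hypothetical constant on-the-wire size for every packet

def pad_packet(payload: bytes) -> bytes:
    """Pad every payload to the same length, so packet size reveals nothing."""
    if len(payload) > PADDED_SIZE:
        raise ValueError("payload too large; a real implementation would fragment it")
    return payload + os.urandom(PADDED_SIZE - len(payload))

def send_with_cover(send, payloads, dummy_count=10):
    """Mix real packets with all-padding dummies at random times, so traffic
    volume and timing reveal little either."""
    stream = [pad_packet(p) for p in payloads] + [pad_packet(b"") for _ in range(dummy_count)]
    random.shuffle(stream)  # real and dummy packets look identical on the wire
    for packet in stream:
        time.sleep(random.uniform(0.01, 0.1))  # made-up timing distribution
        send(packet)

# Example: the observer sees 13 identical-size packets and cannot tell which three are real.
send_with_cover(lambda p: print("sent", len(p), "bytes"), [b"GET /", b"secret", b"data"])
```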
Even if you baked variable packet sizes into TCP, it would be trivial for anybody monitoring network flow to see who you're talking to. There would be no ambiguity.
The only reason this makes sense for a VPN is that there's a lot of traffic bundled together, so a third party doesn't actually know where your traffic flow is going.
Consider the example of running your own personal VPN endpoint, so you're the only user on the VPN. Even with randomized traffic injected into your VPN connection, it would be trivial for any third party monitoring traffic flow to know that the traffic is yours, because you're the only VPN connection talking to the VPN server. The same thought experiment applies when you don't have a VPN at all.
If I were to send packets to a single entity over time, I’d have no use for DAITA. I agree with you on this.
However, let’s say that I run a bunch of VPN endpoints across VPSes, and the entity trying to track me doesn’t know about all of these IP ranges. I could be renting from a colo, the cloud, and even a bunch of friends who have their ports open. If I were to mix this in with my usual internet traffic, it becomes significantly harder for third parties to figure out what I’m doing connecting to all of these different IPs. A state actor could put the resources behind it, but the average third party will have a hard time with it. I can certainly see use cases for it.
I think we’re mixing up vocabulary.
Every IP you talk to is visible to anybody monitoring your network. The sale of netflow data is commonly acknowledged by ISPs, so every computer you talk to is effectively public information, available for sale.
In your scenario, let’s say you have five VPN connections set up to go to five endpoints that you control. But if nobody else is using those same endpoints, your netflow data still exposes exactly what you’re doing. There’s no ambiguity: your traffic is plainly obvious to anybody observing the network, even if those VPN connections are adding randomized traffic onto the links.
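To illustrate what netflow data actually records, here's a simplified sketch. The fields are loosely modelled on NetFlow/IPFIX, and the addresses, port, and byte counts are made-up example values:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Loosely modelled on NetFlow/IPFIX fields; names and values are illustrative."""
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int
    start_time: float
    end_time: float

# Even with randomized padding on the tunnels, the observer's records still look like:
records = [
    FlowRecord("203.0.113.7", "198.51.100.10", 51820, 9_000_000, 0.0, 600.0),
    FlowRecord("203.0.113.7", "198.51.100.11", 51820, 11_500_000, 0.0, 600.0),
]

# Padding changes the byte counts and timing, not the pair of addresses that talked.
for r in records:
    print(f"{r.src_ip} -> {r.dst_ip}:{r.dst_port} ({r.bytes_sent} bytes)")
```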
Except that I will not necessarily be connecting to the exact same IPs over time; I’m just going to do so within specific ranges that the VPS/colo provider owns. There are plenty of people renting VPSes whose traffic originates from the same IP range as mine, which means that if everybody using TCP had their traffic anonymized like this, the third party wouldn’t actually know that MigratingToLemmy specifically was connecting to AWS at a certain time and from a certain location, so to speak. This hypothesis doesn’t include correlation through other data in the threat model, but it could definitely prevent correlation with traffic across locations, which is similar to what Mullvad states.
I’m sorry, no. This will not help you avoid flow analysis.
I think you both are talking past each other. You said “But if nobody else is using those same endpoints.” but @[email protected] said “There’s plenty of people who are going to be renting VPSes and will have their traffic originate from the same IP range as mine”. Reading this thread, it seems like you both have different network setups in mind.
Thanks for pointing that out; I tried to address it when I responded about netflow analysis. Having the same IP range as other people does not let you hide in the crowd: the netflow data will identify exact IPs.
Hypothetically, what if everybody in the world were using mixnets to obfuscate destination/origin, and then Mullvad’s DAITA to obfuscate traffic timing and size? Would netflow analysis be able to defeat that?
What is a mixnet? Something like Tor? An onion overlay network where the routing goes through multiple hops before it exits the network?
Let’s go through a few scenarios first.
Scenario A: you have a link to a common VPN endpoint that other people use. On this link you generate a consistent 1 megabyte per second of traffic, up and down.
There is now ambiguity about which of the traffic going into the VPN belongs to you. An outside observer would not be able to deduce which traffic is yours just from size and timing.
This is the gold standard. You remove all possible signal data.
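To make the “constant 1 megabyte per second” idea concrete, here's a toy sketch of constant-rate link padding. The rate is just the number from the scenario, and the tick size is an arbitrary illustrative choice, not any real implementation:

```python
import os
import time

RATE_BYTES_PER_SEC = 1_000_000   # the scenario's constant 1 MB/s
TICK_SECONDS = 0.01              # emit one fixed-size cell every 10 ms (illustrative)
CELL_SIZE = int(RATE_BYTES_PER_SEC * TICK_SECONDS)  # 10,000 bytes per cell

pending = bytearray()  # real application data waiting to go out

def queue_data(data: bytes) -> None:
    """Queue real traffic; it will ride inside the constant-rate stream."""
    pending.extend(data)

def shape_link(send, duration_seconds: float = 1.0) -> None:
    """Emit exactly CELL_SIZE bytes every tick, padded with random bytes when idle,
    so an observer sees a flat 1 MB/s no matter what you are actually doing."""
    end = time.monotonic() + duration_seconds
    while time.monotonic() < end:
        real = bytes(pending[:CELL_SIZE])                # take whatever real data is waiting
        del pending[:CELL_SIZE]
        cell = real + os.urandom(CELL_SIZE - len(real))  # fill the rest with padding
        send(cell)                                       # every cell is the same size on the wire
        time.sleep(TICK_SECONDS)

# Example: queue a little real data, then watch identical-size cells go out regardless.
queue_data(b"hello, this is the only real payload")
shape_link(lambda cell: print("sent", len(cell), "bytes"))
```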
Scenario B: everyone is using an onion overlay network, and their traffic has a little padding added and a little extra timing added at every link. This would reduce the probability that an outside observer could deduce the entire end-to-end flow of your traffic. But the shape of your traffic could defeat whatever level of obscuring is happening.

Imagine you have a real-time connection to the network and you’re typing out Morse code (… - - - - sort of thing), and each of those packets has a different size. If I’m observing the network for long enough, I’m going to notice the Morse-code-like packets, with their timing and size, go through the onion network. There will be some ambiguity, but enough traffic over enough time would give me high confidence that you’re the source of the traffic, because the extra obscuring traffic has a probability, but not a guarantee, of masking the shape of your traffic.
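For intuition on why enough traffic over enough time wins, here's a toy correlation sketch: a distinctive burst pattern gets a little random padding at each hop, and it still correlates far better with the original source than unrelated traffic does. All the numbers are made up for illustration:

```python
import random

def add_padding(sizes, max_pad=100):
    """Scenario B's defence: each packet gets a little random padding at each hop.
    (The same idea applies to inter-packet timings with small random delays.)"""
    return [s + random.randint(0, max_pad) for s in sizes]

def correlate(a, b):
    """Plain Pearson correlation between two equal-length size sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# A distinctive "Morse code" burst pattern: alternating short and long packets.
morse = [60, 60, 60, 600, 600, 600, 60, 60, 60] * 10
unrelated = [random.randint(60, 600) for _ in morse]

observed = add_padding(morse)  # what the observer sees leaving the overlay network

print("correlation with the real source:", correlate(observed, morse))        # ~0.99
print("correlation with unrelated traffic:", correlate(observed, unrelated))  # ~0.0
```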
So Scenario A is the gold standard; Scenario B would be better than nothing. Having a global onion network has its own issues: now you have to trust many nodes instead of one node. All of this comes down to your threat model and how much effort you’re willing to put in.
What am I missing?
https://hackertalks.com/comment/3687086