Hmmm.
Asia:
64 bytes from orange.kame.net (203.178.141.194): icmp_req=1 ttl=52 time=330 ms
64 bytes from rev198.asus.com (211.72.249.198): icmp_req=1 ttl=238 time=339 ms
USA:
64 bytes from web2.eff.org (64.147.188.3): icmp_req=1 ttl=52 time=218 ms
64 bytes from forward.markmonitor.com (64.124.14.63): icmp_req=1 ttl=117 time=212 ms /* was time-warner.com */
Around the corner:
64 bytes from www.heise.de (193.99.144.85): icmp_req=1 ttl=249 time=51.1 ms
64 bytes from www.free.fr (212.27.48.10): icmp_req=1 ttl=121 time=70.1 ms
If Shareaza really needs more than 100ms to process a packet, then IMHO something is going really wrong.
I do not have the numbers for G2CD ATM, but I think it was around 1ms...
And that was "measured" with tcpdump, so from the packet entering the network card to the answer leaving it, but on an uncongested network!
I guess the problem is that most of the time the upload is totally congested. And that needs to be tackled.
Maybe by measuring the RTT and saying: "Sorry, you are too crowded/congested/bad a Hub - bye".
But not by actively seeking the Hub with the lowest RTT.
Note the fine difference in this!
By the way, how do you prevent the effect of a stupid swarm? All clients measure a good RTT on one Hub, so all of them use that Hub to do their relaying; because of that the Hub gets congested. All clients move on, only to congest the next Hub.
You enter a world of problems there.
So again:
Filtering Hubs with bad RTT - yes
Using the lowest RTT - no
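To make the difference concrete, here is a minimal sketch of that policy in Python. The names, the hub list, and the 100ms cutoff are all illustrative assumptions, not taken from Shareaza or G2CD: hubs over the RTT limit are rejected, and the client then picks *randomly* among the acceptable ones instead of always taking the lowest.

```python
import random

RTT_LIMIT_MS = 100.0  # assumed cutoff; a real client would tune this

def pick_hub(hubs):
    """hubs: list of (name, rtt_ms) tuples; returns a hub name or None."""
    # Filtering step: drop Hubs whose measured RTT is too bad.
    acceptable = [name for name, rtt in hubs if rtt <= RTT_LIMIT_MS]
    if not acceptable:
        return None
    # Random choice spreads clients across all acceptable Hubs and
    # avoids the stampede that "always use the lowest RTT" would cause.
    return random.choice(acceptable)

hubs = [("hub-a", 51.1), ("hub-b", 70.1), ("hub-c", 212.0)]
print(pick_hub(hubs))  # either "hub-a" or "hub-b", never "hub-c"
```

With lowest-RTT selection every client above would converge on "hub-a" and congest it; with threshold-plus-random they split the load between "hub-a" and "hub-b".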
Greetings
Jan