The following problems, which I consider serious, remain as of version 2.7.2.0. Where noted, I've verified that they are unfixed in 2.7.4.0; none are explicitly stated in the commit log to have been corrected since 2.7.2.0.
1. Inability to automatically reconnect to G2 reliably.
The suggestion to have all far-eastern IPs silently dropped by the G2 hostcache, and only by the G2 hostcache, has yet to be implemented. The idea is to modify the add-to-G2-hostcache function so that it simply returns without doing anything when the IP it is handed geolocates to the far east, while leaving unchanged how such IPs are treated in other contexts, e.g. as G1 ultrapeers, eD2k servers, or download sources. Shareaza still gets stuck banging its head against the Great Firewall most of the time if it loses a G2 connection.
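A minimal sketch of the proposed guard follows. All names here are hypothetical stand-ins for Shareaza's real hostcache and GeoIP code, and the country list is an assumption about what "far east" should cover:

    // Sketch only: hypothetical names, not Shareaza's actual classes.
    #include <cstdint>
    #include <string>

    // Assumed geolocation helper: maps an IPv4 address to a two-letter
    // country code. A real implementation would consult GeoIP data.
    std::string GeolocateCountry( uint32_t nAddress );

    // Assumed label set; adjust to whatever "far east" means operationally
    // (i.e. addresses that end up behind the Great Firewall).
    static bool IsFarEastern( const std::string& sCountry )
    {
        return sCountry == "CN" || sCountry == "HK" || sCountry == "TW" ||
               sCountry == "KR" || sCountry == "JP";
    }

    class CG2HostCache
    {
    public:
        void Add( uint32_t nAddress, uint16_t nPort )
        {
            // Proposed change: silently drop far-eastern IPs here, and only
            // here, so that G1 ultrapeers, eD2k servers, and download
            // sources are handled exactly as before.
            if ( IsFarEastern( GeolocateCountry( nAddress ) ) )
                return;

            AddExisting( nAddress, nPort );   // fall through to current behavior
        }

    private:
        // Stands in for the existing, unchanged insertion logic.
        void AddExisting( uint32_t nAddress, uint16_t nPort );
    };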
This issue is confirmed to still be present in 2.7.4.0.
2. Inability to reliably clear the G2 hostcache while a G2 connection is established.
This causes no end of trouble when Shareaza loses one G2 connection but retains another, and starts bashing its head against the Great Firewall. Ideally, one would purge the dud hosts from the host cache and have Shareaza query discovery services for a proper, working G2 host to replace the missing connection, all without disturbing the good G2 connection that remains operational, especially when active searches are running that one wishes to disturb as little as possible.
Unfortunately, clearing the hostcache while one G2 connection is still up fails in strange ways. Usually it immediately restores all the garbage hosts that were just deleted instead of querying discovery services, though sometimes there is a delay first. Sometimes it even queries a discovery service, only to then fill the G2 cache with Great Firewalled hosts instead of the hosts the discovery service actually returned. As a result it is very difficult, requiring up to several dozen attempts in a row, to nursemaid it into reconnecting a single lost G2 connection without disturbing the second, still-working G2 connection.
The amount of manual nursemaiding required to reestablish a lost connection to G2 (or more than one, or a lost connection to any other network) should be zero; it certainly shouldn't be dozens of repetitive steps under any circumstances.
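A minimal sketch of what clearing the cache ought to do while one hub connection remains, again using hypothetical names rather than Shareaza's actual hostcache classes: keep the live hub, drop everything else, and go to discovery for replacements instead of re-adding the stale hosts.

    // Sketch only: illustrative types, not Shareaza's own API.
    #include <cstdint>
    #include <set>
    #include <vector>

    struct HostEntry { uint32_t nAddress; uint16_t nPort; };

    class CG2HostCache
    {
    public:
        // Drop every cached host except the hubs we are currently connected
        // to, then ask discovery services for fresh hosts rather than
        // restoring the dud entries that were just removed.
        void ClearKeepingConnected( const std::set<uint32_t>& connectedHubs )
        {
            std::vector<HostEntry> kept;
            for ( const HostEntry& host : m_hosts )
            {
                if ( connectedHubs.count( host.nAddress ) )
                    kept.push_back( host );   // preserve the working hub
            }
            m_hosts.swap( kept );

            if ( m_hosts.size() < m_nWanted )
                RequestDiscovery();           // query discovery services for new hosts
        }

    private:
        // Stands in for the existing discovery-query logic.
        void RequestDiscovery();
        std::vector<HostEntry> m_hosts;
        size_t m_nWanted = 2;                 // desired number of G2 hub connections
    };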
This issue is confirmed to still be present in 2.7.4.0. Correcting Issue #1 would probably render this issue moot.
3. G1 is often very slow to connect, and to replace lost connections (though, unlike the usual case with G2, it does not require manual nursemaiding). This started abruptly a few weeks ago, right around the time the abnormal G2 behavior caused by a broken GTK-Gnutella implementation stopped. The claimed cause, a shrinking population of G1 clients, is unlikely to be the explanation: this was a step-function change (from normal to very slow connecting over a period of at most a day or two, a few weeks ago), and G1 should maintain a proportional number of ultrapeers rather than suddenly have too few per leaf because of any hypothesized shrinkage of the network.
This issue is confirmed to still be present in 2.7.4.0.
4. Files with duplicate names overwrite each other when moved from the incomplete directory to the download directory upon completion.
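For illustration, a self-contained sketch of the collision-safe move one would expect on completion; the helper names are mine, not Shareaza's, and the " (2)", " (3)" renaming scheme is just one reasonable convention:

    // Sketch only: picks a non-colliding target name before moving.
    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    // Try "name.ext", then "name (2).ext", "name (3).ext", ... until no
    // existing file is hit, so a completed download never overwrites an
    // earlier one with the same name.
    static fs::path UniqueTargetPath( const fs::path& dir, const fs::path& name )
    {
        fs::path target = dir / name;
        for ( int n = 2; fs::exists( target ); ++n )
        {
            fs::path candidate = name.stem();
            candidate += " (" + std::to_string( n ) + ")";
            candidate += name.extension();
            target = dir / candidate;
        }
        return target;
    }

    static void MoveCompleted( const fs::path& incompleteFile, const fs::path& downloadDir )
    {
        fs::rename( incompleteFile,
                    UniqueTargetPath( downloadDir, incompleteFile.filename() ) );
    }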
5. If a file named X has Completed status in the download list and a second file named X is downloaded, Shareaza will apparently fail to verify the second file (or verification takes far longer than two minutes even for a file of only a few hundred KB, when it normally takes only a second or two for files that small). This happens even if no overwrite occurred, because the first file was moved, renamed, or deleted before the second was downloaded, or perhaps even before the second download was started. It does not happen if all Completed items named X are removed with "Clear Download" before another download named X is started; that download will then verify normally on completion.
6. There remain several incompatibilities between Shareaza and GTK-Gnutella that manifest when Shareaza 2.7.2.0 (and likely 2.7.4.0) tries to download from GTK-Gnutella sources:
- Some results, seemingly chosen at random but consistent across separate searches and sessions, don't get filtered out by "files you have already" even though the file is present in the library and the "download" command on them triggers the "you already have this file in your library" prompt.
- Some results, seemingly chosen at random but consistent across separate searches and sessions, fail to add idempotently to the download list: select an affected result and click "download" n times and n copies of the file are added to the download list. This also prevents new sources from being added if any are found; every source ends up with its own download-list entry. That in turn prevents swarm downloading of affected files, since getting a complete copy requires downloading the whole file from one particular source; parts downloaded from two or more sources overwrite each other rather than combining into the full file. (See the sketch after this list for the idempotent behavior one would expect.)
- Some results have a source IP of 0.0.0.0 and get added as bogus download-list entries that show "0/1 sources" and cannot be started or resumed (they sit permanently at Queued and never become Pending, Searching, etc.).
- Other results, seemingly chosen at random but consistent across separate searches and sessions, sometimes end up stuck as Queued (never Pending, Searching, etc.) and cannot be downloaded, yet they don't have a source IP of 0.0.0.0 and don't show the abnormal "0/1 sources" in the download list. Other results from the same source can download normally, and then either display the first bug (not being filtered by "files you have already") or not, and yet others display some of the other bugs. Whether some of the results erroneously claiming a source of 0.0.0.0 are really from that same source again is unknown, since the true IP of the machine that sent the affected hit does not appear to be recorded anywhere.
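To illustrate the idempotent adding mentioned above, here is a minimal sketch, with hypothetical names rather than Shareaza's actual classes, of a download list keyed by content hash: a repeated "download" command or a new hit for the same hash merely merges in another source instead of creating a duplicate entry, which is what makes swarming possible.

    // Sketch only: illustrative types, keyed by content hash.
    #include <cstdint>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    struct SearchHit
    {
        std::string sSHA1;      // content hash reported in the query hit
        uint32_t    nAddress;   // source IP
        uint16_t    nPort;      // source port
    };

    struct Download
    {
        std::string sSHA1;
        std::set<std::pair<uint32_t, uint16_t>> sources;   // all known sources
    };

    class DownloadList
    {
    public:
        // Clicking "download" n times on the same hit still yields exactly
        // one Download; extra clicks, and new hits for the same hash, only
        // merge in additional sources.
        Download& Add( const SearchHit& hit )
        {
            Download& d = m_byHash[ hit.sSHA1 ];            // reuse or create
            d.sSHA1 = hit.sSHA1;
            d.sources.insert( { hit.nAddress, hit.nPort } );
            return d;
        }

    private:
        std::map<std::string, Download> m_byHash;           // one entry per hash
    };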