by grey-hame » 27 Oct 2009 14:27
When Shareaza is running I get odd networking-related symptoms from other apps. Firefox randomly won't load pages until a link or the refresh button has been banged on three or four times, though usually it loads pages normally. Firefox also sometimes spontaneously acts as if "Work Offline" had been selected from the File menu, even though it hadn't. Thunderbird randomly won't fetch mail or newsgroups, then works again after a while. YouTube videos pause randomly and need some prodding to resume. And so forth.
Shareaza version is 2.4.0.4 r8219 20091004. Vista Home Premium with the half-open patch, 3 Mbit DSL (768 Kbit up), with a Linksys BEFSR41 and a Speedstream 4200 between the machine and the internet (the latter in bridge mode). The setup works fine for everything else, but Shareaza seems to cause some sort of problem. The weird thing is, it only does it some of the time: everything can work fine for hours, and then for hours the networking spazzes out every few minutes. While Shareaza is causing this it also won't download much or reliably, and search results are poor.
It is not Shareaza using up all the bandwidth and leaving none for Firefox and other applications: when this happens I'll generally see Shareaza using only a few KB/s down or up, since Shareaza itself is affected by whatever this is.
I do notice that performing searches seems to trigger it or make it worse, and having a lot of pending downloads seems to make it more likely to happen spontaneously and more severe when it does.
I assume the bug is therefore in Shareaza's search logic somewhere.
In the meantime your website has severe bugs of its own. Ever since it was moved it has had an intermittent problem where, when a page is requested, your HTTP server serves a blank page containing only the background image (so with a whitish bar across the top but no content). Reloading always fixed it, and it mainly affected thread-list pages.

Lately, though, it has become much, MUCH worse. Loading any page now has about a 3-in-4 chance of coming up blank, and it makes no difference whether it's the second or a later attempt, so one typically has to click a link and then hit refresh three or four times to load each page. Worse, this now includes thread pages, and when the server sends a blank page instead of the correct one it DOES still mark everything "read", so when you finally load the thread you can't tell what's new and have to re-read EVERYTHING. Between that and having to hit refresh repeatedly for every page, using the site has suddenly become EXCRUCIATINGLY SLOW AND FRUSTRATING.
You will therefore fix this problem immediately, and permanently. I can understand this not being a priority when the bug affected only one page load in ten or fewer, but at 3 in 4 or more it is completely unacceptable. Move it to the front burner and get it resolved.
If this is deliberate behavior in response to load, because fobbing someone off with a blank page uses less bandwidth than sending a proper web page, be advised that it will backfire and already has. People respond to a blank page by clicking reload, not by shrugging and forgetting about the page they wanted to view. As a result, the server ends up sending BOTH a blank page AND the correct page. Upping the odds to 3 in 4 just means the server serves, on average, THREE blank pages plus the correct one per page actually read (see the back-of-the-envelope sketch below). That uses MORE bandwidth, drives the load HIGHER, and presumably makes the blank-page mechanism raise the ratio even further, which just drives a vicious cycle. So if the blank-page thing IS intentional load limiting, it is the most monumentally STUPID such scheme I have EVER seen; whoever thought it up ought to be fired post-haste for not thinking it through and for leaping to a user-hostile solution as their first choice.
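To put rough numbers on that, here is a tiny back-of-the-envelope sketch. It is purely illustrative and assumes the simplest possible model: each request fails independently with the same blank-page probability, and the user keeps hitting reload until a real page comes back.

```python
# Illustrative only: assumes each request fails independently with
# probability p_blank and that the user retries until a real page loads.

def expected_requests(p_blank: float) -> float:
    """Average total requests (blank + correct) per page actually viewed."""
    # Geometric distribution: on average 1 / (1 - p) attempts per success.
    return 1.0 / (1.0 - p_blank)

for p in (0.10, 0.50, 0.75):
    total = expected_requests(p)
    print(f"blank-page rate {p:.0%}: ~{total:.2f} requests served "
          f"(~{total - 1:.2f} blank + 1 correct) per page viewed")
```

At a 1-in-10 failure rate the extra traffic is only about 11%; at 3 in 4 the server is doing roughly four times the work per page its users actually get to read, which is exactly the vicious cycle described above.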
If you want to reduce bandwidth load on your servers, do the following:
1. Get rid of the blank-page thing, whether it's intentional or just a bug. Whichever it is, it has the effects described above.
2. Get rid of whatever stupid Javascript forces the browser to reload every goddamn page every goddamn time it's viewed. I assume this was done so that users always see an up-to-date thread list and an up-to-date view of each thread, but the end result is to annoy users AND waste bandwidth re-serving unchanged pages. Users with slow or flaky connections are hit especially hard, but everyone is annoyed when the "back" button does not produce instant results; "back" is supposed to be instant. Users here will generally be techy enough to know they can hit refresh themselves when they want to be sure a view is up to date. (See the first sketch after this list.)
3. Consider getting rid of https for everything except login. There's really no need to encrypt every page view and every post on public forums; the only sensitive data here are users' passwords. Https inflates the traffic: every TLS record adds header and MAC bytes on top of the plaintext, and the handshake costs additional bandwidth and round trips for key exchange and certificates, none of which is necessary except when the user submits their password at the login form. Only the page loaded BY the login form's submit button actually requires SSL. (See the second sketch after this list.)
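For item 2: whether the forced reload comes from a script or from the response headers, the effect on the wire is the same, so here is a minimal sketch of the header side of it. It assumes the pages are currently sent with "don't cache anything" headers; I haven't inspected the actual responses, and the function and names below are hypothetical, not the forum's code.

```python
# Hypothetical sketch for item 2 -- not the forum's actual code.
# The point is the difference between the two Cache-Control policies.

# Headers that force the browser to refetch the page on every view,
# even via the "back" button -- the behaviour complained about above:
FORCE_REFETCH = {
    "Cache-Control": "no-store, no-cache, must-revalidate",
    "Pragma": "no-cache",
    "Expires": "0",
}

# Headers that let the browser reuse its own copy for a short while,
# so "back" is instant; a user who wants the latest view just hits
# refresh, as argued above:
ALLOW_SHORT_CACHE = {
    "Cache-Control": "private, max-age=300",  # cache for 5 minutes
}

def with_cache_policy(headers: dict, allow_cache: bool = True) -> dict:
    """Merge one of the two policies into an outgoing response (sketch)."""
    return {**headers, **(ALLOW_SHORT_CACHE if allow_cache else FORCE_REFETCH)}

print(with_cache_policy({"Content-Type": "text/html"}))
```

The exact values matter less than the principle: unchanged pages should be servable from the browser's own cache instead of being re-sent over the wire every time.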
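For item 3, a similarly rough sketch of where the extra https bytes go. Every figure below (certificate-chain size, handshake size, per-record overhead, page size) is a ballpark assumption chosen for illustration, not a measurement of this server.

```python
# Rough illustration for item 3. All numbers are assumptions chosen for
# illustration, NOT measurements of this forum's server.

CERT_CHAIN_BYTES = 3000      # assumed size of the certificate chain sent
                             # in each full TLS handshake
HANDSHAKE_MISC_BYTES = 1000  # assumed other handshake traffic (hellos,
                             # key exchange, finished messages)
PER_RECORD_OVERHEAD = 25     # assumed header + MAC bytes per TLS record
RECORD_SIZE = 16 * 1024      # maximum TLS record payload size

def https_overhead(page_bytes: int, new_handshake: bool) -> int:
    """Very rough extra bytes for serving one page over https (sketch)."""
    records = -(-page_bytes // RECORD_SIZE)       # ceiling division
    overhead = records * PER_RECORD_OVERHEAD
    if new_handshake:
        overhead += CERT_CHAIN_BYTES + HANDSHAKE_MISC_BYTES
    return overhead

page = 40 * 1024  # assume a ~40 KB forum page
for fresh in (True, False):
    extra = https_overhead(page, fresh)
    print(f"{'full handshake' if fresh else 'resumed/kept-alive'}: "
          f"~{extra} extra bytes ({extra / page:.1%} of the page)")
```

Under these assumptions most of the cost sits in the handshake rather than in the encrypted records themselves, which is all the more reason to spend it only on the login step instead of on every page view.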