Processor assessment


Processor assessment

Postby siavoshkc » 01 Jan 2010 20:25

Last edited by siavoshkc on 25 Jan 2010 17:41, edited 1 time in total.
siavoshkc
 
Posts: 347
Joined: 02 Nov 2009 09:37

Re: Processor assessment

Postby cyko_01 » 01 Jan 2010 21:24

If we were to use this code and this were the last version to be released, we could end up with a huge problem on our hands in a few years. This should be based on system requirements, not on technology trends.
cyko_01
 
Posts: 938
Joined: 13 Jun 2009 15:51

Re: Processor assessment

Postby old_death » 01 Jan 2010 21:36

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby ocexyz » 02 Jan 2010 00:30

I think this is a totally sick idea, or else I haven't understood it correctly. CPU speed should be checked on the particular system using real data from the hardware, not a theoretical calculation... My CPU is not getting faster, even though by Moore's law it should become twice as fast every two years. Since it is the same chip, Moore's law simply does not apply; this is beyond the scope of that law. Are you nuts? :shock: :roll: :mrgreen:
ocexyz
 
Posts: 624
Joined: 15 Jun 2009 13:09

Re: Processor assessment

Postby cyko_01 » 02 Jan 2010 01:35

cyko_01
 
Posts: 938
Joined: 13 Jun 2009 15:51

Re: Processor assessment

Postby siavoshkc » 04 Jan 2010 06:56

We could have a benchmark function in our code, but it would slow down the program.
Vista has an assessment technology that could be used.
But I have a new idea: Shareaza can measure the time it spends generating hashes and divide it by the total size of the files (see the sketch below). The problem is that if a system shares nothing, its performance is never determined.
Ideally, the code that reads system info directly from the registry should be removed.

Anyway, by changing 18 months to 24 months we can be sure Shareaza will only give a score to relatively fast CPUs.
BTW, I found out that Shareaza already counts processors at startup, so GetSystemInfo() is redundant.
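A minimal sketch of that hashing-throughput idea (the class and all member names are hypothetical, not Shareaza's actual hashing code):

Code: Select all
#include <chrono>
#include <cstddef>
#include <cstdint>

// Accumulates bytes hashed and the wall-clock time spent hashing,
// yielding MB/s as a rough CPU-speed proxy.
class CHashThroughput
{
public:
	// Wrap one hashing pass over a buffer and record its cost.
	template <typename HashFn>
	void Measure(HashFn&& fnHash, const char* pBuffer, std::size_t nLength)
	{
		const auto tStart = std::chrono::steady_clock::now();
		fnHash( pBuffer, nLength );
		m_tSpent       += std::chrono::steady_clock::now() - tStart;
		m_nBytesHashed += nLength;
	}

	// MB/s over everything measured so far; returns 0 when nothing has
	// been hashed yet -- the "system that shares nothing" case above.
	double GetMBps() const
	{
		const double tSeconds =
			std::chrono::duration<double>( m_tSpent ).count();
		return tSeconds > 0.0 ? m_nBytesHashed / 1.0e6 / tSeconds : 0.0;
	}

private:
	std::uint64_t m_nBytesHashed = 0;
	std::chrono::steady_clock::duration m_tSpent{ 0 };
};

Feeding every hashed block through Measure() during normal library hashing would price the CPU without a dedicated benchmark pass; a zero result marks the "shares nothing" case, which would still need a fallback.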
siavoshkc
 
Posts: 347
Joined: 02 Nov 2009 09:37

Re: Processor assessment

Postby mojo85 » 04 Jan 2010 08:51

I think this is totally unnecessary. The two main bottlenecks are memory and network. Processor speed seldom plays a role in being a hub nowadays, as most CPUs over 1 GHz are capable of being a hub to 300 leaves. A better use of coding talent would be to create an adaptive network which grows or shrinks to accommodate network changes throughout the day. We already have somewhat of a feedback mechanism in play, but I have always wanted Shareaza to be the first P2P client to adopt AI mechanisms and techniques, and this could be one key area: many nodes communicating like neurons, promoting or demoting a hub depending on feedback mechanisms. AI and P2P are the next evolution; seriously, I'm not kidding, think about it for a moment. We could literally create a tame Skynet for now... but it would be a first, and it would be the largest physical neural net ever created... something that would get Shareaza into the Guinness records, even. It may sound like a joke, but I'm serious: AI and P2P are natural selection at work.
mojo85
 
Posts: 115
Joined: 27 Sep 2009 05:35

Re: Processor assessment

Postby ocexyz » 04 Jan 2010 18:28

ocexyz
 
Posts: 624
Joined: 15 Jun 2009 13:09

Re: Processor assessment

Postby siavoshkc » 05 Jan 2010 19:29

I withdraw my first code.
I think the processor assessment is there to ensure that a slow computer won't become a hub; it should verify that the PC is not too slow, not that it is very fast.
I have a new idea: if Shareaza measures its start-up time and makes sure it is below a limit, that would be enough, I think. It could even score based on start-up time.
For example:
Score += 0 for t > 4,
Score += 1 for 4 > t > 2 and
Score += 2 for 2 > t

Depending on the score it gets in IsG2HubCapable(), it can decide the number of leaves to handle. (A sketch follows below.)
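That scoring rule in code, assuming t is measured in seconds (the post does not give a unit) and a hypothetical helper name:

Code: Select all
// Start-up-time scoring as proposed above; a hypothetical helper whose
// result IsG2HubCapable() might add to its other scores.
int ScoreStartupTime(double t)
{
	if ( t < 2.0 ) return 2;	// 2 > t: fast start-up
	if ( t < 4.0 ) return 1;	// 4 > t > 2: acceptable
	return 0;					// t > 4: too slow to add points
}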
siavoshkc
 
Posts: 347
Joined: 02 Nov 2009 09:37

Re: Processor assessment

Postby mojo85 » 05 Jan 2010 20:34

My computer is fast enough to run Shareaza in hub mode... it is a dual-core 3.2 GHz overclocked PC. Shareaza takes a bit of time to load because of the large database file it loads initially. This method would marginalize people who have a large Library (over 200 GB of shared data).
mojo85
 
Posts: 115
Joined: 27 Sep 2009 05:35

Re: Processor assessment

Postby siavoshkc » 06 Jan 2010 06:28

siavoshkc
 
Posts: 347
Joined: 02 Nov 2009 09:37

Re: Processor assessment

Postby ocexyz » 07 Jan 2010 00:11

There is always discussion about how to limit which machines run in hub mode, but I suppose it does not have much effect. Computer performance is getting better and better, and so are hubs, yet search results are still either weak or false. Tell me, practically rather than theoretically, why is having more hubs with fewer leaves each not a good idea? :?: I just suppose that when we do not have a certain number of really huge machines working permanently as big, fast hubs, then more hubs, even weaker ones with a limited number of leaves, could give better performance for the whole network and could eliminate the "isolated islands" that happen. And this would not prevent fast machines from being hubs with more leaves. Just an idea.
ocexyz
 
Posts: 624
Joined: 15 Jun 2009 13:09

Re: Processor assessment

Postby old_death » 07 Jan 2010 10:59

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby ocexyz » 08 Jan 2010 00:58

ocexyz
 
Posts: 624
Joined: 15 Jun 2009 13:09

Re: Processor assessment

Postby ailurophobe » 08 Jan 2010 19:00

Startup time depends on one-time file loads from drives shared with other applications, so it is not a very reliable measure of speed. The only really reliable way to measure hub performance is to run as one and keep track of the latency you have responding to leaves/neighbours. Since the number of leaves always starts from zero and climbs slowly, it should be possible to set some target level of performance and simply stop accepting new leaves when it is reached. Then all you need to do is check whether the achieved number of leaves is high enough to be worth running as a hub, preferably by comparing it to the leaf counts your neighbours have achieved. (Over 50% of the median leaf count of neighbours? See the sketch below.)
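That comparison might look something like this (a hypothetical helper, not existing Shareaza code):

Code: Select all
#include <algorithm>
#include <vector>

// Stay a hub only if our leaf count is at least half the (upper)
// median leaf count of neighbouring hubs.
bool IsWorthRunningAsHub(int nOwnLeaves, std::vector<int> vNeighbourLeaves)
{
	if ( vNeighbourLeaves.empty() )
		return true;	// no neighbours to compare against, keep trying

	const std::size_t nMid = vNeighbourLeaves.size() / 2;
	std::nth_element( vNeighbourLeaves.begin(),
	                  vNeighbourLeaves.begin() + nMid,
	                  vNeighbourLeaves.end() );
	const int nMedian = vNeighbourLeaves[ nMid ];

	return nOwnLeaves * 2 >= nMedian;
}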
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby old_death » 09 Jan 2010 03:28

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby ailurophobe » 09 Jan 2010 10:46

Actually, Shareaza already does a pretty good job of determining whether the computer meets the requirements to be a hub. You should just use the existing minimum requirements. I still think that my suggestion of measuring the actual performance as a hub, and comparing it to the actual performance of your neighbours using the leaf count as a proxy, is better than any sort of synthetic score we guess might correspond to value as a hub. Specifically, the only reliable way to know whether your internet connection is stable with hundreds of connections is to try running it with hundreds of connections. The only reliable way to know your internet connection can handle heavy UDP traffic correctly is to run it with heavy UDP traffic and see what happens. And you have to check these things dynamically! A minor update to your security software might change your UDP packet-handling overhead. Your ISP might add a content filter that messes up the leaf-to-hub links. The easiest way to check all of this is to run as a hub.
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby old_death » 09 Jan 2010 14:32

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby ailurophobe » 09 Jan 2010 18:25

What I was saying was that a synthetic rating is useless. You must meet ALL the minimum requirements for running as a hub to make sense, even if you meet some of them by a wide margin. A gigabit connection is useless if your computer has insufficient RAM, and a terabyte of RAM is useless if you have a 56k dial-up connection (see the sketch below). And if you do meet all the requirements, you CAN run as a hub; there is no benefit in trying to guess how well you can run as a hub, because you can just test and compare. In fact, the main benefit of my suggestion is not seeing whether you can run as a hub, since everybody who meets the minimum requirements can; the benefit is in finding the correct leaf count, and hopefully ending up with a network with a higher average number of leaves per hub (-> faster search). Spotting and removing hubs with odd connection problems is a secondary benefit. You also missed that if we measure the response latency, we do not need to evaluate memory, CPU, or connection separately for a synthetic rating; any effect they have will be directly measured in the correct ratio.

EDIT: I agree any rating, even if it is just a dynamic leaf count like I suggested, should be clearly noticeable. People love seeing how great their hardware is. The more detail available, the better. Like having a different colour coding if you have more leaves than any of your neighbours.
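The conjunction point in code form (field names and thresholds are purely illustrative, not Shareaza's actual minimums):

Code: Select all
// Hub eligibility is an AND of hard minimums, not a weighted sum.
struct SystemProfile
{
	unsigned nRamMB;
	unsigned nUploadKbps;
	bool     bStableUptime;
};

bool MeetsHubMinimums(const SystemProfile& sys)
{
	// A terabyte of RAM cannot compensate for a dial-up connection:
	// every requirement must hold on its own.
	return sys.nRamMB      >= 512
	    && sys.nUploadKbps >= 256
	    && sys.bStableUptime;
}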
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby diztrancer » 09 Jan 2010 19:25

This thread is the most boring thing on the whole internet :)
diztrancer
 
Posts: 222
Joined: 13 Jun 2009 15:41
Location: Ukraine

Re: Processor assessment

Postby ailurophobe » 09 Jan 2010 22:48

That would be pretty awesome if true. The competition for most boring on the internet is fierce.
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby old_death » 10 Jan 2010 00:27

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby mojo85 » 10 Jan 2010 08:34

Seems like you put a lot of thought into it, old_death. I would, however, like to remind you that the current method yields a desirable outcome and has worked well enough for the time being.

The G2 crawler is the best tool for trying to identify the best trend, and I believe dcat is the person whose opinion is worth hearing on the matter, due to his statistics and analytics background.

G2 Crawler - Density

For me it boils down to logical reasoning:

1) What are the rules of the current system? (It is in the code; I don't remember off the top of my head.)

2) Does the current system promote people who can barely run as a hub connected to 300 leaves? (How much bandwidth is required, and how much RAM/processor speed is needed to run 300 leaves effectively?)

3) Would it be simpler to change the current rules to get the desired outcome, as opposed to a totally synthetic benchmark method?

4) What is the desired outcome? (What sort of hub density and diversity are you trying to achieve? Ideally you want all hubs at 300 leaves; for resiliency you want hubs with low numbers.)

5) What are the benefits of limiting the number of hubs in the population? (Better clustering, resulting in quicker searches.)

6) What are the repercussions? (Loss of network resiliency; points of failure begin to narrow. Also, hubs are location-based: as night falls, a bunch turn off and others come to replace them.)

So once we clear up questions like these, we can begin to make changes to this network core. Hasty additions to the hub-promotion mechanism can have a dire, crippling effect on the network. The Moore's law thing is most often not the bottleneck; rather it is RAM, or mostly internet connection speed. To siavoshkc: make a method of testing internet speed that is least intrusive, and we can add it to the QuickStart wizard... this could solve most hub-related problems. Also, we have to ensure that a hub gets its maximum allotted bandwidth; say, if someone is downloading while being a hub, their download should come out of the bandwidth left over after the overhead of the network.
mojo85
 
Posts: 115
Joined: 27 Sep 2009 05:35

Re: Processor assessment

Postby ailurophobe » 10 Jan 2010 19:00

IMHO, the current system is pretty good. It accounts for all the relevant variables that are easily measured and keeps track of uptime to see if there is some non-obvious reason the computer should not become a hub. IIRC it even calculates a score, but the score is not used for anything because, like I said, you either can run as a hub or you can't.

G2 has pretty good resiliency. The backup hub and automatic hub promotion should keep us safe even with higher leaf counts. But you could certainly use bigger clusters, in addition to or even instead of higher leaf counts, to get faster searches.

Personally, my desired outcome would be a more heterogeneous network with a wider variety of leaf and neighbour counts: the same (or even higher) number of potential hubs, but fewer, higher-capacity active hubs. Currently we have hubs that have performance and stability problems and hubs that could do more. Making the leaf and neighbour counts more dynamic would help both ways.

So it would work something like this (sketched in code after the list):

1. Wait for hub promotion.
2. Check if you have enough available resources and meet the hub requirements; if not, go back to 1.
3. Accept the hub connection.
4. Connect to the minimum number of neighbours.
5. Wait for more connections, keeping the number of neighbours above the minimum.
6. Check if you have enough available resources and meet the hub requirements; if not, go back to 5.

So, for example, if you had a limit on upload bandwidth for the hub, it would stop accepting connections once it starts hitting the limit. You can even make the limits dynamic, say, dropping the available bandwidth when downloading something. You would also need the logic for deciding not to be a hub any more. Comparing to other hubs, like I suggested, would be easy and capable of responding to network changes.
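Steps 2-6 above as a leaf-acceptance check (all names and thresholds are placeholders rather than real Shareaza internals):

Code: Select all
struct HubState
{
	int nNeighbours;
	int nMinNeighbours;
	int nUploadKbpsFree;	// bandwidth currently left over
};

// Steps 2 and 6 share one test, so the limits stay dynamic: lower the
// free bandwidth while downloading and the hub stops taking leaves.
bool HasFreeResources(const HubState& state)
{
	return state.nUploadKbpsFree > 64;	// illustrative threshold
}

// Called for each incoming leaf request (steps 3-6).
bool ShouldAcceptLeaf(const HubState& state)
{
	if ( state.nNeighbours < state.nMinNeighbours )
		return false;	// step 4: secure the hub mesh first
	return HasFreeResources( state );	// step 6: back off when loaded
}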
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby old_death » 10 Jan 2010 19:24

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby mojo85 » 11 Jan 2010 00:18

I would argue that a netbook can support a hub when it is used solely as a hub, not when it is also sharing a full library or multitasking rich multimedia in another process. But in reality that wouldn't be the case, and I guess that is where the folly in the current minimum specs lies: they don't account for doing other things in the background. Maybe we can bump up the minimum specs to something like 2 GHz and 2 GB RAM?

The desired outcome is not to totally remove the 0-49-leaf hubs, but rather to reduce them to a third of what they are now. We need these hubs because they are transitional hubs: they bridge the dynamic changes that occur when hubs go offline (300 or so people suddenly needing a connection).
mojo85
 
Posts: 115
Joined: 27 Sep 2009 05:35

Re: Processor assessment

Postby ocexyz » 11 Jan 2010 03:46

I don't see any sense in saving any index for any length of time. The network must react dynamically to the current situation on the internet and be able to adapt to changing circumstances: connected users, available hubs, resources available on the particular machine at the moment (other running programs can limit them), and anything else important. So the parameters that matter should be measured and evaluated automatically, I think every hour or every few hours, and determine the decision: whether Shareaza becomes a hub, remains a hub, or stops being a hub, and how many leaves it can hold at this moment, adjusting the leaf count accordingly (the more programs using resources, the fewer resources are available for the hub). This would allow a machine to hold over 300 leaves when not used intensively, but limit leaves to 50 when the user is doing something resource-consuming. (A sketch of such an adjustment follows below.)
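One way the periodic re-evaluation could rescale the leaf cap to whatever resource is scarcest right now (numbers and names are illustrative only):

Code: Select all
#include <algorithm>

// fFreeCpu and fFreeBandwidth are fractions in [0, 1] measured at the
// periodic check; the cap is scaled by the scarcer of the two.
int AdjustMaxLeaves(double fFreeCpu, double fFreeBandwidth)
{
	const double fScale = std::min( fFreeCpu, fFreeBandwidth );
	const int nNewMax   = static_cast<int>( 300 * fScale );

	// Clamp between the 50-leaf floor and the 300-leaf default
	// mentioned in the post.
	return std::min( 300, std::max( 50, nNewMax ) );
}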

For me, the posts above imply that more important than the minimal requirements (which are already hardcoded) is how to allow more powerful machines to hold more leaves. The minimal requirements were established years ago, and it is difficult to imagine that currently used machines can't meet them. But it is easy to imagine that newer machines have more capacity than they use. Currently any change in the number of leaves must be done manually, so any hub with a leaf limit other than 300 is proof that the user manipulated that number. Those who lowered it were counting on lower resource consumption; those who raised it most probably know why they did so. The density graph shows that much less than 50% of the total number of hubs use the default value of 300. This shows that auto-accommodation to current network conditions is needed. Considering Moore's law, the time when the current hub requirements were hardcoded, and the number of hubs with more than 300 leaves, I think better machines are not being used as they could be (more leaves, etc.).

How many Shareazas are now trying to connect but remain unconnected because they can't find free hubs?
ocexyz
 
Posts: 624
Joined: 15 Jun 2009 13:09

Re: Processor assessment

Postby old_death » 12 Jan 2010 14:41

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

Re: Processor assessment

Postby ailurophobe » 12 Jan 2010 16:03

It might make sense to tweak the uptime requirement. Shareaza already keeps track of whether it is stable enough to run as a hub, but OD's examples (not having time to fill up with leaves / trying and failing to become a hub repeatedly) would mean it is not working as it should. Neither of those is supposed to happen. If they happen commonly, they should be reported as a bug.
ailurophobe
 
Posts: 709
Joined: 11 Nov 2009 05:25

Re: Processor assessment

Postby brov » 12 Mar 2010 14:30

Besides what was said above...

I don't know if Shareaza actually does this, but it is worth monitoring the local hub cluster and upgrading/downgrading if necessary. Say, for example, we are able to be a hub, but the cluster has been loaded at, say, 50% for an hour: don't upgrade, and downgrade if in hub mode with few or no leaves. If it has been loaded at 90% for an hour, an upgrade can fire. IMHO this is a good indication of whether a hub is really needed, and low-leaf hubs would be filtered out this way. One could say that at times when upgrades fire there would be too many hubs and/or the network would be somewhat unstable... But we can use scoring to calculate the "probability" of the next up/downgrade (low score: high probability of downgrading and low of upgrading; high score: low probability of downgrading and high of upgrading), or simply adjust the time to the event based on score, so high-scoring computers have a better chance to be a hub than low-scoring ones (as the event time is shorter for high-scoring nodes). (A sketch follows below.)
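The score-to-probability mapping in its simplest linear form; as the post says, the actual shape and numbers would need tuning, and the names are illustrative:

Code: Select all
#include <algorithm>

// fScore normalised to [0, 1]: 1 = very capable node.
double DowngradeProbability(double fScore)
{
	return std::clamp( 1.0 - fScore, 0.0, 1.0 );	// high score -> rarely downgrade
}

double UpgradeProbability(double fScore)
{
	return std::clamp( fScore, 0.0, 1.0 );			// high score -> readily upgrade
}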

I hope it's clear... The numbers are for example only and probably should be adjusted.

My 3 eurocents ;)
brov
 
Posts: 87
Joined: 05 Jul 2009 12:15

Re: Processor assessment

Postby old_death » 14 Mar 2010 11:30

old_death
 
Posts: 1950
Joined: 13 Jun 2009 16:19

