Cerberus (Alpha)
- Joined: Apr 11, 2007
- Posts: 514
- Location: Sweden
- Avatar Name: Thomas Skandis Stale
First off: I'm aware this hypothesis clearly violates Occam's razor. However, I feel it could explain why looting can be so inconsistent.
Assumption: The loot system works without memory.
Assumption: When checking individual runs, loot behaves very irrationally; much unlike what you'd expect from a simple "you have a 60% chance of getting a loot worth 120% of your TTin, and a 40% chance of a no-looter" or similar.
Assumption: TTout converges to roughly 90% of TTin over a longer timeframe.
To explain the hypothesis, I'll keep it as simple as possible:
What I propose is that once in a given timeframe, a fictional number is drawn. This could either be tied to you personally (e.g. once every 100 loot attempts have been made while hunting [or even every 100 loots from the system at all, which would incorporate all systems]) or happen on a more global level (e.g. every time the server processor usage hits 45.34894%, or similar).
This number, in turn, determines which subset of the loot table will be used when your avatar attempts to loot something. So, to keep it absurdly simple, it could look something like this:
Table #4 (per-attempt return, as % of TTin): 0, 0, 0, 115%, 0, 65%, 0
Table #1 (per-attempt return, as % of TTin): 20%, 20%, 25%, 0, 15%, 0
The only rule is that the sum over all attempts converges to an estimated 90% TTout (which my example above doesn't follow; lol).
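To make the mechanism concrete, here's a minimal sketch of how such a two-stage draw could work. Everything in it is made up for illustration (the table contents from my toy example above, the 100-attempt redraw interval, the equal selection weights); it's just the structure of the idea, not a claim about the actual loot code:

```python
import random

# Purely hypothetical numbers, copied from the toy tables above.
LOOT_TABLES = {
    4: [0.00, 0.00, 0.00, 1.15, 0.00, 0.65, 0.00],
    1: [0.20, 0.20, 0.25, 0.00, 0.15, 0.00],
}
REDRAW_EVERY = 100  # the "once in a given timeframe" part; here: every 100 loot attempts

def simulate(attempts, cost_per_attempt=1.0, seed=None):
    rng = random.Random(seed)
    active = None
    tt_in = tt_out = 0.0
    for i in range(attempts):
        if i % REDRAW_EVERY == 0:
            # the "fictional number": pick which subset of the loot table is active
            active = rng.choice(list(LOOT_TABLES))
        tt_in += cost_per_attempt
        # each individual loot attempt just samples from whichever subset is active
        tt_out += cost_per_attempt * rng.choice(LOOT_TABLES[active])
    return tt_out / tt_in

print(f"TT return over 10,000 attempts: {simulate(10_000, seed=42):.1%}")
```

Short runs would then look wildly different depending on which subset they happened to land in, while long runs average out to whatever the tables are tuned for.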
This would give an added layer of control, as the probability of getting a 300% profit margin on Chirpies would go from tiny to absurd. It would also make modifying the loot payout very easy. For instance, if someone gets a huge profit margin, it would be quite simple to see why and to fix that subset, instead of rewriting the entire loot randomization code.
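As a rough illustration of that tuning (again with the made-up tables above, and assuming purely for the sake of the example that both subsets get picked equally often), the long-run TT return is just the weighted average of the subset means, so bumping the entries of one subset moves the overall figure without touching the randomization logic:

```python
tables = {
    4: [0.00, 0.00, 0.00, 1.15, 0.00, 0.65, 0.00],
    1: [0.20, 0.20, 0.25, 0.00, 0.15, 0.00],
}
means = {k: sum(v) / len(v) for k, v in tables.items()}  # ~25.7% and ~13.3%
overall = sum(means.values()) / len(means)               # ~19.5%: far from 90%,
# which is exactly why my toy tables don't satisfy the 90% rule above.
print({k: f"{m:.1%}" for k, m in means.items()}, f"overall ≈ {overall:.1%}")
```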
---
As far as I can see, the only plausibility problem this hypothesis has is that it disagrees with Occam's razor. But I see that as justified, since the added layer of control and the ease of debugging it would give the developers outweigh the implementation problems.
What I want now is for someone to break this theory. Partially or completely, I'll be satisfied either way.