Other players mining before you affects your hit rate

I have an off-topic question: what would the minimum sample size be in order to draw a statistical conclusion from the data?
 
The minimum sample size needed for a reliable conclusion varies. For surveys, around 400 samples are often used for a 95% confidence level and a 5% margin of error. In this game, factors like loot variability and waves can affect this, so more samples might be needed for accuracy.
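The "around 400" figure falls out of the standard sample-size formula for estimating a proportion. A minimal Python sketch, using the textbook defaults (z = 1.96 for 95% confidence, worst-case p = 0.5):

```python
from math import ceil

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Minimum sample size for estimating a proportion.

    Standard formula: n = z^2 * p * (1 - p) / margin^2,
    with the worst-case p = 0.5 when the true rate is unknown.
    """
    return ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())  # 385
```

With a 5% margin of error this gives 385, which is where the commonly quoted "around 400" comes from; tightening the margin of error grows the requirement quickly.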
 
Perfect, thanks. I am going to hate statistics. 20 samples per run at a cost of 3.1 PED (yes, I am going to use an MD-1): roundup(400/20, 0) * 3.1 is 62 PED, about $6 total. Also, the locations I am going to be sampling from are not exactly random, but that is the point: I want to see by how much my sample hit rate differs from the mean.
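As a sanity check on the arithmetic above (the 10 PED = 1 USD rate is Entropia's fixed exchange rate; run cost and samples per run are the figures stated above):

```python
from math import ceil

samples_needed = 400
samples_per_run = 20
cost_per_run_ped = 3.1   # stated MD-1 run cost
ped_to_usd = 0.10        # Entropia's fixed 10 PED = 1 USD rate

runs = ceil(samples_needed / samples_per_run)   # 20 runs
total_ped = runs * cost_per_run_ped             # 62 PED
total_usd = total_ped * ped_to_usd              # ~$6.20
print(runs, total_ped, total_usd)
```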
 

Maybe this can help

OMW to Caly from Ark now. I have been mining on the same spot this month with all the different finders I use (25 so far), from the MD-10 up to the TM6, and used depth enhancers on some finders to reach vesp.
Here again, no amps or pre-amped finders were used, nor did I use pets, and I didn't rush; just relaxed mining, as in run, drop, extract, run, drop, extract :p
All costs are calculated into the results, so this is net.
If other miners were there, I kept going and didn't see any change in returns.

Here are my results:

EDIT: Results from all these different finders were very different, and going deeper doesn't mean more profit ...
Total drops: 3938
Total PED cost: 4299.23
TT return: 4440.94 PED = 103.30% = +141.71 PED
After sales: 5032.10 PED = 117.05% = +732.86 PED
 
I am nowhere near that level of turnover. Right now I am working on finding the best method for me. I don't mean to brag, but at the moment it is capable of about a 10% increase over the average hit rate. It also looks like some areas are biased toward either ore or matter. My hunches and intuition are also getting a lot better at judging whether an area is good or not, but I still complete the run in order to gather the data and confirm the hunch.

I am currently using an MD-1 and have no interest in scaling up until I have something concrete that I can use.
 
There’s a post from a while back where I mentioned it with the details, but you’d just have to use a power calculator to determine the minimal sample size. At least for the tests done in my threads, the sample size was already more than enough. More may be needed depending on what the response variable is, the number of comparisons, etc. I’ll see if I can dig the numbers up again someday. IIRC, around 400 was a good amount for the finder decay testing, in part because it dealt with small differences.

As for variability or waves, those should already be incorporated into the statistical tests, or especially into the experimental design in the latter case. If you are concerned about waves while testing the difference in average MU between treatments or something like that, waves shouldn’t be a confounding factor as long as you’re switching between treatments (e.g., amped or not) each claim.

That’s partly why you don’t test one method X-hundred times and then try the other method the same number of times, especially if it’s over the course of days or weeks. The two aren’t statistically comparable at that point. If you keep the treatments as close to pair-wise as possible, then both get the same background variation from nuisance factors.
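The power-calculator step above can be sketched with the usual normal-approximation formula for a two-sample comparison. The effect size here is Cohen's d; the alpha = 0.05 and 80% power defaults are conventional choices, not values from the thread:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample mean comparison.

    effect_size is Cohen's d (difference in means / pooled SD).
    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

print(n_per_group(0.2))  # 393
print(n_per_group(0.5))  # 63
```

Detecting a small effect (d around 0.2) needs close to 400 samples per group, which lines up with the finder-decay figure mentioned above; larger effects need far fewer.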
 
Makes a lot of sense.

You want to snapshot a control and an experiment back to back in order to reduce the impact of time as a factor.
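The back-to-back point can be illustrated with a toy simulation; the drift, noise level, and treatment effect below are invented purely for illustration:

```python
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 0.5  # invented treatment effect for the demo

def claim(t, treatment):
    """One simulated claim: slow time drift + treatment effect + noise."""
    drift = t / 100.0                      # nuisance factor, e.g. a loot wave
    effect = TRUE_EFFECT if treatment == "B" else 0.0
    return drift + effect + random.gauss(0, 1)

# Sequential design: all of A first, then all of B.
# Drift is confounded with treatment, so the estimate is badly biased.
seq_a = [claim(t, "A") for t in range(200)]
seq_b = [claim(t, "B") for t in range(200, 400)]
print(round(mean(seq_b) - mean(seq_a), 2))   # near 2.5, not 0.5

# Paired design: A and B back to back at each time step.
# Drift cancels within each pair, leaving only the treatment effect.
paired = [claim(t, "B") - claim(t, "A") for t in range(200)]
print(round(mean(paired), 2))                # near the true 0.5
```

Running both treatments at essentially the same time cancels the drift within each pair, while the sequential design folds the drift into the treatment estimate.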
 