Well I previously stated two rules:
1. The larger the step we make in ESI TT value in our data, the lower the relative noise.
2. The larger the step we make in ESI TT value in our data, the harder it is to get data (there will be fewer data points).
My conclusion is:
3. We have to take the middle road.
Here is how I propose to do this:
1. Gather all available data in a single data file with a simple format.
2. Write a simple program that does the following:
It reads all data from the data file into an integer array [0 - 125000]:
array[0] stands for an ESI with TT 0.00
array[14] stands for an ESI with TT 0.14
array[125000] stands for an ESI with TT 1250 PED
So if the data file says:
1,40 543
then array[140] should contain the value 543.
Now I hope it is clear what info the array should contain.
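A minimal sketch of this loading step in Python, assuming each line of the data file looks like the example above ("1,40 543" - a TT value with a comma as the decimal separator, a space, and the data value):

ARRAY_SIZE = 125001   # indices 0 .. 125000, i.e. TT 0.00 .. 1250.00 PED

def load_data(filename):
    """Read the data file and fill the integer array described above."""
    data = [0] * ARRAY_SIZE
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            tt_text, value_text = line.split()
            # "1,40" -> 1.40 -> index 140
            index = round(float(tt_text.replace(",", ".")) * 100)
            data[index] = int(value_text)
    return data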
Now the procedure does the following job.
It takes step 1, i.e. 0.01 (the smallest possible step),
iterates through the array counting the values (data points) we have,
and stores the result in a file.
After that it increases the step to 2 (0.02), goes through the array incrementing the index by two, counts the data points, and stores the result in the file.
After that it increases the step to 3 and does the same.
It keeps increasing the step in that manner until the step is too large to be interesting. A sketch of this counting loop is below.
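Here is one way the counting loop could look; the maximum step and the output file name are just placeholders I picked, not anything agreed on:

MAX_STEP = 1000   # assumption: stop at step 1000, i.e. 10.00 PED

def count_data_points(data, max_step=MAX_STEP, out_filename="step_counts.txt"):
    """For each step, count how many non-empty slots we hit at that stride."""
    with open(out_filename, "w") as out:
        for step in range(1, max_step + 1):
            points = sum(1 for i in range(0, len(data), step) if data[i] != 0)
            # column 1: step index, column 2: step in PED, column 3: data points
            out.write(f"{step}\t{step / 100:.2f}\t{points}\n")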
After this procedure we will have results about our data that tell us how much data we have if we choose a specific step.
Naturally step 0.01 (the smallest) will have all the data - the largest number of data points.
And the tendency will be: the larger the step, the fewer data points we will have.
But the procedure will tell us what step is reasonable to choose with the data we have - a step that is large enough, but that still has reasonably many data points.
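If we wanted the program to suggest a step automatically, one possible rule is: take the largest step that still keeps at least some minimum number of data points. The min_points value below is only an illustrative assumption, since "reasonably many" is not pinned down here:

def pick_step(data, max_step=1000, min_points=100):
    """Return the largest step that still keeps at least min_points data points."""
    best = 1
    for step in range(1, max_step + 1):
        points = sum(1 for i in range(0, len(data), step) if data[i] != 0)
        if points >= min_points:
            best = step
    return best   # multiply by 0.01 to get the TT step in PED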
The procedure is easy to write, but don't count on me - I am a bit lazy.
The main goal of the procedure is to analyze the data and tell us the best way to exploit the data we have.