Fill up ESI data...

Status
I changed the fit for 1-550 to a polynomial and it reduced the % error substantially with my data. I'm still off by quite a bit on the earlier test cases, so I guess the rounding errors in our data are throwing off the fit equation.

Dunno. Anyway, I updated the script for that range.
 
Bump to beg

We need data. Must have data.

I'm making good progress but need less skeletal data sets.

Remember this mess from my slope data?
[Image: rawslope.png (slope vs. level)]

It showed such variation that we were entertaining the possibility that there were two or more "weights" of skill.

Behold:
[Image: sinslope.png (slope minus sinusoid vs. level)]

After removing a cosine (the derivative of the sine that matched the tt data for the lower levels - this is slope data, so the sine shows up as a cosine), the data starts falling into lines. There is some error there, but things are looking promising.
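(To spell out the algebra: if the tt curve contains a sine term A*sin(2*pi*x/P) for some period P, then the slope data contains its derivative, (2*pi*A/P)*cos(2*pi*x/P) - same period, amplitude scaled by 2*pi/P - which is why it's a cosine that gets subtracted here.)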

I need some more data for high skill levels.

Please. :)
 
I think the main problem here is getting accurate data.
I mean, the formula should be easiest to derive from the data for small ESIs, because there you can easily get data for every ESI value.
On the other hand, getting precise data for those small ESI values is troublesome, because you can't even know for sure how much skill you get from a 0.01 TT ESI: neither the ESI value nor the skill level is really discrete - both are rounded for display.
Bottom line:
we lose precision by saying that you get 10 skill levels for 0.01 ESI, because there can be two different 0.01 ESIs (say one 0.0115 and the other 0.0191) and two different 10th levels - 10.138 and 10.998.
That is why I am suggesting we would probably get more accurate results if we played with larger increments - those small fluctuations in precision would wash out.
So we do not try to get the function from this data:
ESI -> SKILL
0.01 --> 10
0.02 --> xx
0.03 --> xx
...
But from this data:
0.10 --> xx
0.20 --> xx
0.30 --> xx
And simply discard the rest of the data. The first approach seemingly provides more data, but also considerably more noise.
And I think the second approach preserves the properties of the function we seek, because I really think we are dealing with some kind of exponential function that is relatively simple - I mean, of course there is no way to be sure, but it is more likely the programmers used a function that is relatively fast to compute, simple, and easy to predict.
 
ESI -> SKILL
0.01 --> 10
0.02 --> xx
0.03 --> xx
...
But from this data:
0.10 --> xx
0.20 --> xx
0.30 --> xx
And simply discard the rest of the data. The first approach seemingly provides more data, but also considerably more noise.

I understand what you are saying, and I have removed some data points that clearly were rounding extremes. However, the approach you describe doesn't guarantee any better protection from rounding errors, because there is no way to know that a particular chosen data point is not itself an extreme.

The second problem is that the gap in skills from 0.10 to 0.20 tt is HUGE. The only range where my present fit has a large error margin is 1-400 skills, which represents less than 0.70 PED in total tt. What you're describing is exactly why the error margin of my fit is large in the low-skill region.

However, now that I can remove the sinusoidal component of the function, your approach might work better; it is in essence what I've done wherever I see lots of deviation of the data points from a continuous line.

And I think the second approach preserves the properties of the function we seek, because I really think we are dealing with some kind of exponential function that is relatively simple - I mean, of course there is no way to be sure, but it is more likely the programmers used a function that is relatively fast to compute, simple, and easy to predict.

There are at least two different functions superimposed at all points, one sine and one exponential, but the coefficients (and the functions themselves) also change at two or more points in the progression.

I have already found the algebraic form of the sinusoidal component, which simplifies the rest of the matching. Now I just need data from the 7-9k range to connect my slope data (5-17k) to the exact tt data we've collected up to about 7.5k. There is a discontinuity in the slope around 8k skills, and I need some actual tt data points to know what happens there.
 
We need data. Must have data.


RIFLE 10135 + ESI 9.80 = RIFLE 10147
RIFLE 10135 + ESI 11.23 = RIFLE 10148
RIFLE 10135 + ESI 11.67 = RIFLE 10149
RIFLE 10135 + ESI 14.63 = RIFLE 10152
RIFLE 10135 + ESI 15.11 = RIFLE 10153
RIFLE 10135 + ESI 110.80 = RIFLE 10285
RIFLE 10135 + ESI 147.76 = RIFLE 10334
RIFLE 10135 + ESI 191.15 = RIFLE 10384
RIFLE 10135 + ESI 273.77 = RIFLE 10459

BLP 10083 + ESI 1.20 = BLP 10084
BLP 10083 + ESI 10.01 = BLP 10093
BLP 10083 + ESI 10.95 = BLP 10094
BLP 10083 + ESI 12.40 = BLP 10095
BLP 10083 + ESI 12.78 = BLP 10096
BLP 10083 + ESI 102.77 = BLP 10204

MMS 7521 + ESI 11.46 = MMS 7539
MMS 7521 + ESI 17.04 = MMS 7548
MMS 7521 + ESI 33.17 = MMS 7576
MMS 7521 + ESI 62.22 = MMS 7633

Hope it helps :)
 
I understand what you are saying, and I have removed some data points that clearly were rounding extremes. However, the approach you describe doesn't guarantee any better protection from rounding errors, because there is no way to know that a particular chosen data point is not itself an extreme.

Well, it does guarantee better protection.
Let me show you what I mean.
Let's look at two examples:
1.
9.98 1877 [possible range {9.98(000); 9.98(999)}]
9.99 1878 [possible range {9.99(000); 9.99(999)}]
Two worst cases:
9.98(000) -> 1877
9.99(999) -> 1878
The difference in possible ESI TT value is 0.01(999), or ~2 hundredths.
9.98(999) -> 1877
9.99(000) -> 1878
The difference in possible ESI TT value is 0.0000...1, or effectively almost 0.
2.
0.30 216 [possible range {0.30(000); 0.30(999)}]
0.40 268 [possible range {0.40(000); 0.40(999)}]

Two worst cases:
0.30(000) -> 216
0.40(999) -> 268
The difference in possible ESI TT value is 0.10(999), or ~11 hundredths.
0.30(999) -> 216
0.40(000) -> 268
The difference in possible ESI TT value is ~0.09, or ~9 hundredths.

The absolute noise is the same for both samples, but the relative error is much lower for the second.

Two conclusions:
1. The larger the step we take in ESI TT value, the lower the relative noise.
2. The larger the step we take in ESI TT value, the harder it is to get data (there will be fewer data points).
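
A quick Python sketch of the worst-case bound in these two examples (my own illustration; it assumes, as above, that the displayed TT is truncated to 2 decimals, so the true value lies in [shown, shown + 0.01)):

Code:
def tt_diff_bounds(shown_lo, shown_hi):
    # worst-case range of the true TT difference between two truncated readings
    lo = shown_hi - (shown_lo + 0.01)   # upper reading at its floor, lower at its ceiling
    hi = (shown_hi + 0.01) - shown_lo   # upper reading at its ceiling, lower at its floor
    return max(lo, 0.0), hi

for a, b in [(9.98, 9.99), (0.30, 0.40)]:
    lo, hi = tt_diff_bounds(a, b)
    print(f"{a} -> {b}: true TT difference in [{lo:.3f}, {hi:.3f}]")

The first pair gives [0.000, 0.020] (up to double the nominal 0.01 step); the second gives [0.090, 0.110] (within 10% of the nominal 0.10 step).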
 
Well, I previously stated two rules:
1. The larger the step we take in ESI TT value, the lower the relative noise.
2. The larger the step we take in ESI TT value, the harder it is to get data (there will be fewer data points).

My conclusion is:
3. We have to take a middle road.

Here is how I propose to do it:
1. Gather all available data into a single data file with an easy format.
2. Write a simple program that does the following:
Read all the data from the data file into an integer array [0 - 125000]:
array[0] stands for an ESI with TT 0.00
array[14] stands for an ESI with TT 0.14
array[125000] stands for an ESI with TT 1250 PED
So if a data line says:
1.40 543
then array[140] should contain the value 543.
Now I hope it is understandable what the array should contain.
The procedure then does the following job.
It takes step 1, i.e. 0.01 (the smallest possible step),
iterates through the array counting the values (data points) we have,
and stores the result in a file.
After that it increases the step to 2 (0.02), goes through the array counting data points while incrementing the index by two, and stores the results in the file.
After that, increase the step to 3 and do the same.
Increase the step in this manner until the step is too large to be interesting.
After this procedure we will have results about our data that say how many data points we have for any specific step we might choose.
Naturally step 0.01 (the smallest) will include all data - the largest number of data points.
And the tendency will be: the larger the step, the fewer data points we will have.
But the procedure will tell us what step is reasonable to choose with the data we have - one that is large enough, but still has reasonably many data points.

The procedure is easy to write, but don't count on me - I am a bit lazy :)

The main goal of the procedure is to analyze the data and tell us the best way to exploit what we have.
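
A minimal sketch of that procedure in Python, for whoever picks it up (untested; the file names and the whitespace-separated input format are placeholders):

Code:
# Read "TT level" lines (e.g. "1.40 543") into an array indexed by TT in
# hundredths of a PED, then count available data points for each step size.
levels = [None] * 125001              # index 0 = TT 0.00 ... 125000 = TT 1250.00

with open("esi_data.txt") as f:       # placeholder file name
    for line in f:
        tt, level = line.split()
        levels[round(float(tt) * 100)] = int(level)

with open("step_counts.txt", "w") as out:
    for step in range(1, 101):        # step 1 = 0.01 PED ... step 100 = 1.00 PED
        count = sum(1 for i in range(0, 125001, step) if levels[i] is not None)
        out.write(f"step {step / 100:.2f} PED: {count} data points\n")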
 
Haven't read the whole thread - it all looks a bit too clever for me - but I'd just add the following suggestion if it hasn't been made already:

In auction, when checking an implant's tt value, is it possible to use the % markup (which appears to 2 decimal places) and extrapolate from the current bid a much more accurate tt value for the implant? This may be of particular use for those chips with high markups; say coloring, where a 0.5 PED chip may go for something like 300 PED - when you divide the 300 by the markup you'll get a more accurate tt value than the one given in the item info... won't you?
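(Worked through: tt = bid / (markup / 100). E.g. a 300.00 PED bid at 60000.00% markup - made-up numbers matching the example above - gives 300 / 600.0000 = 0.5000 PED, a couple more decimals of precision than the rounded item-info value.)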
 
Haven't read the whole thread - it all looks a bit too clever for me - but I'd just add the following suggestion if it hasn't been made already:

In auction, when checking an implant's tt value, is it possible to use the % markup (which appears to 2 decimal places) and extrapolate from the current bid a much more accurate tt value for the implant? This may be of particular use for those chips with high markups; say coloring, where a 0.5 PED chip may go for something like 300 PED - when you divide the 300 by the markup you'll get a more accurate tt value than the one given in the item info... won't you?

Yes, that's a good idea, ano. The bigger uncertainty in the range I'm concerned with now (7k+) is in the level, so it doesn't help there, but it would certainly be a great way to collect super-accurate data for <1k skills.

+rep to Lorfat for his help. I'm puzzling over the BLP data because there is a large variation across the four chips of similar size. Going with a straight mean gives me an outlier.

Dawis, my effort is mostly in fitting a good equation to the data that others have provided. If you can collect a better data set than what jdegre and MGMighty have put together, then I'll definitely use it as a test set. I'm not going to go back and repeat, myself, things that have already been done satisfactorily.
 
I've been reading through this thread - fascinating stuff, by the way - and there's lots of detailed analysis, but could someone paint an overall picture of what it really means?

I know from my own experience this is not easy to do until you have all the data and complete the analysis, and can then say - ah, this is how it works - and publish your results. However, it all looks so interesting, and yet reading through this I feel I cannot see the wood for the trees.

For example, there are graphs which show (if I understand correctly) the TT value of a skill increment of 1 plotted against skill level. This centres around zero, so some skill increments are negative. That makes no sense to me, as it would imply that at some skill levels the TT would pay you to accept a skill increase of 1 point.:scratch2: Probably I've misunderstood what the graph is plotting, but this sine wave idea seems very weird. Why have a value that oscillates? I know the TT value is all code-generated, not a natural phenomenon, but is it possible there's another explanation for the perceived sine-wave variation? Like rounding at an early stage of the calculation inside the servers, because the programmer used an integer variable by mistake?

I think I understand the objective here - to be able to predict the TT value of any skill level (is it just an assumption that this is the same for all skills?). But if there is a lot of variation beyond rounding effects, doesn't that mean something else is also affecting the observed TT values? I mean that if the formula were, say, TT = skill delta * X (it's not, I know - this is just for illustration), then you could find a value for X +/- some error due to rounding. But if more than a couple of data points (which could be put down to measurement error) fall outside the rounding error range, then the formula is wrong. Because code is deterministic: if the skill valuation code in the game implements some formula, it will always give the same answer. If it doesn't, it's not going to be power surges on the server - the code will have been written to bring in the other factors.

So I guess what I'm saying is: could someone summarise the tentative theories you're hoping to prove/disprove with the data?

I'm keen to see the outcome of this exercise - a fascinating project.
 
So I guess what I'm saying is: could someone summarise the tentative theories you're hoping to prove/disprove with the data?

I'm keen to see the outcome of this exercise - a fascinating project.

Hi KP,
The objective is, as you said, to find the function that maps a number of skill levels to the tt value of a chip holding these skills.

The immediate application of this function would be a calculator to work out how valuable the skills in your avatar are. There is one at the moment at www.entropiatools.com, which is excellent, but unfortunately it is outdated, since the function was changed in VU 8.9 (I think) and it has not been updated with the new one.

Another useful application would be to know how much it would cost to chip in (and which skills) to reach a certain level in a professional standing, so you can unlock a certain skill. And to calculate the cheapest way to do it, of course. (I'm working on this right now... may have something ready soon...)

The function is pretty clear up to a certain level, 6-7k, but for higher levels it is still quite unknown. Actually we don't have a clue (well, at least I don't) what the tt value of 15k skill levels is, for instance.

Regarding the sine wave and so on: no, there are definitely no negative skill gains :) It is simply that the function can be approximated by the curve y=x*exp(x) up to a level of 6-7k quite accurately; but on top of that, you get a sine-like curve superimposed on the exp curve.

The sine-like curve can be approximated too, to get a super-exact approximation for the range up to 6-7k, but I think that for practical purposes it is quite OK to approximate the overall curve via polynomial interpolation, the same way it was done in the skill calculator I mentioned earlier.

So, this is basically it. The problem now is that to improve the curve approximation for higher levels, we need people with uber skills to check very big chips and see how their skill gain goes for a certain tt value.

Yesterday I found a massive marksmanship chip (>1k ped tt) in Joker's shop. It would be great if someone with an MMS level around 8-9k could check the chip and post their skill gain.

Cheers,
/jdegre.
 
It is simply that the function can be approximated by the curve y=x*exp(x) up to a level of 6-7k quite accurately; but on top of that, you get a sine-like curve superimposed on the exp curve.

Thanks for the overview - this makes a lot of sense.

So the total value of skill level x = x*exp(x)? Ignoring the sine component, and ignoring constant factors? Just making sure this is not the value of 1 skill increment, and that I'm understanding what you are representing with the formula - sorry for being pedantic.

Why is it called an esi, when it has skills in?:scratch2: :laugh:
 
Thanks for the overview - this makes a lot of sense.

So the total value of skill level x = x*exp(x)? Ignoring the sine component, and ignoring constant factors? Just making sure this is not the value of 1 skill increment, and that I'm understanding what you are representing with the formula - sorry for being pedantic.

Why is it called an esi, when it has skills in?:scratch2: :laugh:

Yes, the function is something like y = x*exp(a*x+b), ignoring the sine-like component; you can find "a" and "b" in previous posts.
"x" is the skill level, and "y" is the tt value of the chip, in peds. (It works more or less in the interval 500 < x < 6000.)

Skill chips are not called ESIs when they have skills in them; they are called "skill implants".
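
As a tiny code sketch of the mapping above (Python; "a" and "b" are left as inputs, since the fitted values live in the earlier posts and are not repeated here):

Code:
import math

def chip_tt(level, a, b):
    # tt value (PED) of a chip holding `level` skill points, per the
    # y = x*exp(a*x + b) form above; holds roughly for 500 < level < 6000,
    # sine-like component ignored.
    return level * math.exp(a * level + b)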

/jdegre.
 
OK, Bogger kindly gave me his NRF data in exchange for trying to find a rough estimate of a decent chipping path to Commando for him.

I used the calculator linked earlier in the thread to estimate the costs of ESI (in 500-skill batches for each relevant skill) where I could; otherwise I just guessed (anything > 7500).

If someone has the time to check that I've done nothing stupid I'd be grateful; the results can be seen here:

http://spreadsheets.google.com/pub?key=pEb5Qm7V3AE1gPvuZadP5dA

Keep up the good work, it's useful stuff.
 
OK, Bogger kindly gave me his NRF data in exchange for trying to find a rough estimate of a decent chipping path to Commando for him.

I used the calculator linked earlier in the thread to estimate the costs of ESI (in 500-skill batches for each relevant skill) where I could; otherwise I just guessed (anything > 7500).

If someone has the time to check that I've done nothing stupid I'd be grateful; the results can be seen here:

http://spreadsheets.google.com/pub?key=pEb5Qm7V3AE1gPvuZadP5dA

Keep up the good work, it's useful stuff.

Hi Jimmy,
I've run Bogger's data through my calculator and I've got the following results:

Initial Laser Pistoleer: 5588
Final Laser Pistoleer: 7009
Total Cost: 56002 ped

Aim: 4473
Combat Reflexes: 4214
Combat Sense: 4913
Handgun: 10144
Marksmanship: 6205
Weapons Handling: 6015
Laser Weaponry Technology: 7595
Dexterity: 3648

So, even though the function I've used is different from Doer's, and the resulting final skills are different from yours, the total cost is amazingly similar (and also the subset of skills to chip and not to chip); nice... :)

My program uses basically the same approach as your spreadsheet, but in an automated way, so I can do more iterations with a smaller increment (you used 500, and I use 50).
Currently the program is written in Java; I'm working on a web-based version so other people can use it easily.
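
In rough Python, the batch idea looks something like this (my reconstruction, not the actual Java code; it assumes professional standing is a weighted sum of the contributing skills, and a chip_cost(skill, level, batch) helper - a placeholder - that prices raising a skill by `batch` points):

Code:
def optimal_path(skills, weights, target, chip_cost, batch=50):
    # Greedy batch optimizer: at each step, chip the skill with the lowest
    # ped cost per point of professional standing gained.
    plan = []
    def prof():
        return sum(weights[s] * lvl for s, lvl in skills.items())
    while prof() < target:
        best = min(weights, key=lambda s:
                   chip_cost(s, skills[s], batch) / (weights[s] * batch))
        plan.append((best, skills[best], skills[best] + batch))
        skills[best] += batch
    return plan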

I've noticed the "10% uncertainty" margin you added at the end of your calcs, and yes, unfortunately it can be even higher than that, especially for the high skill levels (~10k) where the "level->tt" function is not fully known yet.

Cheers,
/jdegre.
 
Thanks Jdegre, yeah, doing it by hand in increments of 50 wouldn't have been much fun :D

Nice to see them come out reasonably similar :)

Your approach particularly picked out more Handgun, which isn't necessarily all that surprising, as I had to go by complete guesswork for Handgun skill vs. implant tt :D

Bogger will be pleased - 750 PED less :D
 
ty Jdegre and Jimmy ;) now I just wait as the price of ESI continues to tumble lol
 
Sorry I've been super busy lately. KP708, what you were probably looking at was one of the plots of the sinusoidal part of the curve. As jdegre mentioned, the level->tt value function has the form y = z*sin(x) + x*exp(x) from 550 to at least 7.5k. Somewhere above 7.5k the nature of the function changes significantly. Unfortunately, no single fit of the x*exp(x) curve covers the entire range from 550 to 7k, so I have been breaking it into segments and fitting each segment with its own exp() function. The sine function stays the same throughout, though, except for a possible change of factor in a couple of places.

I do have to disagree with jdegre that a poly fit will be sufficient for optimal results (at least not without breaking it into segments short enough to approximate the sine, which we don't have enough data for at high levels). You can see that the error is greatly reduced by including the sin function in the fit:
[Image: ESI-error.png]

The average error is much smaller with the sin+exp fit (black line) than with just the exp() function (red line). Sure, the sine can be ignored for low skill values, but by the time you get to ~8k skills the magnitude of the sine term is about 20 PED, which as a percentage may not be much, but in absolute terms is huge. With my fit that includes a sine wave, I get less than 1% error over almost the entire range we have data for.

Here's the fit of my current function to the data we have:
[Image: ESI-fit.png]

I spent some time creating a new spreadsheet to simplify trying different fits and data sets. When we have a good fit to 8k+, I will focus on the chip calculator I have been wanting to build, having been detoured by the need to finish the skill<->tt fit.

This is not the function currently used by the php script I linked earlier. I will update that script when I'm satisfied with the 900-1.1k range, which is a bit high with the parameters I'm using now.
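
For anyone who wants to replicate this kind of segment fit, a rough sketch (my own, in Python with scipy, not the actual spreadsheet; the period-500 sine and the starting guesses are assumptions):

Code:
import numpy as np
from scipy.optimize import curve_fit

def tt_model(x, a, b, z, phase):
    # one exp segment plus a fixed-period sine (period 500 assumed)
    return z * np.sin(2 * np.pi * x / 500 + phase) + x * np.exp(a * x + b)

def fit_segment(levels, tts, p0=(1e-3, -7.0, 1.0, 0.0)):
    # levels, tts: arrays of (skill level, chip tt) points for one segment;
    # p0 is a made-up starting guess
    params, _ = curve_fit(tt_model, levels, tts, p0=p0, maxfev=20000)
    return params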
 
My program uses basically the same approach as your spreadsheet, but in an automated way, so I can do more iterations with a smaller increment (you used 500, and I use 50).
Currently the program is written in Java; I'm working on a web-based version so other people can use it easily.

Sorry for quoting myself :)
Just wanted to share a draft version of the "chipping optimizer" program (web version). Try this link:

http://jdegre.net/pe/unlocker.php

It is quite straightforward to use: just select the profession from the drop-down list (only a few available so far), enter some skills (your current level is calculated as you type), enter the target level, and click "go"; you should get the optimal path to your target level, and the total cost. Levels are displayed on the 1-10k scale to avoid decimals.

If you have locked skills, leave that field empty and it will not be considered in the calculations.

Let me know what you think.

Cheers,
/jdegre.

PS: Keep in mind that this is very dependent on the skills->tt function, which is extremely unreliable for levels > 7-8k, so don't put too much trust in those values.

PS2: Some interesting finds: how much it costs to unlock MMS (~275 ped), RDA (~1150 ped), Seren (~4200 ped), Coolness (~11k ped), etc., starting from level 1.
 
Sorry for quoting myself :)
Just wanted to share a draft version of the "chipping optimizer" program (web version). Try this link:

http://jdegre.net/pe/unlocker.php

It is quite straightforward to use: just select the profession from the drop-down list (only a few available so far), enter some skills (your current level is calculated as you type), enter the target level, and click "go"; you should get the optimal path to your target level, and the total cost. Levels are displayed on the 1-10k scale to avoid decimals.

If you have locked skills, leave that field empty and it will not be considered in the calculations.

Let me know what you think.

Cheers,
/jdegre.

PS: Keep in mind that this is very dependent on the skills->tt function, which is extremely unreliable for levels > 7-8k, so don't put too much trust in those values.

PS2: Some interesting finds: how much it costs to unlock MMS (~275 ped), RDA (~1150 ped), Seren (~4200 ped), Coolness (~11k ped), etc., starting from level 1.

Very nice jdegre :)
 
Sorry for quoting myself :)
Just wanted to share a draft version of the "chipping optimizer" program (web version). Try this link:

http://jdegre.net/pe/unlocker.php

It is quite straightforward to use: just select the profession from the drop-down list (only a few available so far), enter some skills (your current level is calculated as you type), enter the target level, and click "go"; you should get the optimal path to your target level, and the total cost. Levels are displayed on the 1-10k scale to avoid decimals.

If you have locked skills, leave that field empty and it will not be considered in the calculations.

Let me know what you think.

Cheers,
/jdegre.

PS: Keep in mind that this is very dependent on the skills->tt function, which is extremely unreliable for levels > 7-8k, so don't put too much trust in those values.

PS2: Some interesting finds: how much it costs to unlock MMS (~275 ped), RDA (~1150 ped), Seren (~4200 ped), Coolness (~11k ped), etc., starting from level 1.

Awesome tool - I'd done some calcs for myself heading towards Commando, but had only calc'd up a couple of prof standings, working out which would be best to chip, and had planned to recalc once I got those levels. Your tool, however, not only confirmed my calcs as to which skill was cheapest for me to chip for prof standing, but also my guess that it would still be the cheapest to chip.

I did notice one small issue with the figures, though. When I entered my skills from a couple of days ago, it told me it'd cost 22225 to chip open Commando. When I changed my rifle to what it was today before logging off (19 higher), it told me the cost is now 22326. I'm guessing this is related to rifle being in the >7k range. Still, damn tempting to bite the bullet and swallow the chips.
 
Awesome tool - I'd done some calcs for myself heading towards Commando, but had only calc'd up a couple of prof standings, working out which would be best to chip, and had planned to recalc once I got those levels. Your tool, however, not only confirmed my calcs as to which skill was cheapest for me to chip for prof standing, but also my guess that it would still be the cheapest to chip.

I did notice one small issue with the figures, though. When I entered my skills from a couple of days ago, it told me it'd cost 22225 to chip open Commando. When I changed my rifle to what it was today before logging off (19 higher), it told me the cost is now 22326. I'm guessing this is related to rifle being in the >7k range. Still, damn tempting to bite the bullet and swallow the chips.

Thanks Trabin, glad to see it is (or may become) a helpful tool.

The difference you observed from one day to another is normal; most likely it is due to changes in implant prices. The tool uses current peauction prices, so the total cost changes as those prices do, and most importantly, the optimal set of implants (and their tt values) might be different as prices change. This is not a problem; otoh, it is one of the most interesting results of the tool: the optimal path (like EU itself) is dynamic... :)

/jdegre.
 
As jdegre mentioned, the level->tt value function has the form y = z*sin(x) + x*exp(x) from 550 to at least 7.5k.

You can see that the error is greatly reduced by including the sin function in the fit:
[Image: ESI-error.png]

The average error is much smaller with the sin+exp fit (black line) than with just the exp() function (red line). Sure, the sine can be ignored for low skill values, but by the time you get to ~8k skills the magnitude of the sine term is about 20 PED, which as a percentage may not be much, but in absolute terms is huge. With my fit that includes a sine wave, I get less than 1% error over almost the entire range we have data for.

1. What expressions do you use for evaluating the periodic function? If the percentage error stays the same, it is more likely x*sin(bx) than just sin(x); and with the exp term being multiplied by x too, all the data is better represented in y/x vs. x coordinates.

2. My guess is that in the range 1600-7000 (and possibly outside this range too), higher error (black curve, the exp+sine fit) can indicate doubtful points, so this method could be useful for eliminating incorrect data points.
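
(To spell out point 1: if the relative error of the fit stays roughly constant, the residual behaves like x*eps*sin(bx) for some small eps; dividing everything by x then turns it into a constant-amplitude eps*sin(bx) riding on exp(a*x+b), which is why y/x vs. x coordinates should make both components easier to see.)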
 
Thanks Trabin, glad to see it is (or may become) a helpful tool.

The difference you observed from one day to another is normal; most likely it is due to changes in implant prices. The tool uses current peauction prices, so the total cost changes as those prices do, and most importantly, the optimal set of implants (and their tt values) might be different as prices change. This is not a problem; otoh, it is one of the most interesting results of the tool: the optimal path (like EU itself) is dynamic... :)

/jdegre.

Sorry, I should have made what I was seeing clearer - I ran the figures through at the same time, one using 2-day-old info (but put into the tool last night) and the other using yesterday's rifle, also put in last night. Minor thing anyway :)
 
Sorry, I should have made what I was seeing clearer - I ran the figures through at the same time, one using 2-day-old info (but put into the tool last night) and the other using yesterday's rifle, also put in last night. Minor thing anyway :)

Oh, I see...
In this case, the reason is the algorithm itself, which calculates the optimal path in small skill batches, so different start levels might end up with slightly different target levels in the professional standing.

If you run the calculation and then enter the calculated skill levels in the edit boxes again, you'll notice that the achieved level is different (usually slightly higher) than the target level you entered in the "target" input box. In the two tests you did, you surely ended up with a slightly higher target level in the second test.

Still, the price difference you got is roughly 0.4%, which is much lower than the expected error due to the inaccuracy of the "skill->tt" function.

Cheers,
/jdegre.
 
I think I have some new insights, BUT....

I have watched this interesting thread for a while and have spent some thought on it, and I think I have a couple of new insights, but also some question marks to add at the end.

First of all, I took a slightly different look at the data, i.e. I looked at the change in TT of the ESI at a certain skill level, called dTT/dSkill for simplicity. This is plotted below. So far I have only taken data starting from skill level 1 into consideration, so the high levels won't fit at this stage, but I think this can be improved. I will come to the colored curves in a second.
[Image: ESIdata1.gif]


Next, I changed the scaling to log, and now we nicely see the cosine that people have been thinking about - even with a constant amplitude. So I did an eyeball fit of the center line (red line) and found dTT/dSkill = 0.0024 * Exp[Skill/1275] to give a good enough fit.
[Image: ESIdata2.gif]

Now I transformed the data with the inverse function (1275 * Ln[(dTT/dSkill) / 0.0024]) to take a look at the trig function:
[Image: ESIdata3.gif]

It turns out that the well-known period of 500 shows itself nicely, the amplitude is roughly 420, and there is a slope of 1 involved as well. Including that in the formula, I get this:

dTT/dSkill = 0.0024 * Exp[(420*Cos[Skill*2*PI/500] + Skill)/1275]

This is included in all the plots as the yellow line.

Note the deviation at skills <250 -- clearly something else is going on here, but the uncertainty in the data is also pretty large due to the low TT.

By the way, if I plot dSkill/dTT instead (i.e. 1/(the function above)), I get the plot that MA showed on the website a while ago to explain how skill gain works:
[Image: ESIdata4.gif]

So, except for the very low range, I have a nice function that I just need to integrate from 0 to Skill to get the TT of the chip, BUT: Mathematica finds no integral, and my analysis lessons are too far back for me to find one.

So I took the "cheap" approach and simply calculated dTT/dSkill for every individual skill level starting at 250 and summed it all up to the skill level I am interested in. With this "manual Excel" integration I end up with the following prediction:

a) Low levels
[Image: ESIdata5.gif]


b) High levels
[Image: ESIdata6.gif]


Note, I cannot get a good fit for the very high levels and the medium levels at the same time. At the moment I suspect there is one more function added on top, responsible for the deviation at low skill levels; its effect should extend somewhat into the medium skill region as well.

I have no more time to fiddle with the data and would really love some feedback on this (especially the integration) before I try more... any analysis experts around?
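
A quick Python reproduction of the "manual Excel" integration, using the eyeball-fit numbers from above:

Code:
import math

def dtt_dskill(skill):
    # the fitted slope formula from this post
    return 0.0024 * math.exp((420 * math.cos(skill * 2 * math.pi / 500) + skill) / 1275)

def chip_tt(level, start=250):
    # sum the slope over whole skill levels, as in the manual integration
    return sum(dtt_dskill(s) for s in range(start, level + 1))

(One possible route for the exact integral, though I haven't worked it through: exp(k*Cos[t]) expands as I0(k) + 2*Sum_n In(k)*Cos[n*t] in modified Bessel functions, after which each resulting Exp[Skill/1275]*Cos[...] term integrates in closed form.)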
 
I've done a bit of calculating on low-TT implants myself, as suggested here, to help with more precise data.

It's pretty obvious that the displayed TT values of the implants in auction are rounded downwards!
Internally, MA keeps track of the TT value down to 4 decimal digits!

Here is the data so far; I will edit this post when I have more:

Code:
[B]Displayed TT[/B]	[B]Real TT[/B]		[B]Skill level[/B]
[COLOR="Blue"]0.05		0.0516		45[/COLOR]
0.06		0.0670		53
0.10		0.1071		85
0.13		0.1322		108
[COLOR="blue"]0.16		0.1688		129[/COLOR]
0.22		0.2272		170
0.27		0.2791		199
0.30		0.3043		217
0.37		0.3720		255
0.39		0.3973		264
0.50		0.5041		312
[COLOR="blue"]0.56		0.5686		335[/COLOR]
0.65		0.6530		367
0.99		0.9905		458
0.99		0.9965		458
1.12		1.1219		486
1.25		1.2572		513
1.27		1.2785		517
1.51		1.5133		567
[COLOR="blue"]1.61		1.6125		589[/COLOR]
1.67		1.6798		603
1.68		1.6819		606
1.86		1.8650		653
2.05		2.0591		710
2.83		2.8360		911
3.23		3.2319		974
4.63		4.6363		1196
8.52		8.5299		1693
[COLOR="blue"]12.20		12.2062		2033[/COLOR]

Hope it helps :)
 
This looks like a typo :(

Thx Kolobok for pointing it out!

I had mixed up some screenshots where skills didn't start at 1 but a bit above, hence the error...

I've added some more fresh data, and removed 2 erroneous data points from the post above.
 