First, thank you for the fruitful post. It was very interesting to read.
To begin with, I disagree that stricter sanctions negatively affect us all. If I play in accordance with the rules, I don't care about sanctions; but if I cheat, I might well consider what the sanctions are.
We are not talking here about accidental mistakes or reasons that players did not foresee. In this case the situation is clear: the game was exploited very heavily and the sanction was very weak.
Since we are talking about a real cash economy game, there must be clear policy enforcement, especially in practice. I don't see a need to build fail-safe measures here; rather, I see a need for clearly written sanctions. As I mentioned before, it is clear when something is being exploited in this game.
Accordingly, I believe sanctions should be clearly prescribed, just as they are in real life. Players who regularly play by the rules won't care, but cheaters might think twice before using an exploit.
There will always be players who abuse the system, especially in a case like this where no clear policy exists. For example, if they made a profit of 500K PED and received only a 6-month ban, that is not an adequate sanction. Similarly, in real life, if you rob a bank and get only a warning, or a ban from entering the bank, you will rob it again and again.
I do see potential for discussion about policy enforcement options, but that is another topic anyhow.
Best regards,
Aye, always happy to respectfully iron out a productive disagreement.
I would respond by cautioning that some of your statements gloss over complex policy enforcement challenges. For example, you say that "we are not talking here about accidental mistakes or reasons that players did not foresee," but if we do not speak about these, then the thread is a nonstarter. The thread creator both loads assumptions of intentionality into his/her terminology choices ("cheaters," "stole," "certain avatars were doing X so they could Y," etc.) and ties his/her assertions to specific instances of putative infraction (so moving to an abstract discussion bound by stipulation to the intentional case would be a retreat). Thus the thread creator, or any volunteering proxy, accepts a burden of proof to establish intent. This burden might often be achievable given access to all of MindArk's resources (an avatar's full public and private chat logs, the ability to interactively communicate with the avatar during an investigation, a window into the nuts and bolts of the bugged code, etc.), but I don't envy entities that lack such means.
Even more idealistically, the thread creator seems to treat intentionality as a property of a bug, rather than as a property of an individual's use of that bug, apparently implying that all individuals involved act with identical motive. This might occasionally work for bugs with unusually complicated execution, but if the bug is something as simple as "spawn a Yog while in an instance," player intent is going to be all over the map. Some individuals will have spawned the Yog to intentionally gain an unfair advantage in the event, some to gain an advantage they perceived as fair, some because they saw a Twitch streamer using a Yog or because a friend told them they should without any understanding of the mechanical reason, some by unlikely coincidence, and so forth. The best information an ill-equipped forum community could even hope to construct is a probability distribution of player intent with regard to a given bug, but this is both insufficient for prescribing appropriate penalties to individuals and likely based on dubious guesswork rather than being data-driven in any sense.
As for the notion that increasingly aggressive penalty issuance poses no risk to honest players, I have to wonder how you think policy enforcement actually plays out in practice. I think you'll find it obvious upon further reflection that avatars are never presented to MindArk labeled "good guy" or "bad guy." MindArk has to gather as much information about a situation as they possibly can and then make a series of judgments: whether they are convinced, beyond some predefined likelihood standard, that an infraction has occurred; whether they are convinced, by a similar standard, that any discovered infraction was committed intentionally; to what extent it is reasonable to expect an honest player to have engaged in the behavior in question (i.e., "everyone else was doing it" doesn't excuse the infraction, but in some circumstances might be a mitigating factor); the consequences any committed infraction has produced or was likely to produce; and so on. It is a matter of collectively judging the quality and content of the information they can obtain, and it is a very general truth that human judgment is inherently subject to occasional error. Furthermore, analogies from domains as widely varied as radar design, predator-prey modeling, airport security, and medicine confirm that while "in an ideal world, we want to ensure that whatever test we're using to measure something has both a low false positive and low false negative rate, so that it's maximally accurate...in reality that can be very hard to do; often there's a direct tradeoff between these two things...so depending on the situation, we typically prefer to maximize one over the other, depending on which outcome is worse." Because of this inexorable tradeoff between false positive and false negative investigation outcomes, even if we agree with Benjamin Franklin's formulation that "it is better 100 guilty Persons should escape than that one innocent Person should suffer" (and that is still more black swan risk than I suspect most honest players desire, given their investment of time and possibly money into Entropia), I suspect there are wiser courses of action than posting the kind of cookie cutter, anger-baiting rhetoric we see in this thread. That rhetoric consists not of arguments over how best to manage the tradeoffs between the pros and cons of different policy enforcement strategies, but of a gross misframing of the relevant tradeoffs in an attempt to make opposing perspectives appear optically indefensible. This is how an incredibly complex subject gets bastardized into a false dichotomy between MindArk taking the maximal possible punitive action and their being "too lazy" or "too greedy" to do so.
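The tradeoff I'm describing is easy to see in a toy simulation (everything here is hypothetical and purely illustrative, not a model of how MindArk actually investigates): suppose each investigation yields an "evidence score," with innocent players tending to score low and cheaters high, but the two distributions overlapping. No matter where the conviction threshold is placed, lowering the rate of innocents punished raises the rate of cheaters who escape, and vice versa.

```python
import random

random.seed(0)

# Hypothetical evidence scores: innocents cluster near 0, cheaters near 2,
# with overlap, so no threshold separates the two groups cleanly.
innocent = [random.gauss(0.0, 1.0) for _ in range(10_000)]
guilty = [random.gauss(2.0, 1.0) for _ in range(10_000)]

def rates(threshold):
    """Return (false positive rate, false negative rate) at a given
    conviction threshold: FP = innocents punished, FN = cheaters escaping."""
    fp = sum(s >= threshold for s in innocent) / len(innocent)
    fn = sum(s < threshold for s in guilty) / len(guilty)
    return fp, fn

# Sweeping the threshold shows the direct tradeoff: as FP falls, FN rises.
for t in (0.5, 1.0, 1.5, 2.0):
    fp, fn = rates(t)
    print(f"threshold={t:.1f}  innocents punished={fp:.3f}  cheaters escaping={fn:.3f}")
```

A "Blackstone-style" policy corresponds to pushing the threshold far to the right: the false positive rate becomes tiny, but only at the cost of letting a large fraction of genuine cheaters walk, which is exactly why "just punish harder" is not a free lunch.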
I also reserve a great deal of skepticism toward the bank robbery analogy. By definition, robbery involves the threat of force, which raises concerns not even in the ballpark of a multiplayer game infraction. In fact, when we remove this element and shift the analogy to white collar theft, the data are not clear that formal penalty issuance even provides a significant deterrent effect. Moreover, in areas of criminology where there is such a deterrent effect, it is often not a nice, monotonically increasing function of punishment severity. Finally, the value proposition of storing wealth in Entropia is very different from the value proposition of storing wealth in a bank. A bank is used for secure storage or risk-modified access to financial markets. Entropia is a giant sort-of-casino-like-but-not-really black box, where players put in time and/or money, and receive fun and/or some other amount of money as output. It is certainly some sort of violation of players' rights if part of that black box algorithm involves cheating, but it is qualitatively more similar to an opponent cheating you out of a prize by making an illegal play in a Magic: The Gathering tournament than it is to someone transferring money out of your bank account.
I could probably double or triple the number of points I've raised here, but everyone would tl;dr. Hopefully this is enough to at least get people thinking about the higher-order effects of policy enforcement decisions, and maybe to reach better, more nuanced conclusions.