I don't have particularly strong opinions as to whether MindArk is too permissive or too harsh in the way they apply penalties. Perhaps the broad conclusion of this thread is correct, or perhaps it's not. Regardless, it seems worth drawing attention to the fact that the framework people are using (in this thread and others) to assess MindArk's policy enforcement decisions is entirely inadequate for describing actual policy enforcement challenges. Framing this as MindArk being too unethical or too lazy to enforce policies to the optimal degree is irresponsible to the point of being a strawman.
A few of the challenges to policy enforcement are worth spelling out. First, false positives and false negatives in identifying policy infractions are linked: reducing one tends to increase the other. The acceptable level of false positives is subjective, though legal theory offers possible analogies; Ben Franklin, for example, is famous for stating that "it is better 100 guilty Persons should escape than that one innocent Person should suffer," which at least suggests a bound for the optimal target. Second, establishing a suspected violator's intent with any degree of confidence frequently requires one-on-one communication, which is much harder to do effectively online than in a real-life conversation. Third, when direct developer actions rather than game mechanics start to dictate how the game evolves, the rest of the community pays indirect costs, including the loss of game immersion.
MindArk is in a better position than we are to assess how effectively their tools can meet these and other challenges in a fair and desirable manner. I'm certain they've given the matter a much more thorough cost-benefit analysis than any player has, and while it's possible that their bottom-line answer is at odds with what's best for the player base, let us not jump to the hasty conclusion that it is. We should drop the almost comically uncharitable framing of MindArk as simply too unethical or too lazy to take certain courses of action, and appreciate that they face genuine trade-offs whose manageability they are far better placed than we are to judge. Otherwise, we may end up applying social pressure on MindArk to do things they are not capable of doing well, which is certain to create bigger problems than the ones we have now.