Legends
This is a topic that comes up from time to time, for example in recent AMAs and in suggestion threads, so I thought I'd share my perspective on it and invite others to share theirs. I'm not a computer programmer or software/game developer by any means, but I am tech savvy, and I can generally follow a conversation about computers and their evolution, having used them since the mid-1990s.
Back in my late teens I was working for a printing company, where I learned to do desktop publishing. The tool of the trade at the time was the Power Macintosh; if I remember correctly, I was working on a Power Macintosh 9600. I have to admit that back then I was a bit confused as to why we were using Macs for this, since PCs generally had Intel chips with faster clock speeds. It wasn't until the early 2000s that I actually took it upon myself to research why that was.
Superscalar Architecture:
Apple, IBM and Motorola had joined forces in the early 90s to create their own CPUs based on their needs and what they thought was best. To them, what held the most promise in the long term was superscalar architecture: a CPU with more than one execution pipeline, able to issue and complete several instructions per clock cycle, as opposed to earlier scalar designs that could complete only one. This meant that although PowerPC Macs had slower clock speeds, they were often actually faster than Intel-based PCs, since they could perform two and sometimes three operations per clock cycle.
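To make the trade-off concrete, here is a toy calculation of effective throughput as clock rate times instructions per cycle (IPC). The clock speeds and IPC figures below are hypothetical, chosen only for illustration, not benchmarks of any real chips:

```python
# Effective throughput = clock rate x instructions retired per cycle (IPC).
# Hypothetical numbers: how a slower-clocked superscalar chip can out-run
# a higher-clocked scalar one.

def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Peak instruction throughput for a given clock rate and IPC."""
    return clock_hz * ipc

superscalar = instructions_per_second(350e6, 3.0)  # 350 MHz, up to 3 per cycle
scalar      = instructions_per_second(450e6, 1.0)  # 450 MHz, 1 per cycle

print(superscalar)           # 1.05 billion instructions/s
print(scalar)                # 0.45 billion instructions/s
print(superscalar > scalar)  # True: the "slower" chip has more raw throughput
```

Comparing by GHz alone looks only at the first factor and ignores the second entirely.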
But the whole world in the early 2000s seemed obsessed with clock speeds; all you kept hearing about was the GHz of the CPU, as if that was all that mattered. Marketing-wise it made sense: it was a lot easier to compare machines when performance was tied to just one number, the internal clock speed. But I couldn't help feeling that people were misguided and that this was not the smartest way to develop the technology. In other words, I thought Apple had been smart to go the superscalar route; I agreed that it held much more promise for the long term, once clock speeds reached their upper limits and the only way to improve performance further would be to execute more than one operation per clock cycle.
From Wikipedia:
I was shocked when Steve Jobs announced in 2005 that Apple would start putting Intel chips in Macintosh desktop computers in 2006. To me this was a disgrace and an admission of having lost the race on CPU performance. But at the same time I understood that Apple was probably just following the old philosophy of "if you can't beat them, join them", and that it was probably the right move for Apple to gain market share in the personal computing space. (Quick sidenote: the P5 Pentium was the first superscalar x86 chip, and this could also have played a role in Apple's decision at the time.)
CryEngine 2 and CryEngine 3:
Back in 2005-2007, the world of personal computing was still very much single-threaded. CryTek looked into their crystal ball, predicted that the future of computing would mainly be characterized by ever-increasing clock speeds, and built their game engine accordingly: the first few iterations of the CryEngine were designed for single-threaded processing and barely considered the possibility of multi-threaded or multi-core machines. And this is why, even today, older PC games such as Crysis don't run as well as they should, even on today's best gaming machines (see articles about that very subject here and here).
MindArk chooses CryEngine for Entropia:
In July of 2007, MindArk settled on CryEngine for Entropia. Unreal Engine may have been a contender at the time, but back in 2007, although there was already a lot of hype building around the upcoming Unreal Engine 4, the latest version available for licensing was Unreal Engine 3, and Unreal Engine 4 wasn't actually released until 2014. CryEngine was ahead in terms of graphics quality at the time, and I'm sure MindArk also had to weigh plenty of other factors, such as licensing cost and how resource-intensive the engine would be for them to use for Entropia.
CryEngine 2 had all of the following features:
- Volumetric 3D Clouds
- Real time Ambient Maps with dynamic lighting and no premade shadows
- 3D Ocean Technology dynamically modifies the ocean surface based on wind and wave direction and creates moving shadows and highlights underwater
- Depth of field to focus on certain objects while blurring out the edges and far away places
- Vector Motion blur on both camera movement and individual objects
- Dynamic Soft shadows with objects casting realistic shadows in real time
- Realistic Facial Animation that can be captured from an actor's face
- Subsurface scattering
- Breakable Buildings allowing more tactical preplanning on the player's side
- Breakable Vegetation (with possibly heavy foliage) enabling players and enemy AI to level entire forests as a tactical maneuver or other purposes
- Advanced Rope Physics showcasing bendable vegetation responding to wind, rain or character movement and realistically interactive rope bridges
- Component Vehicle Damage giving vehicles different destroyable parts, such as tires on jeeps or helicopter blades
- HDR lighting
- Fully interactive and destructible environments
Multi-Threading optimization:
The optimization of graphics cards and game engines for multi-core/multi-threaded machines has been somewhat slow. Some of that can be attributed to Microsoft's DirectX API, and some of it depends on the game developers themselves. But there comes a point where the time optimization takes simply outweighs the performance gains, so it's difficult to argue that the responsibility lies entirely with the developers; more likely, the technology, i.e. the development tools and software, still has some catching up to do. In other words, advancements on the hardware side seem to have been out-pacing advancements on the software side. I found a couple of articles showing that beyond 6 cores there are currently no further FPS gains to be had, which, to me at least, shows that the software running the hardware is just not yet optimized for that sort of configuration (articles here and here).
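The diminishing returns those articles describe follow directly from Amdahl's law: if part of each frame's work cannot be parallelized, extra cores stop helping very quickly. A toy sketch of this (the 60% parallel fraction below is an assumption for illustration, not a measured figure for any real engine):

```python
# Amdahl's law: speedup over 1 core when only a fraction of the work can be
# spread across cores. The serial remainder caps the achievable gains.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup vs. a single core under Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 60% of each frame's work parallelizes. Going from 6 to 16 cores
# barely moves the needle:
for cores in (1, 2, 4, 6, 8, 16):
    print(cores, round(amdahl_speedup(0.6, cores), 2))
# 6 cores already give a 2.0x speedup; 16 cores only reach ~2.29x.
```

The flat FPS curves beyond 6 cores are exactly what this kind of serial bottleneck looks like from the outside.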
Is CryEngine still the right choice for Entropia:
People often ask themselves that question; even I did at one point, mostly after hearing that CryEngine 2 was not multi-thread capable. But since then I've come to realize that this problem was not isolated to CryEngine and was in fact attributable to many factors, which are slowly being addressed. And as I said before, I'm sure MindArk had to consider many things besides graphics and performance, including cost, both financial (licensing fees) and in terms of the development resources required to keep expanding the game on this engine as opposed to another one.
One particular article which was quite revealing to me recently was this one where the author explains:
Having so much available right out of the box, combined with visual strength, makes CryEngine a great tool to iterate quickly on projects and get prototypes out of the door.
- CryEngine allows for a fast iteration process
"CryEngine has no build or compile times in its editor," Larsen says. "It's extremely easy to create something good looking very fast thanks to the powerful tools such as the vegetation tool, the environment tool or flowgraph editor (node-based scripting) which makes getting off the ground or simply prototyping ideas very fast indeed."
And:
While beginners may be more at ease with engines such as GameMaker thanks to its drag-and-drop features, CryEngine is still a good choice even for entry level developers as it doesn't have a steep learning curve.
- CryEngine is easy to learn
"I always encourage other developers to consider CryEngine for their projects if applicable," Bergendahl says. "The engine is not hard to learn, is immensely powerful, and carries with it a special recognition in the minds of the gaming community. Gamers' eyes usually light up when they see 'developed in CryEngine' before a game trailer. We never had any issues training talented developers in the tool and code base."
Let's face it: Entropia is a rather small MMO in terms of users, and MindArk is a small developer. And although Unreal might look a bit better than CryEngine does today (this is highly subjective and depends on who you ask), you have to consider how many developer man-hours it actually takes to realize a project, and CryEngine seems to be much faster in that respect. For that reason, I honestly think we'll all be better off if MindArk just sticks with CryEngine for now, as they are short on resources and way behind on releasing new content (space transport missions, Codex for PPs, land plots, etc.).
Superscalar Architecture is back with a vengeance:
The idea for this thread came when I saw the YouTube video below. It shows how Apple is now turning the entire personal computing space on its head by doubling down on superscalar design and recently surpassing Intel's fastest chip by a very comfortable margin. The Apple M1 is arguably the fastest commercial chip ever made and is about to be used in a host of personal computing products; according to Apple, its GPU can run nearly 25,000 threads simultaneously. The future is very much multi-threaded; there is no longer any doubt about that, if there ever was.
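As a software-side illustration (a generic Python sketch, nothing specific to Entropia or CryEngine), this is roughly what embracing that multi-threaded future looks like in code: independent per-entity work handed to a pool of worker threads sized to the machine's core count. Engines need this kind of job-based structure before extra hardware threads can translate into extra frames.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def simulate_entity(entity_id: int) -> int:
    # Stand-in for independent per-entity frame work (AI, physics, animation).
    return entity_id * entity_id

# Size the worker pool to the machine's logical core count.
workers = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(simulate_entity, range(1000)))

# The parallel schedule must produce the same results as a serial loop would.
assert results == [simulate_entity(i) for i in range(1000)]
```

The hard part in a real engine is, of course, making the per-task work actually independent; that is precisely what single-threaded-era engines were never structured for.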
How Apple just changed the entire industry:
Conclusions:
According to Wikipedia, we are still running on CryEngine 2. There was a major graphics update in December 2019, but it is not stated anywhere that the engine was updated to a more recent version. So at this time, it would appear that Entropia is saddled with a game engine that was not made for multi-core gaming computers and is designed around principles of yesteryear that the industry is quickly abandoning.
My understanding is that porting an existing game from an earlier version of CryEngine to a more recent one is not necessarily easy or quick to do, and if that's the case, I agree that MindArk should not 'waste' their precious little development resources on it unless they see a huge advantage in doing so. Multiple surveys of the player base have been run on this forum before, and graphics is not top of mind for Entropia players.
However, in March 2020 CryTek announced beta support for Android (and hinted at full mobile support). Once CryTek announces full native support for Android and other mobile operating systems, I think MindArk would have a lot to gain by porting the game to the latest CryEngine. Full native Linux support was introduced with CE 3, and with Rosetta 2, Apple Silicon Macs can still run software built for Intel Macs, so this would make Entropia much more widely available going forward, not to mention all the new tools and features that have been added to the software development kit (SDK) since 2007.
When do you think MindArk should consider upgrading the game engine? And what do you think they should upgrade it to?