New server going online tonight

Starfinder

Since we started Entropia Tracker in 2008, we have seen a massive increase in users and usage of the site.
This is a good thing!

However, we are running low on resources on the current server.
This is not a good thing!

So - we have just bought a new server and we are currently installing the software on that machine.
This will mean that Entropia Life will go offline for a short period this evening while we move the database.

We do not anticipate a downtime of more than 20 minutes.


Kind regards, Starfinder
 
Thanks for everything!! It is appreciated by me and many more, I am sure!
 
Pre-grats on the upgrade, and praise for your coding skills and service!
:wtg:
 
I wish you good luck :)
 
As is always the case... stuff didn't go as planned... BUT we are online now (as far as we can see). If you experience any issues, please let me know... :)
 

Gratz! You should get a HOF message or something for that. :yay:
 
Pic of the new server:

(attached image: serer.jpg)


Specs:
CPU: Intel Core i7-4770K (Haswell, 3.5 GHz, quad-core)
RAM: 32 GB Kingston HyperX
Disk: 2 x Kingston SSDs in RAID 1

Previously it ran on 4 GB RAM, a single SATA disk and a rather old CPU... I hope you can notice the difference now. :)
 
Big gratz :) Very nice & most impressive!
 
Starfinder,

Thank you for the efforts throughout the years with the site! The response and load times of pages with the new server are much improved.
 
Very nice. I think I'm gonna buy the same CPU for my next upgrade :)
 
The site is much more responsive now :)

Nice rack :wtg:
 
Currently getting error 503...
 
Why buy your own server?
It costs 5 PEC a day to rent servers, and the hardware gets upgraded every other year.
There are hundreds of services like this... prices are really low.
 

The cheapest I could find with the stuff I need was $43/day... an Amazon VPS.
 
Since we started Entropia Tracker in 2008, we have seen a massive increase in users and usage of the site.
...
So - we have just bought a new server
Just wanted to say:
You did some good performance planning to have the previous machine running for 5 years.

Sidenote: With today's increased CPU (and RAM) speeds, if (hopefully when) performance again becomes a problem, it could be worth keeping an eye on what slows down more. If it is I/O (which is not at all impossible), I believe ZFS and its L2ARC+ZIL on SSD could be worth keeping in mind.
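
A quick way to see which side is suffering once that day comes (the 5-second interval is just an example; iostat ships with FreeBSD and with the sysstat package on Linux):

Code:
# Extended disk stats every 5 seconds: disks near 100% busy while the
# CPU sits mostly idle suggest I/O bound; the reverse suggests CPU bound.
iostat -x 5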

++luck;
 
I like that the data is next to Starfinder and not in the hands of a (*insert nationality here*) zombie :)

Got a sig now ^_°
 
Just wanted to say:
You did some good performance planning to have the previous machine running for 5 years.

Sidenote: With today's increased CPU (and RAM) speeds, if (hopefully when) performance again becomes a problem, it could be worth keeping an eye on what slows down more. If it is I/O (which is not at all impossible), I believe ZFS and its L2ARC+ZIL on SSD could be worth keeping in mind.

++luck;

We actually wanted some "server quality" disks, but there was a three-week delivery time on those. And I had time to build/move/panic when stuff didn't work... yesterday...

Eventually we want the server disks in RAID 10, but for now the SSDs we got are fine.

When we do that upgrade I also want a dedicated RAID controller and a UPS installed. As I'm sure you know, a bunch of decent server disks, a good RAID controller and a UPS = $$$

I'm not saying "give me money", but at least you have a picture of what your money (when you pay for features) goes to... it's not (just) cheap beer... :p
 
Maybe not cheap beer, but Red Bull to keep you up all night adding to the site ;)
 
Good job, man. The site is really responsive, and it's nice to see some "give it back" spirit toward the community. +rep! :)
 
When we do that upgrade I also want a dedicated RAID controller and a UPS installed. As I'm sure you know, a bunch of decent server disks, a good RAID controller and a UPS = $$$
Personally I'd build such a server with ZFS (possibly run on FreeBSD). Some reasons why:
1. Free.
2. 100% reliability. Everything on disk is checksummed and verified by the CPU, so it is practically impossible to get bad data from it. This is not the case with RAID: a dedicated RAID controller can give you bad data.
3. Ability to back reads and writes with SSDs as cache, allowing use of cheaper spinning rust while maintaining more than acceptable IOPS (see the sketch below this list). In fact, you couldn't get that many IOPS from just a plain RAID controller and only spinning rust, even if the disks were expensive 2.5" 15k RPM ones.
4. When a dedicated RAID controller goes south, finding a replacement card can be both time-consuming and difficult, not to mention expensive. With ZFS you could, in a worst-case scenario, replace the whole computer, plug the disks back in, and it would all "just work".
5. Power consumption. A dedicated RAID controller consumes power, and the 15k RPM "server grade" disks it would most likely be backed by also consume a notable amount. This combines to require a larger cooling solution, which adds yet more power draw, which in turn calls for a larger UPS, which of course drives cost up. With ZFS you can get 3 or 4 relatively cheap "green" spinning-rust drives and back them with 2 SSDs for caching.
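
A minimal sketch of such a pool, assuming FreeBSD-style device names (the names and pool layout are purely illustrative):

Code:
# Three cheap "green" drives in a raidz1 vdev for capacity...
zpool create tank raidz1 /dev/ada1 /dev/ada2 /dev/ada3
# ...one SSD as read cache (L2ARC)...
zpool add tank cache /dev/ada4
# ...and one SSD as a separate intent log (ZIL/SLOG) for synchronous writes.
zpool add tank log /dev/ada5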

With just a few config tweaks, a smaller UPS could run a properly configured system of the kind I had in mind:
- UPS signals "Main Power Loss"
- Spin down the spindles; let writes queue up on the SSD (ZIL).
- Throttle the CPUs, maybe even turn off all but 1-2 cores.

Depending on the system, you could now be at a power state requiring maybe 30 W or less in total. A simple 300 VA UPS could keep that system running for at least 8 hours (unless my math skills have completely escaped me).
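
For reference, the back-of-envelope math (the ~80% inverter efficiency is my assumption; what matters when sizing is the battery's Wh rating, not the VA rating):

Code:
# battery energy [Wh] ~ load [W] x runtime [h] / inverter efficiency
echo "scale=0; 30 * 8 / 0.8" | bc    # = 300 Wh of battery needed for 8 h at 30 W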

Compare this with a RAID card and (at least) 3 always-on 15k RPM disks. While the same system throttling could be done, the controller and the disks would still be operating at their normal power. A larger, and therefore more expensive, UPS would probably be required for the same uptime.

Anyway, I just wanted to bounce a few ideas that came to mind.
++luck;
 
The cheapest I could find with the stuff I need was $43/day... an Amazon VPS.

Eh, what?
What exactly do you need from a webhost?
The top package that my webhost offers is $19/day, and that's for a dedicated server with:

Dual Intel® Xeon® E5-2430 Turbo 2.2GHz, 15MB Cache, 7.2GT/s QPI, 12C/24T, 32GB RAM, RAID-5 SAS15K 1200GB (3x600GB HDD) w/ 512NVCache, 10TB Data Transfer
 

I need to run the server program that the Entropia Tracker client talks to, plus various other programs.
Entropia Life is not just a website... :) We also have the client, remember :)
 
Ah, gotcha.

Looks a nice setup there. What's the case you're using?
 
Starfinder,

Thank you for the efforts throughout the years with the site! The response and load times of pages with the new server are much improved.

With the new server having been online for a few days, I can now share some stats on the speed.
On the old box, the average load time of the site was 2.07 seconds; now it is 1.1 seconds.

Bonus: Last month we had ~500,000 visits (not unique visits).
 
...Depending on the system, you could now be at a power state requiring maybe 30 W or less in total. A simple 300 VA UPS could keep that system running for at least 8 hours (unless my math skills have completely escaped me).

Hi there, I'm curious: what would you achieve with 8 hours of uptime on a server with a throttled-back CPU and the main storage offline? You certainly wouldn't be able to run a website very well.
 
With the new server having been online for a few days, I can now share some stats on the speed.
On the old box, the average load time of the site was 2.07 seconds; now it is 1.1 seconds.
Hmmm, that sounds quite slow... especially as this new machine has a much faster CPU and, running on only SSDs, should easily have access to 50k IOPS or 500 MB/s of throughput.

Are there a gazillion DB lookups using all that time? Is it CPU or I/O bound (to me it seems unlikely to be either, considering the speed of both pure SSD and the CPU, but better to ask)?

Could this be improved using a front-end web cache, such as Varnish?
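
A sketch of how little that takes (the backend port and cache size are assumptions, and the site would first have to move off port 80):

Code:
# Varnish answers on port 80, forwards cache misses to the web server
# on 8080, and keeps hot pages in 256 MB of RAM.
varnishd -a :80 -b 127.0.0.1:8080 -s malloc,256m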

I'm not asking/commenting just for the hell of it, but because the software developer (optimizer) in me sees performance figures that at first blush look... suboptimal. Also, with such lengthy response times the server likely consumes more energy than it really needs to.

++luck;
 
Hi there, I'm curious: what would you achieve with 8 hours of uptime on a server with a throttled-back CPU and the main storage offline? You certainly wouldn't be able to run a website very well.
Correct. I should have added "and switch the web server over to a static page mentioning the power outage". As power outages in Scandinavia are, in my experience, rather infrequent and very short when they do happen, I honestly wouldn't expect the UPS to need to kick in for more than a few minutes per year.

That said, one could (should) obviously allow the mains to be gone for x amount of time before starting to shut down and throttle the various subsystems.

Maybe I'm even pulling ideas out of my ass, in which case my mumbling can be disregarded. :)
 
Hmmm, that sounds quite slow... especially as this new machine has a much faster CPU and, running on only SSDs, should easily have access to 50k IOPS or 500 MB/s of throughput.

Are there a gazillion DB lookups using all that time? Is it CPU or I/O bound (to me it seems unlikely to be either, considering the speed of both pure SSD and the CPU, but better to ask)?

Could this be improved using a front-end web cache, such as Varnish?

I'm not asking/commenting just for the hell of it, but because the software developer (optimizer) in me sees performance figures that at first blush look... suboptimal. Also, with such lengthy response times the server likely consumes more energy than it really needs to.

++luck;

The majority of the time it takes to load the page is the external lookup for the jQuery lib hosted at Google... In pure "my server" stats, my performance counter says each page has a "wait time" of 0 seconds. So the server itself is fast.

If I want the pages to load faster, I need to host the js file on my own server instead of at Google... :)
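
If that ever becomes worth doing, the move is a one-liner plus a template edit (the jQuery version and paths here are placeholders):

Code:
# Mirror the Google-hosted jQuery build locally (hypothetical version/path)...
wget -O /var/www/js/jquery.min.js https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js
# ...then point the <script src> in the page templates at /js/jquery.min.js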

That said, I'm OK with a 1.x second load time.
 