Server costs and platforms available

This is something I don't see discussed all that often:
Server costs and platforms available

From big to small, game devs and companies that offer online services need a reliable network backbone and enough storage to push out constant updates and keep multiplayer connections running.
I originally thought most companies were good at that, but then:

It seems to me that even big companies fail at maintaining online services more often than not.
Given that it seems to be such a huge issue, I can see why smaller devs would rather pay Valve to maintain the multiplayer in smaller games.
Yet a lot of titles with smaller communities like Natural Selection, King Arthur's Gold, The Red Solstice and Forged Alliance Forever rely on people hosting their own servers or on peer-to-peer connections.


1. Say you are a dev in charge of a game with 1000 players.
– What would the network requirements be if the whole thing relies on your own server? Connection speed, location, storage space?
How much would it cost?
1.1 If you are a small-time dev, is it better to host your own shit or pay someone like Valve to do it for you?
1.2 Would you need a regional server for every single continent? How do bigger companies even go about something like that?
2. How much do actual big-scale companies pay for server management? Blizz, EA, Volvo, S2 Games, Wargaming and the like, how much do they pay? Do we have any data?
3. What are the alternatives if you wanna host your game somewhere else?
4. Can LAN make a comeback given the overall improvement of network connections worldwide?


Bump for interest.

tl;dr:

if you are a dev

If you have a server that services 1000 people a month, and assuming they're playing an MMO like Blizzard's WoW, you'd be paying about $50 a month for the server. Server costs are nothing with today's technology: a 1 Mbps upstream connection could service a 50-player WoW server running on an i5 CPU with 4 GB of RAM. Server pricing is so cheap that companies are using networking as a form of DRM. Anyone who thinks developers pay a shitload in server costs is an idiot; an employee costs a million times more than a server or the internet connection. If you have a company that services a million people a month, their internet bill for the month can be 20k or lower. That said, a service such as HD streaming can cost a lot more than hosting a chan, and a chan costs a lot more than client-server software such as an MMO, since MMOs simply send stats about the user back and forth, which makes them the cheapest kind of online service you can run.
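To put a rough number on that claim, here's a back-of-envelope sketch; the packet size and tick rate are assumptions picked for illustration, not measured figures from any real MMO.

```python
# Rough back-of-envelope estimate of MMO-style bandwidth per player.
# All numbers are assumptions for illustration, not measured figures.

PACKET_BYTES = 120       # assumed average state-update packet size, incl. headers
TICKS_PER_SECOND = 20    # assumed server send rate

def upstream_per_player_kbps(packet_bytes=PACKET_BYTES, ticks=TICKS_PER_SECOND):
    """Outgoing bandwidth the server spends on one connected player, in kbit/s."""
    return packet_bytes * 8 * ticks / 1000

def players_per_uplink(uplink_mbps):
    """How many players a given upstream link could carry under these assumptions."""
    return int(uplink_mbps * 1000 // upstream_per_player_kbps())

print(f"{upstream_per_player_kbps():.1f} kbit/s per player")    # -> 19.2
print(f"~{players_per_uplink(1)} players on a 1 Mbit/s uplink")  # -> ~52
```

Under those assumptions a 1 Mbit/s uplink lands right around the 50-player figure above; fatter state updates or higher tick rates scale the cost linearly.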

It really depends on what you do with your servers.

If you do it like CoD with p2p lobbies, games are hosted by one player in the lobby, and servers are only needed to assign lobbies and keep stats (sketched below).

If you do it like Battlefield on consoles, you also need a lot of processing power, since your servers need to take care of each lobby.

For an MMO it's a whole different thing (depending on the player base), since you can't just split it into one or a few lobbies per server; you need a distributed application to take care of the world, and that's not easy to get completely right.

Services like Steam, PSN, and Xbox Live need a lot of bandwidth, since a lot of people are downloading games and updates. Everything else they do usually isn't a problem.
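To make the CoD-style model above concrete: the backend never simulates gameplay, it just picks a host and hands out that host's address. A minimal sketch, where the lobby size, class names and host-selection rule are all assumptions rather than how any particular game does it:

```python
# Minimal "lobby assigner" sketch for the p2p model above: the backend only
# groups players and nominates a host; gameplay runs on that host's machine.
import itertools
from dataclasses import dataclass, field

LOBBY_SIZE = 12  # assumed players per lobby

@dataclass
class Lobby:
    lobby_id: int
    players: list = field(default_factory=list)   # list of (player_id, address)

    @property
    def host(self):
        # Simplest possible rule: the first player to join hosts the match.
        return self.players[0] if self.players else None

class LobbyAssigner:
    def __init__(self):
        self._ids = itertools.count(1)
        self.open_lobby = None

    def join(self, player_id, address):
        """Put a player into the currently filling lobby; return (lobby id, host address)."""
        if self.open_lobby is None or len(self.open_lobby.players) >= LOBBY_SIZE:
            self.open_lobby = Lobby(next(self._ids))
        self.open_lobby.players.append((player_id, address))
        return self.open_lobby.lobby_id, self.open_lobby.host[1]

# usage sketch: clients then connect p2p to the returned host address
assigner = LobbyAssigner()
print(assigner.join("alice", "203.0.113.5:27015"))
print(assigner.join("bob", "198.51.100.7:27015"))
```

The stats side is just the same backend recording results the host reports after the match, which is why this model is so cheap to run.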

No.

The point of LAN is to have an isolated network that's not really connected to the rest of the Web.
This makes no sense for just about any kind of MMO or any other game that features some kind of ranking or matchmaking that's supposed to measure players from many different countries.

Then there's the issue with matchmaking. A lot of games opt for it since it makes it easier to ensure players can find an available match with plenty of people in it. Try that with LAN and you're pretty much saying "you better have some friends to play with".

Being optimistic, LAN could have a comeback as a secondary thing: something the game supports but that you wouldn't pick as long as you have net.
Being realistic, it won't, because there's no point for the devs. They can't control or oversee anything that happens on a LAN, so any kind of ranking is out the window due to cheaters. And since most devs are more interested in the serious competitive side of multiplayer (since that's what gets you a lot of players), they aren't gonna bother with LAN.

Granted, the bills for connection and electricity are not high. But I automatically assume Sony/Valve/Blizzard have something like data centers for their stuff, and those are expensive as fuck to maintain.
Granted, not high-level data centers with biometric scans on entry, but a dedicated space full of racks for storage, with several connections and proper air conditioning, is normally a must.
The initial investment for that should be big.
I believe this is the reason most smaller players outsource to Volvo.

To this day I keep remembering when Valve said they didn't have anyone monitoring the servers during one of their downtimes, so they called two of their IT people in the middle of the night to fix things. Their services definitely used to be low tier, but they have probably improved a great deal since then. This was for Dota in particular.


What about regions? Say Blade and Soul, the MMORPG, or Tera, or whatever.
They had Asian support initially. When they came west, did they hire some private company to take care of their online services? Or outsource to another Asian company like Perfect World to do it for them?
Do they just keep one per continent, or are there issues with individual countries? Because if I remember right, China bans half of everything, Russia bans 10% of everything and Germany has ultra-strict laws on content.


It feels like such an easy feature to add. And there is a ton of software like Hamachi that lets you pretend to be on the same isolated network while not physically being close.
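For what it's worth, the discovery part really is simple: the usual trick is a UDP broadcast on the local subnet, which is also why tools like Hamachi work, since they present remote peers as if they were on the same subnet. A minimal sketch, with a made-up port and protocol strings:

```python
# Sketch of typical LAN discovery: the client broadcasts a probe on the local
# subnet and any machine hosting a game answers. Port and magic strings are
# illustrative assumptions, not any real game's protocol.
import socket

DISCOVERY_PORT = 47777          # assumed, arbitrary
PROBE = b"GAME_DISCOVERY_PROBE"
REPLY = b"GAME_DISCOVERY_REPLY"

def announce_forever():
    """Run on the hosting machine: answer discovery probes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == PROBE:
            sock.sendto(REPLY, addr)

def find_servers(timeout=1.0):
    """Run on a client: broadcast a probe and collect every host that answers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            if data == REPLY:
                found.append(addr[0])
    except socket.timeout:
        pass
    return found
```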

tl;dr: useless(?) trivia. Servers are cheap; queues and downtimes are calculated.
- 1. There's a whole subject about throughput and queueing: you can use data to calculate costs, performance, and the size and duration of queues. Using that, plus measured server performance and server costs, you can calculate the number of servers you'll need for the expected amount of requests (clients, etc.), and you get that expected amount from statistics from other projects' launches/sales and from the amount of interest the advertising generated (there's a small sizing sketch after this list). Server-side programs themselves are very, very light, so server costs (rent, everyone rents) aren't high, but they cut into your income, so of course everyone wants to save on them. You also need data replication and such so that you won't lose everything in case shit hits the fan, but servers themselves can handle that. By the way, new problems can appear if the number of connections or of some type of request gets too large: instead of a CPU bottleneck, you get too many reads/writes, or too many (write) requests to some file, which holds the system back from operating normally.

So with all that, if you see Steam or something like it go down, you can be pretty sure it was expected and calculated, unless LORD GABEN'S HOLY PRIESTS somehow FUCKED UP. Usually companies rent additional servers during a game launch or major update to reduce server load. Also, rented servers usually aren't located in one data center but are distributed for increased reliability, and most of the time when you click the connect button you'll connect to one of many servers, not "the game server".

- 1.1. Valve doesn't just supply you with "servers"; they also supply you with an API made to work with their servers, which reduces the amount of work needed to implement online and other features.
- 1.2. Renting a server is super simple. You might want servers on different continents to reduce latency or increase reliability.
- 2. Dunno about that stuff, but I know that Wargayming are ultrajews that spend almost nothing on servers and stuff, including development. WoT is based on a chink MMO engine which had no z-levels, no wonder it's so shit and casuls l-l-love eating shit.
- 3. Use Steam/GOG etc. APIs and online features, or write them yourself and rent a server. You could even buy one, but no one does that, well, maybe some huge-ass companies do.
- 4. LAN isn't much different from internet connections; rather, it's part of them, so "no LAN" simply means no "client-to-client connection via IP" was implemented. What's more important is the correct use of TCP (reliable but slow) and UDP (fast but unreliable) for different types of data, as well as correct handling of it by the program. I remember watching some video about HL2 netcode, movement prediction and hit detection - it's not simple. You can read the GGPO or rollcaster/sokuroll articles if you want to see what kind of problems fightan games have and how they are solved, a very interesting read.
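Here's the small sizing sketch mentioned in point 1: Little's law plus a utilisation target gives a first estimate of how many servers an expected request rate needs. All the input numbers are assumptions for illustration.

```python
# Back-of-envelope server sizing using Little's law (L = lambda * W) and the
# utilisation rule rho = lambda * service_time / n. Inputs are assumed numbers.

def servers_needed(arrivals_per_sec, service_time_sec, target_utilisation=0.7):
    """Smallest fleet that keeps average utilisation under the target."""
    offered_load = arrivals_per_sec * service_time_sec   # in Erlangs
    n = 1
    while offered_load / n > target_utilisation:
        n += 1
    return n

def expected_in_system(arrivals_per_sec, avg_time_in_system_sec):
    """Little's law: average number of requests in flight (the 'queue size')."""
    return arrivals_per_sec * avg_time_in_system_sec

# e.g. a launch-day guess: 400 requests/s, each taking 5 ms of server time,
# and an average of 50 ms spent in the system end to end
print(servers_needed(400, 0.005))        # -> 3 servers
print(expected_in_system(400, 0.050))    # -> 20 requests in flight on average
```

The same arithmetic run backwards is how you decide whether an expected launch spike is worth renting extra capacity for, or whether you just let the queue grow for an hour.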


Stop with fucking plebbit style writing, fag.

Steam already lets you use their master server system to keep track of player-hosted server IPs. They can also keep track of player statistics, inventories, and host user made content in the players' Steam Cloud storage. If that's all you need: Congratulations, you're off the hook.
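For a feel of what that master-server side amounts to, here's a generic sketch of a heartbeat-based server list. This is not the actual Steamworks API, just the shape of the idea with made-up names and timings:

```python
# Generic master-server-list sketch: player-hosted game servers heartbeat their
# address in, clients ask for the current list. Names/timings are assumptions.
import time

HEARTBEAT_TIMEOUT = 90.0   # assumed: drop servers that go quiet for this long

class MasterServerList:
    def __init__(self):
        self._servers = {}     # (ip, port) -> {"last_seen": float, "info": dict}

    def heartbeat(self, ip, port, info):
        """Called when a player-hosted server announces itself (or refreshes)."""
        self._servers[(ip, port)] = {"last_seen": time.time(), "info": info}

    def browse(self):
        """Return only the servers that have heartbeated recently."""
        now = time.time()
        return [
            {"address": f"{ip}:{port}", **entry["info"]}
            for (ip, port), entry in self._servers.items()
            if now - entry["last_seen"] < HEARTBEAT_TIMEOUT
        ]

# usage sketch:
master = MasterServerList()
master.heartbeat("203.0.113.9", 27015, {"name": "KAG EU #1", "players": 14, "max": 32})
print(master.browse())
```

The point is that the heavy lifting (simulating matches, pushing game traffic) stays on the players' machines; the backend only keeps this tiny table plus stats and cloud saves.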

The server itself probably doesn't require too many resources. From the JC2MP site:


The main limiting factors are bandwidth and latency. You need enough throughput to deal with the volume of traffic, and you need your servers to be physically close to the players using them. The less your separate servers need to talk to each other, the better.

Ideally, you'd want to build a game that scales horizontally rather than vertically: Your game design should hopefully allow you to spread players to smaller instances that you can spawn and dispose of as needed, so that when you have to deal with a sudden 10,000 player peak you can just rent a couple dozen EC2 instances instead of shelling out for an expensive dedicated server.
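A sketch of what that scale-out decision can look like; the per-instance capacity and headroom factor are assumptions, and the provision/terminate callbacks stand in for whatever cloud API (EC2 or otherwise) you'd actually call.

```python
# Sketch of horizontal scale-out logic: keep enough small game instances for the
# current player count plus headroom, and spin extras down when things go quiet.
import math

PLAYERS_PER_INSTANCE = 500   # assumed capacity of one cheap instance
HEADROOM = 1.25              # keep 25% spare so a spike doesn't hit a wall

def instances_wanted(current_players):
    return max(1, math.ceil(current_players * HEADROOM / PLAYERS_PER_INSTANCE))

def reconcile(current_players, running_instances, provision, terminate):
    """Call provision()/terminate() until the fleet matches the target size."""
    target = instances_wanted(current_players)
    while running_instances < target:
        provision(); running_instances += 1
    while running_instances > target:
        terminate(); running_instances -= 1
    return running_instances

# usage sketch with stub callbacks: a sudden 10,000 player peak
fleet = reconcile(10_000, 20,
                  provision=lambda: print("spawn instance"),
                  terminate=lambda: print("kill instance"))
print(fleet)   # -> 25 instances, i.e. a couple dozen rented boxes
```

The design only works if the game itself shards cleanly into instances of a few hundred players; if everything has to share one world state, you're back to buying one big expensive box.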

It's kind of become a meme in itself. Computers handling thousands of connections every minute, operating at peak capacity ~98% of the time or more, can still fail. Most cloud-based systems use cheap $50 blades as servers and just replace them as they die. DDoS and technical bugs are also problems, though usually they can be resolved over time.

Throwing money at problems rarely resolves them unless it's simply hardware failure, and these systems are pretty big. Without proper diagnostics or someone ready to replace the hardware, it can take some time. Unless you're paying for the Usain Bolt of hardware technicians, you're waiting on the tech support to identify the problem or problems, get a replacement, install it, and reopen the service. It's become a worse problem now that many companies simply rent their servers from people like Amazon. So if, for example, Blizzard hears about a problem, they have to contact them, and have their technicians sort it out.

Most people remember the times service was bad, not good. When it works, it's usually pretty great from a technical perspective.

I'm hardly experienced in the area but I know enough about it from research. I'll try to answer what I can.

Typically this requires building a scalable solution, preferably one you can grow by just buying cheap hardware, and if such a solution already exists, why not use it and save on server costs? What if you put your game on sale on Steam? Would the influx of new players kill your server? What about players in Australia (who are commonly screwed over by technical details like this)? MMO companies are probably the most common devs who run their own dedicated hardware these days, since they need to: it lets them directly control how their game is run. There are a lot of factors to consider, and no solution is perfect.

Strictly better to pay for hosting from a monetary perspective. You pay for what your players use, and their services usually work. If you have like one million concurrent users, rent might get costly and dedicated hardware sounds good, but even something like a thousand players is pretty small time and is far more affordable to just rent, even long term.

It definitely helps if your playerbase needs it. An easy to understand example is matchmaking systems in games. Traffic is routed to a specific server, you play on it with others playing around you when the lobby is full. You just write code to handle swapping players around. You can hire servers to host your game in different regions from various companies, and these companies run servers usually all over the world in different centres. Valve do this for matchmaking in CSGO and TF2, and they usually list the place you're connecting to in the server name.
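Roughly, that matchmaking flow boils down to a per-region queue that fills lobbies and hands them to a regional server. A minimal sketch, with made-up region names, hostnames and lobby size:

```python
# Sketch of region-bucketed matchmaking: queue players per region, pop a full
# lobby, route it to a server in that region. All names here are illustrative.
from collections import defaultdict, deque

LOBBY_SIZE = 10
REGIONAL_SERVERS = {"eu": "eu1.example.net", "na": "na1.example.net",
                    "asia": "asia1.example.net"}

class Matchmaker:
    def __init__(self):
        self.queues = defaultdict(deque)   # region -> waiting player ids

    def enqueue(self, player_id, region):
        self.queues[region].append(player_id)
        return self._try_dispatch(region)

    def _try_dispatch(self, region):
        q = self.queues[region]
        if len(q) < LOBBY_SIZE:
            return None                     # keep waiting
        lobby = [q.popleft() for _ in range(LOBBY_SIZE)]
        # A real system would reserve a free instance in that region here;
        # this sketch just returns the regional host with the player list.
        return REGIONAL_SERVERS[region], lobby

# usage sketch:
mm = Matchmaker()
for i in range(10):
    result = mm.enqueue(f"player-{i}", "eu")
print(result)   # -> ('eu1.example.net', ['player-0', ..., 'player-9'])
```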

Different companies exist and I can't list all of them, but there are the likes of Demonware and Amazon, and Unity even has a contract with its hosts and offers hosting as a package. It's just a case of digging for these companies and contacting them.

Hardware has gotten better, definitely. Code supporting LAN, though? Depends on the game. I'd argue that since network connections have gotten better worldwide anyway, the only draw of playing LAN is wanting to meet with friends, or organizing tournaments where even minute amounts of latency are an absolute detriment at a competitive level, like Counter-Strike. LAN, I believe, is usually cut because the majority of people will play on matchmaking servers anyway, and since those are reasonably reliable (key word being reasonably), it's usually safe to assume that'll be enough for people to use and be happy with. Devs can't account for EVERY scenario or drop in service, but nobody goes into making a network-hosted game wanting to buy the worst servers and hosts imaginable. They want you to play their game with others.

Server thread?

Posting EVE Online architecture!

30x IBM x240 - Intel E5-2637 v3 CPU @ 3.5GHz with 64GB RAM (2133MHz)

6x IBM x240 - Intel E5-2667 v3 CPU @ 3.2GHz with 128GB RAM (2133MHz)

6x IBM x240 - Intel E5-2667 v3 CPU @ 3.2GHz with 128GB RAM (2133MHz)

6x IBM Flex x240 - Intel E5-2640 v3 @ 2.6GHz with 386GB RAM (2133MHz)

2x X880 X6 FlexNode - Intel E7-8893 v3 @ 3.2GHz with 768GB RAM (1866MHz)

More

youtu.be/3-nJLIJdpw4
youtu.be/U87-vhjrYF4 (the virtual server can reboot in ~7 seconds)

...

Final picture.

WHY

Just how many players can they possibly have and how hard can the game really be to run?
Ship's position and rotation plus whatever variables they use, what's hard about this? Why is there any need for all of that shit? The server doesn't even have to render jackshit.

I know calling EVE the "spreadsheet game" is kind of a meme, but it would actually be entirely possible to simulate everything needed to run EVE in a spreadsheet, so why would they need anything better than an average setup?

Because every single dot on this map is a virtual server running on those boxes. And more, because of wormhole space. Dropping 2000 players into one small area all at once is straining as well, so there are the Expanded SOL servers.
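In other words, the layout is something like one virtual server per solar system, with the known-busy systems pinned to the reinforced boxes. This is only a guess at the general shape, not CCP's actual code:

```python
# Illustration of the idea above: every solar system is its own "virtual server"
# process, and big-fight systems get mapped to the reinforced hardware.
import zlib

NORMAL_NODES = [f"node-{i:02d}" for i in range(48)]   # assumed pool size
REINFORCED_NODES = ["sol-xl-1", "sol-xl-2"]           # the "Expanded SOL" class boxes

def node_for_system(system_name, reinforced=False):
    """Pin a solar system to one node; reinforced systems go to the big boxes."""
    pool = REINFORCED_NODES if reinforced else NORMAL_NODES
    return pool[zlib.crc32(system_name.encode()) % len(pool)]

print(node_for_system("Jita", reinforced=True))   # a known trade-hub system
print(node_for_system("some quiet backwater"))
```

The catch is the same one the thread keeps circling: when 2000 players all pile into one system, that one process is the bottleneck no matter how many other boxes sit idle.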

The internet has not improved for games; it has gotten worse in many countries. Sure, bandwidth is higher, but ping is terrible and inconsistent due to bufferbloat.

LAN takes no effort whatsoever to put in the game. There were rumours that companies like Microsoft were deliberately removing it from their games to promote the use of Xbox Live subscriptions.

tired and sober systems engineer here. I'll explain after I'm done eating and get some whiskey.

I talked to a Blizzard guy once, apparently their entire infra is tacked together and tends to fall in on itself. He implied that that's what some of their "DDOS attacks" actually are.

1: The server itself is piss-easy; it's the infrastructure that's hard. Server hardware isn't that expensive, you can get a 12-core/24-thread T710 with 96GB RAM on eBay for $400. The actual cost of hardware is getting new stuff that's not EoL and has a warranty, because at this level of service, warranties generally involve overnighting parts. Even higher tiers get you couriers and technicians who are there with spare parts and any knowledge you don't have yourself by the next business day, or same-day if you pay out the ass.

If you want to know what the top-tier service contracts involve, give this piece a read and look at what Cisco did:
computerworld.com/article/2581420/disaster-recovery/all-systems-down.html

What isn't easy is uptime. You need HA (high availability) infrastructure, and it gets incredibly expensive. You're looking at two internet connections ideally, and symmetrical fiber for business will run thousands of dollars a month for just your low-tier stuff. You need gigantic battery banks to keep everything going until you can turn on your gigantic generator. You need air conditioning unless you're a really tiny operation. You need two firewalls. You need licensing for everything that has software, OS, hypervisor, additional products, it goes on.

So you've already got a fuckton of money involved here, and now you have to pray that your shitty code will stand up when it's in a production environment. Devs are, in the end, responsible for most outages. This is especially annoying because devs are incredibly full of themselves, and write shit code. "lol sorry gaise this financial application needs to run as domain admin or it won't work. waht's the matter? it's not insecure, silly!"

Now, most enterprise has their game together decently, despite the horrifying overabundance of buzzwords and incompetent management that views IT as a cost center. The problem is building out to scale, and game companies don't seem to be that great at it. I'm not a senior systems architect, so I can't give more specifics, but your big-name companies like Blizzard and Valve deal with incredible amounts of traffic, and probably have trouble scaling. Combine that with shitty code, and you have a disaster that'll happen every few weeks.

I've gotten kinda off-track, but as for the rest of the connection, it relies on context. If you're looking to host something you wrote, OP, look into colocation. You rent space in an actual, professional datacenter, pop your own servers into your own cage, and they take care of the gnarly networking, power, A/C, and other annoying stuff. Uptime at these places is generally quite good. It's pricy, but it's vastly better than stacking up some old Dell PEs at home and calling it a day. I couldn't really give you an actual price point, I'm afraid.

1.1: How small-time? 1k users, I'd host at home with a business internet connection (expect to pay lots for that, but you need it). But then, I know what I'm doing, and have truckloads of spare hardware lying around.

1.2: Regional. You just buy a datacenter, or you rent a big chunk of one from someone that has lots of space. Depends on budget, but this is hitting the sort of scale that will leave you with employees on-site. 1k users, absolutely not. Those poor faggot commies in china will just have to deal with their latency. Smaller guys that run f2p games, I could see them just renting a fair amount of colo space.

2: Many millions per year for the big ones, I'd imagine.

3: If you're a Holla Forumsirgin with emone to spare, I'd go colo after buying your own secondhand hardware. Buy two of everything and keep an entire spare stored so you're not fucked if a part dies. If you really need some sort of HA, look into a SAN and vMotion across a bunch of servers. Expect to pay tens of thousands for it all.

4: LAN is dead. Network connections worldwide are irrelevant - remember that we're talking about Local Area Networks, switched dealios that (generally) don't need routing. They're irrelevant today outside of actual LAN parties (where you'll have an outside connection, since your games are probably online-only) and internet cafes, where the previous applies as well. The advantage of LAN for vidya is virtually no latency, and it's a very social setting out of sheer necessity. Unless you're doing something like a MAN across your city, in which case you're just a fucking autistic engineer, and we should have drinks sometime.

Nope, not really.


Jesus Christ, and here I am with a phone that's going to automatically ring the minute any one of 120 little client sites or one of ~100 servers drops offline.


Only half right. Conditions that would result in failed UDP packets will also result in failed TCP/IP packets; TCP just retransmits until they get through. UDP isn't necessarily less reliable, it's just that it's fire-and-forget - there's no acknowledgement or confirmation.
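Which is why games that care bolt their own sequencing (and, where needed, acknowledgements) on top of UDP at the application level. A minimal sketch with a made-up packet header:

```python
# Sketch of application-level sequencing over UDP: each datagram carries a
# sequence number, stale packets are dropped, and only "reliable" payloads are
# kept around for resend. The header format is an assumption for illustration.
import socket
import struct

HEADER = struct.Struct("!IB")   # 4-byte sequence number + 1-byte "reliable" flag

def send_state(sock, addr, seq, payload, reliable=False):
    """Fire-and-forget send; the caller keeps 'reliable' payloads to resend until
    the peer acknowledges the sequence number in its own traffic."""
    sock.sendto(HEADER.pack(seq, 1 if reliable else 0) + payload, addr)

def receive_state(sock, last_seq_seen):
    """Read one datagram; drop anything not newer than what we've already applied."""
    data, addr = sock.recvfrom(1500)
    seq, reliable = HEADER.unpack_from(data)
    if seq <= last_seq_seen:
        return None, last_seq_seen            # stale or duplicate: ignore it
    return (data[HEADER.size:], bool(reliable), addr), seq
```

Position updates are fine to lose (a newer one is already on the wire); things like "you bought an item" go out with the reliable flag and get resent until confirmed.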


Cheap cloud services like Digital Ocean are notoriously finicky and are prone to extended downtime. You get what you pay for. A lot of these places, your resources will generally be allocated to someone else the instant you stop using them.

Weird that they're so specific about specs though, I have to wonder if they're just buying old hardware and flipping it into production. That CPU is EOL, and consumer-grade to boot. Not at all surprised that they missed their uplink speed by that much, to be honest.


muh dick

CCP has top-tier architects. I'd love to know what they're doing for storage, though - there's no way they're running onboard drives. They don't even list any sort of storage in the rack diagram.

Even with all that, largish fights still crush their servers, and they have to put sections of the game world into slow-mo (time dilation) to keep up.