I talked to a Blizzard guy once; apparently their entire infra is tacked together and tends to collapse in on itself. He implied that's what some of their "DDoS attacks" actually are.
1: The server itself is piss-easy; it's the infrastructure that's hard. Server hardware isn't that expensive: you can get a 12-core/24-thread T710 with 96GB of RAM on eBay for $400. The real cost of hardware is buying new gear that isn't EoL and has a warranty, because at this level of service, warranties generally involve overnighting parts. An even higher tier gets you couriers and technicians on-site with spare parts and any knowledge you don't have yourself by the next business day, or same-day if you pay out the ass.
If you want to know what the top-tier service contracts involve, give this piece a read and look at what Cisco did:
computerworld.com/article/2581420/disaster-recovery/all-systems-down.html
What isn't easy is uptime. You need HA (high availability) infrastructure, and it gets incredibly expensive. Ideally you're looking at two internet connections, and symmetrical fiber for business will run thousands of dollars a month for even the low-tier stuff. You need gigantic battery banks to keep everything going until you can spin up your gigantic generator. You need air conditioning unless you're a really tiny operation. You need two firewalls. You need licensing for everything that runs software: OS, hypervisor, additional products, the list goes on.
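If you want a concrete feel for the "two internet connections" part, here's a rough Python sketch of a watchdog that pings both uplink gateways and complains when the primary dies. The gateway IPs are placeholders, the ping flags are the Linux ones, and real failover would actually happen on the routers/firewalls (VRRP, BGP, whatever) - this only shows the monitoring half of the idea.

```python
import subprocess
import time

# Hypothetical gateway IPs for the two uplinks; substitute your own.
LINKS = {"primary": "203.0.113.1", "backup": "198.51.100.1"}

def link_up(gateway: str) -> bool:
    """Return True if the gateway answers one ping within 2 seconds (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", gateway],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main() -> None:
    while True:
        status = {name: link_up(gw) for name, gw in LINKS.items()}
        if not status["primary"]:
            # Actual failover lives on the network gear; this just tells a human.
            print("primary uplink down, backup is",
                  "up" if status["backup"] else "ALSO DOWN")
        time.sleep(30)

if __name__ == "__main__":
    main()
```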
So you've already got a fuckton of money involved here, and now you have to pray that your shitty code will stand up when it's in a production environment. Devs are, in the end, responsible for most outages. This is especially annoying because devs are incredibly full of themselves and write shit code. "lol sorry gaise this financial application needs to run as domain admin or it won't work. what's the matter? it's not insecure, silly!"
Now, most enterprise shops have their game together decently, despite the horrifying overabundance of buzzwords and incompetent management that views IT as a cost center. The problem is building out to scale, and game companies don't seem to be that great at it. I'm not a senior systems architect, so I can't give more specifics, but your big-name companies like Blizzard and Valve deal with incredible amounts of traffic and probably have trouble scaling. Combine that with shitty code and you have a disaster that'll happen every few weeks.
I've gotten kinda off-track, but as for the rest, it depends on context. If you're looking to host something you wrote, OP, look into colocation. You rent space in an actual, professional datacenter, pop your own servers into your own cage, and they take care of the gnarly networking, power, A/C, and other annoying stuff. Uptime at these places is generally quite good. It's pricey, but it's vastly better than stacking up some old Dell PowerEdges at home and calling it a day. I couldn't really give you an actual price point, I'm afraid.
1.1: How small-time? For 1k users, I'd host at home with a business internet connection (expect to pay a lot for that, but you need it). But then, I know what I'm doing and have truckloads of spare hardware lying around.
1.2: Regional. You just buy a datacenter, or you rent a big chunk of one from someone that has lots of space. It depends on budget, but this is hitting the sort of scale that will leave you with employees on-site. For 1k users, absolutely not. The poor commies in China will just have to deal with their latency. Smaller guys that run F2P games, I could see them just renting a fair amount of colo space.
2: Many millions per year for the big ones, I'd imagine.
3: If you're a Holla Forumsirgin with money to spare, I'd go colo after buying your own secondhand hardware. Buy two of everything and keep an entire spare stored so you're not fucked if a part dies. If you really need some sort of HA, look into a SAN and vMotion across a bunch of servers (rough sketch below). Expect to pay tens of thousands for it all.
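For the vMotion part: once the SAN is in place, the actual migration is one API call against vCenter. Here's a rough sketch with pyvmomi (VMware's Python SDK); the vCenter address, credentials, and VM/host names are all placeholders, and it assumes shared storage is already sorted, so a host-only relocate is a plain vMotion.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder vCenter details and object names; substitute your own.
VCENTER, USER, PASSWORD = "vcenter.example.local", "administrator@vsphere.local", "hunter2"
VM_NAME, TARGET_HOST = "game-server-01", "esxi-02.example.local"

def find(content, vimtype, name):
    """Walk the inventory and return the first object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

def main():
    # Lab-style connection that skips certificate checks; don't do this in production.
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD,
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        vm = find(content, vim.VirtualMachine, VM_NAME)
        host = find(content, vim.HostSystem, TARGET_HOST)
        # With shared (SAN) storage, relocating only the host is a plain vMotion.
        spec = vim.vm.RelocateSpec(host=host)
        WaitForTask(vm.RelocateVM_Task(spec=spec))
        print(f"{VM_NAME} is now running on {vm.runtime.host.name}")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```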
4: LAN is dead. Worldwide network connections are beside the point here - remember that we're talking about Local Area Networks, switched dealios that (generally) don't need routing. They're irrelevant today outside of actual LAN parties (where you'll have an outside connection anyway, since your games are probably online-only) and internet cafes, where the same applies. The advantage of LAN for vidya is virtually no latency, and it's a very social setting out of sheer necessity. Unless you're doing something like a MAN across your city, in which case you're just a fucking autistic engineer, and we should have drinks sometime.
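If you want numbers behind that latency claim, here's a throwaway Python snippet that times TCP connects; point it at a box on your switch and then at something across the internet and compare. Both targets are placeholders, and it assumes something is listening on port 80 on each end.

```python
import socket
import time

def connect_ms(host: str, port: int = 80, samples: int = 5) -> float:
    """Average time in milliseconds to open a TCP connection to host:port."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += time.perf_counter() - start
    return total * 1000 / samples

if __name__ == "__main__":
    # Placeholder targets: a machine on the local switch vs. something across the internet.
    print("LAN box:  %.2f ms" % connect_ms("192.168.1.50"))
    print("Internet: %.2f ms" % connect_ms("example.com"))
```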