What do you think Holla Forums? Will it topple the 1080ti?

Hopefully they will get better pumps this time, the whine was a real issue.

who the fuck cares

I've been an nvidia fag up until they announced the Vega, what was up with the whine?

A shitload of the watercooled Fury X cards had really whiny pumps. Returned 3 of them before I gave up and went for a regular Fury with a regular fan/heatsink setup.

Graphics have plateaued. Only people buying these are bitcoin miners.

Until pixels are entirely indistinguishable to the naked eye and screens take up my entire field of view they haven't peaked.

You know, I've felt the same way, but lately I've noticed my 980 runs hotter than normal with just a couple of games, mainly Friday the 13th, Alien Isolation, and tumblrwatch. 980 reaches 80 degrees with just those three, but everything else, including R6 Siege runs at a steady 68 degrees. Is it just poor optimization or what?

It's time to buy a new card, goy. Even though those same games run just fine on inferior console hardware.

What the fuck, man.

I'd much rather avoid that.


I don't get it either. It idles at around 50, most games the fans don't even have to turn on because I'm cheap and play older games, but even some newer games, it doesn't push past 70. Those three games though, for some fucking reason are a fucking bitch on my card.

It's called "planned obsolescence." Thanks for being a nVIDIA™ customer.

Why do are they making premium cards?

has anyone been as far as decided to do what more look like

I figured it was some fuckery on Nvidia's end. That's why I'm hoping the Vega at least matches the 1080ti, so I can get it and be done with upgrading forever.

And here i thought my 960 4 gb going at 47/53 degrees with Near A Tomato was bad.

it ain't some dumbfuck jew conspiracy, those cards just run hot as shit and your room temp/other factors influence it quite a bit.

Today i shall remind them.

:')

Maxwell and older nvidia architectures are pretty well optimized software-wise for DX11, DX9 and below, but show their age in Vulkan or DX12. With AMD it's the other way around, so you either have a good CPU to compensate for AMD's lackluster driver support, or you get nvidia. To be honest, even with a 1440p monitor the most expensive card i'd buy would be the GTX1700, otherwise get the cheapest RX470/570 equivalent and wait for Vega, Navi or whatever to save mankind.

Voodoo 5 6000 / Spectre 3000 never ever.

I'm sorry, I am very drunk.
What I am meant to say is that why are premium cards thing when Cards do the software?

I'm happy AMD uncucked itself and is making intel and jewvidia actually work to compete.

*Also, because of the fabrication process they're rather inefficient when it comes to thermals and power management compared to the newer 16/14nm process.

And the GTX970 fiasco was fucking embarrassing for a company of that magnitude.

50 is really fucking high for it to be idling. My 970 idles at closer to 30. However, under full synthetic load I easily reach 85 on the GPU and even a peak of fucking 90 on my Skylake i7. This is a synthetic load though, so I doubt the numbers are that high in any games

I've heard the 980 is supposed to idle higher than the 970, but still I feel like it's too high. I just want the Vega to hurry the fuck up and drop so I can switch off nvidia already.

Go the fuck to sleep, you drunkard. You make no sense. Go, now.

Intel has been jewing their consumers for over 10 years. The tech was definitely available, as you could buy a couple of cheap Xeons off a chinese retailer even though getting the motherboard was a bit harder and more expensive, but of course it was intended to be kept exclusive to the server market for as long as possible. Even current gen consoles have multiple -albeit weaker- cores (PS4 has 8 cores in a 4+4 configuration IIRC, dunno about Xbone).

I know it's a typo, but the thought of it gave me a chuckle. As for the 1070, I love it. I had a 570 for five years prior to upgrading, worked just like new to the last day but I needed some more power.

Fucking numbers man.

...

Why in the fuck would you need anything more than a fan to cool down your personal home computer? Are you downloading 50 terabytes of gay midget porn?

I would honestly not be comparing the Xbox One and PS4's chips to mainstream Intel chips. Both the Xbone and the PS4 use AMD Jaguar cores. They're weak as shit even with the extra cores on the die. It's the same microarchitecture used in my laptop for fucks sake. With the extra cores I can see it maybe in the neighborhood of AMD A series APUs at best.

Texas says hi.

I hope so, not gonna get the liquid cooled version though, air cooling works just fine for graphics cards.

I just want a new PC. It can't be that hard to make something that gets me hyped enough to buy it, can it?

I've had this one for two years, I want a new one. These two 980s are starting to feel old. Maybe I should get 8 TiB of SSD backed storage in my new one so I can finally install all my games on something that isn't slow. But everything on the market is so shit it doesn't feel like it's worth replacing my current PC.

yEAH!

all the posters in this thread for starters :^)

Wait that's something abnormal? My R9 290x goes up to 80 degrees on pretty much any recent game I play. Even goes up to 90 for some

jesus christ this fucking flag, FUCK OFF

Isn't water cooling mainly for overclocking anyways?

Demoman get off Holla Forums

I know and i agree. Still, having the same core count across different platforms might make things easier for devs, but im not one so i wouldn't really know.

Calm down Osea.

Pic related

who is that semen demon?

Texas? More like fucking Australia, or some other third world shithole like Brazil.

They're retarded. If it's not thermal throttling, it's within safe limits. Chips generally don't start throttling until 95 degrees, and at 100 they're designed to shut off to protect themselves. You wouldn't want to get close, sure, but if anything you want to avoid heat just to avoid throttling down.

I may be retarded regarding temps, but I've never actually gotten a clear answer regarding this issue. Once my room started getting hot as fuck from my computer, I questioned if it was normal or not.

Still using my R9 290 and running every game on max settings at 60fps.

It has 32 GiB of RAM, though.

I think I'm just gonna get a side fan to blow some cool air on the thing and replace the PSU with something semi modular so I don't have a bunch of useless cables cluttering my case, and hopefully have that improve the airflow a little. I've heard the MSI line of R9 290x are incredibly loud and hot running cards anyway.

Yes, generally speaking your computer will make your room hot, seeing as it is blowing hot air from its exhaust fans. And yes, if you don't have ventilation in your room then it will continue warming until it reaches thermal equilibrium. But seeing as most rooms are not in a fucking vacuum chamber that will never happen. Just keep an eye on your CPU temps and if they get too hot as your room gets hot then open a fucking window or invest in an AC

It's pretty normal, especially during long sessions and especially with flagship GPUs. Even if your GPU is staying cool, the heat dissipated from it has to go somewhere. That somewhere is the ambient environment, and you'll notice it a lot more in the summer than you do in the winter.
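If you want a feel for the numbers, it's easy to ballpark with air's heat capacity. A rough Python sketch, with made-up but plausible figures (a 300 W machine in a 30 m^3 room with zero ventilation or wall losses, which no real room has):

    # Rough estimate: how fast a PC heats a sealed room.
    # All numbers are assumptions for illustration only.

    AIR_DENSITY = 1.2           # kg/m^3, at room temperature
    AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)
    ROOM_VOLUME_M3 = 30.0       # assumed room size
    PC_HEAT_W = 300.0           # assumed whole-system draw under load

    air_mass_kg = AIR_DENSITY * ROOM_VOLUME_M3           # ~36 kg of air
    joules_per_kelvin = air_mass_kg * AIR_SPECIFIC_HEAT  # ~36 kJ per degree

    kelvin_per_hour = PC_HEAT_W * 3600.0 / joules_per_kelvin
    print(f"~{kelvin_per_hour:.0f} degrees per hour with zero ventilation")

That comes out to roughly 30 degrees an hour in a perfectly sealed room, which is why it feels so dramatic in a small attic. Real rooms leak heat through walls and windows, so the temperature levels off at that equilibrium instead of climbing forever.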

naa it's just an alternative to air cooling. If anything it's more likely to break than an air cooler because it has a mechanical pump, and mechanical parts break

Some people prefer it for aesthetics too, but i don't like it because if it's a custom loop rather than an AIO, there's the chance of a water leak screwing your whole system. So i'll stay with air cooled solutions.


The flatchested girl from Netoge no Yome wa Onnanoko ja Nai to Omotta? who tries to hide her powerlevel.


Well it's rather hard to have your PC temperature be lower than ambient temperature, unless you're using LN2, but that's only for extreme overclocking as it fucks up the CPU and socket after a few hours.

All those virtual machines i could not run :^) 16GB is more than enough right now though even if you want to mount a RAMdisk

Good idea, i nigger rigged my PC like that for a while. A not so shit cooler and decent airflow is vital though.

No considering you decided to post the Frontier Edition and not a gaming card.

That's not true at all. Liquid is denser than gas and has a much higher heat capacity, so it carries heat away better. This is thermodynamics 101 shit, user

does it make coool noise?

No I didn't mean a fan on the side of my case. I meant an actual case fan mounted to the side of my case.

My CPU temps are fine since I'm using a corsair h55. Just my GPU temps that had me a bit worried was all.


Good point. Thanks anons, I feel like I actually learned something today.

...

Are you gay or something? I've got three of pic 1 for VMs and pic 2 for website stuff.

Mine was more like this.


Gayman passthrough VMs, but nice either way. By the way, which website, user? Don't be a fag and share it with us

Wait a few more years. PCIe 4.0 is coming out in the next year, with talk of PCIe 5.0 in less than 5.

Water takes longer to heat up, so the cpu stays cooler for longer, but once that water has heated up (say after gaming for 2 or 3 hours) it also retains that heat for longer (about an hour or so), meaning the water cooler holds your cpu at a higher temperature for longer than it should.
Compare that to a heatsink and fan: it heats up to max temp within a matter of minutes, yes, but it also cools down within a matter of minutes, so you aren't keeping the chip at higher temperatures for long periods of time, which can cause damage to the cpu.
Also nice cherry picking by ignoring the fact that AIOs are more likely to have mechanical faults than heatsink + fan coolers.
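To put rough numbers on that thermal mass point, here's a sketch with made-up figures (0.5 L loop, 150 W of heat going in, and, unrealistically, the radiator removing nothing):

    # Back-of-envelope check of the "water heats up slowly" claim.
    # Assumptions: 0.5 L loop, 150 W heat input, zero heat removed.

    WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
    LOOP_MASS_KG = 0.5            # ~0.5 L of water (assumption)
    HEAT_INPUT_W = 150.0          # heat dumped into the loop (assumption)

    def loop_temp_rise(seconds: float) -> float:
        """Kelvin gained if no heat leaves the loop at all."""
        return HEAT_INPUT_W * seconds / (LOOP_MASS_KG * WATER_SPECIFIC_HEAT)

    for minutes in (1, 5, 30):
        print(f"{minutes:>2} min: +{loop_temp_rise(minutes * 60):.1f} K")

Even with the radiator doing literally nothing, half a litre of water only climbs about 4 K per minute, so coolant temperature lags behind load changes in both directions, exactly as described above.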

Yeah I do it like that on especially hot days for now, but I really need to come up with a permanent solution cause I'm facing an entire season of shitty heat. My attic room gets so fucking hot I actually get scared of running anything remotely demanding.

Guys, I badly need to upgrade my gpu, I've been using the same one since 2011. Since I have a freesync 144hz screen I need something made by AMD, but it looks like they are not releasing anything good anytime soon.

Would a nvidia card run well on a freesync screen?

Ah, it's mostly a GNU/Social instance at this point. If you're into that kind of thing then you're better off at registering with shitposter.club anyway.

then get vega, it's a no brainer
If you are already set on getting AMD for freesync, that is.

The joke of water cooling is that it's still air cooling. You're not using water to increase cooling efficiency, you're using water to transport heat to a radiator somewhere else, and that radiator has fucking fans and heatpipes and shit too. Water cooled CPUs reduce cooling efficiency by putting fucking water between the CPU and the heatsink, whereas air cooling slaps the heatsink and fans directly to the CPU.

The only reason to EVER use water cooling is when you have severe space limitations and can't fit an air cooler on top of the CPU.

You've waited this long, Vega is launching this month. And sure, Nvidia will work with a Freesync monitor, as long as you don't care about the Freesync part working.

Is that a garage door opener?

What? Sauce? Prices?

I do apologize, allow me to correct myself.
The pump in an AIO is an additional complex mechanical part that is more prone to failure than its fan counterparts.
And as said, you still need fans on an AIO to cool the radiator.
It's just fan cooling with extra, more expensive, more failure-prone steps.

the frontier edition, consumer versions are coming out in july.
source: kitguru.net/components/graphic-cards/luke-hill/computex-amd-rx-vega-consumer-july-2017-threadripper-summer-2017/

I own a gtx 970 and my roo. Heats to around 44 °C on summer with no AC. I have a fan to cool myself but the card mostly stays at 70~75 degrees when running full throttle

My room*

Inb4 aussie

Damn, I'm going to have to bite the bullet and get a temporary GPU until then. The only downside to AMD CPUs is no iGPU. I wish Nvidia wasn't so blatantly cancerous with their drivers.

The fact your temps are sub 100 °C proves you don't live on Criminal Island.

Freesync is useful when there's a chance you'll be dropping frames, for instance in a very demanding game at high resolutions; otherwise, as long as you can maintain 144+ you're golden. So in theory a nvidia card can work, but you're not going to be able to use freesync unless nvidia somehow decides to allow it through a driver update (as freesync is pretty much adaptive sync, which is a VESA standard, it should be possible in theory)

So you either get a high tier nvidia card now and hope it's enough, or wait for Vega and don't worry about lower frames as long as it is within the Freesync range of your monitor. AFAIK they go as low as 30 and some also have the added benefit of LFC. If i were you, i'd wait for Vega and save myself the hassle of incompatible technologies.
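For reference, LFC (low framerate compensation) just repeats frames when fps drops below the panel's minimum refresh, so the display stays inside its variable refresh window. The actual driver heuristics aren't public, so this is purely an illustrative sketch, assuming a 30-144 Hz range:

    # Illustrative sketch of LFC. Assumed VRR window: 30-144 Hz.
    # Real drivers use their own (unpublished) heuristics.

    VRR_MIN_HZ = 30.0
    VRR_MAX_HZ = 144.0

    def lfc_refresh(fps: float) -> tuple[int, float]:
        """Return (frame multiplier, effective panel refresh) for a given fps."""
        if fps >= VRR_MIN_HZ:
            return 1, min(fps, VRR_MAX_HZ)  # inside the window: refresh tracks fps
        multiplier = 2
        while fps * multiplier < VRR_MIN_HZ:  # repeat frames until back in range
            multiplier += 1
        return multiplier, fps * multiplier

    for fps in (120, 45, 25, 12):
        mult, hz = lfc_refresh(fps)
        print(f"{fps:>3} fps -> each frame shown {mult}x, panel at {hz:.0f} Hz")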


Looking at it on the bright side, you'll have driver support for longer, i guess.

thats what im gonna do, get a second hand 970 and hold out, then sell it afterwards. im sure some reddit chump will eat it right up

plebian

Your heatsink is probably getting filled with dust. I have seen a lot of people with older cards have theirs break down from overheating, and then they cry that it's AMD or Nvidia's fault.


This is wrong. Decent liquid cooling, i.e. not an AIO, has a lot of water to absorb the heat energy. And radiators are much better at dissipating heat, they have a lot more surface area than the little heat sinks on your CPU and graphics card.

I have a custom loop, it needs 3 120mm fans on the radiator to keep the whole thing cool. There's no whiny graphics card fan, nothing. Just three extra 120mm fans. My CPU never breaks 70c overclocked, and my GPU never breaks 45c overclocked. Graphic card fans are usually the worst, they are those blower types. But even if you get an aftermarket one with regular fans, they'll still make more noise than 3 120mm fans.

Those AIOs are a waste though; they usually don't have a big enough radiator, a big enough pump, or enough liquid to make it worth it.

I'm using an h55 and I just cleaned my entire pc last month with compressed air.

Reminder not to be a retard and buy the Frontier Edition, because that's for Neural Net processing and not games. Also the leaks indicate the proper gaming version of the card will land somewhere between the 1080 and the 1080 Ti.

Why the fuck would neural net processing not wait for the better RX cards too?

Yeah like I said, I think my case just has a really poor airflow and my card being notorious for getting very hot isn't a good combination.

They're optimized for different environments, is my guess. The FE might have enterprise-level support too, which would merit a higher price that could scare away even "prosumers" or whatever they call them now.

better questions:
where have the games been in the past 10 years justifying an actual upgrade?

This right here.

only reason I've upgraded recently is to decompress 7z files faster and watch higher quality 10bit anime. I've gotten more utility out of getting old hardware than new shit lately.

Only reason I want to upgrade is because I'm sick of nvidia's shit, and I feel like I might as well just wait a bit longer for the Vega and then I'll never have to upgrade ever again because consoles will always hold games back.

been using amd since the x1k series and there's plenty of shit here too. amd's got a solid plan going forward and I'll be very surprised if it doesn't succeed - but wait and see if shit really succeeds unless you're still on a 2500k/fx chip or something before you upgrade.

Only reason I want to upgrade is because I am a huge graphics fag who always wants the best graphic settings. I have a 780Ti that can still run games on high but it is starting to fall behind a bit.

Can't topple when you lack drivers.

I'm on the 980. Almost three years old, but still a solid card. I'm just really looking for an excuse to get off the nvidia train.

thats nothing
my desktop cpu reaches 90 idle (has proper thermal paste, proper fan, sufficient cooling, etc. no idea what's causing it, even on the software side.)

This. nvidia has done this in the past, and intel with CPUs.

change the thermal paste on the card and undervolt it a little. makes a world of difference in your temps. I used metro last light in the developer's dlc, right outside of the elevator, as a sample for messing with it until it was stable and I haven't had any problems since

I can almost guarantee your thermal paste is fucked.

if it does it'll do it marginally and then it'll get stomped by volta

do you have an x1800? if not you need to fix your shit cuz something's not right

why would those temps be bad?

that's retarded
those leaks are from benchmarks with unoptimized drivers and non-finalized hardware
tflops-wise it will beat the 1080ti

how is no igpu a downside?
intel is forcing that shit down our throats to justify them still selling a quadcore at the high end of the consumer market for a ridiculous price

Because it means he has to keep a shitty GPU on hand in case his main dies or he wants to sell it.

For the most part yes, but the AMD 9590 CPU I have upstairs is mandatory watercooling, and the dual R9 Fury X's in this PC are watercooled from the factory. Bear in mind that early 386 and 486 CPUs came with no cooling, or simple alloy heatsinks only, with no fan. The R7 1800X in this PC can be cooled with standard air cooling, but I hate excessive fan noise, so I strapped on an H80i v2. In the end it can be a matter of personal choice, or manufacturers trying to push too much from outdated silicon.

You're really not doing a whole lot by blowing ambient air towards your motherboard, you know. Yeah, you might get a bit more cooling doing that, but it's far from optimal. You're forcing convection by displacing static pressure inside the case. This works, but you might actually get more cooling by turning the fan around and using it as an exhaust fan instead. It really depends on a variety of factors though, so it might not make a difference, but it's something to consider trying.

That's not what cherry picking means

Like, at all

I upgraded last year to a 480 8gb when they were at a good price so I'm probably not going to upgrade again for a while. I didn't even really need to upgrade then either, but the gpu I had before the 480 only had 2gb vram and it's a noticeable difference for me.

I'll probably get a new mobo/gpu at some point, but probably not this year as my i5 is still doing fine. Ryzen is looking pretty good though so probably end up with a later iteration of the ryzen 5 then.

I've been running a 570 for years and it only gave me heating issues early on. It's hot as fuck right now, but I keep my room very cool. I'll be picking up F13 soon, so I hope it doesn't explode. It handles FF14 well enough, so I doubt it'll be too bad. I'm just waiting on the 1070 to fall in price, so that I can upgrade.

a few posters upthread answered that question – they are bored and spending large amounts of money removes boredom for a short period of time

Thanks, user, for the salsa.

graphics have plateaued for the amount of shekels that can be spent on graphics and still turn a profit.

"Will it ever be released?" is a better question.

gay

call me when they fix their openGL drivers

Fuck cryptocurrency.

There is no way graphic cards are still in any way worth using for any cryptocurrency, right?

Some two year old coin called Ethereum or something is exploding and people are buying GPUs in god damn bulk.

AMD has been unable to compete on both the CPU and GPU markets for a while so I say no, but I hope it does.

Time to clean the heatsink and replace thermal paste m8. Don't listen to most of what Holla Forums has to say, the only places that know less about tech than Holla Forums are tech-oriented reddit pages.

That's normal operating temperature for a computer part, although on the hot side.

All nvidia gpus starting with kepler throttle at 80 degrees. They're rated for more but from the factory that's where they throttle.

My 390X idles at 60; since it's the Strix version it gets to 90 when maxed.
Do not recommend, it's very loud.
Avoid any 0dB cards if possible. Not worth it.

Clean it, your cpu and fan are probably clogged with dust.

...

So they can make it faster but cannot make it more efficient, so as performance increases so does the waste heat. I'm waiting on when it becomes a standard practice for PC cases to have a radiator instead of side wall.

If your GPU maxes out at 68 degrees then that's poor optimization, yeah. Run FurMark in stress test mode ("burn-in" I think it's called?) and see how hot it gets. Temperature is proportional to workload, so if your GPU is only partway between idle and its maximum stress temperature, then it's only partway between idle and full load.
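If you want to watch temperature track load yourself, nvidia-smi can report both. A minimal sketch, assuming an nvidia card with drivers installed and a single GPU (temperature.gpu and utilization.gpu are standard nvidia-smi query fields):

    # Poll GPU temperature and utilization once a second via nvidia-smi.
    # Run FurMark or a game in another window and watch temp follow load.
    # Assumes a single GPU (one line of CSV output).

    import subprocess
    import time

    QUERY = ["nvidia-smi",
             "--query-gpu=temperature.gpu,utilization.gpu",
             "--format=csv,noheader,nounits"]

    while True:
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
        temp_c, util_pct = (f.strip() for f in out.stdout.strip().split(","))
        print(f"temp: {temp_c} C  load: {util_pct} %")
        time.sleep(1)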

Not to mention the gpus without blower style coolers just dump the heat inside your case.

What's with the ethernet ports?

...

can't tell if you're being sarcastic or just pretending or what, since tone doesn't carry over text

...

I think he's just a tard trying to be funny like the underrated post at

It's so you can download more ram.

That's not that bad of an idea. It's cheaper than having a trillion little fans mounted on the thin parts of the case.

Please talk only about subjects of your knowledge.

So you actually have a desktop computer that idles at 50 degrees celsius and goes to 80-90 during gameplay?

Nigger I have a 3½ year old laptop and even only Witcher 3 put me above 80 a few times.

But GPUs need VRAM, and I can't find any download sites for that. Do you guys hide it all behind those private torrent tracker thingies?

...

...

why? what are you pretending to care about?

He's right though. If you aren't flooring the gas then you aren't using your engine to its maximum potential. That's just the physics of it. It doesn't mean you're using your engine incorrectly, it means you're not going 100%.

And, realistically, nobody ever goes full-fucking 100%. You get a better engine and a better car for marginal performance increases between 20% and 70%, which is where you're going to be using it for the vast majority of its lifetime. You also get a better engine and a better car to show off in front of peers and feel better about yourself.

And if I drop the car analogy:

Nigger I've pushed my engine plenty of times to absolute 100% on the Autobahn and that needle never budged above 90°C. It's a modern VW diesel.
Power output in a car is not limited by heat, but by the abuse the internals can take and by the boost, and equally, a GPU isn't limited by thermal output but by maximum voltage before the electrons start going where they shouldn't. Liquid cooled GPUs with a large radiator never reach their thermal limits before they reach their physical limitations.
You nigger.

When a game runs as fast as it possibly can, utilizing every bit of performance it can reach, and yet your hardware is still only half-loaded, then that's poor optimization. A well-optimized game is one that can leverage the hardware to its full potential. That doesn't only mean high framerate, but also that there is no headroom left for improving anything.

That's because it's not a thermometer, it's a "temperature indicator". It basically only has three positions - "cold", "normal" and "hot". It might as well have been three lamps instead. And just so you can feel better, it's also made to travel from position to position smoothly over time and not instantly. If you want a temperature readout, you need a thermometer gauge. But then you'll be shitting your pants seeing the temperature going all over the place however the fuck it pleases.
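In software terms, that needle behaviour is just a low-pass filter over the raw sensor. A tiny sketch of the same idea, exponential smoothing with an arbitrarily chosen coefficient:

    # The "smooth needle" trick: exponential smoothing of a noisy sensor.
    # alpha near 0 = sluggish needle, alpha near 1 = raw jumpy readout.

    def smooth(readings, alpha=0.1):
        shown = readings[0]
        for raw in readings:
            shown += alpha * (raw - shown)  # move a fraction toward the raw value
            yield shown

    raw = [88, 92, 85, 95, 60, 93, 91, 89]  # made-up noisy coolant readings
    for sensor, needle in zip(raw, smooth(raw)):
        print(f"sensor: {sensor:>3}  needle shows: {needle:6.1f}")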

Again, please only talk about subjects you know about.
if my IP changed it's because of cellular networks

Well you clearly know grand dick about GPU programming, but I welcome your opinion, provided you can state one.

Stop proving you know nothing. Is this some kind of Dunning-Kruger where you somehow believe you know about GPUs without ever having written opengl/d3d/whateverthefuck?
No, failing to stress the TMU while having a trillion shaders running isn't poor optimization.

Thank you for your fuckall of meaningful input and resorting to re-iterating your previous posts, as if it made a difference.

Clearly, if you stall your hardware, it's because you're an incompetent programmer who doesn't know the ins and outs of the hardware he's working with. Some optimization can be done to improve the situation, and therefore the previous state is a lack of optimization.

Literally nobody knows the ins and outs of the hardware they're working with you fucking non-programmer retard, this isn't 1987.

How about instead of projecting and repeating your previous empty statements, you provide ONE meaningful argument? Having to provide one single example isn't an impossibly high standard, even a Holla Forumstard such as yourself should be able to muster that much.

As for "not knowing", hardware nowadays is arguably more uniform than it used to be in 1987. For example, across all hardware and software, GPU API calls are at a premium, so simply by rendering your scene in many small chunks you can stall the GPU through API and/or bus saturation, resulting in poor performance that could have been easily avoided. In the same way, unpredictable branching is at a premium, on CPUs but more so on GPUs. On a CPU, resolving one failed prediction will often cost much more than evaluating the entire function behind it. On a GPU, each block of shader units has to execute the same operation, so if execution hits a branch that's not uniform for all the fragments, it will also stall, executing all the necessary operations sequentially instead. And so on.
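To make the branching point concrete, here's a toy model of masked SIMT execution: when lanes in a warp disagree on a branch, the hardware runs both sides with the non-participating lanes masked off, so the warp pays for both paths. The step counts are invented and no vendor's scheduler actually looks like this; it only illustrates the cost model:

    # Toy model of SIMT branch divergence. Lanes in a warp run in lockstep,
    # so a divergent branch executes BOTH sides with lanes masked off.
    # 8 lanes per warp here; real warps are 32 (nvidia) / 64 (AMD).

    def warp_cost(lane_values):
        take_then = [v % 2 == 0 for v in lane_values]  # per-lane branch condition
        steps = 0
        if any(take_then):        # "then" side runs if any lane takes it
            steps += 4            # pretend the then-branch costs 4 ops
        if not all(take_then):    # "else" side runs if any lane skips it
            steps += 6            # pretend the else-branch costs 6 ops
        return steps

    print("uniform:  ", warp_cost([2, 4, 6, 8, 10, 12, 14, 16]))  # 4 ops
    print("divergent:", warp_cost([1, 2, 3, 4, 5, 6, 7, 8]))      # 4 + 6 ops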

Thanks crypto-miners!

100% CPU and GPU utilisation isn't realistic on anything but a fixed hardware setup you're developing specifically for. What's the point in the CPU pushing 400 UPS if the GPU can only pump out 60 FPS, just to make sure the CPU is used "to its full potential"?
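Which is exactly why the standard pattern decouples the two: the simulation ticks at a fixed rate and rendering runs only as fast as the display takes frames. A bare-bones sketch of the classic loop (the tick rate is arbitrary and update/render are placeholders):

    # Classic decoupled game loop: fixed-rate simulation, display-paced render.

    import time

    TICK_RATE = 120           # simulation updates per second (arbitrary choice)
    TICK_DT = 1.0 / TICK_RATE

    def update(dt):
        pass                  # placeholder: advance game state by dt seconds

    def render():
        time.sleep(1 / 60)    # placeholder: drawing; vsync would block here

    accumulator = 0.0
    previous = time.perf_counter()
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Run exactly as many fixed ticks as wall time demands; no reason to
        # burn CPU on 400 UPS when nothing extra can ever reach the screen.
        while accumulator >= TICK_DT:
            update(TICK_DT)
            accumulator -= TICK_DT
        render()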

Source?
All I could find was this:
nvidia.custhelp.com/app/answers/detail/a_id/2752/~/nvidia-gpu-maximum-operating-temperature-and-overheating
And they officially state it's designed to shut off at 105°C, but nowhere does it mention throttle temps.

I believe you though, since my 970 doesn't seem to want to go past 75 degrees, but I was attributing that to me having pretty okay cooling.

Again, there's a difference between not utilizing hardware by choice (vsync etc.) and not utilizing hardware due to poor programming.

Yes, your reading comprehension is apparently shit because you completely missed my point.

No, I get your point, but you miss mine. When a game runs at a cinematic framerate while your GPU is idling, that's shit optimization. Especially if the CPU is also idling.

Why are you now bringing up low framerate? The original user's post didn't say anything about low performance, just temperature.

Because apparently you fail to understand the same logic applied to the exact situation where the framerate isn't low.

Nigger, are you retarded? If the framerate is low and both the CPU and GPU are idling then something is clearly wrong. That wasn't even the argument you were making, you goalpost-moving cunt.

Then if you accept that idling hardware means poor optimization, why are you arguing the point?

I was a crypto miner back in the R9 200 era, I was one of the people buying multiple cards and driving up prices. I know it sucks but there is a silver lining. The unusually high demand helps fill AMD's pockets, which probably helped fund the development of Ryzen, and now that it's happening again with the 500 cards it will help fund development of the next stuff in the pipeline. If you want competition in the market, they will need as much cash flow as they can get.

Also, after the cards inevitably lose profitability, they will flood the used market at low prices. Obviously you shouldn't buy a card that is suffering problems, so if you buy used, find ones that have been sent back to the manufacturer for repair. I sent three back to ASUS: two of them for dead fans and one for artifacting issues. All three cards are still alive and well today and play games no problem. Plus AMD still offers driver support for these cards.

The only cards that fail during mining are those that were housed in an airtight box and had no cooling. Putting a fucking floor fan on one side of the farm and leaving a hole on the other side so the hot air can escape is a piss easy and cheap fix, but you know how it is with miners: they're really pinching pennies on one hand and are nowhere near as engineering-savvy as they should be on the other.

Yeah in my case I had it set up in my garage during the coldest months of winter (and it was especially cold that year) but it still put my fans through some torture and the one with the lowest ASIC quality (thus the one that ran the hottest) still suffered the artifacting problems, likely due to a damaged RAM chip.