Doom PC Vulkan Patch Tested

Explain this fucking bullshit!

Can someone please tell me this is bullshit?

I just bought a 1080

...

AMD's middle-of-the-road cards outperformed top-tier NV shit in DX12 and Vulkan, because AMD develops non-malicious middleware and actually makes sure that their cards are feature-compliant, while NV develops middleware to sabotage the competition (and their own older cards) while colluding with MS to lower the DX12 spec to make themselves look better. This has been known for a while now.

AMD shills


That's a joke that keeps on giving

In this day and age both AMD and Nvidia are equally jokes. I miss Voodoo cards.

I don't see that many Nvidia shills here

Nvidia farts, and you can count the seconds until the AMD shill force shows up, doing it for free, of course

Shitting on nvidia != shilling for amd

...

AMD released Mantle. Didn't do well.
Incorporated it into Vulkan.

I'd rather OpenGL was added to and refined.

But this issue may be with "Asynchronous Shaders", which AMD supports and Nvidia doesn't.

AMD and Nvidia have these silly tiffs where one does something right, and the other says it is dumb then does it later.

Regardless, Nvidia hardware will usually offer better frame times.
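A rough way to picture what async shaders buy: if graphics and compute work can overlap on separate queues, the frame time is bounded by the longer of the two workloads instead of their sum. A toy Python sketch — the millisecond figures are made-up illustration, not measured data from any card:

```python
# Toy model of per-frame time with and without async compute.
# All numbers are hypothetical, for illustration only.
graphics_ms = 12.0   # rasterization/shading work per frame
compute_ms = 5.0     # compute work (e.g. post-processing) per frame

# Sequential: the compute work waits for graphics to finish.
sequential_frame = graphics_ms + compute_ms    # 17.0 ms

# Async: compute runs on a separate queue, overlapping graphics
# (ideal case, assuming perfect overlap and no contention).
async_frame = max(graphics_ms, compute_ms)     # 12.0 ms

print(f"sequential: {sequential_frame:.1f} ms -> {1000 / sequential_frame:.0f} FPS")
print(f"async:      {async_frame:.1f} ms -> {1000 / async_frame:.0f} FPS")
```

Real gains depend on how well the two workloads actually overlap, but this is why hardware that can't run the queues concurrently sees frame times closer to the sum.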

OpenGL should be used and improved with every game.

Should probably be an OpenEngine too.

Things done the best way possible. Nothing ever becoming incompatible. Games done on one platform working on all others. Graphics lowered and raised by hardware, gameplay remaining the same, set framerate or uncap/increase it to degrade visuals in return for more smoothness.

Software that acts upon hardware directly is fine. That functionality should be integrated into OpenGL. If hardware has safeguards against its capabilities being used to their full potential, then overcoming them will be worked into OpenGL/OpenEngine/OpenOS, etc.

Focusing software and hardware development around the content created, instead of around those taking credit for it and selling it, would keep creations free, with uncontrolled worlds that are fought over to be made better by adding what suits them, done best the first time.

Sabotaging creation just wastes resources and time and sets someone behind.

People continually doing their best to make something better may become part of a force that lives on in what is passed down, looking after it so there is always more, and better.

hey Nvidia
you guys were all over the place when the 480 dropped, literally around 6 threads about it and how it's the worst thing to happen

people are waking up to your scheme

Nvidia shill, please

what?

...

Underage please go

kek

Why would you ever buy something that ages like fucking milk?


6 9 9 U S D O L L A R S

OpenGL was trash and the devs admitted that, unfortunately, but it was the only real thing stopping Microsoft from monopolizing the gaming industry. Vulkan has outstanding performance improvements over other low-level APIs and it has cross-platform support, so if anything is going to stick for a very long time, it's going to be Vulkan.

it's just another graphics card company but it's old as shit and they stopped making them

Forgot to mention this graph isn't even with async enabled. The AMD gains would've been even more significant with it on.

they got taken over by nvidia

Nvidia got its ass kicked in Vulkan because their hardware has to handle excessive calls either async or in sequence, and the sequential path increases frame times if the call count gets too high

However, they did this because Nvidia didn't think there would be much use for it, and Nvidia wanted DX12.1

AMD has always been better at computational horsepower, but gets its ass kicked under high geometry and excessive shaders

Both hardware makers have always thought they knew what the market was going to ask for and built their cards around it; GCN cores kicking the shit out of CUDA cores in terms of number-crunching power is beyond old news

Also, it doesn't help that in 8+ months Nvidia has literally dragged its ass on making drivers work for async (it's still not properly enabled), because demand for DX12 games ATM is still pathetically low and both hardware makers are still more concerned with DX11 games than DX12

And no, not a shill; this is just shit I've known about for almost half a decade, and for some reason people are deciding to lose their minds over it now

GPU threads are basically mustard console wars.

Voodoo, also known as 3dfx, was an old graphics company that came out of Silicon Graphics in the early 90s and ran until around 2000, when it was eventually bought out by Nvidia after bad management put too many resources into a project that never came to fruition.

The only real thing they did that still lives on is SLI, the Nvidia graphics-card-linking stuff.

Video linked is by the excellent Lazy Game Reviews, and explains everything in about 8 minutes.

I can't get over the proportions here; it really sticks out in an image I'd otherwise quite like.

I miss Matrox gaming graphics cards.

Name one good game that requires a 1080.

RX "750ti Killer" 460 when

latest Nvidia drivers fix Vulkan but it still runs like ass compared to OpenGL. 😂😂😂😂😂😂 👏👏👏👏👏👏👏 💯💯💯 👌👌👌👌👌👌👌

It was expected, user. How are you not aware by now that Nvidia ignored DX12/Vulkan and lied about supporting its features initially?

AMD has them beat on architecture, just not on actual card vs card performance.

I dealt with AMD for years and had issues on both Linux and Windows. Their drivers are a nightmare and their software is ass.

I switched to Nvidia and avoided all their bullshit "GeForce Experience" software, and I have been happy so far. I update when I need to.

The problem with Maxwell/Pascal is that they both support DX12.1 while AMD only supports 12.0

Also, like I said above, DX12 is not really mainstream yet, so Nvidia has openly done little to no work on turning it on properly in the drivers; when it is turned on properly, it takes off like a rocket

But Nvidia's been fucking lazy with drivers for the last 2.5+ years, so it's no surprise at this point; they're great with day-one drivers but often fuck around after that

About time AMD makes cards that aren't shit. People that buy $800 GPUs are degenerate redditors who think all cards are low/shit tier except the 1070 or higher.

My GTX980 ran everything other than The Division at 1080p at over 80FPS but I have all settings cranked

Truth is, you can lower a quarter of the video options in most games and the visual impact is so minuscule you wouldn't notice unless you were looking for it, while increasing FPS by around 40-80, even with something like a GTX 960.

Also see here

Both hardware makers have had pros/cons to their hardware for ages. AMD's con is high polygon counts and shaders; Nvidia's is grunt compute performance and texture fill rates, which is why AMD often kicked Nvidia's ass past 1080p. I'm not sure if that's still true of Pascal for fill rate, but CUDA has never been a match for GCN when it comes to compute performance.

Oh, and CUDA cores can handle physics in their sleep, be it through PhysX or DX11/12. GCN cores hate physics, mostly because they're designed to chew through huge computations, whereas Nvidia has thousands of CUDA cores that each handle (I think still) small computations while a GCN core can handle huge amounts of data.

In short:

GCN compute cores = giant earth-mover dump trucks, capable of doing a shitload of work but not quick at things that require rapid updates. That makes them great for stuff like async, because each call is a huge chunk of data that GCN can do in its sleep, which also makes AMD cards literal beasts for stuff like digital coin mining.

CUDA cores = great at executing tiny instructions quickly, like constant updates to a physics system, or stuff like CGI rendering where a scene breaks down easily into thousands of parts. Think of them as hundreds of Ferraris moving small packages.

The problem is they absolutely shit themselves on anything requiring large clusters/amounts of data, because the instruction has to be broken down into dozens of clumps. Take the earth-mover example above: all that dirt would first have to be loaded into each Ferrari, which takes time, then driven to its destination, then each car comes back to get refilled. Meanwhile, the giant earth mover has dropped off six loads in the time it took the Ferraris to complete one.
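The dump-truck/Ferrari analogy boils down to a throughput-versus-latency tradeoff, and you can make it concrete with a toy model. All the worker counts, capacities, and trip times below are made-up illustrative numbers, not real core specs:

```python
import math

def time_to_move(total_units, workers, capacity, trip_time):
    """Total time = number of round trips needed * time per trip.

    Toy model: all workers load up, drive, and return together.
    """
    trips = math.ceil(total_units / (workers * capacity))
    return trips * trip_time

# Bulk job (one big compute dispatch): few wide-but-slow workers win.
bulk_trucks = time_to_move(10_000, workers=4,   capacity=1_000, trip_time=10)  # 30
bulk_cars   = time_to_move(10_000, workers=100, capacity=2,     trip_time=1)   # 50

# Small, frequent job (a physics tick): many fast-but-narrow workers win.
tick_trucks = time_to_move(100, workers=4,   capacity=1_000, trip_time=10)  # 10
tick_cars   = time_to_move(100, workers=100, capacity=2,     trip_time=1)   # 1

print(bulk_trucks, bulk_cars)   # bulk data favors the "dump trucks"
print(tick_trucks, tick_cars)   # rapid small updates favor the "Ferraris"
```

Same total hardware effort, opposite winners depending on whether the workload is one huge chunk of data or a stream of tiny latency-sensitive updates.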

Anyways sorry for derailing the thread with logic, resume your respective fan camp bashing of either brand

You do

do what?

Remind me of the babe!

Why can't we have a half-and-half glued-together card with both the AMD and Nvidia strengths?

DX12.1 is just 12.0 with features actually removed because Nvidia didn't want to add support for them.

...

...