Upgrading PC


117 replies to this topic

#61 DoYouEvenModBro

    High King

  • Citizen
  • 2,337 posts

Posted 07 June 2013 - 10:10 AM

Well I've used AMD for a while for the same reason as DYEMB: it's cheaper. I use them for processors, and I used them for GPUs. But with major price drops on the 660GTX, better support for ENB and Skyrim, my pretty small amount of VRAM, and CUDA (which is important for me as a supporter of distributed computing), I'm going to make the switch to Nvidia. PhysX is a nice bonus - in fact, I'm getting Metro automatically by getting the new graphics card. So it should all work out. And I don't have to worry as much about going over the VRAM cap...

I guess I was also talking more about the high-end models from both companies. I got the 7970 GHz, which was about $400, since the equivalent, the 680, was around $500. I don't know if one card is slightly better than the other, but for argument's sake I'm assuming they are basically the same in terms of performance. I think the 680 has 4 GB of VRAM, whereas the 7970 GHz only has 3 GB. You can also get a 6 GB 7970, but that seemed unnecessary.

What exactly is CUDA?



#62 rootsrat

    High King

  • Mod Author
  • 2,610 posts

Posted 07 June 2013 - 10:16 AM

I was a fan of nVidia for GPUs and AMD for CPUs. A few years ago I switched from nVidia to an AMD card and I was very disappointed, I must say. Performance and compatibility... painful. I can't remember exactly when it was - a few years ago now - so it may not be relevant anymore. When that AMD card got a bit old I switched back to nVidia and was a happy bunny again. There is this stereotype that nVidia is for gaming and AMD is for processing - and I can see where it came from.

#63 Aiyen

    High King

  • Mod Author
  • 3,387 posts

Posted 07 June 2013 - 10:21 AM

What exactly is CUDA?

An API for using the GPU for rendering and computation instead of the CPU... basically. 

Just do a quick search for CUDA on Google and you will get a ton of hits that explain it in huge detail.
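To make that a bit more concrete: a CUDA kernel is essentially the body of a loop whose iterations the GPU runs in parallel, one per thread. A plain-Python sketch of the classic "saxpy" example - the CUDA snippet in the comment is just illustrative, not from any post in this thread:

```python
# saxpy ("single-precision a*x plus y"), the usual CUDA hello-world.
# On a CPU it is a sequential loop; CUDA instead launches one GPU thread
# per index i, so all the independent iterations run in parallel.
def saxpy_cpu(a, x, y):
    # Sequential version: note each iteration depends only on index i.
    return [a * xi + yi for xi, yi in zip(x, y)]

# The equivalent CUDA kernel body (what each GPU thread executes) looks like:
#   int i = blockIdx.x * blockDim.x + threadIdx.x;
#   if (i < n) y[i] = a * x[i] + y[i];

print(saxpy_cpu(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Distributed-computing projects care about exactly this pattern: huge arrays of independent per-element work.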

#64 DoYouEvenModBro

    High King

  • Citizen
  • 2,337 posts

Posted 07 June 2013 - 10:24 AM

What exactly is CUDA?

An API for using the GPU for rendering and computation instead of the CPU... basically. 

Just do a quick search for CUDA on Google and you will get a ton of hits that explain it in huge detail.

Will do. I'm very happy with my 7970 GHz, but my next card might need to be Nvidia. I've also decided to stick with single GPUs and never go for SLI/Crossfire again because of the pain-in-the-ass compatibility issues, microstutter, extra power and heat, etc.

#65 WilliamImm

    Legendary Blue Dragon

  • Mod Author
  • 1,453 posts

Posted 07 June 2013 - 10:35 AM

What exactly is CUDA?

A framework by NVIDIA for using the GPU for things other than graphics. AMD cards can do this too (via OpenCL), but more distributed computing projects use CUDA than OpenCL.

OT: Anyone for a STEP team on BOINC?

#66 Besidilo

    General

  • Citizen
  • 985 posts

Posted 07 June 2013 - 10:35 AM

I've got a couple of rigs, one using an Nvidia GTX 580, the other a Radeon HD 7970. People like to exaggerate problems with ATI cards. Granted, Nvidia Inspector is more user-friendly than Radeon Pro, and the same goes for the equivalent control panels, but the differences are often down to aesthetics. I take it a lot of users just don't know how to configure them optimally. Catalyst drivers have been more solid since 12.11 than GeForce drivers in the past few months.

I don't disagree with ENB being a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB's performance will be negated by simply having a faster card.

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing is, how did they implement 3GB VRAM on a card with a 192-bit bus? Seems like you don't mind having a card with limited bandwidth.

Getting that card is a bad idea. But sure, the peeps on this forum know better.

Edit: DoYouEvenModBro, I'd love to see that microstutter on your single-GPU card, bro.
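For reference, the bandwidth point can be put in numbers: peak memory bandwidth is just the bus width in bytes times the effective memory data rate. A quick sketch - the 6.0 Gbps and 5.5 Gbps figures below are the commonly quoted reference-card specs, so treat them as approximate:

```python
def peak_bandwidth_gbs(bus_bits, effective_gbps):
    """Peak memory bandwidth in GB/s: (bus width in bytes) * (data rate per pin)."""
    return bus_bits / 8 * effective_gbps

# Approximate reference-card figures:
#   GTX 660:  192-bit bus, ~6.0 Gbps effective GDDR5
#   HD 7970:  384-bit bus, ~5.5 Gbps effective GDDR5
print(peak_bandwidth_gbs(192, 6.0))  # 144.0 GB/s
print(peak_bandwidth_gbs(384, 5.5))  # 264.0 GB/s
```

So the 192-bit card has roughly half the peak bandwidth of the 7970, regardless of how much VRAM is soldered on.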

#67 DoYouEvenModBro

    High King

  • Citizen
  • 2,337 posts

Posted 07 June 2013 - 10:54 AM

I've got a couple of rigs, one using an Nvidia GTX 580, the other a Radeon HD 7970. People like to exaggerate problems with ATI cards. Granted, Nvidia Inspector is more user-friendly than Radeon Pro, and the same goes for the equivalent control panels, but the differences are often down to aesthetics. I take it a lot of users just don't know how to configure them optimally. Catalyst drivers have been more solid since 12.11 than GeForce drivers in the past few months.

I don't disagree with ENB being a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB's performance will be negated by simply having a faster card.

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing is, how did they implement 3GB VRAM on a card with a 192-bit bus? Seems like you don't mind having a card with limited bandwidth.

Getting that card is a bad idea. But sure the peeps on this forum know better.

Edit: DoYouEvenModBro, I'd love to see that microstutter on your single-GPU card, bro.

I don't even use Radeon Pro. I use Catalyst Control Center. It seems to give me all the driver options I need, although sometimes CCC sucks at forcing AA/supersampling. I don't understand your last line.

#68 WilliamImm

    Legendary Blue Dragon

  • Mod Author
  • 1,453 posts

Posted 07 June 2013 - 11:56 AM

Correct me if I'm wrong, but to me it seems that in a comparison between the 660 and the Radeon 7870 (which is about the same price), in most cases the 660 matches or even beats the Radeon by a few frames, despite having a smaller memory bus than the Radeon. Doesn't seem to me that the Radeon is worth it over the 660. However, the smaller memory bus does limit AF/AA, but I can compensate for that.

#69 Besidilo

    General

  • Citizen
  • 985 posts

Posted 07 June 2013 - 12:01 PM

Correct me if I'm wrong, but to me it seems that in a comparison between the 660 and the Radeon 7870 (which is about the same price), in most cases the 660 matches or even beats the Radeon by a few frames despite having a smaller memory bus than the Radeon. Doesn't seem to me that the Radeon is worth it over the 660. However, the smaller memory bus does limit AF/AA, but I can compensate for that.

You're not even comparing the right cards. I was talking about the Radeon 7870 XT; I even linked a handful of reviews of that card for you in one of my previous posts, which you seem to have omitted.

And find benchmarks that are newer than 6 months.

#70 Aiyen

    High King

  • Mod Author
  • 3,387 posts

Posted 07 June 2013 - 12:13 PM

People like to exaggerate problems with ATI cards.

That is true! I do not feel we have gone into a fanboy contest here yet, and I hope we never do... it is such a waste of time.
That said, there are also issues with Nvidia, as the latest beta driver and ENB showed... a lot of people upgraded and suddenly everyone had the sun shining through every building!
Driver issues happen for both companies, and AMD has gotten better over the years, but they still have a horrible reputation to get past. Granted, a large part of this is due to some of the major AAA titles having been more optimized for Nvidia cards, and hence producing better results in benchmarks.

but the differences are often down to aesthetics


Aesthetics are part of it, no doubt. However, there are also large differences in what the cards support. I mention CUDA since it is the most obvious area where Nvidia is still far ahead of AMD - again, mostly because they were out first, and hence most people naturally started to use it.
PhysX is the next largest difference, and games made for PhysX will of course have more effects at higher framerates than AMD can provide, since AMD cards cannot use the technology.

I don't disagree with ENB being a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB's performance will be negated by simply having a faster card.

It is not just performance-wise that Nvidia is better for ENB; it is also largely in terms of stability, weird bugs, etc. There are again subtle differences at the driver level that can have something work on Nvidia and not on AMD, though of course this also goes the other way around! Boris is, after all, just one guy who does this in his spare time, and if he develops on an Nvidia card, then some AMD-related issues obviously slip through.

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing is, how did they implement 3GB VRAM on a card with a 192-bit bus? Seems like you don't mind having a card with limited bandwidth.

Not sure what you mean by this, to be honest. The reason they put the extra GB on the card is just a marketing stunt, I imagine. Higher numbers always look better, after all.
As for it not having enough power to utilise it... not sure what you mean here. I have not seen a computer based on any of the more modern cards where the memory bus of the GFX card is the bottleneck... at least not in games. The CPU, RAM, HDD, etc. will all become a bottleneck much earlier.
So I guess the reverse question is also relevant: why do you need a card with such high bandwidth when it is almost never the cause of bottlenecks?

Getting that card is a bad idea. But sure the peeps on this forum know better.

Again, my point is only to get the card that suits your needs, and not a general "Nvidia is always better than AMD", since that is just not true.
If you need CUDA, PhysX, etc., then there is not even a choice in the matter, sadly. And in terms of cost/performance ratio, the 660 GTX 3GB is the best one Nvidia has to offer.
Sorry if you felt that I advocated that people just get Nvidia because they are so much better etc.! That was not my intention!

#71 Besidilo

    General

  • Citizen
  • 985 posts

Posted 07 June 2013 - 12:32 PM

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it. Likewise for PhysX: there are better alternatives in terms of technology, but PhysX has more money behind it at the moment. Yet its usage is limited to a handful of titles when it comes to actual visual improvements. The part about more effects at higher framerates is not necessarily true. A lot of PhysX-enabled games cripple the performance, or at least used to in the past.

I can't really comment on ENB all that much as I haven't used it on AMD cards in a while, but from my recent testing in Skyrim, my Radeon 7970 does surprisingly well with it. Boris tends to ***** about both vendors, which is understandable.

192-bit bus width is standard for cards with 2 GB of VRAM, not 3. Its usage is down to the design, which is why 3GB of memory will have limited (as in slower) memory bandwidth available, resulting in not so great performance if you really want to load it up. I agree with you on the other point that it's a marketing gimmick anyway, since a card like the GTX 660 would be throttled by its raw power way before VRAM becomes a factor.

I disagree that anybody needs PhysX. It's quite useless, and I even had a dedicated PhysX card for a while. It's a nice addition to have, but I wouldn't pay for it.

At least you can reason your arguments in an appropriate manner; there's nothing wrong with disagreeing with my point of view. Bear in mind that I've been using all sorts of hardware in the past 15 years or so, particularly not caring about being loyal to either Nvidia or ATI in the past few years, since the latter often offers superior value for money despite the bad rep it gets from uninformed consumers.
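On the bus-width/VRAM pairing, the arithmetic behind these capacity claims can be sketched. GDDR5 chips have a 32-bit interface, so the bus width fixes the number of channels, and capacity comes in multiples of the per-chip density. The 2 Gbit (256 MB) chip size below is an assumption based on typical parts of that era:

```python
def natural_capacities_gb(bus_bits, chip_gbit=2):
    """Capacities a bus reaches with one or two (clamshell) 32-bit chips per channel."""
    channels = bus_bits // 32           # GDDR5 chips have a 32-bit interface
    single = channels * chip_gbit / 8   # one chip per channel, Gbit -> GB
    return single, 2 * single           # clamshell doubles capacity, not bandwidth

print(natural_capacities_gb(192))  # (1.5, 3.0)
print(natural_capacities_gb(256))  # (2.0, 4.0)
```

By this arithmetic a 192-bit bus pairs naturally with 1.5 or 3 GB, and a 2 GB configuration would need mixed chip densities - worth double-checking against the actual board layouts before taking either claim at face value.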

#72 DoYouEvenModBro

    High King

  • Citizen
  • 2,337 posts

Posted 07 June 2013 - 12:40 PM

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it. Likewise for PhysX: there are better alternatives in terms of technology, but PhysX has more money behind it at the moment. Yet its usage is limited to a handful of titles when it comes to actual visual improvements. The part about more effects at higher framerates is not necessarily true. A lot of PhysX-enabled games cripple the performance, or at least used to in the past.

I can't really comment on ENB all that much as I haven't used it on AMD cards in a while, but from my recent testing in Skyrim, my Radeon 7970 does surprisingly well with it. Boris tends to ***** about both vendors, which is understandable.

192-bit bus width is standard for cards with 2 GB of VRAM, not 3. Its usage is down to the design, which is why 3GB of memory will have limited (as in slower) memory bandwidth available, resulting in not so great performance if you really want to load it up. I agree with you on the other point that it's a marketing gimmick anyway, since a card like the GTX 660 would be throttled by its raw power way before VRAM becomes a factor.

I disagree that anybody needs PhysX. It's quite useless and I even had a dedicated PhysX card for a while. It's a nice addition to have, but I wouldn't pay for it.

At least you can reason your arguments in an appropriate manner, there's nothing wrong with disagreeing with my point of view. Bear in mind that I've been using all sorts of hardware in the past 15 years or so, particularly not caring about being loyal to either Nvidia or ATI in the past few years, since the latter often offers superior value for money despite the bad rep it gets from uninformed consumers.

I think PhysX is definitely worth it SOMETIMES. For example, Metro Last Light has superior lighting and particle physics. With PhysX enabled, the dust swirls dynamically around character models; without it, the dust just stays static and floats in place. Same with bullet sparks, etc. If you turn it on with an ATI card (even a 7970), fps drops to like 10 whenever there is demand for PhysX - so basically whenever bullets are flying or you get to an area with a lot of smoke or vapor. 7970s seem to work fine for ENB, as you said. I barely get an fps drop with Skyrealism on.

#73 Besidilo

    General

  • Citizen
  • 985 posts

Posted 07 June 2013 - 12:48 PM

I haven't had a chance to play Metro Last Light, but I've heard good things about PhysX effects in that game. Then again, you won't be able to enjoy it with all bells and whistles on a GTX 660 at 1920x1080, so my argument in that context stands. Like I said before, I do appreciate PhysX effects in games like Batman, but they could be done using an open technology if there was enough demand for it. As it stands, Nvidia is restricting PhysX for the sole purpose of having it as their USP. Developers get money to use PhysX in their games.

#74 DoYouEvenModBro

    High King

  • Citizen
  • 2,337 posts

Posted 07 June 2013 - 12:59 PM

I haven't had a chance to play Metro Last Light, but I've heard good things about PhysX effects in that game. Then again, you won't be able to enjoy it with all bells and whistles on a GTX 660 at 1920x1080, so my argument in that context stands.

Like I said before, I do appreciate PhysX effects in games like Batman, but they could be done using an open technology if there was enough demand for it. As it stands, Nvidia is restricting PhysX for the sole purpose of having it as their USP. Developers get money to use PhysX in their games.

Exactly, and that's honestly ******** and I hope it changes. I mean, right when you start up Metro, you see at least 3 god damn Nvidia logos and a giant banner that says NVIDIA: THE WAY IT WAS MEANT TO BE PLAYED. I kind of just hate Nvidia for putting up an ego similar to that of Microsoft.

#75 WilliamImm

    Legendary Blue Dragon

  • Mod Author
  • 1,453 posts

Posted 07 June 2013 - 01:40 PM

Right... I play at 1440x900 anyway, so, for me, I can run just about all games with maxed-out settings easily on the 660GTX. I'm really not convinced that I should get the 7870 when the 660, for me, is the best mix of cost and performance. Don't want to break the bank... Also, being able to use the GPU for more distributed computing projects AND PhysX, which can come in handy in quite a few games, seals the deal.

