Upgrading PC

117 replies to this topic

#76 Aiyen

    Dragon King

  • Super Moderators
  • 3,536 posts

Posted 07 June 2013 - 03:08 PM

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it.

Sounds like we are heading into an open-source vs. proprietary software argument, which is a bit off topic. :)  
But regardless of that, it is a feature of Nvidia cards, so it is worth keeping in mind when making a decision! 
I have not used it in years, so I cannot speak to its current performance or documentation. I have only briefly looked up today's details, since I just started 3D modelling as my new hobby. 

Likewise for PhysX, there are better alternatives in terms of technology, but PhysX has more money behind it at the moment.

I have only vaguely read about these better alternatives. The last one I saw promised the end of polygon rendering and could produce very nice images of static objects. However, they never released their code to the public, and only showed the results of their "revolutionary" methods on YouTube. I guess a few ideas have theoretically made it into prototypes, but they are still a long way from convincing the industry to change standards. 

I partly disagree with your point about PhysX.
In my opinion, PhysX is only now maturing, since it has always suffered under DX9 for the same reason we do: not enough memory. All the other technologies have had this issue too, which is why it has been too costly an affair to try to wrest the market from PhysX on marginal performance gains. I think we are going to see more of them appear in the coming years as physics simulations become more demanding and more widely required in games. 

Developers get money to use PhysX in their games.

Actually, it is the game engine developers who decide it should be in their engines! Game makers are then pretty much bound to use it unless they want to implement some sort of bridge between the two systems. It is true that Nvidia has deals here, and I am fairly sure they are mutually beneficial for both companies, with royalties going both ways. 

Boris tends to ***** about both vendors, which is understandable.

Hehe yeah true that! Thanks for that laugh! ;) 

As for the memory bandwidth: yes, you are of course right that it will be outperformed by cards with a wider bus at tasks where bandwidth is the bottleneck. However, this bottleneck is really only an issue if you are running seriously demanding graphics operations at high resolutions, which most games today can't do anyway, since they are mainly made for last-generation consoles. Most of the time, other bottlenecks will hit you hard first.  

Also, thank you for keeping a civil tone! It is refreshing when dealing with this topic! Sorry if I sound a bit fanboyish in some posts; I try my best to avoid it! We all fall into that trap every now and again, I guess! 

#77 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 07 June 2013 - 05:27 PM

(quoting Aiyen's post #76 in full)


With regards to CUDA, it's good for what it is, but there's no reason not to use an open-source alternative if that's an option. CUDA isn't really anything special, and a lot of people don't realise that there are OpenCL alternatives. Take Photoshop CS6, for example: hardware acceleration works on both Nvidia and AMD GPUs there. Still, there are many other applications in which CUDA is the only way to go. If you know that CUDA is something you will make a lot of use of, a GeForce card seems like a worthy consideration, however you cut it.

AFAIK, PhysX doesn't use much memory at all. It runs solely on the CUDA cores and the CPU, so all that matters is the number and speed of CUDA cores. Hardware PhysX has never been popular, and only a handful of PhysX-designated titles are released each year; a lot of them aren't even hardware accelerated.

Bullet Physics is an engine that has been used in games such as GTA IV, Red Dead Redemption, and GTA V, and even in one of the best-known benchmarking tools, 3DMark 11. It has also been used in a handful of movies and is a well-known open-source physics engine. I truly believe projects like this are the future, especially when you take into account that the new generation of consoles are all AMD partnerships.

Skyrim and Source-based games use the Havok engine, which, whilst proprietary, is used in many other popular games. It does not have the same capabilities as PhysX, and it runs on the CPU only if I remember correctly, but it's worth a mention anyway.

It should probably be said now that most PhysX effects still run on the CPU; only a portion is designed to be handled by the CUDA cores. Nvidia is very dedicated to pumping substantial funds into their closed-source technology, but I really hope it won't last much longer. A lot of developers are finding the open alternatives that run on most hardware to be more future-proof.

#78 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 07 June 2013 - 05:32 PM

Right... I play at 1440x900 anyway, so, for me, I can run about all games with maxed-out settings easily with the GTX 660. I'm really not convinced that I should get the 7870 when the 660, for me, is the best mix between cost and performance. Don't want to break the bank...

Also, being able to use the GPU for more distributed computing projects AND PhysX, which can come in handy in quite a few games, seals the deal.

The Radeon 7870 XT is both faster and cheaper than the GTX 660 3GB, but whatever. At the very least, don't get the 3GB version of the card, because that's a massive waste of money at that resolution.

#79 WilliamImm

    Legendary Blue Dragon

  • Super Moderators
  • 1,572 posts

Posted 07 June 2013 - 06:33 PM

According to this post, while the Radeon card is 5-10% faster overall, the Nvidia card provides more features overall - in fact, enough that it allows for smoother gameplay. I was a little unsure when I looked at the mentioned Radeon card directly, but now I am 100% sure that I'll be getting the 660 3GB model, as Aiyen originally recommended.

EDIT: Not arguing with Besidilo anymore on this - decision is set. This thread also provides evidence to back up my decision. I will respond to any other posts, however.

#80 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 07 June 2013 - 07:20 PM

According to this post, while the Radeon card is 5-10% faster overall, the Nvidia card provides more features overall - in fact, enough that it allows for smoother gameplay. I was a little unsure when I looked at the mentioned Radeon card directly, but now I am 100% sure that I'll be getting the 660 3GB model, as Aiyen originally recommended.

You're still linking to the wrong card. And you still don't realise how stupid it is to buy a GTX 660 with 3GB on board.

I advise posting about your choice on some larger tech forum for some laughs. You can't have enough of those.

Have a good night, mate.

EDIT: I've just looked at that thread again, and oh my god, are some people funny.
 

Gaming experience is a different thing. The gaming experience comes in GTX 660's favor. It offers boost, PhysX, adaptive v-sync, TXAA and FXAA. Which offers a smoother gameplay with better Visuals.


First of all, having a boost feature means nothing; the performance is already measured with it on. PhysX has already been discussed in this thread, AMD has its own adaptive v-sync method (it can be forced through RadeonPro), you won't be able to use TXAA with a single GTX 660, and FXAA is available on all cards (yes, that means Radeons too).



#81 Aiyen

    Dragon King

  • Super Moderators
  • 3,536 posts

Posted 07 June 2013 - 07:26 PM

AFAIK, PhysX doesn't use much memory at all.


Physics rendering today is largely limited by bounding boxes, which cap the amount of memory and computation power required for each effect. However, this also limits multi-object physics interaction to a high degree. 
With more memory you can pre-render more things and just read them from memory, instead of having to render them every single time. 
In games this is probably mostly relevant for environmental effects the player is not meant to interact with, since interactive objects would always require real-time computation... but it would still allow for more realistic and epic scenes. 
The last thing I read about was realistic weather systems for use in games: instead of just textures for clouds, you would have real clouds. 
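As a rough illustration of the bounding-box idea (a simplified sketch, not any engine's actual code; all names here are made up), a physics broad phase only does the expensive per-object work for pairs whose boxes actually overlap:

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding boxes given as (min_x, min_y, min_z, max_x, max_y, max_z).
    Two boxes overlap only if their intervals overlap on every axis."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def broad_phase(boxes):
    """Return index pairs whose boxes overlap; only these pairs would go on
    to the expensive narrow-phase (per-triangle) collision test."""
    return [(i, j) for i in range(len(boxes))
                   for j in range(i + 1, len(boxes))
                   if aabb_overlap(boxes[i], boxes[j])]
```

Cheap rejection like this is why bounded effects are affordable, and also why interactions spanning many objects get expensive fast: the candidate pair count grows quadratically with the number of boxes that cluster together.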

Also, yeah, Havok runs on the CPU only, which is part of the reason why Skyrim is such a CPU-heavy game. 

That being said, I agree on most other points you have mentioned. Open source is maturing enough to create competitive software, and hopefully the trend will continue, so more companies can start up without having to pay large amounts of money to get going... or existing ones can stop paying silly amounts in licence fees and spend that money on expanding their business. 

All that said I guess this is a bit off topic by now! :) 

#82 MontyMM

    High King

  • Site Founders
  • 1,144 posts

Posted 07 June 2013 - 08:30 PM

 

(quoting Besidilo's post #80 in full)


We're getting tired of warning you about your attitude. Keep posting your tech opinions if you want, but stop being obnoxious about it, or go and do it somewhere else.



#83 WilliamImm

    Legendary Blue Dragon

  • Super Moderators
  • 1,572 posts

Posted 07 June 2013 - 08:53 PM

Besidilo, please also note that you have acted really badly in front of a person who can potentially ban you if he gets fed up enough. While you can state your opinions, it is strongly recommended that you state them in a respectful and polite manner.

Anyway... thank you again, Aiyen, for the video card recommendation. I'm really looking forward to having the 660 installed in my computer soon - especially an EVGA-made version... :dance:

#84 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 08 June 2013 - 05:42 AM

Feel free to ban me if you don't like what I say. However obnoxious I may sound, I always try to be helpful to other users and share my knowledge. I'm not pushing William to agree with me, but it would be nice if my advice stopped him from making a bad decision. My advice to post on some tech/overclocking forum was honest, although delivered in a cheeky way. If you don't trust my expertise, there are countless other individuals with a far better knowledge base and more experience than me, eager to help. Unfortunately, on most forums people will post their opinions with no proper understanding.

Anyway, if you're settled on that EVGA GTX 660 card (which is fine value for money, by any means), I'd strongly advise getting the 2GB version. The 3GB one is sort of just "stuck" onto the same PCB, since the card doesn't have a wider bus to handle the extra memory. You can read about it in this thread, post #6 by lehpron. Please read it; it's honest advice from someone who seems to know what he's talking about. Once again, I recommend going for the 2GB version of the card (here's the cheapest EVGA GTX 660 I could find), which isn't crippled by bad design choices. It's all up to you in the end.
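For what it's worth, the bus-width point can be checked with back-of-the-envelope arithmetic (an illustrative sketch; the 192-bit bus and 6008 MHz effective GDDR5 clock are the GTX 660's published specs, shared by both the 2GB and 3GB versions):

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: bytes moved per transfer (bus width / 8)
    times transfers per second (effective memory clock)."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# GTX 660: 192-bit bus, 6008 MHz effective memory clock
print(memory_bandwidth_gbs(192, 6008))  # ~144.2 GB/s for either memory size
```

The extra gigabyte changes capacity but not this number, which is why the argument centres on whether the card has the bandwidth to make a larger buffer useful.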

You guys are a nice lot and I'd hate to leave this place, but I'm afraid I've been on the Internet for far too long to change my ways. I respect each and every one of you, so hopefully you don't see my condescending tone as demeaning. If you'd like me to stop posting tech advice, I can abstain from it completely. I do think that it's valuable, at least to some degree, but I can understand why you might see my posts as flaming from your perspective. On the other hand, invite someone over with more knowledge about computers and let them judge what I said in this thread. That might clear up my stance on the subject a little bit.

Either way, have a good day and let's enjoy this community for what it truly stands for, that is making the most out of our experience with Skyrim.

PS William, please don't scare me with your ability to ban me at will. You have to realise I couldn't care less if you're an admin, mod or God when I'm talking to you. I treat everyone equally.


EDIT: actually, I'm wrong; Nvidia uses slightly different memory chips than I thought, and 3GB on a 192-bit bus, whilst slow, isn't all that odd. So yeah, if you don't mind paying $50 extra, go for it.

https://hexus.net/te...0-ti-ex-oc-3gb/

Here's the review that proves me partially wrong. That is, the 3GB version of the card is not using mismatched memory modules; the 2GB one is. You're still struggling with the limited memory bandwidth on the 3GB model, though, and with justifying the need for that much memory at 1600x900.

And here is the actual review of your card, which shows there is no visible benefit to the larger frame buffer on the 3GB model.

https://hexus.net/te...tw-signature-2/

#85 MontyMM

    High King

  • Site Founders
  • 1,144 posts

Posted 08 June 2013 - 11:42 AM

No-one wants to do any banning - we don't like mashing the moderator buttons around here. But we have discussed this attitude before - treating other people in a derisory way because they disagree with you - and you agreed that this is not helpful. Asking you to stop posting your technical opinions would be very much against the spirit and intention of the site, but, equally, so is implying that other people are stupid and laughable because you think that their decisions are not optimal.

#86 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 08 June 2013 - 12:28 PM

so is implying that other people are stupid and laughable because you think that their decisions are not optimal.


I've never done that. I might argue that somebody's actions might be perceived that way, but I wouldn't resort to name calling.

It was late at night and I went overboard with that post, it was not in good spirit, for which I'm sorry. The advice to post on a tech forum was genuine, though.

#87 MontyMM

    High King

  • Site Founders
  • 1,144 posts

Posted 08 June 2013 - 01:28 PM

I would always advise people to research properly on the serious tech sites. Mind you, I would hope that if someone went in saying that they really wanted an Nvidia card with more than 2GB of VRAM, they would get the sort of helpful and clear technical advice provided by that guy lehpron in your thread - laying out the considerations and options. If they get laughed out of town by gurus, I would suggest they pick another forum.

On the question of VRAM: to suggest that the VRAM requirement is entirely dictated by resolution is somewhat misleading. You will often see it discussed as if this were the case, because the limitations of current game engines have meant that the requirements for textures and effects have remained fairly static. So, in practice, for a long time the only real reason to invest in more VRAM was to drive larger frames, or more frames in the case of post-processing. But this is not the whole picture, and it is changing, particularly in the wake of the suddenly raised scope of the next-gen consoles.

Displaying the pixels of the final frame at a given resolution is only part of the story for VRAM use. One other major consideration is simply the number and quality of textures that are sent to the GPU to be processed (and cached) at a given time. There is enormous scope for this to increase, as we demonstrate with STEP, and increases of this sort do not necessarily tax the GPU to the point where it cannot provide decent performance.

Another consideration is that the frames generated and held in VRAM are by no means only the final frames output to the screen. Many effects and post-processing passes are achieved by rendering multiple frames, parts of frames, frames consisting only of individual textures, and so on, as frame objects, which are then combined to output the final scene. This applies to optional effects like AA, but also to many effects within the game engine itself, and, again, there is tremendous scope for these to increase. Increases of this sort clearly do also demand a higher processing burden.

The point is that though the question of whether it is worthwhile to invest in more VRAM at a given point is certainly open to debate, it is fundamentally wrong to insist that extra VRAM will be useless below certain resolutions.
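To put rough, illustrative numbers on that (a back-of-the-envelope sketch assuming uncompressed RGBA8 data; real engines use compressed texture formats, so treat these as upper bounds):

```python
def framebuffer_mib(width, height, bytes_per_pixel=4):
    """One RGBA8 colour buffer at the given resolution, in MiB."""
    return width * height * bytes_per_pixel / 2**20

def texture_mib(size, bytes_per_pixel=4, mipmaps=True):
    """One square uncompressed texture; a full mip chain adds roughly one third."""
    base = size * size * bytes_per_pixel / 2**20
    return base * 4 / 3 if mipmaps else base

print(framebuffer_mib(1920, 1080))  # ~7.9 MiB for a single 1080p render target
print(texture_mib(2048))            # ~21.3 MiB for one 2K texture with mipmaps
```

A single 2K texture with mipmaps costs more than two full 1080p render targets, which is why a texture-heavy mod list can move VRAM use far more than a bump in display resolution does.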

#88 Besidilo

    Jarl

  • Members
  • 993 posts

Posted 08 June 2013 - 02:07 PM

The point is that though the question of whether it is worthwhile to invest in more VRAM at a given point is certainly open to debate, it is fundamentally wrong to insist that extra VRAM will be useless below certain resolutions.


I treat that argument from the practical point of view. The current generation of games doesn't utilise anywhere near that amount of VRAM at sub-1080p resolutions today. Even with 2GB of VRAM you have a fairly decent buffer for deferred anti-aliasing methods, such as SGSSAA on GeForce cards, and then some for high-resolution textures. I've struggled to become limited by 2GB of VRAM at 2560x1440 in Skyrim; with 3GB it's simply not an issue any more. Increasing the resolution of textures would result in a RAM-related crash sooner than me running into a VRAM wall.

Now, what the future brings is another issue entirely. Whilst we might be surprised by the quality of textures, I can almost guarantee that said GTX 660 3GB will not have enough raw power for fancy effects plus maximum texture details in-game to make real use of that memory. Then you add PhysX and driver-forced AA methods, and you're simply running past the comfortable performance threshold needed to fully utilise the video buffer on your card.

It's all meaningless anyway, since there really isn't a viable alternative to the GTX 660 3GB if William insists on the aforementioned criteria. And from what I've read, the limited memory bandwidth on the card would be an issue. It is a great value-for-money card either way you cut it. I just don't see how paying $50, or 17%, extra for that 1GB more VRAM is going to make a difference in the real world.

I think the fact that Nvidia's most recently released high-end cards, such as the GTX 770 and GTX 760 Ti, still use 2GB of VRAM as standard is telling us a different story to the one you're portraying. However, I'd like to be proven wrong, as, in accordance with what you said, we've been held back by the consoles for quite a while now.

EDIT: all larger tech forums have idiots trolling new members or people seeking advice. They've also got hardcore fanboys who'd defend one camp or the other no matter the circumstances. Being unbiased feels strange in those places at times.

#89 MontyMM

    High King

  • Site Founders
  • 1,144 posts

Posted 08 June 2013 - 02:46 PM


I think the fact that Nvidia's most recently released high-end cards, such as the GTX 770 and GTX 760 Ti, still use 2GB of VRAM as standard is telling us a different story to the one you're portraying.

Just on that idea generally - I would think that the good people at Nvidia would not be too distressed at the idea of garnering fine benchmarks and reviews in the here and now, and having people wishing for an upgrade before too much longer.  :P

#90 WilliamImm

    Legendary Blue Dragon

  • Super Moderators
  • 1,572 posts

Posted 08 June 2013 - 03:15 PM

Well, the important thing (and the reason why I decided on 3GB) is that when you are heavily modding Skyrim, you can easily go over 2GB of VRAM if you have enough texture packs installed (just ask Neovalen). While the extra memory may not come into play in other games, it does come into play in Skyrim with tons of texture packs.

