NVIDIA GeForce GTX 680 Technology Report

Discussion in 'Reviews & Articles' started by Adrian Wong, Mar 23, 2012.

  1. Adrian Wong

    Adrian Wong Da Boss Staff Member

    In an industry where 6-month product cycles are considered the norm, the new NVIDIA Kepler architecture took a while to get here. The last NVIDIA microarchitecture, Fermi, was launched about 18 months ago, or roughly three product cycles ago. In the preceding 6 months, NVIDIA had only launched two new Fermi-based graphics cards - the GeForce GTX 560 Ti Limited Edition (with 448 CUDA cores) and the budget-level GeForce 510. As such, the launch of its latest Kepler architecture is truly a breath of fresh air.

    As is NVIDIA's habit, they mark the launch of a new microarchitecture with their flagship model of the family. So here we are, with the tech report on NVIDIA's latest graphics behemoth, the NVIDIA GeForce GTX 680. Let's take a look.


    Link : NVIDIA GeForce GTX 680 Technology Report
     
  2. zy

    zy zynine.com Staff Member

    Any other 600 series card coming out soon??
     
  3. Chai

    Chai Administrator Staff Member

    The 680 is actually not the highest end of the 600 series family. The 660 is rumoured to be out soon.
     
  4. zy

    zy zynine.com Staff Member

    Do hope the 660 comes out soon. Wondering how much it will be. :shifty:
     
  5. Chai

    Chai Administrator Staff Member

    Personally, I'm more interested in the Radeon 7800 series. They are very efficient, especially when idle, and they are fast enough for all games at 1920x1200.
     
  6. zy

    zy zynine.com Staff Member

    Interesting. Hmm :think: I'm looking for a graphics card that will let me play at 1920x1080 at decent quality if possible. Not now, but in a month or so. :angel: :whistle:
     
  7. Adrian Wong

    Adrian Wong Da Boss Staff Member

    The GTX 680 is the fastest member of the family.. The GTX 660 will be the high-mainstream card.
     
  8. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Well, I wouldn't go for the GTX 680 personally because I cannot afford it. Hehe.. But it would be more cost effective to just wait and get a mid-range card, like the future GTX 660.

    I don't think most of us will bother with 4xMSAA, or even the new FXAA in Kepler. It's nice to have if we can "afford it" but if not, I would do without AA - I would rather have a good frame rate.
     
  9. Chai

    Chai Administrator Staff Member

    From the chip code name GK104, you will know that it is not the highest end of the range. Either the GK100 is a failure, or it is just too expensive to manufacture, or maybe Nvidia is just trying to make more cash from a lower end chip marked up as high end, since AMD is really not that great in terms of price and performance, especially the 7970.
     
  10. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Hmm.. I'm not sure you can just conclude that from the code name. It could be that there will only be 4 chips, and the cards are all using full or partially-disabled versions of those chips. Oh well, we will know soon enough! :D
     
  11. Chai

    Chai Administrator Staff Member

    It might not sound obvious to you, but the previous generation's code names followed this pattern:

    GF110 = 580 series
    GF114 = 560 series

    This pattern goes all the way back to 400 series too.

    Now the GK104 = 680 series. So what happened to the GK100, I have no idea. Another hint is that the memory interface is 256-bit, instead of the previous 384-bit. The die size is also much smaller than both the 560 and 580 series. Usually they are matched or just slightly bigger than the previous generation. Of course, you can use the excuse that they are optimizing power usage now, but dropping from 520 mm² to just 294 mm² is really too drastic for a top-end chip.
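
    Just to put a number on how drastic that drop is, here's a quick back-of-the-envelope check in Python using the approximate die sizes quoted above (treat the figures as ballpark, not official):

    [CODE]
    # Rough check of the GF110 -> GK104 die-size drop (approximate figures).
    gf110_area_mm2 = 520.0  # previous flagship chip (GTX 580)
    gk104_area_mm2 = 294.0  # GK104 (GTX 680)

    shrink = 1.0 - gk104_area_mm2 / gf110_area_mm2
    print(f"GK104 die is {shrink:.0%} smaller than GF110")  # roughly 43% smaller
    [/CODE]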

    Everyone was expecting the GK100 to appear, but it never happened.
     
    Last edited: Apr 1, 2012
  12. Adrian Wong

    Adrian Wong Da Boss Staff Member

    I know but I was at the NVIDIA launch event. From what they told us, this is the highest end card. And if you see their previous launches, they tend to release their top card first, followed by lower-end cards.

    As for the memory interface, that's not really important for performance. The key is the actual memory bandwidth. They managed to cut it down to 256-bit because they used faster GDDR5 memory running at 1502 MHz (6008 MHz effective). This gives them the same bandwidth as the GeForce GTX 580's 384-bit interface running at 1002 MHz (4008 MHz effective).
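
    If you work it out, the numbers match up almost exactly (a quick sketch using the published figures above):

    [CODE]
    # Back-of-the-envelope memory bandwidth comparison.
    # bandwidth (GB/s) = (bus width in bits / 8) * effective data rate in GT/s

    def bandwidth_gbps(bus_width_bits, effective_rate_mtps):
        return (bus_width_bits / 8) * (effective_rate_mtps / 1000.0)

    gtx_680 = bandwidth_gbps(256, 6008)  # 256-bit bus, 6008 MT/s GDDR5
    gtx_580 = bandwidth_gbps(384, 4008)  # 384-bit bus, 4008 MT/s GDDR5

    print(f"GTX 680: {gtx_680:.1f} GB/s")  # ~192.3 GB/s
    print(f"GTX 580: {gtx_580:.1f} GB/s")  # ~192.4 GB/s
    [/CODE]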

    The advantage of the narrower interface is lower board complexity and costs. They can also use fewer memory chips, which further reduces board size and cost.

    The drop in the die size can be explained by the 28 nm process technology. Possibly the much smaller amount of control logic in the new SMX has a role to play too. If you look from a transistor count POV, the GK104 has just 540 million (18%) more transistors than the GF110 chip.
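
    For what it's worth, a rough density check backs this up (assuming GF110 is about 3.0 billion transistors, which is what the 540 million / 18% figures imply, and that it is a 40 nm part):

    [CODE]
    # Rough transistor-density check: does the 28 nm shrink account for the smaller die?
    gf110 = {"transistors_b": 3.0,  "area_mm2": 520.0, "node_nm": 40}
    gk104 = {"transistors_b": 3.54, "area_mm2": 294.0, "node_nm": 28}

    density_gain = (gk104["transistors_b"] / gk104["area_mm2"]) / \
                   (gf110["transistors_b"] / gf110["area_mm2"])
    ideal_scaling = (gf110["node_nm"] / gk104["node_nm"]) ** 2

    print(f"Actual density gain   : {density_gain:.2f}x")   # about 2.1x
    print(f"Ideal 40nm->28nm gain : {ideal_scaling:.2f}x")  # about 2.0x
    [/CODE]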
     
    Last edited: Apr 1, 2012
  13. Chai

    Chai Administrator Staff Member

    Yeah, which is why there are only two possible reasons why they scrapped the GK100. Either the GK104 is good enough to beat the 7970 using a 'low cost' solution (small die size, cheaper memory interface) while charging even more, or the GK100 was a failure.

    GK104 was supposed to be the midrange of the 600 series, which is why the transistor count was only 18% more. You don't see that pattern in the previous series.
     
  14. Chai

    Chai Administrator Staff Member

    While I was reading about the GeForce 600 series, I found many articles on the GK110 core, which is still missing from the lineup. This is supposed to be the true replacement for the GF110 cards.

    Nvidia Debuts GK110-based 7.1 Billion Transistor Super GPU
    Inside Nvidia's GK110 monster GPU • The Register
    GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot | techPowerUp

    So why is NVIDIA delaying this release? Problems with manufacturing? Too complacent?
     
  15. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Well, probably complacency, although I suppose it is possible they are facing problems getting it out in good quantities.
     
  16. ZuePhok

    ZuePhok Just Started

    Most likely TSMC is having yield issues with their 28nm tech. In fact, everyone but Intel is having problems moving beyond 32nm. If the 28nm node were reliable, Apple would have picked it for the iPad 3.

    NVIDIA has already stated in their Q1 earnings report that 600 series supply would remain tight in Q2 because they can't make it in large quantities. So I doubt we will see the GK110 this year. Heck, they might have already scrapped it and moved on to the next gen.

    Off topic, I know, but don't you think Intel's process tech (22nm is already in mass production, 14nm soon) will be so compelling that Apple might just consider adopting x86 for their future mobile devices? Too bad NVIDIA can't get Intel to produce their GPUs, but silicon process tech is really hurting NVIDIA's and AMD's progress.
     
  17. Chai

    Chai Administrator Staff Member

    IMO, it's a combination of both. That's why they have decided to make the GK104 the highest end of the current range, like I said before, since AMD has not progressed much.

    But I did make the mistake of mentioning the GK100 as the high end instead of the GK110.
     
  18. Adrian Wong

    Adrian Wong Da Boss Staff Member

    I think the industry is going to have trouble going beyond 28 nm... This is going to put a serious crimp on future chips.
     
