Original Link: https://www.anandtech.com/show/483
NVIDIA GeForce SDR Roundup (February 00)
by Matthew Witheiler on February 16, 2000 1:32 AM EST - Posted in
- GPUs
Shopping for a video card in today's market is like being a kid in a toy store. With glamorous boxes, expensive advertisements, and nifty features, how can one ever decide which card to get? Is the video card that comes in the best looking package always the best performing or the best suited for you? As little as three years ago, the choices on the video card market were not nearly as diverse. A select few companies produced cards based upon a specific chip. For 3D gaming, you had three main options: cards based on 3dfx's Voodoo chip, those based on NVIDIA's Riva 128 chip, and ATI's 3D Rage Pro.
Needless to say, a lot has happened in three years. In that time we have seen the merger of 3dfx and STB, leaving only one supplier of 3dfx cards; very little advancement on ATI's side; and a near monopoly formed around NVIDIA. For the most part, people who want the best 3D performance disregard 3dfx's current Voodoo3 chipset due to its lack of 32-bit support. In addition, the high price and possible lag concerns deter some from ATI's Rage Fury MAXX card. Finally, Matrox's long-running G400 series has gained significant support, but many are wary of spending a good amount of money on a card that is almost a year old. Disregarding the others, this leaves only one major contender in the high performance 3D graphics market: NVIDIA.
NVIDIA produces the chip with the most buzz on the market today: the GeForce. Rather than sell the chip to one particular manufacturer or produce the graphics cards themselves, NVIDIA sells it to almost any company that can afford to buy it, meaning that almost everyone and their brother is making a GeForce card. While this allows for great variation in the market, in reality almost all GeForce cards are the same. With this in mind, the biggest differences between GeForce based cards come down to cooling methods, overclocking potential, drivers and support, memory type, and software bundle: almost exactly the same set of differences that were mentioned in our NVIDIA TNT2 Roundup. In this roundup, we have collected a handful of SDR based GeForce cards (thus removing memory type from the list of differences) and point out the pros and cons of each, making your decision as a buyer less complicated.
NVIDIA considered the GeForce processor so revolutionary that they even gave it its own name: the GPU, for Graphics Processing Unit. Hyped as "a significant breakthrough in realism" by NVIDIA, the goal of the GPU was to let the video card take over complex processing that previously had to be performed by the CPU. By distributing the workload to the graphics card, game designers are, in theory, able to add more polygons to a scene (thanks to the GeForce's hardware Transform and Lighting engine) and also enhance gameplay by using the freed CPU power for such things as artificial intelligence. In reality, we have yet to see a game take much advantage of the hardware T&L found in the GPU, meaning that all the GeForce processor is capable of doing right now is increasing frame rates in games. Perhaps this will change in the future; however, it could also end up like Intel's much touted MMX extensions, which have yet to make a dent in the high performance 3D market.
With a total of 23 million transistors on a chip produced on a 0.22 micron process, the GeForce GPU has a fill rate of 480 Million Pixels per Second. This is significantly more than the TNT2 Ultra, which comes in at 300 Million Pixels per Second, and thus results in an increase in frame rate in common 3D games. In this respect, the GeForce's improvement over its predecessor is rather astonishing.
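The arithmetic behind these fill rate figures is straightforward: theoretical fill rate is simply the core clock multiplied by the number of pixel pipelines. Here is a minimal sketch; the pipeline counts (four for the GeForce, two for the TNT2 Ultra) are assumed from the chips' published specifications rather than stated in this article:

```python
# Theoretical fill rate = core clock (MHz) x number of pixel pipelines.
# Pipeline counts are assumed from published chip specifications.
def fill_rate_mpixels(core_mhz: float, pipelines: int) -> float:
    """Return the theoretical fill rate in millions of pixels per second."""
    return core_mhz * pipelines

geforce = fill_rate_mpixels(120, 4)  # GeForce 256: 4 pipelines at 120 MHz
tnt2u = fill_rate_mpixels(150, 2)    # TNT2 Ultra: 2 pipelines at 150 MHz
print(f"GeForce 256: {geforce:.0f} MPixels/s")  # 480 MPixels/s
print(f"TNT2 Ultra:  {tnt2u:.0f} MPixels/s")    # 300 MPixels/s
```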
On another note, as was common in the days of the TNT2, many manufacturers choose to use NVIDIA's reference design when producing a GeForce card (only the ELSA ERAZOR X and the ASUS V6600 Deluxe reviewed here differ from the reference design). Every time NVIDIA releases a new processor, they also release a reference design that tells manufacturers how to build a board around it. While this decreases research and development time for the manufacturers (which is crucial for maintaining the 6 month processor cycle that NVIDIA is on) and also ensures a degree of quality, the use of the reference design leaves many of the SDR GeForce cards behaving in exactly the same way.
While the GeForce GPU only comes in one type, the memory used on GeForce cards comes in two varieties: SDR and DDR RAM chips. SDR, the RAM type examined in this roundup, stands for Single Data Rate, meaning that data is transferred once per clock cycle. The RAM found on SDR cards can be SDRAM or SGRAM, with the vast majority of manufacturers using EliteMT SDRAM chips (ELSA uses SAMSUNG SDRAM chips). DDR RAM, standing for Double Data Rate, is used on cards that generally cost $50 more than their SDR counterparts. DDR RAM transfers data twice per clock cycle, on both the rising and falling edges of the clock. This increases the speed of the memory bus, as RAM running at 150 MHz x 2 results in an effective 300 MHz memory clock.
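To see what that doubling means in raw numbers, here is a rough sketch of peak memory bandwidth. The 128-bit memory bus width is an assumption taken from the GeForce 256's published specifications, not from this article:

```python
# Peak memory bandwidth = clock x transfers per clock x bus width.
# The 128-bit (16 byte) bus width is assumed from the GeForce 256 spec sheet.
BUS_BYTES = 128 // 8  # 16 bytes moved per transfer

def bandwidth_gb_s(clock_mhz: float, transfers_per_clock: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return clock_mhz * 1e6 * transfers_per_clock * BUS_BYTES / 1e9

print(f"SDR at 166 MHz: {bandwidth_gb_s(166, 1):.2f} GB/s")  # ~2.66 GB/s
print(f"DDR at 150 MHz: {bandwidth_gb_s(150, 2):.2f} GB/s")  # ~4.80 GB/s
```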
Due to the similarities between SDR and DDR cards, AnandTech has found that some manufacturers are phasing out the SDR versions of their cards completely. Creative started phasing out rather early in the game, and the SDR version of their popular 3D Blaster Annihilator card is almost impossible to find now because Creative no longer produces it. We have also gotten word from Leadtek that they will soon "be producing many more DDR cards than SDR ones." The reason for this is twofold. First, because the reference design for SDR cards is very similar to that of DDR cards, manufacturers can make the transition from SDR to DDR RAM rather easily. Second is the cost of DDR RAM. From the chip manufacturer's view, DDR RAM costs only about 3% more to make than conventional SDR RAM; due to high demand, however, RAM producers are currently selling DDR chips at a steeper premium. Card manufacturers, anticipating that the DDR premium will drop to that 3%, have started phasing out SDR cards to make way for better performing DDR cards at little extra cost.
While SDR cards are still prominent in the market, DDR GeForce cards are not an option for some people due to the higher price tag. The fact of the matter is that the only differences between DDR and SDR GeForce cards are the type of RAM used and the position of a couple of capacitors near each RAM chip. The processors are not different, the cooling system remains the same, and the reference design does not change significantly. Many people will not notice the added speed of a DDR GeForce card under normal gaming, making the less expensive SDR model a better choice.
The path taken by NVIDIA with the TNT2 production line was a rather interesting one. Rather than sell the processor with a set clock speed, they sold the chips with recommended clock speeds. This placed the clock speed decision in the manufacturer's hands, allowing the producer to sell cards at whatever speed they felt the processor was stable at.
In a step that seems counterproductive, NVIDIA will not allow manufacturers to sell GeForce cards at speeds higher than the suggested 120/166 MHz rating for SDR cards. We have heard from quite a few manufacturers that NVIDIA will respond with "no" to any suggestion of selling an overclocked card, no matter how good the card's cooling is. Due to lack of comment from NVIDIA, we are left to speculate why the core and memory speeds of the GeForce are regulated. The first possible reason is competition. Perhaps NVIDIA does not want SDR cards to compete with higher priced DDR cards, as overclocking could reduce the margin of difference between the two. This makes perfect sense from a manufacturer's point of view, as they would not like a less expensive overclocked SDR card competing with a more expensive stock DDR card. However, this raises the question of why NVIDIA also regulates the speed of DDR cards.
Another possibility is that NVIDIA does not want GeForce cards competing with their upcoming chip, code named NV15. It is too early to tell how much faster this chip will be than the GeForce processor, so it is possible that NVIDIA's 6 month product cycle was too fast to produce a GeForce-killing chip. By regulating the clock speeds at which manufacturers are able to sell GeForce cards, NVIDIA removes this competition at the retail level.
Due to these regulations, all SDR GeForce cards come with a default clock speed of 120 MHz for the core and 166 MHz for the memory. While many companies offer cards with higher rated RAM and more efficient cooling, overclocking cannot take place until the card is in your computer. How feasible is this overclocking, and how much difference does it make? Read on to the next two sections to find out.
It is a well known fact that manufacturers are overly cautious when it comes to speed ratings. In the past, we have seen the 5.5 ns SDRAM used on TNT2 cards go as much as 20 MHz past its "rated" speed of 183 MHz, with similar results for the TNT2 processor itself. The GeForce's 23 million transistors and heavy reliance on memory make the overclocking outcome a bit different from that of its predecessor.
Two factors go into the ability to overclock the GeForce GPU: heat and chip quality. These are the same two factors that we found were essential with TNT2 cards. Heat, every processor's enemy, is most readily conquered by the heatsink and fan found on every card. The difference here lies in the fan and heatsink used and in the compound used to attach the heatsink to the GPU. Many companies, including ASUS, Leadtek, and Gigabyte, use the commonly found 4 cm x 4 cm low profile heatsink and fan. ASUS, with the V6600 Deluxe, and ELSA, with the ERAZOR X, chose to use higher quality parts: the ASUS has a very high quality hardware monitored fan, and the ELSA has a large 5 cm x 5 cm heatsink.
While the cooling parts chosen may affect overclocking performance slightly, it is the way the heatsink and fan are attached to the GPU that really dictates performance. In this roundup, we see everything from the worst way to attach a heatsink to the best, as well as some things in between. To test the efficiency of each cooling system, we attached a thermistor from an ABIT BF6 motherboard (which uses the Winbond W83782D hardware monitoring chip) to the back side of the board directly under the processor core. Temperature was monitored via Motherboard Monitor 4.12 during a 30 minute loop of Quake III Arena. The highest temperature reached was then recorded, as the graph below shows.
Only two of the boards we reviewed included what AnandTech considers the most efficient cooling compound: thermal grease. It is also no coincidence that both of these boards come from the same manufacturer: ASUS. Both the ASUS V6600 SDRAM and the V6600 Deluxe came with properly applied thermal grease to aid in cooling the hot GPU. It was no surprise that both of these cards had the lowest running temperature at stock speeds, reaching 161.6 degrees Fahrenheit (72.0 degrees Celsius).
Next best was the Gigabyte GA-GF2560, which has its heatsink attached via thermal tape. While thermal tape is not the most efficient way to transfer heat, it proves to be much more effective than no compound at all. The Gigabyte heated up to 170.6 degrees Fahrenheit (77.0 degrees Celsius), keeping the card relatively cool.
The third best cooling option, as suspected, turned out to be the thermal glue used on the ELSA ERAZOR X. The ELSA reached a not so cool 174.2 degrees Fahrenheit (79.0 degrees Celsius), showing how thermal glue does not provide the best heat transfer.
Finally, there was the Leadtek WinFast GeForce 256 SDR, which came with a heatsink attached via pins and a scant amount of what appears to be thermal glue. The ineffective heat transfer of this setup shows in the hot 179.6 degrees Fahrenheit (82.0 degrees Celsius) it reached. It was no surprise that this card performed worst in our overclocking tests.
The quality of the core chip also plays a role in overclocking potential. Unfortunately, chip quality cannot be predicted at the time of purchase. Just as there are good batches of RAM, some batches of processors have more overclocking potential than others. Thus, we are left relying on cooling methods to predict overclocked speeds.
While the largest enemy of the processor core is heat, fluctuations in memory speed are most often due to chip purity. When produced, memory chips are rated for the speed at which they are supposed to run. In the GeForce world, we have only seen chip ratings of 5.5 and 5.0 ns, which translate to 183 MHz and 200 MHz respectively. All of the cards reviewed could at least be pushed from the stock memory speed of 166 MHz to their rated speed, though some chip batches performed better than others.
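The conversion from a nanosecond rating to a clock speed is simply the reciprocal of the cycle time, as this quick sketch shows:

```python
# A RAM chip's rated clock speed in MHz is the reciprocal of its cycle time:
# f[MHz] = 1000 / t[ns] (e.g. a 1000 ns cycle time would equal 1 MHz).
def rated_mhz(ns: float) -> float:
    """Return the rated clock speed in MHz for a given ns rating."""
    return 1000.0 / ns

print(f"5.5 ns -> {rated_mhz(5.5):.0f} MHz")  # ~182 MHz, marketed as 183 MHz
print(f"5.0 ns -> {rated_mhz(5.0):.0f} MHz")  # 200 MHz
```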
The best example of this is the EliteMT 5.5 ns SDRAM used on both the ASUS V6600 SDRAM and the Gigabyte GA-GF2560. While the chips are the same, the maximum overclocked speed of the EliteMT RAM on the ASUS was 186 MHz, while the same chips on the Gigabyte card reached a speedy 204 MHz. Both cards use the standard reference board design, so the speeds reached must be independent of the board. The difference comes down to production yield: some chips have fewer impurities than others, so performance varies from batch to batch. Below are pictures of the RAM types used.
SAMSUNG 5 ns SGRAM used by the ASUS V6600 Deluxe.
SAMSUNG 5.5 ns SDRAM used by the ELSA ERAZOR X.
EliteMT 5.5 ns SDRAM used by both the Gigabyte GA-GF2560 and the ASUS V6600 SDRAM.
EliteMT 5 ns SDRAM used by the Leadtek WinFast GeForce 256.
Another factor in memory overclocking is the GeForce processor's heavy reliance upon its memory. The high fill rate of the GPU means the memory is used heavily for storing and handling the data passed to it. It is for this reason that the memory clock speeds on GeForce cards rarely climb as high as TNT2 memory speeds; the TNT2 did not overwhelm its memory to such an extent.
Given that both the maximum core and memory speeds of your SDR GeForce may vary, let's take a look at how the GeForce processor performs when overclocked.
Provided that your SDR GeForce card can reach overclocked speeds, the data below show how much improvement you can expect to gain from overclocking. One of the reference boards tested here, the Gigabyte GA-GF2560, was used to push the core and memory speeds higher than the stock rating of 120/166 MHz. Using PowerStrip, the core and memory speeds were increased in 5 MHz intervals and the resulting frame rate was recorded. Below are graphs of both 16-bit and 32-bit performance, evaluated with a Pentium III 550E in Quake III Arena.
As the above graphs show, increasing clock speed results in a significant improvement in FPS, provided that the resolution is high enough to take advantage of the extra speed. At the low resolution of 640x480x16, overclocking the card does not produce any significant speed increase. It is not until 640x480x32 that we see a linearly increasing trend in the data. In fact, the 32-bit data points really show how clock speed influences frame rate in a linear fashion, as shown by the graph below.
Because the data is linear, a function can be fit to it and used to predict the outcome for any core and memory speed (given that both increase by the same amount). The equation that fits the data at 1024x768x32, y = 0.2935x + 42.022, suggests that for every 1 MHz you add to the core and memory speeds, you can expect performance to increase 0.2935 FPS over the base rating of 42.022 FPS. This equation can be used to estimate the performance gained by overclocking the memory and core by any constant amount. For example, if you wished to overclock the core and memory 12 MHz above stock, the equation suggests that your FPS in Quake III Arena at 1024x768x32 would be 0.2935(12) + 42.022, which evaluates to 45.5 FPS. For reference, equations for the other resolutions are listed below, along with a short sketch showing how to apply them. These may be used as rough estimates, but keep in mind that not all cards will perform exactly the same unless the testbed system is mirrored exactly.
Resolution | Equation
640 x 480 x 16 | N/A
640 x 480 x 32 | y = 0.140x + 90.36
800 x 600 x 16 | y = 0.101x + 93.3689
800 x 600 x 32 | y = 0.33068x + 68.82
1024 x 768 x 16 | y = 0.34x + 71.633
1024 x 768 x 32 | y = 0.2935x + 42.022
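To make the fits easier to apply, here is a minimal Python sketch that evaluates them. The coefficients are taken directly from the table above, and the +40 MHz loop corresponds to the highest overclock we reached (160/206 MHz versus the stock 120/166 MHz):

```python
# Predicted Quake III Arena FPS from the linear fits above, where x is the
# number of MHz added to BOTH the core and memory clocks over stock (120/166).
FITS = {  # resolution: (slope, intercept)
    "640x480x32":  (0.140,   90.36),
    "800x600x16":  (0.101,   93.3689),
    "800x600x32":  (0.33068, 68.82),
    "1024x768x16": (0.34,    71.633),
    "1024x768x32": (0.2935,  42.022),
}

def predicted_fps(resolution: str, mhz_over_stock: float) -> float:
    """Evaluate y = slope * x + intercept for the given resolution."""
    slope, intercept = FITS[resolution]
    return slope * mhz_over_stock + intercept

# The article's example: +12 MHz at 1024x768x32 comes out to about 45.5 FPS.
print(f"{predicted_fps('1024x768x32', 12):.1f} FPS")

# Relative gains at the highest overclock we reached (+40 MHz, i.e. 160/206):
for res, (slope, intercept) in FITS.items():
    print(f"{res}: {slope * 40 / intercept * 100:.0f}% faster")
```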
The relative performance increase is largest at the 32-bit settings, where the increased frequencies pay off most. At the highest speeds we could reach (160/206 MHz, good for a fill rate of 640 Million Pixels per Second), the equations predict a gain of about 28% at 1024x768x32 and roughly 19% at 800x600x32. In absolute terms, performance is helped most by increasing the clock speeds at 1024x768x16, for this is where the slope of the line is greatest. The difference is least noticeable at 640x480x32, where only a 6% performance gain is to be had.
If there is one thing to keep in mind, it is that it is not typical for a GeForce card to reach the high marks we found with the Gigabyte GA-GF2560. In fact, at 160/206 MHz, stability was compromised and the card crashed during an extended loop. Your mileage will quite likely vary. With this in mind, performance gains are there to be had through overclocking, as long as it is done with a careful hand.
Now let's take a look at the contenders to see how they perform, how far they can overclock, the drivers used, and the pros and cons of each card.
ASUS V6600 SDRAM
Memory | 32 MB EliteMT SDRAM 5.5 ns
Cooling | Heatsink/Fan Combo
TV-Out | Not Supported
TV-In | Not Supported
Drivers | ASUS custom
Highest Overclock | 150/186 MHz
Overclocking Utility | None
Software/Gaming Bundle | Turok 2, XG2
Estimated Street Price | $220.00
ASUS's first stab at the GeForce market resulted in the V6600 SDRAM. The goal here was to reduce production time and get one of the first GeForce cards to market. Because of this, ASUS chose to use the NVIDIA reference design, meaning that the V6600 SDRAM is very similar to the other reference boards here. Even the EliteMT SDRAM chips are almost identical to the RAM seen on other reference boards. Thanks to ASUS's large volume and high quality standards, the V6600 SDRAM remains a very strong reference board, especially with the luxury of thermal grease attaching the heatsink to the GPU.
The drivers used by ASUS are essentially reference drivers with a new face and a few additional features. The interface is nice, but because this card shares its drivers with the V6600 Deluxe and V6800 Deluxe, you may find yourself longing for the advanced video and VR features found on those cards.
Pros: Widely available, midrange price, thermal grease used to keep the GPU cool.
Cons: Reference design keeps the card from standing out, no TV-out option, unified drivers leave you wanting more.
Useful settings at your fingertips via the taskbar resident driver icon.
Color adjustment options are easily found not only for the desktop, but also for D3D and OpenGL.
Refresh rate adjustments are easy to make.
The Direct3D settings are plentiful and easily adjusted.
On the downside: the unified drivers leave you wanting more (mainly the V6600 Deluxe).
ASUS V6600 Deluxe
Memory | 32 MB SAMSUNG SGRAM 5 ns
Cooling | Heatsink/Fan Combo
TV-Out | Chrontel 7005C
TV-In | Philips SAA7113A
Drivers | ASUS custom and Smart Doctor
Highest Overclock | 158/208 MHz
Overclocking Utility | None
Software/Gaming Bundle | Drakan, Rollcage, Ulead VideoStudio, ASUS DVD
Estimated Street Price | $265.00
After establishing itself in the GeForce market with the reference design V6600 SDRAM, ASUS switched to their own board design to allow for additional features. Built-in TV-out, TV-in, and VR functions all contribute to making this card a multimedia powerhouse. While the TV-in features are not meant for professional video editing, they can surely be used to transfer home videos to your computer. The built-in VR port and included glasses will provide a few days' worth of entertainment; however, don't be surprised if the glasses end up in your desk drawer after the novelty wears off. The high quality cooling, fast SGRAM, and aforementioned video functions make for a GeForce powerhouse, albeit one priced almost $45 higher than some of its competitors.
The drivers used here are unified across the V6x00 series, meaning that ASUS had time to add a few features and change a few cosmetic points, but the fact of the matter is that the drivers are based on the NVIDIA reference ones. The card also comes with the highly useful Smart Doctor utility, which provides hardware monitoring via the onboard Winbond W8371D chip. Temperature, voltage, and fan speed are all tracked in an easy to use utility. Also worth mentioning is the V6600 Deluxe's ability to dynamically clock itself: the card automatically underclocks when the GPU is not being used to any great extent, and the speed jumps right back up to normal levels when stress is placed on the GPU.
Pros: Widely available, video-in support, composite and S-Video out support, good cooling via thermal grease, fast SGRAM, dynamic overclocking, hardware monitoring.
Cons: Expensive, VR is more of a toy than a useful feature, video-in pictures may show interlacing.
The taskbar allowed us to change settings at the click of a mouse.
D3D settings were easy to find and tweak.
OpenGL settings could be modified easily as well.
The D3D VR settings are essential for a proper VR setup.
The desktop color tweaking utility.
Proper color adjustment is key for proper video output.
The Smart Doctor utility can be very useful for monitoring the card.
ELSA ERAZOR X
Memory | 32 MB SAMSUNG SDRAM 5.5 ns
Cooling | Large Heatsink/Fan Combo
TV-Out | Not Supported
TV-In | Not Supported
Drivers | ELSA WINman Suite
Highest Overclock | 150/183 MHz
Overclocking Utility | ELSA WINman Suite
Software/Gaming Bundle | None
Estimated Street Price | $225.00
ELSA's first stab at the GeForce market produced an interesting card. Rather than use the generic reference design, ELSA chose to make the ERAZOR X fit in NLX form factor cases. While this is essential for NLX case owners, it means little to AT and ATX computer owners, as the card fits in all three form factors. One would suspect that the oversized heatsink would make for a cool running card; however, because the heatsink is attached to the GPU with thermal glue, less heat is transferred to the heatsink's large surface area. The 5.5 ns SAMSUNG SDRAM chips perform similarly to the 5.5 ns EliteMT SDRAM chips.
Of the cards rounded up here, only ELSA took the reference drivers to a new level. By adding an advanced monitor mode, one can actually set the specifics of the monitor gun itself, providing the ability to make your own custom resolutions. The taskbar resident driver utility can be annoying at times, as the most used features are buried almost 3 levels deep. ELSA also includes their own form of hardware monitoring, provided via the Chip Guard utility. Rather than include an extra chip to perform hardware monitoring, the ERAZOR X monitors the voltage going to the fan to ensure that the fan is spinning at the proper speed; chip temperature is also monitored via a temperature sensor. This information is passed to Chip Guard, and if a problem occurs, Chip Guard will notify the user, even shutting down the computer in extreme cases. While this may all sound fancy, the driver implementation is rather weak, as the only information given is a small green light indicating that everything is functioning properly. It is not until an error has occurred that you see any type of dialog; no amount of clicking or pointing will show you chip temperature or fan RPM. An included overclocking screen allows changes to be made to the core and memory speeds with relative ease.
Pros: NLX form factor, non-reference design, large heatsink, powerful driver set, Chip Guard.
Cons: Drivers can be convoluted, thermal glue used for heatsink attachment, no TV-out, Chip Guard's weak implementation.
Advanced monitor mode: powerful stuff.
Color Settings.
DirectX Settings.
Overclocking was a breeze.
Gigabyte GA-GF2560
Memory | 32 MB EliteMT SDRAM 5.5 ns
Cooling | Heatsink/Fan Combo
TV-Out | Brooktree 868
TV-In | Not Supported
Drivers | NVIDIA reference (Gigabyte branded)
Highest Overclock | 160/204 MHz
Overclocking Utility | NVIDIA Reference
Software/Gaming Bundle | Populous, Future Cop, Superbike World Championship, DVD Player
Estimated Street Price | $230.00
Gigabyte made their transition into the GeForce market rather recently, entering the game a bit late. However, by entering the market late and using NVIDIA's reference design, Gigabyte was able to make the GA-GF2560 a very solid card. Cooled with a heatsink/fan combo attached to the GPU with thermal tape, the GA-GF2560 we tested was able to overclock to the highest levels we have ever seen on an SDR card. The EliteMT 5.5 ns SDRAM chips are the same chips used on almost every other reference design board.
Driver-wise, Gigabyte chose to use NVIDIA's reference drivers with the addition of the Gigabyte logo. While this provides sufficient options for the majority of users (as well as more frequent driver updates), it means that the GA-GF2560 lacks some of the more advanced features found in the ELSA ERAZOR X or the ASUS cards. One very nice touch is that Gigabyte chose to enable the overclocking tab in the reference drivers, a tab normally only seen after a registry hack.
Pros: Overclocking utility, good cooling, TV-out support, very solid performance-wise.
Cons: Very hard to find, reference design, reference drivers, does not come with Gigabyte's trademark Dual Cooling system.
The taskbar utility has the same features as the reference utility, with the addition of the Gigabyte logo.
Overclocking made easy with the Hardware Options screen.
The information screen shows a few of Gigabyte's additions, such as the "GIGABYTE on the Internet" button.
The same Direct3D settings found on almost all GeForce cards.
The OpenGL settings also remain identical to the reference design.
Output from the Brooktree 868 chip can be controlled via the above screen.
Color adjustments are easily made.
Leadtek WinFast GeForce 256 SDR
Memory | 32 MB EliteMT SDRAM 5 ns
Cooling | Heatsink/Fan Combo
TV-Out | Brooktree 869
TV-In | Not Supported
Drivers | NVIDIA reference (Leadtek branded)
Highest Overclock | 130/195 MHz
Overclocking Utility | Speed Runner
Software/Gaming Bundle | 3D F/X, Digital Video Producer, Web 3D, RealiMation STE, WinDVD
Estimated Street Price | $230.00
The Leadtek WinFast GeForce 256 SDR was one of the very first GeForce cards to hit the market. Using the stock reference design, Leadtek chose 5.0 ns EliteMT SDRAM chips, which provide additional overclocking potential over the other reference design cards that use 5.5 ns SDRAM. TV-out is provided via the common Brooktree 869 chip and an onboard S-Video port. What Leadtek gains with fast 5.0 ns SDRAM chips, it loses with its poor cooling method. The heatsink/fan combo comes attached to the GPU with what appeared to be scarcely applied thermal glue and pressure pins on the sides of the fan; upon removing these pins, the heatsink/fan combo quickly fell off the card. While this should prove sufficient for everyday use, overclocking potential is brought down significantly by the heat trapped between the heatsink and the GPU.
Leadtek chose to use NVIDIA's reference drivers with a few cosmetic twists. Once again, this allows for speedy and frequent driver updates but also takes away from the functionality of the card. Also included is Speed Runner, Leadtek's overclocking utility, which provides an easy way to tweak core and memory speeds. Finally, there is Leadtek's Eye Protection utility, which reminds you to take a break when you have been looking at your monitor for too long. Useless in home situations, but perhaps it could serve at work as an excuse to take a coffee break.
Pros: Very fast 5 ns SDRAM chips, TV-out support, overclocking utility.
Cons: Poor cooling, reference design.
While very similar to the NVIDIA taskbar utility, Leadtek's bar provides easy access to commonly used features.
The OpenGL settings are the same as those found in the standard NVIDIA drivers.
The standard NVIDIA Direct3D screen is also found.
An information page provides all essential details.
Also included is Speed Runner, the overclocking utility.
Color adjustments are easily made.
The Eye Protection utility. Yes, the picture is to scale.
So you have it narrowed down to a GeForce SDR as the heart of your next video card. The next step is to choose which card you want. At first glance, the cards in the SDR GeForce market may seem identical; many have very similar features, since they are all based upon the same processor. When the cards are compared, however, it is possible to distinguish the best from the rest. With this in mind, let's take a look at the best.
AnandTech's pick for best SDR GeForce card goes to the ASUS V6600 Deluxe for many reasons. First off, the 5 ns SGRAM the card uses provides great stock speed and even better overclocking potential. By choosing a non-reference design, ASUS was able to fit in many extras not found on any other SDR GeForce card (or on any DDR GeForce card, with the exception of ASUS's own V6800 Deluxe). The video-in features allow any home movie buff to edit videos, write them to CD-ROM, or produce online videos. The built-in VR port and included glasses are fun to play with for a while and are always neat to show friends, but their true usefulness is easily questioned. The hardware monitoring and great cooling also make the V6600 Deluxe stand out from other SDR GeForce cards. The only downside to a card with so many nice and useful features is that its price runs nearly as high as some DDR GeForce cards. The advantage to getting this card over a DDR card is the video input and output features; if these are important to you, the extra price of the V6600 Deluxe is easily justified.
For those out there with price concerns (which is many people in the SDR market), the Gigabyte GA-GF2560 seems the most logical decision, hence it places runner-up in the roundup. As far as the price/performance ratio goes, the high overclocked speeds (which may vary from card to card), good cooling method, and low price all make the Gigabyte card a strong performer. The reference design and reference drivers may take away from the card's originality, but they also allowed Gigabyte to produce a card that is on par price-wise with its competition. The major downside is that many Gigabyte products are in short supply; even picking up one of their older TNT2 cards can be a challenge. If you can find the GA-GF2560, it is a very strong buy.
The decision of which video card to buy is often a difficult one. Prices are high and features are similar, traits which could leave you, the consumer, with your head spinning. Hopefully, having read the roundup, looked at the candidates, and seen the performance numbers, your decision of which card to buy will be a bit easier.