AdamNF
Forum Elite
Posts: 1134
Posted: Thu Apr 08, 2004 1:06 pm
What's your favorite video card brand (ATI, Nvidia, etc.)?
What video card do you use, and why did you choose it?
I'm using an ATI Radeon 9800 Pro. I'll never go back to Nvidia as long as I live.
Johnnybgoodaaaaa
Forum Elite
Posts: 1433
Posted: Thu Apr 08, 2004 3:09 pm
AdamNF: What's your favorite video card brand (ATI, Nvidia, etc.)? What video card do you use, and why did you choose it?
I'm using an ATI Radeon 9800 Pro. I'll never go back to Nvidia as long as I live.
Damn, you are one lucky mofo. I have to use an Nvidia GeForce FX 5200 PCI card because my computer doesn't have an AGP slot or whatever it is. Hopefully if I can get a new motherboard I can get a better video card (ATI 9600 or 9800; most likely an ATI 9600, since I don't have much money). I was thinking about getting an Nvidia GeForce FX 5700, though, because my brother has an ATI 9600 and he said it messes up when he's doing antialiasing and stuff. I have just about every game and can't even play them to the extreme; I just got Battlefield Vietnam and I can't even play that on high graphics.
AdamNF
Forum Elite
Posts: 1134
Posted: Thu Apr 08, 2004 3:39 pm
I would go with a Sapphire 9600 XT; you can get it cheap. The 9800 Pro is amazing. I was just playing Knights of the Old Republic with all the settings maxed and it worked perfectly. The new Nvidia cards come out this month...
Johnnybgoodaaaaa
Forum Elite
Posts: 1433
Posted: Thu Apr 08, 2004 4:05 pm
AdamNF: I would go with a Sapphire 9600 XT; you can get it cheap. The 9800 Pro is amazing. I was just playing Knights of the Old Republic with all the settings maxed and it worked perfectly. The new Nvidia cards come out this month...
How are the new Nvidia cards going to work? I heard they're for PCI Express... Do you know if that's the same as the normal PCI slots, or is there going to be something new you have to get that's different from normal PCI?
AdamNF
Forum Elite
Posts: 1134
Posted: Thu Apr 08, 2004 4:06 pm
I really don't know; I'll try to find out.
Johnnybgoodaaaaa
Forum Elite
Posts: 1433
Posted: Thu Apr 08, 2004 4:07 pm
AdamNF: I really don't know; I'll try to find out.
Yeah, it would be great if you didn't have to get some special motherboard for the new Nvidia PCX video cards, because then more money wouldn't be needed. But from what I have seen, it looks like you need a special motherboard with a PCI Express slot on it, although I could be wrong...
sk1d
Active Member
Posts: 193
Posted: Thu Apr 08, 2004 7:53 pm
You've got to go for the All-in-Wonder card by ATI.
Unless you can find a PVR here in Canada, this is the next best thing: record everything onto your hard drive, burn it to a VCD, and watch it whenever you want.
Or just watch TV while you're doing work on the computer.
Johnnybgoodaaaaa
Forum Elite
Posts: 1433
Posted: Thu Apr 08, 2004 8:21 pm
sk1d: You've got to go for the All-in-Wonder card by ATI.
Unless you can find a PVR here in Canada, this is the next best thing: record everything onto your hard drive, burn it to a VCD, and watch it whenever you want.
Or just watch TV while you're doing work on the computer.
I had one of those. What I liked about it was that I could watch the movies I download and play on my computer on my TV, because I could connect the card to the TV. For some reason that doesn't work with my GeForce FX 5200, even though it says it should. For right now the GeForce FX 5200 works well for playing games, better than the All-in-Wonder did for me...
AdamNF
Forum Elite
Posts: 1134
Posted: Thu Apr 08, 2004 8:32 pm
The All-in-Wonder 9600 XT comes with a TV tuner, right? That's why it costs so much.
The 9800 XT comes with a TV tuner too.
Posted: Mon Apr 12, 2004 8:03 pm
The 9800 XT doesn't come with a TV tuner; the fastest AIW card out there is the AIW 9800 Pro.
And I pick ATI > Nvidia simply because I'm an ATI fanboy.
AdamNF
Forum Elite
Posts: 1134
Posted: Mon Apr 12, 2004 8:38 pm
My mistake. Tell us what cards you have, not just the brand.
AdamNF
Forum Elite
Posts: 1134
Posted: Wed Apr 14, 2004 7:36 pm
The Nvidia 6800 Ultra is out... with a price tag of $500+ American, I won't be going near it. But the 3DMark03 score of 9000+ is an eye-opener.
Story Here
WarHawk
Active Member
Posts: 231
Posted: Fri Apr 16, 2004 12:22 am
I use an MSI GeForce FX 5600XT 256MB. I decided to go with Nvidia because I can never get ATI's drivers to work.
AdamNF
Forum Elite
Posts: 1134
Posted: Sun Apr 18, 2004 6:59 am
I was thinking about the 256MB 9800 Pro, but it actually scored lower in most benchmarks, plus 256MB is overkill and no games use it. I don't know much about Nvidia; is the 5600 in the same class as the Radeon 9600?
Posted: Sun May 23, 2004 8:09 am
Nobody denies that Intel's 0.13-micron Pentium 4 "Northwood" is a powerful processor, but it's only the second strongest chip in many PC gamers' desktops. The GeForce 6800 Ultra graphics processing unit (GPU) that Nvidia Corp. introduced last week has over 220 million transistors -- as many as four Northwoods or two of AMD's Athlon 64s, without using millions for on-chip cache as the PC processors do. If you think technical jargon about clock speeds and memory interfaces is only for CPUs, it's time you got up to speed on another silicon arena, where the warring superpowers are Nvidia and ATI Technologies instead of Intel and AMD. Let's take an introductory look at the engines that power the increasingly realistic 3D worlds of today's games and animated films.
Nvidia coined the acronym GPU -- and defined it as a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines, capable of producing at least 10 million polygons per second -- when introducing its GeForce 256 in August 1999. Archrival ATI tries to avoid the term, referring to "visual processing units" (or, for its motherboard chipsets, "integrated graphics processors"), but the GPU tag has become popular enough for our purposes.
Whether in a game player's PC or an engineer's scientific workstation, the GPU is designed to take a load off the system processor by handling the majority of 3D rendering and setup duties. Ever-more-complex transform and lighting (T&L) engines and vertex and pixel processors have driven GPUs' growth in size and complexity, but most consist of the same basic components, seen in ATI's block diagram of its R350 (Radeon 9800 Pro) core.
Some old-school parts of the chip include the 2D engine for productivity applications and image editing -- once the main performance consideration, now considerably overshadowed by the 3D circuitry -- and interfaces that pass data in and out of the GPU, whether the AGP bus (soon to be pushed aside by PCI Express) or various interfaces for various types of monitors such as CRTs and LCDs. (If you run into the term RAMDAC, it's short for Random Access Memory Digital-to-Analog Converter and converts digital image data to analog for a CRT's red, green, and blue electron guns; a higher RAMDAC clock offers faster screen-refresh rates.)
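A rough sketch of that RAMDAC relationship, in Python (the 25% blanking overhead is an assumed round number for illustration; real VESA timing standards vary):

    # Rough ceiling on CRT refresh rate for a given RAMDAC clock.
    # Assumes ~25% extra pixels per frame for horizontal/vertical
    # blanking; real timings vary, so treat this as an estimate.
    def max_refresh_hz(ramdac_mhz, width, height, blanking=1.25):
        pixels_per_frame = width * height * blanking
        return ramdac_mhz * 1_000_000 / pixels_per_frame

    # A 400MHz RAMDAC driving 1,600 by 1,200:
    print(round(max_refresh_hz(400, 1600, 1200)))  # ~167Hz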
Moving further into the GPU brings us to today's main attractions, the 3D-specific components. These may differ in terms of naming conventions and architectural design, but remain pretty consistent in terms of function, and include various setup engines, memory compression algorithms (HyperZ III in ATI's chart), an antialiasing unit, memory interface, and 3D rendering engine.
Rendering Hardware
Strictly speaking, of course, when we talk about 3D games or 3D graphics, we're almost always talking about something viewed on a 2D screen. The latter is made up of pixels, while a computerized 3D model is composed of meshes of polygons -- triangles whose corners (vertexes) define three points in space (three sets of X, Y, and Z coordinates). Using more and smaller triangles makes objects look smoother and more realistic, just as using more pixels smoothes out jagged lines in 2D.
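To make that concrete, here is a minimal mesh representation in Python; the names are illustrative, not taken from any real engine:

    # A vertex is three coordinates; a triangle indexes three vertices
    # in a shared list, so neighboring triangles can share corners.
    from dataclasses import dataclass

    Vertex = tuple[float, float, float]  # (x, y, z)

    @dataclass
    class Mesh:
        vertices: list[Vertex]
        triangles: list[tuple[int, int, int]]  # indices into vertices

    # A unit square in the z=0 plane, built from two triangles:
    quad = Mesh(
        vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
        triangles=[(0, 1, 2), (0, 2, 3)],
    )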
Rendering is the process of converting a 3D model to 2D for display. The pace of progress in rendering processor and memory architectures has outpaced that of any other PC technology in recent years; just as today's superscalar CPUs can execute multiple instructions in one clock cycle, GPUs have grown from four to eight to, in the case of Nvidia's brand-new GeForce 6800, 16 parallel pipelines, each able to render one pixel per clock.
The pipelines, combined with texture-mapping units (TMUs), produce the end-result image data. More pipelines mean faster performance, while more TMUs mean better-looking pixels; the speed-versus-quality equation is often noted in pipeline x TMU form, with ATI's Radeon 9800 series featuring an 8x1 design while Nvidia's GeForce FX 5900 has a 4x2 architecture.
Real-world speed measurement for GPUs involves not megahertz or gigahertz, but the number of pixels rendered. This can be expressed as pixel fillrate -- the number of pipelines times clock speed, such as 8 times 380MHz to yield 3.04 gigapixels/sec for the Radeon 9800 XT. Another popular spec is texel fillrate, or number of textured pixels per second, which multiplies the pixel fillrate by the number of TMUs per pipe.
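The arithmetic is simple enough to check yourself; a quick Python sketch using the figures above (the 400MHz clock in the second example is assumed for illustration):

    # Pixel fillrate = pipelines x core clock; texel fillrate also
    # multiplies in the TMUs per pipeline.
    def pixel_fillrate_gpix(pipelines, clock_mhz):
        return pipelines * clock_mhz / 1000  # gigapixels/sec

    def texel_fillrate_gtex(pipelines, tmus_per_pipe, clock_mhz):
        return pipelines * tmus_per_pipe * clock_mhz / 1000  # gigatexels/sec

    print(pixel_fillrate_gpix(8, 380))     # Radeon 9800 XT, 8x1 at 380MHz: 3.04
    print(texel_fillrate_gtex(4, 2, 400))  # a 4x2 design at an assumed 400MHz: 3.2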
Two very popular buzzwords in the GPU marketplace are pixel and vertex shaders. These are actually programs or functions performed by pixel and vertex processors within the GPU, which load data into registers, execute shader instructions, and render various visual effects and textures. Successive versions of Microsoft's DirectX programming specification, such as the current DirectX 9 versus its predecessor DirectX 8.1, permit more complex vertex and pixel shaders with more instructions and higher mathematical accuracy, which in turn permit more realistic-looking models.
The programmable nature of current vertex and pixel shaders means developers can not only use default instructions, but design new ones to fit their custom needs. Pop Finding Nemo into the DVD player, and you'll see the benefit of high-end pixel shaders and their ability to render complex surfaces.
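As a loose software analogy for what a pixel shader computes, here is per-pixel diffuse (Lambert) lighting in Python; a real shader expresses the same math in a GPU language such as HLSL and runs it across millions of pixels in parallel:

    # Per-pixel diffuse shading: the lit color is the base color scaled
    # by how directly the surface normal faces the light.
    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def diffuse_shade(normal, light_dir, base_color):
        n, l = normalize(normal), normalize(light_dir)
        intensity = max(0.0, sum(a * b for a, b in zip(n, l)))  # N.L, clamped
        return tuple(c * intensity for c in base_color)

    # A red surface lit from slightly off-axis comes out dimmed:
    print(diffuse_shade((0, 0, 1), (0, 0.5, 1), (1.0, 0.2, 0.2)))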
While game fanatics still brag about sheer speed, few users can detect the difference between -- and no current monitors can keep up with -- a 1,600 by 1,200-pixel image delivered at 150 frames per second instead of 130. So the 3D market, having achieved satisfactory speed, is increasingly turning to improved image quality. The current hot spot here is the GPU's antialiasing unit, which is responsible for maintaining detail while smoothing lines and curves in the rendered image.
Older graphics cards relied on supersampling -- up-sampling or adding more detail to image data, which worked well but took a severe toll on performance. The next step was hardware multisampling, in which individual textures are sampled using an algorithm before final pixel data is generated; multisampling algorithms and formats differ between vendors and GPUs as far as how, where, and in what pattern the texture data is sampled. The number of texture samples supported is usually selectable between 2X and 8X; the higher the number, the cleaner the image and the greater the performance penalty.
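A minimal sketch of the supersampling idea in Python (sample_scene stands in for the renderer and is assumed, not a real API): take several sample points per output pixel, then average them down.

    # Supersampling in miniature: take factor x factor samples inside
    # each output pixel and average them, smoothing hard edges.
    def supersample(width, height, factor, sample_scene):
        image = []
        for py in range(height):
            row = []
            for px in range(width):
                total = 0.0
                for sy in range(factor):
                    for sx in range(factor):
                        # Sample positions spread evenly inside the pixel.
                        total += sample_scene(px + (sx + 0.5) / factor,
                                              py + (sy + 0.5) / factor)
                row.append(total / factor ** 2)
            image.append(row)
        return image

    # A hard vertical edge at x=2.3 comes out with a soft 0.25 pixel:
    edge = lambda x, y: 1.0 if x < 2.3 else 0.0
    print(supersample(4, 1, 4, edge))  # [[1.0, 1.0, 0.25, 0.0]]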
Anisotropic filtering (along with bilinear and trilinear filtering) is another popular method of improving image quality. This technique samples multiple textures to smooth out various texture artifacts, especially as objects fade into the distance. The higher the AF setting (usually 2X to 16X), the more samples are used. Anisotropic filtering is processed through the pixel engine or shader, and does not have as great a performance impact as antialiasing -- but doesn't have as great an image-quality impact, either.
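For the filtering side, here is the simplest case, bilinear filtering, as a Python sketch (trilinear adds a blend between mipmap levels, and anisotropic takes several such samples along the viewing direction):

    # Bilinear filtering: blend the four texels around a sample point,
    # weighted by how close the point sits to each one.
    def bilinear_sample(texture, u, v):
        h, w = len(texture), len(texture[0])
        x, y = u * (w - 1), v * (h - 1)  # map [0,1] coords to texel space
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0          # fractional weights
        top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
        bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
        return top * (1 - fy) + bottom * fy

    tex = [[0.0, 1.0],
           [1.0, 0.0]]
    print(bilinear_sample(tex, 0.5, 0.5))  # 0.5: an even blend of all four texels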
As shown by the most recent Nvidia announcement, the GPU landscape continues to move ahead at breakneck speed. Later this year, PCI Express is expected to supplant AGP as the performance interface of choice, offering even more bandwidth to high-end GPUs. Microsoft hasn't set a date for DirectX 10, but is already leaking new pixel- and vertex-shader specifications. And pixel pipelines and memory and core speeds continue to climb. The goal is nothing short of virtual reality, and that'll take all the processing power vendors have to give.
ATI is good for photography and film editing, but Nvidia is the card of choice for gaming and high-end graphics... I guess the poll says it all.