Regardless of pure benchmark results, what matters most is performance in actual applications (or games). I tried to choose a small number of applications that were common and popular (Photoshop, AppleWorks, Quake3 and Unreal Tournament). Many others could have been run, but with limited time I chose the most popular (based on reader interest/mails) of the programs I personally own. (The latest update includes results from a simple animation test in Maya PLE (Personal Learning Edition) for OS X.)
Just as an FYI - OS X Internet Explorer window resizing has always been a sore point...no graphics card I've seen helps that. (Faster CPUs help a bit, but I'm mentioning this so that readers won't expect a graphics card to make OS X IE 5 window resizing faster.)
Note: There's a dual display/OS X scrolling bug in the 1.0 drivers that is already fixed in the next driver update, which should be posted to the web sometime in April. There is also a problem in OS 9 when booting with the display set to 1152x864 mode, although 1152x870 and other resolutions are fine. I've reported this to ATI. (3D performance is also expected to increase in the next driver update for both the 8500 and 7500 models.)
Resolutions Available (w/Sony FW900)
The image below shows the list of resolutions with the Radeon 8500 Mac Edition when connected to a Sony FW900 (widescreen) CRT monitor. (Later driver updates may offer more/different resolutions, and the selection will vary depending on the monitor used.)
Note: ATI's 8500 Mac Edition Specs page notes the maximum DVI LCD resolution supported is 1600x1200.
2D General Screen Sharpness: In my opinion, the 8500 card had the sharpest monitor image quality of any card I've tested at higher resolutions. Most of the OEM cards shipped in new Macs were not as sharp as the Radeon 8500 at high resolutions on the Sony FW900.
About DVD Movie Performance:
As I've noted in past reviews (most recently the GeForce4MX vs Radeon 7500 comparison), I've seen the best image quality with ATI's Radeon cards. The ATI Radeon cards also had noticeably lower CPU cycle usage during OS X DVD movie playback - typically half that of the Nvidia cards, for instance.
Here's a repeat of ATI's comments about DVD playback performance:
In terms of hardware acceleration support, it's the same as we had under OS9 for the Radeon product. We take over the chain once the macroblocks have been parsed and use the hardware to perform the inverse discrete cosine transform (iDCT), motion compensation, subpicture overlay, YUV to RGB conversion and scaling to the screen. As such, our actual driver only takes about 1-2ms (depending on the speed of your CPU) to send all this information to our hardware and execute it concurrently with the software decoder. Adaptive and Temporal Deinterlacing support are not in the current shipping OSX, but we've already added the support here and it will be available in an upcoming build.
Under OS9, we've got the exact same hardware acceleration support, but since it is not a fully preemptive multitasking OS, the decoder can get locked out by other actions which results in dropped frames or macroblock corruption.
As we offload so much more from the CPU than the GeForce4MX, and since we can do it concurrently with Apple's decoder, it results in substantially lower CPU usage. As you've seen from your experiments, this results in much better user responsiveness and fewer dropped frames even on OSX.
(Arshad of ATI)
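Since ATI's comments above list the DVD pipeline steps the hardware takes over, here's a minimal software sketch of one of them - the YUV to RGB conversion - assuming standard BT.601 video-range constants. (This is illustrative only; it is not ATI's driver code.)

```python
def yuv_to_rgb(y, u, v):
    """Convert one BT.601 video-range YUV (YCbCr) sample to 8-bit RGB.

    This is the colour-space conversion step the Radeon hardware
    performs during DVD playback, sketched in software. Constants
    are standard BT.601 values, not taken from ATI's driver.
    """
    def clamp(x):
        return max(0, min(255, int(round(x))))

    c, d, e = y - 16, u - 128, v - 128
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b

# Video black (Y=16) and white (Y=235) map to full-range RGB:
print(yuv_to_rgb(16, 128, 128))   # black
print(yuv_to_rgb(235, 128, 128))  # white
```

Doing this per pixel (plus iDCT and motion compensation) for every frame is exactly the kind of work that eats CPU cycles when a card doesn't accelerate it.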
Maya PLE Tests (OS X): The free Maya PLE edition was not available when I had access to the dual 1GHz system, so I used my Dual G4/533 system for this test with the Radeon 8500 and OEM GeForce3 card. (Maya PLE does not take advantage of dual processors however.) The graph below lists the time to play the "mousetrap" sample scene with both graphics cards.
The test was run twice and the average time used (rounded to one decimal place). Considering the margin for human error in starting/stopping a stopwatch, the results are literally identical. I have my doubts that this scene animation playback is really a good test of graphics cards (the timing of the scene's actions may be a limiter, and with the same CPU it may be more timing/CPU bound than anything else). If any experienced Maya users have suggestions for a better test, let me know.
Both the 8500 and GeForce3 had excellent image quality for the scene. In limited use I saw no abnormalities with either card. (The PLE edition's watermarks make it less than ideal for this however.)
Lightwave 3D 6.5 Tests (OS 9): I compared times to generate a sample scene in Lightwave 3D v6.5 under OS 9.2.2 with the Radeon 8500 and OEM GeForce3 card. (I didn't use the "hummer" scene from past tests since it shows abnormalities with the Nvidia cards, reportedly due to the distance scaling in that scene.) The graph below lists the time to generate the "foggy train" sample scene with both graphics cards. (The March 2002 ATI driver update may improve performance - these tests were run before that driver update. See the Video articles page for feedback on the March 2002 driver update and a link to an ATI firmware update for the Radeon 8500.)
I've not tested it with the GF3 yet, but I have seen some "sparking" (pixel noise) in scenes with the GeForce4MX running Lightwave 7.0 in OS X. (Refreshing the scene, or any screen operation, clears it - and I did not see this with the ATI cards with LW 7.)
Photoshop 5.5 Scrolling Tests
I measured the time it took to scroll the Flowers.psd file (resampled to 300DPI) at the maximum zoom (1600%). I positioned the scroll bars at the max left and top positions, and then timed how long it took for each card to scroll the image horizontally (left to right) and then vertically (down). Display was set to 1600x1200, millions colors. All times are in seconds, lower numbers are faster.
I know it's odd that the GeForce4MX was fastest at this test. I repeated the tests and the results were still the same. (There's some margin for stopwatch starting/stopping error - but the GF4MX was still fastest. All Nvidia cards used the same drivers that shipped on the Dual G4 1GHz - Nvidia drivers v2.5)
AppleWorks 6.2 Scrolling Tests:
I measured the time it took to scroll from the top to the bottom of a 100 page Newsletter document. (Multiple columns with images and text on each page.)
The graph below shows how long each card took to complete the test. (Lower numbers are faster.)
In the same system, all cards were literally identical in performance on this test. (Easily within the margin of human error in starting/stopping a stopwatch.)
AppleWorks Scroll Test w/Dual Displays: I ran the same Appleworks scroll test with the 8500 in a Dual G4/533 with one and two displays connected. The difference in scroll times were within the margin of error for stopwatch testing. (1600x1200 CRT used for Appleworks, 2nd display was a 1024x768 VGA LCD monitor. Both monitors set to millions colors.)
(Graph above updated for OEM GeForce3 tests in DP533 with single monitor. It does not support dual monitors.)
Quake3 Arena Tests
Quake3 1.31beta 4 used in OS X (r_smp=1 so that both CPUs were used). All tests used the standard game options with "High Quality" settings (only the resolution was changed.) [high geometric detail, texture quality slider one notch down from max, all game options on, Trilinear Filtering, 32bit mode/textures.] No config file tweaks were used. (In fact before the tests I deleted the quake3 config file forcing a rebuild of it to remove the chance that any settings had been modified from prior tests.) Desktop mode was 1600x1200/Millions colors.
All results are in frames-per-second, higher is better.
The same results in a line graph:
The Radeon 8500 performance at lower resolutions indicates there may be some room for improvement (driver optimization). But at the high end the 8500 Radeon delivered the best performance I've seen to date from any Mac graphics card. When it ships later this spring, the $399 GeForce4 Ti (Titanium) card may be faster however.
Update - DP533 System Tests: To see how the Radeon 8500 performs in a Dual G4/533, I ran tests with a clean Q3 config file and one with the common tweak to increase the com_hunkMegs setting (to 128) and the s_chunksize to 2048. (Note: setting s_chunksize to 4096 shows even more boost at the low end.) The graph below shows results with 1 and 2 CPUs enabled (R_SMP=0/R_SMP=1) and with a "clean" config file and with the increased hunk_megs and s_chunksize setting. (Same game detail settings used in all cases).
The * indicates using a com_hunkMegs=128 setting (vs 56 with clean/fresh config file) and s_chunksize=2048 (vs 512 default). DP533 is with r_smp=1 (both CPUs enabled); G4/533 scores are with r_smp=0. (I'll be adding scores with the GeForce3, etc. to this graph later.)
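For reference, here's how the tweaks described above would look in a Quake3 autoexec.cfg (or entered at the in-game console). The cvar names are standard Quake3 settings; the values are the ones used in these tests:

```
seta com_hunkMegs "128"   // memory hunk size (fresh config default: 56)
seta s_chunksize "2048"   // sound chunk size (default: 512; 4096 boosted the low end further)
seta r_smp "1"            // "1" = use both CPUs, "0" = single CPU
```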
Quake3 Tests with 2 Displays Connected: The graph below shows Q3 results with the 8500 in a dual G4/533 with the game running on a 1600x1200 mode CRT while a 2nd LCD monitor (1024x768) was connected.
(Both displays set to millions colors desktop. I'll be adding tests later with the CRT+DVI Cinema display combination). I used a fresh/clean config file (no tweaks, although as shown above they help FPS rates.) The DP533 scores are with "r_smp 1" (both CPUs enabled) - the SP533 scores are with "r_smp 0" (one CPU).
As you can see from the graph below, the performance impact from the 2nd display was practically NIL. (The 2nd display is turned off when Quake3 is run - the video signal indicator on the LCD showed no signal when the game was running.) Resetting the game resolution would show the 2nd display turn on for a second while the mode change was executing, but then back off. Exiting Quake3 turns the 2nd display back on.
NOTE: I could not get the OEM GeForce3 to run Quake3 1.31b4 under OS X 10.1.3. (everything from clean configs, config file/refresh rate tweaks, reinstalls of 10.1.3, etc. didn't work. Reverting to 10.1.2 fixed that but I'm not the only one having Q3 problems in 10.1.3 with a GF3. Same problem with the GF3 card and 10.1.3 on a DP533 and DP500. Quake2 opengl and UT X opengl ran fine however.)
Unreal Tournament Tests:
Unreal Tournament, as mentioned in previous reviews, is not really a good video card benchmark. (It seems more CPU bound than video card bound, and delivers relatively low framerates with every video card/system I've tested, especially with the Wicked400 demo.) Unreal Tournament version 436 was used, along with the UTbench and Wicked400 demo tests. RAVE and OpenGL modes were tested with Medium detail, 32-bit mode and low audio settings. Min desired FPS was set to "0" (zero). UT was allocated 250MB of RAM.
All results are in frames-per-second (FPS), higher is better. Since UT's timedemo stats report min, max and average framerates, all are listed below. 3D graphics mode was set to 1600x1200, millions colors. (1024x768 mode and below delivered similar results with all cards - usually within 1-3 FPS average at lower resolutions, since UT is primarily CPU bound there.) [Edits to the config file allowed RAVE mode resolutions higher than the 1024x768 normally shown in RAVE mode preferences - thanks to Eric Jaeger of ATI for the note.]
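As a side note on how those three numbers relate, here's a small sketch (not UT's actual code, and the frame times are hypothetical) of how min, max and average framerates fall out of per-frame render times. The average is total frames divided by total time, so a stretch of slow frames drags it down more than a simple mean of the instantaneous FPS values would:

```python
def timedemo_stats(frame_times_ms):
    """Summarize a timedemo run: per-frame instantaneous FPS,
    then the min / max / average a stats line would report."""
    fps = [1000.0 / t for t in frame_times_ms]
    # Average FPS = total frames / total elapsed time (not the mean of the fps list)
    avg = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)
    return min(fps), max(fps), avg

# Hypothetical frame times in ms: two quick frames, one slow one
lo, hi, avg = timedemo_stats([20.0, 25.0, 50.0])
print(lo, hi, round(avg, 1))  # 20.0 50.0 31.6
```

This is also why the demo slowdown noted below with the ATI cards pulls both the min and average scores down noticeably.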
Wicked400 Demo - RAVE 1600x1200:
UTBench Demo - RAVE 1600x1200:
(Since I'm packing up the dual 1GHz system for return, I did not have time to retest all the cards in RAVE mode, so they are missing from the graphs above.)
Wicked400 Demo - OpenGL 1600x1200:
Note: GeForce2MX scores not shown with Wicked400 demo as I had sold the card before running the tests. (Only after I saw the odd slow-down with ATI cards in GL mode with the UTbench demo did I decide to also test with Wicked400 at 1600x1200 mode - by then I had already sold the GF2MX card as part of a system. I'm looking to buy another 2MX OEM card to replace it now.)
UTBench Demo - OpenGL 1600x1200:
ATI Card OpenGL Demo Notes: Although I didn't see it in actual gameplay, with each of the ATI cards playing back the UTBench demo there was a noticeable slowing of the demo as it progressed. Towards the end the effect was dramatic (almost like slow motion), and it surely results in lower average scores (note the lower min-FPS scores). I've written ATI to ask why there's such a dramatic slowdown as the recorded demo plays back. (During play with the 7500 card, for instance, I saw avg FPS rates in the 40-50 FPS range.)
I didn't see this slowdown as the demo progressed in the Wicked400 demo, which takes place in one room and is shorter in duration. (UTbench covers a larger area and is more typical of actual play in my opinion than the Wicked400 demo.)
The next page of this review has benchmark test results with the two cards (MacBench, Cinebench 2000, RaveBench, Walker 1.2, Let1kWindowsBloom)