(Update: For comparison tests of the new Radeon 8500 AGP to the GeForce3, GeForce2MX, Radeon 7500 and GeForce4MX in a Dual 1GHz G4 - see this Radeon 8500 AGP preview.)
This page covers Benchmark test results using MacBench 5, Throughput 1.5, CineBench 2000, Walker 3D, G4Timedemo, RaveBench and "Let1KWindowsBloom". (The latter reports the time required to create/close 1000 windows in OS X.)
MacBench 5.0 Results
The graph below shows the results of MacBench's Graphics and Publishing Graphics scripted tests (emulation of Mac application graphics calls: scrolling, zooming, searching/replacing, etc.). Resolution for both tests was 1600x1200 (millions colors).
Graphics Primitives Results: In MacBench's QuickDraw primitives tests, the GeForce4MX was typically 10-15% faster than the Radeon 7500 in most functions except text/character related ones like DrawChar, DrawString and DrawText, in which the Radeon 7500 was appx. 10-15% faster. There were a few cases (such as blends) where the scores were much higher with the GF4MX.
Create/Close 1000 Windows (OS X)
A reader sent a Carbon benchmark some time back via email called "Let1kWindowsBloom". Since there are so few benchmarks for OS X I used it with both cards. The time to create/close 1000 windows with each card is shown in the graph below.
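As a rough sketch of the measurement approach (this is a Python analogue, not the actual Carbon benchmark, and the `op` callable is a hypothetical stand-in for a window create/close), timing a fixed number of repeated operations looks like this:

```python
import time

def time_n_operations(op, n=1000):
    """Time n repetitions of op() and return total elapsed seconds.
    Analogous to how Let1kWindowsBloom times creating/closing 1000 windows."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return time.perf_counter() - start

# Stand-in operation; the real benchmark creates and closes a Carbon window.
elapsed = time_n_operations(lambda: [].append(0), n=1000)
print(f"1000 operations took {elapsed:.4f} seconds")
```

Lower totals are better here, since the benchmark reports time rather than a rate.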
RaveBench 1.11 Test Results
I used VillageTronic's RaveBench 1.11 Benchmark (only available on the CD supplied with their graphics cards to my knowledge). I ran this comparison at the highest resolution supported by RaveBench - 1024x768. The desktop (Monitors Control Panel) was set to 1280x1024, millions colors mode. (The desktop must be set higher than the RaveBench test window.)
By watching the window of each test, you can also see how well the card/drivers support the feature tested (as far as image quality). The only apparent issue I saw was that the GeForce4MX showed almost no actual transparency in that test. See the image below. (The blue water should be fully transparent, showing the ocean floor below.)
Although I've not tested every RAVE app, as an FYI the water in the Unreal Tournament Tutorial appeared fully transparent.
For more info on Ravebench's tests (and sample scenes) see my 1998 Illustrated Guide to Ravebench.
Walker 1.2 QD3D Tests
I measured the minimum and maximum framerates using Lightwork's Walker program (no longer available at their web site) using the highest polygon count scene, Corridor (49,002 polygons). Graphics mode was set to 1280x1024, millions colors. Scores are in Frames Per Second (higher is better), based on the min/max rates displayed during ten spins (rotations) of the scene. (In my opinion the most important figure is the lowest framerate during the test, as that indicates how the card handles the most demanding part of the scene.)
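To illustrate why the minimum matters, here's a small sketch (the frame times below are hypothetical, not Walker measurements) converting per-frame render times to min/max/average FPS:

```python
def fps_stats(frame_times_ms):
    """Convert per-frame render times (milliseconds) to FPS and report
    (min, max, avg). The minimum FPS reflects the most demanding part
    of the scene, which is why it's the figure I watch most."""
    fps = [1000.0 / t for t in frame_times_ms]
    return min(fps), max(fps), sum(fps) / len(fps)

# Hypothetical frame times from one spin of a scene (milliseconds).
times = [20.0, 25.0, 50.0, 40.0, 16.7]
lo, hi, avg = fps_stats(times)
print(f"min {lo:.1f} FPS, max {hi:.1f} FPS, avg {avg:.1f} FPS")
```

Note the single 50ms frame drags the minimum down to 20 FPS even though the average is nearly double that.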
I noted the GeForce4MX card exhibited very noticeable gaps at the edges (seams) of objects in some areas of the Corridor scene. (For example around the frames of windows.)
Cinebench 2000 Test Results
Maxon's Cinebench 2000 benchmark (available here) is a cross-platform 3D application simulation benchmark. Cinebench reports a score for Software shading (no OpenGL hardware acceleration), OpenGL shading (hardware accelerated), a single CPU raytracing score and a dual CPU raytracing score. (I show both single and dual CPU results just as an FYI. The test was run under OS 9.2.2 at the recommended 1024x768, millions colors graphics mode.)
As you can see from the scores - the graphics card made very little difference in this benchmark. For software that supports dual CPUs, you can see the benefits in the rendering score.
G4Timedemo Results
G4Timedemo (available here) is a 3D benchmark that uses Altivec instructions (aka 'Velocity Engine'). For this test, Preferences were set to Millions colors and Amazing quality. G4Timedemo was one of the first 3D benchmarks available that used Altivec. (Actually, I still don't know of another pure benchmark that does.) It draws a scene, moves a mirrored 3D blob around it, and reports an average framerate (Frames Per Second) score. I didn't bother to load down the page with a graph of the scores since they were so close (within run-to-run variation, I suspect):
- Radeon 7500 114.5 FPS
- GeForce4MX 113.6 FPS
ThroughPut 1.5 Tests:
Rene Trost's ThroughPut 1.5 benchmark was also run. Here's how Rene describes this 'pure' benchmark:
"ThroughPut is a little application, written in PPC assembler, that tests how much data your Mac can push through the PCI or AGP port to feed your graphics card with data."
See his web page for more details, but basically his benchmark tests the amount of data the system can deliver to the graphics card using several modes - CPU (32bit stream), FPU (64bit stream), Altivec (128bit stream) and CopyBits (256bit busmaster). [Altivec requires a G4 CPU of course.]
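The basic idea of a streaming-write test like this can be sketched as follows (a rough Python analogue for illustration only; the real benchmark writes to the card's VRAM in PPC assembler, not to a bytearray in system RAM, and the buffer/chunk sizes here are arbitrary assumptions):

```python
import time

def write_throughput_mb_s(buf_size=8 * 1024 * 1024, passes=4):
    """Time linear writes into a buffer and report MB/sec.
    Streams fixed-size chunks sequentially through the buffer,
    the same access pattern ThroughPut uses to measure how fast
    data can be pushed toward the frame buffer."""
    buf = bytearray(buf_size)
    chunk = b"\x00" * 65536            # 64KB linear write per step
    start = time.perf_counter()
    for _ in range(passes):
        for off in range(0, buf_size, len(chunk)):
            buf[off:off + len(chunk)] = chunk
    elapsed = time.perf_counter() - start
    return (buf_size * passes) / (1024 * 1024) / elapsed

print(f"{write_throughput_mb_s():.1f} MB/sec")
```

The reported rate depends entirely on the path the writes take, which is why the CPU/FPU/Altivec/CopyBits modes in the real benchmark produce such different numbers.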
Also see below for comments from an ATI engineer in reply to my questions about this benchmark from last year. (I had kept that reply but forgot to post it here originally.)
The CPU/FPU scores of the GeForce4MX card are the highest I've seen, by a factor of about 2 or 3. I can only guess the Nvidia card has fast writes (direct to VRAM) or write combining enabled; I'll try to find out more. (Some of the GeForce4MX's results are near the theoretical max 4x AGP bus rate of 1066.67 MB/sec.)
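The theoretical peak rates work out from the AGP base clock and bus width; a quick calculation:

```python
def agp_peak_mb_s(multiplier):
    """Theoretical peak AGP transfer rate in MB/sec.
    AGP's base clock is 66.67 MHz (200/3 MHz) on a 32-bit (4-byte)
    data path; 2x and 4x modes move 2 or 4 transfers per clock."""
    base_clock_mhz = 200.0 / 3.0   # 66.67 MHz
    bus_width_bytes = 4            # 32-bit AGP/PCI data path
    return base_clock_mhz * bus_width_bytes * multiplier

print(f"AGP 4x peak: {agp_peak_mb_s(4):.2f} MB/sec")  # matches the 1066.67 figure
```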
However these scores do not translate anywhere near linearly in actual applications or game tests as shown in those test results elsewhere in this article. (For more info on AGP, see this Intel AGP Technology page. Fast writes and software write combining are covered on this page.)
Here are comments from an ATI engineer in reply to questions I had on this benchmark about a year ago. (When the benchmark was first released, I had questions about the scores during a previous review of GeForce2MX vs Radeon cards here last March.)
Mike, here's the response from another engineer here:
Throughput does not really measure AGP performance because:
A) Only the AGP Master, the graphics card, can initiate AGP transfers and
this application does not do that
B) The test actually is measuring RAW slave cycle linear write performance
across the bus (PCI 66 (same for AGP 1x/2x/4x/etc) or PCI 33). This does
not represent real world performance since how often does the CPU start at
the top of the display and update the entire frame buffer? Almost never!
C) The CopyBits test can be considered valid as it is copying a huge amount
of data from the GWorld in system memory to the video card's frame buffer.
However, the likelihood of this type and size of CopyBits operation
occurring under normal application use is almost never.
D) The one thing that this test can determine is whether the graphics card
(ASIC) and the North Bridge support Fast Write and whether that feature is
enabled. This requires a system that is one of the new AGP 4x Apple
machines. However Fast Writes do not directly translate into superior 2D
graphics performance. It can only increase the speed of data being moved
across the bus when the data is at linear addresses, using consecutive
writes. CopyBits (SrcCopy) of large images, as this test does, will show a
significant performance increase but in a real world application, there are
many individual copies to account for the pitch of the image versus the
frame buffer and thus this advantage of Fast Writes isn't fully taken
advantage of.
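The pitch issue the engineer describes — many small per-row copies instead of one long linear write — can be sketched like this (the image dimensions and pitches below are hypothetical):

```python
def blit_with_pitch(src, src_pitch, dst, dst_pitch, row_bytes, rows):
    """Copy an image row by row when source and destination pitches
    (bytes per scanline) differ, as a real CopyBits to the frame
    buffer must. Each row is a separate short copy, so the bus never
    sees one long linear write -- which is why Fast Writes help less
    in real applications than in a single huge linear copy."""
    for r in range(rows):
        s = r * src_pitch
        d = r * dst_pitch
        dst[d:d + row_bytes] = src[s:s + row_bytes]

# Hypothetical 4-row, 8-byte-wide image; frame buffer pitch is 16 bytes.
src = bytearray(range(32))   # tightly packed source, pitch 8
dst = bytearray(64)          # wider destination, pitch 16
blit_with_pitch(src, 8, dst, 16, 8, 4)
print(dst[16:24])            # second image row lands at offset 16
```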
From what I have seen so far with these two cards, both offer good performance in games and applications. The GeForce4MX has a higher GPU clock rate than the Radeon 7500, but similar memory bandwidth. Both cards deliver good framerates in games, with the GeForce4MX a bit faster at 800x600 to 1280x1024 modes (and nearly identical at 640x480 and 1600x1200) in Quake3 with the current drivers. Both cards deliver similar performance in Unreal Tournament, which is usually CPU bound.
Despite the "4" in the GeForce4MX, it's clear this card is not as fast (3D/gaming wise) as the previous GeForce3, as shown in this GF4MX vs. GeForce3 comparison, although it is faster than the earlier GeForce2MX card. (The MX versions of Nvidia's cards are typically their lower end models.) And as I noted previously, both ATI and Nvidia have new high end cards (the 8500 and GeForce4 [non-MX] versions respectively). The ATI Mac 8500 AGP card is expected to be shipping later this month as noted in my recent interview with ATI. Although there are no retail Mac Nvidia cards available, there is speculation there may be non-MX GeForce4s available as BTO options in the future. [Update: On Tuesday, Feb 5th there was a press release about the GeForce4 Ti (Titanium) being offered as a BTO option. The card features support for dual LCDs and has 128MB of DDR video RAM. It will be offered separately for $399 later this spring according to the press release.]
DVD playback, from what I have seen so far, is noticeably more responsive with the Radeon 7500, using fewer CPU cycles and delivering a bit better image quality.
I've not yet tested the cards with dual monitors so I can't comment on that. The 64MB of video ram might be a plus for running two large/high resolution monitors, if there are no other problems (such as drivers, specific monitor support, etc.).
Although these cards delivered good performance, I'm hoping that the ATI 8500 AGP and non-MX GeForce4 cards will eventually appear as BTO options. (The retail ATI Mac Edition 8500 AGP card is expected to ship later this month; however, the retail cards don't have ADC ports, which would be important for owners of Apple LCD displays made after summer 2000.) There have been no retail Nvidia cards for the Mac to date, although as noted over the last year, many readers have flashed PC GeForce3 and GeForce2MX AGP cards for use with the Mac. (However, the latest GeForce3 Ti cards have not been successfully flashed with the Mac GeForce3 ROM that I know of.)
Related Links: For more info on graphics card performance, reviews and other related articles - see the main www.xlr8yourmac.com site's video cards page.