[helpc] Dual-Processor Platforms in Adobe Photoshop

  • From: "Shaka( Rudy)" <strub.rudy@xxxxxxxxx>
  • To: <helpc@xxxxxxxxxxxxx>
  • Date: Sun, 7 Apr 2002 16:23:59 +0200



Dual-Processor Platforms in Adobe Photoshop

Adobe Photoshop can be considered the most popular graphics application
today. It is also, we would argue, the most self-sufficient of all
professional software: many Photoshop users work only with this
application and never need to launch anything else. This allows us to
assume that checking the performance of different hardware/software
configurations in Photoshop will be of interest to many people. It is
also one of the reasons why frequently used Photoshop operations are
included in all the complex benchmark suites, such as Winstone, Winbench
and SYSmark, and most testers publish the results obtained in these
particular benchmarks. Is this good or bad? Of course, using complex
benchmarks is pretty convenient for the people running the tests,
because they don't need to interfere with the testing process: all a
tester has to do is assemble the system and click the "Start" button.
It is also perfectly correct to compare different hardware pieces with
one another using the results of the same benchmarks, and to build up
databases of test results for different components, which can be very
helpful in the future. However, although these benchmark suites have
some indisputable advantages, they are not free from unpleasant
drawbacks either. In fact, the drawbacks we are going to touch upon in
this article are just the reverse side of those advantages.
First of all, none of these testing packages is dedicated to Photoshop
alone, and hence none of them is optimized for it. As a result, the set
of Photoshop functions and filters they use is far from complete and in
many cases simply not optimal. Moreover, the developers of such packages
do their best to make sure their product runs smoothly on very different
hardware/software configurations, and the difference between these
configurations can be incredibly significant. Since the packages have to
work fine on any hardware platform, even a not very up-to-date one, the
benchmarks they include are averaged and reduced to a common "hardware"
denominator, for instance a memory size of, say, 64MB. As a result, the
initial file processed by the Photoshop part of the package gets
smaller, and since the file is smaller, less time is needed to carry out
all the operations. When such tests are run on today's fast systems, the
influence of measurement error becomes unacceptably great.

For example, increasing an image from 1MB to 25MB takes a system built
around a 1GHz+ CPU only 1 (!) second. However, for that operation to be
carried out correctly much more memory is required: 512MB. And as we
have just said, the complex benchmarks are tuned for a much smaller
memory size, so a similar operation is more likely to turn a 100KB file
into a 1MB one instead of a 1MB file into a 25MB one. On today's
platforms this will hardly take more than one tenth of a second. Add the
measurement error to this value and you will understand why a Pentium
III 1GHz can easily "beat" a much more powerful Athlon 1.33GHz in the
Photoshop part of such a benchmark package. This "phenomenon" is
explained not by some secret optimization, but by a simple mistake
caused by the measurement error and the incomparably short time the
operation takes.

Since we came to speak about measurement errors, we have to stress that
Photoshop is a very unstable benchmark: it can show very different
results for one and the same hardware configuration depending on the
operations carried out before the measurement. In other words, while it
doesn't take much effort to obtain correct results in 3ds max, for
instance, testing in Photoshop is far from a trivial task. You need to
restart the system after every test and run the same tests 2-3 times in
order to get a more or less accurate average. Otherwise you can see up
to a 20% performance difference when testing with larger files. As for
smaller files, testing modern platforms with them makes no sense at all,
because the absolute error is several times greater than the time spent
even on the most complicated operations.
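
To put rough numbers on this (the figures below are our own
illustration, not data from any benchmark): assume the run-to-run timing
noise is on the order of 0.2 seconds. For a sub-second operation on a
small file this noise is comparable to the quantity being measured,
while for a multi-second operation on a large file it practically
disappears. A few lines of Python make the point:

# Illustration only: an assumed timing error compared against short and
# long operations; none of these numbers come from the actual tests.
noise = 0.2  # assumed run-to-run measurement error, in seconds

for name, duration in [("small test file", 0.1), ("large test file", 10.0)]:
    relative_error = noise / duration * 100
    print(f"{name}: {duration:.1f}s operation, up to ~{relative_error:.0f}% error")

# small test file: 0.1s operation, up to ~200% error
# large test file: 10.0s operation, up to ~2% error
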
Well, we have tried to explain why you shouldn't take the Photoshop
results of any complex test package seriously. However, besides test
packages there are also the so-called scripts, i.e. recorded sequences
of Photoshop actions. These scripts have a great advantage over the test
packages: they are optimized for Photoshop, since they are intended only
for testing in this application, and you can easily find scripts with an
initial image of any size you like. Some more advanced scripts even
repeat the operations several times.

However, every such script runs without a single system restart, because
you can't add a restart command to a Photoshop script. And without
restarting the system, its memory very soon gets filled with leftovers
from the previous actions, which definitely affects the results
obtained. You can check this out very easily. Create a file of the
maximum size that can be handled on your system without touching the
HDD, apply the Gaussian Blur filter to it (with any settings), then
Undo, then Gaussian Blur again (with the same settings), then Undo
again, and so on, 5 times for instance. You will notice that the second
cycle takes less time than the first one, and the third less than the
second. It means that Photoshop partially caches the results of the
user's actions. That is why it makes no sense to measure the same
operation repeatedly without rebooting the system first, and this is
never done in Photoshop scripts.

But that is not the end of it. While this kind of caching has a positive
effect on the time required to repeat the same action, its effect on a
different action is just the opposite. First of all, the cache occupies
memory, which may then turn out to be insufficient for the new
operation; secondly, sorting out and updating the already cached data
also takes time. With smaller files this isn't very noticeable, but if
you keep working with medium-sized files for a while, your system may
fall into a real stupor, when a considerable part of the memory in use
suddenly turns out to be virtual, i.e. much slower than RAM. In other
words, there is more than enough RAM for a single operation on a
medium-sized file, but when it comes to a series of operations, the
memory gets stuffed with cached data and the work shifts to the swap
file. That is why many professionals working with large files run
special memory-cleaning utilities every now and then.

So you see that testing with scripts has little practical value either.
You can download any such script, run it several times and compare the
results: the difference will be great, believe us. Again we would like
to stress that the problem of getting correct measurements is especially
acute when testing modern systems.
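
For the curious, here is a minimal sketch of the measurement loop just
described. It is plain Python with dummy stand-ins for the Photoshop
steps (Photoshop itself offers no way to script a reboot, and the
stand-in functions below are hypothetical placeholders, not Photoshop
calls), but it shows the pattern: time each Gaussian Blur / Undo cycle
separately and watch the later cycles come out faster once caching kicks
in.

import time

# Hypothetical stand-ins for the real Photoshop operations; in practice
# the timing was done with Photoshop's own timer or by hand.
def apply_gaussian_blur():
    time.sleep(0.5)   # pretend the filter takes some time

def undo():
    time.sleep(0.1)   # pretend Undo takes some time

cycle_times = []
for cycle in range(5):
    start = time.perf_counter()
    apply_gaussian_blur()   # same settings every cycle
    undo()                  # roll the image back
    elapsed = time.perf_counter() - start
    cycle_times.append(elapsed)
    print(f"cycle {cycle + 1}: {elapsed:.2f}s")

# On a real machine the later cycles come out shorter than the first one,
# which is exactly the caching effect that skews script-based testing.

With dummy sleeps the five cycles are of course identical; the point is
only the shape of the measurement, one timing per cycle instead of one
total for the whole script.
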
So, in our case we decided not to use any scripts or benchmark packages.
We measured the time required for each operation with the help of a
special Photoshop function, then rebooted the system and ran the same
test two more times (also rebooting after each run). The results
obtained were averaged and entered into the database.

Testing Methods

The speed of the graphics card has practically no effect on your work in
Photoshop, so our task came down to the following question: which
mainboard + CPU + memory combination serves Photoshop best? To answer
it, we took the Dune.tif file from the Photoshop 6.0 distribution.
 <http://www.xbitlabs.com/cpu/photoshop-platform/scr1.jpg> 
The initial size of this picture is 600x600 pixels and it is saved in
RGB format.

Since the time each benchmark spends on a 1MB file is too short (about
1 sec), we increased the file size to 3000x3000 pixels and hence to
17MB. The time was measured with the help of a special Photoshop "timer"
function. Each operation was run three times and the average result was
taken; after every operation we restarted the system. In the following
chapters you will find the description of each benchmark together with
the analysis of the results, which should give you a clear idea of which
platforms are best suited for this or that type of Photoshop operation.
Unless a special comment is made, the filters were applied with the
settings you can see in the screenshots.
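
As a sketch of the averaging procedure (our own illustration of the
bookkeeping, not the tool actually used): three timings per operation,
each taken after a fresh reboot, are averaged into one result, and a run
is only trusted if the spread between the three stays reasonably small.

# Illustration of the averaging used for the results below: three
# timings per operation, each taken after a reboot, averaged into one figure.
def average_result(timings, max_spread=0.2):
    """timings: the three measured durations, in seconds, for one operation."""
    assert len(timings) == 3
    spread = (max(timings) - min(timings)) / min(timings)
    if spread > max_spread:
        # more than ~20% between the runs: the measurement is suspect, redo it
        print(f"warning: spread of {spread:.0%}, re-run this test")
    return sum(timings) / len(timings)

# Example with made-up Gaussian Blur timings:
print(f"{average_result([2.3, 2.1, 2.2]):.1f}")   # -> 2.2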

Tested Hardware Configurations

For the Socket A platform I took the Tyan TigerMP mainboard, one of the
fastest widely available solutions today. It is based on the AMD-760MP
chipset and supports AMD Athlon/Athlon MP/Duron processors. This
mainboard doesn't carry any additional integrated equipment onboard. It
provides a regular AGP 4x slot, two 32bit and four 64bit 33MHz PCI
slots, and its four DIMM slots support up to 3GB of PC2100/PC1600
Registered DDR SDRAM. TigerMP doesn't require any special power supply
units.
In this article I will compare the performance of this dual-processor
Socket A platform with a dual-processor Socket 370 platform built on the
Iwill DVD266U-RN mainboard with the VIA Apollo Pro266T chipset. I tested
these mainboards with Athlon MP 1200MHz and Pentium III 1133MHz (512KB
L2) CPUs. I deliberately chose not to take the fastest processors,
because in a month or so there will be totally different processors at
the top of the list anyway. Moreover, if we compared the two fastest
processors working at different clock frequencies, it would be pretty
hard to draw any conclusions about the other processors from the same
families.

In other words, I checked the abilities and potential of the most
contemporary Intel and AMD processor cores working in dual-processor
systems, Palomino (Athlon) and Tualatin (Pentium III-S), at the closest
possible core clock frequencies. The obtained results can easily be
extrapolated to the other CPUs of these families.
Platform 1: 
*       2 AMD Athlon MP 1200MHz CPUs; 
*       Tyan TigerMP mainboard; 
*       1024MB PC2100 DDR SDRAM; 
*       VisionTek Xtasy 6964 (NVIDIA GeForce3 Ti500) graphics card; 
*       IBM DTLA 15GB 7,200rpm HDD. 
Platform 2: 
*       2 Intel Pentium III 1133MHz CPUs with 512KB L2 cache; 
*       Iwill DVD266U-RN mainboard; 
*       1024MB PC2100 DDR SDRAM; 
*       VisionTek Xtasy 6964 (NVIDIA GeForce3 Ti500) graphics card; 
*       IBM DTLA 15GB 7,200rpm HDD. 
For the sake of comparison, I also added the results obtained on a
typical Pentium 4 platform. Here is its configuration: 
*       Intel Pentium 4 1700MHz CPU; 
*       ASUS P4T mainboard; 
*       1024MB PC2100 DDR SDRAM; 
*       VisionTek Xtasy 6964 (NVIDIA GeForce3 Ti500) graphics card; 
*       IBM DTLA 15GB 7,200rpm HDD. 
We used the following software: 
*       Windows 2000 SP2; 
*       Adobe Photoshop 6.01. 

Benchmark 1: Convert to GIF

GIF is one of the most popular formats for graphics files. The initial
17MB TIFF file was converted to GIF with the following settings:




Benchmark 2: Gaussian Blur


The filters from the Blur group serve to smooth very contrasting image
segments and are really helpful for curing grainy images. The Gaussian
Blur filter allows setting the radius of its action, within which all
the pixels are averaged. Gaussian Blur is one of the most frequently
used Photoshop filters. In order to load the processor, mainboard and
memory heavily enough, we set a pretty large radius of 75 pixels, which
is not very common for everyday tasks.
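
If you want to get a feel for how heavy this workload is on your own
machine, a rough approximation (not Photoshop's implementation, just a
comparable load built with the Pillow imaging library for Python) can be
put together like this:

# Rough stand-in for the benchmark workload: a 75-pixel Gaussian blur on
# a 3000x3000 RGB image, timed. Requires the third-party Pillow package.
import time
from PIL import Image, ImageFilter

img = Image.new("RGB", (3000, 3000), color=(128, 128, 128))  # synthetic image

start = time.perf_counter()
blurred = img.filter(ImageFilter.GaussianBlur(radius=75))
print(f"Gaussian blur, radius 75: {time.perf_counter() - start:.2f}s")

Note that Pillow's radius is not defined exactly the same way as
Photoshop's, so the absolute numbers won't match the table at the end of
the article; the sketch only shows the kind of load involved.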


Benchmark 3: Smart Blur

This is one more kind of "smoothing" filter. It allows changing far more
parameters than Gaussian Blur does:



All the systems performed almost equally fast. 

Benchmark 4: Diffuse Glow

Again the dual-processor systems showed similar results, which were
about 10% better than those of the single-processor system.




Benchmark 5: Glass

This filter is just ideal for modeling a glass wall between the image
and the viewer. The settings are really diverse, offering the user a
rich choice of different effects.



The result is similar to that of the previous benchmark; however, this
time the dual-Pentium III system proved faster. 

Benchmark 6: Crystallize

Well, the name of this filter speaks for itself. As for the available
settings, there is only one: cell size.



In this benchmark the performance depends heavily on the memory bus
bandwidth. As a result, the single Pentium 4 system managed to get very
close to its dual-processor competitors. 

Benchmark 7: Lens Flare

Here are some settings provided for this filter:



This filter makes active use of the FPU and loads the memory bus quite
heavily. 

Benchmark 8: Lighting Effects

This filter allows creating a great variety of lighting effects; its
many adjustable settings account for this diversity.



This filter carries out all its calculations in the FPU. As is known,
the FPU of the Pentium 4 processor is weaker than that of the Pentium
III and far weaker than that of the Athlon, while the memory bus
bandwidth, on the contrary, doesn't matter that much here. That is why
the single Pentium 4 platform proved so slow. 

Benchmark 9: Sharpen Edges

This filter sharpens the borders and edges where the colors change
radically from one to another. It doesn't have any settings at all.


Benchmark 10: Unsharp Mask

This filter is aimed at sharpening the image and increasing its
contrast; in other words, it does just the opposite of what Gaussian
Blur does.




Benchmark 11: Chrome

As the name suggests, this filter "chromes" the entire image or a part
of it. The settings available for this filter are not numerous:




Benchmark 12: Bas Relief

This filter applies a relief effect to the selected image.




Benchmark 13: Water Paper





Benchmark 14: Extrude

Extrude filter allows getting really funny effects. Something like this:
 <http://www.xbitlabs.com/cpu/photoshop-platform/scr15.jpg> 
In this benchmark the settings looked as follows:




Benchmark 15: Find Edges

The Find Edges filter makes the borders between different colors a bit
sharper and more evident.
 <http://www.xbitlabs.com/cpu/photoshop-platform/scr17.jpg> 


It doesn't have any settings that could be changed manually. 

Benchmark 16: Rotate Canvas

We carried out two rotations: by 1° and by 99° clockwise.

Here the FPU is loaded quite heavily, which is why the results shown by
the Pentium 4 are so low. 

Benchmark 17: Convert to Other Color Systems

As you remember, the initial image was in RGB format. We converted it to
Grayscale, CMYK and Lab Color.


Conclusion 


 
Benchmark                               2 x Athlon MP   2 x Pentium III   Pentium 4
                                        1200MHz         1133MHz           1700MHz
------------------------------------------------------------------------------------
Benchmark 1: Convert to GIF             1.1             1.4               1.6
Benchmark 2: Gaussian Blur              2.2             2.4               4.8
Benchmark 3: Smart Blur                 3.8             3.8               3.6
Benchmark 4: Diffuse Glow               7.4             7.1               8.2
Benchmark 5: Glass                      8.1             7.1               8.6
Benchmark 6: Crystallize                15.4            16.3              17.5
Benchmark 7: Lens Flare                 2.3             2.6               3.8
Benchmark 8: Lighting Effects           2.6             2.4               4.6
Benchmark 9: Sharpen Edges              1.2             1.7               2.8
Benchmark 10: Unsharp Mask              1.6             1.6               1.6
Benchmark 11: Chrome                    11.4            9.9               9.3
Benchmark 12: Bas Relief                6.8             5.8               7.8
Benchmark 13: Water Paper               18.4            18.3              20.3
Benchmark 14: Extrude                   14.4            18.1              20.1
Benchmark 15: Find Edges                1.2             1.6               2.4
Benchmark 16: Rotate Canvas (1°)        1.6             1.6               4.1
Benchmark 16: Rotate Canvas (99°)       1.8             1.9               4.3
Benchmark 17: Convert to Grayscale      1.4             1.6               2.1
Benchmark 17: Convert to CMYK           7.4             8.1               10.5
Benchmark 17: Convert to Lab Color      4.1             4.5               6.1

All results are operation times in seconds; lower is better.

The tests of the dual-processor systems showed that neither the AMD
platform nor the Intel platform managed to get any real advantage: the
powerful FPU of the Athlon processors is opposed by the twice as large
L2 cache of the Pentium III processors. In some tests the AMD platform
gets a little bit ahead of its rival, in others Intel's platform
performs slightly faster than the AMD one. That is why we can say that
both platforms are equally fast provided the processor frequencies are
close. 
At present the fastest Athlon processor available in the market is the
Athlon 2000+ (1667MHz), based on the same core as the Athlon MP we
tested this time, while Pentium III processors with 512KB L2 cache
currently top out at 1.4GHz. When I was working on this article, the top
models from Intel and AMD cost about the same, while the core clock
frequency of the AMD Athlon CPU was about 20% higher than that of the
Intel solution. That is why we would call the AMD platform the more
attractive choice for work in Adobe Photoshop so far. 
 
 
 
--->>>
Shaka( Rudy)
HelPC list owner
shaka.rudy@xxxxxxxxx
 
 
