
What *is* a pixel, anyway?

I was talking to some engineers about pixels and I realized that, while these were fine programmers, they didn't really understand pixels.  As with many things, a programmer can choose to be oblivious to all the details and be just fine, but something as fundamental as pixels should be understood.

Most people think of pixels as dots of light on a screen or dots of ink on a piece of paper.

I think this is too limited.  I think of a pixel as a measurement, and without dimensions a measurement is pretty much useless.

To me, a pixel is a unit of area that includes an indication of color.  Color is its own sticky problem, so I'm not going to address it here.

To leave out the dimensions of a pixel is akin to leaving off "acres," "square feet," or "hectares" when talking about the size of a plot of land.  You'd never purchase a property of size 1.8 without knowing 1.8 of what.
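
To make that concrete, here's a minimal C# sketch (the numbers are made up for illustration) of why a pixel count alone tells you nothing about physical size:

// The same pixel count yields very different physical sizes
// depending on the resolution attached to it.
int widthPixels = 600;

double inchesAt300 = widthPixels / 300.0;  // 2.0 inches on a 300 dpi scan
double inchesAt72  = widthPixels / 72.0;   // about 8.33 inches on a 72 dpi screen

A 600-pixel-wide image is a thumbnail or a full page depending entirely on the resolution that travels with it.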

Knowing the dimensions of pixels is especially important when you're working with anything remotely WYSIWYG, since it's the only way to actually guarantee the "what you get" part.  The ability to do that predictably in current commercial markets really stems from the Macintosh.  I won't say "they got it right," because they didn't.  What they did do was create a machine that for many years had a consistent pixel size: every pixel on the screen was 1/72 of an inch square.  This wasn't the case with the ill-fated Lisa, which had rectangular pixels.

In addition to being able to say what the dimensions of a pixel were, they sold a dot matrix printer with a resolution of 144 dpi, exactly twice the screen's, which made it very easy to print graphics at screen size.  The real stroke of genius was the Apple LaserWriter, which, while it had an output of 300 dpi, was driven by Adobe PostScript, a resolution-independent printing language.

Apple succeeded with WYSIWYG by only making equipment that conformed to their pixel dimensions.  There was an ugly wrinkle, though.  It has to do with Apple's definition of the BitMap structure (in Pascal):
TYPE BitMap =
    RECORD
        baseAddr: Ptr;       {pointer to bit image}
        rowBytes: Integer;   {row width}
        bounds: Rect;        {boundary rectangle}
    END;
This defines a BitMap, but it doesn't define the dimensions of the individual pixels in any kind of real-world units.  Initially, this was not an issue, but when Apple started putting video support on peripheral cards, it became possible to use third-party video cards and third-party display devices.  At that point, any software that assumed 1/72 of an inch was no longer truly WYSIWYG.  Fortunately, Apple had to revamp its BitMap model in order to handle color, so at the same time they added new elements to handle resolution:

TYPE PixMap =
    RECORD
        baseAddr: Ptr;         {pixel image}
        rowBytes: Integer;     {flags, and row width}
        bounds: Rect;          {boundary rectangle}
        pmVersion: Integer;    {PixMap record version number}
        packType: Integer;     {packing format}
        packSize: LongInt;     {size of data in packed state}
        hRes: Fixed;           {horizontal resolution}
        vRes: Fixed;           {vertical resolution}
        pixelType: Integer;    {format of pixel image}
        pixelSize: Integer;    {physical bits per pixel}
        cmpCount: Integer;     {logical components per pixel}
        cmpSize: Integer;      {logical bits per component}
        planeBytes: LongInt;   {offset to next plane}
        pmTable: CTabHandle;   {handle to the ColorTable record for this image}
        pmReserved: LongInt;   {reserved for future expansion}
    END;
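
The hRes and vRes fields are stored as Fixed, QuickDraw's 16.16 fixed-point type, with resolution expressed in pixels per inch.  As a minimal C# sketch of what that buys you (the method names are mine, and widthPixels stands in for bounds.right - bounds.left):

// Convert a classic Mac OS Fixed (16.16 fixed point) to a double.
static double FixedToDouble(int fixedValue)
{
    return fixedValue / 65536.0;
}

// Given a PixMap's pixel width and its hRes field, recover the
// physical width of the image in inches.
static double PhysicalWidthInches(int widthPixels, int hResFixed)
{
    double dpi = FixedToDouble(hResFixed);  // pixels per inch
    return widthPixels / dpi;
}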
Now they had resolution, and software could be written to take advantage of that knowledge.  Unfortunately, few software writers bothered to do so.  Consider, when you look at the way images are handled in web browsers, how well resolution is respected.  The answer is typically "very badly."

In .NET, fortunately, resolution is pervasive.  The Image and Bitmap objects both have resolutions.  In addition, the display device also has a resolution, so software can adjust to fit, as long as the author is conscientious about it.
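
As an illustration, here is a minimal System.Drawing sketch of that kind of adjustment: use the image's own resolution to recover its physical size, then map that onto the display's resolution.  The method name and layout are mine, not a library API:

using System.Drawing;

static void DrawAtPhysicalSize(Graphics g, Image image, Point topLeft)
{
    // The image's physical size in inches, from its own resolution.
    float widthInches = image.Width / image.HorizontalResolution;
    float heightInches = image.Height / image.VerticalResolution;

    // Convert inches back to device pixels using the display's resolution.
    int deviceWidth = (int)(widthInches * g.DpiX);
    int deviceHeight = (int)(heightInches * g.DpiY);

    g.DrawImage(image, new Rectangle(topLeft.X, topLeft.Y, deviceWidth, deviceHeight));
}

An image scanned at 300 dpi then draws at the same physical size whether the display reports 72 dpi or 120 dpi.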

An AtalaImage also includes a resolution definition.  We differ from the standard .NET objects in that we allow you to specify the actual unit in terms of either metric or English units.  I did some experiments in creating more flexible unit types so you could potentially describe a pixel as being .1 attolightyears by 1.26 furlongs, if that's what you'd like.  I let this go as frivolous, esoteric, and of little real-world use.

Where these measurements become important is in doing things like performing optical character recognition on an image and being able to accurately represent the fonts in another context (say, PDF).  Pixels alone won't do it.
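
For example, suppose OCR finds a glyph 40 pixels tall on a 300 dpi scan (both numbers hypothetical).  The image's resolution is what lets you recover a font size in points:

// A glyph measured at 40 pixels tall on a 300 dpi scan.
double dpi = 300.0;
double glyphHeightPixels = 40.0;

double inches = glyphHeightPixels / dpi;  // about 0.133 inches
double points = inches * 72.0;            // 9.6 points (1 point = 1/72 inch)

Without the 300 dpi, those 40 pixels could be a footnote or a headline.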

As a last notion about pixels, consider coordinate systems.  Computer graphics coordinate systems usually have little to do with mathematical conventions, which is a shame.  In most systems, the origin (0,0) is in the upper left of an image, X increases to the right, and Y increases going down.  The relocated origin and the flipped orientation of the Y axis add a translation step that one must at least be aware of when bringing mathematical formulae into code.
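
The translation itself is small, but easy to get wrong.  A sketch, assuming integer pixel indices (so the bottom row of the image is imageHeight - 1):

// Flip a point from math conventions (origin at lower left, Y up)
// to screen conventions (origin at upper left, Y down).
static (int X, int Y) MathToScreen(int x, int y, int imageHeight)
{
    return (x, imageHeight - 1 - y);
}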

The real question to consider is: on a uniform grid with pixels of equal dimensions, where does a pixel start and where does it end?  If we were mathematical about it, we would say that a pixel is centered on the coordinates that describe it.  This is a fantastic, predictable definition.  Sadly, hardly anyone uses it.  Instead, pixels are usually thought of as covering the ranges [x, x+1) and [y, y+1).  In other words, they start at x and go up to, but not including, x+1, and start at y and go up to, but not including, y+1.  In mathematical terms, these are both half-open ranges.  However, you will run into code that uses closed ranges, so that the endpoints are included.  This might be important in trying to stitch together polygons on a display so as to avoid holes due to floating point or rounding errors.
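
In code, the difference between the conventions comes down to how a continuous coordinate maps to a pixel index.  A sketch of both (the names are mine):

using System;

// Half-open convention: pixel i covers [i, i + 1), so flooring
// a coordinate yields its pixel index.
static int HalfOpenPixel(double coord) => (int)Math.Floor(coord);

// Pixel-centered convention: pixel i covers [i - 0.5, i + 0.5),
// so rounding half up yields its pixel index.
static int CenteredPixel(double coord) => (int)Math.Floor(coord + 0.5);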

The most important thing is to be consistent and to be open about what your methodology is.

Dimension your pixels.  Your users will thank you.
Published Thursday, October 05, 2006 10:34 AM by Steve Hawley
