Friday, August 19, 2011

SCANNERS


Digital imaging has come of age. Equipment that was once reserved for the wealthiest bureaux is now commonplace on the desktop. The powerful PCs required to manipulate digital images are now considered entry level, so it comes as no surprise to learn that scanners, the devices used to get images into a PC, are one of the fastest growing markets today.

At its most basic level, a scanner is just another input device, much like a keyboard or mouse, except that it takes its input in graphical form. These images could be photographs for retouching, correction or use in DTP. They could be hand-drawn logos required for document letterheads. They could even be pages of text which suitable software could read and save as an editable text file.

The list of scanner applications is almost endless, and has resulted in products evolving to meet specialist requirements:

Ø      high-end drum scanners, capable of scanning both reflective art and transparencies, from 35mm slides to 16in x 20in material, at high (10,000dpi+) resolutions

Ø      compact document scanners, designed exclusively for OCR and document management

Ø      dedicated photo scanners, which work by moving a photo over a stationary light source

Ø      slide/transparency scanners, which work by passing light through an image rather than reflecting light off it

Ø      handheld scanners, for the budget end of the market or for those with little desk space.

However, flatbed scanners are the most versatile and popular format. These are capable of capturing colour pictures, documents, pages from books and magazines and, with the right attachments, even transparent photographic film.

Operation

On the simplest level, a scanner is a device that converts light (what we see when we look at something) into 0s and 1s (a computer-readable format). In other words, scanners convert analogue data into digital data.


All scanners work on the same principle of reflectance or transmission. The image is placed before the carriage, consisting of a light source and sensor; in the case of a digital camera, the light source could be the sun or artificial lights. When desktop scanners were first introduced, many manufacturers used fluorescent bulbs as light sources. While good enough for many purposes, fluorescent bulbs have two distinct weaknesses: they rarely emit consistent white light for long, and while they're on they emit heat which can distort the other optical components. For these reasons, most manufacturers have moved to 'cold-cathode' bulbs. These differ from standard fluorescent bulbs in that they have no filament. They therefore operate at much lower temperatures and, as a consequence, are more reliable. Standard fluorescent bulbs are now found primarily on low-cost units and older models.

By late 2000, Xenon bulbs had emerged as an alternative light source. Xenon produces a very stable, full-spectrum light source that's both long lasting and quick to initiate. However, xenon light sources do consume power at a higher rate than cold cathode tubes.


To direct light from the bulb to the sensors that read light values, CCD scanners use prisms, lenses, and other optical components. Like eyeglasses and magnifying glasses, these items can vary quite a bit in quality. A high-quality scanner will use high-quality glass optics that are color-corrected and coated for minimum diffusion. Lower-end models will typically skimp in this area, using plastic components to reduce costs.

The amount of light reflected by or transmitted through the image and picked up by the sensor is then converted to a voltage proportional to the light intensity - the brighter the part of the image, the more light is reflected or transmitted, resulting in a higher voltage. This analogue-to-digital conversion (ADC) is a sensitive process, and one that is susceptible to electrical interference and noise in the system. In order to protect against image degradation, the best scanners on the market today use an electrically isolated analogue-to-digital converter that processes data away from the main circuitry of the scanner. However, this adds to the cost of manufacture, so many low-end models use analogue-to-digital converters that are built into the scanner's primary circuit board.
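
To make the quantisation step concrete, here is a minimal sketch in Python - an illustration only, not any scanner's actual firmware - assuming the sensor voltage has already been scaled to the converter's reference range:

    # Toy ADC: map an analogue voltage (0..reference) to an integer code.
    # A 10-bit converter yields codes from 0 to 1023; brighter areas of the
    # original reflect more light and so produce higher codes.
    def quantise(voltage, reference_voltage=3.3, bits=10):
        levels = 2 ** bits
        clamped = max(0.0, min(voltage, reference_voltage))
        return min(levels - 1, int(clamped / reference_voltage * levels))

    print(quantise(1.65))   # a mid-range voltage -> 512 (out of 0-1023)

Real converters also have to contend with the noise described above, which is why the extra bits of 30- and 36-bit scanners (discussed later) are useful headroom.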

The sensor component itself is implemented using one of three different types of technology:

Ø          PMT (Photo Multiplier Tube), a technology inherited from the drum scanners of yesteryear

Ø          CCD (Charge-Coupled Device), the type of sensor used in most desktop scanners

Ø          CIS (Contact Image Sensor), a newer technology which integrates the scanning functions into fewer components, allowing scanners to be more compact in size.

CCD

CCD technology is responsible for having made scanning a desktop application and has been in use for a number of years in devices such as fax machines and digital cameras. A charge-coupled device is a solid-state electronic device that converts light into an electric charge. A desktop scanner sensor typically has thousands of CCD elements arranged in a long thin line. The scanner shines light through red, green and blue filters and the reflected light is directed onto the CCD array via a system of mirrors and lenses. The CCD acts as a photometer, converting the measured reflectance into an analogue voltage, which can then be sampled and changed to discrete digital values by an analogue-to-digital converter (ADC).



CIS

CIS is a relatively new sensor technology which began to appear at the budget end of the flatbed scanner market in the late 1990s. CIS scanners employ dense banks of red, green and blue LEDs to produce white light and replace the mirrors and lenses of a CCD scanner with a single row of sensors placed extremely close to the source image. The result is a scanner that is thinner and lighter, more energy efficient and cheaper to manufacture than a traditional CCD-based device - but one that is not, as yet, capable of producing as good results.

The technology employed by its sensor mechanism is not, however, the only factor that governs a scanner's level of performance. The following are equally important aspects of a given unit's specification:

Ø          resolution

Ø          bit depth

Ø          dynamic range.

Resolution


Resolution relates to the fineness of detail that a scanner can achieve, and is usually measured in dots per inch (dpi). The more dots per inch a scanner can resolve, the more detail the resulting image will have. The typical resolution of an inexpensive desktop scanner in the late 1990s was 300 x 300dpi.

A typical flatbed scanner has a CCD element for each pixel, so for a desktop scanner claiming a horizontal optical resolution of 600dpi (dots per inch) - alternatively referred to as 600ppi (pixels per inch) - and a maximum document width of 8.5in there’ll be an array of 5,100 CCD elements in what’s known as the scan head.
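
The arithmetic behind that figure is simple enough to express in a couple of lines (a purely illustrative sketch, not part of any scanner driver):

    # Number of CCD elements needed in the scan head for a given optical
    # resolution and bed width - e.g. 600dpi x 8.5in = 5,100 elements.
    def ccd_elements(dpi, bed_width_inches):
        return int(dpi * bed_width_inches)

    print(ccd_elements(600, 8.5))   # 5100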

The scan head is mounted on a transport, which is moved across the target object. Although the process may appear to be a continuous movement, the head actually moves a fraction of an inch at a time, taking a reading between each movement. In the case of a flatbed scanner, the head is driven by a stepper motor - a device that turns a predefined amount, and no more, each time it is fed an electrical pulse.

The number of physical elements in a CCD array determines the x-direction-sampling rate, and the number of stops per inch determines the y-direction-sampling rate. Although these are conveniently referred to as a scanner’s ‘resolution’, the term is not strictly accurate. The resolution is the scanner’s ability to determine detail in an object and is defined by the quality of electronics, optics, filters and motor control, as well as the sampling rate.

The actual scan head, though capable of reading a raster line 8.5in wide, will be much smaller than that, typically around 4in wide. The reflected light is presented to the scan head through a lens, and the quality of the optics can have a greater effect on the resolution of the scan than the sampling rate. High-resolution optics in a 400dpi scanner are likely to produce better results than a 600dpi device with poor optics.

By late 1998 the physical limit as to how many CCD elements could be placed side by side in one inch stood at 600. It is, however, possible for the apparent resolution to be increased using a technique known as interpolation, which under software or hardware control guesses intermediate values and inserts them between the real ones. Some scanners do this much more effectively than others.
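
Interpolation algorithms vary by manufacturer; as a minimal sketch of the general idea, the simplest scheme - linear interpolation, which guesses each new value as the average of its two real neighbours - might look like this in Python:

    # Double the apparent resolution of one scan line by inserting an
    # interpolated (averaged) value between each pair of real samples.
    def interpolate_line(samples):
        result = []
        for current, following in zip(samples, samples[1:]):
            result.append(current)
            result.append((current + following) // 2)   # the 'guessed' value
        result.append(samples[-1])
        return result

    print(interpolate_line([10, 20, 60]))   # [10, 15, 20, 40, 60]

The guessed values add no real detail, which is why interpolated resolution figures flatter a scanner compared with its optical resolution.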

Colour scanners


Colour scanners have three light sources, one for each of the red, green and blue primaries. Some scanning heads contain a single fluorescent tube with three filtered CCDs, while others have three coloured tubes and a single CCD. The former produce the entire colour image in a single pass, the target being illuminated by the three rapidly changing lights, while the latter have to go back and forth three times.

Single-pass scanners have problems with the stability of light levels when they’re being turned on and off rapidly. Older three-pass scanners used to suffer from registration problems along with being slow. More modern three-pass units are much improved and able to match some single-passers for speed. However, by the late 1990s most colour scanners were single-pass devices.

These scanners use one of two methods for reading light values: beam splitter or coated CCDs. When a beam splitter is used, light passes through a prism and separates into the three primary scanning colours, which are each read by a different CCD. This is generally considered the best way to process reflected light, but to bring down costs many manufacturers use three CCDs, each of which is coated with a film so that it reads only one of the primary scanning colours from an unsplit beam. While technically not as accurate, this second method usually produces results that are difficult to distinguish from those of a scanner with a beam splitter.

Bit-depth


When a scanner converts something into digital form, it looks at the image pixel by pixel and records what it sees. That part of the process is simple enough, but different scanners record different amounts of information about each pixel. How much information a given scanner records is measured by its bit-depth.

The simplest kind of scanner only records black and white, and is sometimes known as a 1-bit scanner because each bit can only express two values, on and off. In order to see the many tones in between black and white, a scanner needs to be at least 4-bit (for up to 16 tones) or 8-bit (for up to 256 tones). The higher the scanner's bit-depth, the more accurately it can describe what it sees when it looks at a given pixel. This, in turn, makes for a higher quality scan.

Most modern colour scanners are at least 24-bit, meaning that they collect 8 bits of information about each of the primary scanning colours: red, green and blue. A 24-bit unit can theoretically capture over 16 million different colours, though in practice the number is usually considerably smaller. This is near-photographic quality, and is therefore commonly referred to as 'true colour' scanning.

Recently, an increasing number of manufacturers are offering 30-bit and 36-bit scanners, which can theoretically capture billions of colours. The only problem is that very few graphics software packages can handle anything larger than a 24-bit scan, because of limitations in the design of personal computers. Still, those extra bits are worth having. When a software program opens a 30-bit or 36-bit image, it can use the extra data to correct for noise in the scanning process and other problems that hurt the quality of the scan. As a result, scanners with higher bit-depths tend to produce better colour images.
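
The colour counts quoted above follow directly from the bit-depth; a quick calculation confirms them:

    # Number of distinct values a given bit-depth can describe.
    for bits in (1, 4, 8, 24, 30, 36):
        print(f"{bits}-bit: {2 ** bits:,} tones/colours")

    # 24-bit gives 16,777,216 colours ('true colour'); 30-bit roughly
    # 1.07 billion; 36-bit roughly 68.7 billion.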

Dynamic range


Dynamic range is somewhat similar to bit-depth in that it measures how wide a range of tones the scanner can record. It is a function of the scanner's analogue-to-digital converter - along with the purity of the illuminating light and coloured filters and any system noise.

Dynamic range is measured on a scale from 0.0 (perfect white) to 4.0 (perfect black), and the single number given for a particular scanner tells how much of that range the unit can distinguish. Most colour flatbeds have difficulty perceiving the subtle differences between the dark and light colours at either end of the range, and tend to have a dynamic range of about 2.4. That's fairly limited, but it's usually sufficient for projects where perfect colour isn't a concern. For greater dynamic range, the next step up is a top-quality colour flatbed scanner with extra bit-depth and improved optics. These high-end units are usually capable of a dynamic range between 2.8 and 3.2, and are well suited to more demanding tasks like standard colour pre-press. For the ultimate in dynamic range, the only alternative is a drum scanner. These units frequently have a dynamic range of 3.0 to 3.8, and deliver all the colour quality one could ask for. Although they are overkill for most projects, drum scanners do offer high quality in exchange for their high price.

In theory, a 24-bit scanner offers an 8-bit range (256 levels) for each primary colour - and the difference between adjacent levels out of 256 is commonly accepted to be indiscernible to the human eye. Unfortunately, a few of the least significant bits are lost in noise, while any post-scanning tonal corrections reduce the range still further. That's why it's best to make any brightness and colour corrections in one go from the scanner driver before making the final scan itself. More expensive scanners with 30- or 36-bit depths have a much wider range to start with, offering better detail in the shadow and highlight areas and allowing you to make tonal corrections and still end up with a decent 24 bits at the end. A 30-bit scanner collects 10 bits of data for each of the red, green and blue colour components, while 36-bit scanners collect 12 bits for each. The scanner driver allows the operator to control which 24 of those 30 or 36 bits are kept and which ones are discarded - this adjustment being made by changing the gamma curve, accessed through the TWAIN driver's Tonal Adjustment control.
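
As a rough illustration of what happens when a driver folds extra bits down to 24 - a sketch only, not the actual TWAIN implementation - each 12-bit sample from a 36-bit scanner can be pushed through a gamma curve and re-quantised to 8 bits:

    # Map a 12-bit scanner sample (0-4095) to an 8-bit value (0-255)
    # through a simple power-law gamma curve. Raising gamma lifts the
    # shadows, which is where the extra scanner bits carry useful detail.
    def gamma_map(sample_12bit, gamma=2.2):
        normalised = sample_12bit / 4095.0
        corrected = normalised ** (1.0 / gamma)
        return round(corrected * 255)

    print(gamma_map(512))   # a dark tone maps to 99, rather than the 32
                            # a straight linear cut-down would give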

Scan resolution


Prior to scanning any image, it is necessary to determine what resolution to scan at. Since modern advertising has conditioned us to think that more is always better, it is not difficult to understand why many users have a tendency to scan at too high a resolution. The scan resolution should always be determined by the capability of the output device - and for all practical purposes it is rarely necessary to scan at higher than 240dpi.

Printed images use a technique called halftoning to reproduce different levels of colour. In magazines an ordered halftone is used, where regular dots of differing sizes produce the varying levels of colour. Most inkjet printers use dithering, where the dots are scattered across the area of each pixel; this produces better-looking results at lower resolutions. The use of halftoning means that the number of pixels per inch the printer can reproduce is lower than its stated 'dpi' resolution.

The rule of thumb for printing at 24-bit colour is that the number of pixels per inch is 16 times less than the resolution. This means that for a 600dpi printer a scan resolution of 40 pixels per inch is appropriate. The typesetters used in offset lithography - the technology used for printing glossy magazines - are capable of printing at 133 lines per inch. This technology is not quite the same as laser or inkjet printer technology and the general rule here is for layout artists to scan at 1.5 times the printing resolution - an equivalent of 200dpi.

When scanning for output on an inkjet printer, a commonly used rule of thumb is to scan at one third of the resolution it is intended to print at. So, for a typical modern inkjet printer with a maximum resolution of 720dpi, 240dpi is an appropriate scan resolution. Attempting to print at a printer's maximum resolution on ordinary plain paper is not, however, recommended. In this case, 360dpi is a more suitable print resolution - and correspondingly 120dpi a more appropriate scan resolution. If scanning in greyscale or line art, it's better to use the full resolution of the printer without dividing it by three.
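
These rules of thumb are easy to capture in a few lines; the sketch below simply encodes the guidelines quoted in this section (the function names are illustrative, not part of any scanner driver):

    # Suggested scan resolutions based on the rules of thumb in the text.
    def scan_dpi_for_inkjet(printer_dpi, plain_paper=False):
        target = printer_dpi / 2 if plain_paper else printer_dpi   # e.g. 360 instead of 720
        return target / 3                                          # scan at 1/3 of print resolution

    def scan_dpi_for_litho(lines_per_inch):
        return lines_per_inch * 1.5                                # 1.5 x the screen frequency

    print(scan_dpi_for_inkjet(720))          # 240.0
    print(scan_dpi_for_inkjet(720, True))    # 120.0
    print(scan_dpi_for_litho(133))           # 199.5, i.e. roughly 200dpi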

When scanning images for inclusion in Web pages or for displaying directly on a PC monitor, the scan resolution is chosen based on the desired size of the displayed image. Graphics cards are capable of different display modes - 640x480, 800x600, 1024x768 etc. - and monitors come with a number of different screen sizes. However, as a general rule of thumb, images for subsequent display on a PC monitor should be scanned at a resolution of around 72dpi.

Scan modes


PCs represent pictures in a variety of ways - the most common methods being line art, halftone, greyscale and colour:

Ø          Line art is the smallest of all the image formats. Since only black and white information is stored, the computer represents black with a 1 and white with a 0. It takes only 1 bit of data to store each dot of a black and white scanned image. Line art is most useful when scanning text or line drawings; pictures do not scan well in line art mode

Ø          While computers can store and show greyscale images, most printers are unable to print different shades of grey. They use a trick called halftoning. Halftones use patterns of dots to fool the eye into believing it is seeing greyscale information

Ø          Greyscale images are the simplest images for the computer to store. Humans can perceive about 255 different shades of grey - represented in a PC by a single byte of data with a value from 0 to 255. A greyscale image can be thought of as the equivalent of a black and white photograph

Ø          True colour images are the largest and most complex images to store, PCs using 8 bits (1 byte) to represent each of the colour components (red, green, and blue) and therefore 24 bits in total to represent the entire colour spectrum.

File formats


The format in which a scanned image is saved can have a significant effect on file size. File size is an important consideration when scanning, since the high resolutions supported by many modern scanners can result in the creation of image files as large as 30MB for an A4 page.
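
The 30MB figure is easy to verify: the sketch below works out the uncompressed size of an A4 scan (roughly 8.27 x 11.69in, an assumption about the page size used) at a given resolution and bit-depth:

    # Uncompressed size of a scan: pixels across x pixels down x bytes per pixel.
    def scan_size_mb(width_in, height_in, dpi, bits_per_pixel):
        pixels = (width_in * dpi) * (height_in * dpi)
        return pixels * bits_per_pixel / 8 / (1024 * 1024)

    # A4 page at 300dpi in 24-bit colour:
    print(round(scan_size_mb(8.27, 11.69, 300, 24)))   # roughly 25MB; higher dpi soon passes 30MB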

Windows bitmap (BMP) files are the largest, since they store the image in full colour without compression or in 256 colours with simple run-length encoding (RLE) compression. Images to be used as Windows wallpaper have to be saved in BMP format, but for most other cases it can be avoided.
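
Run-length encoding is the simplest compression scheme mentioned in this section; a minimal sketch of the idea (not the exact byte layout BMP uses) looks like this:

    # Collapse runs of identical values into (count, value) pairs.
    def rle_encode(values):
        runs = []
        for value in values:
            if runs and runs[-1][1] == value and runs[-1][0] < 255:
                runs[-1][0] += 1
            else:
                runs.append([1, value])
        return [tuple(run) for run in runs]

    print(rle_encode([7, 7, 7, 7, 0, 0, 3]))   # [(4, 7), (2, 0), (1, 3)]

RLE works well on images with large flat areas of a single colour, which is why it suits 256-colour bitmaps far better than photographic scans.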

Tagged Image File Format (TIFF) files are the most flexible, since they can store images in RGB mode for screen display, or CMYK for printing. TIFF also supports LZW compression, which can reduce the file size significantly without any loss of quality. This is based on two techniques introduced by Jacob Ziv and Abraham Lempel in 1977 and 1978 and subsequently refined by Unisys researcher Terry Welch: LZ77 creates pointers back to repeating data, while LZ78 builds a dictionary of repeating phrases with pointers to those phrases.
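
To illustrate the dictionary idea behind LZ78 - and, by extension, LZW - here is a toy encoder, not the TIFF implementation, which emits (dictionary index, next character) pairs while building its phrase dictionary as it goes:

    # Toy LZ78 encoder: each output pair is (index of longest known phrase, next char).
    def lz78_encode(data):
        dictionary = {}            # phrase -> index (1-based; 0 means 'empty phrase')
        output = []
        phrase = ""
        for char in data:
            candidate = phrase + char
            if candidate in dictionary:
                phrase = candidate            # keep extending the match
            else:
                output.append((dictionary.get(phrase, 0), char))
                dictionary[candidate] = len(dictionary) + 1
                phrase = ""
        if phrase:
            output.append((dictionary[phrase], ""))
        return output

    print(lz78_encode("ABABABA"))   # [(0, 'A'), (0, 'B'), (1, 'B'), (3, 'A')]

Repeating patterns end up represented by short dictionary references, which is why scans with large uniform areas compress so well without losing any quality.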

CompuServe’s Graphics Interchange Format (GIF) stores images using indexed colour. A total of 256 colours are available in each image, although what these colours are can change from image to image. A table of RGB values for each index colour is stored at the start of the image file. GIFs tend to be smaller than most other file formats because of this decreased colour depth, making them a good choice for use in WWW-published material.

The PC Paintbrush (PCX) format has fallen into disuse, but offers a compressed format at 24-bit colour depth. The JPEG file format uses lossy compression and can achieve small file sizes at 24-bit colour depth. The level of compression can be selected - and hence the amount of data loss - but even at the maximum quality setting JPEG loses some detail and is therefore only really suitable for viewing images on-line. The number of levels of compression available depends on the image editing software being used.

Unless there is a need to preserve colour information from the original document, images stored for subsequent OCR processing are best scanned in greyscale. This uses a third of the space of an RGB colour scan. An alternative is to scan in line-art mode - black and white with no greyscales - but this often loses detail, reducing the accuracy of the subsequent OCR process.

The table below illustrates the relative file sizes that can be achieved by the different file formats in storing a 'native' 1MB image, and also indicates the colour depth supported:


File format                            Image size    No. of colours
BMP - RGB                              1MB           16.7 million
BMP - RLE                              83KB          256
GIF                                    31KB          256
JPEG - min. compression                185KB         16.7 million
JPEG - min. progressive compression    150KB         16.7 million
JPEG - max. compression                20KB          16.7 million
JPEG - max. progressive compression    16KB          16.7 million
PCX                                    189KB         16.7 million
TIFF                                   1MB           16.7 million
TIFF - LZW compression                 83KB          16.7 million

OCR

When a page of text is scanned into a PC, it is stored as an electronic file made up of tiny dots, or pixels; it is not seen by the computer as text, but rather, as a ‘picture of text’. Word processors are not capable of editing bitmap images. In order to turn the group of pixels into editable words, the image must go through a complex process known as Optical Character Recognition (OCR).

OCR research began in the late 1950s, and since then, the technology has been continually developed and refined. In the 1970s and early 1980s, OCR software was still very limited - it could only work with certain typefaces and sizes. These days, OCR software is far more intelligent, and can recognize practically all typefaces as well as severely degraded document images.

One of the earliest OCR techniques was something called matrix matching, or pattern matching. Most text is in the Times, Courier or Helvetica typefaces, in point sizes between 10 and 14. OCR programs that use the pattern-matching method have bitmaps stored for every character of each of the different fonts and type sizes. By comparing the bitmaps of the scanned letters against this database of stored bitmaps, the program attempts to recognise the letters. This early system was only really successful with non-proportional fonts like Courier, where letters are spaced regularly and are easier to identify. Complex multi-font documents were well beyond its scope, and an obvious limitation of this method is that it is only useful for the fonts and sizes stored.
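
In essence, pattern matching comes down to counting how closely a scanned glyph's bitmap agrees with each stored template and keeping the best match. A much-simplified sketch, using tiny 3x3 'bitmaps' purely for illustration, might look like this:

    # Score each stored template by the number of matching pixels and pick the best.
    def match_character(scanned, templates):
        def score(a, b):
            return sum(pa == pb for row_a, row_b in zip(a, b)
                                for pa, pb in zip(row_a, row_b))
        return max(templates, key=lambda ch: score(scanned, templates[ch]))

    templates = {
        "I": ["010", "010", "010"],
        "L": ["100", "100", "111"],
    }
    print(match_character(["100", "100", "110"], templates))   # 'L'

Real glyph bitmaps are of course far larger, and every extra font or point size multiplies the number of templates to compare against - the limitation noted above.
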
Feature extraction was the next step in OCR’s development. This attempted to recognise characters by identifying their universal features, the goal being to make OCR typeface-independent. If all characters could be identified using rules defining the way that loops and lines join each other, then individual letters could be identified regardless of their typeface. For example: the letter 'a' is made from a circle, a line on the right side and an arc over the middle. The arc over the middle is optional. So, if a scanned letter had these 'features' it would be correctly identified as the letter 'a' by the OCR program.

In terms of research progress, feature extraction was a step forward from matrix matching, but actual results were badly affected by poor-quality print. Extra marks on the page, or stains in the paper, had a dramatic effect on accuracy. The elimination of such ‘noise’ became a whole research area in itself, attempting to determine which bits of print were not parts of individual letters. Once noise can be identified, the reliable character fragments can then be reconstructed into the most likely letter shapes.

No OCR software ever recognises 100% of the scanned letters. Some OCR programs use the matrix/pattern matching and/or feature extraction methods to recognise as many characters as possible - and complement this by using spell checking on the hitherto unrecognized letters. For example: if the OCR program was unable to recognise the letter 'e' in the word 'th~ir', by spell checking 'th~ir' the program could determine the missing letter is an 'e'.

Recent OCR technology is far more sophisticated than the early techniques. Instead of just trying to identify individual characters, modern techniques are able to identify whole words. This technology, developed by Caere, is called Predictive Optical Word Recognition (POWR).

Using higher levels of contextual analysis, POWR is able to virtually eliminate the problems caused by noise. It enables the computer to sift through the thousands or millions of different ways that the dots in a word can be assembled into characters. Each possible interpretation is then assigned a probability, and the highest one is selected. POWR uses sophisticated mathematical algorithms which allow the computer to home in on the best interpretation without examining each possible version individually.
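
Caere has not published POWR's internals, but the underlying idea of weighing whole-word interpretations can be illustrated with a toy example: give each candidate reading a score (here simply the product of its per-character confidences, boosted if the word appears in a lexicon - both assumptions for illustration only) and keep the most likely one:

    # Pick the most probable interpretation of a word from scored candidates.
    from math import prod

    def best_interpretation(candidates, lexicon):
        def probability(word, char_confidences):
            p = prod(char_confidences)
            return p * 5 if word in lexicon else p    # crude contextual boost
        return max(candidates, key=lambda c: probability(*c))[0]

    candidates = [("their", [0.9, 0.8, 0.3, 0.9, 0.9]),   # 'e' read with low confidence
                  ("thoir", [0.9, 0.8, 0.4, 0.9, 0.9])]
    print(best_interpretation(candidates, {"their", "there"}))   # 'their'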

When probabilities are assigned to individual words, all kinds of contextual information and evidence are taken into account. The technology makes use of neural networks and predictive modeling techniques taken from research in Artificial Intelligence (AI) and Cognitive Science. This enables POWR to identify words in a way that more closely resembles human visual recognition. In practice, the technique significantly improves the accuracy of word recognition across all document types. All the possible interpretations of a word are assessed by combining every source of evidence, from low-level pixel-based information to high-level contextual clues, and the most probable interpretation is then selected.

Although OCR systems have been around for a long time, their benefits are only just being appreciated. The first offerings were extremely costly, in terms of software and hardware, and they were inaccurate and difficult to use. Consequently many of the early adopters became frustrated with the technology. Over the past few years, however, OCR has been completely transformed. Modern OCR software is highly accurate, easy to use and affordable and for the first time OCR looks set to be adopted in all kinds of work environments on a mass scale.

Unless there is a specific need to preserve colour information from the original document, it's best to scan documents for OCR in greyscale. This uses a third of the space of an RGB colour scan. Line-art mode makes for even smaller file sizes - but this often loses detail, reducing the accuracy of subsequent OCR processing.

MONITOR & SERVICING


Your monitor provides the link between you and your computer. Although you can possibly get rid of your printer, disk drives, and expansion cards, you cannot sacrifice the monitor. Without it, you would be operating blind; you could not see the results of your calculations or the mistyped words on-screen.

The video subsystem of a PC consists of two main components:
·        Monitor (or video display)
·        Video adapter (also called the video card or graphics card)
This chapter explores the range of available PC-compatible video adapters and the displays that work with them.

Monitors

The monitor is, of course, the display located on top of, near, or inside your computer. Like any computer device, a monitor requires a source of input. The signals that run to your monitor come from video circuitry inside or plugged into your computer. Some computers, such as those that use the low profile (LPX) or new low profile (NLX) motherboard form factors, usually contain this circuitry on the motherboard. Most systems, though, use Baby-AT or ATX style motherboards and normally incorporate the video on a separate circuit board that is plugged into an expansion or bus slot. The expansion cards that produce video signals are called video cards, video adapters, or graphics cards. Whether the video circuit is built into the motherboard or on a separate card, the circuitry operates the same way and uses generally the same components.

Display Technologies

A monitor may use one of several display technologies. By far the most popular is cathode ray tube (CRT) technology, the same technology used in television sets. CRTs consist of a vacuum tube enclosed in glass. One end of the tube contains an electron gun; the other end contains a screen with a phosphorous coating.

When heated, the electron gun emits a stream of high-speed electrons that are attracted to the other end of the tube. Along the way, a focus control and deflection coil steers the beam to a specific point on the phosphorous screen. When struck by the beam, the phosphor glows. This light is what you see when you watch TV or your computer screen.

The phosphor chemical has a quality called persistence, which indicates how long this glow will remain on-screen. You should have a good match between persistence and scanning frequency so that the image has less flicker (if the persistence is too low) and no ghosts (if the persistence is too high).

The electron beam moves very quickly, sweeping the screen from left to right in lines from top to bottom, in a pattern called a raster. The horizontal scan rate refers to the speed at which the electron beam moves across the screen.

During its sweep, the beam strikes the phosphor wherever an image should appear on-screen. The beam also varies in intensity in order to produce different levels of brightness. Because the glow fades almost immediately, the electron beam must continue to sweep the screen to maintain an image - a practice called redrawing or refreshing the screen.
Most displays have an ideal refresh rate (also called a vertical scan frequency) of about 70 hertz (Hz), meaning that the screen is refreshed 70 times a second. Low refresh rates cause the screen to flicker, contributing to eyestrain. The higher the refresh rate, the better for your eyes.

Some monitors have a fixed refresh rate. Other monitors may support a range of frequencies; this support provides built-in compatibility with future video standards. A monitor that supports many video standards is called a multiple-frequency monitor. Most monitors today are multiple- frequency monitors, which means that they support operation with a variety of popular video signal standards. Different vendors call their multiple-frequency monitors by different names, including multisync, multifrequency, multiscan, autosynchronous, and autotracking.

Phosphor-based screens come in two styles: curved and flat. The typical display screen is curved, meaning that it bulges outward from the middle of the screen. This design is consistent with the vast majority of CRT designs (the same as the tube in your television set).

The traditional screen is curved both vertically and horizontally. Some models use the Trinitron design, which is curved only horizontally and is flat vertically. Many people prefer this flatter screen because it results in less glare and a higher-quality, more accurate image. The disadvantage is that the technology required to produce flat-screen displays is more expensive, resulting in higher prices for the monitors.

Alternative display designs are available. Borrowing technology from laptop manufacturers, some companies provide LCD (liquid-crystal display) monitors. LCDs have low-glare flat screens and low power requirements (5 watts versus nearly 100 watts for an ordinary monitor). The color quality of an active-matrix LCD panel actually exceeds that of most CRT displays. At this point, however, LCD screens usually are more limited in resolution than typical CRTs and are much more expensive; for example, a 12.1-inch screen costs several thousand dollars. There are three basic LCD choices: passive-matrix monochrome, passive-matrix color, and active-matrix color. The passive-matrix designs are also available in single- and dual-scan versions.

In an LCD, a polarizing filter creates two separate light waves. In a color LCD, there is an additional filter that has three cells for each pixel - one each for displaying red, green, and blue.

The light wave passes through a liquid-crystal cell, with each color segment having its own cell. The liquid crystals are rod-shaped molecules that flow like a liquid. They enable light to pass straight through, but an electrical charge alters their orientation, as well as the orientation of light passing through them. Although monochrome LCDs do not have color filters, they can have multiple cells per pixel for controlling shades of gray.

In a passive-matrix LCD, each cell is controlled by electrical charges transmitted by transistors according to row and column positions on the screen's edge. As the cell reacts to the pulsing charge, it twists the light wave, with stronger charges twisting the light wave more. Supertwist refers to the orientation of the liquid crystals, comparing on mode to off mode: the greater the twist, the higher the contrast.

Charges in passive-matrix LCDs are pulsed, so the displays lack the brilliance of active-matrix, which provides a constant charge to each cell. To increase the brilliance, some vendors have turned to a new technique called double-scan LCD, which splits passive-matrix screens into a top half and bottom half, cutting the time between each pulse. Besides increasing the brightness, dual-scan designs also increase the response time, or speed, of the display, making this type more usable for video or other applications where the displayed information changes rapidly.

In an active-matrix LCD, each cell has its own transistor to charge it and twist the light wave. This provides a brighter image than passive-matrix displays because the cell can maintain a constant, rather than momentary, charge. However, active-matrix technology uses more energy than passive-matrix. With a dedicated transistor for every cell, active-matrix displays are more difficult and expensive to produce.

In both active- and passive-matrix LCDs, the second polarizing filter controls how much light passes through each cell. Cells twist the wavelength of light to closely match the filter's allowable wavelength. The more light that passes through the filter at each cell, the brighter the pixel.

Monochrome LCDs achieve gray scales (up to 64) by varying the brightness of a cell or dithering cells in an on-and-off pattern. Color LCDs, on the other hand, dither the three-color cells and control their brilliance to achieve different colors on the screen. Double-scan passive-matrix LCDs have recently gained in popularity because they approach the quality of active-matrix displays but do not cost much more to produce than other passive-matrix displays.

The big problem with active-matrix LCDs is that manufacturing yields are low: many of the panels produced have more than the maximum acceptable number of failed transistors and must be rejected. The resulting low yields limit production capacity and force prices higher.

In the past, several hot fluorescent tubes were needed to light an LCD screen, but portable computer manufacturers now use a single tube the size of a cigarette. Light emitted from the tube is spread evenly across the entire display using fiber-optic technology.

Thanks to supertwist and triple-supertwist LCDs, today's screens enable you to see the screen clearly from more angles with better contrast and lighting. To improve readability, especially in dim light, some laptops include backlighting or edgelighting (also called sidelighting). Backlit screens provide light from a panel behind the LCD. Edgelit screens get their light from small fluorescent tubes mounted along the sides of the screen. Some older laptops excluded such lighting systems to lengthen battery life. Most modern laptops enable you to run the backlight at a reduced power setting that dims the display but allows for longer battery life.

The best color displays are active-matrix or thin-film transistor (TFT) panels, in which each pixel is controlled by three transistors (for red, green, and blue). Active-matrix-screen refreshes and redraws are immediate and accurate, with much less ghosting and blurring than in passive-matrix LCDs (which control pixels via rows and columns of transistors along the edges of the screen). Active-matrix displays are also much brighter and can easily be read at an angle.

An alternative to LCD screens is gas-plasma technology, typically known for its black and orange screens in some of the older Toshiba notebook computers. Some companies are incorporating gas-plasma technology for desktop screens and possibly color high-definition television (HDTV) flat-panel screens.

Monitor Resolution

Resolution is the amount of detail that a monitor can render. This quantity is expressed in the number of horizontal and vertical picture elements, or pixels, contained in the screen. The greater the number of pixels, the more detailed the images. The resolution required depends on the application. Character-based applications (such as word processing) require little resolution, whereas graphics-intensive applications (such as desktop publishing and Windows software) require a great deal.

There are several standard resolutions available in PC graphics adapters. The following table lists the standard resolutions used in PC video adapters and the term used to commonly describe them:

Resolution     Acronym    Standard Designation
640x480        VGA        Video Graphics Array
800x600        SVGA       Super VGA
1,024x768      XGA        Extended Graphics Array
1,280x1,024    UVGA       Ultra VGA

In a monochrome monitor, the picture element is a screen phosphor, but in a color monitor, the picture element is a phosphor triad. This difference raises another consideration called dot pitch, which applies only to color monitors. Dot pitch is the distance, in millimeters, between phosphor triads. Screens with a small dot pitch contain less distance between the phosphor triads; as a result, the picture elements are closer together, producing a sharper picture. Conversely, screens with a large dot pitch tend to produce images that are less clear.

Another consideration of resolution is the dot pitch of the monitor. Smaller pitch values allow the monitor to produce sharper images. The original IBM PC color monitor had a dot pitch of 0.43mm, which is considered to be poor by almost any standard. The state-of-the-art displays marketed today have a dot pitch of 0.25mm or less.
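
Dot pitch puts a physical ceiling on how much detail a tube can resolve. As a rough illustrative calculation (the viewable width below is an assumption, not a figure from any particular monitor), the maximum number of triads across the screen is simply the viewable width divided by the pitch:

    # Approximate horizontal triad count for a given viewable width and dot pitch.
    def max_horizontal_triads(viewable_width_mm, dot_pitch_mm):
        return int(viewable_width_mm / dot_pitch_mm)

    # A notional 17in monitor with roughly 320mm of viewable width:
    print(max_horizontal_triads(320, 0.25))   # 1280 - comfortable for 1,280x1,024
    print(max_horizontal_triads(320, 0.43))   # 744  - marginal even for 800x600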

Energy and Safety

A properly selected monitor can save energy. Many PC manufacturers are trying to meet the Environmental Protection Agency's Energy Star requirements. Any PC-and-monitor combination that consumes less than 60 watts (30 watts apiece) during idle periods can use the Energy Star logo. Some research shows that such "green" PCs can save each user about $70 per year in electricity costs.

Monitors, being one of the most power-hungry computer components, can contribute to those savings. Perhaps the best-known energy-saving standard for monitors is VESA's Display Power-Management Signaling (DPMS) spec, which defines the signals that a computer sends to a monitor to indicate idle times. The computer or video card decides when to send these signals.

If you buy a DPMS monitor, you can take advantage of energy savings without remodeling your entire system. If you do not have a DPMS-compatible video adapter, some cards can be upgraded to DPMS with a software utility typically available at no cost. Similarly, some energy-saving monitors include software that works with almost any graphics card to supply DPMS signals.

Another trend in green monitor design is to minimize the user's exposure to potentially harmful electromagnetic fields. Several medical studies indicate that these electromagnetic emissions may cause health problems, such as miscarriages, birth defects, and cancer. The risk may be low, but if you spend a third of your day (or more) in front of a computer monitor, that risk is increased.

The concern is that VLF (very low frequency) and ELF (extremely low frequency) emissions might affect the body. These two emissions come in two forms: electric and magnetic. Some research indicates that ELF magnetic emissions are more threatening than VLF emissions, because they interact with the natural electric activity of body cells. Monitors are not the only culprits; significant ELF emissions also come from electric blankets and power lines.

These two frequencies are covered by the new Swedish monitor-emission standard called SWEDAC, named after the Swedish regulatory agency. In many European countries, government agencies and businesses buy only low-emission monitors. The degree to which emissions are reduced varies from monitor to monitor. The Swedish government's MPR I standard, which dates back to 1987, is the least restrictive. MPR II, established in 1990, is significantly stronger (adding maximums for ELF as well as VLF emissions) and is the level that you will most likely find in low-emission monitors today.

A more stringent 1992 standard called TCO further tightens the MPR II requirements. In addition, it is a more broad-based environmental standard that includes power-saving requirements and emission limits. Nanao is one of the few manufacturers currently offering monitors that meet the TCO standard.

A low-emission monitor costs about $20 to $100 more than similar regular-emission monitors. When you shop for a low-emission monitor, don't just ask for a low-emission monitor; also find out whether the monitor limits specific types of emission. Use as your guideline the three electromagnetic-emission standards described in this section.

If you decide not to buy a low-emission monitor, you can take other steps to protect yourself. The most important is to stay at arm's length (about 28 inches) from the front of your monitor. When you move a couple of feet away, ELF magnetic emission levels usually drop to those of a typical office with fluorescent lights. Likewise, monitor emissions are weakest at the front of a monitor, so stay at least 3 feet from the sides and backs of nearby monitors and 5 feet from any photocopiers, which are also strong sources of ELF.

Electromagnetic emissions should not be your only concern; you also should be concerned about screen glare. In fact, some antiglare screens not only reduce eyestrain but also cut ELF and VLF emissions.

VGA Adapters and Displays

When IBM introduced the PS/2 systems on April 2, 1987, it also introduced the Video Graphics Array (VGA) display. On that day, in fact, IBM also introduced the lower-resolution MultiColor Graphics Array (MCGA) and the higher-resolution 8514 adapters. The MCGA and 8514 adapters did not become popular standards like the VGA, and both were discontinued.

Digital vs. Analog Signals

Unlike earlier video standards, which are digital, the VGA is an analog system. Why are displays going from digital to analog when most other electronic systems are going digital? Compact-disc players (digital) have replaced most turntables (analog), and newer VCRs and camcorders have digital picture storage for smooth slow motion and freeze-frame capability. With a digital television set, you can watch several channels on a single screen by splitting the screen or placing a picture within another picture.

Most personal-computer displays introduced before the PS/2 are digital. This type of display generates different colors by firing the RGB electron beams in on-or-off mode. You can display up to eight colors (2 to the third power). In the IBM displays and adapters, another signal - intensity - doubles the number of color combinations from 8 to 16 by displaying each color at one of two intensity levels. This digital display is easy to manufacture and offers simplicity, with consistent color combinations from system to system. The real drawback of the digital display system is the limited number of possible colors.

In the PS/2 systems, IBM went to an analog display circuit. Analog displays work like the digital displays in that they use RGB electron beams to construct various colors, but each color in the analog display system can be displayed at varying levels of intensity - 64 levels, in the case of the VGA. This versatility provides 262,144 possible colors (64 x 64 x 64). For realistic computer graphics, color often is more important than high resolution, because the human eye perceives a picture that has more colors as being more realistic. IBM moved graphics into analog form to enhance the color capabilities.

Video Graphics Array (VGA)

PS/2 systems contain the primary display adapter circuits on the motherboard. The circuits, or VGA, are implemented by a single custom VLSI chip designed and manufactured by IBM. To adapt this new graphics standard to the earlier systems, IBM introduced the PS/2 Display Adapter. Also called a VGA card, this adapter contains the complete VGA circuit on a full-length adapter board with an 8-bit interface. IBM has since discontinued its VGA card, but many third-party units are available.

The VGA BIOS (Basic Input/Output System) is the control software residing in the system ROM for controlling VGA circuits. With the BIOS, software can initiate commands and functions without having to manipulate the VGA directly. Programs become somewhat hardware-independent and can call a consistent set of commands and functions built into the system's ROM-control software.

Future implementations of the VGA will be different in hardware but will respond to the same BIOS calls and functions. New features will be added as a superset of the existing functions. The VGA, therefore, will be compatible with the graphics and text BIOS functions that were built into the PC systems from the beginning. The VGA can run almost any software that originally was written for the MDA, CGA, or EGA.

In a perfect world, software programmers would write to the BIOS interface rather than directly to the hardware and would promote software interchanges between different types of hardware. More frequently, however, programmers want the software to perform better, so they write the programs to control the hardware directly. As a result, these programmers achieve higher-performance applications that are dependent on the hardware for which they were first written.

When bypassing the BIOS, a programmer must ensure that the hardware is 100 percent compatible with the standard so that software written to a standard piece of hardware runs on the system. Just because a manufacturer claims this register level of compatibility does not mean that the product is 100 percent compatible or that all software runs as it would on a true IBM VGA. Most manufacturers have "cloned" the VGA system at the register level, which means that even applications that write directly to the video registers will function correctly. Also, the VGA circuits themselves emulate the older adapters even to the register level and have an amazing level of compatibility with these earlier standards. This compatibility makes the VGA a truly universal standard.

The VGA displays up to 256 colors on screen, from a palette of 262,144 (256K) colors. Because the VGA outputs an analog signal, you must have a monitor that accepts an analog input.

VGA displays come not only in color but also in monochrome VGA models, using color summing. With color summing, 64 gray shades are displayed instead of colors; the translation is performed in the ROM BIOS. The summing routine is initiated if the BIOS detects a monochrome display when the system is booted. This routine uses a formula that takes the desired color and rewrites it to involve all three color guns, producing varying intensities of gray. The color that would be displayed, for example, is converted to 30 percent red plus 59 percent green plus 11 percent blue to achieve the desired gray. Users who prefer a monochrome display, therefore, can execute color-based applications.
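
The 30/59/11 weighting is the standard luminance formula; a one-function sketch of the summing - not IBM's actual ROM BIOS routine - would be:

    # Convert an RGB colour to one of 64 gray levels using the 30/59/11 weighting.
    def color_sum(red, green, blue):          # each component 0-63, as in the VGA's 64 levels
        gray = 0.30 * red + 0.59 * green + 0.11 * blue
        return round(gray)

    print(color_sum(63, 0, 0))    # pure red  -> gray level 19
    print(color_sum(63, 63, 63))  # white     -> gray level 63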

Super VGA (SVGA)

When IBM's XGA and 8514/A video cards were introduced, competing manufacturers chose not to clone these incremental improvements on VGA. Instead, they began producing lower-cost adapters that offered even higher resolutions. These video cards fall into a category loosely known as Super VGA (SVGA).

SVGA provides capabilities that surpass those offered by the VGA adapter. Unlike the display adapters discussed so far, SVGA refers not to a card that meets a particular specification but to a group of cards that have different capabilities.

For example, one card may offer several resolutions (such as 800x600 and 1,024x768) that are greater than those achieved with a regular VGA, whereas another card may offer the same or even greater resolutions but also provide more color choices at each resolution. These cards have different capabilities; nonetheless, both are classified as SVGA.

The SVGA cards look much like their VGA counterparts. They have the same connectors, including the feature connector.

Because the technical specifications from different SVGA vendors vary tremendously, it is impossible to provide a definitive technical overview in this book. The pinouts for the standard VGA and SVGA video card connector are shown in the following table:

Pin    Function                              Direction
1      Red                                   Out
2      Green                                 Out
3      Blue                                  Out
4      Monitor ID 2                          In
5      Digital Ground (monitor self-test)    --
6      Red Analog Ground                     --
7      Green Analog Ground                   --
8      Blue Analog Ground                    --
9      Key (Plugged Hole)                    --
10     Sync Ground                           --
11     Monitor ID 0                          In
12     Monitor ID 1                          In
13     Horizontal Sync                       Out
14     Vertical Sync                         Out
15     Monitor ID 3                          In

Video Memory

A video card relies on memory in drawing your screen. You can often select how much memory you want on your video card--for example, 256K, 512K, 1M, 2M, 4M, 6M, or 8M are common choices today. Most cards today come with at least 1M and usually have 2M. Adding more memory does not speed up your video card; instead, it enables the card to generate more colors and/or higher resolutions.

The amount of memory needed by a video adapter to display a particular resolution and color depth can be worked out with a simple calculation. There has to be a memory location used to display every dot (or pixel) on the screen, and the total number of dots is determined by the resolution. For example, 1,024x768 resolution represents 786,432 dots on the screen.

If you were to display that resolution with only two colors, you would only need 1 bit to represent each dot. If the bit were a 0, the dot would be black, and if it were a 1, the dot would be white. If you used 4 bits to control each dot, you could display 16 colors, since there are 16 combinations possible with a four-digit binary number (2 to the 4th power equals 16). If you multiplied the number of dots times the number of bits required to represent each dot, you have the amount of memory required to display that resolution. Here is how the calculation would work:

1,024 x 768 = 786,432 dots
786,432 dots x 4 bits per dot = 3,145,728 bits
3,145,728 bits / 8 = 393,216 bytes
393,216 bytes / 1,024 = 384K

As you can see, displaying only 16 colors at 1,024x768 resolution would require exactly 384K of RAM on the video card. Because most cards support only memory amounts of 256K, 512K, 1M, 2M, or 4M, you would have to install 512K to run that resolution. Upping the color depth to 8 bits per pixel results in 256 possible colors, and a memory requirement of 786,432 bytes, or 768K. Again, since no video card comes with that exact amount, you would have to install an actual 1M on the video card.
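
The same calculation generalises to any resolution and colour depth. The sketch below reproduces it and rounds the answer up to the nearest standard card size (the size list is illustrative):

    # Video memory needed for a given resolution and colour depth,
    # rounded up to the nearest standard card size.
    def video_memory_needed(width, height, bits_per_pixel):
        bytes_needed = width * height * bits_per_pixel / 8
        standard_kb = [256, 512, 1024, 2048, 4096, 6144, 8192]
        for size in standard_kb:
            if size * 1024 >= bytes_needed:
                return f"{bytes_needed / 1024:.0f}K needed -> install {size}K"
        return "more than 8M needed"

    print(video_memory_needed(1024, 768, 4))    # 384K needed -> install 512K
    print(video_memory_needed(1024, 768, 8))    # 768K needed -> install 1024K
    print(video_memory_needed(1024, 768, 24))   # 2304K needed -> install 4096K

The last line shows why true-color work at 1,024x768 calls for a 4M card, as noted below.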

In order to use the higher resolution modes and greater numbers of colors in SVGA cards, such cards will need more memory than the 256K found on a standard VGA adapter.

For example, a video adapter with 2M can display 65,536 colors in 1,024x768-resolution mode, but for a true color (16.8M colors) display at that resolution, you would need to upgrade to 4M. In most cases, unless you are doing photo-realistic editing requiring 24-bit (16.8M color) support, 2M is all you need on your video adapter.

A 24-bit (or true-color) video card can display photographic images by using 16.8 million colors. If you spend a lot of time working with graphics, you may want to invest in a 24-bit video card with up to 4M of RAM. Many of the cards today can easily handle 24-bit color, but you may need to upgrade from 2M to 4M of RAM to get that capability in the higher-resolution modes.

Another issue with respect to memory on the graphics adapter is how wide the access is between the graphics chipset and the memory on the adapter. The graphics chipset is usually a single large chip on the card that contains virtually all of the adapter's functions. It is wired directly to the memory on the card through a local bus. Most of the high-end adapters use an internal 64-bit or even 128-bit wide memory bus. This jargon is confusing, because this does not refer to the kind of bus slot the card plugs into. In other words, when you read about a 64-bit graphics adapter, it is really a 32-bit (PCI or VLB) card that has a 64-bit local memory bus on the card itself.

Improving Video Speed

Many efforts have been made recently to improve the speed of video adapters because of the complexity and sheer volume of data of the high-resolution displays used by today's software. The improvements in video speed are occurring along three fronts:
·        Processor

·        RAM

·        Bus

The combination of these three is reducing the video bottleneck caused by the demands of graphical user interface software, such as Microsoft Windows.

Video Output Devices

When video technology was first introduced, it was based upon television. There is a difference between the signals a television uses and the signals used by a computer. In the United States, the National Television System Committee (NTSC) established color TV standards in 1953. Some countries, such as Japan, followed this standard. Many countries in Europe developed more sophisticated standards, including Phase Alternate Line (PAL) and SEquential Couleur Avec Memoire (SECAM).

A video-output (or VGA-to-NTSC) adapter enables you to show computer screens on a TV set or record them onto videotape for easy distribution. These products fall into two categories: those with genlocking (which enables the board to synchronize signals from multiple video sources or video with PC graphics) and those without. Genlocking provides the signal stability needed to obtain adequate results when recording to tape but is not necessary for simply using a video display.

VGA-to-NTSC converters come as both internal boards and external boxes that you can port along with your laptop-based presentation. These latter devices do not replace your VGA adapter but instead connect to your video adapter via an external cable that works with any type of VGA card. In addition to VGA input and output ports, a video-output board has a video output interface for S-Video and composite video.

VGA-to-TV converters support the standard NTSC television format and may also support the European PAL format. The resolution shown on a TV set or recorded on videotape is often limited to straight VGA at 640x480 pixels. Such boards may contain an "anti-flicker" circuit to help stabilize the picture, which often suffers from a case of the jitters in VGA-to-TV products.

Advanced Power Management (APM)

APM is a specification created by Microsoft and Intel that allows the system BIOS to manage the power consumption of the system and various system devices.

For displays, power management is implemented by a standard called DPMS (Display Power Management Signalling). This standard defines a method for signalling the monitor to enter into the various APM modes. The basis of the DPMS standard is the condition of the synchronization signals being sent to the display. By altering these signals, a DPMS-compatible monitor can be forced into the various APM modes.
The defined monitor states in DPMS are as follows:

·        On. Refers to the state of the display when it is in full operation.

·        Stand-By. Defines an optional operating state of minimal power reduction with the shortest recovery time.

·        Suspend. Refers to a level of power management in which substantial power reduction is achieved by the display. The display can have a longer recovery time from this state than from the Stand-By State.

·        Off. Indicates that the display is consuming the lowest level of power and is non-operational. Recovery from this state may optionally require the user to manually power on the monitor.
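
DPMS signals these states by switching the horizontal and vertical sync pulses on and off; the mapping is commonly described as follows (a summary sketch of the VESA scheme, not driver code):

    # DPMS state signalling: (horizontal sync present, vertical sync present).
    DPMS_STATES = {
        "On":       (True,  True),    # full operation
        "Stand-By": (False, True),    # minimal saving, fastest recovery
        "Suspend":  (True,  False),   # substantial saving, slower recovery
        "Off":      (False, False),   # lowest power, may need manual power-on
    }

    for state, (hsync, vsync) in DPMS_STATES.items():
        print(f"{state}: horizontal sync {'present' if hsync else 'absent'}, "
              f"vertical sync {'present' if vsync else 'absent'}")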




Adapter and Display Troubleshooting

Solving most graphics adapter and monitor problems is fairly simple, although costly, because replacing the adapter or display is the usual procedure. A defective or dysfunctional adapter or display usually is replaced as a single unit, rather than repaired. Most of today's cards cost more to service than to replace, and the documentation required to service the adapters or displays properly is not always available. You cannot get schematic diagrams, parts lists, wiring diagrams, and so on for most of the adapters or monitors. Many adapters now are constructed with surface-mount technology that requires a substantial investment in a rework station before you can remove and replace these components by hand.

Servicing monitors is a slightly different proposition. Although a display often is replaced as a whole unit, many displays are simply too expensive to replace. Your best bet is to either contact the company from which you purchased the display, or to contact one of the companies that specializes in monitor depot repair.

Depot repair means that you would send in your display to depot repair specialists who would either fix your particular unit or return an identical unit they have already repaired. This is normally accomplished for a flat-rate fee; in other words, the price is the same no matter what they have done to repair your actual unit.

Because you will usually get a different (but identical) unit in return, they can ship out your repaired display immediately on receiving the one you sent in, or even in advance in some cases. This way you have the least down time and can receive a repaired display as quickly as possible. In some cases, if your particular monitor is unique or one they don't have in stock, then you will have to wait while they repair your specific unit.

Troubleshooting a failed monitor is relatively simple. If your display goes out, for example, a swap with another monitor can confirm that the display is the problem. If the problem disappears when you change the display, then the problem was almost certainly in the original display; if the problem remains, then it is likely in the video card or PC itself.

After you narrow down the problem to the display, call the display manufacturer for the location of the nearest factory repair depot. There are also alternative third-party depot repair service companies that can repair most displays; their prices often are much lower than factory service.
For most displays, you are limited to making simple adjustments. For color displays, the adjustments can be quite formidable if you lack experience. Even factory service technicians often lack proper documentation and service information for newer models; they usually exchange your unit for another and repair the defective one later. Never buy a display for which no local factory repair depot is available.

If you have a problem with a display or adapter, it pays to call the manufacturer, who might know about the problem and make repairs available, as occurred with the IBM 8513 display. Large numbers of the IBM 8513 color displays were manufactured with components whose values change over time, causing text or graphics to go out of focus. I discovered that IBM replaced these displays at no cost when focus is a problem. As the 8513 has been out of production for some time, replacements are no longer available.

Remember that most of the problems you have with modern video adapters and displays will be related to the drivers that control these devices rather than the hardware itself. Contact the manufacturers to ensure that you have the latest and proper drivers; there may be a solution that you are unaware of.

Troubleshooting a Monitor


When a monitor goes bad, you are usually stuck with replacing it. If you are lucky, you can find someone to repair it. The number one thing to remember is never to try to fix it from the inside yourself, even when the monitor has been unplugged from the wall for a while. A lethal charge will remain in the monitor for a good amount of time.

Now, what are some things to look at when troubleshooting monitor problems? Most of these are fairly common and easily fixed - again, not from the inside, though.

Most of the time your problems are with the cabling, or interference from other devices.

Interference is caused by:

Ø      Uneven electrical currents
Ø      Interference from speakers, fans, and telephones
Ø      Bad connections through cabling
Ø      Excessive cable length

Electrical currents can cause some really strange problems with the monitor, as with the rest of the computer. You may already know what a culprit surge protectors, or even the computer's power supply, can be. If you suspect that your power supply or surge protector is causing the problems, simply plug the monitor into the wall separately. In many offices that share power-hungry devices on the same circuit, you can run into some real problems with your monitor or the system itself; find another, less crowded power source. This can get so hairy that you might need to call an electrician to hook up more power to the home or office.

Another way to remedy this is to purchase an uninterruptible power supply (UPS), which will take up the slack in the line. The UPS can also help out during power outages.

Other Symptoms

Interference can be a funny little problem to have. If you are experiencing interference, it is usually due to fans, speakers, or anything else that produces electromagnetic fields. It can show up with really cheap multimedia speakers on your system; they should be shielded, but don't count on it.

The source of the interference is fairly easy to recognize, simply because it will warp the screen a little more in one direction than the rest. This means you will need to move the offending device further away from the monitor to cure it.

More Symptoms

Cabling: the monitor cable itself can be bad. Not only can these cables pick up electromagnetic interference like the rest of the monitor, but they will also distort your signal if interfered with. This is unlikely in most cases, since the cable itself is shielded, but it can happen.

Length: the cable can cause you a problem should it be too long. Most monitor cables are under 5ft long and should stay that way - the longer the cable, the weaker the signal. If you do have a cable extender, make sure it is shielded.