Digital Imaging Terminology
Jargon: Medical Digital Imaging Terminology Explained!
This is a range of terms that you will come up against in the medical digital imaging world. It is by no means complete, but it covers some of the most common ones to help you out.
Read as much or as little as you want but ask us if you are unsure of something you have read or been told. If you want something added and explained, just contact us.
Click below to learn more about the terminology used on the website.
AE Title
Application Entities (AEs) are the nodes in a DICOM network; the name that identifies each node is its AE Title.
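As a rough sketch only (not tied to any particular product), here is how two AEs identify each other by AE Title when opening a DICOM association, assuming the open-source pynetdicom library; the AE Titles, host address and port below are hypothetical.

```python
# A minimal sketch of a DICOM C-ECHO ("ping") between two Application Entities,
# assuming pynetdicom; all AE Titles and addresses are hypothetical examples.
from pynetdicom import AE

ae = AE(ae_title="ARO_WORKSTATION")            # our local AE Title
ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class (C-ECHO)

assoc = ae.associate("192.168.0.50", 104, ae_title="PACS_SERVER")
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status:", status.Status if status else "no response")
    assoc.release()
else:
    print("Could not associate - check AE Titles, host and port")
```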
Cassette
A light-proof housing for x-ray film, containing front and back intensifying screens, between which the film is placed and held during exposure. Although it is usual to have two screens, there may be only one where there is a special need for a high-detail picture.
Cassettes are also used in Computed Radiography (CR) Systems but hold an Imaging Plate (IP) instead of film. In this case, we will refer to this as the cassette or cassette shell.
See “Imaging Plate” and “Computed Radiography”
Cassette Grid
A cassette grid comprises alternating strips of lead and a radiolucent material such as aluminium. Placed on top of the cassette, it permits the passage only of x-rays travelling directly towards the film. Scattered rays are absorbed by the lead, reducing the effect of scatter on the film and providing a clearer image.
X-ray grids improve the quality of a radiograph by trapping most of the scattered radiation, the biggest contributing factor to poor diagnostic quality. Introducing an x-ray grid between the x-ray beam and the film or plate will provide a clearer and more detailed image.
Let the experts at ARO help you choose the grid that fits your needs. We can help you with many sizes and configurations, including grids for CR, DR, C-Arm, decubitus, mammography, and standard applications.
Different ratios, line spacing and focal distances are available to suit varying needs. Talk to us about what is best for you.
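For reference, the grid ratio mentioned above is conventionally defined as the height of the lead strips divided by the width of the radiolucent interspace between them. A minimal illustrative calculation (the figures are made up, not product specifications):

```python
def grid_ratio(strip_height_mm: float, interspace_mm: float) -> float:
    """Grid ratio = height of the lead strips / width of the radiolucent interspace."""
    return strip_height_mm / interspace_mm

# e.g. 2.4 mm strips separated by 0.3 mm interspaces give an 8:1 grid
print(f"{grid_ratio(2.4, 0.3):.0f}:1")
```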
Caesium Iodide
Caesium iodide (CsI) is an ionic compound often used as the input phosphor of an x-ray image intensifier tube found in fluoroscopy equipment. Caesium iodide photocathodes are highly efficient at extreme ultraviolet wavelengths.[1]
An important application of caesium iodide crystals, which are scintillators, is electromagnetic calorimetry in experimental particle physics. Pure CsI is a fast and dense scintillating material with a relatively high light yield. It shows two main emission components: one in the near ultraviolet region at a wavelength of 310 nm and one at 460 nm. The drawbacks of CsI are a high temperature gradient and a slight hygroscopicity.
Caesium iodide can be used as a beamsplitter in Fourier Transform Infrared (FT-IR) spectrometers. CsI has a wider transmission range than the more common potassium bromide beamsplitters, extending its usefulness into the far infrared. A problem with optical-quality CsI crystals is that they are very soft with no cleavage, making it difficult to create a flat polished surface. Also, the CsI optical crystals must be stored in a desiccator to prevent water damage to the surfaces and coated (typically with germanium) to minimise water damage from short-term atmospheric exposure during beamsplitter swap outs.
Source and more information: https://en.wikipedia.org/wiki/Caesium_iodide
Computed Radiography
Computed Radiography (CR) uses very similar equipment to conventional radiography, except that in place of a film to create the image, an imaging plate (IP) made of photostimulable phosphor is used. The imaging plate is housed in a special cassette and placed under the body part or object to be examined, and the x-ray exposure is made. Then, instead of taking an exposed film into a darkroom for development in chemical tanks or an automatic film processor, the imaging plate is run through a special laser scanner, or CR reader, that reads and digitises the image. The digital image can then be viewed and enhanced using software very similar to other conventional digital image-processing software, with tools such as contrast, brightness, filtration and zoom.
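The contrast and brightness adjustments mentioned above are typically implemented as a window/level mapping of the digitised pixel values. A minimal sketch in Python with NumPy, not any particular vendor's software, assuming a 12-bit CR image:

```python
import numpy as np

def window_level(raw: np.ndarray, centre: float, width: float) -> np.ndarray:
    """Map a digitised CR image to 8-bit display values using a window centre/width,
    the usual way 'brightness' and 'contrast' are adjusted on a digital image."""
    lo, hi = centre - width / 2.0, centre + width / 2.0
    scaled = np.clip((raw.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

# Random 12-bit values stand in for a scanned plate; view with centre 2048, width 3000
display = window_level(np.random.randint(0, 4096, (2140, 1760)), 2048, 3000)
```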
Source and more information: https://en.wikipedia.org/wiki/Computed_radiography
DICOM
Digital Imaging and Communications in Medicine is a standard for handling, storing, printing, and transmitting information in medical imaging. It includes a file format definition and a network communications protocol. The communication protocol is an application protocol that uses TCP/IP to communicate between systems. DICOM files can be exchanged between two entities capable of receiving image and patient data in DICOM format. The National Electrical Manufacturers Association (NEMA) holds the copyright to this standard. It was developed by the DICOM Standards Committee, whose members[2] are also partly members of NEMA.
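As an illustration of the file-format side of the standard, a DICOM file can be read and its patient metadata and pixel data inspected with the open-source pydicom library; the file name below is hypothetical, and decoding the pixel data assumes NumPy is installed.

```python
# A minimal sketch of reading a DICOM file with pydicom; file name is hypothetical.
import pydicom

ds = pydicom.dcmread("chest_xray.dcm")
print(ds.PatientName, ds.Modality, ds.StudyDate)   # patient and study metadata
pixels = ds.pixel_array                            # the image as a NumPy array
print(pixels.shape, pixels.dtype)
```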
Source and more information: https://en.wikipedia.org/wiki/DICOM
Direct Radiography
This is known as Direct Radiography (DR) or Direct Digital Radiography (DDR).
There are two major digital image capture device variants: flat panel detectors (FPDs) and CCD detectors.
Flat Panel Detectors (FPDs) are further classified into two main categories:
1. Indirect FPDs. Amorphous silicon (a-Si) is the most common material in commercial FPDs. Combining an a-Si detector with a scintillator in the detector's outer layer, made from caesium iodide (CsI) or gadolinium oxysulfide (Gd2O2S), converts X-rays to light. Because of this conversion, the a-Si detector is considered an indirect imaging device. The light is channelled through the a-Si photodiode layer, where it is converted to a digital output signal. The digital signal is then read by thin-film transistors (TFTs) or fibre-coupled CCDs. The image data file is sent to a computer for display.
2. Direct FPDs. Amorphous selenium (a-Se) FPDs are known as “direct” detectors because X-ray photons are converted directly into charge. The outer layer of the flat panel in this design is typically a high-voltage bias electrode. X-ray photons create electron-hole pairs in a-Se, and the transit of these electrons and holes depends on the potential of the bias voltage charge. As the holes are replaced with electrons, the resultant charge pattern in the selenium layer is read out by a TFT array, active matrix array, electrometer probes or microplasma line addressing.
Charge-Coupled Device (CCD) Detectors
The design of a charge-coupled device (CCD)-based DR system is straightforward. The detector comprises a large-FOV (e.g., 43 cm by 43 cm) scintillator that converts absorbed X-ray energy into light, an optical lens assembly to focus the light onto the photosensitive CCD array, and a CCD camera to integrate, scan and output the corresponding light image. While there were several configurations in early systems, today's CCD-based detector typically uses a single compound optical lens and a high-resolution CCD camera of 9 million pixels (3000 × 3000 pixels) to 16 million pixels (4000 × 4000 pixels) or more. Referred back to the image plane, this results in image pixel sizes of ~0.10 to ~0.14 mm. The photosensitive area of the CCD chip is quite small, on the order of 2.5 cm × 2.5 cm to 4.0 cm × 4.0 cm, which is required to maintain extremely high charge-coupling efficiency and low-noise operation during readout of the image. Thus, a large optical demagnification is necessary to focus the full-FOV light image onto the CCD sensor. One physical difficulty is the inefficiency of light collection caused by the dispersed light emission from the phosphor: only a small fraction of the light can be focused onto the CCD, potentially reducing the statistical integrity of the information carried by the X-ray photons and increasing the overall noise in the image. This is determined by the demagnification factor, the conversion efficiency, and the luminance and directionality of the light emission. A non-structured phosphor such as gadolinium oxysulfide has high light dispersion and a correspondingly low fraction of light that can be focused onto the CCD.
In contrast, a structured phosphor such as caesium iodide (CsI) produces a more forward-directed light output so that the lens-light collection efficiency, and thus the SNR in the output image, is better for a given incident X-ray exposure. Newer, advanced CCD systems with a CsI phosphor have proven to be reasonably efficient, particularly when using higher kilovolt peak (kVp) techniques that produce more light photons per absorbed X-ray photon. One minor disadvantage in some positioning situations is the relatively large and bulky enclosure of a CCD-based DR system, necessitated by placing the CCD out of the direct X-ray beam and using mirror optics to reflect the light to the photosensor array.
Linear CCD arrays optically coupled to a scintillator by fibre-optic channel plates (often with a demagnification taper of 2:1 to 3:1) are used in slot-scan geometries. A significant advantage is pre- and post-patient collimation, which limits X-ray scatter and allows grid-free operation with image quality (SNR) equivalent to a large-area FOV at 2 to 4 times less patient dose. Disadvantages include the extended exposure time required for image acquisition, with potential motion artefacts and reduced X-ray tube efficiency. Nevertheless, imaging systems based on slot-scan acquisition have provided excellent clinical results for dedicated chest and full-body trauma imaging.
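The pixel sizes and demagnification quoted above follow from simple arithmetic on the stated field of view and CCD matrix sizes; a quick illustrative check:

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
FOV_MM = 430.0                        # 43 cm field of view
CCD_SIDE_MM = 40.0                    # a 4.0 cm photosensitive CCD chip

for matrix in (3000, 4000):
    pixel_mm = FOV_MM / matrix        # pixel size referred back to the image plane
    print(f"{matrix} x {matrix} matrix -> {pixel_mm:.2f} mm pixels")

print("optical demagnification needed:", round(FOV_MM / CCD_SIDE_MM, 1), ": 1")
```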
Source and more information: https://appliedradiology.com/articles/digital-radiography-the-bottom-line-comparison-of-cr-and-dr-technology
Gadolinium Oxysulfide
Gadolinium oxysulfide (Gd2O2S), also called gadolinium sulfoxylate, GOS or Gadox, is an inorganic compound, a mixed oxide-sulphide of gadolinium. Its CAS number is [12339-07-0].
Uses
The main use of gadolinium oxysulfide is in ceramic scintillators. Scintillators are used in radiation detectors for medical diagnostics. The scintillator is the primary radiation sensor; it emits light when struck by high-energy photons. Gd2O2S-based ceramics exhibit final densities of 99.7% to 99.99% of the theoretical density (7.32 g/cm3) and an average grain size ranging from 5 micrometres to 50 micrometres, depending on the fabrication procedure.[1] Two powder preparation routes, the halide flux and sulphite precipitation methods, have successfully synthesised Gd2O2S:Pr,Ce,F powder complexes for ceramic scintillators. The scintillation properties of Gd2O2S:Pr,Ce,F complexes show that this scintillator is promising for imaging applications. There are two main disadvantages: the hexagonal crystal structure, which permits only optical translucency and low external light collection at the photodiode, and the high X-ray damage to the sample.[2]
Terbium-activated gadolinium oxysulfide is frequently used as a scintillator for x-ray imaging. It emits wavelengths between 382 and 622 nm, though the primary emission peak is at 545 nm. It is also used as a green phosphor in projection CRTs, though its drawback is a marked lowering of efficiency at higher temperatures.[1] Variants include, for example, using praseodymium instead of terbium (CAS registry number [68609-42-7], EINECS number 271-826-9), or using a mixture of dysprosium and terbium for doping (CAS number [68609-40-5], EINECS number 271-824-8).
Gadolinium oxysulfide is a promising luminescent host material because of its high density (7.32 g/cm3) and the high effective atomic number of Gd. These characteristics give it a high stopping power for X-ray radiation. Several synthesis routes have been developed for processing Gd2O2S phosphors, including the solid-state reaction method, reduction method, combustion synthesis method, emulsion liquid membrane method, and gas sulfuration method. The solid-state reaction and reduction methods are most commonly used because of their high reliability, low cost, and good luminescent properties. (Gd0.99, Pr0.01)2O2S sub-microphosphors synthesised by the homogeneous precipitation method are very promising as a new green-emitting material for the high-resolution digital X-ray imaging field.[3] Gadolinium oxysulfide powder phosphors are used intensively for the conversion of X-rays to visible light in medical X-ray imaging, and Gd2O2S:Pr-based solid-state X-ray detectors have been successfully reintroduced to X-ray sampling in medical computed tomography (imaging by sections, or sectioning, through any penetrating wave).
Crystal Structure
The crystal structure of gadolinium oxysulfide has trigonal symmetry, with one formula unit per unit cell. Each gadolinium ion is coordinated by four oxygen atoms and three sulphur atoms in a non-inversion-symmetric arrangement. The Gd2O2S structure consists of a sulphur layer alternating with a double layer of gadolinium and oxygen atoms.[4]
Source and more information: https://en.wikipedia.org/wiki/Gadolinium_oxysulfide
Fundus Photography
Fundus photography involves capturing a photograph of the back of the eye, i.e. the fundus. Specialised fundus cameras, which consist of an intricate microscope attached to a flash-enabled camera, are used in fundus photography. The main structures visualised on a fundus photo are the central and peripheral retina, the optic disc and the macula. Fundus photography can be performed with coloured filters or with specialised dyes, including fluorescein and indocyanine green.
The models and technology of fundus photography have advanced and evolved rapidly over the last century. Since the types of equipment are sophisticated and challenging to manufacture to clinical standards, only a few manufacturers/brands are available in the market. For more information on our products, click here for Medical Fundus Camera or here for Veterinary Fundus Camera.
Source and more information: https://en.wikipedia.org/wiki/Fundus_photography
Imaging Plate (IP)
The Computed Radiography (CR) imaging plate (IP) contains photostimulable storage phosphors, which store the radiation level received at each point in local electron energies. When the plate is put through the scanner, the scanning laser beam causes the electrons to relax to lower energy levels (photostimulated luminescence), emitting light detected by a photo-multiplier tube, which is then converted to an electronic signal. The electronic signal is then converted to discrete (digital) values and placed into the image processor pixel map.
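The final step described above, converting the photomultiplier signal into discrete pixel values, is essentially quantisation. A simplified sketch assuming a 12-bit output (illustrative only, not a real CR reader's processing chain):

```python
import numpy as np

def digitise(pmt_signal: np.ndarray, bits: int = 12) -> np.ndarray:
    """Quantise a continuous photomultiplier signal into discrete pixel values,
    a simplified stand-in for the CR reader's analogue-to-digital conversion."""
    levels = 2 ** bits - 1                       # 4095 for a 12-bit output
    span = float(np.ptp(pmt_signal)) or 1.0      # avoid division by zero
    normalised = (pmt_signal - pmt_signal.min()) / span
    return np.round(normalised * levels).astype(np.uint16)
```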
Imaging plates can theoretically be re-used thousands of times if handled carefully. However, IP handling under industrial conditions may result in damage after a few hundred uses. An image can be erased by simply exposing the plate to a room-level fluorescent light. Most laser scanners automatically erase the image plate after laser scanning is complete. The imaging plate can then be re-used. Reusable phosphor plates are environmentally safe but must be disposed of according to local regulations.
They are generally stored inside a cassette or shell for use and storage. See “Cassette” and “Computed Radiography”
IP Address
An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: “A name indicates what we seek. An address indicates where it is. A route indicates how to get there.”
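As a small illustration, Python's standard-library ipaddress module can parse and inspect an address; the address and network below are hypothetical.

```python
# Parsing and inspecting an (entirely hypothetical) IPv4 address.
import ipaddress

addr = ipaddress.ip_address("192.168.1.23")
net = ipaddress.ip_network("192.168.1.0/24")
print(addr.is_private, addr in net)   # True True
```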
Source and more information: https://en.wikipedia.org/wiki/IP_address
JPEG
JPEG (JAY-peg) is a commonly used method of lossy compression for digital images, particularly those produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
JPEG compression is used in several image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices. JPEG/JFIF is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.
The term “JPEG” is an acronym for the Joint Photographic Experts Group, which created the standard. The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type of image/pjpeg when uploading JPEG images.[2] JPEG files usually have a filename extension of .jpg or .jpeg.
JPEG/JFIF supports a maximum image size of 65535×65535 pixels,[3] hence up to 4 gigapixels (for an aspect ratio of 1:1).
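The storage-size versus quality trade-off is usually exposed as a quality parameter when saving. A minimal sketch assuming the Pillow imaging library; the file names are hypothetical.

```python
# Saving the same image at two JPEG quality settings, assuming Pillow is installed.
from PIL import Image

img = Image.open("scan.png").convert("RGB")   # JPEG has no alpha channel
img.save("scan_q90.jpg", quality=90)          # larger file, little visible loss
img.save("scan_q30.jpg", quality=30)          # much smaller file, visible artefacts
```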
Source and more information: https://en.wikipedia.org/wiki/JPEG
Modality
Imaging modalities are the sources of acquisition of digital imaging data. They include devices such as ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), endoscopy (ES), mammography (MG), digital radiography (DR), computed radiography (CR), ophthalmology and more.
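In DICOM, each image carries a Modality attribute (tag 0008,0060) holding a short code for the acquiring device. A small illustrative mapping of commonly used codes (note that DICOM itself uses DX for digital radiography; consult the standard for the full list):

```python
# Commonly used DICOM Modality (0008,0060) codes for the devices listed above.
# Illustrative only; the DICOM standard defines the authoritative list.
MODALITY_CODES = {
    "US": "Ultrasound",
    "MR": "Magnetic Resonance",
    "PT": "Positron Emission Tomography",
    "CT": "Computed Tomography",
    "ES": "Endoscopy",
    "MG": "Mammography",
    "DX": "Digital Radiography",
    "CR": "Computed Radiography",
    "OP": "Ophthalmic Photography",
}
```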
MPEG
The Moving Picture Experts Group (MPEG) is a working group of authorities formed by ISO and IEC to set standards for audio and video compression and transmission.[1] It was established in 1988 on the initiative of Hiroshi Yasuda (Nippon Telegraph and Telephone) and Leonardo Chiariglione,[2] who has chaired the group since its inception. The first MPEG meeting was in May 1988 in Ottawa, Canada.[3][4][5] As of late 2005, MPEG had grown to include approximately 350 members per meeting from various industries, universities and research institutions. MPEG's official designation is ISO/IEC JTC 1/SC 29/WG 11 – Coding of moving pictures and audio (ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 11).
Source and more information: https://en.wikipedia.org/wiki/Moving_Picture_Experts_Group
PACS
Picture Archiving and Communication System (PACS) is a medical imaging technology that provides economical storage of, and convenient access to, images from multiple modalities (source machine types).[1] Electronic images and reports are transmitted digitally via PACS; this eliminates the need to manually file, retrieve or transport film jackets. The universal format for PACS image storage and transfer is DICOM (Digital Imaging and Communications in Medicine). Non-image data, such as scanned documents, may be incorporated using consumer-industry standard formats like PDF (Portable Document Format) once encapsulated in DICOM. A PACS consists of four major components: the imaging modalities, such as X-ray plain film (PF), computed tomography (CT) and magnetic resonance imaging (MRI); a secured network for the transmission of patient information; workstations for interpreting and reviewing images; and archives for the storage and retrieval of images and reports. With available and emerging web technology, PACS can deliver timely and efficient access to images, interpretations and related data. PACS breaks down the physical and time barriers associated with traditional film-based image retrieval, distribution and display.
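As a sketch of how a modality typically sends an image to a PACS, the DICOM C-STORE service can be used, here assuming the open-source pydicom and pynetdicom libraries; the AE Titles, address and file name are hypothetical.

```python
# A minimal sketch of pushing an image to a PACS with a DICOM C-STORE.
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

ds = dcmread("cr_study.dcm")                           # hypothetical DICOM file

ae = AE(ae_title="CR_READER")
ae.requested_contexts = StoragePresentationContexts    # propose common storage classes

assoc = ae.associate("192.168.0.50", 104, ae_title="PACS_ARCHIVE")
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print("C-STORE status:", hex(status.Status) if status else "no response")
    assoc.release()
```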
Source and more information: https://en.wikipedia.org/wiki/Picture_archiving_and_communication_system
Pixel
In digital imaging, a pixel, or pel (picture element), is a physical point in a raster image or the smallest addressable element in a display device; it is therefore the smallest controllable element of a picture represented on the screen. The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid and are often represented using dots or squares, whereas CRT pixels correspond to the display's timing mechanisms and sweep rates.
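A tiny illustration of addressing a pixel by its coordinates in a raster image, using NumPy:

```python
import numpy as np

image = np.zeros((480, 640), dtype=np.uint8)   # a blank 640 x 480 greyscale raster
image[100, 200] = 255                          # address a single pixel by (row, column)
print(image[100, 200], image.shape)            # 255 (480, 640)
```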
Source and more information: https://en.wikipedia.org/wiki/Pixel
Port Number
In computer networking, a port is an application-specific or process-specific software construct serving as a communications endpoint in a computer’s host operating system. A port is associated with the IP address of the host, as well as the type of protocol used for communication. The purpose of ports is to uniquely identify different applications or processes running on a single computer, enabling them to share a single physical connection to a packet-switched network like the Internet.
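Together, an IP address and a port number identify one specific service on one specific host. A minimal sketch that opens a TCP connection to the historic well-known DICOM port, 104, on a hypothetical address:

```python
# The IP address picks the machine; the port picks the application on it.
# Port 104 is the historic well-known DICOM port; the address is hypothetical.
import socket

with socket.create_connection(("192.168.0.50", 104), timeout=5) as sock:
    print("Connected to", sock.getpeername())
```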
Veterinary Practice Management Software
Veterinary Practice Management Software (VPMS) is a management system specific to the veterinary market, designed around the needs of vets, their clients and the animals they treat. In some ways it needs to be more sophisticated than its human equivalent, as it has to handle multiple species and breeds, and veterinarians have a broad range of skill sets covering the many facets of treating animals.
Universal Serial Bus (USB)
Universal Serial Bus is an industry standard, developed in the mid-1990s, that defines the cables, connectors and communications protocols used in a bus for connection, communication and power supply between computers and electronic devices.
USB was designed to standardize the connection of computer peripherals (including keyboards, pointing devices, digital cameras, printers, portable media players, disk drives and network adapters) to personal computers, both to communicate and to supply electric power. It has become commonplace on other devices, such as smartphones, PDAs and video game consoles. USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports, as well as separate power chargers for portable devices.
As of 2008, approximately 6 billion USB ports and interfaces were in the global marketplace, and about 2 billion were sold yearly.
Source and more information: https://en.wikipedia.org/wiki/USB
Uninterruptible Power Supply (UPS)
An Uninterruptible Power Supply, also called an uninterruptible power source, UPS or battery/flywheel backup, is an electrical apparatus that provides emergency power to a load when the input power source, typically mains power, fails. A UPS differs from an auxiliary or emergency power system or standby generator in that it provides near-instantaneous protection from input power interruptions by supplying energy stored in batteries or a flywheel. The on-battery runtime of most uninterruptible power sources is relatively short (only a few minutes) but sufficient to start a standby power source or properly shut down the protected equipment.
A UPS is typically used to protect computers, data centres, telecommunication equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss. UPS units range in size from units designed to protect a single computer without a video monitor (around 200 VA rating) to large units powering entire data centres or buildings.
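As a very rough, purely illustrative way to think about on-battery runtime: stored battery energy divided by the load, scaled by an assumed inverter efficiency (the figures below are made up, not ratings of any particular UPS).

```python
def runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    """Very rough on-battery runtime estimate: stored energy / load, with losses."""
    return battery_wh * efficiency / load_w * 60.0

# e.g. a small UPS with roughly 60 Wh of battery feeding a 300 W workstation
print(f"about {runtime_minutes(60, 300):.0f} minutes")   # about 11 minutes
```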
Source and more information: https://en.wikipedia.org/wiki/Uninterruptible_power_supply