A Microscope Setup and Methodology for Capturing Hyperspectral and RGB Histopathological Imaging Databases
Abstract
1. Introduction
- The ability to capture, calibrate, and stitch hyperspectral cubes with a wedge-linescan camera in the range of 470 to 900 nm.
- The capacity to operate a high-resolution RGB camera configured for microscope use.
- An autofocus algorithm based on a passive approach (i.e., analyzing only the images captured by the sensor) that outperforms expert pathologists at focusing the images.
- The capacity to create RGB whole-slide images with an algorithm that corrects possible errors in the overlap by applying affine transformations.
- An algorithm that allows for automatic capture of HS and RGB snapshots of areas designated by an expert pathologist. This functionality reduces the time that a system operator needs to spend performing the process manually by a factor of at least three, in addition to ensuring perfect framing and reducing human error.
- The creation of a simple labeling tool for whole-slide images that allows an expert pathologist to determine areas of biological interest to be captured in detail later.
- A methodology for using the entire system that ensures the correct capture of large databases in a fast and efficient manner. In addition, the key features needed to adapt the methodology to similar systems are identified.
2. Materials and Methods
2.1. Microscopic Hyperspectral System
2.2. Software
2.2.1. Microscope Controller
- A. Autofocus of both cameras.
- B. Hyperspectral scanning (HS scanning).
- C. Whole slide RGB scanning (WS scanning).
- D. Automatic sequence of HS scans and RGB snapshots.
- Extra: Synchronizing and managing the connected devices (both cameras, the motorized stage, and the Z-axis) and configuring both cameras according to the selected magnification.
A. Autofocus
- Disable manual controls (joysticks and keyboard shortcuts) to leave complete control to the software.
- The stage is moved automatically to the default Z position, corresponding to the lens-to-sample distance specified by the lens manufacturer (also known as the working distance). In the presented system, these are 470 µm, 350 µm, and 340 µm for the three available magnifications.
- Take an RGB capture at the current position and evaluate its focus.
- Next, it is necessary to evaluate the direction of movement (uphill or downhill). To do this, the Z-axis is first moved upwards by twice the default displacement distance, after which an evaluation of the focus of the frame is performed. Then, the same evaluation is performed by moving the Z-axis downwards by twice the default displacement distance from the initial position. If the evaluation is greater in the uphill direction, the Z-axis moves in that direction and the sequence of movements starts from that point. The same applies for the downhill direction. If there is no improvement in either direction, the Z-axis remains in the initial position, and the process ends.
- If one of the two directions has been selected, the following steps are repeated until the most focused image is found (see the sketch after this list):
- (a) The stage is moved in steps of half the default displacement distance for the selected magnification.
- (b) At each step, an RGB capture is taken and its focus is evaluated.
- (c) If the focus evaluation is greater than the previous one, the loop continues; if it is not and fewer than 15 captures have been taken, the loop continues as well. Once more than 15 captures have been taken, all captures are evaluated and the distance where the focus is maximized is chosen. Capping the number of steps prevents the system from selecting false positives (local maxima).
- When focusing is complete, re-enable manual controls and notify the system operator that autofocus has finished.
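The search loop above can be condensed into a few lines. The following is a minimal sketch, not the authors' implementation: it assumes hypothetical `stage` and `camera` wrapper objects (only `get_z()`, `move_z()`, and `capture()` are presumed to exist) and uses the variance of the Laplacian as the focus evaluation, in line with the Laplace-based metrics of [25,26].

```python
import cv2

def focus_measure(frame_bgr):
    # Variance of the Laplacian: higher values indicate a sharper image.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus(stage, camera, step_um, max_captures=15):
    # `stage` and `camera` are hypothetical wrappers around the motorized
    # Z-axis and the RGB camera.
    z0 = stage.get_z()
    evaluated = [(z0, focus_measure(camera.capture()))]

    # Probe both directions by twice the default displacement distance.
    stage.move_z(z0 + 2 * step_um)
    up = focus_measure(camera.capture())
    stage.move_z(z0 - 2 * step_um)
    down = focus_measure(camera.capture())
    if max(up, down) <= evaluated[0][1]:
        stage.move_z(z0)            # no improvement in either direction
        return z0
    sign = 1 if up >= down else -1

    # Climb in half-steps up to the capture cap, then keep the best position;
    # the cap prevents the search from settling on a local maximum.
    z = z0 + sign * 2 * step_um
    for _ in range(max_captures):
        stage.move_z(z)
        evaluated.append((z, focus_measure(camera.capture())))
        z += sign * step_um / 2
    z_best = max(evaluated, key=lambda e: e[1])[0]
    stage.move_z(z_best)
    return z_best
```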
B. Hyperspectral Scan
- User dialog: A pop-up window where the system operator enters the sample identifier (ID), whether the capture is a white reference, and the maximum number of frames to be captured during the scan. As explained in Section 2.1, the minimum number of frames is 432. From the number of frames entered, the length to be covered by the scan is calculated based on the magnification set at that moment; thus, the system operator can determine whether the sample is fully covered.
- Initial checks: A check of every component of the system involved in the scanning process is carried out. If one of them is not operational, then the scan is aborted.
- Folder structure creation: A folder named after the sample’s ID is generated in the work path; inside it, a folder called data contains a folder named after the magnification to be scanned.
- Movement calculation: In this step, the distance $d$ that the motorized stage has to move between each frame is calculated by applying Equation (2), $d = \mathrm{FOV_v} \cdot h_b / h$, where $\mathrm{FOV_v}$ is the vertical field of view (FOV) in µm of the HS camera (which depends on the selected magnification), $h_b$ is the height in pixels of a single band, and $h$ is the height of the sensor in pixels.
- Initialization: Disable the manual controls, create the scan information file, initialize the control variables, disable the open loop of the cameras to allow the algorithm to control when a frame is captured, and move the motorized stage half of the sample length to the left, which is the initial position for scanning.
- Scanning: This loop consists of capturing a frame, moving the stage the calculated distance d, and checking whether the step is the final one.
- Reassembly operations: Return the motorized stage to the initial position, enable manual controls, and enable the open loop of both cameras.
- Compression: This last step groups the captured frames into a single compressed file to save as much space as possible. Because each captured frame is ultimately a grayscale image and the scan is a succession of these frames, H.265 [27] lossless video compression with a ratio of 1.8:1 is applied to generate a video file in mp4 format. For instance, for a scan with 432 frames the compressed file weighs 480 MB on average, whereas the uncompressed data would occupy 920 MB. The compressed raw HS data are used in later stages for processing, as described in Section 2.2.3.
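The paper does not name the compression tooling; one straightforward way to realize lossless H.265 packing of grayscale frames is to invoke FFmpeg with the libx265 encoder in lossless mode. The frame naming pattern, frame rate, and paths below are illustrative assumptions, not the authors' configuration.

```python
import subprocess
from pathlib import Path

def compress_scan(frame_dir: Path, output_mp4: Path, fps: int = 30) -> None:
    """Group the captured grayscale frames into one losslessly compressed
    H.265/HEVC video file, as in the compression step described above."""
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", str(frame_dir / "frame_%05d.png"),  # assumed naming pattern
        "-c:v", "libx265",
        "-x265-params", "lossless=1",             # mathematically lossless
        "-pix_fmt", "gray",  # monochrome profile; use yuv420p if unsupported
        str(output_mp4),
    ], check=True)

# Example: compress_scan(Path("sample_01/data/10x"), Path("sample_01/raw_hs.mp4"))
```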
C. Whole-Slide Scan
- User dialog: In this first step, the system operator determines the region to be captured using a pop-up window showing a simulated image of the sample. This step returns the coordinates of the top-left point of the region $(x, y)$ along with its width and height $(w, h)$ in pixels.
- Folder structure creation: The folder structure used to save the generated data files is created.
- Calculations: In this step, the number of images to be captured is calculated according to the size of the camera sensor. To do this, it is necessary to identify the initial point of the scan in µm, calculated from the scanning region determined by the user. To transform from pixels to µm, Equation (3) is applied, where $(X_{tl}, Y_{tl})$ are constants that represent the coordinates of the top-left point of the sample (in µm), $(X_{br}, Y_{br})$ are the coordinates of the bottom-right point (in µm), and $W$ and $H$ are the width and height of the simulated image on which the system operator has drawn the bounding box (in pixels). With these two coordinates, it is possible to calculate the number of columns $n_{cols}$ (number of images per row) and the number of rows $n_{rows}$ to be captured by applying Equations (5) and (6), where $\mathrm{FOV_x}$ and $\mathrm{FOV_y}$ are the FOV of the RGB camera at the magnification applied for that capture. It is important to note that consecutive images need to overlap by half an image to ensure that the stitching algorithm can merge them correctly; this is why $\mathrm{FOV_x}$ and $\mathrm{FOV_y}$ are divided by two. Knowing $n_{cols}$ and $n_{rows}$, the total number of images can be calculated with Equation (7) (see the sketch after this list). In this formula, note that the initial coordinates are referenced to the actuator and that the rows and columns take into account the rotation of the camera with respect to the actuator.
- Information file: A JSON file is created with all the information about the scan.
- Secondary operations: Manual controls (joysticks and keyboard shortcuts) are disabled to leave complete control to the software. The stage is moved to the initial scanning position.
- Scanning: The loop consists of capturing an image, saving it into a temporary folder, and moving to the next position. When the number of images taken in the current row reaches $n_{cols}$, move to the first column of the next row (the one just below) and reverse the direction of movement, i.e., following a snake scan pattern as shown in Figure 7.
- Stitching: This last step launches the stitching process described below to generate a final image with the final resolution of the WS image ($n_{rows} \times H$, $n_{cols} \times W$).
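As a concrete illustration of the calculations step, the sketch below maps the operator's bounding box from simulated-image pixels to stage micrometers and derives the scan grid with the half-FOV overlap described above (cf. Equations (3)–(7)). All argument names and the linear pixel-to-µm mapping are assumptions made for illustration, not the authors' exact code.

```python
import math

def ws_scan_grid(bbox_px, sim_size_px, sample_tl_um, sample_br_um, fov_um):
    """bbox_px:      (x, y, w, h) drawn by the operator on the simulated image
       sim_size_px:  (W, H) of the simulated image in pixels
       sample_*_um:  stage coordinates of the sample's top-left / bottom-right
       fov_um:       (FOV_x, FOV_y) of the RGB camera at this magnification"""
    x, y, w, h = bbox_px
    W, H = sim_size_px
    ux = (sample_br_um[0] - sample_tl_um[0]) / W   # micrometers per pixel, X
    uy = (sample_br_um[1] - sample_tl_um[1]) / H   # micrometers per pixel, Y

    origin_um = (sample_tl_um[0] + x * ux,         # initial scan point in um
                 sample_tl_um[1] + y * uy)

    fov_x, fov_y = fov_um
    n_cols = math.ceil(w * ux / (fov_x / 2))       # 50% overlap between tiles
    n_rows = math.ceil(h * uy / (fov_y / 2))
    return origin_um, n_cols, n_rows, n_cols * n_rows
```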
D. Automatic Sequence of Hyperspectral Scans
- Get ROI information: Read the information of the ROI (top left point coordinates in pixels, width and height in pixels, histopathology name, pathology number, and magnification used to capture the WS).
- Create folders: Create a folder with the name of the histopathology and its number.
- Conversion: In this step, the center point of the labeled ROI is calculated so that the center of the ROI matches the center of the capture; it is then converted from pixels to µm following Equation (8), where $(X_0, Y_0)$ stands for the coordinates of the starting point of the WS scan, $(x_{ROI}, y_{ROI})$ are the coordinates of the top-left point of the ROI, $(w_{ROI}, h_{ROI})$ are the width and height of the ROI, $W$ and $H$ are the width and height of the sensor of the RGB camera, $M$ is the magnification used to capture the WS image, and $s_p$ is the pixel size of the RGB camera (in µm). A sketch of this conversion is shown after this list.
- Initial position: Move the stage to the center position of the ROI.
- Autofocus: Perform the autofocus process to perfectly focus both cameras.
- RGB capture: Obtain an RGB snapshot capture of the centered region.
- HS scan: Start the whole HS scan of this region as described above. For these HS scans, capturing the minimum number of frames, i.e., 432, is sufficient.
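The conversion step can be illustrated with a short sketch that maps the center of a labeled ROI from WS-image pixels to stage micrometers (cf. Equation (8)). Argument names are hypothetical, and the sensor-size and camera-rotation bookkeeping mentioned in the text is omitted for brevity.

```python
def roi_center_to_stage_um(ws_origin_um, roi_px, pixel_size_um, magnification):
    """ws_origin_um:  (X0, Y0) starting point of the WS scan, in micrometers
       roi_px:        (x, y, w, h) of the labeled ROI in WS-image pixels
       pixel_size_um: pixel pitch of the RGB camera sensor
       magnification: magnification used to capture the WS image"""
    x, y, w, h = roi_px
    cx_px, cy_px = x + w / 2.0, y + h / 2.0     # center of the ROI in pixels
    um_per_px = pixel_size_um / magnification   # object-plane sampling pitch
    return (ws_origin_um[0] + cx_px * um_per_px,
            ws_origin_um[1] + cy_px * um_per_px)
```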
2.2.2. Whole-Slide Labeling Tool
2.2.3. Hyperspectral Preprocessing Chain
- Decompression: As explained above, after an HS scan the raw HS data frames are saved in a compressed lossless video format. In this stage, the video file is decompressed and each frame is loaded in memory.
- Cube composition: In this stage, each frame is calibrated and then fragmented band-by-band for placement in its final position. The stage is divided into two steps:
- Calibration: In this step, radiometric calibration is applied to standardize the HS frame by referring the frame to two reference images that set the maximum amount of light (white reference) and the noise level of the camera (dark reference) [32]. The white reference image is a capture of a non-sample area of the plate, captured at the same exposure time and with the same microscope adjustments as the frame to be calibrated. The dark reference image is a capture with the optical channel of the microscope covered; thus, the captured information is just the baseline noise of the camera. Calibration applies Equation (9), $T = (R - DC)/(WR - DC)$, where $R$ is the raw frame, $DC$ is the dark reference frame, $WR$ is the white reference frame, and $T$ is the transmittance-calibrated frame. In a well-calibrated frame, the transmittance values lie between 0 and 1 (see the sketch after this list).
- Stitching: This step places each band of each calibrated frame in its final position in the HS cube. The process is described in the work of Villa et al. [33].
- Spectral correction: Once each band has been composed, the whole cube must be multiplied by a correction matrix provided by the manufacturer according to the specifications of the camera. This sorts the bands in ascending order of wavelength and corrects the spectral signature for the secondary lobes of other bands that result from imperfections in the fabrication process of the camera filters [34].
- Rotation: As shown in Figure 2, the cameras of this system are oriented 90 degrees counterclockwise with respect to the sample to be captured. Therefore, in this stage it is necessary to rotate the entire cube 90 degrees counterclockwise to ensure that it correlates spatially with the sample. When this step is finished, the HS cube is saved in ENVI format [35] under the files “preprocess.hdr” and “preprocess.bsq”.
- Pseudo-RGB creation: In this last step, an RGB image is generated from the three bands closest to the pure bands of the visible spectrum colors (red, green, and blue). The purpose of this image, called “pseudo_rgb.tiff”, is to provide a visual reference of the captured image. In this case, the band selected for the blue color corresponds to 462.14 nm, the green one is 543.65 nm, and the red one is 605.61 nm.
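The calibration and pseudo-RGB steps can be sketched as follows, assuming the cube is a NumPy array with bands on the last axis and the standard flat-field form of Equation (9); the function names and array layout are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def calibrate_transmittance(raw, white, dark, eps=1e-6):
    """Flat-field calibration, T = (R - DC) / (WR - DC), cf. Equation (9) [32].
    Values are clipped to the valid transmittance range [0, 1]."""
    r = raw.astype(np.float32)
    w = white.astype(np.float32)
    d = dark.astype(np.float32)
    t = (r - d) / np.maximum(w - d, eps)   # eps guards against division by zero
    return np.clip(t, 0.0, 1.0)

def pseudo_rgb(cube, wavelengths_nm):
    """Pick the bands nearest to the reference wavelengths reported above
    (R = 605.61 nm, G = 543.65 nm, B = 462.14 nm) from a (rows, cols, bands)
    cube and stack them into a pseudo-RGB preview image."""
    wl = np.asarray(wavelengths_nm)
    idx = [int(np.argmin(np.abs(wl - t))) for t in (605.61, 543.65, 462.14)]
    return np.stack([cube[..., i] for i in idx], axis=-1)
```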
2.3. Data Acquisition Methodology
- System checklist: Before initializing the system, the microscope must be checked from bottom to top: no filters selected in the illumination optical circuit, the field iris diaphragm completely open, and the condenser removed from the optical path. At the top, the lowest magnification above 2× must be selected, the knobs of the trinocular completely pulled out, the cameras connected, and the driver of the motorized stage powered on.
- Sample preparation: After cleaning the protective glass of the sample with 90% isopropyl alcohol to avoid artifacts during capture, the sample is placed in the sample holder while ensuring that it is not twisted.
- Autofocus: In this step, the microscope controller must be initialized; when the autofocus process is complete, it is important to check that the sample is well focused.
- WS scan: The first image to be captured is the WS scan. To do this, the user must enter the sample ID and then select the appropriate area to be scanned. In order to optimize the pathologist’s labeling time, it is recommended to capture the samples with the WS scan sequentially, allowing the pathologist to label as many as possible in one working session.
- Labeling tool: When the samples have been captured, the pathologist is notified to proceed with labeling. Thanks to the design of the labeling software, if the project is associated with a whole pathology department, the labeling process can be parallelized by dividing the samples into batches and assigning each batch to a different pathologist.
- Automatic sequence: After labeling, the last step is to load each JSON file with the labels provided by the pathologist and then start the sequence of HS scans. In the capture sequence, the magnification required by the type of tissue to be captured must be selected; normally, this is not the magnification used for the WS scan. The sequence of scans is fully automatic, and the system operator only needs to change the sample when the capture is finished.
3. Results
3.1. Autofocus Validation
3.2. Hyperspectral Scanning and Preprocessing Chain Validation
3.3. Whole-Slide Scan
3.4. Use Case Analysis
4. Conclusions and Future Work
- Currently, the speed of the motorized stage is very conservative in order to ensure the correct capture of the images in both the HSI scan and the WS scan. Increasing the speed of the capture system would reduce scanning times, which would be especially notable in the automatic sequence of scans.
- Accelerating the stitching algorithm to reduce the waiting time in generating WS images as much as possible.
- Applying more intelligent compression algorithms for the WS and HSI images could reduce disk size; this measure would be especially useful for capturing databases with a large number of samples.
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
WSI | Whole-Slide Imaging |
RGB | Red, Green, Blue |
HS | Hyperspectral |
HSI | Hyperspectral Imaging |
IR | Infrared |
ROI | Region of Interest |
FOV | Field of View |
ID | Identification |
GUI | Graphical User Interface |
CMOS | Complementary Metal-Oxide-Semiconductor |
References
1. Bonert, M.; Zafar, U.; Maung, R.; El-Shinnawy, I.; Kak, I.; Cutz, J.; Naqvi, A.; Juergens, R.; Finley, C.; Salama, S.; et al. Evolution of anatomic pathology workload from 2011 to 2019 assessed in a regional hospital laboratory via 574,093 pathology reports. PLoS ONE 2021, 16, e0253876.
2. Jahn, S.W.; Plass, M.; Moinfar, F. Digital Pathology: Advantages, Limitations and Emerging Perspectives. J. Clin. Med. 2020, 9, 3697.
3. Kiran, N.; Sapna, F.; Kiran, F.; Kumar, D.; Raja, F.; Shiwlani, S.; Sonam, F.; Bendari, A.; Perkash, R.S.; Anjali, F.; et al. Digital Pathology: Transforming Diagnosis in the Digital Age. Cureus 2023, 15, e44620.
4. Baxi, V.; Edwards, R.; Montalto, M.; Saha, S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod. Pathol. 2022, 35, 23–32.
5. Kumar, N.; Gupta, R.; Gupta, S. Whole Slide Imaging (WSI) in Pathology: Current Perspectives and Future Directions. J. Digit. Imaging 2020, 33, 1034–1040.
6. Farahani, N.; Parwani, A.; Pantanowitz, L. Whole slide imaging in pathology: Advantages, limitations, and emerging perspectives. Pathol. Lab. Med. Int. 2015, 7, 23–33.
7. Borowsky, A.D.; Glassy, E.F.; Wallace, W.D.; Kallichanda, N.S.; Behling, C.A.; Miller, D.V.; Oswal, H.N.; Feddersen, R.M.; Bakhtar, O.R.; Mendoza, A.E.; et al. Digital whole slide imaging compared with light microscopy for primary diagnosis in surgical pathology: A multicenter, double-blinded, randomized study of 2045 cases. Arch. Pathol. Lab. Med. 2020, 144, 1245–1253.
8. Morrison, D.; Harris-Birtill, D.; Caie, P.D. Generative Deep Learning in Digital Pathology Workflows. Am. J. Pathol. 2021, 191, 1717–1723.
9. Rabilloud, N.; Allaume, P.; Acosta, O.; De Crevoisier, R.; Bourgade, R.; Loussouarn, D.; Rioux-Leclercq, N.; Khene, Z.E.; Mathieu, R.; Bensalah, K.; et al. Deep learning methodologies applied to digital pathology in prostate cancer: A systematic review. Diagnostics 2023, 13, 2676.
10. Bueno, G.; Déniz, O.; Fernández-Carrobles, M.D.M.; Vállez, N.; Salido, J. An automated system for whole microscopic image acquisition and analysis. Microsc. Res. Tech. 2014, 77, 697–713.
11. Bian, Z.; Guo, C.; Jiang, S.; Zhu, J.; Wang, R.; Song, P.; Zhang, Z.; Hoshino, K.; Zheng, G. Autofocusing technologies for whole slide imaging and automated microscopy. J. Biophotonics 2020, 13, e202000227.
12. Yoon, J. Hyperspectral imaging for clinical applications. BioChip J. 2022, 16, 1–12.
13. Mangotra, H.; Srivastava, S.; Jaiswal, G.; Rani, R.; Sharma, A. Hyperspectral imaging for early diagnosis of diseases: A review. Expert Syst. 2023, 40, e13311.
14. Terentev, A.; Dolzhenko, V.; Fedotov, A.; Eremenko, D. Current state of hyperspectral remote sensing for early plant disease detection: A review. Sensors 2022, 22, 757.
15. Peyghambari, S.; Zhang, Y. Hyperspectral remote sensing in lithological mapping, mineral exploration, and environmental geology: An updated review. J. Appl. Remote Sens. 2021, 15, 031501.
16. Saha, D.; Manickavasagan, A. Machine learning techniques for analysis of hyperspectral images to determine quality of food products: A review. Curr. Res. Food Sci. 2021, 4, 28–44.
17. Aviara, N.A.; Liberty, J.T.; Olatunbosun, O.S.; Shoyombo, H.A.; Oyeniyi, S.K. Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. J. Agric. Food Res. 2022, 8, 100288.
18. Khan, A.; Vibhute, A.D.; Mali, S.; Patil, C.H. A systematic review on hyperspectral imaging technology with a machine and deep learning methodology for agricultural applications. Ecol. Inform. 2022, 69, 101678.
19. Lv, M.; Li, W.; Tao, R.; Lovell, N.H.; Yang, Y.; Tu, T.; Li, W. Spatial-Spectral Density Peaks-Based Discriminant Analysis for Membranous Nephropathy Classification Using Microscopic Hyperspectral Images. IEEE J. Biomed. Health Inform. 2021, 25, 3041–3051.
20. Ishikawa, M.; Okamoto, C.; Shinoda, K.; Komagata, H.; Iwamoto, C.; Ohuchida, K.; Hashizume, M.; Shimizu, A.; Kobayashi, N. Detection of pancreatic tumor cell nuclei via a hyperspectral analysis of pathological slides based on stain spectra. Biomed. Opt. Express 2019, 10, 4568–4588.
21. Maktabi, M.; Wichmann, Y.; Köhler, H.; Ahle, H.; Lorenz, D.; Bange, M.; Braun, S.; Gockel, I.; Chalopin, C.; Thieme, R. Tumor cell identification and classification in esophageal adenocarcinoma specimens by hyperspectral imaging. Sci. Rep. 2022, 12, 4508.
22. Halicek, M.; Dormer, J.D.; Little, J.V.; Chen, A.Y.; Myers, L.; Sumer, B.D.; Fei, B. Hyperspectral imaging of head and neck squamous cell carcinoma for cancer margin detection in surgical specimens from 102 patients using deep learning. Cancers 2019, 11, 1367.
23. Tran, M.H.; Gomez, O.; Fei, B. An automatic whole-slide hyperspectral imaging microscope. In Proceedings of the Label-Free Biomedical Imaging and Sensing (LBIS) 2023, SPIE, San Francisco, CA, USA, 28–31 January 2023; Volume 12391, pp. 24–33.
24. Cozzolino, D.; Williams, P.; Hoffman, L. An overview of pre-processing methods available for hyperspectral imaging applications. Microchem. J. 2023, 193, 109129.
25. Jia, D.; Zhang, C.; Wu, N.; Zhou, J.; Guo, Z. Autofocus algorithm using optimized Laplace evaluation function and enhanced mountain climbing search algorithm. Multimed. Tools Appl. 2022, 81, 10299–10311.
26. OpenCV. Laplace Operator. Available online: https://docs.opencv.org/3.4/d5/db5/tutorial_laplace_operator.html (accessed on 20 May 2024).
27. Sullivan, G.J.; Ohm, J.R.; Han, W.J.; Wiegand, T. Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668.
28. OpenCV. Affine Transformations. Available online: https://docs.opencv.org/4.x/d4/d61/tutorial_warp_affine.html (accessed on 25 May 2024).
29. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
30. Panchal, P.; Panchal, S.; Shah, S. A comparison of SIFT and SURF. Int. J. Innov. Res. Comput. Commun. Eng. 2013, 1, 323–327.
31. OpenCV. Feature Matching. Available online: https://docs.opencv.org/4.x/dc/dc3/tutorial_py_matcher.html (accessed on 25 May 2024).
32. Pichette, J.; Goossens, T.; Vunckx, K.; Lambrechts, A. Hyperspectral calibration method for CMOS-based hyperspectral sensors. In Proceedings of the Photonic Instrumentation Engineering IV, SPIE, San Francisco, CA, USA, 31 January–2 February 2017; Volume 10110, pp. 132–144.
33. Villa, M.; Sancho, J.; Villanueva, M.; Urbanos, G.; Sutradhar, P.; Rosa, G.; Vazquez, G.; Martin, A.; Chavarrias, M.; Perez, L.; et al. Stitching technique based on SURF for Hyperspectral Pushbroom Linescan Cameras. In Proceedings of the 2021 XXXVI Conference on Design of Circuits and Integrated Systems (DCIS), Vila do Conde, Portugal, 24–26 November 2021; pp. 1–6.
34. Ximea. xiSpec: Hyperspectral Imaging Camera Series; Ximea: Münster, Germany, 2019. Available online: https://www.ximea.com/en/products/xilab-application-specific-oem-custom/hyperspectral-cameras-based-on-usb3-xispec (accessed on 1 June 2024).
35. NV5 Geospatial Software. ENVI Header Files. Available online: https://www.nv5geospatialsoftware.com/docs/ENVIHeaderFiles.html (accessed on 1 June 2024).
System Component | Parameter | Value |
---|---|---|
Microscope | Model | Olympus BX51 (Olympus, Tokyo, Japan) |
 | Trinocular | U-TRU |
 | Lenses | UPlanSApo (4×), UPlanApo (10×), PlanApo (40×) |
 | Illumination type | Transmittance |
Light bulb | Model | 100 W 7724 (Philips, Amsterdam, The Netherlands) |
 | Type | Halogen |
 | Spectral range | 400 to 1800 nm |
Motorized stage | Model | X-Y stage Prior H101BXDK (Prior Scientific Instruments, Fulbourn, Cambridge, UK) |
 | Pitch screw, X & Y axes | 2 mm |
 | Resolution, X & Y axes | 0.02 µm/step |
 | Resolution, Z axis | 0.1 µm/step |
HS camera | Model | MQ022HG-IM-LS150-NIR (Ximea, Münster, Germany) |
 | Camera type | Wedge linescan |
 | Spectral bands | 145 |
 | Spectral range | 470 to 900 nm |
 | Sensor technology | CMOS |
 | Sensor resolution | 2048 × 1088 pixels |
 | Band height | 5 pixels |
 | Vertical FOV | 4× = 6.875 µm; 10× = 2.75 µm; 40× = 0.687 µm |
 | Horizontal FOV | 4× = 2860 µm; 10× = 1144 µm; 40× = 286 µm |
 | Sensor shutter | Global shutter |
 | Pixel bit depth | 8 or 10 bits (configurable) |
 | Sensor size | 11.27 × 5.98 mm |
 | Pixel size | 5.5 µm |
 | Max. theoretical fps | 170 fps |
 | Housing | C-mount |
RGB camera | Model | Basler ace acA5472-17uc (Basler, Exton, PA, USA) |
 | Sensor technology | CMOS |
 | Sensor resolution | 5472 × 3648 pixels |
 | Sensor shutter | Rolling shutter |
 | Pixel bit depth | 10 or 12 bits (configurable) |
 | Sensor size | 13.1 × 8.8 mm |
 | Pixel size | 2.4 µm |
 | Max. fps | 17 fps |
 | Housing | C-mount |
Family | Parameter | Considerations |
---|---|---|
Cameras | Model | Olympus BX51 (Olympus, Tokyo, Japan) |
 | Camera rotation | If the camera(s) are rotated 90 degrees, the scan movements should be adapted accordingly. |
 | Camera resolution | For quality, the lowest resolution above 1920 × 1080. |
 | FOV spectral band | Transmittance |
Calibration | White reference | Capture a frame of an empty, clean area of the slide. |
 | Dark reference | Capture a frame with the lighting system off. |
 | Spectral correction | If the HS camera requires it, apply it according to the manufacturer’s specifications. |
Magnifications | Magnifications | Select values according to the needs of the use case. The most common values are 5×, 10×, 20×, and 40×. |
 | Whole-slide scan magnification | The lowest magnification above 2×. |
 | Z step | For the WS magnification, 10 µm. From there, scale it according to the ratio between the WS magnification and the desired magnification. |
 | Default Z position | Value equivalent to the lens-to-sample working distance specified by the manufacturer. |
Illumination | Technology | Halogen is recommended, but multi-LED spectral illumination can also be used. |
 | Spectral range | Must cover the full range of the HS camera as well as the visible range for the RGB camera. |
Action | Time [mm:ss] |
---|---|
Sample preparation time | 1:00 |
Autofocus | 0:30 |
Whole slide scan time | 9:30 |
Whole slide stitching | 3:20 |
Labeling time | 5:00 |
Automatic HS sequence | 60:00 |
Total | 79:20 |
File | Disk Space |
---|---|
WS image | 1.2 GB |
WS scan info file | 338 bytes |
Labels file | 5 kB |
Raw HS data compressed | 480 MB × 20 scans = 9.4 GB |
HS cube | 2.7 GB × 20 scans = 54 GB |
RGB snapshot | 5.5 MB × 20 scans = 1.1 GB |
Total | 66 GB |