A super-resolution scanning algorithm for lensless microfluidic imaging using the dual-line array image sensor

Lensless optical fluid microscopy is of great significance to the miniaturization, portability, and low cost of cell detection instruments. However, the resolution of directly collected cell images is low, because the physical pixel size of the image sensor is on the same order of magnitude as the cell size. To solve this problem, this paper proposes a super-resolution scanning algorithm using a dual-line array sensor and a microfluidic chip. From the dual-line array sensor images, multiple groups of velocities and accelerations of cells flowing across the line arrays are calculated. Then the reconstruction model of the super-resolution image is constructed with variable acceleration. By changing the angle between the line array image sensor and the direction of cell flow, super-resolution image scanning and reconstruction are achieved in both the horizontal and vertical directions. In addition, a row-by-row extraction algorithm for the cell foreground image is studied. In this paper, the dual-line array sensor is implemented by adjusting the acquisition window of an image sensor with a pixel size of 2.2μm. When the tilt angle is 21 degrees, the equivalent pixel size is 0.79μm, a 2.8-fold improvement, and after de-diffraction the average size error is 3.249%. As the angle decreases, the image resolution increases, but the amount of information decreases. This super-resolution scanning algorithm can be integrated on a chip and used with a microfluidic chip to realize an on-chip instrument.


Introduction
Collecting and analyzing cell images of biological tissues is an important basis for disease diagnosis, health monitoring, and new drug development in medicine today [1,2]. Flow cytometry can perform cell detection quickly and accurately, but its promotion and application are hampered by cost and portability. With the popularization of concepts such as smart medicine and telemedicine, lensless optical fluid microscope technology, aimed at the miniaturization, automation, and low cost of cell image acquisition instruments, was proposed in 2006 [3]. Since the pixel size of the image sensor is on the same order of magnitude as the cell size, the resolution of the image collected by the lensless optical fluid microscope is low. A method of passing the target over a special aperture array was then proposed to reduce the effective pixel size and achieve super-resolution imaging [4,5]. Scholars all over the world are trying to solve the problem of low resolution in lensless imaging results by implementing super-resolution reconstruction. A method of true super-resolution reconstruction by generating a micro-lens effect above or on the surface of the object has been proposed [6,7]. To obtain more cell details, multi-angle micro-displacements of the optical path can be generated so that the cells are scanned with sub-pixel shifts [8,9]; the high-resolution image is then synthesized from a group of low-resolution sub-pixel-shifted images. However, an accurate optical path system is required, and the implementation cost is high. Similarly, the fluid flow can first generate multiple low-resolution frames of the target, from which a single super-resolution image is reconstructed through a multi-frame super-resolution algorithm [10-12].
Differently, improved convolutional neural network structures have been used to establish the feature mapping between low-resolution and high-resolution images [13-15]. Multi-wavelength phase recovery and multi-angle light-source diffraction tomography have been used to realize high-resolution imaging in lensless systems and to restore the depth image [16,17]. Also, an up-sampling phase retrieval scheme has been proposed to bypass the resolution limit set by the pixel size of the imager [18]. These methods introduce optical devices and improve the resolution through corresponding phase recovery algorithms. Our research team has proposed a method of super-resolution scan imaging using a single-line array detector, which places an oblique linear array image sensor under the microfluidic channel to scan the flowing cells; after reconstruction, a super-resolution scan of the cells can be obtained. Compared with an area array image sensor, this method greatly reduces the power consumption occupied by the pixel units. However, it requires very high control accuracy of the cell flow rate, and the reconstructed image is easily distorted.
In this article, our proposed solution is to build a super-resolution scanning system using a dual-line array image sensor, which can accurately calculate cell flow velocity and acceleration. Firstly, two single-line array detectors in a micro-pitch, parallel structure are adopted to construct the dual-line array. The time difference between cells flowing through the two independent linear array sensors is used to accurately calculate the instantaneous flow velocity and acceleration of the cells. Secondly, the single-line scan imaging process is re-modeled, and the transformation between line-scan image coordinates and object image coordinates is derived to reconstruct the line-scan image and restore the super-resolution image of the object. In addition, issues such as foreground separation of the line-scan image and speed calculation have been studied in depth. Based on mean background modeling, a multi-threshold foreground coarse segmentation method is proposed to update the background model, and the foreground of the line-scan image is extracted with the background model. Feature detection and feature matching algorithms are used to match the time and displacement differences of cells as they pass through the two linear array sensors, and to accurately calculate the instantaneous flow velocity and acceleration of the cells.

System structure and basic model
The system structure (Fig 1A) of the dual-line array image sensor consists of a 405nm laser plane-wave source, a microfluidic chip, and a CMOS area array image sensor, the MT9P031. In the system, the smaller the pixel size of the image sensor, the higher the resolution of the reconstructed image and the clearer the image. However, current commercial linear array image sensors have pixels that are too large, so we chose an area array image sensor with smaller pixels and a high sampling rate. This sensor can adjust the size of its image acquisition window through a region of interest (ROI) function, so it can take the place of a dual-line array sensor; its pixel size is 2.2μm. With this function, only the pixels of the window area are read out, so the line rate can be greatly improved. The schematic diagram of the dual-line array sensor structure is shown in Fig 1B; the two linear array sensors are placed in parallel with a spacing of d. When the linear array sensor is at an acute angle to the direction of cell flow, the lateral resolution of scanning imaging can be increased; this is the principle of super-resolution scanning imaging. In this section, the basic mathematical modeling of this structure is carried out, including the establishment of the coordinate systems and the speed, resolution, and distance models.
As shown in Fig 1B, the acquisition resolution in the inclined placement mode is smaller than that in the vertical placement mode. Taking the first intersection of the object flow direction and the linear array sensor as the origin, the object flow direction of the channel as the y axis with the flow direction positive, and the direction perpendicular to the channel as the x axis with the direction pointing into the channel positive, the rectangular coordinate system of the channel object image, named C1, is established. By a similar process, the rectangular coordinate system of the line-scan image, called C2, is established. Special attention should be paid to the fact that the intersection of the x' axis and the y' axis is not the zero point of the y' axis, but the y' coordinate at which the cell is passing through the linear array sensor. Suppose that there is an object flowing at a speed V in the channel, as shown in Fig 2A. As calculated using Eq (1), the velocities V_x, V_y, V_x' and V_y' can be obtained by decomposing the velocity in the coordinate systems C1 and C2, respectively. Similarly, the transformation relationship of acceleration between the two coordinate systems is determined in Eq (2).
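The frame decomposition of Eqs (1)-(2) amounts to a plane rotation between C1 and C2. A minimal sketch, in which the sign conventions are an assumption (they depend on how the positive directions of x' and y' are chosen):

```python
import math

def c1_to_c2(vx, vy, theta_deg):
    """Rotate velocity components from the channel frame C1 (x across the
    channel, y along the flow) into the sensor frame C2 (x' along the line
    array). Assumes the line array makes an acute angle theta with the flow."""
    th = math.radians(theta_deg)
    vx_p = vx * math.sin(th) + vy * math.cos(th)   # component along the sensor
    vy_p = vy * math.sin(th) - vx * math.cos(th)   # component across the sensor
    return vx_p, vy_p

# A purely longitudinal flow (vx = 0) at 100 um/s seen by a 21-degree tilt:
vx_p, vy_p = c1_to_c2(0.0, 100.0, 21.0)
```

The same rotation applies to the acceleration components of Eq (2).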
With θ the acute angle between the line array and the flow direction, Eqs (1) and (2) relate the components in the two coordinate systems:

V_x' = V_x·sin θ + V_y·cos θ,  V_y' = V_y·sin θ − V_x·cos θ,   (1)

and Eq (2) takes the same form with the accelerations a_x, a_y, a_x', a_y' in place of the velocities. Fig 2B is an enlarged view of the intersection of the axes. When the pixel spacing of the linear array sensor is d, the imaging resolution in the x' direction is d, and that in the x direction is d_x. The transformation between d and d_x can then be deduced as Eq (3):

d_x = d·sin θ.   (3)

To ensure that the scale of the reconstructed image is the same as that of the real object, the resolution in the x direction should be equal to the resolution in the y direction.
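The resolution model of Eq (3) is easy to check numerically; this sketch reproduces the equivalent pixel sizes reported later in the paper for a 2.2μm pixel at 21, 15, and 10 degrees:

```python
import math

def equivalent_pixel_size(pixel_um, theta_deg):
    """Lateral sampling pitch in the channel frame (Eq 3) when a line array
    with the given pixel pitch is tilted by theta to the flow direction."""
    return pixel_um * math.sin(math.radians(theta_deg))

for angle in (21, 15, 10):
    print(angle, round(equivalent_pixel_size(2.2, angle), 3))
```

Smaller angles give a finer equivalent pitch, which matches the trade-off discussed in the results: higher resolution, but each scan line covers less of the cell.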
When the imaging sample reaches the origin, the linear array sensor starts acquiring images; suppose there is a point P_1 on the object at this time. As shown in Fig 3, its coordinate is (x, y), and its distances from the y axis and the x axis are S_x and S_y, respectively. If the object flows without a lateral velocity, the pixel corresponding to point P_1 on the linear array sensor is L_1; otherwise it is L_2, and the lateral flow distance is S_Vx. Then the coordinate of the corresponding point P_2 in the line-scan image is (x', y'). The ordinate y' represents the number of frames acquired since the object started imaging when P_1 is acquired. The true distance between point P_2 and the y axis is S_x'. According to the relationship between imaging resolution, pixel size, and pixel coordinates, Eq (4) can be obtained.

Methods of cell foreground extraction
When linear array sensor scanning is used to image cells in microfluidics, the influence of background impurities in the microfluid can be avoided, and only the dynamic information of the flowing cells is collected. However, because of the pixel noise of the image sensor and the non-uniformity of the light source, uneven fringe noise forms on the scanned image. After the system starts, this noise remains stable: in the line-scan image, the pixels and light intensity of each line are the same. As a result, over a continuous, short period of time, the background can be considered almost unchanged. Based on this assumption, the background pixel values can be obtained by simple mean modeling, built while no cells are flowing. Further, in this paper, the background model is updated in real time by identifying pixels with cells flowing through via a multi-threshold method, which reduces their interference with the background model.
Firstly, i is the index of the currently collected line; when the sensor collects its first line, i is 1. N rows of background lines, from F_{i−N} to F_{i−1}, are buffered to establish the initial background mean model. P_i is the row of pixel values of line F_i collected currently, and P̄_i is the mean of the rows from F_{i−N} to F_{i−1}. With P_{i,j} the pixel value of column j in row F_i, the initial background mean model is

P̄_{i,j} = (1/N) · Σ_{k=i−N}^{i−1} P_{k,j}.   (5)

Then the initial foreground difference information EP_i of line F_i, obtained after line F_i is cached, is given in Eq (6):

EP_{i,j} = P_{i,j} − P̄_{i,j}.   (6)
Based on this information, the background mask MP_i of line F_i is

MP_{i,j} = 1 if T_1 ≤ EP_{i,j} ≤ T_2, and MP_{i,j} = 0 otherwise,   (7)

where T_1 and T_2 are the lower and upper thresholds of the background, respectively. The pixels in which cells are present will be filtered out by this mask, and the new value of the background

PLOS ONE
mean model P̄'_i of rows F_{i−N+1} to F_i will be re-calculated as

P̄'_{i,j} = MP_{i,j} · ((N − 1)·P̄_{i,j} + P_{i,j})/N + (1 − MP_{i,j}) · P̄_{i,j},   (8)

that is, the sliding mean is updated with the new line only at pixels judged to be background. Finally, the new foreground difference information EP'_i of cells is

EP'_{i,j} = P_{i,j} − P̄'_{i,j}.   (9)
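The update loop described by Eqs (5)-(9) can be sketched as follows. The exact masked update rule is an assumption, and the ±15 thresholds follow the values used later in the experiments:

```python
import numpy as np

def extract_foreground(rows, N=20, t_lo=-15.0, t_hi=15.0):
    """Rolling-mean background model with a multi-threshold mask.
    `rows` is a (num_lines, width) array of scanned lines; returns the
    refined foreground difference for every line after the first N."""
    rows = rows.astype(float)
    bg = rows[:N].mean(axis=0)          # initial mean model from N cached lines
    fg = np.zeros_like(rows)
    for i in range(N, rows.shape[0]):
        diff = rows[i] - bg             # initial foreground difference EP_i
        mask = (diff >= t_lo) & (diff <= t_hi)   # True where pixel is background
        # update the mean only where no cell is present, damped by 1/N
        bg = np.where(mask, bg + (rows[i] - bg) / N, bg)
        fg[i] = rows[i] - bg            # refined foreground difference EP'_i
    return fg
```

Because cell pixels are excluded from the update, a bright cell passing through does not pull the background estimate upward, which is the point of the multi-threshold refinement.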

Methods of instantaneous velocity
The accuracy of the cell velocity calculation determines the distortion of the reconstructed scanned images. In lensless imaging, when the object is far from the imaging surface, light diffracts through the object to form diffraction rings, and the gray level within each ring is uniform. Therefore, the maximally stable extremal regions (MSER) algorithm is used to detect the alternating light and dark diffraction rings, and the feature points on the boundaries of the maximally stable extremal regions are screened out. Finally, these feature points are matched against the image acquired by the other linear array sensor using the sum of squared differences (SSD) algorithm. The resulting set of matched feature points on the two linear array sensors is then used to calculate the cell flow velocity. MSER, similar to the watershed algorithm, can detect connected regions such as the rings in cell diffraction images. To keep the detection effect obvious while reducing computation time, we compressed the dynamic range of the image before MSER detection. According to the characteristics of the diffraction rings, corner features are mostly distributed at the upper and lower vertex positions of an MSER region. So the coordinates of a corner point are quickly determined: its ordinate is an extremum of the MSER region's ordinates, and its abscissa is the mean of the abscissas at that ordinate extremum. Each MSER region thus yields two corner features, from which appropriate corners are selected for matching. After the initial extraction of corner features, the coordinates of each corner must be analyzed. Corner points must not be too close to one another; when two are relatively close, the one with the more obvious corner feature should be kept. The minimum allowed distance between the corners' ordinates can be determined from the maximum difference of the corners' ordinates.
The corner points with more obvious features can be screened by the corner-feature calculation of Eq (10):

V_corner = Σ_{u,v} M_corner(u,v) · M_data(u,v).   (10)

M_data is the window matrix around the corner point, and M_corner is the corner-feature calculation matrix, which is related to the actual line-array direction and is obtained experimentally. M_corner and M_data are the same size, and the larger V_corner is, the more obvious the feature.
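The two-step corner extraction above can be sketched as follows; `kernel` stands in for the experimentally calibrated M_corner, so its values in any real use would come from calibration, not from this sketch:

```python
import numpy as np

def region_corners(region_pixels):
    """Two corner candidates of an MSER region: the mean abscissa at the
    minimum and at the maximum ordinate of the region's (y, x) pixel list."""
    ys, xs = region_pixels[:, 0], region_pixels[:, 1]
    corners = []
    for y_ext in (ys.min(), ys.max()):
        corners.append((y_ext, xs[ys == y_ext].mean()))
    return corners

def corner_score(window, kernel):
    """Corner-feature strength of Eq (10): elementwise product of the pixel
    window around a candidate with the direction-dependent kernel, summed.
    Larger values indicate a more obvious corner."""
    assert window.shape == kernel.shape
    return float(np.sum(window * kernel))
```

When two candidates fall closer than the minimum ordinate distance, the one with the larger `corner_score` would be kept, as described in the text.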
Assume that K feature points are extracted from the scanned image of the first linear array sensor, and an image of size (H+1)×(W+1) is extracted around feature point k, whose center pixel is denoted M_L1(0,0,k). The SSD matching on the image of the second linear array sensor is then

V_SSD(i,j,k) = Σ_{u=−H/2}^{H/2} Σ_{v=−W/2}^{W/2} [M_L2(i+u, j+v) − M_L1(u,v,k)]²,   (11)

where M_L2(i,j) is the pixel value at coordinates (i,j) on the scanned image of the second linear array sensor, and V_SSD(i,j,k) is the SSD between that pixel's neighborhood and feature point k on the scanned image of the first linear array sensor. The smaller the SSD value, the higher the matching degree between the two feature points. During the search process, the feature point with the smallest SSD value is selected as the final matching point.
The difference in the displacement of the cell images acquired by the dual-line array sensors is small because of the short distance between the first and second linear array sensors. So an SSD matching search area on the second linear array sensor is set up around the coordinates of each feature point of the first linear array sensor, to improve the search efficiency of the SSD matching algorithm. Assume that the line rate of the line array sensor is f, the pixel size is s_pixel, and the line-array pitch is d. The coordinates of two adjacent feature points on the first linear array sensor are (x_i, y_i) and (x_{i+1}, y_{i+1}), and the coordinates of the corresponding matching points on the second linear array sensor are (x'_i, y'_i) and (x'_{i+1}, y'_{i+1}). Then the time difference of the point on the cell between the first and the second linear array sensor is (y'_i − y_i)/f, and the lateral displacement is (x'_i − x_i)·s_pixel, so

V_x'_i = (x'_i − x_i)·s_pixel·f/(y'_i − y_i),  V_y'_i = d·f/(y'_i − y_i).   (12)

Similarly, the lateral velocity V_x'_{i+1} and longitudinal velocity V_y'_{i+1} of the next point in the coordinate system C2 can be calculated. The time difference between the two feature points on the cell after passing the first linear array sensor is (y_{i+1} − y_i)/f, so the lateral and longitudinal accelerations of the cell during this period are

a_x'_i = (V_x'_{i+1} − V_x'_i)·f/(y_{i+1} − y_i),  a_y'_i = (V_y'_{i+1} − V_y'_i)·f/(y_{i+1} − y_i).   (13)

Using Eqs (1) and (2), the velocity and acceleration information in the coordinate system C1 can then be obtained. By a similar process, the K velocities and the K−1 accelerations of all feature points are calculated.
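A minimal sketch of the matching metric and of the velocity relation of Eq (12). The helper `point_velocity` is a hypothetical wrapper for illustration, not the paper's implementation; f, s_pixel, and d follow the text:

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences between two patches; smaller is a better match."""
    diff = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(diff * diff))

def point_velocity(p1, p2, f, s_pixel, d):
    """Instantaneous velocity in C2 from one matched feature pair.
    p1 = (x, y) on the first line array, p2 = (x', y') on the second;
    f: line rate (lines/s), s_pixel: pixel pitch, d: pitch between arrays."""
    dt = (p2[1] - p1[1]) / f             # transit time between the two arrays
    vx = (p2[0] - p1[0]) * s_pixel / dt  # lateral velocity
    vy = d / dt                          # longitudinal velocity
    return vx, vy

# Example: a feature drifting 2 pixels laterally over 10 scan lines
vx, vy = point_velocity((10, 100), (12, 110), f=1230.0, s_pixel=2.2, d=4.4)
```

Accelerations then follow from finite differences of consecutive matched pairs, as in Eq (13).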

The reconstruction with variable acceleration
Suppose that an object flows in the microchannel, with V_x and V_y as the initial velocities along the x and y axes, and a_x and a_y as the accelerations along the x and y axes. According to the physical relationship between distance, speed, and acceleration, Eq (14) can be obtained:

S_x = V_x·t + (1/2)·a_x·t²,  S_y = V_y·t + (1/2)·a_y·t².   (14)

Then the coordinate transformation mapping the object coordinate system into the line-scan coordinate system, Eq (15), leads to a quadratic equation A·y'² + B·y' + C = 0, whose solution can be written as

y' = (−B ± √(B² − 4AC)) / (2A).

In reality, it is difficult for small objects to maintain a constant-acceleration flow; the flow is mostly of variable acceleration. It is assumed that the object is moving at speed V_0, as shown in Fig 4A, and the linear array sensor starts to acquire at time t_0. The instantaneous flow velocities of the object at t_1, t_2, and t_3 are V_1, V_2, and V_3, respectively. That is, the object has three accelerations a_0, a_1, and a_2 over the three time periods while flowing through the linear array sensor. In this case, this paper adopts an iterative mapping method that maps the acceleration-change times of the line-scan coordinate system onto the object coordinate system, so that the different acceleration regions of the object correspond one-to-one to regions of the line scan. Then the object coordinate system is mapped onto the reconstructed image in the line-scan coordinate system. Fig 4B is a schematic diagram of an object passing through the linear array sensor under this speed change, showing the position of the linear array sensor on the object at each time point. At time t_1, the object and the linear array sensor intersect at the two points a and b, and the area between the origin on the object and the two points a and b flows through the linear array sensor with V_0 as the initial speed and a_0 as the acceleration.
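The root selection for the quadratic mapping can be sketched as below; taking the smaller non-negative root as the first crossing of the line array is an assumption about which branch is physical:

```python
import math

def solve_scan_line(A, B, C):
    """Physical root of A*y'^2 + B*y' + C = 0 mapping an object coordinate
    into the line-scan coordinate system. Returns the smaller non-negative
    root (the first time the point crosses the array), or None if the
    point never crosses within this acceleration segment."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return None
    roots = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1, -1)]
    valid = [r for r in roots if r >= 0]
    return min(valid) if valid else None
```

Usage: `solve_scan_line(1.0, -5.0, 6.0)` selects 2.0 rather than 3.0, discarding the later, unphysical crossing.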
Similarly, the area between the four points a, b, c, and d on the object flows through the line array sensor with V_1 as the initial speed and a_1 as the acceleration. If the object is scanned starting from line y' over the interval t_0 to t_1, the distance the object moves along the y' axis in this period is

S = V_0·(t_1 − t_0) + (1/2)·a_0·(t_1 − t_0)².

From this displacement, the distance from the point b to the x axis can be obtained. Considering the linear array sensor as a straight line, the equation of this line at time t_1 can be obtained from the slope of the linear array sensor in the coordinate system C1; similarly, the equation at time t_2 follows. Then the three acceleration regions can be mapped into the coordinate system of the object.

To generalize this to K accelerations, Eq (22) can be written for each acceleration segment, and Eq (23) takes the corresponding segment-wise form. Then the coordinates of the object coordinate system are mapped to the line-scan coordinate system, and the speed information of the corresponding region is brought into the corresponding pixel value. By solving the quadratic equation of each acceleration region, the coordinates (x'_i, y'_i) of the line-scan coordinate system, mapped from the coordinates (x_i, y_i) of the object coordinate system, can be obtained. It should be noted that (x'_i, y'_i) are coordinates relative to each acceleration segment; the coordinates (x''_i, y''_i) in the coordinate system C2 are obtained by adding the scan-line offset accumulated over the preceding segments. Then the pixel value at (x_i, y_i) can be calculated from the pixels around the coordinate (x''_i, y''_i).
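The bookkeeping from segment-relative coordinates (x'_i, y'_i) to global scan coordinates (x''_i, y''_i) can be sketched as an accumulated line offset; both helpers here are hypothetical illustrations of that step, not the paper's code:

```python
def displacement(v0, a, t):
    """Distance covered from initial speed v0 under constant acceleration a."""
    return v0 * t + 0.5 * a * t * t

def segment_offsets(seg_durations, f):
    """Scan-line offset of each acceleration segment in C2: a segment of
    duration seg_durations[k] seconds contributes seg_durations[k] * f lines,
    so y''_i = y'_i + offsets[k] for a point solved within segment k."""
    offsets, total = [], 0.0
    for t in seg_durations:
        offsets.append(total)
        total += t * f
    return offsets
```

For example, with segments of 1 s, 2 s, and 1 s at f = 10 lines/s, the three segments start at global lines 0, 10, and 30.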

Analysis of cell foreground extraction
We used 20μm microspheres as test objects, and the angle of the dual-line array sensor was 21 degrees. When the number of acquisition lines of the sensor is set to 10, the frame rate is 1230 fps. The flow rate of the solution is related to the sampling rate of the image sensor: the higher the sampling rate, the more samples can be processed per unit time. After considering these issues, this paper chooses a suitable solution flow rate of 5 μL/min to 10 μL/min. We extract an image every 10 frames in Fig 5, and the microsphere flowed through in 0.12s. In our system, only two lines of pixels are used to reconstruct super-resolution images. This section explains the foreground extraction of the scanned image. Fig 6A is an image of 500 lines scanned by the first linear array sensor. Due to the unevenness of the light source, there are vertical stripes of uneven brightness and width on the scanned image. With the number of cache lines N taken as 20, the plain background mean model and the algorithm of this paper were both tested. Fig 6B is the microsphere scanned image extracted by the background mean model: the microsphere foreground can be separated, but the uneven background noise is not eliminated very well. Differently, in our algorithm the pixel of the foreground difference information is first assigned 1 when its value is between −15 and 15, and 0 otherwise, as shown in Fig 6C. Obviously, this roughly divides the image into the white background part and the black foreground part. After the influence of the foreground on the background mean model is avoided in this way, the extracted microsphere scanned image is shown in Fig 6D. Compared with the image extracted by the plain background mean model, the algorithm proposed in this paper leaves less background noise. Therefore, background interference can be largely reduced, and cell images can be extracted more accurately.

To improve the search efficiency of the SSD matching algorithm, this paper sets up the SSD matching search area on the second linear array sensor based on the feature-point coordinates of the first linear array sensor. Using this algorithm with H = W = 20, 11 feature points are detected and matched on the scanning image (Fig 9A). The first and third rows are the image patches corresponding to the feature points on the first linear array, and the second and fourth rows are those corresponding to the feature points on the second linear array. As seen in Fig 9B, the feature-point matching is accurate. Then the velocities of all 11 feature points and the 10 accelerations, shown in Table 1, can be calculated by applying Eqs (1), (2), (12), and (13). When the microspheres flow in the channel, the maximum change of the transverse velocity along the channel direction is about 216 μm/s, and that of the longitudinal velocity is about 514 μm/s. It is obvious that the horizontal and vertical velocities are not stable during cell flow, and the instantaneous velocity of cell flow can be accurately calculated using the method presented in this paper.

Reconstruction results
Having gathered the velocity information of the microsphere, the scanned image of the 20μm microsphere was reconstructed. To show the superiority of the variable-acceleration algorithm, we reconstructed the microsphere with both the uniform-velocity and the variable-acceleration algorithms, and the results are shown in Fig 10A and 10B, respectively. Significantly, the latter is much better than the former, and the multi-order diffraction ring of the 20μm microsphere can be observed from the reconstructed image with little distortion. This shows that the variable-acceleration algorithm is necessary when the actual flow direction and speed of the cell are variable. The resolution is related to the pixel size and the tilt angle of the linear array sensor. According to Eq (3), the equivalent pixel size is 0.79μm, a 2.78-fold improvement over the pixel size of the area array sensor. When the microsphere is collected by the area array sensor with the same pixel size of 2.2μm, its resolution is very low, as shown in Fig 10C. The details are much fuzzier than those in Fig 10B even when the image is enlarged 2.78 times in Fig 10D.
The reconstructed super-resolution image is a diffraction image of the microsphere, and the image of the microsphere itself can be recovered by the de-diffraction algorithm. This algorithm has been studied in paper [19] and is cited directly here. Fig 11A shows an image of a 20μm microsphere under a 10x microscope. After being magnified four times, the de-diffracted image of Fig 10D is shown in Fig 11B, and the de-diffracted image of Fig 10B is shown in Fig 11C. The pixel values along the white line are plotted in Fig 11D; comparing these images, the recovery result in this paper has smoother edges and clearer details. We calculated the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) for the enlarged image of the area array sensor and the de-diffraction image of the dual-line array sensor. In Table 2, because the ideal image is used as the reference image, the PSNR and SSIM of all images are relatively low. However, the de-diffraction image of the dual-line array sensor has a higher PSNR (improved 1.62 times), and its SSIM is closest to 1 (improved 3.96 times). After de-diffraction, the size of this microsphere is 20.7375μm, and the error of microsphere size in our experiment is less than 10%. We calculated the sizes and errors of 50 microsphere images from the dual-line array sensor in Fig 12. The calculated diameters are shown in Fig 12A, where the white columns represent values above the true diameter and the black columns values below it. It can be seen that the error between the calculated and real diameters is small, almost within 2μm. Meanwhile, the diameter error of each microsphere is also shown in Fig 12B. In the above test, the angle between the micro-channel and the linear array sensor is 21 degrees, and the sensor pixel size is 2.2μm. If a sensor with smaller pixels or a smaller angle is used, the method in this paper is still applicable, and the equivalent pixel size is smaller, as shown in Fig 13. The resolution magnification is only related to the tilt angle, not the pixel size.
We performed the same experiment with angles of 15 and 10 degrees and a pixel size of 2.2μm (Fig 14). At 15 degrees, the equivalent pixel size is 0.569μm; at 10 degrees, it is 0.382μm. This is smaller than the equivalent pixel sizes of 0.775μm in paper [8] and 0.770μm in paper [20], which were achieved with an image sensor of 1.67μm pixel size.
The quantity of information can be evaluated by image entropy: the higher the value, the more information. In Table 3, as the angle decreases, the image entropy at the real size becomes smaller and smaller, and when these images are enlarged to the same size, the image entropy decreases greatly. This means that when the angle is smaller, the image resolution is higher, but the amount of information is less. Therefore, the tilt angle can be selected conveniently to meet different needs, simply by rotating the microfluidic chip.
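Image entropy as used here can be computed from the gray-level histogram; a minimal sketch for 8-bit images:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits.
    Higher values indicate more information content in the image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-np.sum(p * np.log2(p)))
```

A uniform image scores 0 bits; an image split evenly between two gray levels scores 1 bit, which matches the intuition that a lower-entropy reconstruction carries less information.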

Conclusion
In summary, a super-resolution scanning system using the dual-line array image sensor is demonstrated to obtain super-resolution images of cells. Firstly, a method combining a background mean model with multi-threshold foreground coarse segmentation is designed to extract the cell foreground information from the line-scan image. Secondly, the multiple sets of velocities and accelerations of cells passing through the linear array sensors are calculated with the MSER and SSD algorithms. Then the reconstruction model of the scanning image is derived for uniform-speed, uniform-acceleration, and variable-acceleration flow. Finally, the super-resolution image of the cells can be reconstructed. When the pixel size of the linear array sensor is 2.2μm and the angle is 21 degrees, the equivalent pixel size is 0.79μm, a 2.8-fold improvement, compared with the 2.15-fold improvement of papers [8,20]. After de-diffraction, the size error of the 20μm microsphere was 3.249%, the PSNR was improved 1.62 times, and the SSIM was improved 3.96 times. With the same system structure, the equivalent pixel size can reach 0.382μm at an angle of 10 degrees, though the image entropy also decreases. Furthermore, the resolution and the solution flow rate can be improved by using image sensors with smaller pixels and higher sampling rates, or by using multi-channel high-throughput microfluidic chips, with which high-throughput analysis can be achieved as in paper [21]. Therefore, these results demonstrate that the proposed super-resolution scanning algorithm and system are effective. The application of this algorithm in lensless optical fluid microscopy can provide a more convenient approach to cell detection instruments.