
Deconvolution Effect Demonstration
2012-04-13 08:47:17 | Algorithms for Deconvolution Microscopy
Over the past ten years, a wide variety of both simple and complex algorithms has been developed to assist the microscopist in removing blur from digital images. The most commonly utilized algorithms for deconvolution in optical microscopy can be divided into two classes: deblurring and image restoration. Deblurring algorithms are fundamentally two-dimensional, because they apply an operation plane-by-plane to each two-dimensional plane of a three-dimensional image stack. In contrast, image restoration algorithms are properly termed "three-dimensional" because they operate simultaneously on every pixel in a three-dimensional image stack.
Before continuing, several technical terms must be defined. The object refers to a three-dimensional pattern of light emitted by fluorescent structures in the microscope's field of view. A raw image refers to an unprocessed digital image or image stack acquired from the microscope. Particular regions of interest within the image are referred to as features.
Deblurring Algorithms
Although common algorithms often referred to as nearest-neighbor, multi-neighbor, no-neighbor, and unsharp masking are fundamentally two-dimensional, they are classified for the purposes of this discussion as deblurring algorithms. As a class, these algorithms apply an operation plane-by-plane to each two-dimensional plane of a three-dimensional image stack. For example, the nearest-neighbor algorithm operates on the plane z by blurring the neighboring planes (z + 1 and z - 1, using a digital blurring filter), then subtracting the blurred planes from the z plane. Multi-neighbor techniques extend this concept to a user-selectable number of planes. A three-dimensional stack is processed by applying the algorithm to every plane in the stack. In this manner, an estimate of the blur is removed from each plane. Figure 1 presents a single focal plane selected from a three-dimensional stack of optical sections, before processing (Figure 1(a)), and after deconvolution by a nearest-neighbor algorithm (Figure 1(b)). The data were acquired from a preparation of Xenopus cells stained for microtubules.
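The plane-by-plane logic can be sketched in a few lines of NumPy. This is an illustrative sketch only: the box-blur filter and the subtraction weight `c` are assumed stand-ins, not the parameters of any particular package.

```python
import numpy as np

def box_blur(plane, k=5):
    """Simple k x k mean filter standing in for the digital blurring filter."""
    pad = k // 2
    padded = np.pad(plane, pad, mode="edge")
    out = np.zeros_like(plane, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + plane.shape[0], dx:dx + plane.shape[1]]
    return out / (k * k)

def nearest_neighbor_deblur(stack, c=0.45, k=5):
    """Subtract blurred copies of the adjacent planes (z - 1 and z + 1)
    from each plane z of a (nz, ny, nx) image stack."""
    stack = stack.astype(float)
    nz = stack.shape[0]
    out = np.empty_like(stack)
    for z in range(nz):
        below = box_blur(stack[max(z - 1, 0)], k)       # plane z - 1, clamped at the ends
        above = box_blur(stack[min(z + 1, nz - 1)], k)  # plane z + 1
        out[z] = stack[z] - c * 0.5 * (below + above)
    return np.clip(out, 0.0, None)  # negative fluorescence is not physical
```

Note that the blurred-neighbor estimate is subtracted outright, which is why these methods lower overall signal levels.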
The deblurring algorithms are computationally economical because they involve relatively simple calculations performed on single image planes. However, there are several major disadvantages to these approaches. First, noise from several planes is added together. Second, deblurring algorithms remove blurred signal and thus reduce overall signal levels. Third, features whose point spread functions overlap in particular z planes may be sharpened in planes where they do not really belong (in effect, the apparent position of features may be altered). This problem is particularly severe when deblurring single two-dimensional images because they often contain diffraction rings or light from other structures that will then be sharpened as if they were in that focal plane. Taken together, these findings indicate that deblurring algorithms improve contrast, but they do so at the expense of decreasing the signal-to-noise ratio and may also introduce structural artifacts in the image.
Two-dimensional deblurring algorithms may be useful in situations where a quick deblurring operation is warranted or when computer power is limited. These routines work best on specimens that have fluorescent structures distributed discretely, especially in the z-axis. However, simple deblurring algorithms induce artifactual changes in the relative intensities of pixels and should be applied with caution (or preferably, not applied at all) to images destined for morphometric measurements, quantitative fluorescence intensity measurements, and intensity ratio calculations.
Image Restoration Algorithms
The primary function of image restoration algorithms in deconvolution microscopy is to deal with blur as a three-dimensional problem. Instead of subtracting blur, they attempt to reassign blurred light to the proper in-focus location. This is performed by reversing the convolution operation inherent in the imaging system. If the imaging system is modeled as a convolution of the object with the point spread function, then a deconvolution of the raw image should restore the object. However, the object cannot be restored perfectly because of the fundamental limitations inherent in the imaging system and the image-formation model. The best that can be done is to estimate the object given these limitations. Restoration algorithms estimate the object, following the logic that a good estimate of the object is one that, when convolved with the point spread function, yields the raw image.
An advantage of this formulation is that convolution operations on large matrices (such as a three-dimensional image stack) can be computed very simply using the mathematical technique of Fourier transformation. If the image and point spread function are transformed into Fourier space, the convolution of the image by the point spread function can be computed simply by multiplying their Fourier transforms. The resulting Fourier image can then be back-transformed into real three-dimensional coordinates.
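This equivalence (the convolution theorem) is easy to verify numerically. The snippet below is a self-contained illustration, comparing FFT-space multiplication against an explicit circular convolution on a small 2D array:

```python
import numpy as np

def circular_convolve_direct(a, b):
    """Explicit (slow) circular convolution, for comparison only."""
    n, m = a.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
psf = rng.random((8, 8))

# Convolution theorem: multiply the transforms, then back-transform
via_fft = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)).real

assert np.allclose(via_fft, circular_convolve_direct(img, psf))
```

The FFT route costs O(N log N) rather than O(N^2), which is what makes whole-stack restoration practical.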
Inverse Filter Algorithms
The first image deconvolution algorithms to be developed were termed inverse filters. Such filters, along with their cousins the regularized inverse filters, have been employed in electronic signal processing since the 1960s and were first applied to images in the late 1970s. In most image-processing software programs, these algorithms go by a variety of names including Wiener deconvolution, Regularized Least Squares, Linear Least Squares, and Tikhonov-Miller regularization.
An inverse filter functions by taking the Fourier transform of an image and dividing it by the Fourier transform of the point spread function. Because division in Fourier space is the equivalent of deconvolution in real space, this is the simplest method to reverse the convolution that produced the blurry image. The calculation is rapid, about as fast as the two-dimensional deblurring methods discussed above. However, the utility of this method is limited by noise amplification: during division in Fourier space, small noise variations in the Fourier transform are greatly magnified. The result is that blur removal must be traded off against a gain in noise. In addition, an artifact known as ringing can be introduced.
Noise amplification and ringing can be reduced by making some assumptions about the structure of the object that gave rise to the image. For instance, if the object is assumed to be relatively smooth, noisy solutions with rough edges can be eliminated. This approach is termed regularization. A regularized inverse filter can be described as a statistical estimator that applies a certain kind of constraint on possible estimates, given some assumption about the object: in this case, smoothness. A constraint on smoothness enables the algorithm to select a reasonable estimate out of the large number of possible estimates that can arise because of noise variability.
Regularization can be applied in one step within an inverse filter, or it can be applied iteratively. The result is usually smoothed (stripped of higher Fourier frequencies). Much of the "roughness" being removed in the image occurs at Fourier frequencies well beyond the resolution limit and, therefore, the process does not eliminate structures recorded by the microscope. However, because there is a potential for loss of detail, software implementations of inverse filters typically include an adjustable parameter that enables the user to control the tradeoff between smoothing and noise amplification.
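A minimal regularized inverse filter can be sketched as follows. The constant `k` plays the role of the user-adjustable smoothing parameter just described; the function name, the PSF convention (peak at pixel (0, 0), matching circular FFT convolution), and the default value of `k` are assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=0.01):
    """Regularized inverse filter: divide in Fourier space, with the
    constant k damping frequencies where the PSF transform is small,
    which is exactly where noise would otherwise be amplified.
    `psf` is assumed to be the same shape as `image`, peak at (0, 0)."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.fft.ifft2(F).real
```

As k approaches zero this reduces to the plain inverse filter, with its full noise amplification; larger k trades fine detail for smoothness.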
Constrained Iterative Algorithms
In order to improve the performance of inverse filters, a variety of additional three-dimensional algorithms can be applied to the task of image restoration. These methods are collectively termed constrained iterative algorithms, and operate in successive cycles (thus, the term "iterative"). In addition, these algorithms also usually apply constraints on possible solutions, which not only help to minimize noise and other distortion, but also increase the power to restore blurred signal.
A typical constrained iterative algorithm operates as follows. First, an initial estimate of the object is performed, which is usually the raw image itself. The estimate is then convolved with the point spread function, and the resulting "blurred estimate" is compared with the original raw image. This comparison is employed to compute an error criterion that represents how similar the blurred estimate is to the raw image. Often referred to as a figure of merit, the error criterion is then utilized to alter the estimate in such a way that the error is reduced. A new iteration then takes place: the new estimate is convolved with the point spread function, a new error criterion is computed, and so on. The best estimate will be the one that minimizes the error criterion. As the algorithm progresses, each time the error criterion is determined not to have been minimized, the new estimate is blurred again, and the error criterion recomputed. The entire process is repeated until the error criterion is minimized or reaches a defined threshold. The final restored image is the object estimate at the last iteration.
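The cycle above can be sketched as a Van Cittert-style additive iteration. The relaxation factor `alpha`, the fixed iteration count, and the FFT-based circular convolution are illustrative assumptions, not the details of any commercial product:

```python
import numpy as np

def constrained_iterative_deconvolve(raw, psf, n_iter=50, alpha=1.0):
    """Generic constrained iterative restoration sketch:
    blur the estimate, compare it with the raw image, correct the
    estimate by the residual, and enforce non-negativity each cycle.
    `psf` is assumed to be the same shape as `raw`, peak at (0, 0)."""
    psf_fft = np.fft.fft2(psf)
    est = raw.astype(float)          # initial estimate: the raw image itself
    for _ in range(n_iter):
        blurred = np.fft.ifft2(np.fft.fft2(est) * psf_fft).real
        residual = raw - blurred     # drives the error criterion downward
        est = np.clip(est + alpha * residual, 0.0, None)  # non-negativity constraint
    return est
```

Here the residual between the raw image and the re-blurred estimate serves as the error criterion, and the clip to zero is the non-negativity constraint discussed below.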
The data presented in Figure 2 are taken from a three-dimensional image stack containing 70 optical sections, recorded at intervals of 0.2 micrometers through a single XLK2 cell. A widefield imaging system, equipped with a high numerical aperture (1.40) oil immersion objective, was employed to acquire the images. The left-hand image (Figure 2(a), labeled Original Data) is a single focal plane taken from the three-dimensional stack, before the application of any data processing. Deblurring by a nearest-neighbor algorithm produced the result shown in the image labeled Nearest Neighbor (Figure 2(b)). The third image (Figure 2(c), Restored) illustrates the result of restoration by a commercially marketed constrained iterative deconvolution software product. Both deblurring and restoration improve contrast, but the signal-to-noise ratio is significantly lower in the deblurred image than in the restored image. The scale bar in Figure 2(c) represents a length of 2 micrometers, and the arrow (Figure 2(a)) designates the position of the line plot presented in Figure 4.
A majority of the algorithms currently applied to deconvolution of optical images from the microscope incorporate constraints on the range of allowable estimates. A commonly employed constraint is smoothing or regularization, as discussed above. As iterations proceed, the algorithm will tend to amplify noise, so most implementations suppress this with a smoothing or regularization filter.
Another common constraint is non-negativity, which means that any pixel value in the estimate that becomes negative during the course of an iteration is automatically set to zero. Pixel values can often become negative as the result of a Fourier transformation or subtraction operation in the algorithm. The non-negativity constraint is realistic because an object cannot have negative fluorescence. It is essentially a constraint on possible estimates, given our knowledge of the object's structure. Other types of constraints include boundary constraints on pixel saturation, constraints on noise statistics, and other statistical constraints.
Classical Algorithms for Constrained Iterative Deconvolution
The first applications of constrained iterative deconvolution algorithms to images captured in the microscope were based on the Jansson-Van Cittert (JVC) algorithm, a procedure first developed for application in spectroscopy. Agard later modified this algorithm for analysis of digital microscope images in a landmark series of investigations. Commercial firms such as Vaytek, Intelligent Imaging Innovations, Applied Precision, Carl Zeiss, and Bitplane currently market various implementations of Agard's modified algorithm. In addition, several research groups have developed a regularized least squares minimization method that has been marketed by Vaytek and Scanalytics. These algorithms utilize an additive or multiplicative error criterion to update the estimate at each iteration.
Statistical Iterative Algorithms
Another family of iterative algorithms uses probabilistic error criteria borrowed from statistical theory. Likelihood, in effect a reverse view of probability, is employed in the commercially available maximum likelihood estimation (MLE) and expectation maximization (EM) algorithms implemented by SVI, Bitplane, ImproVision, Carl Zeiss, and Autoquant. Maximum likelihood estimation is a popular statistical tool with applications in many branches of science. A related statistical measure, maximum entropy (ME, not to be confused with expectation maximization, EM), has been implemented in image deconvolution by Carl Zeiss.
Statistical algorithms are more computationally intensive than the classical methods and can take significantly longer to reach a solution. However, they may restore images to a slightly higher degree of resolution than the classical algorithms. These algorithms also have the advantage that they impose constraints on the expected noise statistics (in effect, a Poisson or a Gaussian distribution). As a result, statistical algorithms handle noise more subtly than simple regularization, and they may produce better results on noisy images. However, the choice of an appropriate noise statistic may depend on the imaging conditions, and some commercial software packages are more flexible than others in this regard.
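The best-known algorithm in this family is the Richardson-Lucy iteration, the classical maximum likelihood estimator under a Poisson noise model. The sketch below is an illustration only, assuming a normalized PSF centered at pixel (0, 0) and circular FFT convolution:

```python
import numpy as np

def richardson_lucy(raw, psf, n_iter=30):
    """Richardson-Lucy: multiplicative MLE updates for Poisson noise.
    Multiplication by a ratio keeps the estimate non-negative and,
    for a normalized PSF, approximately conserves total intensity."""
    H = np.fft.fft2(psf)
    est = np.full(raw.shape, raw.mean())   # flat, positive starting estimate
    for _ in range(n_iter):
        blurred = np.fft.ifft2(np.fft.fft2(est) * H).real
        ratio = raw / np.maximum(blurred, 1e-12)
        # correlate the ratio with the PSF (conjugate transform in Fourier space)
        est = est * np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)).real
    return est
```

The multiplicative update is why non-negativity comes for free in this family, in contrast to the explicit clipping needed by additive schemes.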
Blind Deconvolution Algorithms
Blind deconvolution is a relatively new technique that greatly simplifies the application of deconvolution for the non-specialist, but the method is not yet widely available in the commercial arena. The algorithm was developed by altering the maximum likelihood estimation procedure so that not only the object, but also the point spread function is estimated. Using this approach, an initial estimate of the object is made and the estimate is then convolved with a theoretical point spread function calculated from optical parameters of the imaging system. The resulting blurred estimate is compared with the raw image, a correction is computed, and this correction is employed to generate a new estimate, as described above. This same correction is also applied to the point spread function, generating a new point spread function estimate. In further iterations, the point spread function estimate and the object estimate are updated together.
Blind deconvolution works quite well, not only on high-quality images, but also on noisy images or those suffering from spherical aberration. The algorithm begins with a theoretical point spread function, but adapts it to the specific data being deconvolved. In this regard, it spares the user from the difficult process of experimentally acquiring a high-quality empirical point spread function. In addition, because the algorithm adjusts the point spread function to the data, it can partially correct for spherical aberration. However, this computational correction should be a last resort, because it is far more desirable to minimize spherical aberration during image acquisition.
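A heavily simplified sketch of this alternating scheme, built from Richardson-Lucy-style multiplicative updates, is given below. Real blind deconvolution packages add constraints on the PSF (band limits, symmetry, regularization) that are omitted here, so this should be read as a structural outline rather than a working substitute:

```python
import numpy as np

def rl_update(x, raw, kernel):
    """One Richardson-Lucy-style multiplicative update of x, treating
    `kernel` as the other (temporarily fixed) factor of the convolution."""
    K = np.fft.fft2(kernel)
    blurred = np.fft.ifft2(np.fft.fft2(x) * K).real
    ratio = raw / np.maximum(blurred, 1e-12)
    return x * np.fft.ifft2(np.fft.fft2(ratio) * np.conj(K)).real

def blind_deconvolve(raw, psf_initial, n_iter=20):
    """Alternate updates of the object estimate and the PSF estimate.
    `psf_initial` plays the role of the theoretical starting PSF
    (same shape as `raw`, peak at pixel (0, 0))."""
    est = np.full(raw.shape, raw.mean())
    psf = psf_initial.copy()
    for _ in range(n_iter):
        est = rl_update(est, raw, psf)   # object update with current PSF
        psf = rl_update(psf, raw, est)   # PSF update with current object
        psf = np.clip(psf, 0.0, None)
        psf /= psf.sum()                 # a PSF must integrate to one
    return est, psf
```

Because circular convolution commutes, the same update rule serves for both the object and the PSF, with their roles swapped.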
The results of applying three different processing algorithms to the same data set are presented in Figure 3. The original three-dimensional data are 192 optical sections of a fruit fly embryo leg acquired in 0.4-micrometer z-axis steps with a widefield fluorescence microscope (1.25 NA oil objective). The images represent a single optical section selected from the three-dimensional stack. The original (raw) image is illustrated in Figure 3(a). The results of deblurring by a nearest-neighbor algorithm appear in Figure 3(b), with processing parameters set for 95 percent haze removal. The same image slice is illustrated after deconvolution by an inverse (Wiener) filter (Figure 3(c)), and by iterative blind deconvolution (Figure 3(d)), incorporating an adaptive point spread function method.
Deconvolution of Confocal and Multiphoton Images
As might be expected, it is also possible to restore images acquired with a confocal or multiphoton optical microscope. The combination of confocal microscopy and deconvolution techniques improves resolution beyond what is generally attainable with either technique alone. However, the major benefit of deconvolving a confocal image is not so much the reassignment as the averaging of out-of-focus light, which results in decreased noise. Deconvolution of multiphoton images has also been successfully utilized to remove image artifacts and improve contrast. In all of these cases, care must be taken to apply the appropriate point spread function, especially if the confocal pinhole aperture is adjustable.
Implementation of Deconvolution Algorithms
Processing speed and quality are dramatically affected by how a given deconvolution algorithm is implemented by the software. The algorithm can be implemented in ways that reduce the number of iterations and accelerate convergence to a stable estimate. For example, the unoptimized Jansson-Van Cittert algorithm usually requires between 50 and 100 iterations to converge to an optimal estimate. By prefiltering the raw image to suppress noise and correcting with an additional error criterion on the first two iterations, the algorithm converges in only 5 to 10 iterations. In addition, a smoothing filter is usually applied every five iterations to curtail noise amplification.
When using an empirical point spread function, it is critical to use a high-quality point spread function with minimal noise. No deconvolution package currently on the market uses the "raw" point spread function recorded directly from the microscope. Instead, the packages contain preprocessing routines that reduce noise and enforce radial symmetry by averaging the Fourier transform of the point spread function. Many software packages also enforce axial symmetry in the point spread function and thus assume the absence of spherical aberration. These steps reduce noise and aberrations, and make a large difference in the quality of restoration.
Another important aspect of deconvolution algorithm implementation is preprocessing of the raw image, via routines such as background subtraction, flat-field correction, bleaching correction, and lamp jitter correction. These operations can improve the signal-to-noise ratio and remove certain kinds of artifacts. Most commercially available software packages include such operations, and the user manual should be consulted for a detailed explanation of specific aspects of their implementation.
Other deconvolution algorithm implementation issues concern data representation. Images can be divided into subvolumes or represented as entire data blocks. Individual pixel values can be represented as integers or as floating-point numbers. Fourier transforms can be represented as floating-point numbers or as complex numbers. In general, the more faithful the data representation, the more computer memory and processor time required to deconvolve an image. Thus, there is a tradeoff between the speed of computation and the quality of restoration.
Conclusions
Iterative restoration algorithms differ from both deblurring algorithms and confocal microscopy in that they do not remove out-of-focus blur but instead attempt to reassign it to the correct image plane. In this manner, out-of-focus signal is utilized rather than being discarded. After restoration, pixel intensities within fluorescent structures increase, but the total summed intensity of each image stack remains the same, as intensities in formerly blurred areas diminish. Blur occurring in surrounding details of the object is moved back into focus, resulting in sharper definition of the object and better differentiation from the background. Better contrast and a higher signal-to-noise ratio are also usually achieved at the same time.
These properties are illustrated in Figure 2, where it is demonstrated that restoration improves image contrast and subsequently enables better resolution of objects, without the introduction of noise that occurs in deblurring methods. Perhaps more importantly for image analysis and quantitation, the sum of the fluorescence signal in the raw image is identical to that in the deconvolved image. When properly implemented, image restoration methods preserve total signal intensity but improve contrast by adjustment of signal position (Figure 4). Therefore, quantitative analysis of restored images is possible and, because of the improved contrast, often desirable.
The graphical plot presented in Figure 4 represents the pixel brightness values along a horizontal line traversing the cell illustrated in Figure 2 (the line position is shown by the arrow in Figure 2(a)). The original data are represented by the green line, the deblurred image data by the blue line, and the restored image data by the red line. As is apparent in the data, deblurring causes a significant loss of pixel intensity over the entire image, whereas restoration results in a gain of intensity in areas of specimen detail. A similar loss of image intensity as that seen with the deblurring method occurs with the application of any twodimensional filter.
When used in conjunction with widefield microscopy, iterative restoration techniques are light efficient. This aspect is most valuable in light-limited applications such as high-resolution fluorescence imaging, where objects are typically small and contain few fluorophores, or in live-cell fluorescence imaging, where exposure times are limited by the extreme sensitivity of live cells to phototoxicity.
Source: as above
Deconvolution Effect 1
2012-04-12 16:32:42 | Widefield Images
[Before/after image pair: Raw vs. Deconvolution]
AutoQuant X consists of two of the most advanced image deconvolution and 3D visualization software products available today! AutoQuant X comes with both AutoDeblur and AutoVisualize.
AutoDeblur
Offers the most powerful Deconvolution tools, including AutoQuant X's proprietary Blind Deconvolution. The Blind Deconvolution algorithm is both iterative and constrained, yet unlike other deconvolution products, it does not require the manual calibration and measurement of the point spread function (PSF). Instead, it constructs the PSF directly from the collected data set.
All microscopes are limited by the laws of physics, and these laws state that when light passes through a medium, that light will bend. This is one of the most common causes of haze and blur in microscopy images. Deconvolution can correct this problem, not only removing the haze and blur, but restoring vital detail to datasets.
AutoDeblur works with Widefield Epi-Fluorescence, DIC, Transmitted-Light Brightfield and most Confocal microscopes including Two-Photon and Spinning Disk.
On the hardware side of the application, AutoDeblur is also multithreaded. This gives you the ability to run concurrent processes. To take it a step further, dual processors can be utilized to their maximum potential by running CPU intensive processes concurrently on each one (such as deconvolution).
Please note, AutoQuant X operates most efficiently on a 64-bit Windows computer. While a 32-bit Windows machine can be used, processing time will increase tenfold.
AutoVisualize
The goal of creating, enhancing, and saving images is to see, analyze, and measure them from all angles. AutoVisualize lets you do just that. With AutoQuant X's 5D Viewer, time-series datasets can be rotated through time to any angle, an orthogonal slice can be moved through the dataset, movies showing the full rotation of the dataset can be created, and multiple projections are available for dynamic display of your data.
Side-by-side before-and-after comparison has never been easier than with AutoQuant X's multi-Previewer capability. Multiple concurrent 5D Viewers can be opened and synchronized for striking comparisons of data. Use the Movie Maker feature to create an .avi movie of your data rotating through time, then use it in a PowerPoint presentation or post it to the web.
Additionally, deconvolving and viewing the dataset are only part of the process; once these are done, analysis is the next step. With AutoVisualize, you can measure distances between objects, measure the surface area and volume of objects, and calculate statistics on the dataset.
Colocalization
Image-Pro Express is the cost-effective image enhancement software perfect for basic imaging or image capture stations.
As the first step in the Image-Pro software series, Image-Pro Express comes equipped with the basic features needed to capture and enhance images for scientific, medical, and industrial research.
FRET
Created for researchers focusing on protein-protein interactions, our FRET module incorporates the two most commonly accepted algorithms, Elangovan and Periasamy, and Gordon and Herman, and adds our own proprietary algorithm as well. All three algorithms correct cross-talk, but AutoQuant X's proprietary algorithm goes one step further. Where other algorithms make assumptions about the cross-talk, our Maximum Likelihood Estimation algorithm mathematically solves for it, allowing for a much more accurate analysis of the images.
FRET X also includes several helpful preprocessing tools to turn your images into precisely analyzed statistics. The Channel to Channel alignment tool corrects for shifts between channels, shifts that would otherwise corrupt your analyses. The Background Subtraction tool eliminates artifacts that can compromise the results of your analyses.
Image Alignment
A common problem during 3D image acquisition is misalignment between slices due to stage vibration, filter cube changes, or a host of other causes. AutoQuant X's Image Alignment module is the cure for this ill. With powerful and accurate algorithms for slice-to-slice as well as channel-to-channel alignment, AutoQuant X can correct virtually any misalignment issue, handling everything from vertical and horizontal shift to warping and rotation.
Object Counting and Tracking
The ultimate in time-series image analysis, our Object Counting and Tracking module can count a nearly unlimited number of objects. Our platform is multi-dimensionally aware and can load and process 3D time-series datasets, making it the ultimate tool for counting and tracking 3D objects through time. Counting and Tracking X has powerful and intuitive preprocessing tools that give complete control over the objects to be counted and tracked.
Once the objects have been counted, the objects can then be tracked through the time series. Easy to follow tracking lines show where the object has moved through time, for a vivid graphic depiction of the objects’ activities.
Finally, properties such as the size, circularity, volume, speed, acceleration, distance traveled between timepoints and much more can be calculated and exported to a spreadsheet for later analysis.
Ratiometrics
This module is tailored for intracellular ion imaging. Designed for researchers who study the effect of changing a sample's environment, comparing the same sample under differing calcium concentrations or pH values, Ratiometrics X fills the bill.
Ratiometrics X employs the Grynkiewicz Equation for Ion Concentration and produces accurate results with visuallyemphasized color mapping.
Built-in pre- and post-processing steps such as Automatic Alignment, Remove Spot Noise, and Gaussian Smoothing make for a cleaner resultant image with fewer steps for the user.
Once the preprocessing has been done, Ratiometrics X delivers concise statistics that are easily exported to an .xls file. Select specific areas to analyze by creating regions of interest, fully defined by the user. Get the results you need from the areas you want.
Deconvolution Algorithms
No/Nearest Neighbor: The No/Nearest Neighbor algorithms work by deblurring one 2D image slice at a time. They utilize a subtractive approach based on the simplifying approximation that the out-of-focus contribution in the image slice is equal to a blurred version of the collected adjacent slices. These algorithms are fast, qualitative, and work particularly well on images with strong signal-to-noise ratios.
Inverse Filter: The inverse filter, or Wiener filter, is a one-step image process performed in Fourier space by dividing the captured image by the PSF. This algorithm is a fast and effective way to remove the majority of the blur from widefield images using a symmetric or spherically aberrated theoretical or acquired point spread function. Image noise is managed through an adjustable smoothing operation applied during processing. Results are qualitative and generally better than those of the no/nearest neighbor algorithms, especially in the XZ and YZ perspectives.
Non-Blind: Non-Blind Deconvolution is a constrained iterative approach that requires a measured or synthetically generated PSF for processing. This algorithm is built on the same statistical and computational foundation as AutoQuant's renowned Adaptive Blind algorithm and shares the same superior noise handling characteristics and flexibility. However, the PSF provided is assumed to be accurate and is not modified during the deconvolution. Non-Blind offers an excellent balance between quality of results, quantitative analysis, and time to process.
Adaptive Blind: AutoQuant's Blind Deconvolution algorithm draws upon the statistical techniques of Maximum Likelihood Estimation (MLE) and Constrained Iteration (CI) to produce the most robust and statistically accurate results available on the market today. It does not require a measured or acquired PSF, but instead iteratively reconstructs both the underlying PSF and the best possible image solution from the collected 3D dataset. It is well suited to environments where signal-to-noise ratios are challenging and operates across the full spectrum of modalities.
2D Blind: 2D Blind Deconvolution is an adaptive method for 2D data that does not require microscope or image parameters. It works by iteratively improving the data set and supports time-series image sets, individual color channels, and intensity images. It is capable of restoring features at a sub-pixel resolution level and can work with almost any 2D image.
2D Real-time: Uses AutoQuant's powerful 2D blind deconvolution algorithm to remove blur from a single image. No microscope or image parameters are required. Useful sharper/smoother, thicker/thinner, and brighter/dimmer controls help guide you to get the very most out of your data. Deblur one frame instantaneously or several in near real-time.
Source: http://www.meyerinst.com/imagingsoftware/autoquant/index.htm

Deconvolution Effect 2
2012-04-12 16:42:22 | Digital image processing
Raoul Behrend
Geneva Observatory
CH-1290 Sauverny, Switzerland
Raoul.Behrend@unige.ch
My hobbies: celestial mechanics, astrometry, earth satellites, and so on. The corresponding programs I wrote are Aplaxxxx (complete offset-dark-flat treatment of CCD frames; coming soon as a stampware, i.e. freeware registered by a beautiful stamped postcard), BifsConv, Carte, Eph, Nifflo, Photo, MiniMir, DetOrb, SatJup, CourbRot, RotaRap, and others. They are made for personal computers (PC). A very nice application of DetOrb was the determination of the scale of the solar system using the parallax of the asteroid 2000 QW_{7}: 8.796±0.003" (document in French only).
Here are some pictures from my image processor BIFSxx. The left parts are the original images and the right parts show the results. The 48 supported formats are derived from the following families: bmp, imq (space probes, with or without Huffman compression), ibg, fit (fits; 1-, 2- and 4-byte integers, 8-byte reals), st* (st4, st4x, st5, st6, st7, st8, st9, stx, pixcel, with or without compression), cpa (~15 bits, with or without compression), t1m, arn, pds (and vicar), per (Titus format of satellite images), img, ccd, raw, pic, tif (tiff, 8 and 16 bits, uncompressed), mx5, hx5, xl8, gif, pcx, imi (TMSat satellite).
To limit bandwidth, the images in this document are the smaller of the jpg and png versions. The better-quality pictures can be accessed with a simple click.
Deconvolution and elimination of interference.
Turbulence on the surface of Jupiter. The dark current and flat field were not taken into account; this explains the embossed appearance. Origin: Voyager probe.
Deconvolution and elimination of interference.
Io. Origin: Voyager probe.
Deconvolution and elimination of interference.
Io's surface. Origin: Voyager probe.
Deconvolution and elimination of interference.
Europa. Origin: Voyager probe.
Deconvolution and elimination of interference.
Ganymede. Origin: Voyager probe.
Deconvolution and elimination of interference.
Callisto. Origin: Voyager probe.
Zoom, deconvolution and elimination of interference.
Amalthea. Origin: Voyager probe.
Deconvolution and elimination of interference.
Titan's atmosphere. Origin: Voyager 1 probe.
Histogram adaptation after deconvolution and elimination of interference.
Saturn's rings. Origin: Voyager probe.
Zoom, deconvolution and elimination of interference.
Triton. Origin: Voyager 2 probe.
Deconvolution.
The Sun's surface in the hydrogen-alpha band. Origin: the st4 of Armin Behrend.
Deconvolution.
Sunspot. Origin: the st4 of Armin Behrend.
Deconvolution.
Jupiter. Origin: negative film by Armin Behrend.
Removal of interference artifacts.
The surface of Mars. Origin: Viking probe.
Deconvolution (non-final version).
Mars: composite of twenty images. Origin: the StarLight Xpress HX516 camera of Martino Nicolini, at the Cavezzo Observatory.
Deconvolution.
Craters on the moon. Origin: sample image of the SBIG's st4.
Zoom and deconvolution.
Saturn. Origin: the st4 of Michel Mollard.
Deconvolution.
Nucleus and jets of comet Hale-Bopp. Origin: André Blécha. Remark: the telescope was under construction and the mirror had not yet been coated with aluminium; that is why the quality is not very high.
Deconvolution after sampling of the point spread function.
Unguided erratic motion of the telescope. Origin: negative film by Raphaël Jubin.
No correction for nonlinearities was made; the original image is in jpg format.
Deconvolution after sampling of the point spread function.
Unguided erratic motion of the telescope. Origin: negative film by Raphaël Jubin.
No correction for nonlinearities was made; the original image is in jpg format.
Deconvolution after sampling of the point spread function. Right: detail of the double star.
Unguided erratic motion of the telescope. Origin: Matthieu Conjat's ST4.
Deconvolution of three R, G and B images. Origin: the st7 of Richard Jacobs.
The trichromatic recombination is quickly done with Gimp, a PaintShopPro-like program.
Deconvolution and elimination of interference.
San Francisco Bay seen by the TMSat satellite, operated by the Thai MicroSatellite Company Ltd and Surrey Satellite Technology Ltd. The CCD image is raw, not corrected for the sensitivity of each column; that explains the horizontal lines.
Deconvolution.
The Grand Canyon seen by the UOSat12 satellite operated by Surrey Satellite Technology Ltd.
Deconvolution. Image of Jupiter taken by Bastien Confino, at OFXBStLuc.
Deconvolution. Image of Jupiter taken by Bastien Confino, at OFXBStLuc.
Local adaptation of contrast and binning of pixels. Calcifying supraspinatus tendinitis in the context of a shoulder periarthritis. (Medical information related to this picture: DrABiz.) Radiographic material kindly loaned by Sion Hospital and scanned at Geneva Observatory. Thanks to the binning, the pixels produced by the 8-bit scanner are combined into ~12-bit metapixels.
More information:
You may post here all kinds of images to test with BIFSxx, even in less common formats. Please avoid lossy jpg and immoral images! Do not preprocess nonlinearities (for example, histogram and gamma corrections, except for film); dark current, zero level and flat field corrections are warmly encouraged for CCD images. The best results will be shown here. The raw images of the Voyager and Viking probes are kindly supplied by NASA/JPL/CalTech on CD-ROM. The processed and raw images with no indication of origin are ©OMG. No reproduction without my prior agreement.
Some links:
- My personal homepage's FTP zone: some files and programs, depending on mood. Currently: SatJup, a small simulator of the Galilean moons.
- St42Bmp, a st4-to-bmp converter.
- BifsConv, a converter from many formats (bmp, imq, st4, stx, ima, cpa, pic, tif, gif, mx5, hx5, xl8, dat, pcx, per, dib, arn, blu, bb1, bb2, bb3, bb4, grn, irq, ibg, img, ir1, ir2, ir3, n07, n15, pds, red, sgr, sur, sun, vio, fts, 08b, raw, sbi, st5, st6, st7, st8, st9, 237, imi, ccd) to fit. The complete description is available here.
- The Tle.New file, which gives the orbital elements of satellites visible to the naked eye in the standard TLE format, is directly «pumped» from T.S. Kelso's site. Another very good source for TLEs is Mike McCants' site.
- The TleSort.Sat and Oldies.Sat files form a database of the orbital elements of artificial Earth satellites; they are also compressed with GZip. The format is close to the ordinary TLE format, but they may not be recognized by programs other than MiniMir (demos: DemoMir1 and DemoMir2).
- TleSort.zip contains all the files needed to run TleSort. Options for TleSort are:
  /L   sort by launch
  /C   sort in reverse order
  /L0  delete line 0, which may be incompatible with some software
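The offset/dark/flat pre-processing the author encourages above for CCD images follows a standard recipe: subtract the bias (zero level) and the scaled dark current, then divide by a normalized flat field. A generic sketch with made-up calibration frames (not BIFSxx's own code):

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exposure_s, dark_exposure_s):
    """Standard CCD pre-processing: subtract bias and exposure-scaled dark
    current, then divide by a unit-mean flat field. All frames are 2-D
    arrays of the same shape."""
    dark_scaled = (dark - bias) * (exposure_s / dark_exposure_s)
    flat_norm = (flat - bias) / np.mean(flat - bias)   # unit-mean flat
    return (raw - bias - dark_scaled) / flat_norm

# Synthetic example: a scene seen through vignetting, dark current and bias.
rng = np.random.default_rng(0)
bias = np.full((8, 8), 100.0)                   # 100 ADU zero level
dark = bias + 10.0                              # 10 ADU dark per dark exposure
flat = bias + 50.0 * np.linspace(0.5, 1.5, 8)   # vignetting-like gradient
scene = rng.uniform(200, 300, (8, 8))
raw = scene * (flat - bias) / np.mean(flat - bias) + bias + 10.0
cal = calibrate(raw, bias, dark, flat, exposure_s=60, dark_exposure_s=60)
assert np.allclose(cal, scene)  # the original scene is recovered
```

Doing this before any nonlinear step (histogram or gamma correction) matters because deconvolution assumes the pixel values are linear in intensity.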
Source: http://obswww.unige.ch/~behrend/page_mgo.html
Principles of Deconvolution (Transposed Convolution)
20180513 15:17:20
I. Introduction
Deconvolution can be understood as the inverse of the convolution operation. Do not assume that deconvolution can recover the original input values of a convolution; it has no such ability. It merely runs the steps of the convolution transform once in reverse: the kernel is transposed and convolved once more with the convolution output, which is why it is also called transposed convolution. Although it cannot restore the original input values, its effect is similar in practice: it can largely recover information with small missing parts, and it can restore the spatial dimensions of the original input. The concrete steps of deconvolution are as follows:
1. Flip the kernel (not a matrix transpose, but a reversal in both the vertical and horizontal directions).
2. Take the convolution output as the input and expand it by inserting zeros: along each stride direction, insert (stride - 1) zeros after each element. For a stride of 1, no zeros need to be inserted.
3. Zero-pad the expanded input as a whole. Taking the shape of the original input as the output shape, compute the positions and number of padding zeros according to the convolution padding rules introduced earlier; the resulting padding positions are then swapped top-to-bottom and left-to-right.
4. Use the zero-padded result as the actual input and the flipped kernel as the filter, and perform a convolution with stride 1. Note: when computing the padding, always use padding='SAME' with a 1*1 stride.
II. Example
The upper part of the figure shows a convolution of a [1,4,4,1] matrix with a 2*2 filter and a 2*2 stride. The corresponding deconvolution steps are shown in the lower part of the figure. In the deconvolution, the 2*2 matrix is first expanded to 4*4 by stride-based zero insertion, then zero-padded in the reverse direction, and finally convolved with the flipped filter using a 1*1 stride. The result, however, no longer equals the original all-ones matrix, which shows that transposed convolution can only recover part of the features and cannot restore the original data exactly.
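The four steps above can be sketched in plain NumPy for the single-channel, no-padding ('VALID') case, where the padding rule of step 3 reduces to padding (kernel size - 1) zeros on every side; the kernel values are made up for illustration:

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Plain 'VALID' 2-D convolution (cross-correlation, as in deep learning)."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def transposed_conv2d(y, k, stride=2):
    """Transposed convolution via the four steps in the text."""
    kh, kw = k.shape
    k_flip = k[::-1, ::-1]                       # step 1: flip, not transpose
    dil = np.zeros(((y.shape[0] - 1) * stride + 1,
                    (y.shape[1] - 1) * stride + 1))
    dil[::stride, ::stride] = y                  # step 2: insert (stride-1) zeros
    pad = np.pad(dil, ((kh - 1, kh - 1), (kw - 1, kw - 1)))  # step 3: pad sides
    return conv2d(pad, k_flip, stride=1)         # step 4: stride-1 convolution

def transposed_conv2d_scatter(y, k, stride=2):
    """Reference: the equivalent 'scatter' form of transposed convolution."""
    kh, kw = k.shape
    out = np.zeros(((y.shape[0] - 1) * stride + kh,
                    (y.shape[1] - 1) * stride + kw))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += y[i, j] * k
    return out

x = np.ones((4, 4))                              # the all-ones 4x4 input
k = np.arange(1, 5).reshape(2, 2).astype(float)  # made-up 2x2 kernel
y = conv2d(x, k, stride=2)                       # 2x2 feature map
x_rec = transposed_conv2d(y, k, stride=2)
assert x_rec.shape == x.shape                    # the spatial size comes back...
assert not np.allclose(x_rec, x)                 # ...but the values do not
assert np.allclose(x_rec, transposed_conv2d_scatter(y, k, stride=2))
```

The last assertion shows that the flip/zero-insert/pad recipe is exactly equivalent to scattering each output element through the kernel, which is how frameworks usually implement the operation.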