The processing and analysis of limited image data
Image big data and easy-to-use models such as deep networks (DNs) have accelerated the recent progress of artificial intelligence (AI). However, limited image data, often captured in complex, hostile, or otherwise adverse scenarios, remain challenging: objects are too small to be recognized; boundaries are fuzzy and overlapping; or the object information in the images is uncertain due to camouflage and occlusion. Limited image data are characterized by small sample sizes, small objects, incompleteness, and uncertainty. Clearly, handling limited image data differs from handling image big data: 1) By the statistical central limit theorem, image big data approximately fit Gaussian distributions (with mean μ and standard deviation σ), especially when the data scale is much larger than the data dimension. This property supports statistical inference via the 3σ rule, which states that 99.73% of samples lie within the range [μ-3σ, μ+3σ]. Arguably because of this concentration property, statistical AI models such as DNs have become very popular and seemingly successful. For small datasets, however, statistical consistency is usually poor, and robust features cannot be identified from concentration alone; statistical-inference AI models are therefore not feasible with small training data. 2) Image objects are often mutually occluded in costly and rare scenarios, and delicate camouflage and masks, poor or complex lighting environments, and hostile disturbances often render the image information itself, or some of its dimensions, incomplete and unreliable. These issues make the computation very heavy, because the missing information or dimensions admit a large number of possibilities. 3) Many big-data techniques have been proposed and appear highly competitive in the image processing field; for example, DNs have achieved top ranks on many publicly available image big datasets, which has greatly highlighted their contribution.
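The contrast drawn above, reliable 3σ inference on large samples versus unstable statistics on small ones, can be sketched with standard-library Python. The sample sizes, seed, and distribution parameters below are illustrative assumptions, not values from the text:

```python
import random
import statistics

random.seed(0)

def within_3sigma_fraction(samples):
    """Fraction of samples inside [mu - 3*sigma, mu + 3*sigma]."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return sum(mu - 3 * sigma <= x <= mu + 3 * sigma for x in samples) / len(samples)

# Large sample: empirical mean/std are stable estimates, so the 3-sigma
# rule captures roughly 99.73% of the data and outliers are easy to flag.
big = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Small sample: the estimated mean itself fluctuates heavily from draw to
# draw (its spread is about 1/sqrt(10) of the population sigma), so
# 3-sigma-style inference built on such estimates is unreliable.
small_means = [statistics.mean(random.gauss(0.0, 1.0) for _ in range(10))
               for _ in range(1_000)]

print(f"3-sigma coverage on 100k samples: {within_3sigma_fraction(big):.4f}")
print(f"spread of the estimated mean over 10-sample draws: "
      f"{statistics.pstdev(small_means):.3f}")
```

The first printed value sits near the theoretical 0.9973, while the second shows that a statistic estimated from ten samples varies by roughly a third of the true σ between draws, which is the "poor statistical consistency" the paragraph refers to.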
However, many face recognition systems fail to meet accuracy and precision requirements even when the face data are big enough. Statistical inference models such as DNs cannot perform exact inference, and some errors are inherent in statistical inference itself. For limited image data the consequences worsen further, especially when the number of samples is smaller than the dimension: the partition boundaries in the sample space cannot be uniquely fixed by the training samples. That is to say, determining the inference model requires solving an irreversible inverse problem, meaning that the definite model cannot be uniquely identified and only a reduced model can be fixed. For any query sample, the reduced model cannot give an explicit solution; it can only give a subspace constituted of possible solutions. How to choose an appropriate solution from this subspace remains challenging, and there seem to be no effective, general methods. Some techniques appear somewhat effective for limited image datasets, such as level-set methods and fuzzy-logic methods; all of them are based on probabilistic metrics measuring the divergence between the available limited image data and a priori knowledge or a specific background. That is to say, the cost functions encoding membership degrees, levels, etc. are critical for these methods.
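The non-uniqueness claimed above can be made concrete with a toy linear classifier: with fewer training samples than feature dimensions, many separating boundaries fit the training set perfectly yet disagree on a query. The feature vectors and weights here are invented purely for illustration:

```python
# Two training samples in a 3-D feature space (fewer samples than
# dimensions): the boundary w.x = 0 is underdetermined, so distinct
# weight vectors fit the training data yet disagree on unseen queries.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fits_training(w, data):
    """True if sign(w.x) matches the label for every training sample."""
    return all((dot(w, x) > 0) == (y > 0) for x, y in data)

train = [((1.0, 0.0, 0.0), +1),
         ((-1.0, 0.0, 0.0), -1)]

# Both weight vectors classify the training set perfectly ...
w1 = (1.0, 1.0, 0.0)
w2 = (1.0, -1.0, 0.0)
assert fits_training(w1, train) and fits_training(w2, train)

# ... yet they give opposite answers on a query outside the training span.
query = (0.0, 1.0, 0.0)
print(dot(w1, query) > 0)  # True  -> class +1
print(dot(w2, query) > 0)  # False -> class -1
```

The training data fix only one of the three weight dimensions, so the model is a subspace of equally valid solutions, exactly the "reduced model" situation the paragraph describes, and nothing in the data itself says which answer on the query is correct.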