Deriving High-Level Scene Descriptions from Deep Scene CNN Features

Akram Bayat1 and Marc Pomplun1


1 University of Massachusetts Boston, Boston,

MA, USA


E-mail: akram@cs.umb.edu, marc@cs.umb.edu



Abstract—In this paper, we generate two computational models to estimate two dominant global properties (naturalness and openness) for representing a scene based on its global spatial structure. Naturalness and openness are two dominant perceptual properties within a multidimensional space in which semantically similar scenes (e.g., corridor and hallway) are assigned to nearby points. In this model space, the representation of a real-world scene is based on the overall shape of the scene, not on local object information. We introduce the use of a deep convolutional neural network for generating features that are well suited for estimating the two global properties of a visual scene. The extracted features are integrated in an efficient way and fed into a linear support vector machine (SVM) to classify naturalness versus man-madeness and openness versus closedness. These two global properties of an input image can be predicted from activations in the lowest layer of a convolutional neural network that has been trained for a scene recognition task. The consistent results of the computational models across full and restricted spatial frequency ranges suggest that the representation of an image in the lowest layer of the deep scene CNN contains holistic information about the image, as it leads to the highest accuracy in modeling the global shape of the scene.


Keywords: Scene recognition, Deep learning, Global properties, Convolutional neural network, Shape of a scene, Spatial layout.

I. INTRODUCTION

The recognition of real-world scenes has received considerable attention in computer vision. Scene recognition can facilitate vision tasks such as object detection [1], event recognition [2], and action recognition [3]. Different studies have portrayed scene recognition as different procedures. A scene can be represented as a collection of objects, and thus the recognition of the scene would be perfect if the identity of all objects were known (i.e., lamp, chair, desk, phone, etc.) [4]. Alternatively, the same scene can be described by its overall layout and structure using global descriptors (i.e., man-made, enclosed, small, low clutter, etc.) [5].

Furthermore, scene categories can reflect how visual information is used for scene recognition. The level at which we categorize scenes can lead to different processing procedures: object-centered or scene-centered processing. The object-centered approach is based on the representation of scene information such as the configuration of objects, their identities, and their shapes. An alternative view, the scene-centered approach, is independent of the object recognition task; the structure or shape of a scene is represented by describing the global properties of the scene [6]. Global properties are used to describe the structure of the scene (e.g., its openness or its level of naturalness). In an experimental study by Oliva and Torralba [7] aimed at understanding human visual perception mechanisms in scene recognition, it was shown that humans do not need to perceive the objects in a scene to identify its semantic category. In other words, in a scene-centered representation, most detail and object information is ignored by encoding the global properties of the scene. The role of global properties in human observers' scene perception has also been examined in other works [8], [9].



The rise of convolutional neural networks (CNNs) for learning high-level deep features, together with the availability of large datasets (e.g., Places365 [10]), has led to remarkable results in scene recognition. Places365-Standard is a large scene-centric image database, a repository of 10 million real-world scene images labeled with 365 semantic scene categories and attributes. The popular CNN architectures AlexNet [11], GoogLeNet [12], and the 16-layer VGG network [13] have been trained on the Places365-Standard data, leading to baseline CNN models for semantic scene classification whose performance almost matches that of human observers. Zhou et al. [14] show that these scene-centric models can perform both object localization and scene classification within the same network. Interestingly, although these models have been trained on a scene-centered dataset, object detectors emerge in the inner layers of the network even though no supervision is provided for object recognition.

The main contribution of this paper is to show that the representations learned by the inner layers of deep architectures can also be used to represent the global properties of a visual scene, even though these global representations have not been explicitly learned. While the global properties could be obtained directly from low-level processing of images, we aim to predict them through a scene-centered CNN. The global property information can then be channeled through a parallel pathway of scene categorization, enabling categorization at a superordinate level rather than the basic level, or through a complementary pathway to enhance the performance of scene recognition in scene-centered CNNs.


II. GLOBAL PROPERTIES IN VISUAL SCENE RECOGNITION

The global property information expresses a scene using high-level, global descriptions of spatial and functional aspects of scene space. There is a set of global properties that represent aspects of the scene structure (openness, mean depth, and expansion), scene content (naturalness and clutter), scene constancy (temperature and transience), and scene affordance (navigability and concealment) [15]. Modeling these spatial global properties is sufficient for representing a scene in terms of a high-level description.



In this study, we propose to consider two dominant global properties, the naturalness of a scene (natural vs. man-made) and its openness (open vs. closed), for two reasons.

Fig. 1. Left: Sample images in each scale (more open, more closed) from the collection of 7035 images. Right: Reference ranking scales for those images were generated by the Bradley-Terry model.




First, the spatial structures of man-made and natural scenes differ significantly: man-made scenes are dominated by straight horizontal and vertical edges, whereas natural scenes contain edges with widely distributed orientations. Second, in work by Oliva and Torralba [16] on human visual perception mechanisms, naturalness and openness were shown to be two important descriptors of the shape of a scene. In an experimental study on the mechanisms underlying scene recognition in humans, a high proportion of subjects relied on the degree of naturalness and the degree of openness, rather than other global properties, when categorizing real-world scenes in three hierarchical stages.
A. Ranking scales for global properties

We use subjective reference ranking scales on two global properties of a set of scene images that were obtained in an experiment by Zhang and her colleagues [17], [18]. The image collection was selected from the SUN database [19] and includes 7035 images at a size of 1024×768 pixels covering 174 basic-level categories. In their experiment, a total of 1055 prequalified observers were recruited on Amazon Mechanical Turk (AMT) to complete a task of exploring the two global properties in the selected image dataset. In each trial, observers were shown two scenes and asked to choose the scene that looked more “natural”, “manmade”, “open”, or “closed”.

The annotated scene images were ranked using the Bradley-Terry model [20], which estimates a latent score for each image along each global property from the pairwise judgments. In this manner, a set of four ranking scales was built for each image, including natural, man-made, open, and closed rankings. Figure 1 illustrates open and closed rankings for some of our scene images, ranging from 1 to 7035 in both dimensions.
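The Bradley-Terry model assigns each image a latent "ability" score such that the probability of image i being preferred over image j in a pairwise judgment is sigmoid(score_i − score_j); sorting the scores yields the ranking scale. The sketch below only illustrates this idea and is not the authors' pipeline (they used the BradleyTerry2 R package [20]); it assumes Python with numpy and scikit-learn, and the variable names (comparisons, open_judgments, n_images) are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def bradley_terry_scores(comparisons, n_items):
    # comparisons: list of (i, j, i_won) triples, one per pairwise judgment.
    X = np.zeros((len(comparisons), n_items))
    y = np.zeros(len(comparisons))
    for row, (i, j, i_won) in enumerate(comparisons):
        X[row, i], X[row, j] = 1.0, -1.0   # +1 for image i, -1 for image j
        y[row] = 1.0 if i_won else 0.0
    # Weak L2 regularization stands in for the usual identifiability constraint.
    model = LogisticRegression(fit_intercept=False, C=100.0)
    model.fit(X, y)
    return model.coef_.ravel()             # one latent ability score per image

# Example: rank all images by how "open" they were judged to be.
# ranking = np.argsort(-bradley_terry_scores(open_judgments, n_images))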
B. Ground truth on global properties

The image dataset contains ranking scales for four categories, computed on the basis of a wide range of human subjects’ perception of the scene images. In order to label the images by their global properties, we split the data once according to the naturalness property and once according to the openness property. In this way, we obtain ground truth for the “natural”, “manmade”, “open”, and “closed” categories. Figure 2 shows the image dataset plotted by its open and closed ranking scales. Intuitively, we can mark images (illustrated by dots) near the top-left corner as strong ground truth for the “closed” category. Similarly, images near the bottom-right corner can be labeled as the “open” category.



The approach we use to obtain strong ground truth for each category is to divide the data into high and medium degrees of naturalness or openness. The terminology we use in this case is highly “natural” and highly “open”, or medium “natural-manmade” and medium “open-closed”. Zhang et al. [17] showed that naturalness and man-madeness are highly inversely correlated, and so are openness and closedness. Using this fact, we fit a linear predictive model to the values of the “natural” and “manmade” rankings, and similarly to the values of the “open” and “closed” rankings. For example, the relationship between the “open” and “closed” rankings is modeled by a straight line, which we then split into three equal parts. For the openness property, these parts correspond to high, medium, and low degrees of the property.

Fig. 2. Illustration of images based on their open and closed ranking scores. Each blue dot represents an image in our dataset.


The projected images located at both ends of the line represent highly “natural” and “manmade” scenes, or, similarly, highly “open” and “closed” ones. In this way, the scene images are labeled with their corresponding category names and serve as ground truth data for training and testing the classifiers. This approach is illustrated in Fig. 3, where, using linear regression, Y is modeled as a linear predictor for estimating the degree of closedness given the degree of openness. The line Y is then divided into three equal segments to form three subsets of images, namely “closed_set”, “openclosed_set”, and “open_set”. Of the total of 7035 images, 2024 images fall into the “open_set”, 1306 into the “closed_set”, and 3705 into the “openclosed_set”. Images in the “openclosed_set” cannot be labeled as open or closed by this method, since they are ambiguous scenes that exhibit both open and closed structures; they are therefore not included in the learning process.
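The text leaves some geometric details of this split open; the following sketch shows one plausible reading, assuming numpy and hypothetical arrays open_rank and closed_rank holding the two ranking scores per image: fit the line, project every image onto it, and cut the projected positions into three equal-length segments.

import numpy as np

def split_by_fitted_line(open_rank, closed_rank):
    # Fit the straight line Y = a*x + b relating the two ranking scales.
    a, b = np.polyfit(open_rank, closed_rank, deg=1)
    direction = np.array([1.0, a]) / np.hypot(1.0, a)  # unit vector along the line
    points = np.stack([open_rank, closed_rank], axis=1)
    t = points @ direction                             # position of each image along the line
    edges = np.linspace(t.min(), t.max(), 4)           # three equal-length segments
    return np.digitize(t, edges[1:3])                  # 0, 1, 2 = the three subsets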

We combine the “open_set” and “closed_set” subsets to create a larger subset containing the images that are labeled as strongly open or closed. In the following section, we use this combined subset for training and validation in our classification process. In a similar manner, we create three subsets of images named “natural_set”, “natural-manmade_set”, and “manmade_set”, and subsequently combine the “natural_set” and “manmade_set” to form a scene dataset for training and testing classifiers.



“openclosed_set” and “natural-manmade_set” are the two subsets of images that fall in the middle ranges of the ranking scales. For example, the scene image “aqueduct” (Fig. 1, Image 6) could be considered “natural” because of the sky and sand, or it could be counted as “manmade” because of the artificial channel in it. We use these subsets not for training or validation but for testing the classifiers. To obtain reliable ground truth for the images in these subsets, we label them with the following rule, which considers both the regression results and the subjective ranking results: for a given image, if the “open” ranking is greater than the “closed” ranking in both results, we label it “open”; if the “closed” ranking is greater than the “open” ranking in both results, we label it “closed”; otherwise it is neither open nor closed and the image is removed from the data. A similar rule is applied to label images for naturalness vs. man-madeness, and the resulting images are used as test data for performance evaluation in the following section. We refer to this data as “Test-Medium” for the rest of this paper.
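A minimal sketch of this labeling rule follows; the variable names are illustrative, with open_pred and closed_pred standing for the regression-based estimates and open_rank and closed_rank for the subjective ranking scores.

def label_medium_image(open_rank, closed_rank, open_pred, closed_pred):
    # Label only when the subjective rankings and the regression results agree.
    if open_rank > closed_rank and open_pred > closed_pred:
        return "open"
    if closed_rank > open_rank and closed_pred > open_pred:
        return "closed"
    return None   # ambiguous: the image is removed from the data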

Fig. 3. Assigning the ground truth categories for 7035 scene images by applying linear regression.


III. DEEP FEATURES FOR PREDICTING GLOBAL PROPERTIES

Current CNNs for scene or object recognition have been trained on basic-level categories (e.g., park, street, or kitchen), which require processing of local information and computation of visual similarity. However, there is a different level of scene description based on global spatial structures and their spatial relationships, at the so-called superordinate level [21]. This level of description can be useful for simple classifications (e.g., a city) and demands a different computational approach. In CNNs for scene classification, little attention has been paid to this level of scene categorization.

In this section, we use the deep features extracted from the convolutional units of various layers of an existing CNN to build models that predict the global properties of a scene. The predicted global properties represent the structure of the visual scene at a superordinate level (naturalness and openness) rather than a basic level (e.g., street, kitchen).
A. Extracting deep features from the Places network

Throughout this paper we use VGG16-Places365 [22], a convolutional neural network trained on the Places365-Standard data. We extract features at several levels of the network, namely the Conv 1, Conv 3, Conv 5, and FC7 layers. The parameters of those layers, as proposed in [11], are listed in Table 1.

Table 1. The parameters of four layers of the network architecture used for VGG16-Places365.



Layer            | Conv 1 | Conv 3 | Conv 5 | FC7
Units            | 96     | 384    | 256    | 4095
Feature map size | 55×55  | 13×13  | 13×13  | 1
Feature vector   | 96×1   | 384×1  | 256×1  | 4095×1

We use the original dataset (7035 images) as input to the network and extract the features in each of those layers. Then, global average pooling is applied to obtain the spatial average of the feature map of each unit at the convolutional layer. These values form a feature vector for each image in each layer. Global average pooling is applied to each unit to reduce dimensionality and thus prevent overfitting during the classification process. It is computed as follows: for a given image, let A(x,y,l) denote the activation of unit l in a convolutional layer at spatial location (x,y). The global average pooling output A(l) for unit l is then A(l) = (1/N) Σ(x,y) A(x,y,l), where N is the number of spatial locations in the unit's feature map. We use these values to generate the feature vectors listed in Table 1. For instance, for a given image, a feature vector of size 96×1 is obtained at layer Conv 1, in which each element corresponds to the global average pooling value computed for one unit.
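The following sketch illustrates this feature-extraction step in Python with PyTorch/torchvision. It is an approximation of the setup described above, not the exact pipeline: the Places365 checkpoint path is a placeholder, and the layer indices used for the forward hooks are illustrative and depend on the architecture actually used.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16()                         # architecture only, no weights
# state = torch.load("vgg16_places365.pth")    # hypothetical Places365 checkpoint
# model.load_state_dict(state)
model.eval()

activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# Register hooks on the convolutional layers of interest (indices are illustrative).
model.features[0].register_forward_hook(hook("conv1"))
model.features[14].register_forward_hook(hook("conv3"))
model.features[28].register_forward_hook(hook("conv5"))

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gap_features(image_path, layer="conv1"):
    # Returns one value per unit: the spatial average of that unit's feature map.
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    fmap = activations[layer]                  # shape: (1, units, H, W)
    return fmap.mean(dim=(2, 3)).squeeze(0)    # shape: (units,)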
B. Learning the global properties

This section discusses the details of learning the global properties using the features extracted from the pre-trained VGG16-Places365 network and a linear SVM. Figure 4 illustrates the labeling, feature extraction, and classification process. After creating the three subsets of the 7035 images for each global property, we run them through the VGG16-Places365 network, extracting the features at each layer. In each layer we obtain the global average pooling value for each unit. Then, we train a linear SVM on the extracted feature vectors of each layer in order to build a computational model that can predict the naturalness or openness of a scene image. These global properties describe the scene image at the superordinate level, where the whole structure and shape of the scene are taken into account rather than the identity and configuration of objects. We repeat this process for the Conv 3, Conv 5, and FC7 layers and for both the openness and naturalness properties.
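A minimal sketch of this classification stage, assuming scikit-learn and feature matrices produced as above (X holds one pooled feature vector per image, y the binary labels, e.g., 0 = closed, 1 = open):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_global_property_svm(X, y, test_size=1/6, seed=0):
    # Hold out one sixth of the labeled images, matching a 5:1 train/test split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    clf = LinearSVC(C=1.0, max_iter=10000)     # C and iteration budget are assumptions
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    return clf, acc

# Example: one model per layer and per global property.
# clf_open_conv1, acc = train_global_property_svm(conv1_features, openness_labels)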



C. Performance of the SVM model in predicting global properties

The resulting feature vectors are used for the classification of global properties. As discussed in the previous section, we use images with a high degree of naturalness or man-madeness and a high degree of openness or closedness for training and initial testing. The data is separated into training and test data (a 5:1 train/test split). This test data is called “Test-High” for future reference. We also test the models on the “Test-Medium” data, which possesses medium descriptive characteristics of each global property, as explained in the previous section. This data challenges the models on images whose global properties are ambiguous even for human observers. Tables 2 and 3 show the classification results obtained from training and testing the SVM models on the same datasets across features from different layers of VGG16-Places365. On both test sets, the model built from the features of the first convolutional layer (Conv 1) provides the best performance.



Fig. 4. A block diagram of the labeling, feature extraction and classification process for predicting the global structural properties of a scene.

Table 2. Classification accuracy on two test sets to predict openness vs. closedness.

Layer  | Test-High | Test-Medium
Conv 1 | 1.0000    | 0.9301
Conv 3 | 0.9983    | 0.5985
Conv 5 | 0.9981    | 0.5244
FC7    | 0.9909    | 0.5056

Table 3. Classification accuracy on two test sets to predict naturalness vs. man-madeness.

Layer  | Test-High | Test-Medium
Conv 1 | 0.9977    | 0.9865
Conv 3 | 0.9868    | 0.6375
Conv 5 | 0.9863    | 0.6979
FC7    | 0.9885    | 0.4463

The classification results show the effectiveness of combining deep features from the first layer of the deep scene CNN, global average pooling, access to ground truth data for supervised training, and a linear SVM to distinguish natural vs. man-made or open vs. closed scenes. Furthermore, employing features from deeper layers does not improve performance but instead decreases classification accuracy, particularly for images with a medium level of naturalness or openness. This result also suggests that the spatial information in the deeper layers of a deep scene CNN is not useful for representing a scene at the superordinate level; however, as shown by previous research, it is necessary for categorizing an image at the basic level.

IV. GLOBAL PROPERTIES IN DISTINCT SPATIAL FREQUENCIES



Different spatial frequency scales convey different information about a scene image. In this section we evaluate the role of low and high spatial frequency information in the processing of global properties by deep scene CNNs (e.g., VGG16-Places365). For this purpose, we use low-spatial-frequency and high-spatial-frequency versions of the images, rather than full-spectrum images, as inputs to VGG16-Places365. We aim to show that the global properties of a visual image can be predicted from CNN features extracted from images at different spatial frequencies, and we demonstrate to what extent these deep features contribute to predicting the global properties of a scene, which are descriptors of its global structure.

A. High spatial frequency (HSF)

The high spatial frequency content of an image conveys features such as the shape of small surfaces, the density of textured regions, and the edges of the image [23]. It has been shown that the structure of edges differs significantly between natural and man-made scenes [16]. For example, horizontal and vertical edges can indicate a low degree of naturalness, whereas images with widely distributed edge orientations have a higher degree of naturalness.

The main question here is whether models built from deep features of such local content can enable the recognition of the global properties of the original image. To answer this question, we apply a Fourier transform to our original image dataset of 7035 images in three color channels. We then remove the low frequencies by masking with a square-shaped window of size 60×60 and apply the inverse Fourier transform to reconstruct only the high spatial frequency content of the images in three color channels. We use these images as inputs to the VGG16-Places365 network and extract features from four layers as in the previous section. One SVM model is built for each layer, using the ground truth obtained in Section II. Tables 4 and 5 show the classification accuracies of the resulting models associated with each layer. For the openness vs. closedness of a scene, the results are very similar to those obtained for the full-spectrum images shown in Tables 2 and 3, with slightly lower performance on the “Test-High” images but slightly higher performance on the “Test-Medium” ones. In contrast, for the classification of natural vs. man-made images, high accuracies are maintained in the models associated with the deeper layers, even for ambiguous images.
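A minimal sketch of this high-pass filtering step, assuming numpy; the centered 60×60 window follows the text, while details such as normalization are left out.

import numpy as np

def high_pass(image, win=60):
    # image: float array of shape (H, W, 3); returns the HSF reconstruction.
    out = np.empty_like(image, dtype=float)
    h, w = image.shape[:2]
    cy, cx = h // 2, w // 2
    for c in range(image.shape[2]):
        spectrum = np.fft.fftshift(np.fft.fft2(image[:, :, c]))
        # Remove the low frequencies inside the centered square window.
        spectrum[cy - win // 2: cy + win // 2, cx - win // 2: cx + win // 2] = 0
        out[:, :, c] = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return out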

Table 4. Classification accuracy based on high spatial frequency (HSF) content for two test sets to predict openness vs. closedness.

Layer  | Test-High | Test-Medium
Conv 1 | 1.0000    | 0.9927
Conv 3 | 0.9946    | 0.6325
Conv 5 | 0.9858    | 0.7732
FC7    | 0.9673    | 0.5247

Table 5. Classification accuracy based on high spatial frequency (HSF) content for two test sets to predict “natural” vs. “manmade”.

Layer  | Test-High | Test-Medium
Conv 1 | 1.0000    | 0.9891
Conv 3 | 0.9978    | 0.9721
Conv 5 | 0.9632    | 0.9473
FC7    | 0.9608    | 0.9349

B. Low spatial frequency (LSF)

The low spatial frequency content of an image conveys global spatial layout information, also called the “blobs” of the image [23]. Can models built from deep features of the low spatial frequency content of a scene image enable the detection of the global properties of that scene? To answer this question, we again apply the Fourier transform, this time to the intensity values of the 7035 original images. We convert the RGB images to grayscale by averaging the three color channels. The high frequencies are removed by masking the area outside a square-shaped window of size 60×60, and the inverse Fourier transform is applied to reconstruct the low spatial frequency content of the images. We use these images as inputs to the VGG16-Places365 network and extract features from four layers, as we did for the high spatial frequency content. One SVM model is built for each layer, using the ground truth obtained in Section II. Tables 6 and 7 show the accuracies of the resulting models associated with each layer.
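The complementary low-pass step can be sketched in the same way, again assuming numpy; replicating the blurred grayscale image into three channels so that it can be fed to the RGB network is our assumption, not a detail stated in the text.

import numpy as np

def low_pass(image, win=60):
    # image: float array of shape (H, W, 3); returns the LSF reconstruction.
    gray = image.mean(axis=2)                  # average the three color channels
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    mask = np.zeros_like(spectrum)
    # Keep only the low frequencies inside the centered square window.
    mask[cy - win // 2: cy + win // 2, cx - win // 2: cx + win // 2] = 1
    blurred = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
    return np.repeat(blurred[:, :, None], 3, axis=2)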
Table 6. Classification accuracy based on low spatial frequency (LSF) content for two test sets to predict “open” vs. “closed”.

Layer  | Test-High | Test-Medium
Conv 1 | 0.9980    | 0.9616
Conv 3 | 0.9883    | 0.5895
Conv 5 | 0.9946    | 0.8741
FC7    | 0.9317    | 0.6763

Table 7. Classification accuracy based on low spatial frequency (LSF) content for two test sets to predict “natural” vs. “manmade”.

Layer  | Test-High | Test-Medium
Conv 1 | 0.9964    | 0.9954
Conv 3 | 0.9948    | 0.9810
Conv 5 | 0.9909    | 0.9690
FC7    | 0.9824    | 0.9453



These results suggest that naturalness and openness can be predicted from features extracted from blurred images consisting only of low spatial frequency content. This is consistent with studies suggesting that even with low spatial frequency content (as low as 8 cycles/image), where information about objects and their locations is masked, a superordinate-level categorization can still be extracted, because the dominant spatial structure of a scene (naturalness and openness) can still be estimated [16].

Interestingly, for estimating naturalness vs. man-madeness, LSF input images, like HSF ones, improve the accuracy on the Test-Medium images, in particular at the deeper layers. While this finding indicates that LSF information is particularly relevant to the deeper layers of a CNN, the overall pattern of results remains consistent.

Generally, for both the naturalness and openness of a scene, the results confirm that the lowest convolutional layer of the deep scene CNN contributes the most to the holistic representation of a scene.

V. CONCLUSION

In this work we demonstrate that the convolutional units of the first layer of the VGG16-Places365 network are well suited for predicting two global properties (naturalness and openness) of a scene image. The predicted global properties can be used to enhance the performance and interpretability of scene recognition, since they represent a scene at a different level by describing its holistic structure. This level of representation is not an alternative to current CNN scene recognition, which categorizes a scene at the basic level.

The general results suggest that global properties can be extracted by an initial analysis in the first layer of the convolutional neural network, without invoking its deeper layers. This is similar to human visual perception, which can recognize the structure of a scene before segmentation and grouping [24].

The consistent results of the computational models across the full and restricted spatial frequency ranges suggest that the representation of an image in the lowest layer of a deep scene CNN contains holistic information about the image, as it leads to the highest accuracy in modeling the global shape of the scene. That is, scene categorization at the superordinate level can be attained efficiently by bypassing the object detection and localization stages in the higher layers of a deep scene CNN.
ACKNOWLEDGMENT

The authors would like to thank Hanshu Zhang and her colleagues at Wright State University for providing us with the subjective ranking dataset.

REFERENCES

[1] S. Ren, K. He, R. Girshick and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99, 2015.



[2] N. Takahashi, M. Gygli, B. Pfister and L. Van Gool. Deep convolutional neural networks and data augmentation for acoustic event detection. arXiv preprint arXiv:1604.07160, 2016.

[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[4] M. Greene and A. Oliva. The briefest of glances: The time course of natural scene understanding. Psychological Science, 20(4), 464-472, 2009.

[5] L. Nanni and A. Lumini. Heterogeneous bag-of-features for object/scene recognition. Applied Soft Computing, 13(4), 2171-2178, 2013.

[6] A. Torralba and A. Oliva. Statistics of natural image categories. Network: Computation in Neural Systems, 14(3), 391-412, 2003.

[7] A. Oliva and A. Torralba. Scene-centered description from spatial envelope properties. In Biologically motivated computer vision, Springer Berlin/Heidelberg, pp. 263-272, 2002.

[8] I. Biederman and G. Ju. Surface versus edge-based determinants of visual recognition. Cognitive Psychology, 20(1), 38-64, 1988.

[9] M. C. Potter. Short-term conceptual memory for pictures. Journal of Experimental Psychology: Human Learning and Memory, 2(5), 509, 1976.

[10] B. Zhou et al. An image database for deep scene understanding. arXiv preprint arXiv:1610.02055, 2016.

[11] A. Krizhevsky, I. Sutskever and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.

[12] C. Szegedy et al. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.

[13] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[14] B. Zhou et al. Object detectors emerge in deep scene CNNs. arXiv preprint arXiv:1412.6856, 2014.

[15] A. Oliva and A. Torralba. Building the gist of a scene: The role of global image features in recognition. Progress in brain research, 155, 23-36, 2006.

[16] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International journal of computer vision, 42(3), 145-175, 2001.



[17] H. Zhang. Processing global properties in Scene Categorization. Diss. Wright State University, 2017.

[18] H. Zhang, J. W. Houpt and A. Harel. Linear ranking scales of naturalness and openness of scenes. Poster presented at the 57th Annual Meeting of the Psychonomic Society; Boston, MA, 2016.



[19] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In Computer vision and pattern recognition (CVPR), 2010 IEEE conference on, pp. 3485-3492, 2010.

[20] D. Firth and H. L. Turner. Bradley-Terry models in R: the BradleyTerry2 package. Journal of Statistical Software, 48(9), 2012.

[21] E. Rosch and C. B. Mervis. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7(4), 573-605, 1975.

[22] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva and A. Torralba. Places: A 10 million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

[23] A. Oliva. Gist of the scene. Neurobiology of Attention, 696(64), 251-258, 2005.

[24] M. R. Greene and A. Oliva. Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58(2), 137-176, 2009.

