InceptionV3
Rethinking the Inception Architecture for Computer Vision (CVPR 2016)
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.
Implementations
from keras.applications.inception_v3 import InceptionV3
InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
Inception V3 model, with weights pre-trained on ImageNet.
This model can be built both with 'channels_first'
data format (channels, height, width) or 'channels_last'
data format (height, width, channels).
The default input size for this model is 299x299.
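As an illustration of the default 299x299 configuration and a custom input shape, here is a minimal instantiation sketch (argument values follow the signature and constraints documented below; it assumes the backend is set to the 'channels_last' data format):

from keras import backend as K
from keras.applications.inception_v3 import InceptionV3

# Default: 299x299 input, ImageNet-pretrained weights, 1000-class classifier head.
model = InceptionV3(include_top=True, weights='imagenet')

# Headless model with a custom 'channels_last' input shape
# (each spatial side must be no smaller than 75).
base = InceptionV3(include_top=False, weights='imagenet', input_shape=(150, 150, 3))

print(K.image_data_format())   # 'channels_last' or 'channels_first'
print(model.input_shape, base.output_shape)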
Arguments
- include_top: whether to include the fully-connected layer at the top of the network.
- weights: one of None (random initialization) or 'imagenet' (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (299, 299, 3) with 'channels_last' data format or (3, 299, 299) with 'channels_first' data format). It should have exactly 3 input channels, and width and height should be no smaller than 75. E.g. (150, 150, 3) would be one valid value.
- pooling: optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional layer. 'avg' means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. 'max' means that global max pooling will be applied.
- classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Returns
A Keras Model instance.
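A minimal usage sketch for classification with the pre-trained weights (the file 'elephant.jpg' is a purely illustrative placeholder for any RGB image on disk):

from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np

# Pre-trained 1000-class ImageNet classifier with the default 299x299 input.
model = InceptionV3(include_top=True, weights='imagenet')

# Load and resize an image to the expected input size.
img = image.load_img('elephant.jpg', target_size=(299, 299))
x = image.img_to_array(img)        # (299, 299, 3) in 'channels_last' format
x = np.expand_dims(x, axis=0)      # add batch dimension -> (1, 299, 299, 3)
x = preprocess_input(x)            # scale pixels to the [-1, 1] range expected by InceptionV3

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(class_id, class_name, probability), ...]

For feature extraction instead of classification, the same call with include_top=False and pooling='avg' returns one 2048-dimensional vector per image rather than class probabilities.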
(@ keras.io) https://keras.io/applications/#inceptionv3