Why is MobileNetV2 faster than MobileNetV1 only on mobile devices?
I am studying Google's brand-new MobileNetV2 architecture.
While studying, I read this sentence in the TensorFlow model zoo on GitHub:
'For example Mobilenet V2 is faster on mobile devices than Mobilenet V1, but is slightly slower on desktop GPU.'
So my question is: how can that be possible? I really want to know why.
tensorflow mobile gpu
asked May 17 '18 at 7:31 by Seongkyun Han (edited Mar 31 at 9:50)
It was probably designed and tuned with a mobile experience in mind. – Scath, May 17 '18 at 14:26
Thanks! But is there any EXACT explanation for that? :( Not "probably". – Seongkyun Han, May 18 '18 at 5:31
You can read the paper about MobileNetV2. And here is the pdf. – vbonnet, Jul 26 '18 at 15:56
I have already read the paper, but there is no description of the reasons. – Seongkyun Han, Jul 29 '18 at 7:53
2 Answers
From https://arxiv.org/abs/1903.08469v1:
"However, MobileNet V2 uses depthwise separable convolutions which are not directly supported in GPU firmware (the cuDNN library). Therefore, MobileNet V2 tends to be slower than ResNet18 in most experimental setups. Note that the same issue disqualifies usage of the DenseNet architecture [12], since it requires efficient convolution over a non-contiguous tensor, which is still not supported in cuDNN."
answered Mar 28 at 10:26 by M. Riché
Thank you, really understandable for me :) – Seongkyun Han, Mar 31 at 9:50
From their published paper, MobileNetV2: Inverted Residuals and Linear Bottlenecks, under section 5, Implementation Notes, 5.1 Memory efficient inference:
The inverted residual bottleneck layers allow a particularly memory efficient implementation which is very important for mobile applications. (and more in the paper)
According to the TensorFlow team, the model is optimized to be small in size and can also be run with TF Lite, which, as far as we know, is indeed intended for mobile use. It is slower on a desktop GPU probably because V2 has more conv layers than V1, which would also explain training taking more time to finish. For now, training and inference are generally not done on mobile devices because they are hungry for computational speed, and therefore for power as well.
Hope this answers the question.
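The memory-efficiency claim quoted above can be sketched numerically (sizes are hypothetical). The paper's observation is that the large expanded tensor inside an inverted residual block (t times the bottleneck width) never needs to be fully materialized: it can be computed in tiles, so peak activation memory is dominated by the small input and output bottleneck tensors, which matters on memory-constrained mobile devices:

```python
# Peak activation memory (in floats) for one inverted residual block,
# hypothetical sizes. Naive execution materializes the expanded t*c
# tensor; the tiled scheme from the paper keeps only the two bottleneck
# tensors (plus a small per-tile buffer, ignored here for simplicity).

h = w = 56   # hypothetical spatial size
c = 24       # bottleneck channels
t = 6        # expansion factor

naive_peak = h * w * (t * c)   # full expanded intermediate tensor
tiled_peak = 2 * h * w * c     # input bottleneck + output bottleneck
print(naive_peak // tiled_peak)  # -> 3, i.e. t/2 = 3x less peak memory
```

This is why the same block structure that is only so-so on a desktop GPU (which has memory and bandwidth to spare) pays off on mobile hardware.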
answered Aug 21 '18 at 18:48 by Infinite Loops