[pytorch][PR] Add ability for a mobile::Module to save as flatbuffer #70201
Conversation
CI Flow Status
| Workflows | Labels (bold enabled) | Status |
|---|---|---|
| **Triggered Workflows** | | |
| caffe2-linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk | |
| docker-builds | ciflow/all, ciflow/trunk | |
| ios-12-5-1-arm64 | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-arm64-coreml | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-arm64-custom-ops | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-arm64-full-jit | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-arm64-metal | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-x86-64 | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-x86-64-coreml | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| ios-12-5-1-x86-64-full-jit | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | |
| libtorch-linux-xenial-cuda10.2-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk | |
| libtorch-linux-xenial-cuda11.3-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk | |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk | |
| linux-bionic-py3.7-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk | |
| linux-docs | ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk | |
| linux-docs-push | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled | |
| linux-vulkan-bionic-py3.7-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan | |
| linux-xenial-cuda11.3-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk | |
| linux-xenial-cuda11.3-py3.7-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| linux-xenial-cuda11.3-py3.7-gcc7-no-ops | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk | |
| linux-xenial-py3-clang5-mobile-build | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk | |
| linux-xenial-py3-clang5-mobile-custom-build-static | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk | |
| linux-xenial-py3.7-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk | |
| linux-xenial-py3.7-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk | |
| linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| linux-xenial-py3.7-gcc7 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| linux-xenial-py3.7-gcc7-no-ops | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| macos-10-15-py3-arm64 | ciflow/all, ciflow/macos, ciflow/trunk | |
| macos-10-15-py3-lite-interpreter-x86-64 | ciflow/all, ciflow/macos, ciflow/trunk | |
| macos-11-py3-x86-64 | ciflow/all, ciflow/macos, ciflow/trunk | |
| parallelnative-linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk | |
| periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | |
| periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | |
| periodic-linux-bionic-cuda11.5-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | |
| periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | |
| periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | |
| periodic-win-vs2019-cuda11.5-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build | ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk | |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win | |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win | |
| **Skipped Workflows** | | |
| linux-binary-conda | ciflow/binaries, ciflow/binaries/conda | |
| linux-binary-libtorch-cxx11-abi | ciflow/binaries, ciflow/binaries/libtorch | |
| linux-binary-libtorch-pre-cxx11 | ciflow/binaries, ciflow/binaries/libtorch | |
| linux-binary-manywheel | ciflow/binaries, ciflow/binaries/wheel | |
You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun
# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
For more information, please take a look at the CI Flow Wiki.
This pull request was exported from Phabricator. Differential Revision: D33239362
@pytorchbot ciflow rerun -l ciflow/all
flatbuffers::FlatBufferBuilder& fbb,
    const IValue& ivalue) {
  if (ivalue.isNone()) {
    return 0;
Nit: create a constant for the index of None, something like kNoneIndex.
done
} catch (const std::runtime_error&) {
} catch (const c10::Error&) {
}
What are the causes of these exceptions? Being unable to hash an IValue?
Added more comments. IValue::hash explicitly throws runtime_error for some IValues, and for Tensor, operator== doesn't return bool, so you get a c10::Error.
flatbuffers::FlatBufferBuilder& fbb,
    ClassTypePtr class_type);

// cached stuff
Incomplete comment?
removed.
}

table Module {
  version:int;
What is this version supposed to be?
1 + whatever the latest (and last) pickle version is. It is not explicitly used right now.
      ival_pos)
      .Union();
} else {
  AT_ERROR("Unknown IValue type for pickling: ", ivalue.tagKind());
LOL, I think we are done with "pickling"
rephrased
Sync'd offline with @malfet and agreed to check in the generated header file instead of messing with build scripts to generate it on the fly. PTAL.
LGTM |
.jenkins/pytorch/win-build.sh
Outdated
@@ -56,6 +56,8 @@ if [[ $PYLONG_API_CHECK == 0 ]]; then
fi
set -ex

echo 'python %*' > /c/Windows/py.bat
As the headers are checked in, this should no longer be necessary, should it?
@@ -70,6 +70,7 @@ set(TORCH_PYTHON_INCLUDE_DIRECTORIES

${TORCH_ROOT}/third_party/gloo
${TORCH_ROOT}/third_party/onnx
${TORCH_ROOT}/third_party/flatbuffers/include
LGTM, but in a follow-up PR can we limit it to just the 2 files in question? (So that we don't leak the flatbuffers dependency to module extensions.)
Do you mean we can use different include paths when compiling flatbuffer_loader.cpp and flatbuffer_serializer.cpp but not when compiling other .cpp files? How do I do that? Also, it's not just those 2 files; it's also any that happen to include flatbuffer_loader.h or flatbuffer_serializer.h.
@@ -345,6 +346,8 @@ if(HAVE_SOVERSION)
  VERSION ${TORCH_VERSION} SOVERSION ${TORCH_SOVERSION})
endif()
add_dependencies(torch_python torch_python_stubs)
add_dependencies(torch_python flatbuffers)
Hmm, I thought flatbuffers is header-only once it's generated, isn't it?
Looks good to me, but please remove the changes to the Windows builds (they should no longer be necessary).
…ytorch#70201)

Summary:
Pull Request resolved: pytorch#70201

Included functions:
- save_mobile_module -> saves a mobile::Module to flatbuffer
- load_mobile_module_from_file -> loads a flatbuffer into mobile::Module
- parse_mobile_module -> parses from bytes or a deserialized flatbuffer module object

Compared to previous attempts, this diff only adds flatbuffer to the cmake target and leaves the fbcode/xplat ones unchanged.

Test Plan: unittest

Reviewed By: malfet, gmagogsfm

Differential Revision: D33239362

fbshipit-source-id: c89d5f8b96fee6906a2df9f034c081865b34548f
Update: removed the change to win-build.sh and it seems to work. Binary size increase is ~15 KB. We didn't propose this as an optional feature because it will replace Module._save_for_lite_interpreter().
Summary:

Included functions:
- save_mobile_module -> saves a mobile::Module to flatbuffer
- load_mobile_module_from_file -> loads a flatbuffer into mobile::Module
- parse_mobile_module -> parses from bytes or a deserialized flatbuffer module object

Compared to previous attempts, this diff only adds flatbuffer to the cmake target and leaves the fbcode/xplat ones unchanged.

Test Plan: unittest

Differential Revision: D33239362