inductor workflow #23
Merged
Changes from all commits (31 commits)
bac0d19 inductor workflow (PaliC)
67ca800 stuff (PaliC)
aa8260e stuff (PaliC)
2a32c01 stuff (PaliC)
1ba6d10 stuff (PaliC)
ffc828c lint fix (PaliC)
82d23fa lint fix (PaliC)
097c32f lint fix (PaliC)
4b910cb lint fix (PaliC)
3113537 lint fix (PaliC)
d0c1cdb lint fix (PaliC)
3325a62 fix script (PaliC)
8fc433c lint fix (PaliC)
98895bf lint fix (PaliC)
db0f4ce lint fix (PaliC)
067713d lint fix (PaliC)
2586274 lint fix (PaliC)
44a81c3 lint fix (PaliC)
5d43e25 lint fix (PaliC)
4c19677 trigger sleep (PaliC)
0d1172d try nvidia build (PaliC)
5bc569d try nvidia build (PaliC)
543c556 add llvm build (PaliC)
414c59a better logging (PaliC)
61dded4 better logging (PaliC)
f6f3446 do autoinstalls (PaliC)
d25a2e8 do autoinstalls (PaliC)
e66b688 fix script (PaliC)
e79968d fix script (PaliC)
250cbae fix script (PaliC)
d34eac7 fix script (PaliC)
@@ -0,0 +1,68 @@
name: Test build/test linux gpu

on:
  pull_request:
  workflow_dispatch:
    inputs:
      triton_pin:
        description: 'Triton branch or commit to pin'
        default: 'main'
        required: false
      pytorch_pin:
        description: 'PyTorch branch or commit to pin'
        default: 'main'
        required: false

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.g5.48xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "12.1"
      timeout: 360
      # docker-image: nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04
      script: |
        set -x
        pushd ..
        echo "Installing triton"
        git clone https://github.com/triton-lang/triton.git
        pushd triton
        echo "Checking out triton branch or commit"
        git checkout ${{ github.event.inputs.triton_pin || 'main' }}
        sudo yum install -y zlib-devel
        echo "Installing build-time dependencies"
        pip install ninja==1.11.1.1 cmake==3.30.2 wheel==0.44.0
        export llvm_hash=$(cat cmake/llvm-hash.txt)
        echo "llvm_hash: $llvm_hash"
        pushd ..
        echo "Cloning llvm-project"
        git clone https://github.com/llvm/llvm-project.git
        pushd llvm-project
        echo "Checking out llvm hash"
        git checkout "$llvm_hash"
        mkdir build
        pushd build
        echo "Building llvm"
        cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON ../llvm -DLLVM_ENABLE_PROJECTS="mlir;llvm" -DLLVM_TARGETS_TO_BUILD="host;NVPTX;AMDGPU"
        ninja
        export LLVM_BUILD_DIR=$(pwd)
        popd
        popd
        popd
        LLVM_INCLUDE_DIRS=$LLVM_BUILD_DIR/include LLVM_LIBRARY_DIR=$LLVM_BUILD_DIR/lib LLVM_SYSPATH=$LLVM_BUILD_DIR pip install -e python
        echo "Installing triton python package"
        popd
        echo "Cloning pytorch"
        git clone https://github.com/pytorch/pytorch.git
        pushd pytorch
        echo "Checking out pytorch branch or commit"
        git checkout ${{ github.event.inputs.pytorch_pin || 'main' }}
        git submodule sync
        git submodule update --init --recursive
        pip install -r requirements.txt
        pip install mkl-static mkl-include pytest pytest-xdist
        echo "Installing magma-cuda121"
        conda install -y -c pytorch magma-cuda121
        python setup.py install
        pytest -n 1 test/inductor/test_torchinductor.py
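The script's correctness depends on the pushd/popd stack unwinding back to the triton checkout before `pip install -e python` runs. A minimal sketch of that directory-stack flow, using hypothetical temporary directories in place of the real clones:

```shell
#!/usr/bin/env bash
set -e
# Hypothetical layout standing in for the real clones: parent/triton and parent/llvm-project
root=$(mktemp -d)
mkdir -p "$root/parent/triton" "$root/parent/llvm-project/build"

cd "$root/parent/triton"        # the workflow is inside the triton checkout at this point
pushd .. > /dev/null            # up to the common parent, where llvm-project is cloned
pushd llvm-project > /dev/null  # into the LLVM checkout
pushd build > /dev/null         # into the build directory (where cmake/ninja run)
popd > /dev/null                # back to llvm-project
popd > /dev/null                # back to the parent
popd > /dev/null                # back to triton, where "pip install -e python" executes

final_dir=$(basename "$PWD")
echo "back in: $final_dir"      # prints: back in: triton
cd / && rm -rf "$root"
```

Each `pushd` in the script has a matching `popd`, so a missing or extra one would leave the later `pip install -e python` running from the wrong directory.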
Let's add
continue-on-error: true
until we get all the tests passing? OK with this since the failures aren't on build.
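A sketch of where the suggested flag could go (hedged: GitHub Actions documents `continue-on-error` at the job and step levels, but whether it is honored on a job that calls a reusable workflow via `uses` should be verified; if not, the flag would need to live inside linux_job.yml itself):

```yaml
jobs:
  build-test:
    # assumption: mark the job non-blocking while inductor tests are flaky;
    # remove once the test suite passes reliably
    continue-on-error: true
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
```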