rename top-level install_requirements.{py,sh,bat} to install_executorch
Call it what it does. I did an automated find/replace for the easy cases in scripts/docs and then manually checked the rest.

ghstack-source-id: 4c830504122f219aed0a1dbfdaa9d6c30fbcb63f
ghstack-comment-id: 2596505222
Pull Request resolved: #7708
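The commit message mentions an "automated find/replace" for the easy cases, but the exact command is not recorded. A minimal sketch of how such a mechanical rename can be scripted (the file name and contents below are illustrative, not taken from the actual commit):

```shell
# Hedged sketch of the automated rename; the actual command used in the
# commit is not recorded. Runs in a throwaway directory, so it is safe to try.
set -eu
tmp="$(mktemp -d)"
printf '%s\n' './install_requirements.sh --pybind xnnpack' > "$tmp/utils.sh"

# Rewrite every reference to the old script name; `sed -i.bak` keeps the
# invocation portable between GNU and BSD sed.
grep -rl 'install_requirements\.sh' "$tmp" | while IFS= read -r f; do
  sed -i.bak 's/install_requirements\.sh/install_executorch.sh/g' "$f"
  rm -f "$f.bak"
done

cat "$tmp/utils.sh"   # prints: ./install_executorch.sh --pybind xnnpack
rm -rf "$tmp"
```

After an automated pass like this, `git grep -n install_requirements` is the natural way to surface the remaining cases for the manual check the message describes.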
swolchok committed Jan 16, 2025
1 parent d434bf7 commit abbbfea
Showing 36 changed files with 62 additions and 62 deletions.
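Symmetric churn — 62 additions balanced by 62 deletions — is the signature of a pure rename. A way to sanity-check that property, sketched against a throwaway repository (in a real executorch checkout the numbers would come from something like `git show --shortstat abbbfea`):

```shell
# Sketch: a pure rename commit should report equal insertion and deletion
# counts. Demonstrated in a throwaway repo; paths here are illustrative.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci

printf '%s\n' './install_requirements.sh --clean' > utils.sh
git add utils.sh
git commit -qm 'before rename'

sed -i.bak 's/install_requirements/install_executorch/' utils.sh && rm -f utils.sh.bak
git commit -qam 'rename install_requirements to install_executorch'

# One line changed in place -> equal insertion and deletion counts.
git show --shortstat HEAD | tail -1
```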
6 changes: 3 additions & 3 deletions .ci/scripts/utils.sh
@@ -17,17 +17,17 @@ retry () {
}

clean_executorch_install_folders() {
-./install_requirements.sh --clean
+./install_executorch.sh --clean
}

install_executorch() {
which pip
# Install executorch, this assumes that Executorch is checked out in the
# current directory.
if [[ "${1:-}" == "use-pt-pinned-commit" ]]; then
-./install_requirements.sh --pybind xnnpack --use-pt-pinned-commit
+./install_executorch.sh --pybind xnnpack --use-pt-pinned-commit
else
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh --pybind xnnpack
fi
# Just print out the list of packages for debugging
pip list
2 changes: 1 addition & 1 deletion .github/workflows/apple.yml
@@ -9,7 +9,7 @@ on:
paths:
- .ci/scripts/setup-ios.sh
- .github/workflows/apple.yml
-- install_requirements.sh
+- install_executorch.sh
- backends/apple/**
- build/build_apple_frameworks.sh
- build/build_apple_llm_demo.sh
10 changes: 5 additions & 5 deletions .github/workflows/pull.yml
@@ -200,7 +200,7 @@ jobs:
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh "cmake"
# install pybind
-bash install_requirements.sh --pybind xnnpack
+bash install_executorch.sh --pybind xnnpack
# install Llava requirements
bash examples/models/llama/install_requirements.sh
@@ -414,7 +414,7 @@ jobs:
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh "cmake"
# install pybind
-bash install_requirements.sh --pybind xnnpack
+bash install_executorch.sh --pybind xnnpack
# install phi-3-mini requirements
bash examples/models/phi-3-mini/install_requirements.sh
@@ -441,7 +441,7 @@ jobs:
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh "cmake"
# install pybind
-bash install_requirements.sh --pybind xnnpack
+bash install_executorch.sh --pybind xnnpack
# install llama requirements
bash examples/models/llama/install_requirements.sh
@@ -468,7 +468,7 @@ jobs:
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh "cmake"
# install pybind
-bash install_requirements.sh --pybind xnnpack
+bash install_executorch.sh --pybind xnnpack
# install llama requirements
bash examples/models/llama/install_requirements.sh
@@ -495,7 +495,7 @@ jobs:
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh "cmake"
# install pybind
-bash install_requirements.sh --pybind xnnpack
+bash install_executorch.sh --pybind xnnpack
# install llama requirements
bash examples/models/llama/install_requirements.sh
2 changes: 1 addition & 1 deletion backends/apple/mps/setup.md
@@ -97,7 +97,7 @@ I 00:00:00.122615 executorch:mps_executor_runner.mm:501] Model verified successf
### [Optional] Run the generated model directly using pybind
1. Make sure `pybind` MPS support was installed:
```bash
-./install_requirements.sh --pybind mps
+./install_executorch.sh --pybind mps
```
2. Run the `mps_example` script to trace the model and run it directly from python:
```bash
2 changes: 1 addition & 1 deletion backends/cadence/build_cadence_fusionG3.sh
@@ -12,7 +12,7 @@ unset XTENSA_CORE
export XTENSA_CORE=FCV_FG3GP
git submodule sync
git submodule update --init
-./install_requirements.sh
+./install_executorch.sh

rm -rf cmake-out

2 changes: 1 addition & 1 deletion backends/cadence/build_cadence_hifi4.sh
@@ -12,7 +12,7 @@ unset XTENSA_CORE
export XTENSA_CORE=nxp_rt600_RI23_11_newlib
git submodule sync
git submodule update --init
-./install_requirements.sh
+./install_executorch.sh

rm -rf cmake-out

2 changes: 1 addition & 1 deletion backends/vulkan/docs/android_demo.md
@@ -81,7 +81,7 @@ First, build and install ExecuTorch libraries, then build the LLaMA runner
binary using the Android NDK toolchain.

```shell
-./install_requirements.sh --clean
+./install_executorch.sh --clean
(mkdir cmake-android-out && \
cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
-DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
2 changes: 1 addition & 1 deletion backends/xnnpack/README.md
@@ -98,7 +98,7 @@ After exporting the XNNPACK Delegated model, we can now try running it with exam
cd executorch

# Get a clean cmake-out directory
-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-out

# Configure cmake
2 changes: 1 addition & 1 deletion build/test_ios.sh
@@ -63,7 +63,7 @@ say "Installing Requirements"

pip install --upgrade cmake pip setuptools wheel zstd

-./install_requirements.sh --pybind coreml mps xnnpack
+./install_executorch.sh --pybind coreml mps xnnpack
export PATH="$(realpath third-party/flatbuffers/cmake-out):$PATH"
./build/install_flatc.sh

2 changes: 1 addition & 1 deletion docs/README.md
@@ -65,7 +65,7 @@ To build the documentation locally:
1. Run:

```bash
-bash install_requirements.sh
+bash install_executorch.sh
```

1. Go to the `docs/` directory.
2 changes: 1 addition & 1 deletion docs/source/apple-runtime.md
@@ -109,7 +109,7 @@ python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
4. Install the required dependencies, including those needed for the backends like [Core ML](build-run-coreml.md) or [MPS](build-run-mps.md), if you plan to build them as well:

```bash
-./install_requirements.sh --pybind coreml mps xnnpack
+./install_executorch.sh --pybind coreml mps xnnpack

# Optional dependencies for Core ML backend.
./backends/apple/coreml/scripts/install_requirements.sh
2 changes: 1 addition & 1 deletion docs/source/build-run-xtensa.md
@@ -162,7 +162,7 @@ In order to run the CMake build, you need the path to the following:

```bash
cd executorch
-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-out
# prebuild and install executorch library
cmake -DCMAKE_TOOLCHAIN_FILE=<path_to_executorch>/backends/cadence/cadence.cmake \
20 changes: 10 additions & 10 deletions docs/source/getting-started-setup.md
@@ -92,23 +92,23 @@ Alternatively, if you would like to experiment with ExecuTorch quickly and easil
# Install ExecuTorch pip package and its dependencies, as well as
# development tools like CMake.
# If developing on a Mac, make sure to install the Xcode Command Line Tools first.
-./install_requirements.sh
+./install_executorch.sh
```

-Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_requirements.sh#L26-L29) to install with pybindings and dependencies for other backends.
+Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
```bash
-./install_requirements.sh --pybind <coreml | mps | xnnpack>
+./install_executorch.sh --pybind <coreml | mps | xnnpack>

# Example: pybindings with CoreML *only*
-./install_requirements.sh --pybind coreml
+./install_executorch.sh --pybind coreml

# Example: pybinds with CoreML *and* XNNPACK
-./install_requirements.sh --pybind coreml xnnpack
+./install_executorch.sh --pybind coreml xnnpack
```

-By default, `./install_requirements.sh` command installs pybindings for XNNPACK. To disable any pybindings altogether:
+By default, `./install_executorch.sh` command installs pybindings for XNNPACK. To disable any pybindings altogether:
```bash
-./install_requirements.sh --pybind off
+./install_executorch.sh --pybind off
```

After setting up your environment, you are ready to convert your PyTorch programs
@@ -125,7 +125,7 @@ to ExecuTorch.
>
> ```bash
> # From the root of the executorch repo:
-> ./install_requirements.sh --clean
+> ./install_executorch.sh --clean
> git submodule sync
> git submodule update --init
> ```
@@ -208,7 +208,7 @@ The ExecuTorch repo uses CMake to build its C++ code. Here, we'll configure it t
```bash
# Clean and configure the CMake build system. Compiled programs will
# appear in the executorch/cmake-out directory we create here.
-./install_requirements.sh --clean
+./install_executorch.sh --clean
(mkdir cmake-out && cd cmake-out && cmake ..)

# Build the executor_runner target
@@ -226,7 +226,7 @@ The ExecuTorch repo uses CMake to build its C++ code. Here, we'll configure it t
>
> ```bash
> # From the root of the executorch repo:
-> ./install_requirements.sh --clean
+> ./install_executorch.sh --clean
> git submodule sync
> git submodule update --init
> ```
6 changes: 3 additions & 3 deletions docs/source/llm/getting-started.md
@@ -52,7 +52,7 @@ git submodule update --init
# Create a conda environment and install requirements.
conda create -yn executorch python=3.10.0
conda activate executorch
-./install_requirements.sh
+./install_executorch.sh
cd ../..
```
@@ -83,7 +83,7 @@ cd third-party/executorch
git submodule update --init
# Install requirements.
-PYTHON_EXECUTABLE=python ./install_requirements.sh
+PYTHON_EXECUTABLE=python ./install_executorch.sh
cd ../..
```
@@ -396,7 +396,7 @@ At this point, the working directory should contain the following files:

If all of these are present, you can now build and run:
```bash
-./install_requirements.sh --clean
+./install_executorch.sh --clean
(mkdir cmake-out && cd cmake-out && cmake ..)
cmake --build cmake-out -j10
./cmake-out/nanogpt_runner
4 changes: 2 additions & 2 deletions docs/source/runtime-build-and-cross-compilation.md
@@ -45,7 +45,7 @@ cd executorch

# Clean and configure the CMake build system. It's good practice to do this
# whenever cloning or pulling the upstream repo.
-./install_requirements.sh --clean
+./install_executorch.sh --clean
(mkdir cmake-out && cd cmake-out && cmake ..)
```

@@ -122,7 +122,7 @@ Following are instruction on how to perform cross compilation for Android and iO
Assuming Android NDK is available, run:
```bash
# Run the following lines from the `executorch/` folder
-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-android-out && cd cmake-android-out

# point -DCMAKE_TOOLCHAIN_FILE to the location where ndk is installed
2 changes: 1 addition & 1 deletion docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -147,7 +147,7 @@ After exporting the XNNPACK Delegated model, we can now try running it with exam
cd executorch

# Get a clean cmake-out directory
-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-out

# Configure cmake
4 changes: 2 additions & 2 deletions examples/demo-apps/android/ExecuTorchDemo/README.md
@@ -70,7 +70,7 @@ export ANDROID_NDK=<path-to-android-ndk>
export ANDROID_ABI=arm64-v8a

# Run the following lines from the `executorch/` folder
-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-android-out

# Build the core executorch library
@@ -114,7 +114,7 @@ export ANDROID_NDK=<path-to-android-ndk>
export ANDROID_ABI=arm64-v8a
export QNN_SDK_ROOT=<path-to-qnn-sdk>

-./install_requirements.sh --clean
+./install_executorch.sh --clean
mkdir cmake-android-out
cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
@@ -36,7 +36,7 @@ git submodule update --init
```
Install dependencies
```
-./install_requirements.sh
+./install_executorch.sh
```
## Setup Environment Variables
### Download Buck2 and make executable
@@ -34,7 +34,7 @@ git submodule update --init
```
Install dependencies
```
-./install_requirements.sh
+./install_executorch.sh
```

## Setup QNN
@@ -36,12 +36,12 @@ git submodule update --init
```
Install dependencies
```
-./install_requirements.sh
+./install_executorch.sh
```

Optional: Use the --pybind flag to install with pybindings.
```
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh --pybind xnnpack
```


2 changes: 1 addition & 1 deletion examples/demo-apps/apple_ios/ExecuTorchDemo/README.md
@@ -51,7 +51,7 @@ python3 -m venv .venv && source .venv/bin/activate

pip install --upgrade cmake pip setuptools wheel

-./install_requirements.sh --pybind coreml mps xnnpack
+./install_executorch.sh --pybind coreml mps xnnpack
```

### 4. Backend Dependencies
@@ -33,7 +33,7 @@ git submodule update --init
Install dependencies

```
-./install_requirements.sh
+./install_executorch.sh
```

## Prepare Models
@@ -32,11 +32,11 @@ git submodule update --init
Install dependencies

```
-./install_requirements.sh
+./install_executorch.sh
```
Optional: Use the --pybind flag to install with pybindings.
```
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh --pybind xnnpack
```
## Prepare Models
In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5.
4 changes: 2 additions & 2 deletions examples/demo-apps/react-native/rnllama/README.md
@@ -26,7 +26,7 @@ A React Native mobile application for running LLaMA language models using ExecuT

3. Pull submodules: `git submodule sync && git submodule update --init`

-4. Install dependencies: `./install_requirements.sh --pybind xnnpack && ./examples/models/llama/install_requirements.sh`
+4. Install dependencies: `./install_executorch.sh --pybind xnnpack && ./examples/models/llama/install_requirements.sh`

5. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`

@@ -40,4 +40,4 @@ A React Native mobile application for running LLaMA language models using ExecuT

10. Select the model and tokenizer in the app to start chatting:

-[![rnllama]](https://github.com/user-attachments/assets/b339f1ec-8b80-41f0-b3f6-ded6698ac926)
+[![rnllama]](https://github.com/user-attachments/assets/b339f1ec-8b80-41f0-b3f6-ded6698ac926)
2 changes: 1 addition & 1 deletion examples/devtools/build_example_runner.sh
@@ -37,7 +37,7 @@ done
main() {
cd "${EXECUTORCH_ROOT}"

-./install_requirements.sh --clean
+./install_executorch.sh --clean

if [[ "${BUILD_COREML}" == "ON" ]]; then
cmake -DCMAKE_INSTALL_PREFIX=cmake-out \
6 changes: 3 additions & 3 deletions examples/models/llama/README.md
@@ -148,7 +148,7 @@ Llama 3 8B performance was measured on the Samsung Galaxy S22, S24, and OnePlus
## Step 1: Setup
> :warning: **double check your python environment**: make sure `conda activate <VENV>` is run before all the bash and python scripts.
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_requirements.sh --pybind xnnpack`
+1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh --pybind xnnpack`
2. Run `examples/models/llama/install_requirements.sh` to install a few dependencies.


@@ -440,8 +440,8 @@ This example tries to reuse the Python code, with minimal modifications to make
```
git clean -xfd
pip uninstall executorch
-./install_requirements.sh --clean
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh --clean
+./install_executorch.sh --pybind xnnpack
```
- If you encounter `pthread` related issues during link time, add `pthread` in `target_link_libraries` in `CMakeLists.txt`
- On Mac, if there is linking error in Step 4 with error message like