Fix incorrect/missing Google Colab Links #20

Merged
merged 3 commits on Jan 15, 2025
Original file line number Diff line number Diff line change
@@ -1314,7 +1314,7 @@
"source": [
"\n",
"\n",
"The max cut problem for the `sampleGraph3` took some time to solve sequentially. Like we did in [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb), we can create a python script to run the recursive algorithm in parallel. One option is to distribute the top-level of the recursion (e.g. solutions to `Global:0`, `Global:1`, ...`Global:n`) to the GPU processes, and then merge those results back together on GPU process 0. We've created the script for this and saved it as Example-03.py. If you have not yet already done so, download [Example-03.py](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/for-local-instance/Example-03.py) and save it in your working directory. Execute the cell below to find a max cut approximation of `sampleGraph3` using 4 GPU processes."
"The max cut problem for the `sampleGraph3` took some time to solve sequentially. Like we did in [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb), we can create a python script to run the recursive algorithm in parallel. One option is to distribute the top-level of the recursion (e.g. solutions to `Global:0`, `Global:1`, ...`Global:n`) to the GPU processes, and then merge those results back together on GPU process 0. We've created the script for this and saved it as Example-03.py. If you have not yet already done so, download [Example-03.py](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/Example-03.py) and save it in your working directory. Execute the cell below to find a max cut approximation of `sampleGraph3` using 4 GPU processes."
]
},
{
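The notebook text in the hunk above describes distributing the top-level subproblems (`Global:0` … `Global:n`) across GPU processes and merging the partial results back on process 0. A minimal stdlib-only sketch of that distribute-and-merge pattern is below; it is hypothetical — the actual Example-03.py uses MPI and CUDA-Q, and the helper names here (`assign_subproblems`, `merge_on_rank_0`) are illustrative, not taken from the script.

```python
def assign_subproblems(subproblem_names, num_ranks):
    """Deal the top-level subproblems out round-robin to the ranks."""
    assignment = {rank: [] for rank in range(num_ranks)}
    for i, name in enumerate(subproblem_names):
        assignment[i % num_ranks].append(name)
    return assignment

def merge_on_rank_0(partial_results):
    """Rank 0 combines the per-rank partial result dictionaries."""
    merged = {}
    for partial in partial_results:
        merged.update(partial)
    return merged

names = [f"Global:{i}" for i in range(6)]
assignment = assign_subproblems(names, 4)
# Each rank would solve only its assigned subproblems; we fake the results.
partials = [{name: "cut-for-" + name for name in names_for_rank}
            for names_for_rank in assignment.values()]
solution = merge_on_rank_0(partials)  # every subproblem's result, on rank 0
```

With 6 subproblems and 4 ranks, ranks 0 and 1 each receive two subproblems and ranks 2 and 3 receive one, which is the kind of top-level load split the paragraph describes.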
@@ -377,7 +377,7 @@
"source": [
"## 4.4 Weighted Max Cut using a modified Divide-and-Conquer QAOA\n",
"\n",
"If you have not already done so, download the Example-04.py from the repository and save it to your working directory. Add the modifications that were made in the exercises above to the [Example-04.py](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/for-local-instance/Example-04.py) which calls up the example graph from [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb) with random weights assigned to the vertices. In particular fill in your code between the lines `# Edit the code above` and `# Edit the code below` for the functions: `hamiltonian_max_cut`, `merger_graph_penalties`, and `cutvalue`. Make sure to save the file. Run the MPI call below to see how the algorithm performs. You may notice the results are not competitive with the classical methods, as is. \n",
"If you have not already done so, download the Example-04.py from the repository and save it to your working directory. Add the modifications that were made in the exercises above to the [Example-04.py](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/Example-04.py) which calls up the example graph from [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb) with random weights assigned to the vertices. In particular fill in your code between the lines `# Edit the code above` and `# Edit the code below` for the functions: `hamiltonian_max_cut`, `merger_graph_penalties`, and `cutvalue`. Make sure to save the file. Run the MPI call below to see how the algorithm performs. You may notice the results are not competitive with the classical methods, as is. \n",
"\n",
"For the assessment, make modifications to the Example-04.py to improve performance by making some adjustments as discussed at the end of [Lab 3](3_Recursive-divide-and-conquer.ipynb). Here are a few recommendations:\n",
"\n",
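The hunk above asks the reader to fill in `hamiltonian_max_cut`, `merger_graph_penalties`, and `cutvalue` in Example-04.py. As one hedged illustration of the simplest of the three, here is a plausible shape for a weighted `cutvalue` helper — assuming the cut value is the total weight of edges crossing the partition; the real interface in Example-04.py may differ (the notebook text mentions weights on the graph, and this sketch assumes they are carried on the edges).

```python
def cutvalue(coloring, weighted_edges):
    """Total weight of edges crossing the cut.

    coloring: dict mapping vertex -> 0 or 1 (which side of the cut).
    weighted_edges: iterable of (u, v, weight) triples.
    """
    return sum(w for u, v, w in weighted_edges if coloring[u] != coloring[v])

# Tiny example: a triangle with edge weights 1.0, 2.0, 3.0.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
best = cutvalue({0: 0, 1: 1, 2: 1}, edges)  # edges (0,1) and (0,2) cross: 4.0
```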
4 changes: 2 additions & 2 deletions qaoa-for-max-cut/00_StartHere.ipynb
@@ -9,7 +9,7 @@
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/00_StartHere.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/00_StartHere.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/00_StartHere.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
@@ -96,7 +96,7 @@
"\n",
"If you do not have a local installation of CUDA-Q or access to a GPU, you can use the \"Launch on qBraid\" buttons like this one <a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/00_StartHere.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a> to upload and execute notebooks on qBraid Lab.\n",
"\n",
"Alternatively, you can individually upload and execute notebooks in [Google Colaboratory](https://colab.google/) by following the \"Open in Colab\" buttons like this one <a href=\"https://colab.research.google.com/github/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/01_Max-Cut-with-QAOA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> that are located at the top of each notebook."
"Alternatively, you can individually upload and execute notebooks in [Google Colaboratory](https://colab.google/) by following the \"Open in Colab\" buttons like this one <a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/01_Max-Cut-with-QAOA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> that are located at the top of each notebook."
]
},
{
6 changes: 3 additions & 3 deletions qaoa-for-max-cut/02_One-level-divide-and-conquer-QAOA.ipynb
@@ -9,7 +9,7 @@
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/02_One-level-divide-and-conquer-QAOA.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/02_One-level-divide-and-conquer-QAOA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/02_One-level-divide-and-conquer-QAOA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
@@ -2252,7 +2252,7 @@
"id": "6097aab8"
},
"source": [
"If you have not yet already done so, download the [Example-02 files from the github repository](https://github.com/mmvandieren/cuda-q-academic/tree/main/qaoa-for-max-cut/for-local-instance) and save them in your working directory. Take a look at the Example-02-step-1.py file to see how the GPU process with `rank` 0 carries out the subgraph division and communicates the relevant subgraph data to the remaining GPU processes. The variable `assigned_subgraph_dictionary` (whose definition depends on the `rank` variable) is introduced to hold only the portion of the `subgraph_dictionary` that is needed by the GPU associated with the value of `rank`.\n",
"If you have not yet already done so, download the [Example-02 files from the github repository](https://github.com/NVIDIA/cuda-q-academic/tree/main/qaoa-for-max-cut) and save them in your working directory. Take a look at the Example-02-step-1.py file to see how the GPU process with `rank` 0 carries out the subgraph division and communicates the relevant subgraph data to the remaining GPU processes. The variable `assigned_subgraph_dictionary` (whose definition depends on the `rank` variable) is introduced to hold only the portion of the `subgraph_dictionary` that is needed by the GPU associated with the value of `rank`.\n",
"\n",
"Run the following cell to use MPI to execute this step and watch the output that reflects the actions that have been completed by the different processors."
]
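The cell text above explains that `assigned_subgraph_dictionary` holds only the slice of `subgraph_dictionary` that a given GPU process needs, based on its `rank`. A hypothetical stdlib-only sketch of that rank-dependent slicing follows; the actual Example-02-step-1.py obtains `rank` from MPI and communicates the data between processes, which this sketch does not attempt.

```python
def assigned_subgraphs(subgraph_dictionary, rank, num_ranks):
    """Keep only the entries this rank is responsible for (round-robin)."""
    keys = sorted(subgraph_dictionary)
    return {k: subgraph_dictionary[k]
            for i, k in enumerate(keys) if i % num_ranks == rank}

# Five subgraphs split across four ranks: rank 0 gets two, the rest get one.
subgraphs = {f"Global:{i}": f"graph-{i}" for i in range(5)}
mine = assigned_subgraphs(subgraphs, rank=1, num_ranks=4)
```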
@@ -2627,7 +2627,7 @@
"id": "d4e15edc"
},
"source": [
"![](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/images/nvidia-logo.png?raw=1)"
"![](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/images/nvidia-logo.png?raw=1)"
]
}
],
6 changes: 3 additions & 3 deletions qaoa-for-max-cut/03_Recursive-divide-and-conquer.ipynb
@@ -9,7 +9,7 @@
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/03_Recursive-divide-and-conquer.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/03_Recursive-divide-and-conquer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/03_Recursive-divide-and-conquer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
@@ -1815,7 +1815,7 @@
"source": [
"\n",
"\n",
"The max cut problem for the `sampleGraph3` took some time to solve sequentially. Like we did in [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb), we can create a python script to run the recursive algorithm in parallel. One option is to distribute the top-level of the recursion (e.g. solutions to `Global:0`, `Global:1`, ...`Global:n`) to the GPU processes, and then merge those results back together on GPU process 0. We've created the script for this and saved it as Example-03.py. If you have not yet already done so, download [Example-03.py](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/for-local-instance/Example-03.py) and save it in your working directory. Execute the cell below to find a max cut approximation of `sampleGraph3` using 4 GPU processes."
"The max cut problem for the `sampleGraph3` took some time to solve sequentially. Like we did in [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb), we can create a python script to run the recursive algorithm in parallel. One option is to distribute the top-level of the recursion (e.g. solutions to `Global:0`, `Global:1`, ...`Global:n`) to the GPU processes, and then merge those results back together on GPU process 0. We've created the script for this and saved it as Example-03.py. If you have not yet already done so, download [Example-03.py](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/Example-03.py) and save it in your working directory. Execute the cell below to find a max cut approximation of `sampleGraph3` using 4 GPU processes."
]
},
{
@@ -2016,7 +2016,7 @@
"id": "1e090881"
},
"source": [
"![](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/images/nvidia-logo.png?raw=1)"
"![](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/images/nvidia-logo.png?raw=1)"
]
}
],
12 changes: 12 additions & 0 deletions qaoa-for-max-cut/04_Assessment-Solution.ipynb
@@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/04_Assessment-Solution.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/04_Assessment-Solution.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
14 changes: 13 additions & 1 deletion qaoa-for-max-cut/04_Assessment.ipynb
@@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=qaoa-for-max-cut/04_Assessment.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/04_Assessment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -390,7 +402,7 @@
"source": [
"## 4.4 Weighted Max Cut using a modified Divide-and-Conquer QAOA\n",
"\n",
"If you have not already done so, download the Example-04.py from the repository and save it to your working directory. Add the modifications that were made in the exercises above to the [Example-04.py](https://github.com/mmvandieren/cuda-q-academic/blob/main/qaoa-for-max-cut/for-local-instance/Example-04.py) which calls up the example graph from [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb) with random weights assigned to the vertices. In particular fill in your code between the lines `# Edit the code above` and `# Edit the code below` for the functions: `hamiltonian_max_cut`, `merger_graph_penalties`, and `cutvalue`. Make sure to save the file. Run the MPI call below to see how the algorithm performs. You may notice the results are not competitive with the classical methods, as is. \n",
"If you have not already done so, download the Example-04.py from the repository and save it to your working directory. Add the modifications that were made in the exercises above to the [Example-04.py](https://github.com/NVIDIA/cuda-q-academic/blob/main/qaoa-for-max-cut/Example-04.py) which calls up the example graph from [Lab 2](2_One-level-divide-and-conquer-QAOA.ipynb) with random weights assigned to the vertices. In particular fill in your code between the lines `# Edit the code above` and `# Edit the code below` for the functions: `hamiltonian_max_cut`, `merger_graph_penalties`, and `cutvalue`. Make sure to save the file. Run the MPI call below to see how the algorithm performs. You may notice the results are not competitive with the classical methods, as is. \n",
"\n",
"For the assessment, make modifications to the Example-04.py to improve performance by making some adjustments as discussed at the end of [Lab 3](3_Recursive-divide-and-conquer.ipynb). Here are a few recommendations:\n",
"\n",
12 changes: 12 additions & 0 deletions quick-start-to-quantum/00_quick_start_to_quantum.ipynb
@@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=quick-start-to-quantum/00_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/quick-start-to-quantum/00_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
14 changes: 13 additions & 1 deletion quick-start-to-quantum/01_quick_start_to_quantum.ipynb
@@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=quick-start-to-quantum/01_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/quick-start-to-quantum/01_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
Expand Down Expand Up @@ -174,7 +186,7 @@
"id": "fe371889",
"metadata": {},
"source": [
"\n",
"$\\newcommand{\\ket}[1]{|#1\\rangle}$\n",
"\n",
"In quantum computing, instead of bits, information is stored in *qubits*. While a single bit can only be in one of 2 states at a given time, a single qubit can be in one of infinitely many states! This suggests that we might be able to handle information more efficiently with qubits than we could with bits.\n",
"\n",
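The hunk above adds the `\ket` macro next to the claim that a single qubit can be in one of infinitely many states. A small plain-Python illustration of that point: any pair of complex amplitudes $(\alpha, \beta)$ with $|\alpha|^2 + |\beta|^2 = 1$ is a valid qubit state, so the states form a continuum. (This sketch is illustrative only; the notebooks themselves use CUDA-Q.)

```python
import math

def is_valid_state(alpha, beta, tol=1e-9):
    """A qubit state must have amplitudes with |alpha|^2 + |beta|^2 = 1."""
    return abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < tol

# The |+> state: an equal superposition of |0> and |1>.
alpha = beta = 1 / math.sqrt(2)
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2  # measurement probabilities: 0.5 each
```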
12 changes: 12 additions & 0 deletions quick-start-to-quantum/02_quick_start_to_quantum.ipynb
@@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://account.qbraid.com?gitHubUrl=https://github.com/NVIDIA/cuda-q-academic.git&redirectUrl=quick-start-to-quantum/02_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png\" alt=\"Launch On qBraid\" width=\"150\"/></a>\n",
"\n",
"<a href=\"https://colab.research.google.com/github/NVIDIA/cuda-q-academic/blob/main/quick-start-to-quantum/02_quick_start_to_quantum.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" width=\"150\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,