diff --git a/180108057_Ujjwal Rustagi(Week 3).ipynb b/180108057_Ujjwal Rustagi(Week 3).ipynb
new file mode 100644
index 000000000..246d03e13
--- /dev/null
+++ b/180108057_Ujjwal Rustagi(Week 3).ipynb
@@ -0,0 +1,1014 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Programming Exercise 2: Logistic Regression\n",
+ "\n",
+ "## Introduction\n",
+ "\n",
+ "In this exercise, you will implement logistic regression and apply it to two different datasets. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.\n",
+ "\n",
+ "All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).\n",
+ "\n",
+ "Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, and [`matplotlib`](https://matplotlib.org/) for plotting. In this assignment, we will also use [`scipy`](https://docs.scipy.org/doc/scipy/reference/), which contains scientific and numerical computation functions and tools. \n",
+ "\n",
+ "You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# used for manipulating directory paths\n",
+ "import os\n",
+ "\n",
+ "# Scientific and vector computation for python\n",
+ "import numpy as np\n",
+ "\n",
+ "# Plotting library\n",
+ "from matplotlib import pyplot\n",
+ "\n",
+ "# Optimization module in scipy\n",
+ "from scipy import optimize\n",
+ "\n",
+ "# library written for this exercise providing additional functions for assignment submission, and others\n",
+ "import utils\n",
+ "\n",
+ "# define the submission/grader object for this exercise\n",
+ "grader = utils.Grader()\n",
+ "\n",
+ "# tells matplotlib to embed plots within the notebook\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Submission and Grading\n",
+ "\n",
+ "\n",
+ "After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.\n",
+ "\n",
+ "\n",
+ "| Section | Part | Submission function | Points \n",
+ "| :- |:- | :- | :-:\n",
+ "| 1 | [Sigmoid Function](#section1) | [`sigmoid`](#sigmoid) | 5 \n",
+ "| 2 | [Compute cost for logistic regression](#section2) | [`costFunction`](#costFunction) | 30 \n",
+ "| 3 | [Gradient for logistic regression](#section2) | [`costFunction`](#costFunction) | 30 \n",
+ "| 4 | [Predict Function](#section4) | [`predict`](#predict) | 5 \n",
+ "| 5 | [Compute cost for regularized LR](#section5) | [`costFunctionReg`](#costFunctionReg) | 15 \n",
+ "| 6 | [Gradient for regularized LR](#section5) | [`costFunctionReg`](#costFunctionReg) | 15 \n",
+ "| | Total Points | | 100 \n",
+ "\n",
+ "\n",
+ "\n",
+ "You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "
\n",
+ "At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once. They must also be re-executed everytime the submitted function is updated.\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1 Logistic Regression\n",
+ "\n",
+ "In this part of the exercise, you will build a logistic regression model to predict whether a student gets admitted into a university. Suppose that you are the administrator of a university department and\n",
+ "you want to determine each applicant’s chance of admission based on their results on two exams. You have historical data from previous applicants that you can use as a training set for logistic regression. For each training example, you have the applicant’s scores on two exams and the admissions\n",
+ "decision. Your task is to build a classification model that estimates an applicant’s probability of admission based the scores from those two exams. \n",
+ "\n",
+ "The following cell will load the data and corresponding labels:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load data\n",
+ "# The first two columns contains the exam scores and the third column\n",
+ "# contains the label.\n",
+ "data = np.loadtxt(os.path.join('Data', 'ex2data1.txt'), delimiter=',')\n",
+ "X, y = data[:, 0:2], data[:, 2]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.1 Visualizing the data\n",
+ "\n",
+ "Before starting to implement any learning algorithm, it is always good to visualize the data if possible. We display the data on a 2-dimensional plot by calling the function `plotData`. You will now complete the code in `plotData` so that it displays a figure where the axes are the two exam scores, and the positive and negative examples are shown with different markers.\n",
+ "\n",
+ "To help you get more familiar with plotting, we have left `plotData` empty so you can try to implement it yourself. However, this is an optional (ungraded) exercise. We also provide our implementation below so you can\n",
+ "copy it or refer to it. If you choose to copy our example, make sure you learn\n",
+ "what each of its commands is doing by consulting the `matplotlib` and `numpy` documentation.\n",
+ "\n",
+ "```python\n",
+ "# Find Indices of Positive and Negative Examples\n",
+ "pos = y == 1\n",
+ "neg = y == 0\n",
+ "\n",
+ "# Plot Examples\n",
+ "pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)\n",
+ "pyplot.plot(X[neg, 0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plotData(X, y):\n",
+ " \"\"\"\n",
+ " Plots the data points X and y into a new figure. Plots the data \n",
+ " points with * for the positive examples and o for the negative examples.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " X : array_like\n",
+ " An Mx2 matrix representing the dataset. \n",
+ " \n",
+ " y : array_like\n",
+ " Label values for the dataset. A vector of size (M, ).\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Plot the positive and negative examples on a 2D plot, using the\n",
+ " option 'k*' for the positive examples and 'ko' for the negative examples. \n",
+ " \"\"\"\n",
+ " # Create New Figure\n",
+ " fig = pyplot.figure()\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ " # Find Indexes of Positive and NEgative Examples\n",
+ " \n",
+ " pos = y == 1\n",
+ " neg = y == 0\n",
+ "\n",
+ " pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)\n",
+ " pyplot.plot(X[neg,0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)\n",
+ " \n",
+ " # ============================================================"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, we call the implemented function to display the loaded data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEGCAYAAACKB4k+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO2de3hU5bXwf2sGyBCDQSC1UqJQoV65SWyl7YGEYKVfOQXb2gO13o6XeuspDfVyKocYxK9VqLY+52gLUpG0X/Bo2wge28MdrNrWYL1bSy2ICMpFiaTIJWF9f8yekISZZGYysy8z6/c877Nn7z177zV7ZvZ63/Wui6gqhmEYhgEQ8loAwzAMwz+YUjAMwzBaMaVgGIZhtGJKwTAMw2jFlIJhGIbRSg+vBegOAwYM0MGDB3sthmEYRqDYuHHjblUtibcv0Eph8ODBNDQ0eC2GYRhGoBCRtxLtM/ORYRiG0YopBcMwDKOVrCkFEfm5iOwUkVfabOsnIitFZJOzPMHZLiJyn4j8TUReEpFzsiWXYRiGkZhsjhQWA5M6bLsVWK2qw4DVzjrAF4FhTrsGeCCLchmGYRgJyJpSUNUNwPsdNk8BHnZePwxMbbN9iUb5A9BXRE7KlmzdoampiTlzqiktLSEcDlFaWsKcOdU0NTV5LZphGEa3cdv76ERV3QGgqjtE5GPO9k8Ab7d53zZn246OJxCRa4iOJjj55JOzK20HmpqaKC8/j+LiN6muPsCQIbB5827q6u5m2bJfsW7dHygqKnJVJsMwjEzil4lmibMtbvpWVV2gqmWqWlZSEtfNNmvcc888iovfZNasAwwdCuEwDB0Ks2YdoLj4Te65Z56r8hiGYWQat5XCezGzkLPc6WzfBpS2ed8gYLvLsnXJwoX3M336AaSDChOBadMO8OCD2Z0KaWxs5MILL6SxsTGr1zH8hX3vhpu4rRSWAZc5ry8DHm+z/VLHC+k8oDFmZvIT27fvYciQ+PuGDInuzybLli2jvr6e5cuXZ/U6hr+w791wk2y6pNYBzwKnicg2EbkS+CFwvohsAs531gGeBP4O/A1YCFyfLbm6w8CB/dm8Of6+zZuj+7PJz3/+83ZLwx287qnb9264STa9j6ar6kmq2lNVB6nqIlXdo6qVqjrMWb7vvFdV9QZVPVVVh6uqL3NXXH319dTVRehYrE4Vli6NcNVV12X0ehMnTkREWtszzzwDwNNPP91u+8SJEzN63XynoxJwu6cetO/da6VpZBa/TDQHgqqqm2hsPJW5cyNs2gTNzbBpE8ydG6Gx8VSqqm7K6PVuu+02CgsLW9cPHTrUbglQWFjIrFmzMnrdfKejEnC7p56J793NB7WZt3IMVQ1sGzNmjLrNvn37tKZmtpaWlmg4HNLS0hKtqZmt+/bty8r11qxZo4WFhUrUG6tdKyws1LVr12bluvlM3759293nXr16tVvGWmVlZdZk6O73vmTJEgW0trY2azLGKC8vV0ArKiqyfi0jMwANmuC56vmDvTvNC6XgBcuXL9dIJNLuwRCJRHT58uVei5YTVFZWtru3PXr0iPswdlshd+d7z+aDuuP98kJpGt2jM6Vg5qME+Clyee/evfTo0YNQKETv3r0JhUL06NGDvXv3ui5LLtLRXNPc3Nzp+wsLC/mf//kfysvLsypXKt+7m/MQZtbMcRJpiyC0bI0U9u3bp2PGnKUTJkR04UJ01Sp04UJ0woSIjhlzVtZMRYkoLy/XUCiko0eP1hUrVujo0aM1FArZcD2DdGauwaMRWirfezLyZ3J0Y2bNYIONFFLDb5HLxcXFzJs3j4aGBs4//3yee+457r77bo4//nhX5chlKioqeOSRR4hEIsfs82qElsr3XlFRwRNPPNGuB9+WTI9uEt2vSCTCI488kvVRlJFFEmmLILRsjRQGDRqgCxeia9ce2xYsQEtLS7JyXcNbamtrtaioSEOhkPbu3bu153vKKacEZoTm5vxTx/sVCoW0qKjIlclto3tgI4XU8Dpy2fCGRYsWsX//fkaOHMnjjz/e2iMfMmRIYEZobs4/dbxfI0eOZP/+/RZkF3BMKcTB68hlwxs6mmvef/995s+fT3FxMQDhcJiZM2dSX1/vsaSJcfNBbWbN3ESiI4lgUlZWpg0NmQ9+njOnmvXr72bWrPbJ71SjgWrjx9/M7Nk1Gb+uYSRLY2Mjl19+OYsXL25VWgBTp05l3LhxzJgxg1AoREtLCz/+8Y956qmnfK3MDHcRkY2qWhZ3nymFY2lbN2HatFjdhGgqi8bGU61uguE5tbW1XHrppdTW1vLNb37Ta3GMgNGZUjDzURyKiopYt+4PjB9/M3fcUcKkSSHuuKOE8eNvNoVg+AJLkmdkCxspGEYAmDhxIqtXr25d79WrF4cOHWpdxqisrGTVqlVeiBiXRGYuw1tspGAYASeoUcSWLC94mFIwjADgdnBapjAzV/AwpWAYASEIUcSJcjCtW7fOl7UgjGMxpWAYAcLvyRETmbnazl360cxlHMWUgmEEiGwGp2WiME9QzVzGUUwpGEaAyGYUcaYmhe+88072798fd9/+/fupqKgwE5KPMZdUw8hj2rqMTp06lXXr1lFRUcGaNWvSPufatWuZNGlSO8+ojtiIwVt855IqIt8RkVdE5FURmeFs6yciK0Vkk7M8wQvZDH9gxeCzz8SJE+nbty/19fX07ds3Y4V5KioqOOOMMxLuN4Xgb1xXCiJyNnA18GlgJDBZRIYBtwKrVXUYsNpZN/IU82/PPrfddhuh0NFHQCZjHwYPHsyVV155jKdUKBTyjaeUER8vRgpnAH9Q1f2q2gysBy4EpgAPO+95GJjqgWxGB7zqsZt/e3Zo6zI6YcKEdkqhI93p0dfX11NeXn6Mp1RhYaFvPKVSIZ9Grl4ohVeAcSLSX0QKgf8DlAInquoOAGf5sXgHi8g1ItIgIg27du1yTeh8xa0eu5s1hvOZZOtR9+zZs9s9+lyqt5BPI1fXlYKqvg7cBawEfge8CHReKb398QtUtUxVy0pKSrIkpRHDrR57UNM4BI2uXEYBRISCgoJu9+hzqd5CXo1cE5Vkc6sB/xe4HngDOMnZdhLwRlfHZqscZz5TWVnZrpRjr1692i1jrbKyMuPXtmLw7hGvbKeI6Jw5c3T06NEqIjpgwADdu3ev16K6zt69e3XAgAGe/Q/cAL+V4xSRjznLk4GvAHXAMuAy5y2XAY97IVuQyYTd08seexDSOOQKbSOjQ6EQIsJxxx3HkCFDeO655/iXf/kXdu/enRfmko4sW7aM3bt306tXr9Zt+TRy9Sp47Vci8hqwHLhBVT8AfgicLyKbgPOddSMFMmH39Doi1e9pHHKFtvb+3/3ud4waNarV3h8Oh3n33XeBPDGXdCD2mc8880xP/geeT2onGkIEoZn5qD3l5eUKaEVFRbfPFc+8EIlEdPny5RmQNDHl5eUaCoV09OjRumLFCh09erSGQqGMfCbjKFOmTNEf/ehH2tLSoqqqEyZMyGlzSWc
kazJ163+wZMkSBbS2tjZr18Bv5iMjM2TTY8erHrsXk5Oe98w8oL6+nqqqqlaX1FmzZuXtRH8yJlOIFjZy43/g+aR2Im0RhJbvI4XOJmZjLd0J2nzqsbvRMwsCy5Yt03A4nJcT/V39l4YOHZq1/4EXzh10MlLw/MHenZbvSkE1ex47Hc0Lzc3NOn/+fJ0yZUoGpfcHmTS7ZYK9e/fq1KlTXff8iSnHnj17um429APxTKahUEivvPLKrP4Pstm5S4QphRzHK/t/UPHS7TYZvBq5xJRjOBzWUCikvXv31lAopEVFRXkxiqqtrdWioiJPPrvb7tidKQWbU8gBzGMnNfweKOeWTbm8vDzunFRLSwtHjhzho48+4siRIzQ1NeWFF5KXEdh+csc2pZAD5FI6ATfw2u22I16l+Bg7dmy79XiprgsLC7n22msDGYWcKl5HYPumc5doCBGEZuajKPlk/88kfjG7eWFTVj1qLgqFQnk5uew33HTuwOYUDONYvLQhd8QNm3KiuZQePXq47otvHIubnTtTCoYRB7+53WZ75JLMiASiOZDyZXI5X+lMKdicgpG3eG1D7ki2bcpdzaWICPPmzWuX8sLIQxJpiyA0GykYuYRbI5d4I5IePXro448/rqo2J5UPYCMFw/A/bo1c4o1IIpEIH374IQDhcJiZM2dSX1+f0esawUCiSiOYlJWVaUNDg9diGEagqKioYMOGDYwcOZK77rqLW265hRdffJHx48ezZs0ar8UzXEBENqpqWbx9NlIwjDzDb3Mphr+wkYJhGEaeYSMFw8gy+Zh+28hNTCkYRgbIRNU7w/ADphQMIwN4XhjFMDKEKQXDSAOvktgZRrYxpWAYaeD39NuGkS6eKAUR+a6IvCoir4hInYhERGSIiPxRRDaJyCMi0ssL2QwjGfyWftswMoXrSkFEPgH8G1CmqmcDYWAacBdwr6oOAz4ArnRbNsNIBT8VRjGMTOGV+agH0FtEegCFwA5gAvCYs/9hYKpHsgWKpqYm5sypprS0hHA4RGlpCXPmVNPU1OS1aHmBbwqjYG6xRmZwXSmo6jvAfGArUWXQCGwE9qpqs/O2bcAn4h0vIteISIOINOzatcsNkX1LU1MT5eXnsX793VRX72bFCqW6ejfr199Nefl5phhcwE9V78wt1sgEXpiPTgCmAEOAgcBxwBfjvDVuqLWqLlDVMlUtKykpyZ6gAeCee+ZRXPwms2YdYOhQCIdh6FCYNesAxcVvcs8987wWMeeJpYxYvXo1999/P6tWrfIsZYS5xRqZwAvz0URgs6ruUtXDwK+BzwJ9HXMSwCBguweypYTXppuFC+9n+vQDiLTfLgLTph3gwQcfcEWOfKa+vp6qqiqeeOIJ6uvrefLJJ13LMGpusUY28EIpbAXOE5FCERGgEngNWAt8zXnPZcDjHsiWNH4w3WzfvochQ+LvGzIkut9wBy966eYW60+CPrfjxZzCH4lOKD8PvOzIsAC4BagSkb8B/YFFbsuWCn4w3Qwc2J/Nm+Pv27w5ut/IDn7opZtbrD8J+tyOJ95Hqlqtqqer6tmqeomqHlTVv6vqp1V1qKpepKoHvZAtWfxgurn66uupq4vQMdGtKixdGuGqq67Lugz5il966bngFhv0nnVHgj63YxHNaeIH001V1U00Np7K3LkRNm2C5mbYtAnmzo3Q2HgqVVU3ZV2GfMVPvXQ/ucWmQ9B71n4YNWYSUwppkk3TTbIT2EVFRaxb9wfGj7+ZO+4oYdKkEHfcUcL48Tezbt0fKCoqSluG7uLH3l+mZfJLL91PbrHpEPSetV9GjZnClEKaZMt0k+oEdlFREbNn17B1606am1vYunUns2fXeKoQwJ+9v2zI5Ideul8qqSWrdHOtZ+2nUWNGUNXAtjFjxqhX7Nu3T8eMOUsnTIjoggXoypXoggXohAkRHTPmLN23b19a562pma0TJkR0zRp07dqjbc2a6LlramZn+JNkh/LycgW0oqLCa1FayYZM5eXlGgqFdPTo0bpixQodPXq0hkIhX31ut1iyZIkCWltb2+n71qxZo4WFhUo0FiluKyws1LVr17ojeIZYvny5RiKRdp8jEono8uXLvRbtGIAGTfBctZFCGjQ1NXHPPfN49913Wbv2ADNmCF/8ItTUDOi26cYPE9jp4Mfenxsy+aWX7geSNQPlXM/awQ+jxoyQSFsEoXkxUmg7Qli4EF21Cl24sPsjhBihkOiqVe1HCbG2ciUaDocy9Ekyix97f36UKZeorKxsdy979erVbhlrlZWVcY8PUs86GYI0asRGCpkj2/EJQY098GPvz48y5RLJTLCGw2FmzJgR9/ic6Vk75MqosUulICKfEpHVIvKKsz5CRIIxjZ4Fsm3eCXLsgV+8cfwuU67QldLt1asXLS0tCR/yQfea6kgs5UkoFH2shsNh11KeZJJkRgoLgX8HDgOo6ktE6x/kJdmOTwh67IEfe39+lClX6EzpfupTnwISzzHkSs8610hGKRSq6p86bGuO+848INvmHT/HHiSDH3t/ycjkx7iKoBBTum05cOAAf/nLX4DEE/u50rPONZJRCrtF5FScVNYi8jWidRDykmTMO93NnurX2INk8GPvLxmZOsYwmJJInpjSHTp0KAUFBa3bm5ujfcegBnHlLYlmoGMN+CSwCtgPvAP8Hjilq+PcaF57H8WLT9ixY0dWvZOM7NAxhiFZn3tDdcqUKfqjH/1IW1paOvX4Mk+vzLF3716dOnWq7t27N63j6cT7qCuFEAK+7rw+DujT2fvdbl4Fr+3bt09ramZraWmJhsMhLS0t0Zqa2a3bcyH4LNdJ1p3yhBNO8FrUwJFrrqZ+pLudls6UQqfmI1U9AtzovP6Hqu7LxOgk6HRm3glq8Fm+kYw7JcC+ffsCmXrBS2xiP/tkM19UMnMKK0XkeyJSKiL9Yi3jkuQIfsieanRNV+6UMWJ2cTB7eLL40dkg6LiZMSAZpfCvwA3ABmCj0xq6feUcJdPeSV6X/MxlErlTxsMC3ZLHj84GQcfNTKyiHd1oAkRZWZk2NPhLP82ZU8369Xcza1Z7E5JqNNZg/PibmT27JqlzxTKmFhe/yfTpBxgyJKpY6uqiMQtBcFH1O7/4xS+47rrr2L9/PwUFBRw8eJAjR460e08kEuHRRx9l8uTJHklpGLB27VomT57M/v37j9mXaqdFRDaqalm8fclENPcUkX8TkcecdqOI9EzqynlIJoPP/FDyM9fpaOooLS1t3Wf28PaYm663uBWdn4z56AFgDHC/08Y424w4ZDL4zCats09HU8fgwYNbt7e1hy9YsCCnHojpPOD9WCMjXYKq4FyZxE/klhRrwIvJbPOieVlPwQ2CmjG1K7rrY51N2vrcq6o2Nzfr/Pnz9ZxzzsmpuIV0XBr9WCMjXYIah5KpTKx0M0tqixPRDICIfBJoSVcJichpIvJCm/ahiMxwvJpWisgmZ3lCutfIFYKaMbUr/NzjTJR6ITZJmiseNMm4NPqxRkamCGoJUFcm8RNpi1gDKoGtwDpgPbAFqOjquGQaEAbeBU
4B7gZudbbfCtzV1fG5PlLI1UC4IPQ4u1srwG+k83mSqUfRu3fvQEQpB+X77GwUnckRNulGNOvRh3cBMAIYCRQkc0yS5/0C8LTz+g3gJOf1ScAbXR2f60ohWyU/3SYof8i25FqBnnQ/T1fHff/73/fmA6VIUL7PzsxamTR5dUspEI1R6Ntm/QTg+q6OS6YBPwdudF7v7bDvgwTHXEM0TqLh5JNP7vbN8TudpdQICkH5Q3Yk1/L4pPt54qWtCIVCvh/tdSQI32dno+hMjrC7qxReiLPtz10dl8R5ewG7gRM1BaXQtuX6SCGXCMIfMh65lscnnc9TW1ur4XA47nfn59FePPz2fSY7is70CLszpZDMRHNI5KhTpIiEnQd6d/ki8LyqvuesvyciJznXOAnYmYFrGD4hqBXQci2PTzqfZ9GiRRw5cgTp6BtN8NJi++37TDYHV6J92bjnySiF/wX+W0QqRWQCUAf8LgPXnu6cK8Yy4DLn9WXA4xm4huEj/PaHTIZcy+OTzucpLi5m/vz5rFy5MvD1rv32fXaVg6ugoKBdjYq2ZO2eJxpC6FEzTgi4FngM+BXwLSDc1XFdnLMQ2AMUt9nWH1gNbHKW/bo6j5mPgkWmfKzdJFHcwpQpUzyWLD26+3n8Zn5JFb9+n53d12zcc7rrfaRHH9z9gBGpHJPNZkohWPj1D+klfg7ki0dtba0WFRVpKBTS3r17aygU0qKiosAFgfmNzu5rNu55t5QC0fiE4x2FsJVoltR7ujrOjWZKITVinkyDBg3QUEh00KABgfNkyjWCFlkbxNFeEOjsvmbjnnemFJKZUyhW1Q+BrwAPqeoYIHghjHlOLOPq+vV3U129mxUrlOrq3axffzfl5edZKm6PCFpkbS6mxfZDHqTO7qvr9zyRtog14GWiwWQrgHOdbS91dZwbzUYKyZOr0dFBI4iBfLlONkZrfjcL0s2RwhyiHkh/U9XnnNxHmzKrmoxsYxlX/YGbxVKM5MjGaM3P+b26okuloKqPquoIVb3eWf+7qn41+6LlD25UV8uVMqF+GOp3h65cEIPi2hlk3Ej0FzSzYFuSGSkYWcQtW3+uZFwNcg8sRlAD+XKFbIzWcimjrCkFj3GrutrVV19PXV2E6JTQUVRh6dIIV111XUauk22C3ANrSxAD+XKFbIzWcsosmGiyIQgtFyaaBw0aoAsXxi+ks2ABWlpakpHrBDXjaq5OzJprp/dkOigsSPm9SHeiWUROd9JbFHXYPimzqil/ccvWn8kyoW6SUz2wNuSia2fQyPRoLWfMgom0BfBvRGsc1BMtrDOlzb7nEx3nZrORQn4QpB6YERyyMVoLSsQ3aY4UrgbGqOpUoBz4DxH5jrPv2HSJRlrkiq0/GdL1ssqZHpjhK7IxWvNbwr10EO34NIrtEHlNVc9ss15ENCnea8AEVR3ljoiJKSsr04aGBq/F6BYx76Pi4jeZNu0AQ4ZEvYGWLo3Q2Hiqr007qdD2c06ffvRz1tUl9zl/8YtfcN1117F//34KCgo4ePAghYWFPPDAA3zzm9908ZMYRmKmTp3KuHHjmDFjBqFQiJaWFn784x/z1FNPUV9f77V4rYjIRlUti7sz0RACWAOM6rCtB7AEaEl0nJstF8xHqrlRXa0ruhtRbROzhpE56MR81NlIYRDQrKrvxtn3OVV9OiMqqxvkwkghXygtLaG6ejdDhx67b9MmuOOOErZuTVxXKSg9MMMIAp2NFBIqhSBgSiE4hMMhVqxQwuFj9zU3w6RJIZqbW9wXzDDykM6UggWvGa6QKxHVmSDoqToMb3Drd2NKwXCFfPKy6opcSNVhuI9bv5uklYKIHC8i/WItm0IZuUdV1U00Np7K3LkRXn4ZFi+Giy6Cykr4058Oc+jQobyp6ZArqToMd3Hrd9OlUhCRb4nIe8BLRKuubQTMkI872U1zhVhE9XnnzeC228L89a/wgx/AypVw770tPPvsj3O22E8uJUsz3MOr300yI4XvAWep6mBVHeK0T2ZUigBilcyipKIYi4qKKCjoxbnn9uTOO8lKAkA/2utzNVWHkV28+t0koxTeBPZn8qIi0ldEHhORv4jI6yIy1jFLrRSRTc7yhExeM9O4ld3Uz6SjGLNd7MeP9nqroWCkg1e/m2SUwr8Dz4jIz0Tkvljr5nV/AvxOVU8HRgKvA7cCq1V1GLDaWfctVsksPcWY7QSAfrXXW6oOIx28+N0koxR+RjS6+Q8cnVPYmO4FReR4YBywCEBVD6nqXmAK8LDztoeBqeleww1ypZJZd0hHMWbaNTVI9nqroWCkg9u/m2SUQrOqVqnqQ6r6cKx145qfBHYBD4nIn0XkQRE5DjhRVXcAOMuPxTtYRK4RkQYRadi1a1c3xOge5nefnmLMtGtqkOz1uZAszXAft383ySiFtc6D+KQMuaT2AM4BHlDV0cA/SMFUpKoLVLVMVctKSkq6IUb3ML/79BRjW9fUTZui0cybNsHcudHEeFVVN6UkQ5Ds9VZDwUgHt383Xaa5EJF4f3tN1wNJRD4O/EFVBzvr/0RUKQwFylV1h4icBKxT1dM6O5eXaS7yJbtpZ8yZU8369Xcza1Z7E5Jq9CE/fvzNzJ5dc8xxTU1N3HPPPB588AG2b9/DwIH9ueqq66iquinte/bEE09w0UUXceDAgdZtkUiERx99lMmTJ6d1TsPIVbqV5qKNG+qQTLikOgn23haR2AO/kmg67mXAZc62y4DH072GGwS1klkmSbfXX1RUxOzZNWzdupPm5ha2bt3J7Nk13bpnZq/PP/zofpwTJEqf2rYBZwNfBy6NtWSO6+R8o4gGwL1EtLLbCUB/ol5Hm5xlv67Okyups4OMX9J+W2rt/GPJkiUK+K6qWRAgndTZMUSkmmjltTOBJ4EvAr9X1a9lVj2ljmVJNWJYau3cpLGxkcsvv5zFixdTXFzcbl9FRQXr1q2joqKCNWvWeCRhMEmryI4e7dW/TNTM9KKzfiKwvKvj3Gg2UkiPWO9+0KABGgqJDho0IOeK+hi5QdvRQGVlZbv63L169Wq3jLXKykqvxfY9pFmjOcZHqnoEaHZiDHYSdSs1Aoil5zCCRNtgxCC5HweZZJRCg4j0BRYSDVp7HvhTVqUysoal5zD8TGfBiBMmTGD//sQZd/zkfhxkUqq8JiKDgeNV9aVsCZQKNqeQOt0ti2kY2WTt2rVMnjy504d/QUEBR44c4fDhw63bzP04NbrlkioiV8Zeq+oW4FVn8tkIIJaew/AzyQQjzpw5k4KCAnM/zhLJmI8qReRJJ6L5bKI5kPpkWS4jS1h6DsPvdJUE7plnnrF0IVkkmeC1bxBNUPcyUZfUGar6vWwLZmQHS89hBIHOghEtXUh2SSZOYRhHlcIZRKOPq1Q1ozUW0sHmFFLH0nMYQaCiooINGzYwcuRI7rrrLm655RZefPFFxo8fbzEJGaBbcwrAcuA/VPVbwHiiEcfPZVA+w0UsPYcRBGw04B3JjBSOV9UPO2wbpqqbsipZEthIwTAMI3XSGimIyM0AqvqhiFzUYfcVGZTPMIw4W
MI3wws6Mx9Na/P63zvsm5QFWQzDaIMf600buU9nSkESvI63bhgZp6mpiTlzqiktLSEcDlFaWsKcOdV5k4rDr/WmjdymM6WgCV7HWzfyjGw/sP2So8lNxRSketNG7pJwollEWoiWyhSgNxBzQRUgoqo9XZGwE2yi2RvaurVOn37UrbWuLnNurelWdcskbnzOtiST4sHy+xiZIK2JZlUNq+rxqtpHVXs4r2PrnisEwzvcSKq3cOH9TJ/eXiEAiMC0aQd48MEHun2NrnA7eWCQ6k0buUsycQqG0Q43Hth+yNHkhWLqKsWDKQQj25hSMFLGjQe2H3I0eaWYrN604SWmFIyUceOB7XWOpqamJvr27e2JYlq0aJElfDM8w5SCkTKdPbDr6iIMG3ZGt711qqpuorHxVObOjbBpEzQ3R+s9zJ0bneStqropg5+oPbEJ5hNOOEhtLa4rJkvxYHhJSkV2Mi9dOXMAABtASURBVHZRkS3APqAFaFbVMhHpBzwCDAa2AF9X1Q86O0+q3kdNTU3cc888Fi68n+3b9zBwYH+uvvp6qqpuspw/KZAoqV5dXYQ//1kZMQIuueRgt711Yt/Xgw8+0Pp9XXXVdVn/vmKeTzNnHmDmTDjxRLj4Ylo/T21tD/bvP81yRRmBpTPvIy+VQpmq7m6z7W7gfVX9oYjcCpygqrd0dp5UlILb7oW5TrwH9rBhZ9Dc/Eduv/2gZ26kmaBtdbqPPoJHH4Xf/hZ27oT+/eHw4ULefvs9+70YgSUoSuENoFxVd4jIScA6VT2ts/OkohT84Pee63RV6nPOnAG8/fYu9wVLkXA4xIoVSjh87L7mZpg0KURzc4v7guURjY2NXH755SxevJji4mKvxck5ups6OxsosEJENorINc62E1V1B4Cz/Fi8A0XkGhFpEJGGXbuSf8D4we891+nKW+edd3YHIkWFHzyf8h3L++QdXimFz6nqOcAXgRtEZFyyB6rqAlUtU9WykpKSpC/oB7/3XKerh2mfPmQ84CsbeO35ZFjeJy/xRCmo6nZnuRP4DfBp4D3HbISz3JnJa1rvL/tcffX1PPRQfG+dX/4SKioIxIjMS88nv5LtNN6W98k/uK4UROQ4EekTew18AXgFWAZc5rztMuDxTF7Xen/Zp6rqJp5/HmpqaPcwramB996DK68MxojMqtMdS7bNObfddlu79B6HDh1qt4Romo9Zs2Zl5fpGG1TV1QZ8EnjRaa8Ctznb+wOriZb7XA306+pcY8aM0WTZt2+fjhlzlk6YENEFC9CVK9EFC9AJEyI6ZsxZum/fvqTPlej8NTWzddCgARoKiQ4aNEBramZ3+7xBY+DA/jplCvrxj6OhUHR5xRXok09G73dpaYnXIhppUF5eroBWVFRk7Rpr1qzRwsJCJTrn2K4VFhbq2rVrs3btfANo0ETP6EQ7gtBSUQqqRx/cpaUlGg6HtLS0JCMP7rYKZ+FCdNUqdOHCzCmcIFFTM1snTIjomjXo2rXR9uSTUcXQpw8qQt4qzER43aHYu3evTp06Vffu3du6rbKyst1DuVevXu2WsVZZWZlRWZYvX66RSKTdNSKRiC5fvjyj18l3TClkmXgPwrVr0TVrooqhpma21yK6RscR2bJl6Cc/iX7+8+S9woyHHzoUS5YsUUBra2tbt3XWa89m7722tlaLioo0FApp7969NRQKaVFRUTvZjO7TmVKwNBcZwNxdj9LRHv+Vr8DHPw5z5uBK+umg4XZ67njE8/TxKo13ruV9CmKdbVMKGcDcXdtTVFTE7Nk1vPba3ykq6s1f/woTJ8K0abBkSTRKOB8VZjy86FAk6+lz5513up7G26u8T9l6eAcx3sKUQgYwd9djiaUVOeOMj/jBD2DFCpg7F/7+d6iqiiqGfFSYHfGiQ5GKp4/babzr6+upqqoiFIo+msLhMDNnzqS+vj4r14uRrYd3EOMtTClkAHN3PZaYWeTOO9ubjaqrownmHn00fxVmW7zoUKRiGuquOSco5pNMPbxzId7ClEIGsGCnY+nMLHLxxdEEc/mqMNviVYci2Qpv3TXn+NV8kq2Hd07EWySagQ5C84v3kWr23F2DSigkumpVe2+sWFu5Muqaat5H2Y+f6Qw3PH3ciG9Ih2x6VwUh3gJzST1KJnzCvfYrDwKDBg3QhQvjK4UFC9D+/Qvtfjl41aEoLy/XUCiko0eP1hUrVujo0aM1FAp16wGerfiGeLEU3SWbD2+/x1uYUnDIhE+4H/zKg4DFbvifKVOm6I9+9CNtaWlRVdXm5madP3++TpkyJe1zZqsHHi+WIhMkenifd9553VJAfo+3MKXgkOqDKt6IYMKEcTpuXIE97LrAS7OI4S3Z6IFnywwV7+FdUFDQbQWUjVFYJulMKeTVRHMqPuExl8r16++muno3K1Yo1dW7OXBgA9u2HeTAga7Pkc9YUrnOaWpqYs6c6m7XsvYjyU5id4ZbXjzxvKsOHjwIdM8TKdB1thNpiyC0VEcKXU1+hsOh1vd2NqoYPz6ay6ercxj+x4v5oXwwQSZjPuns3ruVZmPKlCk6dOjQducNhUKu5HnyEmykECUVn/BkXCq7OofhbxKNBtevv5vy8vOy1mv3Q2qLbNNVfENX9/7cc891Jc1GfX09CxYsaHedI0eOAAFzI80geaUUUvEJ7yrSdGeHEkDxzmH4G68ezvmQK6sr80ky9z4TZqhk8CrPk1/JK6WQSpBZV6OKoiIsUC3gePVw9kOurGzPaXSVriLZe+9Wmg23FFAQyCulkMrkZ2ejirq6CGVl42wCNeB49XD2OleWV2aztiR7793Mmup2nie/kldKAY5m8Ny6dSfNzS1s3bqT2bNrjnmYdzaq+PDDU3n88f/p8hyGv3H74Rzrne/fv59rrmmfNRbcM0H6YU4j2XvvphdPrqXtTpe8UwrJYi6VuY+beYfa9s7vums/K1dGs8a++WY0a+zLL7tngvTDnMbVV1/P//t/iUfisXvvZtbUQLuRZhDRjt9KgCgrK9OGhgavxTACSuxBXVz8JtOmHWDIkGgvdenS6MM5k8p/zpxq1q+/m1mz2j+MVeG22+D11wv57ne/R1XVTVnvcITDIVasUMLhY/c1N8OkSSGam1uyKsO7777LGWcM5qyzDnLZZbTe+4cfhldfLeD117fw8Y9/PKsy5DMislFVy+Lts5GCkbe4ORrsrHd+xRXQp89xrpkgvZ7TAFiw4AFGjIBPfQpmz4YLLoguP/UpGDEiut/wBs9GCiISBhqAd1R1sogMAZYC/YDngUtU9VBn57CRghEU/NA7j9HZqGXu3Ajjx9/M7Nk1WZWhtLSE6urdDB167L5Nm+COO0rYunXnsTuNjODXkcJ3gNfbrN8F3Kuqw4APgCs9kcoIBEFLE+GH3nkMP9T/8INbrhGfHl5cVEQGAV8C7gSqRESACcA3nLc8DNwOpDyGPHz4MNu2beNAx+REhmdEIhEGDRpEz549M3K+tnMB1dWxuYDd1NXdzbJlv/KlI0B0Ujt+79ztoMeY2eyee+Zxxx0PsH37HgYO7M9VV13nypwG
xJRk/JFCEDIDNDU1cc8981i48P7W+3f11de7dv+yiSfmIxF5DPgB0Af4HnA58AdVHersLwV+q6pnxzn2GuAagJNPPnnMW2+91W7/5s2b6dOnD/3790c6GnAN11FV9uzZw759+xiSqGuYIn4wf6SKm5PaQSCI32GMtt/l9OlHv8u6uuB8l74yH4nIZGCnqm5suznOW+NqK1VdoKplqlpWUlJyzP4DBw6YQvARIkL//v0zOnLzg0tlqpiLc3v8YMJKl3TiPIJk7nR9pCAiPwAuAZqBCHA88BvgAuDjqtosImOB21X1gs7OFW+i+fXXX+eMM85ISabGxkYuv/xyFi9eTHFxcUrHGsmRzveSCD9N2hrpEzPBPPigNyasdEl1ktyPIwtfjRRU9d9VdZCqDgamAWtU9WJgLfA1522XAY+7JZNfi4sb8fHTpK2RPslmF/AbqU6S+yGCPBX8FKdwC9FJ578B/YFFbl04FsaeyXD23/zmN4gIf/nLX+Luv/zyy3nssceSPt/27dv52teiOvOFF17gySefbN23bt261iIkqTB48GB2796d8nFe42YksmF0JNVOSdDMnZ4qBVVdp6qTndd/V9VPq+pQVb1IVQ9m67puVHWqq6vj85//PEuXLs2IzAMHDmxVIplSCkElyPZoI/ik2ikJmvutn0YKrnHbbbe1y50eK6aRqaIaTU1NPP300yxatKhVKagqN954I2eeeSZf+tKX2NmmIMPgwYP5/ve/z9ixYykrK+P555/nggsu4NRTT+WnP/0pAFu2bOHss8/m0KFDzJ49m0ceeYRRo0Zx11138dOf/pR7772XUaNG8dRTT7Fr1y6++tWvcu6553Luuefy9NNPA7Bnzx6+8IUvMHr0aL71rW8R1BQnNmlreEmqnZLAmTsTlWQLQotXjvO1117rtAxdjGwUF49RW1ur//qv/6qqqmPHjtWNGzfqr371K504caI2NzfrO++8o8XFxfroo4+qquopp5yi999/v6qqzpgxQ4cPH64ffvih7ty5U0tKSlRVdfPmzXrWWWepqupDDz2kN9xwQ+v1qqurdd68ea3r06dP16eeekpVVd966y09/fTTVVX129/+ttbU1Kiq6hNPPKGA7tq1K+3PmQrJfi+GEQRipURLS0s0HA5paWlJwjKunZX2nTAhojU1s12Xn07KcXoSvOYHYkU1LrroonbukpkoqlFXV8eMGTMAmDZtGnV1dRw+fJjp06cTDocZOHAgEyZMaHfMl7/8ZQCGDx9OU1MTffr0oU+fPkQikZTzua9atYrXXnutdf3DDz9k3759bNiwgV//+tcAfOlLX+KEE05I+zMaRj4TmyRPJpaiquomli37FXPnxo9R8Zu5M2+VArQvqlFQUMDBgwe7XVRjz549rFmzhldeeQURoaWlBRHhwgsv7DR2oqCgAKBVlhihUIjm5uaUZDhy5AjPPvssvXv3PmZfUOM3cjmC1Mht/BBBngp5OacQIxtFNR577DEuvfRS3nrrLbZs2cLbb7/NkCFD6NevH0uXLqWlpYUdO3awdu3atK/Rp08f9u3bl3D9C1/4Av/5n//Zuv7CCy8AMG7cOH75y18C8Nvf/pYPPvggbRncxA+VwgyjOwTJ/TavlUI2imrU1dVx4YUXttv21a9+lXfffZdhw4YxfPhwrrvuOsaPH5/2NSoqKnjttdcYNWoUjzzyCP/8z//Mb37zm9aJ5vvuu4+GhgZGjBjBmWee2TpZXV1dzYYNGzjnnHNYsWIFJ598ctoyuEnQ/LwNI8jkXJGdTEbOGpmjO9+LpVk2jMziq4hmw0iVoPl5G0aQMaVg+J7A+XkbRoAxpWD4HktrYRjuYUrB8D2W1sIw3MOUguF7LK2FYbhHXiuFIBW+yHeC5OdtGEEmb5VCNgOiRISZM2e2rs+fP5/bb7+902Pq6+vbpaZIh1RTYS9btowf/vCHca+/ePFitm/fntL1Y0n7DMMILnmrFLIZEFVQUMCvf/3rlB7QmVAKqfLlL3+ZW2+9Ne7101EKhmEEn7xVCtksfNGjRw+uueYa7r333mP2vfXWW1RWVjJixAgqKyvZunUrzzzzDMuWLeOmm25i1KhRvPnmm+2OWb58OZ/5zGcYPXo0EydO5L333gMSp8LesmULp59+OldddRVnn302F198MatWreJzn/scw4YN409/+hMQffDfeOONx1z/rrvuoqGhgYsvvphRo0bx0UcfsXHjRsaPH8+YMWO44IIL2LFjBwAbN25k5MiRjB07lv/6r/9K+54ZhuETEqVPDULrTursUEh01ar2qWxjbeVKNBwOJXWeeBx33HHa2Niop5xyiu7du1fnzZun1dXVqqo6efJkXbx4saqqLlq0SKdMmaKqqpdddllrKu2OvP/++3rkyBFVVV24cKFWVVWpauJU2Js3b9ZwOKwvvfSStrS06DnnnKNXXHGFHjlyROvr61uv2TYFd8frjx8/Xp977jlVVT106JCOHTtWd+7cqaqqS5cu1SuuuEJVVYcPH67r1q1TVdXvfe97rem9O2Kpsw3DP9BJ6uy8HSlkOyDq+OOP59JLL+W+++5rt/3ZZ5/lG9/4BgCXXHIJv//977s817Zt27jgggsYPnw48+bN49VXXwVgw4YNfPOb3wSOTYU9ZMgQhg8fTigU4qyzzqKyshIRYfjw4WzZsiWlz/LGG2/wyiuvcP755zNq1Cjmzp3Ltm3baGxsZO/eva15nC655JKUzmt4izlaGPHIW6XgRkDUjBkzWLRoEf/4xz8SvieZVNbf/va3ufHGG3n55Zf52c9+1q7+Q6LjO6bfbpuaO9VU3KrKWWedxQsvvMALL7zAyy+/zIoVK1DVwKbizncs86yRiLxVCm4ERPXr14+vf/3rLFq0qHXbZz/72dYSnb/85S/5/Oc/Dxyb/rotjY2NfOITnwDg4Ycfbt2eyVTYnaXjPu2009i1axfPPvssAIcPH+bVV1+lb9++FBcXt452YrIY/scyzxqJcF0piEhERP4kIi+KyKsiUuNsHyIifxSRTSLyiIj0yqYcbgVEzZw5s50X0n333cdDDz3EiBEjqK2t5Sc/+QkQrdA2b948Ro8efcxE8+23385FF13EP/3TPzFgwIDW7ZlMhd3x+pdffjnXXnsto0aNoqWlhccee4xbbrmFkSNHMmrUKJ555hkAHnroIW644QbGjh0bt6iP4U+y6WhhBBvXU2dL1N5wnKo2iUhP4PfAd4Aq4NequlREfgq8qKqd/jItdXZwsO/FX4TDIVasUMLhY/c1N8OkSSGam1vcF8xwBV+lznYmv2MGy55OU2AC8Jiz/WFgqtuyGUa+YJlnjUR4MqcgImEReQHYCawE3gT2qmpsBnQb8IkEx14jIg0i0rBr1y53BDaMHMMyzxqJ8EQpqGqLqo4CBgGfBuLZFeLatVR1gaqWqWpZSUlJovNnTFaj+9j34T8s86yRCE+9j1R1L7AOOA/oKyI9nF2DgLRyLEQiEfbs2WMPIp+gquzZs4dIJOK1KEYbLPOskQgvJppLgMOquldEegMrgLuAy4B
ftZlofklV7+/sXPEmmg8fPsy2bdva+fIb3hKJRBg0aBA9e/b0WhTDMOh8orlHvI1Z5iTgYREJEx2p/LeqPiEirwFLRWQu8GdgUWcnSUTPnj0Zkqigr2EYhtEprisFVX0JGB1n+9+Jzi8YhmEYHpG3Ec2GYRjGsZhSMAzDMFpxfaI5k4jILuCtNA8fACRfBcd7giRvkGQFkzebBElWCJa83ZH1FFWN69MfaKXQHUSkIdHsux8JkrxBkhVM3mwSJFkhWPJmS1YzHxmGYRitmFIwDMMwWslnpbDAawFSJEjyBklWMHmzSZBkhWDJmxVZ83ZOwTAMwziWfB4pGIZhGB0wpWAYhmG0khdKwS8lQFPBqTnxZxF5wln3s6xbRORlEXlBRBqcbf1EZKUj70oROcFrOQFEpK+IPCYifxGR10VkrI9lPc25p7H2oYjM8Ku8ACLyXec/9oqI1Dn/PV/+dkXkO46cr4rIDGebb+6tiPxcRHaKyCtttsWVT6LcJyJ/E5GXROScdK+bF0oBOAhMUNWRwChgkoicRzQ7672qOgz4ALjSQxk78h3g9TbrfpYVoEJVR7Xxm74VWO3Iu9pZ9wM/AX6nqqcDI4neY1/KqqpvOPd0FDAG2A/8Bp/KKyKfAP4NKFPVs4EwMA0f/nZF5GzgaqL51kYCk0VkGP66t4uBSR22JZLvi8Awp10DpF9kW1XzqgGFwPPAZ4hGA/Zwto8F/tdr+RxZBjlf+ATgCUD8KqsjzxZgQIdtbwAnOa9PAt7wgZzHA5txHCz8LGsc2b8APO1neYlWS3wb6Ec02eYTwAV+/O0CFwEPtln/D+Bmv91bYDDwSpv1uPIBPwOmx3tfqi1fRgrdKgHqAT8m+gM94qz3x7+yQrRK3goR2Sgi1zjbTlTVHQDO8mOeSXeUTwK7gIcc09yDInIc/pS1I9OAOue1L+VV1XeA+cBWYAfQCGzEn7/dV4BxItJfRAqB/wOU4tN724ZE8sUUcoy073PeKAXtRglQNxGRycBOVd3YdnOct3ouaxs+p6rnEB3C3iAi47wWKAE9gHOAB1R1NPAPfGJ66QzHBv9l4FGvZekMx749BRgCDASOI/qb6Ijnv11VfZ2oWWsl8DvgRaC504P8TcaeEXmjFGJoFkqAZpjPAV8WkS3AUqImpB/jT1kBUNXtznInUZv3p4H3ROQkAGe50zsJW9kGbFPVPzrrjxFVEn6UtS1fBJ5X1fecdb/KOxHYrKq7VPUw8Gvgs/j0t6uqi1T1HFUdB7wPbMK/9zZGIvm2ER3pxEj7PueFUhCREhHp67zuTfTH+zqwFvia87bLgMe9kfAoqvrvqjpIVQcTNRmsUdWL8aGsACJynIj0ib0mavt+BVhGVE7wibyq+i7wtoic5myqBF7Dh7J2YDpHTUfgX3m3AueJSKGICEfvr19/ux9zlicDXyF6j/16b2Mkkm8ZcKnjhXQe0BgzM6WM1xM+Lk3WjCBa4vMlog+s2c72TwJ/Av5GdGhe4LWsHeQuB57ws6yOXC867VXgNmd7f6KT5ZucZT+vZXXkGgU0OL+FeuAEv8rqyFsI7AGK22zzs7w1wF+c/1ktUODj3+5TRJXWi0Cl3+4tUSW1AzhMdCRwZSL5iJqP/ovoXOnLRD3A0rqupbkwDMMwWskL85FhGIaRHKYUDMMwjFZMKRiGYRitmFIwDMMwWjGlYBiGYbRiSsHISUSkpUOGUdcil+NltzSMoGAuqUZOIiJNqlrk0bXHAU3AEo1mC3XjmmFVbXHjWkZuYyMFI28QkWIReSMW0ezk+7/aef2AiDRIm3obzvYtIvJ/ReRZZ/85IvK/IvKmiFwb7zqquoFo2oTOZLnIyeX/oohscLaFRWS+RGtTvCQi33a2VzoJ/F52RiEFbWSbLSK/By4SkVNF5HdOYsKnROT0TNw3I7/o0fVbDCOQ9Hay4sb4gao+IiI3AotF5CfACaq60Nl/m6q+LyJhYLWIjFDVl5x9b6vqWBG5l2iO+88BEaIR3D9NU77ZwAWq+k4sBQvRPPhDgNGq2uwUVIk416xU1b+KyBLgOqL5sAAOqOrnAURkNXCtqm4Skc8A9xPNnWUYSWNKwchVPtJoVtx2qOpKEbmIaEqAkW12fd1J+92DaJ76M4mmwoBoXhmIpg8oUtV9wD4ROSAifTWaZDFVniaqnP6baOI4iObk+qk6aaYdJTWSaJK5vzrveRi4gaNK4REAESkimnzu0WjaISCaYsIwUsKUgpFXiEiIaNr0j4gWg9kmIkOA7wHnquoHIrKY6EggxkFneaTN69h6Wv8hVb3W6c1/CXhBREYRzV/TcZIvXkrktvzDWYaI1i04RhEaRirYnIKRb3yXaIbc6cDPRaQn0Yps/wAaReRE4tcAyCgicqqq/lFVZxOtTFYKrACujaWZFpF+RJPLDRaRoc6hlwDrO55PVT8ENjujoFjN3pEd32cYXWFKwchVendwSf2hiHwKuAqYqapPARuAWar6ItEsuq8CPydq2kkbEakDngVOE5FtIhKvJvE8Z+L4FUeOF4EHiaaffklEXgS+oaoHgCuImoVeJjo6STSPcTFwpXPsq0QL3hhGSphLqmEYhtGKjRQMwzCMVkwpGIZhGK2YUjAMwzBaMaVgGIZhtGJKwTAMw2jFlIJhGIbRiikFwzAMo5X/D2r06iWZghgyAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ "
"
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "plotData(X, y)\n",
+ "# add axes labels\n",
+ "pyplot.xlabel('Exam 1 score')\n",
+ "pyplot.ylabel('Exam 2 score')\n",
+ "pyplot.legend(['Admitted', 'Not admitted'])\n",
+ "pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### 1.2 Implementation\n",
+ "\n",
+ "#### 1.2.1 Warmup exercise: sigmoid function\n",
+ "\n",
+ "Before you start with the actual cost function, recall that the logistic regression hypothesis is defined as:\n",
+ "\n",
+ "$$ h_\\theta(x) = g(\\theta^T x)$$\n",
+ "\n",
+ "where function $g$ is the sigmoid function. The sigmoid function is defined as: \n",
+ "\n",
+ "$$g(z) = \\frac{1}{1+e^{-z}}$$.\n",
+ "\n",
+ "Your first step is to implement this function `sigmoid` so it can be\n",
+ "called by the rest of your program. When you are finished, try testing a few\n",
+ "values by calling `sigmoid(x)` in a new cell. For large positive values of `x`, the sigmoid should be close to 1, while for large negative values, the sigmoid should be close to 0. Evaluating `sigmoid(0)` should give you exactly 0.5. Your code should also work with vectors and matrices. **For a matrix, your function should perform the sigmoid function on every element.**\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def sigmoid(z):\n",
+ " \"\"\"\n",
+ " Compute sigmoid function given the input z.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " z : array_like\n",
+ " The input to the sigmoid function. This can be a 1-D vector \n",
+ " or a 2-D matrix. \n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " g : array_like\n",
+ " The computed sigmoid function. g has the same shape as z, since\n",
+ " the sigmoid is computed element-wise on z.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the sigmoid of each value of z (z can be a matrix, vector or scalar).\n",
+ " \"\"\"\n",
+ " # convert input to a numpy array\n",
+ " z = np.array(z)\n",
+ " \n",
+ " # You need to return the following variables correctly \n",
+ " g = np.zeros(z.shape)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " g = 1 / (1 + np.exp(-z))\n",
+ "\n",
+ " # =============================================================\n",
+ " return g"
+ ]
+ },
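+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A practical aside, not required for the exercise: for very large negative `z`, `np.exp(-z)` overflows to `inf` and NumPy emits a `RuntimeWarning`, although the returned value (0.0) is still correct. If you want to avoid the warning, `scipy.special.expit` computes the same function in a numerically stable way:\n",
+ "\n",
+ "```python\n",
+ "from scipy.special import expit\n",
+ "\n",
+ "# same values as sigmoid, but no overflow warning at extreme inputs\n",
+ "expit(np.array([-1000, 0, 1000]))  # array([0. , 0.5, 1. ])\n",
+ "```"
+ ]
+ },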
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The following cell evaluates the sigmoid function at `z=0`. You should get a value of 0.5. You can also try different values for `z` to experiment with the sigmoid function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "g( 0 ) = 0.5\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test the implementation of sigmoid function here\n",
+ "z = 0\n",
+ "g = sigmoid(z)\n",
+ "\n",
+ "print('g(', z, ') = ', g)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading. \n",
+ "\n",
+ "The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "Execute the following cell to grade your solution to the first part of this exercise.\n",
+ "\n",
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Submitting Solutions | Programming Exercise logistic-regression\n",
+ "\n",
+ "Login (email address): rustagi@iitg.ac.in\n"
+ ]
+ }
+ ],
+ "source": [
+ "# appends the implemented function in part 1 to the grader object\n",
+ "grader[1] = sigmoid\n",
+ "\n",
+ "# send the added functions to coursera grader for getting a grade on this part\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "#### 1.2.2 Cost function and gradient\n",
+ "\n",
+ "Now you will implement the cost function and gradient for logistic regression. Before proceeding we add the intercept term to X. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Setup the data matrix appropriately, and add ones for the intercept term\n",
+ "m, n = X.shape\n",
+ "\n",
+ "# Add intercept term to X\n",
+ "X = np.concatenate([np.ones((m, 1)), X], axis=1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, complete the code for the function `costFunction` to return the cost and gradient. Recall that the cost function in logistic regression is\n",
+ "\n",
+ "$$ J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m} \\left[ -y^{(i)} \\log\\left(h_\\theta\\left( x^{(i)} \\right) \\right) - \\left( 1 - y^{(i)}\\right) \\log \\left( 1 - h_\\theta\\left( x^{(i)} \\right) \\right) \\right]$$\n",
+ "\n",
+ "and the gradient of the cost is a vector of the same length as $\\theta$ where the $j^{th}$\n",
+ "element (for $j = 0, 1, \\cdots , n$) is defined as follows:\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left( x^{(i)} \\right) - y^{(i)} \\right) x_j^{(i)} $$\n",
+ "\n",
+ "Note that while this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of $h_\\theta(x)$.\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def costFunction(theta, X, y):\n",
+ " \"\"\"\n",
+ " Compute cost and gradient for logistic regression. \n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " The parameters for logistic regression. This a vector\n",
+ " of shape (n+1, ).\n",
+ " \n",
+ " X : array_like\n",
+ " The input dataset of shape (m x n+1) where m is the total number\n",
+ " of data points and n is the number of features. We assume the \n",
+ " intercept has already been added to the input.\n",
+ " \n",
+ " y : arra_like\n",
+ " Labels for the input. This is a vector of shape (m, ).\n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " J : float\n",
+ " The computed value for the cost function. \n",
+ " \n",
+ " grad : array_like\n",
+ " A vector of shape (n+1, ) which is the gradient of the cost\n",
+ " function with respect to theta, at the current values of theta.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the cost of a particular choice of theta. You should set J to \n",
+ " the cost. Compute the partial derivatives and set grad to the partial\n",
+ " derivatives of the cost w.r.t. each parameter in theta.\n",
+ " \"\"\"\n",
+ " # Initialize some useful values\n",
+ " m = y.size # number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly \n",
+ " J = 0\n",
+ " grad = np.zeros(theta.shape)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " h = sigmoid(X.dot(theta.T))\n",
+ " \n",
+ " J = (1 / m) * np.sum(-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h)))\n",
+ " grad = (1 / m) * (h - y).dot(X)\n",
+ " \n",
+ " # =============================================================\n",
+ " return J, grad"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once you are done call your `costFunction` using two test cases for $\\theta$ by executing the next cell."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(n+1)\n",
+ "\n",
+ "cost, grad = costFunction(initial_theta, X, y)\n",
+ "\n",
+ "print('Cost at initial theta (zeros): {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.693\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros):')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}]'.format(*grad))\n",
+ "print('Expected gradients (approx):\\n\\t[-0.1000, -12.0092, -11.2628]\\n')\n",
+ "\n",
+ "# Compute and display cost and gradient with non-zero theta\n",
+ "test_theta = np.array([-24, 0.2, 0.2])\n",
+ "cost, grad = costFunction(test_theta, X, y)\n",
+ "\n",
+ "print('Cost at test theta: {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.218\\n')\n",
+ "\n",
+ "print('Gradient at test theta:')\n",
+ "print('\\t[{:.3f}, {:.3f}, {:.3f}]'.format(*grad))\n",
+ "print('Expected gradients (approx):\\n\\t[0.043, 2.566, 2.647]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[2] = costFunction\n",
+ "grader[3] = costFunction\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 1.2.3 Learning parameters using `scipy.optimize`\n",
+ "\n",
+ "In the previous assignment, you found the optimal parameters of a linear regression model by implementing gradient descent. You wrote a cost function and calculated its gradient, then took a gradient descent step accordingly. This time, instead of taking gradient descent steps, you will use the [`scipy.optimize` module](https://docs.scipy.org/doc/scipy/reference/optimize.html). SciPy is a numerical computing library for `python`. It provides an optimization module for root finding and minimization. As of `scipy 1.0`, the function `scipy.optimize.minimize` is the method to use for optimization problems(both constrained and unconstrained).\n",
+ "\n",
+ "For logistic regression, you want to optimize the cost function $J(\\theta)$ with parameters $\\theta$.\n",
+ "Concretely, you are going to use `optimize.minimize` to find the best parameters $\\theta$ for the logistic regression cost function, given a fixed dataset (of X and y values). You will pass to `optimize.minimize` the following inputs:\n",
+ "- `costFunction`: A cost function that, when given the training set and a particular $\\theta$, computes the logistic regression cost and gradient with respect to $\\theta$ for the dataset (X, y). It is important to note that we only pass the name of the function without the parenthesis. This indicates that we are only providing a reference to this function, and not evaluating the result from this function.\n",
+ "- `initial_theta`: The initial values of the parameters we are trying to optimize.\n",
+ "- `(X, y)`: These are additional arguments to the cost function.\n",
+ "- `jac`: Indication if the cost function returns the Jacobian (gradient) along with cost value. (True)\n",
+ "- `method`: Optimization method/algorithm to use\n",
+ "- `options`: Additional options which might be specific to the specific optimization method. In the following, we only tell the algorithm the maximum number of iterations before it terminates.\n",
+ "\n",
+ "If you have completed the `costFunction` correctly, `optimize.minimize` will converge on the right optimization parameters and return the final values of the cost and $\\theta$ in a class object. Notice that by using `optimize.minimize`, you did not have to write any loops yourself, or set a learning rate like you did for gradient descent. This is all done by `optimize.minimize`: you only needed to provide a function calculating the cost and the gradient.\n",
+ "\n",
+ "In the following, we already have code written to call `optimize.minimize` with the correct arguments."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# set options for optimize.minimize\n",
+ "options= {'maxiter': 400}\n",
+ "\n",
+ "# see documention for scipy's optimize.minimize for description about\n",
+ "# the different parameters\n",
+ "# The function returns an object `OptimizeResult`\n",
+ "# We use truncated Newton algorithm for optimization which is \n",
+ "# equivalent to MATLAB's fminunc\n",
+ "# See https://stackoverflow.com/questions/18801002/fminunc-alternate-in-numpy\n",
+ "res = optimize.minimize(costFunction,\n",
+ " initial_theta,\n",
+ " (X, y),\n",
+ " jac=True,\n",
+ " method='TNC',\n",
+ " options=options)\n",
+ "\n",
+ "# the fun property of `OptimizeResult` object returns\n",
+ "# the value of costFunction at optimized theta\n",
+ "cost = res.fun\n",
+ "\n",
+ "# the optimized theta is in the x property\n",
+ "theta = res.x\n",
+ "\n",
+ "# Print theta to screen\n",
+ "print('Cost at theta found by optimize.minimize: {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.203\\n');\n",
+ "\n",
+ "print('theta:')\n",
+ "print('\\t[{:.3f}, {:.3f}, {:.3f}]'.format(*theta))\n",
+ "print('Expected theta (approx):\\n\\t[-25.161, 0.206, 0.201]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once `optimize.minimize` completes, we want to use the final value for $\\theta$ to visualize the decision boundary on the training data as shown in the figure below. \n",
+ "\n",
+ "![](Figures/decision_boundary1.png)\n",
+ "\n",
+ "To do so, we have written a function `plotDecisionBoundary` for plotting the decision boundary on top of training data. You do not need to write any code for plotting the decision boundary, but we also encourage you to look at the code in `plotDecisionBoundary` to see how to plot such a boundary using the $\\theta$ values. You can find this function in the `utils.py` file which comes with this assignment."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Plot Boundary\n",
+ "utils.plotDecisionBoundary(plotData, theta, X, y)"
+ ]
+ },
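+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For intuition, the linear boundary above is the set of points where $\\theta^T x = 0$. A minimal sketch of how such a line can be drawn from the learned `theta` (this mirrors the idea used inside `utils.plotDecisionBoundary`; the endpoint padding of 2 is an arbitrary choice):\n",
+ "\n",
+ "```python\n",
+ "# theta[0] + theta[1]*x1 + theta[2]*x2 = 0, so x2 = -(theta[0] + theta[1]*x1) / theta[2]\n",
+ "plot_x = np.array([X[:, 1].min() - 2, X[:, 1].max() + 2])\n",
+ "plot_y = -(theta[0] + theta[1] * plot_x) / theta[2]\n",
+ "pyplot.plot(plot_x, plot_y)\n",
+ "```"
+ ]
+ },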
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "#### 1.2.4 Evaluating logistic regression\n",
+ "\n",
+ "After learning the parameters, you can use the model to predict whether a particular student will be admitted. For a student with an Exam 1 score of 45 and an Exam 2 score of 85, you should expect to see an admission\n",
+ "probability of 0.776. Another way to evaluate the quality of the parameters we have found is to see how well the learned model predicts on our training set. In this part, your task is to complete the code in function `predict`. The predict function will produce “1” or “0” predictions given a dataset and a learned parameter vector $\\theta$. \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def predict(theta, X):\n",
+ " \"\"\"\n",
+ " Predict whether the label is 0 or 1 using learned logistic regression.\n",
+ " Computes the predictions for X using a threshold at 0.5 \n",
+ " (i.e., if sigmoid(theta.T*x) >= 0.5, predict 1)\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " Parameters for logistic regression. A vecotor of shape (n+1, ).\n",
+ " \n",
+ " X : array_like\n",
+ " The data to use for computing predictions. The rows is the number \n",
+ " of points to compute predictions, and columns is the number of\n",
+ " features.\n",
+ "\n",
+ " Returns\n",
+ " -------\n",
+ " p : array_like\n",
+ " Predictions and 0 or 1 for each row in X. \n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Complete the following code to make predictions using your learned \n",
+ " logistic regression parameters.You should set p to a vector of 0's and 1's \n",
+ " \"\"\"\n",
+ " m = X.shape[0] # Number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly\n",
+ " p = np.zeros(m)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " p = np.roound(sigmoid(X.dot(theta.T)))\n",
+ " \n",
+ " # ============================================================\n",
+ " return p"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After you have completed the code in `predict`, we proceed to report the training accuracy of your classifier by computing the percentage of examples it got correct."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Predict probability for a student with score 45 on exam 1 \n",
+ "# and score 85 on exam 2 \n",
+ "prob = sigmoid(np.dot([1, 45, 85], theta))\n",
+ "print('For a student with scores 45 and 85,'\n",
+ " 'we predict an admission probability of {:.3f}'.format(prob))\n",
+ "print('Expected value: 0.775 +/- 0.002\\n')\n",
+ "\n",
+ "# Compute accuracy on our training set\n",
+ "p = predict(theta, X)\n",
+ "print('Train Accuracy: {:.2f} %'.format(np.mean(p == y) * 100))\n",
+ "print('Expected accuracy (approx): 89.00 %')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[4] = predict\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2 Regularized logistic regression\n",
+ "\n",
+ "In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant passes quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly.\n",
+ "Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. From these two tests, you would like to determine whether the microchips should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model.\n",
+ "\n",
+ "First, we load the data from a CSV file:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load Data\n",
+ "# The first two columns contains the X values and the third column\n",
+ "# contains the label (y).\n",
+ "data = np.loadtxt(os.path.join('Data', 'ex2data2.txt'), delimiter=',')\n",
+ "X = data[:, :2]\n",
+ "y = data[:, 2]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.1 Visualize the data\n",
+ "\n",
+ "Similar to the previous parts of this exercise, `plotData` is used to generate a figure, where the axes are the two test scores, and the positive (y = 1, accepted) and negative (y = 0, rejected) examples are shown with\n",
+ "different markers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plotData(X, y)\n",
+ "# Labels and Legend\n",
+ "pyplot.xlabel('Microchip Test 1')\n",
+ "pyplot.ylabel('Microchip Test 2')\n",
+ "\n",
+ "# Specified in plot order\n",
+ "pyplot.legend(['y = 1', 'y = 0'], loc='upper right')\n",
+ "pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The above figure shows that our dataset cannot be separated into positive and negative examples by a straight-line through the plot. Therefore, a straight-forward application of logistic regression will not perform well on this dataset since logistic regression will only be able to find a linear decision boundary.\n",
+ "\n",
+ "### 2.2 Feature mapping\n",
+ "\n",
+ "One way to fit the data better is to create more features from each data point. In the function `mapFeature` defined in the file `utils.py`, we will map the features into all polynomial terms of $x_1$ and $x_2$ up to the sixth power.\n",
+ "\n",
+ "$$ \\text{mapFeature}(x) = \\begin{bmatrix} 1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 & x_1^3 & \\dots & x_1 x_2^5 & x_2^6 \\end{bmatrix}^T $$\n",
+ "\n",
+ "As a result of this mapping, our vector of two features (the scores on two QA tests) has been transformed into a 28-dimensional vector. A logistic regression classifier trained on this higher-dimension feature vector will have a more complex decision boundary and will appear nonlinear when drawn in our 2-dimensional plot.\n",
+ "While the feature mapping allows us to build a more expressive classifier, it also more susceptible to overfitting. In the next parts of the exercise, you will implement regularized logistic regression to fit the data and also see for yourself how regularization can help combat the overfitting problem.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Note that mapFeature also adds a column of ones for us, so the intercept\n",
+ "# term is handled\n",
+ "X = utils.mapFeature(X[:, 0], X[:, 1])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### 2.3 Cost function and gradient\n",
+ "\n",
+ "Now you will implement code to compute the cost function and gradient for regularized logistic regression. Complete the code for the function `costFunctionReg` below to return the cost and gradient.\n",
+ "\n",
+ "Recall that the regularized cost function in logistic regression is\n",
+ "\n",
+ "$$ J(\\theta) = \\frac{1}{m} \\sum_{i=1}^m \\left[ -y^{(i)}\\log \\left( h_\\theta \\left(x^{(i)} \\right) \\right) - \\left( 1 - y^{(i)} \\right) \\log \\left( 1 - h_\\theta \\left( x^{(i)} \\right) \\right) \\right] + \\frac{\\lambda}{2m} \\sum_{j=1}^n \\theta_j^2 $$\n",
+ "\n",
+ "Note that you should not regularize the parameters $\\theta_0$. The gradient of the cost function is a vector where the $j^{th}$ element is defined as follows:\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_0} = \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left(x^{(i)}\\right) - y^{(i)} \\right) x_j^{(i)} \\qquad \\text{for } j =0 $$\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\left( \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left(x^{(i)}\\right) - y^{(i)} \\right) x_j^{(i)} \\right) + \\frac{\\lambda}{m}\\theta_j \\qquad \\text{for } j \\ge 1 $$\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def costFunctionReg(theta, X, y, lambda_):\n",
+ " \"\"\"\n",
+ " Compute cost and gradient for logistic regression with regularization.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " Logistic regression parameters. A vector with shape (n, ). n is \n",
+ " the number of features including any intercept. If we have mapped\n",
+ " our initial features into polynomial features, then n is the total \n",
+ " number of polynomial features. \n",
+ " \n",
+ " X : array_like\n",
+ " The data set with shape (m x n). m is the number of examples, and\n",
+ " n is the number of features (after feature mapping).\n",
+ " \n",
+ " y : array_like\n",
+ " The data labels. A vector with shape (m, ).\n",
+ " \n",
+ " lambda_ : float\n",
+ " The regularization parameter. \n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " J : float\n",
+ " The computed value for the regularized cost function. \n",
+ " \n",
+ " grad : array_like\n",
+ " A vector of shape (n, ) which is the gradient of the cost\n",
+ " function with respect to theta, at the current values of theta.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the cost `J` of a particular choice of theta.\n",
+ " Compute the partial derivatives and set `grad` to the partial\n",
+ " derivatives of the cost w.r.t. each parameter in theta.\n",
+ " \"\"\"\n",
+ " # Initialize some useful values\n",
+ " m = y.size # number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly \n",
+ " J = 0\n",
+ " grad = np.zeros(theta.shape)\n",
+ "\n",
+ " # ===================== YOUR CODE HERE ======================\n",
+ " h = sigmoid(X.dot(theta.T))\n",
+ " \n",
+ " temp = theta\n",
+ " temp[0] = 0\n",
+ " \n",
+ " J = (1 / m) * np.sum(-y.dot(np.loh(h)) - (1 - y).dot(np.log(1 - h))) + (lambda_ / (2 * sum(np.square(temp))))\n",
+ " \n",
+ " grad = (1 / m) * (h - y).dot(X)\n",
+ " grad = grad + (lambda_ / m) * temp\n",
+ " \n",
+ " \n",
+ " # =============================================================\n",
+ " return J, grad"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once you are done with the `costFunctionReg`, we call it below using the initial value of $\\theta$ (initialized to all zeros), and also another test case where $\\theta$ is all ones."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(X.shape[1])\n",
+ "\n",
+ "# Set regularization parameter lambda to 1\n",
+ "# DO NOT use `lambda` as a variable name in python\n",
+ "# because it is a python keyword\n",
+ "lambda_ = 1\n",
+ "\n",
+ "# Compute and display initial cost and gradient for regularized logistic\n",
+ "# regression\n",
+ "cost, grad = costFunctionReg(initial_theta, X, y, lambda_)\n",
+ "\n",
+ "print('Cost at initial theta (zeros): {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx) : 0.693\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros) - first five values only:')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5]))\n",
+ "print('Expected gradients (approx) - first five values only:')\n",
+ "print('\\t[0.0085, 0.0188, 0.0001, 0.0503, 0.0115]\\n')\n",
+ "\n",
+ "\n",
+ "# Compute and display cost and gradient\n",
+ "# with all-ones theta and lambda = 10\n",
+ "test_theta = np.ones(X.shape[1])\n",
+ "cost, grad = costFunctionReg(test_theta, X, y, 10)\n",
+ "\n",
+ "print('------------------------------\\n')\n",
+ "print('Cost at test theta : {:.2f}'.format(cost))\n",
+ "print('Expected cost (approx): 3.16\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros) - first five values only:')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5]))\n",
+ "print('Expected gradients (approx) - first five values only:')\n",
+ "print('\\t[0.3460, 0.1614, 0.1948, 0.2269, 0.0922]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[5] = costFunctionReg\n",
+ "grader[6] = costFunctionReg\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2.3.1 Learning parameters using `scipy.optimize.minimize`\n",
+ "\n",
+ "Similar to the previous parts, you will use `optimize.minimize` to learn the optimal parameters $\\theta$. If you have completed the cost and gradient for regularized logistic regression (`costFunctionReg`) correctly, you should be able to step through the next part of to learn the parameters $\\theta$ using `optimize.minimize`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.4 Plotting the decision boundary\n",
+ "\n",
+ "To help you visualize the model learned by this classifier, we have provided the function `plotDecisionBoundary` which plots the (non-linear) decision boundary that separates the positive and negative examples. In `plotDecisionBoundary`, we plot the non-linear decision boundary by computing the classifier’s predictions on an evenly spaced grid and then and draw a contour plot where the predictions change from y = 0 to y = 1. "
+ ]
+ },
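+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of that contour idea (illustrative only; `utils.plotDecisionBoundary` is the version actually used, the grid limits are assumptions chosen to cover this dataset, and this assumes `utils.mapFeature` accepts scalar inputs):\n",
+ "\n",
+ "```python\n",
+ "# Evaluate theta^T * mapFeature(u, v) over a grid; the zero level set is the\n",
+ "# boundary where the prediction flips between y = 0 and y = 1.\n",
+ "u = np.linspace(-1, 1.5, 50)\n",
+ "v = np.linspace(-1, 1.5, 50)\n",
+ "z = np.zeros((u.size, v.size))\n",
+ "for i, ui in enumerate(u):\n",
+ "    for j, vj in enumerate(v):\n",
+ "        z[i, j] = np.dot(utils.mapFeature(ui, vj), theta)\n",
+ "pyplot.contour(u, v, z.T, levels=[0], linewidths=2)\n",
+ "```"
+ ]
+ },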
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.5 Optional (ungraded) exercises\n",
+ "\n",
+ "In this part of the exercise, you will get to try out different regularization parameters for the dataset to understand how regularization prevents overfitting.\n",
+ "\n",
+ "Notice the changes in the decision boundary as you vary $\\lambda$. With a small\n",
+ "$\\lambda$, you should find that the classifier gets almost every training example correct, but draws a very complicated boundary, thus overfitting the data. See the following figures for the decision boundaries you should get for different values of $\\lambda$. \n",
+ "\n",
+ "
\n",
+ " Decision boundary with too much regularization\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "This is not a good decision boundary: for example, it predicts that a point at $x = (−0.25, 1.5)$ is accepted $(y = 1)$, which seems to be an incorrect decision given the training set.\n",
+ "With a larger $\\lambda$, you should see a plot that shows an simpler decision boundary which still separates the positives and negatives fairly well. However, if $\\lambda$ is set to too high a value, you will not get a good fit and the decision boundary will not follow the data so well, thus underfitting the data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(X.shape[1])\n",
+ "\n",
+ "# Set regularization parameter lambda to 1 (you should vary this)\n",
+ "lambda_ = 1\n",
+ "\n",
+ "# set options for optimize.minimize\n",
+ "options= {'maxiter': 100}\n",
+ "\n",
+ "res = optimize.minimize(costFunctionReg,\n",
+ " initial_theta,\n",
+ " (X, y, lambda_),\n",
+ " jac=True,\n",
+ " method='TNC',\n",
+ " options=options)\n",
+ "\n",
+ "# the fun property of OptimizeResult object returns\n",
+ "# the value of costFunction at optimized theta\n",
+ "cost = res.fun\n",
+ "\n",
+ "# the optimized theta is in the x property of the result\n",
+ "theta = res.x\n",
+ "\n",
+ "utils.plotDecisionBoundary(plotData, theta, X, y)\n",
+ "pyplot.xlabel('Microchip Test 1')\n",
+ "pyplot.ylabel('Microchip Test 2')\n",
+ "pyplot.legend(['y = 1', 'y = 0'])\n",
+ "pyplot.grid(False)\n",
+ "pyplot.title('lambda = %0.2f' % lambda_)\n",
+ "\n",
+ "# Compute accuracy on our training set\n",
+ "p = predict(theta, X)\n",
+ "\n",
+ "print('Train Accuracy: %.1f %%' % (np.mean(p == y) * 100))\n",
+ "print('Expected accuracy (with lambda = 1): 83.1 % (approx)\\n')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You do not need to submit any solutions for these optional (ungraded) exercises.*"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/180108057_Ujjwal Rustagi.ipynb b/180108057_Ujjwal Rustagi.ipynb
new file mode 100644
index 000000000..246d03e13
--- /dev/null
+++ b/180108057_Ujjwal Rustagi.ipynb
@@ -0,0 +1,1014 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Programming Exercise 2: Logistic Regression\n",
+ "\n",
+ "## Introduction\n",
+ "\n",
+ "In this exercise, you will implement logistic regression and apply it to two different datasets. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.\n",
+ "\n",
+ "All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).\n",
+ "\n",
+ "Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, and [`matplotlib`](https://matplotlib.org/) for plotting. In this assignment, we will also use [`scipy`](https://docs.scipy.org/doc/scipy/reference/), which contains scientific and numerical computation functions and tools. \n",
+ "\n",
+ "You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# used for manipulating directory paths\n",
+ "import os\n",
+ "\n",
+ "# Scientific and vector computation for python\n",
+ "import numpy as np\n",
+ "\n",
+ "# Plotting library\n",
+ "from matplotlib import pyplot\n",
+ "\n",
+ "# Optimization module in scipy\n",
+ "from scipy import optimize\n",
+ "\n",
+ "# library written for this exercise providing additional functions for assignment submission, and others\n",
+ "import utils\n",
+ "\n",
+ "# define the submission/grader object for this exercise\n",
+ "grader = utils.Grader()\n",
+ "\n",
+ "# tells matplotlib to embed plots within the notebook\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Submission and Grading\n",
+ "\n",
+ "\n",
+ "After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.\n",
+ "\n",
+ "\n",
+ "| Section | Part | Submission function | Points \n",
+ "| :- |:- | :- | :-:\n",
+ "| 1 | [Sigmoid Function](#section1) | [`sigmoid`](#sigmoid) | 5 \n",
+ "| 2 | [Compute cost for logistic regression](#section2) | [`costFunction`](#costFunction) | 30 \n",
+ "| 3 | [Gradient for logistic regression](#section2) | [`costFunction`](#costFunction) | 30 \n",
+ "| 4 | [Predict Function](#section4) | [`predict`](#predict) | 5 \n",
+ "| 5 | [Compute cost for regularized LR](#section5) | [`costFunctionReg`](#costFunctionReg) | 15 \n",
+ "| 6 | [Gradient for regularized LR](#section5) | [`costFunctionReg`](#costFunctionReg) | 15 \n",
+ "| | Total Points | | 100 \n",
+ "\n",
+ "\n",
+ "\n",
+ "You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "
\n",
+ "At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once. They must also be re-executed everytime the submitted function is updated.\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1 Logistic Regression\n",
+ "\n",
+ "In this part of the exercise, you will build a logistic regression model to predict whether a student gets admitted into a university. Suppose that you are the administrator of a university department and\n",
+ "you want to determine each applicant’s chance of admission based on their results on two exams. You have historical data from previous applicants that you can use as a training set for logistic regression. For each training example, you have the applicant’s scores on two exams and the admissions\n",
+ "decision. Your task is to build a classification model that estimates an applicant’s probability of admission based the scores from those two exams. \n",
+ "\n",
+ "The following cell will load the data and corresponding labels:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load data\n",
+ "# The first two columns contains the exam scores and the third column\n",
+ "# contains the label.\n",
+ "data = np.loadtxt(os.path.join('Data', 'ex2data1.txt'), delimiter=',')\n",
+ "X, y = data[:, 0:2], data[:, 2]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.1 Visualizing the data\n",
+ "\n",
+ "Before starting to implement any learning algorithm, it is always good to visualize the data if possible. We display the data on a 2-dimensional plot by calling the function `plotData`. You will now complete the code in `plotData` so that it displays a figure where the axes are the two exam scores, and the positive and negative examples are shown with different markers.\n",
+ "\n",
+ "To help you get more familiar with plotting, we have left `plotData` empty so you can try to implement it yourself. However, this is an optional (ungraded) exercise. We also provide our implementation below so you can\n",
+ "copy it or refer to it. If you choose to copy our example, make sure you learn\n",
+ "what each of its commands is doing by consulting the `matplotlib` and `numpy` documentation.\n",
+ "\n",
+ "```python\n",
+ "# Find Indices of Positive and Negative Examples\n",
+ "pos = y == 1\n",
+ "neg = y == 0\n",
+ "\n",
+ "# Plot Examples\n",
+ "pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)\n",
+ "pyplot.plot(X[neg, 0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plotData(X, y):\n",
+ " \"\"\"\n",
+ " Plots the data points X and y into a new figure. Plots the data \n",
+ " points with * for the positive examples and o for the negative examples.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " X : array_like\n",
+ " An Mx2 matrix representing the dataset. \n",
+ " \n",
+ " y : array_like\n",
+ " Label values for the dataset. A vector of size (M, ).\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Plot the positive and negative examples on a 2D plot, using the\n",
+ " option 'k*' for the positive examples and 'ko' for the negative examples. \n",
+ " \"\"\"\n",
+ " # Create New Figure\n",
+ " fig = pyplot.figure()\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ " # Find Indexes of Positive and NEgative Examples\n",
+ " \n",
+ " pos = y == 1\n",
+ " neg = y == 0\n",
+ "\n",
+ " pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)\n",
+ " pyplot.plot(X[neg,0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)\n",
+ " \n",
+ " # ============================================================"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, we call the implemented function to display the loaded data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEGCAYAAACKB4k+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO2de3hU5bXwf2sGyBCDQSC1UqJQoV65SWyl7YGEYKVfOQXb2gO13o6XeuspDfVyKocYxK9VqLY+52gLUpG0X/Bo2wge28MdrNrWYL1bSy2ICMpFiaTIJWF9f8yekISZZGYysy8z6/c877Nn7z177zV7ZvZ63/Wui6gqhmEYhgEQ8loAwzAMwz+YUjAMwzBaMaVgGIZhtGJKwTAMw2jFlIJhGIbRSg+vBegOAwYM0MGDB3sthmEYRqDYuHHjblUtibcv0Eph8ODBNDQ0eC2GYRhGoBCRtxLtM/ORYRiG0YopBcMwDKOVrCkFEfm5iOwUkVfabOsnIitFZJOzPMHZLiJyn4j8TUReEpFzsiWXYRiGkZhsjhQWA5M6bLsVWK2qw4DVzjrAF4FhTrsGeCCLchmGYRgJyJpSUNUNwPsdNk8BHnZePwxMbbN9iUb5A9BXRE7KlmzdoampiTlzqiktLSEcDlFaWsKcOdU0NTV5LZphGEa3cdv76ERV3QGgqjtE5GPO9k8Ab7d53zZn246OJxCRa4iOJjj55JOzK20HmpqaKC8/j+LiN6muPsCQIbB5827q6u5m2bJfsW7dHygqKnJVJsMwjEzil4lmibMtbvpWVV2gqmWqWlZSEtfNNmvcc888iovfZNasAwwdCuEwDB0Ks2YdoLj4Te65Z56r8hiGYWQat5XCezGzkLPc6WzfBpS2ed8gYLvLsnXJwoX3M336AaSDChOBadMO8OCD2Z0KaWxs5MILL6SxsTGr1zH8hX3vhpu4rRSWAZc5ry8DHm+z/VLHC+k8oDFmZvIT27fvYciQ+PuGDInuzybLli2jvr6e5cuXZ/U6hr+w791wk2y6pNYBzwKnicg2EbkS+CFwvohsAs531gGeBP4O/A1YCFyfLbm6w8CB/dm8Of6+zZuj+7PJz3/+83ZLwx287qnb9264STa9j6ar6kmq2lNVB6nqIlXdo6qVqjrMWb7vvFdV9QZVPVVVh6uqL3NXXH319dTVRehYrE4Vli6NcNVV12X0ehMnTkREWtszzzwDwNNPP91u+8SJEzN63XynoxJwu6cetO/da6VpZBa/TDQHgqqqm2hsPJW5cyNs2gTNzbBpE8ydG6Gx8VSqqm7K6PVuu+02CgsLW9cPHTrUbglQWFjIrFmzMnrdfKejEnC7p56J793NB7WZt3IMVQ1sGzNmjLrNvn37tKZmtpaWlmg4HNLS0hKtqZmt+/bty8r11qxZo4WFhUrUG6tdKyws1LVr12bluvlM3759293nXr16tVvGWmVlZdZk6O73vmTJEgW0trY2azLGKC8vV0ArKiqyfi0jMwANmuC56vmDvTvNC6XgBcuXL9dIJNLuwRCJRHT58uVei5YTVFZWtru3PXr0iPswdlshd+d7z+aDuuP98kJpGt2jM6Vg5qME+Clyee/evfTo0YNQKETv3r0JhUL06NGDvXv3ui5LLtLRXNPc3Nzp+wsLC/mf//kfysvLsypXKt+7m/MQZtbMcRJpiyC0bI0U9u3bp2PGnKUTJkR04UJ01Sp04UJ0woSIjhlzVtZMRYkoLy/XUCiko0eP1hUrVujo0aM1FArZcD2DdGauwaMRWirfezLyZ3J0Y2bNYIONFFLDb5HLxcXFzJs3j4aGBs4//3yee+457r77bo4//nhX5chlKioqeOSRR4hEIsfs82qElsr3XlFRwRNPPNGuB9+WTI9uEt2vSCTCI488kvVRlJFFEmmLILRsjRQGDRqgCxeia9ce2xYsQEtLS7JyXcNbamtrtaioSEOhkPbu3bu153vKKacEZoTm5vxTx/sVCoW0qKjIlclto3tgI4XU8Dpy2fCGRYsWsX//fkaOHMnjjz/e2iMfMmRIYEZobs4/dbxfI0eOZP/+/RZkF3BMKcTB68hlwxs6mmvef/995s+fT3FxMQDhcJiZM2dSX1/vsaSJcfNBbWbN3ESiI4lgUlZWpg0NmQ9+njOnmvXr72bWrPbJ71SjgWrjx9/M7Nk1Gb+uYSRLY2Mjl19+OYsXL25VWgBTp05l3LhxzJgxg1AoREtLCz/+8Y956qmnfK3MDHcRkY2qWhZ3nymFY2lbN2HatFjdhGgqi8bGU61uguE5tbW1XHrppdTW1vLNb37Ta3GMgNGZUjDzURyKiopYt+4PjB9/M3fcUcKkSSHuuKOE8eNvNoVg+AJLkmdkCxspGEYAmDhxIqtXr25d79WrF4cOHWpdxqisrGTVqlVeiBiXRGYuw1tspGAYASeoUcSWLC94mFIwjADgdnBapjAzV/AwpWAYASEIUcSJcjCtW7fOl7UgjGMxpWAYAcLvyRETmbnazl360cxlHMWUgmEEiGwGp2WiME9QzVzGUUwpGEaAyGYUcaYmhe+88072798fd9/+/fupqKgwE5KPMZdUw8hj2rqMTp06lXXr1lFRUcGaNWvSPufatWuZNGlSO8+ojtiIwVt855IqIt8RkVdE5FURmeFs6yciK0Vkk7M8wQvZDH9gxeCzz8SJE+nbty/19fX07ds3Y4V5KioqOOOMMxLuN4Xgb1xXCiJyNnA18GlgJDBZRIYBtwKrVXUYsNpZN/IU82/PPrfddhuh0NFHQCZjHwYPHsyVV155jKdUKBTyjaeUER8vRgpnAH9Q1f2q2gysBy4EpgAPO+95GJjqgWxGB7zqsZt/e3Zo6zI6YcKEdkqhI93p0dfX11NeXn6Mp1RhYaFvPKVSIZ9Grl4ohVeAcSLSX0QKgf8DlAInquoOAGf5sXgHi8g1ItIgIg27du1yTeh8xa0eu5s1hvOZZOtR9+zZs9s9+lyqt5BPI1fXlYKqvg7cBawEfge8CHReKb398QtUtUxVy0pKSrIkpRHDrR57UNM4BI2uXEYBRISCgoJu9+hzqd5CXo1cE5Vkc6sB/xe4HngDOMnZdhLwRlfHZqscZz5TWVnZrpRjr1692i1jrbKyMuPXtmLw7hGvbKeI6Jw5c3T06NEqIjpgwADdu3ev16K6zt69e3XAgAGe/Q/cAL+V4xSRjznLk4GvAHXAMuAy5y2XAY97IVuQyYTd08seexDSOOQKbSOjQ6EQIsJxxx3HkCFDeO655/iXf/kXdu/enRfmko4sW7aM3bt306tXr9Zt+TRy9Sp47Vci8hqwHLhBVT8AfgicLyKbgPOddSMFMmH39Doi1e9pHHKFtvb+3/3ud4waNarV3h8Oh3n33XeBPDGXdCD2mc8880xP/geeT2onGkIEoZn5qD3l5eUKaEVFRbfPFc+8EIlEdPny5RmQNDHl5eUaCoV09OjRumLFCh09erSGQqGMfCbjKFOmTNEf/ehH2tLSoqqqEyZMyGlzSWc
kazJ163+wZMkSBbS2tjZr18Bv5iMjM2TTY8erHrsXk5Oe98w8oL6+nqqqqlaX1FmzZuXtRH8yJlOIFjZy43/g+aR2Im0RhJbvI4XOJmZjLd0J2nzqsbvRMwsCy5Yt03A4nJcT/V39l4YOHZq1/4EXzh10MlLw/MHenZbvSkE1ex47Hc0Lzc3NOn/+fJ0yZUoGpfcHmTS7ZYK9e/fq1KlTXff8iSnHnj17um429APxTKahUEivvPLKrP4Pstm5S4QphRzHK/t/UPHS7TYZvBq5xJRjOBzWUCikvXv31lAopEVFRXkxiqqtrdWioiJPPrvb7tidKQWbU8gBzGMnNfweKOeWTbm8vDzunFRLSwtHjhzho48+4siRIzQ1NeWFF5KXEdh+csc2pZAD5FI6ATfw2u22I16l+Bg7dmy79XiprgsLC7n22msDGYWcKl5HYPumc5doCBGEZuajKPlk/88kfjG7eWFTVj1qLgqFQnk5uew33HTuwOYUDONYvLQhd8QNm3KiuZQePXq47otvHIubnTtTCoYRB7+53WZ75JLMiASiOZDyZXI5X+lMKdicgpG3eG1D7ki2bcpdzaWICPPmzWuX8sLIQxJpiyA0GykYuYRbI5d4I5IePXro448/rqo2J5UPYCMFw/A/bo1c4o1IIpEIH374IQDhcJiZM2dSX1+f0esawUCiSiOYlJWVaUNDg9diGEagqKioYMOGDYwcOZK77rqLW265hRdffJHx48ezZs0ar8UzXEBENqpqWbx9NlIwjDzDb3Mphr+wkYJhGEaeYSMFw8gy+Zh+28hNTCkYRgbIRNU7w/ADphQMIwN4XhjFMDKEKQXDSAOvktgZRrYxpWAYaeD39NuGkS6eKAUR+a6IvCoir4hInYhERGSIiPxRRDaJyCMi0ssL2QwjGfyWftswMoXrSkFEPgH8G1CmqmcDYWAacBdwr6oOAz4ArnRbNsNIBT8VRjGMTOGV+agH0FtEegCFwA5gAvCYs/9hYKpHsgWKpqYm5sypprS0hHA4RGlpCXPmVNPU1OS1aHmBbwqjYG6xRmZwXSmo6jvAfGArUWXQCGwE9qpqs/O2bcAn4h0vIteISIOINOzatcsNkX1LU1MT5eXnsX793VRX72bFCqW6ejfr199Nefl5phhcwE9V78wt1sgEXpiPTgCmAEOAgcBxwBfjvDVuqLWqLlDVMlUtKykpyZ6gAeCee+ZRXPwms2YdYOhQCIdh6FCYNesAxcVvcs8987wWMeeJpYxYvXo1999/P6tWrfIsZYS5xRqZwAvz0URgs6ruUtXDwK+BzwJ9HXMSwCBguweypYTXppuFC+9n+vQDiLTfLgLTph3gwQcfcEWOfKa+vp6qqiqeeOIJ6uvrefLJJ13LMGpusUY28EIpbAXOE5FCERGgEngNWAt8zXnPZcDjHsiWNH4w3WzfvochQ+LvGzIkut9wBy966eYW60+CPrfjxZzCH4lOKD8PvOzIsAC4BagSkb8B/YFFbsuWCn4w3Qwc2J/Nm+Pv27w5ut/IDn7opZtbrD8J+tyOJ95Hqlqtqqer6tmqeomqHlTVv6vqp1V1qKpepKoHvZAtWfxgurn66uupq4vQMdGtKixdGuGqq67Lugz5il966bngFhv0nnVHgj63YxHNaeIH001V1U00Np7K3LkRNm2C5mbYtAnmzo3Q2HgqVVU3ZV2GfMVPvXQ/ucWmQ9B71n4YNWYSUwppkk3TTbIT2EVFRaxb9wfGj7+ZO+4oYdKkEHfcUcL48Tezbt0fKCoqSluG7uLH3l+mZfJLL91PbrHpEPSetV9GjZnClEKaZMt0k+oEdlFREbNn17B1606am1vYunUns2fXeKoQwJ+9v2zI5Ideul8qqSWrdHOtZ+2nUWNGUNXAtjFjxqhX7Nu3T8eMOUsnTIjoggXoypXoggXohAkRHTPmLN23b19a562pma0TJkR0zRp07dqjbc2a6LlramZn+JNkh/LycgW0oqLCa1FayYZM5eXlGgqFdPTo0bpixQodPXq0hkIhX31ut1iyZIkCWltb2+n71qxZo4WFhUo0FiluKyws1LVr17ojeIZYvny5RiKRdp8jEono8uXLvRbtGIAGTfBctZFCGjQ1NXHPPfN49913Wbv2ADNmCF/8ItTUDOi26cYPE9jp4Mfenxsy+aWX7geSNQPlXM/awQ+jxoyQSFsEoXkxUmg7Qli4EF21Cl24sPsjhBihkOiqVe1HCbG2ciUaDocy9Ekyix97f36UKZeorKxsdy979erVbhlrlZWVcY8PUs86GYI0asRGCpkj2/EJQY098GPvz48y5RLJTLCGw2FmzJgR9/ic6Vk75MqosUulICKfEpHVIvKKsz5CRIIxjZ4Fsm3eCXLsgV+8cfwuU67QldLt1asXLS0tCR/yQfea6kgs5UkoFH2shsNh11KeZJJkRgoLgX8HDgOo6ktE6x/kJdmOTwh67IEfe39+lClX6EzpfupTnwISzzHkSs8610hGKRSq6p86bGuO+848INvmHT/HHiSDH3t/ycjkx7iKoBBTum05cOAAf/nLX4DEE/u50rPONZJRCrtF5FScVNYi8jWidRDykmTMO93NnurX2INk8GPvLxmZOsYwmJJInpjSHTp0KAUFBa3bm5ujfcegBnHlLYlmoGMN+CSwCtgPvAP8Hjilq+PcaF57H8WLT9ixY0dWvZOM7NAxhiFZn3tDdcqUKfqjH/1IW1paOvX4Mk+vzLF3716dOnWq7t27N63j6cT7qCuFEAK+7rw+DujT2fvdbl4Fr+3bt09ramZraWmJhsMhLS0t0Zqa2a3bcyH4LNdJ1p3yhBNO8FrUwJFrrqZ+pLudls6UQqfmI1U9AtzovP6Hqu7LxOgk6HRm3glq8Fm+kYw7JcC+ffsCmXrBS2xiP/tkM19UMnMKK0XkeyJSKiL9Yi3jkuQIfsieanRNV+6UMWJ2cTB7eLL40dkg6LiZMSAZpfCvwA3ABmCj0xq6feUcJdPeSV6X/MxlErlTxsMC3ZLHj84GQcfNTKyiHd1oAkRZWZk2NPhLP82ZU8369Xcza1Z7E5JqNNZg/PibmT27JqlzxTKmFhe/yfTpBxgyJKpY6uqiMQtBcFH1O7/4xS+47rrr2L9/PwUFBRw8eJAjR460e08kEuHRRx9l8uTJHklpGLB27VomT57M/v37j9mXaqdFRDaqalm8fclENPcUkX8TkcecdqOI9EzqynlIJoPP/FDyM9fpaOooLS1t3Wf28PaYm663uBWdn4z56AFgDHC/08Y424w4ZDL4zCats09HU8fgwYNbt7e1hy9YsCCnHojpPOD9WCMjXYKq4FyZxE/klhRrwIvJbPOieVlPwQ2CmjG1K7rrY51N2vrcq6o2Nzfr/Pnz9ZxzzsmpuIV0XBr9WCMjXYIah5KpTKx0M0tqixPRDICIfBJoSVcJichpIvJCm/ahiMxwvJpWisgmZ3lCutfIFYKaMbUr/NzjTJR6ITZJmiseNMm4NPqxRkamCGoJUFcm8RNpi1gDKoGtwDpgPbAFqOjquGQaEAbeBU
4B7gZudbbfCtzV1fG5PlLI1UC4IPQ4u1srwG+k83mSqUfRu3fvQEQpB+X77GwUnckRNulGNOvRh3cBMAIYCRQkc0yS5/0C8LTz+g3gJOf1ScAbXR2f60ohWyU/3SYof8i25FqBnnQ/T1fHff/73/fmA6VIUL7PzsxamTR5dUspEI1R6Ntm/QTg+q6OS6YBPwdudF7v7bDvgwTHXEM0TqLh5JNP7vbN8TudpdQICkH5Q3Yk1/L4pPt54qWtCIVCvh/tdSQI32dno+hMjrC7qxReiLPtz10dl8R5ewG7gRM1BaXQtuX6SCGXCMIfMh65lscnnc9TW1ur4XA47nfn59FePPz2fSY7is70CLszpZDMRHNI5KhTpIiEnQd6d/ki8LyqvuesvyciJznXOAnYmYFrGD4hqBXQci2PTzqfZ9GiRRw5cgTp6BtN8NJi++37TDYHV6J92bjnySiF/wX+W0QqRWQCUAf8LgPXnu6cK8Yy4DLn9WXA4xm4huEj/PaHTIZcy+OTzucpLi5m/vz5rFy5MvD1rv32fXaVg6ugoKBdjYq2ZO2eJxpC6FEzTgi4FngM+BXwLSDc1XFdnLMQ2AMUt9nWH1gNbHKW/bo6j5mPgkWmfKzdJFHcwpQpUzyWLD26+3n8Zn5JFb9+n53d12zcc7rrfaRHH9z9gBGpHJPNZkohWPj1D+klfg7ki0dtba0WFRVpKBTS3r17aygU0qKiosAFgfmNzu5rNu55t5QC0fiE4x2FsJVoltR7ujrOjWZKITVinkyDBg3QUEh00KABgfNkyjWCFlkbxNFeEOjsvmbjnnemFJKZUyhW1Q+BrwAPqeoYIHghjHlOLOPq+vV3U129mxUrlOrq3axffzfl5edZKm6PCFpkbS6mxfZDHqTO7qvr9zyRtog14GWiwWQrgHOdbS91dZwbzUYKyZOr0dFBI4iBfLlONkZrfjcL0s2RwhyiHkh/U9XnnNxHmzKrmoxsYxlX/YGbxVKM5MjGaM3P+b26okuloKqPquoIVb3eWf+7qn41+6LlD25UV8uVMqF+GOp3h65cEIPi2hlk3Ej0FzSzYFuSGSkYWcQtW3+uZFwNcg8sRlAD+XKFbIzWcimjrCkFj3GrutrVV19PXV2E6JTQUVRh6dIIV111XUauk22C3ANrSxAD+XKFbIzWcsosmGiyIQgtFyaaBw0aoAsXxi+ks2ABWlpakpHrBDXjaq5OzJprp/dkOigsSPm9SHeiWUROd9JbFHXYPimzqil/ccvWn8kyoW6SUz2wNuSia2fQyPRoLWfMgom0BfBvRGsc1BMtrDOlzb7nEx3nZrORQn4QpB6YERyyMVoLSsQ3aY4UrgbGqOpUoBz4DxH5jrPv2HSJRlrkiq0/GdL1ssqZHpjhK7IxWvNbwr10EO34NIrtEHlNVc9ss15ENCnea8AEVR3ljoiJKSsr04aGBq/F6BYx76Pi4jeZNu0AQ4ZEvYGWLo3Q2Hiqr007qdD2c06ffvRz1tUl9zl/8YtfcN1117F//34KCgo4ePAghYWFPPDAA3zzm9908ZMYRmKmTp3KuHHjmDFjBqFQiJaWFn784x/z1FNPUV9f77V4rYjIRlUti7sz0RACWAOM6rCtB7AEaEl0nJstF8xHqrlRXa0ruhtRbROzhpE56MR81NlIYRDQrKrvxtn3OVV9OiMqqxvkwkghXygtLaG6ejdDhx67b9MmuOOOErZuTVxXKSg9MMMIAp2NFBIqhSBgSiE4hMMhVqxQwuFj9zU3w6RJIZqbW9wXzDDykM6UggWvGa6QKxHVmSDoqToMb3Drd2NKwXCFfPKy6opcSNVhuI9bv5uklYKIHC8i/WItm0IZuUdV1U00Np7K3LkRXn4ZFi+Giy6Cykr4058Oc+jQobyp6ZArqToMd3Hrd9OlUhCRb4nIe8BLRKuubQTMkI872U1zhVhE9XnnzeC228L89a/wgx/AypVw770tPPvsj3O22E8uJUsz3MOr300yI4XvAWep6mBVHeK0T2ZUigBilcyipKIYi4qKKCjoxbnn9uTOO8lKAkA/2utzNVWHkV28+t0koxTeBPZn8qIi0ldEHhORv4jI6yIy1jFLrRSRTc7yhExeM9O4ld3Uz6SjGLNd7MeP9nqroWCkg1e/m2SUwr8Dz4jIz0Tkvljr5nV/AvxOVU8HRgKvA7cCq1V1GLDaWfctVsksPcWY7QSAfrXXW6oOIx28+N0koxR+RjS6+Q8cnVPYmO4FReR4YBywCEBVD6nqXmAK8LDztoeBqeleww1ypZJZd0hHMWbaNTVI9nqroWCkg9u/m2SUQrOqVqnqQ6r6cKx145qfBHYBD4nIn0XkQRE5DjhRVXcAOMuPxTtYRK4RkQYRadi1a1c3xOge5nefnmLMtGtqkOz1uZAszXAft383ySiFtc6D+KQMuaT2AM4BHlDV0cA/SMFUpKoLVLVMVctKSkq6IUb3ML/79BRjW9fUTZui0cybNsHcudHEeFVVN6UkQ5Ds9VZDwUgHt383Xaa5EJF4f3tN1wNJRD4O/EFVBzvr/0RUKQwFylV1h4icBKxT1dM6O5eXaS7yJbtpZ8yZU8369Xcza1Z7E5Jq9CE/fvzNzJ5dc8xxTU1N3HPPPB588AG2b9/DwIH9ueqq66iquinte/bEE09w0UUXceDAgdZtkUiERx99lMmTJ6d1TsPIVbqV5qKNG+qQTLikOgn23haR2AO/kmg67mXAZc62y4DH072GGwS1klkmSbfXX1RUxOzZNWzdupPm5ha2bt3J7Nk13bpnZq/PP/zofpwTJEqf2rYBZwNfBy6NtWSO6+R8o4gGwL1EtLLbCUB/ol5Hm5xlv67Okyups4OMX9J+W2rt/GPJkiUK+K6qWRAgndTZMUSkmmjltTOBJ4EvAr9X1a9lVj2ljmVJNWJYau3cpLGxkcsvv5zFixdTXFzcbl9FRQXr1q2joqKCNWvWeCRhMEmryI4e7dW/TNTM9KKzfiKwvKvj3Gg2UkiPWO9+0KABGgqJDho0IOeK+hi5QdvRQGVlZbv63L169Wq3jLXKykqvxfY9pFmjOcZHqnoEaHZiDHYSdSs1Aoil5zCCRNtgxCC5HweZZJRCg4j0BRYSDVp7HvhTVqUysoal5zD8TGfBiBMmTGD//sQZd/zkfhxkUqq8JiKDgeNV9aVsCZQKNqeQOt0ti2kY2WTt2rVMnjy504d/QUEBR44c4fDhw63bzP04NbrlkioiV8Zeq+oW4FVn8tkIIJaew/AzyQQjzpw5k4KCAnM/zhLJmI8qReRJJ6L5bKI5kPpkWS4jS1h6DsPvdJUE7plnnrF0IVkkmeC1bxBNUPcyUZfUGar6vWwLZmQHS89hBIHOghEtXUh2SSZOYRhHlcIZRKOPq1Q1ozUW0sHmFFLH0nMYQaCiooINGzYwcuRI7rrrLm655RZefPFFxo8fbzEJGaBbcwrAcuA/VPVbwHiiEcfPZVA+w0UsPYcRBGw04B3JjBSOV9UPO2wbpqqbsipZEthIwTAMI3XSGimIyM0AqvqhiFzUYfcVGZTPMIw4W
MI3wws6Mx9Na/P63zvsm5QFWQzDaIMf600buU9nSkESvI63bhgZp6mpiTlzqiktLSEcDlFaWsKcOdV5k4rDr/WmjdymM6WgCV7HWzfyjGw/sP2So8lNxRSketNG7pJwollEWoiWyhSgNxBzQRUgoqo9XZGwE2yi2RvaurVOn37UrbWuLnNurelWdcskbnzOtiST4sHy+xiZIK2JZlUNq+rxqtpHVXs4r2PrnisEwzvcSKq3cOH9TJ/eXiEAiMC0aQd48MEHun2NrnA7eWCQ6k0buUsycQqG0Q43Hth+yNHkhWLqKsWDKQQj25hSMFLGjQe2H3I0eaWYrN604SWmFIyUceOB7XWOpqamJvr27e2JYlq0aJElfDM8w5SCkTKdPbDr6iIMG3ZGt711qqpuorHxVObOjbBpEzQ3R+s9zJ0bneStqropg5+oPbEJ5hNOOEhtLa4rJkvxYHhJSkV2Mi9dOXMAABtASURBVHZRkS3APqAFaFbVMhHpBzwCDAa2AF9X1Q86O0+q3kdNTU3cc888Fi68n+3b9zBwYH+uvvp6qqpuspw/KZAoqV5dXYQ//1kZMQIuueRgt711Yt/Xgw8+0Pp9XXXVdVn/vmKeTzNnHmDmTDjxRLj4Ylo/T21tD/bvP81yRRmBpTPvIy+VQpmq7m6z7W7gfVX9oYjcCpygqrd0dp5UlILb7oW5TrwH9rBhZ9Dc/Eduv/2gZ26kmaBtdbqPPoJHH4Xf/hZ27oT+/eHw4ULefvs9+70YgSUoSuENoFxVd4jIScA6VT2ts/OkohT84Pee63RV6nPOnAG8/fYu9wVLkXA4xIoVSjh87L7mZpg0KURzc4v7guURjY2NXH755SxevJji4mKvxck5ups6OxsosEJENorINc62E1V1B4Cz/Fi8A0XkGhFpEJGGXbuSf8D4we891+nKW+edd3YHIkWFHzyf8h3L++QdXimFz6nqOcAXgRtEZFyyB6rqAlUtU9WykpKSpC/oB7/3XKerh2mfPmQ84CsbeO35ZFjeJy/xRCmo6nZnuRP4DfBp4D3HbISz3JnJa1rvL/tcffX1PPRQfG+dX/4SKioIxIjMS88nv5LtNN6W98k/uK4UROQ4EekTew18AXgFWAZc5rztMuDxTF7Xen/Zp6rqJp5/HmpqaPcwramB996DK68MxojMqtMdS7bNObfddlu79B6HDh1qt4Romo9Zs2Zl5fpGG1TV1QZ8EnjRaa8Ctznb+wOriZb7XA306+pcY8aM0WTZt2+fjhlzlk6YENEFC9CVK9EFC9AJEyI6ZsxZum/fvqTPlej8NTWzddCgARoKiQ4aNEBramZ3+7xBY+DA/jplCvrxj6OhUHR5xRXok09G73dpaYnXIhppUF5eroBWVFRk7Rpr1qzRwsJCJTrn2K4VFhbq2rVrs3btfANo0ETP6EQ7gtBSUQqqRx/cpaUlGg6HtLS0JCMP7rYKZ+FCdNUqdOHCzCmcIFFTM1snTIjomjXo2rXR9uSTUcXQpw8qQt4qzER43aHYu3evTp06Vffu3du6rbKyst1DuVevXu2WsVZZWZlRWZYvX66RSKTdNSKRiC5fvjyj18l3TClkmXgPwrVr0TVrooqhpma21yK6RscR2bJl6Cc/iX7+8+S9woyHHzoUS5YsUUBra2tbt3XWa89m7722tlaLioo0FApp7969NRQKaVFRUTvZjO7TmVKwNBcZwNxdj9LRHv+Vr8DHPw5z5uBK+umg4XZ67njE8/TxKo13ruV9CmKdbVMKGcDcXdtTVFTE7Nk1vPba3ykq6s1f/woTJ8K0abBkSTRKOB8VZjy86FAk6+lz5513up7G26u8T9l6eAcx3sKUQgYwd9djiaUVOeOMj/jBD2DFCpg7F/7+d6iqiiqGfFSYHfGiQ5GKp4/babzr6+upqqoiFIo+msLhMDNnzqS+vj4r14uRrYd3EOMtTClkAHN3PZaYWeTOO9ubjaqrownmHn00fxVmW7zoUKRiGuquOSco5pNMPbxzId7ClEIGsGCnY+nMLHLxxdEEc/mqMNviVYci2Qpv3TXn+NV8kq2Hd07EWySagQ5C84v3kWr23F2DSigkumpVe2+sWFu5Muqaat5H2Y+f6Qw3PH3ciG9Ih2x6VwUh3gJzST1KJnzCvfYrDwKDBg3QhQvjK4UFC9D+/Qvtfjl41aEoLy/XUCiko0eP1hUrVujo0aM1FAp16wGerfiGeLEU3SWbD2+/x1uYUnDIhE+4H/zKg4DFbvifKVOm6I9+9CNtaWlRVdXm5madP3++TpkyJe1zZqsHHi+WIhMkenifd9553VJAfo+3MKXgkOqDKt6IYMKEcTpuXIE97LrAS7OI4S3Z6IFnywwV7+FdUFDQbQWUjVFYJulMKeTVRHMqPuExl8r16++muno3K1Yo1dW7OXBgA9u2HeTAga7Pkc9YUrnOaWpqYs6c6m7XsvYjyU5id4ZbXjzxvKsOHjwIdM8TKdB1thNpiyC0VEcKXU1+hsOh1vd2NqoYPz6ay6ercxj+x4v5oXwwQSZjPuns3ruVZmPKlCk6dOjQducNhUKu5HnyEmykECUVn/BkXCq7OofhbxKNBtevv5vy8vOy1mv3Q2qLbNNVfENX9/7cc891Jc1GfX09CxYsaHedI0eOAAFzI80geaUUUvEJ7yrSdGeHEkDxzmH4G68ezvmQK6sr80ky9z4TZqhk8CrPk1/JK6WQSpBZV6OKoiIsUC3gePVw9kOurGzPaXSVriLZe+9Wmg23FFAQyCulkMrkZ2ejirq6CGVl42wCNeB49XD2OleWV2aztiR7793Mmup2nie/kldKAY5m8Ny6dSfNzS1s3bqT2bNrjnmYdzaq+PDDU3n88f/p8hyGv3H74Rzrne/fv59rrmmfNRbcM0H6YU4j2XvvphdPrqXtTpe8UwrJYi6VuY+beYfa9s7vums/K1dGs8a++WY0a+zLL7tngvTDnMbVV1/P//t/iUfisXvvZtbUQLuRZhDRjt9KgCgrK9OGhgavxTACSuxBXVz8JtOmHWDIkGgvdenS6MM5k8p/zpxq1q+/m1mz2j+MVeG22+D11wv57ne/R1XVTVnvcITDIVasUMLhY/c1N8OkSSGam1uyKsO7777LGWcM5qyzDnLZZbTe+4cfhldfLeD117fw8Y9/PKsy5DMislFVy+Lts5GCkbe4ORrsrHd+xRXQp89xrpkgvZ7TAFiw4AFGjIBPfQpmz4YLLoguP/UpGDEiut/wBs9GCiISBhqAd1R1sogMAZYC/YDngUtU9VBn57CRghEU/NA7j9HZqGXu3Ajjx9/M7Nk1WZWhtLSE6urdDB167L5Nm+COO0rYunXnsTuNjODXkcJ3gNfbrN8F3Kuqw4APgCs9kcoIBEFLE+GH3nkMP9T/8INbrhGfHl5cVEQGAV8C7gSqRESACcA3nLc8DNwOpDyGPHz4MNu2beNAx+REhmdEIhEGDRpEz549M3K+tnMB1dWxuYDd1NXdzbJlv/KlI0B0Ujt+79ztoMeY2eyee+Zxxx0PsH37HgYO7M9VV13nypwG
xJRk/JFCEDIDNDU1cc8981i48P7W+3f11de7dv+yiSfmIxF5DPgB0Af4HnA58AdVHersLwV+q6pnxzn2GuAagJNPPnnMW2+91W7/5s2b6dOnD/3790c6GnAN11FV9uzZw759+xiSqGuYIn4wf6SKm5PaQSCI32GMtt/l9OlHv8u6uuB8l74yH4nIZGCnqm5suznOW+NqK1VdoKplqlpWUlJyzP4DBw6YQvARIkL//v0zOnLzg0tlqpiLc3v8YMJKl3TiPIJk7nR9pCAiPwAuAZqBCHA88BvgAuDjqtosImOB21X1gs7OFW+i+fXXX+eMM85ISabGxkYuv/xyFi9eTHFxcUrHGsmRzveSCD9N2hrpEzPBPPigNyasdEl1ktyPIwtfjRRU9d9VdZCqDgamAWtU9WJgLfA1522XAY+7JZNfi4sb8fHTpK2RPslmF/AbqU6S+yGCPBX8FKdwC9FJ578B/YFFbl04FsaeyXD23/zmN4gIf/nLX+Luv/zyy3nssceSPt/27dv52teiOvOFF17gySefbN23bt261iIkqTB48GB2796d8nFe42YksmF0JNVOSdDMnZ4qBVVdp6qTndd/V9VPq+pQVb1IVQ9m67puVHWqq6vj85//PEuXLs2IzAMHDmxVIplSCkElyPZoI/ik2ikJmvutn0YKrnHbbbe1y50eK6aRqaIaTU1NPP300yxatKhVKagqN954I2eeeSZf+tKX2NmmIMPgwYP5/ve/z9ixYykrK+P555/nggsu4NRTT+WnP/0pAFu2bOHss8/m0KFDzJ49m0ceeYRRo0Zx11138dOf/pR7772XUaNG8dRTT7Fr1y6++tWvcu6553Luuefy9NNPA7Bnzx6+8IUvMHr0aL71rW8R1BQnNmlreEmqnZLAmTsTlWQLQotXjvO1117rtAxdjGwUF49RW1ur//qv/6qqqmPHjtWNGzfqr371K504caI2NzfrO++8o8XFxfroo4+qquopp5yi999/v6qqzpgxQ4cPH64ffvih7ty5U0tKSlRVdfPmzXrWWWepqupDDz2kN9xwQ+v1qqurdd68ea3r06dP16eeekpVVd966y09/fTTVVX129/+ttbU1Kiq6hNPPKGA7tq1K+3PmQrJfi+GEQRipURLS0s0HA5paWlJwjKunZX2nTAhojU1s12Xn07KcXoSvOYHYkU1LrroonbukpkoqlFXV8eMGTMAmDZtGnV1dRw+fJjp06cTDocZOHAgEyZMaHfMl7/8ZQCGDx9OU1MTffr0oU+fPkQikZTzua9atYrXXnutdf3DDz9k3759bNiwgV//+tcAfOlLX+KEE05I+zMaRj4TmyRPJpaiquomli37FXPnxo9R8Zu5M2+VArQvqlFQUMDBgwe7XVRjz549rFmzhldeeQURoaWlBRHhwgsv7DR2oqCgAKBVlhihUIjm5uaUZDhy5AjPPvssvXv3PmZfUOM3cjmC1Mht/BBBngp5OacQIxtFNR577DEuvfRS3nrrLbZs2cLbb7/NkCFD6NevH0uXLqWlpYUdO3awdu3atK/Rp08f9u3bl3D9C1/4Av/5n//Zuv7CCy8AMG7cOH75y18C8Nvf/pYPPvggbRncxA+VwgyjOwTJ/TavlUI2imrU1dVx4YUXttv21a9+lXfffZdhw4YxfPhwrrvuOsaPH5/2NSoqKnjttdcYNWoUjzzyCP/8z//Mb37zm9aJ5vvuu4+GhgZGjBjBmWee2TpZXV1dzYYNGzjnnHNYsWIFJ598ctoyuEnQ/LwNI8jkXJGdTEbOGpmjO9+LpVk2jMziq4hmw0iVoPl5G0aQMaVg+J7A+XkbRoAxpWD4HktrYRjuYUrB8D2W1sIw3MOUguF7LK2FYbhHXiuFIBW+yHeC5OdtGEEmb5VCNgOiRISZM2e2rs+fP5/bb7+902Pq6+vbpaZIh1RTYS9btowf/vCHca+/ePFitm/fntL1Y0n7DMMILnmrFLIZEFVQUMCvf/3rlB7QmVAKqfLlL3+ZW2+9Ne7101EKhmEEn7xVCtksfNGjRw+uueYa7r333mP2vfXWW1RWVjJixAgqKyvZunUrzzzzDMuWLeOmm25i1KhRvPnmm+2OWb58OZ/5zGcYPXo0EydO5L333gMSp8LesmULp59+OldddRVnn302F198MatWreJzn/scw4YN409/+hMQffDfeOONx1z/rrvuoqGhgYsvvphRo0bx0UcfsXHjRsaPH8+YMWO44IIL2LFjBwAbN25k5MiRjB07lv/6r/9K+54ZhuETEqVPDULrTursUEh01ar2qWxjbeVKNBwOJXWeeBx33HHa2Niop5xyiu7du1fnzZun1dXVqqo6efJkXbx4saqqLlq0SKdMmaKqqpdddllrKu2OvP/++3rkyBFVVV24cKFWVVWpauJU2Js3b9ZwOKwvvfSStrS06DnnnKNXXHGFHjlyROvr61uv2TYFd8frjx8/Xp977jlVVT106JCOHTtWd+7cqaqqS5cu1SuuuEJVVYcPH67r1q1TVdXvfe97rem9O2Kpsw3DP9BJ6uy8HSlkOyDq+OOP59JLL+W+++5rt/3ZZ5/lG9/4BgCXXHIJv//977s817Zt27jgggsYPnw48+bN49VXXwVgw4YNfPOb3wSOTYU9ZMgQhg8fTigU4qyzzqKyshIRYfjw4WzZsiWlz/LGG2/wyiuvcP755zNq1Cjmzp3Ltm3baGxsZO/eva15nC655JKUzmt4izlaGPHIW6XgRkDUjBkzWLRoEf/4xz8SvieZVNbf/va3ufHGG3n55Zf52c9+1q7+Q6LjO6bfbpuaO9VU3KrKWWedxQsvvMALL7zAyy+/zIoVK1DVwKbizncs86yRiLxVCm4ERPXr14+vf/3rLFq0qHXbZz/72dYSnb/85S/5/Oc/Dxyb/rotjY2NfOITnwDg4Ycfbt2eyVTYnaXjPu2009i1axfPPvssAIcPH+bVV1+lb9++FBcXt452YrIY/scyzxqJcF0piEhERP4kIi+KyKsiUuNsHyIifxSRTSLyiIj0yqYcbgVEzZw5s50X0n333cdDDz3EiBEjqK2t5Sc/+QkQrdA2b948Ro8efcxE8+23385FF13EP/3TPzFgwIDW7ZlMhd3x+pdffjnXXnsto0aNoqWlhccee4xbbrmFkSNHMmrUKJ555hkAHnroIW644QbGjh0bt6iP4U+y6WhhBBvXU2dL1N5wnKo2iUhP4PfAd4Aq4NequlREfgq8qKqd/jItdXZwsO/FX4TDIVasUMLhY/c1N8OkSSGam1vcF8xwBV+lznYmv2MGy55OU2AC8Jiz/WFgqtuyGUa+YJlnjUR4MqcgImEReQHYCawE3gT2qmpsBnQb8IkEx14jIg0i0rBr1y53BDaMHMMyzxqJ8EQpqGqLqo4CBgGfBuLZFeLatVR1gaqWqWpZSUlJovNnTFaj+9j34T8s86yRCE+9j1R1L7AOOA/oKyI9nF2DgLRyLEQiEfbs2WMPIp+gquzZs4dIJOK1KEYbLPOskQgvJppLgMOquldEegMrgLuAy4B
ftZlofklV7+/sXPEmmg8fPsy2bdva+fIb3hKJRBg0aBA9e/b0WhTDMOh8orlHvI1Z5iTgYREJEx2p/LeqPiEirwFLRWQu8GdgUWcnSUTPnj0Zkqigr2EYhtEprisFVX0JGB1n+9+Jzi8YhmEYHpG3Ec2GYRjGsZhSMAzDMFpxfaI5k4jILuCtNA8fACRfBcd7giRvkGQFkzebBElWCJa83ZH1FFWN69MfaKXQHUSkIdHsux8JkrxBkhVM3mwSJFkhWPJmS1YzHxmGYRitmFIwDMMwWslnpbDAawFSJEjyBklWMHmzSZBkhWDJmxVZ83ZOwTAMwziWfB4pGIZhGB0wpWAYhmG0khdKwS8lQFPBqTnxZxF5wln3s6xbRORlEXlBRBqcbf1EZKUj70oROcFrOQFEpK+IPCYifxGR10VkrI9lPc25p7H2oYjM8Ku8ACLyXec/9oqI1Dn/PV/+dkXkO46cr4rIDGebb+6tiPxcRHaKyCtttsWVT6LcJyJ/E5GXROScdK+bF0oBOAhMUNWRwChgkoicRzQ7672qOgz4ALjSQxk78h3g9TbrfpYVoEJVR7Xxm74VWO3Iu9pZ9wM/AX6nqqcDI4neY1/KqqpvOPd0FDAG2A/8Bp/KKyKfAP4NKFPVs4EwMA0f/nZF5GzgaqL51kYCk0VkGP66t4uBSR22JZLvi8Awp10DpF9kW1XzqgGFwPPAZ4hGA/Zwto8F/tdr+RxZBjlf+ATgCUD8KqsjzxZgQIdtbwAnOa9PAt7wgZzHA5txHCz8LGsc2b8APO1neYlWS3wb6Ec02eYTwAV+/O0CFwEPtln/D+Bmv91bYDDwSpv1uPIBPwOmx3tfqi1fRgrdKgHqAT8m+gM94qz3x7+yQrRK3goR2Sgi1zjbTlTVHQDO8mOeSXeUTwK7gIcc09yDInIc/pS1I9OAOue1L+VV1XeA+cBWYAfQCGzEn7/dV4BxItJfRAqB/wOU4tN724ZE8sUUcoy073PeKAXtRglQNxGRycBOVd3YdnOct3ouaxs+p6rnEB3C3iAi47wWKAE9gHOAB1R1NPAPfGJ66QzHBv9l4FGvZekMx749BRgCDASOI/qb6Ijnv11VfZ2oWWsl8DvgRaC504P8TcaeEXmjFGJoFkqAZpjPAV8WkS3AUqImpB/jT1kBUNXtznInUZv3p4H3ROQkAGe50zsJW9kGbFPVPzrrjxFVEn6UtS1fBJ5X1fecdb/KOxHYrKq7VPUw8Gvgs/j0t6uqi1T1HFUdB7wPbMK/9zZGIvm2ER3pxEj7PueFUhCREhHp67zuTfTH+zqwFvia87bLgMe9kfAoqvrvqjpIVQcTNRmsUdWL8aGsACJynIj0ib0mavt+BVhGVE7wibyq+i7wtoic5myqBF7Dh7J2YDpHTUfgX3m3AueJSKGICEfvr19/ux9zlicDXyF6j/16b2Mkkm8ZcKnjhXQe0BgzM6WM1xM+Lk3WjCBa4vMlog+s2c72TwJ/Av5GdGhe4LWsHeQuB57ws6yOXC867VXgNmd7f6KT5ZucZT+vZXXkGgU0OL+FeuAEv8rqyFsI7AGK22zzs7w1wF+c/1ktUODj3+5TRJXWi0Cl3+4tUSW1AzhMdCRwZSL5iJqP/ovoXOnLRD3A0rqupbkwDMMwWskL85FhGIaRHKYUDMMwjFZMKRiGYRitmFIwDMMwWjGlYBiGYbRiSsHISUSkpUOGUdcil+NltzSMoGAuqUZOIiJNqlrk0bXHAU3AEo1mC3XjmmFVbXHjWkZuYyMFI28QkWIReSMW0ezk+7/aef2AiDRIm3obzvYtIvJ/ReRZZ/85IvK/IvKmiFwb7zqquoFo2oTOZLnIyeX/oohscLaFRWS+RGtTvCQi33a2VzoJ/F52RiEFbWSbLSK/By4SkVNF5HdOYsKnROT0TNw3I7/o0fVbDCOQ9Hay4sb4gao+IiI3AotF5CfACaq60Nl/m6q+LyJhYLWIjFDVl5x9b6vqWBG5l2iO+88BEaIR3D9NU77ZwAWq+k4sBQvRPPhDgNGq2uwUVIk416xU1b+KyBLgOqL5sAAOqOrnAURkNXCtqm4Skc8A9xPNnWUYSWNKwchVPtJoVtx2qOpKEbmIaEqAkW12fd1J+92DaJ76M4mmwoBoXhmIpg8oUtV9wD4ROSAifTWaZDFVniaqnP6baOI4iObk+qk6aaYdJTWSaJK5vzrveRi4gaNK4REAESkimnzu0WjaISCaYsIwUsKUgpFXiEiIaNr0j4gWg9kmIkOA7wHnquoHIrKY6EggxkFneaTN69h6Wv8hVb3W6c1/CXhBREYRzV/TcZIvXkrktvzDWYaI1i04RhEaRirYnIKRb3yXaIbc6cDPRaQn0Yps/wAaReRE4tcAyCgicqqq/lFVZxOtTFYKrACujaWZFpF+RJPLDRaRoc6hlwDrO55PVT8ENjujoFjN3pEd32cYXWFKwchVendwSf2hiHwKuAqYqapPARuAWar6ItEsuq8CPydq2kkbEakDngVOE5FtIhKvJvE8Z+L4FUeOF4EHiaaffklEXgS+oaoHgCuImoVeJjo6STSPcTFwpXPsq0QL3hhGSphLqmEYhtGKjRQMwzCMVkwpGIZhGK2YUjAMwzBaMaVgGIZhtGJKwTAMw2jFlIJhGIbRiikFwzAMo5X/D2r06iWZghgyAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ "
"
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "plotData(X, y)\n",
+ "# add axes labels\n",
+ "pyplot.xlabel('Exam 1 score')\n",
+ "pyplot.ylabel('Exam 2 score')\n",
+ "pyplot.legend(['Admitted', 'Not admitted'])\n",
+ "pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### 1.2 Implementation\n",
+ "\n",
+ "#### 1.2.1 Warmup exercise: sigmoid function\n",
+ "\n",
+ "Before you start with the actual cost function, recall that the logistic regression hypothesis is defined as:\n",
+ "\n",
+ "$$ h_\\theta(x) = g(\\theta^T x)$$\n",
+ "\n",
+ "where function $g$ is the sigmoid function. The sigmoid function is defined as: \n",
+ "\n",
+ "$$g(z) = \\frac{1}{1+e^{-z}}$$.\n",
+ "\n",
+ "Your first step is to implement this function `sigmoid` so it can be\n",
+ "called by the rest of your program. When you are finished, try testing a few\n",
+ "values by calling `sigmoid(x)` in a new cell. For large positive values of `x`, the sigmoid should be close to 1, while for large negative values, the sigmoid should be close to 0. Evaluating `sigmoid(0)` should give you exactly 0.5. Your code should also work with vectors and matrices. **For a matrix, your function should perform the sigmoid function on every element.**\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def sigmoid(z):\n",
+ " \"\"\"\n",
+ " Compute sigmoid function given the input z.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " z : array_like\n",
+ " The input to the sigmoid function. This can be a 1-D vector \n",
+ " or a 2-D matrix. \n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " g : array_like\n",
+ " The computed sigmoid function. g has the same shape as z, since\n",
+ " the sigmoid is computed element-wise on z.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the sigmoid of each value of z (z can be a matrix, vector or scalar).\n",
+ " \"\"\"\n",
+ " # convert input to a numpy array\n",
+ " z = np.array(z)\n",
+ " \n",
+ " # You need to return the following variables correctly \n",
+ " g = np.zeros(z.shape)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " g = 1 / (1 + np.exp(-z))\n",
+ "\n",
+ " # =============================================================\n",
+ " return g"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The following cell evaluates the sigmoid function at `z=0`. You should get a value of 0.5. You can also try different values for `z` to experiment with the sigmoid function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "g( 0 ) = 0.5\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test the implementation of sigmoid function here\n",
+ "z = 0\n",
+ "g = sigmoid(z)\n",
+ "\n",
+ "print('g(', z, ') = ', g)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading. \n",
+ "\n",
+ "The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "Execute the following cell to grade your solution to the first part of this exercise.\n",
+ "\n",
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Submitting Solutions | Programming Exercise logistic-regression\n",
+ "\n",
+ "Login (email address): rustagi@iitg.ac.in\n"
+ ]
+ }
+ ],
+ "source": [
+ "# appends the implemented function in part 1 to the grader object\n",
+ "grader[1] = sigmoid\n",
+ "\n",
+ "# send the added functions to coursera grader for getting a grade on this part\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "#### 1.2.2 Cost function and gradient\n",
+ "\n",
+ "Now you will implement the cost function and gradient for logistic regression. Before proceeding we add the intercept term to X. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Setup the data matrix appropriately, and add ones for the intercept term\n",
+ "m, n = X.shape\n",
+ "\n",
+ "# Add intercept term to X\n",
+ "X = np.concatenate([np.ones((m, 1)), X], axis=1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now, complete the code for the function `costFunction` to return the cost and gradient. Recall that the cost function in logistic regression is\n",
+ "\n",
+ "$$ J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m} \\left[ -y^{(i)} \\log\\left(h_\\theta\\left( x^{(i)} \\right) \\right) - \\left( 1 - y^{(i)}\\right) \\log \\left( 1 - h_\\theta\\left( x^{(i)} \\right) \\right) \\right]$$\n",
+ "\n",
+ "and the gradient of the cost is a vector of the same length as $\\theta$ where the $j^{th}$\n",
+ "element (for $j = 0, 1, \\cdots , n$) is defined as follows:\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left( x^{(i)} \\right) - y^{(i)} \\right) x_j^{(i)} $$\n",
+ "\n",
+ "Note that while this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of $h_\\theta(x)$.\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def costFunction(theta, X, y):\n",
+ " \"\"\"\n",
+ " Compute cost and gradient for logistic regression. \n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " The parameters for logistic regression. This a vector\n",
+ " of shape (n+1, ).\n",
+ " \n",
+ " X : array_like\n",
+ " The input dataset of shape (m x n+1) where m is the total number\n",
+ " of data points and n is the number of features. We assume the \n",
+ " intercept has already been added to the input.\n",
+ " \n",
+ " y : arra_like\n",
+ " Labels for the input. This is a vector of shape (m, ).\n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " J : float\n",
+ " The computed value for the cost function. \n",
+ " \n",
+ " grad : array_like\n",
+ " A vector of shape (n+1, ) which is the gradient of the cost\n",
+ " function with respect to theta, at the current values of theta.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the cost of a particular choice of theta. You should set J to \n",
+ " the cost. Compute the partial derivatives and set grad to the partial\n",
+ " derivatives of the cost w.r.t. each parameter in theta.\n",
+ " \"\"\"\n",
+ " # Initialize some useful values\n",
+ " m = y.size # number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly \n",
+ " J = 0\n",
+ " grad = np.zeros(theta.shape)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " h = sigmoid(X.dot(theta.T))\n",
+ " \n",
+ " J = (1 / m) * np.sum(-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h)))\n",
+ " grad = (1 / m) * (h - y).dot(X)\n",
+ " \n",
+ " # =============================================================\n",
+ " return J, grad"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once you are done call your `costFunction` using two test cases for $\\theta$ by executing the next cell."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(n+1)\n",
+ "\n",
+ "cost, grad = costFunction(initial_theta, X, y)\n",
+ "\n",
+ "print('Cost at initial theta (zeros): {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.693\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros):')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}]'.format(*grad))\n",
+ "print('Expected gradients (approx):\\n\\t[-0.1000, -12.0092, -11.2628]\\n')\n",
+ "\n",
+ "# Compute and display cost and gradient with non-zero theta\n",
+ "test_theta = np.array([-24, 0.2, 0.2])\n",
+ "cost, grad = costFunction(test_theta, X, y)\n",
+ "\n",
+ "print('Cost at test theta: {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.218\\n')\n",
+ "\n",
+ "print('Gradient at test theta:')\n",
+ "print('\\t[{:.3f}, {:.3f}, {:.3f}]'.format(*grad))\n",
+ "print('Expected gradients (approx):\\n\\t[0.043, 2.566, 2.647]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[2] = costFunction\n",
+ "grader[3] = costFunction\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 1.2.3 Learning parameters using `scipy.optimize`\n",
+ "\n",
+ "In the previous assignment, you found the optimal parameters of a linear regression model by implementing gradient descent. You wrote a cost function and calculated its gradient, then took a gradient descent step accordingly. This time, instead of taking gradient descent steps, you will use the [`scipy.optimize` module](https://docs.scipy.org/doc/scipy/reference/optimize.html). SciPy is a numerical computing library for `python`. It provides an optimization module for root finding and minimization. As of `scipy 1.0`, the function `scipy.optimize.minimize` is the method to use for optimization problems(both constrained and unconstrained).\n",
+ "\n",
+ "For logistic regression, you want to optimize the cost function $J(\\theta)$ with parameters $\\theta$.\n",
+ "Concretely, you are going to use `optimize.minimize` to find the best parameters $\\theta$ for the logistic regression cost function, given a fixed dataset (of X and y values). You will pass to `optimize.minimize` the following inputs:\n",
+ "- `costFunction`: A cost function that, when given the training set and a particular $\\theta$, computes the logistic regression cost and gradient with respect to $\\theta$ for the dataset (X, y). It is important to note that we only pass the name of the function without the parenthesis. This indicates that we are only providing a reference to this function, and not evaluating the result from this function.\n",
+ "- `initial_theta`: The initial values of the parameters we are trying to optimize.\n",
+ "- `(X, y)`: These are additional arguments to the cost function.\n",
+ "- `jac`: Indication if the cost function returns the Jacobian (gradient) along with cost value. (True)\n",
+ "- `method`: Optimization method/algorithm to use\n",
+ "- `options`: Additional options which might be specific to the specific optimization method. In the following, we only tell the algorithm the maximum number of iterations before it terminates.\n",
+ "\n",
+ "If you have completed the `costFunction` correctly, `optimize.minimize` will converge on the right optimization parameters and return the final values of the cost and $\\theta$ in a class object. Notice that by using `optimize.minimize`, you did not have to write any loops yourself, or set a learning rate like you did for gradient descent. This is all done by `optimize.minimize`: you only needed to provide a function calculating the cost and the gradient.\n",
+ "\n",
+ "In the following, we already have code written to call `optimize.minimize` with the correct arguments."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# set options for optimize.minimize\n",
+ "options= {'maxiter': 400}\n",
+ "\n",
+ "# see documention for scipy's optimize.minimize for description about\n",
+ "# the different parameters\n",
+ "# The function returns an object `OptimizeResult`\n",
+ "# We use truncated Newton algorithm for optimization which is \n",
+ "# equivalent to MATLAB's fminunc\n",
+ "# See https://stackoverflow.com/questions/18801002/fminunc-alternate-in-numpy\n",
+ "res = optimize.minimize(costFunction,\n",
+ " initial_theta,\n",
+ " (X, y),\n",
+ " jac=True,\n",
+ " method='TNC',\n",
+ " options=options)\n",
+ "\n",
+ "# the fun property of `OptimizeResult` object returns\n",
+ "# the value of costFunction at optimized theta\n",
+ "cost = res.fun\n",
+ "\n",
+ "# the optimized theta is in the x property\n",
+ "theta = res.x\n",
+ "\n",
+ "# Print theta to screen\n",
+ "print('Cost at theta found by optimize.minimize: {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx): 0.203\\n');\n",
+ "\n",
+ "print('theta:')\n",
+ "print('\\t[{:.3f}, {:.3f}, {:.3f}]'.format(*theta))\n",
+ "print('Expected theta (approx):\\n\\t[-25.161, 0.206, 0.201]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once `optimize.minimize` completes, we want to use the final value for $\\theta$ to visualize the decision boundary on the training data as shown in the figure below. \n",
+ "\n",
+ "![](Figures/decision_boundary1.png)\n",
+ "\n",
+ "To do so, we have written a function `plotDecisionBoundary` for plotting the decision boundary on top of training data. You do not need to write any code for plotting the decision boundary, but we also encourage you to look at the code in `plotDecisionBoundary` to see how to plot such a boundary using the $\\theta$ values. You can find this function in the `utils.py` file which comes with this assignment."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Plot Boundary\n",
+ "utils.plotDecisionBoundary(plotData, theta, X, y)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "#### 1.2.4 Evaluating logistic regression\n",
+ "\n",
+ "After learning the parameters, you can use the model to predict whether a particular student will be admitted. For a student with an Exam 1 score of 45 and an Exam 2 score of 85, you should expect to see an admission\n",
+ "probability of 0.776. Another way to evaluate the quality of the parameters we have found is to see how well the learned model predicts on our training set. In this part, your task is to complete the code in function `predict`. The predict function will produce “1” or “0” predictions given a dataset and a learned parameter vector $\\theta$. \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def predict(theta, X):\n",
+ " \"\"\"\n",
+ " Predict whether the label is 0 or 1 using learned logistic regression.\n",
+ " Computes the predictions for X using a threshold at 0.5 \n",
+ " (i.e., if sigmoid(theta.T*x) >= 0.5, predict 1)\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " Parameters for logistic regression. A vecotor of shape (n+1, ).\n",
+ " \n",
+ " X : array_like\n",
+ " The data to use for computing predictions. The rows is the number \n",
+ " of points to compute predictions, and columns is the number of\n",
+ " features.\n",
+ "\n",
+ " Returns\n",
+ " -------\n",
+ " p : array_like\n",
+ " Predictions and 0 or 1 for each row in X. \n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Complete the following code to make predictions using your learned \n",
+ " logistic regression parameters.You should set p to a vector of 0's and 1's \n",
+ " \"\"\"\n",
+ " m = X.shape[0] # Number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly\n",
+ " p = np.zeros(m)\n",
+ "\n",
+ " # ====================== YOUR CODE HERE ======================\n",
+ "\n",
+ " p = np.roound(sigmoid(X.dot(theta.T)))\n",
+ " \n",
+ " # ============================================================\n",
+ " return p"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After you have completed the code in `predict`, we proceed to report the training accuracy of your classifier by computing the percentage of examples it got correct."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Predict probability for a student with score 45 on exam 1 \n",
+ "# and score 85 on exam 2 \n",
+ "prob = sigmoid(np.dot([1, 45, 85], theta))\n",
+ "print('For a student with scores 45 and 85,'\n",
+ " 'we predict an admission probability of {:.3f}'.format(prob))\n",
+ "print('Expected value: 0.775 +/- 0.002\\n')\n",
+ "\n",
+ "# Compute accuracy on our training set\n",
+ "p = predict(theta, X)\n",
+ "print('Train Accuracy: {:.2f} %'.format(np.mean(p == y) * 100))\n",
+ "print('Expected accuracy (approx): 89.00 %')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[4] = predict\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2 Regularized logistic regression\n",
+ "\n",
+ "In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant passes quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly.\n",
+ "Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. From these two tests, you would like to determine whether the microchips should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model.\n",
+ "\n",
+ "First, we load the data from a CSV file:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load Data\n",
+ "# The first two columns contains the X values and the third column\n",
+ "# contains the label (y).\n",
+ "data = np.loadtxt(os.path.join('Data', 'ex2data2.txt'), delimiter=',')\n",
+ "X = data[:, :2]\n",
+ "y = data[:, 2]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.1 Visualize the data\n",
+ "\n",
+ "Similar to the previous parts of this exercise, `plotData` is used to generate a figure, where the axes are the two test scores, and the positive (y = 1, accepted) and negative (y = 0, rejected) examples are shown with\n",
+ "different markers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plotData(X, y)\n",
+ "# Labels and Legend\n",
+ "pyplot.xlabel('Microchip Test 1')\n",
+ "pyplot.ylabel('Microchip Test 2')\n",
+ "\n",
+ "# Specified in plot order\n",
+ "pyplot.legend(['y = 1', 'y = 0'], loc='upper right')\n",
+ "pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The above figure shows that our dataset cannot be separated into positive and negative examples by a straight-line through the plot. Therefore, a straight-forward application of logistic regression will not perform well on this dataset since logistic regression will only be able to find a linear decision boundary.\n",
+ "\n",
+ "### 2.2 Feature mapping\n",
+ "\n",
+ "One way to fit the data better is to create more features from each data point. In the function `mapFeature` defined in the file `utils.py`, we will map the features into all polynomial terms of $x_1$ and $x_2$ up to the sixth power.\n",
+ "\n",
+ "$$ \\text{mapFeature}(x) = \\begin{bmatrix} 1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 & x_1^3 & \\dots & x_1 x_2^5 & x_2^6 \\end{bmatrix}^T $$\n",
+ "\n",
+ "As a result of this mapping, our vector of two features (the scores on two QA tests) has been transformed into a 28-dimensional vector. A logistic regression classifier trained on this higher-dimension feature vector will have a more complex decision boundary and will appear nonlinear when drawn in our 2-dimensional plot.\n",
+ "While the feature mapping allows us to build a more expressive classifier, it also more susceptible to overfitting. In the next parts of the exercise, you will implement regularized logistic regression to fit the data and also see for yourself how regularization can help combat the overfitting problem.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Note that mapFeature also adds a column of ones for us, so the intercept\n",
+ "# term is handled\n",
+ "X = utils.mapFeature(X[:, 0], X[:, 1])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### 2.3 Cost function and gradient\n",
+ "\n",
+ "Now you will implement code to compute the cost function and gradient for regularized logistic regression. Complete the code for the function `costFunctionReg` below to return the cost and gradient.\n",
+ "\n",
+ "Recall that the regularized cost function in logistic regression is\n",
+ "\n",
+ "$$ J(\\theta) = \\frac{1}{m} \\sum_{i=1}^m \\left[ -y^{(i)}\\log \\left( h_\\theta \\left(x^{(i)} \\right) \\right) - \\left( 1 - y^{(i)} \\right) \\log \\left( 1 - h_\\theta \\left( x^{(i)} \\right) \\right) \\right] + \\frac{\\lambda}{2m} \\sum_{j=1}^n \\theta_j^2 $$\n",
+ "\n",
+ "Note that you should not regularize the parameters $\\theta_0$. The gradient of the cost function is a vector where the $j^{th}$ element is defined as follows:\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_0} = \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left(x^{(i)}\\right) - y^{(i)} \\right) x_j^{(i)} \\qquad \\text{for } j =0 $$\n",
+ "\n",
+ "$$ \\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\left( \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta \\left(x^{(i)}\\right) - y^{(i)} \\right) x_j^{(i)} \\right) + \\frac{\\lambda}{m}\\theta_j \\qquad \\text{for } j \\ge 1 $$\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def costFunctionReg(theta, X, y, lambda_):\n",
+ " \"\"\"\n",
+ " Compute cost and gradient for logistic regression with regularization.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " theta : array_like\n",
+ " Logistic regression parameters. A vector with shape (n, ). n is \n",
+ " the number of features including any intercept. If we have mapped\n",
+ " our initial features into polynomial features, then n is the total \n",
+ " number of polynomial features. \n",
+ " \n",
+ " X : array_like\n",
+ " The data set with shape (m x n). m is the number of examples, and\n",
+ " n is the number of features (after feature mapping).\n",
+ " \n",
+ " y : array_like\n",
+ " The data labels. A vector with shape (m, ).\n",
+ " \n",
+ " lambda_ : float\n",
+ " The regularization parameter. \n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " J : float\n",
+ " The computed value for the regularized cost function. \n",
+ " \n",
+ " grad : array_like\n",
+ " A vector of shape (n, ) which is the gradient of the cost\n",
+ " function with respect to theta, at the current values of theta.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Compute the cost `J` of a particular choice of theta.\n",
+ " Compute the partial derivatives and set `grad` to the partial\n",
+ " derivatives of the cost w.r.t. each parameter in theta.\n",
+ " \"\"\"\n",
+ " # Initialize some useful values\n",
+ " m = y.size # number of training examples\n",
+ "\n",
+ " # You need to return the following variables correctly \n",
+ " J = 0\n",
+ " grad = np.zeros(theta.shape)\n",
+ "\n",
+ " # ===================== YOUR CODE HERE ======================\n",
+ " h = sigmoid(X.dot(theta.T))\n",
+ " \n",
+ " temp = theta\n",
+ " temp[0] = 0\n",
+ " \n",
+ " J = (1 / m) * np.sum(-y.dot(np.loh(h)) - (1 - y).dot(np.log(1 - h))) + (lambda_ / (2 * sum(np.square(temp))))\n",
+ " \n",
+ " grad = (1 / m) * (h - y).dot(X)\n",
+ " grad = grad + (lambda_ / m) * temp\n",
+ " \n",
+ " \n",
+ " # =============================================================\n",
+ " return J, grad"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once you are done with the `costFunctionReg`, we call it below using the initial value of $\\theta$ (initialized to all zeros), and also another test case where $\\theta$ is all ones."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(X.shape[1])\n",
+ "\n",
+ "# Set regularization parameter lambda to 1\n",
+ "# DO NOT use `lambda` as a variable name in python\n",
+ "# because it is a python keyword\n",
+ "lambda_ = 1\n",
+ "\n",
+ "# Compute and display initial cost and gradient for regularized logistic\n",
+ "# regression\n",
+ "cost, grad = costFunctionReg(initial_theta, X, y, lambda_)\n",
+ "\n",
+ "print('Cost at initial theta (zeros): {:.3f}'.format(cost))\n",
+ "print('Expected cost (approx) : 0.693\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros) - first five values only:')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5]))\n",
+ "print('Expected gradients (approx) - first five values only:')\n",
+ "print('\\t[0.0085, 0.0188, 0.0001, 0.0503, 0.0115]\\n')\n",
+ "\n",
+ "\n",
+ "# Compute and display cost and gradient\n",
+ "# with all-ones theta and lambda = 10\n",
+ "test_theta = np.ones(X.shape[1])\n",
+ "cost, grad = costFunctionReg(test_theta, X, y, 10)\n",
+ "\n",
+ "print('------------------------------\\n')\n",
+ "print('Cost at test theta : {:.2f}'.format(cost))\n",
+ "print('Expected cost (approx): 3.16\\n')\n",
+ "\n",
+ "print('Gradient at initial theta (zeros) - first five values only:')\n",
+ "print('\\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5]))\n",
+ "print('Expected gradients (approx) - first five values only:')\n",
+ "print('\\t[0.3460, 0.1614, 0.1948, 0.2269, 0.0922]')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "grader[5] = costFunctionReg\n",
+ "grader[6] = costFunctionReg\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2.3.1 Learning parameters using `scipy.optimize.minimize`\n",
+ "\n",
+ "Similar to the previous parts, you will use `optimize.minimize` to learn the optimal parameters $\\theta$. If you have completed the cost and gradient for regularized logistic regression (`costFunctionReg`) correctly, you should be able to step through the next part of to learn the parameters $\\theta$ using `optimize.minimize`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.4 Plotting the decision boundary\n",
+ "\n",
+ "To help you visualize the model learned by this classifier, we have provided the function `plotDecisionBoundary` which plots the (non-linear) decision boundary that separates the positive and negative examples. In `plotDecisionBoundary`, we plot the non-linear decision boundary by computing the classifier’s predictions on an evenly spaced grid and then and draw a contour plot where the predictions change from y = 0 to y = 1. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.5 Optional (ungraded) exercises\n",
+ "\n",
+ "In this part of the exercise, you will get to try out different regularization parameters for the dataset to understand how regularization prevents overfitting.\n",
+ "\n",
+ "Notice the changes in the decision boundary as you vary $\\lambda$. With a small\n",
+ "$\\lambda$, you should find that the classifier gets almost every training example correct, but draws a very complicated boundary, thus overfitting the data. See the following figures for the decision boundaries you should get for different values of $\\lambda$. \n",
+ "\n",
+ "
\n",
+ " Decision boundary with too much regularization\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "This is not a good decision boundary: for example, it predicts that a point at $x = (−0.25, 1.5)$ is accepted $(y = 1)$, which seems to be an incorrect decision given the training set.\n",
+ "With a larger $\\lambda$, you should see a plot that shows an simpler decision boundary which still separates the positives and negatives fairly well. However, if $\\lambda$ is set to too high a value, you will not get a good fit and the decision boundary will not follow the data so well, thus underfitting the data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize fitting parameters\n",
+ "initial_theta = np.zeros(X.shape[1])\n",
+ "\n",
+ "# Set regularization parameter lambda to 1 (you should vary this)\n",
+ "lambda_ = 1\n",
+ "\n",
+ "# set options for optimize.minimize\n",
+ "options= {'maxiter': 100}\n",
+ "\n",
+ "res = optimize.minimize(costFunctionReg,\n",
+ " initial_theta,\n",
+ " (X, y, lambda_),\n",
+ " jac=True,\n",
+ " method='TNC',\n",
+ " options=options)\n",
+ "\n",
+ "# the fun property of OptimizeResult object returns\n",
+ "# the value of costFunction at optimized theta\n",
+ "cost = res.fun\n",
+ "\n",
+ "# the optimized theta is in the x property of the result\n",
+ "theta = res.x\n",
+ "\n",
+ "utils.plotDecisionBoundary(plotData, theta, X, y)\n",
+ "pyplot.xlabel('Microchip Test 1')\n",
+ "pyplot.ylabel('Microchip Test 2')\n",
+ "pyplot.legend(['y = 1', 'y = 0'])\n",
+ "pyplot.grid(False)\n",
+ "pyplot.title('lambda = %0.2f' % lambda_)\n",
+ "\n",
+ "# Compute accuracy on our training set\n",
+ "p = predict(theta, X)\n",
+ "\n",
+ "print('Train Accuracy: %.1f %%' % (np.mean(p == y) * 100))\n",
+ "print('Expected accuracy (with lambda = 1): 83.1 % (approx)\\n')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*You do not need to submit any solutions for these optional (ungraded) exercises.*"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/exercise1.ipynb b/exercise1.ipynb
new file mode 100644
index 000000000..7e73c7714
--- /dev/null
+++ b/exercise1.ipynb
@@ -0,0 +1,1490 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Programming Exercise 1: Linear Regression\n",
+ "\n",
+ "## Introduction\n",
+ "\n",
+ "In this exercise, you will implement linear regression and get to see it work on data. Before starting on this programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.\n",
+ "\n",
+ "All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).\n",
+ "\n",
+ "Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, and [`matplotlib`](https://matplotlib.org/) for plotting.\n",
+ "\n",
+ "You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# used for manipulating directory paths\n",
+ "import os\n",
+ "\n",
+ "# Scientific and vector computation for python\n",
+ "import numpy as np\n",
+ "\n",
+ "# Plotting library\n",
+ "from matplotlib import pyplot\n",
+ "from mpl_toolkits.mplot3d import Axes3D # needed to plot 3-D surfaces\n",
+ "\n",
+ "# library written for this exercise providing additional functions for assignment submission, and others\n",
+ "import utils \n",
+ "\n",
+ "# define the submission/grader object for this exercise\n",
+ "grader = utils.Grader()\n",
+ "\n",
+ "# tells matplotlib to embed plots within the notebook\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Submission and Grading\n",
+ "\n",
+ "After completing each part of the assignment, be sure to submit your solutions to the grader.\n",
+ "\n",
+ "For this programming exercise, you are only required to complete the first part of the exercise to implement linear regression with one variable. The second part of the exercise, which is optional, covers linear regression with multiple variables. The following is a breakdown of how each part of this exercise is scored.\n",
+ "\n",
+ "**Required Exercises**\n",
+ "\n",
+ "| Section | Part |Submitted Function | Points \n",
+ "|---------|:- |:- | :-: \n",
+ "| 1 | [Warm up exercise](#section1) | [`warmUpExercise`](#warmUpExercise) | 10 \n",
+ "| 2 | [Compute cost for one variable](#section2) | [`computeCost`](#computeCost) | 20 \n",
+ "| 3 | [Gradient descent for one variable](#section3) | [`gradientDescent`](#gradientDescent) | 20 \n",
+ "| 4 | [Feature normalization](#section4) | [`featureNormalize`](#featureNormalize) | 10 |\n",
+ "| 5 | [Compute cost for multiple variables](#section5) | [`computeCostMulti`](#computeCostMulti) | 20 |\n",
+ "| 6 | [Gradient descent for multiple variables](#section5) | [`gradientDescentMulti`](#gradientDescentMulti) |10 |\n",
+ "| 7 | [Normal Equations](#section7) | [`normalEqn`](#normalEqn) | 10 |\n",
+ "| | Total Points | | 100 \n",
+ "\n",
+ "You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "
\n",
+ "At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once. They must also be re-executed everytime the submitted function is updated.\n",
+ "
\n",
+ "\n",
+ "\n",
+ "## Debugging\n",
+ "\n",
+ "Here are some things to keep in mind throughout this exercise:\n",
+ "\n",
+ "- Python array indices start from zero, not one (contrary to OCTAVE/MATLAB). \n",
+ "\n",
+ "- There is an important distinction between python arrays (called `list` or `tuple`) and `numpy` arrays. You should use `numpy` arrays in all your computations. Vector/matrix operations work only with `numpy` arrays. Python lists do not support vector operations (you need to use for loops).\n",
+ "\n",
+ "- If you are seeing many errors at runtime, inspect your matrix operations to make sure that you are adding and multiplying matrices of compatible dimensions. Printing the dimensions of `numpy` arrays using the `shape` property will help you debug.\n",
+ "\n",
+ "- By default, `numpy` interprets math operators to be element-wise operators. If you want to do matrix multiplication, you need to use the `dot` function in `numpy`. For, example if `A` and `B` are two `numpy` matrices, then the matrix operation AB is `np.dot(A, B)`. Note that for 2-dimensional matrices or vectors (1-dimensional), this is also equivalent to `A@B` (requires python >= 3.5)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## 1 Simple python and `numpy` function\n",
+ "\n",
+ "The first part of this assignment gives you practice with python and `numpy` syntax and the homework submission process. In the next cell, you will find the outline of a `python` function. Modify it to return a 5 x 5 identity matrix by filling in the following code:\n",
+ "\n",
+ "```python\n",
+ "A = np.eye(5)\n",
+ "```\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def warmUpExercise():\n",
+ " \"\"\"\n",
+ " Example function in Python which computes the identity matrix.\n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " A : array_like\n",
+ " The 5x5 identity matrix.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Return the 5x5 identity matrix.\n",
+ " \"\"\" \n",
+ " # ======== YOUR CODE HERE ======\n",
+ " A = np.eye(5) # modify this line\n",
+ " \n",
+ " # ==============================\n",
+ " return A"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The previous cell only defines the function `warmUpExercise`. We can now run it by executing the following cell to see its output. You should see output similar to the following:\n",
+ "\n",
+ "```python\n",
+ "array([[ 1., 0., 0., 0., 0.],\n",
+ " [ 0., 1., 0., 0., 0.],\n",
+ " [ 0., 0., 1., 0., 0.],\n",
+ " [ 0., 0., 0., 1., 0.],\n",
+ " [ 0., 0., 0., 0., 1.]])\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([[1., 0., 0., 0., 0.],\n",
+ " [0., 1., 0., 0., 0.],\n",
+ " [0., 0., 1., 0., 0.],\n",
+ " [0., 0., 0., 1., 0.],\n",
+ " [0., 0., 0., 0., 1.]])"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "warmUpExercise()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.1 Submitting solutions\n",
+ "\n",
+ "After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the grader object, and then sending your function to Coursera for grading. \n",
+ "\n",
+ "The grader will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.\n",
+ "\n",
+ "Execute the next cell to grade your solution to the first part of this exercise.\n",
+ "\n",
+ "*You should now submit your solutions.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Submitting Solutions | Programming Exercise linear-regression\n",
+ "\n",
+ "Use token from last successful submission (rustagi@iitg.ac.in)? (Y/n): Y\n",
+ " Part Name | Score | Feedback\n",
+ " --------- | ----- | --------\n",
+ " Warm up exercise | 10 / 10 | Nice work!\n",
+ " Computing Cost (for one variable) | 0 / 40 | \n",
+ " Gradient Descent (for one variable) | 0 / 50 | \n",
+ " Feature Normalization | 0 / 0 | \n",
+ " Computing Cost (for multiple variables) | 0 / 0 | \n",
+ " Gradient Descent (for multiple variables) | 0 / 0 | \n",
+ " Normal Equations | 0 / 0 | \n",
+ " --------------------------------\n",
+ " | 10 / 100 | \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# appends the implemented function in part 1 to the grader object\n",
+ "grader[1] = warmUpExercise\n",
+ "\n",
+ "# send the added functions to coursera grader for getting a grade on this part\n",
+ "grader.grade()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2 Linear regression with one variable\n",
+ "\n",
+ "Now you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities. You would like to use this data to help you select which city to expand to next. \n",
+ "\n",
+ "The file `Data/ex1data1.txt` contains the dataset for our linear regression problem. The first column is the population of a city (in 10,000s) and the second column is the profit of a food truck in that city (in $10,000s). A negative value for profit indicates a loss. \n",
+ "\n",
+ "We provide you with the code needed to load this data. The dataset is loaded from the data file into the variables `x` and `y`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Read comma separated data\n",
+ "data = np.loadtxt(os.path.join('Data', 'ex1data1.txt'), delimiter=',')\n",
+ "X, y = data[:, 0], data[:, 1]\n",
+ "\n",
+ "m = y.size # number of training examples"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.1 Plotting the Data\n",
+ "\n",
+ "Before starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). Many other problems that you will encounter in real life are multi-dimensional and cannot be plotted on a 2-d plot. There are many plotting libraries in python (see this [blog post](https://blog.modeanalytics.com/python-data-visualization-libraries/) for a good summary of the most popular ones). \n",
+ "\n",
+ "In this course, we will be exclusively using `matplotlib` to do all our plotting. `matplotlib` is one of the most popular scientific plotting libraries in python and has extensive tools and functions to make beautiful plots. `pyplot` is a module within `matplotlib` which provides a simplified interface to `matplotlib`'s most common plotting tasks, mimicking MATLAB's plotting interface.\n",
+ "\n",
+ "
\n",
+ "You might have noticed that we have imported the `pyplot` module at the beginning of this exercise using the command `from matplotlib import pyplot`. This is rather uncommon, and if you look at python code elsewhere or in the `matplotlib` tutorials, you will see that the module is named `plt`. This is used by module renaming by using the import command `import matplotlib.pyplot as plt`. We will not using the short name of `pyplot` module in this class exercises, but you should be aware of this deviation from norm.\n",
+ "
\n",
+ "\n",
+ "\n",
+ "In the following part, your first job is to complete the `plotData` function below. Modify the function and fill in the following code:\n",
+ "\n",
+ "```python\n",
+ " pyplot.plot(x, y, 'ro', ms=10, mec='k')\n",
+ " pyplot.ylabel('Profit in $10,000')\n",
+ " pyplot.xlabel('Population of City in 10,000s')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def plotData(X, y):\n",
+ " \"\"\"\n",
+ " Plots the data points x and y into a new figure. Plots the data \n",
+ " points and gives the figure axes labels of population and profit.\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " x : array_like\n",
+ " Data point values for x-axis.\n",
+ "\n",
+ " y : array_like\n",
+ " Data point values for y-axis. Note x and y should have the same size.\n",
+ " \n",
+ " Instructions\n",
+ " ------------\n",
+ " Plot the training data into a figure using the \"figure\" and \"plot\"\n",
+ " functions. Set the axes labels using the \"xlabel\" and \"ylabel\" functions.\n",
+ " Assume the population and revenue data have been passed in as the x\n",
+ " and y arguments of this function. \n",
+ " \n",
+ " Hint\n",
+ " ----\n",
+ " You can use the 'ro' option with plot to have the markers\n",
+ " appear as red circles. Furthermore, you can make the markers larger by\n",
+ " using plot(..., 'ro', ms=10), where `ms` refers to marker size. You \n",
+ " can also set the marker edge color using the `mec` property.\n",
+ " \"\"\"\n",
+ " fig = pyplot.figure() # open a new figure\n",
+ " \n",
+ " # ====================== YOUR CODE HERE ======================= \n",
+ " pyplot.plot(X, y, 'ro', ms=10, mec='k')\n",
+ " pyplot.ylabel('Profit in $10,000')\n",
+ " pyplot.xlabel('Population of City in 10,000s')\n",
+ "\n",
+ " # =============================================================\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now run the defined function with the loaded data to visualize the data. The end result should look like the following figure:\n",
+ "\n",
+ "![](Figures/dataset1.png)\n",
+ "\n",
+ "Execute the next cell to visualize the data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYAAAAEHCAYAAACncpHfAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3de5wcVZ338c9vZhqYcdIhMElkUcCdR10VI2jUILoPrvsooxhWjBcChEtCIBd2MwZNgrvP4uNrQdTA7hq8QMYLCRHciBrZzLoYb+yG4AaUAQRl2kVEuSTqhonhFTrh9/xR1UlPp7ureqaru6f7+3696tU91VVdpzud+tU5dc7vmLsjIiKtp63eBRARkfpQABARaVEKACIiLUoBQESkRSkAiIi0KAUAEZEW1ZHUG5vZi4GbgBcCzwM3uPs/mdmVwMXAjnDTK9x9c7n36unp8RNOOCGpooqINKV77rlnp7tPLfV6YgEA2Acsd/d7zWwScI+Z3RG+dp27fzruG51wwgls3749kUKKiDQrM/tVudcTCwDu/gTwRPh8xMweAo5N6ngiIlKZmtwDMLMTgJOBu8NVS81syMy+aGZTalEGEREZLfEAYGbdwNeBZe7+DPA5oBc4iaCGsLrEfgvNbLuZbd+xY0exTUREZBwSDQBmliI4+d/s7rcBuPtT7r7f3Z8HbgTeUGxfd7/B3We6+8ypU0vewxARaTqZTIb+xYuZnk7T3tbG9HSa/sWLyWQyVT1OYgHAzAwYAB5y92vz1h+Tt9l7gAeSKoOIyEQzODjIrBkz6Fy7lq0jI+x1Z+vICJ1r1zJrxgwGBwerdixLKhuomb0ZuBO4n6AbKMAVwNkEzT8OPApcEt4wLmnmzJmuXkAi0uwymQyzZsxg0549nFLk9buA2V1dbBsaore3N/L9zOwed59Z6vUkewH9B2BFXirb519EpFWtWb2ai7PZoid/gFOABdks1193HdeuWTPu42kksIhIg9iwfj3zs9my2yzIZtmwbl1VjqcAICLSIHbu3s3xEdscF25XDQoAIiINoqe7m7JDd4HHwu2qQQFARKRBzD33XAZSqbLbrE2lmHveeVU5ngKAiEiDWLp8OTemUtxV4vW7CALAkv7+qhxPAUBEpEH09vZy08aNzO7qYlUqRQbIAhlgVSrF7K4ubtq4MVYX0DgUAEREGkhfXx/bhobYu3Ahp6bTdLa1cWo6zd6FC9k2NERfX1/VjpXYQLBq0kAwEYFgoNSa1avZsH49O3fvpqe7m7nnnsvS5curdlXcTKIGgqkGICITQi1TJLQK1QBEpOFVO0VCq1ANQEQmvEpSJEh8CgAi0vBqnSKhVSgAiEjDq3WKhFahACAiDa/WKRJahQKAiDS8WqRIqNUsXI1EAUBEGl7SKRJatYupAoCINLwkUyRkMhnmzZnDpj17uCqbpZdgpqxe4Kpslk179jBvzpymrAkoAIjIhJBUioRW7mKqgWAi0tKmp9NsHRmhXN0hA5yaTvPkrl21KlZVaCCYiEgZhV1MM0A/MB1oDx//GdgxMlKH0iVLAUBEWlp+F9NBYBbQCWwF9oaPXcAR7k13M1gBQERaWq6LaQaYB2wCroJRN4OvBr4LTXczWAFARFparovp3wIXQ0vdDFYAEJGWluti+i1gfsS2zZZvSAFARFpeX18fe81aLt+QAoCICK2Zb0gBQESE2uQbajQKACIiJJ9vqBElFgDM7MVm9n0ze8jMHjSzvwnXH2Vmd5jZI+HjlKTKICISV5L5hhpVkjWAfcByd38FwdiKJWb2SmAlsMXdXwpsCf8WEam7pPINNaqa5QIys28Ba8LlNHd/wsyOAX7g7i8vt69yAYmIVK4hcgGZ2QnAycDdwHR3fwIgfJxWizKIiMhoiQcAM+sGvg4sc/dnKthvoZltN7PtO3bsSK6AIiItKtEAYGYpgpP/ze5+W7j6qbDph/Dx6WL7uvsN7j7T3WdOnTo1yWKKiLSkJHsBGTAAPOTu1+a9tAk4P3x+PvCtpMogIiKldST43qcC5wH3m9lPw3VXAJ8AvmZm8wkG1r0vwTKIiEgJiQUAd/8PwEq8/LakjisiIvFoJLCISItSABARaVEKACIiLUoBQESkRSkAiIi0KAUAEZEqymQy9C9ezPR0mva2Nqan0/QvXtyQk8krAIiIVMng4CCzZsygc+1ato6MsNedrSMjdK5dy6wZMxgcHKx3EUepWTbQ8VA2UBFpdJlMhlkzZrBpzx5OKfL6XcDsri62DQ3VbE6BhsgGOpFNpOqciNTPmtWruTibLXryBzgFWJDNcv1119WyWGUpAJQx0apzIlI/G9avZ342W3abBdksG9atq1GJoqkJqIRGrM6JSONqb2tjr3vZ/DpZoLOtjX3799ekTGoCGqOJWJ0Tkfrp6e7mVxHbPBZu1ygUAEqYiNU5kWbXyPfk5p57LgOpVNlt1qZSzD3vvBqVKJoCQAk7d+/m+Ihtjgu3E5HkNfo9uaXLl3NjKsVdJV6/iyAALOnvr2WxylIAKGEiVudEmlUmk2HenDls2rOHq7JZegly2fcCV2WzbNqzh3lz5tS1JtDb28tNGzcyu6uLVakUGYI2/wywKpVidlcXN23c2FD3DBUASpiI1TmRZjVR7sn19fWxbWiIvQsXcmo6TWdbG6em0+xduJBtQ0P09fXVtXyF1AuoBPUCEmkc09Npto6MUO5/WgY4NZ3myV27alWshqdeQGM0EatzIs1K9+SSoQBQxkSrzok0K92TS4YCQITe3l6uXbOGJ3ftYt/+/Ty5axfXrlmjK3+RGtI9uWQoAIhIw5uIXSwnAgUAEWl4uieXDAUAEZkQdE+u+iK7gZqZAW8AjgUc+C3wY69h/1HNByATUSaTYc3q1WxYv56du3fT093N3HPPZeny5bpSlZoYVzdQM3s78AhwJfBO4F3Ax4BHwtdEpIhGT1sgAhE1ADN7COhz90cL1r8E2Ozur0i2eAHVAGQi0SBCaRTjHQjWATxeZP1vgPJ9skRa1ERJWyASVQNYBbwfuAX4dbj6xcAHga+5+9WJlxDVAGRiUdoCaRTjqgGEJ/hzACO4cHlT+PycqJO/mX3RzJ42swfy1l1pZr8xs5+Gyzsr+TAiE4HSFshEEdkN1N1/5u6fAP4e+Dt3/4S7/yzGe38ZOL3I+uvc/aRw2VxZcUVqYzwTjyhtgUwUUb2AjjOzW8zsaeBu4MfhVf0tZnZCuX3d/UfA76tWUpEaGW8PHqUtkIkiqgZwK/AN4Bh3f6m7vxQ4BvgmwX2BsVhqZkNhE9GUMb6HSCKqMfGI0hbIRBEVAHrc/VZ3PzCFvbvvd/dbgKPHcLzPEfxfOgl4AlhdakMzW2hm281s+44dO8ZwKJHKVaMHj9IWyEQR1QvoFoJmnK8wuhfQ+QTB4f1l3zxoJrrd3U+s5LVC6gUktVLNHjyZTIbrr7uODevWHRwJfN55LOnv18lfaiKqF1BUADgMmA+cSZAKwgjGBWwCBtx9b8TBTyDvJG9mx7j7E+H
zfuCN7v7BqA+hACC10t7Wxl53OspskwU629rYt39/ma1E6i8qAJT7nePuzxE023xuDAf+KnAa0GNmjxP0IjrNzE4iyCn0KHBJpe8rkqSe7m5+FVEDUA8eaRZRvYA6zOwSMxsMb9zeFz6/1MzKdnNw97Pd/Rh3T7n7i9x9wN3Pc/dXu/sMd5+dqw1IdYyn66IE1INHWknUTeB1BDdsP8boZHCvAdYnWzSphJKPVYd68EgriQoAr3X3Re6+zd0fD5dt7r4IOLkWBZRo1ei62IqK1ZjWrF7NNZ/5jHrwSEuICgB/MLP3mdmB7cyszcw+APwh2aJJXEo+VrlyNaYVl13GNZ/5jCYekaYX1QvoBOAa4C84eMI/Evg+sNLd/zvh8gHqBRRFyccqo3TN0irGmwzuUXf/gLtPJUwG5+7TwnU1OflLNCUfq4xqTCKB2HMCu/vv3H0ngJnNNLNjkyuWVELJxyqzYf165mezZbdZkM2yYd26GpVIpD7GOin8ZcDtZnZrNQsjY6Oui5VRjUkkMKYA4O7nu/vJwIIql0fGQF0XK6Mak0ggMgCY2WQz+4CZfcjM+sPnRwK4+0jyRZQoSj5WGdWYRAJRI4HnAfcSpHToAl4AvBW4J3xNGkRfXx/bhobUdTEG1ZhEAlHdQH9OkLDtfwrWTwHudveXJVw+QN1ApfoGBweZN2cOC7JZFmSzHEfQ7LM2lWJtKsVNGzcqaMqEN65uoATZP4tFiOfD12SCa9X8QY1aY2rVfw+pE3cvuRDk/c8QZAO9Ilw+H667oNy+1Vxe97rXucQzPDzsyxYt8mmTJnmbmU+bNMmXLVrkw8PDh2y7efNm7+nq8lWplA+DZ8GHwVelUt7T1eWbN2+uwydoXfr3kGoDtnu5c3y5F4P9mQJ8EFgOXB4+nxK1XzWXiRwAKjkhj/cYkzs7vQv88vDEUe4EMjw87D1dXb4Vgp9BwbIVvKerq6rllNL07yFJGHcAaIRlogaAWlzR5Y5xaUeHHx2eKOKcQJYtWuSrUqmi2+aWlamU9y9ZMu4ySjT9e0gSEgsAwP1j3bfSZSIGgFpc0eUfYxn4qjInj8ITyLRJk3w4Yvth8OnpdLW+EilD/x6ShKgAENUN9KwSy3uBF1b9hkQTqUW+mfxjbCCYu7Oc/PQGGg3bWPTvIfUQ1Q00C9xM8Z5Ac9x9UlIFyzcRu4HWIkNn/jHagb2Un+Mzfy5bZRBtLPr3kCSMtxvoEPBpd7+wcAH+J2LfllaLK7r8Y/RARekNNBq2sejfQ+ohKgAsA54p8dp7qlyWplKLfDP5x5gLDERsn38C0WjYxqJ/D6mHqPkA7nT3x0q8NrHaZGqsFld0+cdYCtwIsU8gyh/UWPTvIXVR7g5xeH9gGvCC8Hkn8FHgE8AxUftWa1EvoHjH2AzeA74y7DHyXPi4oqOjZLfT4eFh71+yxKen097e1ubT02nvX7JE/c3rRP8eUk1UYSDY94DjwuefBL4ErAC+H7VvtZaJGADcD/bRXxmOA8idkFcmMA4gd4yHwC8CnwzeBn50V5dOICItKioARHUDPR/oBU4Ln38A2A48CRxvZvPMbEa1ayXNohb5ZgqPcWJbG/+aTnPRkiX8YniYnX/8I9euWaOmAxE5RFQ30OOB7wDnAZOBq4A5BIngNgLvBXa5e6L90iZiN1ARkXqL6gZarts47v4rM/sn4HYgBcxz98fM7Dhgp5e4QSwiIo0vckYwd/8cQTPQi9z99nD174CzkyyYSC0o/bK0slhzArv7bnffk/f3H71gkhiRiWZwcJBZM2bQuXYtW0dG2OvO1pEROteuZdaMGQwODta7iCKJGtOk8HGY2RfN7GkzeyBv3VFmdoeZPRI+Tknq+CLlZDIZ5s2Zw6Y9e7gqm6WXoD20F7gqm2XTnj3MmzNHNQFpaokFAODLwOkF61YCW9z9pcCW8G+RmqtFsj6RRle2F9C439zsBOB2dz8x/PvnwGnu/oSZHQP8wN1fHvU+6gUk1abka9IKxpsMLvcmZ4XNNrvM7BkzGzGzUjmCypnu7k8AhI/TyhxzoZltN7PtO3bsGMOhREpT+mWR+E1AnwRmu/tkd0+7+yR3TydZMHe/wd1nuvvMqVOnJnkoaUG1SNYn0ujiBoCn3P2hKhzvqbDph/Dx6Sq855io+19rqyRZn34r0qziBoDtZnarmZ2dPzPYGI63CTg/fH4+8K0xvMe4qfufxE2/fOJrX6vfijSvcomCcgtBArjC5YsR+3wVeIIgq+3jBDMWHk3Q++eR8PGoOMevZjK4WmTplIkhKlnfwMCAfisyoTGeZHB5QeKQGcHc/aKIfc5292PcPeXuL3L3AXf/nbu/zd1fGj7+fgwxa1zU/U9yopL13b99u34r0tSiksF9xN0/aWafoci8wO7+10kWLqea3UDV/U/i0m9FJrpxJYMDcjd+m6YTvrr/SVz6rUizi8oG+u3w8Su1KU7yerq7+VXEVZ26/wnotyLNL8lUEA2pFnP1SnPQb0WaXcsFgLjd/3KTp0vjqHV/fP1WpNnFTQVxapx1E0Fvby83bdzI7K4uVqVSZAj6qWaAVakUs7u6uGnjRk2h2GDqMXZDvxVpeuX6iOYW4N4465JakpgUfnh42PuXLPHp6bS3t7X59HRak6eP0fDwsC9btMinTZrkbWY+bdIkX7ZoUdW+y3qP3dBvRSYqIsYBRHUDPQV4E7AMyO/snAbe4+6vSSwy5VE20MY1ODjIvDlzuDibZX42y/HAr4CBVIobUylu2riRvr6+cR2jf/FiOteu5apstuQ2q1Ip9i5cyLVr1ozrWCLNZLzZQA8Dugl6C03KW54hmBxeWkCptvfvfe97NZlUZcP69cwvc/KHYEDWhnXrxnUckVYTaz4AMzve3aOSJyZGNYD6KXeFf707p7tz6/79JfevxpV5e1sbe93L9lnOAp1tbewrUxaRVjOuGoCZ/WP4dI2ZbSpcqlrSBjERMz8mVeaoaRP/bd8+tuzfT7mjVOPKXKmbRZIR1QR0U/j4aWB1kaWpTMQsoUmWOVbeJOD6Mu9RaqRsJUFL/fFFElLuDjHB/L0A15TbLukliV5Aherd02Qski7ztEmTfLjEe+eWYfDpUa+n06PeN5eFc1WYhTMbbrcqzMK5efPmmn5OkWZFRC+gqADwM+B/E+QEOhl4bf5Sbt9qLrUIAMsWLfJVqVTZk93KVMr7lyxJ5Phj6Uo53jJHHbPNzLMRAeA58PbwBL4MfBp4W/i4DPySjo5Rxx/ryTwqdXNh0BCR8QeAOcAgMAJ8v2D5Xrl9q7nUIgDEvtotuJqthlJXxCs7Ojzd0eGTOzuLnqDHU+Y4V+Fx378bvAt8efh37r1WhOsHBgYOHHc8QUv98UUqM64AcGAj+Ls42yW11CIAxL7aNTuwTzUGQMW5Ij4a/OGCE/TAwIAfXuSKu/CE/Rx4e1tbxcfs6eryC+bOjTxZfwj8BeE+ca7o6xloRVpNVACIOyHMx81stpl9OlzOqMb9h0bS3dERq6dJd3gzslo3X+PcaL0Y+AKj+9dfNn8+5wFbgb3hYy
cwi6DKll/mwt4xcSfFaTOLzIVzI3BeuE+598pNmqIUyyINpFx0yC3A1QRTOF4ULncAV8fZtxpLLWoAk1MpXxlxZboCfHIqVdWbkmO90foR8P5Sx86rCRRrTqnkKrxU2/vlZj4ZfHKRWke5K3rVAERqhyo1AQ0BbXl/twNDcfatxlKLAGDhibPsSR28zayqN4wrudEaFRQOHDsMDqUCUexjhk1HxdreJ6dSvoWg+amS96r3zXaRVlLNAHBU3t9HNVsAmDZpkg+EJ/mV4Qn2QE+TcP1AeGVaeBVbrAfMheBHd3fHOu5YagDFgkL+9pPDk3+x3jHVuArPBZFpVFYDUJdOkdqJCgBx5wO4GviJmX3ZzL4C3ANcVY0mqEYx99xzGU6l2EbQpn4qQZv6qeHf24BHwsFG+e3YgwTt7p2Mbo+fDjy7e3fkvYBYg5yAuQXrHgN6Smx/HEG3rW1DQ0UTsVVjYFVudO5cYKDsO41+L6VYFmkg5aJDEEAw4MXAMcBs4EzghVH7VXOptAYwlt45lVyZ5q6gh4nRbBRxNRvruEWusleGtYxife+3RFy9V+MqPNeUM9bvQF06RZJHlZqA7omzXVJLJQGg0lGmxfaNGmyUO/ktA18V0fwRpz271HE/Ep5cNxc5qaYJuoeuYnTf+1Vh889fnXHGmI4Zd2BVfhDZTPGms+XgR3d2apCWSJ1UKwBcD7w+zrZJLHEDQDWubONcmW7ZssXT7e3eSfk++Ln276nd3ZE1ksLjHt3V5en2dr+ko2PUSXVFR4cfEZ7ky44d6OyMvJou9lkvOuccv2Du3Fi1p/wgsgX8b8Cnht9JF/hZZ5xR9Sv6pCefEWkm1QoAPwP2EzTVDgH304A3gWvRw2Tz5s1+dGenX27mwwQDtC4MT8gGflRBMHguPCGOpUZSKhi947TT/PIq1DyKfbZKa0+1bMoZT+1OpBVVKwAcX2yJs281lrgBIOk+5sPDw37kYYcduPLONX0UNsPkeg1t5uB9gmr2eKnkc8a9Ym703jmNXj6RRjSuAAAcQTAd5BrgEqCj3PZJLXEDwFjSOVTiHaed5svzTrBxxg0soPiArWJX6nFP1pV8zrhXzI3eP7/RyyfSiMYbAG4F1ocn/28C/1Ru+6SWatcAuqDiK8Xh4WHv5GDTTpwbwB8Bn8ToewOFYwZ6CEYXDwwMxD5ZV/I5myVHT6OXT6QRjTcA3J/3vAO4t9z2cRfg0fA+wk+jCugVBIBlixb5h83KniRWgs8yq/hKcdmiRaNGvcYdAHV03t+lmoyWhyfr1TFP1nGuhi8381lR30XeFXOlo4NrrdHLJ9KIxhsA7i3391iXMAD0xN2+kl5AkVe9jO4nH7fZZdqkSaP648dOgZAXDOI0GZUKKoVNRVHt4V3h54x7xdzoV9iNXj6RRhQVAKJGAr/GzJ4JlxFgRu65mT0TPcystnp7e3mWYLTaKhg9yjRcfxPwFoJsk5Vk9Ny5e/eoUa89ECt76KTw+RqCrJ5jnV5xQTbLF66/nva2Nt508smcetppvLuzs+Ro2meBP48oX37WzUafdrHRyycyIZWLDkktwH8D9xKklFhYYpuFwHZg+3HHHRc74k2bNMm3ENx4nR5egU8P/x7Ou1I8uru7ol4luffNXcXHuQewoqPD0+3tvpUKcuZE1Cby7w1MOeIIP+uMM4p2waz0irnRe9k0evlEGhHV6AZa7QX4k/BxGnAf8Ofltq9kJHCc9vEVHR1+7JQpFfWlX7Zoka/s6DjQjn8JQft+1Akpd3M3TpPRQ1BykpdiwaHwpDeqOYtgbEKpAWqFn8+98addbPTyiTSahgwAowoAVwKXl9umkgAQt308HfeKvMgV8jBBjeJIRk+FWOqENDw8HMwjUOZYm8OAcjmHpnboAX8fB2sx+b2IJoO//sQTS/YiWkHpdBITMUdPo5dPpJE0XAAAXgBMynu+FTi93D6VJoMrdaW4oqPjQG+bSvPY57/virz0DFvA3wDeSdDvPndC2rJly6iby5NTqVE9lPJP5EZ0l80u8Ksp3otoQYz9jyaoYZQKUEqvINJ8GjEA/GnY7HMf8CDw0ah9xjIfwJYtW3zmq17lXXkn2GOnTPFL2tvdqTyPfU6cK9BiKQu2cDB/T2F30L8Or9TLleXDBLWWYif5ZTH2X07QvFRYXqVXEGleDRcAxrKMtQZQeFLLn75wvJk8S101b9mypWQT1OawDIUn8qNiBqMjS7w2nmCmG6sizavlAkC5k1p+s894cvmXu2pOt7f7irCWUWy5EEbdfB4mqKGMZVrIYp8rbnOWu9IriDS7lgsApU5qm2FUKgcnmOIxzaE3cT9E6ekUo66aj464Gi+8Wl9GBROrx3zPuDUADa4SaW5RASDulJATxob165mfzY5alwHmEUxllhvINQisIJjS8A8cnALydcDngZNe/3pe9rKXHfL+a1av5uJstuSArj/Agekii9kB/DPBlJHtwA3AWURPq/hZ4F0lXqt0Wsac/KktS8kfLCYiTaZcdGiUpZIaQLGcMbn2/lyzz9eIbv7pAu9ua/OBgQF3P9jm30X5SWDKXY1vDt83N3NWNnyvh2OWZ0GZq/SxNGepBiDS3Gi1GkBusvJ8G4D5QC9BKogFwIWUT8uwBHjV889z2fz59Pf3H0gZMcTBid87CSaEz5/2fS7BJO6FcrWQ7wJXh2XpIEgp0RGWq1QKi3cDHUccwTe7uriryHv3htv9JbCyoyP2ROtKryDS4spFh0ZZxnIPoLCfff4Ve1Q7fe7Kd2pYW4iTYG5Umoki25fqdZS/PjfArDCFxSUdHd6/ZEnkSNiBgYGKBkmpF5BIc6PVbgIPDw97+vDD/SiKT5jeQ/xeN23hCfrDEduuZPSkL+9rb/d0R8eoE3WpTJ+VNt9UeySs0iuINK+WDABTDj+8KqmSu6g8iVvuhL1ly5ZRJ+pyQSc3MGwF5VNKJEXpFUSaU8sFgDh92z8EPqvISbwwx05nQW2hcJtcs9JDBE025U7YUTdch8EvCoOOTsIiUg1RAaDpbgIX6wZaaDEwBAduqA4S3MztJLi5u5cgT/XicN1NJbbJ3Qh+M8GN3L0LF7JtaIi+vr5Djhl1w7UXmJZKccmSJezbv58nd+3i2jVrDrlxKyJSLRYEicY2c+ZM3759e6xt29va2OtOR5ltsgSz3R8FvBfYCHyb4r2C7gLeDhwesc3pHR3c+/DDJU/YmUyGWTNmsGnPnpLvMburi21DQzrpi0hVmNk97j6z1OtNVwMo1g200GMEAeCzBLPSRHUJfQVwUcQ2i4Drr7uu5DF7e3u5aeNGZnd1lZzFq1hXTRGRpDRdACjV1JIB+glG4L4MsLY2FrS38xBwacR7/jdwScQ2F+/bx4Z168pu09fXx7ahIfYuXMip6TSdbW2cmk6XbToSEUlK0zUBZTIZ3vjqV/PtZ589cMU+SDAI62KCAWHHE8zn+3kzPuvOPwI/A9YDvyeoHewnmKxgHkHqhr0Q2azU2dbGvv37K/14IiKJaLkmoN7eXt781rfSRzA69nsEJ/FNwFUcHIHbC
3zKne8Cy4A9wDaCE/0QQW3BgN8StP/HaVbq6e6u+ucREUlK0wUAgLvuvJPbCE7mZwHnU779finB1X5+cLia4Kbv94C/AD4XcUylTBCRiaYpA8DO3bv5c+Bagqv3RRHbLyTIF1ToFIK8QdMJ8vsUy8NDuP7Gjg6W9PePrcAiInXQlAEgvyfQTsqnZ4Yw5XGJ1xYAtwAjwNuANxDUCnI9eFYCfUD2+ef5xS9+Mc6Si4jUTlMGgPyeQD3EbL8v8dpxBE1Je4H7gdMIsnMeQTCHwHMEg8b+be9e5s2ZQyaTKfo+mUyG/sWLmZ5O097WxvR0mv7Fi0tuLyKStKYMAEuXL+fGVIq7iDlZSrhdMY8BhwHHAmsIuoN+l2AQ2X8SNDP1EjYXZbNFxwIMDg4eSCe9dWSEve5sHRmhc+1aZs2YweDg4CH7iIgkrSkDQP6gq90Es26Va79fS5D/v5gbCe4R5BuIi2gAAA+RSURBVOf//x+CpqHrC7ZdkM0eMhYgk8kwb84cNu3Zw1XZ7KgbzVdls2zas6dszUFEJClNGQDg4KCr5885h90Ek6VczujJVlaG61cRnJAL3UVQe7gsfP0qgu6k5wGPAF8gmNZxOkG30SyHTp8YNYVkuZqDiEiSmjYA5KTTaY7o7OR54E7gJGASMAP4FEEvoY8RjAYunIlrNkEiuPzgcApwAfAbOGR2sDcDkw4/fNTx4ySnK1ZzEBFJWtMGgPx293uefZafAvuA5wmu1nMn7/8iyPr5VYKgcDjBSX4vwcCwYskZFhGkhxjVnEMwbuD5bHZUc44mXheRRtWUAaBYu/tvgIc5dE7e3KCvfwNyGYR+y8Gbu8WU6jZ6CkFOoA+eeeaBIBA3OZ1GEYtIrTVlAMhvd88lgTuT4Mq9XFv8xUA34+s2uggYfvDBA717kpx4XV1LRWQ8mjIA5Nrd8ydxOYLoEcGXEjQTXRGxXbluo8cRDBrL9e5595w5B7qkFnMXQQCodBSxupaKyLiVmy4sqQU4Hfg5MAysjNq+kikh3d3bzPzhgsnW28rMyZs/EXx7OC3j18pN0l5mnuD8+YFXplLev2RJ1SdeHx4e9p6urtgTyYtIa6LRpoQ0s3aCLvR9wCuBs83sldU8Rk93N9cQNOnkmnwqGRG8lKCf/ypG9wz6sBl9HNozKF9+7SDXu6fa8wCoa6mIVEW56JDEQnB++k7e36uAVeX2qbQGsGzRIp9ccJW+DHxVRA1gJXh/uN/U8Pn0vFrBReec41OOOKL8lXfecZ8LJ3ivtqgJ5g/URNLpqh9bRCYOGq0GQJBV4dd5fz8erquapcuX8wyjk8AtJRjVG2dE8HEEE8NcCzwJfDicrH1g/Xpuvu02Znd1HTKorNi4gaR696hrqYhUQz0CgBVZd8i0ZGa20My2m9n2HTt2VHSA3t5epnR2jmry6SU4Of8lwQjgcifv/F4+hTdpc805P3zVq5hJcIP5VIqPG0hqjgB1LRWRaqhHAHgceHHe3y8i6Ho/irvf4O4z3X3m1KlTKz7IvAsuYG3H6Ekc+4D3Az8kOGmXOnnfCLyL0pO19/b2csu3vkVHVxd3EtQSCscNjLV3TxxJdi0VkRZSrn0oiYVg/NUvgZcQJNq8D3hVuX0qvQfgHvSUmXL44Ye01w8X9A4q1o7fBX50d7f3L1lStidNtXv3VPLZ1AtIRKLQaPcA3H0fQZP8d4CHgK+5+4NJHGs/cAaje/MAvJWgKWg5BU1B4RX/xs2b2TkywrVr1oy68i9U7d49ceVnO12VShX9DIW1FhGRQhYEicY2c+ZM3759e0X79C9eTOfatczPZrmeYMrHnQRt+3MJAsMVZjycSrF73z56uruZe955LOnvnzAnzkwmw/XXXceGdevYuXv3hPwMIpIcM7vH3WeWfL1ZA8D0dJqtIyMl++tDcMX8xq4udv7xj+Mqn4hII4oKAE2ZCgLid5X8w549FefOUQ4eEWkGTRsA4naVnAQVjZhVDh4RaRZNGwDmnnsun4/YZi3wXog9GYumdxSRZtK0AWDp8uV8luiRvx8m/ohZ5eARkWbStAGgt7eXVGcn7+bQpG75I39TxB8xq+kdRaSZNG0AALjwggt4X0cHeyk98reSEbPKwSMizaSpA8DS5cvZeNhhvI8gXcM+RqdtqDRdg3LwiEgzaeoAUO0Rs8rBIyLNpKkDAFQ3XcPS5csTmd5RRKQemjIAFA7UetPJJ+PPP89/3nsv+/bv58lduyLz/BSjHDwi0kyaLgAkPVCrXgngRESqralyAWUyGWbNmMGmPXuK9tW/C5jd1cW2oSFdpYtI02upXEAaqCUiEl9TBQAN1BIRia+pAoAGaomIxNdUAUADtURE4muqAKCBWiIi8TVVANBALRGR+JoqAGiglohIfE0VAEADtURE4mqqgWAiInJQSw0EExGR+BQARERalAKAiEiLmhD3AMxsB0SO8SqlB9hZxeIkTeVN3kQrs8qbrIlWXohf5uPdfWqpFydEABgPM9te7iZIo1F5kzfRyqzyJmuilReqV2Y1AYmItCgFABGRFtUKAeCGehegQipv8iZamVXeZE208kKVytz09wBERKS4VqgBiIhIEU0TAMzsUTO738x+amaH5I2wwD+b2bCZDZnZa+tRzrAsLw/LmVueMbNlBducZma78rb5vzUu4xfN7GkzeyBv3VFmdoeZPRI+Timx7/nhNo+Y2fl1LvOnzOzh8N/8G2Z2ZIl9y/5+aljeK83sN3n/7u8sse/pZvbz8Pe8so7lvTWvrI+a2U9L7FuP7/fFZvZ9M3vIzB40s78J1zfk77hMeZP7Dbt7UyzAo0BPmdffCQwCBswC7q53mcNytQNPEvTXzV9/GnB7Hcv158BrgQfy1n0SWBk+XwlcU2S/o4Bfho9TwudT6ljmtwMd4fNripU5zu+nhuW9Erg8xm8mA/wpcBhwH/DKepS34PXVwP9toO/3GOC14fNJwC+AVzbq77hMeRP7DTdNDSCGM4GbPLANONLMjql3oYC3ARl3H+tAt0S4+4+A3xesPhP4Svj8K8BfFdn1HcAd7v57d/8DcAdwemIFzVOszO7+7+6+L/xzG/CiWpQljhLfcRxvAIbd/Zfu/hxwC8G/TaLKldfMDHg/8NWkyxGXuz/h7veGz0eAh4BjadDfcanyJvkbbqYA4MC/m9k9ZrawyOvHAr/O+/vxcF29fZDS/2lOMbP7zGzQzF5Vy0KVMN3dn4DgxwpMK7JNo37PABcR1AKLifr91NLSsLr/xRLNE434Hb8FeMrdHynxel2/XzM7ATgZuJsJ8DsuKG++qv6GO8ZawAZ0qrv/1symAXeY2cPhFUuOFdmnrl2gzOwwYDawqsjL9xI0C+0O24G/Cby0luUbo4b7ngHM7KPAPuDmEptE/X5q5XPAxwm+s48TNKtcVLBNI37HZ1P+6r9u36+ZdQNfB5a5+zNBZSV6tyLravIdF5Y3b33Vf8NNUwNw99+Gj08D3yCoJud7HHhx3t8vAn5bm9KV1Afc6+5PFb7g7s+4++7w+WYgZWY9tS5ggadyzWbh49NF
tmm47zm8gXcGcI6HjaWFYvx+asLdn3L3/e7+PHBjiXI01HdsZh3AWcCtpbap1/drZimCk+nN7n5buLphf8clypvYb7gpAoCZvcDMJuWeE9w0eaBgs03APAvMAnblqoF1VPKqycxeGLarYmZvIPi3+l0Ny1bMJiDXG+J84FtFtvkO8HYzmxI2X7w9XFcXZnY6sAKY7e57SmwT5/dTEwX3pd5Tohz/BbzUzF4S1iI/SPBvUy9/CTzs7o8Xe7Fe32/4/2cAeMjdr817qSF/x6XKm+hvOMm72rVaCHpD3BcuDwIfDddfClwaPjfgeoLeE/cDM+tc5i6CE/rkvHX55V0afpb7CG78vKnG5fsq8ATBtMqPA/OBo4EtwCPh41HhtjOBtXn7XgQMh8uFdS7zMEFb7k/D5fPhtn8CbC73+6lTedeFv88hghPVMYXlDf9+J0EvkUw9yxuu/3Lud5u3bSN8v28maLYZyvv3f2ej/o7LlDex37BGAouItKimaAISEZHKKQCIiLQoBQARkRalACAi0qIUAEREWpQCgMRiZvvDLIMPmNm/mFlXld//AjNbE7HNaWb2pry/LzWzedUsR5FjfirMzPipIq/1mdn2MHvjw2b26cJyhZ/rTyo85loze2UF2/+Zmd1lZnvN7PKC1yKzhlqJ7JjhmJmiGXStThlfpcpq0R9Xy8RfgN15z28GPlTl978AWBOxzZVEZMpM4HM/AxxeZP2JBH3w/yz8uwNYXGS7H5DwmBOCXDavB/4h//shZtZQSmTHpEQGXeqY8VVLdRfVAGQs7gT+F4CZfSisFTxg4ZwGZnZCeEX8lfDKcWOuxmBBzvKe8PlMM/tB4Zub2bvN7G4z+4mZfdfMpluQHOtSoD+sibzFgtz5l4f7nGRm2+xgzvTcVewPzOwaM/uxmf3CzN5S5HgWXuk/YEE+9Q+E6zcBLwDuzq3L8xHgH9z9YQB33+funw33u9LMLjezOQSDi24Oy/wuM/tG3nH/j5ndVvC+uTLPDJ/vNrN/sCAp4DYzm164vbs/7e7/RTBAK1/crKGlsmOWyqBbNFOmmbWb2Zfzvsf+IseSBqIAIBWxIO9LH3C/mb0OuBB4I8EV4sVmdnK46cuBG9x9BsFV9OIKDvMfwCx3P5ngpPURd38U+Dxwnbuf5O53FuxzE7AiPN79wN/nvdbh7m8AlhWszzkLOAl4DUFag0+Z2THuPht4NjxeYZ6bE4F7yn0Id98IbCfI33ISsBl4hZlNDTe5EPhSufcgCEDb3P01wI+AiyO2zxc3o2Wp7Jil9i+1/iSC9MUnuvurif5sUmcKABJXpwWzPW0HHiPIWfJm4Bvu/kcPEtfdRpAWGODX7v6f4fP14bZxvQj4jpndD3wYKJsK28wmA0e6+w/DVV8hmLwkJ3eVfQ9wQpG3eDPwVQ+SsD0F/JCgSaWq3N0JUj2ca8GsTqdQOrVvznPA7eHzUuUvZbwZLUvtX2r9L4E/NbPPWJC/5pki20kDUQCQuHJXwie5+2Vhk0K5vLqFJ5rc3/s4+Ls7osS+nyG4H/Bq4JIy28W1N3zcT/EU6LHyAxd4EHjdGPb7EnAuQSLAf/GDE32Ukg0DB5QufylxM1qWyo5Zav+i68PmoNcQ3PdYAqytoKxSBwoAMh4/Av7KzLosyED4HoL7AwDHmdkp4fOzCZp1IJi2LnfifG+J950M/CZ8nt/DZIRgqrxR3H0X8Ie89v3zCK7iK/kcHwjbsKcS1B5+HLHPp4ArzOxlAGbWZmYfKrLdqDJ7kLL3t8DfEiRRS1LJrKFmdrWZvSfcrlR2zFIZdItmygzv7bS5+9eBvyOYPlIaWDNNCCM15u73mtmXOXiyXOvuPwlv2D4EnG9mXyDIuvi5cJuPAQNmdgWHznaUcyXwL2b2G4JMqC8J138b2GhmZwKXFexzPvD58GbzLwna1+P6BkFzzH0ENZWPuPuT5XZw96HwpvdXw2M68K9FNv1yWK5ngVPc/VmCXlRT3f1nFZSxJDN7IUHTXBp4PizXKz2Y/GQpwQm7Hfiiuz8Y7vZqDqaQ/gTwNTObT9C8975w/WYOZqPcQ/iduvvvzezjBAEG4P+F614DfMnMcheWxSY6kgaibKBSdWEAuN3dT6xzURqSBeMdfuLuA3Usw3fc/R31Or40BtUARGrIzO4B/ggsr2c5dPIXUA1ARKRl6SawiEiLUgAQEWlRCgAiIi1KAUBEpEUpAIiItCgFABGRFvX/AYH/yuNsMedbAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ "