diff --git a/01_spatial_intro_Ashlesha.Rmd b/01_spatial_intro_Ashlesha.Rmd
new file mode 100644
index 0000000..fe6d537
--- /dev/null
+++ b/01_spatial_intro_Ashlesha.Rmd
@@ -0,0 +1,413 @@
+---
+title: "Intro to Spatial Data in R"
+author: "Caitlin Mothes"
+date: "`r Sys.Date()`"
+output: github_document
+---
+
+```{r setup, include= FALSE}
+knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE, eval = FALSE)
+```
+
+## 1. Spatial Data Formats
+
+**Vector Data**
+
+- Locations (points)
+
+ - Coordinates, address, country, city
+
+- Shapes (lines or polygons)
+
+ - Political boundaries, roads, building footprints, water bodies
+
+**Raster Data**
+
+- Images (matrix of cells organized by rows and columns)
+
+ - Satellite imagery, climate, landcover, elevation
+
+ ![](spatial_formats.png){width="50%"}
+
+## 2. Import and manipulate spatial data
+
+There are a few new R packages we will need to work with spatial data, listed below with hyperlinks and described in more detail throughout this and other lessons.
+
+- `sf` : working with vector data
+
+- `terra` : working with raster data
+
+- `tmap` : visualizing spatial data (i.e., making maps!)
+
+- `tigris` : import vector data from the U.S. Census database (i.e., political boundaries, roads, etc.)
+
+- `elevatr` : import elevation data
+
+- `rgbif` (optional) : import species occurrence data from the GBIF database
+
+- `soilDB` (optional) : import snow depth data from SNOTEL sites across the U.S.
+
+We've already added these packages to a 'setup.R' script in this project directory, so you can run `source("setup.R")` at the beginning of each lesson; otherwise you will need to install each new package manually with `install.packages()`.
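+
+If you choose the manual route, a one-time install of the core packages listed above looks like this:
+
+```{r}
+# one-time installation; afterwards source("setup.R") (or library() calls) will load them
+install.packages(c("sf", "terra", "tmap", "tigris", "elevatr"))
+```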
+
+```{r}
+source("setup.R")
+```
+
+### 2.1 Vector Data
+
+### [`tigris`](https://github.com/walkerke/tigris)
+
+### **Polygons**
+
+All the data we are working with in this lesson is confined to the state of Colorado. Let's start by pulling in political boundaries for Colorado counties with the `tigris` package, which returns an `sf` object of polygons, one for each county.
+
+```{r}
+# download county shapefile for the state of Colorado
+co_counties <- counties(state = "CO")
+```
+
+The `tigris` package is one of many data retrieval R packages that use API calls to pull data from various online/open databases directly into your R session, with no separate download step. When you close your R session these 'temp' files are erased, so they don't take up any of your local storage.
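+
+If you would rather keep these downloads between sessions, `tigris` also has a caching option you can turn on:
+
+```{r}
+# cache tigris downloads on disk so repeat calls in later sessions are faster
+options(tigris_use_cache = TRUE)
+```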
+
+At the end of this lesson you will learn how to save shapefiles to your computer if you do in fact want to store and use them in the future (e.g., you manipulated a data set quite a bit and don't want to re-run the entire process every new R session).
+
+### **Lines**
+
+`tigris` has many other data sets in addition to political boundaries. Let's import another one: roads for Larimer County, which returns a polyline dataset of every road in the county.
+
+```{r}
+co_roads <- roads(state = "CO", county = "Larimer")
+```
+
+### [`tmap`](https://r-tmap.github.io/tmap/)
+
+Throughout this lesson we will be using the `tmap` package to produce quick static or interactive maps.
+
+`tmap` allows for both static ("plot" mode) and interactive ("view" mode) mapping, which you set with the function `tmap_mode()`. For today we will be making quick interactive plots. Once you set the mode with `tmap_mode()`, every `tmap` plot call after that renders in that mode.
+
+```{r}
+tmap_mode("view")
+```
+
+Let's view our Colorado counties and Larimer County roads. To make a "quick thematic map" in `tmap` you can use the `qtm()` function. You can also use `tm_shape()` plus the type of spatial layer (e.g., `tm_polygons()`) to add each layer to the map. Both methods below produce the exact same map, so you may wonder why you would ever use the `tm_shape()` method since it's more code. The answer may be rarely, but there are cases where `tm_shape()` lets you customize your maps further, which we will see later on.
+
+Also notice that `tmap` uses `+` signs to tack on additional maps/elements, similar to `ggplot2` code (i.e., no pipe!).
+
+*Note: map rendering may take a few seconds because the roads layer is pretty large and detailed.*
+
+```{r}
+# Option 1: Using qtm()
+qtm(co_counties)+
+ qtm(co_roads)
+```
+
+```{r}
+# Option 2: Using tm_shape()
+tm_shape(co_counties)+
+ tm_polygons()+
+tm_shape(co_roads)+
+ tm_lines()
+```
+
+Mess around with this map a little bit. See that you can change the basemap, turn layers on and off, and click on features to see their attributes.
+
+There are a ton of ways to customize these maps (more details on this in the spatial viz lesson!). For example, `co_counties` has an 'ALAND' variable, which represents the total land area of each county. To color by that variable we would use:
+
+```{r}
+qtm(co_counties, fill = "ALAND")
+```
+
+Let's inspect the spatial data sets a little more. What do you see when you run the following line of code?
+
+```{r}
+class(co_counties)
+```
+
+### [`sf`](https://r-spatial.github.io/sf/)
+
+By default, the `tigris` package imports spatial data in `sf` format, which stands for 'simple features'. The `sf` package provides an easy and efficient way to work with vector data: it represents spatial features as a `data.frame` or `tibble` with a **geometry** column, so it also works well with `tidyverse` packages and you can manipulate `sf` objects just like data frames.
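+
+As a quick sketch of that data frame behavior, you can pipe an `sf` object straight into `dplyr` verbs and the geometry column tags along (using the `NAME` and `ALAND` columns that `tigris` returns):
+
+```{r}
+# the three largest counties by land area; the result is still an sf object
+co_counties %>%
+  select(NAME, ALAND) %>%
+  slice_max(ALAND, n = 3)
+```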
+
+For example, we are going to do an exercise centered on the Poudre Canyon Highway, so we want to filter the roads data set down to just those features. Using your investigative geography skills and your interactive map, find the highway, figure out its exact 'FULLNAME' attribute, and use that to `filter()` the data set. Call the new roads feature `poudre_hwy`.
+
+```{r}
+poudre_hwy <- co_roads %>%
+ filter(FULLNAME == "Poudre Canyon Hwy")
+
+qtm(poudre_hwy)
+```
+
+### Points
+
+Most often when you are working with points, you start with an Excel file or something similar containing the raw latitude and longitude. When you have point data that is not yet in the `sf` format, you use the `st_as_sf()` function to convert it.
+
+Let's work with a couple of locations along the Poudre highway, making a small data frame of their coordinates:
+
+```{r}
+poudre_points <- data.frame(name = c("Mishawaka", "Rustic", "Blue Lake Trailhead"),
+ long = c(-105.35634, -105.58159, -105.85563),
+ lat = c(40.68752, 40.69687, 40.57960))
+```
+
+Right now, `poudre_points` is just a data frame (run `class(poudre_points)` to check). We need to convert it to a spatial (`sf`) object first in order to map and spatially analyze it.
+
+Within the `st_as_sf()` function we need to specify the longitude and latitude columns in our `poudre_points` data frame and the CRS (Coordinate Reference System). **Note that 'x' (longitude) always goes first, followed by 'y' (latitude).** Otherwise your points will end up on the other side of the world.
+
+```{r}
+poudre_points_sf <- st_as_sf(poudre_points, coords = c("long", "lat"), crs = 4326)
+
+qtm(poudre_hwy)+
+ qtm(poudre_points_sf)
+```
+
+Note the 4-digit number we assign for `crs`. This is an EPSG code tied to a specific CRS called WGS84, one of the most common reference systems coordinates are recorded in (often recognizable because the values are in decimal degrees). It is used by Google Earth, the U.S. Department of Defense, and all GPS satellites (among others). A full list of EPSG codes and coordinate reference systems can be found [here](https://spatialreference.org/ref/epsg/). Note, there are A LOT. The most commonly used in the U.S. are probably WGS84 (a global CRS) and NAD83 (used by many U.S. federal agencies).
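+
+You can also look up any EPSG code from within R; `st_crs()` accepts a code directly and prints the full CRS definition:
+
+```{r}
+# print the well-known text definition of WGS84
+st_crs(4326)
+```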
+
+### Coordinate Reference Systems
+
+Probably the most important part of working with spatial data is the coordinate reference system (CRS). The CRS describes how and where your spatial data is located on Earth. There are numerous CRSs depending on when and how the data was collected, the spatial location and extent it covers, etc. In order to analyze and visualize spatial data together, **all objects must be in the exact same CRS**.
+
+We can check a spatial object's CRS by printing the object name to the console, which returns a bunch of metadata about the object. You can also return just the CRS of `sf` objects with `st_crs()`.
+
+```{r}
+# see the CRS in the header metadata:
+co_counties
+
+#return just the CRS (more detailed)
+st_crs(co_counties)
+```
+
+You can check if two objects have the same CRS like this:
+
+```{r}
+st_crs(poudre_hwy) == st_crs(poudre_points_sf)
+```
+
+Uh oh, the CRSs of our points and lines don't match. While `tmap` performs some on-the-fly transformations to map the two layers together, in order to do any analyses with these objects you'll need to re-project one of them. You can project one object's CRS to that of another with `st_transform()` like this:
+
+```{r}
+# transform the CRS of poudre_points_sf to the CRS of poudre_hwy
+poudre_points_prj <- st_transform(poudre_points_sf, st_crs(poudre_hwy))
+
+# Now check that they match
+st_crs(poudre_points_prj) == st_crs(poudre_hwy)
+```
+
+### 2.2 Raster Data
+
+### [`elevatr`](https://github.com/jhollist/elevatr/)
+
+Let's import some elevation data using the `elevatr` package. The function `get_elev_raster()` returns a raster digital elevation model (DEM) from the AWS Open Data Terrain Tiles. You must supply a spatial object specifying the **extent** of the returned elevation raster, plus the resolution (set by the zoom level `z`). We are importing elevation at roughly 1 km resolution (closer to 900 m), and we can use our `co_counties` object as the download extent, which returns elevation tiles covering the state of Colorado.
+
+*Note: the 'extent' is the spatial bounding box of the data, represented by the x,y coordinates of the four corners enclosing the entire spatial data set.*
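+
+If you want to see exactly what extent will be requested, `st_bbox()` returns those four corner coordinates for any `sf` object:
+
+```{r}
+# the bounding box (xmin, ymin, xmax, ymax) of the Colorado counties layer
+st_bbox(co_counties)
+```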
+
+```{r}
+co_elevation <- get_elev_raster(co_counties, z = 7)
+```
+
+```{r}
+qtm(co_elevation)
+```
+
+By default, `tmap` uses a categorical symbology to color the cells by elevation. You can change that to a continuous palette like this (an example of where `tm_shape()` lets us customize the map more than `qtm()`):
+
+```{r}
+tm_shape(co_elevation)+
+ tm_raster(style = "cont", title = "Elevation (m)")
+```
+
+When we view this on a map, we see that it actually extends beyond Colorado because of how the Terrain Tiles are spatially organized.
+
+Let's inspect this raster layer a little. Printing the object name to the console shows metadata such as resolution (cell/pixel size), extent, CRS, and file name.
+
+```{r}
+co_elevation
+```
+
+### `terra`
+
+We use the `terra` package to work with raster data. Say we only want elevation along the Poudre highway. We can use `crop()` to crop the raster to the extent of our `poudre_hwy` object, using the `ext()` function to get that extent (i.e., bounding box).
+
+However...the following line of code **doesn't work:**
+
+```{r}
+# If we try this, we get an error
+co_elevation_crop <- crop(co_elevation, ext(poudre_hwy))
+
+```
+
+This doesn't work because our `co_elevation` object is not in the proper format for the `terra` package. The `elevatr` package still uses the older `raster` package to create raster objects; that package is outdated, and we want to stick with `terra` for this course and any future raster work you do.
+
+```{r}
+# note the data type of elevation is RasterLayer
+class(co_elevation)
+```
+
+`terra` uses objects of a class called `SpatRaster`. Converting a `RasterLayer` to a `SpatRaster` is quick with the `rast()` function.
+
+```{r}
+co_elevation <- rast(co_elevation)
+```
+
+Now check the class:
+
+```{r}
+class(co_elevation)
+```
+
+Now we can use `terra` functions, and re-run the `crop()` code we tried earlier:
+
+```{r}
+co_elevation_crop <- crop(co_elevation, ext(poudre_hwy))
+```
+
+Plot all our spatial layers together:
+
+```{r}
+qtm(co_elevation_crop) +
+ qtm(poudre_hwy) +
+ qtm(poudre_points_prj)
+```
+
+## 3. Reading and Writing Spatial Data
+
+### 3.1 Writing spatial data
+
+All of the spatial data we've worked with so far exists only as objects in our environment. To save the data to disk, the `sf` and `terra` packages each provide write functions. You are not required to save these files, but if you want to follow along with the functions below, save the data to the 'data/' folder.
+
+To save vector data with `sf`, use `write_sf()`:
+
+```{r}
+write_sf(poudre_hwy, "data/poudre_hwy.shp")
+
+write_sf(poudre_points_prj, "data/poudre_points.shp")
+```
+
+While you can give the file any name you want, you **must use '.shp' as the file extension**. '.shp' stands for 'shapefile', and if you run the code above you'll notice several other files are saved with the same name but different extensions. These are auxiliary files required to properly work with the .shp file. **If you ever want to share or move a shapefile, you must zip the .shp and all of its auxiliary files together.** Think of them as a package deal!
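+
+A quick way to see the whole "package deal" is to list every file that was just written with the `poudre_hwy` base name:
+
+```{r}
+# shows poudre_hwy.shp plus its .dbf, .prj, and .shx sidecar files
+list.files("data", pattern = "poudre_hwy")
+```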
+
+To save raster data with `terra`, use `writeRaster()`:
+
+```{r}
+writeRaster(co_elevation_crop, "data/poudre_elevation.tif")
+```
+
+Same as with the vector data, when saving raster data you **must add the '.tif' file extension** to the name. There are various formats raster data can be stored in (e.g., ASCII, ESRI Grid), but GeoTIFFs are the most common and generally the easiest to deal with in R.
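+
+One small gotcha: if the file already exists (say you re-run the chunk), `writeRaster()` will error unless you explicitly allow overwriting:
+
+```{r}
+# overwrite = TRUE lets you safely re-run this chunk
+writeRaster(co_elevation_crop, "data/poudre_elevation.tif", overwrite = TRUE)
+```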
+
+### 3.2 .RData Files
+
+Another way to store data is to save your environment variables as R data objects. You may have already seen '.RData' files in your folders if you've ever clicked 'yes' when RStudio asked to save your workspace on closing. This saves everything in your environment to a file with a '.RData' extension in your project directory, and every time you open the project it reloads everything that was in the environment. This is often poor practice: it works against writing reproducible code, and all those variables rack up storage space on your computer. We recommend changing this setting by going to Global Options and, under 'Workspace', setting 'Save workspace to .RData on exit' to '**Never**'.
+
+However, there are times you may want to save variables as R data files, such as a set of variables you want to quickly re-load at the beginning of a session, or files that are large on disk, which is often the case with spatial data (R object files are much smaller). **You can save single *or* multiple variables to an .RData file, or a single variable to an .RDS file**.
+
+Since `poudre_hwy` and `poudre_points_prj` are objects you created in this session, you can save them to an .RData file with `save()` to avoid having to recreate them:
+
+```{r}
+save(poudre_hwy, poudre_points_prj, file = "data/poudre_spatial_objects.RData")
+```
+
+Note that you must explicitly name the `file =` argument when supplying the file path.
+
+Now, to test how .RData files work, remove the objects from your environment with `rm()` (*be careful with this function, it is permanent!*) and load them back in with `load()`:
+
+```{r}
+rm(poudre_hwy, poudre_points_prj)
+```
+
+They are no longer in your Environment pane, but after you load the .RData file back in, both objects reappear under the same names they had when you saved them.
+
+```{r}
+load("data/poudre_spatial_objects.RData")
+```
+
+Note that `terra` objects don't save properly to .RData files, but there is a workaround: save a single `terra` object as an .RDS file with `saveRDS()`. Here is that workflow; the only extra step is to 'unpack' the loaded .RDS object with `rast()`.
+
+```{r}
+saveRDS(co_elevation_crop, "data/poudre_elevation.RDS")
+```
+
+```{r}
+readRDS("data/poudre_elevation.RDS") %>% rast()
+```
+
+Note that with .RDS files you must assign the loaded file to a new variable (unlike .RData files, which restore the objects under the exact names they had before).
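+
+So a complete round trip for the raster looks like this, assigning the unpacked object to whatever name you choose (here the hypothetical name `poudre_elevation`):
+
+```{r}
+# read the .RDS back in and unpack it into a SpatRaster in one pipe
+poudre_elevation <- readRDS("data/poudre_elevation.RDS") %>% rast()
+
+class(poudre_elevation)
+```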
+
+### 3.3 Reading Spatial Data
+
+To read in shapefiles, use `read_sf()`. If you saved the `poudre_hwy` shapefile in the steps above, you can load it back into your environment like this:
+
+```{r}
+read_sf("data/poudre_hwy.shp")
+```
+
+Notice that when reading shapefiles into R you only point to the file with the '.shp' extension and don't need to pay much attention to the auxiliary files. As long as all of the auxiliary files are in the same folder, the shapefile reads in correctly; if any are missing, the .shp file is useless on its own.
+
+To read in raster files, use the `rast()` function with the file path, including the appropriate file extension:
+
+```{r}
+rast("data/poudre_elevation.tif")
+```
+
+**Remember when reading in files you will want to assign them to a new variable name with `<-` to keep them in your environment**.
+
+## 4. Exercises
+
+1. **Explore the use of `extract` from the `terra` package by running `?terra::extract`. (Note we need to specify `terra::` because 'extract' is a function name in multiple packages we may have loaded in our session).**
+
+ **How would you extract the elevation at each of the three points in `poudre_points_prj` ? (2 pts)**
+
+ ```{r}
+    # extract the elevation value at each of the three points
+    terra::extract(co_elevation, poudre_points_prj)
+
+ ```
+
+2. **Choose your favorite state (other than Colorado). For that state, carry out the following tasks: (8 pts)**
+
+Import the county boundaries for your state:
+
+```{r}
+MT_counties <- counties(state = "MT")
+```
+
+Import elevation for your state (using your new counties object as the extent/bounding box and set `z = 7`):
+
+```{r}
+MT_elevation <- get_elev_raster(MT_counties, z = 7)
+
+# convert the RasterLayer returned by elevatr to a terra SpatRaster
+MT_elevation <- rast(MT_elevation)
+
+# crop and mask the elevation data to the county boundaries
+MT_Elevation_masked <- crop(MT_elevation, MT_counties, mask = TRUE)
+```
+
+Create an interactive map of your state counties and the elevation layer underneath (*note:* use `?qtm` to see the argument options for `fill =` to draw only the county borders, i.e. remove the fill color).
+
+```{r}
+
+tmap_mode("view")
+
+tm_map <- tm_shape(MT_Elevation_masked) +
+ tm_raster(title = "Elevation in meter") +
+ tm_shape(MT_counties) +
+ tm_borders(col = "black")
+tm_map
+```
+
+
+Choose a single county within your state county object, and crop your elevation layer to the extent of that county (*note:* use `filter()` to create an object of just your selected county that you want to crop to). Follow the steps above we used to crop `co_elevation` to the poudre hwy.
+
+```{r}
+Missoula_County <- MT_counties %>%
+ filter(NAME == "Missoula")
+
+Missoula_Elevation_masked <- crop(MT_elevation, Missoula_County, mask = TRUE)
+
+tmap_mode("view")
+
+tm_map <- tm_shape(Missoula_Elevation_masked) +
+ tm_raster(title = "Elevation in Missoula County", palette = "-Blues") +
+ tm_shape(Missoula_County) +
+ tm_borders(col = "black")
+
+tm_map
+```
diff --git a/02_spatial_analysis_Ashlesha.Rmd b/02_spatial_analysis_Ashlesha.Rmd
new file mode 100644
index 0000000..741ae56
--- /dev/null
+++ b/02_spatial_analysis_Ashlesha.Rmd
@@ -0,0 +1,378 @@
+---
+title: "Spatial Data Analysis"
+author: "Caitlin Mothes"
+date: "`r Sys.Date()`"
+output: github_document
+---
+
+```{r setup, include=FALSE}
+knitr::opts_chunk$set(echo = TRUE, eval = FALSE)
+```
+
+In the first lesson this week you were introduced to different spatial data types and various databases you can pull spatial data from, and you worked through importing, wrangling, and saving those spatial data types. Today we are going to dive deeper into spatial analyses in R.
+
+You have briefly used the `sf` and `terra` packages so far, but today we will be exploring them in much more depth, using the wide range of spatial analysis operations they provide.
+
+You shouldn't need to install any new packages for today, but you do need to load all the necessary libraries:
+
+```{r}
+source("setup.R")
+```
+
+## Load in spatial data
+
+We will be working with some new datasets today that are already included in the 'data/' folder. These include:
+
+- "spatDat.RData" : an .RData file that loads in the four objects:
+
+ - `counties` : a multipolygon layer of Colorado counties (which we used in 01_spatial_intro.Rmd)
+
+ - `rivers` : a polyline layer of all major rivers in Colorado
+
+ - `occ` : a list of three dataframes that includes species occurrence data (i.e., point locations) for Elk, Yellow-bellied Marmot, and Western Tiger Salamander in Colorado retrieved from the [GBIF](https://www.gbif.org/) database.
+
+ - `snotel_data` : spatial point dataframe (i.e., `sf` object) of daily snow depth for 8 SNOTEL sites in Colorado
+
+```{r key1}
+#load in all your vector data
+load("data/spatDat.RData")
+
+#read in the elevation and landcover rasters
+landcover <- terra::rast("data/NLCD_CO.tif")
+
+elevation <- terra::rast("data/elevation.tif")
+```
+
+### Bonus Lesson
+
+All the above objects were retrieved and cleaned in R. The lesson plan in the 'bonus/' folder titled **'get_spatial_challenge.Rmd'** is an assignment that tasks you with importing and cleaning the data that was saved in 'spatDat.RData'. If you complete this challenge assignment fully you will get *up to 3 extra credit points*. Even if you don't want to complete this challenge, it is worth your while to read and work through it!
+
+## Distance Calculations
+
+We're going to start off today with some distance calculations. Using our species occurrence data, say we want to know, on average, how far each species is found from a major river, and compare that among species.
+
+Throughout today we are going to be mapping our spatial data to quickly inspect it and get a visual of each data set's extent and characteristics, so let's set our `tmap` mode to interactive.
+
+```{r}
+tmap_mode("view")
+```
+
+First, our `occ` object is not in a spatial format. We need to bind our data frames into a single one and convert it to an `sf` object using `st_as_sf()`:
+
+```{r}
+occ_sp <- bind_rows(occ) %>%
+ st_as_sf(coords = c("decimalLongitude", "decimalLatitude"), crs = 4236)
+```
+
+We set the CRS to `4326`, the EPSG code for WGS84, the most commonly used CRS for GPS coordinates (I also checked the GBIF metadata to make sure the coordinates were in fact recorded in WGS84).
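+
+As a quick sanity check, you can confirm the EPSG code that got attached to the new object:
+
+```{r}
+# should return 4326
+st_crs(occ_sp)$epsg
+```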
+
+Quick view of all our points, colored by species:
+
+```{r}
+qtm(occ_sp, symbols.col = "Species")
+```
+
+Now, calculating the distance to the nearest river involves point-to-line distance calculations, which we can perform with the `sf` package.
+
+Before performing any spatial operations, remember all of our spatial objects must be in the same CRS.
+
+### Exercise #1
+
+```{r}
+st_crs(rivers) == st_crs(occ_sp)
+```
+
+The CRSs of our objects do not match. Using what you learned in week one, spatially transform our `occ_sp` object to the same CRS as our `rivers` object. Call the new object `occ_prj`, and double-check that `rivers` and the new occurrences object share the same CRS after transforming.
+
+```{r}
+occ_prj <- occ_sp %>%
+  st_transform(crs = st_crs(rivers))
+
+# double check that the CRS now matches
+st_crs(occ_prj) == st_crs(rivers)
+```
+
+
+Now let's visualize our rivers and occurrence data:
+
+```{r}
+qtm(rivers) +
+ qtm(occ_prj, symbols.col = "Species")
+```
+
+Our occurrence data set covers all of Colorado, but the rivers are only for Larimer County, so we first have to filter our points to Larimer County.
+
+Similar to `filter()` from the {tidyverse}, we can use `st_filter()` to perform a *spatial* filter (i.e., keep just the points that occur in Larimer County).
+
+### Exercise #2
+
+Use `?st_filter` to explore the function, then use it to filter our `occ_prj` points to Larimer County and call the new object `occ_larimer`.
+
+*Note:* You will first need to create a spatial object of just Larimer County to use as the filter.
+
+```{r}
+
+Larimer_county <- counties(state = "CO")%>%
+ filter(NAME == "Larimer")
+
+occ_larimer <- st_filter(occ_prj,Larimer_county)
+
+```
+
+```{r}
+qtm(occ_larimer)
+```
+
+Great, now we just have species occurrences within Larimer County.
+
+Now for each point we want to calculate its distance to the nearest river. The most efficient way is to first find the nearest line feature for each point. We can do this with the `st_nearest_feature()` function.
+
+This function returns the index value (row number) of the river feature in the `rivers` spatial data frame that is closest in distance to each point. Here we assign these index values to a new column of our Larimer occurrences called 'nearest_river', which we will use next to calculate distances:
+
+```{r}
+occ_larimer$nearest_river <- st_nearest_feature(occ_larimer, rivers)
+```
+
+Now we can use the `st_distance()` function to calculate each point's distance to its nearest river feature, using the index value in our new "nearest_river" column. Adding `by_element = TRUE` tells the function to perform the distance calculations row by row; we store the results in a new column, "river_dist_m".
+
+```{r}
+occ_larimer$river_dist_m <-
+ st_distance(occ_larimer, rivers[occ_larimer$nearest_river, ], by_element = TRUE)
+```
+
+Notice that the new column "river_dist_m" is not just a numeric vector but a "units" vector, specifying that the values are in meters.
+
+```{r}
+str(occ_larimer)
+```
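+
+Because it is a "units" vector, you can convert it explicitly with the `units` package (installed alongside `sf`), or simply strip the units with `as.numeric()`. A quick sketch:
+
+```{r}
+# convert meters to kilometers, keeping the units class
+units::set_units(occ_larimer$river_dist_m, km) %>% head()
+
+# or drop the units entirely (handy for plotting)
+as.numeric(occ_larimer$river_dist_m) %>% head()
+```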
+
+### Exercise #3
+
+Cool, now you have the distance to the nearest river (in meters) for each individual occurrence, but you want the average distance for each species. Using what you know of the `dplyr` functions, calculate each species' average distance, then make a bar plot to compare the averages among species:
+
+*Hint*: remember that the new distance column is a 'units' data type, which will throw an error when you try to plot those values. You will need to make use of `mutate()` and `as.numeric()` within your chain of operations to complete the task.
+
+```{r}
+# the nearest_river and river_dist_m columns were already computed above,
+# so we can go straight to the summary
+species_avg_dist <- occ_larimer %>%
+  st_drop_geometry() %>% # a plain data frame is all we need for the summary
+  mutate(distance_numeric = as.numeric(river_dist_m)) %>%
+  group_by(Species) %>%
+  summarize(avg_distance = mean(distance_numeric))
+
+ggplot(species_avg_dist, aes(x = Species, y = avg_distance, fill = Species)) +
+  geom_col() +
+  labs(title = "Average Distance to Nearest River by Species",
+       x = "Species",
+       y = "Average Distance (meters)")
+```
+
+
+
+Which species is, on average, found closest to a river?
+
+-> Elk is, on average, found closest to a river, and the Western Tiger Salamander is found farthest from a river.
+
+## Buffers
+
+Alternatively, say you want to know what percentage of a species' occurrences (points) fall within a specified distance of a river (i.e., within a calculated buffer). Here let's investigate how often each species is found within 100 m of a river.
+
+To do this we can add a buffer around our line features and filter the points that fall within that buffer zone. We can use `st_buffer()` with a specified distance. The distance defaults to the length unit of the CRS, which for our `rivers` object is meters (you can confirm by checking the CRS with `st_crs()`).
+
+```{r eval=FALSE}
+river_buffer <- st_buffer(rivers, dist = 100)
+
+qtm(river_buffer)
+```
+
+If you zoom in on the map you can now see a buffer around the rivers; note that this new object is a polygon geometry type instead of a line.
+
+```{r}
+river_buffer
+```
+
+## Spatial Intersect
+
+We can conduct spatial intersect operations using the function `st_intersects()`. This function checks whether each occurrence intersects the river buffer, and if so returns the index value (row number) of the buffer feature it intersects. The result is a list with one element per occurrence, which is empty when there are no intersections. We will add this as a column to our occurrence data set, then create a binary TRUE/FALSE river intersection column based on those results (is the list element empty or not?).
+
+First look at what `st_intersects()` returns:
+
+```{r}
+st_intersects(occ_larimer, river_buffer)
+```
+
+We see it is a list of the same length as our `occ_larimer` object, where each element is either empty (no intersections) or the index number of the river buffer feature it intersects. To add this as a new column in our `occ_larimer` data we run:
+
+```{r}
+occ_larimer$river_intersections <- st_intersects(occ_larimer, river_buffer)
+```
+
+Now we can create a new column in `occ_larimer` called 'river_100m' that is TRUE/FALSE depending on whether the point falls within the river buffer. We make use of `if_else()` and the `lengths()` function to check the length of each list element, as the empty ones return a length of 0. If the length is zero (empty), we return FALSE, meaning that occurrence was not found within 100 m of a river.
+
+```{r}
+occ_rivers <- occ_larimer %>%
+ mutate(river_100m = if_else(lengths(river_intersections) == 0, FALSE, TRUE))
+```
+
+Now we can calculate what percentage of occurrences are within 100 m of a river for each species using `dplyr` operations. Which species is most often found within 100m of a river?
+
+```{r}
+occ_rivers %>%
+  group_by(Species) %>%
+  summarise(total_occ = n(),
+            total_river = sum(river_100m),
+            percent_river = (total_river / total_occ) * 100)
+```
+
+
+
+#### Reflection
+
+This analysis is just for teaching purposes, why would you be cautious about these results for answering real research questions? Think about how we filtered everything to a political boundary, what's wrong with this method?
+
+## Raster Reclassification
+
+So far we've dealt with a bunch of vector data and associated analyses with the `sf` package. Now let's work through some raster data analysis using the `terra` package.
+
+First, let's explore the landcover raster by making a quick plot.
+
+```{r}
+qtm(landcover)
+```
+
+This land cover data set includes attributes (land cover classes) associated with the raster values. This is because of the .aux auxiliary file paired with the .tif in the 'data/' folder. Similar to shapefiles, this file provides metadata (in this case, land cover class names) for the raster file.
+
+We can quickly view the frequency of each land cover type with the `freq()` function, where 'count' is the number of pixels in the raster of that landcover type.
+
+```{r}
+freq(landcover)
+```
+
+### Exercise 4
+
+Create a bar chart of landcover frequency, and order the bars highest to lowest (see [this resource](https://sebastiansauer.github.io/ordering-bars/) to guide you on sorting bars by a numeric variable/column). Also investigate the use of `coord_flip()` and how it might make your plot look better...
+
+```{r}
+landcover_freq <- freq(landcover)
+
+ggplot(landcover_freq, aes(x = reorder(value, -count), y = count)) +
+  geom_col() +
+  labs(y = "Landcover Frequency", x = "Landcover Type") +
+  coord_flip()
+
+```
+
+Say we want to explore some habitat characteristics of our species of interest, and we are specifically interested in forest cover. We can use raster reclassification to create a new layer of just forest types in Colorado.
+
+Since rasters are technically matrices, we can use **indexing** to change values quickly with matrix operations. Given that this particular raster has character class names associated with its values (thanks to the .aux file!), we can index by those names.
+
+```{r}
+#first assign landcover to a new object name so we can manipulate it while keeping the original
+forest <- landcover
+
+#where the raster equals any of the forest categories, set that value to 1
+forest[forest %in% c("Deciduous Forest", "Evergreen Forest", "Mixed Forest")] <- 1
+
+#note: the spelling of the category names must match the raster's labels exactly
+
+#now set all non forest pixels to NA
+forest[forest != 1] <- NA
+```
+
+Now plot the new forest layer to get a quick sense of whether it looks accurate.
+
+```{r}
+plot(forest)
+```
+
+## Extraction Statistics
+
+When we want to summarize raster values within certain shapes (points, polygons, etc.), the `extract()` function from the `terra` package helps us do that.
+
+Say we want to find out the most common land cover type each of our species is found in. We can use `extract()` to get the landcover value from the raster at each of our occurrence points, and then do some summary statistics.
+
+Within this function, the first argument is the raster you want to get values from, and the second is the spatial layer you want to extract values at. Here we will use our `landcover` raster layer and the `occ_prj` object to extract values for occurrences across Colorado.
+
+First, we need to project our landcover raster to the CRS of our occurrences, otherwise the operation will only return NAs.
+
+```{r}
+# project the landcover layer
+landcover_prj <- project(landcover, crs(occ_prj))
+
+extract(landcover_prj, occ_prj)
+```
+
+Notice that this returns a 2 column data frame, with an ID for each feature (occurrence) and the extracted raster value in the second column. We can actually use `extract()` within `mutate()` to add the values as a new column to our occurrences data frame so we can do further summary statistics.
+
+However, since `extract()` returns a two-column data frame, `mutate()` will nest it into a single column of the `occ_prj` data frame. To separate it into two columns we can use `unnest()`:
+
+```{r}
+occ_landcover <- occ_prj %>%
+ mutate(common_landcover = extract(landcover_prj, occ_prj)) %>%
+ unnest(common_landcover) %>%
+ #lets rename the land cover column which is now called "NLCD Land Cover Class"
+ rename(common_landcover = "NLCD Land Cover Class")
+```
+
+Now we can find the most common land cover type for each species using some tidyverse wrangling. Note the use of `st_drop_geometry()`: it reverts the `sf` object back to a regular data frame, which is required for some tidyverse operations.
+
+```{r}
+occ_landcover %>%
+ st_drop_geometry() %>% # this converts the data back to a dataframe, required for some tidyverse operations
+ group_by(Species) %>%
+ count(common_landcover) %>%
+ slice(which.max(n)) #returns the row with the highest count "n"
+```
+
+We can also use `extract()` to extract raster values within polygons, but here we must supply a function for summarizing all the values within each polygon. For this example, let's find the most common landcover type in each Colorado county.
+
+```{r}
+county_landcover <-
+ counties %>%
+ mutate(landcover = extract(landcover_prj, counties, fun = "modal")) %>%
+ unnest(landcover) %>%
+ rename(value = "NLCD Land Cover Class") #renaming this helps us perform a join later on...
+```
+
+Uh oh, this gives us the raw pixel values instead of the land cover classes. We can get a table of value - class pairs by using the `cats()` function:
+
+```{r}
+classes <- as.data.frame(cats(landcover)) #coerce to a data frame because cats() actually returns it as a list
+```
+
+Value and NLCD.Land.Cover.Class are our cell value - class pairs. Now we want to join this to our `county_landcover` object to get the actual land cover name.
+
+### Exercise 5
+
+Perform the appropriate `*_join` operation to tie our `county_landcover` and `classes` data frames together. Then make a map of the counties each colored/filled by the most common NLCD land cover class.
+
+```{r}
+county_landcover_joined <- county_landcover %>%
+ left_join(classes, by = "value")
+
+qtm(county_landcover_joined, fill = "NLCD.Land.Cover.Class")
+
+```
+
+### Exercise 6
+
+Find the average elevation each species occurs at (for all Colorado occurrences). Which species is, on average, found at the highest elevations?
+
+*Hints*: Use the `elevation` and `occ_prj` objects we have created or read in above. Remember to check the CRS and perform a spatial transformation if necessary! All parts needed to answer this question have been introduced in this lesson plan.
+
+```{r}
+# compare the CRS of the raster and the points (terra::crs() works on both)
+crs(elevation) == crs(occ_prj)
+
+# project the elevation raster to match the points, as we did for landcover
+elevation_prj <- project(elevation, crs(occ_prj))
+
+species_elevation <- occ_prj %>%
+  mutate(elevation = extract(elevation_prj, occ_prj)[, 2]) %>% # second column holds the extracted values
+  st_drop_geometry() %>%
+  group_by(Species) %>%
+  summarise(avg_elevation = mean(elevation, na.rm = TRUE))
+
+species_elevation
+```
+The species found at the highest average elevation is the Yellow-bellied Marmot, at roughly 3,302 m.
+
diff --git a/02_spatial_analysis_Ashlesha.md b/02_spatial_analysis_Ashlesha.md
new file mode 100644
index 0000000..d3a11ff
--- /dev/null
+++ b/02_spatial_analysis_Ashlesha.md
@@ -0,0 +1,499 @@
+Spatial Data Analysis
+================
+Caitlin Mothes
+2023-11-13
+
+In the first lesson this week you were introduced to different spatial
+data types and various databases you can pull spatial data from, and
+you worked through importing, wrangling, and saving those spatial data
+types. Today we are going to dive deeper into spatial analyses in R.
+
+You have briefly used the `sf` and `terra` packages so far, but today
+we will be exploring them in much more depth, using the wide range of
+spatial analysis operations they provide.
+
+You shouldn’t need to install any new packages for today, but you do
+need to load in all the necessary libraries:
+
+``` r
+source("setup.R")
+```
+
+## Load in spatial data
+
+We will be working with some new datasets today that are already
+included in the ‘data/’ folder. These include:
+
+- “spatDat.RData” : an .RData file that loads in the four objects:
+
+ - `counties` : a multipolygon layer of Colorado counties (which we
+ used in 01_spatial_intro.Rmd)
+
+ - `rivers` : a polyline layer of all major rivers in Colorado
+
+ - `occ` : a list of three dataframes that includes species occurrence
+ data (i.e., point locations) for Elk, Yellow-bellied Marmot, and
+ Western Tiger Salamander in Colorado retrieved from the
+ [GBIF](https://www.gbif.org/) database.
+
+ - `snotel_data` : spatial point dataframe (i.e., `sf` object) of daily
+ snow depth for 8 SNOTEL sites in Colorado
+
+``` r
+#load in all your vector data
+load("data/spatDat.RData")
+
+#read in the elevation and landcover rasters
+landcover <- terra::rast("data/NLCD_CO.tif")
+
+elevation <- terra::rast("data/elevation.tif")
+```
+
+### Bonus Lesson
+
+All the above objects were retrieved and cleaned in R. The lesson plan
+in the ‘bonus/’ folder titled **‘get_spatial_challenge.Rmd’** is an
+assignment that tasks you with importing and cleaning the data that was
+saved in ‘spatDat.RData’. If you complete this challenge assignment
+fully you will get *up to 3 extra credit points*. Even if you don’t want
+to complete this challenge, it is worth your while to read and work
+through it!
+
+## Distance Calculations
+
+We’re going to start off today with some distance calculations. Using
+our species occurrence data, say we want to know, on average, how far
+each species is found from a major river, and compare that among
+species.
+
+Throughout today we are going to be mapping our spatial data to quickly
+inspect it and get a visual of the data’s extent and characteristics, so
+let’s set our `tmap` mode to interactive.
+
+``` r
+tmap_mode("view")
+```
+
+First, our `occ` object is not in a spatial format. We first need to
+bind our dataframes into a single one, and convert it to an `sf` object
+using `st_as_sf()` :
+
+``` r
+occ_sp <- bind_rows(occ) %>%
+ st_as_sf(coords = c("decimalLongitude", "decimalLatitude"), crs = 4236)
+```
+
+We set the CRS to `4326`, the EPSG code for WGS84, the most commonly
+used CRS for GPS coordinates (I also checked the GBIF metadata to make
+sure the coordinates were in fact recorded in WGS84).
+
+Quick view of all our points, colored by species:
+
+``` r
+qtm(occ_sp, symbols.col = "Species")
+```
+
+Now, calculating the distance to the nearest river involves point to
+line distance calculations, which we can perform with the `sf` package.
+
+Before performing any spatial operations, remember all of our spatial
+objects must be in the same CRS.
+
+### Exercise \#1
+
+``` r
+st_crs(rivers) == st_crs(occ_sp)
+```
+
+The CRS of our objects does not match. Using what you learned in week
+one, conduct a spatial transformation to our `occ_sp` object to coerce
+it to the same CRS of our `rivers` object. Call the new object `occ_prj`
+and double check that `rivers` and our new occurrences object are in the
+same CRS after transforming
+
+``` r
+occ_prj <- occ_sp %>%
+  st_transform(crs = st_crs(rivers))
+
+# double check that the CRS now matches
+st_crs(occ_prj) == st_crs(rivers)
+```
+
+Now let’s visualize our rivers and occurrence data:
+
+``` r
+qtm(rivers) +
+ qtm(occ_prj, symbols.col = "Species")
+```
+
+Our occurrence data set covers all of Colorado, but rivers are only for
+Larimer County. So, we have to first filter our points to Larimer
+County.
+
+Similar to `filter()` from the {tidyverse}, we can use `st_filter()` to
+perform a *spatial* filter (i.e., we want to keep just the points
+that occur in Larimer County).
+
+### Exercise \#2
+
+Use `?st_filter` to explore the use of the function, and then use it to
+filter our `occ_prj` points to Larimer county and call the new object
+`occ_larimer`.
+
+*Note:* You will first need to create a spatial object of just Larimer
+county to use as a filter.
+
+``` r
+Larimer_county <- counties(state = "CO")%>%
+ filter(NAME == "Larimer")
+
+occ_larimer <- st_filter(occ_prj,Larimer_county)
+```
+
+``` r
+qtm(occ_larimer)
+```
+
+Great, now we just have species occurrences within Larimer County.
+
+Now for each point we want to calculate its distance to the nearest
+river. The most efficient way is to first find the nearest line feature
+for each point. We can do this with the `st_nearest_feature()` function.
+
+This function returns the index values (row number) of the river feature
+in the `rivers` spatial data frame that is closest in distance to each
+point. Here we are assigning these index values to a new column of our
+Larimer occurrences called ‘nearest_river’ that we will use later to
+calculate distances:
+
+``` r
+occ_larimer$nearest_river <- st_nearest_feature(occ_larimer, rivers)
+```
+
+Now, for each point we can use the `st_distance()` function to calculate
+the distance to the nearest river feature, using the index value in our
+new “nearest_river” column. Adding `by_element = TRUE` is necessary to
+tell the function to perform the distance calculations by element (row),
+which we will fill into a new column “river_dist_m”.
+
+``` r
+occ_larimer$river_dist_m <-
+ st_distance(occ_larimer, rivers[occ_larimer$nearest_river, ], by_element = TRUE)
+```
+
+Notice that the new column “river_dist_m” is more than just a numeric
+class, but a “units” class, specifying that the values are in meters.
+
+``` r
+str(occ_larimer)
+```
+
+### Exercise \#3
+
+Cool, now you have the distance to the nearest river (in meters) for
+each individual species occurrence, but you want the average distance
+for each species. Using what you know of the `dplyr` functions,
+calculate the species average distance, then make a bar plot to compare
+the averages among species:
+
+*Hint*: remember that the new distance column is a ‘units’ data type,
+which will throw an error when you try to plot those values. You will
+need to make use of `mutate()` and `as.numeric()` within your chain of
+operations to complete the task.
+
+``` r
+# the nearest_river and river_dist_m columns were already computed above,
+# so we can go straight to the summary
+species_avg_dist <- occ_larimer %>%
+  st_drop_geometry() %>% # a plain data frame is all we need for the summary
+  mutate(distance_numeric = as.numeric(river_dist_m)) %>%
+  group_by(Species) %>%
+  summarize(avg_distance = mean(distance_numeric))
+
+ggplot(species_avg_dist, aes(x = Species, y = avg_distance, fill = Species)) +
+  geom_col() +
+  labs(title = "Average Distance to Nearest River by Species",
+       x = "Species",
+       y = "Average Distance (meters)")
+```
+
+Which species is, on average, found closest to a river?
+
+-\> Elk is, on average, found closest to a river, and the Western
+Tiger Salamander is found farthest from a river.
+
+## Buffers
+
+Alternatively, say you want to know what percentage of a species’
+occurrences (points) fall within a specified distance of a river
+(i.e., within a calculated buffer). Here let’s investigate how often
+each species is found within 100 m of a river.
+
+To do this we can add a buffer around our line features and filter the
+points that fall within that buffer zone. We can use `st_buffer()`
+with a specified distance. The distance defaults to the length unit of
+the CRS, which for our `rivers` object is meters (you can confirm by
+checking the CRS with `st_crs()`).
+
+``` r
+river_buffer <- st_buffer(rivers, dist = 100)
+
+qtm(river_buffer)
+```
+
+If you zoom in on the map you can now see a buffer around the rivers,
+and this new object is actually a polygon geometry type now instead of a
+line.
+
+``` r
+river_buffer
+```
+
+## Spatial Intersect
+
+We can conduct spatial intersect operations using the function
+`st_intersects()`. This function checks whether each occurrence
+intersects the river buffer, and if so returns the index value (row
+number) of the buffer feature it intersects. The result is a list with
+one element per occurrence, which is empty when there are no
+intersections. We will add this as a column to our occurrence data
+set, then create a binary TRUE/FALSE river intersection column based
+on those results (is the list element empty or not?).
+
+First look at what `st_intersects()` returns:
+
+``` r
+st_intersects(occ_larimer, river_buffer)
+```
+
+We see it is a list of the same length as our `occ_larimer` object,
+where each list element is either empty (no intersections) or the index
+number for the river buffer feature it intersects with. To add this as a
+new column in our `occ_larimer` data we run this:
+
+``` r
+occ_larimer$river_intersections <- st_intersects(occ_larimer, river_buffer)
+```
+
+Now we can create a new column in `occ_larimer` called ‘river_100m’
+that is TRUE/FALSE depending on whether the point falls within the
+river buffer. We make use of `if_else()` and the `lengths()` function
+to check the length of each list element, as the empty ones return a
+length of 0. If the length is zero (empty), we return FALSE, meaning
+that occurrence was not found within 100 m of a river.
+
+``` r
+occ_rivers <- occ_larimer %>%
+ mutate(river_100m = if_else(lengths(river_intersections) == 0, FALSE, TRUE))
+```
+
+Now we can calculate what percentage of occurrences are within 100 m of
+a river for each species using `dplyr` operations. Which species is most
+often found within 100m of a river?
+
+``` r
+occ_rivers %>%
+  group_by(Species) %>%
+  summarise(total_occ = n(),
+            total_river = sum(river_100m),
+            percent_river = (total_river / total_occ) * 100)
+```
+
+
+
+#### Reflection
+
+This analysis is just for teaching purposes, why would you be cautious
+about these results for answering real research questions? Think about
+how we filtered everything to a political boundary, what’s wrong with
+this method?
+
+## Raster Reclassification
+
+So far we’ve dealt with a bunch of vector data and associated analyses
+with the `sf` package. Now let’s work through some raster data analysis
+using the `terra` package.
+
+First, let’s explore the landcover raster by making a quick plot.
+
+``` r
+qtm(landcover)
+```
+
+This land cover data set includes attributes (land cover classes)
+associated with the raster values. This is because of the .aux
+auxiliary file paired with the .tif in the ‘data/’ folder. Similar to
+shapefiles, this file provides metadata (in this case, land cover
+class names) for the raster file.
+
+We can quickly view the frequency of each land cover type with the
+`freq()` function, where ‘count’ is the number of pixels in the raster
+of that landcover type.
+
+``` r
+freq(landcover)
+```
+
+### Exercise 4
+
+Create a bar chart of landcover frequency, and order the bars highest to
+lowest (see [this
+resource](https://sebastiansauer.github.io/ordering-bars/) to guide you
+on sorting bars by a numeric variable/column). Also investigate the use
+of `coord_flip()` and how it might make your plot look better…
+
+``` r
+landcover_freq <- freq(landcover)
+
+ggplot(landcover_freq, aes(x = reorder(value, -count), y = count)) +
+  geom_col() +
+  labs(y = "Landcover Frequency", x = "Landcover Type") +
+  coord_flip()
+```
+
+Say we want to explore some habitat characteristics of our species of
+interest, and we are specifically interested in forest cover. We can use
+raster reclassification to create a new layer of just forest types in
+Colorado.
+
+Since rasters are technically matrices, we can use **indexing** to
+change values quickly with matrix operations. Given that this
+particular raster has character class names associated with its values
+(thanks to the .aux file!), we can index by those names.
+
+``` r
+#first assign landcover to a new object name so we can manipulate it while keeping the original
+forest <- landcover
+
+#where the raster equals any of the forest categories, set that value to 1
+forest[forest %in% c("Deciduous Forest", "Evergreen Forest", "Mixed Forest")] <- 1
+
+#note: the spelling of the category names must match the raster's labels exactly
+
+#now set all non forest pixels to NA
+forest[forest != 1] <- NA
+```
+
+Now plot the new forest layer to get a quick sense if it looks accurate
+or not.
+
+``` r
+plot(forest)
+```
+
+## Extraction Statistics
+
+When we want to summarize raster values for certain shapes (points,
+polygons, etc), the `extract()` function from the `terra` package helps
+us do that.
+
+Say we want to find out the most common land cover type each of our
+species is found in. We can use `extract()` to get the landcover value
+from the raster at each of our occurrence points, and then do some
+summary statistics.
+
+Within this function, the first element is the raster you want to get
+values from, and the second element is the spatial layer you want to
+extract values at. Here we will use our `landcover` raster layer and the
+`occ_prj` object to extract values for occurrences across Colorado.
+
+First, we need to project our landcover raster to the CRS of our
+occurrences, otherwise the operation will only return NAs.
+
+``` r
+# project the landcover layer
+landcover_prj <- project(landcover, crs(occ_prj))
+
+extract(landcover_prj, occ_prj)
+```
+
+Notice that this returns a 2 column data frame, with an ID for each
+feature (occurrence) and the extracted raster value in the second
+column. We can actually use `extract()` within `mutate()` to add the
+values as a new column to our occurrences data frame so we can do
+further summary statistics.
+
+However, since `extract()` returns a 2 column data frame, it will nest
+this into a single column in the `occ_prj` data frame. To separate this
+into two separate columns we can use `unnest()` :
+
+``` r
+occ_landcover <- occ_prj %>%
+ mutate(common_landcover = extract(landcover_prj, occ_prj)) %>%
+ unnest(common_landcover) %>%
+ #lets rename the land cover column which is now called "NLCD Land Cover Class"
+ rename(common_landcover = "NLCD Land Cover Class")
+```
+
+Now, we can find the most common land cover type for each species, using
+some tidyverse wrangling. Note the use of `st_drop_geometry()`, this
+reverts the sf object back to an original data frame, which is required
+for some tidyverse operations.
+
+``` r
+occ_landcover %>%
+ st_drop_geometry() %>% # this converts the data back to a dataframe, required for some tidyverse operations
+ group_by(Species) %>%
+ count(common_landcover) %>%
+ slice(which.max(n)) #returns the row with the highest count "n"
+```
+
+We can also use `extract()` to extract raster values within polygons,
+but here we must supply a function for summarizing all the values
+within each polygon. For this example, let’s find the most common
+landcover type in each Colorado county.
+
+``` r
+county_landcover <-
+ counties %>%
+ mutate(landcover = extract(landcover_prj, counties, fun = "modal")) %>%
+ unnest(landcover) %>%
+ rename(value = "NLCD Land Cover Class") #renaming this helps us perform a join later on...
+```
+
+Uh oh, this gives us the raw pixel values instead of the land cover
+classes. We can get a table of value - class pairs by using the `cats()`
+function:
+
+``` r
+classes <- as.data.frame(cats(landcover)) #coerce to a data frame because cats() actually returns it as a list
+```
+
+Value and NLCD.Land.Cover.Class are our cell value - class pairs. Now we
+want to join this to our `county_landcover` object to get the actual
+land cover name.
+
+### Exercise 5
+
+Perform the appropriate `*_join` operation to tie our `county_landcover`
+and `classes` data frames together. Then make a map of the counties each
+colored/filled by the most common NLCD land cover class.
+
+``` r
+county_landcover_joined <- county_landcover %>%
+ left_join(classes, by = "value")
+
+qtm(county_landcover_joined, fill = "NLCD.Land.Cover.Class")
+```
+
+### Exercise 6
+
+Find the average elevation each species occurs at (for all Colorado
+occurrences). Which species is, on average, found at the highest
+elevations?
+
+*Hints*: Use the `elevation` and `occ_prj` objects we have created or
+read in above. Remember to check the CRS and perform a spatial
+transformation if necessary! All parts needed to answer this question
+have been introduced in this lesson plan.
+
+``` r
+# compare the CRS of the raster and the points (terra::crs() works on both)
+crs(elevation) == crs(occ_prj)
+
+# project the elevation raster to match the points, as we did for landcover
+elevation_prj <- project(elevation, crs(occ_prj))
+
+species_elevation <- occ_prj %>%
+  mutate(elevation = extract(elevation_prj, occ_prj)[, 2]) %>% # second column holds the extracted values
+  st_drop_geometry() %>%
+  group_by(Species) %>%
+  summarise(avg_elevation = mean(elevation, na.rm = TRUE))
+
+species_elevation
+```
+
+The species found at the highest average elevation is the
+Yellow-bellied Marmot, at roughly 3,302 m.
diff --git a/bonus/get_spatial_challenge_Ashlesha.Rmd b/bonus/get_spatial_challenge_Ashlesha.Rmd
new file mode 100644
index 0000000..0705242
--- /dev/null
+++ b/bonus/get_spatial_challenge_Ashlesha.Rmd
@@ -0,0 +1,221 @@
+---
+title: "Retrieve and Wrangle Spatial Data"
+author: "Caitlin Mothes"
+date: "`r Sys.Date()`"
+output: github_document
+---
+
+```{r setup, include=FALSE}
+knitr::opts_chunk$set(echo = TRUE, eval = FALSE)
+```
+
+In this lesson you will be exposed to various R packages you can retrieve spatial data from and work through importing, wrangling, and saving various spatial data sets.
+
+```{r}
+source("setup.R")
+```
+
+Set the `tmap` mode to interactive for some quick exploratory mapping of all these various spatial data sets.
+
+```{r}
+tmap_mode("view")
+```
+
+## Vector Data
+
+### US Census spatial data with `tigris`
+
+Import the counties shapefile for Colorado again as you did in lesson 1, along with linear water features for Larimer county.
+
+```{r}
+
+counties <- tigris::counties(state = "CO")
+
+linear_features <- linear_water(state = "CO", county = "Larimer")
+
+```
+
+This linear features file is pretty meaty. Inspect all the unique names of the features; what naming pattern do you notice? Let's filter this data set to only major rivers in the county, which all have 'Riv' at the end of their name (a quick way to peek is shown below). For working with character strings, the `stringr` package is extremely helpful and a member of the Tidyverse.
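+
+One quick way to inspect the naming pattern (there are thousands of features, so just peek at a sample of the unique names):
+
+```{r}
+# a sample of unique feature names; note the major rivers all end in "Riv"
+head(unique(linear_features$FULLNAME), 20)
+```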
+
+To filter rows that have a specific character string, you can use `str_detect()` within `filter()`.
+
+```{r}
+rivers <- linear_features %>%
+ filter(str_detect(FULLNAME, "Riv"))
+```
+
+### Species Occurrence data with [`rgbif`](https://docs.ropensci.org/rgbif/)
+
+To experiment with point data (latitude/longitude), we are going to explore the `rgbif` package, which allows you to download species occurrences from the [Global Biodiversity Information Facility (GBIF)](https://www.gbif.org/), a database of global species occurrences with over 2.2 billion records.
+
+We are going to import occurrence data for a couple of charismatic Colorado species: Elk, Yellow-Bellied Marmots, and Western Tiger Salamanders.
+
+To pull occurrence data with this package you use the `occ_data()` function from `rgbif` and give it the scientific name of the species you want data for. Since we want to do this for three species, it is a good opportunity to apply the iterative coding you learned last week.
+
+We first need to create a vector of species scientific names to use in the download function, and a second vector of their associated common names (order matters; make sure the two vectors line up).
+
+```{r}
+# make a vector of scientific names to use in the 'occ_data' function
+species <- c("Cervus canadensis", "Marmota flaviventris", "Ambystoma mavortium")
+
+# also make a vector of their associated common names
+common_name <- c("Elk", "Yellow-bellied Marmot", "Western Tiger Salamander")
+```
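+
+Because matching order is easy to get wrong, one optional safeguard is to keep the two vectors paired in a single data frame (a small sketch, not required for the exercise):
+
+```{r}
+# keep scientific and common names paired so their order can't drift apart
+species_df <- data.frame(scientific = species, common = common_name)
+```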
+
+### Exercise #1 {style="color: red"}
+
+The code below shows the steps needed to import data for a single species. It is set up to run as-is for one species; your task is to convert this chunk of code into a for loop that iterates over each scientific and common name.
+
+*Tip for getting started*: You will need a couple of extra steps outside of the for loop: first create an empty list to hold the output of each iteration, and after the for loop bind all elements of the list into a single data frame using `bind_rows()`.
+
+```{r}
+# workflow outline (runs for the first species only; new names are used here
+# so the 'species' and 'common_name' vectors aren't overwritten)
+sci_name <- species[1]
+com_name <- common_name[1]
+
+occ <-
+  occ_data(
+    scientificName = sci_name,
+    hasCoordinate = TRUE, # we only want data with spatial coordinates
+    geometry = st_bbox(counties), # filter to the state of CO
+    limit = 2000 # optional upper limit on total occurrences to download
+  ) %>%
+  .$data # return just the data frame; the '.' symbolizes the previous function's output
+
+# add the common name as an ID column to use later
+occ$ID <- com_name
+
+# clean by removing duplicate occurrences
+occ <- occ %>%
+  distinct(decimalLatitude, decimalLongitude, .keep_all = TRUE) %>%
+  dplyr::select(Species = ID,
+                decimalLatitude,
+                decimalLongitude,
+                year,
+                month,
+                basisOfRecord)
+```
+
+Once you have your full data frame of occurrences for all three species, convert it to a spatial `sf` points object with the CRS set to 4326. Name the final object `occ`.
+
+```{r}
+# create an empty list to hold the output of each iteration
+occ_list <- vector("list", length = length(species))
+
+for (i in seq_along(species)) {
+  # download occurrences for species i
+  occ_i <- occ_data(
+    scientificName = species[i],
+    hasCoordinate = TRUE,
+    geometry = st_bbox(counties),
+    limit = 2000
+  )$data
+
+  # add the common name as an ID column to use later
+  occ_i$ID <- common_name[i]
+
+  # clean by removing duplicate occurrences, then store in the list
+  occ_list[[i]] <- occ_i %>%
+    distinct(decimalLatitude, decimalLongitude, .keep_all = TRUE) %>%
+    dplyr::select(Species = ID,
+                  decimalLatitude,
+                  decimalLongitude,
+                  year,
+                  month,
+                  basisOfRecord)
+}
+
+# bind all elements of the list into a single data frame
+occ <- bind_rows(occ_list)
+
+# convert to a spatial sf points object with the CRS set to 4326
+occ <- st_as_sf(occ, coords = c("decimalLongitude", "decimalLatitude"), crs = 4326)
+```
+
+**Note**: we only used a few of the filter arguments available in the `occ_data()` function, but there are many more worth exploring!
+
+```{r}
+?occ_data
+```
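+
+For instance, here is a hedged sketch using two of those other arguments, `basisOfRecord` and a `year` range (the object name `occ_recent` is just illustrative):
+
+```{r}
+# restrict Elk records to human observations from 2015-2020
+occ_recent <- occ_data(
+  scientificName = "Cervus canadensis",
+  hasCoordinate = TRUE,
+  geometry = st_bbox(counties),
+  basisOfRecord = "HUMAN_OBSERVATION",
+  year = "2015,2020", # a comma separates the range endpoints
+  limit = 500
+)$data
+```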
+
+#### Challenge! {style="color:red"}
+
+Re-write the for loop to retrieve each species' occurrences using `purrr::map()` instead.
+
+```{r}
+# define a function that retrieves and cleans occurrence data for one species
+process_species <- function(species_name, common_name) {
+  occ_data(
+    scientificName = species_name,
+    hasCoordinate = TRUE, # only include records with spatial coordinates
+    geometry = st_bbox(counties), # filter to the state of Colorado
+    limit = 2000 # limit the number of occurrences to download
+  )$data %>%
+    mutate(Species = common_name) %>%
+    distinct(decimalLatitude, decimalLongitude, .keep_all = TRUE) %>%
+    select(Species, decimalLatitude, decimalLongitude, year, month, basisOfRecord)
+}
+
+# map2() is the two-input version of map(), iterating over both vectors at once
+species_occurrences <- map2(species, common_name, process_species)
+
+# combine the results for all species into one data frame
+occ <- bind_rows(species_occurrences)
+
+# convert the combined data frame to an sf object, CRS 4326 (WGS 84)
+occ_sf <- st_as_sf(occ, coords = c("decimalLongitude", "decimalLatitude"), crs = 4326)
+```
+
+### SNOTEL data with [`soilDB`](http://ncss-tech.github.io/soilDB/)
+
+The `soilDB` package provides access to many databases, one of which contains daily climate data from USDA-NRCS SCAN (Soil Climate Analysis Network) stations. We are particularly interested in the SNOTEL (Snow Telemetry) sites, which record daily snow depth across Colorado.
+
+First, you will need to read in the site metadata to get location information. The metadata file is included with the `soilDB` package installation, and you can bring it into your environment with `data()`:
+
+```{r}
+data('SCAN_SNOTEL_metadata', package = 'soilDB')
+```
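+
+A quick look at the structure shows which columns to filter on in the next exercise (this assumes the tidyverse is loaded via `setup.R`):
+
+```{r}
+# inspect the metadata columns and see which networks it covers
+glimpse(SCAN_SNOTEL_metadata)
+count(SCAN_SNOTEL_metadata, Network)
+```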
+
+### Exercise #2 {style="color: red"}
+
+Filter this metadata to only the 'SNOTEL' sites and 'Larimer' county, convert it to a spatial `sf` object (set the CRS to `4326`, WGS 84), and name it 'snotel_sites'.
+
+```{r}
+
+# Filter for SNOTEL sites in Larimer County
+snotel_sites <- SCAN_SNOTEL_metadata %>%
+ filter(Network == "SNOTEL", County == "Larimer") %>%
+ st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326)
+
+```
+
+How many SNOTEL sites are located in Colorado?
+
+The filtered `snotel_sites` object covers only Larimer County, which has 8 SNOTEL sites; counting statewide requires filtering the full metadata to the 'SNOTEL' network across all Colorado rows instead.
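+
+A sketch of both counts, assuming the metadata stores full state names in its `State` column (check `unique(SCAN_SNOTEL_metadata$State)` if unsure):
+
+```{r}
+# SNOTEL sites in the filtered Larimer County object
+nrow(snotel_sites)
+
+# SNOTEL sites statewide (assumes full state names in the State column)
+SCAN_SNOTEL_metadata %>%
+  filter(Network == "SNOTEL", State == "Colorado") %>%
+  nrow()
+```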
+
+### Exercise #3 {style="color: red"}
+
+Below is the sequence of operations you would use to import data for a single SNOTEL site for the years 2020 to 2022. Use `purrr::map()` to pull data for all unique SNOTEL sites in the `snotel_sites` object you just created. Coerce the data to a single data frame, then as a final step use `left_join()` to join the snow depth data to the station data to get the coordinates for all the sites, and make it a spatial object.
+
+```{r}
+# first site ID
+Site <- unique(snotel_sites$Site)[1]
+
+data <- fetchSCAN(site.code = Site,
+                  year = 2020:2022) %>%
+  # fetchSCAN returns a list for each variable; bind them to a single df
+  bind_rows() %>%
+  as_tibble() %>%
+  # keep just the snow depth sensor
+  filter(sensor.id == "SNWD.I") %>%
+  # remove metadata columns
+  dplyr::select(-(Name:pedlabsampnum))
+```
+
+One way to put it all together (a sketch; it repeats the single-site steps above inside `map()` and joins on the shared `Site` column):
+
+```{r}
+# pull snow depth data for every unique SNOTEL site in Larimer County
+snow_depth <- map(unique(snotel_sites$Site), function(site) {
+  fetchSCAN(site.code = site, year = 2020:2022) %>%
+    bind_rows() %>%
+    as_tibble() %>%
+    filter(sensor.id == "SNWD.I") %>%
+    dplyr::select(-(Name:pedlabsampnum))
+}) %>%
+  # coerce the list of data frames to a single data frame
+  bind_rows() %>%
+  # join the station coordinates back on and make the result spatial
+  left_join(snotel_sites, by = "Site") %>%
+  st_as_sf()
+```
+
diff --git a/data/poudre_elevation.RDS b/data/poudre_elevation.RDS
new file mode 100644
index 0000000..ccac2c9
Binary files /dev/null and b/data/poudre_elevation.RDS differ
diff --git a/data/poudre_elevation.tif b/data/poudre_elevation.tif
new file mode 100644
index 0000000..48f8ecc
Binary files /dev/null and b/data/poudre_elevation.tif differ
diff --git a/data/poudre_hwy.dbf b/data/poudre_hwy.dbf
new file mode 100644
index 0000000..4210ede
Binary files /dev/null and b/data/poudre_hwy.dbf differ
diff --git a/data/poudre_hwy.prj b/data/poudre_hwy.prj
new file mode 100644
index 0000000..5ded4bc
--- /dev/null
+++ b/data/poudre_hwy.prj
@@ -0,0 +1 @@
+GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]
\ No newline at end of file
diff --git a/data/poudre_hwy.shp b/data/poudre_hwy.shp
new file mode 100644
index 0000000..c154b50
Binary files /dev/null and b/data/poudre_hwy.shp differ
diff --git a/data/poudre_hwy.shx b/data/poudre_hwy.shx
new file mode 100644
index 0000000..3d89126
Binary files /dev/null and b/data/poudre_hwy.shx differ
diff --git a/data/poudre_points.dbf b/data/poudre_points.dbf
new file mode 100644
index 0000000..c5a5774
Binary files /dev/null and b/data/poudre_points.dbf differ
diff --git a/data/poudre_points.prj b/data/poudre_points.prj
new file mode 100644
index 0000000..5ded4bc
--- /dev/null
+++ b/data/poudre_points.prj
@@ -0,0 +1 @@
+GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]
\ No newline at end of file
diff --git a/data/poudre_points.shp b/data/poudre_points.shp
new file mode 100644
index 0000000..8fd08fc
Binary files /dev/null and b/data/poudre_points.shp differ
diff --git a/data/poudre_points.shx b/data/poudre_points.shx
new file mode 100644
index 0000000..af92920
Binary files /dev/null and b/data/poudre_points.shx differ
diff --git a/data/poudre_spatial_objects.RData b/data/poudre_spatial_objects.RData
new file mode 100644
index 0000000..aa65c2d
Binary files /dev/null and b/data/poudre_spatial_objects.RData differ