Friday, August 8, 2025

Module 6: Corridor Analysis

In this week's module we learned how to do corridor analysis, which considers a range of possible paths instead of a single optimal path. This is useful because the least-cost path is not always the best path for a given purpose; a slightly more expensive route may suit the animal or project better. In Scenario 4 we once again needed to work through the conceptual steps before starting our analysis.

Step 1: State the Problem

Problem: Potential movement of black bears between the two protected national forest areas.

Goal: Create a suitability and corridor analysis of the black bear movement between the national forests.

Step 2: Break Down the Problem

Objective 1: Reclassify the roads into suitability classes.

Input dataset: Roads shapefile

Objective 2: Reclassify the slopes into suitability classes.

Input dataset: Elevation raster

Objective 3: Reclassify the land cover into suitability classes.

Input dataset: Land cover raster

Objective 4: Invert the suitability model.

Input dataset: The reclassification models of the roads, elevation, and land cover.

Objective 5: Create a corridor analysis.

Input dataset: The cost distance surfaces for the two national forests.
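
As a rough sketch of how Objectives 1 through 4 could be scripted with arcpy (ArcGIS Pro's Python library) instead of built interactively, here is one way it might look. The paths, reclass breaks, and land cover codes are placeholders I made up, not the exercise data:

```python
import arcpy
from arcpy.sa import (EucDistance, Slope, Reclassify,
                      RemapRange, RemapValue)

arcpy.env.workspace = r"C:\GIS\Mod6\corridor.gdb"  # placeholder path
arcpy.CheckOutExtension("Spatial")

# Objective 1: suitability from distance to roads (farther = better for bears)
road_suit = Reclassify(EucDistance("roads"), "VALUE",
                       RemapRange([[0, 500, 1],
                                   [500, 2000, 5],
                                   [2000, 100000, 10]]))  # made-up breaks

# Objective 2: suitability from slope derived from the elevation raster
slope_suit = Reclassify(Slope("elevation", "DEGREE"), "VALUE",
                        RemapRange([[0, 10, 10],
                                    [10, 30, 5],
                                    [30, 90, 1]]))  # made-up breaks

# Objective 3: suitability from land cover codes (made-up class values)
lc_suit = Reclassify("landcover", "VALUE",
                     RemapValue([[41, 10], [42, 10],  # forest = best
                                 [81, 5],             # pasture = middling
                                 [21, 1]]))           # developed = worst

# Objective 4: average the three models, then invert so that high
# suitability becomes low cost for the distance tools
suitability = (road_suit + slope_suit + lc_suit) / 3
cost_surface = 11 - suitability
cost_surface.save("bear_cost")
```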

Step 3: Explore Input Datasets

Once again this came from ModelBuilder within ArcGIS Pro, which is helpful because it can run every step of the workflow for you instead of you running each tool separately. It also lets you work through the problem before beginning the work. The image below is my flowchart for this scenario.

When I did the analysis, I made sure to get the least-cost path for both national forest polygons. First, I ran Cost Distance and Cost Path for each polygon. After converting the least-cost path to a polyline, I ran the corridor analysis the same way as in Scenario 3, using the Corridor tool with the two cost distance surfaces as inputs. The corridor output has no graduated color ramp by default, so I went to the classify settings, switched to three breaks, and entered a threshold for each one. As in Scenario 3, I multiplied the minimum corridor value by 1.05, 1.1, and 1.15, which showed how the areas become less optimal as you move outward from the best path. The image below shows the corridor analysis of the best path between the two national forests, based on the criteria we were given at the beginning: distance to roads, elevation, and land cover.
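
For reference, here is roughly how those corridor steps could look in arcpy, assuming the inverted cost surface is saved as "bear_cost" and the forest polygons are named "forest_a" and "forest_b" (all placeholder names):

```python
import arcpy
from arcpy.sa import CostDistance, CostPath, Corridor

arcpy.env.workspace = r"C:\GIS\Mod6\corridor.gdb"  # placeholder path
arcpy.CheckOutExtension("Spatial")

# Cost distance (with backlink rasters) out from each forest polygon
cd_a = CostDistance("forest_a", "bear_cost", out_backlink_raster="backlink_a")
cd_b = CostDistance("forest_b", "bear_cost", out_backlink_raster="backlink_b")

# Least-cost path from one forest to the other, then converted to a polyline
path_a = CostPath("forest_b", cd_a, "backlink_a", "BEST_SINGLE")
arcpy.conversion.RasterToPolyline(path_a, "lcp_a")

# Corridor: sums the two accumulated-cost surfaces
corridor = Corridor(cd_a, cd_b)
corridor.save("bear_corridor")

# Class breaks at 1.05x, 1.10x, and 1.15x the corridor minimum,
# the same thresholds used in the symbology
min_cost = float(arcpy.management.GetRasterProperties(
    "bear_corridor", "MINIMUM").getOutput(0))
print([round(min_cost * f) for f in (1.05, 1.10, 1.15)])
```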

Thinking about the data after the initial analysis, there are more factors I would investigate if we could get more information. First, I would look for criteria for the rivers in the area, since the black bears might want to stay close to water as well. Another factor would be any developments in the area that could disturb the bears and shift where the corridor should go.

This scenario did have a couple of hiccups in getting the data to represent the best paths. Even though I did all the analysis correctly, I was only able to get one corridor path. I tried different approaches, but it gave me only one route each time. I am not sure of the best way to fix this, but the exercise did note that the Cost Distance, Cost Path, and Corridor tools are being retired and replaced with newer ones, which might help with getting the correct analysis in the future. It is also important to take your time and try to understand what you are doing at each step. Lastly, make sure to name your outputs so you can tell which features they represent, and do not reuse the same name within the same geodatabase.

Wednesday, August 6, 2025

Module 6: Suitability Analysis

In this week's module we first focused on suitability analysis maps for a potential developer in Jackson County, Oregon. We needed to state the problem, break down the problem, and create a flowchart/ModelBuilder model for this project.

Step 1: State the Problem

Problem: How much land is suitable to build on?

Goal: Create a suitability map in raster form to find the areas that are suitable to build on.

Step 2: Break Down the Problem

Objective 1: Reclassify the land cover into suitability classes.

            Input dataset: Land cover raster

Objective 2: Reclassify the soils into suitability classes.

            Input dataset: Soils shapefile

Objective 3: Reclassify the slopes into suitability classes.

            Input dataset: Elevation raster

Objective 4: Reclassify the rivers into suitability classes.

            Input dataset: Rivers shapefile

Objective 5: Reclassify the roads into suitability classes.

            Input dataset: Roads shapefile
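
Since Objectives 2, 4, and 5 start from shapefiles, they have to be converted to rasters before they can be reclassified. Here is a hypothetical sketch for the soils layer; the "SOIL_CLASS" field, paths, and class values are invented for illustration:

```python
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.env.workspace = r"C:\GIS\Mod6\suitability.gdb"  # placeholder path
arcpy.CheckOutExtension("Spatial")

# Rasterize the soil polygons on an assumed class field
arcpy.conversion.PolygonToRaster(r"C:\GIS\Mod6\data\soils.shp", "SOIL_CLASS",
                                 "soils_ras", cellsize=30)

# Reclassify each soil class into 1 (worst) through 3 (best)
soil_suit = Reclassify("soils_ras", "SOIL_CLASS",
                       RemapValue([["loam", 3],
                                   ["clay", 2],
                                   ["rock", 1]]))
soil_suit.save("soil_suit")
```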

Step 3: Explore Input Datasets

The image below shows my ModelBuilder model for this scenario and the steps I took to finish it.



Doing the three steps is helpful because you can work through the issues of the project before starting the analysis. Also, ModelBuilder lets you create all your data at once instead of doing each step separately. We created a map with a side-by-side comparison of how the data looks when every criterion has equal weight versus when the weight changes depending on the feature. This is helpful because it gives you different scenarios of what is least to most suitable in your dataset. The image below shows that, with equal weights, the least suitable areas are on mountaintops and the most suitable are by a river. When you change to unequal weights the pattern stays the same, but the square kilometers shift higher into group two.
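
Here is a minimal sketch of how that equal- versus unequal-weight comparison could be scripted, assuming the five reclassified suitability rasters already exist; the layer names and weights are my own placeholders:

```python
import arcpy
from arcpy.sa import Raster

arcpy.env.workspace = r"C:\GIS\Mod6\suitability.gdb"  # placeholder path
arcpy.CheckOutExtension("Spatial")

lc, soil, slp, riv, rds = (Raster(n) for n in
    ("lc_suit", "soil_suit", "slope_suit", "river_suit", "road_suit"))

# Equal weight: each of the five criteria contributes 20%
equal = 0.2 * (lc + soil + slp + riv + rds)
equal.save("suit_equal")

# Unequal weight: emphasize slope and soils (made-up weights, summing to 1)
weighted = 0.10*lc + 0.25*soil + 0.35*slp + 0.15*riv + 0.15*rds
weighted.save("suit_weighted")
```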

Looking at this information gives the developer a better idea of which areas seem the most suitable and roughly how many square kilometers they might be able to use in each.

Friday, August 1, 2025

Module 5: Damage Assessment

This week's module was about how to assess damage to structures after Hurricane Sandy. We were supposed to use FEMA "standards" to get the information they might need to help with relief.

The first part was to make a map that showed the track of Hurricane Sandy from the Atlantic Ocean to the eastern coast. We learned how to create a line from points that were created from an XY table. The geoprocessing tool is Points to Line (Data Management Tools), and it showed the path of Hurricane Sandy. In the next part of the module we needed to change the symbology of the points so they would represent the different phases of a hurricane. We did this by creating marker symbols, which makes it possible to turn the points into clear symbology. You click on the symbology tab to change the different values into the correct symbols; we were given the exact ones to select. Under the properties tab, go to the layers menu, and under the Appearance tab choose Font, which gives you more options for your symbology. The image below shows the final outcome of the path of Hurricane Sandy.
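
A rough arcpy sketch of those two steps, with placeholder file and field names, might look like this:

```python
import arcpy

gdb = r"C:\GIS\Mod5\sandy.gdb"  # placeholder path

# XY table -> points (field names and WGS84 coordinates are assumptions)
arcpy.management.XYTableToPoint(r"C:\GIS\Mod5\sandy_track.csv",
                                gdb + r"\sandy_points",
                                "Longitude", "Latitude",
                                coordinate_system=arcpy.SpatialReference(4326))

# Points to Line (Data Management Tools), sorted by an assumed datetime field
arcpy.management.PointsToLine(gdb + r"\sandy_points",
                              gdb + r"\sandy_track_line",
                              None, "ObsDateTime")
```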

In the next portion of this module we needed to create a survey in Survey123 for ArcGIS Online. This is useful to learn because you can send it out to the public, and they can help you gather data after a tragic event such as a hurricane. This survey was for the damage assessment of Hurricane Sandy. You can make questions required so that they must be answered before continuing with the rest of the survey. Another important aspect is that respondents can add images along with the exact date/time they were taken. Lastly, the survey can include a map, and a point is added when the respondent opens that portion of the survey. The survey can be viewed (instructional view only) at https://arcg.is/1zTePH1.

The final portion of the module was getting the images for pre- and post-Sandy, adding domains, and analyzing the data for the "FEMA" client. We were given the images for this exercise, but when gathering your own data, good imagery is important for assessing any damage. First we needed to create a mosaic dataset and then add the raster data to it. Having both images on top of each other is helpful because you can flip back and forth to see where the damage is located. Next we needed to add domains to serve as attribute values for the points we would use to assess the damage from Hurricane Sandy. Right-click your geodatabase and choose Domains; it opens a table similar to an attribute table. We put in the domains we wanted for this assessment: inundation, structure damage, wind damage, and structure type. After creating each domain you add the codes/descriptions that will be used when interpreting the data. The image below shows the different domains with a view of the codes/descriptions for structure damage.
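
Here is a hypothetical sketch of the mosaic dataset and domain setup in arcpy; the paths are placeholders and the damage codes are my guess at the categories, not the exercise's exact list:

```python
import arcpy

gdb = r"C:\GIS\Mod5\sandy.gdb"  # placeholder path

# Mosaic dataset for the pre/post imagery (UTM 18N assumed for New Jersey)
arcpy.management.CreateMosaicDataset(gdb, "sandy_imagery",
                                     arcpy.SpatialReference(26918))
arcpy.management.AddRastersToMosaicDataset(gdb + r"\sandy_imagery",
                                           "Raster Dataset",
                                           r"C:\GIS\Mod5\imagery")

# Coded-value domain for structure damage; the other three domains follow
# the same pattern (the code list here is assumed, not from the exercise)
arcpy.management.CreateDomain(gdb, "StructureDamage",
                              "Observed damage level", "SHORT", "CODED")
for code, desc in [(0, "No damage"), (1, "Affected"), (2, "Minor damage"),
                   (3, "Major damage"), (4, "Destroyed")]:
    arcpy.management.AddCodedValueToDomain(gdb, "StructureDamage", code, desc)
```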


After creating the domains and the codes/descriptions, we created a new feature class with fields tied to those domains. Even though the directions say "No" for the Allow Null setting, you do need to say yes in order to actually create the features. In this portion we needed to classify the structure damage. When trying to identify the properties, I looked at the pre-storm image to judge whether a structure looked like a place people would vacation or live versus a commercial building. The scale that worked best for me was between 1:500 and 1:250, looking at the surrounding area to decide whether the structure was affected or undamaged. The most difficult decisions were drawing the line on whether a structure was inundated or not: there is a lot of sand around, but in these areas many of the structures are raised off the ground, so they could be perfectly fine. Wind damage was also hard to judge, because from an image alone it is difficult to know what was wind and what was water. I would have liked another image taken at the same time of day as the pre-storm image, because it was hard to tell what was damage and what was shadow. I also think we should add a commercial structure type; I kept having to choose unknown when I knew a building was commercial rather than industrial. A last useful piece of information would be structure height, to know whether the structure was inundated by the water and sand. The image below shows the points I put on each structure.
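
A sketch of how that feature class could be created in arcpy, with each field tied to its domain (field and domain names assumed):

```python
import arcpy

gdb = r"C:\GIS\Mod5\sandy.gdb"  # placeholder path

# Point feature class for the damage assessment
arcpy.management.CreateFeatureclass(gdb, "damage_points", "POINT",
                                    spatial_reference=arcpy.SpatialReference(26918))

# Attach each coded-value domain to a field; fields are nullable so empty
# points can be placed first and attributed afterward
fc = gdb + r"\damage_points"
for field, domain in [("StrucDamage", "StructureDamage"),
                      ("Inundation", "Inundation"),
                      ("WindDamage", "WindDamage"),
                      ("StrucType", "StructureType")]:
    arcpy.management.AddField(fc, field, "SHORT",
                              field_is_nullable="NULLABLE",
                              field_domain=domain)
```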


Lastly, we needed to take the data we created and analyze it to understand what happened post-Hurricane Sandy. To get the distance information I used the Near geoprocessing tool, which gave the distance of each structure from the coastline. It showed that four structures were beyond the 300-meter radius. To capture more structures I did have to adjust my coastline a little, because it was not catching them all. Looking at the near distance field of the attribute table, I saw that most of the structures were 92 meters away or more. This is interesting because if you focus on just one distance you might not get information for all the structures. It looks like the closer a structure is to the coastline, the more likely it is destroyed or has major damage, while farther from the coastline the structures look less affected by the hurricane. I would say my assessment is reliable because I kept the same thought process while analyzing the data. The only problem was that, seeing only the tops of structures, I had a hard time telling how much was damaged; I based the decision on the sand and debris surrounding each structure.
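
A minimal sketch of that Near analysis in arcpy, with placeholder names, might look like this:

```python
import arcpy

gdb = r"C:\GIS\Mod5\sandy.gdb"  # placeholder path
points = gdb + r"\damage_points"

# Near adds NEAR_FID and NEAR_DIST fields (meters here, since the data is UTM)
arcpy.analysis.Near(points, gdb + r"\coastline")

# Count the structures beyond the 300-meter radius
beyond = [d for (d,) in arcpy.da.SearchCursor(points, ["NEAR_DIST"])
          if d > 300]
print(f"{len(beyond)} structures are more than 300 m from the coastline")
```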
