Sunday, July 27, 2025

Module 4: Coastal Flooding


In this week's module we focused on using DEM and LiDAR data to model a 1-meter storm surge. This module was more difficult than the previous ones because it required a lot of geoprocessing before any analysis could be done. I had to rely on past modules to get the information I needed.

We were given two datasets: dem_lidar and dem_usgs. The dem_lidar is a high-resolution elevation model for a portion of Collier County, FL created from LiDAR data. The dem_usgs is a coarser elevation model from the USGS created with traditional methods (i.e., photogrammetry) before LiDAR data were available. The first step was to reclassify each DEM so that values of 1 meter or less were flagged. After that we used the Raster to Polygon geoprocessing tool so that we could use the Spatial Join tool to count how many buildings were affected by the storm surge. This information matters because it shows how each dataset interprets the storm surge.

After creating the polygons, I used Select By Attributes on the Join_Count field to get the selected buildings within each dataset. This helped me focus on just the buildings I needed for the table, and having them as their own layer made it easier to change the colors when making the map. The last thing needed for the map was to determine the errors of omission and errors of commission. Errors of omission are buildings impacted by the storm surge based on the LiDAR DEM (the "true" scenario) but not identified as such based on the USGS DEM. Errors of commission are buildings not impacted by the storm surge based on the LiDAR DEM but incorrectly identified as impacted based on the USGS DEM. The image below shows how the buildings were classified based on whether they fell within both datasets, one dataset, or neither.
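The comparison logic behind the two error types can be sketched in plain Python (this is an illustration, not the arcpy workflow; the building IDs and elevations are made-up sample values):

```python
# Illustrative sketch: flag buildings as impacted when the DEM value at the
# building is 1 meter or less, then compare the two DEMs. Sample elevations
# are in feet and are converted to meters first (1 ft = 0.3048 m).

FT_TO_M = 0.3048

# Hypothetical building IDs mapped to DEM elevations in feet.
lidar_ft = {"A": 2.0, "B": 5.0, "C": 2.5, "D": 12.0}
usgs_ft = {"A": 2.5, "B": 2.0, "C": 6.0, "D": 11.0}

def impacted(dem_ft):
    """Buildings whose elevation converts to 1 meter or less."""
    return {bid for bid, ft in dem_ft.items() if ft * FT_TO_M <= 1.0}

lidar_hit = impacted(lidar_ft)   # the "true" scenario
usgs_hit = impacted(usgs_ft)

omission = lidar_hit - usgs_hit    # impacted per LiDAR, missed by USGS
commission = usgs_hit - lidar_hit  # flagged by USGS, not impacted per LiDAR
```

With these sample values, building C is an error of omission and building B is an error of commission, which mirrors the comparison done in the map.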


As stated earlier, this module was one of the most difficult because you not only had to convert from feet to meters but also use many geoprocessing tools before doing any analysis.

Saturday, July 19, 2025

Module 3: LiDAR Visibility


This week's module covered visibility analysis in ArcGIS Pro. There are two types of visibility analysis: viewshed analysis and line-of-sight (LOS) analysis. Viewshed analysis answers the question, "Which areas in a landscape are visible from an observer point?" Given one or more locations, the output is a map of the visible areas. LOS analysis answers the question, "Which segments are visible along a specific sight line?" Given one or more pairs of locations, the output is a map of visible segments along a set of lines. It is important to note that both analyses use elevation models, such as a digital elevation model (DEM) or a triangulated irregular network (TIN).

During this week's module we took four courses (Introduction to 3D Visualization, Performing Line of Sight Analysis, Performing Viewshed Analysis in ArcGIS Pro, and Sharing 3D Content Using Scene Layer Packages). Each course was taken on the ESRI website, where the concepts were explained and then applied in an exercise using data provided at the beginning.

The first course, Introduction to 3D Visualization, shows how data can be displayed in 3D. There are two kinds of 3D maps: the local scene and the global scene. The local scene is used for looking at data within a city, whereas the global scene is better for looking at things at a global extent, such as flight patterns. The next part of the course covered how elevation is represented in 3D maps. There are three elevation types: On the Ground, Relative to the Ground, and Absolute Height. As the ESRI website says, "Elevation types are a property of the layer and are dependent on the type of elevation surface that you choose in your scene." You can also manipulate height values using techniques such as cartographic offset and vertical exaggeration. Cartographic offset vertically adjusts the height value (or z-value) of the entire layer, raising or lowering all features in the layer by a given height. Vertical exaggeration is used to emphasize subtle changes in a surface. The course also covered extruding 2D features into 3D features. Extrusion enables you to take point, line, and polygon features and stretch them vertically to create real-world 3D objects. There are four ways to extrude data: adding to the feature's base height, adding to the feature's minimum height, adding to the feature's maximum height, and extruding to an absolute value. Lastly, we learned how to add 3D symbology to our maps, including trees, water, and buildings, and how to make those features look more realistic, such as moving water.

The next course, Performing Line of Sight Analysis, covered determining whether two points in space are intervisible. A line of sight calculates intervisibility between the first vertex (the observer) and the last vertex (the target) along a straight line between the two, taking into account any obstructions provided by a surface or multipatch feature class. Visibility between the points is determined along the sight line. There are three general steps used to perform this analysis: first, determine observers and targets; second, construct sight lines; third, determine the line of sight. It is important to note that the color of a sight line indicates where the surface is visible and where it is hidden.
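The core intervisibility test can be illustrated with a minimal plain-Python sketch (not the ArcGIS implementation; the elevation profiles are made up): walk a sampled elevation profile between observer and target and check whether any intermediate point rises above the straight sight line.

```python
# Minimal line-of-sight sketch: the target is visible if no intermediate
# sample of the terrain profile pokes above the straight line connecting
# observer and target.

def line_of_sight(profile, obs_height=0.0):
    """Return True if the last point is visible from the first.

    profile -- elevations sampled at equal spacing, observer first.
    """
    n = len(profile)
    z0 = profile[0] + obs_height  # observer eye level
    zt = profile[-1]              # target elevation
    for i in range(1, n - 1):
        # Height of the sight line above sample point i.
        line_z = z0 + (zt - z0) * i / (n - 1)
        if profile[i] > line_z:
            return False  # terrain blocks the view
    return True

flat = [10, 10, 10, 10, 10]   # nothing in the way
ridge = [10, 10, 30, 10, 10]  # a ridge between observer and target
```

Here `line_of_sight(flat)` is visible while the ridge profile is blocked, matching the idea that the sight line's color changes where the surface hides the target.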

In the third course, Performing Viewshed Analysis in ArcGIS Pro, we learned how to modify the input features to model visibility from a known vantage point. The Viewshed tool creates an output raster that models the areas visible from the given vantage points, considering the height of the observer along with any obstructions surrounding the point. The tool is controlled through fields added to the input data that set the observation point elevation values, vertical offsets, horizontal and vertical scanning angles, and scanning distances. In the exercise, this helped an analyst see where lights cross the landscape depending on the height and the radius of the light beam; changing the height shows where more or fewer areas overlap.
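The difference from a single line of sight is that a viewshed classifies every cell as visible or hidden. A common way to do this along one transect is the "maximum slope so far" test, sketched below in plain Python (an illustration only, not the Viewshed tool's algorithm; elevations and observer height are made up):

```python
# Viewshed sketch along a single transect: a point is visible when the
# slope from the observer's eye to it is at least as steep as the
# steepest slope encountered so far.

def viewshed_1d(profile, obs_height=1.8):
    """Mark each point along the profile visible (True) or hidden (False)."""
    z0 = profile[0] + obs_height  # observer eye level
    visible = [True]              # the observer's own cell
    max_slope = float("-inf")
    for i in range(1, len(profile)):
        slope = (profile[i] - z0) / i
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible

profile = [10, 12, 20, 12, 25]
```

With this profile, the hill at index 2 hides the lower ground behind it, and even the higher point at the end stays hidden because its slope from the observer is shallower than the hill's.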

The fourth and final course, Sharing 3D Content Using Scene Layer Packages, taught us how to prepare data and share it with people who have ESRI accounts. ArcGIS Pro allows you to investigate and visualize your data in an intuitive, interactive 3D environment from any angle or perspective. You can use a scene for inspection and exploration workflows, for visualization, for communicating analytical output, or for storytelling about real-world projects and scenarios. Once again, it is important to know whether you want a local or a global scene. We also learned the three-step workflow for authoring your data: 1. Load your data, 2. Display 2D data as 3D layers, and 3. Convert 2D data to 3D data. Before presenting your data there are some tips to remember: 1. Have all your source data and the scene in the same coordinate system, 2. Structure your content and decide what the user must see, 3. Define an area of interest (AOI) for your scene, and 4. 3D symbology is required for feature layers to be published and shared. Sharing your data has the benefit that a wide variety of people can see it, and it can be shared easily. In ArcGIS Pro you can save your scene layer and its data as a scene layer package (an SLPK file). To determine the most appropriate option for sharing scene content with your intended audience, you must answer the following questions: 1. With whom do you want to share the content? (You can share with the public or a specific group.) 2. How do you want to share the content? 3. What type of content will be shared? (Every content item published to a portal can be shared.) To upload a scene layer package and publish a hosted scene layer, you perform the following steps: 1. Sign in to your organization, 2. Open My Content, 3. Add the scene layer package on your computer as an item, 4. Type a title and tags that describe the scene layer package, 5. Check the box next to Publish This File As A Hosted Layer, and 6. Add the item.

All of the courses were very useful: first you learn what you will be doing in the exercise and why, then you get step-by-step instructions for working with the provided data, and lastly there is a quiz at the end to reinforce everything you learned.

Sunday, July 13, 2025

Module 2: LiDAR (Wetland Delineation)


This week's module was about using LiDAR data to create different products, including a Digital Elevation Model (DEM), a Digital Surface Model (DSM), canopy density, vegetation height, and a LiDAR map.

The first step was downloading data from a state website. This is important because once you know how to download the data, you can use many different websites to get LiDAR data. The data comes as a .laz file that needs to be converted to LAS format. To convert the data, open the "Convert LAS" geoprocessing tool and add your data; this creates an output LAS dataset (.lasd file). To view the data, go to the Insert tab, click the down arrow under New Map, and click New Local Scene. This opens the LiDAR data in 3D.

Before the second step, we need to make sure we have the 3D Analyst and Spatial Analyst extensions by clicking Project and then Licensing; if the extension names appear in black, you have access to them. Next, we need to calculate the forest height from the LiDAR point cloud. First, open the "Point File Information" geoprocessing tool and run it on the .lasd dataset created earlier. Make sure the coordinate system matches the .lasd data you got from the website. (It is important to note that many surveyors across the US use State Plane coordinates when they collect data.)

After that we need to create a DEM. Click your data in the Contents pane, then click LAS Dataset Layer, click the LAS Points arrow, and change the points to Ground. Then search for the "LAS Dataset to Raster" geoprocessing tool. Select the .lasd file as the input and change the value field to Elevation. Under interpolation type select Binning, with the cell assignment set to Maximum and the void fill method set to Natural Neighbor. Set the output data type to Integer, the sampling type to Cell Size, the sampling value to 6, and the Z factor to 1. This produces a DEM that displays in black and white; to change the colors, right-click the data and adjust the symbology to what you want on the map.

To create a DSM, change the LAS Dataset Layer filter to Non-Ground and run "LAS Dataset to Raster" again with the same settings; it will again display in black and white. After that, to get the tree height, open the "Minus" geoprocessing tool and set Input Raster 1 to the DSM file and Input Raster 2 to the DEM file. The resulting height raster's attribute table shows where the negative and positive values are located: the negative values fall where the roads and the shadows of the trees are, while the positive values fall where the trees are, at their different heights.
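The Minus step is a simple cell-by-cell subtraction, which can be sketched in plain Python on a tiny made-up raster (the real workflow uses the arcpy Minus tool on full rasters):

```python
# Sketch of the "Minus" step: subtract the ground DEM from the
# first-return DSM cell by cell to get vegetation height.
# The 2x2 grids below are made-up elevations in feet.

dsm = [[120.0, 135.0],
       [110.0, 118.0]]
dem = [[100.0, 102.0],
       [112.0, 101.0]]

height = [[dsm[r][c] - dem[r][c] for c in range(len(dsm[0]))]
          for r in range(len(dsm))]
# Negative cells (e.g. roads, shadow artifacts) are where the DSM dips
# below the DEM; positive cells are tree canopy.
```

The negative value in the lower-left cell plays the same role as the negative values seen along the roads in the module data.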

The map below shows both the LiDAR data and the DEM derived from it. The visible points are colored to represent the height of the topography in the area, ranging from dark blue at the lowest heights in the northern portion of the data to red in the southern portion. You can also see where the roads are located with the help of the topographic basemap, as well as the slight rises and dips within the point cloud.


The third step was to calculate biomass density, which requires several geoprocessing tools. The first is "LAS to MultiPoint," which takes the .lasd data from the beginning. To create the multipoint file that represents the bare earth, set the value to the average point spacing from the attribute table of the Point File Information dataset created earlier, and set the class code to 2. To create the multipoint file for the vegetation, use the same value but set the class code to 1. After that, convert the multipoint files to rasters with the "Point to Raster" geoprocessing tool: use the bare-earth file as the input, set the value field to OID, set the cell assignment to Count, and use the average point spacing times 3, rounded to the nearest whole number, as the cell size. Do the same for the vegetation.

Next, create a binary raster where 1 is assigned to all null cells by running the "Is Null" geoprocessing tool on both rasters you just created. Then use the "Con" tool so that cells flagged as null become 0 and all other cells pull their value from the original raster: set the input conditional raster to the Is Null result, the input true raster or constant value to 0, and the input false raster or constant value to the original raster dataset. Do the same for the vegetation dataset. Next, the "Plus" tool combines the vegetation and bare-earth count rasters to derive the overall return density. Convert the Plus result from integer to floating point with the "Float" tool, which allows a true representation of density in the next step. Lastly, calculate the density with the "Divide" tool, setting input raster 1 to the vegetation count dataset and input raster 2 to the Float result. Once again the result displays in black and white, so change the symbology to what you want.
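The whole chain of raster tools reduces to simple per-cell arithmetic, sketched here in plain Python (an illustration of the logic, not arcpy; `None` stands in for NoData and the counts are made up):

```python
# Plain-Python stand-ins for the Is Null / Con / Plus / Float / Divide
# chain: replace null counts with 0, sum ground and vegetation returns,
# then divide vegetation returns by the total.

def con_null_to_zero(grid):
    """Is Null + Con: null cells become 0, other cells keep their count."""
    return [[0 if v is None else v for v in row] for row in grid]

ground = [[4, None], [2, 5]]  # bare-earth return counts per cell
veg = [[6, 3], [None, 5]]     # vegetation return counts per cell

g = con_null_to_zero(ground)
v = con_null_to_zero(veg)

density = []
for gr, vr in zip(g, v):
    row = []
    for gc, vc in zip(gr, vr):
        total = float(gc + vc)                    # Plus, then Float
        row.append(vc / total if total else 0.0)  # Divide
    density.append(row)
```

Each output cell is the fraction of returns that came from vegetation, which is why higher values indicate denser canopy.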

The image below shows canopy density from the LiDAR data. The density map conveys which areas have trees and which do not. This would be helpful to foresters because it shows how dense the vegetation in the area is: the higher the value, the denser the vegetation, because a larger share of the returns reflect off the canopy. It can also show where roads and houses are located.


The fourth and final step was to create a chart. This is done by clicking the height raster (the result from the Minus tool), going to the Data tab, clicking the Create Chart arrow, and selecting Histogram. For the histogram to appear, click Band_1, which shows the values and counts from the raster.
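Under the hood, a histogram just bins the cell values and counts them, which a short plain-Python sketch makes concrete (the height values and bin settings are made up):

```python
# Sketch of the histogram step: bin vegetation heights into equal-width
# bins and count the values in each, the way the Create Chart histogram
# summarizes Band_1.

def histogram(values, bins, lo, hi):
    """Count values in `bins` equal-width bins over [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

heights = [5, 12, 18, 22, 25, 31, 34, 38, 55]  # made-up heights in feet
```

Here `histogram(heights, 3, 0, 60)` puts most values in the middle bin, the same kind of central peak the module's chart shows.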

The final image below shows the height of the vegetation and a histogram chart from the LiDAR data. The graph shows that the most common tree height is 63.25 feet, and the distribution of heights across the landscape is close to a normal bell curve. This suggests the stand has been growing for roughly the same amount of time, so if trees needed to be harvested, a similar yield could be obtained across the landscape. Lastly, the values show that some areas are topographically higher, and not just because of the trees themselves.



Wednesday, July 9, 2025

Module 1: Crime Analysis


This week's module focused on how to use ArcGIS Pro to make crime analysis maps. There are three different types of maps used to represent crime: grid-based thematic, kernel density, and Local Moran's I. Grid-based thematic mapping overlays a regular grid of polygons on top of the crime events to produce a count of crime events per grid cell. Kernel density mapping weights points lying near the center of a search area more heavily than those lying near the edge. Local Moran's I asks whether there are similar features nearby.

Grid-based Thematic

To make the grid-based thematic hotspot map, you again need the Spatial Join tool, which attaches the 2017 homicide counts to the grid cells. With Select By Attributes you then select on the correct column to get the information you want, which was any grid cell with a homicide count greater than zero. I exported this selection under the Data tab by choosing "Layer From Selection," which creates a feature layer from the selection. Next, we needed the top 20% of the selected cells; this is done by opening the attribute table, sorting the Join_Count field, and dividing the total number of records by 5, rounding down. Since there is no selection tool for this, you do it manually by clicking the first row and then shift-clicking the last row you want, selecting everything in between. After all of that, we dissolve the features together: add a new field called Dissolve, use the field calculator to give the entire column the same value, and then run the Dissolve tool to create the multipart features. You can see the result in the image below.
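The "top 20%, rounded down" cutoff can be sketched in plain Python (the join counts below are made-up sample values, not the module data):

```python
# Sketch of the top-20% selection: sort grid cells by homicide join
# count and keep the top fifth of the records, rounding the record
# count down as described above.

import math

join_counts = [1, 4, 2, 7, 1, 3, 5, 2, 1, 6, 2, 1]  # made-up cell counts

n_top = math.floor(len(join_counts) / 5)  # 12 records / 5, rounded down
top_cells = sorted(join_counts, reverse=True)[:n_top]
```

With 12 records, the cutoff rounds down to 2 cells, which mirrors the manual sort-and-shift-click selection in the attribute table.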

Kernel Density

First, search for the Kernel Density tool. As stated earlier, make sure there are no spaces in your folder or feature names. This was done similarly to the kernel density map we made for Washington, DC. Make sure the output cell size and the search radius are in the same units, which here is feet. After creating the map, we needed to make only two breaks, based on the mean and the maximum value. To do this, right-click the feature and open the Symbology tab, click the More button, and select "Show Statistics," which shows all the information needed to set the break values. Next, we need to convert the raster to a polygon, but first the feature must be reclassified with the Reclassify tool. Then use the Raster to Polygon tool to save the feature to a feature class within the project geodatabase. Lastly, we used Select By Attributes to find the value of 2, which represents three times the mean of the original kernel density. The image below shows the kernel density from the 2017 hotspots.
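The two-break reclassification can be sketched in plain Python (an illustration of the break logic only; the density values are made up):

```python
# Sketch of the two-break reclassify: cells at or above the mean fall in
# class 1, cells at or above three times the mean fall in class 2, and
# everything below the mean drops out of the hotspot.

densities = [0.0, 0.0, 10.0, 20.0, 30.0, 60.0]  # made-up density values
mean = sum(densities) / len(densities)

def reclass(v, mean):
    if v >= 3 * mean:
        return 2  # the class selected with Select By Attributes
    if v >= mean:
        return 1
    return 0      # below the mean: excluded

classes = [reclass(v, mean) for v in densities]
```

Only cells in class 2 (three times the mean or more) end up in the final hotspot selection.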

Local Moran's I

The last map we produced was the Local Moran's I hotspot map, which uses crime counts or rates aggregated within meaningful boundaries. Once again we used the Spatial Join tool, this time between census tracts and the 2017 homicides. Then we created a new field for crime rate and used the field calculator to compute homicides per 1,000 housing units. Next, we ran the Cluster and Outlier Analysis (Anselin Local Moran's I) tool, leaving the parameters at their default settings. After that we selected the high-high clusters, because they show the areas that had a high amount of crime per 1,000 housing units. Lastly, we dissolved those features so the map looks clean, as you can see below.
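The rate calculation done in the field calculator is simple arithmetic, sketched here in plain Python (the tract IDs and counts are made-up sample values):

```python
# Sketch of the crime-rate field calculation: homicides per 1,000
# housing units for each census tract.

tracts = [
    {"tract": "001", "homicides": 4, "housing_units": 1600},
    {"tract": "002", "homicides": 0, "housing_units": 2500},
    {"tract": "003", "homicides": 9, "housing_units": 3000},
]

for t in tracts:
    # Same formula entered in the field calculator for the new rate field.
    t["rate"] = t["homicides"] / t["housing_units"] * 1000
```

Normalizing by housing units this way is what lets tracts of very different sizes be compared fairly before running Local Moran's I.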


Based on all the information above, the kernel density map appears to be the best because it produces a smoother, more continuous surface instead of breaking the data into blocks or zones. Some lessons I learned in this module: when setting up your environment settings, make sure the extent and the mask are the same, or the output feature class will be empty. Also, in the Kernel Density (Spatial Analyst) tool, make sure there are no spaces in either the folder the data is saved in or the feature class name.
