Sunday, October 19, 2025

Blog Post #3

The interview I watched was GIS in Natural Resources with Sarah Bellchanber. I chose this one because I work with some people in Operations who do Forestry, and I wanted to see if there was something she used that could help their team. The first important thing she said is that learning Python is very helpful for her job. We learned about Python for our certificate, and it is nice to see that it could help with finding future jobs. The second important thing was learning the apps we might need in the field and making sure people know how to use them. This is helpful because I have created manuals, but I should probably also run a learning session before people go out in the field. The third important thing is to tell people your specialty when you apply for jobs. Since I have been doing archaeology, it would be good to mention that along with my emphasis in GIS. After listening to the interview, it was nice to hear that the GIS certificate we are earning could help us get jobs in other GIS careers.

So far, I have created shapefiles for different studies. I have also been helping other people on my team learn how to use GIS and what analyses they can do for their studies. Finally, I have worked with other divisions on the online mapper and on getting all the data we need for a project into one place. One life lesson I have learned is that you are not going to love every job you do, but you can make the most of every job. Another life lesson is that even if you work hard, it does not always mean you will get the promotion, but that should not stop you from working hard. A final life lesson I learned is that you do not have to speak at every moment, and it is okay to just listen. I have definitely felt academically prepared for the internship. Knowing how a geodatabase works, creating shapefiles, and using the online mapper have all helped me during the internship. Also, being able to help other people with what we learned while earning the GIS certificate has been very nice.

The approach was to have me update my profile, because I created mine back in 2017 when I graduated from the University of Oklahoma and updated it after graduating from the University of Mississippi. It was good that I added more information, because I had not updated it in over five years. I wanted to make sure I listed my current job and also added the different things we learned while taking classes for the GIS certificate. It was hard to remember everything, but luckily I had updated my resume, so I could just copy and paste onto my profile. LinkedIn Profile: www.linkedin.com/in/emily-demontalvo-2a0142113.

Wednesday, October 15, 2025

Module 3.1: Scale & Spatial Data Aggregation

This week's module was about the Modifiable Areal Unit Problem (MAUP), which involves the scale and zonal effects of data. According to Manley (2014), the scale effect shows that there are hierarchies built into our work that can skew the data if we are not aware of them. The scale effect is also attributed to variation in numerical results owing strictly to the number of areal units used in the analysis of a given area (Openshaw and Taylor 1979). Lastly, when thinking about MAUP it is important to know that your data can be skewed, but if you change how you aggregate your data, such as by adjusting the scale, you can help make sure you are getting answers that make sense.
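To make the scale effect concrete, here is a small sketch with entirely made-up numbers: the same eight fine-grained values can give very different correlations once they are averaged into fewer, larger areal units.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Eight hypothetical fine-grained units: (variable A, variable B)
blocks = [(2, 10), (4, 14), (3, 9), (5, 16), (9, 12), (7, 8), (8, 13), (6, 7)]

def aggregate(data, size):
    """Average consecutive groups of `size` blocks into larger areal units."""
    groups = [data[i:i + size] for i in range(0, len(data), size)]
    return [(sum(a for a, _ in g) / len(g), sum(b for _, b in g) / len(g))
            for g in groups]

# With these numbers, the sign of the correlation flips as the units coarsen
for size in (1, 2, 4):  # 8, 4, and 2 areal units
    agg = aggregate(blocks, size)
    r = pearson([a for a, _ in agg], [b for _, b in agg])
    print(f"{len(agg)} units: r = {r:.3f}")
```

Nothing about the underlying data changed between the three runs; only the number of areal units did, which is exactly the scale effect Openshaw and Taylor describe.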

Another issue that arose this week was the effect of resolution on raster data. The most common raster example is the digital elevation model (DEM), where the grid cell size has significant effects on derived terrain variables such as slope, aspect, and curvature. Knowing that the grid size changes the look of the maps helps in choosing the correct raster grid cell size; for example, how realistic a terrain analysis can be is limited by the quality of the DEM applied (Kienzle 2007). There are three important aspects to keep in mind when using a DEM to analyze data:

  • "The accuracy and distribution of the elevation points used to interpolate the DEM.
  • The interpolation algorithm used to generate a continuous DEM.
  • The chosen grid cell size."
Knowing all this information can help the GIS analyst understand how to use the data to get the results they might want for terrain analysis (Kienzle 2007).

The last issue in this week's module was how gerrymandering of political boundaries can cause issues because of boundary definition. These boundaries can be redrawn so that one political party has more of an advantage in a congressional district. To measure the compactness of a district, I learned about the Polsby-Popper score and its formula, 4π × Area / Perimeter². First, we needed the area and perimeter of all the districts; I calculated both in kilometers to keep the units consistent. The formula was not working for me all at once, so I split it into three different attributes: one for the numerator (4 × pi × Area), one for the denominator (Perimeter²), and a final one dividing the numerator by the denominator. The worst offender was North Carolina Congressional District 12, with a score of 0.029476. It is seen in the picture below.
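The three-step field calculation can also be written as a single function. This is just a sketch of the arithmetic, not the exact field-calculator expressions I used:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4 * pi * Area / Perimeter^2.
    Scores run from near 0 (highly non-compact) to 1 (a perfect circle).
    Area and perimeter must use consistent units (e.g. km^2 and km)."""
    return (4 * math.pi * area) / (perimeter ** 2)

# Sanity check: a circle of radius 5 km scores exactly 1.0
r = 5.0
print(polsby_popper(math.pi * r ** 2, 2 * math.pi * r))  # → 1.0

# A long, thin 1 km x 25 km rectangle scores much lower
print(round(polsby_popper(25.0, 52.0), 3))  # → 0.116
```

The circle check is a useful way to confirm the units are consistent before running the calculation on real district geometry.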



Sunday, October 5, 2025

Module 2.2: Interpolation

This week's module was about using different interpolation techniques to represent data points. The techniques we used were Thiessen, Inverse Distance Weighted (IDW), and Spline (Regularized and Tension). Each technique represents the data in a different way, and what you want to represent determines which technique you should use. The techniques were used to represent water quality in Tampa Bay, Florida.

The first technique was Thiessen, or Nearest Neighbor, interpolation. One advantage of Thiessen interpolation is that it assigns any unsampled location the value found at the nearest sample location, which turns each point into its own polygon. Another advantage is that it is an exact interpolator, meaning the interpolated surface equals the sampled values at each sample point; each location's value is preserved, and there is no difference between the true and interpolated values at the points. A major disadvantage is the sharp edges between the polygons, which make the image look harsh. A final disadvantage is that areas with no points produce bigger polygons than might actually fit the area. For example, on terrain that changes quickly, a polygon with no sample point inside it would not represent that change.
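The nearest-neighbor idea behind Thiessen polygons fits in a few lines. This is a minimal sketch using hypothetical water-quality readings (the coordinates and values are made up, not the Tampa Bay data):

```python
import math

# Hypothetical sample points: (x, y, measured value)
samples = [(0, 0, 4.2), (10, 0, 6.8), (5, 9, 5.1)]

def thiessen_value(x, y):
    """Nearest-neighbor (Thiessen) interpolation: an unsampled location
    takes the exact value of the closest sample point."""
    return min(samples, key=lambda s: math.hypot(s[0] - x, s[1] - y))[2]

print(thiessen_value(1, 1))   # closest to (0, 0) → 4.2
print(thiessen_value(6, 8))   # closest to (5, 9) → 5.1
```

The hard jump from 4.2 to 5.1 as you cross the midpoint between two samples is exactly the "sharp edge" disadvantage described above.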

The second technique is IDW. IDW is based on the assumption that points close to one another are more alike than points farther away, which can cause problems in areas that have significantly different elevations next to each other. IDW also does not provide prediction standard errors, which makes its use with a DEM a little problematic. Finally, IDW is oversensitive to outliers, which can cause "bullseye" areas around a single high or low point.
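A rough sketch of the weighting IDW performs, again with hypothetical sample points (this is a plain-Python illustration of the idea, not the ArcGIS tool):

```python
import math

# Hypothetical sample points: (x, y, measured value)
samples = [(0, 0, 4.2), (10, 0, 6.8), (5, 9, 5.1)]

def idw_value(x, y, power=2):
    """Inverse Distance Weighted estimate: closer samples get more weight;
    a higher `power` makes distant points fall off faster."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(sx - x, sy - y)
        if d == 0:
            return v  # IDW is exact at the sample locations themselves
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# The estimate is pulled strongly toward the nearest sample, which is
# what produces the "bullseye" pattern around an isolated extreme value.
print(round(idw_value(9, 1), 2))  # → 6.7, dominated by the nearby 6.8
```

Because every estimate is a weighted average of the samples, IDW can never predict a value above the highest or below the lowest measurement, which is another reason a single outlier dominates its neighborhood.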

The third technique is Spline (Regularized and Tension). Spline is helpful because it is smoother and can look more accurate than IDW, although it only works well if the data is very accurate. Spline is also more adaptable because it may be used for lines or surfaces, and the sample points act as "guides" that help make the surface smoother. Lastly, a major problem to consider is that it can create unpredictable surfaces (overestimation or underestimation) in areas with low data density or high topographic change.

What I learned about surface interpolation is that no one method is better than another; choosing a technique depends on factors such as cost of sampling, available resources, and accuracy. It was also interesting to see how the sampling technique can change how you represent your data. The result that surprised me was how much spline interpolation can change just because one extra point was in the area; all I did was rerun the regularized spline interpolation after taking away one point. The way I would decide on a technique is by knowing the sampling technique that was used. I feel like systematic sampling would improve all of the interpolations, whereas cluster and random sampling would pair better with spline interpolation. Lastly, an adaptive sampling pattern would work best with spline tension interpolation because the result seems closest to the original data points.

The image below is the Spline Tension interpolation. To me, this image represented the data points the best because it seemed most like the non-spatial data that was represented, and the spline interpolation is smoother when representing the data points. Lastly, I chose this technique because splines can be adjusted to represent the data points, and they can be used for lines and surfaces, unlike the other techniques.

Blog Post #5: GIS Portfolio

In the final weeks of the GIS Internship we were given the task of creating a GIS portfolio either on paper or...