Monday, September 22, 2025

Module 2.1: TINs and DEMs


This week's module used continuous surfaces, in the form of TIN and DEM models, to represent elevation. A TIN is the vector data model approach, while a DEM is the raster data model approach. You can use either one to represent your data; it just depends on what you are trying to present.

When doing the exercise for the module, we tried different ways of representing the data, but the one I am going to talk about is how the contour lines looked when generated from a TIN versus a DEM. TIN stands for Triangulated Irregular Network, and it is created from elevation points. A Digital Elevation Model, or DEM, is a digital representation of a topographic surface, created from those same elevation points. The image below shows the contour lines that each model produced from the elevation points. It is important to note that the black and red lines are from the TIN dataset and the gray lines are from the DEM dataset.

The key difference between the two sets of contour lines is that the DEM contours are smoother, whereas the TIN contours are sharp and angular. Another difference is that the DEM produced more contour lines at the higher elevations than the TIN did; in fact, the greatest discrepancy between the two is where the elevation values are highest, with the DEM showing more contour lines there. The smallest difference is where the lower elevation values are located for both datasets. I would attribute this to there being more elevation points at the lower elevations than at the higher ones. Of the two sets of contour lines, I would say the DEM is more accurate.

When deciding whether to use a TIN or a DEM, it depends on how you are getting your data. TINs are usually not widely available and are typically built for a specific study, whereas DEMs come in various resolutions and are used for larger study areas. Fortunately, newer data collection methods such as LiDAR make many of the practical differences between DEMs and TINs much less significant. It all comes down to how you want to represent your data.

Saturday, September 20, 2025

Blog Post #2


I searched for jobs where I can use both archaeology and GIS. The job title is GIS Specialist for an archaeological cultural resource management firm. I chose this job because I have worked for this company in the past and they were really good, but I needed to find a job where I could work from home. It is also a dream job because I would get to use both archaeology and GIS, which is the reason I wanted to earn the certificate in the first place. I meet all of the requirements, thanks to what we have been learning in our GIS classes and to my having both a BA and an MA in Anthropology with an emphasis in Archaeology. Also, since I have worked with the team in the past, I know how their maps looked and what they would need in order to do fieldwork. The only issue was that they did not list a pay range, which makes me worry they are trying to hide how much they would be willing to pay.


Job announcement: https://www.terraxplorations.com/gis-specialist

While doing the job search, the biggest thing I noticed is that the GIS jobs that paid the best were the senior-level jobs, which required five-plus years of experience in GIS. The entry-level jobs sounded like ones I could apply for currently, but I would need to be paid more to live in today's economy. It is interesting to find the line between what you are qualified for and what you need to get paid.

Monday, September 15, 2025

Module 1.3: Data Quality-Assessment


This week's module assessed the completeness of two road networks, both separately and against each other. The two road networks used were TIGER and the Jackson County street centerlines. TIGER stands for Topologically Integrated Geographic Encoding and Referencing, a dataset collected by the Census Bureau. The TIGER data used here is from 2000 and has some major errors, which the government tried to fix in 2010. The Jackson County street centerlines data is more accurate than the TIGER data, but it is not more complete. Knowing where your data comes from is important because it helps you assess the completeness of the roads.

First, I needed to make sure that the roads were only within the grids. I did this with Select By Location, choosing the roads as the input features, Intersect as the relationship, and the grid shapefile as the selecting features, then hitting Apply to select only the lines within the grid polygons. Next, I used the Intersect geoprocessing tool with each road network as an input feature. I ran it separately for each network because I did not want the information to be combined. I named the output feature classes "Grid_Street" and "Grid_TIGER" so I would know they were different from the other shapefiles. In Attributes to Join I chose All Attributes, because that would make it easier later to get the information for each grid. I set the Output Type to Line so that the tool would create the line features that intersected the grid. Lastly, I added a new column called "New_Length" and ran Calculate Geometry to get the new length of each polyline in kilometers, confirming that the clipped segments actually had new lengths.
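The length that Calculate Geometry writes into the "New_Length" field can be sketched in plain Python. This is a minimal illustration with made-up coordinates, not the actual ArcGIS tool: it just sums the straight-line segment lengths of a clipped polyline in projected meters and converts to kilometers.

```python
from math import hypot

def polyline_length_km(vertices):
    """Sum straight-line segment lengths for a polyline whose
    vertices are (x, y) tuples in projected meters, returning
    kilometers -- the same quantity Calculate Geometry produces."""
    meters = sum(hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]))
    return meters / 1000.0

# Hypothetical clipped road segment, coordinates in meters:
# a 500 m diagonal leg followed by a 1000 m straight leg.
segment = [(0, 0), (300, 400), (300, 1400)]
print(polyline_length_km(segment))  # 1.5
```

Running this per clipped polyline and summing by grid cell gives the per-grid totals the comparison needs.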

After getting all the data, it showed that the TIGER 2000 data was more complete than the Jackson County street centerlines data. It also showed that even though a dataset is open to the public and produced by the federal government, that does not mean it will always be accurate. This indicates that it is necessary to have two datasets to compare against each other. The image below is a choropleth showing the percent difference between the two datasets' total lengths within each grid cell. It indicated major negative and positive differences between the two datasets: negative values mean the TIGER data had more total length than the Jackson County street centerlines, and positive values mean the Jackson County street centerlines had more total length than the TIGER data. Lastly, the cells with little change fall approximately between -3 and 3 on the percentage scale.
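The per-grid values behind the choropleth can be sketched as below. The exact formula the lab used is an assumption on my part, but normalizing the length difference by the TIGER length matches the sign convention described above (negative when TIGER has more total length), and the grid totals shown are made up for illustration.

```python
def pct_length_difference(street_km, tiger_km):
    """Percent difference per grid cell; negative means TIGER has
    more total length, positive means the county centerlines do.
    (Assumed normalization -- the lab may have used a different one.)"""
    return (street_km - tiger_km) / tiger_km * 100.0

# Hypothetical per-grid totals in kilometers: (county, TIGER).
grids = {"A1": (9.7, 10.0), "A2": (12.6, 12.0), "A3": (10.1, 10.0)}
for cell, (street, tiger) in grids.items():
    print(cell, round(pct_length_difference(street, tiger), 1))
```

Cells whose value lands between -3 and 3 would fall in the "little change" class of the choropleth.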



Tuesday, September 9, 2025

Module 1.2: Data Quality-Standards


This week's module looks at the positional accuracy of road networks. The road data came from the City of Albuquerque and from TeleAtlas StreetMap USA, which is distributed by ESRI with ArcGIS. The Albuquerque dataset is considered fairly accurate. We were supposed to look at the different street networks to see where they were similar and where they diverged from each other. It was interesting to see that the ESRI StreetMap USA streets were farther from the "true" data points; one would think the street data that ESRI distributes would be very accurate. We placed the "true" data points by zooming in on the orthoimages to see where the streets appeared to be located. The image below shows the data points and the streets from both the StreetMap USA and Albuquerque datasets.


I used 40 data points because it was hard to satisfy the sampling rules of 20% of points in each quadrant and points >10% of the diameter apart with only 20 data points. I performed the accuracy assessment in Excel using the worksheet on page 5 of the Positional Accuracy Handbook. To make sense of the data, I converted the coordinates to UTM to get easting and northing in meters. Following the handbook, we take the difference between the test points and the "true" points in both the x and y coordinates, square both differences, add them together for each point, and then sum across all points. From that sum we compute the average (the sum divided by the number of points), then the RMSE, which is the square root of the average, and finally the NSSDA statistic, which is 1.7308 × RMSE.
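The worksheet steps above translate directly into a short script. This is a sketch with two made-up UTM coordinate pairs, not the actual lab data:

```python
from math import sqrt

def nssda_horizontal(test_pts, true_pts):
    """Positional Accuracy Handbook worksheet: square the x and y
    differences, sum them per point, average over the number of
    points, take the square root for RMSE, then multiply by
    1.7308 for the NSSDA 95% horizontal accuracy statistic."""
    sq = [(tx - x) ** 2 + (ty - y) ** 2
          for (x, y), (tx, ty) in zip(test_pts, true_pts)]
    avg = sum(sq) / len(sq)
    rmse = sqrt(avg)
    return rmse, 1.7308 * rmse

# Hypothetical UTM easting/northing pairs in meters.
test = [(500010.0, 3880005.0), (500100.0, 3880100.0)]
true = [(500013.0, 3880009.0), (500103.0, 3880104.0)]
rmse, nssda = nssda_horizontal(test, true)
print(rmse)   # 5.0
print(nssda)  # ~8.654
```

The NSSDA value is what goes into the "Tested ___ meters horizontal accuracy at 95% confidence level" statement.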

Once we had all that information, we needed to write our final accuracy statement. The statement differs depending on whether the data was "tested" or "compiled to meet." The tested statement is used when the accuracy was determined by comparison with an independent dataset of greater accuracy (Positional Accuracy Handbook, p. 5). The compiled-to-meet statement is used when the data was produced by a thoroughly tested method that yields a consistent accuracy statistic (Positional Accuracy Handbook, p. 5). Based on the way these points were collected, we use the tested statement. For StreetMap USA the statement is: Tested 306.9 meters horizontal accuracy at 95% confidence level. For the Albuquerque streets it is: Tested 16.7 meters horizontal accuracy at 95% confidence level.

Creating the different data points helps you know the positional accuracy of your shapefiles, and it is important to compare against a dataset of higher accuracy. After placing all the data points and working through the math, it appears that the Albuquerque dataset is more accurate than the StreetMap USA dataset. This is interesting because StreetMap USA is distributed with ESRI ArcGIS, and one would think it would be more accurate than other datasets.

Friday, September 5, 2025

Blog Post #1


The internship program I am doing is with the U.S. Army Corps of Engineers (USACE) as a GIS Analyst and Archaeologist. I plan to earn this credit by creating an archaeological geodatabase for the Planning and Environmental Division; creating maps for different divisions and/or projects within the USACE, Mobile District; creating maps for reports and letters for the division; performing GIS analysis of LiDAR data and coastal flooding for the division; and, lastly, conducting remote sensing for projects requiring quantifiable impacts analysis.

The GIS user group I chose is the GIS Association of Alabama (GISAA), a non-profit group that covers the whole state of Alabama and focuses on geospatial professionals in the private and public sectors. I chose this group because the region I mostly focus on for my work is the state of Alabama. They seem able to connect many different professionals and to explore different ways of using GIS data. Membership for one year is $50.00 (USD) for an Individual Professional Membership and $10.00 (USD) for an Individual Student Membership.

The website is: https://gisaa.org/


Tuesday, September 2, 2025

Module 1.1: Calculating Metrics for Spatial Data Quality


In this week's lab we looked at data collected as GPS coordinates to assess the precision and accuracy of that data. This is important because when using data you need to pay attention to what is being given to you rather than just accepting it. It is also important to note that when examining the data, you need to aggregate it to evaluate both precision and accuracy.

In the first portion of this module we took the data points and computed an average data point. This is necessary because it shows where the data should fall compared to the rest of the area. Then we needed to project the waypoints and the average waypoint into the correct projection so the data could be measured in feet or meters rather than decimal degrees. With a lot of data, it is useful to run the geoprocessing tool in batch mode; this gets all your data into the correct projection without processing one dataset at a time, and you can run multiple feature classes to make sure everything is projected correctly. Next, we used the Spatial Join tool to determine which distance percentile band (50th, 68th, or 95th) each waypoint falls within. This quantifies your data without having to guess at the different buffer zones and is much more accurate than a visual inspection. The map below is a visual representation of the horizontal accuracy and precision of the waypoints given by the GPS unit.


After making the map, the next step was measuring the horizontal accuracy. A reference point was included in the data so we could measure the distance, in meters, between the average location of the waypoints and the reference point; that distance indicates the accuracy. The horizontal accuracy is low because everything is slightly offset from the reference point. In contrast, the horizontal precision is high, at only about 1.6 meters. To summarize, horizontal accuracy is measured by comparing the average point to the reference point taken in the field, while horizontal precision is measured from the spread of the points around their average, such as the percentiles shown in the map above.
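The two measures can be sketched in a few lines of Python. The coordinates, the reference point, and the nearest-rank percentile method are all assumptions for illustration, not the lab's actual values or ArcGIS's exact percentile method:

```python
from math import hypot

def precision_percentiles(waypoints):
    """Average the waypoints, measure each point's distance from
    that average, and report the 50th/68th/95th percentile
    distances -- the idea behind the Spatial Join percentile step.
    Uses a simple nearest-rank percentile (an assumption; ArcGIS
    may interpolate differently)."""
    ax = sum(x for x, _ in waypoints) / len(waypoints)
    ay = sum(y for _, y in waypoints) / len(waypoints)
    dists = sorted(hypot(x - ax, y - ay) for x, y in waypoints)
    def pct(p):
        idx = min(len(dists) - 1, int(p / 100 * len(dists)))
        return dists[idx]
    return {p: pct(p) for p in (50, 68, 95)}

# Hypothetical GPS fixes in projected meters.
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(precision_percentiles(pts))  # precision: spread about the mean

# Accuracy: distance from the points' average to a surveyed
# reference point (hypothetical coordinates).
avg = (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
reference = (1.0, 4.0)
print(hypot(avg[0] - reference[0], avg[1] - reference[1]))  # 3.0
```

Tight percentile distances with a large average-to-reference distance would reproduce the lab's result: high precision, low accuracy.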
