Added dmr1 for Slovenia and fixed srtm #196

Open · wants to merge 21 commits into master
Conversation

@DavixDevelop commented Oct 31, 2020

The following items were changed/added:

  • srtm
    • Fixed the link to the tile and mask index in the config, as it got moved
    • Fixed downloading the mask and tile index, as the tiles are now split across pages
    • Fixed downloading the srtm tiles, as you now need to authenticate before downloading
  • download
    • Added basic authentication
    • You specify the username and password in the config files under the source (see config.example.yaml, and the config sketch after this list)
  • index
    • Added a counter to show how many objects were loaded
  • dmr1
    • Added a new source for Slovenia
    • The index gets created by downloading a zip file that contains an ESRI Shapefile, whose features are then iterated over to create links to the source tiles (see the index sketch after this list)
    • Each source tile is a text file containing the y and x coordinates and the height at the given point
    • The source tiles use the Slovenia 1996 / Slovene National Grid (EPSG:3794) projected coordinate system, which then gets converted to WGS84 (see the reprojection sketch below)
    • When unpacking a file, the x and y columns are flipped first and an xyz file is created, after which the NS resolution is corrected and the GeoTIFF file is created (see the GDAL sketch below)
    • Because the index uses a lot of memory, each tile is checked for intersection with the selected region while parsing (line 141)
    • This is only for testing purposes and is to be removed when memory usage is not a concern
    • On line 33 the box needs to be changed to the selected region
    • Lines 142 and 145 can be removed alongside line 34 if memory usage is not a concern, due to the index
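For reference, here is a minimal sketch of what a source entry with basic auth could look like; the structure and key names (`username`, `password`) are assumptions, so check config.example.yaml for the real layout:

```yaml
# Hypothetical source entry; key names and structure are assumptions,
# see config.example.yaml for the real layout.
sources:
  - type: srtm
    options:
      url: https://example.com/srtm/   # placeholder URL
      username: your_username          # basic auth credentials for this source
      password: your_password
```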
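A rough sketch of the index-building step described above, using pyshp (the PR itself may use a different library, and the `NAME` field and URL pattern here are hypothetical):

```python
# Sketch of the index step: build tile links from an ESRI Shapefile.
# Uses pyshp (pip install pyshp); the "NAME" field and URL pattern are
# hypothetical, the real index may use different field names.
import shapefile

def build_links(shp_path, base_url):
    links = []
    with shapefile.Reader(shp_path) as reader:
        # fields[0] is the deletion flag, so skip it when locating the column
        field_names = [f[0] for f in reader.fields[1:]]
        name_idx = field_names.index("NAME")
        for record in reader.records():
            links.append("%s/%s.txt" % (base_url, record[name_idx]))
    return links
```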
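A sketch of parsing one line of a source tile and reprojecting it to WGS84; pyproj is an assumption, and the whitespace delimiter and y-before-x column order follow the description above rather than the actual file format:

```python
# Sketch of the reprojection: EPSG:3794 (Slovene National Grid) -> WGS84.
from pyproj import Transformer

# always_xy=True keeps the (x, y) -> (lon, lat) axis order predictable
to_wgs84 = Transformer.from_crs("EPSG:3794", "EPSG:4326", always_xy=True)

def parse_point(line):
    # Each line holds the y and x coordinates and the height at that point.
    y, x, height = (float(value) for value in line.split())
    lon, lat = to_wgs84.transform(x, y)
    return lon, lat, height
```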
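Finally, the xyz-to-GeoTIFF step could look roughly like this with GDAL's Python bindings; the creation options are illustrative only:

```python
# Sketch of the unpacking step: corrected .xyz file -> GeoTIFF via GDAL.
from osgeo import gdal

def xyz_to_geotiff(xyz_path, tif_path):
    # GDAL's XYZ driver expects cells sorted by y, then x, at a fixed
    # spacing, which is why the columns are flipped and the NS resolution
    # corrected before this step.
    gdal.Translate(tif_path, xyz_path,
                   outputSRS="EPSG:3794",
                   creationOptions=["COMPRESS=DEFLATE", "TILED=YES"])
```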

Now, regarding the data source (DMR1): it's created from OTR, a LAZ file containing only the terrain, with a resolution of 1 m. OTR then gets processed into a gridded point cloud, which is saved in a text file. More can be read here

Running the test config (without srtm) and converting it to the terrarium format using QGIS produces the following:
Sample Render

And here is a comparison with the current data set, and the proposed new one:
Comparison

Also, on commit 81e0b62 the message was supposed to be "Remove otr source and replaced with dmr1", but it had a typo.

If anything needs to be changed/fixed, I am happy to oblige.

@nvkelso (Member) commented Nov 2, 2020

Wow, great work! And thanks for including tests.

Can you look into the line endings? There's a lot of diff showing here for things I don't think you changed.

Is your intention to build locally and distribute the changes yourself (totally works!), or to incorporate them into the next build of the Nextzen / AWS open terrain project? That operational code is in https://github.com/mojodna/marblecutter and related repos.

@DavixDevelop (Author)

Yes, my intention is to incorporate the new dataset into the AWS open terrain project. We are using the AWS terrain tiles for BTE (Build The Earth), a project where we want to create a 1:1 copy of the world in Minecraft. When I noticed the poor quality of the terrain in my country, I first decided to generate my own tiles from the lidar data locally. Later I pushed the idea of using this dataset to generate the terrain anew, so I was assigned a dev role on our build team, and I decided the best plan of action would be to push this dataset directly to the AWS terrain tiles, which would benefit us and other users of the AWS terrain tiles.

As for the diff, I'll look into it. It probably happened when I pasted the whole code into Sublime Text instead of just the modified code.

@DavixDevelop (Author)

It turned out I had some problems with tile overlapping, transparency, and height, because I was using the wrong function to move the file to the store. About the link you've posted: if I understand correctly, I have to follow that procedure for this dataset to be updated on S3?

@DavixDevelop (Author)

Wait, on a second read, what do you mean by "distribute the changes yourself"? I can build it locally, but how would I update the data on s3.amazonaws.com/elevation-tiles-prod?

@nvkelso (Member) commented Nov 18, 2020

Updating data at s3.amazonaws.com/elevation-tiles-prod requires some coordination, and the next scheduled sprint and release is in the first few months of 2021.

In the meantime... you can build these new tiles yourself. You'd need something like a bbox limit, so that tiles within that bbox area load from your sidecar and everything else sources from the public data bucket.

@DavixDevelop (Author)

OK, I'll add a bbox limit in the config for the source. Something like the sketch below could work, though the key name `bbox` and its format are purely hypothetical here. Also, is there a way I can contact you for further questions?
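```yaml
# Hypothetical bbox limit on a source; the key name and format are assumptions.
sources:
  - type: dmr1
    options:
      bbox: [13.3, 45.4, 16.7, 46.9]   # west, south, east, north (roughly Slovenia)
```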

@DavixDevelop (Author)

OK, I've now fixed the tile bounds, which were set incorrectly, by switching to a pre-made index file. I've created my own script that supports multithreading for downloading the dataset and cropping it to the border. I bought a cheap used PC for 170€ to run the two scripts, as I download and crop the dataset in regions. The crop script gets the actual tile bounds and adds them to the whitelist, and that's how the pre-made index gets updated. So far I've downloaded about 79.71% of the dataset.

From reading the various docs about how the data gets processed, if I understand correctly, the tiles need to be optimized to cloud-optimized GeoTIFFs, overviews and VRTs need to be created, and everything then gets uploaded to S3. I'm a college student, and I've looked at the pricing for S3 storage; I could probably afford that, but the problem is the other services, like Lambda, API Gateway, IAM, and EC2. I'm not sure I could afford those, because, as I've said, I'm just a college student doing this as part of another global volunteer project. Other than that, I have no problem creating a multithreaded script that would cloud-optimize each GeoTIFF and upload it to S3.
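For what it's worth, a minimal sketch of such a script; the bucket name and paths are placeholders, and GDAL >= 3.1 is assumed for the COG driver (older versions need tiling and overviews by hand):

```python
# Sketch only: cloud-optimize GeoTIFFs and upload them to S3 in parallel.
# Bucket name and paths are placeholders; creation options are illustrative.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import boto3
from osgeo import gdal

s3 = boto3.client("s3")

def optimize_and_upload(tif_path, bucket="my-terrain-bucket"):
    # Rewrite the tile as a cloud-optimized GeoTIFF, then push it to S3.
    cog_path = tif_path.with_suffix(".cog.tif")
    gdal.Translate(str(cog_path), str(tif_path), format="COG",
                   creationOptions=["COMPRESS=DEFLATE"])
    s3.upload_file(str(cog_path), bucket, cog_path.name)

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(optimize_and_upload, Path("tiles").glob("*.tif")))
```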

@DavixDevelop (Author)

Update: I've now downloaded the whole dataset.
