Lack of Tactical Environment Considerations #115

Open
Sam-Bolling opened this issue Dec 30, 2024 · 5 comments
Labels: ready (Was discussed during a telecon and a decision was made)

Comments

@Sam-Bolling

This public comment is respectfully submitted by the U.S. Army (USA) Program Executive Office – Intelligence, Electronic Warfare & Sensors (PEO IEW&S) Integration Directorate. The following comments apply to the candidate OGC API – Connected Systems v1.0 and all listed supporting modernized Sensor Web Enablement (SWE) candidate Standards, including the 23-053r1 OGC API – Connected Systems Standard v1.0 Reviewers Guide:

  • The OGC API - Connected Systems Standard does not distinguish between Enterprise and Tactical implementation environments. The USA PEO IEW&S Integration Directorate recommends that OGC API – Connected Systems differentiate between, and subsequently specify separate implementation models (standards) for, enterprise and tactical environments.
  • To support implementations in tactical environments, the USA PEO IEW&S Integration Directorate recommends that the ISA standards be listed in the Implementation Models section (4.1.2) of the 23-053r1 OGC API – Connected Systems Standard v1.0 Reviewers Guide.
@Sam-Bolling

For additional context relating to this issue: in 2019, the U.S. Sensor Integration Framework Working Group (SIFWG) developed a standards profile called the Sensor Integration Framework – Standards Profile (SIF-SP) that was later adopted by the U.S. Joint Enterprise Standards Committee (JESC) for the U.S. Department of Defense (DoD) and U.S. Intelligence Community (IC). The design of the standards profile involved a two-layer framework comprising a reference view, which detailed sensor integration concepts at an abstract level, and technical views that provide instructions on how to apply the reference view in a specific technology environment. Following this design, the SIFWG identified and separated two different technology environments: enterprise and tactical. The SIF-SP included a technical view for enterprise environments that largely focused on the OGC Sensor Web Enablement (SWE) family of standards, and a technical view for tactical denied, degraded, intermittent, or limited bandwidth (DDIL) IP environments that largely focused on the U.S. Army’s Integrated Sensor Architecture (ISA). Additional information about the SIF-SP, along with its reference view and two technical views available for download, can be accessed here:

Interoperability events relating to the relationship between OGC’s SWE family of standards and the U.S. Army’s ISA include:

  • During the U.S. DoD’s Enterprise Challenge 2017 event, the U.S. National MASINT Office (NMO) collaborated with the U.S. National Geospatial-Intelligence Agency (NGA), the U.S. Army Geospatial Center (AGC), and U.S. Army PEO IEW&S to demonstrate sensor information sharing to warfighters and analysts when and how they need it through the application of tactical-to-enterprise standards. USA PEO IEW&S developed an application called a SWE Bridge that mapped and integrated sensor data from ISA-compliant systems to OGC SWE-compliant systems. Participants in this activity implemented draft guidance that was later updated and published as the SIF-SP in 2019.
  • During OGC Testbed 17, conducted in 2021, participants explored the feasibility of implementing concepts in the SIF-SP. During the event, additional standards implementations were tested for interoperability with the U.S. Army’s ISA systems using OGC SWE, such as an OGC SensorThings API server, OpenSensorHub, and MASBUS. More information about this activity can be accessed in the resulting engineering reports, OGC Testbed-17: Sensor Integration Framework Assessment ER and OGC Testbed 17: MASBUS Integration Engineering Report. From the interoperability experiments conducted at the OGC Testbed 17 event, open source software relating to OGC SWE and the U.S. Army ISA can be accessed in the driver section of the OpenSensorHub GitHub repository.

@Sam-Bolling

There is an ongoing discussion about the use of OGC API standards in limited bandwidth computing / communication environments in some of the OGC WG/SWG email distributions that may be of relevance. This comment lists some of the discussion points from the conversations in the OGC API – Common SWG, OGC Defence Intelligence WG, and OGC Moving Features SWG that may add value to this OGC API – Connected Systems standards issue.

On 29DEC2024, Teodor Hanchevici (@thanchevici) writes:

  • I want to discuss the idea of having a low bandwidth profile for OGC API [standards]; one can do only so much with compression and binary formats. One of the things that came out of the REPMUS exercise was the need to test the OGC API [standards] in a tactical network (200-300 kbps), and this will pose some challenges. We can do a lot with compression and different encodings (see attached some results based on data in demo.pygeoapi.io, 1000 records from each collection), but I believe we can do better.
  • The idea of a ‘low bandwidth profile’ is to remove redundant information from the data, send only deltas when applicable, and filter what information we get back. We can enforce an ‘access’ list in the collections response, and we can enforce in this profile things like totalRecords, recordsReturned, and so on. This applies to API [standards] like Features, EDR, Coverages, and Moving Features, as I can’t see how to apply it to tiles and maps.
  • For instance, one can request the data to come without any links, and with 2 properties out of 20. Of course we don’t want to send redundant information in the request either, so things like a CollectionView (imagine a database view over the collection that filters out properties) would be interesting, as one could create such collections dynamically. Maybe this is already covered in the CRUD (create, read, update, delete) section of Features; I did not check yet.
  • Also, a different encoding that we can recommend would be beneficial, and for this, people in meteo, deep-space, and underwater comms can come with suggestions. I will try to see what happens if I use CDR (OMG DDS encoding) or protobuf to encode the data (it may need some changes, as I’m not sure if I can do a 1:1 mapping of Feature to IDL/protobuf). We can have a low-bandwidth data model that will complement the profile.
  • Of course this will put some stress on both client and server developers, but in the end I believe it is worth it and could increase the adoption of the API.

[Attachment: OGC API Standards byte-size test results, 06JAN2025]

Additional context about REPMUS: REPMUS (Robotic Experimentation and Prototyping with Maritime Unmanned Systems) is an annual international multidomain exercise focused on testing the capabilities and interoperability of new-generation Maritime Unmanned Systems in naval warfare. Held in Portugal, REPMUS 2024 was organized by the Portuguese Navy alongside the University of Porto, the North Atlantic Treaty Organization (NATO) Centre for Maritime Research and Experimentation, and the NATO Joint Capability Group for Maritime Unmanned Systems. The event included more than 2,000 participants from 23 nations, including representatives from NATO allies, technology companies, and academic institutions. Seven observer countries also attended. Other notable participants in addition to Kongsberg Geospatial included NATO’s Allied Command Transformation (ACT), the European Union Satellite Centre (SatCen), Frontex, the European Fisheries Control Agency, and the European Maritime Safety Agency. REPMUS 2024 was held around Troia Peninsula in Portugal from 9 to 27 September 2024. The main objective of REPMUS 2024 was to ensure the seamless integration of autonomous systems across different operational domains, including aerial, land, and underwater drones. https://www.nato.int/cps/en/natohq/news_228959.htm?selectedLocale=en

On 29DEC2024, Jérôme St-Louis (@jerstlouis) writes:

  • I'm happy to see UBJSON in there! :) LZMA2 is slower to decompress, but if bandwidth is the key issue as opposed to CPU usage, it might be worth considering as well, especially for larger payloads.
  • While the collection description etc. is relevant for the client application loading a particular data source at initialization or as a result of user interaction, I would expect the associated requests to pale in comparison to the actual data requests.
  • The Features - Part 6: Property Selection (for which I believe we've now settled on the properties= / exclude-properties= parameters) will allow most of this to be achieved (perhaps not the links, which are required by Part 1: Core). We also define equivalent capabilities in OGC API - Coverages (which can be used with Coverage Tiles too) and in OGC API – DGGS.
  • The most important thing for a low bandwidth profile, in my opinion, is to focus on the Time, Area and Resolution of interest. By using hierarchical zoom levels, as defined in 2D Tile Matrix Sets for OGC API - Tiles / WMS, COGs and Discrete Global Grid Systems, this guarantees a deterministic maximum amount of data needed at any one point for the system.
  • For Features, this will be possible with Part 7: Geometry Simplification, which will allow to return simplified geometry for coarser zoom level (Resolution of Interest). Tied to this is the ability to return a clipped feature for the Area of interest: if there's a use case to return a simplified version when zoomed out, there should also be a corresponding use case for retrieving a small subset of a large feature when zoomed in (issue #530.)
  • The whole premise of OGC API - Tiles (which supports "data" tiles, Coverage or Vector Tiles) and OGC API - DGGS "Zone Data Retrieval" is that this is fundamentally how clients request data from these APIs. Tiles / DGGS Zone data are also much simpler to cache on both the server and client side, since the clients will be making requests for a pre-determined multi-level partitioning of space.
  • As for OGC API - Maps, see section 6.4 in the overview, 6.4.3 and 6.4.4 in particular, which explain the intended use cases. A low bandwidth scenario is not one of them -- except if combined with OGC API - Tiles (map tiles) for data inherently of a raster imagery nature.
  • For low bandwidth scenarios, clients can request raw data and render it client-side instead (they probably have a GPU to do this efficiently).
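A minimal sketch of the deterministic bound described above: for a given area of interest and zoom level on the WebMercatorQuad tiling, the number of tile requests is known in advance (the bounding box and zoom level below are placeholders):

```python
import math

def tile_index(lon_deg: float, lat_deg: float, zoom: int) -> tuple[int, int]:
    """Standard WebMercatorQuad / slippy-map tile indices for a point."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# Placeholder area of interest at a coarse resolution of interest.
zoom = 10
x_min, y_max = tile_index(-9.0, 38.3, zoom)   # lower-left corner
x_max, y_min = tile_index(-8.7, 38.6, zoom)   # upper-right corner

tiles = [(x, y) for x in range(x_min, x_max + 1) for y in range(y_min, y_max + 1)]
print(f"zoom {zoom}: at most {len(tiles)} tile requests for this AoI")
```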

On 30DEC2024, Ingo Simonis (@ingosimonis) writes:
We are seeing an increasing number of requests for specific profiles of OGC API standards and need to improve our service to our target communities. These efforts rarely lead to a new standard but often to a Best Practice for using specific modules and individual conformance classes.

On 30DEC2024, Clemens Portele (@cportele) writes, with some thoughts from a Features perspective:

  • Parts 6 (Property Selection) and 7 (Geometry Simplification) will provide the capabilities to reduce the response to information that is relevant for the requestor (as Jerome has pointed out).
  • Part 6 currently does not provide a capability to skip links, maybe this could be another profile (e.g., profile=no-links).
  • There is currently no concept of a collection that is a reduced view of another collection. Another way to request a view in a simple way could be the use of a Stored Query (Part 10).
  • An existing binary encoding for Features that can be streamed (chunked transfer encoding) is FlatGeobuf, at least as long as the data has a tabular structure and is restricted to Simple Feature geometries.
  • A low bandwidth profile that captures the required conformance classes plus additional requirements could be specified as a Best Practice or by a community specification (e.g., by DGIWG).
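A minimal sketch of the streamed binary encoding mentioned above, assuming a server that can return FlatGeobuf (the URL and media type are placeholders) and a local GDAL/Fiona build that includes the FlatGeobuf driver (GDAL 3.1 or later):

```python
import requests
import fiona

url = "https://example.org/ogcapi/collections/observations/items"   # placeholder
with requests.get(
    url,
    params={"limit": 10000},
    headers={"Accept": "application/flatgeobuf"},   # placeholder media type
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    with open("observations.fgb", "wb") as f:
        # Chunked transfer: bytes can be written out as they arrive.
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            f.write(chunk)

with fiona.open("observations.fgb") as src:
    print(f"received {len(src)} features; properties: {list(src.schema['properties'])}")
```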

Additional context about DGIWG: Defence Geospatial Information Working Group (DGIWG) is a multi-national body that addresses challenges between participating nations relating to military requirements for geospatial interoperability. The vision of the DGIWG is interoperability of geospatial information and services in multinational defence environments. One way that the DGIWG makes progress towards this vision is through the publication of profiles of standards, including OGC standards. https://dgiwg.org/

On 30DEC2024, Jeff Harrison (@jeffharrison) writes:
We’ve been using FlatGeobuf in the generation of Releasable Basemap Tiles from open geospatial data sources… Yep, very interesting binary encoding.

On 30DEC2024, Teodor Hanchevici (@thanchevici) writes:

  • … there is no guarantee in [OGC API -] Common that the collections will include the links, so technically there is no guarantee that a single request to /collections will contain the necessary information to determine what type of collections the server has. Maybe this can be addressed in the low-bandwidth profile, because doing lots of GET requests will be a pain when doing discovery.
  • The way we operated in the exercise was that we provided a catalogue built on top of Features and a notification was sent that a new resource is available/modified/deleted (think of sending the URL to the feature in question). If this reached an HMI, then the operator could go and fetch the feature, choose to display the feature on the map, or choose to examine the links and get all the data linked via “enclosure” relation. If it reached a UxV or another system, they would pull the data and use it. Data like sound velocity profile, salinity profile, and so on was served via EDR, but one could choose to download standard ASVP files using OGC Processes. There is a lot of redundant data in coverages and EDR that would benefit from some ‘trimming’.
  • As @Jérôme St-Louis and @Clemens Portele mentioned, Part 6 will address 75% of the problem; every byte counts, and I don’t want to keep sending requests with redundant information (fields to be included, no-links, etc.). A stored query may solve this, assuming it can be created dynamically and its lifetime specified.
  • @Jérôme St-Louis, as far as geometry simplification, different resolutions, and client-side styling and rendering go, this is half the story: in a larger system of systems, some participants may need simplified geometries, but others will require the data as is (think of pulling it for navigation information or any other purpose). I will have some discussions in mid-February when we start planning this year’s exercise.
  • I will look into FlatGeobuf and see how it compares with others. @Civ Harrison, are you a member of DGIWG? As far as I know, FMN talks about OGC API in Spiral 6 (maybe 8 years from now). Do you know if there are any talks in DGIWG?
  • @[email protected], a best practice is a good start, and then, as Jeff pointed out, it could be built upon by others.
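A minimal sketch of the "notify with a URL, pull on demand" pattern described earlier in this comment; the notification payload shape and endpoint are assumptions for illustration, not defined by any of the standards discussed:

```python
import requests

def on_notification(notification: dict, fetch_enclosures: bool = False) -> dict:
    """Handle a lightweight change notification that carries only a feature URL."""
    feature_url = notification["href"]                     # assumed payload field
    feature = requests.get(feature_url, timeout=30).json()

    if fetch_enclosures:
        # Pull linked products only when the operator (or UxV) asks for them.
        for link in feature.get("links", []):
            if link.get("rel") == "enclosure":
                data = requests.get(link["href"], timeout=120).content
                print(f"fetched enclosure {link['href']} ({len(data)} bytes)")
    return feature

# Hypothetical notification received over the low-bandwidth channel.
feature = on_notification(
    {"href": "https://example.org/ogcapi/collections/products/items/123"}
)
```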

On 30DEC2024, Jeff Harrison (@jeffharrison) writes:
I wonder if the same thing could also be done for Tiles. For example, provide a catalog built on top of Tiles, and a notification is sent that a new resource is available/modified/deleted (think of sending the URL to the tile in question)… then the operator could go and fetch the Tiles, choose to display them, or add them to a SQLite db with the Tileset (?) Probably not that hard for raster tiles if the same source is used… but maybe not so easy for vector tiles given the generalization that happens during generation… (?) We tried something similar with a ChangeSet API but never finished…

On 30DEC2024, Carl Reed (@cnreediii) writes:
… the use of implementations of OGC API Standards in low bandwidth/DDIL environments could be an excellent use case for an OGC Interoperability Experiment. Such IEs often lead to either (or both) an OGC Best Practice or suggested revisions to the tested OGC Standards.
… various OGC innovation initiatives have considered low bandwidth/DDIL use cases before. Consider the SOFWERX-funded Mixed Reality to the Edge Concept Development Study (https://docs.ogc.org/per/19-030r1.html#_mixed_reality_to_the_edge_concept_development_study_overview). Users at the “edge” include first responders, field-deployed utility workers, and the warfighter. Users at the edge may have denied, degraded, interrupted, or limited (DDIL) internet access. There is also the requirement to provide the right data at the right time to the right user.

On 30DEC2024, Joshua Lieberman (@lieberjosh) writes:
Various OGC groups and projects over the years have experimented with deltas and changesets, most often for purposes of efficiently synchronizing / updating distributed data stores, without really coming to a consensus on how to standardize this. A common delta format / approach would also be useful for low-bandwidth transmissions, the catch being that an interoperable delta needs to be applied as a transition from one well defined and commonly understood dataset state to another. Challenges such as distributed consistency would need to be addressed. I agree that an IE could put together a common set of requirements for, say, a Delta or Change API and determine to what extent an open approach to distributed state transitions might be feasible (or worth the effort). Perhaps the dual benefit of improving both synchronization efficiency and transmission bandwidth could improve the chances for member interest.

On 31DEC2024, Sam Meek (@samadammeek) writes:
There’s also a conceptual approach to DDIL discovery and distribution. A metadata push, resource pull approach makes the best use of bandwidth while it's available. Depending on where we want to go with this, there are also practices around resilient and ultra-optimistic architectures and protocols that can handle long response times. One also cannot be too chatty, otherwise transfers are not initiated! Being efficient and removing redundancy in data and calls is important, as is the delta/return question; providing the user with the critical information first and then building out the periphery helps a lot and is applicable to geospatial data transfers.

On 31DEC2024, Jeff Harrison (@jeffharrison) writes:
… why can’t a Changesets API just provide the capability to … identify Checkpoints, and then provide all Updates since last Checkpoint for a particular Resource type (?) Synchronizing between different data stores is way more complicated, and likely isn’t needed for a basic OGC ‘Changesets API’.

On 01JAN2025, Chuck Heazel (@cmheazel) writes:
Use Publish-Subscribe. Subscribe to a resource and receive updates whenever there is a change.
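A minimal sketch of the Publish-Subscribe approach, using MQTT via paho-mqtt 2.x; the broker host and topic structure are placeholders rather than the bindings defined in OGC API - Connected Systems Part 3:

```python
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Each message announces a change to the subscribed resource; the client
    # decides whether (and when) to pull the full data over the constrained link.
    change = json.loads(msg.payload)
    print(f"change notification on {msg.topic}: {change}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("broker.example.org", 1883)                  # placeholder broker
client.subscribe("collections/observations/items", qos=1)   # placeholder topic
client.loop_forever()
```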

On 01JAN2025 Joshua Lieberman (@lieberjosh) writes:
That’s certainly one good way to use a changeset, but it still requires defining the changesets and the states (checkpoints) they pertain to. The CGDI Pilot way back when used WFS requests and GeoRSS items for this purpose, but that functionality has yet to be turned into a general model in connection with OGC API.

On 01JAN2025, Brad Hards (@bradh) writes:

  • Can we expand on the problem before going into solutions? I get that reducing network consumption is generally a Good Thing, but I'm not sure I understand what success would look like in this particular context. In particular, what is an OGC API [standard] actually being used for in this context? What is the need that isn't being addressed by existing, in-use mechanisms like Tactical Data Links and broadcast-only systems? Is short, infrequent, unpredictable transmission more important to the edge user than minimising bytes in total? (I think so, for power-management and signature-management reasons, but I'm making broad assumptions that may not be valid.)
  • DDIL is an important case, but perhaps OGC API [standards] aren't a good fit if it’s truly Disconnected (since I'd really just like the files, thanks). Maybe we can do things to help with the D, I and L, but an explicit statement of the operational-tactical / business problem(s) would be great to measure those things against. For example, I think I can help with improving imagery delivery over OGC API Tiles and maybe Maps. However, if the user doesn't actually need those, or doesn't need them enough, or needs a spectral band combination I can't do, then those are pointless.
  • So, what is the real, un-met need? And why are we using OGC API [standard] for that?

On 01JAN2025, Jérôme St-Louis (@jerstlouis) writes:

  • I think one of the key use cases, which was essentially what was studied in OGC Testbed 15's Images and ChangeSet API work and is the essence of Jeff's use case, is to synchronize a local data repository with a remote data repository.
  • The realization of that local data repository could be anything, but ideally it would be a raw data tile cache (tiled coverage or vector data, either based on OGC 2DTMS or Discrete Global Grid Reference Systems). The Vector Tiles extension for GeoPackages on which we made additional progress during the Releasable Basemap Tiles pilot (draft ER -- btw 24-010 is either still not published or not findable), and the deterministic tiles grouping and indexing extension (for very large repositories) developed for the CDB 2.0 GeoPackage Data Store would be a lot more practical / scale a lot better than simply having a single file per tile.
  • … whenever connectivity is available, OGC API [standard]s can be used to fetch new content and update the local data store to the latest, while optionally adding additional filters such as Area, Time, Resolution, and Fields of interest.
  • While there could potentially be a Common concept of "checkpoints" across all OGC API [standard] data access mechanisms, this is somewhat tied with how data is managed / updated (e.g., via a CRUD method). How to fetch content is also tied to each data access mechanism.
  • One way to update things which we explored in Testbed 15 is the concept of "scenes" (called "Images" in Testbed 15) which we are considering as candidate extension for OGC API - Coverages [standard], but could perhaps apply to other APIs (including feature collection scenes, point cloud scenes, 3D models scenes...). The idea is that a collection of data consists of individual scenes which can be added / deleted / replaced. Clients can retrieve data for the collection as a whole, or for individual scenes. Scenes also have associated metadata. Having a record of scenes that were added / deleted / replaced is one way to maintain checkpoints. The 2DTMS tiles or DGGRS zones affected by a series of changes to scenes in between two checkpoints can then be calculated, so that a client could make such queries and then retrieve the affected data tiles or DGGRS zone data (or map tiles if the client maintains a pre-visualized tile cache instead). Alternatively, the clients can retrieve the data of the scenes instead if the local data store is not tile-based or if it wants to tile things its own way.
  • Another related use case is the synchronization of metadata, as for Earth Observation imagery GeoDataCubes and very large catalogs of STAC metadata, such as our Sentinel-2 Level-2A datacube mirroring the COGs on AWS. There can be metadata associated with each scene, and being able to know which scenes were added since the checkpoint of the last synchronization allows retrieving only the new metadata (ideally using a relational model, binary encoding and compression). Having the metadata locally (even without the data) allows performing efficient queries across millions of scenes without straining the servers (which may not allow queries returning tens of thousands of results, as per our past experience).
  • While a Pub/Sub mechanism allows a client to get notified when new things get added, I feel this is not so useful, especially in the context of DDIL, since the client may not be available to be notified, and, as Josh pointed out, it does not address the aspects of checkpoints and changesets.
  • There is also nothing wrong in my opinion with polling at regular intervals, when the clients do have connectivity, and asking the server "Do you have anything new for this AoI/ToI/RoI since this last checkpoint for which I sync'ed?"
  • Perhaps a key property of a scene would be the checkpoint timestamp and/or ID, and this could provide a basis for this whole OGC API ChangeSet / DeltaUpdates mechanism. This would then allow filtering on a /scenes resource to list only the new scenes since the last checkpoint timestamp or ID, for which metadata and/or data can then be retrieved. As per Coverages issue #196, we agreed to move Scenes to a new future part of OGC API - Coverages, but I have yet to make the changes in the repo (it currently still appears as a requirement class of Part 1).
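A minimal sketch of the polling pattern described above, assuming a hypothetical /scenes resource and a datetime interval used as the checkpoint filter (both are candidate ideas under discussion, not adopted behaviour):

```python
import requests

BASE = "https://example.org/ogcapi/collections/sentinel2-l2a"   # placeholder

def poll_new_scenes(last_checkpoint: str) -> list[dict]:
    """Ask for scenes added after the last checkpoint (hypothetical filter)."""
    resp = requests.get(
        f"{BASE}/scenes",
        params={"datetime": f"{last_checkpoint}/.."},   # open-ended interval
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("scenes", [])

# Retrieve metadata first; pull the affected tiles/zones only if actually needed.
for scene in poll_new_scenes("2025-01-01T00:00:00Z"):
    print(scene.get("id"), [link.get("rel") for link in scene.get("links", [])])
```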

On 02JAN2025, Sam Meek (@samadammeek) writes:
Pub/Sub is great for metadata, but poor for resource transfer in austere environments (depending on the protocol). You as a user don’t usually want all of the changes, just the ones relevant to your tactical environment (or at least prioritised).

On 02JAN2025, Teodor Hanchevici (@thanchevici) writes:

  • One of the important parts of the REPMUS 2024 exercise is to establish a recognized environment picture. Disseminating the data was done using [implementations of] the OGC API Features [standard] in combination with the OGC API Processes and EDR [standards]. Clients were notified using the CATL protocol about new products being available; however, data was retrieved using [implementations of] OGC API [standards].
  • One of the challenges was to be able to get this information to users connected over Iridium or radio networks. Tactical Data Links are not available in this exercise (running NATO Unclassified), and the goal is to evaluate the suitability of the [implemented] API [standards].
  • Disconnected is an important case too: even if your system is disconnected from the main network, its internal subsystems are interconnected. Would OGC API [standards] be a valid way to deal with this case, with the ability to swap subsystems and so on? I do not know, but it is a good question.
  • Why use OGC API [standards] – it is a standard and we don’t want to reinvent the wheel. The alliance vision is to re-use as much as possible existing standards, and focus on open ones. We need to be able to disseminate data not only to human users but also to uncrewed systems (think of disseminating new navigation information).
  • What is the real need - well, OGC API [standards] encoding using JSON is wasteful; it may work for happy moments when you have a high-speed connection (no way this will work underwater, for instance, and underwater comms is a different topic).
  • What is the definition of success - demonstrate that [implementations of] OGC API Features, Processes, EDR, maybe Coverages and Tiles [standards] are a valid option to support data dissemination between heterogeneous participants and networks (the exercise had things from underwater acoustic modems all the way to GB connections and Starlink). As mentioned in the beginning of the thread, OGC API [standards] are scheduled for one of the future Federated Mission Network spirals.

Additional context about NATO FMN: Federated Mission Networking (FMN) is an initiative to help ensure interoperability within NATO. The 39 participating nations, also known as "FMN Affiliates," work together to develop the technical capabilities required to conduct net-centric operations. Each development increment is referred to as an "FMN Spiral." The respective requirements, architecture, standards, procedures, and technical instructions are documented in a series of "FMN Spiral Specifications." For example, FMN Spiral 3 was published in 2018 and FMN Spiral 4 was published in 2021. There is also a rolling 10-year FMN Spiral Specification Roadmap of envisioned future capabilities. https://www.act.nato.int/activities/federated-mission-networking/

On 06JAN2025, Sam Meek (@samadammeek) writes:
Compression, removal of unneeded data, clever use of deltas, efficient protocols at the transport layer and use of metadata punts and profiles are all likely to play a part. IMO it'd be sensible to start with the use cases or more broadly who this is for and why.

@Sam-Bolling

On 07JAN2025, Chris Tucker writes:
... the OpenSensorHub team has been implementing OSH/SensorML Drivers for a range of tactical military sensors during the time that we were working to mature the OGC API - Connected Systems Standard candidate with the wider community. The candidate standard has worked very well with low bandwidth and DDIL features using OpenSensorHub. However, OSH was intentionally built from the ground up to deal with both light/simple and heavy weight/complex sensors in both low bandwidth/DDIL and high availability environments. We can appreciate that another software implementation may have such limitations and might seek some change in the proposed candidate standard.

On 08JAN2025, Chris Little (@chris-little) writes:
PubSub was driven by the Met Ocean community because DDIL is the global environment in which weather forecasting is done. There is quite a bit of domain specific infrastructure on top of the API-EDR Part 2: PubSub Workflow to make WIS2 operational, but lots of the patterns are widely applicable. E.g. “I’ve been offline for 24 hours, can I have the last 6 hours’ worth of info please?” or “Please repeat Message n.m from channel X.Y”… the “Please” is quite important, as it is not a military environment.

@pebau commented Jan 9, 2025

Dear all,

great discussion, and I herewith offer to participate actively in some upcoming workgroup, based on our experience in this field (concepts, architecture, standardization, application of datacubes), such as with the recently finished NATO SPS project Cube4EnvSec (https://cube4envsec.org/), where we built up mixed-bandwidth fixed & moving federations.

Some thoughts:

  • definitely +1 for concisely stating the problems first, and inspecting solution candidates as step 2. Bandwidth-related challenges are manifold; I am happy to add to any such collection being started. Examples:

    • which datasets have been updated since timestamp X?
    • which datasets have information between timestamp X and now?
    • what changes have occurred to dataset Y since X?
    • give me only the changed regions in Y / a diff of the changes / an aggregation of the changes since X
    • give me conclusions on changes observed, based on a fusion of several datasets Y1, ...Yn which possibly reside on different servers (and I do not want to access all Yi myself, nor do the fusion)
  • generally, static concepts like (server-defined) checkpoints may be too rigid; there are many more interesting questions where a query language might instead allow clients (desktop, cloud, or edge) to request whatever they can digest best, without requiring heavy logic in the client.

  • spatio-temporal raster data (aka "datacubes"), modeled in standards through coverages, contributes much to the Big Data and bandwidth problems arising, so it is worth looking at these standards: the OGC/ISO Coverage Implementation Schema + Web Coverage Service (WCS) + Web Coverage Processing Service (WCPS), which are adopted, mature, and Petascale-proven.

So far for my 2 New Year cents,
Peter
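A minimal sketch of the query-language approach Peter points to, assuming a WCPS endpoint with the KVP ProcessCoverages binding commonly used by rasdaman-based servers; the endpoint, coverage name, and axis labels are placeholders:

```python
import requests

# The WCPS expression asks the server to trim and aggregate the cube so that
# only a single value crosses the link (coverage name and axes are placeholders).
wcps = """
for $c in (AvgLandTemp)
return avg($c[ansi("2024-01":"2024-12"), Lat(38:39), Long(-10:-8)])
"""

resp = requests.post(
    "https://example.org/rasdaman/ows",     # placeholder endpoint
    data={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps,
    },
    timeout=60,
)
print(resp.text)   # one aggregated value instead of the full datacube
```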

@alexrobin

Discussed during 01/09 telecon.

Different aspects of the Connected Systems spec suite help solve DDIL-related issues:

  • The SWE Common encoding allows separation of datastream schemas from the actual data values, which helps reduce payload size compared to other encoding standards (Part 2).
  • The Pub/Sub protocols supported by OGC API - Connected Systems help track changes and download them only when required (Part 3).
  • Future binary encoding extensions (e.g., protobuf) will help reduce payload sizes as well (Part 4).

There is also implementation experience that demonstrates the usability of the API for tactical applications running in Denied, Disrupted, Intermittent, and Limited (DDIL) environments.

Will add a section in the document to explain this.
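An illustrative sketch (not the normative SWE Common JSON encoding) of the schema/values separation described in the first bullet above: the datastream schema crosses the link once, after which each observation is only a compact tuple of values:

```python
# Schema, sent (or fetched) once per datastream.
schema = {
    "type": "DataRecord",
    "fields": [
        {"name": "time", "type": "Time",
         "uom": {"href": "http://www.opengis.net/def/uom/ISO-8601/0/Gregorian"}},
        {"name": "temperature", "type": "Quantity", "uom": {"code": "Cel"}},
        {"name": "battery", "type": "Quantity", "uom": {"code": "%"}},
    ],
}

# With the schema known up front, field names are not repeated with every
# observation; only compact value tuples cross the link (text here, a binary
# encoding such as protobuf in Part 4).
records = [
    ("2025-01-09T12:00:00Z", 4.2, 87.0),
    ("2025-01-09T12:01:00Z", 4.1, 87.0),
]

field_names = [f["name"] for f in schema["fields"]]
for rec in records:
    print(dict(zip(field_names, rec)))
```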

@alexrobin added the "ready" label (Was discussed during a telecon and a decision was made) on Jan 9, 2025