feat: add dns addr to addrsfactory #34

Closed

Conversation

guillaumemichel (Contributor)

Lumina nodes based on rust-libp2p are unable to connect to /ipX/<ip>/tcp/<port>/tls/sni/<escapedIP>.<base36peerid>.libp2p.direct/ws/p2p/<peerid>, but they are able to connect to /dns/<escapedIP>.<base36peerid>.libp2p.direct/tcp/<port>/tls/ws/p2p/<peerid>.

I am not sure whether rust-libp2p needs explicit support for tls/sni/... addresses, but adding dns multiaddrs solves the problem.

Are there any reasons why we wouldn't advertise dns multiaddrs?
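For context, a minimal sketch of the two address forms described above, assuming illustrative values (the IP, port, and peer ID below are examples, not taken from a real node). The escaped-IP hostname convention replaces dots with dashes for IPv4:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeIP turns an IPv4 address into the DNS label used in
// <escapedIP>.<base36 peer ID>.libp2p.direct hostnames (dots become dashes).
func escapeIP(ip string) string {
	return strings.ReplaceAll(ip, ".", "-")
}

func main() {
	ip := "3.237.240.9"
	pid := "k51qzi5uqu5dgarn9dk0o1grux8wt6ey0vk9xjc6oumuz8x425r5tndaqmdy6q"
	host := fmt.Sprintf("%s.%s.libp2p.direct", escapeIP(ip), pid)

	// The /ipX + /sni form that Lumina (rust-libp2p) reportedly cannot dial:
	fmt.Printf("/ip4/%s/tcp/12005/tls/sni/%s/ws\n", ip, host)
	// The equivalent /dns form that it can dial:
	fmt.Printf("/dns/%s/tcp/12005/tls/ws\n", host)
}
```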


codecov-commenter commented Jan 9, 2025

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 55.55556% with 8 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@5d48da3). Learn more about missing BASE report.

Files with missing lines    Patch %    Lines
client/acme.go              55.55%     6 Missing and 2 partials ⚠️


Additional details and impacted files
@@           Coverage Diff           @@
##             main      #34   +/-   ##
=======================================
  Coverage        ?   59.12%           
=======================================
  Files           ?       11           
  Lines           ?     1003           
  Branches        ?        0           
=======================================
  Hits            ?      593           
  Misses          ?      337           
  Partials        ?       73           

☔ View full report in Codecov by Sentry.

client/acme.go Outdated
Comment on lines 618 to 619
// upon successful new ipX multiaddr creation, create an additional dns multiaddr
dnsMaStr := fmt.Sprintf("/dns/%s.%s.%s/tcp/%s/tls/ws", escapedIPStr, b36PidStr, forgeDomain, tcpPortStr)
Contributor

Adding more multiaddresses to advertise doesn't seem like a good thing: we already have quite a lot depending on the number of transports enabled (e.g. TCP, QUIC, WebTransport, WebRTC, WSS, and then at least 2x for IPv4/IPv6). There are, somewhat annoyingly, a number of ways to handle WSS (e.g. /wss, /tls/sni/.../ws, /dns/.../tls/ws). I think in practice we should probably:

  1. Make sure implementations we need to interoperate with are able to support the address types we want / need (e.g. seems like rust-libp2p should probably support sni)
  2. Have some recommendations in the specs repo for the WebSocket transport on the types of multiaddresses that are highly recommended (or even MUSTs) to be supported. This might need to come with better sections on the multiaddr parsing for WebSockets (cc @MarcoPolo)

It might be reasonable to have an option here that would allow the additional address type so that we can do other forms of interop testing while we wait on a PR into rust-libp2p and lumina to support sni.

lidel (Contributor) commented Jan 9, 2025

No surprise it was not implemented: the libp2p WebSocket specs did not exist for a long time (libp2p/specs#594), and even after they were created they did not include the /ip + /sni convention until very recently (libp2p/specs#639).

iirc p2p-forge used fully resolved multiaddrs on purpose as an optimization, no? We already know the IP, and clients able to leverage that information can skip the DNS lookup and the connection will still work.

BUT in practice, Secure WebSockets in browsers require turning the multiaddr into wss://escapedIP.base36peerid.libp2p.direct:port anyway at some point, removing the ability to act on that optimization and forcing browser clients to do a DNS lookup anyway.
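The browser-side step described above can be sketched as follows. This is plain string handling for illustration only (real code would walk go-multiaddr components), and the example hostname is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// toWSSURL sketches the conversion a browser client must perform before
// dialing: a /dnsX/<host>/tcp/<port>/tls/ws multiaddr becomes a wss:// URL,
// which forces a DNS lookup on <host> regardless of any /ipX hint.
func toWSSURL(addr string) string {
	parts := strings.Split(strings.TrimPrefix(addr, "/"), "/")
	// Expect at least: dnsX, <host>, tcp, <port>, tls, ws
	if len(parts) < 6 || parts[2] != "tcp" {
		return ""
	}
	return fmt.Sprintf("wss://%s:%s", parts[1], parts[3])
}

func main() {
	addr := "/dns4/3-237-240-9.example-peer.libp2p.direct/tcp/12005/tls/ws"
	fmt.Println(toWSSURL(addr))
}
```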

👉 Given that the entire thing was created for browsers, and using resolved /ipX/ addrs buys us nothing(?) in the web context, maybe we should revisit the default convention and replace /ipX with /dnsX in the finally announced addrs produced by the factory?

There is instant value if we REMOVE the resolved /ip4/ and /ip6/ addrs and REPLACE them with /dns4/ and /dns6/ – it decreases the size of published addrs:

  • BEFORE:
    • /ip4/3.237.240.9/tcp/12005/tls/sni/3-237-240-9.k51qzi5uqu5dgarn9dk0o1grux8wt6ey0vk9xjc6oumuz8x425r5tndaqmdy6q.libp2p.direct/ws/p2p/12D3KooWA8YZg39SL7KpvFgRXeN3uYu1WpdTbo2CQmFe1us7oWVP
  • AFTER:
    • /dns4/3-237-240-9.k51qzi5uqu5dgarn9dk0o1grux8wt6ey0vk9xjc6oumuz8x425r5tndaqmdy6q.libp2p.direct/tcp/12005/tls/ws/p2p/12D3KooWA8YZg39SL7KpvFgRXeN3uYu1WpdTbo2CQmFe1us7oWVP
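The BEFORE/AFTER rewrite above can be sketched as a small helper. This is a hypothetical illustration using plain string parsing (a real implementation would use go-multiaddr components, not string splitting):

```go
package main

import (
	"fmt"
	"strings"
)

// toDNSAddr rewrites a fully resolved /ipX/.../tls/sni/<host>/ws multiaddr
// into the shorter /dnsX/<host>/tcp/<port>/tls/ws form, preserving a
// trailing /p2p/<peerid> component if present.
func toDNSAddr(addr string) (string, bool) {
	parts := strings.Split(strings.TrimPrefix(addr, "/"), "/")
	// Expect: ip4|ip6, <ip>, tcp, <port>, tls, sni, <host>, ws, [p2p, <id>]
	if len(parts) < 8 || parts[2] != "tcp" || parts[4] != "tls" ||
		parts[5] != "sni" || parts[7] != "ws" {
		return "", false
	}
	family := "dns4"
	if parts[0] == "ip6" {
		family = "dns6"
	}
	out := fmt.Sprintf("/%s/%s/tcp/%s/tls/ws", family, parts[6], parts[3])
	if len(parts) >= 10 && parts[8] == "p2p" {
		out += "/p2p/" + parts[9]
	}
	return out, true
}

func main() {
	before := "/ip4/3.237.240.9/tcp/12005/tls/sni/3-237-240-9.k51qzi5uqu5dgarn9dk0o1grux8wt6ey0vk9xjc6oumuz8x425r5tndaqmdy6q.libp2p.direct/ws/p2p/12D3KooWA8YZg39SL7KpvFgRXeN3uYu1WpdTbo2CQmFe1us7oWVP"
	after, ok := toDNSAddr(before)
	fmt.Println(ok, after) // should match the AFTER form shown above
}
```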

guillaumemichel (Contributor, Author) commented

Updated the PR to reflect @lidel suggestion.

If the consensus is to keep the ipX addresses instead of dnsX, we can simply provide a custom AddrsFactory to ants-watch while address parsing is implemented in rust-libp2p and rolled out to lumina.

lidel (Contributor) commented Jan 13, 2025

Thanks. We've discussed this during weekly Shipyard sync today, and:

  1. We strongly encourage fixing rust-libp2p to support all multiaddr types mentioned in https://github.com/libp2p/specs/blob/master/websockets/README.md#addressing (incl. /ipX/../sni/../tls/ws)
  2. Avoid introducing potential release-blocking issues for Kubo 0.33 – meaning that returning multiaddrs starting with /dnsX instead of /ipX should be behind an opt-in flag for now
  • Does this work for you @guillaumemichel? If so, add an opt-in config flag WithResolvedDNSMultiaddrs(true|false) and we can merge this without impacting Kubo 0.33. Otherwise, we need to park this until after Kubo 0.33 ships.
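The opt-in flag suggested above could look like the following sketch, using the functional-options pattern common in go-libp2p-style APIs. The option name comes from the comment; the config struct and field names are illustrative, not the real p2p-forge API:

```go
package main

import "fmt"

// config holds hypothetical addrsfactory settings.
type config struct {
	produceDNSAddrs bool // emit /dnsX addrs in place of resolved /ipX
}

// Option mutates the config (functional-options pattern).
type Option func(*config)

// WithResolvedDNSMultiaddrs toggles whether the address factory announces
// /dnsX multiaddrs instead of the fully resolved /ipX forms.
func WithResolvedDNSMultiaddrs(enabled bool) Option {
	return func(c *config) { c.produceDNSAddrs = enabled }
}

func newConfig(opts ...Option) *config {
	c := &config{} // default: off, so Kubo 0.33 behavior is unchanged
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newConfig(WithResolvedDNSMultiaddrs(true))
	fmt.Println(c.produceDNSAddrs) // prints true
}
```

Defaulting the flag to off is what keeps the change release-safe for Kubo 0.33: existing callers see no behavior change unless they opt in.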

2color (Contributor) commented Jan 14, 2025

Out of curiosity, how does merging this impact the Kubo release?

Does this change the expected Addresses.Swarm configuration in Kubo?

guillaumemichel (Contributor, Author) commented

Makes sense @lidel, I updated the PR.

lidel (Contributor) commented Jan 15, 2025

@guillaumemichel thanks! I realized you were not a member of the engineering team in this org and had to create this PR from a fork – check your email and approve the invite. It will allow you to create PRs from local branches in the future.
Let's continue in #40 (comment)

@2color no impact on config: if producing /dnsX addrs is enabled via the opt-in option, it only changes the final multiaddrs produced by the addr factory, not the "templates" in Addresses.Swarm.

@lidel lidel changed the title feat: add dns addr to addrsfactory feat: WithResolvedDNSMultiaddrs(bool) for /dnsX support in addrsfactory Jan 14, 2025
@lidel lidel changed the title feat: WithResolvedDNSMultiaddrs(bool) for /dnsX support in addrsfactory feat: add dns addr to addrsfactory Jan 15, 2025
lidel (Contributor) commented Jan 15, 2025

I wanted to push some tests, but this PR was made from a fork and I was not able to push.

Recreated this PR in a local branch, let's continue in #40

@lidel lidel closed this Jan 15, 2025

5 participants