Announce token pool info via broadcast #211
Conversation
Codecov Report
@@ Coverage Diff @@
## main #211 +/- ##
==========================================
Coverage 100.00% 100.00%
==========================================
Files 203 206 +3
Lines 11053 11153 +100
==========================================
+ Hits 11053 11153 +100
Continue to review full report at Codecov.
Rather than requiring all token pool metadata (namespace, name, id) to be written to the blockchain, send it out via a broadcast message after creating the actual pool on chain. Signed-off-by: Andrew Richardson <[email protected]>
Force-pushed 6dfa037 to 2584876.
Must now be initialized before syshandlers. Signed-off-by: Andrew Richardson <[email protected]>
filter := fb.And(
	fb.Eq("tx", pool.TX.ID),
	fb.Eq("type", fftypes.OpTypeTokensAnnouncePool),
)
Is this query good enough? Could we possibly expect multiple ops of the same type within this transaction..?
In the case of resubmission (when implemented), yes I think we could.
However, we should be able to assume there's only one in status `pending` - so maybe just add that to the query?
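To make that assumption concrete, here's a small self-contained sketch (toy types and status strings, not the real FireFly `fftypes` model or database filter API) of selecting the single pending operation of a given type within a transaction, and failing loudly if the uniqueness assumption is violated:

```go
package main

import "fmt"

// Operation is a toy stand-in for a FireFly operation record.
type Operation struct {
	TxID   string
	Type   string
	Status string
}

// findPendingOp returns the single pending operation of the given type in a
// transaction. Zero or multiple matches are errors, since the query logic
// discussed above assumes at most one pending op per type per transaction.
func findPendingOp(ops []Operation, txID, opType string) (*Operation, error) {
	var found *Operation
	for i := range ops {
		op := &ops[i]
		if op.TxID == txID && op.Type == opType && op.Status == "Pending" {
			if found != nil {
				return nil, fmt.Errorf("multiple pending %q operations in tx %s", opType, txID)
			}
			found = op
		}
	}
	if found == nil {
		return nil, fmt.Errorf("no pending %q operation in tx %s", opType, txID)
	}
	return found, nil
}

func main() {
	// A resubmission scenario: an earlier failed attempt plus one pending op.
	ops := []Operation{
		{TxID: "tx1", Type: "token_announce_pool", Status: "Failed"},
		{TxID: "tx1", Type: "token_announce_pool", Status: "Pending"},
	}
	op, err := findPendingOp(ops, "tx1", "token_announce_pool")
	if err != nil {
		panic(err)
	}
	fmt.Println("found op with status:", op.Status)
}
```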
The more I think about it, I'm very wary of that breaking idempotence. Feels like we'd need a stronger mapping between an operation and its side-effects if we want to support "retries" being equivalent with "adding another operation of the same type within the transaction". Maybe just need to revisit this when we do look at adding retries...
> Maybe just need to revisit this when we do look at adding retries...
Going to raise an architecture tagged issue on this, as it's a V1.0 item to put in the retry support. I agree we need to spend the time to get it right, and this isn't the time/place 👍
#221 raised
if err != nil {
	return false, err // retryable
}
Would like some input on the transaction validation logic below. For both cases (operation found locally and operation not found locally), I want to be sure the checks are valid and complete enough.
pool.TX.ID = transaction.ID
pool.TX.Type = transaction.Subject.Type

// Add a new operation for the announcement
Wasn't sure if we want an operation for this leg of the transaction or not (ie tracking the state of the broadcast announcement). I went ahead and added one, and I'm using it primarily to determine if the broadcast initiated with me or with another node. But I'm not positive this is correct/necessary. Open to thoughts on this.
I think it's really nice that if you list the operations on the transaction, you'll see that it triggered the broadcast on this node.
Great stuff!
Few slightly rambling comments for consideration, so haven't marked approval as of yet.
Have a look through, and see if you want to make any updates before this merges. No showstoppers in the list from my perspective.
op.Input = fftypes.JSONObject{
	"id":        pool.ID.String(),
	"namespace": pool.Namespace,
	"name":      pool.Name,
Just a note that we think this is where a custom JSON input payload could be added in, that's opaque to FireFly core and allows individual Token Connectors to define additional inputs.
For example, the existing contract address of an ERC20/ERC721.
Right, that is exposed as `config` now on the firefly-tokens-erc1155 API. I could add that to the accepted inputs for the route, store it here, and pass it through to the /pool API (all opaque to FireFly).
Added a commit for this
internal/assets/manager.go (outdated)

@@ -86,6 +91,28 @@ func (am *assetManager) selectTokenPlugin(ctx context.Context, name string) (tok
	return nil, i18n.NewError(ctx, i18n.MsgUnknownTokensPlugin, name)
}

func storeTokenOpInputs(op *fftypes.Operation, pool *fftypes.TokenPool) {
Name suggests this is generic to all `TokenOp` types. If so, should there be a `type` field? If not, maybe call it `storeTokenPoolCreateOpInputs`?
internal/assets/manager.go (outdated)

@@ -140,17 +174,12 @@ func (am *assetManager) CreateTokenPoolWithID(ctx context.Context, ns string, id
	fftypes.OpTypeTokensCreatePool,
	fftypes.OpStatusPending,
	author.Identifier)
storeTokenOpInputs(op, pool)
Maybe `add` rather than `store` for readability?
@@ -189,6 +218,11 @@ func (am *assetManager) GetTokenAccounts(ctx context.Context, ns, typeName, name
	return am.database.GetTokenAccounts(ctx, filter.Condition(filter.Builder().Eq("protocolid", pool.ProtocolID)))
}

func (am *assetManager) ValidateTokenPoolTx(ctx context.Context, pool *fftypes.TokenPool, protocolTxID string) error {
	// TODO: validate that the given token pool was created with the given protocolTxId
Think it would be good to have a high level architecture picture of the flow, and what this validation means.
Not a requirement for this PR obviously, but something for the docs when we get to them.
I know you and I have had a chance to talk about the flow... and it really feels significant to me.
How the broadcast and the token are linked together and verified, and how all nodes in the network become aware of the existence of the token (existing, or newly deployed) via the exchanges.
I attached a very high level picture of the flow I'm envisioning, but I've realized that I'm framing the question to the connector as "did Ethereum transaction 0xabc create pool F1", and the connector actually doesn't have the means to answer that.
We've hit this before, but ethconnect currently doesn't give a way to look up the events emitted from a particular transaction. I think it may be possible by decoding the output of `eth_getTransactionReceipt`, but I'm not quite sure how difficult it would be.
This is great. Think it would be good to add them to #218
> I think it may be possible by decoding the output of `eth_getTransactionReceipt`
The event stream code is processing these same binary logs, so it should certainly be possible to (just like you did for transaction inputs) allow the logs to be processed: https://github.com/hyperledger/firefly-ethconnect/blob/e708f38f5eac8ad2c4a6b084577c3954e75e7488/internal/events/logprocessor.go#L110-L181
The sticky bit will be making sure the code knows how to find the right event ABI. So the most practical and easy to implement version of the API would probably be to add a `/retrieve` as a peer to `/subscribe` under the event on the REST ABI. A lot like you did for the method input parameters.
Thanks for the tips - this will definitely be a work item that needs more exploration. Opened #222 to track it further.
// Find a matching operation within this transaction
fb := database.OperationQueryFactory.NewFilter(am.ctx)
filter := fb.And(
	fb.Eq("tx", tx),
I note there is not an index on the `tx_id` column in the SQL database impl:
firefly/db/migrations/postgres/000008_create_operations_table.up.sql, lines 18 to 21 in e9b7b74:

CREATE UNIQUE INDEX operations_id ON operations(id);
CREATE INDEX operations_created ON operations(created);
CREATE INDEX operations_backend ON operations(backend_id);
CREATE INDEX operations_namespace ON operations(namespace);
I think that's a straight omission, because we have a first class query by TX in the REST model:
Path: "namespaces/{ns}/transactions/{txnid}/operations",

... I am a little worried we're over indexing this critical table... so maybe worth working out if we need all those existing indexes. The `namespace` index looks potentially superfluous to me.
We have these URIs in the REST model:
/namespaces/{ns}/operations
/namespaces/{ns}/messages/{msgid}/operations
/namespaces/{ns}/transactions/{txnid}/operations
So all 3 are filtered by namespace, and the latter 2 are also filtered by transaction ID.
This PR adds 2 more queries by transaction ID, and DX has one query by backend ID.
I'm not sure why we have an index on "created" - maybe to support the UI?
My take: seems that the indexes most important to the internal functioning of FireFly are on ID, transaction ID, and backend ID. I'd vote to drop the indexes on `created` and `namespace` - they're probably just there to support routes for inspection/debugging, but those paths don't seem performance critical.
Let's drop just the `namespace` one for now... the reason for `created` is that the UI will provide rich filtering and counting on all its panels. To make this less problematic for the DB, each panel is scoped to a time period like "last 24h". So I would expect a realistic scenario to be that all traffic is on a single namespace at whatever TPS, and then the UI asks "show me a paginated view of everything for the last 24h"... I'm worried that without an index, that means a whole table scan.
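If the `namespace` index does get dropped and the missing `tx_id` index added, a follow-up migration might look something like this sketch (the file name, migration number, and index name here are assumptions following the existing postgres migration layout, not the actual commit):

```sql
-- Hypothetical follow-up migration, e.g.
-- db/migrations/postgres/0000NN_adjust_operations_indexes.up.sql

-- Add the missing index for the frequent query-by-transaction paths
CREATE INDEX operations_tx ON operations(tx_id);

-- Drop the namespace index, which mainly served inspection/debugging routes
DROP INDEX operations_namespace;
```

The `created` index stays, per the point above about time-scoped UI queries.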
... overall I'm sure we'll be revisiting indexes
	return nil, err
}

msg, err = bm.broadcastDefinitionAsNode(ctx, pool, fftypes.SystemTagDefinePool, waitConfirm)
I'm really torn here @awrichar on whether this should be on the `ff_system` namespace, or the namespace of the token. This is perfectly consistent with the `datatype` broadcast that happens on `ff_system` today, but I can't articulate why that object needs to be on system. Privacy groups go on the namespace they're in.
What it comes down to is whether you see the broadcast on your namespace in the UI when you go to the timeline. It was the side effect of another action, but is it more weird to see it, or for it to be hidden from you?
No need to change here - but maybe keep an eye on whether this looks "wonky" when we're using it all end-to-end, and we should consider revisiting before the concrete sets too hard.
Noted, will consider if we need to move this in the future. If we do, it probably means factoring out some common helpers from the Group broadcast (definitions on the system namespace are nicely consolidated with a set of helpers, but Groups are a bit of a one-off right now).
// Mark transaction completed
transaction.Status = fftypes.OpStatusSucceeded
err = sh.database.UpsertTransaction(ctx, transaction, false)
if err != nil {
Do we need special handling for the `HashMismatch` sentinel error - which I don't think would be retryable?

firefly/pkg/database/plugin.go, line 29 in e9b7b74:

HashMismatch = i18n.NewError(context.Background(), i18n.MsgHashMismatch)
... possible I'm identifying something that's missing elsewhere too
There should be no way for that to happen here. We're only querying the transaction, flipping the status to Succeeded, and then saving it back.
Perhaps the better question is whether we need a specific Update operation for this (as opposed to a full Upsert)?
} else {
	// No local announce operation found (broadcast originated from another node)
	log.L(ctx).Infof("Validating token pool transaction '%s' with protocol ID '%s'", pool.TX.ID, pool.ProtocolTxID)
	err = sh.assets.ValidateTokenPoolTx(ctx, &pool.TokenPool, pool.ProtocolTxID)
Just wondering, why isn't this full function something we should run if we're the originator?
I realize the operation update logic is specific to me as the originator... but ideally the rest of the validation is common.
My thought was that the transaction validation serves two purposes:
1. Confirm that the ProtocolTxID from the broadcast matches a real transaction from the connector, and that the transaction is the one that created this pool
2. Get the connector-specific outputs of that transaction (in this case, block number and transaction index)
If I'm the originator, (1) happened up on line 52, and (2) happened in the event handler when the pool was created, prior to the broadcast. So calling out to the connector for re-validation felt redundant. Doesn't mean it's wrong to redo it, but I intentionally avoided the REST API call on that path. Open to discussing it further though.
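The originator/non-originator split described above can be sketched as a small self-contained Go program (all names here are illustrative toy stand-ins, not FireFly's real types or the connector's real API):

```go
package main

import (
	"errors"
	"fmt"
)

// announcedPool is a toy stand-in for the broadcast token pool announcement.
type announcedPool struct {
	TxID         string
	ProtocolTxID string
}

// validateWithConnector stands in for the REST call out to the token
// connector that verifies the protocol transaction created this pool.
func validateWithConnector(p announcedPool) error {
	if p.ProtocolTxID == "" {
		return errors.New("missing protocol transaction ID")
	}
	return nil
}

// handleAnnouncement branches on whether a local announce operation was
// found: the originator already validated the pool-create transaction
// before broadcasting, so only the non-originator path re-validates.
func handleAnnouncement(p announcedPool, localOpFound bool) (string, error) {
	if localOpFound {
		// Originator path: skip the redundant connector call.
		return "confirmed (originator)", nil
	}
	if err := validateWithConnector(p); err != nil {
		return "", err
	}
	return "confirmed (validated remotely)", nil
}

func main() {
	p := announcedPool{TxID: "tx1", ProtocolTxID: "0xabc"}
	result, err := handleAnnouncement(p, false)
	if err != nil {
		panic(err)
	}
	fmt.Println(result)
}
```

This is only a sketch of the control flow under discussion; whether the real handler should share more of the validation between the two branches is exactly the open question above.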
For the record, I'm hinging a lot on whether particular Operation and Transaction objects are found or not found in the database. Can't decide if that's ideal or not.
This data is opaque to FireFly, but can be passed through to the token plugin to influence how the pool is created (reserved for future use). Signed-off-by: Andrew Richardson <[email protected]>
Force-pushed e991e06 to 7d2d4ea.
Signed-off-by: Andrew Richardson <[email protected]>
Signed-off-by: Andrew Richardson <[email protected]>
Thanks @awrichar for all the follow-ups.
Popping approval on here, as the only action I think before this merges is a trivial one (the indexes)... but leaving the merge with you so you can review the few notes I popped on there.
Also remove index on namespace, as it's less likely to impact real usage scenarios and there are a lot of indexes on this table. Signed-off-by: Andrew Richardson <[email protected]>
Note: this contains the commit from #210 as a pre-req.