Progress is currently only updated between each column/table combination. We should be able to push this deeper into plain_text_search and regular_expression_search so that, every 100 rows or every 20 seconds, we get progress updates while searching specific columns that take minutes to run.
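A minimal sketch of the throttled reporting described above, assuming a per-column scan loop and a `progress_callback` hook (both hypothetical names, not tied to the project's actual code):

```python
import time

def search_column(rows, match, progress_callback,
                  row_interval=100, time_interval=20.0):
    """Scan one column's rows, reporting progress every `row_interval`
    rows or every `time_interval` seconds, whichever fires first."""
    hits = []
    last_report = time.monotonic()
    for i, row in enumerate(rows, start=1):
        if match(row):
            hits.append(row)
        now = time.monotonic()
        if i % row_interval == 0 or now - last_report >= time_interval:
            progress_callback(i)  # e.g. update a progress bar / log line
            last_report = now
    return hits
```

The same loop shape would fit inside plain_text_search or regular_expression_search, with `match` being the substring or regex test for that column.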
It looks like the bottleneck is almost entirely in get_recordset_sql(). Breaking this down further would require splitting the search into batches, but I'm not sure that's worth it?
Yeah, it's low priority; I've taken it off the board. I'd assume the fixed overhead would apply to each batch, or worse, it would start fine and then degrade as the pagination progressed, so we'd go from potentially O(n) to O(n*n).
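The O(n*n) worry can be made concrete with a toy model (not the project's code): with LIMIT/OFFSET pagination, the database typically has to scan and discard every row before the offset, so each successive batch re-scans everything the previous batches already covered.

```python
def offset_scan_cost(total_rows, batch_size):
    """Rows the database effectively scans when paging a full-table
    search with LIMIT/OFFSET: each batch pays for its offset again."""
    scanned = 0
    for offset in range(0, total_rows, batch_size):
        scanned += offset + min(batch_size, total_rows - offset)
    return scanned
```

For 1000 rows in batches of 100, the summed offsets make this 5500 scanned rows versus 1000 for a single unbatched pass, and the overhead grows quadratically as the row count does.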