Bolt App Methods Hanging with ETIMEDOUT Warning During chat.postMessage Execution #2212
Hey @dianexjeong 👋 Thanks for sharing this! I caught up on #2185 and believe these errors are caused by slow or failing server responses rather than app-related settings. It's super interesting that similar errors show up with both direct axios requests and Bolt's built-in client calls. I do also find it interesting that this is showing most often with IPs associated with AWS Tokyo 🤔 We're poking into possible connectivity issues around this for both issues. Without a reliable way to reproduce this, debugging might be difficult, but I have a few questions and hopes:
I understand how frustrating both slow requests and inconsistent errors are, but I'm also hoping we can find the cause, or at least a fix, for this! 🙏
I'm also wondering if updating Bolt to […] might change anything. Edit: I misread the […]
Hello @zimeg! Thank you so much for the response! 😄 Also regarding the questions:
No, I believe the app is not hosted on AWS Lambda.
Sure, I'll test it out and bring back the results!
I am calling the usageCC module mentioned in #2185, but it is called at the last step, after all the client calls are executed. That means the postMessage was hanging before the HTTP request in usageCC was ever made, so I don't suspect that the additional HTTP request was causing the warning.
@dianexjeong For sure! Thanks for sharing more about how these requests are being processed 🙏
I was a bit eager thinking this; the default didn't change. We're searching for server details around timeout issues, but I'm also curious whether your app is running behind a proxy or VPN? Sometimes this can upset requests or responses without being super clear 👀
No problem @zimeg! Please let me know if you need any additional information. I would be happy to provide as much as I can 😄 Yeah, I tried refactoring the HTTP POST request with the native fetch method.
Nope, we are not running behind a proxy or VPN.
note: a few steps have to happen to use this and it should be used with caution! these are experimental changes and not stable. For #2212, this adds clientOptions.experimentalAxiosAdapter to App, which accepts combinations of ['xhr', 'http', 'fetch'] and uses that as the default.

testing setup:
- git clone https://github.com/slackapi/node-slack-sdk
- cd node-slack-sdk
- git checkout web-api-6.x-experimental-axios-adapter
- cd packages/web-api
- npm install
- npm run build
- cd ../../../
- git clone https://github.com/slackapi/bolt-js
- cd bolt-js
- git checkout web-api-6.x-experimental-axios-adapter
- npm install ../node-slack-sdk/packages/web-api
- npm run build
- cd ../path/to/app
- npm install ../path/to/bolt-js
- node app.js
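For reference, a minimal sketch of what wiring the experimental option into an app might look like, assuming the experimental branches above are installed; `experimentalAxiosAdapter` only exists on those branches, and the token/secret environment variable names are placeholders:

```typescript
// Sketch only: requires the experimental web-api and bolt-js branches from the steps above.
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,              // placeholder env var name
  signingSecret: process.env.SLACK_SIGNING_SECRET, // placeholder env var name
  clientOptions: {
    // Accepts combinations of ['xhr', 'http', 'fetch'] and uses that as the default adapter,
    // per the note above.
    experimentalAxiosAdapter: ['http', 'fetch'],
  },
});

(async () => {
  await app.start(3000);
  console.log('⚡️ Bolt app is running');
})();
```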
@dianexjeong Some experimental changes are on the web-api-6.x-experimental-axios-adapter branch noted above, in case you'd like to test a different HTTP adapter.
The additional details are super helpful and I can't think of immediate problems with the setup, but perhaps some logs show something strange in either inbound or outbound requests?
@zimeg I'm not getting any logs other than the [WARN] bolt-app http request failed connect ETIMEDOUT warning.
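A minimal sketch, assuming the standard Bolt logLevel option, of turning on debug logging so that more than the single warning gets captured around the failing request:

```typescript
// Sketch: raise Bolt's log level to DEBUG so the WebClient logs each outgoing API call
// and its retries, giving more context around the ETIMEDOUT warning.
import { App, LogLevel } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,              // placeholder env var name
  signingSecret: process.env.SLACK_SIGNING_SECRET, // placeholder env var name
  logLevel: LogLevel.DEBUG,                        // default is INFO
});
```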
@zimeg A quick follow-up - got this again.
@zimeg New update! I noticed that the ETIMEDOUT warnings keep referencing the same Slack IP addresses.
The above IPs belong to Slack, in the AWS Tokyo region - it was the same IP that resolving the Slack API hostname returned. I'm assuming that the ETIMEDOUT errors are originating from Slack's servers. Could you look into this? Really appreciate your help!
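As an aside, a small sketch of how one might check, from the affected host, which addresses the Slack API hostname resolves to and compare them against the IPs in the warnings; the hostname used here is an assumption based on the default Slack API URL:

```typescript
// Sketch: resolve the Slack API hostname from the affected host and log the IPv4 addresses,
// to compare against the addresses seen in the ETIMEDOUT warnings.
import { resolve4 } from 'node:dns/promises';

(async () => {
  const addresses = await resolve4('slack.com'); // default Slack API host; adjust if needed
  console.log('slack.com resolves to:', addresses);
})();
```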
FWIW - I've been getting the ETIMEDOUT errors when using axios against the Slack APIs lately - but only for like the past 3-4 weeks; prior to that, everything had been working fine. I have about 13 IPs that seem to be hit, with some overlap with the ones above.
Hello! My Slack app running on AWS ECS (region: Tokyo) has been hitting the same ETIMEDOUT errors.
Hi, checking in here. Are you all still experiencing issues @dianexjeong @bonespiked @tomohiroyoshida? It seems to be regional, from what I can tell. Also worth double-checking whether any proxies are in play in your situations?
Hello @filmaj! Thanks for checking in. Yes, the timeouts are still present and nope, we are not running behind any proxies.
@tomohiroyoshida mentioned their app runs in the AWS Tokyo region, and the IP that @dianexjeong's app resolved the Slack API to also exists in the AWS Tokyo region. @SEEWON @dianexjeong @NatsukiOka @bonespiked can you confirm which hosting provider / what region your app runs in? The reports here seem to point to low-level infrastructural issues in and around the AWS Tokyo region - I want to confirm whether this is perhaps a regional problem, and knowing where your app is hosted can help confirm or disprove this theory. Thanks!
@SEEWON and I are also hosting our app in the AWS Tokyo region!
For ours, it wasn't a Slack issue - someone had dorked up all our routing tables, so 1/3 of requests were correct, 1/3 were being routed out of the EU, and 1/3 were dead. It was quite a tirade of profanity upon my discovery of this....
Sorry to hear about that @bonespiked! Good news that y'all figured it out though. It also partially confirms my hypothesis that this has to do with low-level networking issues - misconfigured routing tables at least make sense in terms of the symptoms reported here.

For the record, we have had other customer reports come in that are very similar to this, and all are in or close to the AWS Tokyo region. We continue to work with our infrastructure team to try to get more visibility into this issue. It may even be an AWS problem - hard to say at this point. Any additional information you can provide to us is helpful (@dianexjeong), such as:
- which hosting provider and region your app runs in
- whether anything (proxy, VPN, etc.) sits between your app and Slack's API
It's somewhat relieving that it wasn't just my problem :)
To answer your questions,
- our app is hosted on EC2 Seoul region
- no, there is nothing in between our app and Slack's api.
Please let me know if you need any other information! I'd be glad to
provide as much as I can.
👋 It looks like this issue has been open for 30 days with no activity. We'll mark this as stale for now, and wait 10 days for an update or for further comment before closing this issue out. If you think this issue needs to be prioritized, please comment to get the thread going again! Maintainers also review issues marked as stale on a regular basis and comment or adjust status if the issue needs to be reprioritized.
As this issue has been inactive for more than one month, we will be closing it. Thank you to all the participants! If you would like to raise a related issue, please create a new issue which includes your specific details and references this issue number.
@slack/bolt version
Your App and Receiver Configuration
Node.js runtime version
Expected result:
I'm experiencing VERY slow speeds when using the overall features of my app.
I would expect fast (if not immediate) responses from my features, which include a) installing the app / b) opening modals / c) submitting modals / d) sending messages... etc.
I experienced very slow behavior from my app in this specific incident:
a) I submitted a modal with select options and text inputs.
b) The code that is triggered is as follows (see the sketch after this list):
c) I expect the above four (1~4) steps to occur immediately, if not within a second or two.
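A minimal sketch of what a handler like the one described in (b) might look like; the callback ID, DB helper, channel, and steps 3 and 4 are placeholders for illustration, since the original snippet isn't included above:

```typescript
// Hypothetical reconstruction of the four-step flow described above; names are placeholders.
import { App } from '@slack/bolt';
import { saveSubmission } from './db'; // hypothetical DB helper

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

app.view('request_modal', async ({ ack, view, client, logger }) => {
  await ack(); // acknowledge the modal submission right away

  // 1) insert the submission into the DB (reported as completing immediately)
  await saveSubmission(view.state.values);

  // 2) post a message -- the call reported as hanging before the ETIMEDOUT warning
  await client.chat.postMessage({
    channel: '#requests', // placeholder channel
    text: 'A new request was submitted.',
  });

  // 3) and 4) the remaining steps (not described in detail in the report)
  logger.info('Modal submission processed');
});
```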
Actual result:
In the above incident, I observed that the 1) first step of inserting into the DB was completed immediately after the modal submission, but the rest of the steps 2)~4) were not happening. It took over a minute for the remaining steps to be completed, and it seemed like the app was hanging at the second step, the postMessage method. After a minute, the remaining steps were executed almost simultaneously.

The log showed a [WARN] bolt-app http request failed connect ETIMEDOUT 52.192.46.121:443 warning, which seems to be related to the incident. Here is the full log:
The timestamp on the left is the time when it occurred, and I am in the Asia/Seoul time zone.
There were no other error messages besides the above log.
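As a side note, a hedged guess at why the call eventually succeeded after about a minute: the WebClient retries failed requests with backoff by default, so a connection timeout followed by a successful retry would look like a long hang. A minimal sketch, assuming Bolt's clientOptions is passed through to the WebClient, of tightening that retry policy so a timing-out request surfaces sooner:

```typescript
// Sketch: a tighter retry policy so a timing-out request surfaces as an error sooner
// instead of retrying quietly for a long time. The policy chosen here is just an example.
import { App } from '@slack/bolt';
import { retryPolicies } from '@slack/web-api';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  clientOptions: {
    retryConfig: retryPolicies.fiveRetriesInFiveMinutes, // default is tenRetriesInAboutThirtyMinutes
  },
});
```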
Our team has recently raised an issue regarding the axios ETIMEDOUT errors. (#2185)
This has partially been resolved by using the native fetch method instead of axios, as it seemed to be an error originating from axios itself. However, the warning message that I saw seems to be related to the Bolt framework, not axios, since the warning doesn't carry an axios tag. As I commented in that issue, I am experiencing VERY slow behavior from my app from time to time, even for modal opens and app installations - and I am getting expired_trigger_id errors as well. Unfortunately, it doesn't seem to appear in a fixed pattern, and I am not able to reproduce the error on demand.
Is this also related to how Bolt processes events with axios?
It would be a huge help if you could help us with this issue! This has been bugging our team for around a month already. :(
Thank you in advance!
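Not specific to the network timeouts above, but since expired_trigger_id usually means views.open was called more than about three seconds after the interaction, here is a hedged sketch of opening the modal before any slow work; the action ID, modal contents, and the placement of the slow work are placeholders:

```typescript
// Sketch: use the trigger_id immediately, since it expires roughly three seconds
// after the interaction, and only do slow work after the modal is open.
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

app.action('open_request_modal', async ({ ack, body, client }) => {
  await ack();

  await client.views.open({
    trigger_id: (body as { trigger_id: string }).trigger_id,
    view: {
      type: 'modal',
      callback_id: 'request_modal',
      title: { type: 'plain_text', text: 'New request' },
      blocks: [
        { type: 'section', text: { type: 'mrkdwn', text: 'Placeholder content' } },
      ],
    },
  });

  // Slow work (DB lookups, extra HTTP calls) happens only after the modal is open.
});
```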
Requirements
For general questions or issues about the Slack API platform or its server-side behavior, please submit questions at https://my.slack.com/help/requests/new instead. 🙇
Please read the Contributing guidelines and Code of Conduct before creating this issue or pull request. By submitting, you are agreeing to those rules.