Expected read and write performance of NVMe? #76

Hi,

I just programmed the latest 2021.04 FPGA image and Yocto build. I am evaluating the PolarFire SoC for a product that writes Ethernet packets to an NVMe device.

My initial basic benchmarking attempts using dd (dd if=/dev/zero) show quite slow write performance, on the order of 23 MB/s.

Do you have any internal benchmarks for NVMe write performance? At the moment I'm not sure whether the limit is the CPU or the PCIe interface; I suspect the former, even though dd should be trivial. I will be investigating this in the coming days, but if you have any internal documentation or benchmarks, could you share them?

Regards,
Diarmuid
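For reference, a write test of the kind described above might look like the following sketch; the output path, block size, count, and oflag are assumptions, not the exact command used in the post:

```sh
# Hypothetical sequential-write test of the kind described above.
# /mnt/nvme is an assumed mount point on the NVMe drive; bs/count
# and oflag=direct (bypass the page cache) are also assumptions.
dd if=/dev/zero of=/mnt/nvme/test.bin bs=1M count=1024 oflag=direct
```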
Comments

Hey @diarmuidcwc, we are investigating PCIe performance at the moment, as there have been a few reports of throughput issues. Have you seen this issue? I'm trying to find out what information I can share with you in terms of benchmarks, and I'll get back to you when I have something useful.
Cheers,
Hugh
I did just a rudimentary test using dd and fio, mostly dd as fio is very slow on the board.
I am getting a lot of timeout errors like this:
My NVMe has some activity LEDs, and during these tests the LEDs only flash briefly, leading me to guess that the issue is not on the NVMe side. It suggests that accesses are brief with long delays between them, which would be consistent with timeouts.
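For reference, a basic sequential-write fio job of the sort mentioned might look like this; the device path, sizes, and queue depth are assumptions, not the parameters actually used in the test above:

```sh
# Hypothetical sequential-write test with fio; /dev/nvme0n1 and all
# job parameters below are assumptions, not taken from this thread.
# WARNING: writing to the raw device destroys any filesystem on it.
fio --name=seqwrite --filename=/dev/nvme0n1 --rw=write \
    --bs=1M --size=1G --ioengine=libaio --iodepth=8 --direct=1
```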
Yeah, the work we're doing in-house suggests we're CPU-bound, in general. We're digging into exactly why that is at the moment. For example, dd only runs on one CPU, and each dd process currently maxes out at around 29 to 30 MB/s while the PCIe/NVMe system stays mostly idle. To see what I mean, you should be able to log in 4 times (for example, 4 ssh sessions), run the dd command above in each session, and get roughly the same performance (say ~20 MB/s) on each job.
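One way to run that experiment from a single shell, rather than four ssh sessions, is to background the jobs; the mount point, file names, and sizes below are illustrative assumptions:

```sh
# Run four dd writers concurrently to check whether aggregate
# throughput scales across CPUs; paths and sizes are assumptions.
for i in 1 2 3 4; do
  dd if=/dev/zero of=/mnt/nvme/test$i bs=1M count=512 oflag=direct &
done
wait   # block until all four writers finish, then compare their rates
```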
Thanks guys. Good news for me that others are seeing this!
Hey @diarmuid, our latest reference design and Linux releases contain changes to the PCIe that have shown performance improvements for NVMe drives. Do you want to give it a try and see if it also improves things for you?
Thanks. I'll check it out |
Not so sure about these improvements. Maybe it's my particular setup. I took the 2021.08 FPGA image + wic.
Hi @diarmuid, any luck with the other NVMe?
Anecdotally, things seem more stable to me with the 2022.02 release. I used to see pretty frequent corruptions when interacting with the NVMe (I've got a WD Blue 250GB SSD attached) but haven't seen any lately.
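For anyone wanting to check for this kind of corruption, one minimal approach (the mount point and sizes are assumptions) is to write a file, drop the page cache so the data must be re-read from the drive, and verify its checksum:

```sh
# Sketch of a data-integrity check; /mnt/nvme is an assumed mount point.
dd if=/dev/urandom of=/mnt/nvme/check.bin bs=1M count=256
sha256sum /mnt/nvme/check.bin > /tmp/check.sha256
sync
echo 3 > /proc/sys/vm/drop_caches   # needs root; forces reads from the drive
sha256sum -c /tmp/check.sha256      # prints OK if the data read back matches
```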
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →