

First, our DAVG is higher than our FlashArray latency. Why is that? Well, you need to understand how we calculate our latency and how ESXi calculates DAVG (GAVG, the total latency the guest sees, is basically DAVG + KAVG). This changes depending on whether it is a write or a read. For a write:

1. ESXi submits the write request to the array.
2. When the FlashArray gets the write request, the FlashArray latency timer starts.
3. The FlashArray allocates a buffer and tells the host it is ready.
4. The host sends the actual data to fill the buffer. This is another one-way trip across the SAN, which is also included.
5. The FlashArray then sends out the write acknowledgement. As soon as this leaves the FlashArray, the FlashArray latency timer stops.
6. There is one more one-way SAN transit, and then the ESXi host receives the acknowledgement and the DAVG timer stops.
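Putting those timers together explains the gap: DAVG includes the two one-way SAN transits (the initial request and the final acknowledgement) that fall outside the FlashArray's own timer. A minimal sketch of the arithmetic, with purely illustrative numbers (the transit and latency values below are assumptions, not measurements):

```python
# Illustrative model of why DAVG > FlashArray-reported latency for a write.
# All values in milliseconds; the numbers are assumptions for illustration.

one_way_san = 0.05       # one-way SAN transit time (assumed)
array_latency = 0.40     # what the FlashArray timer measures: the "ready"
                         # response, the data transit, and internal work

# DAVG adds the two transits that fall outside the array's timer:
# the initial write request and the final acknowledgement.
davg = one_way_san + array_latency + one_way_san

kavg = 0.01              # time spent queued in the ESXi kernel (assumed)
gavg = davg + kavg       # GAVG = DAVG + KAVG

print(f"FlashArray latency: {array_latency:.2f} ms")
print(f"DAVG:               {davg:.2f} ms")
print(f"GAVG:               {gavg:.2f} ms")
```

However small the SAN transits are, DAVG can never be lower than the array-reported latency for the same I/O.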

So I put a blog post together of a use case, walking through solving a performance problem. A few notes first:

- This is a simple example to explain how queuing works in ESXi.
- Mileage will vary depending on your workload and configuration.
- This workload is targeted specifically to make the relationships easier to understand.
- PLEASE do not make changes in your environment until you have at least read my conclusion at the end, and frankly not without direct guidance from VMware support.

If you prefer a video, here is my 1 hr VMworld session that goes into depth on what I write below:

One thing to note here: this is one virtual machine running a workload against a virtual disk on one VMFS. So nothing is interfering; while this is not realistic, I think it is still valuable for explaining how these things work. There is A LOT of information here, and it pretty much tells you everything you need to know to solve the problem.

Customer: "The latency in my virtual machine is high and my IOPS is not as high as I want it to be. It was much better on my equivalent physical server. Furthermore, the latency on the FlashArray is reported as low. Is your array lying? Why is my VM latency high? What can I do (if anything) to fix it?"

For those curious, my virtual machine is configured as such:

- Paravirtual SCSI Adapter, default settings.
- Workload virtual disk is on a different VMFS than the boot virtual disk of the VM.

The workload is configured to run 130,000 4K read IOPS with 96 outstanding I/Os (threads). First, let's look at the virtual machine portion of the screenshot. Sure enough, the latency is relatively high: 1.4 ms, which is high for an AFA, especially for just 4 KB reads. Also, from the VDBench printout, the active queue depth in the VM for that virtual disk is 96, which makes sense as my VDBench workload is configured to use 96 threads. But the workload is actually configured to push 130,000 IOPS, and for some reason it cannot. So, well, it must be the storage! Let's look at the FlashArray:
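Before blaming the array, a back-of-the-envelope check with Little's Law (IOPS = outstanding I/Os / average latency) shows the numbers above are already self-consistent: with 96 outstanding I/Os at 1.4 ms each, 130,000 IOPS is simply not reachable. A quick sketch:

```python
# Little's Law: IOPS = outstanding I/Os / average latency.
# With 96 outstanding I/Os completing in 1.4 ms each, the workload
# cannot reach its 130,000 IOPS target no matter what the array does.

outstanding_io = 96
latency_s = 0.0014            # 1.4 ms observed inside the VM

max_iops = outstanding_io / latency_s
print(f"Achievable IOPS at 1.4 ms: {max_iops:,.0f}")   # roughly 68,571

# Latency that would be needed to hit the target at 96 outstanding I/Os:
target_iops = 130_000
required_latency_ms = outstanding_io / target_iops * 1000
print(f"Latency needed for {target_iops:,} IOPS: "
      f"{required_latency_ms:.3f} ms")
```

So either the per-I/O latency has to come down, or more I/Os have to be in flight, and that is where the queue depth limits come in.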
So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I am looking into providing better guidance around is ESXi queue management, for example the virtual machine vSCSI adapter queue depth limit. I have had more than a few questions lately about handling this, either just general queries or performance escalations. And generally, from what I have found, it comes down to a fundamental understanding of how ESXi queuing works.
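As a rough mental model of why those limits matter: the number of I/Os actually in flight to the device is bounded by the smallest queue along the path, and anything above that bound waits in a kernel queue (which shows up as KAVG). A minimal sketch, where the limit values are illustrative assumptions rather than official ESXi defaults, so check your own environment:

```python
# Sketch: effective in-flight I/O is capped by the smallest queue on the
# path from the guest to the device. Limit values here are illustrative
# assumptions, not official ESXi defaults.

def effective_queue_depth(vm_outstanding_io: int,
                          vscsi_adapter_limit: int,
                          dqlen: int) -> int:
    """Number of I/Os that can actually be in flight at once."""
    return min(vm_outstanding_io, vscsi_adapter_limit, dqlen)

# A VM pushing 96 outstanding I/Os through a path whose datastore queue
# depth (DQLEN, assumed 32 here) is the smallest limit only ever has 32
# in flight; the remaining 64 wait in a kernel queue.
in_flight = effective_queue_depth(96, 254, 32)
queued = 96 - in_flight
print(f"In flight: {in_flight}, queued in kernel: {queued}")
```

The rest of this post walks through exactly how those queues interact in a real workload.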
