
Is anyone else having issues with access to private storage through NPA (Private Access) becoming extremely slow? We are mostly a Windows shop, and a large portion of our end users have an issue where moving any file from one of our NetApp on-prem storage systems, or even a Windows-based file server, is insanely slow when they are using ZTNA. It seems to mainly affect transfers of lots of small files rather than single large files. With the client uninstalled and the user back on our traditional VPN we get about what we would expect (a few megabits per second, depending on the end user's internet speed). Enabling the client puts us down to ancient baud-modem-level speeds, sometimes as low as 300 bits per second!
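For anyone who wants to quantify the small-file vs. large-file difference, a rough comparison can be scripted along these lines (the UNC path is a placeholder rather than one of our actual servers, and the sizes are just illustrative):

```python
import os
import shutil
import tempfile
import time

# Placeholder UNC path -- substitute a share reachable over the NPA tunnel.
SHARE = r"\\fileserver.example.local\testshare"

def copy_and_time(src_files, dest_dir):
    """Copy a list of files to dest_dir and return elapsed seconds."""
    start = time.monotonic()
    for f in src_files:
        shutil.copy(f, dest_dir)
    return time.monotonic() - start

def make_files(directory, count, size_bytes):
    """Create `count` files of `size_bytes` each and return their paths."""
    paths = []
    for i in range(count):
        p = os.path.join(directory, f"test_{i}.bin")
        with open(p, "wb") as fh:
            fh.write(os.urandom(size_bytes))
        paths.append(p)
    return paths

with tempfile.TemporaryDirectory() as tmp:
    # Roughly the same total payload (~100 MB) split two different ways.
    many_small = make_files(tmp, count=1000, size_bytes=100 * 1024)        # 1000 x 100 KB
    one_large = make_files(tmp, count=1, size_bytes=100 * 1024 * 1024)     # 1 x 100 MB

    t_small = copy_and_time(many_small, SHARE)
    t_large = copy_and_time(one_large, SHARE)

    print(f"1000 x 100 KB: {100 / t_small:.1f} MB/s")
    print(f"1 x 100 MB:    {100 / t_large:.1f} MB/s")
```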

Thank you for reaching out, @chrisisinclair! Our community team is looking into this for you and will get back to you shortly! If anyone else has any ideas that you feel may help, please share them here. 


It turns out the publishers in our DMZ were not connecting to the correct datacenter: with EDNS enabled they would connect to NYC or Northern Virginia and route all traffic there from our Burbank, CA colo. We worked with support and have temporarily mitigated this by disabling EDNS on all publisher servers and then manually pointing each server at the San Jose datacenter by editing the hosts file.
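In case it helps anyone applying the same workaround: a quick way to confirm the hosts-file override is actually taking effect on a publisher is to check what the gateway hostname resolves to locally. The FQDN below is a placeholder; support told us the exact hostnames and San Jose IPs to pin for our tenant.

```python
import socket

# Placeholder FQDN -- use the actual gateway/stitcher hostname from your
# publisher configuration (support can confirm the right one for your tenant).
GATEWAY_HOST = "gateway.npa.example.com"

# getaddrinfo() honours the local hosts file (/etc/hosts on Linux,
# C:\Windows\System32\drivers\etc\hosts on Windows), so after the edit this
# should print the pinned San Jose address rather than an EDNS-derived one.
for family, _, _, _, sockaddr in socket.getaddrinfo(GATEWAY_HOST, 443,
                                                    proto=socket.IPPROTO_TCP):
    print(f"{GATEWAY_HOST} -> {sockaddr[0]}")
```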

 

We also recently did further testing by deploying a brand new v95 Ubuntu publisher in Burbank, and it had the same behaviour. We then deployed a brand new v95 Ubuntu publisher in our Phoenix datacenter, and it also connected to NYC.


Hi. Are there any new updates regarding this concern? We are also experiencing extreme slowdowns over file shares, so users end up going back to traditional VPNs.


@noelnoguera has anyone from Support looked into this?


Access through Private Access is slower than our traditional VPN. We installed enough publishers in our data center, but I can't get more than 2 MB/sec; most of the time it is about 1-2 MB/sec. Is there a way to increase the speed, or is this by design?

 


@thoang3 has anyone from Support looked into this? NPA Publishers can support up to 500 Mbps, so the observed rate looks really low. If you haven't already, I recommend opening a Support case so we can have someone review the performance issues.


I know I may be late, but I am still having this issue with our recent deployment of Netskope. Has anyone found a solution to this? Our traditional VPN works as expected; over NPA we are experiencing the same issues as @noelnoguera. Any insight would be great.


We’ve just enabled NPA for a large user group and almost immediately started seeing tickets from users reporting similarly poor SMB/DFS performance, down to 2 MB/s, so it would seem this is still an issue.

In our case, waiting for Local Broker seems to be the only option for on-premises users, but this level of performance will also be unacceptable for our off-premises users.


@KunalShah we’ve just opened support case(s) on this.


@chrisisinclair I am having a similar issue where it appears my publishers in Michigan are connecting to a stitcher on the West Coast. I have the document on how to update the hosts file and turn off EDNS, but I do not know where to find a list of stitchers to define. Do you know where to get a list like this, or did support just tell you which ones to use?


@jasonbeebe @elawaetz @Dickey233 - Hello, sorry I’m late to the party but I was just sent a link to this post by a customer who is experiencing similar issues. I wanted to post in case this helps anybody in the future. As Kunal has pointed out, speeds should not be this slow over ZTNA.

  • One of the older posts referenced EDNS as a potential issue. If you are an older Netskope customer, I would recommend reaching out to support, your CSM, or your SE and ensuring that Global Server Load Balancing (GSLB) is enabled for your tenant. This ensures that clients and publishers choose the most performant Netskope PoPs based on packet round-trip times rather than physical location (a quick RTT sanity check is sketched after this list).
  • One of the earlier posts by @elawaetz mentioned SMB/DFS performance issues. In this case, we need to know whether the SMB shares are distributed across multiple physical locations. If so, private apps should be created for each SMB server so they can be tied to their local publisher. For example: SMB server A should be mapped to publisher A at site A, and SMB server B should be mapped to publisher B at site B. If this isn’t done, you may be running into a scenario where publisher A is trying to reach SMB server B at site B, which adds unintended latency. You can find greater detail on this configuration here - https://docs.netskope.com/en/netskope-private-access-for-smb-and-dfs-services/
  • If you have configured GSLB and specific apps for your SMB/DFS shares and are still experiencing slow performance, contact support or an SE so we can validate that the location of your egress IP addresses is correct in the IP databases used by GSLB.
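Not an official tool, but as a rough sketch of the kind of round-trip comparison GSLB makes, you can measure the TCP handshake time to a few candidate PoPs from a publisher or client. The hostnames below are placeholders; your SE or support can provide the actual gateway FQDNs for your tenant.

```python
import socket
import statistics
import time

# Placeholder PoP hostnames -- substitute the gateway FQDNs for the PoPs you
# want to compare.
CANDIDATE_POPS = [
    "gateway-sjc.npa.example.com",   # San Jose
    "gateway-iad.npa.example.com",   # N. Virginia
    "gateway-nyc.npa.example.com",   # New York
]

def tcp_rtt(host, port=443, samples=5):
    """Median time to complete a TCP handshake to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            times.append((time.monotonic() - start) * 1000)
    return statistics.median(times)

for pop in CANDIDATE_POPS:
    try:
        print(f"{pop}: {tcp_rtt(pop):.1f} ms")
    except OSError as exc:
        print(f"{pop}: unreachable ({exc})")
```

If GSLB is steering you to a PoP that is clearly not the lowest-RTT one from your egress IP, that is exactly the kind of mismatch worth raising in a support case.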

@AJ-Dunham thank you for sharing all of this, though it adds nothing new to the equation on our part.

We’ve been running GSLB since the start, our SMB/DFS servers are configured individually with local publishers, and we still see a significant reduction in SMB speed across NPA due to the increased RTT.

We have worked at length with both the product team and support on this, ever since we saw the first reports of SMB taking a performance hit.

In our case we have deployed Local Broker at our major sites to work around the issue.

I ran a series of tests to verify Local Broker performance, using iperf3 as a baseline and robocopy to/from a single SMB share over a 1 Gbps wired LAN (a sketch of the test commands follows the results):

With iperf3 we saw the following average throughput across all tests:

  • LAN: 938 Mbps
  • Local Broker: 402 Mbps
  • Cloud Broker: 210 Mbps

With SMB we saw:

  • LAN: 786 Mbps
  • Local Broker: 348 Mbps
  • Cloud Broker: 114 Mbps
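
For anyone who wants to run a comparable baseline, the tests were essentially along these lines; the sketch below just wraps the two tools, with server names and paths as placeholders rather than our actual environment:

```python
import json
import subprocess

# Placeholders -- substitute your own iperf3 server and SMB paths.
IPERF_SERVER = "iperf-server.example.local"
SMB_SOURCE = r"\\fileserver.example.local\testshare\dataset"
LOCAL_DEST = r"C:\temp\robocopy-test"

# 30-second TCP throughput test; -J emits JSON so the summary is easy to parse
# (for a TCP client test the receive-side summary is under end.sum_received).
iperf = subprocess.run(
    ["iperf3", "-c", IPERF_SERVER, "-t", "30", "-J"],
    capture_output=True, text=True, check=True,
)
bps = json.loads(iperf.stdout)["end"]["sum_received"]["bits_per_second"]
print(f"iperf3 throughput: {bps / 1e6:.0f} Mbps")

# SMB copy of the same dataset; /E copies subdirectories, /MT:8 uses 8 threads.
# robocopy exit codes below 8 indicate success; its summary lines include the
# transfer speed.
robo = subprocess.run(
    ["robocopy", SMB_SOURCE, LOCAL_DEST, "/E", "/MT:8"],
    capture_output=True, text=True,
)
if robo.returncode < 8:
    print("\n".join(robo.stdout.splitlines()[-10:]))
```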

--Erik

