
Making Netskope Cloud Exchange Highly Available

  • 10 June 2024

The Netskope Cloud Exchange

Netskope offers an interesting solution for this use case: the Netskope Cloud Exchange (also referred to as Netskope CE). It is a free offering from Netskope that lets organizations share IoC information between their security tools using the Threat Exchange, ingest transaction logs from Netskope using the Log Shipper, relay alerts to ticketing services using the Ticket Orchestrator, and exchange risk scores between multiple sources using the Risk Exchange. Customers install and run it themselves in a Linux Docker environment. You can read more about Netskope Cloud Exchange here. As mentioned above, the Netskope Cloud Exchange offers four main modules: the Log Shipper, Ticket Orchestrator, Threat Exchange, and Risk Exchange, all hosted and run in a single Docker environment. Make sure you go through the system requirements before you provision your infrastructure for the Cloud Exchange.

 

High Availability on Cloud Exchange

High Availability is the ability of a hardware or software system to operate continuously, without interruption, throughout its service period. The goal is to maintain high service levels and tolerate faults so that the recipient of the service is unaffected by unexpected failures. The Netskope Cloud Exchange plays a crucial role within our Netskope SOC, which uses all of its modules to protect Netskope and strengthen our defensive controls. It is therefore important to set up our Cloud Exchange so that it is resilient against unexpected failures that would otherwise affect our security team’s operations. Let’s discuss how Netskope CustomerZero migrated our standalone Cloud Exchange setup to a High Availability setup. Please read this blog alongside Netskope’s official Cloud Exchange documentation, available here; we will point to the official documentation where relevant.

 


 

Cloud Exchange HA Deployment

Internally, CustomerZero has set up our Cloud Exchange on an Ubuntu virtual machine hosted in the cloud, so this document is aimed at customers with a similar deployment. If you have a different setup and would like to understand High Availability for it, feel free to reach out to your Customer Success team. Follow the steps below to upgrade your Cloud Exchange from standalone to Highly Available. Once again, the steps below apply when you already have the Cloud Exchange deployed standalone and are planning to upgrade it to High Availability.

  1. Create a snapshot image of the existing Cloud Exchange setup before updating it to High Availability. This can be used to restore the Cloud Exchange later in case the update fails.
  2. Back up the contents of the ‘ta_cloud_exchange’ folder on your primary Cloud Exchange setup as an additional restore point.
  3. Ensure that the version being installed or updated to is Cloud Exchange version 5 or later, as High Availability was introduced in version 5.
  4. Note down the MongoDB Maintenance Password and the Maintenance Token.
  5. Create 2 new virtual machines to act as the secondary nodes, and make sure the primary and secondary nodes can communicate with each other.
  6. Update the OS packages on all 3 Cloud Exchange nodes using their respective package managers.
  7. Make sure the basic system requirements and other Cloud Exchange dependencies are installed on all 3 nodes, as mentioned in the documentation:
    1. Operating System Requirements
    2. Docker & docker-compose or podman
    3. Python3
    4. Git
    5. Zip
  8. Install the Python dependencies on all 3 nodes as mentioned in the documentation:
    1. pyyaml
    2. python-dotenv
    3. pymongo
  9. Ensure SELinux is disabled on all 3 nodes.
  10. Open the required ports on all 3 nodes, as mentioned in the Cloud Exchange documentation.
  11. Create an NFS volume and ensure that all 3 Cloud Exchange nodes can connect to it.
  12. Create a directory on the NFS volume to be used by all 3 nodes for information exchange.
  13. Mount the NFS volume on all 3 Cloud Exchange nodes and verify it by accessing the directory created on the NFS volume in Step 12.
  14. Pull the Cloud Exchange GitHub repository onto the 2 newly created nodes and update the repository on the primary node.
  15. If you have certificates in the standalone deployment and want to migrate them to the High Availability setup, copy them to /config/ssl_certs. Further information can be found in the documentation.
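The environment checks in steps 7–13 can be sketched as a quick pre-flight script to run on each of the 3 nodes. This is a minimal illustration, not part of the official installer: the NFS mount point `/mnt/ce_share` is a hypothetical path, and Podman users should substitute `podman` for `docker` in the tool list.

```shell
#!/bin/sh
# Hypothetical pre-flight check for the HA prerequisites above.
# It only reports; it changes nothing on the node.

missing=""

# Step 7: core tooling (docker, python3, git, zip).
for cmd in docker python3 git zip; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done

# Step 8: Python modules (import names for pyyaml, python-dotenv, pymongo).
for mod in yaml dotenv pymongo; do
    python3 -c "import $mod" >/dev/null 2>&1 || missing="$missing python:$mod"
done

# Step 9: SELinux must be disabled; getenforce exists only on SELinux hosts.
if command -v getenforce >/dev/null 2>&1; then
    selinux_mode=$(getenforce)
else
    selinux_mode="Disabled"
fi

# Step 13: the shared NFS directory must be mounted (hypothetical path).
nfs_mount="${NFS_MOUNT:-/mnt/ce_share}"
if command -v mountpoint >/dev/null 2>&1 && mountpoint -q "$nfs_mount"; then
    nfs_status="mounted"
else
    nfs_status="not mounted"
fi

echo "missing packages:${missing:- none}"
echo "selinux: $selinux_mode (must be Disabled)"
echo "nfs $nfs_mount: $nfs_status"
```

A node is ready for the HA setup only when the script reports no missing packages, SELinux disabled, and the NFS path mounted.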


  16. Note down the private IP addresses of all 3 Cloud Exchange nodes.
  17. Stop the primary Cloud Exchange.
  18. Follow the standard installation procedure to set up the Cloud Exchange primary node first, and make sure to enter the IP addresses of all 3 nodes correctly when prompted by the setup.
  19. The same JWT Token and Maintenance Password apply to the 2 newly created nodes as well.
  20. Follow the same steps used for the primary node to set up the secondary nodes, and make sure to enter the appropriate IP address values when prompted.
  21. Once all 3 nodes are set up correctly and running, you should see a similar layout on the Cloud Exchange dashboard.

[Screenshot: Cloud Exchange dashboard showing all 3 nodes]

  22. To identify which of these is the primary node, hover the mouse over the MongoDB column for further information.

[Screenshot: MongoDB column tooltip identifying the primary node]

 

 

 
