
Cribl Netskope Events and Alerts Integration

 

Netskope’s Events and Alerts can be pulled into Cribl via the Netskope REST v2 APIs. You can then use Cribl Stream to filter the data and route it to the destination of your choice.

Note: Netskope Streaming Events (WebTx) aren’t currently supported.

Requirements

  • Netskope tenant with API v2 enabled
  • Cribl Stream account

Setup Steps

Netskope 

  • Generate API token

 

Cribl

  • Create Data Source

 

Verify

  • Save and Run

 

Netskope

Generate API token

In your Netskope tenant, go to Settings > Tools > REST API v2 > New Token.

 


 

Add permission scopes to the token. You will need to add a scope for every endpoint you want to pull. In my example, I am going to grab all Events and Alerts.

 

Give your token a name and an expiration period.

Use the following scopes:

 

Events

/api/v2/events/dataexport/events/application

/api/v2/events/dataexport/events/audit

/api/v2/events/dataexport/events/incident

/api/v2/events/dataexport/events/infrastructure

/api/v2/events/dataexport/events/network

/api/v2/events/dataexport/events/page

Alerts

/api/v2/events/dataexport/alerts/uba

/api/v2/events/dataexport/alerts/securityassessment

/api/v2/events/dataexport/alerts/quarantine

/api/v2/events/dataexport/alerts/remediation

/api/v2/events/dataexport/alerts/policy

/api/v2/events/dataexport/alerts/malware

/api/v2/events/dataexport/alerts/malsite 

/api/v2/events/dataexport/alerts/compromisedcredential

/api/v2/events/dataexport/alerts/ctep

/api/v2/events/dataexport/alerts/dlp

/api/v2/events/dataexport/alerts/watchlist
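Once the token is saved, you can sanity-check it outside Cribl with a direct call to one of the scoped endpoints. A minimal sketch, assuming placeholder tenant and token values (replace both):

```python
import urllib.request

TENANT = "example"       # placeholder: your Netskope tenant name
TOKEN = "YOUR-V2-TOKEN"  # placeholder: the token copied above

def build_request(kind: str, event_type: str, operation: str = "head",
                  index: str = "test") -> urllib.request.Request:
    """Build a dataexport request for one event or alert type."""
    url = (f"https://{TENANT}.goskope.com/api/v2/events/dataexport/"
           f"{kind}/{event_type}?operation={operation}&index={index}")
    return urllib.request.Request(url, headers={
        "Accept": "application/json",
        "Netskope-Api-Token": TOKEN,
    })

req = build_request("alerts", "dlp")
# urllib.request.urlopen(req) would fetch the first page of DLP alerts
```

A 200 response with a JSON body confirms the token and scope are working before you configure the collector.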

 

Save, then use the “Copy token” button to copy your token. This is your only chance to copy it; otherwise you will have to revoke and reissue the token to get another copy.

 

Cribl

Create Data Source

Log into Cribl and go to Stream > Worker Groups and choose the appropriate worker group for the new data source.

 


 

 

Data > Sources


 

> Collectors > REST


 

 

Add Collector


 

You will need to add a new collector for each of alerts and events (two total).

  1. Give your collector a Name.
  2. Under Discover, select Item List. Include all of the items you want to pull with the one call:
    1. Alerts: uba, securityassessment, quarantine, remediation, policy, malware, malsite, compromisedcredential, ctep, dlp, watchlist
    2. Events: application, audit, incident, infrastructure, network, page
  3. Collect URL - Add the URL to pull. This should be your tenant followed by the base API path, with ${id} at the end. Be sure to enclose it in backticks so that ${id} is substituted, e.g. `https://<your tenant name>.goskope.com/api/v2/events/dataexport/alerts/${id}`
    1. or, for events: `https://<your tenant name>.goskope.com/api/v2/events/dataexport/events/${id}`
  4. Collect method = GET
  5. Collect parameters
    1. operation `next` (see note below for other options)
    2. index `cribl` - this can be any name, but if something else (such as Splunk) is also pulling these logs, set it to a different index name than the other product uses. This value is used to track which logs have already been pulled.
  6. Collect headers
    1. Accept `application/json`
    2. Netskope-api-token `<your v2 token>`
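Put together, the collector settings above amount to one GET per Item List entry. A sketch of the URLs Cribl ends up calling (the tenant name is a placeholder):

```python
from urllib.parse import urlencode

TENANT = "example"  # placeholder: your Netskope tenant name

def collect_url(kind: str, item: str, index: str = "cribl") -> str:
    """The URL the REST Collector issues for one Item List entry (${id})."""
    base = f"https://{TENANT}.goskope.com/api/v2/events/dataexport/{kind}/{item}"
    return base + "?" + urlencode({"operation": "next", "index": index})

# One call per Item List entry in the events collector:
for item in ("application", "audit", "incident",
             "infrastructure", "network", "page"):
    print(collect_url("events", item))
```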

Operation note: 

Iterator operations supported

  • epoch timestamp - If an epoch timestamp is provided, the Netskope endpoint begins log consumption starting at that time.

  • next - The next operation value requests the next page of data from the Netskope endpoint.

  • head - The head operation value requests the first page of data stored within the Netskope endpoint.

  • tail - The tail operation value requests the most current page of data stored within the Netskope endpoint.

  • resend - If the consumer was unable to process the last page of data, the resend operation retries that page.
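In a consumption loop, these operations fit together like this (a sketch; `fetch_page` is a hypothetical callable standing in for the HTTP request above):

```python
def consume(fetch_page, max_pages=100):
    """Drain an endpoint using the next/resend iterator operations.

    fetch_page(operation) -> list of events, or raises if the page
    could not be processed (hypothetical stand-in for the HTTP call).
    """
    pages = []
    op = "next"
    for _ in range(max_pages):
        try:
            page = fetch_page(op)
        except Exception:
            op = "resend"  # retry the last page rather than skip it
            continue
        if not page:       # empty page: caught up with the tenant
            break
        pages.append(page)
        op = "next"
    return pages
```

Cribl's scheduled collector does this bookkeeping for you; the sketch just shows why `next` is the right default operation and where `resend` fits in.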

Note: After some customers tested this, they found that they needed to make a couple more tweaks to get things working correctly.

 

Make the following change to the retry settings: under Retry HTTP codes, add 429, 503, and 409.
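If you want to reason about what those codes do, the retry behavior reduces to a membership check plus exponential backoff (a sketch of the idea, not Cribl's actual implementation):

```python
import random

# Codes worth retrying: rate limit (429), service busy (503), conflict (409)
RETRY_CODES = {429, 503, 409}

def should_retry(status: int) -> bool:
    return status in RETRY_CODES

def backoff_delays(attempts: int, base: float = 1.0) -> list:
    """Exponentially increasing sleep durations (seconds) with jitter."""
    return [base * (2 ** i) + random.uniform(0, 0.5) for i in range(attempts)]
```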

 

The other setting that needs to be changed is Max event bytes. Change this to 134217728.

https://docs.cribl.io/edge/event-breakers/
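For reference, 134217728 bytes is exactly 128 MiB:

```python
MAX_EVENT_BYTES = 128 * 1024 * 1024  # = 134217728 bytes (128 MiB)
print(MAX_EVENT_BYTES)
```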

 


 

At this point, you should be able to Save & Run to verify that everything is working. Once you have verified that it works as expected, you will need to add a Route to send the logs to a destination that you have. 

Is there any best practice on how to configure the scheduled job, including pagination and so on?



Sorry for the delay. I use the default 5-minute pull.


One change: we saw an issue with a customer that gets about 20-40k logs in a ten-minute period with pagination set. It was fixed by setting Pagination to ‘none’ in Cribl.

