
Has anyone had any luck with Syslog integration from Cloud Exchange into Darktrace?

Darktrace describes the expected format like this:

Darktrace expects the Netskope Web Gateway data to include the string "darktrace_netskope" followed by a JSON formatted representation of the Netskope logging. The order of fields within the JSON is not important. For example: darktrace_netskope {"src_time": "Fri Jan 1 00:10:00 2021","userkey": "user.name@company.com","dst_region": "Region","category": "Business","src_longitude": 52.2,"transaction_id": 0,"ur_normalized": "user.name@company.com","src_latitude": 0.12,"dst_longitude": 52.2,"domain": "www.example.org","dst_zipcode": "555555","access_method": "Client","src_timezone": "Global/UTC","ccl": "unknown","bypass_reason": "SSL policy matched","user_generated": "yes","dst_country": "ZZ","srcip": "198.51.100.1","site": "example","traffic_type": "Web","src_region": "Region","user": "user.name@example.org","appcategory": "Business","page_id": 0,"insertion_epoch_timestamp": 1604614281,"bypass_traffic": "yes","dst_location": "Region","count": 1,"src_location": "Location","url": "www.example.org","src_country": "ZZ","internal_id": "21c3e4368567eae234d211de","dst_latitude": 0.12,"dst_timezone": "Global/UTC","policy": "Bypass","type": "page","ssl_decrypt_policy": "yes","src_zipcode": "555555","dstip": "198.51.100.2","timestamp": 1604614213,"page": "www.example.or","dstport": 443,"userip": "192.168.1.1","organization_unit": ""}
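(As a quick illustration of what that looks like on the wire, here is a rough Python sketch that sends one message in that shape to a syslog listener over UDP. The host, port, and event fields are placeholders of my own, not anything taken from Cloud Exchange itself.)

import json
import socket
import time

# Placeholder values; point these at your Darktrace telemetry input.
DARKTRACE_HOST = "203.0.113.10"
DARKTRACE_PORT = 514

# A trimmed-down Netskope-style event; field names follow the example above.
event = {
    "src_time": time.strftime("%a %b %d %H:%M:%S %Y"),
    "user": "user.name@example.org",
    "srcip": "198.51.100.1",
    "dstip": "198.51.100.2",
    "url": "www.example.org",
    "traffic_type": "Web",
}

# Darktrace keys on the literal string "darktrace_netskope" followed by the JSON body.
message = "darktrace_netskope " + json.dumps(event)

# Wrap it in a basic RFC 3164-style syslog line and send it over UDP.
syslog_line = "<14>{} cloud-exchange {}".format(time.strftime("%b %d %H:%M:%S"), message)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(syslog_line.encode("utf-8"), (DARKTRACE_HOST, DARKTRACE_PORT))
sock.close()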

 

I have managed to use the Syslog plugin in Cloud Exchange to send the raw JSON logs by disabling the CEF mapping, but by doing that I lose the Log Source Identifier “darktrace_netskope” which I have configured, and Darktrace doesn’t pick up the logs.

Oh no. Let me see what I can do to fix that. 


I created a ticket to get this fixed (BCE-1108 for tracking). I hope it will be a quick fix.


This has been released into beta. Use the 3.2.0 Syslog beta plugin. Here is how to add the beta repo to your Cloud Exchange: https://docs.netskope.com/en/netskope-help/integrations-439794/netskope-cloud-exchange/get-started-with-cloud-exchange/using-beta-plugins/


@Gary-Jenkins I’ve tested the new plugin, and the log source identifier is now forwarded as part of the JSON output. I have also been able to build a telemetry filter in Darktrace that filters on it and picks up the data fields from the JSON output.


Hey @elawaetz ,

I’m wondering how you managed to get the Netskope → Darktrace syslog integration to work in the end? You mention disabling the CEF mapping, but I can’t see how to do that in our Cloud Exchange, and I’ve yet to have any luck changing the log shipper mappings in a way that works for Darktrace.

Any chance you could share some redacted screenshots of your working setup?

 


 

@iainmadder_fron In the plugin you are using to send the logs to your Darktrace tenant, it should look like this:

[screenshot of the Syslog plugin configuration]


@iainmadder_fron I use the latest (beta) syslog plugin:
[screenshot of the beta Syslog plugin configuration]

This will send the JSON syslogs to Darktrace.

In Darktrace I have created a Custom Telemetry:

In there you need to work on your patterns; in my case I want to match the username and source IP of my publishers:
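To give an idea of the kind of pattern matching involved, here is a rough regex sketch run against the sample JSON from the Darktrace documentation above. It is only an illustration of the fields being extracted, not the exact syntax of the Darktrace telemetry pattern field:

import re

# Sample payload as received from Cloud Exchange (trimmed down).
line = 'darktrace_netskope {"user": "user.name@example.org", "srcip": "198.51.100.1", "url": "www.example.org"}'

# Illustrative patterns for pulling out the username and source IP fields.
user_pattern = re.compile(r'"user":\s*"([^"]+)"')
srcip_pattern = re.compile(r'"srcip":\s*"(\d{1,3}(?:\.\d{1,3}){3})"')

print(user_pattern.search(line).group(1))   # user.name@example.org
print(srcip_pattern.search(line).group(1))  # 198.51.100.1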

Darktrace has a guide on their support portal on how to add Cloud Exchange (or any other) syslog source as Telemetry input:

Product Guides | Customer Portal (darktrace.com)

I just added the IP of Cloud Exchange as the Input Allowed IP and used “darktrace_netskope” as the Load Input filter.

Using the Load and Test function to work on your pattern matching is VERY useful.

I also used a standard syslog server alongside Darktrace to verify my plugin and the log format received from Cloud Exchange.
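If you don’t have a spare syslog server handy, even a throwaway listener is enough to see exactly what Cloud Exchange sends. A minimal sketch (UDP only; the port is an assumption and should match whatever the plugin is configured to use):

import socket

# Throwaway UDP syslog listener: prints whatever Cloud Exchange sends so you can
# confirm the "darktrace_netskope" prefix and the JSON body arrive as expected.
# Port 514 needs root; use a high port such as 5514 otherwise.
LISTEN_ADDR = ("0.0.0.0", 514)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
print("listening on %s:%d" % LISTEN_ADDR)
while True:
    data, peer = sock.recvfrom(65535)
    print(peer[0], data.decode("utf-8", errors="replace"))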


Hey @Gary-Jenkins 

Thanks for the reply. That’s not visible in ours.

Could this be a Cloud Exchange version issue? We’re currently on 3.4.1, but I see that 5.0.1 recently came out.


Hey @elawaetz ,

Thanks for sharing, that’s very useful.

Looks like the main thing we’re missing is the Syslog plugin option to turn off log transformations, so I’ll focus on fixing that first, see if that resolves the issue, and otherwise refer back to your notes.


Yes, this feature came after 3.4.1, and it is in 5.0.1 as you stated.

