
The Splunk platform can often recognize event boundaries on its own, but if event boundary recognition doesn't occur, or happens incorrectly, you can set custom rules in the props.conf configuration file to establish event boundaries. Many event logs have a strict one-line-per-event format, but others don't. Additional configuration settings, such as line breaking, help you break your incoming data stream into events. This is valuable if a significant amount of your data consists of multiline events. Using the LINE_BREAKER setting can produce the results you want in the line-breaking phase. Line breaking is relatively efficient for the Splunk platform, while line merging is relatively slow. If you configure the Splunk platform not to perform line merging by setting the SHOULD_LINEMERGE attribute to false, the platform splits the incoming data into lines according to what the LINE_BREAKER setting determines.

You must use a heavy forwarder that you have configured to send data to your Splunk Cloud Platform instance to break incoming data into lines and subsequently merge them as you want into events. If you use Splunk Enterprise, you can configure the settings and follow the procedures in this topic on any instance that indexes the incoming data stream.

How the Splunk platform determines event boundaries

The Splunk platform determines event boundaries in two phases:

1. Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. The LINE_BREAKER setting expects a value in regular expression format. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns; in regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer.

2. Line merging, which uses the SHOULD_LINEMERGE setting to merge previously separated lines into events. By default, the Splunk platform performs line merging, and the value for SHOULD_LINEMERGE is true. You don't normally need to adjust this setting, but in cases where it is necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform.
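As a minimal sketch of overriding both phases (the sourcetype name here is hypothetical; the setting names and default values are from props.conf):

```ini
# props.conf -- sourcetype name is an assumption for illustration
[my_custom_sourcetype]
# Skip the slower line-merging phase entirely
SHOULD_LINEMERGE = false
# Break events on any run of newlines and carriage returns
# (this is also the platform default)
LINE_BREAKER = ([\r\n]+)
```

With SHOULD_LINEMERGE = false, each line produced by LINE_BREAKER becomes its own event, which is the efficient path described above.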

Some events consist of more than one line. The Splunk platform handles most multiline events correctly by default. If you have multiline events that the Splunk platform doesn't handle properly, you can configure it to change its line breaking behavior. If you use Splunk Cloud Platform, you must forward any data for which you need to configure event-line breaking, because there is no way to configure event-line breaking in the Splunk Web interface.

The < is a special character in regular expressions, so you should really escape it with a \ as I did, although I think Splunk may not require this. If you specify BREAK_ONLY_BEFORE, then you shouldn't specify any other breaking criteria. Unless you have some compelling reason (which you need to explain), you should not specify the parsingQueue. However, there is no way to add in the table info with line breaking alone. If the source file name contains the name of the table, I would definitely use that: keep the same props.conf as above, but add one more line, TRANSFORMS-myxml = extract-table-name, and create a matching transforms.conf stanza. Note that you will need to change the REGEX so that it picks up the actual name of the table from the filename. This creates an index-time field; although I usually dislike index-time fields, this is a case where one may be needed.
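A hedged sketch of what such a transforms.conf stanza might look like (the stanza name matches TRANSFORMS-myxml = extract-table-name; the REGEX pattern and the table_name field name are assumptions you must adapt to your actual filenames):

```ini
# transforms.conf -- sketch only; REGEX and table_name are assumptions
[extract-table-name]
# Run the regex against the source (file path) metadata, not the raw event
SOURCE_KEY = MetaData:Source
# Capture the table name from the end of the filename
REGEX = ([A-Za-z0-9_]+)\.xml$
# Write the captured value as an index-time field
FORMAT = table_name::$1
WRITE_META = true
```

Because WRITE_META = true writes the field at index time, you would also declare table_name as INDEXED = true in fields.conf so that searches treat it as an indexed field.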

This will give you one event per row element.
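The per-row breaking described above can be sketched in props.conf like this (the sourcetype name is hypothetical; the escaped \< matches the opening of each row element, as in the answer):

```ini
# props.conf -- sourcetype name is an assumption for illustration
[my_xml_data]
# Start a new event before each <row ...> element;
# no other breaking criteria are specified alongside it
BREAK_ONLY_BEFORE = \<row
```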
