neroden.blogg.se

Splunk transaction startswith

But delta is easier, assuming you only have two events. Stats will work as other commenters have described. Here is a run-anywhere example:

```
| makeresults count=2 
| streamstats count 
| eval eventName = case(count="1", "SentDoc", count="2", "SaveDoc"), 
       TimeStamp = if(count="2", " 12:00:02.39692", null()) 
| fields _time, eventName, TimeStamp 
`comment("IGNORE EVERYTHING ABOVE THIS LINE - IT'S JUST SET UP")`
| eval time = case(eventName="SentDoc", _time, eventName="SaveDoc", strptime(TimeStamp, "%Y-%m-%d %T.%5N")) 
| eval duration = tostring(duration_in_seconds, "duration")
```

Looks like you do have more than two events though, so stats should be your go-to. Here is the same setup with a shared key field:

```
| makeresults count=2 
| streamstats count 
| eval emailRequest = "12345", 
       eventName = case(count="1", "SentDoc", count="2", "SaveDoc"), 
       TimeStamp = if(count="2", " 12:00:02.39692", null()) 
| fields _time, emailRequest, eventName, TimeStamp
```

(You can't muck with whitespace in Python, so how would this site even be useful if it acts like this?)
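The stats approach the reply points to is not spelled out in the thread. A minimal sketch, assuming the events of each request share the same `emailRequest` value and that `time` holds the parsed epoch seconds as set up above:

```
| stats min(time) as start_time max(time) as end_time by emailRequest 
| eval duration_in_seconds = end_time - start_time 
| eval duration = tostring(duration_in_seconds, "duration")
```

Because stats collapses each key to a single row, it never risks pairing events across different requests the way a loose `transaction` span can.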

#Splunk transaction startswith code#

My use case is network event data: ports going up and down, with a few symptoms around the event that define the root cause. To date, transaction is still the most reliable way to define a dependable duration between a down event and an up event without accidentally counting uptime in the duration across multiple events. What would you do when that time window is dynamic, as in it could be minutes or weeks apart? By setting minutes you miss the close on a long event, and by setting weeks you capture multiple short events, further complicating the situation. I suspect that might be the one case where transaction is best?

Use no time window; just select out the two kinds of events and connect the down to the most recent previous up (or vice versa, whichever direction you are processing them), as long as you don't think you are missing much data. Remember that you can create calculated synthetic fields DURING a stats/eventstats/streamstats using eval:

```
| sort 0 _time 
| streamstats latest(eval(case(rectype="up", _time))) as last_port_up_time 
              latest(eval(case(rectype="down", _time))) as last_port_down_time 
              by myIP myPort
```

(I have no idea why this site won't let me format a code block the way I want to, with empty lines and CRs where I put them.) I specialize in solving people's data issues by doing weird Splunk tricks with data; if you search for "+daljeanis roll streamstats" you'll probably find a few examples of that.

I suppose what I probably need to do here is, as suggested, use streamstats to compute the time between the down event and the up event, and use that to add the downtime duration to the up event (or vice versa, to show the uptime on the down event). I have found some great ways with just stats, but what you just explained I think will actually achieve what I need, and potentially to even greater effect.
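That follow-up idea can be sketched by extending the streamstats pattern above. This is an assumed continuation, not from the thread: it reuses the `rectype`, `myIP`, and `myPort` fields from the earlier snippet and attaches the downtime to each up event.

```
| sort 0 _time 
| streamstats latest(eval(case(rectype="down", _time))) as last_port_down_time 
              by myIP myPort 
| eval downtime_in_seconds = if(rectype="up" AND isnotnull(last_port_down_time), 
                                _time - last_port_down_time, null()) 
| eval downtime = tostring(downtime_in_seconds, "duration")
```

Because streamstats only ever carries the most recent down time forward per port, a week-long outage and a thirty-second blip are both measured correctly without choosing a fixed window.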

#Splunk transaction startswith how to#

I have been trying to understand how to get rid of transaction for about four years.

Transaction seems to meet a need simply and easily, but it's a resource hog like no other verb in Splunk. In four years of being in the Splunk Trust, I've only seen ONE (exactly ONE) case where transaction was the best performer, and that was a multiple-key situation, iirc: three different kinds of events where the keys on one pair were different from the keys on a different pair. In that case, the native complexity of transaction's internal methods also avoided SPL complexity, so I left that code in place. For that situation you use a combination of stats and streamstats.

Here's a couple of examples of using different sideways thinking to get what you want instead of transaction. Streamstats is the first thing you should think of when matching event types (open and close, for example) by _time and one or more keys. Get all the relevant events, create a synthetic key if you don't have a real one, sort into _time or reverse _time order, then use streamstats by key, with an optional time_window if you want to limit the overall duration between the open and close event types. Streamstats with the time_window keyword can handle the desired span and maxpause utility.

If you need to pull data from additional kinds of records, you can use either streamstats or eventstats to copy the desired info and then discard the records you don't need. (This same model structure is used when trying to report on user activity when a load balancer is repeatedly assigning the same IP to different users.) In real life, there are a lot of ways to construct a search.
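The recipe above can be sketched as a generic pattern. The field names here (`eventType`, `sessionKey`) and the one-hour window are placeholders for illustration, not from the thread:

```
| sort 0 _time 
| streamstats time_window=1h 
      latest(eval(if(eventType="open", _time, null()))) as open_time 
      by sessionKey 
| where eventType="close" 
| eval duration_in_seconds = _time - open_time 
| eval duration = tostring(duration_in_seconds, "duration")
```

The `time_window=1h` plays the role of transaction's maxspan/maxpause: an open event older than an hour simply drops out of the window, so a close event is never paired with a stale open.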