Deep packet monitoring
- "deep dive" into the contents of thousands of PCAP trace files in a single dashboard
- Use Shark syntax to enable more than 100,000 log fields
- Easily organize, aggregate, analyze and prioritize masses of data.
- Groups metrics into 3 main categories - Application - Connection or Network
- enables errors and incidents to be assigned quickly
- Data is stored in SQL database - long data history
IT data is network data: network packets traverse the entire IT supply chain and carry status and performance information between endpoints - DNS and LDAP response codes and times, network and application performance, server response times and return codes, frontend/backend performance, or any content of a readable packet.
When an application is slow or unreachable, the causes can often be found in these packets.
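As a minimal sketch of what "readable packet content" means in practice, the following extracts DNS query names and response times from a trace. It assumes tshark is on the PATH; the field names (dns.qry.name, dns.time) are standard Wireshark display-filter fields, but the surrounding helper code is illustrative, not SharkMon's actual implementation.

```python
#!/usr/bin/env python3
"""Sketch: pull DNS response times out of a trace file with tshark."""
import subprocess

def parse_field_lines(text):
    """Parse tshark '-T fields' tab-separated output into (name, seconds) pairs."""
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        name, _, value = line.partition("\t")
        try:
            rows.append((name, float(value)))
        except ValueError:
            pass  # packets without a usable dns.time value are skipped
    return rows

def dns_response_times(pcap_path):
    """Run tshark over a capture and return (query name, response time) pairs."""
    out = subprocess.run(
        ["tshark", "-r", pcap_path, "-Y", "dns.flags.response == 1",
         "-T", "fields", "-e", "dns.qry.name", "-e", "dns.time"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_field_lines(out)
```

The same pattern applies to any other field pair, for example HTTP response codes or TCP handshake times.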
SharkMon at a glance
- Long-term data - import large PCAP data sets covering hours, days, and weeks, generated by trace tools such as tcpdump, tshark, or a capture appliance
- Automatic analysis - analyze thousands of sequential trace files using customizable expert profiles
- Profiles contain protocols, filters, metrics, evaluation logic, and threshold values
- Incidents - create incidents based on variable thresholds per object
- Long-term perspective - visualize incidents and raw data in smart dashboards, over hours, days, weeks, and months
- Incident correlation - export incidents to Service Management, making them part of the Correlation Framework
- Automation - the analysis steps run automatically, one after another
Long-term monitoring - or single trace file analysis
Trace files are usually analyzed manually, each one individually. Even evaluating a short period of a few minutes takes considerable time and expertise.
To analyze hours or days, many trace files have to be generated and evaluated - a task that can no longer be done manually.
With SharkMon, the user can generate a large number of trace files or import them from servers, cloud instances, or capture devices.
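Generating many sequential trace files is typically done with a rotating capture. The sketch below builds a tcpdump command that writes a ring of hourly files; tcpdump's -G (rotate every N seconds) and -W (cap the number of files) options are real, while the interface name and output path are placeholders.

```python
"""Sketch: build a tcpdump command that produces a ring of hourly trace
files, which a SharkMon-style importer can then pick up from a folder."""

def rotating_capture_cmd(interface, out_pattern, rotate_seconds=3600, keep_files=24):
    # -G rotates the output file every N seconds; -W caps how many files
    # are kept; strftime escapes in the -w pattern name each file.
    return ["tcpdump", "-i", interface, "-w", out_pattern,
            "-G", str(rotate_seconds), "-W", str(keep_files)]

# Placeholder interface and path - adjust for the real capture host.
cmd = rotating_capture_cmd("eth0", "/var/captures/trace_%Y%m%d_%H%M%S.pcap")
```

The resulting command list can be passed to subprocess.run on the capture host, or used as a template for a systemd service.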
By evaluating the data, the user can see:
- Are there critical incidents?
- Which category does an incident belong to - application or network?
- Which metric caused the problem?
- Which threshold was exceeded?
In addition, the user has direct access to the underlying trace files, with drill-downs into specific categories.
SharkMon uses Wireshark display filters.
Wireshark offers corresponding filters and metrics for each protocol. In SharkMon, the user defines what is normal and which parameter or counter counts as critical; each of these values can be used as a criterion for SharkMon analysis, so that deviations from the norm are recognized.
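A display filter can serve directly as a pass/fail criterion. The sketch below counts how many packets in a trace match a given Wireshark display filter (for example, DNS answers slower than half a second); it assumes tshark is on the PATH, and the 0.5 s threshold is purely illustrative.

```python
"""Sketch: use a Wireshark display filter as an analysis criterion."""
import subprocess

def count_nonempty_lines(text):
    """Count non-blank lines in tshark field output (one line per packet)."""
    return sum(1 for line in text.splitlines() if line.strip())

def count_matching(pcap_path, display_filter):
    """Count packets in a capture that match a Wireshark display filter."""
    out = subprocess.run(
        ["tshark", "-r", pcap_path, "-Y", display_filter,
         "-T", "fields", "-e", "frame.number"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_nonempty_lines(out)

# e.g. flag a trace when any DNS answer took longer than 0.5 seconds:
# slow = count_matching("trace.pcap", "dns.time > 0.5")
```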
A profile is a configuration of defined protocols, metrics, and filters. Symptom triggers can be attached to these features: if a defined condition occurs, a symptom is recorded as a warning or as critical.
If, for example, TLS is used in the network, TLS 1.2/1.3 can be defined as the expected condition; the occurrence of other SSL/TLS versions is then flagged as a symptom.
Whether it is a defined time being exceeded, such as a DNS response time, or a counter in a protocol field: practically every protocol field in Wireshark can be used as a monitoring metric.
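The threshold logic described above can be sketched as a small profile that maps a metric to warning and critical limits. The field names, thresholds, and dictionary layout here are illustrative assumptions, not SharkMon's actual profile schema.

```python
"""Sketch of profile-style evaluation: measured values become symptoms."""

# Hypothetical profile: two Wireshark field names with illustrative limits.
PROFILE = {
    "dns.time": {"warning": 0.2, "critical": 1.0},                    # seconds
    "tcp.analysis.retransmission": {"warning": 50, "critical": 500},  # count
}

def evaluate(metric, value, profile=PROFILE):
    """Return 'critical', 'warning', or None for one measured value."""
    limits = profile.get(metric)
    if limits is None:
        return None  # metric not covered by this profile
    if value >= limits["critical"]:
        return "critical"
    if value >= limits["warning"]:
        return "warning"
    return None
```

A DNS answer measured at 0.3 s would yield a warning symptom under this profile, while 1.5 s would be critical.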
SharkMon scenarios organize the various pieces of analysis information in one process:
- Object - what is being analyzed
- Conditions - filters, conditions, time range
- Data sources - PCAP data, file folders, upstream devices, or capture appliances
- Intelligence - which analysis profile is applied
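The four scenario parts above can be sketched as a simple structure that bundles an object, a display filter, a data source, and a profile name. The key names and the glob-based folder source are assumptions for illustration only.

```python
"""Sketch: a scenario bundles object, conditions, data source, and profile."""
from pathlib import Path

def make_scenario(name, display_filter, source_dir, profile_name):
    return {
        "object": name,                # what is being analyzed
        "conditions": display_filter,  # Wireshark display filter
        "source": source_dir,          # folder of PCAP files
        "profile": profile_name,       # which analysis profile to apply
    }

def trace_files(scenario):
    """List the PCAP files a scenario would feed to the analyzer."""
    return sorted(Path(scenario["source"]).glob("*.pcap"))
```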
Packet data reflects a very important perspective: quality of service as seen by the protocols.
This view can, and often must, be enriched with data from network or system management.
SharkMon's evidence of many retransmissions is an important finding and explains the poor response time; the fact that a router shows a high discard rate on the network completes the picture.
The information from SharkMon can be correlated with other data sources in SLIC - the platform for cross-technology correlation - to show cause and effect.