Allegro Packets
Automated SLA reporting
- Import of data from monitoring platforms or ticket systems
- Calculation of SLA overruns using configurable methods
- Consideration of maintenance and non-operating hours
- Presentation of the SLAs in dashboards
- List of the main causes of SLA incidents
- Automated creation and dispatch of the finished reports
SLA metrics - aggregation
SLA-relevant data comes from many sources and in different formats, and often has to be aggregated manually during reporting before a final statement about service quality can be made.
SLIC can fully automate this process: the logic of the SLA is recorded once, interfaces to the various tools are set up, the data is imported periodically, and incidents are calculated and rolled up into SLA metrics. SLIC takes into account operating hours, maintenance windows and extraordinary outages, which are factored out of the reports.
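As a minimal sketch of how such a calculation can work (the function and interval layout here are illustrative assumptions, not SLIC's actual implementation), availability for a reporting period can be computed while excluding downtime that falls into maintenance windows:

```python
from datetime import datetime, timedelta

def availability(outages, maintenance, period_start, period_end):
    """Availability in percent for a reporting period; downtime that
    falls inside a maintenance window is not counted.
    All intervals are (start, end) tuples of datetimes and are assumed
    to lie within the reporting period."""
    def overlap(a, b):
        # length of the intersection of two (start, end) intervals
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return max(timedelta(0), end - start)

    total = period_end - period_start
    downtime = timedelta(0)
    for outage in outages:
        effective = overlap(outage, (period_start, period_end))
        # subtract the portion of the outage covered by maintenance
        for window in maintenance:
            effective -= overlap(outage, window)
        downtime += max(timedelta(0), effective)
    return 100 * (1 - downtime / total)
```

A two-hour outage of which one hour falls into a maintenance window, for example, only counts one hour of downtime against the SLA.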
Availability & performance in the SLA report
SLIC can import raw data from monitoring systems via its own data adapters and calculate the SLA metrics from it. The raw data is evaluated in SLIC, compared against assigned threshold values, and exceedances are reported as incidents.
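The threshold comparison can be sketched as follows (a simplified illustration under assumed names; the grouping rule and minimum duration are hypothetical parameters, not documented SLIC behavior). Consecutive measurements above the threshold are grouped into one incident:

```python
def detect_incidents(samples, threshold, min_duration=3):
    """Group consecutive measurements above a threshold into incidents.
    samples: list of (timestamp, value) pairs in time order.
    Returns (start, end, peak_value) tuples for each incident that
    lasts at least `min_duration` samples."""
    incidents, run = [], []
    for ts, value in samples:
        if value > threshold:
            run.append((ts, value))
        else:
            if len(run) >= min_duration:
                incidents.append((run[0][0], run[-1][0], max(v for _, v in run)))
            run = []
    if len(run) >= min_duration:  # incident still open at end of data
        incidents.append((run[0][0], run[-1][0], max(v for _, v in run)))
    return incidents
```

Requiring a minimum duration avoids reporting single spikes as SLA-relevant incidents.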
Service-oriented dashboards with clear workflows
• SLA timeline / bar chart dashboard: presentation of the SLA metrics on a time axis, per service group or service
• SLA status dashboard: presentation of the services with exact incident-time values and the remaining budget up to the monthly SLA threshold
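The monthly budget value behind such a status dashboard follows from simple arithmetic (a generic illustration, not SLIC's internal formula): the allowed downtime is the complement of the SLA percentage over the month, minus the incident time already consumed:

```python
def sla_budget(sla_percent, days_in_month, incident_minutes):
    """Remaining downtime budget in minutes before the monthly SLA
    threshold is breached. A negative result means the SLA is missed."""
    allowed = (100 - sla_percent) / 100 * days_in_month * 24 * 60
    return allowed - incident_minutes
```

For a 99.9 % SLA over a 30-day month, the total budget is 43.2 minutes; 10 minutes of incident time leave 33.2 minutes.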
SLA Dashboard
Performance and availability are shown for a selected period for services or service groups.
Different technologies can be represented together and correlated with one another.
Application Incident Heat Chart
Audit-ready service level reports
Data sources include:
• System / server data via SNMP,
• Application data from monitoring solutions such as Steelcentral®, and
• Transaction data from synthetic click routes.
The generated reports are correctly formatted and available as PDF and HTML.
Comment functions within the reports and dashboards make it possible to annotate failures or deviations directly and make these notes visible.
SLIC allows maintenance times, business hours and unscheduled downtimes to be taken into account.
Service Discovery
Knowledge of the service architecture is a prerequisite for a service assessment.
If communication data exists between servers in the backend, this can be used in SLIC to carry out a daily automated service discovery.
The front-end and back-end server systems are listed or shown in an architecture chart.
Since the service discovery is carried out daily, the architecture charts are always up-to-date.
Changes within a service chain, e.g. new servers, are recorded and reported.
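The discovery step can be sketched like this (an illustrative model under assumed names; SLIC's actual discovery logic is not shown here): observed server-to-server connections are folded into a service graph, and servers not seen on a previous day are reported as changes:

```python
from collections import defaultdict

def discover_services(connections, known_servers=None):
    """Build a service-chain graph from observed connections and report
    newly appearing servers.
    connections: iterable of (client, server) pairs.
    Returns (graph, new_servers): graph maps each client to the set of
    servers it talks to; new_servers lists servers absent from
    `known_servers` (e.g. yesterday's discovery run)."""
    graph = defaultdict(set)
    for client, server in connections:
        graph[client].add(server)
    servers = set(graph) | {s for targets in graph.values() for s in targets}
    new = sorted(servers - known_servers) if known_servers is not None else []
    return dict(graph), new
```

Running this daily keeps the architecture chart current and flags new servers in a service chain automatically.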
Service rating
Evaluations can be carried out depending on the data available.
If, for example, information such as page load times is available for web applications, fluctuations or service level violations can be detected - and, in addition, whether similar fluctuations occurred in the backend, indicating a correlation.
For the evaluation of applications, a flexible "baseline" module was developed that determines standard deviations and uses them for the rating.
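The idea behind such a baseline evaluation can be shown in a few lines (a minimal sketch with an assumed three-sigma rule; the real module's model and parameters are not documented here): historical values define a mean and standard deviation, and current samples outside that band are flagged as deviations:

```python
from statistics import mean, stdev

def baseline_deviations(history, current, sigma=3):
    """Flag samples that deviate more than `sigma` standard deviations
    from the historical baseline.
    Returns (index, value) pairs of the deviating samples."""
    mu = mean(history)
    sd = stdev(history)  # sample standard deviation of the baseline
    return [(i, v) for i, v in enumerate(current) if abs(v - mu) > sigma * sd]
```

Applied to page load times, a value far outside the learned band is reported even if no fixed threshold was ever configured for it.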
All-data import
SLIC Data Adapters calculate the values for availability and performance from the raw data and make them available in reports.
Capturing
High-performance capture appliances (iPAC) can be integrated into SLIC.
The workflow makes it possible to create filters in BPF format directly from the incident reports and to load the packet data.
The inexpensive "iPAC PacketStore" systems capture packets dropless at up to 40 Gbps, with up to 650 TB of storage per stack.
iPAC stacks can be clustered and administered as one system.
Once traces are generated and saved, they are automatically analyzed for known error symptoms, and their status is highlighted in color.
More information on this is available under iTM.
Triggered capture
Filters can be defined not only for the affected systems, but also for the connected systems.
If, for example, a server experiences a high reset rate to another system, the two-way communication is used as the trace filter - and not just the IP address of the system triggering the incident.
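Such a two-way filter is easy to express in standard BPF syntax; the helper below is an illustrative sketch (the function name and port handling are assumptions, not a SLIC API):

```python
def two_way_filter(ip_a, ip_b, port=None):
    """Build a BPF filter that captures both directions of a
    conversation between two hosts, optionally limited to one TCP port.
    'host X and host Y' matches packets in either direction."""
    bpf = f"(host {ip_a} and host {ip_b})"
    if port is not None:
        bpf += f" and tcp port {port}"
    return bpf
```

For the reset-rate example, the resulting filter captures traffic from both endpoints, not just the system that triggered the incident.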
Correlating with external data allows different service levels to be related to one another, which significantly improves the transparency of business services.