The Cisco Observability Platform is designed to ingest and process large volumes of MELT (Metrics, Events, Logs, and Traces) data. It builds on open standards such as OpenTelemetry for interoperability and offers extension points that let partners and customers tailor its behavior to their unique needs. This article focuses on customizing data processing, and assumes a basic understanding of the platform's Flexible Metadata Model (FMM) and of solution development.
The data processing pipeline consists of stages through which MELT data is processed, transformed, and enriched before landing in a data store, where it can be queried with the Unified Query Language (UQL). Stages marked with a gear icon can be customized with stage-specific logic, and custom post-processing logic can be added for data that cannot be mutated in place. A newer approach built on workflows, taps, and plugins uses the CNCF Serverless Workflow specification with JSONata as the default expression language; leaning on open standards such as CloudEvents and OpenAPI ensures compatibility and eases development.
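To make the CloudEvents dependency concrete, the sketch below builds a minimal CloudEvents-style envelope in Python. The required attributes (`specversion`, `id`, `source`, `type`) come from the CloudEvents specification; the `source`, `type`, and `data` values here are purely illustrative, not the platform's actual identifiers.

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal CloudEvents 1.0-style envelope. Required attributes per the
# CloudEvents spec: specversion, id, source, type. The payload below is
# a hypothetical metric observation, not a real platform event.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "/example/metrics-stage",        # hypothetical source
    "type": "data:observation.example:v1",     # hypothetical event type
    "time": datetime.now(timezone.utc).isoformat(),
    "data": {"metric": "cpu.utilization", "value": 0.42},
}
print(json.dumps(event, indent=2))
```

Because every stage emits and consumes the same envelope shape, tooling can route and validate events without knowing anything about the payload.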
Taps expose points in the data processing stages where data can be mutated; the customizations themselves are implemented as plugins. A tap defines input and output JSON schemas, and its plugins are expected to produce output conforming to the tap's output schema. Workflows, intended for post-processing, subscribe to triggers and can range from simple event counting to complex machine learning inference. This abstraction simplifies development: a developer reasons in terms of a single event and works against well-documented standards.
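The tap/plugin contract can be sketched as follows. The schema, plugin, and validation helper are all hypothetical stand-ins: a real tap would carry full JSON Schema documents, and a real host would use a proper JSON Schema validator rather than the minimal required-keys check shown here.

```python
# Hypothetical tap output schema: only the "required" keyword is honored
# by the minimal validator below.
TAP_OUTPUT_SCHEMA = {
    "type": "object",
    "required": ["entity_id", "attributes"],
}

def enrich_plugin(record: dict) -> dict:
    """Illustrative plugin: mutate a record by adding an attribute."""
    out = dict(record)
    out.setdefault("attributes", {})["env"] = "production"
    return out

def conforms(output: dict, schema: dict) -> bool:
    """Minimal required-keys check standing in for JSON Schema validation."""
    return all(key in output for key in schema.get("required", []))

result = enrich_plugin({"entity_id": "svc-1", "attributes": {}})
print(conforms(result, TAP_OUTPUT_SCHEMA))  # → True
```

The key point is the direction of the contract: the tap owns both schemas, and the plugin only has to satisfy the output schema for its result to be accepted back into the pipeline.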
Events carry data between processing stages, which decouples the stages and allows them to be rearranged when necessary. There are two data-related event categories: data:observation events, emitted as side effects of processing, and data:trigger events, which start post-processing. Observations and triggers have specific event types and permissions, and triggers are emitted only after all mutations have completed. Events are versioned and qualified with a platform solution identifier to support isolation and evolution.
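The two categories suggest a simple dispatch rule, sketched below. The event type strings are illustrative; the platform's actual type identifiers (with solution qualifier and version) would be more specific.

```python
def route(event: dict) -> str:
    """Dispatch on the two data-related event categories (names illustrative)."""
    etype = event.get("type", "")
    if etype.startswith("data:observation"):
        return "record side-effect of processing"
    if etype.startswith("data:trigger"):
        return "start post-processing workflow"
    return "ignore"

print(route({"type": "data:trigger.example.solution:v1"}))
# → start post-processing workflow
```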
To illustrate these concepts, a workflow example is provided for counting health rule violations, demonstrating steps such as subscribing to triggers, validating event types, and publishing measurement events. Developers can build workflows in web-based editors or IDEs, and verify them with unit tests and schema validation.
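The counting workflow's logic can be sketched in plain Python: filter incoming trigger events to the expected type, count them, and shape the result as a measurement to publish. The event type and measurement names below are hypothetical, not the platform's actual identifiers.

```python
from collections import Counter

EXPECTED_TYPE = "healthrule:violation"  # hypothetical trigger event type

def count_violations(events: list[dict]) -> dict:
    """Count matching events and shape a measurement to publish."""
    counts = Counter(e.get("type") for e in events)
    return {"measurement": "violation.count", "value": counts[EXPECTED_TYPE]}

events = [
    {"type": "healthrule:violation"},
    {"type": "entity:updated"},
    {"type": "healthrule:violation"},
]
print(count_violations(events))
# → {'measurement': 'violation.count', 'value': 2}
```

In the real workflow this filter-count-publish sequence would be declared as Serverless Workflow states with JSONata expressions rather than written imperatively.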
A step-by-step guide covers creating workflows, finding trigger events, subscribing to them, and inspecting event data with JSON schemas. JSONata expressions can be used for validation and condition checks within workflows, ensuring that only relevant events are processed. By following these guidelines and using the platform's features, developers can customize data processing to meet their specific requirements.
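A JSONata condition in a workflow event filter might read something like `type = "healthrule:violation"`; the Python equivalent of such a relevance check is sketched below. The field names are assumptions for illustration, not the platform's actual event schema.

```python
def is_relevant(event: dict) -> bool:
    """Python equivalent of a JSONata-style condition check on an event:
    accept only events of the expected type that carry an entity payload."""
    return (
        event.get("type") == "healthrule:violation"  # hypothetical type
        and "entity" in event
    )

print(is_relevant({"type": "healthrule:violation", "entity": {"id": "svc-1"}}))
# → True
```

Keeping such checks at the top of a workflow means irrelevant events are rejected before any expensive post-processing runs.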