Logs Archive and Hydration
You may have to save transactional information for compliance, legal, or other regulatory requirements. In addition to processing logs for observability and analytics, Kloudfuse introduced a supplementary mechanism for archiving pre-processed logs (with identified filters, facets, and so on) into longer-term storage, and a separate mechanism to hydrate these logs to examine them for the relevant data.
The benefits of this approach extend beyond basic regulatory compliance:
- You store important historical data in a cost-effective compressed format, in a location that you own and control.
- When decompressed, the logs are human-readable and highly searchable because of the high level of indexing through labels and other data attributes.
- You can configure the archival instructions in a manner that categorizes data consumption by internal cost center.
We currently support log archive and hydration for AWS S3.
Contact us at support@kloudfuse.com to enable this feature in your Kloudfuse cluster.
Archiving
Add an archive section to the deployments.yaml file to specify which logs Kloudfuse writes to your own archive storage, based on your own set of archival rules and configurations.
  archive:
    enabled: true
    prefix: "<Example_Cluster/Example_Folder>"    # Optional, can specify as ""
    useSecret: true                               # Security; see Archive prerequisites for the 4 methods
    createSecret: true
    secretName: "<Example_Secret>"
    type: s3                                      # AWS storage
    s3:
      region: <Example_Region>                    # Such as us-west-2
      bucket: <Example_Bucket_Name>               # You MUST create the bucket in your archival storage location
      accessKey: <Example_Access_Key>
      secretKey: <Example_Secret_Key>
    rules: |-
      - archive:                                  # Define first archive
          args:
            archiveName: a1
            doNotIndex: false                       # Both archive and index
          conditions:
            - matcher: "#source"
              value: "s1"
              op: "=="
            - matcher: "@label"
              value: "l1"
              op: "=="
      - archive:                                  # Define next archive
          args:
            archiveName: a2
            doNotIndex: false                       # Both archive and index
          conditions:
            - matcher: "#source"
              value: "s1"
              op: "=="
Archive rules apply in order, and a log line must match all of a rule's conditions to map to its archive. In the preceding example, a log line from source "s1" maps to archive a1 if it also contains label l1. If it does not contain label l1, it maps to archive a2.
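The first-match ordering described above can be sketched as follows. This is an illustration of the matching semantics, not Kloudfuse internals; the dictionary shapes mirror the YAML rules shown earlier.

```python
# Sketch: first-match evaluation of archive rules. Rules are tried in
# order, and a log line maps to the first rule whose conditions ALL hold.

def matches(conditions, log):
    """Return True only when every "==" condition holds for the log line."""
    return all(log.get(c["matcher"]) == c["value"]
               for c in conditions if c["op"] == "==")

def route(rules, log):
    """Return the archiveName of the first rule that fully matches."""
    for rule in rules:
        r = rule["archive"]
        if matches(r["conditions"], log):
            return r["args"]["archiveName"]
    return None  # no rule matched; the log line is not archived

rules = [
    {"archive": {"args": {"archiveName": "a1"},
                 "conditions": [
                     {"matcher": "#source", "value": "s1", "op": "=="},
                     {"matcher": "@label", "value": "l1", "op": "=="}]}},
    {"archive": {"args": {"archiveName": "a2"},
                 "conditions": [
                     {"matcher": "#source", "value": "s1", "op": "=="}]}},
]

print(route(rules, {"#source": "s1", "@label": "l1"}))  # a1: both conditions hold
print(route(rules, {"#source": "s1"}))                  # a2: falls through to rule 2
```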
Archive prerequisites
You must grant access for Kloudfuse to write archives into the specified storage. There are four security approaches:
- createSecret=true, useSecret=true: Helm creates the Kubernetes secret based on the provider's accessKey and secretKey. The deployments.yaml file gets the values from the secret.
- createSecret=false, useSecret=true: The customer creates the Kubernetes secret. The deployments.yaml file works automatically because it picks up the environment variables from the secret.
- createSecret=false, useSecret=false: This approach assumes that the customer already configured the node IAM role with permission to access the S3 bucket, so there is no need to set environment variables.
- Service account: The customer creates a service account with access permission to S3, and maps it to serviceAccountName. There is no need to set environment variables; the pod inherits the permissions of the service account.
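For the createSecret=false, useSecret=true approach, the customer-created secret might look like the following. This is a minimal sketch: the secret name must match secretName in deployments.yaml, and the key names (accessKey, secretKey) are assumptions mirroring the s3 section of the config above.

```yaml
# Illustrative only: a manually created Kubernetes secret for the
# createSecret=false, useSecret=true approach. Key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: Example_Secret            # must match secretName in deployments.yaml
type: Opaque
stringData:
  accessKey: <Example_Access_Key>
  secretKey: <Example_Secret_Key>
```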
Hydration
Whenever you need to examine a record, you can access it directly in your archival storage because of our simple storage and compression rubric: by date (yyyymmdd format), and then by hour. When you decompress and open a log file, you can see all the facets, labels, and other tags attributed to the log by Kloudfuse.
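The date-then-hour grouping can be sketched as a key-building helper. Only the yyyymmdd/hour layout comes from this documentation; the exact object key separators and the prefix handling shown here are assumptions for illustration.

```python
# Sketch (assumed naming): build the date/hour key under which an archived
# log batch would be grouped. Layout is yyyymmdd, then hour, under an
# optional prefix; separators shown here are illustrative.
from datetime import datetime, timezone

def archive_path(prefix, ts):
    """Return the storage key for a batch at timestamp ts."""
    day = ts.strftime("%Y%m%d")   # yyyymmdd, e.g. 20240115
    hour = ts.strftime("%H")      # two-digit hour
    parts = [p for p in (prefix, day, hour) if p]
    return "/".join(parts)

ts = datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc)
print(archive_path("Example_Cluster/Example_Folder", ts))
# Example_Cluster/Example_Folder/20240115/09
```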
Additionally, you can hydrate the archived logs into Kloudfuse. We run them through the metadata analysis, labeling, and so on. For older logs, this gives us the opportunity to apply the current (newer) set of grammar and rules, making them compatible (and comparable) with current logs.
To use the logs hydration UI, see Logs hydration.
