Gordon Murray

Send GitHub Commits and PR Logs to Elasticsearch Using a Custom Script

Hello Readers!! In this blog, we will see how we can send GitHub commits and PR logs to Elasticsearch using a custom script. Here we will use a bash script that will send GitHub logs to Elasticsearch: it will create an index in Elasticsearch and push the logs there.

After sending logs to Elasticsearch, we can visualize the following GitHub events in Kibana:

  • Track the details of commits made to the GitHub repository
  • Track events related to PRs in the GitHub repository over time
  • Analyze relevant information related to the GitHub repository

Workflow:

1. GitHub User: Users will be responsible for performing actions in a GitHub repository like commits and pull requests.

2. GitHub Repository: Source Code Management system on which users will perform actions.

3. GitHub Actions: The continuous integration and continuous delivery (CI/CD) platform, which will run each time a GitHub user commits any change or raises a pull request.

4. Bash Script: The custom script is written in bash for shipping GitHub logs to Elasticsearch.

5. Elasticsearch: Stores all of the logs in the created indices.

6. Kibana: Web interface for searching and visualizing logs.

Steps for sending logs to Elasticsearch using a bash script:

1. GitHub users will make commits and raise pull requests to the GitHub repository. Here is my GitHub repository which I have created for this blog.

https://github.com/NaincyKumariKnoldus/Github_logs

2. Create two GitHub Actions workflows in this repository. These workflows will get triggered on the events performed by the GitHub user.

GitHub Actions workflow file that gets triggered on commit (push) events:

commit_workflow.yml:

# The name of the workflow
name: CI
#environment variables
env:
    GITHUB_REF_NAME: $GITHUB_REF_NAME
    ES_URL: ${{ secrets.ES_URL }}
 
# Controls when the workflow will run
on: [push]
#A job is a set of steps in a workflow
jobs:
    send-push-events:
        name: Push Logs to ES
        #The job will run on the latest version of an Ubuntu Linux runner.
        runs-on: ubuntu-latest
        steps:
           #This is an action that checks out your repository onto the runner, allowing you to run scripts
           - uses: actions/checkout@v2
           #The run keyword tells the job to execute a command on the runner
           - run: ./git_commit.sh

GitHub Actions workflow file that gets triggered on pull request events:

pr_workflow.yml:

name: CI
 
env:
  GITHUB_REF_NAME: $GITHUB_REF_NAME
  ES_URL: ${{ secrets.ES_URL }}
 
on: [pull_request]
jobs:
  send-pull-events:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./git_pr.sh
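
Both workflows read the Elasticsearch endpoint from a repository secret named ES_URL. You can create it under the repository's Actions secrets settings or, as a rough sketch, with the GitHub CLI, assuming gh is installed and authenticated and substituting your own Elasticsearch endpoint:

# store the Elasticsearch endpoint as the ES_URL secret used by both workflows
gh secret set ES_URL --repo NaincyKumariKnoldus/Github_logs --body "http://your-elasticsearch-host:9200"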

3. Create two files inside your GitHub repository for the bash scripts. Following are the bash scripts for shipping GitHub logs to Elasticsearch. These scripts will be executed by the GitHub Actions workflows mentioned above.
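
Note that the GitHub Actions runner can only execute ./git_commit.sh and ./git_pr.sh if the executable bit is committed along with them. A minimal sketch, assuming both scripts sit at the repository root:

# mark the scripts as executable and commit the permission change
chmod +x git_commit.sh git_pr.sh
git add git_commit.sh git_pr.sh
git commit -m "Make log shipping scripts executable"
git push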

git_commit.sh will be triggered by the GitHub Actions workflow file commit_workflow.yml:

#!/bin/bash

# get github commits
getCommitResponse=$(
   curl -s \
      -H "Accept: application/vnd.github+json" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/commits?sha=$GITHUB_REF_NAME&per_page=100&page=1"
)

# get commit SHA
commitSHA=$(echo "$getCommitResponse" |
   jq '.[].sha' |
   tr -d '"')

# get the loop count based on number of commits
loopCount=$(echo "$commitSHA" |
   wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsCommitSHA=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_commit/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "commit_sha": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.commit_sha' |
                  tr -d '"')

# store ES commit sha in a temp file
echo $getEsCommitSHA | tr " " "\n" > sha_es.txt

# looping through each commit detail
for ((count = 0; count < $loopCount; count++)); do
   
   # get commitSHA
   commitSHA=$(echo "$getCommitResponse" |
      jq --argjson count "$count" '.[$count].sha' |
      tr -d '"')

   # match result for previous existing commit on ES
   matchRes=$(grep -o "$commitSHA" sha_es.txt)
   echo $matchRes | tr " " "\n" >> match.txt

   # filtering and pushing unmatched commit sha details to ES
   if [ -z "$matchRes" ]; then
      echo "Unmatched SHA: $commitSHA"
      echo $commitSHA | tr " " "\n" >> unmatch.txt
      
      # get author name
      authorName=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.name' |
         tr -d '"')

      # get commit message
      commitMessage=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.message' |
         tr -d '"')

      # get commit html url
      commitHtmlUrl=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].html_url' |
         tr -d '"')

      # get commit time
      commitTime=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.date' |
         tr -d '"')

      # send data to es
      curl -X POST "$ES_URL/github_commit/commit" \
         -H "Content-Type: application/json" \
         -d "{ \"commit_sha\" : \"$commitSHA\",
            \"branch_name\" : \"$GITHUB_REF_NAME\",
            \"author_name\" : \"$authorName\",
            \"commit_message\" : \"$commitMessage\",
            \"commit_html_url\" : \"$commitHtmlUrl\",
            \"commit_time\" : \"$commitTime\" }"
   fi
done

# removing temporary file
rm -rf sha_es.txt
rm -rf match.txt
rm -rf unmatch.txt

git_pr.sh will be triggered by the GitHub Actions workflow file pr_workflow.yml:

#!/bin/bash

# get github PR details
getPrResponse=$(curl -s \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/pulls?state=all&per_page=100&page=1")

# get number of PR
totalPR=$(echo "$getPrResponse" |
  jq '.[].number' |
  tr -d '"')

# get the loop count based on number of PRs
loopCount=$(echo "$totalPR" |
  wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsPR=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_pr/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "pr_number": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.pr_number' |
                  tr -d '"')

# store ES PR number in a temp file
echo $getEsPR | tr " " "\n" > sha_es.txt

# looping through each PR detail
for ((count = 0; count < $loopCount; count++)); do

  # get PR_number
  totalPR=$(echo "$getPrResponse" |
    jq --argjson count "$count" '.[$count].number' |
    tr -d '"')
  
  # match result for previously existing PR number on ES
  matchRes=$(grep -o "$totalPR" sha_es.txt)
  echo $matchRes | tr " " "\n" >>match.txt

  # filtering and pushing unmatched PR number details to ES
  if [ -z "$matchRes" ]; then
    # get PR html url
    PrHtmlUrl=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].html_url' |
      tr -d '"')

    # get PR Body
    PrBody=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].body' |
      tr -d '"')

    # get PR Number
    PrNumber=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].number' |
      tr -d '"')

    # get PR Title
    PrTitle=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].title' |
      tr -d '"')

    # get PR state
    PrState=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].state' |
      tr -d '"')

    # get PR created at
    PrCreatedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].created_at' |
      tr -d '"')

    # get PR closed at
    PrCloseAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].closed_at' |
      tr -d '"')

    # get PR merged at
    PrMergedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].merged_at' |
      tr -d '"')

    # get base branch name
    PrBaseBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].base.ref' |
      tr -d '"')

    # get source branch name
    PrSourceBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].head.ref' |
      tr -d '"')

    # send data to es
    curl -X POST "$ES_URL/github_pr/pull_request" \
      -H "Content-Type: application/json" \
      -d "{ \"pr_number\" : \"$PrNumber\",
            \"pr_url\" : \"$PrHtmlUrl\",
            \"pr_title\" : \"$PrTitle\",
            \"pr_body\" : \"$PrBody\",
            \"pr_base_branch\" : \"$PrBaseBranch\",
            \"pr_source_branch\" : \"$PrSourceBranch\",
            \"pr_state\" : \"$PrState\",
            \"pr_creation_time\" : \"$PrCreatedAt\",
            \"pr_closed_time\" : \"$PrCloseAt\",
            \"pr_merge_at\" : \"$PrMergedAt\"}"
  fi
done

# removing temporary file
rm -rf sha_es.txt
rm -rf match.txt
rm -rf unmatch.txt

4. Now push a commit to the GitHub repository. After the push, the GitHub Actions workflow for commit events will run and send the commit logs to Elasticsearch.

Move to your Elasticsearch and you will find the GitHub commit logs there.

We are now getting GitHub commits here.
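
If you prefer the command line over the Elasticsearch UI, a quick way to confirm the documents landed is to query the index directly with curl. A small sketch, assuming the same ES_URL endpoint the workflow uses:

# count the documents indexed so far
curl -s "$ES_URL/github_commit/_count?pretty"

# fetch a few commit documents to inspect the fields
curl -s -H "Content-Type: application/json" "$ES_URL/github_commit/_search?pretty" \
   -d '{ "size": 5, "query": { "match_all": {} } }'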

5. Now raise a pull request in your GitHub repository. This runs the GitHub Actions workflow for pull request events, which triggers the bash script that pushes the pull request logs to Elasticsearch.

The GitHub Actions workflow gets executed on the pull request.

Now, move to Elasticsearch and you will find the pull request logs there.
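
As a quick command-line check here as well, you can search the github_pr index, for example for open pull requests (a sketch, again assuming the same ES_URL endpoint):

# list pull request documents whose state is open
curl -s -H "Content-Type: application/json" "$ES_URL/github_pr/_search?pretty" \
   -d '{ "query": { "match": { "pr_state": "open" } } }'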

6. We can also visualize these logs in Kibana, both the GitHub commit logs and the GitHub pull request logs.

This is how we can analyze our GitHub logs in Elasticsearch and Kibana using the custom script.

We are all done now!!

Conclusion:

Thank you for sticking to the end. In this blog, we have learned how we can send GitHub commits and PR logs to Elasticsearch using a custom script. This is really very quick and simple. If you like this blog, please share my blog and show your appreciation by giving thumbs-ups, and don’t forget to give me suggestions on how I can improve my future blogs that can suit your needs.

Original article source at: https://blog.knoldus.com/

#script #github #elasticsearch #log 

Hermann Frami

Sneller: Vectorized SQL for JSON At Scale: Fast, Simple, Schemaless

Vectorized SQL for JSON at scale: fast, simple, schemaless

Sneller is a high-performance vectorized SQL engine for JSON that runs directly on object storage. Sneller is optimized to handle large TB-sized JSON (and more generally, semi-structured data including deeply nested structures/fields) without needing a schema to be specified upfront or dedicated ETL/ELT/indexing steps. It is particularly well suited for the rapidly growing world of event data such as data from Security, Observability, Ops, Product Analytics and Sensor/IoT data pipelines. Under the hood, Sneller operates on ion, a structure-preserving, compact binary representation of the original JSON data.

Sneller's query performance derives from pervasive use of SIMD, specifically AVX-512 assembly in its 250+ core primitives. The main engine is capable of processing many lanes in parallel per core for very high processing throughput. This eliminates the need to pre-process JSON data into an alternate representation - such as search indices (Elasticsearch and variants) or columnar formats like parquet (as commonly done with SQL-based tools). Combined with the fact that Sneller's main 'API' is SQL (with JSON as the primary output format), this greatly simplifies processing pipelines built around JSON data.

Sneller extends standard SQL syntax via PartiQL by supporting path expressions to reference nested fields/structures in JSON. For example, the . operator dereferences fields within structures. In combination with normal SQL functions/operators, this makes for a far more ergonomic way to query deeply nested JSON than non-standard SQL extensions. Additionally, Sneller implements a large (and growing!) number of built-in functions from other SQL implementations.
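
As an illustration, using the gharchive sample referenced in the quick test drive below, a path expression can be used directly in a WHERE clause or GROUP BY (a sketch; the field names come from the GitHub archive event schema):

$ # count events per type for a single repository, dereferencing repo.name with the . operator
$ sneller -j "SELECT type, COUNT(*) FROM 'gharchive-1day.ion.zst' WHERE repo.name = 'torvalds/linux' GROUP BY type"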

Unlike traditional data stores, Sneller completely separates storage from compute, as it is foundationally built to use object storage such as S3, GCS, Azure Blob or Minio as its primary storage layer. There are no other dependencies, such as meta-databases or key/value stores, to install, manage and maintain. This means no complex redundancy-based architecture (HA) is needed to avoid data loss. It also means that scaling Sneller up or down is as simple as adding or removing compute-only nodes.

Here is a 50000 ft overview of what is essentially the complete Sneller pipeline for JSON -- you can also read our more detailed blog post Introducing sneller.

Sneller SQL for JSON

Build from source

Make sure you have Golang 1.18 installed, and build as follows:

$ git clone https://github.com/SnellerInc/sneller
$ cd sneller
$ go build ./...

AVX-512 support

Please make sure that your CPU has AVX-512 support. Also note that AVX-512 is widely available on all major cloud providers: for AWS we recommend c6i (Ice Lake) or r5 (Skylake), for GCP we recommend N2, M2, or C2 instance types, or either Dv4 or Ev4 families on Azure.
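
On Linux you can quickly check for AVX-512 before building, for instance (a sketch):

$ # list the AVX-512 feature flags reported by the CPU, if any
$ grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u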

Quick test drive

The easiest way to try out sneller is via the (standalone) sneller executable. (note: this is more of a development tool, for application use see either the Docker or Kubernetes section below.)

We've made some sample data available in the sneller-samples bucket, based on the (excellent) GitHub archive. Here are some queries that illustrate what you can do with Sneller on fairly complex JSON event structures containing 100+ fields.

simple count

$ go install github.com/SnellerInc/sneller/cmd/sneller@latest
$ aws s3 cp s3://sneller-samples/gharchive-1day.ion.zst .
$ du -h gharchive-1day.ion.zst
1.3G
$ sneller -j "select count(*) from 'gharchive-1day.ion.zst'"
{"count": 3259519}

search/filter (notice SQL string operations such as LIKE/ILIKE on a nested field repo.name)

$ # all repos containing 'orvalds' (case-insensitive)
$ sneller -j "SELECT DISTINCT repo.name FROM 'gharchive-1day.ion.zst' WHERE repo.name ILIKE '%orvalds%'"
{"name": "torvalds/linux"}
{"name": "jordy-torvalds/dailystack"}
{"name": "torvalds/subsurface-for-dirk"}

standard SQL aggregations/grouping

$ # number of events per type
$ sneller -j "SELECT type, COUNT(*) FROM 'gharchive-1day.ion.zst' GROUP BY type ORDER BY COUNT(*) DESC"
{"type": "PushEvent", "count": 1686536}
...
{"type": "GollumEvent", "count": 7443}

query custom payloads (see payload.pull_request.created_at only for type = 'PullRequestEvent' rows)

$ # number of pull requests that took more than 180 days
$ sneller -j "SELECT COUNT(*) FROM 'gharchive-1day.ion.zst' WHERE type = 'PullRequestEvent' AND DATE_DIFF(DAY, payload.pull_request.created_at, created_at) >= 180"
{"count": 3161}

specialized operators like TIME_BUCKET

$ # number of events per type per hour (date histogram)
$ sneller -j "SELECT TIME_BUCKET(created_at, 3600) AS time, type, COUNT(*) FROM 'gharchive-1day.ion.zst' GROUP BY TIME_BUCKET(created_at, 3600), type"
{"time": 1641254400, "type": "PushEvent", "count": 58756}
...
{"time": 1641326400, "type": "MemberEvent", "count": 316}

combine multiple queries

# fire off multiple queries simultaneously as a single (outer) select
$ sneller -j "SELECT (SELECT COUNT(*) FROM 'gharchive-1day.ion.zst') AS query0, (SELECT DISTINCT repo.name FROM 'gharchive-1day.ion.zst' WHERE repo.name ILIKE '%orvalds%') as query1" | jq
{
  "query0": 3259519,
  "query1": [
    { "name": "torvalds/linux" },
    { "name": "jordy-torvalds/dailystack" },
    { "name": "torvalds/subsurface-for-dirk" }
  ]
}

If you're a bit more adventurous, you can grab the 1month object (contains 80M rows at 29GB compressed), here as tested on a c6i.32xlarge:

$ aws s3 cp s3://sneller-samples/gharchive-1month.ion.zst .
$ du -h gharchive-1month.ion.zst 
29G
$ time sneller -j "select count(*) from 'gharchive-1month.ion.zst'"
{"count": 79565989}
real    0m4.892s
user    6m41.630s
sys     0m48.016s
$ 
$ time sneller -j "SELECT DISTINCT repo.name FROM 'gharchive-1month.ion.zst' WHERE repo.name ILIKE '%orvalds%'"
{"name": "torvalds/linux"}
{"name": "jordy-torvalds/dailystack"}
...
{"name": "IHorvalds/AstralEye"}
real    0m4.940s
user    7m11.080s
sys     0m28.268s

Performance

Depending on the type of query, sneller is capable of processing several GB of data per second per core, as shown in these benchmarks (measured on a c6i.12xlarge instance on AWS with an Ice Lake CPU):

$ cd vm
$ # S I N G L E   C O R E
$ GOMAXPROCS=1 go test -bench=HashAggregate
cpu: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
BenchmarkHashAggregate/case-0                  6814            170163 ns/op          6160.59 MB/s
BenchmarkHashAggregate/case-1                  5361            217318 ns/op          4823.83 MB/s
BenchmarkHashAggregate/case-2                  5019            232081 ns/op          4516.98 MB/s
BenchmarkHashAggregate/case-3                  4232            278055 ns/op          3770.13 MB/s
PASS
ok      github.com/SnellerInc/sneller/vm        6.119s
$
$ # A L L   C O R E S
$ go test -bench=HashAggregate
cpu: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
BenchmarkHashAggregate/case-0-48             155818              6969 ns/op        150424.92 MB/s
BenchmarkHashAggregate/case-1-48             129116              8764 ns/op        119612.84 MB/s
BenchmarkHashAggregate/case-2-48             121840              9379 ns/op        111768.43 MB/s
BenchmarkHashAggregate/case-3-48             119640              9578 ns/op        109444.06 MB/s
PASS
ok      github.com/SnellerInc/sneller/vm        5.576s

The following chart shows the performance for a varying number of cores:

Sneller Performance

Sneller is capable of scaling beyond a single server and for instance a medium-sized r6i.12xlarge cluster in AWS can achieve 1TB/s in scanning performance, even running non-trivial queries.

Spin up stack locally

It is easiest to spin up a local stack, consisting of just Sneller as the query engine and Minio as the S3 storage layer, by using Docker. Detailed instructions can be found here using sample data from the GitHub archive (but swapping this out for your own data is trivial). Note that this setup is a single-node install and therefore not HA.

Once you have followed the instructions, you can interact with Sneller on localhost:9180 via curl, e.g.:

$ curl -G -H "Authorization: Bearer $SNELLER_TOKEN" --data-urlencode "database=gha" \
    --data-urlencode 'json' --data-urlencode 'query=SELECT COUNT(*) FROM gharchive' \
    'http://localhost:9180/executeQuery'
{"count": 2141038}
$ curl -G -H "Authorization: Bearer $SNELLER_TOKEN" --data-urlencode "database=gha" \
    --data-urlencode 'json' --data-urlencode 'query=SELECT type, COUNT(*) FROM gharchive GROUP BY type ORDER BY COUNT(*) DESC' \
    'http://localhost:9180/executeQuery'
{"type": "PushEvent", "count": 1303922}
{"type": "CreateEvent", "count": 261401}
...
{"type": "GollumEvent", "count": 4035}
{"type": "MemberEvent", "count": 2644}

Spin up sneller stack in the cloud

It is also possible to use Kubernetes to spin up a sneller stack in the cloud. You can either do this on AWS using S3 for storage, or in another (hybrid) cloud that supports Kubernetes, potentially using an object store such as Minio.

See the Sneller on Kubernetes instructions for more details and an example of how to spin this up.

Documentation

See the docs directory for more information (of a technical nature).

Explore further

See docs.sneller.io for further information.

Development

See docs/DEVELOPMENT.

Contribute

Sneller is released under the AGPL-3.0 license. See the LICENSE file for more information.

Download Details:

Author: SnellerInc
Source Code: https://github.com/SnellerInc/sneller 
License: AGPL-3.0 license

#serverless #go #json #sql #log 


How to Collect Istio Logs with Fluentbit on Kubernetes

Fluentbit is used for collecting logs from servers, Linux machines, Kubernetes nodes, etc. You might want to collect these Istio logs, parse them, and route them to a separate index to read in Kibana and Elasticsearch.

Let's find out how we can achieve this functionality.

Prerequisites

  • Kubernetes cluster
  • Helm installed on your machine
  • Istio set up & proxy injection enabled
  • Some pods running to generate logs
  • Optionally, Elasticsearch & Kibana to visualize the log indices

Fluent-bit Data pipeline configuration File

The file contains sections that define where to pick up data, how to pick it up, what to pick up, and where to route it. Let's see each section.

  1. SERVICE
  2. INPUT
  3. FILTER
  4. OUTPUT
  5. PARSER

[SERVICE] Section

The global properties are defined in the SERVICE section of the configuration file.

[INPUT] Section

The INPUT section defines the source and the input plugin. This section decides how, where, and what data is picked up from sources. To read about input plugins, follow this page.

[FILTER] Section

Set filters on incoming records or data. It supports many Filter plugins to filter & transform collected records. Read more in detail about the filter plugin.

[OUTPUT] Section

This section defines the route & destination of the matched records. It uses many output plugins for example in this case we are using elasticsearch. Read more about Output plugins.

[PARSER] Section

Parsers are basically used to structure the collected logs. Collected logs may or may not be in the structure that we want, and parsing makes log processing easier. Read more about Parser plugins. Regex and JSON are the most commonly used.

    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On


    [INPUT]
        Name tail
        Tag_Regex  (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
        Tag  kube.<container_name>.<namespace_name>
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Mem_Buf_Limit 20MB
        Skip_Long_Lines On

    [FILTER]
        Name                parser
        Match               kube.istio-proxy.*
        Key_Name            log
        Reserve_Data        On
        Parser              envoy

    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On
        Logstash_Prefix istio
        Retry_Limit False

    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

    [PARSER]
        Name    envoy
        Format  regex
        Regex ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<code>[^ ]*) (?<response_flags>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<request_id>[^\"]*)" "(?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)"
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep   On
        Time_Key start_time

    [PARSER]
        Name    istio-envoy-proxy
        Format  regex
        Regex ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<response_code>[^ ]*) (?<response_flags>[^ ]*) (?<response_code_details>[^ ]*) (?<connection_termination_details>[^ ]*) (?<upstream_transport_failure_reason>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<x_request_id>[^\"]*)" (?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)" (?<upstream_cluster>[^ ]*) (?<upstream_local_address>[^ ]*) (?<downstream_local_address>[^ ]*) (?<downstream_remote_address>[^ ]*) (?<requested_server_name>[^ ]*) (?<route_name>[^  ]*)
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep   On
        Time_Key start_time


Helm Custom Values file for Istio logs index

Create a custom-values.yaml file for helm values. This file will be used in the fluent-bit deployment. Carefully place the pipeline configuration under the config: dictionary (.Values.config) as shown below.

custom-values.yaml

annotations:
  sidecar.istio.io/inject: "false"
podAnnotations:
  sidecar.istio.io/inject: "false"

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Tag_Regex  (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
        Tag  kube.<container_name>.<namespace_name>
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Mem_Buf_Limit 20MB
        Skip_Long_Lines On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name                parser
        Match               kube.istio-proxy.*
        Key_Name            log
        Reserve_Data        On
        Parser              envoy

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On
        Logstash_Prefix istio
        Retry_Limit False

  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

    [PARSER]
        Name    envoy
        Format  regex
        Regex ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<code>[^ ]*) (?<response_flags>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<request_id>[^\"]*)" "(?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)"
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep   On
        Time_Key start_time

    [PARSER]
        Name    istio-envoy-proxy
        Format  regex
        Regex ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<response_code>[^ ]*) (?<response_flags>[^ ]*) (?<response_code_details>[^ ]*) (?<connection_termination_details>[^ ]*) (?<upstream_transport_failure_reason>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<x_request_id>[^\"]*)" (?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)" (?<upstream_cluster>[^ ]*) (?<upstream_local_address>[^ ]*) (?<downstream_local_address>[^ ]*) (?<downstream_remote_address>[^ ]*) (?<requested_server_name>[^ ]*) (?<route_name>[^  ]*)
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep   On
        Time_Key start_time

    [PARSER]
        Name        k8s-nginx-ingress
        Format      regex
        Regex       ^(?<host>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (\[(?<proxy_alternative_upstream_name>[^ ]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<reg_id>[^ ]*).*$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name    kube-custom
        Format  regex
        Regex   (?<tag>[^.]+)?\.?(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$

Clone this fluent-bit repo with helm chart

Clone the GitHub repo, which already has a configured Fluentbit data pipeline configuration. Run the below command in your terminal.

git clone https://github.com/knoldus/istio-fluent-bit-logs.git
cd istio-fluent-bit-logs

Deploy Fluent-bit with Helm

Now, you just need to run a helm command to deploy Fluentbit on Kubernetes in the logging namespace.

helm upgrade --install fluent-bit ./ -f ./custom-values.yaml --namespace logging --create-namespace

This will apply our custom data pipeline configuration and collect the Istio logs into a separate index.
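
To confirm everything is wired up, you can check the Fluent Bit pods and then look for the istio-* indices in Elasticsearch. A sketch, assuming the chart deploys a DaemonSet named fluent-bit and that Elasticsearch is reachable through the elasticsearch-master service referenced in the OUTPUT section:

# check that the fluent-bit pods are running and look at their recent output
kubectl get pods -n logging
kubectl logs -n logging daemonset/fluent-bit --tail=20

# in one terminal: port-forward Elasticsearch from the cluster
kubectl port-forward svc/elasticsearch-master 9200:9200

# in another terminal: list the istio-* indices created via Logstash_Prefix
curl -s "http://localhost:9200/_cat/indices/istio-*?v"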

Conclusion – Istio logs

These Istio logs can be used for troubleshooting the service mesh. They can also be used for auditing access logs and much more. Elasticsearch and Kibana are required to visualize the log entries coming from Fluentbit.

Original article source at: https://blog.knoldus.com/

#kubernetes #log 


Setup Log-based Alerts in GCP

Log-based alerts are useful when you want to alert your team whenever certain logs occur or some error or exception breaks some functionality of your application.

This guide will help you set up log-based alerts in a GCP project.

Prerequisites

  1. GCP Project ID
  2. Log filters or queries to set up alerts
  3. Notification channels set up in GCP to send alerts (Slack, Pagerduty, or email)
  4. Markdown doc to send with alerts

Add Log-based Alerts

First convert your markdown doc into a string.

sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' path/to/file.md

This string will be used in the content field of the alert configs.
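
For example, you can capture the escaped string in a shell variable and then paste it into the content field of the tfvars entry (a sketch; the markdown path here is hypothetical):

# convert the runbook markdown into a single escaped string for the content field
CONTENT=$(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' docs/alert-runbook.md)
echo "$CONTENT"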

Step 1

  1. Open the alerts_config.auto.tfvars file
  2. Add an alert config map object below the commented object:
email_alert_policy_configs = [
  # {
  #   alert_display_name     = "Your email Alert Name"
  #   filter                 = "log queries filters"
  #   notification_channels  = ["Display name of Email notification channels"]
  #   condition_display_name = "Name of the Condition"
  #   alert_strategy = {
  #     period     = "300s"
  #     auto_close = "1800s"
  #   }
  #   content = "Markdown file in string. Read README.md for convert file into String"
  # }
]

Setup terraform module

File directory structure

.
├── alerts
│   ├── alerts.tf
│   └── variables.tf
├── alerts_config.auto.tfvars
├── .gitignore
├── main.tf
├── README.md
└── variables.tf

main.tf

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.20.0"
    } 
  }
}

module "log-based-alerts" {
  slack_alert_policy_configs     = var.slack_alert_policy_configs
  pagerduty_alert_policy_configs = var.pagerduty_alert_policy_configs
  email_alert_policy_configs = var.email_alert_policy_configs
  source                         = "./alerts"
}

alerts_config.auto.tfvars

slack_alert_policy_configs = [
# {
# alert_display_name = "Your Slack Alert Name"
# filter = "log queries filters"
# notification_channels = ["Display name of Slack notification channels"]
# condition_display_name = "Name of the Condition"
# alert_strategy = {
# period = "300s"
# auto_close = "1800s"
# }
# content = "Markdown file in string. Read README.md for convert file into String"
# }
]
pagerduty_alert_policy_configs = [
# {
# alert_display_name = "Your Pageduty Alert Name"
# filter = "log queries filters"
# notification_channels = ["Display name of Pagerduty notification channels"]
# condition_display_name = "Name of the Condition"
# alert_strategy = {
# period = "300s"
# auto_close = "1800s"
# }
# content = "Markdown file in string. Read README.md for convert file into String"
# }
]

email_alert_policy_configs = [
# {
# alert_display_name = "Your email Alert Name"
# filter = "log queries filters"
# notification_channels = ["Display name of Email notification channels"]
# condition_display_name = "Name of the Condition"
# alert_strategy = {
# period = "300s"
# auto_close = "1800s"
# }
# content = "Markdown file in string. Read README.md for convert file into String"
# }
]

variables.tf

variable "slack_alert_policy_configs" {

}

variable "pagerduty_alert_policy_configs" {

}
variable "email_alert_policy_configs" {
  
}

Setup Alert module in the alerts/ directory

alerts.tf

locals {
  log_based_slack_alerts_configs ={
    for alc in var.slack_alert_policy_configs :
    alc.alert_display_name => {
        filter = alc.filter
        notification_channels = alc.notification_channels
        condition_display_name = alc.condition_display_name
        alert_strategy = alc.alert_strategy
        content = alc.content
    }
  }
  log_based_pagerduty_alerts_configs={
    for alc in var.pagerduty_alert_policy_configs :
    alc.alert_display_name => {
        filter = alc.filter
        notification_channels = alc.notification_channels
        condition_display_name = alc.condition_display_name
        alert_strategy = alc.alert_strategy
        content = alc.content
    }
  }
  log_based_email_alerts_configs={
    for alc in var.email_alert_policy_configs :
    alc.alert_display_name => {
        filter = alc.filter
        notification_channels = alc.notification_channels
        condition_display_name = alc.condition_display_name
        alert_strategy = alc.alert_strategy
        content = alc.content
    }
  }
}

resource "google_monitoring_alert_policy" "slack_alert_policy" {
  for_each = local.log_based_slack_alerts_configs
  display_name = each.key
  documentation {
    content   = each.value.content
    mime_type = "text/markdown"
  }
  enabled = true
  alert_strategy {
    notification_rate_limit {
      period = each.value.alert_strategy.period
    }
    auto_close = each.value.alert_strategy.auto_close
  }
  notification_channels = data.google_monitoring_notification_channel.slack_channels[*].id
  combiner              = "OR"
  conditions {
    display_name = each.value.condition_display_name
    condition_matched_log {
      filter = each.value.filter
    }
  }
}

resource "google_monitoring_alert_policy" "pagerduty_alert_policy" {
  for_each = local.log_based_pagerduty_alerts_configs
  display_name = each.key
  documentation {
    content   = each.value.content
    mime_type = "text/markdown"
  }
  enabled = true
  alert_strategy {
    notification_rate_limit {
      period = each.value.alert_strategy.period
    }
    auto_close = each.value.alert_strategy.auto_close
  }
  notification_channels = data.google_monitoring_notification_channel.pagerduty_channels[*].id
  combiner              = "OR"
  conditions {
    display_name = each.value.condition_display_name
    condition_matched_log {
      filter = each.value.filter
    }
  }
}

resource "google_monitoring_alert_policy" "email_alert_policy" {
  for_each = local.log_based_email_alerts_configs
  display_name = each.key
  documentation {
    content   = each.value.content
    mime_type = "text/markdown"
  }
  enabled = true
  alert_strategy {
    notification_rate_limit {
      period = each.value.alert_strategy.period
    }
    auto_close = each.value.alert_strategy.auto_close
  }
  notification_channels = data.google_monitoring_notification_channel.email_channels[*].id
  combiner              = "OR"
  conditions {
    display_name = each.value.condition_display_name
    condition_matched_log {
      filter = each.value.filter
    }
  }
}

data "google_monitoring_notification_channel" "slack_channels" {
  count = length(var.slack_alert_policy_configs) > 0 ?length(var.slack_alert_policy_configs[0].notification_channels):0
  display_name = var.slack_alert_policy_configs[0].notification_channels[count.index]
}


data "google_monitoring_notification_channel" "pagerduty_channels" {
  
  count = length(var.pagerduty_alert_policy_configs) > 0 ?length(var.pagerduty_alert_policy_configs[0].notification_channels):0
  display_name = var.pagerduty_alert_policy_configs[0].notification_channels[count.index]
}


data "google_monitoring_notification_channel" "email_channels" {
  
  count = length(var.email_alert_policy_configs) > 0 ?length(var.email_alert_policy_configs[0].notification_channels):0
  display_name = var.email_alert_policy_configs[0].notification_channels[count.index]
}

variables.tf

variable "slack_alert_policy_configs" {
  default = []
}

variable "pagerduty_alert_policy_configs" {
  default = []
}

variable "email_alert_policy_configs" {
  default = []
}

Usage

Install provider plugins

terraform init

Check Terraform plan

terraform plan

Apply the terraform configurations

terraform apply
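
After the apply finishes, you can cross-check the created alert policies from the command line (a sketch, assuming the gcloud CLI is authenticated against the same project and PROJECT_ID holds your project ID):

# list the display names of the monitoring alert policies in the project
gcloud alpha monitoring policies list --project="$PROJECT_ID" --format="value(displayName)"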

Original article source at: https://blog.knoldus.com/

#gcp #log #setup 

Sheldon Grant

Using Kail to View Logs in Kubernetes

Hello Readers!! In this blog, we are going to see a new tool, i.e. Kail. We will see how we can use kail to view logs in Kubernetes. Frequently we use the kubectl command for looking at the logs of pods in Kubernetes, but it is not convenient when we want to see the logs from multiple pods. So, here is a very simple tool that we can use to view logs from multiple pods in Kubernetes at the same time, in a single window.

So, let’s get started!

Installation:

Following is the GitHub repository for kail; we can download it from here:

https://github.com/boz/kail

Run these commands for installing kail on your local:

$  wget https://github.com/boz/kail/releases/download/v0.15.0/kail_0.15.0_linux_amd64.tar.gz 

$ tar zxf kail_0.15.0_linux_amd64.tar.gz
$ sudo mv kail /usr/local/bin

$ which kail

My kail is all set and ready to play.

How to use Kail:

Let’s start playing with it. I will type only a single word in my terminal and it will show me the logs of all the pods that are running in my cluster.

$ kail

It shows the logs of all running pods.

We can see all the kail commands that we will use further:

$ kail --help

If we want to see the logs of any particular pod inside our cluster, we can do this by running the following command:

$ kail -p <pod_name>

When we want to see the logs of all the pods that are running in a particular namespace then we have to use:

$ kail -n <namespace>

What if we want to see the logs of the last 10 minutes? Kail also provides this option, let's see:

$ kail -p <pod_name> --since <time>

$ kail -n <namespace> --since <time>

If you want to see the logs of the container that has a particular label then use this command:

$  kail -l <label> --since <time>

If you want to see the logs of all the pods running inside a node then run the following command:

$ kail --node <node_name>

If you want to see the logs of any particular deployment you can use this command:

$  kail --deploy <deployment_name> --since <time>

For service:

$ kail --svc <service_name>

Similarly for containers, you can see it by:

$ kail --containers <container_name>

So, if you want to use more options, you can easily do it. It is really simple to use and a very easy way to view the logs of all your Kubernetes objects.
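
Selectors can also be combined. For example, to follow the last five minutes of logs from pods carrying a particular label in one namespace (a sketch):

$ kail -n <namespace> -l <label> --since 5m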

Conclusion:

Thank you for sticking to the end. In this blog, we have seen how to use kail to view logs in Kubernetes. If you like this blog, please show your appreciation by giving thumbs-ups and share this blog and give me suggestions on how I can improve my future posts to suit your needs.

HAPPY LEARNING!

Original article source at: https://blog.knoldus.com/

#kubernetes #log 

Sheldon Grant

Analyze Jenkins Job Build Logs using The Log Parser Plugin

Hello Readers! In this blog, we will see how we can analyze Jenkins job build logs using the Log Parser plugin. As we all know, Jenkins is now the most widely used CI tool. Job build logs in Jenkins can sometimes become very verbose, so keeping the logs in an easy-to-read format plays a very significant role. Here is a plugin for this case, i.e. the Log Parser plugin. It lets you analyze your build log in a pretty good way in Jenkins. So, let's see how we can apply it!!

Using the Log Parser Plugin:

Step1: Set up Jenkins and create a job for which you want to use this plugin for the console build log. Use the following blog for reference: https://blog.knoldus.com/jenkins-installation-and-creation-of-freestyle-project/

Step 2: Install the Log Parser plugin in your Jenkins. Move to Manage Jenkins > Manage Plugins > Available > search for Log Parser plugin > Install. I have already installed it.

This Log Parser plugin will parse the console build logs generated by the Jenkins build. We can apply the following features to our logs by using this plugin:

  • We can categorize the build log into the sections like ERRORS, INFO, DEBUG, WARNING, and HEADER.
  • We can display these sections in summaries like the total number of errors and info on the build page.
  • We can highlight the lines of our interest in the build log as per our needs.
  • We can link the summary of errors and warnings with the full log, which makes it easy for us to search for a line of interest in the build log.

Step 3: The next step is to write a parsing rule file that will contain the levels that we want to view in the console output log. This file should be saved with the extension .properties or .txt. So, this is a sample parser.txt file:

ok /setenv/

# match line starting with ‘error ‘, case-insensitive

error /(?i)^error /

# list of warnings here

warning /[Ww]arning/

warning /WARNING/

# creates an access link to lines in the report containing ‘SUCCESS’

info /SUCCESS/

# each line containing ‘INFO’ represents the start of a section for grouping errors and warnings found after the line.

# creates a quick access link.

start /INFO/

Step 4: There are two options for adding this file. If we select the global rule inside the configuration, then we have to create this file inside /var/lib/jenkins. And if we select the project rule, then we have to create it inside /var/lib/jenkins/workspace/<job_name>. I have created the file accordingly in my Jenkins setup.
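
For reference, one way to create a global rule file from the shell on the Jenkins controller, as a sketch assuming the default Jenkins home at /var/lib/jenkins and reusing the sample rules above:

# write the parsing rules into the Jenkins home and hand ownership to the jenkins user
sudo tee /var/lib/jenkins/parser.txt > /dev/null <<'EOF'
ok /setenv/
error /(?i)^error /
warning /[Ww]arning/
warning /WARNING/
info /SUCCESS/
start /INFO/
EOF
sudo chown jenkins:jenkins /var/lib/jenkins/parser.txt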

Step 5: Open your job configuration inside Jenkins. In the post-build actions section, add Console output (build log) parsing and point it to the parsing rule file you created.

Save and apply these configurations.

Step 6: Now we will build our job and analyze it. Click on Build Now, then open the console output of that build and click on Console Output (parsed). Here we can find the categorized build logs on this page.

So, we can see here the levels into which the parsed console output is grouped. Inside info, it is highlighting the lines that we wanted. Here we can also see the log parser graph.

Therefore this is how we can analyze Jenkins build logs in the way that we want. It makes the build logs more useful to us.

Conclusion:

Thank you for sticking to the end. In this blog we have learned how we can use the Log Parser plugin in Jenkins to analyze our build logs, which makes our logs much easier to read. If you like this blog, please show your appreciation by giving a thumbs-up and sharing this blog, and give me suggestions on how I can improve my future posts to suit your needs.

HAPPY LEARNING! 

Original article source at: https://blog.knoldus.com/

#jenkins #log 


LogParser.jl: Julia Package for Parsing Server Log Files

LogParser

LogParser.jl is a package for parsing server logs. Currently, only server logs having the Apache Combined format are supported (although Apache Common may parse as well). Additional types of logs may be added in the future as well.

LogParser.jl will attempt to handle the log format even if it is mangled, returning partial matches as best as possible. For example, if the end of the log entry is mangled, you may still get the IP address, timestamp, and other parts that could be parsed.

Code examples

The API for this package is straightforward:

using LogParser

logarray = [...] #Any AbstractArray of Strings

#Parse file
parsed_vals = parseapachecombined.(logarray)

#Convert to DataFrame if desired
parsed_df = DataFrame(parsed_vals)


Download Details:

Author: randyzwitch
Source Code: https://github.com/randyzwitch/LogParser.jl 
License: View license

#julia #log #analysis 


LogViewer: Provides A Log Viewer for Laravel

LogViewer

By ARCANEDEV©

This package allows you to manage and keep track of each one of your log files.

NOTE: You can also use LogViewer as an API.

Official documentation for LogViewer can be found at the _docs folder.

Feel free to check out the releases, license, and contribution guidelines.

Features

  • A great Log viewer API.
  • Laravel 5.x to 9.x are supported.
  • Ready to use (Views, Routes, controllers … Out of the box) [Note: No need to publish assets]
  • View, paginate, filter, download and delete logs.
  • Load a custom logs storage path.
  • Localized log levels.
  • Logs menu/tree generator.
  • Grouped logs by dates and levels.
  • Customized log levels icons (font awesome by default).
  • Works great with big logs !!
  • Well documented package (IDE Friendly).
  • Well tested (100% code coverage with maximum code quality).

Table of contents

  1. Installation and Setup
  2. Configuration
  3. Usage
  4. FAQ

Supported localizations

Dear artisans, i'm counting on you to help me out to add more translations ( ^_^)b

Local    Language
ar       Arabic
bg       Bulgarian
de       German
en       English
es       Spanish
et       Estonian
fa       Farsi
fr       French
he       Hebrew
hu       Hungarian
hy       Armenian
id       Indonesian
it       Italian
ja       Japanese
ko       Korean
ms       Malay
nl       Dutch
pl       Polish
pt-BR    Brazilian Portuguese
ro       Romanian
ru       Russian
si       Sinhalese
sv       Swedish
th       Thai
tr       Turkish
uk       Ukrainian
zh       Chinese (Simplified)
zh-TW    Chinese (Traditional)

Contribution

Any ideas are welcome. Feel free to submit any issues or pull requests, please check the contribution guidelines.

Security

If you discover any security related issues, please email arcanedev.maroc@gmail.com instead of using the issue tracker.

Credits

PREVIEW

Dashboard, Logs list, and Single log screenshots.

Download Details:

Author: ARCANEDEV
Source Code: https://github.com/ARCANEDEV/LogViewer 
License: MIT license

#php #log #laravel 


Laravel-log-viewer: Laravel Log Viewer

Laravel log viewer

TL;DR

Log Viewer for Laravel 5, 6, 7 & 8 (still compatible with 4.2 too) and Lumen. Install with composer, create a route to LogViewerController. No public assets, no vendor routes, works with and/or without log rotate. Inspired by Micheal Mand's Laravel 4 log viewer (works only with laravel 4.1)

What ?

Small log viewer for laravel. Looks like this:

Install (Laravel)

Install via composer

composer require rap2hpoutre/laravel-log-viewer

Add Service Provider to config/app.php in providers section

Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider::class,

Add a route in your web routes file:

Route::get('logs', [\Rap2hpoutre\LaravelLogViewer\LogViewerController::class, 'index']);

Go to http://myapp/logs or some other route

Install (Lumen)

Install via composer

composer require rap2hpoutre/laravel-log-viewer

Add the following in bootstrap/app.php:

$app->register(\Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider::class);

Explicitly set the namespace in app/Http/routes.php:

$router->group(['namespace' => '\Rap2hpoutre\LaravelLogViewer'], function() use ($router) {
    $router->get('logs', 'LogViewerController@index');
});

Advanced usage

Customize view

Publish log.blade.php into /resources/views/vendor/laravel-log-viewer/ for view customization:

php artisan vendor:publish \
  --provider="Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider" \
  --tag=views

Edit configuration

Publish logviewer.php configuration file into /config/ for configuration customization:

php artisan vendor:publish \
  --provider="Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider"

Troubleshooting

If you get an InvalidArgumentException in FileViewFinder.php, it may be a problem with config caching. Double-check the installation, then run php artisan config:clear.

Download Details:

Author: rap2hpoutre
Source Code: https://github.com/rap2hpoutre/laravel-log-viewer 
License: MIT license

#php #laravel #log #hacktoberfest 

Rupert Beatty

CocoaDebug: iOS Debugging Tool

Introduction

 Shake to hide or show the black bubble. (Support iPhone device and simulator)

 Share network details via email or copy to clipboard when you are in the Network Details page.

 Copy logs. (Long press the text, then select all or select copy)

 Search logs by keyword.

 Long press the black bubble to clean all network logs.

 Detect UI Blocking.

 List crash errors.

 List application and device information, including: version, build, bundle name, bundle id, screen resolution, device, iOS version

 List all network requests sent by the application. (Support JSON and Google's Protocol buffers)

 List all sandbox folders and files, supporting to preview and edit.

 List all WKWebView consoles.

 List all React Native JavaScript consoles and Native logs.

 List all print() and NSLog() messages which have been written by developer in Xcode.

Installation

CocoaPods (Preferred)

target 'YourTargetName' do
    use_frameworks!
    pod 'CocoaDebug', :configurations => ['Debug']
end

Carthage

github  "CocoaDebug/CocoaDebug"

Framework

CocoaDebug.framework (Version 1.7.2)

WARNING: Never ship a product which has been linked with the CocoaDebug framework. The Integration Guide outlines a way to use build configurations to isolate linking the framework to Debug builds.

Xcode12 build error solution

Usage

  • Don't need to do anything. CocoaDebug will start automatically.
  • To capture logs from Xcode with codes: (You can also set this in CocoaDebug->App->Monitor->Applogs without any codes.)
CocoaDebugSettings.shared.enableLogMonitoring = true //The default value is false

Screenshot

Parameters

When you initialize CocoaDebug, you can customize the following parameter values before CocoaDebug.enable().

serverURL - If the captured URLs contain server URL, CocoaDebug set server URL bold font to be marked. Not mark when this value is nil. Default value is nil.

ignoredURLs - Set the URLs which should not been captured, CocoaDebug capture all URLs when the value is nil. Default value is nil.

onlyURLs - Set the URLs which are only been captured, CocoaDebug capture all URLs when the value is nil. Default value is nil.

ignoredPrefixLogs - Set the prefix Logs which should not been captured, CocoaDebug capture all Logs when the value is nil. Default value is nil.

onlyPrefixLogs - Set the prefix Logs which are only been captured, CocoaDebug capture all Logs when the value is nil. Default value is nil.

additionalViewController - Add an additional UIViewController as child controller of CocoaDebug's main UITabBarController. Default value is nil.

emailToRecipients - Set the initial recipients to include in the email’s “To” field when sharing via email. Default value is nil.

emailCcRecipients - Set the initial recipients to include in the email’s “Cc” field when sharing via email. Default value is nil.

mainColor - Set CocoaDebug's main color with hexadecimal format. Default value is 42d459.

protobufTransferMap - Protobuf data transfer to JSON map. Default value is nil.

Thanks

Special thanks to remirobert.

Reference

https://developer.apple.com/library/archive/samplecode/CustomHTTPProtocol/Introduction/Intro.html

Download Details:

Author: CocoaDebug
Source Code: https://github.com/CocoaDebug/CocoaDebug 

#swift #debug #ios #networking #log #objective-c 


Laravel-activitylog: Log Activity inside Your Laravel App

Laravel Activity Log

Log activity inside your Laravel app

The spatie/laravel-activitylog package provides easy to use functions to log the activities of the users of your app. It can also automatically log model events. The Package stores all activity in the activity_log table.

Here's a demo of how you can use it:

activity()->log('Look, I logged something');

You can retrieve all activity using the Spatie\Activitylog\Models\Activity model.

Activity::all();

Here's a more advanced example:

activity()
   ->performedOn($anEloquentModel)
   ->causedBy($user)
   ->withProperties(['customProperty' => 'customValue'])
   ->log('Look, I logged something');

$lastLoggedActivity = Activity::all()->last();

$lastLoggedActivity->subject; //returns an instance of an eloquent model
$lastLoggedActivity->causer; //returns an instance of your user model
$lastLoggedActivity->getExtraProperty('customProperty'); //returns 'customValue'
$lastLoggedActivity->description; //returns 'Look, I logged something'

Here's an example on event logging.

$newsItem->name = 'updated name';
$newsItem->save();

//updating the newsItem will cause the logging of an activity
$activity = Activity::all()->last();

$activity->description; //returns 'updated'
$activity->subject; //returns the instance of NewsItem that was saved

Calling $activity->changes() will return this array:

[
   'attributes' => [
        'name' => 'updated name',
        'text' => 'Lorum',
    ],
    'old' => [
        'name' => 'original name',
        'text' => 'Lorum',
    ],
];

Documentation

You'll find the documentation on https://spatie.be/docs/laravel-activitylog/introduction.

Find yourself stuck using the package? Found a bug? Do you have general questions or suggestions for improving the activity log? Feel free to create an issue on GitHub, we'll try to address it as soon as possible.

Installation

You can install the package via composer:

composer require spatie/laravel-activitylog

The package will automatically register itself.

You can publish the migration with

php artisan vendor:publish --provider="Spatie\Activitylog\ActivitylogServiceProvider" --tag="activitylog-migrations"

Note: The default migration assumes you are using integers for your model IDs. If you are using UUIDs, or some other format, adjust the format of the subject_id and causer_id fields in the published migration before continuing.

After publishing the migration you can create the activity_log table by running the migrations:

php artisan migrate

You can optionally publish the config file with:

php artisan vendor:publish --provider="Spatie\Activitylog\ActivitylogServiceProvider" --tag="activitylog-config"

Changelog

Please see CHANGELOG for more information about recent changes.

Upgrading

Please see UPGRADING for details.

Testing

composer test

Contributing

Please see CONTRIBUTING for details.

Security

If you've found a bug regarding security please mail security@spatie.be instead of using the issue tracker.

Credits

And a special thanks to Caneco for the logo and Ahmed Nagi for all the work he put in v4.

Download Details:

Author: Spatie
Source Code: https://github.com/spatie/laravel-activitylog 
License: MIT license

#php #laravel #monitoring #log 

Hunter Krajcik

On_exit: Simple Log with a Custom Log Output Function

on_exit

A simple log package; you can use a custom log output function.

use

import 'dart:io';
import 'package:on_exit/init.dart';

// import if you need custom output function
import 'package:on_exit/config.dart' show logConfig;

void main() async {
  // can use custom output function
  logConfig[1] = (stack, msg) {
    stderr.write(stack + " :\n💀" + msg + '\n');
  };

  log('version', 1.0);
  logw('warning');
  for (var i = 0; i < 3; ++i) {
    loge('something happened', Exception(i));
  }
  await Future.delayed(Duration(seconds: 1));
  log(123);
}

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add on_exit

With Flutter:

 $ flutter pub add on_exit

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  on_exit: ^1.0.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:on_exit/init.dart';

example/example.dart

import 'package:on_exit/init.dart';

void main() async {
  onExit(() {
    print('on exit callback');
  });
  print('hi');
  await onExit.exit(0);
}

Download Details:

Author: rmw-dart
Source Code: https://github.com/rmw-dart/on_exit 
License: MPL-2.0 license

#flutter #dart #log 

On_exit: Simple Log , Can Custom Log Output Function
Hunter  Krajcik

Hunter Krajcik

1665120480

The Calls Sensor Logs Call Events Performed By Or Received By The User

Aware Calls

The Calls sensor logs call events performed by or received by the user. It also provides higher level context on the users’ calling availability and actions.

Install the plugin into project

Edit pubspec.yaml

dependencies:
    awareframework_calls: ^0.1.0

Import the package on your source code

import 'package:awareframework_calls/awareframework_calls.dart';
import 'package:awareframework_core/awareframework_core.dart';

Public functions

calls Sensor

  • start()
  • stop()
  • sync(bool force)
  • enable()
  • disable()
  • isEnable()
  • setLabel(String label)

Configuration Keys

TODO

  • period: Float: Period to save data in minutes. (default = 1)
  • threshold: Double: If set, do not record consecutive points if the change in value is less than the set value.
  • enabled: Boolean: Sensor is enabled or not. (default = false)
  • debug: Boolean: Enable/disable logging to Logcat. (default = false)
  • label: String: Label for the data. (default = "")
  • deviceId: String: Id of the device that will be associated with the events and the sensor. (default = "")
  • dbEncryptionKey: Encryption key for the database. (default = null)
  • dbType: Engine: Which db engine to use for saving data. (default = 0) (0 = None, 1 = Room or Realm)
  • dbPath: String: Path of the database. (default = "aware_accelerometer")
  • dbHost: String: Host for syncing the database. (default = null)

Data Representations

The data representation differs between Android and iOS. The following links provide the information.

Example usage

// init config
var config = CallsSensorConfig()
  ..debug = true
  ..label = "label";

// init sensor
var sensor = new CallsSensor.init(config);

void method(){
    /// start 
    sensor.start();
    
    /// set observer
    sensor.onDataChanged.listen((CallsData result){
      setState((){
        // Your code here
      });
    });
    
    /// stop
    sensor.stop();
    
    /// sync
    sensor.sync(true);  
    
    // make a sensor card by the following code
    var card = new CallsCard(sensor:sensor);
    // NEXT: Add the card instance into a target Widget.
}

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add awareframework_calls

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  awareframework_calls: ^0.1.0

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:awareframework_calls/awareframework_calls.dart';

example/lib/main.dart

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:awareframework_calls/awareframework_calls.dart';
import 'package:awareframework_core/awareframework_core.dart';

void main() => runApp(new MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => new _MyAppState();
}

class _MyAppState extends State<MyApp> {
  late CallsSensor sensor;
  late CallsSensorConfig config;

  @override
  void initState() {
    super.initState();

    config = CallsSensorConfig()..debug = true;

    sensor = new CallsSensor.init(config);
  }

  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: const Text('Plugin Example App'),
        ),
        body: Column(
          children: [
            TextButton(
                onPressed: () {
                  sensor.onCall.listen((event) {
                    setState(() {
                      Text(event.trace);
                    });
                  });
                  sensor.start();
                },
                child: Text("Start")),
            TextButton(
                onPressed: () {
                  sensor.stop();
                },
                child: Text("Stop")),
            TextButton(
                onPressed: () {
                  sensor.sync();
                },
                child: Text("Sync")),
          ],
        ),
      ),
    );
  }
}

License

Copyright (c) 2021 AWARE Mobile Context Instrumentation Middleware/Framework (http://www.awareframework.com)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Original article source at: https://pub.dev/packages/awareframework_calls

#flutter #dart #log 

The Calls Sensor Logs Call Events Performed By Or Received By The User

10 Best Golang Libraries for Generating & Working with Log Files

In today's post, we will learn about the 10 best Golang libraries for generating and working with log files.

What is a log file?

A log file is a computer-generated data file that contains information about usage patterns, activities, and operations within an operating system, application, server, or other device, and is a primary data source for network observability. Log files show whether resources are performing properly and optimally, and expose possible problems.

Table of contents:

  • Distillog - Distilled levelled logging (think of it as stdlib + log levels).
  • Glg - glg is simple and fast leveled logging library for Go.
  • Glo - PHP Monolog inspired logging facility with identical severity levels.
  • Glog - Leveled execution logs for Go.
  • Go-cronowriter - Simple writer that rotates log files automatically based on the current date and time, like cronolog.
  • Go-log - A logging library with stack traces, object dumping and optional timestamps.
  • Go-log - Simple and configurable Logging in Go, with level, formatters and writers.
  • Go-log - Log lib supports level and multi handlers.
  • Go-log - Log4j implementation in Go.
  • Go-logger - Simple logger of Go Programs, with level handlers.

1 - Distillog: Distilled levelled logging (think of it as stdlib + log levels).

What is distillog?

distillog aims to offer a minimalistic logging interface that also supports log levels. It takes the stdlib API and only slightly enhances it. Hence, you could think of it as levelled logging, distilled.

Yet another logging library for go(lang)?

Logging libraries are like opinions, everyone seems to have one -- Anon(?)

Most other logging libraries do either too little (stdlib) or too much (glog).

As with most other libraries, this one is opinionated. In terms of functionality it exposes, it attempts to sit somewhere between the stdlib and the majority of other logging libraries available (but leans mostly towards the spartan side of stdlib).

Expose an interface? Why?

By exposing an interface, you can write programs that use levelled log messages but switch between logging facilities by simply instantiating the appropriate logger as determined by the caller. (Your program can offer a command-line switch such as --log-to=[syslog,stderr,<file>] and then simply instantiate the appropriate logger.)
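For illustration, here is a minimal sketch of how such a --log-to switch might be wired up, using only the Logger interface and constructors shown in the usage examples below; the flag name, the "myapp" tag, and the file fallback via os.Create are illustrative assumptions, not part of the library:

package main

import (
	"flag"
	"os"

	"github.com/amoghe/distillog"
)

func main() {
	// Flag name and default are illustrative.
	logTo := flag.String("log-to", "stderr", "syslog, stderr, stdout, or a file path")
	flag.Parse()

	var logger distillog.Logger
	switch *logTo {
	case "stdout":
		logger = distillog.NewStdoutLogger("myapp")
	case "syslog":
		logger = distillog.NewSyslogLogger("myapp")
	case "stderr":
		logger = distillog.NewStderrLogger("myapp")
	default:
		// Treat anything else as a file path (error handling elided).
		fileHandle, _ := os.Create(*logTo)
		logger = distillog.NewStreamLogger("myapp", fileHandle)
	}
	defer logger.Close()

	logger.Infoln("logging to", *logTo)
}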

Usage/examples:

As seen in the godoc, the interface is limited to:

type Logger interface {
	Debugf(format string, v ...interface{})
	Debugln(v ...interface{})

	Infof(format string, v ...interface{})
	Infoln(v ...interface{})

	Warningf(format string, v ...interface{})
	Warningln(v ...interface{})

	Errorf(format string, v ...interface{})
	Errorln(v ...interface{})

	Close() error
}

Log to stdout, stderr, or syslog using a logger instantiated like so:

outLogger := distillog.NewStdoutLogger("test")

errLogger := distillog.NewStderrLogger("test")

sysLogger := distillog.NewSyslogLogger("test")

Alternatively, you can use the package-level logger for your logging needs:

import log "github.com/amoghe/distillog"

// ... later ...

log.Infoln("Starting program")
log.Debugln("initializing the frobnicator")
log.Warningln("frobnicator failure detected, proceeding anyways...")
log.Infoln("Exiting")

If you have a file you wish to log to, you should open the file and instantiate a logger using the file handle, like so:

if fileHandle, err := ioutil.TempFile("/tmp", "distillog-test"); err == nil {
        fileLogger := distillog.NewStreamLogger("test", fileHandle)
        fileLogger.Infoln("logging to a temp file")
}

If you need a logger that manages the rotation of its files, use lumberjack, like so:

lumberjackHandle := &lumberjack.Logger{
        Filename:   "/var/log/myapp/foo.log",
        MaxSize:    500,                       // megabytes
        MaxBackups: 3,
        MaxAge:     28,                        // days
}

logger := distillog.NewStreamLogger("tag", lumberjackHandle)

// Alternatively, configure the pkg level logger to emit here

distillog.SetOutput(lumberjackHandle)

Once instantiated, you can log messages, like so:

var := "World!"
myLogger.Infof("Hello, %s", var)
myLogger.Warningln("Goodbye, cruel world!")

View on Github

2 - Glg: glg is simple and fast leveled logging library for Go.

glg is a simple golang logging library

Requirement

Go 1.16

Installation

go get github.com/kpango/glg

Example

package main

import (
	"net/http"
	"time"

	"github.com/kpango/glg"
)

// NetWorkLogger sample network logger
type NetWorkLogger struct{}

func (n NetWorkLogger) Write(b []byte) (int, error) {
	// http.Post("localhost:8080/log", "", bytes.NewReader(b))
	http.Get("http://127.0.0.1:8080/log")
	glg.Success("Requested")
	glg.Infof("RawString is %s", glg.RawString(b))
	return 1, nil
}

func main() {

	// var errWriter io.Writer
	// var customWriter io.Writer
	infolog := glg.FileWriter("/tmp/info.log", 0666)

	customTag := "FINE"
	customErrTag := "CRIT"

	errlog := glg.FileWriter("/tmp/error.log", 0666)
	defer infolog.Close()
	defer errlog.Close()

	glg.Get().
		SetMode(glg.BOTH). // default is STD
		// DisableColor().
		// SetMode(glg.NONE).
		// SetMode(glg.WRITER).
		// SetMode(glg.BOTH).
		// InitWriter().
		// AddWriter(customWriter).
		// SetWriter(customWriter).
		// AddLevelWriter(glg.LOG, customWriter).
		// AddLevelWriter(glg.INFO, customWriter).
		// AddLevelWriter(glg.WARN, customWriter).
		// AddLevelWriter(glg.ERR, customWriter).
		// SetLevelWriter(glg.LOG, customWriter).
		// SetLevelWriter(glg.INFO, customWriter).
		// SetLevelWriter(glg.WARN, customWriter).
		// SetLevelWriter(glg.ERR, customWriter).
		// EnableJSON().
		SetLineTraceMode(glg.TraceLineNone).
		AddLevelWriter(glg.INFO, infolog). // add info log file destination
		AddLevelWriter(glg.ERR, errlog).   // add error log file destination
		AddLevelWriter(glg.WARN, errlog)   // add warn log file destination

	glg.Info("info")
	glg.Infof("%s : %s", "info", "formatted")
	glg.Log("log")
	glg.Logf("%s : %s", "info", "formatted")
	glg.Debug("debug")
	glg.Debugf("%s : %s", "info", "formatted")
	glg.Trace("Trace")
	glg.Tracef("%s : %s", "tracef", "formatted")
	glg.Warn("warn")
	glg.Warnf("%s : %s", "info", "formatted")
	glg.Error("error")
	glg.Errorf("%s : %s", "info", "formatted")
	glg.Success("ok")
	glg.Successf("%s : %s", "info", "formatted")
	glg.Fail("fail")
	glg.Failf("%s : %s", "info", "formatted")
	glg.Print("Print")
	glg.Println("Println")
	glg.Printf("%s : %s", "printf", "formatted")

	// set global log level to ERR level
	glg.Info("before setting level to ERR this message will show")
	glg.Get().SetLevel(glg.ERR)
	glg.Info("after setting level to ERR this message will not show")
	glg.Error("this log is ERR level this will show")
	glg.Get().SetLevel(glg.DEBG)
	glg.Info("log level is now DEBG, this INFO level log will show")

	glg.Get().
		AddStdLevel(customTag, glg.STD, false).                    // user custom log level
		AddErrLevel(customErrTag, glg.STD, true).                  // user custom error log level
		SetLevelColor(glg.TagStringToLevel(customTag), glg.Cyan).  // set color output to user custom level
		SetLevelColor(glg.TagStringToLevel(customErrTag), glg.Red) // set color output to user custom level
	glg.CustomLog(customTag, "custom logging")
	glg.CustomLog(customErrTag, "custom error logging")

	// glg.Info("kpango's glg supports disable timestamp for logging")
	glg.Get().DisableTimestamp()
	glg.Info("timestamp disabled")
	glg.Warn("timestamp disabled")
	glg.Log("timestamp disabled")
	glg.Get().EnableTimestamp()
	glg.Info("timestamp enabled")
	glg.Warn("timestamp enabled")
	glg.Log("timestamp enabled")

	glg.Info("kpango's glg support line trace logging")
	glg.Error("error log shows short line trace by default")
	glg.Info("error log shows none trace by default")
	glg.Get().SetLineTraceMode(glg.TraceLineShort)
	glg.Error("after configure TraceLineShort, error log shows short line trace")
	glg.Info("after configure TraceLineShort, info log shows short line trace")
	glg.Get().DisableTimestamp()
	glg.Error("after configure TraceLineShort and DisableTimestamp, error log shows short line trace without timestamp")
	glg.Info("after configure TraceLineShort and DisableTimestamp, info log shows short line trace without timestamp")
	glg.Get().EnableTimestamp()
	glg.Get().SetLineTraceMode(glg.TraceLineLong)
	glg.Error("after configure TraceLineLong, error log shows long line trace")
	glg.Info("after configure TraceLineLong, info log shows long line trace")
	glg.Get().DisableTimestamp()
	glg.Error("after configure TraceLineLong and DisableTimestamp, error log shows long line trace without timestamp")
	glg.Info("after configure TraceLineLong and DisableTimestamp, info log shows long line trace without timestamp")
	glg.Get().EnableTimestamp()
	glg.Get().SetLineTraceMode(glg.TraceLineNone)
	glg.Error("after configure TraceLineNone, error log without line trace")
	glg.Info("after configure TraceLineNone, info log without line trace")
	glg.Get().SetLevelLineTraceMode(glg.INFO, glg.TraceLineLong)
	glg.Info("after configure Level trace INFO=TraceLineLong, only info log shows long line trace")
	glg.Error("after configure Level trace INFO=TraceLineLong, error log without long line trace")
	glg.Get().SetLevelLineTraceMode(glg.ERR, glg.TraceLineShort)
	glg.Info("after configure Level trace ERR=TraceLineShort, info log still shows long line trace")
	glg.Error("after configure Level trace ERR=TraceLineShort, error log now shows short line trace")
	glg.Get().SetLineTraceMode(glg.TraceLineNone)

	glg.Info("kpango's glg support json logging")
	glg.Get().EnableJSON()
	err := glg.Warn("kpango's glg", "support", "json", "logging")
	if err != nil {
		glg.Get().DisableJSON()
		glg.Error(err)
		glg.Get().EnableJSON()
	}
	err = glg.Info("hello", struct {
		Name   string
		Age    int
		Gender string
	}{
		Name:   "kpango",
		Age:    28,
		Gender: "male",
	}, 2020)
	if err != nil {
		glg.Get().DisableJSON()
		glg.Error(err)
		glg.Get().EnableJSON()
	}
	glg.CustomLog(customTag, "custom logging")

	glg.CustomLog(customErrTag, "custom error logging")

	glg.Get().AddLevelWriter(glg.DEBG, NetWorkLogger{}) // send DEBG level logs to the network logger

	http.Handle("/glg", glg.HTTPLoggerFunc("glg sample", func(w http.ResponseWriter, r *http.Request) {
		glg.New().
			AddLevelWriter(glg.INFO, NetWorkLogger{}).
			AddLevelWriter(glg.INFO, w).
			Info("glg HTTP server logger sample")
	}))

	http.ListenAndServe("port", nil)

	// fatal logging
	glg.Fatalln("fatal")
}

View on Github

3 - Glo: PHP Monolog inspired logging facility with identical severity levels.

GLO

Logging library for Golang

Inspired by Monolog for PHP, severity levels are identical

Install

go get github.com/lajosbencz/glo

Severity levels

Debug     = 100
Info      = 200
Notice    = 250
Warning   = 300
Error     = 400
Critical  = 500
Alert     = 550
Emergency = 600

Simple example

package main

import "github.com/lajosbencz/glo"

func main() {
	// Info - Warning will go to os.Stdout
	// Error - Emergency will go to os.Stderr
	log := glo.NewStdFacility()

	// goes to os.Stdout
	log.Debug("Detailed debug line: %#v", map[string]string{"x": "foo", "y": "bar"})

	// goes to os.Stderr
	log.Error("Oooof!")
}

Output:

2019-01-22T15:16:08+01:00 [DEBUG] Detailed debug line [map[x:foo y:bar]]
2019-01-22T15:16:08+01:00 [ERROR] Oooof! []

Customized example

package main

import (
	"bytes"
	"fmt"
	"os"
	"strings"

	"github.com/lajosbencz/glo"
)

func main() {
	log := glo.NewFacility()

	// write everything to a buffer
	bfr := bytes.NewBufferString("")
	handlerBfr := glo.NewHandler(bfr)
	log.PushHandler(handlerBfr)

	// write only errors and above using a short format
	handlerStd := glo.NewHandler(os.Stdout)
	formatter := glo.NewFormatter("{L}: {M}")
	filter := glo.NewFilterLevel(glo.Error)
	handlerStd.SetFormatter(formatter)
	handlerStd.PushFilter(filter)
	log.PushHandler(handlerStd)

	fmt.Println("Log output:")
	fmt.Println(strings.Repeat("=", 70))
	log.Info("Only written to the buffer")
	log.Alert("Written to both buffer and stdout")

	fmt.Println("")
	fmt.Println("Buffer contents:")
	fmt.Println(strings.Repeat("=", 70))
	fmt.Println(bfr.String())
}

View on Github

4 - Glog: Leveled execution logs for Go.

Leveled execution logs for Go.

This is an efficient pure Go implementation of leveled logs in the manner of the open source C++ package glog.

By binding methods to booleans it is possible to use the log package without paying the expense of evaluating the arguments to the log. Through the -vmodule flag, the package also provides fine-grained control over logging at the file level.

The comment from glog.go introduces the ideas:

Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides the functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags.

Basic examples:

glog.Info("Prepare to repel boarders")
	
glog.Fatalf("Initialization failed: %s", err)

See the documentation for the V function for an explanation of these examples:

if glog.V(2) {
	glog.Info("Starting transaction...")
}
glog.V(2).Infoln("Processed", nItems, "elements")
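Since glog registers its flags with the standard flag package, a typical program parses flags before logging; a minimal sketch (the verbosity values mentioned in the comments are only examples):

package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers its flags (-v, -vmodule, -logtostderr, ...) with the
	// standard flag package, so parse them before logging.
	flag.Parse()
	defer glog.Flush()

	glog.Info("always logged at INFO")
	if glog.V(2) {
		// Only runs when started with e.g. -v=2 or -vmodule=main=2,
		// so the arguments below are never evaluated otherwise.
		glog.Info("verbose details that are cheap to skip")
	}
}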

The repository contains an open source version of the log package used inside Google. The master copy of the source lives inside Google, not here. The code in this repo is for export only and is not itself under development. Feature requests will be ignored.

Send bug reports to golang-nuts@googlegroups.com.

View on Github

5 - Go-cronowriter: Simple writer that rotates log files automatically based on the current date and time, like cronolog.

This is a simple file writer that writes messages to a path built from the specified format.

The file path is constructed based on current date and time, like cronolog.

Installation

$ go get -u github.com/utahta/go-cronowriter

Examples

import "github.com/utahta/go-cronowriter"
w := cronowriter.MustNew("/path/to/example.log.%Y%m%d")
w.Write([]byte("test"))

// output file
// /path/to/example.log.20170204

You can specify the directory as below

w := cronowriter.MustNew("/path/to/%Y/%m/%d/example.log")
w.Write([]byte("test"))

// output file
// /path/to/2017/02/04/example.log

with Location

w := cronowriter.MustNew("/path/to/example.log.%Z", writer.WithLocation(time.UTC))
w.Write([]byte("test"))

// output file
// /path/to/example.log.UTC

with Symlink

w := cronowriter.MustNew("/path/to/example.log.%Y%m%d", writer.WithSymlink("/path/to/example.log"))
w.Write([]byte("test"))

// output file
// /path/to/example.log.20170204
// /path/to/example.log -> /path/to/example.log.20170204

with Mutex

w := cronowriter.MustNew("/path/to/example.log.%Y%m%d", writer.WithMutex())

no use Mutex

w := cronowriter.MustNew("/path/to/example.log.%Y%m%d", writer.WithNopMutex())

with Debug (stdout and stderr)

w := cronowriter.MustNew("/path/to/example.log.%Y%m%d", writer.WithDebug())
w.Write([]byte("test"))

// output file, stdout and stderr
// /path/to/example.log.20170204

with Init

w := cronowriter.MustNew("/path/to/example.log.%Y%m%d", writer.WithInit())

// open the file when New() method is called
// /path/to/example.log.20170204
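Because the returned writer satisfies io.Writer, it can also back the standard library logger directly; here is a minimal sketch (the path pattern and the message are illustrative):

package main

import (
	"log"

	"github.com/utahta/go-cronowriter"
)

func main() {
	// The cronowriter implements io.Writer, so it can be handed to the
	// standard library logger; the filename pattern below is illustrative
	// and is expanded with the current date, as in the examples above.
	w := cronowriter.MustNew("/tmp/myapp.log.%Y%m%d")
	logger := log.New(w, "", log.LstdFlags)
	logger.Println("this line ends up in today's file")
}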

View on Github

6 - Go-log: A logging library with stack traces, object dumping and optional timestamps.

This is a Golang library with logging related functions which I use in my different projects.

Usage

package main

import (
    "github.com/pieterclaerhout/go-log"
)

func main() {

    log.DebugMode = true
    log.DebugSQLMode = true
    log.PrintTimestamp = true
    log.PrintColors = true
    log.TimeFormat = "2006-01-02 15:04:05.000"

    myVar := map[string]string{"hello": "world"}

    log.Debug("arg1", "arg2")
    log.Debugf("arg1 %d", 1)
    log.DebugDump(myVar, "prefix")
    log.DebugSeparator("title")
    log.DebugSQL("select * from mytable")

    log.Info("arg1", "arg2")
    log.Infof("arg1 %d", 1)
    log.InfoDump(myVar, "prefix")
    log.InfoSeparator("title")

    log.Warn("arg1", "arg2")
    log.Warnf("arg1 %d", 1)
    log.WarnDump(myVar, "prefix")
    log.WarnSeparator("title")

    log.Error("arg1", "arg2")
    log.Errorf("arg1 %d", 1)
    log.ErrorDump(myVar, "prefix")
    log.ErrorSeparator("title")

    log.Fatal("arg1", "arg2")
    log.Fatalf("arg1 %d", 1)

    err1 := funcWithError()
    log.StackTrace(err1)

    err2 := funcWithError()
    log.CheckError(err2)

}

Environment variables

The defaults are taken from the environment variables:

  • DEBUG: log.DebugMode
  • DEBUG_SQL: log.DebugSQLMode
  • PRINT_TIMESTAMP: log.PrintTimestamp

View on Github

7 - Go-log: Simple and configurable Logging in Go, with level, formatters and writers.

Logging package similar to log4j for Golang.

  • Support dynamic log level
  • Support customized formatter
    • TextFormatter
    • JSONFormatter
  • Support multiple rolling file writers
    • FixedSizeFileWriter
    • DailyFileWriter
    • AlwaysNewFileWriter

Installation

$ go get github.com/subchen/go-log

Usage

package main

import (
	"os"
	"errors"
	"github.com/subchen/go-log"
)

func main() {
	log.Debugf("app = %s", os.Args[0])
	log.Errorf("error = %v", errors.New("some error"))

	// dynamic set level
	log.Default.Level = log.WARN

	log.Debug("cannot output debug message")
	log.Errorln("can output error message", errors.New("some error"))
}

Output

By default, logs go to the console; you can set Logger.Out to a file writer instead.

import (
	"github.com/subchen/go-log"
	"github.com/subchen/go-log/writers"
)

log.Default.Out = &writers.FixedSizeFileWriter{
	Name:	 "/tmp/test.log",
	MaxSize:  10 * 1024 * 1024, // 10m
	MaxCount: 10,
}

Three builtin writers for use

// Create log file if file size large than fixed size (10m)
// files: /tmp/test.log.0 .. test.log.10
&writers.FixedSizeFileWriter{
	Name:	 "/tmp/test.log",
	MaxSize:  10 * 1024 * 1024, // 10m
	MaxCount: 10,
}

// Create log file every day.
// files: /tmp/test.log.20160102
&writers.DailyFileWriter{
	Name: "/tmp/test.log",
	MaxCount: 10,
}

// Create log file every process.
// files: /tmp/test.log.20160102_150405
&writers.AlwaysNewFileWriter{
	Name: "/tmp/test.log",
	MaxCount: 10,
}

// Output to multiple writes
io.MultiWriter(
	os.Stdout,
	&writers.DailyFileWriter{
		Name: "/tmp/test.log",
		MaxCount: 10,
	},
	//...
)

View on Github

8 - Go-log: Log lib supports level and multi handlers.

go-log

a golang log lib supports level and multi handlers

Use

import "github.com/siddontang/go-log/log"

//log with different level
log.Info("hello world")
log.Error("hello world")

//create a logger with specified handler
h, _ := log.NewStreamHandler(os.Stdout)
l := log.NewDefault(h)
l.Info("hello world")

View on Github

9 - Go-log: Log4j implementation in Go.

Go-Log. A logger, for Go!

It's sort of log- and code.google.com/p/log4go-compatible, so in most cases it can be used without any code changes.

Breaking change

go-log was inconsistent with the default Go 'log' package, and log.Fatal calls didn't trigger an os.Exit(1).

This has been fixed in the current release of go-log, which might break backwards compatibility.

You can disable the fix by setting ExitOnFatal to false, e.g.

log.Logger().ExitOnFatal = false

Getting started

Install go-log:

go get github.com/ian-kent/go-log/log

Use the logger in your application:

import(
  "github.com/ian-kent/go-log/log"
)

// Pass a log message and arguments directly
log.Debug("Example log message: %s", "example arg")

// Pass a function which returns a log message and arguments
log.Debug(func(){[]interface{}{"Example log message: %s", "example arg"}})
log.Debug(func(i ...interface{}){[]interface{}{"Example log message: %s", "example arg"}})

You can also get the logger instance:

logger := log.Logger()
logger.Debug("Yey!")

Or get a named logger instance:

logger := log.Logger("foo.bar")

Log levels

The default log level is DEBUG.

To get the current log level:

level := logger.Level()

Or to set the log level:

// From a LogLevel
logger.SetLevel(levels.TRACE)

// From a string
logger.SetLevel(log.Stol("TRACE"))

Log appenders

The default log appender is appenders.Console(), which logs the raw message to STDOUT.

To get the current log appender:

appender := logger.Appender()

If the appender is nil, the parent logger's appender will be used instead.

If the appender eventually resolves to nil, log data will be silently dropped.

You can set the log appender:

logger.SetAppender(appenders.Console())
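Combined with named loggers, this means you can configure an appender once on the root logger and let descendants inherit it. Here is a minimal sketch based on the README excerpts above; the logger name "myapp.db" and the appenders import path are assumptions for illustration:

package main

import (
	"github.com/ian-kent/go-log/appenders"
	"github.com/ian-kent/go-log/log"
)

func main() {
	// Configure an appender on the root logger only.
	log.Logger().SetAppender(appenders.Console())

	// A named logger without its own appender resolves to its parent's
	// appender, so this message still reaches the console.
	// The name "myapp.db" is illustrative.
	child := log.Logger("myapp.db")
	child.Debug("written via the root logger's appender")
}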

View on Github

10 - Go-logger: Simple logger of Go Programs, with level handlers.

A simple go logger for easy logging in your programs. Allows setting custom format for messages.

Install

go get github.com/apsdehal/go-logger

Use go get -u to update the package.

Example

Example program demonstrates how to use the logger. See below for formatting instructions.

package main

import (
	"github.com/apsdehal/go-logger"
	"os"
)

func main () {
	// Get the instance for logger class, "test" is the module name, 1 is used to
	// state if we want coloring
	// Third option is optional and is instance of type io.Writer, defaults to os.Stderr
	log, err := logger.New("test", 1, os.Stdout)
	if err != nil {
		panic(err) // Check for error
	}

	// Critically log critical
	log.Critical("This is Critical!")
	log.CriticalF("%+v", err)
	// You can also use fmt compliant naming scheme such as log.Criticalf, log.Panicf etc
	// with small 'f'
	
	// Debug
	// Since default logging level is Info this won't print anything
	log.Debug("This is Debug!")
	log.DebugF("Here are some numbers: %d %d %f", 10, -3, 3.14)
	// Give the Warning
	log.Warning("This is Warning!")
	log.WarningF("This is Warning!")
	// Show the error
	log.Error("This is Error!")
	log.ErrorF("This is Error!")
	// Notice
	log.Notice("This is Notice!")
	log.NoticeF("%s %s", "This", "is Notice!")
	// Show the info
	log.Info("This is Info!")
	log.InfoF("This is %s!", "Info")

	log.StackAsError("Message before printing stack");

	// Show warning with format
	log.SetFormat("[%{module}] [%{level}] %{message}")
	log.Warning("This is Warning!") // output: "[test] [WARNING] This is Warning!"
	// Also you can set your format as default format for all new loggers
	logger.SetDefaultFormat("%{message}")
	log2, _ := logger.New("pkg", 1, os.Stdout)
	log2.Error("This is Error!") // output: "This is Error!"

	// Use log levels to set your log priority
	log2.SetLogLevel(logger.DebugLevel)
	// This will be printed
	log2.Debug("This is debug!")
	log2.SetLogLevel(logger.WarningLevel)
	// This won't be printed
	log2.Info("This is an error!")
}

View on Github

Thank you for following this article.

Related videos:

Quick, Go check the Logs! - Go / Golang Logging Tutorial

#go #golang #log #files 

10 Best Golang Libraries for Generating & Working with Log Files
Nat  Grady

Nat Grady

1661334840

Reactlog: Shiny Reactivity Visualizer

reactlog

Shiny is an R package from RStudio that makes it incredibly easy to build interactive web applications with R. Behind the scenes, Shiny builds a reactive graph that can quickly become intertwined and difficult to debug. reactlog provides a visual insight into that black box of Shiny reactivity.

After logging the reactive interactions of a Shiny application, reactlog constructs a directed dependency graph of the app's reactive state at any point in the recording. The reactlog dependency graph lets users visually check whether reactive elements are:

  • Not utilized (never retrieved)
  • Over utilized (called independently many times)
  • Interacting with unexpected elements
  • Invalidating all expected dependencies
  • Freezing (and thawing), preventing triggering of future reactivity

Major Features

There are many subtle features hidden throughout reactlog. Here is a short list quickly describing what is possible within reactlog:

  • Display the reactivity dependency graph of your Shiny applications
  • Navigate throughout your reactive history to replay element interactions
  • Highlight reactive family trees
  • Filter on reactive family trees
  • Search for reactive elements

For a more in-depth explanation of reactlog, please visit the reactlog vignette.

Installation

To install the stable version from CRAN, run the following from an R console:

install.packages("reactlog")

For the latest development version:

remotes::install_github("rstudio/reactlog")

Usage

library(shiny)
library(reactlog)

# tell shiny to log all reactivity
reactlog_enable()

# run a shiny app
app <- system.file("examples/01_hello", package = "shiny")
runApp(app)

# once app has closed, display reactlog from shiny
shiny::reactlogShow()

Or while your Shiny app is running, press the key combination Ctrl+F3 (Mac: Cmd+F3) to launch the reactlog application.

To mark a specific execution time point within your Shiny app, press the key combination Ctrl+Shift+F3 (Mac: Cmd+Shift+F3). This will highlight a specific point in time in your reactlog.

Example

Here is a demo of the reactlog visualization applied to the cranwhales shiny app.

For more examples and explanation, see the reactlog vignette.

Community Support

The best place to get help with Shiny and reactlog is RStudio Community. If you're having difficulties with reactlog, feel free to ask questions here. For bug reports, please use the reactlog issue tracker.

Development

Please make sure you have GitHub Large File Storage, Node.js and yarn installed.

Installation script:

# install git lfs hooks
git lfs install

# install dependencies and build JavaScript
yarn install

# build on file change
yarn watch

By replacing the contents of './inst/reactlog/defaultLog.js' with the contents of any log file in './inst/log-files/', different default log files can be loaded. Once the local JavaScript ('./inst/reactlog/reactlogAsset/reactlog.js') has been built with yarn build or yarn watch, refresh './inst/reactlog/reactlog.html' to avoid constantly spawning Shiny applications for testing.

Download Details:

Author: rstudio
Source Code: https://github.com/rstudio/reactlog 
License: View license

#r #log #reactivity 

Reactlog: Shiny Reactivity Visualizer