Gordon Murray

January 11, 2023

Send GitHub Commits and PR Logs to Elasticsearch Using a Custom Script

Hello Readers!! In this blog, we will see how to send GitHub commit and PR logs to Elasticsearch using a custom script. We will use a bash script that ships the GitHub logs to Elasticsearch: it creates an index in Elasticsearch and pushes the logs into it.

After sending the logs to Elasticsearch, we can visualize the following GitHub events in Kibana:

  • Track commit details for the GitHub repository
  • Track PR-related events in the GitHub repository over time
  • Analyze other relevant information about the GitHub repository

workflow

1. GitHub User: Performs actions in the GitHub repository, such as commits and pull requests.

2. GitHub Repository: The source code management system on which users perform actions.

3. GitHub Actions: The continuous integration and continuous delivery (CI/CD) platform that runs each time a GitHub user commits a change or raises a pull request.

4. Bash Script: The custom script, written in bash, that ships the GitHub logs to Elasticsearch.

5. Elasticsearch: Stores all of the logs in the created index.

6. Kibana: Web interface for searching and visualizing logs.

Steps for sending logs to Elasticsearch using a bash script:

1. GitHub users make commits and raise pull requests in the GitHub repository. Here is the GitHub repository I created for this blog:

https://github.com/NaincyKumariKnoldus/Github_logs

github repo

2. Create two GitHub Actions workflows in this repository. These workflows will be triggered by the events performed by the GitHub user.

github actions

GitHub Actions workflow file triggered on commit events:

commit_workflow.yml:

# The name of the workflow
name: CI
#environment variables
env:
    GITHUB_REF_NAME: ${{ github.ref_name }}
    ES_URL: ${{ secrets.ES_URL }}
 
# Controls when the workflow will run
on: [push]
#A job is a set of steps in a workflow
jobs:
    send-push-events:
        name: Push Logs to ES
        #The job will run on the latest version of an Ubuntu Linux runner.
        runs-on: ubuntu-latest
        steps:
           #This is an action that checks out your repository onto the runner, allowing you to run scripts
           - uses: actions/checkout@v2
           #The run keyword tells the job to execute a command on the runner
           - run: ./git_commit.sh
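By default, `on: [push]` fires for pushes to every branch. If you only want to ship logs for particular branches, the trigger can be narrowed (a sketch, not part of the original workflow; the branch name is an assumption, adjust to your repository):

```yaml
# Controls when the workflow will run: pushes to the main branch only
on:
  push:
    branches: [main]
```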

GitHub Actions workflow file triggered on pull request events:

pr_workflow.yml:

name: CI
 
env:
  GITHUB_REF_NAME: ${{ github.ref_name }}
  ES_URL: ${{ secrets.ES_URL }}
 
on: [pull_request]
jobs:
  send-pull-events:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./git_pr.sh
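By default, `on: [pull_request]` fires only for the opened, synchronize, and reopened activity types, so the `closed_at`/`merged_at` fields collected by the script below would often still be null when the log is shipped. A possible refinement (a sketch, not part of the original workflow) is to list the activity types explicitly, including `closed`:

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened, closed]
```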

3. Create two script files inside your GitHub repository to hold the bash scripts. The following bash scripts ship the GitHub logs to Elasticsearch; they are executed by the GitHub Actions workflows above.
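Note that the workflows invoke the scripts directly (`./git_commit.sh`), so the files must carry the executable bit or the run step fails with "Permission denied". A minimal sketch (it creates a stand-in script for the demo; run the same `chmod`/`update-index` commands on your real git_commit.sh and git_pr.sh):

```shell
# Stand-in script for this demo only; your real git_commit.sh comes from step 3
printf '#!/bin/bash\necho "commit script ok"\n' > git_commit.sh

# give it the executable bit locally...
chmod +x git_commit.sh
# ...and record that bit in git so the runner's checkout keeps it
git update-index --chmod=+x git_commit.sh 2>/dev/null || true

./git_commit.sh
```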

git_commit.sh is triggered by the workflow file commit_workflow.yml:

#!/bin/bash

# get github commits
getCommitResponse=$(
   curl -s \
      -H "Accept: application/vnd.github+json" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/commits?sha=$GITHUB_REF_NAME&per_page=100&page=1"
)

# get commit SHA
commitSHA=$(echo "$getCommitResponse" |
   jq '.[].sha' |
   tr -d '"')

# get the loop count based on number of commits
loopCount=$(echo "$commitSHA" |
   wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsCommitSHA=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_commit/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "commit_sha": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.commit_sha' |
                  tr -d '"')

# store ES commit sha in a temp file
echo $getEsCommitSHA | tr " " "\n" > sha_es.txt

# looping through each commit detail
for ((count = 0; count < $loopCount; count++)); do
   
   # get commitSHA
   commitSHA=$(echo "$getCommitResponse" |
      jq --argjson count "$count" '.[$count].sha' |
      tr -d '"')

   # match result for previously existing commit on ES
   matchRes=$(grep -o "$commitSHA" sha_es.txt)
   echo $matchRes | tr " " "\n" >> match.txt

   # filtering and pushing unmatched commit sha details to ES
   if [ -z "$matchRes" ]; then
      echo "Unmatched SHA: $commitSHA"
      echo $commitSHA | tr " " "\n" >> unmatch.txt
      
      # get author name
      authorName=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.name' |
         tr -d '"')

      # get commit message
      commitMessage=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.message' |
         tr -d '"')

      # get commit html url
      commitHtmlUrl=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].html_url' |
         tr -d '"')

      # get commit time
      commitTime=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.date' |
         tr -d '"')

      # send data to ES (POST to the _doc endpoint; custom mapping types are deprecated/removed in recent Elasticsearch)
      curl -X POST "$ES_URL/github_commit/_doc" \
         -H "Content-Type: application/json" \
         -d "{ \"commit_sha\" : \"$commitSHA\",
            \"branch_name\" : \"$GITHUB_REF_NAME\",
            \"author_name\" : \"$authorName\",
            \"commit_message\" : \"$commitMessage\",
            \"commit_html_url\" : \"$commitHtmlUrl\",
            \"commit_time\" : \"$commitTime\" }"
   fi
done

# remove temporary files
rm -f sha_es.txt match.txt unmatch.txt
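The dedup step above can be exercised on its own. A small sketch with made-up SHAs in place of the GitHub API and Elasticsearch responses:

```shell
# SHAs that are already indexed in ES (made-up sample data)
printf 'abc123\ndef456\n' > sha_es.txt

unmatched=""
for commitSHA in abc123 feed42; do
   # grep prints the SHA if it is already in ES, nothing otherwise
   matchRes=$(grep -o "$commitSHA" sha_es.txt)
   if [ -z "$matchRes" ]; then
      echo "Unmatched SHA: $commitSHA"   # only this one would be POSTed to ES
      unmatched="$unmatched$commitSHA"
   fi
done

rm -f sha_es.txt
```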

git_pr.sh is triggered by the workflow file pr_workflow.yml:

#!/bin/bash

# get github PR details
getPrResponse=$(curl -s \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/pulls?state=all&per_page=100&page=1")

# get PR numbers
totalPR=$(echo "$getPrResponse" |
  jq '.[].number' |
  tr -d '"')

# get the loop count based on number of PRs
loopCount=$(echo "$totalPR" |
  wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsPR=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_pr/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "pr_number": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.pr_number' |
                  tr -d '"')

# store ES PR number in a temp file
echo $getEsPR | tr " " "\n" > sha_es.txt

# looping through each PR detail
for ((count = 0; count < $loopCount; count++)); do

  # get PR_number
  totalPR=$(echo "$getPrResponse" |
    jq --argjson count "$count" '.[$count].number' |
    tr -d '"')
  
  # match result for previously existing PR on ES
  matchRes=$(grep -o "$totalPR" sha_es.txt)
  echo $matchRes | tr " " "\n" >>match.txt

  # filtering and pushing unmatched PR number details to ES
  if [ -z "$matchRes" ]; then
    # get PR html url
    PrHtmlUrl=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].html_url' |
      tr -d '"')

    # get PR Body
    PrBody=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].body' |
      tr -d '"')

    # get PR Number
    PrNumber=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].number' |
      tr -d '"')

    # get PR Title
    PrTitle=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].title' |
      tr -d '"')

    # get PR state
    PrState=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].state' |
      tr -d '"')

    # get PR created at
    PrCreatedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].created_at' |
      tr -d '"')

    # get PR closed at
    PrCloseAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].closed_at' |
      tr -d '"')

    # get PR merged at
    PrMergedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].merged_at' |
      tr -d '"')

    # get base branch name
    PrBaseBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].base.ref' |
      tr -d '"')

    # get source branch name
    PrSourceBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].head.ref' |
      tr -d '"')

    # send data to ES (POST to the _doc endpoint; custom mapping types are deprecated/removed in recent Elasticsearch)
    curl -X POST "$ES_URL/github_pr/_doc" \
      -H "Content-Type: application/json" \
      -d "{ \"pr_number\" : \"$PrNumber\",
            \"pr_url\" : \"$PrHtmlUrl\",
            \"pr_title\" : \"$PrTitle\",
            \"pr_body\" : \"$PrBody\",
            \"pr_base_branch\" : \"$PrBaseBranch\",
            \"pr_source_branch\" : \"$PrSourceBranch\",
            \"pr_state\" : \"$PrState\",
            \"pr_creation_time\" : \"$PrCreatedAt\",
            \"pr_closed_time\" : \"$PrCloseAt\",
            \"pr_merge_at\" : \"$PrMergedAt\"}"
  fi
done

# remove temporary files
rm -f sha_es.txt match.txt unmatch.txt
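Both scripts rely on the same jq pattern to pull one field per array element. A standalone sketch with a made-up two-commit payload in place of the real GitHub API response:

```shell
# made-up sample of the GitHub commits API response
getCommitResponse='[
  {"sha": "abc123", "commit": {"author": {"name": "Alice"}, "message": "init"}},
  {"sha": "def456", "commit": {"author": {"name": "Bob"},   "message": "fix"}}
]'

count=1   # index of the commit we want
# jq -r emits raw strings, so the tr -d '"' step used in the scripts is not needed
authorName=$(echo "$getCommitResponse" |
  jq -r --argjson count "$count" '.[$count].commit.author.name')

echo "author: $authorName"   # -> author: Bob
```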

4. Now push a commit to the GitHub repository. The push event triggers the first GitHub Actions workflow, which sends the commit logs to Elasticsearch.

commit action

Move to Elasticsearch to see the GitHub commit logs there.

es_data

We are now getting the GitHub commit logs here.
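To spot-check the indexed documents yourself, a query along these lines works in the Kibana Dev Tools console (a sketch; it assumes Elasticsearch's dynamic mapping detected `commit_time` as a date):

```
GET github_commit/_search
{
  "size": 5,
  "sort": [{ "commit_time": { "order": "desc" } }]
}
```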

5. Now raise a pull request in your GitHub repository. The pull_request event runs the second workflow, and the bash script it triggers pushes the pull request logs to Elasticsearch.

pull

The GitHub Actions workflow executed on the pull request:

github action

Now, move to Elasticsearch and you will find the pull request logs there.

es_pull data

6. We can also visualize these logs in Kibana.

GitHub commit logs in Kibana:

kibana data

GitHub pull request logs in Kibana:

kibana

This is how we can analyze GitHub logs in Elasticsearch and Kibana using a custom script.

We are all done now!!

Conclusion:

Thank you for sticking with me to the end. In this blog, we learned how to send GitHub commit and PR logs to Elasticsearch using a custom script. It is really quick and simple. If you liked this blog, please share it and show your appreciation with a thumbs-up, and don't forget to leave suggestions on how I can improve future blogs to suit your needs.

Original article source at: https://blog.knoldus.com/

#script #github #elasticsearch #log 


Gordon  Murray

Gordon Murray

1673462400

Send Github Commits and PR Logs to ElasticSearch using A Custom Script

Hello Readers!! In this blog, we will see how we can send GitHub commits and PR logs to Elasticsearch using a custom script. Here we will use a bash script that will send GitHub logs to elasticsearch. It will create an index in elasticsearch and push there the logs.

After sending logs to elasticsearch we can visualize the following github events in kibana:-

  • Track commit details made to the GitHub repository
  • Track events related to PRs  in the GitHub repository in a timestamp
  • Analyze relevant information related to the GitHub repository

workflow

1. GitHub User: Users will be responsible for performing actions in a GitHub repository like commits and pull requests.

2. GitHub Repository: Source Code Management system on which users will perform actions.

3. GitHub Action:  Continuous integration and continuous delivery (CI/CD) platform which will run each time when a GitHub user will commit any change and make a pull request.

4. Bash Script: The custom script is written in bash for shipping GitHub logs to Elasticsearch.

5. ElasticSearch: Stores all of the logs in the index created.

6. Kibana: Web interface for searching and visualizing logs.

Steps for sending logs to Elasticsearch using bash script: 

1. GitHub users will make commits and raise pull requests to the GitHub repository. Here is my GitHub repository which I have created for this blog.

https://github.com/NaincyKumariKnoldus/Github_logs

github repo

2. Create two Github actions in this repository. This GitHub action will get trigger on the events perform by the GitHub user.

github actions

GitHub action workflow file for getting trigger on commit events:

commit_workflow.yml:

# The name of the workflow
name: CI
#environment variables
env:
    GITHUB_REF_NAME: $GITHUB_REF_NAME
    ES_URL: ${{ secrets.ES_URL }}
 
# Controls when the workflow will run
on: [push]
#A job is a set of steps in a workflow
jobs:
    send-push-events:
        name: Push Logs to ES
        #The job will run on the latest version of an Ubuntu Linux runner.
        runs-on: ubuntu-latest
        steps:
           #This is an action that checks out your repository onto the runner, allowing you to run scripts
           - uses: actions/checkout@v2
           #The run keyword tells the job to execute a command on the runner
           - run: ./git_commit.sh

GitHub action workflow file for getting trigger on pull events:

pr_workflow.yml:

name: CI
 
env:
  GITHUB_REF_NAME: $GITHUB_REF_NAME
  ES_URL: ${{ secrets.ES_URL }}
 
on: [pull_request]
jobs:
  send-pull-events:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./git_pr.sh

3. Create two files inside your GitHub repository for putting bash scripts. Following is the bash script for shipping GitHub logs to Elasticsearch. This script will get executed by the GitHub actions mentioned above.

git_commit.sh will get triggered by GitHub action workflow file commit_workflow.yml:

#!/bin/bash

# get github commits
getCommitResponse=$(
   curl -s \
      -H "Accept: application/vnd.github+json" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/commits?sha=$GITHUB_REF_NAME&per_page=100&page=1"
)

# get commit SHA
commitSHA=$(echo "$getCommitResponse" |
   jq '.[].sha' |
   tr -d '"')

# get the loop count based on number of commits
loopCount=$(echo "$commitSHA" |
   wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsCommitSHA=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_commit/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "commit_sha": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.commit_sha' |
                  tr -d '"')

# store ES commit sha in a temp file
echo $getEsCommitSHA | tr " " "\n" > sha_es.txt

# looping through each commit detail
for ((count = 0; count < $loopCount; count++)); do
   
   # get commitSHA
   commitSHA=$(echo "$getCommitResponse" |
      jq --argjson count "$count" '.[$count].sha' |
      tr -d '"')

   # match result for previous existing commit on ES
   matchRes=$(grep -o $commitSHA sha_es.txt)
   echo $matchRes | tr " " "\n" >> match.txt

   # filtering and pushing unmatched commit sha details to ES
   if [ -z $matchRes ]; then
      echo "Unmatched SHA: $commitSHA"
      echo $commitSHA | tr " " "\n" >> unmatch.txt
      
      # get author name
      authorName=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.name' |
         tr -d '"')

      # get commit message
      commitMessage=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.message' |
         tr -d '"')

      # get commit html url
      commitHtmlUrl=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].html_url' |
         tr -d '"')

      # get commit time
      commitTime=$(echo "$getCommitResponse" |
         jq --argjson count "$count" '.[$count].commit.author.date' |
         tr -d '"')

      # send data to es
      curl -X POST "$ES_URL/github_commit/commit" \
         -H "Content-Type: application/json" \
         -d "{ \"commit_sha\" : \"$commitSHA\",
            \"branch_name\" : \"$GITHUB_REF_NAME\",
            \"author_name\" : \"$authorName\",
            \"commit_message\" : \"$commitMessage\",
            \"commit_html_url\" : \"$commitHtmlUrl\",
            \"commit_time\" : \"$commitTime\" }"
   fi
done

# removing temporary file
rm -rf sha_es.txt
rm -rf match.txt
rm -rf unmatch.txt

git_pr.sh will get triggered by GitHub action workflow file pr_workflow.yml:

#!/bin/bash

# get github PR details
getPrResponse=$(curl -s \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/repos/NaincyKumariKnoldus/Github_logs/pulls?state=all&per_page=100&page=1")

# get number of PR
totalPR=$(echo "$getPrResponse" |
  jq '.[].number' |
  tr -d '"')

# get the loop count based on number of PRs
loopCount=$(echo "$totalPR" |
  wc -w)
echo "loopcount= $loopCount"

# get data from ES
getEsPR=$(curl -H "Content-Type: application/json" -X GET "$ES_URL/github_pr/_search?pretty" -d '{
                  "size": 10000,                                                                  
                  "query": {
                     "wildcard": {
                           "pr_number": {
                              "value": "*"
                           }}}}' |
                  jq '.hits.hits[]._source.pr_number' |
                  tr -d '"')

# store ES PR number in a temp file
echo $getEsPR | tr " " "\n" > sha_es.txt

# looping through each PR detail
for ((count = 0; count < $loopCount; count++)); do

  # get PR_number
  totalPR=$(echo "$getPrResponse" |
    jq --argjson count "$count" '.[$count].number' |
    tr -d '"')
  
  # looping through each PR detail
  matchRes=$(grep -o $totalPR sha_es.txt)
  echo $matchRes | tr " " "\n" >>match.txt

  # filtering and pushing unmatched PR number details to ES
  if [ -z $matchRes ]; then
    # get PR html url
    PrHtmlUrl=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].html_url' |
      tr -d '"')

    # get PR Body
    PrBody=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].body' |
      tr -d '"')

    # get PR Number
    PrNumber=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].number' |
      tr -d '"')

    # get PR Title
    PrTitle=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].title' |
      tr -d '"')

    # get PR state
    PrState=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].state' |
      tr -d '"')

    # get PR created at
    PrCreatedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].created_at' |
      tr -d '"')

    # get PR closed at
    PrCloseAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].closed_at' |
      tr -d '"')

    # get PR merged at
    PrMergedAt=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].merged_at' |
      tr -d '"')

    # get base branch name
    PrBaseBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].base.ref' |
      tr -d '"')

    # get source branch name
    PrSourceBranch=$(echo "$getPrResponse" |
      jq --argjson count "$count" '.[$count].head.ref' |
      tr -d '"')

    # send data to es
    curl -X POST "$ES_URL/github_pr/pull_request" \
      -H "Content-Type: application/json" \
      -d "{ \"pr_number\" : \"$PrNumber\",
            \"pr_url\" : \"$PrHtmlUrl\",
            \"pr_title\" : \"$PrTitle\",
            \"pr_body\" : \"$PrBody\",
            \"pr_base_branch\" : \"$PrBaseBranch\",
            \"pr_source_branch\" : \"$PrSourceBranch\",
            \"pr_state\" : \"$PrState\",
            \"pr_creation_time\" : \"$PrCreatedAt\",
            \"pr_closed_time\" : \"$PrCloseAt\",
            \"pr_merge_at\" : \"$PrMergedAt\"}"
  fi
done

# removing temporary files
rm -f sha_es.txt match.txt unmatch.txt

4. Now push a commit to the GitHub repository. After the commit, the GitHub Action on push will run and send the commit logs to Elasticsearch.

commit action

Open Elasticsearch to view the GitHub commit logs there.

es_data

We are now getting the GitHub commit logs here.
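To sanity-check the same thing from a terminal, a small helper like the one below can pull back a few recently indexed documents. This is a sketch: the index name `github_commit` is an assumption (use whatever index your commit workflow script actually writes to), and `ES_URL` is the same endpoint stored in the workflow secrets.

```shell
# Sketch: fetch a handful of documents from the commit index.
# NOTE: the index name "github_commit" is an assumption -- replace it
# with the index your commit workflow writes to.
check_commit_logs() {
  local es_url="$1"
  curl -s -H "Content-Type: application/json" \
    "$es_url/github_commit/_search?pretty" \
    -d '{ "size": 5 }'
}

# Usage (requires a reachable cluster):
# check_commit_logs "$ES_URL"
```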

5. Now raise a pull request in your GitHub repository. This runs the GitHub Action on pull request, which triggers the bash script that pushes the pull request logs to Elasticsearch.

pull

The GitHub Action executed on the pull request:

github action

Now move to Elasticsearch and you will find the pull request logs there.

es_pull data
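You can also slice these documents with a search query. Because the script above passes each `merged_at` value through `tr -d '"'` and re-embeds it as a string, an unmerged PR is stored with the literal string "null", so a merged-PR filter can exclude that value. A sketch against the `github_pr` index created above:

```shell
# Sketch: query the github_pr index for PRs that have been merged.
# The indexing script stores an unmerged PR's merge time as the
# literal string "null", so we exclude that value here.
MERGED_PR_QUERY='{
  "size": 100,
  "query": {
    "bool": {
      "must_not": [
        { "term": { "pr_merge_at": "null" } }
      ]
    }
  }
}'

# Run against the cluster (requires ES_URL to be set):
# curl -s -H "Content-Type: application/json" \
#   -X GET "$ES_URL/github_pr/_search?pretty" -d "$MERGED_PR_QUERY"
```

Depending on how Elasticsearch auto-mapped the field, you may need to adjust the query (for example, a `term` query behaves differently on `text` versus `keyword` fields).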

6. We can also visualize these logs in Kibana.

GitHub commit logs in Kibana:

kibana data

GitHub pull request logs in Kibana:

kibana

This is how we can analyze our GitHub logs in Elasticsearch and Kibana using the custom script.

We are all done now!!

Conclusion:

Thank you for sticking with me to the end. In this blog, we learned how to send GitHub commit and PR logs to Elasticsearch using a custom script. It is quick and simple. If you liked this blog, please share it, give it a thumbs-up, and leave suggestions on how I can improve future blogs to suit your needs.

Original article source at: https://blog.knoldus.com/

#script #github #elasticsearch #log 

Desmond  Gerber

Desmond Gerber

1624347085

How to Create a Custom GitHub Actions Using JavaScript — Beginner Level

In this blog, we are going to learn how to create our own custom GitHub Action using JavaScript.

Prerequisite

  • Basic JavaScript Knowledge
  • Basic Git & GitHub Knowledge

About GitHub Actions

Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you’d like, including CI/CD, and combine actions in a completely customized workflow.

Types of Actions

There are three types of actions: Docker container actions, JavaScript actions, and composite run steps actions.

JavaScript Custom Action

Let’s create a custom GitHub Action using JavaScript. Start by creating a public repo; once the repo is created, clone it to your local machine using VS Code or GitPod. You need Node.js 12.x or higher and npm installed on your machine to perform the steps described here. You can verify the node and npm versions with the following commands in a VS Code or GitPod terminal.

node --version 
npm --version
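Every JavaScript action also needs an `action.yml` metadata file at the root of the repo, describing its inputs and entry point. A minimal sketch (the action name, input, and file names below are illustrative, not from this article):

```yaml
# Sketch of a minimal action.yml for a JavaScript action.
# All names below are illustrative placeholders.
name: 'hello-world-js-action'
description: 'Greet someone from a JavaScript action'
inputs:
  who-to-greet:
    description: 'Who to greet'
    required: false
    default: 'World'
runs:
  using: 'node12'   # matches the Node.js 12.x requirement above
  main: 'index.js'
```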

#github #github-tutorial #github-actions #github-trend

Akshara Singh

Akshara Singh

1622015491

Bitcoin Exchange script | Cryptocurrency Exchange Script | Free Live Demo @ Coinsclone

Hey peeps, hope you are all safe and doing well.

Many entrepreneurs and startups are interested in starting a crypto exchange platform using a cryptocurrency exchange script. Do you know why? Let me explain. Before that, you need to know what a cryptocurrency exchange script is.

What is a Cryptocurrency Exchange Script?

A cryptocurrency exchange script is a pre-built version of an exchange platform, also called a ready-made script or software. Using a crypto exchange script, you can launch your crypto trading platform instantly. It is one of the easiest and fastest ways to start a crypto exchange business and helps you launch your exchange platform within 7 days.

Benefits of Bitcoin Exchange Script:

  • Customizing options - They will help you to build your cryptocurrency exchange platform based on your business needs.
  • Monitor and Engage - You can easily monitor the work process
  • Beta module - You can test your exchange in the Beta module
  • Cost-effective - The development cost will be around $8k - $15k (it may vary based on requirements)
  • Time-Period - You can launch your exchange within 1 week

Best Trading & Security Features of Bitcoin Exchange Script:

  • Multi-language
  • IEO launchpad,
  • Crypto wallet,
  • Instant buying/selling cryptocurrencies
  • Staking and lending
  • Live trading charts with margin trading API and futures 125x trading
  • Stop limit order and stop-loss orders
  • Limit maker orders
  • Multi-cryptocurrencies Support
  • Referral options
  • Admin panel
  • Perpetual swaps
  • Advanced UI/UX
  • Security Features [HTTPs authentication, Biometric authentication, Jail login, Data encryption, Two-factor authentication, SQL injection prevention, Anti Denial of Service(DoS), Cross-Site Request Forgery(CSRF) protection, Server-Side Request Forgery(SSRF) protection, Escrow services, Anti Distributed Denial of Service]

The more important question is: “Where to get the best bitcoin exchange script?”

Where to get the best bitcoin exchange script?

No one can answer that question directly, because a lot of software/script providers are available in the crypto market, and finding the best among them is not an easy task. Don’t worry; I can help. I did some technical inspection to find the best bitcoin exchange script provider, and one software provider, Coinsclone, got my attention. They have successfully delivered 100+ secured bitcoin exchanges, wallets, and payment gateways to their global clients. No doubt their exchange software is 100% bug-free and tightly secured. They consider customer satisfaction their priority and are always ready to customize your exchange based on your desired business needs.

Of course, it kindles your business interest; but before leaping, you can check their free live demo at Bitcoin Exchange Script.

If you are interested in doing business with them, connect with their business experts directly via:

Whatsapp/Telegram: +919500575285

Mail: hello@coinsclone.com

Skype: live:hello_20214

#bitcoin exchange script #cryptocurrency exchange script #crypto exchange script #bitcoin exchange script #bitcoin exchange clone script #crypto exchange clone script

Edison  Stark

Edison Stark

1603861600

How to Compare Multiple GitHub Projects with Our GitHub Stats tool

If you have project code hosted on GitHub, chances are you might be interested in checking some numbers and stats such as stars, commits and pull requests.

You might also want to compare some similar projects in terms of the above mentioned stats, for whatever reasons that interest you.

We have the right tool for you: the simple and easy-to-use little tool called GitHub Stats.

Let’s dive right in to what we can get out of it.

Getting started

This interactive tool is really easy to use. Follow the three steps below and you’ll get what you want in real-time:

1. Head to the GitHub repo of the tool

2. Enter as many projects as you need to check on

3. Hit the Update button beside each metric

In this article, we are going to compare three of the most popular machine learning projects for you.
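For context, comparisons like this ultimately come down to the GitHub REST API, which exposes stars and other counters per repository. A hedged shell sketch of the idea (the function name and repo slugs are illustrative; requires curl and python3, and unauthenticated API calls are rate-limited):

```shell
# Sketch: print star counts for several repositories using the public
# GitHub REST API (stargazers_count field of /repos/{owner}/{repo}).
compare_stars() {
  local repo stars
  for repo in "$@"; do
    stars=$(curl -s "https://api.github.com/repos/$repo" |
      python3 -c 'import json,sys; print(json.load(sys.stdin).get("stargazers_count", "n/a"))')
    echo "$repo: $stars stars"
  done
}

# Example usage (illustrative repo slugs):
# compare_stars "tensorflow/tensorflow" "pytorch/pytorch" "keras-team/keras"
```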

#github #tools #github-statistics-react #github-stats-tool #compare-github-projects #github-projects #software-development #programming