Oral Brekke


7 Popular Node.js Google Search API Libraries

In today's post we will learn about 7 popular Node.js Google Search API Libraries.

This NodeJS module is designed to scrape and parse Google, Bing and Baidu results using SerpApi.

1 - Google-it

Command line Google search and save to JSON


$ npm install -g google-it

Example Usage

$ google-it --query="Latvian unicorn"


View on Github

2 - Google-search-results-nodejs

Google Search Results Node.JS


NPM 7+

$ npm install google-search-results-nodejs

Quick start

const SerpApi = require('google-search-results-nodejs')
const search = new SerpApi.GoogleSearch("Your Private Key")

search.json({
  q: "Coffee",
  location: "Austin, TX"
}, (result) => {
  console.log(result)
})

View on Github

3 - Google-search

Execute a Google search through its API


npm install google-search


var GoogleSearch = require('google-search');
var googleSearch = new GoogleSearch({
  key: 'YOUR_API_KEY',
  cx: 'YOUR_CX'
});

googleSearch.build({
  q: "",
  start: 5,
  fileType: "pdf",
  gl: "tr", // geolocation
  lr: "lang_tr",
  num: 10, // Number of search results to return, between 1 and 10 inclusive
  siteSearch: "http://kitaplar.ankara.edu.tr/" // Restricts results to URLs from a specified site
}, function(error, response) {
  console.log(response);
});

View on Github

4 - Google-search-results-nodejs

Scrape and parse Google search results in Node.JS


You can install google-search-results-serpwow with:

$ npm install google-search-results-serpwow

and update with:

$ npm update google-search-results-serpwow

Simple Example

Simplest example for a standard query "pizza", returning the Google SERP (Search Engine Results Page) data as JSON.

var SerpWow = require('google-search-results-serpwow')

// create the serpwow object, passing in our API key
let serpwow = new SerpWow('API_KEY')

// #1. example using promises & async/await
async function getResult() {
  let result = await serpwow.json({
    q: 'pizza'
  })
  // pretty-print the result
  console.log(JSON.stringify(result, null, 2));
}
getResult();

// #2. example using promise callbacks
serpwow.json({
  q: 'pizza'
})
.then(result => {
  // pretty-print the result
  console.log(JSON.stringify(result, null, 2));
})
.catch(error => {
  // print the error
  console.log(error);
});
View on Github

5 - Node-reverse-image-search

A free solution for reverse image search using Google


const reverseImageSearch = require('node-reverse-image-search')

const doSomething = (results) => {
  console.log(results)
}

reverseImageSearch('i.ebayimg.com/00/s/OTAwWDkwMA==/z/3G8AAOSwzoxd80XB/$_83.JPG', doSomething)

View on Github

6 - Node-google-search-trends

Node.js module to fetch localized Google trending searches


As always, install using:

npm install node-google-search-trends [--save]

The module comes with one exposed function. It takes three parameters - localization, count and callback. Example usage:

var trends = require('node-google-search-trends');
trends('Singapore', 10, function(err, data) {
    if (err) return console.error(err);
    console.log(JSON.stringify(data, null, 2));  // Pretty prints JSON 'data'
});

View on Github

7 - Google-trends

Scrape recent trending words on Google for Node.js


var trends = require('google-trends')

trends.load(['kr'], function (err, result) {
  console.log(err, JSON.stringify(result))
})

// output
{
  "kr": [
    {
      "title": "지진",
      "link": "http://www.google.co.kr/trends/hottrends?pn=p23#a=20151222-%EC%A7%80%EC%A7%84",
      "ctime": 1450728000,
      "news": {
        "picture": {
          "url": "//t0.gstatic.com/images?q=tbn:ANd9GcTEI1l0ltniQq9PVbDe_u3oHxAk2QHoRM9h54L-FB7USd14CqkjrRSZVQ28fIbNdtNlaEj8DCo",
          "source": "연합뉴스"
        },
        "items": [
          {
            "title": "전북 익산 규모 3.5 <b>지진</b>…서울·부산서도 감지(종합2보)",
            "snippet": "(익산=연합뉴스) 김진방 기자 = 22일 오전 4시30분께 전북 익산 북쪽 8㎞ 지점에서 규모 3.5의 <b>지진이</b> 발생했다고 전주기상지청이 밝혔다. 이번에 발생한 <b>지진은</b> 지난 8월 3일 제주 서귀포시 성산 남동쪽 22㎞ 해역에서 발생한 규모 3.7의 <b>지진에</b> 이어 올 들어&nbsp;...",
            "url": "http://www.yonhapnews.co.kr/bulletin/2015/12/22/0200000000AKR20151222009300055.HTML",
            "source": "연합뉴스"
          },
          {
            "title": "익산서 내륙 최대 규모 <b>지진</b>, 서울서도 싱크대 흔들렸다",
            "snippet": "전북 익산에서 올들어 두번째로 규모가 큰 <b>지진이</b> 발생했다. 내륙에서는 가장 큰 규모의 <b>지진</b>이었다. <b>지진</b>여파는 서울과 강원 등지에까지 전달됐다. 새벽 단잠을 깬 일부 시민들은 휴대폰 메시지 등 SNS를 통해 <b>지진을</b> 알렸고 지인들의 안부를 물었다. 기상청은&nbsp;...",
            "url": "http://news.khan.co.kr/kh_news/khan_art_view.html?artid=201512220836181&code=940100",
            "source": "경향신문"
          }
        ]
      }
    }
    // ...
  ]
}

View on Github

Thank you for following this article. 

#node #google #search 


FlatBuffers.jl: A Pure Julia Implementation Of Google Flatbuffers


A Julia implementation of google flatbuffers


The package is registered in METADATA.jl and so can be installed with Pkg.add.

julia> Pkg.add("FlatBuffers")


  • STABLE: most recently tagged version of the documentation.
  • LATEST: in-development version of the documentation.

Project Status

The package is tested against Julia 1.0, 1.1, 1.2, 1.3, and nightly on Linux, OS X, and Windows.

Contributing and Questions

Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems or would just like to ask a question.

Download Details:

Author: JuliaData
Source Code: https://github.com/JuliaData/FlatBuffers.jl 
License: View license

#julia #google 

Dexter Goodwin


React-ga: React Google Analytics Module


React Google Analytics Module  

This is a JavaScript module that can be used to include Google Analytics tracking code in a website or app that uses React for its front-end codebase. It does not currently use any React code internally, but has been written for use with a number of Mozilla Foundation websites that are using React, as a way to standardize our GA Instrumentation across projects.

It is designed to work with Universal Analytics and will not support the older ga.js implementation.

This module is mildly opinionated in how we instrument tracking within our front-end code. Our API is slightly more verbose than the core Google Analytics library, in the hope that the code is easier to read and understand for our engineers. See examples below.

If you use react-ga too, we'd love your feedback. Feel free to file issues, ideas and pull requests against this repo.


With npm:

npm install react-ga --save

With bower:

bower install react-ga --save

Note that React >= 0.14.0 is needed in order to use the <OutboundLink> component.


With npm

Initializing GA and Tracking Pageviews:

import ReactGA from 'react-ga';
ReactGA.initialize('UA-000000-01');
ReactGA.pageview(window.location.pathname + window.location.search);

With bower

When included as a script tag, a variable ReactGA is exposed in the global scope.

<!-- The core React library -->
<script src="https://unpkg.com/react@15.5.0/dist/react.min.js"></script>
<!-- The ReactDOM Library -->
<script src="https://unpkg.com/react-dom@15.5.0/dist/react-dom.min.js"></script>
<!-- ReactGA library -->
<script src="/path/to/bower_components/react-ga/dist/react-ga.min.js"></script>

<script>
  ReactGA.initialize('UA-000000-01', { debug: true });
</script>

Demo Code

For a working demo have a look at the demo files, or clone this repo, run npm install and then npm start, then open http://localhost:8080 and follow the instructions. The demo requires you to have your own TrackingID.

Upgrading from 1.x to 2.x

You can safely upgrade to 2.x as there are no breaking changes. The main new feature is that the underlying ga function is now exposed via the property ReactGA.ga. This can be helpful when you need a function that ReactGA doesn't support at the moment. Also, for that reason, it is recommended that you rename your imported value as ReactGA rather than ga so as to distinguish between the React GA wrapper and the original ga function.

Community Components

While some convenience components are included inside the package, some are specific to each application. A community curated list of these is available in the wiki: https://github.com/react-ga/react-ga/wiki/Community-Components. Feel free to add any you have found useful.


ReactGA.initialize(gaTrackingID, options)

GA must be initialized using this function before any of the other tracking functions will record any data. The values are checked and sent through to the ga('create', ...) call.

If you aren't getting any data back from Page Timings, you may have to add siteSpeedSampleRate: 100 to the gaOptions object. This will send 100% of hits to Google Analytics. By default only 1% are sent.
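As a concrete illustration of the note above, the relevant option lives under gaOptions. A minimal sketch; the tracking ID and the commented-out call are placeholders, and only siteSpeedSampleRate comes from the docs:

```javascript
// Options object for ReactGA.initialize that opts Page Timings into 100%
// sampling. By default GA samples only 1% of hits for site speed.
const initOptions = {
  gaOptions: {
    siteSpeedSampleRate: 100 // send 100% of hits to Page Timings
  }
};

// Hypothetical usage (tracking ID is a placeholder):
// ReactGA.initialize('UA-000000-01', initOptions);
console.log(initOptions.gaOptions.siteSpeedSampleRate); // 100
```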


ReactGA.initialize('UA-000000-01', {
  debug: true,
  titleCase: false,
  gaOptions: {
    userId: 123
  }
});

Or with multiple trackers

ReactGA.initialize(
  [
    {
      trackingId: 'UA-000000-01',
      gaOptions: {
        name: 'tracker1',
        userId: 123
      }
    },
    {
      trackingId: 'UA-000000-02',
      gaOptions: { name: 'tracker2' }
    }
  ],
  { debug: true, alwaysSendToDefaultTracker: false }
);
  • gaTrackingID: String. Required. GA Tracking ID like UA-000000-01.
  • options.debug: Boolean. Optional. If set to true, will output additional feedback to the console.
  • options.titleCase: Boolean. Optional. Defaults to true. If set to false, strings will not be converted to title case before sending to GA.
  • options.gaOptions: Object. Optional. GA configurable create only fields.
  • options.gaAddress: String. Optional. If you are self-hosting your analytics.js, you can specify the URL for it here.
  • options.alwaysSendToDefaultTracker: Boolean. Optional. Defaults to true. If set to false and using multiple trackers, the event will not be sent to the default tracker.
  • options.testMode: Boolean. Optional. Defaults to false. Enables test mode. See here for more information.
  • options.standardImplementation: Boolean. Optional. Defaults to false. Enables loading GA as Google expects it. See here for more information.
  • options.useExistingGa: Boolean. Optional. Skips the call to window.ga(), assuming you have run it manually.
  • options.redactEmail: Boolean. Optional. Defaults to true. Redacts anything that looks like an email address in "Event Category" and "Event Action" before sending.

If you are having additional trouble and setting debug: true shows everything as working, please try using the Chrome GA Debugger extension. This will help you figure out whether your implementation is off or your GA settings are not correct.


ReactGA.set(fieldsObject, [trackerNames])

This will set the values of custom dimensions in Google Analytics.


ReactGA.set({ dimension14: 'Sports' });

Or with multiple trackers

ReactGA.set({ userId: 123 }, ['tracker2']);
  • fieldsObject: Object. e.g. { userId: 123 }
  • trackerNames: Array. Optional. A list of extra trackers to run the command on.

ReactGA.pageview(path, [trackerNames], [title])

ReactGA.pageview('/about/contact-us');

Or with multiple trackers

ReactGA.pageview('/about/contact-us', ['tracker2']);

This will send the pageview to all the named trackers listed in the array parameter. The default tracker will or will not send according to the initialize() setting alwaysSendToDefaultTracker (defaults to true if not provided).

  • path: String. e.g. '/get-involved/other-ways-to-help'
  • trackerNames: Array. Optional. A list of extra trackers to run the command on.
  • title: String. Optional. e.g. 'Other Ways to Help'

See example above for use with react-router.


ReactGA.modalview(modalName)

A modal view is often an equivalent to a pageview in our UX, but without a change in URL that would record a standard GA pageview. For example, a 'contact us' modal may be accessible from any page in a site, even if we don't have a standalone 'contact us' page on its own URL. In this scenario, the modalview should be recorded using this function.


  • modalName: String. E.g. 'login', 'read-terms-and-conditions'


ReactGA.event(args)

Tracking in-page event interactions is key to understanding the use of any interactive web property. This is how we record user interactions that don't trigger a change in URL.


ReactGA.event({
  category: 'User',
  action: 'Created an Account'
});

ReactGA.event({
  category: 'Social',
  action: 'Rated an App',
  value: 3
});

ReactGA.event({
  category: 'Editing',
  action: 'Deleted Component',
  label: 'Game Widget'
});

ReactGA.event({
  category: 'Promotion',
  action: 'Displayed Promotional Widget',
  label: 'Homepage Thing',
  nonInteraction: true
});
  • args.category: String. Required. A top level category for these events. E.g. 'User', 'Navigation', 'App Editing', etc.
  • args.action: String. Required. A description of the behaviour. E.g. 'Clicked Delete', 'Added a component', 'Deleted account', etc.
  • args.label: String. Optional. More precise labelling of the related action. E.g. alongside the 'Added a component' action, we could add the name of a component as the label. E.g. 'Survey', 'Heading', 'Button', etc.
  • args.value: Int. Optional. A means of recording a numerical value against an event. E.g. a rating, a score, etc.
  • args.nonInteraction: Boolean. Optional. If an event is not triggered by a user interaction, but instead by our code (e.g. on page load), it should be flagged as a nonInteraction event to avoid skewing bounce rate data.
  • args.transport: String. Optional. This specifies the transport mechanism with which hits will be sent. Valid values include 'beacon', 'xhr', or 'image'.
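To make the required/optional rules above concrete, here is a small hypothetical pre-check. validateEventArgs is not part of react-ga; it just mirrors the documented field rules:

```javascript
// Mirrors the documented rules: category and action are required non-empty
// strings, and value, when present, must be an integer.
function validateEventArgs(args) {
  if (typeof args.category !== 'string' || args.category.length === 0) return false;
  if (typeof args.action !== 'string' || args.action.length === 0) return false;
  if ('value' in args && !Number.isInteger(args.value)) return false;
  return true;
}

console.log(validateEventArgs({ category: 'User', action: 'Created an Account' })); // true
console.log(validateEventArgs({ category: 'Social', action: 'Rated an App', value: 3.5 })); // false
```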


ReactGA.timing(args)

Allows measuring periods of time, such as AJAX requests and resource loading, by sending hits using the analytics.js library. For a more detailed description, please refer to https://developers.google.com/analytics/devguides/collection/analyticsjs/user-timings.



ReactGA.timing({
  category: 'JS Libraries',
  variable: 'load',
  value: 20, // in milliseconds
  label: 'CDN libs'
});

This is equivalent to the following Google Analytics command:

ga('send', 'timing', 'JS Libraries', 'load', 20, 'CDN libs');
  • args.category: String. Required. A string for categorizing all user timing variables into logical groups.
  • args.variable: String. Required. Name of the variable being recorded.
  • args.value: Int. Required. Number of milliseconds of elapsed time to report.
  • args.label: String. Optional. Can be used to add flexibility in visualizing user timings in the reports.


ReactGA.ga()

The original ga function can be accessed via this method. This gives developers the flexibility of directly using ga.js features that have not yet been implemented in ReactGA. No validations will be done by ReactGA, as it is bypassed when this approach is used.

If no arguments are passed to ReactGA.ga(), the ga object is returned instead.


Usage with arguments:

ReactGA.ga('send', 'pageview', '/mypage');

Usage without arguments:

var ga = ReactGA.ga();
ga('send', 'pageview', '/mypage');

ReactGA.outboundLink(args, hitCallback)

Tracking links out to external URLs (including id.webmaker.org for OAuth 2.0 login flow). A declarative approach is found in the next section, by using an <OutboundLink> component.


ReactGA.outboundLink(
  { label: 'Clicked Create an Account' },
  function () {
    console.log('redirect here');
  },
  ['tracker2']
);
  • args.label: String. Required. Description of where the outbound link points to. Either a URL, or a string.
  • hitCallback: function. The react-ga implementation accounts for the possibility that GA servers are down, or GA is blocked, by using a fallback 250ms timeout. See notes in the GA Dev Guide.
  • trackerNames: Array<String>. Optional. A list of extra trackers to run the command on.
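The 250ms fallback described for hitCallback can be sketched in plain JavaScript. This is an illustration of the pattern, not react-ga's actual source:

```javascript
// Run hitCallback exactly once: either when GA confirms the hit, or after
// a timeout if GA is down or blocked, so navigation is never left hanging.
function withFallbackTimeout(hitCallback, timeoutMs = 250) {
  let called = false;
  const runOnce = () => {
    if (!called) {
      called = true;
      hitCallback();
    }
  };
  const timer = setTimeout(runOnce, timeoutMs);
  // Hand this wrapper to GA as the hit callback; calling it cancels the fallback.
  return () => {
    clearTimeout(timer);
    runOnce();
  };
}
```

If GA never invokes the wrapper, the timer fires after timeoutMs, so the user's redirect still happens.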

<OutboundLink> Component

Outbound links can directly be used as a component in your React code and the event label will be sent directly to ReactGA.


var ReactGA = require('react-ga');

render() {
  return (
    <ReactGA.OutboundLink eventLabel="myLabel" to="http://www.example.com" target="_blank">
      My Link
    </ReactGA.OutboundLink>
  );
}
  • eventLabel: String. Required. Description of where the outbound link points to. Either a URL, or a string.
  • to: String. Required. URL the link leads to.
  • target: String. Optional. To open the link in a new tab, use a value of _blank.
  • trackerNames: Array<String>. Optional. A list of extra trackers to run the command on.

For bower, use the <ReactGA.OutboundLink> component.


GA exception tracking


ReactGA.exception({
  description: 'An error occurred',
  fatal: true
});
  • args.description: String. Optional. Description of what happened.
  • args.fatal: boolean. Optional. Set to true if it was a fatal exception.

ReactGA.plugin.require(name, [options])

Require GA plugins.


ReactGA.plugin.require('localHitSender', { path: '/log', debug: true });
  • name: String. Required. The name of the plugin to be required. Note: if the plugin is not an official analytics.js plugin, it must be provided elsewhere on the page.
  • options: Object. Optional. An initialization object that will be passed to the plugin constructor upon instantiation.

ReactGA.plugin.execute(pluginName, action, [actionType], [payload])

Execute the action for the pluginName with the payload.


ReactGA.plugin.execute('ecommerce', 'addTransaction', {
  id: 'jd38je31j',
  revenue: '3.50'
});

You can also call this function with four arguments, passing an actionType and payload along with the executed action:


ReactGA.plugin.execute('ec', 'setAction', 'purchase', {
  id: 'jd38je31j',
  revenue: '3.50'
});

Test Mode

To enable test mode, initialize ReactGA with the testMode: true option. Here's an example from tests/utils/testMode.test.js

// This should be part of your setup
ReactGA.initialize('foo', { testMode: true });
// This would be in the component/js you are testing
ReactGA.ga('send', 'pageview', '/mypage');
// This would be how you check that the calls are made correctly
expect(ReactGA.testModeAPI.calls).toEqual([
  ['create', 'foo', 'auto'],
  ['send', 'pageview', '/mypage']
]);

Standard Implementation

To enable the standard implementation of Google analytics.js:

Add this script to your html

<!-- Google Analytics -->
<script>
  (function (i, s, o, g, r, a, m) {
    i['GoogleAnalyticsObject'] = r;
    (i[r] =
      i[r] ||
      function () {
        (i[r].q = i[r].q || []).push(arguments);
      }),
      (i[r].l = 1 * new Date());
    (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
    a.async = 1;
    a.src = g;
    m.parentNode.insertBefore(a, m);
  })(window, document, 'script', 'https://www.google-analytics.com/analytics.js', 'ga');

  ga('create', 'UA-XXX-X', 'auto');
  ga('send', 'pageview');
</script>
<!-- End Google Analytics -->

Initialize ReactGA with the standardImplementation: true option.

// This should be part of your setup
ReactGA.initialize('UA-XXX-X', { standardImplementation: true });



Development

  • node.js
  • npm
  • npm install
  • npm install react@^15.6.1 prop-types@^15.5.10 - This is for the optional dependencies.

To Test

npm test

Submitting changes/fixes

Follow instructions inside CONTRIBUTING.md


Download Details:

Author: React-ga
Source Code: https://github.com/react-ga/react-ga 
License: View license

#javascript #react #google #analytics 

Dexter Goodwin


React-google-recaptcha: Component Wrapper for Google ReCAPTCHA


React component for Google reCAPTCHA v2.


npm install --save react-google-recaptcha


All you need to do is sign up for an API key pair. You will need the client key; then you can use <ReCAPTCHA />.

The default usage imports a wrapped component that loads the google recaptcha script asynchronously then instantiates a reCAPTCHA the user can then interact with.

Code Example:

import ReCAPTCHA from "react-google-recaptcha";

function onChange(value) {
  console.log("Captcha value:", value);
}

ReactDOM.render(
  <ReCAPTCHA sitekey="Your client site key" onChange={onChange} />,
  document.body
);

Component Props

Properties used to customise the rendering:

  • asyncScriptOnLoad: func. Optional. Callback when the google recaptcha script has been loaded.
  • badge: enum. Optional. bottomright, bottomleft or inline. Positions the reCAPTCHA badge. Only for invisible reCAPTCHA.
  • hl: string. Optional. Sets the hl parameter, which allows the captcha to be used from different languages; see reCAPTCHA hl.
  • isolated: bool. Optional. For plugin owners to not interfere with existing reCAPTCHA installations on a page. If true, this reCAPTCHA instance will be part of a separate ID space. (default: false)
  • onChange: func. The function to be called when the user successfully completes the captcha.
  • onErrored: func. Optional. Callback when the challenge errored, most likely due to network issues.
  • onExpired: func. Optional. Callback when the challenge expires and has to be redone by the user. By default it will call onChange with null to signify expiry.
  • sitekey: string. The API client key.
  • size: enum. Optional. compact, normal or invisible. This allows you to change the size or do an invisible captcha.
  • stoken: string. Optional. Sets the stoken parameter, which allows the captcha to be used from different domains; see reCAPTCHA secure-token.
  • tabindex: number. Optional. The tabindex on the element (default: 0).
  • type: enum. Optional. image or audio. The type of initial captcha (default: image).
  • theme: enum. Optional. light or dark. The theme of the widget (default: light).
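Since onChange receives the token on success and null once the challenge expires (see onExpired above), handlers usually need to cover both cases. A minimal sketch, assuming only that documented null-on-expiry behaviour (handleCaptchaChange is a hypothetical helper):

```javascript
// value is the reCAPTCHA token string on success, or null when the
// challenge has expired and must be redone by the user.
function handleCaptchaChange(value) {
  if (value === null) {
    return { verified: false, reason: 'expired' }; // e.g. disable submit again
  }
  return { verified: true, token: value };
}

console.log(handleCaptchaChange('some-token').verified); // true
console.log(handleCaptchaChange(null).reason); // expired
```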

Component Instance API

The component instance also has some utility functions that can be called. These can be accessed via ref.

  • getValue() returns the value of the captcha field
  • getWidgetId() returns the recaptcha widget Id
  • reset() forces reset. See the JavaScript API doc
  • execute() programmatically invoke the challenge
  • executeAsync() programmatically invoke the challenge and return a promise that resolves to the token or errors (if encountered).
    • alternative approach to execute() in combination with the onChange() prop - example below


const recaptchaRef = React.createRef();

onSubmit = () => {
  const recaptchaValue = recaptchaRef.current.getValue();
  this.props.onSubmit(recaptchaValue);
}

render() {
  return (
    <form onSubmit={this.onSubmit}>
      <ReCAPTCHA ref={recaptchaRef} sitekey="Your client site key" onChange={onChange} />
    </form>
  );
}

Invisible reCAPTCHA

▶ Codesandbox invisible example

See the reCAPTCHA documentation to see how to configure it.

With the invisible option, you need to handle things a bit differently. You will need to call the execute method yourself.

import ReCAPTCHA from "react-google-recaptcha";

const recaptchaRef = React.createRef();

<form onSubmit={() => { recaptchaRef.current.execute(); }}>
  <ReCAPTCHA ref={recaptchaRef} size="invisible" sitekey="Your client site key" />
</form>

Additionally, you can use the executeAsync method to use a promise based approach.

import ReCAPTCHA from "react-google-recaptcha";

const ReCAPTCHAForm = (props) => {
  const recaptchaRef = React.useRef();

  const onSubmitWithReCAPTCHA = async () => {
    const token = await recaptchaRef.current.executeAsync();

    // apply to form data
  };

  return (
    <form onSubmit={onSubmitWithReCAPTCHA}>
      <ReCAPTCHA ref={recaptchaRef} size="invisible" sitekey="Your client site key" />
    </form>
  );
};

ReactDOM.render(<ReCAPTCHAForm />, document.body);

Advanced usage

Global properties used by reCaptcha

useRecaptchaNet: If google.com is blocked, you can set useRecaptchaNet to true so that the component uses recaptcha.net instead.

Example global properties:

window.recaptchaOptions = {
  useRecaptchaNet: true,
};

CSP Nonce support

window.recaptchaOptions = {
  nonce: document.querySelector('meta[name=\'csp-nonce\']').getAttribute('content'),
};

ReCaptcha loading google recaptcha script manually

You can also use the barebone components doing the following. Using that component will oblige you to manage the grecaptcha dep and load the script by yourself.

import { ReCAPTCHA } from "react-google-recaptcha";

const grecaptchaObject = window.grecaptcha; // You must provide access to the google grecaptcha object.

render(
  <ReCAPTCHA ref={(r) => this.recaptcha = r} sitekey="Your client site key" grecaptcha={grecaptchaObject} />,
  document.body
);

Hiding the Recaptcha

According to the google docs you are allowed to hide the badge as long as you include the reCAPTCHA branding visibly in the user flow. Please include the following text:

This site is protected by reCAPTCHA and the Google
    <a href="https://policies.google.com/privacy">Privacy Policy</a> and
    <a href="https://policies.google.com/terms">Terms of Service</a> apply.

If you wish to hide the badge you must add:

.grecaptcha-badge { visibility: hidden; }

to your css.

Migrate to 2.0

  • options.removeOnUnmount: REMOVED This was only useful for the lang changes. Lang is now changed through the hl prop.
  • options.lang: REMOVED Instead pass it as the hl prop on the component.

Notes on Requirements

At least React@16.4.1 is required due to forwardRef usage in the dependency react-async-script.


Pre 1.0.0 and React < 16.4.1 support details in 0.14.0.

Download Details:

Author: Dozoisch
Source Code: https://github.com/dozoisch/react-google-recaptcha 
License: MIT license

#javascript #react #google #recaptcha 

Nat Grady


A Demo on How to Build Your Own Google Analytics Dashboard with R


A demo on how to build your own Google Analytics dashboard with R, Shiny and MySQL


Whilst shinyga() lets you create Shiny dashboards that anyone can connect their own GA data with, the more common use case of creating a dashboard to use just your own data is better served by this example. This template lets you clone and enter your GA id to quick start your own Shiny dashboard.


  • Interactive trend graph.
  • Auto-update of GA data for last 3 years.
  • Zoomable heatmap for Day of week analysis.
  • Year on Year, Month on Month and Last Month vs same Month Last Year.
  • MySQL persistent storage for blending your own data with GA data.
  • Upload option to update MySQL data stored.
  • Analysis of impact of events on your GA data via Google's CausalImpact
  • Detection of unusual timepoints using Twitter's AnomalyDetection



To Use

Clone this repository to your own RStudio project.

Get your MySQL setup with a user and IP location, and the GA View ID you want to pull data from. You will also probably need to whitelist the IP of your Shiny Server. Add your local IP for testing too. If you will use shinyapps.io their IPs are:


Create another file called secrets.r in the same directory, with the content below filled in with your details. This file is called in functions.r.

 # secrets.r
 options(mysql = list(
   "host" = "YOUR SQL IP",
   "port" = 3306,
   "user" = "YOUR SQL USER",
   "password" = "YOUR USER PW",
   "databaseName" = "onlinegashiny"),
 rga = list(
   "profile_id" = "YOUR GA ID",
   "daysBackToFetch" = 356*3),
 shinyMulti = list(
   "max_plots" = 10),
 myCausalImpact = list(
   'test_time' = 14,
   'season' = 7),
 shiny.maxRequestSize = 0.5*1024^2 ## upload only 0.5 MB
 )

Install rga() if you need to, then run the below once locally in the same folder to have the app remember your GA OAuth2 settings.

 ## Run this locally first, to store the auth token.

Run locally with shiny::runApp() or upload to your shinyapps.io account or your own Shiny server.

Customise your instance.

A guide blogpost here: http://markedmondson.me/enhance-your-google-analytics-data-with-r-and-shiny-free-online-dashboard-template

Live demo version here: https://mark.shinyapps.io/GA-dashboard-demo

Download Details:

Author: MarkEdmondson1234
Source Code: https://github.com/MarkEdmondson1234/ga-dashboard-demo  

#r #dashboard #google 

Nat Grady


GoogleVis: Interface Between R and The Google Chart tools


The googleVis package provides an interface between R and Google's chart tools. It allows users to create web pages with interactive charts based on R data frames. Charts are displayed locally via the R HTTP help server. A modern browser with an Internet connection is required. The data remains local and is not uploaded to Google.


You can install the stable version from CRAN:

install.packages('googleVis')


See the googleVis package vignette for more details. For a brief introduction read the five page R Journal article.

Check out the examples from the googleVis demo.

Please read Google's Terms of Use before you start using the package.

Download Details:

Author: Mages
Source Code: https://github.com/mages/googleVis 

#r #google #chart #tools 

Hunter Krajcik


A Standalone Face Recognition Extracted From Google_ml_kit_plugin


This is a standalone face detection plugin extracted from google_ml_kit. The reason for extracting this plugin is to reduce the size needed and remove unnecessary imports.

Credits to original author of google_ml_kit for developing the full package.


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add google_ml_face_detection

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  google_ml_face_detection: ^0.0.3

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:google_ml_face_detection/google_ml_face_detection.dart';


import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:google_ml_face_detection/google_ml_face_detection.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';

  @override
  void initState() {
    super.initState();
    initPlatformState();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  Future<void> initPlatformState() async {
    String platformVersion = '';
    // Platform messages may fail, so we use a try/catch PlatformException.
    // We also handle the message potentially returning null.
    // try {
    //   platformVersion =
    //       await GoogleMlFaceDetection.platformVersion ?? 'Unknown platform version';
    // } on PlatformException {
    //   platformVersion = 'Failed to get platform version.';
    // }

    // // If the widget was removed from the tree while the asynchronous platform
    // // message was in flight, we want to discard the reply rather than calling
    // // setState to update our non-existent appearance.
    // if (!mounted) return;

    // setState(() {
    //   _platformVersion = platformVersion;
    // });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Plugin example app'),
        ),
        body: Center(
          child: Text('Running on: $_platformVersion\n'),
        ),
      ),
    );
  }
}

Download Details:

Author: Danjodanjo
Source Code: https://github.com/danjodanjo/google_ml_face_detection 
License: MIT license

#flutter #dart #ml #google 

Nat Grady


GoogleAuthR: Google API Client Library for R

googleAuthR - Google API R Client  

gargle backend

As of version googleAuthR>=1.0.0 the OAuth2 and service JSON authentication is provided by gargle. Refer to that documentation for details.

The plan is to migrate as much functionality to gargle from googleAuthR, but backward compatibility will be maintained for all packages depending on googleAuthR in the meantime.

Once there is feature parity, client packages can then migrate totally to gargle. At time of writing some of the major features not in gargle yet are:

  • Shiny authentication flows
  • Paging
  • Caching
  • Batching

If you are not using the above then you can use gargle directly now. Otherwise you can still use googleAuthR that will use the features of gargle and wait for more features to be migrated.


This library allows you to authenticate easily via local use in an OAuth2 flow; within a Shiny app; or via service accounts.

The two main functions are gar_auth() and gar_api_generator().


gar_auth()

This takes care of getting the authentication token, storing it, and refreshing it. Use it before any call to a Google library.


gar_api_generator()

This creates functions for you to use to interact with Google APIs. Use it within your own function definitions to query the Google API you want.


Auto-build libraries for Google APIs with OAuth2 for both local and Shiny app use.

Get more details at the googleAuthR website

The googleAuthRverse Slack team has been setup for support for using googleAuthR and the libraries it helps create. Sign up via this Google form to get access.

R Google API libraries using googleAuthR

Here is a list of Google API libraries built with googleAuthR. They are all cross-compatible, since they use googleAuthR as the authentication backend: for example, one OAuth2 login flow can be shared, and they can be used in multi-user Shiny apps.

Feel free to add your own via email or a pull request if you have used googleAuthR to build something cool.

googleAuthR now has an R package generator which makes R package skeletons you can use to build your own Google API R package upon. Browse through the 154 options at this Github repository.

Thanks to


googleAuthR is available on CRAN


Check out News to see the features of the development version.

If you want to use the development version on Github, install via:


Download Details:

Author: MarkEdmondson1234
Source Code: https://github.com/MarkEdmondson1234/googleAuthR 
License: View license

#r #google #api 

GoogleAuthR: Google API Client Library for R

A React JS Google Clone using Rapid API with Amazing Features

Googl - A React JS Google Search Engine Clone

React JS Google Clone  

⚠️ Before you start

  1. Make sure Git and NodeJS are installed.
  2. Yarn is faster than npm, so use Yarn.
  3. Create a .env file in the root folder.
  4. Contents of .env:
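A minimal .env would contain just the Rapid API key variable the app reads; the value below is a placeholder, obtained in the API setup steps that follow:

```shell
REACT_APP_RAPID_API_KEY=your_rapid_api_key_here
```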

Now, to setup API, go to Rapid API Website and create an account.

Enable this API to fetch google search results: API: Google Search by apigeek.

Copy API Key

  1. After enabling the API, copy your API key and paste it into the .env file as REACT_APP_RAPID_API_KEY.

NOTE: Make sure you don't share these keys publicly.

📌 How to use this App?

  1. Clone this repository to your local computer.
  2. Open a terminal in the root directory.
  3. Type and run npm install or yarn install.
  4. Once the packages are installed, start the app using npm run dev or yarn dev.
  5. The app is now fully configured and you can start using it :+1:

📃 Built with

Rapid API

Built with Love

🔧 Stats

Stats for this App

🙌🏼 Contribute

You might encounter some bugs while using this app. You are more than welcome to contribute. Just submit changes via pull request and I will review them before merging. Make sure you follow community guidelines.

 ⭐ Give A Star

You can also star this repository so that more people can discover and use it.

Author: Technical-Shubham-tech
Source code: https://github.com/Technical-Shubham-tech/google-clone
License: MIT license
#react #javascript #google 


A User-space File System for interacting with Google Cloud Storage

gcsfuse is a user-space file system for interacting with Google Cloud Storage.

Current status

Please treat gcsfuse as beta-quality software. Use it for whatever you like, but be aware that bugs may lurk, and that we reserve the right to make small backwards-incompatible changes.

The careful user should be sure to read semantics.md for information on how gcsfuse maps file system operations to GCS operations, and especially on surprising behaviors. The list of open issues may also be of interest.


See installing.md for full installation instructions for Linux and macOS.



GCS credentials are automatically loaded using Google application default credentials, or a JSON key file can be specified explicitly using --key-file. If you haven't already done so, the easiest way to set up your credentials for testing is to run the gcloud tool:

gcloud auth login

See mounting.md for more information on credentials.

Invoking gcsfuse

To mount a bucket using gcsfuse over an existing directory /path/to/mount, invoke it like this:

gcsfuse my-bucket /path/to/mount

Important: You should run gcsfuse as the user who will be using the file system, not as root. Do not use sudo.

The gcsfuse tool will exit successfully after mounting the file system. Unmount in the usual way for a fuse file system on your operating system:

umount /path/to/mount         # macOS
fusermount -u /path/to/mount  # Linux

If you are mounting a bucket that was populated with objects by some other means besides gcsfuse, you may be interested in the --implicit-dirs flag. See the notes in semantics.md for more information.

See mounting.md for more detail, including notes on running in the foreground and fstab compatibility.
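Putting the flags above together, a mount that uses an explicit key file and implicit directories might look like this; the bucket name and paths are placeholders:

```shell
# Mount with an explicit service-account key and implicit directory
# support; --key-file and --implicit-dirs are described above.
gcsfuse --key-file /path/to/service-account-key.json --implicit-dirs my-bucket /path/to/mount
```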


Latency and rsync

Writing files to and reading files from GCS has a much higher latency than using a local file system. If you are reading or writing one small file at a time, this may cause you to achieve a low throughput to or from GCS. If you want high throughput, you will need to either use larger files to smooth across latency hiccups or read/write multiple files at a time.

Note in particular that this heavily affects rsync, which reads and writes only one file at a time. You might try using gsutil -m rsync to transfer multiple files to or from your bucket in parallel instead of plain rsync with gcsfuse.

Rate limiting

If you would like to rate limit traffic to/from GCS in order to set limits on your GCS spending on behalf of gcsfuse, you can do so:

  • The flag --limit-ops-per-sec controls the rate at which gcsfuse will send requests to GCS.
  • The flag --limit-bytes-per-sec controls the egress bandwidth from gcsfuse to GCS.

All rate limiting is approximate, and is performed over an 8-hour window. By default, there are no limits applied.
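For example, a mount that caps both the request rate and bandwidth might be invoked as follows; the numeric values are illustrative, not recommendations (check gcsfuse --help for the exact units):

```shell
# Limit gcsfuse to roughly 10 requests per second and about 2 MiB/s
# of egress to GCS (values are illustrative).
gcsfuse --limit-ops-per-sec 10 --limit-bytes-per-sec 2097152 my-bucket /path/to/mount
```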

Upload procedure control

Uploads are implemented as a retry loop with exponential backoff for failed requests to the GCS backend. The flag --max-retry-sleep sets a cap on the backoff duration; once the backoff would exceed this cap, retrying stops. The default is 1 minute, and a value of 0 disables retries.
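For example, the duration value here is illustrative:

```shell
# Give up retrying once the exponential backoff would exceed 30 seconds.
gcsfuse --max-retry-sleep 30s my-bucket /path/to/mount
```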

GCS round trips

By default, gcsfuse uses two forms of caching to save round trips to GCS, at the cost of consistency guarantees. These caching behaviors can be controlled with the flags --stat-cache-capacity, --stat-cache-ttl and --type-cache-ttl. See semantics.md for more information.
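For example, to trade consistency guarantees for fewer round trips, you might enlarge the stat cache and lengthen the TTLs; the values below are illustrative:

```shell
# Cache stat results for more entries and keep stat/type cache
# entries for five minutes (values are illustrative).
gcsfuse --stat-cache-capacity 8192 --stat-cache-ttl 5m --type-cache-ttl 5m my-bucket /path/to/mount
```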


If you are using FUSE for macOS, be aware that by default it will give gcsfuse only 60 seconds to respond to each file system operation. This means that if you write and then flush a large file and your upstream bandwidth is insufficient to write it all to GCS within 60 seconds, your gcsfuse file system may become unresponsive. This behavior can be tuned using the daemon_timeout mount option. See issue #196 for details.

Downloading object contents

Behind the scenes, when a newly-opened file is first modified, gcsfuse downloads the entire backing object's contents from GCS. The contents are stored in a local temporary file whose location is controlled by the flag --temp-dir. Later, when the file is closed or fsync'd, gcsfuse writes the contents of the local file back to GCS as a new object generation.

Files that have not been modified are read portion by portion on demand. gcsfuse uses a heuristic to detect when a file is being read sequentially, and will issue fewer, larger read requests to GCS in this case.

The consequence of this is that gcsfuse is relatively efficient when reading or writing entire large files, but will not be particularly fast for small numbers of random writes within larger files, and to a lesser extent the same is true of small random reads. Performance when copying large files into GCS is comparable to gsutil (see issue #22 for testing notes). There is some overhead due to the staging of data in a local temporary file, as discussed above.

Note that new and modified files are also fully staged in the local temporary directory until they are written out to GCS due to being closed or fsync'd. Therefore the user must ensure that there is enough free space available to handle staged content when writing large files.

Other performance issues

If you notice otherwise unreasonable performance, please file an issue.


gcsfuse is open source software, released under the Apache license. It is distributed as-is, without warranties or conditions of any kind.

For support, visit Server Fault. Tag your questions with gcsfuse and google-cloud-platform, and make sure to look at previous questions and answers before asking a new one. For bugs and feature requests, please file an issue.


gcsfuse version numbers are assigned according to Semantic Versioning. Note that the current major version is 0, which means that we reserve the right to make backwards-incompatible changes.

Download Details:

Author: GoogleCloudPlatform
Source Code: https://github.com/GoogleCloudPlatform/gcsfuse 
License: Apache-2.0 license

#go #golang #google #cloud #storage 

Royce Reinger


Google-cloud-ruby: Google Cloud Client Library for Ruby

Google Cloud Ruby Clients

Idiomatic Ruby client libraries for Google Cloud Platform APIs.

This repository includes client libraries for Google Cloud Platform services, along with a selected set of Google services unrelated to the cloud platform.

What's here

Client library gems

Most directories correspond to a client library RubyGem, including its code, tests, gemspec, and documentation. Some client libraries also include handwritten samples in the samples directory and/or autogenerated samples in the snippets directory.

Most client libraries in this repository are automatically generated by the GAPIC Generator. A small number are written and maintained by hand. You can identify a generated client library by the presence of .OwlBot.yaml in the library directory. For the most part, do not try to edit generated libraries by hand, because changes will be overwritten by the code generator.

Other directories

A few directories include support files, including:

  • .github includes configuration for GitHub Actions and bots that help to maintain this repository.
  • .kokoro includes configuration for internal Google processes that help to maintain this repository.
  • .toys includes scripts for running CI, releases, and maintenance tasks.
  • acceptance and integration include shared fixtures for acceptance tests.
  • obsolete contains older libraries that are obsolete and no longer maintained.

GitHub facilities

Issues for client libraries hosted here can be filed in the issues tab. However, this is not an official support channel. If you have support questions, file a support request through the normal Google support channels, or post questions on a forum such as StackOverflow.

Pull requests are welcome. Please see the section below on contributing.

Some maintenance tasks can be run in the actions tab by authorized personnel.

Using the client libraries

These client library RubyGems each include classes and methods that can be used to make authenticated calls to specific Google APIs. Some libraries also include additional convenience code implementing common client-side workflows or best practices.

In general, you can expect to:

Activate access to the API by creating a project on the Google Cloud Console, enabling billing if necessary, and enabling the API.

Choose a library and install it, typically by adding it to your bundle. For example, here is how you might add the Translation service client to your Gemfile:

# Gemfile

# ... previous libraries ...
gem "google-cloud-translate", "~> 3.2"

Instantiate a client object. This object represents an authenticated connection to the service. For example, here is how you might create a client for the translation service:

require "google/cloud/translate"

translation_client = Google::Cloud::Translate.translation_service

Depending on your environment and authentication needs, you might need to provide credentials to the client object.

Make API calls by invoking methods on the client. For example, here is how you might translate a phrase:

result = translation_client.translate_text contents: ["Hello, world!"],
                                           mime_type: "text/plain",
                                           source_language_code: "en-US",
                                           target_language_code: "ja-JP",
                                           parent: "projects/my-project-name"
puts result.translations.first.translated_text
# => "こんにちは世界!"

Activating the API

To access a Google Cloud API, you will generally need to activate it in the cloud console. This typically involves three steps:

If you have not created a Google Cloud Project, do so. Point your browser to the Google Cloud Console, sign up if needed, and create or choose a project. Make note of the project number (which is numeric) or project ID (which is usually three or more words separated by hyphens). Many services will require you to pass that information in when calling an API.

For most services, you will need to provide billing information. If this is your first time using Google Cloud Platform, you may be eligible for a free trial.

Enable the API you want to use. Click the "APIs & Services" tab in the left navigation, then click the "Enable APIs and Services" button near the top. Search for the API you want by name, and click "Enable". A few APIs may be enabled automatically for you, but most APIs need to be enabled explicitly.

Once you have a project set up and have enabled an API, you are ready to begin using a client library to call the API.

Choosing a client library

This repository contains two types of API client RubyGems: the main library for the API (e.g. the google-cloud-translate gem for the Translation service), and one or more versioned libraries for different versions of the service (e.g. google-cloud-translate-v2 and google-cloud-translate-v3 for versions 2 and 3 of the service, respectively). Note that we're referring to different versions of the backend service, not of the client library gem.

In most cases, you should install the main library (the one without a service version in the name). This library will provide all the required code for making calls to the API. It may also provide additional convenience code implementing common client-side workflows or best practices. Often the main library will bring in one or more versioned libraries as dependencies, and the client and data type classes you will use may actually be defined in a versioned library, but installing the main library will ensure you have access to the best tools and interfaces for interacting with the service.

The versioned libraries are lower-level libraries that target a specific version of the service. You may choose to install a versioned library directly, instead of or in addition to the main library, to handle advanced use cases that require lower-level access.

Note: Many services may also provide client libraries with names beginning with google-apis-. Those clients are developed in a different repository, and utilize an older client technology that lacks some of the performance and ease of use benefits of the clients in the google-cloud-ruby repository. The older clients may cover some services for which a google-cloud-ruby client is not yet available, but for services that are covered, we generally recommend the clients in the google-cloud-ruby repository over the older ones.

Most client libraries have directories in this repository, or you can look up the name of the client library to use in the documentation for the service you are using. Install this library as a RubyGem, or add it to your Gemfile.


Most API calls must be accompanied by authentication information proving that the caller has sufficient permissions to make the call. For an overview of authentication with Google, see https://cloud.google.com/docs/authentication.

These API client libraries provide several mechanisms for attaching credentials to API calls.

If your application runs on a Google Cloud Platform hosting environment such as Google Compute Engine, Google Container Engine, Google App Engine, Google Cloud Run, or Google Cloud Functions, the environment will provide "ambient" credentials which client libraries will recognize and use automatically. You can generally configure these credentials in the hosting environment, for example per-VM in Google Compute Engine.

You can also provide your own service account credentials by including a service account key file in your application's file system and setting the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path to that file. Client libraries will read this environment variable if it is set.

Finally, you can override credentials in code by setting the credentials field in the client configuration. This can be set globally on the client class or provided when you construct a client object.

See https://cloud.google.com/docs/authentication/production for more information on these and other methods of providing credentials.
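As a sketch of the environment-variable mechanism described above: the key path is a placeholder, and the commented portion assumes the google-cloud-translate gem, so it is not executed here.

```ruby
# Point client libraries at a service-account key file via the
# environment; libraries read this variable automatically.
ENV["GOOGLE_APPLICATION_CREDENTIALS"] ||= "/path/to/service-account-key.json"

# Overriding credentials in code instead (requires google-cloud-translate,
# so it is shown commented out):
#
# require "google/cloud/translate"
# client = Google::Cloud::Translate.translation_service do |config|
#   config.credentials = "/path/to/service-account-key.json"
# end
```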

Supported Ruby Versions

These libraries are currently supported on Ruby 2.6 through Ruby 3.1. Older versions of Ruby may still work, but are unsupported and not recommended.

In general, Google provides official support for Ruby versions that are actively supported by Ruby Core; that is, Ruby versions that are either in normal maintenance or in security maintenance, and not end-of-life. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.

Library Versioning

The libraries in this repository follow Semantic Versioning.

Libraries are released at one of two different support quality levels:

GA: Libraries defined at the GA (general availability) quality level, indicated by a gem version number greater than or equal to 1.0, are stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues), or unless accompanying a semver-major version update (such as version 1.x to 2.x.) Issues and requests against GA libraries are addressed with the highest priority.

Preview: Libraries defined at the Preview quality level, indicated by a gem version number less than 1.0, are expected to be mostly stable, and we are working towards a release candidate. However, these libraries may receive backwards-incompatible updates from time to time. We will still address issues and requests with high priority.

Note that the gem version is distinct from the service version. Some backend services have multiple versions, for example versions v2 and v3 of the translation service. These are treated as separate services and will have separate versioned clients, e.g. the google-cloud-translate-v2 and google-cloud-translate-v3 gems. These gems will in turn have their own gem versions, tracking the development of the two services.


Contributions to this repository are welcome. However, please note that many of the clients in this repository are automatically generated. The Ruby files in those clients will have a comment to that effect near the top; changes to those files will not be accepted as they will simply be overwritten by the code generator. If in doubt, please open an issue and ask the maintainers. See the CONTRIBUTING document for more information on how to get started.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.


Please report bugs at the project on Github.

If you have questions about how to use the clients or APIs, ask on Stack Overflow.

Author: Googleapis
Source Code: https://github.com/googleapis/google-cloud-ruby 
License: Apache-2.0 license

#ruby #google #cloud #client 

Monty Boehm


GoogleCharts.jl: Julia interface to Google Chart Tools


Julia interface to Google Chart Tools.

Creating a Google chart basically involves four steps:

  • a specification of a Google "DataTable"
  • a specification of chart options
  • a call to make the type of chart desired.
  • a call to draw the chart

This package allows this to be done within Julia by:

  • mapping a DataFrame object into a Google DataTable
  • mapping a Dict of options into a JSON object of chart options (many of these options can be specified through keyword arguments)
  • providing various constructors to make the type of chart
  • providing a method to see the charts, called through Julia's show mechanism; in general, the render method can draw the chart or charts to an IOStream or file

A basic usage (see the test/ directory for more)

using GoogleCharts, DataFrames

scatter_data = DataFrame(
    Age    = [8,  4,   11, 4, 3,   6.5],
    Weight = [12, 5.5, 14, 5, 3.5, 7  ]
)

options = Dict(:title => "Age vs. Weight comparison",
           :hAxis =>  Dict(:title => "Age",
                       :minValue => 0,
                       :maxValue => 15),
           :vAxis =>  Dict(:title => "Weight",
                       :minValue => 0,
                       :maxValue => 15))

scatter_chart(scatter_data, options)

For non-nested options, keyword arguments can be given, as opposed to a dictionary:

chart = scatter_chart(scatter_data, title="Age vs. Weight comparison")

There are constructors for the following charts (cf. Charts Gallery)

       area_chart, bar_chart, bubble_chart, candlestick_chart, column_chart, combo_chart,
       gauge_chart, geo_chart, line_chart, pie_chart, scatter_chart, stepped_area_chart,
       table_chart, tree_chart, annotated_time_line, intensity_map, motion_chart, org_chart,

The helper function help_on_chart("chart_name") will open Google's documentation for the specified chart in a local browser.

The names of the data frame are used by the various charts. The order of the columns is important to the charting tools. The "Data Format" section of each web page describes this. We don't have a mechanism in place supporting Google's "Column roles".

The options are specified through a Dict, which is translated into JSON by JSON.to_json. There are numerous options described in the "Configuration Options" section of each chart's web page. Some useful ones are shown in the example, setting labels for the variables and the viewport. Google charts seem to like integer ranges in the viewports by default. Top-level properties can be set using keyword arguments.

In the tests/ subdirectory is a file with implementations with this package of the basic examples from Google's web pages. Some additional examples of configurations can be found there.

The GoogleCharts.render method can draw a chart to an IOStream, a specified filename, or (when used as above) to a web page that is displayed locally. One can specify more than one chart at a time using a vector of charts.

A Plot function

There is a Plot function for plotting functions with a similar interface as Plot's plot function:

Plot(sin, 0, 2pi)

A vector of functions:

Plot([sin, u -> cos(u) > 0 ? 0 : NaN], 0, 2pi, 
       title="A function and where its derivative is positive",
           vAxis=Dict(:minValue => -1.2, :maxValue => 1.2))

The Plot function uses a line_chart. The above example shows that NaN values are handled gracefully; Inf values are not, so we replace them with NaN.

Plot also works for paired vectors:

x = linspace(0, 1., 20)
y = rand(20)
Plot(x, y)                     # dot-to-dot plot
Plot(x, y, curveType="function")         # smooths things out

parametric plots

Passing a tuple of functions will produce a parametric plot:

Plot((x -> sin(2x), cos), 0, 2pi)

scatter plots

The examples above show that Plot assumes your data is a discrete approximation to a function. For scatterplots, the Scatter convenience function is provided. A simple use might be:

x = linspace(0, 1., 20)
y = rand(20)
Scatter(x, y)

If the data is in a data frame, we have an interface like:

using RDatasets
mtcars = dataset("datasets", "mtcars")
Scatter(:WT, :MPG, mtcars)

And we can even use with groupby objects:

iris = dataset("datasets", "iris")
d=iris[:, [2,3,6]]          ## in the order  "x, y, grouping factor"
gp = groupby(d, :Species)
Scatter(gp)                 ## in R this would be plot(Sepal.Width ~ Sepal.Length, iris, col=Species)
                            ## or ggplot(iris, aes(x=Sepal.Length, y=Sepal.Width, color=Species)) + geom_point()

Surface plots

Some experimental code is in place for surface plots. It needs work. The basic use is like:

surfaceplot((x,y) -> x^2 + y^2, linspace(0,1,20), linspace(0,2,20))

The above does not seem to work in many browsers and does not work reliably in IJulia (only success has been with Chrome).


The googleVis package for R does a similar thing, but offers more customizability. This package should try to provide similar features. In particular, the following could be worked on:

  • a more Julian interface,
  • some features for interactive usage,
  • some integration with a local web server.

Author: jverzani
Source Code: https://github.com/jverzani/GoogleCharts.jl 
License: View license

#julia #google #charts 

Royce Reinger


TTS: A Ruby Gem for Text-to-Speech By using Google Translate Service


Using the Google Translate service as the speech engine, this gem generates a .mp3 voice file from any given string.


require 'tts'
# Will download "Hello World!.mp3" to your current directory
# Supported languages: ["zh", "en", "it", "fr"]
"Hello World!".to_file "en"

# i18n
"人民代表代表人民".to_file "zh"

# Save the file to a specific location
"Light rain with highs of 5 degrees".to_file "en", "~/weather.mp3"

# Supports large text, as the gem will batch up the requests to Google (each request is limited to 100 characters)
text = "People living on the east coast of England have been warned to stay away from their homes because of further high tides expected later. The tidal surge that hit the UK is said to have been the worst for 60 years, with thousands abandoning their homes."
text.to_file "en"

# Direct playback (requires mpg123 installed and on PATH, on a POSIX system)
"Established in 1853, the University of Melbourne is a public-spirited institution that makes distinctive contributions to society in research, learning and teaching and engagement.".play

# Direct playback in another language (played 2 times)
"Oggi il tempo è buono, andiamo in gita di esso.".play("it", 2)

# RTL Arabic language
"اليوم كان الطقس جيدا، ونحن نذهب في نزهة منه.".play("ar")
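The 100-character batching mentioned above can be sketched roughly like this; chunk_text is a hypothetical helper that splits on word boundaries, not the gem's actual implementation:

```ruby
# Split text into chunks of at most `limit` characters, breaking on
# whitespace, so each chunk fits in one request to the speech engine.
# Illustrative only; the tts gem's real batching logic may differ.
def chunk_text(text, limit = 100)
  chunks = []
  current = ""
  text.split(/\s+/).each do |word|
    if current.empty?
      current = word
    elsif current.length + 1 + word.length <= limit
      current << " " << word
    else
      chunks << current
      current = word
    end
  end
  chunks << current unless current.empty?
  chunks
end
```

Each chunk could then be sent as its own request and the resulting audio files concatenated.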

Direct play dependencies

You need to install `mpg123`

sudo apt-get install mpg123  # for Debian-based systems
brew install mpg123          # for macOS


tts "ruby is great"              # play the string
text2mp3 "中国上海天气不错" "zh"  # create an mp3 file


0.4 fixed an issue where long text could not be generated.

0.5 added all supported languages, added the direct playback feature, and fixed some broken RSpec tests.

0.7 added CLI support.

0.7.1 fixed support for the new Google API.

Download Details: 

Author: c2h2
Source Code: https://github.com/c2h2/tts 
License: MIT license

#ruby #text #google 

Rupert Beatty


Laravel-analytics: Retrieve Data From Google analytics

Retrieve data from Google Analytics

Using this package you can easily retrieve data from Google Analytics.

Here are a few examples of the provided methods:

use Analytics;
use Spatie\Analytics\Period;

//fetch the most visited pages for today and the past week
Analytics::fetchMostVisitedPages(Period::days(7));

//fetch visitors and page views for the past week
Analytics::fetchVisitorsAndPageViews(Period::days(7));

Most methods will return an \Illuminate\Support\Collection object containing the results.


This package can be installed through Composer.

composer require spatie/laravel-analytics

Optionally, you can publish the config file of this package with this command:

php artisan vendor:publish --provider="Spatie\Analytics\AnalyticsServiceProvider"

The following config file will be published in config/analytics.php

return [

    /*
     * The view id of which you want to display data.
     */
    'view_id' => env('ANALYTICS_VIEW_ID'),

    /*
     * Path to the client secret json file. Take a look at the README of this package
     * to learn how to get this file. You can also pass the credentials as an array
     * instead of a file path.
     */
    'service_account_credentials_json' => storage_path('app/analytics/service-account-credentials.json'),

    /*
     * The amount of minutes the Google API responses will be cached.
     * If you set this to zero, the responses won't be cached at all.
     */
    'cache_lifetime_in_minutes' => 60 * 24,

    /*
     * Here you may configure the "store" that the underlying Google_Client will
     * use to store its data. You may also add extra parameters that will
     * be passed on setCacheConfig (see docs for google-api-php-client).
     * Optional parameters: "lifetime", "prefix"
     */
    'cache' => [
        'store' => 'file',
    ],
];

How to obtain the credentials to communicate with Google Analytics

Getting credentials

The first thing you'll need to do is get some credentials to use the Google APIs. I'm assuming that you've already created a Google account and are signed in. Head over to the Google APIs site and click "Select a project" in the header.


Next up, we must specify which APIs the project may consume. In the API Library list, click "Google Analytics API". On the next screen, click "Enable".


Now that you've created a project with access to the Analytics API, it's time to download a file with its credentials. Click "Credentials" in the sidebar. You'll want to create a "Service account key".


On the next screen you can give the service account a name. You can name it anything you’d like. In the service account id you’ll see an email address. We’ll use this email address later on in this guide.


Select "JSON" as the key type and click "Create" to download the JSON file.


Save the json inside your Laravel project at the location specified in the service_account_credentials_json key of the config file of this package. Because the json file contains potentially sensitive information I don't recommend committing it to your git repository.

Granting permissions to your Analytics property

I'm assuming that you've already created an Analytics account on the Analytics site. When setting up your property, click "Advanced options" and make sure you enable Universal Analytics.


Go to "User management" in the Admin-section of the property.


On this screen you can grant access to the email address found in the client_email key of the json file you downloaded in the previous step. The Analyst role is sufficient.


Getting the view id

The last thing you'll have to do is fill in the view_id in the config file. You can find the right value on the Analytics site: go to "View Settings" in the Admin section of the property.


You'll need the View ID displayed there.



When the installation is done you can easily retrieve Analytics data. Nearly all methods will return an Illuminate\Support\Collection-instance.

Here are a few examples using periods

//retrieve visitors and pageview data for the current day and the last seven days
$analyticsData = Analytics::fetchVisitorsAndPageViews(Period::days(7));

//retrieve visitors and pageviews for the past six months
$analyticsData = Analytics::fetchVisitorsAndPageViews(Period::months(6));

//retrieve sessions and pageviews with the yearMonth dimension for the past year
$analyticsData = Analytics::performQuery(
    Period::years(1),
    'ga:sessions',
    [
        'metrics' => 'ga:sessions, ga:pageviews',
        'dimensions' => 'ga:yearMonth'
    ]
);

$analyticsData is a Collection in which each item is an array that holds keys date, visitors and pageViews

If you want to have more control over the period you want to fetch data for, you can pass a startDate and an endDate to the period object.

$startDate = Carbon::now()->subYear();
$endDate = Carbon::now();

Period::create($startDate, $endDate);

Provided methods

Visitors and pageviews

public function fetchVisitorsAndPageViews(Period $period): Collection

The function returns a Collection in which each item is an array that holds keys date, visitors, pageTitle and pageViews.

Total visitors and pageviews

public function fetchTotalVisitorsAndPageViews(Period $period): Collection

The function returns a Collection in which each item is an array that holds keys date, visitors, and pageViews.

Most visited pages

public function fetchMostVisitedPages(Period $period, int $maxResults = 20): Collection

The function returns a Collection in which each item is an array that holds keys url, pageTitle and pageViews.

Top referrers

public function fetchTopReferrers(Period $period, int $maxResults = 20): Collection

The function returns a Collection in which each item is an array that holds keys url and pageViews.

User Types

public function fetchUserTypes(Period $period): Collection

The function returns a Collection in which each item is an array that holds keys type and sessions.

Top browsers

public function fetchTopBrowsers(Period $period, int $maxResults = 10): Collection

The function returns a Collection in which each item is an array that holds keys browser and sessions.

All other Google Analytics queries

To perform all other queries on the Google Analytics resource use performQuery. Google's Core Reporting API provides more information on which metrics and dimensions might be used.

public function performQuery(Period $period, string $metrics, array $others = [])

You can get access to the underlying Google_Service_Analytics object:

Analytics::getAnalyticsService();

Testing

Run the tests with:

composer test

Support us

We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.


Changelog

Please see CHANGELOG for more information on what has changed recently.


Contributing

Please see CONTRIBUTING for details.


Security

If you've found a security bug, please mail security@spatie.be instead of using the issue tracker.


Credits

And a special thanks to Caneco for the logo ✨

Author: Spatie
Source Code: https://github.com/spatie/laravel-analytics 
License: MIT license

#laravel #php #google 

Laravel-analytics: Retrieve Data From Google analytics
Royce Reinger


Translations with Speech Synthesis in Your Terminal As A Ruby Gem


Termit is an easy way to translate text in your terminal. You can also check out its Node.js npm counterpart, normit.


gem install termit


termit 'source_language' 'target_language' 'text'


termit en es "hey cowboy where is your horse?"
=> "Hey vaquero dónde está tu caballo?"

termit fr en "qui est votre papa?"
=> "Who's Your Daddy?"

Quotation marks are not necessary for text data input:

termit fr ru qui est votre papa
=> "Кто твой папочка?"

Speech synthesis

Specify a -t (talk) flag to use speech synthesis (requires mpg123):

termit en fr "hey cowboy where is your horse?" -t
=> "Hey cowboy où est votre cheval ?" # and a french voice says something about a horse

You can use termit as a speech synthesizer of any supported language without having to translate anything:

termit en en "hold your horses cowboy !" -t
=> "hold your horses cowboy !" # and an english voice asks you to hold on

Learning language when committing to git

Idea by Nedomas. See and hear your messages translated to the target language every time you commit. You can do this in two ways: by overriding the git command, or by using a git post-commit hook.

Override the git command (zsh only)

In ~/.zshrc

export LANG=es
git(){[[ "$@" = commit\ -m* ]]&&termit en $LANG ${${@:$#}//./} -t;command git $@}
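For bash users, here is an untested sketch of an equivalent override (my addition, not from the termit README). It uses a separate TARGET_LANG variable so the locale's LANG setting is left alone:

```shell
# Hypothetical bash equivalent for ~/.bashrc (assumes termit is on PATH).
# Speaks the commit message in $TARGET_LANG, then runs the real git command.
TARGET_LANG=es
git() {
  if [ "$1" = "commit" ] && [ "$2" = "-m" ] && [ -n "$3" ]; then
    termit en "$TARGET_LANG" "$3" -t
  fi
  command git "$@"
}
```

Unlike the zsh one-liner, this leaves the trailing period of the message intact; strip it first if you prefer the original behaviour.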

I am no shell ninja so if you know how to make it work in bash then please submit a PR.

Using a post-commit hook

Add a file named post-commit to your project's .git/hooks directory, with this in it:

termit en es "`git log -1 --pretty=format:'%s'`" -t

Remember to switch the languages according to your preference.

If you want this to be in every one of your git repositories, see this Stack Overflow answer.
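For reference, the steps above can be scripted from the shell; this is a sketch assuming an en→es setup, run from the repository root. Note that git only runs hooks that are marked executable:

```shell
# Install the post-commit hook (the hooks directory is created
# defensively in case it is missing).
mkdir -p .git/hooks
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
termit en es "`git log -1 --pretty=format:'%s'`" -t
EOF
chmod +x .git/hooks/post-commit  # git ignores non-executable hooks
```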

Language codes:

To find all available language codes visit https://msdn.microsoft.com/en-us/library/hh456380.aspx


Requirements

Works with Ruby 1.9.2 and higher.

To use speech synthesis you need to have mpg123 installed.

For Ubuntu:

sudo apt-get install mpg123

For macOS:

brew install mpg123
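A quick way to check whether the dependency is already present before reaching for the -t flag (prints one of two messages depending on your system):

```shell
# command -v exits non-zero when the binary is not on PATH.
if command -v mpg123 >/dev/null 2>&1; then
  echo "mpg123 found"
else
  echo "mpg123 missing - install it first"
fi
```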


It was rewritten to work with Bing Translator. Thanks to Ragnarson for supporting it!


Disclaimer

Termit works by scraping private APIs and is therefore not recommended for use in production or at scale.

Author: Pawurb
Source Code: https://github.com/pawurb/termit 
License: MIT license

#ruby #translation #google 

Translations with Speech Synthesis in Your Terminal As A Ruby Gem