Envision is a small, easy-to-use Clojure library for data processing, cleanup, and visualisation. If you've heard of Incanter, you may notice a couple of things that we do in a similar way.
You can check out a couple of rendered examples here.
Envision is a relatively young project. Since it is never meant to be used in hard production (e.g. it will never be something user-facing), and is intended for people who'd like to extract some information from their data, it should be stable enough from the very early releases.
Envision artifacts are released to Clojars. If you are using Maven, add the following repository definition to your pom.xml:
<repository>
  <id>clojars.org</id>
  <url>http://clojars.org/repo</url>
</repository>
With Leiningen:
[clojurewerkz/envision "0.1.0-SNAPSHOT"]
With Maven:
<dependency>
  <groupId>clojurewerkz</groupId>
  <artifactId>envision</artifactId>
  <version>0.1.0-SNAPSHOT</version>
</dependency>
The main idea of this library is to make exploratory analysis more interactive and visual, although in a programmer's way. Envision creates a "throwaway environment" every time you, for example, make a line chart. You can modify the chart the way you want, change all the possible configuration parameters, filter data, and add exponents in ways we wouldn't be able to program for you.
We concluded that visual environments are often constraining, and creating an API for every single feature would make it enormously big and bloated. So we do a bare minimum, which is already helpful by default through the API, and let you configure everything you could possibly imagine yourself: adding interactivity, combining charts, customizing layouts and so on.
The main entry point is clojurewerkz.envision.core/render. It creates a temporary directory with all the required dependencies and returns the path to it. For example, let's generate some data and render a line and an area chart:
(ns my-ns
  (:require [clojurewerkz.envision.core         :as envision]
            [clojurewerkz.envision.chart-config :as cfg]))

;; Note: `distribution` below is assumed to be an alias for a namespace
;; providing normal-distribution; the original snippet does not require it.
(envision/render
 [;; a histogram with 10 bins over 100 samples from a normal distribution
  (envision/histogram 10 (take 100 (distribution/normal-distribution 5 10))
                      {:tick-format "s"})
  ;; a linear regression over two series of yearly income data
  (envision/linear-regression
   (flatten (for [i (range 0 20)]
              [{:year (+ 2000 i) :income (+ 10 i (rand-int 10)) :series "series-1"}
               {:year (+ 2000 i) :income (+ 10 i (rand-int 20)) :series "series-2"}]))
   :year
   :income
   [:year :income :series])
  ;; a line chart, split into color-coded series by the "series" key
  (cfg/make-chart-config
   {:id            "line"
    :headline      "Line Chart"
    :x             "year"
    :y             "income"
    :x-config      {:order-rule "year"}
    :series-type   "line"
    :data          (flatten (for [i (range 0 20)]
                              [{:year (+ 2000 i) :income (+ 10 i (rand-int 10)) :series "series-1"}
                               {:year (+ 2000 i) :income (+ 10 i (rand-int 20)) :series "series-2"}]))
    :series        "series"
    :interpolation :cardinal})
  ;; an area chart over a single series
  (cfg/make-chart-config
   {:id            "area"
    :headline      "Area Chart"
    :x             "year"
    :y             "income"
    :x-config      {:order-rule "year"}
    :series-type   "area"
    :data          (into [] (for [i (range 0 20)]
                              {:year (+ 2000 i) :income (+ 10 i (rand-int 10))}))
    :interpolation :cardinal})])
The function will return a temp folder path, like:
/var/folders/1y/xr7zvp2j035bpq09whg7th5w0000gn/T/envision-1402385765815-3502705781
cd into this path and start an HTTP server. On most systems you'd have Python 2.7 installed, so you can run:
python -m SimpleHTTPServer
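If you only have Python 3, the SimpleHTTPServer module was renamed to http.server, so the equivalent command is:
python3 -m http.server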
After that you can point your browser to:
http://localhost:8000/templates/index.html
(SimpleHTTPServer listens on port 8000 by default.)
If you don't want to start an HTTP server, or don't have Python installed, just open the templates/index_file.html static file in your browser.
You can check out a couple of example graphs rendered as static files here.
We decided to use a simple HTTP server by default, since d3 sometimes doesn't like the file:// protocol. However, you can always just open templates/index_file.html in your browser and get pretty much the same result.
In order to configure a chart, you have to specify:

- id, a unique string literal identifying the chart
- data, a sequence of maps, where each map represents an entry to be displayed
- x, the key that should be taken as the x value for each rendered point
- y, the key that should be taken as the y value for each rendered point
- series-type, one of line, bubble, area and bar, for line charts, scatterplots, area charts and bar charts, correspondingly

Optionally, you can specify:

- series, which will split your data, grouping or color-coding charts by the given keys; keys should be given either as a string or a vector of strings
- interpolation, the interpolation type to be used in an area or line chart; usually you want to use linear, basis, or step-after, but there are more options, which will be mentioned in a corresponding section
- x-config, which specifies a configuration for the X axis

x-config options:

- order-rule, specifies a key to sort data points on the x axis, if it's not x
- override-min, overrides the minimum for an axis

A short configuration sketch using these options follows below.

Envision supports Clojure 1.4+.
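For illustration only, here is a minimal sketch of a bubble-chart configuration built from the keys documented above; the id, headline, keys, and data are invented for the example:

;; A hypothetical scatterplot (bubble) configuration using only the keys
;; documented above. The data and key names are invented for illustration.
(cfg/make-chart-config
 {:id          "scatter"                 ;; unique chart identifier
  :headline    "Height vs Weight"
  :x           "height"                  ;; key used for x values
  :y           "weight"                  ;; key used for y values
  :series-type "bubble"                  ;; renders a scatterplot
  :series      "group"                   ;; color-code points by the "group" key
  :data        [{:height 170 :weight 65 :group "a"}
                {:height 180 :weight 80 :group "a"}
                {:height 160 :weight 55 :group "b"}]})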
To subscribe for announcements of releases, important changes and so on, please follow @ClojureWerkz on Twitter.
Envision is part of the group of libraries known as ClojureWerkz, together with Monger, Elastisch, Langohr, Welle, Titanium and several others.
Envision uses Leiningen 2. Make sure you have it installed and then run tests against all supported Clojure versions using
lein2 all test
Then create a branch and make your changes on it. Once you are done with your changes and all tests pass, submit a pull request on GitHub.
Author: clojurewerkz
Source Code: https://github.com/clojurewerkz/envision
License:
#machine-learning #DataVisualisation
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
At smaller companies, access to and control of data is one of the biggest challenges faced by data analysts and data scientists. The same is true at larger companies when an analytics team is forced to navigate bureaucracy, cybersecurity, and over-taxed IT, rather than benefit from a team of data engineers dedicated to collecting and making good data available.
Creative, persistent analysts find ways to get access to at least some of this data. Through a combination of daily processes to save email attachments, run database queries, and copy and paste from internal web pages, one might build up a mighty collection of data sets on a personal computer, in a team shared drive, or even in a database.
But this solution does not scale well, and is rarely documented and understood by others who could take it over if a particular analyst moves on to a different role or company. In addition, it is a nightmare to maintain. One may spend a significant part of each day executing these processes and troubleshooting failures; there may be little time to actually use this data!
I lived this for years at different companies. We found ways to be effective but data management took up way too much of our time and energy. Often, we did not have the data we needed to answer a question. I continued to learn from the ingenuity of others and my own trial and error, which led me to the theoretical framework that I will present in this blog series: building a self-managed data library.
A data library is _not_ a data warehouse, data lake, or any other formal BI architecture. It does not require any particular technology or skill set (coding will not be required, but it will greatly increase the speed at which you can build and the degree of automation possible). So what is a data library, and how can a small data analytics team use it to overcome the challenges I've described?
#big data #cloud & devops #data libraries #small data science teams #introduction to data libraries for small data science teams #data science
Using data to inform decisions is essential to product management, or anything really. And thankfully, we aren’t short of it. Any online application generates an abundance of data and it’s up to us to collect it and then make sense of it.
Google Data Studio helps us understand the meaning behind data, enabling us to build beautiful visualizations and dashboards that transform data into stories. If it isn't already, data literacy will soon be as fundamental a skill as learning to read or write.
Nothing is more powerful than data democracy, where anyone in your organization can regularly make decisions informed by data. As part of enabling this, we need to be able to visualize data in a way that brings it to life and makes it more accessible. I've recently been learning how to do this and wanted to share some of the cool ways it can be done in Google Data Studio.
#google-data-studio #blending-data #dashboard #data-visualization #creating-visualizations #how-to-visualize-data #data-analysis #data-visualisation
The COVID-19 pandemic disrupted supply chains and brought economies around the world to a standstill. In turn, businesses need access to accurate, timely data more than ever before. As a result, the demand for data analytics is skyrocketing as businesses try to navigate an uncertain future. However, the sudden surge in demand comes with its own set of challenges.
Here is how the COVID-19 pandemic is affecting the data industry and how enterprises can prepare for the data challenges to come in 2021 and beyond.
#big data #data #data analysis #data security #data integration #etl #data warehouse #data breach #elt