How to use Docker containers for new Data Scientists

Docker is a tool for containerising code. There are plenty of Docker introductions out there, but I want to emphasise one important point: in 2020 you will want (or even need) to know Docker as a data scientist. Before we start with Docker, however, we need to talk about what a container is.

What is a container?

A container is a more intuitive concept than you’d think. It wraps up your code, together with everything it needs to run, into one neat little package. This is important because it makes your code scalable and portable.

“A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.” – Docker.com

So, what do they mean by that? Say you’re building a (useless) application in Python that takes in text, labels it with part-of-speech (POS) tags and returns the result. You write all the code for your application, then commit it to GitHub. Your friend wants to use your application, so he clones your repository to his computer and tries to run it. Bam – error! He’s using Python 3.5 and you used 2.7. No big deal, he downloads 2.7 and tries again. Bam – another error! He doesn’t have the spaCy library you used installed in his Python. He tries to install it, but gets yet another error because it’s not the same version of the library!

After a long, painstaking process he finally manages to get it working. However, this was neither efficient nor effective. All of these problems could have been avoided if you’d both been running the code in the exact same environment. Since our application only contains Python, we could just use the virtualenv package for this. But what happens if our application also has bash scripts? Some Java code? Maybe a local database or other services running on some port? At that point we need more.
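
For reference, the Python-only route with virtualenv would look roughly like this (a minimal sketch, assuming Python 2.7 and pip are already installed on both machines):

pip install virtualenv
virtualenv venv --python=python2.7   # create an isolated environment pinned to Python 2.7
source venv/bin/activate             # activate it
pip install spacy                    # install dependencies inside the environment, not globally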

Why use Containers?

Containers allow us to run all our code, with all its bells, whistles, languages and services, consistently across many different hosts. If your friend had container software installed (like Docker), he could easily run your application on his own computer without needing to wade through all of its dependencies.


Note the difference in infrastructure between using containers and VMs

Unlike virtual machines, which install an entire operating system (OS) on your host machine, containers are lightweight and can be spun up with ease. Unlike language-specific virtual environments (like Python’s virtualenv), they also have far more functionality. You can have containers running Python, different Linux distributions, and Windows, along with a number of other pieces of software and services.
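
To make that concrete, once Docker is installed (more on that below) you can drop into containers built from completely different base images with one-line commands like these; the images are pulled from Docker Hub the first time you run them:

docker run -it ubuntu:18.04 bash     # a shell inside an Ubuntu container
docker run -it python:2.7 python     # a Python 2.7 interpreter inside a container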

Containers become even more important in production, where you could be managing tons of instances of your code across large clusters of nodes. This can be an abstract concept for a new data scientist, but I can tell you from experience that companies are using Docker more and more for machine learning. Those running deep learning workflows in libraries like TensorFlow are strongly encouraged to do so in containers for reproducibility and portability.

So what is Docker?

As you have no doubt guessed, Docker is open source and enterprise software for containerising code. There are a couple of offerings out there, but Docker is definitely the most popular and therefore the most desired on a resume. Now that you understand the goal of containerising code, we can dive into some Docker-specific terms:

Docker Image – An image is a definition of a container. Docker images contain the plans for what software will be in the container, and what code it will execute when it runs. For example, the aforementioned application would use a base image of Python 2.7.

Dockerfile – This is the file in which we define our image. You can see an example further below, but it’s the file that contains the instructions on how to build your image. This includes things like the base image (Python, Ubuntu, etc…), other packages we might need (pip, Python libraries), and what code you want to execute when the container runs (my_app.py).

Docker Container – We know what a container does, but what a Docker container actually is in computer science terms is an instance of an image. We define an image in our Dockerfile, build that Dockerfile into an actual image, and then run the instructions from that image in a container. An image definition is like a class definition – we can create multiple containers from the same image just as we can have multiple instances of the same class.
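
To stretch the analogy: you build the image once, then every docker run creates a fresh container (instance) from it. A rough sketch, using hypothetical image and container names:

docker build -t my_image_name .                 # build the image (the "class")
docker run --name tagger_1 -it my_image_name    # one container (an "instance")
docker run --name tagger_2 -it my_image_name    # a second, independent container from the same image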

See the image below for the general, simplified trajectory of how to use Docker.

Getting Started with Docker
I see a lot of Docker tutorials out there – but they often focus on deploying an application, multiple services or something more complex. For demonstration purposes, and for those completely new to Docker and data science, I want to demonstrate the most minimal and useless Dockerfile, image, and container that I can, using the example above. Let’s start.

Install Docker

You can install Docker from the Docker website, or use Homebrew. I prefer the latter for managing dependencies and such, but it’s up to you!
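
On a Mac, the Homebrew route is roughly the following (the exact invocation depends on your Homebrew version; older releases use brew cask install docker):

brew install --cask docker   # installs Docker Desktop
docker --version             # confirm the install worked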

Let’s also start by creating a new directory for our work. I prefer to use the terminal whenever possible, and encourage beginners to do so to build comfort with it. Make a new directory and populate it with a file named Dockerfile.
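
In the terminal, that might look like this (the directory name pos_tagger_app is just a placeholder; call it whatever you like):

mkdir pos_tagger_app
cd pos_tagger_app
touch Dockerfile app.py   # empty files we'll fill in below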

Build a Quick App

Before we decide what we’ll need in our Docker image, let’s build our app! Like I said, it’ll be a simple app that takes in text and returns POS tags using Python 2.7. Normally, you might use a web framework like flask for this, but because I want this to be as bare bones as possible we’re going to take terminal input. I named this file app.py.

import spacy

# load spaCy's small English model
nlp = spacy.load('en_core_web_sm')

# Python 2.7: raw_input() reads a line of text from the terminal
text = raw_input("Please enter a phrase: ")
doc = nlp(unicode(text))

# print each token alongside its part-of-speech tag
for token in doc:
    print(token, token.pos_)

Create your Dockerfile

Feel free to try out these lines of code outside Docker – but remember, the point is to containerise this code! We now need to create our image. Open up your Dockerfile and put the following lines in it:

# Use an official Python runtime as a parent image
FROM python:2.7

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install the only library we really need for this
RUN pip install spacy

# Install spaCy's English words
RUN python -m spacy download en

# And finally, run your application!
CMD ["python","app.py"]

It should be pretty intuitive – but I’ll run through it. First we specify a base image with the Docker FROM command. It’s good practice to keep these images as lightweight as possible. Normally I’d use the “slim” or “alpine” versions of Python, which are minimal, but we need the full Python image to work with spaCy.
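
If you did want to experiment with a smaller base, you would typically have to add the build tools spaCy needs yourself. An untested sketch of that idea:

# Slim base plus the compiler toolchain needed to build spaCy from source
FROM python:2.7-slim
RUN apt-get update && apt-get install -y build-essential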

After that, WORKDIR creates a new directory called /app; note that this directory lives inside our Docker container. The next command is key: the COPY . /app command copies everything from our current directory (specified by the .) and pastes it into the /app directory inside our Docker image! So if you created a new directory with app.py and this Dockerfile in it, when you build the image that Python code is copied into your Docker image.

Lastly, we use the Docker RUN command to install the packages we need, and the CMD command to tell Docker we want to run our Python script when we run this container.

Build Your Image

Now that we’ve defined our image we have to build it! First, open up a terminal and cd into the directory where you have your Dockerfile and app.py. Make sure you have Docker in your $PATH and that it is running by typing a simple command like docker --help. Then run the command below.

docker build -t my_image_name .

The first part of the command is standard when using Docker. build obviously builds the image. The -t flag allows us to name (tag) the image for convenience – here I chose the filler name my_image_name, but it can be whatever you want. The . again tells Docker to use the Dockerfile in the current directory.

When you run the command above, you’ll notice it pulls the Python image from the official Docker registry online, and then begins installing all of the things you told it to! It takes a little while to build the first time, but it’s worth the wait. You can check to make sure the image was built by running docker image ls.
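
A few related commands are handy here; these are standard Docker CLI commands for inspecting and cleaning up images:

docker image ls                  # list the images on your machine
docker image rm my_image_name    # remove an image you no longer need
docker system prune              # clean up dangling images, stopped containers, etc.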

Run the Container

Now that our image is built, we’re all set to actually run our container! The commands for this are simple.

docker run -it my_image_name

There ya go. The -i is for interactive; it just tells Docker to keep STDIN open so you can enter your input (the -t attaches a terminal). If you’re running a Flask application, you won’t need the -i.
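
For something like Flask you would instead publish a port so you can reach the app from your host. A quick example, assuming the app listens on port 5000 inside the container:

docker run -p 5000:5000 my_image_name   # map host port 5000 to container port 5000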

Upon running this, you should be met with the input prompt; enter some text and have it spit back POS tags! If you want it to keep going instead of shutting down the container after one go, just wrap it in a while True: loop (see the sketch below). Of course, in production you’d want to wrap an API around this and expose it on a port – but we’ll cover that at a later date.
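
A minimal sketch of that while True: version of app.py (same Python 2.7 code as above, just looped):

import spacy

nlp = spacy.load('en_core_web_sm')

# keep prompting until the container is stopped (e.g. with Ctrl+C)
while True:
    text = raw_input("Please enter a phrase: ")
    doc = nlp(unicode(text))
    for token in doc:
        print(token, token.pos_)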

Docker for Data Science

In this tutorial we barely had spaCy do any data science for us. However, Docker is equally important for managing, training and deploying your models. I always train frameworks like TensorFlow in a Docker image to avoid dependency and versioning issues. It’s actually recommended to use the TensorFlow Docker image for GPU support: https://www.tensorflow.org/install/docker.
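
As a taste, the commands from that TensorFlow page look roughly like this (GPU support assumes Docker 19.03+ and the NVIDIA container toolkit are installed):

docker pull tensorflow/tensorflow:latest-gpu                      # GPU-enabled TensorFlow image
docker run --gpus all -it tensorflow/tensorflow:latest-gpu bash   # drop into a shell with TensorFlow ready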

With that – we’re still only at the tip of the iceberg. Companies often use Kubernetes (K8s) to manage all the Docker containers running at scale across a number of clusters: https://kubernetes.io/. There is even a machine learning framework built for K8s called Kubeflow that is growing in popularity.

Docker’s role in data science is becoming more prevalent every day – especially as data science moves from your local machine into the cloud at scale. Data science is no longer completely isolated from software engineering, and the best data scientists of the next century will know not only statistics and mathematics but also scalable training and deployment.

