Stock Fundamental Analysis: EDA of SEC’s quarterly data summary

Many investors consider fundamental analysis their secret weapon for beating the stock market. You can perform it using many methods, but they all have one thing in common: they need data about companies’ financial statements.

Luckily, all companies whose stocks are traded on US stock markets must report quarterly to the Securities and Exchange Commission (SEC). Every quarter, the SEC prepares a convenient CSV package to help investors in their quest for investment opportunities. Let’s explore how to get valuable insights from these .csv files.

In this tutorial, we will use Python’s pandas library, which is ideal for parsing CSV files. We will process the data and learn how to:

  • explore files in the SEC dump
  • review each column of these files and discuss the most relevant ones
  • remove **duplicated** data grouped by a key column or multiple columns
  • visualize the data to support our exploration using interactive Plotly charts
  • and much more

As usual, you can follow the code in the notebook shared on GitHub: vaclavdekanovsky/data-analysis-in-examples (github.com).

SEC Quarterly data

At first glance, there doesn’t seem to be any problem. You simply download the quarterly package from the SEC dataset page, sort the values from the financial statements in descending order, and pick the stocks at the top. The reality isn’t that straightforward. Let’s explore the 45.55 MB zip file with all SEC filings for the first quarter of 2020.

The package for every quarter contains 5 files. Here’s an example of 2020 Q1:

  • readme.htm — describes the structure of the files
  • **sub.txt** — master information about the submissions, including company identifiers and the type of the filing
  • **num.txt** — numeric data for each financial statement and other documents
  • tag.txt — standard taxonomy tags
  • pre.txt — information about how the data from num.txt is displayed in the online presentation

[Image: Unzipped files in the SEC quarterly data dump]
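
The download and extraction steps aren’t shown in the article, so here is a minimal sketch of unpacking the archive, assuming the quarterly package has already been fetched from the SEC dataset page (the 2020q1.zip and folder names are my assumptions; folder is reused below when reading sub.txt):

# unzip the quarterly package into a working folder
import zipfile

folder = "2020q1"
with zipfile.ZipFile("2020q1.zip") as z:
    z.extractall(folder)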

This article deals only with the submission master, because it contains more than enough information for one article. A follow-up story will examine the data in more detail. Let’s begin.

2020Q1 Submission files

In the first quarter of 2020, companies submitted 13,560 filings, and sub.txt gathers 36 columns about them.

# load the tab-separated sub.txt into pandas, keeping cik as a string
# so that leading zeros in the identifier are preserved
import os
import pandas as pd

sub = pd.read_csv(os.path.join(folder, "sub.txt"), sep="\t", dtype={"cik": str})

# explore the number of rows and columns
sub.shape
[Out]: (13560, 36)

I always start with a simple function that reviews each column of the data frame, checks the percentage of empty values, and counts how many unique values appear in each column.

Explore the sub.txt file to see what data each column contains
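
The embedded snippet from the notebook isn’t reproduced in this dump, so below is a minimal sketch of what such an overview function can look like, assuming only pandas (the name column_overview is hypothetical):

# for each column: dtype, percentage of empty values, and unique value count
def column_overview(df):
    return pd.DataFrame({
        "dtype": df.dtypes,
        "empty_pct": (df.isna().mean() * 100).round(2),
        "unique_values": df.nunique(),
    })

column_overview(sub)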

Let me highlight a few important columns in the SEC submission master.

[Image: Example of the quick file overview in pandas]

  • adsh — EDGAR accession number that uniquely identifies each report. This value is **never duplicated** in sub.txt. For example, 0001353283-20-000008 is the code for a 10-K (annual filing) of Splunk.
  • cik — Central Index Key, a unique key identifying each SEC registrant, e.g. 0001353283 for Splunk. As you can see, the first part of the adsh is the cik (a quick lookup after this list illustrates it).
  • name — the name of the company submitting the quarterly financial data
  • form — the type of the report being submitted
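
For illustration, here is a quick hypothetical lookup tying the two identifiers together (the cik value is Splunk’s, quoted above; it works as a string because we loaded the column with dtype str):

# all reports Splunk submitted in the quarter
splunk = sub[sub["cik"] == "0001353283"]
splunk[["adsh", "cik", "name", "form"]]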

Forms — submission types delivered to the SEC

Based on the analysis, we see that the 2020Q1 submissions contain 23 unique types of financial reports. Investors’ primary interest lies in the 10-K report, which covers the annual performance of a publicly traded company. Because this report is delivered only once a year, the 10-Q report, which shows the quarterly changes in a company’s financials, is also important.

  • 10-K — annual report of a US-based company
  • 10-Q — quarterly report
  • 20-F — annual report of a foreign company
  • 40-F — annual report of a foreign (Canadian) company
Let’s see which forms are the most common in the dataset. Plotting the form types in 2020Q1 shows this picture:

Using Plotly’s low-level API to produce bar and pie subplots
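
The embedded gist isn’t reproduced in this dump, so here is a minimal sketch of one way to build such bar and pie subplots with Plotly’s graph_objects; the layout details and titles are my assumptions, not necessarily the article’s exact chart:

import plotly.graph_objects as go
from plotly.subplots import make_subplots

form_counts = sub["form"].value_counts()

# a pie trace requires a "domain"-type cell in the subplot grid
fig = make_subplots(rows=1, cols=2, specs=[[{"type": "xy"}, {"type": "domain"}]])
fig.add_trace(go.Bar(x=form_counts.index, y=form_counts.values), row=1, col=1)
fig.add_trace(go.Pie(labels=form_counts.index, values=form_counts.values), row=1, col=2)
fig.update_layout(title_text="Submission form types in 2020Q1", showlegend=False)
fig.show()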

[Image: Different submission types reported by the companies in 2020Q1, visualized in Plotly]

The dataset contains over 7000 8-K reports, which notify investors about important events such as agreements, layoffs, material impairments, modifications of shareholder rights, changes in senior positions, and more (see SEC’s guideline). Since they are the most common form, we should spend some time exploring them.
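
As a quick sanity check of that count, assuming the same sub frame as above:

# count the 8-K event notifications in 2020Q1
(sub["form"] == "8-K").sum()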
