1594447980

Copper prices slumped significantly in the first quarter of 2020. According to the World Bank, this is largely due to the sharp decline in the manufacturing sector, with stimulus measures having only a limited effect in supporting prices.

Source: investing.com

Accordingly, the producer price index has also been falling.

However, is it necessarily a given that we can expect these trends to continue looking forward?

To answer this question, let’s consider a time series analysis using the following:

- Cointegration
- Cross-correlation
- Forecasting of inflation and 10-Year US Treasury Rates with ARIMA

Given that copper prices are stochastic in nature (i.e. follow a random walk), forecasts are not made using copper prices outright. Rather, this analysis investigates the correlations between copper and the 10-year US Treasury Rate, with a view to inferring the potential direction for copper from forecasts of US inflation and the 10-year US Treasury Rate.

For the purposes of this analysis, the copper price producer index and the 10-year US Treasury Rate are split into the following time periods for training and validation purposes:

**Training:** January 1981 to January 2018

**Validation:** February 2018 to March 2020

The relevant time series were sourced from the FRED (Federal Reserve Economic Data) source through Quandl.

The sourced time series are as follows:

- 10 Year Treasury Constant Maturity Rate (FRED/GS10)
- Producer Price Index by Commodity for Special Indexes Copper and Copper Products (FRED/WPUSI019011)
- U.S. Inflation Data (FRED/FPCPITOTLZGUSA)

The producer price index and 10-year Treasury data are in monthly format, while the U.S. inflation data is in yearly format.
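Because the frequencies differ, the yearly inflation series has to be aligned with the monthly data before any joint analysis. A minimal sketch with made-up values (the real series would be pulled from Quandl):

```python
# Sketch: up-sampling yearly inflation to a monthly frequency so it can be
# aligned with the monthly series. Values here are synthetic stand-ins.
import pandas as pd

yearly = pd.Series([1.6, 2.1, 2.4, 1.8],
                   index=pd.date_range("2016-01-01", periods=4, freq="YS"),
                   name="inflation")

# Forward-fill each yearly value across the months of that year
monthly = yearly.resample("MS").ffill()
print(monthly.head(14))
```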

In time series analysis, cointegration testing is used to determine whether the correlation between two time series is theoretically meaningful or simply due to chance.

In addition, cross-correlation is used to infer whether there is a leading or lagging relationship between the two indicators, e.g. US GDP is typically a **lagging** indicator for the S&P 500, while the S&P 500 is conversely a **leading** indicator for US GDP.
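A lead/lag relationship of this kind can be surfaced by scanning the correlation across lags. In this synthetic sketch, `y` is constructed to lag `x` by three periods, so the correlation peaks at lag 3:

```python
# Sketch: detecting a lead/lag relationship with cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.roll(x, 3) + rng.normal(scale=0.1, size=500)  # y lags x by 3 periods

lags = range(0, 7)
corrs = [np.corrcoef(x[: len(x) - k], y[k:])[0, 1] for k in lags]
best_lag = int(np.argmax(corrs))
print(f"strongest correlation at lag {best_lag}")
```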

#timeseries #commodities #economics #data-science #machine-learning #data-analysis

1623292080

Time series analysis is the backbone of many companies, since most businesses analyze their past data to inform future decisions. Analyzing such data can be tricky, but Python, as a programming language, can help. Python has both built-in tools and external libraries, making the whole analysis process seamless and easy. Python’s **Pandas** library is frequently used to import, manage, and analyze datasets in various formats. In this article, we’ll use it to analyze stock prices and perform some basic time-series operations.
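As a taste of what such operations look like, here is a minimal sketch on a made-up daily price series (the article’s analysis would use a real stock-price dataset):

```python
# Sketch: basic Pandas time-series operations on a synthetic daily price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.date_range("2021-01-01", periods=250, freq="B")  # business days
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250))),
                   index=dates, name="close")

daily_returns = prices.pct_change()              # day-over-day percentage change
monthly_close = prices.resample("M").last()      # last price of each month
rolling_mean = prices.rolling(window=20).mean()  # 20-day moving average
print(monthly_close.head())
```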

#data-analysis #time-series-analysis #exploratory-data-analysis #stock-market-analysis #financial-analysis #getting started with time series using pandas

1620127560

Bhavesh Bhatt, a data scientist at Fractal Analytics, posted that he has created a Python script that checks the available slots for Covid-19 vaccination centres via the CoWIN API in India. He has also shared the GitHub link to the script.

The YouTube content creator posted, “Tracking available slots for Covid-19 Vaccination Centers in India on the CoWIN website can be a bit strenuous.” “I have created a Python script which checks the available slots for Covid-19 vaccination centres from CoWIN API in India. I also plan to add features in this script of booking a slot using the API directly,” he added.

We asked Bhatt how the idea came to fruition. He said, “Registration for Covid vaccines for those above 18 started on 28th of April. When I was going through the CoWIN website – https://www.cowin.gov.in/home, I found it hard to navigate and find empty slots across different pin codes near my residence. On the site itself, I discovered public APIs shared by the government [https://apisetu.gov.in/public/marketplace/api/cowin] so I decided to play around with it and that’s how I came up with the script.”

Talking about the Python script, Bhatt mentioned that he used just two simple Python libraries, datetime and requests. The first part of the code helps the end user discover their unique district_id. “Once he has the district_id, he has to input the date range for which he wants to check availability, which is where the 2nd part of the script comes in handy,” Bhatt added.
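Based on that description, a minimal sketch of the approach might look like the following. Note that the endpoint path, parameters, and date format here are assumptions drawn from the public API Setu listing, not Bhatt’s actual script; check the official docs before relying on them:

```python
# Sketch of the approach described above, using only datetime and requests.
# Endpoint and parameter names are assumptions based on the public CoWIN API.
import datetime
import requests

BASE = "https://cdn-api.co-vin.in/api/v2"

def build_url(district_id: int, date: datetime.date) -> str:
    """Build the availability-check URL for one district and one day."""
    return (f"{BASE}/appointment/sessions/public/findByDistrict"
            f"?district_id={district_id}&date={date.strftime('%d-%m-%Y')}")

def check_slots(district_id: int, date: datetime.date) -> list:
    """Return sessions with at least one available slot (makes a network call)."""
    resp = requests.get(build_url(district_id, date), timeout=10)
    resp.raise_for_status()
    sessions = resp.json().get("sessions", [])
    return [s for s in sessions if s.get("available_capacity", 0) > 0]

if __name__ == "__main__":
    print(build_url(395, datetime.date(2021, 5, 10)))
```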

#news #covid centre #covid news #covid news india #covid python #covid tracing #covid tracker #covid vaccine #covid-19 news #data scientist #python #python script

1594073100

This tutorial was supposed to be published last week. Except I couldn’t get a working (*and decent*) model ready in time to write an article about it. In fact, I’ve had to spend 2 days on the code to wrangle some semblance of useful and legible output from it.

But I’m not mad at it (*now*). This is the aim of my challenge here, and truthfully I was getting rather tired of solving all the previous classification tasks in a row. The good news is I’ve learned how to model the data in a suitable format for processing, conduct exploratory data analysis on time-series data, and build a good (*the best I could come up with, like, after 2 days*) model.

So I’ve also made a meme to commemorate my journey. **I promise the tutorial is right on the other side of it.**

Yes, I made a meme of my own code.

**About the Dataset:** The Gas Sensor Array Dataset, downloadable from **here**, consists of 8 sensor readings, all set to detect concentration levels of a mixture of ethylene gas with either methane or carbon monoxide. The concentration levels change constantly with time, and the sensors record this information.

Regression is another possible type of solution for this dataset, but I deliberately chose to build a multivariate time-series model to familiarize myself with time-series forecasting problems and to set more of a challenge for myself.

Time-series data varies continuously with time. A given dataset may have one variable that does so (univariate) or multiple variables that vary with time (multivariate).

Here, there are 11 feature variables in total: 8 sensor readings (time-dependent), Temperature, Relative Humidity, and the Time (stamp) at which the recordings were observed.

As with most datasets in the UCI Machine Learning Repository, you will have to spend time cleaning up the flat files, converting them to CSV format, and inserting the column headers at the top.

If this sounds exhausting to you, you can simply download **one such file** I’ve already prepped.

This is going to be a long tutorial, with explanations liberally littered here and there to cover concepts that most beginners might not know. So, in advance, thank you for your patience; I’ll keep the explanations to the point and as short as possible.

Before heading into the data preprocessing part, it is important to visualize what variables are changing with time and how they are changing (trends) with time. Here’s how.

Time Series Data Plot

```
# Gas Sensing Array Forecast with VAR model
# Importing libraries
import numpy as np, pandas as pd
import matplotlib.pyplot as plt, seaborn as sb
# Importing Dataset
df = pd.read_csv("dataset.csv")
ds = df.drop(['Time'], axis = 1)
# Visualize the trends in data
sb.set_style('darkgrid')
ds.plot(kind = 'line', legend = 'reverse', title = 'Visualizing Sensor Array Time-Series')
plt.legend(loc = 'upper right', shadow = True, bbox_to_anchor = (1.35, 0.8))
plt.show()
# Dropping Temperature & Relative Humidity as they do not change with Time
ds.drop(['Temperature','Rel_Humidity'], axis = 1, inplace = True)
# Again Visualizing the time-series data
sb.set_style('darkgrid')
ds.plot(kind = 'line', legend = 'reverse', title = 'Visualizing Sensor Array Time-Series')
plt.legend(loc = 'upper right', shadow = True, bbox_to_anchor = (1.35, 0.8))
plt.show()
```

It is evident that the ‘Temperature’ and ‘Relative Humidity’ variables do not really change with time at all. Therefore, I have dropped the Time, Temperature, and Rel_Humidity columns from the dataset to ensure that it contains only pure time-series data.

Non-stationary data contains trends. We have to eliminate this property because the Vector Autoregression (VAR) model requires the data to be stationary.

A Stationary series is one whose mean and variance do not change with time.

One way to check for stationarity is the ADF (Augmented Dickey-Fuller) test, which has to be run for each of the 8 sensor reading columns. We’ll also split the data into train and test subsets.

#multivariate-analysis #time-series-forecasting #data-science #machine-learning #time-series-analysis #data-analysis

1595685600

In this article, we will discuss an algorithm that helps us analyze past trends and anticipate what is to unfold next: time series forecasting.

**What is Time Series Analysis?**

In this analysis, you have one variable: TIME. A time series is a set of observations taken at specified times, usually at equal intervals. It is used to predict future values based on previously observed data points.

**Here are some examples of where time series analysis is used:**

- Business forecasting
- Understanding past behavior
- Planning for the future
- Evaluating current accomplishments

**Components of a time series:**

- **Trend:** Let’s understand with an example: say someone opens a hardware store in an area under construction. While construction is going on, people buy hardware and sales climb; once construction is complete, buyers dwindle and sales fall. A sustained rise is an uptrend, and a sustained fall is a downtrend.
- **Seasonality:** Every year, chocolate sales go up at the end of the year due to Christmas. The same pattern happens every year, which is not the case with a trend. Seasonality is the same pattern repeating at the same intervals.
- **Irregularity:** Also called noise. This is when something unusual disrupts the regular pattern, for example a natural disaster such as a flood causing people to buy more medicine during that period. No one can predict it, and you don’t know how many sales are going to happen.
- **Cyclic:** Repeating up-and-down movements that can span more than one year. Cycles have no fixed pattern, can happen at any time, and are much harder to predict.

**Stationarity of a time series:**

A series is said to be “strictly stationary” if the marginal distribution of Y at time t, p(Yt), is the same as at any other point in time. This implies that the mean, variance, and covariance of the series Yt are time-invariant.

However, a series is said to be “weakly stationary” or “covariance stationary” if its mean and variance are constant and the covariance of two points, Cov(Y1, Y1+k) = Cov(Y2, Y2+k) = const, depends only on the lag k and not on time explicitly.
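As a quick numerical illustration (a synthetic sketch, not from the original), a stationary AR(1) process has autocorrelations that depend only on the lag k, not on the time t:

```python
# For a stationary AR(1) process Y_t = phi * Y_{t-1} + noise, the lag-k
# autocorrelation is phi**k regardless of t.
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.6, 50_000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

def acf(series, k):
    """Sample autocorrelation at lag k."""
    return np.corrcoef(series[:-k], series[k:])[0, 1]

print(f"lag-1 autocorrelation: {acf(y, 1):.3f}  (theory: {phi})")
print(f"lag-2 autocorrelation: {acf(y, 2):.3f}  (theory: {phi**2:.2f})")
```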

#machine-learning #time-series-model #machine-learning-ai #time-series-forecasting #time-series-analysis

1616832900

In the last post, I talked about working with time series. In this post, I will talk about important methods for time series. Time series analysis is used very frequently in finance, and Pandas is a very important library for such studies.

In summary, I will explain the following topics in this lesson,

- Resampling
- Shifting
- Moving Window Functions
- Time zone
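Before diving in, here is a minimal sketch of what these four operations look like on a small synthetic daily series (names and values are illustrative only):

```python
# Sketch: resampling, shifting, moving windows, and time zones in Pandas.
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=90, freq="D")
ts = pd.Series(np.arange(90, dtype=float), index=idx)

weekly = ts.resample("W").mean()        # Resampling: daily -> weekly mean
shifted = ts.shift(1)                   # Shifting: lag the series by one day
rolling = ts.rolling(window=7).mean()   # Moving window: 7-day moving average
utc = ts.tz_localize("UTC")             # Time zone: attach a zone ...
istanbul = utc.tz_convert("Europe/Istanbul")  # ... then convert it
print(weekly.head())
```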

Before starting the topic: our Medium page includes posts on data science, artificial intelligence, machine learning, and deep learning. Please don’t forget to follow us on **Medium** 🌱 to see these and our latest posts.

Let’s get started.

#pandas-time-series #timeseries #time-series-python #time-series-analysis