1594751100
The article demonstrates the intertemporal approach, which extends and generalizes the scope of the rolling time series technique for deriving models of transition processes and empirical strategies. The approach is illustrated in the context of explaining the momentum premium, a long-standing open challenge.
The momentum effect was documented in 1993 by Jegadeesh and Titman [1], who showed it generating abnormal positive returns for US common stocks from 1965 to 1989. Since then, the conventional cross-sectional momentum strategy has been epitomized as ranking assets by their past one-year returns lagged one month, going long a subset of past winners, and shorting a subset of past losers. The existence of momentum across a wide spectrum of asset classes, markets, and time horizons, as identified by multiple papers, has culminated in the view that the factor is almost omnipresent. However, the source of the momentum premium remains an open question, with no single model dominating the narrative. A number of risk-based and behavioral models have been proposed to explain the phenomenon, with one noteworthy discussion on the topic between Clifford Asness and Eugene Fama [2].
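As an illustration (my own sketch, not from [1]), the formation rule can be written in a few lines of pandas, assuming a hypothetical DataFrame prices of monthly closing prices with one column per stock:
Python
import pandas as pd

def momentum_winners_losers(prices: pd.DataFrame, n_deciles: int = 10):
    """Rank stocks by past 11-month return lagged one month (the
    classic 12-2 signal) and split off the top and bottom deciles."""
    # shift(1) applies the one-month lag; pct_change(11) is the 11-month return
    signal = prices.shift(1).pct_change(11).iloc[-1]
    deciles = pd.qcut(signal, n_deciles, labels=False)
    winners = list(signal.index[deciles == n_deciles - 1])  # go long
    losers = list(signal.index[deciles == 0])               # go short
    return winners, losers
A long-short momentum portfolio would then buy the winners list and sell the losers list at each formation date.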
Recently, the momentum effect was addressed with the rolling intertemporal analysis proposed in the Uncovering Momentum paper [3]. Within this approach, the selected deciles are rolled forward across the time horizon while their portfolio returns are simultaneously collected for each month of the combined 11-month ranking, 1-month lagged, and holding intervals relative to the portfolio formation date. By analogy with rolling window forecasting, the ranking period can be associated with the in-sample interval, while the holding period constitutes the out-of-sample one. Below is the figure from the Uncovering Momentum paper presenting the results of running the rolling intertemporal approach on the top momentum decile for the bull market states over August 2006 to August 2017.
The boxplots highlight the bump between the in- and out-of-sample intervals. Previous research was primarily dedicated to analyzing the holding period; these plots switch our attention to the transition process. The left side of the bump can be explained by a random sampling model. Under this model, however, the momentum effect in the out-of-sample interval should be zero, which defines a criterion for assessing the underlying momentum theories and models. This article extends the intertemporal approach by explaining the momentum effect as a portfolio of strongly performing stocks. The implementation is based on the Quantopian platform and includes three steps: running the Quantopian pipeline, selecting the features, and rolling the intertemporal approach.
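As a hedged sketch of the mechanics (mine, not the paper's or the Quantopian implementation), the rolling collection could look like this in pandas, again assuming a hypothetical monthly prices DataFrame:
Python
import pandas as pd

def rolling_intertemporal(prices: pd.DataFrame, holding: int = 12):
    """For each formation month t, pick the top decile by the 12-2
    momentum signal and record the portfolio's equal-weighted return
    at every event month from -12 through +holding relative to t."""
    returns = prices.pct_change()
    signal = prices.shift(1).pct_change(11)  # 11-month return, lagged 1 month
    rows = {}
    for t in range(12, len(prices) - holding):
        deciles = pd.qcut(signal.iloc[t], 10, labels=False)
        winners = signal.columns[deciles == 9]
        window = returns[winners].iloc[t - 12 : t + holding + 1].mean(axis=1)
        rows[prices.index[t]] = window.to_numpy()
    # one row per formation date; each column is one event month,
    # i.e. one boxplot in the figure above
    return pd.DataFrame.from_dict(
        rows, orient="index", columns=range(-12, holding + 1)
    )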
#timeseries #data analysis
1623856080
Have you ever visited a restaurant or movie theatre, only to be asked to participate in a survey? What about providing your email address in exchange for coupons? Do you ever wonder why you get ads for something you just searched for online? It all comes down to data collection and analysis. Indeed, everywhere you look today, there’s some form of data to be collected and analyzed. As you navigate running your business, you’ll need to create a data analytics plan for yourself. Data helps you solve problems, find new customers, and re-assess your marketing strategies. Automated business analysis tools provide key insights into your data. Below are a few of the many valuable benefits of using such a system for your organization’s data analysis needs.
…
#big data #latest news #data analysis #streamline your data analysis #automated business analysis #streamline your data analysis with automated business analysis
1604008800
Static code analysis refers to the technique of approximating the runtime behavior of a program. In other words, it is the process of predicting the output of a program without actually executing it.
Lately, however, the term “Static Code Analysis” is more commonly used to refer to one of the applications of this technique rather than the technique itself — program comprehension — understanding the program and detecting issues in it (anything from syntax errors to type mismatches, performance hogs, likely bugs, security loopholes, etc.). This is the usage we’ll be referring to throughout this post.
“The refinement of techniques for the prompt discovery of error serves as well as any other as a hallmark of what we mean by science.”
We cover a lot of ground in this post. The aim is to build an understanding of static code analysis and to equip you with the basic theory and the right tools so that you can write analyzers on your own.
We start our journey by laying down the essential parts of the pipeline that a compiler follows to understand what a piece of code does. We learn where to tap this pipeline to plug in our analyzers and extract meaningful information. In the latter half, we get our feet wet and write four such static analyzers, completely from scratch, in Python.
Note that although the ideas here are discussed in light of Python, static code analyzers across all programming languages are carved out along similar lines. We chose Python because of the availability of the easy-to-use ast module, and the wide adoption of the language itself.
Before a computer can finally “understand” and execute a piece of code, it goes through a series of complicated transformations:
As you can see in the diagram (go ahead, zoom it!), the static analyzers feed on the output of these stages. To be able to better understand the static analysis techniques, let’s look at each of these steps in some more detail:
The first thing that a compiler does when trying to understand a piece of code is to break it down into smaller chunks, also known as tokens. Tokens are akin to what words are in a language.
A token might consist of either a single character, like (, or a literal (like an integer or a string, e.g., 7, Bob, etc.), or a reserved keyword of that language (e.g., def in Python). Characters which do not contribute towards the semantics of a program, like trailing whitespace, comments, etc., are often discarded by the scanner.
Python provides the tokenize module in its standard library to let you play around with tokens:
Python
import io
import tokenize

code = b"color = input('Enter your favourite color: ')"

for token in tokenize.tokenize(io.BytesIO(code).readline):
    print(token)
Output
TokenInfo(type=62 (ENCODING), string='utf-8')
TokenInfo(type=1 (NAME), string='color')
TokenInfo(type=54 (OP), string='=')
TokenInfo(type=1 (NAME), string='input')
TokenInfo(type=54 (OP), string='(')
TokenInfo(type=3 (STRING), string="'Enter your favourite color: '")
TokenInfo(type=54 (OP), string=')')
TokenInfo(type=4 (NEWLINE), string='')
TokenInfo(type=0 (ENDMARKER), string='')
(Note that for the sake of readability, I’ve omitted a few columns from the result above — metadata like starting index, ending index, a copy of the line on which a token occurs, etc.)
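Tokens feed the next stage of the pipeline, the parser, whose output (the abstract syntax tree) is the tap point most analyzers use. As a small foretaste of the analyzers we build later in this post (the sketch below is mine, not one of the post's four), here is a toy checker that walks the AST and flags calls to eval:
Python
import ast

code = "result = eval(input('Enter an expression: '))"

class EvalFinder(ast.NodeVisitor):
    """Report every call to the built-in eval()."""
    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            print(f"line {node.lineno}: avoid eval(); it executes arbitrary code")
        self.generic_visit(node)  # keep walking into nested calls

EvalFinder().visit(ast.parse(code))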
#code quality #code review #static analysis #static code analysis #code analysis #static analysis tools #code review tips #static code analyzer #static code analysis tool #static analyzer
1623292080
Time series analysis is the backbone for many companies, since most businesses work by analyzing their past data to guide their future decisions. Analyzing such data can be tricky, but Python, as a programming language, can help to deal with it. Python has both inbuilt tools and external libraries, making the whole analysis process both seamless and easy. Python’s Pandas library is frequently used to import, manage, and analyze datasets in various formats. In this article, we’ll use it to analyze stock prices and perform some basic time-series operations.
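As a minimal sketch of the kind of operations covered here (the file name stock_prices.csv and the Date/Close column names are hypothetical), loading a daily price history and computing a few basic time-series quantities might look like:
Python
import pandas as pd

# Hypothetical CSV with at least Date and Close columns
df = pd.read_csv("stock_prices.csv", parse_dates=["Date"], index_col="Date")

df["Return"] = df["Close"].pct_change()             # daily percentage change
df["MA20"] = df["Close"].rolling(window=20).mean()  # 20-day moving average
monthly = df["Close"].resample("M").last()          # month-end closing prices

print(df[["Close", "Return", "MA20"]].tail())
print(monthly.tail())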
#data-analysis #time-series-analysis #exploratory-data-analysis #stock-market-analysis #financial-analysis #getting started with time series using pandas
1594347720
A lot of the time, when we discuss effects, we usually talk about side effects. However, as I studied functional programming more deeply and read more functional programming books, I noticed that “Effect” or “Effectful” is widely used in the FP community when describing abstract things.
I dug a little deeper into what “Effect” or “Effectful” means and put it in this blog post as a note to my future self.
Usually, what they mean by “Effect” or “Effectful” is not a side effect (though sometimes it is). It is the main effect.
A type category is a math structure that abstracts out a representation for the different fields of math. When designing a program, we can think about the properties of that program before writing code, instead of the other way around. For example, a function sum can accept an empty input (identity law) and has a combine operation that needs to be associative ((1+2)+3 is equal to 1+(2+3)). We can characterize these properties by restricting the input to be a Monoid. This way, we can create a solution in a systematic approach that generates fewer bugs.
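A minimal Python sketch of those monoid laws, using integer addition as the combine operation (the FP texts express this with a Scala Monoid type class, but the laws are identical):
Python
from functools import reduce

# A monoid is a pair (combine, identity): combine is associative
# and identity is a neutral element for it.
combine = lambda a, b: a + b
identity = 0

assert combine(identity, 7) == combine(7, identity) == 7       # identity law
assert combine(combine(1, 2), 3) == combine(1, combine(2, 3))  # associativity

def msum(xs):
    """Fold any list, even an empty one, with the monoid."""
    return reduce(combine, xs, identity)

print(msum([1, 2, 3]))  # 6
print(msum([]))         # 0 -- safe because of the identity element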
Within a type category, “Effect” is a fancy word for a wrapper that produces an “effect” on a given type. I will quote the statement that Alvin Alexander mentioned in Functional and Reactive Domain Modeling:
Those statements can be rewritten as:
Similarly:
An effect can be said to be what the monad handles.
Quoting from Rob Norris in Functional Programming with Effects — an effectful function returns F[A] rather than A.
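To make that statement concrete, here is a small Python translation of the idea (Optional[int] standing in for Scala's Option[Int] as the F[A]):
Python
from typing import Optional

def parse_int(s: str) -> Optional[int]:
    """Effectful: the return type F[A] = Optional[int] declares that
    the computation may produce no value, instead of hiding a failure."""
    try:
        return int(s)
    except ValueError:
        return None

def parse_int_unsafe(s: str) -> int:
    """Not effectful in the type: failure escapes as an exception,
    a side effect the caller cannot see in the signature."""
    return int(s)

print(parse_int("42"))    # 42
print(parse_int("oops"))  # None -- the effect, reflected in the type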
#scala #programming #functional-programming #effect #side-effects